We literally just lost Belgium, the country that described Chat Control as a “monster that invades your privacy and cannot be tamed”.
And more countries are undecided (which generally means they will vote yes once they get something they want in exchange).
Which essentially means open source software will in effect be banned throughout the EU. I would have thought this would be a bigger issue for people, but surprisingly it doesn't seem to get much attention. Especially given that the whole plan won't actually catch more than a handful of very careless criminals, and is obviously just intended to put infrastructure in place so the scope can be expanded to terrorism, then "extremism" and anti-government sentiment, within a couple of years.
Meanwhile all the real criminals will just download the non EU versions of everything.
I’ve been experimenting with BIND, and I wanted a way to manage zones/records through a REST API instead of editing configs or using rndc directly. So I built a small project as a proof of concept.
The technically interesting parts were:
- Safely interacting with BIND without breaking existing configs.
- Handling zone/record updates in a way that’s idempotent and script-friendly (see the sketch after this list).
- Balancing simplicity (just a wrapper) against feature creep (turning into a full DNS management system).
- Security concerns: exposing DNS management over HTTP means you have to think hard about access control and potential abuse.
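For the curious, the heart of the proof of concept looks roughly like this. It's a minimal sketch, assuming a zone configured in named.conf to accept TSIG-signed dynamic updates; the key name, secret, and endpoint shape are placeholders from my experiment, not any published API:

```python
# Minimal sketch: upsert a DNS record over HTTP, backed by RFC 2136 dynamic
# updates so BIND's own state (zone journal, SOA serial) stays consistent.
# Assumes the zone has an allow-update policy for the TSIG key below.
import dns.query
import dns.rcode
import dns.tsigkeyring
import dns.update
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder TSIG key; must match a `key` clause in named.conf.
KEYRING = dns.tsigkeyring.from_text({"api-key.": "c2VjcmV0LXNlY3JldA=="})
BIND_SERVER = "127.0.0.1"

@app.route("/zones/<zone>/records/<name>", methods=["PUT"])
def upsert_record(zone: str, name: str):
    body = request.get_json()  # e.g. {"type": "A", "ttl": 300, "values": ["192.0.2.10"]}
    update = dns.update.Update(zone, keyring=KEYRING)
    # replace() is what makes PUT idempotent: it atomically swaps the whole
    # rrset for (name, type), so replaying the same request is harmless.
    update.replace(name, body.get("ttl", 300), body["type"], *body["values"])
    response = dns.query.tcp(update, BIND_SERVER, timeout=5)
    return jsonify({"rcode": dns.rcode.to_text(response.rcode())})
```

Going through dynamic updates rather than rewriting zone files is what keeps BIND's journals and serials consistent, and it sidesteps the "breaking existing configs" problem entirely.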
I’d be curious how others have approached similar problems. If you had to expose DNS management via an API, what would you watch out for?
Hey r/programming, I've been thinking a lot about the common pain points of dealing with unvalidated or "dirty" data, especially when working with large datasets. Manual cleaning is incredibly time-consuming and often a huge bottleneck for getting projects off the ground or maintaining data pipelines. It feels like a constant battle against inaccurate reports, compliance risks, and just generally wasted effort.
Specifically, I'm looking into approaches for automating validation across different data types—like email addresses, mobile numbers, IP addresses, and even browser user-agents—for batch processing.
Has anyone here implemented solutions using external APIs for this kind of batch data validation? What were your experiences?
What are your thoughts on:
* The challenges of integrating such third-party validation services?
* Best practices for handling asynchronous batch processing (submission, polling, retrieval)?
* The ROI you've seen from automating these processes versus maintaining manual checks or in-house solutions?
* Any particular types of validation (e.g., email deliverability, mobile line type, IP threat detection) that have given you significant headaches or major wins with automation?
Would love to hear about your experiences, cautionary tales, or success stories in building robust, automated data validation workflows!
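To anchor the second bullet, here's the submission/polling/retrieval shape I'm imagining. Endpoints, field names, and the service URL are all made up; every real provider's API will differ:

```python
# Sketch of an async batch-validation client: submit a batch, poll until the
# job settles, then fetch per-record results. All endpoints are hypothetical.
import time
import requests

BASE = "https://validator.example.com/v1"  # placeholder service
API_KEY = "..."  # load from the environment in real code

def validate_batch(records: list[dict], poll_interval: float = 2.0) -> list[dict]:
    headers = {"Authorization": f"Bearer {API_KEY}"}

    # 1. Submission: hand off the whole batch, get a job id back.
    job = requests.post(f"{BASE}/jobs", json={"records": records},
                        headers=headers, timeout=30)
    job.raise_for_status()
    job_id = job.json()["job_id"]

    # 2. Polling: a fixed interval for brevity; exponential backoff with a
    # cap is kinder to the service and to your rate limits.
    while True:
        status = requests.get(f"{BASE}/jobs/{job_id}",
                              headers=headers, timeout=30).json()
        if status["state"] in ("done", "failed"):
            break
        time.sleep(poll_interval)

    if status["state"] == "failed":
        raise RuntimeError(f"validation job {job_id} failed")

    # 3. Retrieval: fetch the verdict for each record.
    results = requests.get(f"{BASE}/jobs/{job_id}/results",
                           headers=headers, timeout=30)
    results.raise_for_status()
    return results.json()["results"]
```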
I recently joined a super welcoming and helpful community: OpsiMate, an open-source project aiming to simplify infrastructure management.
The idea is simple but powerful: instead of juggling a dozen monitoring tools, scattered dashboards, and manual processes, OpsiMate wants to give teams one unified, intelligent platform to monitor, manage, and optimize infrastructure.
It’s still in a very early stage, but that’s what makes it exciting—we’re at the point where contributors can shape the direction of the project. The maintainers are incredibly supportive, and I’ve already learned a lot just being part of it.
We’re especially looking for feedback, ideas, and contributors who want to get their hands dirty—whether that’s code, docs, or just sharing thoughts on what would make infra management less painful.
Would love to see some of you there and grow this together 🚀
I’ve been diving deep into software architecture and design patterns, and I noticed most resources are either too academic or language-specific. So I built a comprehensive, code-driven repo covering all 23 Gang of Four (GoF) design patterns, implemented in 9 different languages. https://github.com/ragulnathMB/Modern-Design-Patterns--by-RN
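For a taste of the format, each pattern gets a minimal, runnable example with comments on intent and trade-offs. This Strategy snippet is illustrative of the style, not copied from the repo:

```python
# Strategy: interchangeable algorithms behind one interface, chosen at runtime.
from dataclasses import dataclass
from typing import Callable

# In Python, a plain callable is often the lightest-weight "strategy" type.
Discount = Callable[[float], float]

def no_discount(total: float) -> float:
    return total

def seasonal_discount(total: float) -> float:
    return total * 0.9  # 10% off

@dataclass
class Checkout:
    discount: Discount  # the strategy is injected, never hard-coded

    def pay(self, total: float) -> float:
        return self.discount(total)

print(Checkout(no_discount).pay(100.0))        # 100.0
print(Checkout(seasonal_discount).pay(100.0))  # 90.0
```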
The purpose of this article is to share the technical realities of security patching for the Linux kernel, and the intended scope of the Linux kernel’s livepatch capability. We’ll cover when kernel live patching is most appropriate, and when rebooting is the best option.
Canonical Livepatch is a service that allows Ubuntu long-term support (LTS) users to apply critical kernel security patches without rebooting. It delivers live, rebootless security updates for high-priority kernel vulnerabilities on Ubuntu LTS systems, and is included with Ubuntu Pro, a subscription for security, hardening, compliance, and support for open source software.
In this blog, we’ll run through Livepatch in more detail, in order to bust the myths that often arise around the subject of kernel live patching. We’ll highlight when live patching is the most effective option, and run through instances where a reboot is the more appropriate option. Crucially, we’ll explain why live patching is a complement to rebooting, not a full-on replacement for all CVEs, in all situations.
Canonical Livepatch is built for Ubuntu LTS, and is tightly coupled with the application binary interface (ABI), the layer between user-space applications and the operating system’s kernel. Extensible Firmware Interface (EFI) Secure Boot and Linux Kernel Lockdown are complementary security features that work together to protect the boot process and the running kernel, and Canonical Livepatch conforms with these standardized security processes and their chain of trust requirements. Canonical Livepatch does not insert updates through proprietary or nonstandard mechanisms that bypass these kernel lockdown rules, and guarantees the updates only come from a trusted publisher. Enterprises using Ubuntu choose Canonical Livepatch for this precise integration.
Myth #1: Every vulnerability should be live patched
A common misunderstanding we encounter is that live patching is desirable for every CVE, in every situation. However, not every CVE requires urgent attention, and not every CVE can be safely livepatched without increasing operational risk. Live patching the kernel in memory mitigates the exploit window; it is not a comparable substitute for installing software updates with apt or Landscape. Indeed, as others suggest, live patching 90-95% of vulnerabilities would be impractical and could introduce operational risks that hang machines. When that happens, users are left looking for rollback capabilities.
Canonical Livepatch patches kernel vulnerabilities with critical and high Common Vulnerability Scoring System (CVSS) and Ubuntu Priority ratings. Canonical targets vulnerabilities that have security implications, such as privilege escalation or remote code execution, and are practical to patch safely in-memory.
Additionally, Canonical Livepatch does not patch userspace libraries like OpenSSL or glibc, because that is the responsibility of unattended-upgrades or a systems management tool, like Canonical Landscape.
Myth #2: Live patches can trap you in a patched state
There are two primary approaches to kernel live patching: incremental and cumulative. Incremental live patching stacks one security update on top of another, and patching live-running code with in-memory overrides on an incremental basis can become fragile over long periods, especially when updates are stacked in perpetuity without prior testing across a variety of hardware configurations. In this scenario, it’s understandable that readers believe in the need for rollback functionality to resolve unexpected interactions between patches, which can cause instability.
However, Canonical Livepatch eliminates the risk of instability by providing cumulative patches instead of stacking incremental ones, which makes rollbacks unnecessary in practice.
Before publishing, Canonical runs every livepatch through the same testing and rigor as its kernel packages. Each patch is tested cumulatively with previous patches on real hardware, without emulation. The Livepatch client downloads a single cumulative patch for the running kernel. In practice, the failure rate of Canonical Livepatch deployments is vanishingly small, and reboots remain the safest and most sensible rollback path.
Myth #3: Live kernel patching is always preferable to rebooting
As mentioned in our introduction, you should see live patching and rebooting as two tools which complement each other – not as two alternate approaches to security. Canonical Livepatch provides a timely, quick solution to critical CVEs, as these need to be actioned immediately. By patching in the live kernel, Canonical Livepatch shrinks the exploit window for a given CVE and keeps your customers safe, without requiring an immediate reboot.
However, Canonical Livepatch is not a replacement for rebooting. It is a tool that gives you more control by preventing unscheduled reboots. Indeed, scheduling reboots at sensible intervals is the responsible approach to security hygiene, and no enterprise Linux operating system publisher recommends avoiding reboots indefinitely. Kernel modules like NVIDIA drivers often can’t be reloaded safely without rebooting. In some edge cases, kernel updates can’t be livepatched due to ABI changes or complex restructuring. And security vulnerabilities exist in userland packages, where installing patches and restarting services is not sufficient (or even applicable) in certain situations:
- libc6 is fundamental to virtually every process on a Linux system, and is built from the glibc source package on Ubuntu. Even small security patches require restarting all linked processes or, more practically, a reboot. Even if services are restarted manually after a libc6 update, some processes, such as systemd, won’t reload libc6 cleanly; true safety comes with a reboot.
- Intel and AMD microcode in the cpu-microcode package is loaded at boot, or via a special mechanism at runtime. Runtime updates only apply to cores that are idle or not running privileged code, so a reboot is required to ensure every core and thread runs the updated microcode. The Spectre and Meltdown vulnerabilities were remediated with cpu-microcode updates.
- The systemd and init packages are critical to system management; when vulnerabilities exist here, mitigations require restarting PID 1, which is impractical without a reboot.
- dbus provides the inter-process communication (IPC) that lets applications talk to each other, and a reboot ensures a clean state.
- udev monitors kernel events and applies rules to create, configure, and control device nodes in the /dev directory. Changes to udev require a reboot to ensure consistency.
- Patches to crypto libraries such as openssl, gnutls, and libssl require restarting every dependent process. Technically no system reboot is required, but in practice it’s safer and faster to reboot than to hunt down and restart everything manually.
- Updates to hypervisors require restarting running virtual machines, so rebooting hypervisor hosts is often preferred operationally.
- Patches to display servers such as Xorg and Wayland require restarting the session, potentially disrupting desktop environments. While this is not a full reboot, it is disruptive enough that in practice it might as well be.
Reboots cleanly flush accumulated state inconsistencies from memory leaks, hung file handles, and other problems that could fester with long uptime. Eliminating reboots would expose any operating system to these inefficiencies over time.
In short, trying to avoid all reboots results in incomplete security patching coverage and leads to fragile patching strategies built on service restarts and error-prone dependency tracing. Rebooting provides a guaranteed clean slate.
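To make the dependency-tracing point concrete, the check that tools like needrestart automate looks roughly like the following: scan /proc for processes that still map shared libraries an upgrade has deleted from disk. This is an illustration of the idea, not a Canonical tool:

```python
# Rough illustration of dependency tracing after a library upgrade: find
# processes still mapping .so files that have been deleted on disk.
# Tools like needrestart do this properly; this is only a sketch.
import os

def stale_library_processes() -> dict[int, list[str]]:
    """Map pid -> deleted shared libraries the process still has mapped."""
    stale: dict[int, list[str]] = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/maps") as maps:
                deleted = sorted({
                    line.split(maxsplit=5)[5].strip()
                    for line in maps
                    # the kernel suffixes unlinked mappings with "(deleted)"
                    if ".so" in line and line.rstrip().endswith("(deleted)")
                })
        except OSError:  # process exited, or we lack permission
            continue
        if deleted:
            stale[int(pid)] = deleted
    return stale

if __name__ == "__main__":
    for pid, libs in sorted(stale_library_processes().items()):
        print(pid, ", ".join(libs))
```

Every process this turns up is still running the old, vulnerable code; a reboot is the only guaranteed way to clear the list.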
Conclusion
Kernel live patching is part of a broader security strategy. Kernel live patching is not a replacement for traditional patch management, reboot policies, and defense-in-depth – however, tools like Canonical Livepatch provide you with more control over when, and how, reboots occur for the most critical vulnerabilities.
| Component | Reboot required after patching? | Frequency recommendation |
| --- | --- | --- |
| Kernel updates | Yes | When critical and high severity patches are applied; every May and November (after each Ubuntu release) if Livepatch is enabled. |
| CPU microcode updates | Yes | When updated. |
| glibc, libssl, systemd, and dbus updates | Yes | When updated. |
| Container runtimes | No | Restart containers; reboot only if needed. |
| Userland apps | Sometimes | Restarting services individually may be sufficient. |
Canonical Livepatch is provided as part of Ubuntu Pro, a subscription that also includes other security patching solutions covering over 30,000 open source software titles and their 100,000+ package dependencies, for up to 12 years. Organizations value Canonical’s transparency around testing and process, and trust Canonical’s machinery to build their kernels; that’s why they trust Canonical to use the same testing, process, and machinery to publish kernel livepatches. From the publisher and maintainer of Ubuntu, the Ubuntu Pro bundle, which includes Landscape and Livepatch, is a smart fit for any organization.
To find out more about Canonical Livepatch, what customers have to say about it, and how you can enable it in your organization, visit our Livepatch page.
Elementary OS 8.0.2 is available to download, the final minor update before the next major release due in late 2025. On offer are a new kernel, updated GPU drivers, and assorted fixes.