Thankfully no tragedies to report this week! I thank each and every one of you who has donated to my car fund. I still have a ways to go and could use some more help so that we can go to the funeral. https://gofund.me/033eb25d I am between contracts and work packages, so all of my work is currently done for free. Thanks for your consideration.
Another very busy week getting qt6 updates in Debian, Kubuntu, and KDE snaps.
Kubuntu:
Debian:
KDE Snaps:
Updated Qt to 6.7.2, which required a rebuild of all our snaps. Also found an issue with mismatched ffmpeg libraries; we have to bundle them for now until the versioning issues are resolved.
Made new theme snaps for KDE Breeze: gtk-theme-breeze and icon-theme-breeze. If you use the Breeze Plasma theme, please install these and run:
for PLUG in $(snap connections | grep gtk-common-themes:icon-themes | awk '{print $2}'); do sudo snap connect ${PLUG} icon-theme-breeze:icon-themes; done
for PLUG in $(snap connections | grep gtk-common-themes:gtk-3-themes | awk '{print $2}'); do sudo snap connect ${PLUG} gtk-theme-breeze:gtk-3-themes; done
for PLUG in $(snap connections | grep gtk-common-themes:gtk-2-themes | awk '{print $2}'); do sudo snap connect ${PLUG} gtk-theme-breeze:gtk-2-themes; done

This should resolve most theming issues. We are still waiting for kdeglobals support to be merged in snapd to fix colorscheme issues; it is set for the next release. I am still working on Qt6 themes and working out how to implement them in snaps, as they are more complex than GTK themes, with shared libraries and file structures to handle.
Please note: please help test the --edge snaps so I can promote them to stable.
WIP Snaps or MRs made
Check out these awesome terminal themes at http://gogh-co.github.io/Gogh/
My Debian contributions this month were all sponsored by Freexian.
You can also support my work directly via Liberapay.
OpenSSH
At the start of the month, I uploaded a quick fix (via Salvatore Bonaccorso) for a regression from CVE-2006-5051, found by Qualys; this was because I expected it to take me a bit longer to merge OpenSSH 9.8, which had the full fix.
This turned out to be a good guess: it took me until the last day of the month to get the merge done. OpenSSH 9.8 included some substantial changes to split the server into a listener binary and a per-session binary, which required some corresponding changes in the GSS-API key exchange patch. At this point I was very grateful for the GSS-API integration test contributed by Andreas Hasenack a little while ago, because otherwise I might very easily not have noticed my mistake: this patch adds some entries to the key exchange algorithm proposal, and on the server side I’d accidentally moved that to after the point where the proposal is sent to the client, which of course meant it didn’t work at all. Even with a failing test, it took me quite a while to spot the problem, involving a lot of staring at strace output and comparing debug logs between versions.
There are still some regressions to sort out, including a problem with socket activation, and problems in libssh2 and Twisted due to DSA now being disabled at compile-time.
Speaking of DSA, I wrote a release note for this change, which is now merged.
GCC 14 regressions
I fixed a number of build failures with GCC 14, mostly in my older packages: grub (legacy), imaptool, kali, knews, and vigor.
autopkgtest
I contributed a change to allow maintaining Incus container and VM images in parallel. I use both of these regularly (containers are faster, but some tests need full machine isolation), and the build tools previously didn’t handle that very well.
I now have a script that just does this regularly to keep my images up to date (although for now I’m running this with PATH pointing to autopkgtest from git, since my change hasn’t been released yet):
RELEASE=sid autopkgtest-build-incus images:debian/trixie
RELEASE=sid autopkgtest-build-incus --vm images:debian/trixie

Python team
I fixed dnsdiag’s uninstallability in unstable, and contributed the fix upstream.
I reverted python-tenacity to an earlier version due to regressions in a number of OpenStack packages, including octavia and ironic. (This seems to be due to #486 upstream.)
I fixed a build failure in python3-simpletal due to Python 3.12 removing the old imp module.
I added non-superficial autopkgtests to a number of packages, including httmock, py-macaroon-bakery, python-libnacl, six, and storm.
I switched a number of packages to build using PEP 517 rather than calling setup.py directly, including alembic, constantly, hyperlink, isort, khard, python-cpuinfo, and python3-onelogin-saml2. (Much of this was by working through the missing-prerequisite-for-pyproject-backend Lintian tag, but there’s still lots to do.)
I upgraded frozenlist, ipykernel, isort, langtable, python-exceptiongroup, python-launchpadlib, python-typeguard, pyupgrade, sqlparse, storm, and uncertainties to new upstream versions. In the process, I added myself to Uploaders for isort, since the previous primary uploader has retired.
Other odds and ends
I applied a suggestion by Chris Hofstaedtler to create /etc/subuid and /etc/subgid in base-passwd, since the login package is no longer essential.
I fixed a wireless-tools regression due to iproute2 dropping its (/usr)/sbin/ip compatibility symlink.
I applied a suggestion by Petter Reinholdtsen to add AppStream metainfo to pcmciautils.
With the work that has been done in the debian-installer/netcfg merge-proposal !9 it is possible to install a standard Debian system, using the normal Debian-Installer (d-i) mini.iso images, that will come pre-installed with Netplan and all network configuration structured in /etc/netplan/.
In this write-up, I’d like to run you through a list of commands for experiencing the Netplan enabled installation process first-hand. Let’s start with preparing a working directory and installing the software dependencies for our virtualized Debian system:
$ mkdir d-i_tmp && cd d-i_tmp
$ apt install ovmf qemu-utils qemu-system-x86

Now let’s download the official (daily) mini.iso, linux kernel image and initrd.gz containing the Netplan enablement changes:
$ wget https://d-i.debian.org/daily-images/amd64/daily/netboot/gtk/mini.iso
$ wget https://d-i.debian.org/daily-images/amd64/daily/netboot/gtk/debian-installer/amd64/initrd.gz
$ wget https://d-i.debian.org/daily-images/amd64/daily/netboot/gtk/debian-installer/amd64/linux

Next we’ll prepare a VM by copying the EFI firmware files, preparing a persistent EFIVARS file (to boot from FS0:\EFI\debian\grubx64.efi), and creating a virtual disk for our machine:
$ cp /usr/share/OVMF/OVMF_CODE_4M.fd .
$ cp /usr/share/OVMF/OVMF_VARS_4M.fd .
$ qemu-img create -f qcow2 ./data.qcow2 20G

Finally, let’s launch the debian-installer using a preseed.cfg file that will automatically install Netplan (netplan-generator) for us in the target system. A minimal preseed file could look like this:
# Install minimal Netplan generator binary

For this demo, we’re installing the full netplan.io package (incl. the interactive Python CLI), as well as the netplan-generator package and systemd-resolved, to show the full Netplan experience. You can choose the preseed file from a set of different variants to test the different configurations.
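A hedged sketch of how that package selection might be expressed in the preseed file (the pkgsel/include approach and the exact package set are assumptions on my part; the preseed variants referenced above are the authoritative files):

d-i pkgsel/include string netplan-generator
# ...or, for the full experience shown in this demo:
# d-i pkgsel/include string netplan.io netplan-generator systemd-resolved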
We’re using the linux kernel and initrd.gz here to be able to pass the preseed URL as a parameter to the kernel’s cmdline directly. Launching this VM should bring up the official debian-installer in its netboot/gtk form:
$ export U=https://people.ubuntu.com/~slyon/d-i/netplan-preseed+full.cfg
$ qemu-system-x86_64 \
    -M q35 -enable-kvm -cpu host -smp 4 -m 2G \
    -drive if=pflash,format=raw,unit=0,file=OVMF_CODE_4M.fd,readonly=on \
    -drive if=pflash,format=raw,unit=1,file=OVMF_VARS_4M.fd,readonly=off \
    -device qemu-xhci -device usb-kbd -device usb-mouse \
    -vga none -device virtio-gpu-pci \
    -net nic,model=virtio -net user \
    -kernel ./linux -initrd ./initrd.gz -append "url=$U" \
    -hda ./data.qcow2 -cdrom ./mini.iso

Now you can click through the normal Debian-Installer process, using mostly default settings. Optionally, you could play around with the networking settings to see how those get translated to /etc/netplan/ in the target system.
After you confirmed your partitioning changes, the base system gets installed. I suggest not to select any additional components, like desktop environments, to speed up the process.
During the final step of the installation (finish-install.d/55netcfg-copy-config) d-i will detect that Netplan was installed in the target system (due to the preseed file provided) and opt to write its network configuration to /etc/netplan/ instead of /etc/network/interfaces or /etc/NetworkManager/system-connections/.
Done! After the installation finished, you can reboot into your virgin Debian Sid/Trixie system.
To do that, quit the current QEMU process by pressing Ctrl+C, and make sure to copy over the EFIVARS.fd file that was modified by GRUB during the installation, so QEMU can find the new system. Then reboot into the new system, not using the mini.iso image any more:
$ cp ./OVMF_VARS_4M.fd ./EFIVARS.fd
$ qemu-system-x86_64 \
    -M q35 -enable-kvm -cpu host -smp 4 -m 2G \
    -drive if=pflash,format=raw,unit=0,file=OVMF_CODE_4M.fd,readonly=on \
    -drive if=pflash,format=raw,unit=1,file=EFIVARS.fd,readonly=off \
    -device qemu-xhci -device usb-kbd -device usb-mouse \
    -vga none -device virtio-gpu-pci \
    -net nic,model=virtio -net user \
    -drive file=./data.qcow2,if=none,format=qcow2,id=disk0 \
    -device virtio-blk-pci,drive=disk0,bootindex=1 -serial mon:stdio

Finally, you can play around with your Netplan-enabled Debian system! As you will find, /etc/network/interfaces exists but is empty; it could still be used (optionally/additionally). Netplan was configured in /etc/netplan/ according to the settings given during the d-i installation process.
In our case, we also installed the Netplan CLI, so we can play around with some of its features, like netplan status:
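For example (assuming a recent netplan.io release; the output will vary with your setup):

$ sudo netplan get            # dump the merged YAML configuration
$ sudo netplan status --all   # show the live state of all interfaces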
Thank you for following along the Netplan enabled Debian installation process and happy hacking! If you want to learn more, find us at GitHub:netplan.
We have quite a few exciting changes going on for Ubuntu Studio 24.10, including one that some might find controversial. However, this is not without a lot of thought and foresight, and even research, testing, and coordination.
With that, let’s just dive right into the controversial change.
Switching to Ubuntu’s Generic Kernel
This is the one that’s going to come as a shock. However, with the release of 24.04 LTS, the generic kernel is now fully capable of preemptible low-latency workloads. Because of this, the lowlatency kernel in Ubuntu will eventually be deprecated.
Rather than take a reactive approach to this, we at Ubuntu Studio decided to be proactive and switch to the generic kernel starting with 24.10. To facilitate this, we will be enabling not only threadirqs like we had done before, but also preempt=full by default.
If you read the first link above, you’ll also notice that nohz_full=all was recommended as well, but we found that it created performance degradation in heavy video workloads, so we decided to leave it off by default and instead give users a GUI option in Ubuntu Studio Audio Configuration to enable or disable these three kernel parameters as needed.
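For reference, threadirqs, preempt=full and nohz_full=all are ordinary kernel command-line parameters; outside of the Audio Configuration GUI they could be set by hand roughly like this (a sketch, not the tool’s actual mechanism):

cat /proc/cmdline    # check which parameters are currently active
# in /etc/default/grub (illustrative):
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash threadirqs preempt=full"
sudo update-grub     # then reboot for the change to take effect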
This has been tested on 24.04 LTS with results equivalent to or better than with the lowlatency kernel. The Ubuntu Kernel Team has also mentioned even more improvements coming to the kernel in 24.10, including the potential ability to change these settings and more on the fly without a reboot.
There have also been numerous improvements for gaming with these settings, for those of you that like to game. You can explore more of that on the Ubuntu Discourse.
Plasma 6
We are cooperating with the Kubuntu team, doing what we can to help with the transition to KDE Plasma Desktop 6. The work is going along slowly but surely, and we hope to have more information on this in the future. For now, most testing of new features is being done on Ubuntu Studio 24.04 LTS, since desktop environment breakage can be catastrophic for application testing. Hence, any screenshots will be of Plasma 5.
New Theming for Ubuntu Studio
We’ve been using the Materia theme for the past five years, since 19.04, with a brief break for 22.04 LTS. Unfortunately, that is coming to an end as the Materia theme is no longer maintained. Its successor has been found in Orchis, which was forked from Materia. Here’s a general screenshot our Project Leader, Erich Eickmeyer, made from his personal desktop using Ubuntu Studio 24.04 LTS and the Orchis theme:
Message from Erich: “Yes, that’s Microsoft Edge and yes, my system needs a reboot. Don’t @ me. XD”

Contributions Needed, and Help a Family in Need!
Ubuntu Studio is a community-run project, and donations are always welcome. If you find Ubuntu Studio useful and want to support its ongoing development, please contribute!
Erich’s wife, Edubuntu Project Leader Amy Eickmeyer, lost her full-time job two weeks ago and the family is in desperate need of help in this time of hardship. If you could find it in your heart to donate extra to Ubuntu Studio, those funds will help the Eickmeyer family at this time.
Contribution options are on the sidebar to the right or at ubuntustudio.org/contribute.
Introduction
When managing Unix-like operating systems, understanding permission settings and security practices is crucial for maintaining system integrity and protecting data. FreeBSD and Linux, two popular Unix-like systems, offer distinct approaches to permission settings and security. This article delves into these differences, providing a comprehensive comparison to help system administrators and users navigate these systems effectively.
1. Overview of FreeBSD and Linux
FreeBSD is a Unix-like operating system derived from the Berkeley Software Distribution (BSD), renowned for its stability, performance, and advanced networking features. It is widely used in servers, network appliances, and embedded systems.
Linux, on the other hand, is a free and open-source operating system kernel created by Linus Torvalds. It is the foundation of numerous distributions (distros) like Ubuntu, Fedora, and CentOS. Linux is known for its flexibility, broad hardware support, and extensive community-driven development.
2. File System Hierarchy
Both FreeBSD and Linux follow the Unix file system hierarchy, but with slight variations; for example, FreeBSD keeps third-party software under /usr/local and its configuration under /usr/local/etc, while most Linux distributions follow the Filesystem Hierarchy Standard and install packaged software directly under /usr and /etc. Understanding these differences is key to grasping permission settings on each system.
3. Permissions and Ownership
Both systems use a similar model for file permissions but have some differences in implementation and additional features.
3.1 Basic File Permissions
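Both systems use the familiar user/group/other read-write-execute bits, managed with the same core commands. A quick illustrative sketch (file and account names are placeholders):

ls -l report.txt              # show the current mode, owner and group
chmod 640 report.txt          # rw for the owner, read-only for the group, nothing for others
chown alice:staff report.txt  # change owner and group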
3.2 Special Permissions
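The setuid, setgid and sticky bits likewise behave the same way on both systems; a short sketch (paths are placeholders):

chmod u+s /usr/local/bin/myprog   # setuid: run with the file owner's privileges
chmod g+s /srv/shared             # setgid on a directory: new files inherit its group
chmod +t /srv/shared/tmp          # sticky bit: only owners may delete their own files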
4. Extended Attributes and ACLs
4.1 FreeBSD:
FreeBSD supports Extended File Attributes (EAs) and Access Control Lists (ACLs) to provide more granular permission control.
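For instance, the extattr tools and NFSv4-style ACLs (on ZFS) might be used like this — a rough sketch, since the exact ACL syntax depends on the filesystem and ACL type in use:

setextattr user backup.note weekly report.txt   # set an extended attribute
getextattr user backup.note report.txt          # read it back
lsextattr user report.txt                       # list attributes in the 'user' namespace
setfacl -m u:alice:rwx::allow report.txt        # grant alice rwx via an NFSv4 ACL entry
getfacl report.txt                              # inspect the ACL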
4.2 Linux:
Linux also supports Extended Attributes and ACLs.
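On Linux the equivalents come from the attr and acl tool sets (which may need installing); a sketch with placeholder names:

setfattr -n user.backup.note -v weekly report.txt   # set an extended attribute
getfattr -d report.txt                              # dump user-namespace attributes
setfacl -m u:alice:rw report.txt                    # grant alice read/write via a POSIX ACL
getfacl report.txt                                  # inspect the ACL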
5. Security Models and Practices
5.1 FreeBSD Security Model:
FreeBSD includes several features for enhanced security:
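Jails and Capsicum (both also mentioned in the conclusion below), securelevels and the MAC framework are the usual examples. A quick, illustrative way to poke at some of them from the shell:

sysctl kern.securelevel   # show the current securelevel
jls                       # list running jails, if any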
5.2 Linux Security Model:
Linux employs a range of security modules and practices:
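SELinux and AppArmor (both mentioned in the conclusion) are the most widely deployed Linux Security Modules. A small sketch for checking what is active on a given system:

cat /sys/kernel/security/lsm   # list the LSMs active in the running kernel
getenforce                     # SELinux mode, on SELinux systems
sudo aa-status                 # AppArmor profile status, on AppArmor systems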
6. System Configuration and Management
6.1 FreeBSD Configuration:
FreeBSD uses configuration files located in /etc and other directories for system management. The rc.conf file is central for system startup and service configuration. The sysctl command is used for kernel parameter adjustments.
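A sketch of typical day-to-day usage, with sysrc editing rc.conf (the service and sysctl names are just examples):

sudo sysrc sshd_enable="YES"          # persist a setting in /etc/rc.conf
sudo service sshd start               # start the service now
sysctl kern.ipc.somaxconn             # read a kernel parameter
sudo sysctl kern.ipc.somaxconn=1024   # change it at runtime
# add 'kern.ipc.somaxconn=1024' to /etc/sysctl.conf to make it persistent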
6.2 Linux Configuration:
Linux configurations are distributed across various directories like /etc for system-wide settings and /proc for kernel parameters. Systemd is the most common init system, managing services and their dependencies. The sysctl command is also used in Linux for kernel parameter adjustments.
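A comparable sketch on a systemd-based distribution (names are examples; the SSH unit is "sshd" on RHEL-family systems):

sudo systemctl enable --now ssh       # enable and start a service
sysctl net.ipv4.ip_forward            # read a kernel parameter
sudo sysctl -w net.ipv4.ip_forward=1  # change it at runtime
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-forwarding.conf   # persist it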
7. User Management
7.1 FreeBSD:
FreeBSD manages users and groups through /etc/passwd, /etc/group, and /etc/master.passwd. User and group management commands include adduser, pw, and groupadd.
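For example (account and group names are placeholders):

sudo pw useradd alice -m -G wheel -s /bin/sh   # create a user with a home directory
sudo pw groupadd developers                    # create a group
sudo pw groupmod developers -m alice           # add alice to it
sudo passwd alice                              # set a password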
7.2 Linux:
Linux also uses /etc/passwd and /etc/group for user management. User and group management commands include useradd, usermod, groupadd, and passwd.
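And the Linux equivalent, again with placeholder names:

sudo useradd -m -s /bin/bash alice   # create a user with a home directory
sudo groupadd developers             # create a group
sudo usermod -aG developers alice    # add alice to it
sudo passwd alice                    # set a password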
8. Network Security
8.1 FreeBSD:
FreeBSD offers robust network security features, including:
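the built-in pf and ipfw packet filters and IPsec support, among others. A minimal, purely illustrative pf setup might look like this:

# illustrative /etc/pf.conf ruleset:
#   block in all
#   pass in proto tcp to port { 22 80 443 }
#   pass out all
sudo sysrc pf_enable="YES"    # enable pf at boot
sudo service pf start
sudo pfctl -f /etc/pf.conf    # (re)load the ruleset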
8.2 Linux:
Linux provides several options for network security:
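netfilter (iptables/nftables) and front-ends such as ufw and firewalld, among others. A small illustrative ufw policy on a Debian/Ubuntu-style system:

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw enable
sudo ufw status verbose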
9. Backup and Recovery
9.1 FreeBSD:
FreeBSD supports several backup and recovery tools:
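dump/restore for UFS and snapshots with zfs send/receive on ZFS are the classic choices. A hedged sketch of a ZFS-based backup (dataset and host names are placeholders):

sudo zfs snapshot zroot/usr/home@backup-2024-07-01   # take a point-in-time snapshot
sudo zfs send zroot/usr/home@backup-2024-07-01 | ssh backuphost zfs receive backup/home   # replicate it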
9.2 Linux:
Linux offers a range of backup and recovery tools:
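rsync and tar are the most ubiquitous. A minimal sketch (paths are placeholders):

rsync -aAX --delete /home/ /mnt/backup/home/             # mirror home directories, keeping ACLs and xattrs
sudo tar -czpf /mnt/backup/etc-$(date +%F).tar.gz /etc   # compressed archive of /etc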
10. Conclusion
Both FreeBSD and Linux offer robust permission settings and security features, each with its strengths and specific implementations. FreeBSD provides a comprehensive suite of security features, including jails and Capsicum, while Linux offers a variety of security modules like SELinux and AppArmor. Understanding these differences is crucial for system administrators to effectively manage and secure their systems. By leveraging the unique features of each operating system, administrators can enhance their systems’ security and maintain a robust and reliable computing environment.
The post Understanding Permission Setting and Security on FreeBSD vs. Linux appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.
This release includes the long awaited OCI/Docker image support!
With this, users who previously were either running Docker alongside Incus or Docker inside of an Incus container just to run some pretty simple software that’s only distributed as OCI images can now just do it directly in Incus.
In addition to the OCI container support, this release also comes with:
The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:
You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/
And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus
Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.
Enjoy!
Critical OpenSSH Vulnerability (CVE-2024-6387): Please Update Your Linux
A critical security flaw (CVE-2024-6387) has been identified in OpenSSH, a program widely used for secure remote connections. This vulnerability could allow attackers to completely compromise affected systems (remote code execution).
Who is Affected?
Only specific versions of OpenSSH (8.5p1 to 9.7p1) running on glibc-based Linux systems are vulnerable. Newer versions are not affected.
What to Do?
Update OpenSSH: Check your version by running ssh -V in your terminal. If you're using a vulnerable version (8.5p1 to 9.7p1), update immediately.
Temporary Workaround (Use with Caution): Disabling the login grace timeout (setting LoginGraceTime=0 in sshd_config) can mitigate the risk, but be aware it increases susceptibility to denial-of-service attacks.
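If you do apply the workaround, it boils down to one sshd_config directive and a service restart. A hedged sketch (the drop-in path is an example; on systems without an sshd_config.d include, edit /etc/ssh/sshd_config directly, and remember the DoS trade-off noted above):

ssh -V                                                           # check the installed version first
echo 'LoginGraceTime 0' | sudo tee /etc/ssh/sshd_config.d/99-cve-2024-6387.conf
sudo sshd -t                                                     # sanity-check the configuration
sudo systemctl restart ssh                                       # the unit is 'sshd' on RHEL-family systems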
Recommended Security Enhancement: Install fail2ban to prevent brute-force attacks. This tool automatically bans IPs with too many failed login attempts.
Optional: IP Whitelisting for Increased Security
Once you have fail2ban installed, consider allowing only specific IP addresses to access your server via SSH (example commands follow the list below). This can be achieved using:
ufw for Ubuntu
firewalld for AlmaLinux or Rocky Linux
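For example, to allow SSH only from a single trusted address (203.0.113.10 is a placeholder):

# ufw (Ubuntu)
sudo ufw allow from 203.0.113.10 to any port 22 proto tcp
sudo ufw deny 22/tcp
# firewalld (AlmaLinux / Rocky Linux)
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.10" port port="22" protocol="tcp" accept'
sudo firewall-cmd --permanent --remove-service=ssh
sudo firewall-cmd --reload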
Additional Resources
OpenSSH Security Page: https://www.openssh.com/security.html
DevSec Hardening Framework - SSH Baseline: https://dev-sec.io/
Fail2ban: https://github.com/fail2ban
About Fail2ban
Fail2ban monitors log files like /var/log/auth.log and bans IPs with excessive failed login attempts. It updates firewall rules to block connections from these IPs for a set duration. Fail2ban is pre-configured to work with common log files and can be easily customized for other logs and errors.
Installation Instructions:
Ubuntu: sudo apt install fail2ban
AlmaLinux/Rocky Linux: sudo dnf install fail2ban
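Once installed, enabling and checking the SSH jail is straightforward; the jail.local values below are illustrative overrides (the shipped defaults already cover sshd on most distributions):

# /etc/fail2ban/jail.local (illustrative):
#   [sshd]
#   enabled  = true
#   maxretry = 5
#   bantime  = 1h
sudo systemctl enable --now fail2ban
sudo fail2ban-client status sshd   # show the jail status and currently banned IPs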
About DevSec Hardening Framework
The DevSec Hardening Framework is a set of tools and resources that helps automate the process of securing your server infrastructure. It addresses the challenges of manually hardening servers, which can be complex, error-prone, and time-consuming, especially when managing a large number of servers. The framework integrates with popular infrastructure automation tools like Ansible, Chef, and Puppet. It provides pre-configured modules that automatically apply secure settings to your operating systems and services such as OpenSSH, Apache and MySQL. This eliminates the need for manual configuration and reduces the risk of errors.
Prepared by LinuxMalaysia with the help of Google Gemini
5 July 2024
In Google Doc Format
As the tech world comes together to celebrate FreeBSD Day 2024, we are thrilled to bring you an exclusive interview with none other than Beastie, the iconic mascot of BSD! In a rare and exciting appearance, Beastie joins Kim McMahon to share insights about their journey, their role in the BSD community, and some fun personal preferences. Here’s a sneak peek into the life of the beloved mascot that has become synonymous with BSD.
From Icon to Legend: How Beastie Became the BSD Mascot
Beastie, with their distinct and endearing devilish charm, has been the face of BSD for decades. But how did they land this coveted role? During the interview, Beastie reveals that their journey began back in the early days of BSD. The character was originally drawn by John Lasseter of Pixar fame, and quickly became a symbol of the BSD community’s resilience and innovation. Beastie’s playful yet formidable appearance captured the spirit of BSD, making them an instant hit among developers and users alike.
A Day in the Life of Beastie
What does a typical day look like for the BSD mascot? Beastie shares that their role goes beyond just being a symbol. They actively participate in community events, engage with developers, and even help in promoting BSD at various conferences around the globe. Beastie’s presence is a source of inspiration and motivation for the BSD community, reminding everyone of the project’s rich heritage and vibrant future.
Beastie’s Favorite Tools and Editors
No interview with a tech mascot would be complete without delving into their favorite tools. Beastie is an advocate of keeping things simple and efficient. When asked about their preferred text editor, Beastie enthusiastically endorsed Vim, praising its versatility and powerful features. They also shared their admiration for the classic Unix philosophy, which aligns perfectly with the minimalist yet powerful nature of Vim.
Engaging with the BSD Community
Beastie’s role is not just about representation; it’s about active engagement. They spoke about the importance of community in the BSD ecosystem and how it has been pivotal in driving the project forward. From organizing hackathons to participating in mailing lists, Beastie is deeply involved in fostering a collaborative and inclusive environment. They highlighted the incredible contributions of the BSD community, acknowledging that it’s the collective effort that makes BSD a robust and reliable operating system.
Looking Ahead: The Future of BSD
As we look to the future, Beastie remains optimistic about the path ahead for BSD. They emphasized the ongoing developments and the exciting projects in the pipeline that promise to enhance the BSD experience. Beastie encouraged new users and seasoned developers alike to explore BSD, contribute to its growth, and be a part of its dynamic community.
Join the Celebration
To mark FreeBSD Day 2024, the community is hosting a series of events, including workshops, Q&A sessions, and more. Beastie’s interview with Kim McMahon is just one of the highlights. Be sure to tune in and catch this rare glimpse into the life of BSD’s beloved mascot.
Final Thoughts
Beastie’s interview is a testament to the enduring legacy and vibrant community of BSD. As we celebrate FreeBSD Day 2024, let’s take a moment to appreciate the contributions of everyone involved and look forward to an exciting future for BSD.
Don’t miss out on this exclusive interview—check it out on YouTube and join the celebration of FreeBSD Day 2024!
The post Celebrating FreeBSD Day 2024: An Exclusive Interview with Beastie appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.
Download and Use the Latest Stable Version of Nginx
To ensure you receive the latest security updates and bug fixes for Nginx, configure your system's repository specifically for it. Detailed instructions on how to achieve this can be found on the Nginx website. Setting up the repository allows your system to automatically download and install future Nginx updates, keeping your web server running optimally and securely.
Visit these websites for information on how to configure your repository for Nginx.
https://nginx.org/en/linux_packages.html
https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/
Installing Nginx on different Linux distributions
Example from https://docs.bunkerweb.io/latest/integrations/#linux
Ubuntu
sudo apt install -y curl gnupg2 ca-certificates lsb-release debian-archive-keyring && \
(continued in the sketch after the repository file below)

CentOS / AlmaLinux / Rocky Linux
Create the following file at /etc/yum.repos.d/nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
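For the Ubuntu/Debian side, the apt command above is only the first step; the rest of the setup roughly follows the nginx.org instructions linked earlier (a sketch — verify the key handling and distribution codename against that page):

curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
    | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] http://nginx.org/packages/ubuntu $(lsb_release -cs) nginx" \
    | sudo tee /etc/apt/sources.list.d/nginx.list
sudo apt update && sudo apt install -y nginx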
https://thenewstack.io/freenginx-a-fork-of-nginx/
Use this web tool to configure Nginx: https://www.digitalocean.com/community/tools/nginx
Example

Harisfazillah Jamel - LinuxMalaysia - 20240619

In my previous blog, I explored The New APT 3.0 solver. Since then I have been at work in the test suite making tests pass and fixing some bugs.
You see, for all intents and purposes, the new solver is a very stupid naive DPLL SAT solver (it just so happens we don’t actually have any pure literals in there). We can control it in a bunch of ways:
This is about all that we really want to do; we can’t, when we reach a conflict, say “oh, but this conflict was introduced by that upgrade, and it seems more important, so let’s not backtrack on the upgrade request but on this dependency instead.”
This forces us to think about lowering the dependency problem into this form, such that not only do we get formally correct solutions, but also semantically correct ones. This is nice because we can apply a systematic way to approach the issue rather than introducing ad-hoc rules in the old solver which had a “which of these packages should I flip the opposite way to break the conflict” kind of thinking.
Now our test suite has a whole bunch of these semantics encoded in it, and I’m going to share some problems and ideas for how to solve them. I can’t wait to fix these and the error reporting and then turn it on in Ubuntu and later Debian (the defaults change is a post-trixie change, let’s be honest).
apt upgrade is hard
The apt upgrade command implements a safe version of dist-upgrade that essentially calculates the dist-upgrade and then undoes anything that would cause a package to be removed, but it (unlike its apt-get counterpart) allows the solver to install new packages.
Now, consider the following package is installed:
X Depends: A (= 1) | B

An upgrade from A=1 to A=2 is available. What should happen?
The classic solver would choose to remove X in a dist-upgrade, and then upgrade A, so its answer is quite clear: keep back the upgrade of A.
The new solver however sees two possible solutions:
Which one does it pick? This depends on the order in which it sees the upgrade action for A and the dependency, as it will backjump chronologically. So
If it gets to the dependency first, it marks A=1 for install to satisfy A (= 1). Then it gets to the upgrade request, which is just A Depends A (= 2) | A (= 1) and sees it is satisfied already and is content.
If it gets to the upgrade request first, it marks A=2 for install to satisfy A (= 2). Then later it gets to X Depends: A (= 1) | B, sees that A (= 1) is not satisfiable, and picks B.
We have two ways to approach this issue:
See, if you have an X Recommends: A (= 1) and a new version of A, A (= 2), the solver currently will silently break the Recommends in some cases.
But let’s explore what the behavior of a X Recommends: A (= 1) in combination with an available upgrade of A (= 2) should be. We could say the rule should be:
This essentially leaves us the same choices as for the previous problem, but with an interesting twist. We can change the ordering (and we already did), but we could also introduce a new rule, “promotions”:
A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently satisfied, must continue to be satisfied, that is, it effectively is promoted to a Depends.
This neatly solves the problem for us. We will never break Recommends that are satisfied.
Likewise, we already have a Recommends demotion rule:
A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently unsatisfied, will not be further evaluated (it is treated like a Suggests is in the default configuration).
Whether we should be allowed to break Suggests with our decisions or not (the old autoremover did not, for instance) is a different decision. Should we promote currently satisfied Suggests to Depends as well? Should we follow currently satisfied Suggests so the solver sees them and doesn’t autoremove them, but treat them as optional?
tightening of versioned dependencies
Another case of versioned dependencies with alternatives that has complex behavior is something like:

X Depends: A (>= 2) | B
X Recommends: A (>= 2) | B

In both cases, installing X should upgrade an A < 2 in favour of installing B. But a naive SAT solver might not. If your request to keep A installed is encoded as A (= 1) | A (= 2), then it first picks A (= 1). When it sees the Depends/Recommends it will switch to B.
X Depends: A (>= 2) | B X Recommends: A (>= 2) | BIn both cases, installing X should upgrade an A < 2 in favour of installing B. But a naive SAT solver might not. If your request to keep A installed is encoded as A (= 1) | A (= 2), then it first picks A (= 1). When it sees the Depends/Recommends it will switch to B.
We can solve this again as in the previous example by ordering the “keep A installed” requests after any dependencies. Notably, we will enqueue the common dependencies of all A versions first before selecting a version of A, so something may select a version for us.
version narrowing instead of version choosing
A different approach to dealing with the issue of version selection is to not select a version until the very last moment. So instead of selecting a version to satisfy A (>= 2) we instead translate
Depends: A (>= 2)

into two rules:
The package selection rule:
Depends: A

This ensures that any version of A is installed (i.e. it adds a version choice clause, A (= 1) | A (= 2) in an example with two versions for A).
The version narrowing rule:
Conflicts: A (<< 2)

This outright would reject a choice of A (= 1).
So now we have 3 kinds of clauses:
If we process them in that order, we should surely be able to find the solution that best matches the semantics of our Debian dependency model, i.e. selecting earlier choices in a dependency before later choices in the face of version restrictions.
This still leaves one issue: What if our maintainer did not use Depends: A (>= 2) | B but e.g. Depends: A (= 3) | B | A (= 2)? He’d expect us to fall back to B if A (= 3) is not installable, and not to A (= 2). But we’d like to enqueue A and reject all choices other than 3 and 2. I think it’s fair to say: “Don’t do that, then” here.
Implementing strict pinning correctly
APT knows a single candidate version per package; this makes the solver relatively deterministic: it will only ever pick the candidate, or an installed version. This also happens to significantly reduce the search space, which is good - less backtracking. An up-to-date system will only ever have one version per package that can be installed, so we never actually have to choose versions.
But of course, APT allows you to specify a non-candidate version of a package to install, for example:
apt install foo/oracular-proposed

The way this works is that the core component of the previous solver, the pkgDepCache, maintains what essentially amounts to an overlay of the policy that you could see with apt-cache policy.
The solver currently however validates allowed version choices against the policy directly, and hence finds these versions are not allowed and craps out. This is an interesting problem because the solver should not be dependent on the pkgDepCache as the pkgDepCache initialization (Building dependency tree...) accounts for about half of the runtime of APT (until the Y/n prompt) and I’d really like to get rid of it.
But currently the frontend does go via the pkgDepCache. It marks the packages in there, building up what you could call a transaction, and then we translate it to the new solver, and once it is done, it translates the result back into the pkgDepCache.
The current implementation of “allowed version” is implemented by reducing the search space, i.e. every dependency, we outright ignore any non-allowed versions. So if you have a version 3 of A that is ignored a Depends: A would be translated into A (= 2) | A (= 1).
However this has two disadvantages. (1) It means if we show you why A could not be installed, you don’t even see A (= 3) in the list of choices and (2) you would need to keep the pkgDepCache around for the temporary overrides.
So instead of actually enforcing the allowed version rule by filtering, a more reasonable model is that we apply the allowed version rule by just marking every other version as not allowed when discovering the package in the from depcache translation layer. This doesn’t really increase the search space either but it solves both our problem of making overrides work and giving you a reasonable error message that lists all versions of A.
pulling up common dependencies to minimize backtracking cost
One of the common issues we have is that when we have a dependency group
`A | B | C | D`

we try them in order, and if one fails, we undo everything it did, and move on to the next one. However, this isn’t perhaps the best choice of operation.
I explained before that one thing we do is queue the common dependencies of a package (i.e. dependencies shared in all versions) when marking a package for install, but we don’t do this here: We have already lowered the representation of the dependency group into a list of versions, so we’d need to extract the package back out of it.
This can of course be done, but there may be a more interesting solution to the problem, in that we simply enqueue all the common dependencies. That is, we add n backtracking levels for n possible solutions:
Now if we need to backtrack from our choice of A we hopefully still have a lot of common dependencies queued that we do not need to redo. While we have more backtracking levels, each backtracking level would be significantly cheaper, especially if you have cheap backtracking (which admittedly we do not have, yet anyway).
The caveat though is: It may be pretty expensive to find the common dependencies. We need to iterate over all dependency groups of A and see if they are in B, C, and D, so we have a complexity of roughly
#A * (#B+#C+#D)
Each dependency group we need to check i.e. is X|Y in B meanwhile has linear cost: We need to compare the memory content of two pointer arrays containing the list of possible versions that solve the dependency group. This means that X|Y and Y|X are different dependencies of course, but that is to be expected – they are. But any dependency of the same order will have the same memory layout.
So really the cost is roughly N^4. This isn’t nice.
You can apply various heuristics here on how to improve that, or you can even apply binary logic:
This has a significant advantage in long lists of choices, and also in the common case, where the first solution should be the right one.
Or again, if you enqueue the package and a version restriction instead, you already get the common dependencies enqueued for the chosen package at least.
Contrary to what you may be thinking, this is not a tale of an inexperienced coder pretending to know what they’re doing. I have something even better for you.
It all begins in the dead of night, at my workplace. In front of me is a typical programmer’s desk - two computers, three monitors (one of which isn’t even plugged in), a mess of storage drives, SD cards, 2FA keys, and an arbitrary RPi 4, along with a host of items that most certainly don’t belong on my desk, and a tangle of cables that would give even a rat a migraine. My dev laptop is sitting idle on the desk, while I stare intently at the screen of a system running a battery of software tests. In front of me is the logs of a failed script run.
Generally when this particular script fails, it gives me some indication as to what went wrong. There are thorough error catching measures (or so I thought) throughout the code, so that if anything goes wrong, I know what went wrong and where. This time though, I’m greeted by something like this:
$ systemctl status test-sh.service
test-sh.service - does testing things
...
May 20 23:00:00 desktop-pc systemd[1]: Starting test-sh.service - does testing things
May 20 23:00:00 desktop-pc systemd[1]: test-sh.service: Failed with result ‘exit-code’.
May 20 23:00:00 desktop-pc systemd[1]: Failed to start test-sh.service.
I stare at the screen in bewilderment for a few seconds. No debugging info, no backtraces, no logs, not even an error message. It’s as if the script simply decided it needed some coffee before it would be willing to keep working this late at night. Having heard the tales of what happens when you give a computer coffee, I elected to try a different approach.
$ vim /usr/bin/test-sh
1 #!/bin/bash
2 #
3 # Copyright 2024 ...
4 set -u;
5 set -e;
Before I go into what exactly is wrong with this picture, I need to explain a bit about how Bash handles the ultimate question of life, “what is truth?”
(RED ALERT: I do not know if I’m correct about the reasoning behind the design decisions I talk about in the rest of this article. Don’t use me as a reference for why things work like this, and please correct me if I’ve botched something. Also, a lot of what I describe here is simplified, so don’t be surprised if you notice or discover that things are a bit more complex in reality than I make them sound like here.)
Bash, as many of you probably know, is primarily a “glue” language - it glues applications to each other, it glues the user to the applications, and it glues one’s sanity to the ceiling, far out of the user’s reach. As such, it features a bewildering combination of some of the most intuitive and some of the least intuitive behaviors one can dream up, and the handling of truth and falsehood is one of these bewildering things.
Every command you run in Bash reports back whether or not what it did “worked”. (“Worked” is subjective and depends on the command, but for the most part if a command says “It worked”, you can trust that it did what you told it to, at least mostly.) This is done by means of an “exit code”, which is nothing more than a number between 0 and 255. If a program exits and hands the shell an exit code of 0, it usually means “it worked”, whereas a non-zero exit code usually means “something went wrong”. (This makes sense if you know a bit about how programs written in C work - if your program is written to just “do things” and then exit, it will default to exiting with code zero.)
Because zero = good and non-zero = not good, it makes sense to treat zero as meaning “true” and non-zero as meaning “false”. That’s exactly what Bash does - if you do something like “if command; then commandIfTrue; else commandIfFalse; fi”, Bash will run “commandIfTrue” if “command” exits with 0, and will run “commandIfFalse” if “command” exits with 1 or higher.
Now since Bash is a glue language, it has to be able to handle it if a command runs and fails. This can be done with some amount of difficulty by testing (almost) every command the script runs, but that can be quite tedious. There’s a (generally) easier way however, which is to tell the script to immediately exit if any command exits with a non-zero exit code. This is done by using the command “set -e” at or near the top of the script. Once “set -e” is active, any command that fails will cause the whole script to stop.
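To make that concrete, here’s a tiny throwaway script (not from the original code) showing “set -e” in action:

#!/bin/bash
set -e;
echo "this line runs";
false;                             # exits non-zero, so "set -e" aborts the script here
echo "this line is never reached";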
So back to my script. I’m using “set -e” so that if anything goes wrong, the script stops. What could go wrong other than a failed command? To answer that question, we have to take a look at how some things work in C.
C is a very different language than Bash. Whereas Bash is designed to take a bunch of pieces and glue them together, C is designed to make the pieces themselves. You can think of Bash as being a glue gun and C as being a 3d printer. As such, C does not concern itself nearly as much with things like return codes and exiting when a command fails. It focuses on taking data and doing stuff with it.
Since C is more data- and algorithm-oriented, true and false work significantly differently here. C sees 0 as meaning “none, empty, all bits set to 0, etc.” and thus treats it as meaning “false”. Any number greater than 0 has a value, and can be treated as “on” or “true”. An astute reader will notice this is exactly the opposite of how Bash works, where 0 is true and non-zero is false. (In my opinion this is a rather lamentable design decision, but sadly these behaviors have been standardized for longer than I’ve been alive, so there’s not much point in trying to change them. But I digress.)
C also of course has features for doing math, called “operators”. One of the most common operators is the assignment operator, “=”. The assignment operator’s job is to take whatever you put on the right side of it, and store it in whatever you put on the left side. If you say “a = 0”, the value “0” will be stored in the variable “a” (assuming things work right). But the assignment operator has a trick up its sleeve - not only does it assign the value to the variable, it also returns the value. Basically what that means is that the statement “a = 0” spits out an extra value that you can do things with. This allows you to do things like “a = b = 0”, which will assign 0 to “b”, return zero, and then assign that returned zero to "a”. (The assignment of the second zero to “a” also returns a zero, but that simply gets ignored by the program since there’s nothing to do with it.)
You may be able to see where I’m going with this. Assigning a value to a variable also returns that value… and 0 means “false”… so “a = 0” succeeds, but also returns what is effectively “false”. That means if you do something like “if (a = 0) { ... } else { explodeComputer(); }”, the computer will explode. “a = 0” returns “false”, thus the “if” condition does not run and the “else” condition does. (Coincidentally, this is also a good example of the “world’s last programming bug” - the comparison operation in C is “==”, which is awfully easy to mistype as the assignment operator, “=”. Using an assignment operator in an “if” statement like this will almost always result in the code within the “if” being executed, as the value being stored in the variable will usually be non-zero and thus will be seen as “true” by the “if” statement. This also corrupts the variable you thought you were comparing something to. Some fear that a programmer with access to nuclear weapons will one day write something like “if (startWar = 1) { destroyWorld(); }” and thus the world will be destroyed by a missing equals sign.)
“So what,” you say. “Bash and C are different languages.” That’s true, and in theory this would mean that everything here is fine. Unfortunately theory and practice are the same in theory but much different in practice, and this is one of those instances where things go haywire because of weird differences like this. There’s one final piece of the puzzle to look at first though - how to do math in Bash.
Despite being a glue language, Bash has some simple math capabilities, most of which are borrowed from C. Yes, including the behavior of the assignment operator and the values for true and false. When you want to do math in Bash, you write “(( do math here... ))”, and everything inside the double parentheses is evaluated. Any assignment done within this mode is executed as expected. If I want to assign the number 5 to a variable, I can do “(( var = 5 ))” and it shall be so.
But wait, what happens with the return value of the assignment operator?
Well, take a guess. What do you think Bash is going to do with it?
Let’s look at it logically. In C (and in Bash’s math mode), 0 is false and non-zero is true. In Bash, 0 is true and non-zero is false. Clearly if whatever happen within math mode fails and returns false (0), Bash should not misinterpret this as true! Things like “(( 5 == 6 ))” shouldn’t be treated as being true, right? So what do we do with this conundrum? Easy solution - convert the return value to an exit code so that its semantics are retained across the C/Bash barrier. If the return value of the math mode statement is false (0), it should be converted to Bash’s concept of false (non-zero), therefore the return value of 0 is converted to an exit code of 1. On the other hand, if the return value of the math mode statement is true (non-zero), it should be converted to Bash’s concept of true (0), therefore the return value of anything other than 0 is converted to an exit code of 0. (You probably see the writing on the wall at this point. Spoiler, my code was weighed in the balances and found wanting.)
So now we can put all this nice, logical, sensible behavior together and make a glorious mess with it. Guess what happens if you run “(( var = 0 ))” in a script where “set -e” is enabled.
“0” is assigned to “var”.
The statement returns 0.
Bash dutifully converts that to a 1 (false/failure).
Bash now sees the command as having failed.
“set -e” says the script should immediately stop if anything fails.
The script crashes.
You can try this for yourself - pop open a terminal and run “set -e; (( var = 0 ));” and watch in awe as your terminal instantly closes (or otherwise shows an indication that Bash has exited).
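If you’d rather keep your terminal, running the same experiment in a child bash shows the failure without the drama:

$ bash -c 'set -e; (( var = 0 )); echo "still alive"'
$ echo $?
1

The child shell exits before reaching the echo, and its exit status is 1.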
So back to the code. In my script, I have a function that helps with generating random numbers within any specified bounds. Basically it just grabs the value of “$RANDOM” (which is a special variable in Bash that always returns an integer between 0 and 32767) and does some manipulations on it so that it becomes a random number between a “lower bound” and an “upper bound” parameter. In the guts of that function’s code I have many “math mode” statements for getting those numbers into shape. Those statements include variable assignments, and those variable assignments were throwing exit codes into the script. I had written this before enabling “set -e”, so everything was fine before, but now “set -e” was enabled and Bash was going to enforce it as ruthlessly as possible.
While I will never know what line of code triggered the failure, it’s a fairly safe bet that the culprit was:
88 (( _val = ( _val % ( _adj_upper_bound + 1 ) ) ));
This basically takes whatever is in “_val” , divides it by “_adj_upper_bound + 1”, and then assigns the remainder of that operation to “_val”. This makes sure that “_val” is lower than “_adj_upper_bound + 1”. (This is typically known as a “getting the modulus”, and the “%” operator here is the “modulo operator”. For the math people reading this, don’t worry, I did the requisite gymnastics to ensure this code didn’t have modulo bias.) If “_val” happens to be equal to “_adj_upper_bound + 1”, the code on the right side of the assignment operator will evaluate to 0, which will become an exit code of 1, thus exploding my script because of what appeared to be a failed command.
Sigh.
So there’s the problem. What’s the solution? Turns out it’s pretty simple. Among Bash’s feature set, there is the profoundly handy “logical or operator”, “||”. This operator lets us say “if this OR that is true, return true.” In other words, “Run whatever’s on the left hand of the ||. If it exits 0, move on. If it exits non-zero, run whatever’s on the right hand of the ||. If it exits 0, move on and ignore the earlier failure. Only return non-zero if both commands fail.” There’s also a handy command in Bash called “true” that does nothing except for give an exit code of 0. That means that if you ever have a line of code in Bash that is liable to exit non-zero but it’s no big deal if it does, you can just slap an “|| true” on the end and it will magically make everything work by pretending that nothing went wrong. (If only this worked in real life!) I proceeded to go through and apply this bandaid to every standalone math mode call in my script, and it now seems to be behaving itself correctly again. For now anyway.
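Applied to the line quoted earlier, the bandaid looks roughly like this:

(( _val = ( _val % ( _adj_upper_bound + 1 ) ) )) || true;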
tl;dr: Faking success is sometimes a perfectly valid way to solve a computing problem. Just don’t live the way you code and you’ll be alright.
APT 2.9.3 introduces the first iteration of the new solver, codenamed solver3, now available with the --solver 3.0 option. The new solver works fundamentally differently from the old one.
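For example, assuming an APT build that ships solver3, you can opt in per invocation (the package name is just a placeholder):

sudo apt install --solver 3.0 hello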
How does it work?
Solver3 is a fully backtracking dependency solving algorithm that defers choices to as late as possible. It starts with an empty set of packages, then adds the manually installed packages, and then installs packages automatically as necessary to satisfy the dependencies.
Deferring the choices is implemented multiple ways:
First, all install requests recursively mark dependencies with a single solution for install, and any packages that are being rejected due to conflicts or user requests will cause their reverse dependencies to be transitively marked as rejected, provided their or group cannot be solved by a different package.
Second, any dependency with more than one choice is pushed to a priority queue that is ordered by the number of possible solutions, such that we resolve a|b before a|b|c.
Not just by the number of solutions, though. One important point to note is that optional dependencies, that is, Recommends, always sort after mandatory dependencies. Do note: Recommended packages do not “nest” in backtracking - dependencies of a Recommended package themselves are not optional, so they will have to be resolved before the next Recommended package is seen in the queue.
Another important step in deferring choices is extracting the common dependencies of a package across its version and then installing them before we even decide which of its versions we want to install - one of the dependencies might cycle back to a specific version after all.
Decisions about packages are recorded at a certain decision level; if we reach a conflict we backtrack to the previous decision level, mark the decision we made (install X) in the inverse (DO NOT INSTALL X), reset the state of all decisions made at the higher level, and restore any dependencies that are no longer resolved to the work queue.
Comparison to SAT solver design
If you have studied SAT solver design, you’ll find that essentially this is a DPLL solver without pure literal elimination. A pure literal elimination phase would not work for a package manager: first, negative pure literals (packages that everything conflicts with) do not exist, and positive pure literals (packages nothing conflicts with) we do not want to mark for install - we want to install as little as possible (well, subject to policy).
As part of the solving phase, we also construct an implication graph, albeit a partial one: The first package installing another package is marked as the reason (A -> B), the same thing for conflicts (not A -> not B).
Once we have added the ability to have multiple parents in the implication graph, it stands to reason that we can also implement the much more advanced method of conflict-driven clause learning; where we do not jump back to the previous decision level but exactly to the decision level that caused the conflict. This would massively speed up backtracking.
What changes can you expect in behavior?
The most striking difference to the classic APT solver is that solver3 always keeps manually installed packages around; it never offers to remove them. We will relax that in a future iteration so that it can replace packages with new ones, that is, if your package is no longer available in the repository (obsolete), but there is one that Conflicts+Replaces+Provides it, solver3 will be allowed to install that and remove the other.
Implementing that policy is rather trivial: We just need to queue obsolete | replacement as a dependency to solve, rather than mark the obsolete package for install.
Another critical difference is the change in the autoremove behavior: The new solver currently only knows the strongest dependency chain to each package, and hence it will not keep around any packages that are only reachable via weaker chains. A common example is when gcc-<version> packages accumulate on your system over the years. They all have Provides: c-compiler and the libtool Depends: gcc | c-compiler is enough to keep them around.
New features
The new option --no-strict-pinning instructs the solver to consider all versions of a package and not just the candidate version. For example, you could use apt install foo=2.0 --no-strict-pinning to install version 2.0 of foo and upgrade - or downgrade - packages as needed to satisfy foo=2.0 dependencies. This mostly comes in handy in use cases involving Debian experimental or the Ubuntu proposed pockets, where you want to install a package from there, but try to satisfy from the normal release as much as possible.
The implication graph building allows us to implement an apt why command, that while not as nicely detailed as aptitude, at least tells you the exact reason why a package is installed. It will only show the strongest dependency chain at first of course, since that is what we record.
What is left to do?
At the moment, error information is not stored across backtracking in any way, but we generally will want to show you the first conflict we reach as it is the most natural one; or all conflicts. Currently you get the last conflict, which may not be particularly useful.
Likewise, errors currently are just rendered as implication graphs of the form [not] A -> [not] B -> ..., and we need to put in some work to present those nicely.
The test suite is not passing yet, I haven’t really started working on it. A challenge is that most packages in the test suite are manually installed as they are mocked, and the solver now doesn’t remove those.
We plan to implement the replacement logic such that foo can be replaced by foo2 Conflicts/Replaces/Provides foo without needing to be automatically installed.
Improving the backtracking to be non-chronological conflict-driven clause learning would vastly enhance our backtracking performance. Not that it seems to be an issue right now in my limited testing (mostly noble 64-bit-time_t upgrades). A lot of that complexity you have normally is not there because the manually installed packages and resulting unit propagation (single-solution Depends/Reverse-Depends for Conflicts) already ground us fairly far in what changes we can actually make.
Once all the stuff has landed, we need to start rolling it out and gather feedback. On Ubuntu I’d like automated feedback on regressions (running solver3 in parallel, checking if result is worse and then submitting an error to the error tracker), on Debian this could just be a role email address to send solver dumps to.
At the same time, we can also incrementally start rolling this out. Like phased updates in Ubuntu, we can also roll out the new solver as the default to 10%, 20%, 50% of users before going to the full 100%. This will allow us to capture regressions early and fix them.