One of the most optimized algorithms in any standard library is sorting. It is used everywhere, so it must be fast. Thousands upon thousands of developer hours have been sunk into inventing new algorithms and making sort implementations faster. Pystd has a different design philosophy, where fast compilation times and readability of the implementation have higher priority than absolute performance. Performance still very much matters, and the sort has to be fast, but not at the cost of a 10x increase in compilation time.
This leads to the natural question of how much slower such an implementation is compared to a production-quality one. Could it even be faster? (Spoiler: no.) The only way to find out is to run performance benchmarks on actual code.
To keep things simple there is only one test set: sorting 10'000'000 consecutive 64-bit integers that have been shuffled into a random order, which is the same for all algorithms. This is not an exhaustive test by any means, but you have to start somewhere. All tests were built with GCC 15.2 at -O2. The Pystd code was not thoroughly hand optimized; I only fixed (some of the) obvious hotspots.
Stable sort

Pystd uses mergesort for stable sorting. The way the C++ standard specifies stable sort means that most implementations probably use it as well; I did not dive into their code to find out. Pystd's merge sort implementation consists of ~220 lines of code. It can be read on this page.
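For reference, the core technique is simple. Here is a minimal top-down stable merge sort sketch (in Rust for brevity – Pystd itself is C++, and its actual implementation differs, e.g. by reusing a single scratch buffer instead of allocating in every merge):

```rust
// Split, recurse, merge into a scratch buffer, copy back. The sort is
// stable because the merge takes from the left run on ties.
fn merge_sort(v: &mut [u64]) {
    if v.len() < 2 {
        return;
    }
    let mid = v.len() / 2;
    merge_sort(&mut v[..mid]);
    merge_sort(&mut v[mid..]);

    let mut tmp = Vec::with_capacity(v.len());
    {
        let (a, b) = v.split_at(mid);
        let (mut i, mut j) = (0, 0);
        while i < a.len() && j < b.len() {
            if a[i] <= b[j] {
                tmp.push(a[i]); // ties favor the left run: stability
                i += 1;
            } else {
                tmp.push(b[j]);
                j += 1;
            }
        }
        tmp.extend_from_slice(&a[i..]);
        tmp.extend_from_slice(&b[j..]);
    }
    v.copy_from_slice(&tmp);
}
```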
libstdc++ can do the sort in 0.90 seconds whereas Pystd takes 0.94 seconds. Getting to within 5% with such a simple implementation is actually quite astonishing, even with all the usual caveats: it might completely fall over with a different input data distribution, and so on.
Regular sort

Both libstdc++ and Pystd use introsort. Pystd's implementation is ~150 lines of code, but it also relies on heapsort, which adds a further ~100 lines. Code for introsort is here, and heapsort is here.
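The overall shape of an introsort is roughly the following: quicksort with a recursion depth budget of about 2·log2(n), falling back to heapsort when the budget runs out (which caps the worst case at O(n log n)), with insertion sort finishing tiny slices. A Rust sketch of the general scheme, not Pystd's actual code:

```rust
fn introsort(v: &mut [u64]) {
    let depth_limit = 2 * (usize::BITS - v.len().leading_zeros()) as usize;
    introsort_rec(v, depth_limit);
}

fn introsort_rec(v: &mut [u64], depth: usize) {
    if v.len() <= 16 {
        insertion_sort(v);
    } else if depth == 0 {
        heapsort(v); // O(n log n) even on adversarial input
    } else {
        let p = partition(v);
        let (lo, hi) = v.split_at_mut(p);
        introsort_rec(lo, depth - 1);
        introsort_rec(&mut hi[1..], depth - 1); // hi[0] is the pivot
    }
}

// Lomuto partition with a median-of-three pivot; returns the pivot's
// final index.
fn partition(v: &mut [u64]) -> usize {
    let (mid, last) = (v.len() / 2, v.len() - 1);
    if v[mid] < v[0] { v.swap(mid, 0); }
    if v[last] < v[0] { v.swap(last, 0); }
    if v[last] < v[mid] { v.swap(last, mid); }
    v.swap(mid, last); // stash the median at the end as the pivot
    let pivot = v[last];
    let mut store = 0;
    for i in 0..last {
        if v[i] < pivot {
            v.swap(i, store);
            store += 1;
        }
    }
    v.swap(store, last);
    store
}

fn insertion_sort(v: &mut [u64]) {
    for i in 1..v.len() {
        let mut j = i;
        while j > 0 && v[j - 1] > v[j] {
            v.swap(j - 1, j);
            j -= 1;
        }
    }
}

// Heapsort: build a max-heap, then repeatedly move the largest element
// to the end of the shrinking unsorted prefix.
fn heapsort(v: &mut [u64]) {
    fn sift_down(v: &mut [u64], mut root: usize) {
        loop {
            let mut child = 2 * root + 1;
            if child >= v.len() { break; }
            if child + 1 < v.len() && v[child + 1] > v[child] { child += 1; }
            if v[root] >= v[child] { break; }
            v.swap(root, child);
            root = child;
        }
    }
    for i in (0..v.len() / 2).rev() { sift_down(v, i); }
    for end in (1..v.len()).rev() {
        v.swap(0, end);
        sift_down(&mut v[..end], 0);
    }
}
```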
libstdc++ gets the sort done in 0.76 seconds whereas Pystd takes 0.82 seconds, making it approximately 8% slower. It's not great, but getting within 10% with a few evenings' work is still a pretty good result. Especially since, and I'm speculating here, std::sort has seen a lot more optimization work than std::stable_sort because it is used more.
For heavy-duty number crunching this would be way too slow, but for moderate data set sizes the performance difference might be insignificant for many use cases.
Note that all of these should be faster than libc's qsort (note: I did not measure this), because qsort requires an indirect function call on every comparison, i.e. the comparison function can not be inlined.
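The difference between the two interfaces is easy to show in code. Here is a sketch of the two calling conventions in Rust (libc's qsort itself is C, but its comparator is an opaque function pointer in exactly the same way):

```rust
use std::cmp::Ordering;

// qsort-style: the comparator is an opaque function pointer, so every
// comparison is an indirect call that the optimizer generally cannot
// inline into the sort loop.
fn sort_qsort_style(v: &mut [u64], cmp: fn(&u64, &u64) -> Ordering) {
    v.sort_by(cmp);
}

// C++/Rust-style: the comparator is a distinct type per call site, so
// the sort is monomorphized and the comparison is inlined.
fn sort_inlined<F: FnMut(&u64, &u64) -> Ordering>(v: &mut [u64], cmp: F) {
    v.sort_by(cmp);
}

fn main() {
    let mut a = vec![3u64, 1, 2];
    let mut b = a.clone();
    sort_qsort_style(&mut a, |x, y| x.cmp(y)); // indirect call per comparison
    sort_inlined(&mut b, |x, y| x.cmp(y));     // comparison inlined
    assert_eq!(a, b);
}
```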
Where does the time go?

Valgrind will tell you that quite easily.
The resulting profile shows quite clearly why big O notation can be misleading. Both quicksort (the inner loop of introsort) and heapsort have "the same" average time complexity, but every call to heapsort takes approximately 4.5 times as long.
Two years have passed since I last shared my Friday app icon sketches, but the sketching itself hasn't stopped.
For me, it's the best way to figure out the right metaphors before we move to final pixels. These sketches are just one part of the GNOME Design Team's wider effort to keep our icons consistent and meaningful—it is an endeavor that’s been going on for years.
If you design a GNOME app following the GNOME Design Guidelines, feel free to request an icon to be made for you. If you are serious and apply for inclusion in GNOME Circle, you are way more likely to get a designer's attention.
It’s clear LLMs are one of the biggest changes in technology ever. The rate of progress is astounding: recently, due to a configuration mistake, I accidentally used Claude Sonnet 3.5 (released ~2 years ago) instead of Opus 4.6 for a task, looked at the output, and thought “what is this garbage?”
But daily now: Opus 4.6 is able to generate reasonable PoC-level Rust code for complex tasks for me. It’s not perfect – it’s a combination of exhausting and exhilarating to find the 10% absolutely bonkers/broken code that still makes it past subagents.
So yes I use LLMs every day, but I will be clear: if I could push a button to “un-invent” them I absolutely would because I think the long term issues in larger society (not being able to trust any media, and many of the things from Dario’s recent blog etc.) will outweigh the benefits.
But since we can’t un-invent them: here’s my opinion on how they should be used. As a baseline, I agree with a lot from this doc from Oxide about LLMs. What I want to talk about is especially around some of the norms/tools that I see as important for LLM use, following principles similar to those.
On framing: there’s “core” software vs “bespoke”. An entirely new capability, of course, is for e.g. a nontechnical restaurant owner to use an LLM to generate (“vibe code”) a website (hopefully excepting online ordering and payments!). I’m not overly concerned about this.
Whereas “core” software is what organizations/businesses provide and maintain for others. I work for a company (Red Hat) that produces a lot of this. I am sure no one would actually want to run an operating system, cluster filesystem, web browser, monitoring system etc. that was primarily “vibe coded”.
And while I respect people and groups that are trying to entirely ban LLM use, I don’t think that’s viable for at least my space.
Hence the subject of this blog is my perspective on how LLMs should be used for “core” software: not vibe coding, but using LLMs responsibly and intelligently – and always under human control and review.
Agents should amplify and be controlled by humans

I think most of the industry would agree that we can’t give responsibility to LLMs. That means they must be overseen by humans. And if they’re overseen by a human, then I think they should be amplifying what that human thinks/does as a baseline – intersected with the constraints of the task, of course.
On “amplification”: everyone using an LLM to generate content should inject their own system prompt (e.g. AGENTS.md) or equivalent. Here’s mine – notice I turn off all the emoji etc. and try hard to tone down bulleted lists, because that’s not my style. This is a truly baseline thing to do.
Now most LLM generated content targeted for core software is still going to need review, but just ensuring that the baseline matches what the human does helps ensure alignment.
Pull request reviews

Let’s focus on a very classic problem: pull request reviews. Many projects have wired up a flow such that when a PR comes in, it gets reviewed by a model automatically. Many projects and tools pitch this. We use one on some of my projects.
But I want to get away from this because in my experience these reviews are a combination of:
In practice, we just want the first, of course.
How I think it should work:
I wrote this agent skill to try to make this work well, and if you search you can see it in action in a few places, though I haven’t truly tried to scale this up.
I think the above matches the vision of LLMs amplifying humans.
Code generation

There’s no doubt that LLMs can be amazing code generators, and I use them every day for that. But for any “core” software I work on, I absolutely review all of the output – not just superficially, and changes to core algorithms very closely.
At least in my experience, the reality is that there’s still that percentage of the time when the agent decides to reimplement base64 encoding for no reason, or disables the tests claiming “the environment didn’t support it”, etc.
And to me it’s still a baseline for “core” software to require review by another human to merge (per above!), with their own customized LLM assisting them (ideally a different model, etc.).
FOSS vs closed

Of course, my position here is biased a bit by working on FOSS – I still very much believe in it, and working in a FOSS context can be quite different from working in a “closed environment”, where a company/organization may reasonably want to (and be able to) apply uniform rules across a codebase.
While LLMs certainly allow organizations to create their own Linux kernel filesystems, bespoke Kubernetes forks, virtual machine runtimes or whatever – it’s not clear to me that doing so is a good idea for most of them. I think shared (FOSS) infrastructure that is productized by various companies, provided as a service, and maintained by human experts in the problem domain still makes sense. And how we develop that matters a lot.
In Chapter 1 I gave the context for this project and in Chapter 2 I showed the bare minimum: an ELF that Open Firmware loads, a firmware service call, and an infinite loop.
That was July 2024. Since then, the project has gone from that infinite loop to a bootloader that actually boots Linux kernels. This post covers the journey.
The filesystem problem

The Boot Loader Specification expects BLS snippets in a FAT filesystem under loaders/entries/. So the bootloader needs to parse partition tables, mount FAT, traverse directories, and read files. All #![no_std], all big-endian PowerPC.
I tried writing my own minimal FAT32 implementation, then integrating simple-fatfs and fatfs. None worked well in a freestanding big-endian environment.
Hadris

The breakthrough was hadris, a no_std Rust crate supporting FAT12/16/32 and ISO9660. It needed some work to get going on PowerPC, though. I submitted fixes upstream for:
All patches were merged upstream.
Disk I/O

Hadris expects Read + Seek traits. I wrote a PROMDisk adapter that forwards to OF’s read and seek client calls, and a Partition wrapper that restricts I/O to a byte range. The filesystem code has no idea it’s talking to Open Firmware.
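The wrapper idea looks roughly like this – a sketch using std::io traits for illustration (the real code is no_std and implements the crate's own traits; the field names here are mine):

```rust
use std::io::{self, Read, Seek, SeekFrom};

// Clamps all reads and seeks on an inner device to the byte range
// [start, start + len). The inner device would be the PROMDisk adapter.
struct Partition<D> {
    inner: D,
    start: u64, // first byte of the partition on the device
    len: u64,   // partition size in bytes
    pos: u64,   // current offset, relative to the partition start
}

impl<D: Read + Seek> Read for Partition<D> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let remaining = self.len.saturating_sub(self.pos);
        let want = buf.len().min(remaining as usize);
        if want == 0 {
            return Ok(0); // at or past the end of the partition
        }
        self.inner.seek(SeekFrom::Start(self.start + self.pos))?;
        let n = self.inner.read(&mut buf[..want])?;
        self.pos += n as u64;
        Ok(n)
    }
}

impl<D: Read + Seek> Seek for Partition<D> {
    fn seek(&mut self, from: SeekFrom) -> io::Result<u64> {
        let new = match from {
            SeekFrom::Start(off) => off as i64,
            SeekFrom::Current(delta) => self.pos as i64 + delta,
            SeekFrom::End(delta) => self.len as i64 + delta,
        };
        if new < 0 {
            return Err(io::Error::new(io::ErrorKind::InvalidInput, "seek before start"));
        }
        self.pos = new as u64;
        Ok(self.pos)
    }
}
```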
Partition tables: GPT, MBR, and CHRP

PowerVM with modern disks uses GPT (via the gpt-parser crate): a PReP partition for the bootloader and an ESP for kernels and BLS entries.
Installation media uses MBR. I wrote a small mbr-parser subcrate using explicit-endian types so little-endian LBA fields decode correctly on big-endian hosts. It recognizes FAT32, FAT16, EFI ESP, and CHRP (type 0x96) partitions.
The CHRP type is what CD/DVD boot uses on PowerPC. For ISO9660 I integrated hadris-iso with the same Read + Seek pattern.
Boot strategy: try GPT first, fall back to MBR, then try raw ISO9660 on the whole device (CD-ROM). This covers disk, USB, and optical media.
The firmware allocator wall

This cost me a lot of time.
Open Firmware provides claim and release for memory allocation. My initial approach was to implement Rust’s GlobalAlloc by calling claim for every allocation. This worked fine until I started doing real work: parsing partitions, mounting filesystems, building vectors, sorting strings. The allocation count went through the roof and the firmware started crashing.
It turns out SLOF has a limited number of tracked allocations. Once you exhaust that internal table, claim either fails or silently corrupts state. There is no documented limit; you discover it when things break.
The fix was to claim a single large region at startup (1/4 of physical RAM, clamped to 16-512 MB) and implement a free-list allocator on top of it with block splitting and coalescing. Getting this right was painful: the allocator handles arbitrary alignment, coalesces adjacent free blocks, and does all this without itself allocating. Early versions had coalescing bugs that caused crashes which were extremely hard to debug – no debugger, no backtrace, just writing strings to the OF console on a 32-bit big-endian target.
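The bookkeeping behind such an allocator can be sketched like this (hypothetical names, not the bootloader's actual code):

```rust
use core::ptr::NonNull;

// Each free block starts with an in-band header; the list is kept
// sorted by address so that neighbouring free blocks can be merged.
#[repr(C)]
struct FreeBlock {
    size: usize,                      // block size in bytes, header included
    next: Option<NonNull<FreeBlock>>, // next free block, in address order
}

impl FreeBlock {
    // Two free blocks can be coalesced when they are contiguous.
    fn adjoins(&self, other: &FreeBlock) -> bool {
        self as *const _ as usize + self.size == other as *const _ as usize
    }
}
```

Allocation walks the list first-fit, splitting a block when the remainder is large enough to hold a header; deallocation reinserts the block in address order and merges it with either neighbour. Crucially, all the metadata lives inside the managed region itself, so the allocator never has to allocate for its own bookkeeping.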
And the kernel boots!

March 7, 2026. The commit message says it all: “And the kernel boots!”
The sequence:
BLS discovery: walk loaders/entries/*.conf, parse into BLSEntry structs, filter by architecture (ppc64le), sort by version using rpmvercmp.
ELF loading: parse the kernel ELF, iterate PT_LOAD segments, claim a contiguous region, copy segments to their virtual address offsets, zero BSS.
Initrd: claim memory, load the initramfs.
Bootargs: set /chosen/bootargs via setprop.
Jump: inline assembly trampoline – r3=initrd address, r4=initrd size, r5=OF client interface, branch to kernel:
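A sketch of what such a trampoline can look like (inline assembly on PowerPC is nightly-only in Rust; the choice of r12 for the entry point and the variable names are illustrative – this is not the exact code from the repository):

```rust
// r3 = initrd address, r4 = initrd size, r5 = OF client interface
// entry point; the kernel entry point goes through the count register.
unsafe {
    core::arch::asm!(
        "mtctr 12", // kernel entry point (held in r12) into the count register
        "bctr",     // branch to CTR – the bootloader never regains control
        in("r12") kernel_entry,
        in("r3") initrd_addr,
        in("r4") initrd_size,
        in("r5") of_entry,
        options(noreturn),
    );
}
```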
One gotcha: do NOT close stdout/stdin before jumping. On some firmware, closing them corrupts /chosen and the kernel hits a machine check. We also skip calling exit or release – the kernel gets its memory map from the device tree and avoids claimed regions naturally.
The boot menu

I implemented a GRUB-style interactive menu:
This runs on the OF console with ANSI escape sequences. Terminal size comes from OF’s Forth interpret service (#columns / #lines), with serial forced to 80×24 because SLOF reports nonsensical values.
Secure boot (initial, untested)

IBM POWER has its own secure boot: the ibm,secure-boot device tree property (0=disabled, 1=audit, 2=enforce, 3=enforce+OS). The Linux kernel uses an appended signature format – PKCS#7 signed data appended to the kernel file, the same format GRUB2 uses on IEEE 1275.
I wrote an appended-sig crate that parses the appended signature layout, extracts an RSA key from a DER X.509 certificate (compiled in via include_bytes!), and verifies the signature (SHA-256/SHA-512) using the RustCrypto crates, all no_std.
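For reference, the appended-signature trailer is straightforward to locate: the file ends with a magic string, preceded by a fixed-size descriptor whose last field is the length of the PKCS#7 blob. A sketch following the Linux module-signature layout (the function name is mine, not the crate's actual API):

```rust
// Layout: [payload][PKCS#7 blob][12-byte descriptor][28-byte magic]
const MAGIC: &[u8; 28] = b"~Module signature appended~\n";

fn split_signed(image: &[u8]) -> Option<(&[u8], &[u8])> {
    let body = image.strip_suffix(MAGIC)?;
    let (rest, desc) = body.split_at(body.len().checked_sub(12)?);
    // sig_len is a big-endian u32 in the last 4 bytes of the descriptor
    let sig_len = u32::from_be_bytes(desc[8..12].try_into().ok()?) as usize;
    let (payload, pkcs7) = rest.split_at(rest.len().checked_sub(sig_len)?);
    Some((payload, pkcs7))
}
```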
The unit tests pass, including an end-to-end sign-and-verify test. But I have not tested this on real firmware yet. It needs a PowerVM LPAR with secure boot enforced and properly signed kernels, which QEMU/SLOF cannot emulate. High on my list.
The ieee1275-rs crate

The crate has grown well beyond Chapter 2. It now provides claim/release, the custom heap allocator, device tree access (finddevice, getprop, instance-to-package), block I/O, console I/O with read_stdin, a Forth interpret interface, milliseconds for timing, and a GlobalAlloc implementation so Vec and String just work.
Published on crates.io; the source lives at github.com/rust-osdev/ieee1275-rs.
What’s next

I would like to test the secure boot feature in an end-to-end setup, but I have not gotten around to requesting access to a PowerVM LPAR. Beyond that I want to refine the menu. Another idea would be to support the equivalent of the Unified Kernel Image using ELF. Who knows – if anybody finds this interesting, let me know!
The source is at the powerpc-bootloader repository. Contributions welcome, especially from anyone with POWER hardware access.
GNOME 50 just got released! To celebrate, I thought it’d be fun to look into the background (ding) of the newest additions to the collection.
While the general aesthetic remains consistent, you might be surprised to see the default shifting from the long-standing triangular theme to hexagons.
Well, maybe not that surprised if you’ve been following the gnome-backgrounds repo closely during the development cycle. We saw a rounded hexagon design surface back in 2024, but it was pulled after being deemed a bit too "flat" despite various lighting and color iterations. We’ve actually seen other hex designs pop up in 2020 and 2022, but they never quite got the ultimate spot as the default. Until now.
Beyond the geometry shift of the default, the Symbolics wallpaper has also received its latest makeover. Truth be told, it hasn’t historically been a fan favorite. I rarely see it in the wild in "show off your desktop" threads. Let's see if this new incarnation does any better.
The glass chip wallpaper has undergone a bit of a makeover as well. I'll also mention a… let's say less original design that caters to the dark theme folks out there. While every wallpaper in GNOME features a light and dark variant, Tubes comes with a dark and darker variant. I hope to see more of those "where did you get that wallpaper?" threads on Reddit in 2026.
Last week, Igalia finally announced Moonforge, a project we’ve been working on for basically all of 2025. It’s been quite the rollercoaster, and the announcement hit various news outlets, so I guess now is as good a time as any to talk a bit about what Moonforge is, its goal, and its constraints.
Of course, as soon as somebody announces a new Linux-based OS, folks immediately think it’s a new general-purpose Linux distribution, as that’s the square-shaped hole where everything OS-related ends up. So, first things first, let’s get a couple of things out of the way about Moonforge:
Moonforge is a set of feature-based, well-maintained layers for Yocto that allow you to assemble your own OS for embedded devices or single-application environments, with specific emphasis on immutable, read-only root file system OS images that are easy to deploy and update through tight integration with CI/CD pipelines.
Why?

Creating a new OS image out of whole cloth is not as hard as it used to be. On the desktop (and on devices where you control the hardware) you can reasonably get away with using existing Linux distributions, filing off the serial numbers, and removing any extant packaging mechanism; or you can rely on the containerised tech stack and boot into it.
When it comes to embedded platforms, on the other hand, you’re still very much working on bespoke, artisanal, locally sourced, organic operating systems. A good number of device manufacturers coalesced their BSPs around the Yocto Project and OpenEmbedded, which simplifies adaptations, but you’re still supposed to build the thing mostly as a one off.
While Yocto has improved leaps and bounds over the past 15 years, putting together an OS image, especially when it comes to bundling features while keeping the overall size of the base image down, is still an exercise in artisanal knowledge.
A little detour: Poky

Twenty years ago, I moved to London to work for a little consultancy called OpenedHand. One of the projects OpenedHand was working on was taking OpenEmbedded and providing a good set of defaults and layers, in order to create a “reference distribution” that would help people get started with their own projects. That reference was called Poky.
We had a beaver mascot before it was cool
These days, Poky exists as part of the Yocto Project, and it’s still the reference distribution for it; but since it’s part of Yocto, it has to abide by the basic constraint of the project: you still need to set up your OS using shell scripts and by copy-pasting layers and recipes into your own repository. The Yocto Project is working on a setup tool to simplify those steps, but there are alternatives…
Another little detour: kas

One alternative is kas, a tool that generates the local.conf configuration file used by bitbake from various YAML fragments exported by each layer you’re interested in, as well as from additional fragments that can be used to set up customised environments.
Another feature of kas is that it can spin up the build environment inside a container, which enormously simplifies setup: it avoids inadvertently contaminating the build, and it makes it very easy to run the build on CI/CD pipelines that already rely on containers.
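For the unfamiliar, a minimal kas project file looks roughly like this (the repository, branch, and targets are purely illustrative, not Moonforge's actual configuration):

```yaml
header:
  version: 14

machine: qemuarm64
distro: poky
target: core-image-minimal

repos:
  poky:
    url: https://git.yoctoproject.org/poky
    branch: scarthgap
    layers:
      meta:
      meta-poky:
```

Running kas build against such a file checks out the repositories, assembles local.conf and bblayers.conf, and invokes bitbake – no manual environment scripts needed.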
What Moonforge provides

Moonforge lets you create a new OS in minutes, selecting a series of features you care about from various available layers.
Each layer provides a single feature, like:
Every layer comes with its own kas fragment, which describes what the layer needs to add to the project configuration in order to function.
Since every layer is isolated, we can reason about their dependencies and interactions, and we can combine them into a final, custom product.
Through various tools, including kas, we can set up a Moonforge project that generates and validates OS images as the result of a CI/CD pipeline on platforms like GitLab, GitHub, and Bitbucket. OS updates are also generated as part of that pipeline, along with comprehensive CVE reports and a Software Bill of Materials (SBOM) produced through custom Yocto recipes.
More importantly, Moonforge can act both as a reference for hardware enablement and BSP support, and as a reference for building applications that need to interact with specific features coming from a board.
While this is the beginning of the project, it’s already fairly usable; we are planning a lot more in this space, so keep an eye on the repository.
Trying Moonforge out

If you want to check out Moonforge, I will point you in the direction of its tutorials, as well as the meta-derivative repository, which should give you a good overview of how Moonforge works and how you can use it.
Last week marked the end of Malika's internship in Papers, working on signatures, which I had the pleasure to mentor. After a post about the first phase of Outreachy, here is the sequel of the story.
Nowadays, people expect to be able to fill in and sign PDF documents. We had previously worked on features to insert text into documents, and signature support needed to be improved.
There is actually some ambiguity when speaking about signatures in PDFs: there are cryptographic signatures that guarantee that a certificate owner approved a document (now denoted by "digital" signatures) and there are also signatures that are just drawings on the document. These latter ones of course do not guarantee any authenticity but are more or less accepted in various situations, depending on the country. Moreover, getting a proper certificate to digitally sign documents may be complicated or costly (with the notable exception of a few countries providing them to their residents such as Spain).
Papers lacked any support for this second category (which I will call "visual" signatures from now on). On the other hand, digital signing was implemented a few releases ago, but it heavily relies on the Firefox certificate database1, and in particular there is no way to manage personal certificates within Papers.
During her three-month internship, Malika implemented a new visual signature management dialog and the corresponding UI to insert signatures, including nice details such as image processing to import signature pictures properly. She also contributed to the poppler PDF rendering library to compress signature data.
Then she looked into digital signatures and improved the insertion dialog, letting users choose visual signatures for them as well. If all goes well, all of this should be merged before Papers 51!
Malika also implemented a prototype that allows users to import certificates and to deal with multiple NSS databases. While this needs more testing and code review2, it should significantly simplify digital signing.
I would like to thank everyone who made this internship possible, and especially everyone who took the time to do calls and advise us during the internship. And of course, thanks to Malika for all the work she put into her internship!
1 Or on NSS command-line tools.
2 We don't have enough NSS experts, so help is very welcome.
Another slow cycle, same as last time. Still, a few new things to showcase.
Sidebars

The most visible addition is the new sidebar widget. This is a bit confusing, because we already had widgets for creating windows with sidebars - AdwNavigationSplitView and AdwOverlaySplitView - but nothing to actually put into the sidebar pane. The usual recommendation has been to build your own sidebar using GtkListBox or GtkListView, combined with the .navigation-sidebar style class.
This isn't too difficult, but the result is zero consistency between different apps, not unlike what we had with GtkNotebook-based tabs in the past:
It's even worse on mobile. In the best case it will just be a strangely styled flat list. Sometimes it will also have selection, and depending on how it's implemented, it may be impossible to activate the selected row, as in the libadwaita demo.
So we have a pre-built one now. It doesn't aim to support every single use case (sidebars can get very complex, see e.g. GNOME Builder), but just to be good enough for the basic situations.
How basic is basic? Well, it has selection, sections (with or without titles), tooltips, context menus, a drop target, suffix widgets at the end of each item's row, and auto-activation when hovered during drag-and-drop.
A more advanced feature is the built-in search filter, enabled by providing a GtkFilter and a placeholder page.
And that's about it. There will likely be more features in future, such as collapsible sections and a drag source on items rather than just a drop target, but this should already be enough for quite a lot of apps. Not everything, but that's not the goal here.
Internally, it's using GtkListBox. This means that it doesn't scale to thousands of items the way GtkListView would, but we can have much tighter API and mobile integration.
Now, let's talk about mobile. Ideally sidebars on mobile wouldn't really be sidebars at all. This pattern inherently requires a second pane, and falls apart otherwise. AdwNavigationSplitView already presents the sidebar pane as a regular page, so let's go further and turn sidebars into boxed lists. We're already using GtkListBox, after all.
So - AdwSidebar has the mode property. When set to ADW_SIDEBAR_MODE_PAGE, it becomes a page of boxed lists, indistinguishable from any others. Item selection is hidden, but still tracked internally: it can still be changed programmatically, and changes when an item is activated. Once the mode is set back to ADW_SIDEBAR_MODE_SIDEBAR, the selection reappears.
Internally it's nothing special, as it just presents the same data using different widgets.
The adaptive layouts page has a detailed example of how to create UIs like this, as well as a newly added section about overlay sidebars, which don't change as drastically.
View switcher sidebar

Once we have a sidebar, a rather obvious thing to do is to provide a GtkStackSidebar replacement. So AdwViewSwitcherSidebar is exactly that.
It works with AdwViewStack rather than GtkStack, and has all the same features as the existing view switchers, plus an extra one: sections.
To support that, AdwViewStackPage has new API for defining sections - the :starts-section and :section-title properties - while the AdwViewStack:pages model is now a section model.
Like regular sidebars, it supports the boxed list mode and search filtering.
Unlike other view switchers or GtkStackSidebar, it also exposes AdwSidebar's item activation signal. This is required to make it work on mobile.
Demo improvements

The lack of a sidebar widget was the main blocker for improving the libadwaita demo in the past. Now that it's solved, the demo is, at last, fully adaptive. The sidebar has been reorganized into sections, and now has icons and search.
This also unblocks other potential improvements, such as having a more scalable preferences dialog.
Reduced motion

While there isn't any new API, most widgets with animations have been updated to respect the new reduced-motion preference - mostly by replacing sliding/scaling animations with crossfades, or otherwise toning down animations where that's not possible:
AdwOverlaySplitView is unaffected for now. Same for toasts, those are likely small enough to not cause motion sickness. If it turns out to be a problem, it can be changed later.
I also didn't update any of the deprecated widgets, like AdwLeaflet. Applications still using those should switch to the modern alternatives.
The prefers-reduced-motion media feature is available for use from app CSS as well, following the GTK addition.
Other changes

AdwAboutDialog rows that contain links have a context menu now. Link rows may become a public widget in future if there's interest.
GTK_DEBUG=builder diagnostics are now supported for all libadwaita widgets. This can be used to find places where <child> tags are used in UI when equivalent properties exist.
Following GTK, all GListModel implementations now come with :item-type and :n-items properties, to make it easier to use them from expressions.
The AdwTabView:pages model implements sections now: one for pinned pages and one for everything else.
AdwToggle has a new :description property that can be used to set accessible description for individual toggles separately from tooltips.
Adrien Plazas improved accessibility in a bunch of widgets. The majority of this work has been backported to 1.8.x as well. For example, AdwViewSwitcher and AdwInlineViewSwitcher now read out number badges and the needs-attention status.
AdwNoneAnimationTarget now exists for situations where animations are used as frame-clock-based timers, as an alternative to using AdwCallbackAnimationTarget with an empty callback.
AdwPreferencesPage will refuse to add children of types other than AdwPreferencesGroup, instead of overlaying them over the page and then leaking them after the page is destroyed. This change was backported to 1.8.2 and subsequently reverted in 1.8.3 as it turned out multiple apps were relying on the broken behavior.
Maximiliano made non-nullable string setter functions automatically replace NULL parameters with empty strings, since allowing NULL breaks Rust bindings, while rejecting them means apps using expressions get unexpected criticals - for example, when accessing a non-nullable string property on an object, and that object itself is NULL.
As mentioned in the 1.8 blog post, style-dark.css, style-hc.css and style-hc-dark.css resources are now deprecated and apps using them will get warnings on startup. Apps are encouraged to switch to a single style.css and conditionally load styles using media queries instead.
While not a user-visible change (hopefully!), the internal stylesheet has been refactored to use prefers-contrast media queries for the high contrast styles instead of 2 conditionally loaded variants - further reducing the need for SCSS, even if not entirely replacing it just yet (the main blockers are @extend, as well as nesting and a few mixins, such as the focus ring one).
A big change in the works is a revamp of the icon API. GTK has a new icon format that supports stateful icons with animated transitions, variable stroke weight, and many other capabilities. Currently, libadwaita doesn't make use of this, but it will in future.
In fact, a few smaller changes are already in 1.9: all of the internal icons in libadwaita itself, as well as in the demo and docs, have been updated to use the new format.
Thanks to the GNOME Foundation for their support and thanks to all the contributors who made this release possible.
Because 2026 is such an interesting period of time to live in, I feel I should explicitly say that libadwaita does not contain any AI slop, nor does it allow such contributions, nor do I have any plans to change that. The same goes for all of my other projects, including this website.