GNOME’s GitLab runners use Podman as the container runtime with SELinux in Enforcing mode on Fedora. The GitLab Runner Docker/Podman executor spawns multiple containers per job: a helper container that clones the repository and handles artifacts, and a build container that runs the actual CI script. Both containers need to share a /builds volume — and this is where SELinux’s Multi-Category Security (MCS) becomes a problem.
The MCS problem

An SELinux label has four fields: user:role:type:level. For containers the interesting part is the level, also called the MCS field. A level looks like s0:c123,c456 — s0 is the sensitivity (always s0 in targeted policy), and c123,c456 are the categories. Under the container-selinux scheme, a container process or file carries up to two categories.
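Since the level itself contains a colon, splitting a label naively on ':' gives the wrong fields. A tiny illustrative shell sketch (my own, not from any SELinux tool) that peels the four fields off left to right:

```shell
#!/bin/sh
# Illustrative: split an SELinux label into user, role, type, and level.
# The level is everything after the third colon, since it may itself
# contain a colon (sensitivity:categories).
label="system_u:system_r:container_t:s0:c123,c456"

user=${label%%:*};  rest=${label#*:}
role=${rest%%:*};   rest=${rest#*:}
type=${rest%%:*};   level=${rest#*:}

echo "user=$user role=$role type=$type level=$level"
# -> user=system_u role=system_r type=container_t level=s0:c123,c456
```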
MCS access is based on dominance. A subject’s label dominates an object’s label if the subject’s categories are a superset of (or equal to) the object’s categories:
Subject         Object          Access?   Why
s0:c100,c200    s0:c100,c200    Yes       Exact match
s0:c100,c200    s0:c100         Yes       Subject's categories are a superset
s0:c100,c200    s0:c100,c300    No        Subject lacks c300
s0:c0.c1023     s0:c100,c200    Yes       Full range dominates everything
s0              s0:c100,c200    No        No categories cannot dominate any categories
s0              s0              Yes       Both have no categories

How this applies to the runners:
The range syntax (s0-s0:c0.c1023) is used for processes that need to operate across multiple levels. It means “my low clearance is s0 and my high clearance is s0:c0.c1023.” The process can read objects at any level within that range and create objects at any level within it. This is why Podman needs the full range — it creates containers with different MCS labels and needs to access all of them.
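A quick throwaway sketch (variable names are mine) showing that the low and high clearances are just the text on either side of the dash:

```shell
#!/bin/sh
# Illustrative: split an SELinux level range into low and high clearance.
range="s0-s0:c0.c1023"
low=${range%%-*}     # clearance floor
high=${range#*-}     # clearance ceiling
echo "low=$low high=$high"   # -> low=s0 high=s0:c0.c1023
```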
When Podman starts a container, it picks a random pair of categories (e.g., s0:c512,c768) from within its allowed range and assigns that as the container’s process label. Files created by the container inherit that label. Another container gets a different random pair (e.g., s0:c33,c901). Since c512,c768 and c33,c901 do not match — neither is a superset of the other — SELinux denies cross-container file access. This is the isolation mechanism, and the root cause of the problem with GitLab Runner’s multi-container-per-job architecture.
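The dominance rule reduces to a set-superset check over categories. Here is a toy model in shell (my own illustration; neither Podman nor the kernel works this way internally) that mirrors the rules above, operating on just the category part of a label:

```shell
#!/bin/sh
# Toy model of MCS dominance: the subject's category set must be a
# superset of the object's. "c0.c1023" stands in for the full range.
dominates() {
    subject=$1; object=$2
    [ -z "$object" ] && return 0             # no categories: anyone dominates
    [ "$subject" = "c0.c1023" ] && return 0  # full range dominates everything
    for cat in $(printf '%s' "$object" | tr ',' ' '); do
        case ",$subject," in
            *",$cat,"*) ;;       # subject carries this category
            *) return 1 ;;       # one missing category breaks dominance
        esac
    done
    return 0
}

dominates "c100,c200" "c100"      && echo "c100,c200 dominates c100"
dominates "c100,c200" "c100,c300" || echo "c100,c200 does not dominate c100,c300"
```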
The helper container gets one random MCS pair, writes the cloned repo to /builds labeled with that pair, and the build container gets a different pair. The build container cannot read or write those files. The :Z volume flag (exclusive relabel) relabels the volume to the mounting container’s category, but that only helps the first container — the second one still has a different label.
The test script

I wrote a script that demonstrates the problem with both standard containers (crun) and microVMs (libkrun). The script creates two containers per test — a helper that writes a file to a shared /builds volume, and a build container that tries to read it — simulating the GitLab Runner workflow:
#!/bin/bash
# Description: SELinux MCS Diagnostic (crun vs krun)

if [ "$(getenforce)" != "Enforcing" ]; then
    echo "WARNING: SELinux is not in Enforcing mode. This test requires Enforcing mode."
    exit 1
fi

TEST_BASE="/tmp/gitlab-runner-mcs-test"
CRUN_DIR="$TEST_BASE/crun-builds"
KRUN_DIR="$TEST_BASE/krun-builds"

# Cleanup from previous runs
rm -rf "$TEST_BASE"
mkdir -p "$CRUN_DIR" "$KRUN_DIR"

echo "======================================================="
echo " TEST 1: Standard Container Isolation (crun)"
echo "======================================================="

# 1. CREATE Helper
podman create --name crun-helper -v "$CRUN_DIR:/builds:Z" fedora bash -c "
echo '[crun] -> Helper Process Context (Inside):'
cat /proc/self/attr/current
echo 'crun-data' > /builds/artifact.txt
echo '[crun] -> File Label INSIDE Helper:'
ls -Z /builds/artifact.txt
" > /dev/null

echo "[crun] Starting Helper Container (applying :Z relabel)..."
HELPER_HOST_LABEL_CRUN=$(podman inspect -f '{{.ProcessLabel}}' crun-helper)
echo "[crun] -> HOST METADATA: Podman assigned process label: $HELPER_HOST_LABEL_CRUN"
podman start -a crun-helper

echo ""
echo "[crun] -> File Label ON HOST (Notice the specific MCS category):"
ls -Z "$CRUN_DIR/artifact.txt"

# 2. CREATE Build Container (The Victim)
podman create --name crun-build -v "$CRUN_DIR:/builds" fedora bash -c "
echo '    [Build-Internal] Process Context:'
cat /proc/self/attr/current 2>/dev/null
echo '    [Build-Internal] Executing ls -laZ /builds :'
ls -laZ /builds 2>&1 | sed 's/^/    /'
echo '    [Build-Internal] Executing cat /builds/artifact.txt :'
cat /builds/artifact.txt 2>&1 | sed 's/^/    /'
" > /dev/null

echo ""
echo "[crun] Starting Build Container to inspect shared volume..."
BUILD_HOST_LABEL_CRUN=$(podman inspect -f '{{.ProcessLabel}}' crun-build)
echo "[crun] -> HOST METADATA: Podman assigned process label: $BUILD_HOST_LABEL_CRUN"
echo "    *** COMPARE THE cXXX,cYYY ABOVE TO THE FILE LABEL. THIS MISMATCH CAUSES THE DENIAL ***"
podman start -a crun-build

podman rm -f crun-helper crun-build > /dev/null

echo ""
echo "======================================================="
echo " TEST 2: MicroVM Isolation (libkrun / virtio-fs) FIXED"
echo "======================================================="

# --- Write the execution scripts to the host to avoid parsing errors ---
cat << 'EOF' > "$TEST_BASE/krun_helper.sh"
#!/bin/bash
echo '[krun] -> Helper Process Context (Inside VM):'
cat /proc/self/attr/current 2>/dev/null || echo '    (SELinux disabled/unavailable in guest kernel)'
echo 'krun-data' > /builds/artifact.txt
echo '[krun] -> File Label INSIDE Helper VM (Blindspot):'
ls -laZ /builds/artifact.txt 2>&1 | sed 's/^/    /'
EOF

cat << 'EOF' > "$TEST_BASE/krun_build.sh"
#!/bin/bash
echo '    [Build-Internal] Process Context (Inside VM):'
cat /proc/self/attr/current 2>/dev/null || echo '    (SELinux disabled/unavailable in guest kernel)'
echo '    [Build-Internal] Executing ls -laZ /builds :'
ls -laZ /builds 2>&1 | sed 's/^/    /'
echo '    [Build-Internal] Executing cat /builds/artifact.txt :'
cat /builds/artifact.txt 2>&1 | sed 's/^/    /'
EOF

chmod +x "$TEST_BASE/krun_helper.sh" "$TEST_BASE/krun_build.sh"
# ---------------------------------------------------------------------

# 1. CREATE Helper MicroVM
podman create --name krun-helper --runtime krun --memory=1024m \
    -v "$KRUN_DIR:/builds:Z" \
    -v "$TEST_BASE/krun_helper.sh:/script.sh:ro,Z" \
    fedora /script.sh > /dev/null

echo "[krun] Starting Helper MicroVM (applying :Z relabel)..."
HELPER_HOST_LABEL_KRUN=$(podman inspect -f '{{.ProcessLabel}}' krun-helper)
echo "[krun] -> HOST METADATA: Podman assigned process label: $HELPER_HOST_LABEL_KRUN"
podman start -a krun-helper

echo ""
echo "[krun] -> File Label ON HOST (Podman applied the helper's MCS category via :Z):"
ls -Z "$KRUN_DIR/artifact.txt"

# 2. CREATE Build MicroVM (The Victim)
podman create --name krun-build --runtime krun --memory=1024m \
    -v "$KRUN_DIR:/builds" \
    -v "$TEST_BASE/krun_build.sh:/script.sh:ro,Z" \
    fedora /script.sh > /dev/null

echo ""
echo "[krun] Starting Build MicroVM to inspect shared volume..."
BUILD_HOST_LABEL_KRUN=$(podman inspect -f '{{.ProcessLabel}}' krun-build)
echo "[krun] -> HOST METADATA: Podman assigned process label: $BUILD_HOST_LABEL_KRUN"
echo "    *** THE virtiofsd DAEMON ON THE HOST IS TRAPPED IN THIS CONTEXT ***"
podman start -a krun-build

# Cleanup
podman rm -f krun-helper krun-build > /dev/null

echo ""
echo "======================================================="
echo " Test Complete."

Test 1 (crun) creates a helper container that mounts the builds directory with :Z (exclusive relabel) and writes artifact.txt. Podman assigns it a random MCS label — in this run it was s0:c20,c540. The file on disk inherits that label. Then a second container (the build container) mounts the same path without :Z and gets a different random label (s0:c46,c331). Since c46,c331 does not dominate c20,c540, the build container is denied access to the file.
Test 2 (krun) runs the same scenario but with --runtime krun, which boots each container inside a lightweight microVM via libkrun. The helper VM gets container_kvm_t:s0:c823,c999 and the build VM gets container_kvm_t:s0:c309,c405 — same MCS mismatch, same denial. The type changes from container_t to container_kvm_t, but the MCS mechanism is identical. On the host side, virtiofsd — the daemon that serves the volume into the VM via virtio-fs — runs under the MCS label Podman assigned to the VM. The build VM’s virtiofsd is trapped in s0:c309,c405 and cannot access files labeled s0:c823,c999.
An interesting detail: inside the libkrun VMs, cat /proc/self/attr/current returns just kernel — SELinux is not available in the guest. The VM thinks it has no mandatory access control, but the host-side virtiofsd is still fully subject to MCS enforcement. This is a blindspot worth being aware of.
The output from a run on Fedora with SELinux Enforcing and Podman 5.8.2:
=======================================================
 TEST 1: Standard Container Isolation (crun)
=======================================================
[crun] Starting Helper Container (applying :Z relabel)...
[crun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_t:s0:c20,c540
[crun] -> Helper Process Context (Inside):
system_u:system_r:container_t:s0:c20,c540
[crun] -> File Label INSIDE Helper:
system_u:object_r:container_file_t:s0:c20,c540 /builds/artifact.txt

[crun] -> File Label ON HOST (Notice the specific MCS category):
system_u:object_r:container_file_t:s0:c20,c540 /tmp/gitlab-runner-mcs-test/crun-builds/artifact.txt

[crun] Starting Build Container to inspect shared volume...
[crun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_t:s0:c46,c331
    *** COMPARE THE cXXX,cYYY ABOVE TO THE FILE LABEL. THIS MISMATCH CAUSES THE DENIAL ***
    [Build-Internal] Process Context:
system_u:system_r:container_t:s0:c46,c331
    [Build-Internal] Executing ls -laZ /builds :
    ls: cannot open directory '/builds': Permission denied
    [Build-Internal] Executing cat /builds/artifact.txt :
    cat: /builds/artifact.txt: Permission denied

=======================================================
 TEST 2: MicroVM Isolation (libkrun / virtio-fs) FIXED
=======================================================
[krun] Starting Helper MicroVM (applying :Z relabel)...
[krun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_kvm_t:s0:c823,c999
[krun] -> Helper Process Context (Inside VM):
kernel
[krun] -> File Label INSIDE Helper VM (Blindspot):
    -rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c823,c999 10 May  2  2026 /builds/artifact.txt

[krun] -> File Label ON HOST (Podman applied the helper's MCS category via :Z):
system_u:object_r:container_file_t:s0:c823,c999 /tmp/gitlab-runner-mcs-test/krun-builds/artifact.txt

[krun] Starting Build MicroVM to inspect shared volume...
[krun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_kvm_t:s0:c309,c405
    *** THE virtiofsd DAEMON ON THE HOST IS TRAPPED IN THIS CONTEXT ***
    [Build-Internal] Process Context (Inside VM):
kernel
    [Build-Internal] Executing ls -laZ /builds :
    ls: /builds: Permission denied
    ls: cannot open directory '/builds': Permission denied
    [Build-Internal] Executing cat /builds/artifact.txt :
    cat: /builds/artifact.txt: Permission denied

=======================================================
 Test Complete.

GitLab's official suggestion and why it falls short

GitLab's documentation on configuring SELinux MCS suggests applying the same MCS label to all containers launched by a runner:
[[runners]]
  [runners.docker]
    security_opt = ["label=level:s0:c1000,c1000"]

This works — all containers get the same category pair, so the helper and build containers can share files. But it collapses MCS isolation between all concurrent jobs on that runner. With concurrent = 4, four simultaneous jobs all run as s0:c1000,c1000 and can read each other's /builds content — cloned source code, build artifacts, cached dependencies. On a shared or multi-tenant runner, this is a security regression: it trades MCS isolation for functionality.
For runners with concurrent = 1 or dedicated single-tenant runners this is an acceptable tradeoff, but it does not generalize to shared infrastructure where multiple untrusted projects run side by side.
How GNOME currently handles this

GNOME's runners are managed via an Ansible role that enforces SELinux in Enforcing mode, installs rootless Podman running as a dedicated podman system user with linger enabled, and deploys custom SELinux policy modules. The Podman service runs under SELinuxContext=system_u:system_r:container_runtime_t:s0-s0:c0.c1023 via a systemd override — the full MCS range (s0-s0:c0.c1023) gives the container runtime the ability to spawn containers at any MCS level and relabel volumes accordingly, as explained in the dominance rules above.
Four custom SELinux .te modules are compiled and loaded on every runner host: pydocuum (allows the image cleanup daemon to talk to the Podman socket), podman (grants user_namespace create and /dev/null mapping), flatpak (permits the filesystem mounts flatpak builds need), and gnome_runner (covers binfmt_misc access, device nodes, and other permissions GNOME OS builds require).
For the MCS problem specifically, the runner config.toml — rendered from a Jinja2 template via per-host Ansible variables — sets a fixed MCS label per runner type. Here’s a representative snippet from one of the runner hosts:
[[runners]]
  name = "a15948139c78"
  executor = "docker"
  [runners.docker]
    image = "quay.io/fedora/fedora:latest"
    privileged = false
    security_opt = ["label=level:s0:c100,c100"]
    devices = ["/dev/kvm", "/dev/udmabuf"]
    cap_add = ["SYS_PTRACE", "SYS_CHROOT"]

[[runners]]
  name = "a15948139c78-flatpak"
  executor = "docker"
  [runners.docker]
    image = "quay.io/gnome_infrastructure/gnome-runtime-images:gnome-master"
    privileged = false
    security_opt = ["seccomp:/home/podman/gitlab-runner/flatpak.seccomp.json", "label=level:s0:c200,c200"]
    cap_drop = ["all"]

This is the same approach GitLab's documentation suggests, with one refinement: we use different fixed categories per runner type — c100,c100 for untagged runners and c200,c200 for flatpak runners — so that flatpak builds and regular builds remain MCS-isolated from each other, even though builds of the same type share a category.
This is a pragmatic compromise, not an ideal solution. All concurrent jobs on the same runner type share the same MCS category. With concurrent: 4 on our Hetzner runners, four simultaneous untagged jobs can read each other’s /builds content. For GNOME’s use case — a community CI infrastructure where the runners are shared by GNOME project maintainers — this is an acceptable tradeoff. The alternative, leaving MCS labels random, would break every single job. But it is precisely this tradeoff that motivates exploring per-job VM isolation via microVMs.
Exploring libkrun

libkrun is a lightweight Virtual Machine Monitor (VMM) that integrates with Podman via --runtime krun, running each container inside a microVM with its own lightweight kernel. The appeal is strong: per-container VM isolation would give each job its own kernel and address space, making the MCS cross-container problem irrelevant inside the VM.
I tested libkrun on a Fedora system and hit an immediate blocker: Fatal glibc error: rseq registration failed. The rseq (Restartable Sequences) syscall was introduced in Linux 4.18, and glibc >= 2.35 registers rseq at startup, aborting if registration fails. libkrun uses a custom minimal kernel that does not expose rseq support. Since the guest images — Fedora in our case — ship modern glibc that expects rseq to be available, the process aborts at startup before any user code runs.
The libkrun kernel is compiled into the library itself and cannot be modified or replaced by the user. This is not a configuration issue but a fundamental limitation of the current libkrun release.
Even if the rseq issue were resolved, the MCS challenge would still be there — as the test script demonstrates in Test 2. On the host side, Podman assigns MCS labels to the virtiofsd process that serves the volume into the VM via virtio-fs. Different VMs get different host-side MCS labels, meaning the same :Z relabel / cross-container access denial applies. The mechanism changes from overlay mounts to virtio-fs, but the SELinux enforcement is identical: virtiofsd for the build VM runs at container_kvm_t:s0:c309,c405 and cannot access files labeled s0:c823,c999 by the helper VM’s virtiofsd.
Firecracker and the custom executor path

Firecracker is another microVM technology, the one behind AWS Lambda and Fly.io, that could provide strong per-job isolation. However, there is no native GitLab Runner executor for Firecracker. The only integration path is the Custom Executor, which requires implementing prepare, run, and cleanup scripts from scratch.
The job image is exposed via CUSTOM_ENV_CI_JOB_IMAGE, but everything else is on the operator: pulling the OCI image, extracting a rootfs, booting a Firecracker VM with the right kernel and network configuration, injecting the build script, mounting or copying the cloned repository into the VM, collecting artifacts and cache after the job finishes, and tearing the VM down. GitLab provides an LXD-based example that shows the pattern — prepare creates a container and installs dependencies, run pipes the job script into it, cleanup destroys it — but adapting that to microVMs adds the complexity of VM lifecycle management, kernel and rootfs preparation, networking, and storage. This is a significant engineering effort, essentially rebuilding the entire Docker executor workflow from scratch.
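To make the shape of that work concrete, here is a minimal sketch of the stage dispatch such a driver would implement. Everything in it (the function name, the echo messages, the Firecracker steps they stand in for) is hypothetical and mine, not GitLab code; only the stage names and the CUSTOM_ENV_CI_JOB_IMAGE variable come from the Custom Executor contract:

```shell
#!/bin/sh
# Hypothetical skeleton of a Custom Executor driver. GitLab Runner would
# invoke a script like this per stage; the real Firecracker work
# (rootfs extraction, VM boot, artifact collection) is only named here.
executor_stage() {
    case "$1" in
        prepare) echo "boot microVM for image ${CUSTOM_ENV_CI_JOB_IMAGE:-<none>}" ;;
        run)     echo "pipe job script $2 into the VM (substage: $3)" ;;
        cleanup) echo "tear down the VM and scratch storage" ;;
        *)       return 1 ;;
    esac
}

executor_stage prepare
executor_stage run /tmp/job.sh build_script
executor_stage cleanup
```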
What comes next

MCS is a core SELinux feature. Type enforcement (TE) already confines processes by type — container_t can only access container_file_t, not user_home_t or httpd_sys_content_t — but TE alone cannot distinguish one container_t process from another. MCS adds that layer: by assigning each container a unique category pair, the kernel enforces isolation between processes that share the same type. Container A at s0:c100,c100 and Container B at s0:c200,c200 are both container_t, but MCS ensures they cannot touch each other's files. The conflict with GitLab Runner's multi-container-per-job architecture is that two containers that need to share a volume are given different categories by default. The workarounds we deploy today, including the fixed MCS labels on GNOME's runners, trade that inter-container isolation for functionality.
The most promising direction I’ve found so far is the combination of Cloud Hypervisor and the fleeting-plugin-fleetingd plugin. Cloud Hypervisor is built on Intel’s Rust-VMM crate and is essentially a more capable sibling of Firecracker — it supports CPU and memory hotplugging, VFIO device passthrough, and virtio-fs, features that are often necessary for complex CI tasks like building large binaries or running UI tests and that Firecracker’s minimalist design deliberately omits. The fleeting-plugin-fleetingd is a community plugin for GitLab’s Instance Executor (the modern evolution of the Custom Executor) that automates the full VM lifecycle: downloading cloud images, creating Copy-on-Write disks, launching Cloud Hypervisor VMs with direct kernel boot, provisioning them via cloud-init, and tearing them down after each build. Each job gets a fresh disposable VM, which is exactly the per-job isolation model we need. The plugin already handles networking via TAP interfaces and nftables SNAT, and supports customization of the VM image through cloud-init commands — so preinstalling Podman or other build tools is straightforward.
Beyond that, I’ll also keep evaluating libkrun (promising Red Hat technology), Firecracker with a hand-rolled custom executor, and QEMU’s microvm machine type. The common denominator across all of these — except for the fleeting-plugin-fleetingd path — is that none of them have an existing GitLab Runner integration. Regardless of which microVM technology we settle on, the path forward involves either building a workflow from scratch using the Custom Executor and its prepare, run, cleanup hooks, or leveraging the fleeting plugin ecosystem that GitLab has been building around the Instance and Docker Autoscaler executors.
That should be all for today, stay tuned!
It’s the first day of May, and it’s time for another update on what’s been happening at the GNOME Foundation. It’s been two weeks since my last post, and this update covers highlights of what we’ve been doing since then.
Remembering Seth Nickell

This week we received the very sad news of the death of Seth Nickell. It's been a long time since Seth was active in the GNOME project, so many of our members won't be familiar with him or his work. However, Seth played an important part in GNOME's history, and was a special and unique character.
Jonathan wrote a wonderful post about Seth, with some great stories. Federico migrated the memorial page from the old wiki to the handbook, and added Seth there (work is currently ongoing to develop that page). Seth’s death has also been covered by LWN, which includes dedications from GNOME contributors.
Whether you knew Seth or came to GNOME after his time, I think we can all appreciate the contributions that he made, which live on in the project and wider ecosystem to this day.
GNOME Fellowship

Applications for the first round of the new GNOME Fellowship program closed last week, on 20th April. We had a great response and received some excellent proposals, and now we have the tough job of deciding who is going to receive support through the program.
To that end, the Fellowship Committee met this week to review the proposals and begin the selection process. We have identified a shortlist of candidates, and will be meeting again next week to narrow the selection further.
Since this is the first round of the Fellowship, we are establishing the selection process as we go. Hopefully we’ll get to put this to use again in future Fellowship rounds!
Conferences

Linux App Summit (LAS) will be held in Berlin on 16-17 May – that's in a little over two weeks! The schedule has been finalized and looks great, and this year's LAS is shaping up to be a fantastic event. Please do consider going, and please do register!
Due to high demand, the organizing team have decided to stream the talks from this year, so look out for details about remote participation.
Aside from LAS, preparations for July’s GUADEC conference continue to be worked on. Travel sponsorship is still available if you need assistance in order to attend, so do consider applying for that.
Office transitions ongoing

Work to update many of our backoffice systems and processes has continued at a steady pace over the past fortnight. Many of the big moves are done (new payments system, email accounts, mailing system, accounting procedures, credit card platform), and we are now firmly in the final stages, making sure that our new address is used everywhere, emails are going to the right places, recurring payments are transferred over to new credit cards, and vendors are set up on the new payments system.
The value of this work is already showing, with smoother accounting procedures, more up to date finance reports, and better tracking of incoming queries.
That’s it for this update. Thanks for reading, and take care.
Update on what happened across the GNOME project in the week from April 24 to May 01.
GNOME Circle Apps and Libraries

NewsFlash feed reader ↗
Follow your favorite blogs & news sites.
Jan Lukas announces
Hi TWIG. Newsflash can now swipe between articles. This closes off one of the oldest still standing feature requests. And hopefully makes all the mobile users happy.
Third Party Projects

xjuan reports
Casilda 1.2.4 Released!
I am very happy to announce a new version of Casilda!
A simple Wayland compositor widget for Gtk 4 and GNOME
This release comes with several new features like fractional scaling support, bug fixes and extra polish that make it start to feel like a proper compositor. You can read more about it at https://blogs.gnome.org/xjuan/2026/04/19/casilda-1-2-4-released/
Anton Isaiev says
RustConn (connection manager for SSH, RDP, VNC, SPICE, Telnet, Serial, Kubernetes, MOSH, and Zero Trust protocols)
Versions 0.11.0–0.12.7 bring the three biggest features since the project started, plus a mountain of polish driven by community feedback.
Cloud Sync landed. You can now synchronize connection configurations between devices and team members through any shared directory - Google Drive, Syncthing, Nextcloud, Dropbox, or even a USB stick. Two modes: Group Sync (per-group .rcn files with Master/Import access) and Simple Sync (single-file bidirectional merge). A file watcher auto-imports changes, and the new Cloud Sync settings page shows sync status, synced groups, and available files. CLI got sync status, sync list, sync export, sync import, and sync now commands.
SSH Tunnel Manager is a standalone window for managing headless SSH port-forwarding tunnels without terminal sessions - Local, Remote, and Dynamic forwards with auto-start on launch and auto-reconnect. SSH jump host support was extended to RDP, VNC, and SPICE connections, so you can tunnel graphical sessions through a bastion host. Ctrl+T opens the tunnel manager.
Tab management was completely reworked around AdwTabView. Tab Overview (Ctrl+Shift+O) gives a GNOME Web-style grid of all open tabs. Tab Pinning keeps important tabs at the left edge. A tab switcher in the Command Palette (% prefix) provides fuzzy search across open tabs. Right-click context menu gained Close Others / Left / Right / All / Ungrouped actions.
Other highlights: custom terminal color themes with full 16-color ANSI palette editor; terminal scrollbar; font zoom (Ctrl+Scroll); copy-on-select; SSH Keep-Alive and verbose mode; Hoop.dev as the 11th Zero Trust provider; custom SSH agent socket override (fixes KeePassXC/Bitwarden agent in Flatpak); RDP mouse jiggler; terminal activity/silence monitor; host online check with auto-connect; highlight rules now render with actual colors via Cairo overlay; connection dialog rebuilt with adw:: widgets following GNOME HIG.
Packaging grew significantly. RustConn is now available as Flatpak on Flathub, Snap with strict confinement, AppImage, native .deb and .rpm packages via OBS repositories (Debian 13, Ubuntu 24.04/26.04, Fedora 43/44, openSUSE Tumbleweed/Slowroll/Leap 16.0), plus ARM64 builds. A huge thank you to the community maintainers: the AUR package for Arch Linux, the FreeBSD port, and there is an open request to include RustConn in Debian proper.
Thank you to everyone who reported issues, contributed translations, and tested pre-releases - your feedback shaped every one of these 25 releases. Special thanks to GaaChun for the complete Simplified Chinese translation, and to Phil Dodd and Todor Todorov for the support.
Project: https://github.com/totoshko88/RustConn Flatpak: https://flathub.org/en/apps/io.github.totoshko88.RustConn
Capypara says
Field Monitor 50.0
Field Monitor - the remote desktop viewer focused on accessing VMs - has been updated to version 50.0.
Some highlights:
Field Monitor is available via Flathub: https://flathub.org/apps/de.capypara.FieldMonitor
Christian says
The first public release of Gitte is out!
Gitte is a GTK4/libadwaita git GUI written in Rust, built on Relm4 and git2 (no shelling out to the git binary).
What’s in the initial release:
It’s early days, so expect rough edges. Bug reports and feedback are very welcome.
Get Gitte from Flathub: https://flathub.org/apps/de.wwwtech.gitte
Parabolic ↗Download web video and audio.
Nick reports
Parabolic V2026.4.1 is here with plenty of bug fixes!
Here’s the full changelog:
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
GNOME is once again participating in GSoC. This year, we have contributors working on adding Debug Adapter Protocol support to GJS, incorporating vocab-style puzzles into GNOME Crosswords, creating a native GTK4/Rust rewrite of the Pitivi timeline ruler, porting gitg to GTK4, implementing app uninstallation in the GNOME Shell app grid, and enabling recovery from GPU resets.
As we onboard the contributors, we will be adding them to Planet GNOME, where you can get to know them better and follow their project updates.
GSoC is a great opportunity to welcome new people into our project. Please help them get started and make them feel at home in our community!
Special thanks to our community mentors, who are donating their time and energy to help welcome and guide our new contributors: Philip Chimento, Jonathan Blandford, Yatin, Alex Băluț, Alberto Fanjul, Adrian Vovk, Jonas Ådahl, and Robert Mader.
Yesterday, I wanted to debug a glycin (or Shell) issue on GNOME OS. Turns out, there is currently no documentation that works or includes all necessary steps.
Here is the simplest variant if you don’t develop on GNOME OS and have an internet connection that can download 16 GB in a reasonable amount of time.
First we get a toolbox image to build our code.
$ toolbox create gnomeos-nightly -i quay.io/gnome_infrastructure/gnome-build-meta:gnomeos-devel-nightly

After entering the toolbox with
$ toolbox enter gnomeos-nightlywe can clone and build our project with sysext-utils that are included in our image:
$ meson setup ./build --prefix /usr --libdir="lib/$(gcc -print-multiarch)"
$ sysext-build example ./build

This creates an example.sysext.raw file.
Now, we need a GNOME OS to test our build. We can download the image and install it in Boxes. After logging in, we can just drag and drop the example.sysext.raw into the VM.
Before we can install it, we need to get the development tools for our VM:
$ run0 updatectl enable devel --now

After that, we need to restart the VM.
Finally, we can test our build:
$ run0 sysext-add ~/Downloads/example.sysext.raw

Adding the --persistent flag to this command will make the changes stay active across reboots.
If the changes made it impossible to boot into the VM again, we can start the VM in “Safe mode” from the boot menu. After logging in, we can manually remove the extension:
$ run0 rm /var/lib/extensions/example.raw

Happy hacking!
I heard the news about Seth Nickell’s passing last week, and have been in a bit of a funk ever since.
Seth was brilliant, iconoclastic, fearless.
It’s been a long while since Seth was an active part of the GNOME Community, but his influence on the project can still be seen in its DNA if you know where to look. He arrived on the GNOME scene while still in school with hundreds of ideas on how to improve things. It was an interesting time: We had just launched GNOME 1.5 and were searching for a new path towards GNOME 2.0. The Sun usability study had been published and the community had internalized the need to change directions. Seth rolled up his sleeves and did the work needed to help light that path.
Seth championed radical proposals such as instant apply, button ordering, message dialog fixes, and more. He cleaned up the control-center proposing some of the most visible changes from GNOME 1 to 2. He also did the initial designs for epiphany, pushing for a cleaner browser experience during an era of high browser complexity. He had a vision of desktops as a democratic tool, as easy and natural to use as any other tool in the human experience.
As a designer, Seth was focused on trying to understand who we were designing for and making sure we were solving problems for them. While he wasn't above fixing paddings and layouts, he wanted to get the Big Picture right. He wasn't above rolling up his sleeves and writing code to move things forward, but he was at his best as a champion and visionary, arguing for us to take risks and continue to innovate.
Spending time with Seth was a hoot. He had such a flair for the dramatic. I remember…
Being one of the public faces of GNOME2 was hard, and he moved on. Later, he worked on OLPC and Sugar, and made his mark there. After that, he seemed to travel a lot. We lost touch, though he’d reappear every couple of years to say hi. I hope he found what he was looking for.
Farewell, my friend. The world now has less color in it.
I got myself a Yubikey recently, and I wanted to use it as a nice convenience to:
I've only managed to do the first two, since they both rely on Linux Pluggable Authentication Modules (PAM). Luckily for me, one of PAM's modules supports U2F, the standard Yubikeys rely on.
First I need to install pam-u2f to add U2F support to PAM, and pamu2fcfg to configure my key.
$ sudo rpm-ostree install pam-u2f pamu2fcfg

Since I'm running an immutable OS I need to reboot, and then I can create the correct directory and file to dump a U2F key into it.
$ mkdir -p ~/.config/Yubico
$ pamu2fcfg > ~/.config/Yubico/u2f_keys

Then I make sure to have a root session open in case I lock myself out of sudoers.
$ sudo su
#

In a different terminal, I can edit sudo's PAM configuration to add the pam_u2f line:
#%PAM-1.0
auth       sufficient   pam_u2f.so cue openasuser
auth       include      system-auth
account    include      system-auth
password   include      system-auth
session    optional     pam_keyinit.so revoke
session    required     pam_limits.so
session    include      system-auth

I save this file and open a new terminal. I type in sudo vi and it asks me to touch my FIDO authenticator before opening vi! If I touch the Yubikey, it indeed opens vi with root privileges.
Let's break down the line:
It's also possible to use it to unlock my session, but it would be a bit reckless to allow anyone with my Yubikey to log into my laptop. If my backpack gets stolen and it has both my Yubikey and my laptop, anyone can log in.
It's possible to make the login screen require either my user password, or all of
If someone fails more than three times to enter the correct PIN, the Yubikey will lock itself and require a PUK to be unlocked. This gives me an additional layer of security, and it's more convenient than having to type a full length passphrase.
I've added the following line to /etc/pam.d/greetd (the greeter I use):
#%PAM-1.0
auth       sufficient   pam_u2f.so cue openasuser pinverification=1 userpresence=1
auth       substack     system-auth
[...]

[!warning] I can lose my Yubikey
I use my Yubikey as a nice convenience to set up a weaker PIN while not compromising too much on security. I use it instead of a password, not in addition to it.
Since I can lose or break my Yubikey and I don't want to buy two of them, I make the U2F login sufficient but not required. This means I can still fall back to password authentication if I lose my Yubikey.
Finally, DankMaterialShell uses its own lockscreen manager too. I still want to be able to fall back to password authentication if need be, so I'll configure it to accept U2F OR the password, not both.
This means that the lockscreen will call /etc/pam.d/dankshell-u2f to know what to do when the screen is locked. Since this file doesn't exist, I can create it with the following content.
#%PAM-1.0
auth       sufficient   pam_u2f.so cue openasuser pinverification=1 userpresence=1

I need a fallback for when I don't have my Yubikey, so I also create one for that occasion:
#%PAM-1.0
auth       include      system-auth

Finally, I have a consistent setup where both my login and lock screen require me to plug in my key, enter its PIN and touch it, or enter my full password. When it comes to sudo, I can just touch my key without entering a PIN.
My next quest will be to use my Yubikey to unlock my LUKS-encrypted disk.
At the start of the month, Bilal gave us all a giant gift with Goblint. In the first week it was already impressive. Now it’s an invaluable tool for anyone who has ever interfaced with GObject, GLib or GTK. It will catch leaks and bugs, or even offer to auto-fix and modernize your code to the paradigms we use today. It’s one of those things that is going to save countless hours of debugging and, more importantly, prevent the issues before they even get committed. Jonathan Blandford wrote about using it two days ago, and I suggest you read the post.
Everyone is trying to use goblint, and we are all stumbling upon the same issues integrating it into our tooling. Initially, it could only produce SARIF reports, which GitLab still keeps behind a feature flag, and only in the GitLab Enterprise Edition.
I added an export for GitLab’s Code Quality format, which has some support in the non-proprietary Community Edition we use in the GNOME and Freedesktop.org instances. Sadly, almost everything nice is still only available in the enterprise editions, but at least there is this little widget on the Merge Requests page.
Additionally, we now have CI templates for Goblint. One adds a job to the existing gnomeos-basic-ci component we use everywhere. Simply go to your latest pipeline and look for the job.
The report will also show up in Merge Requests that have been updated since yesterday. The gnomeos-basic-ci component has other goodies like sanitizers, static analyzers, test coverage, etc. wired up out of the box, so you should give it a try if you are not using it yet.
If you do but don’t want the goblint job, you can disable it easily with inputs: goblint: "disabled" similar to all the other tools the component provides.
include:
  - project: "GNOME/citemplates"
    file: "templates/default-rules.yml"
  - component: "gitlab.gnome.org/GNOME/citemplates/gnomeos-basic-ci@26.1"

If you want only a goblint job, I’ve also added a standalone template that you can use. (Or copy-paste from it).
include:
  - component: "gitlab.gnome.org/GNOME/citemplates/goblint@26.1"
    inputs:
      job-stage: "lint"

In order for the Code Quality report to work, you will need to have a report uploaded from your target branch, so GitLab will have something to compare the one from the merge request with. The template rules will handle that for you, but keep it in mind.
At this moment all the lints are warnings, so the job will never be fatal. This is why we can enable it by default without worrying about breaking pipelines for now. You can further configure its behavior to your needs, and error out if you want to, through the configuration file.
min_glib_version = "2.76"

[rules.g_declare_semicolon]
level = "ignore"

[rules.untranslated_string]
level = "error"
ignore = ["**/test-*.c"]

It’s also very likely that we are going to add goblint and its LSP server to the GNOME SDK Flatpak runtime, along with GNOME OS, so it will always be available for use with tools like Builder and foundry.
Enjoy
A few years back I did a quick exploration of what GNOME app icons might look like in an alternate universe where we kept on using VGA displays. Chiselling pixels away is therapeutic. So while there is absolutely no use for these, I keep on making them if only to bring some attention to what really matters for GNOME, having nice apps.
Here's a batch of mostly GNOME Circle app icons, with some 3rd party ones thrown in.
If you're reading this on my site rather than Planet GNOME or some flickering terminal in an abandoned Vault, then congratulations. You've stumbled upon a working Pip-Boy module! Found it half-buried under irradiated rubble, its phosphor display still humming with that familiar green glow. Enjoy these icons the way the dwellers of Vault 101 were always meant to, one glorious scanline at a time.
If you work with patches and git am, then you’re probably used to seeing patches fail to apply. For example:
$ git am CVE-2025-14512.patch
Applying: gfileattribute: Fix integer overflow calculating escaping for byte strings
error: patch failed: gio/gfileattribute.c:166
error: gio/gfileattribute.c: patch does not apply
Patch failed at 0001 gfileattribute: Fix integer overflow calculating escaping for byte strings
hint: Use 'git am --show-current-patch=diff' to see the failed patch
hint: When you have resolved this problem, run "git am --continue".
hint: If you prefer to skip this patch, run "git am --skip" instead.
hint: To restore the original branch and stop patching, run "git am --abort".
hint: Disable this message with "git config set advice.mergeConflict false"

This is sad and frustrating because the entire patch has failed, and now you have to apply the entire thing manually. That is no good.
Here is the solution, which I wish I had learned long ago:
$ git config --global am.threeWay true

This enables three-way merge conflict resolution, same as if you were using git cherry-pick or git merge. For example:
$ git am CVE-2025-14512.patch
Applying: gfileattribute: Fix integer overflow calculating escaping for byte strings
Using index info to reconstruct a base tree...
M	gio/gfileattribute.c
Falling back to patching base and 3-way merge...
Auto-merging gio/gfileattribute.c
CONFLICT (content): Merge conflict in gio/gfileattribute.c
error: Failed to merge in the changes.
Patch failed at 0001 gfileattribute: Fix integer overflow calculating escaping for byte strings
hint: Use 'git am --show-current-patch=diff' to see the failed patch
hint: When you have resolved this problem, run "git am --continue".
hint: If you prefer to skip this patch, run "git am --skip" instead.
hint: To restore the original branch and stop patching, run "git am --abort".
hint: Disable this message with "git config set advice.mergeConflict false"

Now you have merge conflicts, which you can handle as usual. This seems like a better default for pretty much everybody, so if you use git am, you should probably enable it.
I’ve no doubt that many readers will have known about this already, but it’s new to me, and it makes me happy, so I wanted to share. You’re welcome, Internet!
I was excited to see Bilal’s announcement of goblint, and I’ve spent the past week getting Crosswords to work with it. This is a tool I’ve always wanted and I’m pretty convinced it will be a great boon for the GNOME ecosystem. I’m posting my notes in hope that more people try it out:
YMMV
Hello there,
You thought I’d given up on “status update” blog posts, did you? I haven’t given up, despite my better judgement; this one is just even later than usual.
Recently I’ve been using my rather obscure platform as a blogger to theorize about AI and the future of the tech industry, mixed with the occasional life update, couched in vague terms, perhaps due to the increasing number of weirdos in the world who think doxxing and sending death threats to open source contributors is a meaningful use of their time.
In fact I do have some theories about how George Orwell (in “Why I Write”) and Italo Calvino (in “If On a Winter’s Night a Traveller”) made some good guesses from the 20th century about how easy access to LLMs would affect communication, politics and art here in the 21st. But I’ll leave that for another time.
It’s also 8 years since I moved to this new country where I live now, driving off the boat in a rusty transit van to enjoy a series of unexpected and amazing opportunities. Next week I’m going to mark the occasion with a five day bike ride through the mountains of Asturias, something I’ve been dreaming of doing for several years.
The original idea of writing a monthly post was to keep tabs on various open source software projects I sometimes manage to contribute to, and perhaps even to motivate me to do more such volunteering. Well, that part didn’t work: house renovations and an unexpectedly successful gig playing synth and trombone took over all my free time. But after many years of working on corporate consultancy and doing a little open source in the background, I’m trying to make a space at work to contribute in the open again.
I could tell the whole story here of how Codethink became “the build system people”. Maybe I will actually. It all started with BuildStream. In fact, that’s not even true. It all started in 2011 when some colleagues working with MeeGo and Yocto thought, “This is horrible, isn’t it?”
They set out to create something better, and produced Baserock, which unfortunately turned out even worse. But it did have some good ideas. The concept of “cache keys” to identify build inputs and content-addressed storage to hold build outputs began there, as did the idea of opening a “workspace” to make drive-by changes in build inputs within a large project.
BuildStream took this core idea, extended it to support arbitrary source kinds and element kinds defined by plugins, and added a shiny interface on top. It used OSTree to store and distribute build artifacts initially, later migrating to the Google REAPI with the goal of supporting Enterprise(TM) infrastructure. You can even use it alongside Bazel, if you like having three thousand commandline options at your disposal.
Unfortunately it was 2016, so we wrote the whole thing in Python. (In our defence, the Rust programming language had only recently hit 1.0 and crates.io was still a ghost town, and we’d probably still be rewriting the ruamel.yaml package in Rust if we had taken that road.) But the company did make some great decisions, particularly making a condition of success for the BuildStream project that it could unify the 5 different build+integration systems that the GNOME release team was maintaining. And that success meant not making a prototype, but the release team actually using BuildStream to make releases. Tristan even ended up joining the GNOME release team for a while. We discussed it all at the 2017 Manchester GUADEC, coincidentally. It was a great time. (Aside from the 6 months leading up to the conference.)
At this point, the Freedesktop SDK already existed, with the same rather terrible name that it has today, and was already the base runtime for this new app container tool that was named… xdg-app. (At least that eventually gained a better name). However, if you can remember 8 years ago, it had a very different form than today. Now, my memory of what happened next is especially hazy at this point, because like I told you in the beginning, I was on a boat with my transit van heading towards a new life in Spain. All I have to go on 8 years later is the Git history, but somehow the Freedesktop SDK grew a 3-stage compiler bootstrap, over 600 reusable BuildStream elements, its own Gitlab namespace, and even some controversial stickers. As a parting gift I apparently added support for building VMs, the idea being that we’d reinstate the old GNOME Continuous CI system that had unfortunately died of neglect several years earlier. This idea got somewhat out of hand, let’s say.
It took me a while to realize this, but today Freedesktop SDK is effectively the BuildStream reference distribution. What Poky is to BitBake in the Yocto project, Freedesktop SDK is to BuildStream. And this is a pretty important insight. It explains the problem you may have experienced with the BuildStream documentation: you want to build some Linux package, so you read through the manual right to the end, and then you still have no fucking idea how to integrate that package.
This isn’t a failure on the part of the authors; instead the issue is that your princess is in another castle. Every BuildStream project I’ve ever worked on has junctioned freedesktop-sdk.git and re-used the elements, plugins, aliases, configurations and conventions defined there, all of which are rigorously undocumented. The Freedesktop SDK Guide, for reasons that I won’t go into, doesn’t venture much further than reminding you how to call Make targets.
And this is something of a point of inflection. The BuildStream + Freedesktop SDK ecosystem has clearly not displaced Yocto, nor for that matter Linux Mint. But, like many of my favourite musicians, it has been quietly thriving in obscurity. People I don’t know are using it to do things that I don’t completely understand. I’ve seen it in comparison articles, and even job adverts. ChatGPT can generate credible BuildStream elements about as well as it can generate Dockerfiles (i.e. not very well, but it indicates a certain level of ubiquity). There have been conferences, drama, mistakes, neglect. It’s been through an 8-person corporate team hyper-optimizing the code, and it’s been through a mini dark age where volunteers thanklessly kept the lights on almost single-handedly, and it’s even survived its transition to the Apache Foundation.
Through all of this, the secret to its success is probably that it’s just a really nice tool to work with. As much as you can enjoy software integration, I enjoy using BuildStream to do it; things rarely break, when they do it’s rarely difficult to fix them, and most importantly the UI is really colourful! I’m now using it to build embedded system images for a product named CTRL, which you can think of as… a Linux distribution. There are some technical details to this which I’m working to improve, which I won’t bore you with here.
I also won’t bore you with the topic of community governance this month, but that’s what’s currently on my mind. If you’ve been part of the GNOME Foundation for a few years, you’ll know this is something that’s usually boring and occasionally becomes of almost life-or-death importance. The “let’s just be really sound” model works great, until one day when you least expect it, and then suddenly it really doesn’t. There is no perfect defence against this, and in open source communities it’s our diversity that brings the most resilience. When GNOME loses, KDE gains, and that way at least we still don’t have to use Windows. Indeed, this is one argument for investing in BuildStream even if it remains forever something of a minority sport. I guess I just need to remember that when you have to start thinking hard about governance, that’s a sign of success.
It’s a question I had to ask myself multiple times over the last few months. Depending on the context the answer can be:
If you are an app developer, you’re lucky and it’s almost always the first answer. If you develop something with a security boundary which involves files in any way, the correct answer is very likely the second one.
Opening a File, the Hard Way

Like so often, the details depend on the specifics, but in the worst-case scenario there is a process on either side of the security boundary, and both operate on a filesystem tree that is shared between them.
Let’s say that the process with more privileges operates on a file on behalf of the process with less privileges. You might want to restrict this to files in a certain directory, to prevent the less privileged process from, for example, stealing your SSH key, and thus take a subpath that is relative to that directory.
The first obvious problem is that the subpath can refer to files outside of the directory if it contains “..” components. If the privileged process gets called with a subpath of ../.ssh/id_ed25519, you are in trouble. Easy fix: normalize the path, and fail if it ever goes outside of the directory.
The next issue is that every component of the path might be a symlink. If the privileged process gets called with a subpath of link, and link is a symlink to ../.ssh/id_ed25519, you might be in trouble. If the process with less privileges cannot create files in that part of the tree, it cannot create a malicious symlink, and everything is fine. In all other scenarios, nothing is fine. Easy fix: resolve the symlinks, expand the path, then normalize it.
This is usually where most people think we’re done: opening a file is not that hard after all, and we can all go do more fun things now. Really, this is where the fun begins.
The fix above works as long as the less privileged process cannot change the filesystem tree anywhere in the file’s path while the more privileged process tries to access it. Usually this is the case if you unpack an attacker-provided archive into a directory the attacker does not have access to. If the tree can change, however, we have a classic TOCTOU (time-of-check to time-of-use) race.
We have the path foo/id_ed25519: we resolve the symlinks, we expand the path, we normalize it, and while we did all of that, the other process just replaced the regular directory foo that we just checked with a symlink which points to ../.ssh. We just checked that the path resolves to a location inside the target directory though, and happily open the path foo/id_ed25519, which now points to your SSH key. Not an easy fix.
So, what is the fundamental issue here? A path string like /home/user/.local/share/flatpak/app/org.example.App/deploy describes a location in a filesystem namespace. It is not a reference to a file. By the time you finish speaking the path aloud, the thing it names may have changed.
The safe primitive is the file descriptor. Once you have an fd pointing at an inode, the kernel pins that inode. The directory can be unlinked, renamed, or replaced with a symlink; the fd does not care. A common misconception is that file descriptors always represent open files. They can, but an fd opened with O_PATH does not open the file for I/O, yet still provides a stable reference to its inode.
The lesson that should be learned here is that you should not call any privileged process with a path. Period. Passing in file descriptors also has the benefit that they serve as proof that the calling process actually has access to the resource.
Another important lesson is that dropping down from a file descriptor to a path makes everything racy again. For example, let’s say that we want to bind mount something based on a file descriptor, and we only have the traditional mount API, so we convert the fd to a path, and pass that to mount. Unfortunately for the user, the kernel resolves the symlinks in the path that an attacker might have managed to place there. Sometimes it’s possible to detect the issue after the fact, for example by checking that the inode and device of the mounted file and the file descriptor match.
With that being said, sometimes it is not entirely avoidable to use paths, so let’s also look into that as well!
In the scenario above, we have a directory inside which we want all paths to resolve, and which the attacker does not control. We can thus open it with O_PATH and get a file descriptor for it, without the attacker being able to redirect it somewhere else.
With the openat syscall, we can open a path relative to the fd we just opened. It has all the same issues we discussed above, except that we can also pass O_NOFOLLOW. With that flag set, if the last segment of the path is a symlink, it is not followed; instead the actual symlink inode is opened. All the other components can still be symlinks, and they will still be followed. We can, however, split up the path, open a file descriptor for one path segment at a time, and resolve symlinks manually until we have walked the entire path.
libglnx chase

libglnx is a utility library for GNOME C projects that provides fd-based filesystem operations as its primary API. Functions like glnx_openat_rdonly, glnx_file_replace_contents_at, and glnx_tmpfile_link_at all take directory fds and operate relative to them. The library is built around the discipline of “always have an fd, never use an absolute path when you can use an fd.”
The most recent addition is glnx_chaseat, which provides safe path traversal. It was inspired by systemd’s chase() and does precisely what was described above.
int glnx_chaseat (int dirfd,
                  const char *path,
                  GlnxChaseFlags flags,
                  GError **error);

It returns an O_PATH | O_CLOEXEC fd for the resolved path, or -1 on error. The real magic is in the flags:
typedef enum _GlnxChaseFlags {
  /* Default */
  GLNX_CHASE_DEFAULT = 0,
  /* Disable triggering of automounts */
  GLNX_CHASE_NO_AUTOMOUNT = 1 << 1,
  /* Do not follow the path's right-most component. When the path's right-most
   * component refers to a symlink, return an O_PATH fd of the symlink. */
  GLNX_CHASE_NOFOLLOW = 1 << 2,
  /* Do not permit the path resolution to succeed if any component of the
   * resolution is not a descendant of the directory indicated by dirfd. */
  GLNX_CHASE_RESOLVE_BENEATH = 1 << 3,
  /* Symlinks are resolved relative to the given dirfd instead of root. */
  GLNX_CHASE_RESOLVE_IN_ROOT = 1 << 4,
  /* Fail if any symlink is encountered. */
  GLNX_CHASE_RESOLVE_NO_SYMLINKS = 1 << 5,
  /* Fail if the path's right-most component is not a regular file */
  GLNX_CHASE_MUST_BE_REGULAR = 1 << 6,
  /* Fail if the path's right-most component is not a directory */
  GLNX_CHASE_MUST_BE_DIRECTORY = 1 << 7,
  /* Fail if the path's right-most component is not a socket */
  GLNX_CHASE_MUST_BE_SOCKET = 1 << 8,
} GlnxChaseFlags;

While it doesn’t sound too complicated to implement, a lot of details are quite hairy. The implementation uses openat2, open_tree and openat depending on what is available and what behavior was requested; it handles auto-mount behavior, ensures that previously visited paths have not changed, and a few other things.
An Aside on Standard Libraries

The POSIX APIs are not great at dealing with the issue. The GLib/Gio APIs (GFile, etc.) are even worse and only accept paths. Granted, they also serve as a cross-platform abstraction where file descriptors are not a universal concept. Unfortunately, Rust also has this cross-platform abstraction which is based entirely on paths.
If you use any of those APIs, you very likely created a vulnerability. The deeper issue is that those path-based APIs are often the standard way to interact with files. This makes it impossible to reason about the security of composed code. You can audit your own code meticulously, open everything with O_PATH | O_NOFOLLOW, chain *at() calls carefully — and then call a third-party library that calls open(path) internally. The security property you established in your code does not compose through that library call.
This means that any system-level code that cares about filesystem security has to audit all transitive dependencies or avoid them in the first place.
So what would a better GLib cross-platform API look like? I would say not too different from chaseat(), but returning opaque handles instead of file descriptors, which on Unix would carry the O_PATH file descriptor and a path that can be used for printing, debugging and things like that. You would open files from those handles, which would yield another kind of opaque handle for reading, writing, and so on.
The current GFile was also designed to implement GVfs: g_file_new_for_uri("smb://server/share/file") gives you a GFile you can g_file_read() just like a local file. This is the right goal, but the wrong abstraction layer. Instead, this kind of access should be provided by FUSE, and the URI should be translated to a path on a specific FUSE mount. This would provide a few benefits:
Nowadays I maintain a small project called Flatpak. Codean Labs recently did a security analysis of it and found a number of issues. Even though Flatpak developers were aware of the dangers of filesystems, and created libglnx because of them, most of the discovered issues were about exactly that. One of them (CVE-2026-34078) was a complete sandbox escape.
flatpak run was designed as a command-line tool for trusted users. When you type flatpak run org.example.App, you control the arguments. The code that processes the arguments was written assuming the caller is legitimate. It accepted path strings, because that’s what command-line tools accept.
The Flatpak portal was then built as a D-Bus service that sandboxed apps could call to start subsandboxes — and it did this by effectively constructing a flatpak run invocation and executing it. This connected a component designed for trusted input directly to an untrusted caller (the sandboxed app).
Once that connection exists, every assumption baked into flatpak run about caller trustworthiness becomes a potential vulnerability. The fix wasn’t “change one function” — it was “audit the entire call chain from portal request to bubblewrap execution and replace every path string with an fd.” That’s commits touching the portal, flatpak-run, flatpak_run_app, flatpak_run_setup_base_argv, and the bwrap argument construction, plus new options (--app-fd, --usr-fd, --bind-fd, --ro-bind-fd) threaded through all of them.
If the GLib standard file and path APIs were secure, we would not have had this issue.
Another annoyance here is that the entire subsandboxing approach in Flatpak comes from 15 years ago, when unprivileged user namespaces were not common. Nowadays we could (and should) let apps use kernel-native unprivileged user namespaces to create their own subsandboxes.
Unfortunately with rather large changes comes a high likelihood of something going wrong. For a few days we scrambled to fix a few regressions that prevented Steam, WebKit, and Chromium-based apps from launching. Huge thanks to Simon McVittie!
In the end we managed to fix everything and make Flatpak more secure; the ecosystem is now better equipped to handle this class of issues, and hopefully you learned something as well.