USB-C is excellent, provided you don’t look too closely.
I’ve been seeing a drumbeat of interest in the internals of USB-C: Darryl Morley’s macOS WhatCable, Chromebooks exposing lots of lovely info about e-markers, USB cable testers and a bit more. Very infrastructure club topics. So I made a small GTK app, also called WhatCable, intended to show what Linux knows about your USB ports, cables, chargers and devices, written as a GNOME/libadwaita app using the interfaces Linux exposes through sysfs.
The hope was fairly straightforward: plug things into my Framework 13, ask Linux what is going on, and present the answer in a way that doesn’t require remembering which bit of /sys to poke. In particular I wanted cable identity and e-marker details. These are the useful little facts that tell you whether a cable is what it claims to be, or at least what it claims to be electronically. Given the number of USB-C cables in the house whose origin story is “came in a box with something”, this felt like a public service, or at least a satisfying evening.
The first bit is pleasantly sensible. Linux has standard-ish places for this information:
```
/sys/bus/usb/devices
/sys/class/typec
/sys/class/usb_power_delivery
/sys/bus/thunderbolt/devices
```

When those are populated, a normal unprivileged app can learn quite a lot. It can show USB devices, Type-C ports, partners, cables, roles, power data, Thunderbolt and USB4 domains. That’s exactly the sort of thing a small Flatpak app should be good at: read some public kernel state, translate it into something at least moderately human friendly and then depart.
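As a sketch of the kind of unprivileged read this boils down to, here is a minimal shell helper that walks the Type-C class. The attribute names are an assumed subset of what kernels expose, and the optional base argument only exists to make the sketch testable; it is not how a real app would be structured.

```shell
# Hypothetical helper: list what the kernel's Type-C class exposes.
# Attribute names here are a small assumed subset; real kernels expose more.
show_typec() {
    base="${1:-/sys/class/typec}"
    for port in "$base"/port*; do
        if [ ! -e "$port" ]; then
            # glob did not match anything: the class exists but has no ports
            echo "no Type-C ports exposed under $base"
            continue
        fi
        echo "== ${port##*/}"
        for attr in data_role power_role power_operation_mode; do
            if [ -f "$port/$attr" ]; then
                printf '  %s: %s\n' "$attr" "$(cat "$port/$attr")"
            fi
        done
    done
}

show_typec   # harmless anywhere: prints port attributes, or a "no ports" note
```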
On my Framework 13, the USB device and Thunderbolt sides were useful. The Type-C side was not. /sys/class/typec existed but had no ports. /sys/class/usb_power_delivery existed but was empty. This is a slightly annoying result, because it means the nice standard API is present as a signpost rather than a destination.
The next clue was that the machine clearly does have USB-C machinery, and not just because I could look at the side of the device. It is a Framework 13 with the embedded controller and Cypress CCG power delivery controllers doing real work. The relevant kernel modules were loaded, including UCSI and Chrome EC pieces. There was also an ACPI UCSI device at:
```
/sys/bus/acpi/devices/USBC000:00
```

but ucsi_acpi did not appear to bind to it and create the Type-C class ports. So the hardware and firmware know things, but they were not arriving in the standard Linux userspace shape.
Framework’s own tooling gives another route in. I built framework_tool from FrameworkComputer/framework-system and asked the EC what it could see. The Framework-specific PD port command did not work on this firmware:
```
USB-C Port 0: [ERROR] EC Response Code: InvalidCommand
```

and similarly for the other ports. That’s not very poetic, but it is at least clear.
The Chromebook-style power command was more useful. With a charger connected it reported, for example:
```
USB-C Port 0 (Right Back):
  Role: Sink
  Charging Type: PD
  Voltage Now: 19.776 V, Max: 20.0 V
  Current Lim: 2250 mA, Max: 2250 mA
  Dual Role: Charger
  Max Power: 45.0 W
```

That’s good information. It’s not cable identity, but it is the kind of port state people actually want when they are trying to work out why a laptop is charging slowly, or not charging, or doing something else mildly USB-C shaped.
framework_tool --pd-info could also talk through the EC to the Cypress controllers and report their firmware details:
```
Right / Ports 01
  Silicon ID: 0x2100
  Mode: MainFw
  Ports Enabled: 0, 1
  FW2 (Main) Version: Base: 3.4.0.A10, App: 3.8.00
Left / Ports 23
  Silicon ID: 0x2100
  Mode: MainFw
  Ports Enabled: 0, 1
  FW2 (Main) Version: Base: 3.4.0.A10, App: 3.8.00
```

Again, useful. Again, not the cable.
Much of this investigation and app code was written with AI tools in the loop. That was useful for chasing down boring plumbing and generating probes. The decisive test was asking the Chrome EC for the newer Type-C discovery data directly. The EC advertised USB PD support, but not the newer Type-C command set. EC_CMD_TYPEC_STATUS and EC_CMD_TYPEC_DISCOVERY both came back as invalid commands on all four ports.
That means that on this Framework 13 firmware path I cannot get Discover Identity results, SOP/SOP’ discovery data, SVIDs, mode lists or e-marker details through Chrome EC host commands. The cable may well be telling the PD controller interesting things, but those things are not exposed through a stable unprivileged interface I can sensibly use in a desktop app.
This is the main lesson from the whole exercise: USB-C inspection on Linux is not one API. It is a set of possible stories. Sometimes the kernel Type-C class tells you lots of things. Sometimes Thunderbolt sysfs tells you a different useful slice. Sometimes a vendor EC can tell you power state, but only as root. Sometimes the information exists below you somewhere, but not in a form you should build an app around.
So WhatCable needs to be honest. It should show the sources it can read, and it should say when a source is unavailable rather than pretending absence means certainty. “No cable identity exposed on this machine” is a very different statement from “this cable has no identity”. The former is boring but true. The latter is how you end up lying with an icon (it is not a nice icon).
The current shape I think is right is:
That last point matters. On the host /dev/cros_ec exists, but it is root-only. Making a normal app require broad device access would be a poor bargain. A small privileged helper that answers a few known-safe questions might be acceptable. A graphical app with arbitrary EC command execution would be exciting in the wrong way.
This is not quite the result I wanted when I started. I wanted to show a friendly “this is a 100W e-marked cable” label and feel very clever about it. What I have instead is a more modest app and a better understanding of where the bodies are buried. That’s still useful. A tool that tells you what your machine actually exposes is better than one that implies the USB-C universe is more orderly than it is. Given this, I’m not going to be sharing this one more widely, but fork away if you wish, or come back with a better idea.
It’s very easy to run with GNOME Builder, so just check out the source and ‘press play’, or get an artifact out of the GitHub Actions. If you run WhatCable on a different laptop and see rich Type-C data, lovely. If you run it on a Framework 13 like mine and mostly see USB devices, Thunderbolt controllers and a note that Type-C data is missing, that is also information. Not as glamorous as catching a suspicious cable in the act, but much more likely to be true.
GNOME’s GitLab runners use Podman as the container runtime with SELinux in Enforcing mode on Fedora. The GitLab Runner Docker/Podman executor spawns multiple containers per job: a helper container that clones the repository and handles artifacts, and a build container that runs the actual CI script. Both containers need to share a /builds volume — and this is where SELinux’s Multi-Category Security (MCS) becomes a problem.
## The MCS problem

An SELinux label has four fields: user:role:type:level. For containers the interesting part is the level, also called the MCS field. A level looks like s0:c123,c456, where s0 is the sensitivity (always s0 in the targeted policy) and c123,c456 are the categories. A label can carry many categories; container runtimes conventionally assign each container a pair.
MCS access is based on dominance. A subject’s label dominates an object’s label if the subject’s categories are a superset of (or equal to) the object’s categories:
| Subject | Object | Access? | Why |
|---|---|---|---|
| s0:c100,c200 | s0:c100,c200 | Yes | Exact match |
| s0:c100,c200 | s0:c100 | Yes | Subject’s categories are a superset |
| s0:c100,c200 | s0:c100,c300 | No | Subject lacks c300 |
| s0:c0.c1023 | s0:c100,c200 | Yes | Full range dominates everything |
| s0 | s0:c100,c200 | No | An empty category set cannot dominate a non-empty one |
| s0 | s0 | Yes | Both have no categories |

How this applies to the runners:
The range syntax (s0-s0:c0.c1023) is used for processes that need to operate across multiple levels. It means “my low clearance is s0 and my high clearance is s0:c0.c1023.” The process can read objects at any level within that range and create objects at any level within it. This is why Podman needs the full range — it creates containers with different MCS labels and needs to access all of them.
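The dominance rule can be sketched in a few lines of shell. This is a toy check, not real SELinux semantics: it ignores sensitivities (always s0 in the targeted policy) and does not handle the range syntax, only plain comma-separated category lists.

```shell
# Toy MCS dominance check: does the subject's category set contain the
# object's? Sensitivity comparison and c0.c1023-style ranges are omitted.
dominates() {
    subj_cats="${1#*:}"
    if [ "$subj_cats" = "$1" ]; then subj_cats=""; fi   # no ':' means no categories
    obj_cats="${2#*:}"
    if [ "$obj_cats" = "$2" ]; then obj_cats=""; fi
    IFS=','
    for c in $obj_cats; do
        case ",$subj_cats," in
            *",$c,"*) ;;       # object category present in subject: keep checking
            *) return 1 ;;     # missing category: no dominance, access denied
        esac
    done
    return 0
}

dominates "s0:c100,c200" "s0:c100"      && echo "superset: allowed"
dominates "s0:c100,c200" "s0:c100,c300" || echo "c300 missing: denied"
# the two random pairs from the test run below are mutually non-dominating:
dominates "s0:c46,c331"  "s0:c20,c540"  || echo "disjoint pairs: denied"
```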
When Podman starts a container, it picks a random pair of categories (e.g., s0:c512,c768) from within its allowed range and assigns that as the container’s process label. Files created by the container inherit that label. Another container gets a different random pair (e.g., s0:c33,c901). Since c512,c768 and c33,c901 do not match — neither is a superset of the other — SELinux denies cross-container file access. This is the isolation mechanism, and the root cause of the problem with GitLab Runner’s multi-container-per-job architecture.
The helper container gets one random MCS pair, writes the cloned repo to /builds labeled with that pair, and the build container gets a different pair. The build container cannot read or write those files. The :Z volume flag (exclusive relabel) relabels the volume to the mounting container’s category, but that only helps the first container — the second one still has a different label.
## The test script

I wrote a script that demonstrates the problem with both standard containers (crun) and microVMs (libkrun). The script creates two containers per test, a helper that writes a file to a shared /builds volume and a build container that tries to read it, simulating the GitLab Runner workflow:
```bash
#!/bin/bash
# Description: SELinux MCS Diagnostic (crun vs krun)

if [ "$(getenforce)" != "Enforcing" ]; then
    echo "WARNING: SELinux is not in Enforcing mode. This test requires Enforcing mode."
    exit 1
fi

TEST_BASE="/tmp/gitlab-runner-mcs-test"
CRUN_DIR="$TEST_BASE/crun-builds"
KRUN_DIR="$TEST_BASE/krun-builds"

# Cleanup from previous runs
rm -rf "$TEST_BASE"
mkdir -p "$CRUN_DIR" "$KRUN_DIR"

echo "======================================================="
echo " TEST 1: Standard Container Isolation (crun)"
echo "======================================================="

# 1. CREATE Helper
podman create --name crun-helper -v "$CRUN_DIR:/builds:Z" fedora bash -c "
echo '[crun] -> Helper Process Context (Inside):'
cat /proc/self/attr/current
echo 'crun-data' > /builds/artifact.txt
echo '[crun] -> File Label INSIDE Helper:'
ls -Z /builds/artifact.txt
" > /dev/null

echo "[crun] Starting Helper Container (applying :Z relabel)..."
HELPER_HOST_LABEL_CRUN=$(podman inspect -f '{{.ProcessLabel}}' crun-helper)
echo "[crun] -> HOST METADATA: Podman assigned process label: $HELPER_HOST_LABEL_CRUN"
podman start -a crun-helper

echo ""
echo "[crun] -> File Label ON HOST (Notice the specific MCS category):"
ls -Z "$CRUN_DIR/artifact.txt"

# 2. CREATE Build Container (The Victim)
podman create --name crun-build -v "$CRUN_DIR:/builds" fedora bash -c "
echo '  [Build-Internal] Process Context:'
cat /proc/self/attr/current 2>/dev/null
echo '  [Build-Internal] Executing ls -laZ /builds :'
ls -laZ /builds 2>&1 | sed 's/^/    /'
echo '  [Build-Internal] Executing cat /builds/artifact.txt :'
cat /builds/artifact.txt 2>&1 | sed 's/^/    /'
" > /dev/null

echo ""
echo "[crun] Starting Build Container to inspect shared volume..."
BUILD_HOST_LABEL_CRUN=$(podman inspect -f '{{.ProcessLabel}}' crun-build)
echo "[crun] -> HOST METADATA: Podman assigned process label: $BUILD_HOST_LABEL_CRUN"
podman start -a crun-build

podman rm -f crun-helper crun-build > /dev/null

echo ""
echo "======================================================="
echo " TEST 2: MicroVM Isolation (libkrun / virtio-fs) FIXED"
echo "======================================================="

# --- Write the execution scripts to the host to avoid parsing errors ---
cat << 'EOF' > "$TEST_BASE/krun_helper.sh"
#!/bin/bash
echo '[krun] -> Helper Process Context (Inside VM):'
cat /proc/self/attr/current 2>/dev/null || echo '  (SELinux disabled/unavailable in guest kernel)'
echo 'krun-data' > /builds/artifact.txt
echo '[krun] -> File Label INSIDE Helper VM (Blindspot):'
ls -laZ /builds/artifact.txt 2>&1 | sed 's/^/    /'
EOF

cat << 'EOF' > "$TEST_BASE/krun_build.sh"
#!/bin/bash
echo '  [Build-Internal] Process Context (Inside VM):'
cat /proc/self/attr/current 2>/dev/null || echo '  (SELinux disabled/unavailable in guest kernel)'
echo '  [Build-Internal] Executing ls -laZ /builds :'
ls -laZ /builds 2>&1 | sed 's/^/    /'
echo '  [Build-Internal] Executing cat /builds/artifact.txt :'
cat /builds/artifact.txt 2>&1 | sed 's/^/    /'
EOF

chmod +x "$TEST_BASE/krun_helper.sh" "$TEST_BASE/krun_build.sh"
# ---------------------------------------------------------------------

# 1. CREATE Helper MicroVM
podman create --name krun-helper --runtime krun --memory=1024m \
    -v "$KRUN_DIR:/builds:Z" \
    -v "$TEST_BASE/krun_helper.sh:/script.sh:ro,Z" \
    fedora /script.sh > /dev/null

echo "[krun] Starting Helper MicroVM (applying :Z relabel)..."
HELPER_HOST_LABEL_KRUN=$(podman inspect -f '{{.ProcessLabel}}' krun-helper)
echo "[krun] -> HOST METADATA: Podman assigned process label: $HELPER_HOST_LABEL_KRUN"
podman start -a krun-helper

echo ""
echo "[krun] -> File Label ON HOST (Podman applied the helper's MCS category via :Z):"
ls -Z "$KRUN_DIR/artifact.txt"

# 2. CREATE Build MicroVM (The Victim)
podman create --name krun-build --runtime krun --memory=1024m \
    -v "$KRUN_DIR:/builds" \
    -v "$TEST_BASE/krun_build.sh:/script.sh:ro,Z" \
    fedora /script.sh > /dev/null

echo ""
echo "[krun] Starting Build MicroVM to inspect shared volume..."
BUILD_HOST_LABEL_KRUN=$(podman inspect -f '{{.ProcessLabel}}' krun-build)
echo "[krun] -> HOST METADATA: Podman assigned process label: $BUILD_HOST_LABEL_KRUN"
echo "      *** THE virtiofsd DAEMON ON THE HOST IS TRAPPED IN THIS CONTEXT ***"
podman start -a krun-build

# Cleanup
podman rm -f krun-helper krun-build > /dev/null

echo ""
echo "======================================================="
echo " Test Complete."
```

Test 1 (crun) creates a helper container that mounts the builds directory with :Z (exclusive relabel) and writes artifact.txt. Podman assigns it a random MCS label; in this run it was s0:c20,c540. The file on disk inherits that label. Then a second container (the build container) mounts the same path without :Z and gets a different random label (s0:c46,c331). Since c46,c331 does not dominate c20,c540, the build container is denied access to the file.
Test 2 (krun) runs the same scenario but with --runtime krun, which boots each container inside a lightweight microVM via libkrun. The helper VM gets container_kvm_t:s0:c823,c999 and the build VM gets container_kvm_t:s0:c309,c405 — same MCS mismatch, same denial. The type changes from container_t to container_kvm_t, but the MCS mechanism is identical. On the host side, virtiofsd — the daemon that serves the volume into the VM via virtio-fs — runs under the MCS label Podman assigned to the VM. The build VM’s virtiofsd is trapped in s0:c309,c405 and cannot access files labeled s0:c823,c999.
An interesting detail: inside the libkrun VMs, cat /proc/self/attr/current returns just kernel — SELinux is not available in the guest. The VM thinks it has no mandatory access control, but the host-side virtiofsd is still fully subject to MCS enforcement. This is a blindspot worth being aware of.
The output from a run on Fedora with SELinux Enforcing and Podman 5.8.2:

```
=======================================================
 TEST 1: Standard Container Isolation (crun)
=======================================================
[crun] Starting Helper Container (applying :Z relabel)...
[crun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_t:s0:c20,c540
[crun] -> Helper Process Context (Inside):
system_u:system_r:container_t:s0:c20,c540
[crun] -> File Label INSIDE Helper:
system_u:object_r:container_file_t:s0:c20,c540 /builds/artifact.txt

[crun] -> File Label ON HOST (Notice the specific MCS category):
system_u:object_r:container_file_t:s0:c20,c540 /tmp/gitlab-runner-mcs-test/crun-builds/artifact.txt

[crun] Starting Build Container to inspect shared volume...
[crun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_t:s0:c46,c331
      *** COMPARE THE cXXX,cYYY ABOVE TO THE FILE LABEL. THIS MISMATCH CAUSES THE DENIAL ***
  [Build-Internal] Process Context:
system_u:system_r:container_t:s0:c46,c331
  [Build-Internal] Executing ls -laZ /builds :
    ls: cannot open directory '/builds': Permission denied
  [Build-Internal] Executing cat /builds/artifact.txt :
    cat: /builds/artifact.txt: Permission denied

=======================================================
 TEST 2: MicroVM Isolation (libkrun / virtio-fs) FIXED
=======================================================
[krun] Starting Helper MicroVM (applying :Z relabel)...
[krun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_kvm_t:s0:c823,c999
[krun] -> Helper Process Context (Inside VM):
kernel
[krun] -> File Label INSIDE Helper VM (Blindspot):
    -rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c823,c999 10 May  2  2026 /builds/artifact.txt
[krun] -> File Label ON HOST (Podman applied the helper's MCS category via :Z):
system_u:object_r:container_file_t:s0:c823,c999 /tmp/gitlab-runner-mcs-test/krun-builds/artifact.txt

[krun] Starting Build MicroVM to inspect shared volume...
[krun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_kvm_t:s0:c309,c405
      *** THE virtiofsd DAEMON ON THE HOST IS TRAPPED IN THIS CONTEXT ***
  [Build-Internal] Process Context (Inside VM):
kernel
  [Build-Internal] Executing ls -laZ /builds :
    ls: /builds: Permission denied
    ls: cannot open directory '/builds': Permission denied
  [Build-Internal] Executing cat /builds/artifact.txt :
    cat: /builds/artifact.txt: Permission denied

=======================================================
 Test Complete.
```

## GitLab’s official suggestion and why it falls short

GitLab’s documentation on configuring SELinux MCS suggests applying the same MCS label to all containers launched by a runner:
```toml
[[runners]]
  [runners.docker]
    security_opt = ["label=level:s0:c1000,c1000"]
```

This works: all containers get the same category pair, so the helper and build containers can share files. But it collapses MCS isolation between all concurrent jobs on that runner. With concurrent = 4, four simultaneous jobs all run as s0:c1000,c1000 and can read each other’s /builds content: cloned source code, build artifacts, cached dependencies. On a shared or multi-tenant runner, this is a security regression: it trades MCS isolation for functionality.
For runners with concurrent = 1 or dedicated single-tenant runners this is an acceptable tradeoff, but it does not generalize to shared infrastructure where multiple untrusted projects run side by side.
## How GNOME currently handles this

GNOME’s runners are managed via an Ansible role that enforces SELinux in Enforcing mode, installs rootless Podman running as a dedicated podman system user with linger enabled, and deploys custom SELinux policy modules. The Podman service runs under SELinuxContext=system_u:system_r:container_runtime_t:s0-s0:c0.c1023 via a systemd override; the full MCS range (s0-s0:c0.c1023) gives the container runtime the ability to spawn containers at any MCS level and relabel volumes accordingly, as explained in the dominance rules above.
Four custom SELinux .te modules are compiled and loaded on every runner host: pydocuum (allows the image cleanup daemon to talk to the Podman socket), podman (grants user_namespace create and /dev/null mapping), flatpak (permits the filesystem mounts flatpak builds need), and gnome_runner (covers binfmt_misc access, device nodes, and other permissions GNOME OS builds require).
For the MCS problem specifically, the runner config.toml — rendered from a Jinja2 template via per-host Ansible variables — sets a fixed MCS label per runner type. Here’s a representative snippet from one of the runner hosts:
```toml
[[runners]]
  name = "a15948139c78"
  executor = "docker"
  [runners.docker]
    image = "quay.io/fedora/fedora:latest"
    privileged = false
    security_opt = ["label=level:s0:c100,c100"]
    devices = ["/dev/kvm", "/dev/udmabuf"]
    cap_add = ["SYS_PTRACE", "SYS_CHROOT"]

[[runners]]
  name = "a15948139c78-flatpak"
  executor = "docker"
  [runners.docker]
    image = "quay.io/gnome_infrastructure/gnome-runtime-images:gnome-master"
    privileged = false
    security_opt = ["seccomp:/home/podman/gitlab-runner/flatpak.seccomp.json", "label=level:s0:c200,c200"]
    cap_drop = ["all"]
```

This is the same approach GitLab’s documentation suggests, with one refinement: we use different fixed categories per runner type (c100,c100 for untagged runners and c200,c200 for flatpak runners) so that flatpak builds and regular builds remain MCS-isolated from each other, even though builds of the same type share a category.
This is a pragmatic compromise, not an ideal solution. All concurrent jobs on the same runner type share the same MCS category. With concurrent: 4 on our Hetzner runners, four simultaneous untagged jobs can read each other’s /builds content. For GNOME’s use case — a community CI infrastructure where the runners are shared by GNOME project maintainers — this is an acceptable tradeoff. The alternative, leaving MCS labels random, would break every single job. But it is precisely this tradeoff that motivates exploring per-job VM isolation via microVMs.
## Exploring libkrun

libkrun is a lightweight Virtual Machine Monitor (VMM) that integrates with Podman via --runtime krun, running each container inside a microVM with its own lightweight kernel. The appeal is strong: per-container VM isolation would give each job its own kernel and address space, making the MCS cross-container problem irrelevant inside the VM.
I tested libkrun on a Fedora system and hit an immediate blocker: Fatal glibc error: rseq registration failed. The rseq (restartable sequences) syscall was introduced in Linux kernel 4.18, and glibc >= 2.35 registers rseq by default at startup. libkrun uses a custom minimal kernel that does not expose rseq support. Since the guest images (Fedora in our case) ship a modern glibc that expects rseq to be available, the process aborts at startup before any user code runs.
The libkrun kernel is compiled into the library itself and cannot be modified or replaced by the user. This is not a configuration issue but a fundamental limitation of the current libkrun release.
Even if the rseq issue were resolved, the MCS challenge would still be there — as the test script demonstrates in Test 2. On the host side, Podman assigns MCS labels to the virtiofsd process that serves the volume into the VM via virtio-fs. Different VMs get different host-side MCS labels, meaning the same :Z relabel / cross-container access denial applies. The mechanism changes from overlay mounts to virtio-fs, but the SELinux enforcement is identical: virtiofsd for the build VM runs at container_kvm_t:s0:c309,c405 and cannot access files labeled s0:c823,c999 by the helper VM’s virtiofsd.
## Firecracker and the custom executor path

Firecracker is another microVM technology, the one behind AWS Lambda and Fly.io, that could provide strong per-job isolation. However, there is no native GitLab Runner executor for Firecracker. The only integration path is the Custom Executor, which requires implementing prepare, run, and cleanup scripts from scratch.
The job image is exposed via CUSTOM_ENV_CI_JOB_IMAGE, but everything else is on the operator: pulling the OCI image, extracting a rootfs, booting a Firecracker VM with the right kernel and network configuration, injecting the build script, mounting or copying the cloned repository into the VM, collecting artifacts and cache after the job finishes, and tearing the VM down. GitLab provides an LXD-based example that shows the pattern — prepare creates a container and installs dependencies, run pipes the job script into it, cleanup destroys it — but adapting that to microVMs adds the complexity of VM lifecycle management, kernel and rootfs preparation, networking, and storage. This is a significant engineering effort, essentially rebuilding the entire Docker executor workflow from scratch.
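To make the shape of that work concrete, here is a skeleton of the three hooks with every real VM operation replaced by a placeholder echo. The stage names (prepare, run, cleanup) and the CUSTOM_ENV_CI_JOB_IMAGE variable are GitLab's; everything the hooks "do" here is illustrative.

```shell
# Sketch of the Custom Executor lifecycle. In a real deployment each hook is
# a separate script referenced from config.toml; the VM operations are stubs.
prepare() {
    # would: pull the OCI image, extract a rootfs, boot the Firecracker VM
    echo "prepare: boot microVM for image ${CUSTOM_ENV_CI_JOB_IMAGE:-unset}"
}
run_stage() {
    # would: copy the generated job script ($1) into the VM and execute it
    echo "run: execute $1 inside the VM"
}
cleanup() {
    # would: collect artifacts and cache, then tear the VM down
    echo "cleanup: destroy microVM"
}

# gitlab-runner drives the hooks in this order for every job:
prepare
run_stage build_script.sh
cleanup
```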
## What comes next

MCS is a core SELinux feature. Type enforcement (TE) already confines processes by type: container_t can only access container_file_t, not user_home_t or httpd_sys_content_t. But TE alone cannot distinguish one container_t process from another. MCS adds that layer: by assigning each container a unique category pair, the kernel enforces isolation between processes that share the same type. Container A at s0:c100,c100 and Container B at s0:c200,c200 are both container_t, but MCS ensures they cannot touch each other’s files. The conflict with GitLab Runner’s multi-container-per-job architecture is that two containers that need to share a volume are given different categories by default. The workarounds we deploy today, including the fixed MCS labels on GNOME’s runners, trade that inter-container isolation for functionality.
The most promising direction I’ve found so far is the combination of Cloud Hypervisor and the fleeting-plugin-fleetingd plugin. Cloud Hypervisor is built on the rust-vmm crates and is essentially a more capable sibling of Firecracker: it supports CPU and memory hotplugging, VFIO device passthrough, and virtio-fs, features that are often necessary for complex CI tasks like building large binaries or running UI tests, and that Firecracker’s minimalist design deliberately omits. fleeting-plugin-fleetingd is a community plugin for GitLab’s Instance Executor (the modern evolution of the Custom Executor) that automates the full VM lifecycle: downloading cloud images, creating copy-on-write disks, launching Cloud Hypervisor VMs with direct kernel boot, provisioning them via cloud-init, and tearing them down after each build. Each job gets a fresh disposable VM, which is exactly the per-job isolation model we need. The plugin already handles networking via TAP interfaces and nftables SNAT, and supports customizing the VM image through cloud-init commands, so preinstalling Podman or other build tools is straightforward.
Beyond that, I’ll also keep evaluating libkrun (promising Red Hat technology), Firecracker with a hand-rolled custom executor, and QEMU’s microvm machine type. The common denominator across all of these — except for the fleeting-plugin-fleetingd path — is that none of them have an existing GitLab Runner integration. Regardless of which microVM technology we settle on, the path forward involves either building a workflow from scratch using the Custom Executor and its prepare, run, cleanup hooks, or leveraging the fleeting plugin ecosystem that GitLab has been building around the Instance and Docker Autoscaler executors.
That should be all for today, stay tuned!
It’s the first day of May, and it’s time for another update on what’s been happening at the GNOME Foundation. It’s been two weeks since my last post, and this update covers highlights of what we’ve been doing since then.
## Remembering Seth Nickell

This week we received the very sad news of the death of Seth Nickell. It’s been a long time since Seth was active in the GNOME project, so many of our members won’t be familiar with him or his work. However, Seth played an important part in GNOME’s history, and was a special and unique character.
Jonathan wrote a wonderful post about Seth, with some great stories. Federico migrated the memorial page from the old wiki to the handbook, and added Seth there (work is currently ongoing to develop that page). Seth’s death has also been covered by LWN, which includes dedications from GNOME contributors.
Whether you knew Seth or came to GNOME after his time, I think we can all appreciate the contributions that he made, which live on in the project and wider ecosystem to this day.
## GNOME Fellowship

Applications for the first round of the new GNOME Fellowship program closed last week, on 20th April. We had a great response and received some excellent proposals, and now we have the tough job of deciding who is going to receive support through the program.
To that end, the Fellowship Committee met this week to review the proposals and begin the selection process. We have identified a shortlist of candidates, and will be meeting again next week to narrow the selection further.
Since this is the first round of the Fellowship, we are establishing the selection process as we go. Hopefully we’ll get to put this to use again in future Fellowship rounds!
## Conferences

Linux App Summit (LAS) will be held in Berlin on 16-17 May – that’s in a little over two weeks! The schedule has been finalized and looks great, and this year’s LAS is shaping up to be a fantastic event. Please do consider going, and please do register!
Due to high demand, the organizing team have decided to stream the talks from this year, so look out for details about remote participation.
Aside from LAS, preparations for July’s GUADEC conference continue to be worked on. Travel sponsorship is still available if you need assistance in order to attend, so do consider applying for that.
## Office transitions ongoing

Work to update many of our back-office systems and processes has continued at a steady pace over the past fortnight. Many of the big moves are done (new payments system, email accounts, mailing system, accounting procedures, credit card platform), and we are now firmly in the final stages: making sure that our new address is used everywhere, emails are going to the right places, recurring payments are transferred over to new credit cards, and vendors are set up on the new payments system.
The value of this work is already showing, with smoother accounting procedures, more up-to-date finance reports, and better tracking of incoming queries.
That’s it for this update. Thanks for reading, and take care.
Update on what happened across the GNOME project in the week from April 24 to May 01.
## GNOME Circle Apps and Libraries

### NewsFlash feed reader ↗

Follow your favorite blogs & news sites.
Jan Lukas announces
Hi TWIG. Newsflash can now swipe between articles. This closes off one of the oldest still standing feature requests. And hopefully makes all the mobile users happy.
## Third Party Projects

xjuan reports
Casilda 1.2.4 Released!
I am very happy to announce a new version of Casilda!
A simple Wayland compositor widget for Gtk 4 and GNOME
This release comes with several new features like fractional scaling support, bug fixes, and extra polish that is making it start to feel like a proper compositor. You can read more about it at https://blogs.gnome.org/xjuan/2026/04/19/casilda-1-2-4-released/
Anton Isaiev says
RustConn (connection manager for SSH, RDP, VNC, SPICE, Telnet, Serial, Kubernetes, MOSH, and Zero Trust protocols)
Versions 0.11.0–0.12.7 bring the three biggest features since the project started, plus a mountain of polish driven by community feedback.
Cloud Sync landed. You can now synchronize connection configurations between devices and team members through any shared directory - Google Drive, Syncthing, Nextcloud, Dropbox, or even a USB stick. Two modes: Group Sync (per-group .rcn files with Master/Import access) and Simple Sync (single-file bidirectional merge). A file watcher auto-imports changes, and the new Cloud Sync settings page shows sync status, synced groups, and available files. CLI got sync status, sync list, sync export, sync import, and sync now commands.
SSH Tunnel Manager is a standalone window for managing headless SSH port-forwarding tunnels without terminal sessions - Local, Remote, and Dynamic forwards with auto-start on launch and auto-reconnect. SSH jump host support was extended to RDP, VNC, and SPICE connections, so you can tunnel graphical sessions through a bastion host. Ctrl+T opens the tunnel manager.
Tab management was completely reworked around AdwTabView. Tab Overview (Ctrl+Shift+O) gives a GNOME Web-style grid of all open tabs. Tab Pinning keeps important tabs at the left edge. A tab switcher in the Command Palette (% prefix) provides fuzzy search across open tabs. Right-click context menu gained Close Others / Left / Right / All / Ungrouped actions.
Other highlights: custom terminal color themes with full 16-color ANSI palette editor; terminal scrollbar; font zoom (Ctrl+Scroll); copy-on-select; SSH Keep-Alive and verbose mode; Hoop.dev as the 11th Zero Trust provider; custom SSH agent socket override (fixes KeePassXC/Bitwarden agent in Flatpak); RDP mouse jiggler; terminal activity/silence monitor; host online check with auto-connect; highlight rules now render with actual colors via Cairo overlay; connection dialog rebuilt with adw:: widgets following GNOME HIG.
Packaging grew significantly. RustConn is now available as Flatpak on Flathub, Snap with strict confinement, AppImage, native .deb and .rpm packages via OBS repositories (Debian 13, Ubuntu 24.04/26.04, Fedora 43/44, openSUSE Tumbleweed/Slowroll/Leap 16.0), plus ARM64 builds. A huge thank you to the community maintainers: the AUR package for Arch Linux, the FreeBSD port, and there is an open request to include RustConn in Debian proper.
Thank you to everyone who reported issues, contributed translations, and tested pre-releases - your feedback shaped every one of these 25 releases. Special thanks to GaaChun for the complete Simplified Chinese translation, and to Phil Dodd and Todor Todorov for the support.
Project: https://github.com/totoshko88/RustConn Flatpak: https://flathub.org/en/apps/io.github.totoshko88.RustConn
Capypara says
Field Monitor 50.0
Field Monitor - the remote desktop viewer focused on accessing VMs - has been updated to version 50.0.
Some highlights:
Field Monitor is available via Flathub: https://flathub.org/apps/de.capypara.FieldMonitor
Christian says
The first public release of Gitte is out!
Gitte is a GTK4/libadwaita git GUI written in Rust, built on Relm4 and git2 (no shelling out to the git binary).
What’s in the initial release:
It’s early days, so expect rough edges. Bug reports and feedback are very welcome.
Get Gitte from Flathub: https://flathub.org/apps/de.wwwtech.gitte
Parabolic
Download web video and audio.
Nick reports
Parabolic V2026.4.1 is here with plenty of bug fixes!
Here’s the full changelog:
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
GNOME is once again participating in GSoC. This year, we have contributors working on adding Debug Adapter Protocol support to GJS, incorporating vocab-style puzzles into GNOME Crosswords, creating a native GTK4/Rust rewrite of the Pitivi timeline ruler, porting gitg to GTK4, implementing app uninstallation in the GNOME Shell app grid, and enabling recovery from GPU resets.
As we onboard the contributors, we will be adding them to Planet GNOME, where you can get to know them better and follow their project updates.
GSoC is a great opportunity to welcome new people into our project. Please help them get started and make them feel at home in our community!
Special thanks to our community mentors, who are donating their time and energy to help welcome and guide our new contributors: Philip Chimento, Jonathan Blandford, Yatin, Alex Băluț, Alberto Fanjul, Adrian Vovk, Jonas Ådahl, and Robert Mader.
Yesterday, I wanted to debug a glycin (or Shell) issue on GNOME OS. Turns out, there is currently no documentation that works or includes all necessary steps.
Here is the simplest variant if you don’t develop on GNOME OS and have an internet connection that can download 16 GB in a reasonable amount of time.
First we get a toolbox image to build our code.
$ toolbox create gnomeos-nightly -i quay.io/gnome_infrastructure/gnome-build-meta:gnomeos-devel-nightly

After entering the toolbox with
$ toolbox enter gnomeos-nightly

we can clone and build our project with the sysext-utils that are included in our image:
$ meson setup ./build --prefix /usr --libdir="lib/$(gcc -print-multiarch)"
$ sysext-build example ./build

This creates an example.sysext.raw file.
Now, we need a GNOME OS VM to test our build. We can download the image and install it in Boxes. After logging in, we can just drag and drop the example.sysext.raw file into the VM.
Before we can install it, we need to get the development tools for our VM:
$ run0 updatectl enable devel --now

After that, we need to restart the VM.
Finally, we can test our build:
$ run0 sysext-add ~/Downloads/example.sysext.raw

Adding the --persistent flag to this command will make the changes stay active across reboots.
If the changes made it impossible to boot into the VM again, we can start the VM in “Safe mode” from the boot menu. After logging in, we can manually remove the extension:
$ run0 rm /var/lib/extensions/example.raw

Happy hacking!
Recently, I have been using GNOME OS as my daily driver.
After being a seasoned Linux user for a long time, dabbling in distros like Alpine Linux, Arch Linux and Fedora (and even Silverblue), I tried switching to something more opinionated that "works by default" while being hard to break.
And given my existing relationship with GNOME, GNOME OS was a choice worth looking into.
One feature of GNOME OS is that it is immutable (i.e. system files are read-only). It also doesn't ship with a package manager, so it doesn't have functionality built-in to install extra packages.
You can install GUI Applications normally using Flathub (and Snap/AppImage), but installing non-GUI applications like development tools or CLI packages is not built-in.
There are of course several solutions you can use, such as homebrew or coldbrew, but today we will focus on mise.
What is mise?

mise pitches itself as "One tool to manage languages, env vars, and tasks per project, reproducibly."
However, I only use a fraction of its functionality: I only use it to install packages.
How to install it?

The instructions are here: https://mise.jdx.dev/getting-started.html
But essentially it's as easy as running this (remember to read the source of the installer first):
curl https://mise.run | sh

Activating mise

Then you will need to "activate" mise, which essentially makes tools installed by mise available by modifying your $PATH variable:
echo 'eval "$(~/.local/bin/mise activate bash --shims)"' >> ~/.bashrc

The instructions above are for bash, so you will need to consult the docs to get instructions for your shell.
You will need to re-login for the mise command to be available, or open a new shell.
A note on shims

Feel free to skip this section, as it's just an explainer.
Also, note that the above command uses the --shims flag, which is NOT the default. It essentially means that mise will modify the $PATH variable, instead of doing a weird thing where it re-activates itself after each command you run.
The non-shim way to activate mise is useful when you use mise to install different package versions across different repositories, but that sometimes breaks IDEs and is out of the scope of this blog post.
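To make that concrete, here is a minimal, hypothetical sketch of what a shim is (stand-in paths and names, not mise's actual implementation): a tiny wrapper script on $PATH that forwards to the real binary.

```shell
#!/bin/sh
# Hypothetical shim demo: the real tool lives outside $PATH, and a small
# wrapper on $PATH forwards to it. mise generates one such wrapper per tool.
set -eu
demo="$(mktemp -d)"
mkdir -p "$demo/real" "$demo/shims"

# The "real" tool, installed somewhere outside $PATH...
cat > "$demo/real/hello" <<'EOF'
#!/bin/sh
echo "hello from the real tool"
EOF
chmod +x "$demo/real/hello"

# ...and the shim, which simply execs it with the same arguments.
cat > "$demo/shims/hello" <<EOF
#!/bin/sh
exec "$demo/real/hello" "\$@"
EOF
chmod +x "$demo/shims/hello"

# Prepending the shim directory to $PATH is all "activation" has to do.
PATH="$demo/shims:$PATH"
out="$(hello)"
echo "$out"
```

The point is that the shim directory only needs to be added to $PATH once; which binary a shim resolves to can change without touching $PATH again.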
Installing packages

You can start installing your first package with mise:
mise use -g java

The above command installs java globally (hence the -g flag), which you can now confirm by running:
$ java --version
openjdk 26.0.1 2026-04-21
OpenJDK Runtime Environment (build 26.0.1+8-34)
OpenJDK 64-Bit Server VM (build 26.0.1+8-34, mixed mode, sharing)

You can install many more tools, of which you can find a non-complete list here: mise-tools.
For example, you can similarly install a specific major version of nodejs
mise use -g node@22

Or install the latest LTS version of node:
mise use -g node@lts

Or you can be overly specific:
mise use -g node@v25.9.0
mise use -g node@25.9.0 # this works too!

Searching

Use mise search to find packages.
mise search typ
Tool       Description
typos      Source code spell checker. https://github.com/crate-ci/typos
typst      A new markup-based typesetting system that is powerful and easy to learn. https://github.com/typst/typst
typstyle   Beautiful and reliable typst code formatter. https://github.com/Enter-tainer/typstyle
quicktype  Generate types and converters from JSON, Schema, and GraphQL provided by https://quicktype.io. https://www.npmjs.com/package/quicktype

Uninstalling

mise unuse -g node

Updating

mise self-update # updating mise itself
mise up          # updating tools installed by mise
mise outdated    # checking if you have outdated tools

Config File

Tools you install with mise globally will be saved in the file ~/.config/mise/config.toml, which you can commit to your dotfiles so you can have similar tools across different machines.
Here's an example of my mise config file at the time of writing this blog post.
# ~/.config/mise/config.toml
[tools]
bat = "latest"
btop = "latest"
bun = "latest"
caddy = "latest"
"cargo:mergiraf" = "latest"
deno = "latest"
difftastic = "latest"
doggo = "latest"
fastfetch = "latest"
fzf = "latest"
github-cli = "latest"
"github:railwayapp/railpack" = "latest"
glab = "latest"
helix = "latest"
java = "latest"
lazygit = "latest"
node = "latest"
"npm:vscode-langservers-extracted" = "latest"
oha = "latest"
pipx = "latest"
pnpm = "latest"
prettier = "latest"
rust = "latest"
scooter = "latest"
tmux = "latest"
usage = "latest"
yt-dlp = { version = "latest", rename_exe = "yt-dlp" }
zellij = "latest"
"github:patryk-ku/music-discord-rpc" = { version = "latest", asset_pattern = "music-discord-rpc" }
rclone = "latest"
mc = "latest"
go = "latest"
"go:git.sr.ht/~migadu/alps/cmd/alps" = "latest"
"npm:localtunnel" = "latest"

After the tools inside the config have changed, you can run the following command to make mise re-install packages from the config file:
mise install

Mise Backends

Mise is able to install packages from multiple sources. These sources are called "backends" by mise.
When you type mise use -g node@22, it will resolve node against the registry and figure out that the default backend for node is core.
Core

The default backend is called core and tools from this backend are usually provided from the official source.
Other tools that are available from core include Node.js, Ruby, Python, etc...
We could also have been explicit about the backend we want to use:
mise use -g core:node

You can find a list of all core packages here.
Aqua

You can also install packages from the Aqua registry.
Language Package Managers

You can also install tools from their respective package managers. Here are a few examples.
npm

You can install prettier, typescript, oxlint and other JavaScript/TypeScript tools published on the npm registry. Find the tools on npm.
mise use -g npm:prettier

pipx

You can install black, poetry and other Python tools from pypi. Find the tools on pypi.
mise use -g pipx:black
mise use -g pipx:git+https://github.com/psf/black.git # from a github repo

cargo

You can install cargo packages with this backend. You need to have rust installed beforehand though, which you can do with mise:
mise use -g rustThen install your packages
mise use -g cargo:eza

There are more language package manager backends like gem, go and more.
Github

You can install packages from Github directly, as long as the project you are trying to install from uses Github releases:
mise use -g github:railwayapp/railpack

mise will usually auto-detect which asset you want to use, but you can also specify the asset glob in ~/.config/mise/config.toml:
[tools]
"github:patryk-ku/music-discord-rpc" = { version = "latest", asset_pattern = "music-discord-rpc" }

I heard the news about Seth Nickell’s passing last week, and have been in a bit of a funk ever since.
Seth was brilliant, iconoclastic, fearless.
It’s been a long while since Seth was an active part of the GNOME Community, but his influence on the project can still be seen in its DNA if you know where to look. He arrived on the GNOME scene while still in school with hundreds of ideas on how to improve things. It was an interesting time: We had just launched GNOME 1.5 and were searching for a new path towards GNOME 2.0. The Sun usability study had been published and the community had internalized the need to change directions. Seth rolled up his sleeves and did the work needed to help light that path.
Seth championed radical proposals such as instant apply, button ordering, message dialog fixes, and more. He cleaned up the control-center proposing some of the most visible changes from GNOME 1 to 2. He also did the initial designs for epiphany, pushing for a cleaner browser experience during an era of high browser complexity. He had a vision of desktops as a democratic tool, as easy and natural to use as any other tool in the human experience.
As a designer, Seth was focused on trying to understand who we were designing for and making sure we were solving problems for them. While he wasn’t beyond fixing paddings / layouts, he wanted to get the Big Picture right. He wasn’t beyond rolling up his sleeves writing code to move things forward, but was at his best as a champion and visionary, arguing for us to take risks and continue to innovate.
Spending time with Seth was a hoot. He had such a flair for the dramatic. I remember…
Being one of the public faces of GNOME2 was hard, and he moved on. Later, he worked on OLPC and Sugar, and made his mark there. After that, he seemed to travel a lot. We lost touch, though he’d reappear every couple of years to say hi. I hope he found what he was looking for.
Farewell, my friend. The world now has less color in it.
I got myself a Yubikey recently, and I wanted to use it as a nice convenience to:
I've only managed to do the first two, since they both rely on Linux Pluggable Authentication Modules (PAM). Luckily for me, one of PAM's modules supports U2F, the standard Yubikeys rely on.
First I need to install pam-u2f to add U2F support to PAM, and pamu2fcfg to configure my key.
$ sudo rpm-ostree install pam-u2f pamu2fcfg

Since I'm running an immutable OS I need to reboot, and then I can create the correct directory and file to dump a U2F key into it.
$ mkdir -p ~/.config/Yubico
$ pamu2fcfg > ~/.config/Yubico/u2f_keys

Then I make sure to have a root session open in case I lock myself out of sudo.
$ sudo su
#

In a different terminal, I can edit sudo's PAM config file to add the pam_u2f line:
#%PAM-1.0
auth       sufficient   pam_u2f.so cue openasuser
auth       include      system-auth
account    include      system-auth
password   include      system-auth
session    optional     pam_keyinit.so revoke
session    required     pam_limits.so
session    include      system-auth

I save this file and open a new terminal. I type in sudo vi and it asks me to touch my FIDO authenticator before opening vi! If I touch the Yubikey, it indeed opens vi with root privileges.
Let’s break down the line:

- auth: the module participates in the authentication stack.
- sufficient: if pam_u2f succeeds, authentication succeeds immediately; if it fails, PAM falls through to the next auth entry (my password).
- pam_u2f.so: the module that adds U2F support.
- cue: prints a prompt reminding me to touch the device.
- openasuser: reads the u2f_keys file as my user instead of as root.
It's also possible to use it to unlock my session, but it would be a bit reckless to allow anyone with my Yubikey to log into my laptop. If my backpack gets stolen and it has both my Yubikey and my laptop, anyone can log in.
It's possible to make the login screen require either my user password, or all of
If someone fails more than three times to enter the correct PIN, the Yubikey will lock itself and require a PUK to be unlocked. This gives me an additional layer of security, and it's more convenient than having to type a full length passphrase.
I've added the following line to /etc/pam.d/greetd (the greeter I use):
#%PAM-1.0
auth       sufficient   pam_u2f.so cue openasuser pinverification=1 userpresence=1
auth       substack     system-auth
[...]

[!warning] I can lose my Yubikey
I use my Yubikey as a nice convenience to set up a weaker PIN while not compromising too much on security. I use it instead of a password, not in addition to it.
Since I can lose or break my Yubikey and I don't want to buy two of them, I make the U2F login sufficient but not required. This means I can still fallback to password authentication if I lose my Yubikey.
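That fallback depends on ordering: PAM evaluates auth lines top to bottom, so the sufficient pam_u2f line has to sit above the password stack. Here is a small, hypothetical sanity check (a temp file with sample contents mirroring the example above, not a real PAM config):

```shell
#!/bin/sh
# Hypothetical check: the pam_u2f "sufficient" line must come before the
# password stack, so the key is tried first and the password stays a fallback.
set -eu
pamfile="$(mktemp)"
cat > "$pamfile" <<'EOF'
#%PAM-1.0
auth       sufficient   pam_u2f.so cue openasuser
auth       include      system-auth
EOF

# Line numbers stand in for evaluation order: smaller means tried earlier.
u2f_line="$(grep -n 'pam_u2f.so' "$pamfile" | cut -d: -f1)"
pw_line="$(grep -n 'include.*system-auth' "$pamfile" | cut -d: -f1)"

[ "$u2f_line" -lt "$pw_line" ] && echo "order ok"
```

If the two lines were swapped, the password stack would run first and the key would never be a shortcut.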
Finally, DankMaterialShell uses its own lockscreen manager too. I still want to be able to fallback to password authentication if need be, so I'll configure it to accept U2F OR the password, not both.
This means that the lockscreen will call /etc/pam.d/dankshell-u2f to know what to do when the screen is locked. Since this file doesn't exist, I can create it with the following content.
#%PAM-1.0
auth       sufficient   pam_u2f.so cue openasuser pinverification=1 userpresence=1

I need a fallback for when I don't have my Yubikey, so I also create the one for this occasion:
#%PAM-1.0
auth       include      system-auth

Finally, I have a consistent setup where both my login and lock screen require me to plug in my key, enter its PIN and touch it, or enter my full password. When it comes to sudo, I only need to touch my key, without entering a PIN.
My next quest will be to use my Yubikey to unlock my LUKS-encrypted disk.
At the start of the month, Bilal gave us all a giant gift with Goblint. On the first week it was already impressive. Now it’s an invaluable tool for anyone that ever interfaced with GObject, glib or GTK. It will catch leaks, bugs, or even offer to auto fix and modernize your code to the modern paradigms we use. It’s one of those things that is going to save countless hours of debugging and more importantly, prevent the issues before they even get committed. Jonathan Blandford wrote about using it two days ago, and I suggest you read the post.
Everyone is trying to use goblint, and we are all stumbling upon the same issues integrating it into our tooling. Initially, it was only able to produce SARIF reports, which GitLab still has behind a feature flag, in addition to only being available in GitLab Enterprise Editions.
I added an export for GitLab’s Code Quality format which has some support in the non-proprietary Community Edition we use in the GNOME and Freedesktop.org instances. Sadly, almost everything nice is still only available in the enterprise editions, but at least there is this little Widget in the Merge Requests page.
Additionally, we now have CI templates for Goblint. One is adding a job to the existing gnomeos-basic-ci component we use everywhere. Simply go to your latest pipeline and look for the job.
The report will also show up in Merge Requests that have been updated since yesterday. The gnomeos-basic-ci component has other goodies like sanitizers, static analyzers and test coverage wired up out of the box, so you should give it a try if you are not using it yet.
If you do but don’t want the goblint job, you can disable it easily with inputs: goblint: "disabled" similar to all the other tools the component provides.
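For example, a pipeline that uses the component but opts out of the goblint job might look like this (a sketch based on the inputs mentioned above; check the component's documented inputs for the exact spelling):

```yaml
include:
  - project: "GNOME/citemplates"
    file: "templates/default-rules.yml"
  - component: "gitlab.gnome.org/GNOME/citemplates/gnomeos-basic-ci@26.1"
    inputs:
      goblint: "disabled"
```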
include:
  - project: "GNOME/citemplates"
    file: "templates/default-rules.yml"
  - component: "gitlab.gnome.org/GNOME/citemplates/gnomeos-basic-ci@26.1"

If you want only a goblint job, I’ve also added a standalone template that you can use. (Or copy-paste from it.)
include:
  - component: "gitlab.gnome.org/GNOME/citemplates/goblint@26.1"
    inputs:
      job-stage: "lint"

In order for the Code Quality report to work, you will need to have a report uploaded from your target branch, so GitLab will have something to compare the merge request's report against. The template rules will handle that for you, but keep it in mind.
At this moment all the lints are warnings, so the job will never be fatal. This is why we can enable it by default without worrying about breaking pipelines for now. You can further configure its behavior to your needs, and error out if you want to, through the configuration file:
min_glib_version = "2.76"

[rules.g_declare_semicolon]
level = "ignore"

[rules.untranslated_string]
level = "error"
ignore = ["**/test-*.c"]

It’s also very likely that we are going to add goblint and its LSP server to the GNOME SDK Flatpak runtime, along with GNOME OS, so it will always be available for use with tools like Builder and foundry.
Enjoy