Planet GNOME
https://planet.gnome.org/

Jakub Steiner: USB MIDI Controllers on the M8

Tue, 28/10/2025 - 12:04 PM

The M8 has extensive USB audio and MIDI capabilities, but it cannot act as a USB MIDI host. So you can control other devices over USB MIDI, but you cannot send MIDI to it over USB.

Control Surface & Pots for M8

Controlling the M8 from USB devices has to be done through the old TRS (A) MIDI jacks. There are two devices that can aid in that. I’ve used the RK06, which is very featureful, but comes in a very clumsy plastic case with a micro USB cable that splits into a HOST part and a USB power input. It also sometimes doesn’t reset properly when multiple USB devices are attached through a hub. The last bit is why I even bother with this setup.

The Dirtywave M8 has amazing support for the Novation Launchpad Pro MK3. The majority of people hook it up directly to the M8 using the TRS MIDI cables. The Launchpad lacks any sort of pots or encoders, though. Thus the need to fuss with USB dongles. What you need is to use the Launchpad Pro as a USB controller and shun the reliable MIDI connection. The RK06 allows combining multiple USB devices attached through an unpowered USB hub. Because I am flabbergasted at how I did things, here’s a schema that works.

If it doesn’t work, unplug the RK06 and turn the LPPro off and on in the M8. I hate this setup, but it is the only compact one that works (after some fiddling that you absolutely hate when doing a gig).

Intech Knot

The Hungarians behind the Grid USB controllers (with first-class Linux support) have a USB>MIDI device called Knot. It has one great feature: a switch between TRS A/B for non-standard devices.

It is way less fiddly than the RK06, comes in a nice aluminium housing, and is sturdier. However, it doesn’t seem to work with the Launchpad Pro via USB and it seems to be completely confused by a USB hub, so it’s not useful for my use case of multiple USB controllers.

Non-compact but Reliable

Novation came out with the Launch Control XL, which sadly replaced the pots of the old one with encoders (absolute vs relative movement), but added MIDI in/out/thru, with a MIDI mixer even. That way you can avoid USB altogether and get a reliable setup with control surfaces, encoders, and sliders.

One day someone will come up with compact MIDI-capable pots to play along with the Launchpad Pro ;) This post has been brought to you by an old man who forgets things.

Colin Walters: Thoughts on agentic AI coding as of Oct 2025

Mon, 27/10/2025 - 10:08 PM
Sandboxed, reviewed parallel agents make sense

For coding and software engineering, I’ve used and experimented with various frontends (FOSS and proprietary) to multiple foundation models (mostly proprietary) trying to keep up with the state of the art. I’ve come to strongly believe in a few things:

  • Agentic AI for coding needs strongly sandboxed, reproducible environments
  • It makes sense to run multiple agents at once
  • AI output definitely needs human review
Why human review is necessary

Prompt injection is a serious risk at scale

All AI is at risk of prompt injection to some degree, but it’s particularly dangerous with agentic coding. The best the state of the art today knows how to do is mitigate it. I don’t think it’s a reason to avoid AI, but it’s one of the top reasons to use AI thoughtfully and carefully for products that have any level of criticality.

OpenAI’s Codex documentation has a simple and good example of this.

Disabling the tests and claiming success

Beyond that, I’ve experienced multiple times different models happily disabling the tests or adding a println!("TODO add testing here") and claiming success. At least this one is easier to mitigate with a second agent doing code review before it gets to human review.

Sandboxing

The “can I do X” prompting model that various interfaces default to is seriously flawed. Anthropic has a recent blog post on Claude Code changes in this area.

My take here is that sandboxing is only part of the problem; the other part is ensuring the agent has a reproducible environment, and especially one that can be run in IaaS environments. I think devcontainers are a good fit.

I don’t agree with the statement from Anthropic’s blog

without the overhead of spinning up and managing a container.

I don’t think this is overhead for most projects, and where it does feel like overhead, we should be working to mitigate it.

Running code as separate login users

In fact, one thing I think we should popularize more on Linux is the concept of running multiple unprivileged login users. Personally, the tasks I work on often involve building containers or launching local VMs, and isolating that works really well with a fully separate “user” identity. An experiment I did was basically useradd ai and running delegated tasks there instead. To log in, I added %wheel ALL=NOPASSWD: /usr/bin/machinectl shell ai@ to /etc/sudoers.d/ai-login so that my regular human user could easily get a shell in the ai user’s context.
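A minimal sketch of that setup, assuming a systemd-based distro with machinectl available and an admin account in the wheel group (the ai username and the sudoers file name simply mirror the experiment described above):

# create a separate unprivileged user for delegated agent tasks
sudo useradd --create-home ai

# allow wheel members to open a shell as that user without a password
echo '%wheel ALL=NOPASSWD: /usr/bin/machinectl shell ai@' | sudo tee /etc/sudoers.d/ai-login
sudo chmod 0440 /etc/sudoers.d/ai-login

# from the regular human account, jump into the ai user's context
sudo machinectl shell ai@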

I haven’t truly “operationalized” this one as juggling separate git repository clones was a bit painful, but I think I could automate it more. I’m interested in hearing from folks who are doing something similar.

Parallel, IaaS-ready agents…with review

These days I’m often running 2-3 agents in parallel on different tasks (with different levels of success, but that’s its own story).

It makes total sense to support delegating some of these agents to work off my local system and into cloud infrastructure.

In looking around in this space, there’s quite a lot of stuff. One of them is Ona (formerly Gitpod). I gave it a quick try and I like where they’re going, but more on this below.

Github Copilot can also do something similar to this, but what I don’t like about it is that it pushes a model where all of one’s interaction is in the PR. That’s going to be seriously noisy for some repositories, and interaction with LLMs can feel too “personal” sometimes to have permanently recorded.

Credentials should be on demand and fine grained for tasks

To me a huge flaw with Ona, and one shared with other things like Langchain Open-SWE, is basically this:

Sorry but: no way I’m clicking OK on that button. I need a strong and clearly delineated barrier between tooling/AI agents acting “as me” and my ability to approve and push code or even do basic things like edit existing pull requests.

Github’s Copilot gets this more right because its bot runs as a distinct identity. I haven’t dug into what it’s authorized to do. I may play with it more, but I also want to use agents outside of Github, and I’m not a fan of deepening dependence on a single proprietary forge.

So I think a key thing agent frontends should help do here is in granting fine-grained ephemeral credentials for dedicated write access as an agent is working on a task. This “credential handling” should be a clearly distinct component. (This goes beyond just git forges of course but also other issue trackers or data sources that may be in context).

Conclusion

There’s so much out there on this, I can barely keep track while trying to do my real job. I’m sure I’m not alone – but I’m interested in others’ thoughts on this!

Sam Thursfield: Slow Fedora VMs

Mon, 27/10/2025 - 12:00 PM

Good morning!

I spent some time figuring out why my build PC was running so slowly today. Thanks to some help from my very smart colleagues I came up with this testcase in Nushell to measure CPU performance:

~: dd if=/dev/random of=./test.in bs=(1024 * 1024) count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0111184 s, 943 MB/s
~: time bzip2 test.in
0.55user 0.00system 0:00.55elapsed 99%CPU (0avgtext+0avgdata 8044maxresident)k
112inputs+20576outputs (0major+1706minor)pagefaults 0swap

We are copying 10MB of random data into a file and compressing it with bzip2. 0.55 seconds is a pretty good time to compress 10MB of data with bzip2.

But! As soon as I ran a virtual machine, this same test started to take 4 or 5 seconds, both on the host and in the virtual machine.

There is already a new Fedora kernel available, and with that version (6.17.4-200.fc42.x86_64) I don’t see any problems. I guess it was some issue affecting AMD Ryzen virtualization that has already been fixed.

Have a fun day!

edit: The problem came back with the new kernel as well. I guess this is not going to be a fun day.

Cassidy James Blaede: I’ve Joined ROOST

Mon, 27/10/2025 - 1:00 AM

A couple of months ago I shared that I was looking for what was next for me, and I’m thrilled to report that I’ve found it: I’m joining ROOST as OSS Community Manager!

What is ROOST?

I’ll let our website do most of the talking, but I can add some context based on my conversations with the rest of the incredible ROOSTers over the past few weeks. In a nutshell, ROOST is a relatively new nonprofit focused on building, distributing, and maintaining the open source building blocks for online trust and safety. It was founded by tech industry veterans who saw the need for truly open source tools in the space, and were sick of rebuilding the exact same internal tools across multiple orgs and teams.

The way I like to frame it is how you wouldn’t roll your own encryption; why would you roll your own trust and safety tooling? It turns out that currently every platform, service, and community has to reinvent all of the hard work to ensure people are safe and harmful content doesn’t spread. ROOST is teaming up with industry partners to release existing trust and safety tooling as open source and to build the missing pieces together, in the open. The result is that teams will no longer have to start from scratch and take on all of the effort (and risk!) of rolling their own trust and safety tools; instead, they can reach for the open source projects from ROOST to integrate into their own products and systems. And we know open source is the right approach for critical tooling: the tools themselves must be transparent and auditable, while organizations can customize and even help improve this suite of online safety tools to benefit everyone.

This Platformer article does a great job of digging into more detail; give it a read. :) Oh, and why the baby chick? ROOST has a habit of naming things after birds—and I’m a baby ROOSTer. :D

What is trust and safety?

I’ve used the term “trust and safety” a ton in this post; I’m no expert (I’m rapidly learning!), but think about protecting users from harm including unwanted sexual content, misinformation, violent/extremist content, etc. It’s a field that’s much larger in scope and scale than most people probably realize, and is becoming ever more important as it becomes easier to generate massive amounts of text and graphic content using LLMs and related generative “AI” technologies. Add in that those generative technologies are largely trained using opaque data sources that can themselves include harmful content, and you can imagine how we’re at a flash point for trust and safety; robust open online safety tools like those that ROOST is helping to build and maintain are needed more than ever.

What I’ll be doing

My role is officially “OSS Community Manager,” but “community manager” can mean ten different things to ten different people (which is why people in the role often don’t survive long at a company…). At ROOST, I feel like the team knows exactly what they need me to do—and importantly, they have a nice onramp and initial roadmap for me to take on! My work will mostly focus on building and supporting an active and sustainable contributor community around our tools, as well as helping improve the discourse and understanding of open source in the trust and safety world. It’s an exciting challenge to take on with an amazing team from ROOST as well as partner organizations.

My work with GNOME

I’ll continue to serve on the GNOME Foundation board of directors and contribute to both GNOME and Flathub as much as I can; there may be a bit of a transition time as I get settled into this role, but my open source contributions—both to trust and safety and the desktop Linux world—are super important to me. I’ll see you around!

Aryan Kaushik: Balancing Work and Open Source

Sun, 26/10/2025 - 1:00 AM
Work pressure + Burnout == Low contributions?

Over the past few months, I’ve been struggling with a tough question. How do I balance my work commitments and personal life while still contributing to open source?

On the surface, it looks like a weird question. Like I really enjoy contributing and working with contributors, and when I was in college, I always thought... "Why do people ever step back? It is so fun!". It was the thing that brought a smile to my face and took away any "stress". But now that I have graduated, things have taken a turn.

Now, when work pressure mounts, you use the little time you get not to focus on writing code, but instead to pursue some kind of hobby, learn something new, or spend time with family. Or just endlessly scroll videos and sleep.

This has led me to my lowest contribution streak, unable to work on all those cool things I imagined, like reworking the Pitivi timeline in Rust, finishing that one MR in GNOME Settings that has been stuck for ages, fixing some issues on the GNOME Extensions website, working on my own extension's feature requests, or contributing to the committees I am a part of.

It’s reached a point where I’m genuinely unsure how to balance things anymore, and hence I wanted to give everyone whom I might not have been able to reply to, or who has not seen me for a long time, an update: I'm still here, just in a dilemma of how to return.

I believe I'm not the only one who faces this. After guiding my juniors for a long while on how to contribute and study at the same time and still manage time for other things, I am now at a point where I am in the same situation. So, if anyone has any insights on how they manage their time, or keep up the motivation and juggle between tasks, do let me know (akaushik [at] gnome [dot] org), I'd really appreciate any insights :)

One of them would probably be to take fewer things on my plate?

Perhaps this is just a new phase of learning? Not about code, but about balance.

Flathub Blog: Enhanced License Compliance Tools for Flathub

Fri, 24/10/2025 - 2:00 AM

tl;dr: Flathub has improved tooling to make license compliance easier for developers. Distros should rebuild OS images with updated runtimes from Flathub; app developers should ensure they're using up-to-date runtimes and verify that licenses and copyright notices are properly included.

In early August, a concerned community member brought to our attention that copyright notices and license files were being omitted when software was bundled as Flatpaks and distributed via Flathub. This was a genuine oversight across multiple projects, and we're glad we've been able to take the opportunity to correct and improve this for runtimes and apps across the Flatpak ecosystem.

Over the past few months, we've been working to enhance our tooling and infrastructure to better support license compliance. With the support of the Flatpak, freedesktop-sdk, GNOME, and KDE teams, we've developed and deployed significant improvements that make it easier than ever for developers to ensure their applications properly include license and copyright notices.

What's New

In coordination with maintainers of the freedesktop-sdk, GNOME, and KDE runtimes, we've implemented enhanced license handling that automatically includes license and copyright notice files in the runtimes themselves, deduplicated to be as space-efficient as possible. This improvement has been applied to all supported freedesktop-sdk, GNOME, and KDE runtimes, plus backported to freedesktop-sdk 22.08 and newer, GNOME 45 and newer, KDE 5.15-22.08 and newer, and KDE 6.6 and newer. These updated runtimes cover over 90% of apps on Flathub and have already rolled out to users as regular Flatpak updates.

We've also worked with the Flatpak developers to add new functionality to flatpak-builder 1.4.5 that automatically recognizes and includes common license files. This enhancement, now deployed to the Flathub build service, helps ensure apps' own licenses as well as the licenses of any bundled libraries are retained and shipped to users along with the app itself.

These improvements represent an important milestone in the maturity of the Flatpak ecosystem, making license compliance easier and more automatic for the entire community.

Recommended Actions

App Developers

We encourage you to rebuild your apps with flatpak-builder 1.4.5 or newer to take advantage of the new automatic license detection. You can verify that license and copyright notices are properly included in your Flatpak's /app/share/licenses, both for your app and any included dependencies. In most cases, simply rebuilding your app will automatically include the necessary licenses, but you can also fine-tune which license files are included using the license-files key in your app's Flatpak manifest if needed.
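As a quick sanity check (a sketch, with org.example.App standing in for your real app ID), you can list that directory inside the installed Flatpak's sandbox:

# list the license and copyright notices bundled with the app
flatpak run --command=ls org.example.App /app/share/licenses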

For apps with binary sources (e.g. debs or rpms), we encourage app maintainers to explicitly include relevant license files in the Flatpak itself for consistency and auditability.

End-of-life runtime transition: To focus our resources on maintaining high-quality, up-to-date runtimes, we'll be completing the removal of several end-of-life runtimes in January 2026. Apps using runtimes older than freedesktop-sdk 22.08, GNOME 45, KDE 5.15-22.08 or KDE 6.6 will be marked as EOL shortly. Once these older runtimes are removed, the apps will need to be updated to use a supported runtime to remain available on Flathub. While this won't affect existing app installations, after this date, new users will be unable to install these apps from Flathub until they're rebuilt against a current runtime. Flatpak manifests of any affected apps will remain on the Flathub GitHub organization to enable developers to update them at any time.

If your app currently targets an end-of-life runtime that did receive the backported license improvements, we still strongly encourage you to upgrade to a newer, supported runtime to benefit from ongoing security updates and platform improvements.

Distributors

If you redistribute binaries from Flathub, such as pre-installed runtimes or apps, you should rebuild your distributed images (ISOs, containers, etc.) with the updated runtimes and apps from Flathub. You can verify that appropriate licenses are included with the Flatpaks in the runtime filesystem at /usr/share/licenses inside each runtime.
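For a quick spot check before rebuilding images (a sketch; the runtime name and branch are only examples), you can list the licenses shipped inside a runtime installed from Flathub:

# list the deduplicated license files shipped inside the runtime
flatpak run --command=ls org.gnome.Platform//49 /usr/share/licenses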

Get in Touch

App developers, distributors, and community members are encouraged to connect with the team and other members of the community in our Discourse forum and Matrix chat room. If you are an app developer or distributor and have any questions or concerns, you may also reach out to us at admins@flathub.org.

Thank You!

We are grateful to Jef Spaleta from Fedora for his care and confidentiality in bringing this to our attention and working with us collaboratively throughout the process. Special thanks to Boudhayan Bhattacharya (bbhtt) for his tireless work across Flathub, Flatpak and freedesktop-sdk, on this as well as many other important areas. And thank you to Abderrahim Kitouni (akitouni), Adrian Vovk (AdrianVovk), Aleix Pol Gonzalez (apol), Bart Piotrowski (barthalion), Ben Cooksley (bcooksley), Javier Jardón (jjardon), Jordan Petridis (alatiera), Matthias Clasen (matthiasc), Rob McQueen (ramcq), Sebastian Wick (swick), Timothée Ravier (travier), and any others behind the scenes for their hard work and timely collaboration across multiple projects to deliver these improvements.

Our Linux app ecosystem is truly strongest when individuals from across companies and projects come together to collaborate and work towards shared goals. We look forward to continuing to work together to ensure app developers can easily ship their apps to users across all Linux distributions and desktop environments. ♥

Jonathan Blandford: Crosswords 0.3.16: 2025 Internship Results

Thu, 23/10/2025 - 8:00 AM

Time for another GNOME Crosswords release! This one highlights the features our interns did this past summer. We had three fabulous interns — two through GSoC and one through Outreachy. While this release really only has three big features — one from each — they were all fantastic.

Thanks goes to my fellow GSoC mentors Federico and Tanmay. In addition, Tilda and the folks at Outreachy were extremely helpful. Mentorship is a lot of work, but it’s also super-rewarding. If you’re interested in participating as a mentor in the future and have any questions about the process, let me know. I’ll be happy to speak with you about them.

Dictionary pipeline improvements

First, our Outreachy intern Nancy spent the summer improving the build pipeline to generate the internal dictionaries we use. These dictionaries are used to provide autofill capabilities and add definitions to the Editor, as well as providing near-instant completions for both the Editor and Player. The old pipeline was buggy and hard to maintain. Once we had cleaned it up, Nancy was able to use it to effortlessly produce a dictionary in her native tongue: Swahili.

A grid in Swahili

We have no distribution story yet, but it’s exciting that it’s now so much easier to create dictionaries in other languages. It opens the door to the Editor being more universally useful (and fulfills a core GNOME tenet).

You can read about it in more detail in Nancy’s final report.

Word List

Victor did a ton of research for Crosswords, almost acting like a Product Manager. He installed every crossword editor he could find and did a competitive analysis, noting possible areas for improvement. One of the areas he flagged was the word list in our editor. This list suggests words that could be used in a given spot in the grid. We started with a simplistic implementation that listed every possible word in our dictionary that could fit. This approach — while fast — provided a lot of dead words that would make the grid unsolvable. So he set about trying to narrow down that list.

New Word List showing possible options

It turns out that there are a lot of tradeoffs to be made here (Victor’s post). It’s possible to find a really good set of words, at the cost of a lot of computational power. A much simpler list is quick but has dead words. In the end, we found a happy medium that let us get results fast and had a stable list across a clue. He’ll be blogging about this shortly.

Victor also cleaned up our development docs, and researched satsolve algorithms for the grid. He’s working on a lovely doc on the AC-3 algorithm, and we can use it to add additional functionality to the editor in the future.

Printing

Toluwaleke implemented printing support for GNOME Crosswords.

This was a tour de force, and a phenomenal addition to the Crosswords codebase. When I proposed it for a GSoC project, I had no idea how much work this project could involve. We already had code to produce an svg of the grid — I thought that we could just quickly add support for the clues and call it a day. Instead, we ended up going on a wild ride resulting in a significantly stronger feature and code base than we had going in.

His blog has more detail and it’s really quite cool (go read it!). But from my perspective, we ended up with a flexible and fast rendering system that can be used in a lot more places. Take a look:

https://blogs.gnome.org/jrb/files/2025/10/output_video.webm

The resulting PDFs are really high quality — they seem to look better than some of the newspaper puzzles I’ve seen. We’ll keep tweaking them as there are still a lot of improvements we’d like to add, such as taking the High Contrast / Large Text A11Y options into account. But it’s a tremendous basis for future work.

Increased Polish

There were a few other small things that happened:

  • I hooked Crosswords up to Damned Lies. This led to an increase in our translation quality and count
  • This included a Polish translation, which came with a new downloader!
  • I ported all the dialogs to AdwDialog, and moved on from (most) of the deprecated Gtk4 widgets
  • A lot of code cleanups and small fixes

Now that these big changes have landed, it’s time to go back to working on the rest of the changes proposed for GNOME Circle.

Until next time, happy puzzling!

Toluwaleke Ogundipe: GSoC Final Report: Printing in GNOME Crosswords

Thu, 23/10/2025 - 12:50 AM

A few months ago, I introduced my GSoC project: Adding Printing Support to GNOME Crosswords. Since June, I’ve been working hard on it, and I’m happy to share that printing puzzles is finally possible!

The Result

GNOME Crosswords now includes a Print option in its menu, which opens the system’s print dialog. After adjusting printer settings and page setup, the user is shown a preview dialog with a few crossword-specific options, such as ink-saving mode and whether (and how) to include the solution. The options are intentionally minimal, keeping the focus on a clean and straightforward printing experience.

Below is a short clip showing the feature in action:

The resulting file: output.pdf

Crosswords now also ships with a standalone command-line tool, ipuz2pdf, which converts any IPUZ puzzle file into a print-ready PDF. It offers a similarly minimal set of layout and crossword-specific options.

The Process
  • Studied and profiled the existing code and came up with an overall approach for the project.
  • Built a new grid rendering framework, resulting in a 10× speedup in rendering. Dealt with a ton of details around text placement and rendering, colouring, shapes, and more.
  • Designed and implemented a print layout engine with a templating system, adjusted to work with different puzzle kinds, grid sizes, and paper sizes.
  • Integrated the layout engine with the print dialog and added a live print preview.
  • Bonus: Created ipuz2pdf, a standalone command-line utility (originally for testing) that converts an IPUZ file into a printable PDF.
The Challenges

Working on a feature of this scale came with plenty of challenges. Getting familiar with a large codebase took patience, and understanding how everything fit together often meant careful study and experimentation. Balancing ideas with the project timeline and navigating code reviews pushed me to grow both technically and collaboratively.

On the technical side, rendering and layout had their own hurdles. Handling text metrics, scaling, and coordinate transformations required a mix of technical knowledge, critical thinking, and experimentation. Even small visual glitches could lead to hours of debugging. One notably difficult part was implementing the box layout system that powers the dynamic print layout engine.

The Lessons

This project taught me a lot about patience, focus, and iteration. I learned to approach large problems by breaking them into small, testable pieces, and to value clarity and simplicity in both code and design. Code reviews taught me to communicate ideas better, accept feedback gracefully, and appreciate different perspectives on problem-solving.

On the technical side, working with rendering and layout systems deepened my understanding of graphics programming. I also learned how small design choices can ripple through an entire codebase, and how careful abstraction and modularity can make complex systems easier to evolve.

Above all, I learned the value of collaboration, and that progress in open source often comes from many small, consistent improvements rather than big leaps.

The Conclusion

In the end, I achieved all the goals set out for the project, and even more. It was a long and taxing journey, but absolutely worth it.

The Gratitude

I’m deeply grateful to my mentors, Jonathan Blandford and Federico Mena Quintero, for their guidance, patience, and support throughout this project. I’ve learned so much from working with them. I’m also grateful to the GNOME community and Google Summer of Code for making this opportunity possible and for creating such a welcoming environment for new contributors.

What Comes After

No project is ever truly finished, and this one is no exception. There’s still plenty to be done, and some already have tracking issues. I plan to keep improving the printing system and related features in GNOME Crosswords.

I also hope to stay involved in the GNOME ecosystem and open-source development in general. I’m especially interested in projects that combine design, performance, and system-level programming. More importantly, I’m a recent CS graduate looking for a full-time role in the field of interest stated earlier. If you have or know of any opportunities, please reach out at feyidab01@gmail.com.

Finally, I plan to write a couple of follow-up posts diving into interesting parts of the process in more detail. Stay tuned!

Thank you!

Jussi Pakkanen: CapyPDF 1.8.0 released

Wed, 22/10/2025 - 12:27 AM

I have just released CapyPDF 1.8. It's mostly minor fixes and tweaks but there are two notable things. The first one is that CapyPDF now supports variable axis fonts. The other one is that CapyPDF will now produce PDF version 2.0 files instead of 1.7 by default. This might seem like a big leap but really isn't. PDF 2.0 is pretty much the same as 1.7, just with documentation updates and deprecating (but not removing) a bunch of things. People using PDF have a tendency to be quite conservative in their versions, but PDF 2.0 has been out since 2017 with most of it being PDF 1.7 from 2008.

It is still possible to create files targeting older PDF specs. If you specify, say, PDF/X3, CapyPDF will output PDF 1.3, as the spec requires that version and no other, even though, for example, Adobe's PDF tools accept PDF/X3 files whose version is later than 1.3.

The PDF specification is currently undergoing major changes and future versions are expected to have backwards incompatible features such as HDR imaging. But 2.0 does not have those yet.

Things CapyPDF supports

CapyPDF has implemented a fair chunk of the various PDF specs:

  • All paint and text operations
  • Color management
  • Optional content groups
  • PDF/X and PDF/A support
  • Tagged PDF (i.e. document structure and semantic information)
  • TTF, OTF, TTC and CFF fonts
  • Forms (preliminary)
  • Annotations
  • File attachments
  • Outlines
  • Page naming
In theory this should be enough to support things like XRechnung and documents with full accessibility information as per PDF/UA. These have not been actually tested as I don't have personal experience in German electronic invoicing or document accessibility.

Dorothy Kabarozi: Laravel Mix “Unable to Locate Mix File” Error: Causes and Fixes

Tue, 21/10/2025 - 6:40 PM
Laravel Mix “Unable to Locate Mix File” Error: Causes and Fixes

If you’re working with Laravel and using Laravel Mix to manage your CSS and JavaScript assets, you may have come across an error like this:

Spatie\LaravelIgnition\Exceptions\ViewException Message: Unable to locate Mix file: /assets/vendor/css/rtl/core.css

Or in some cases:

Illuminate\Foundation\MixFileNotFoundException Unable to locate Mix file: /assets/vendor/fonts/boxicons.css

This error can be frustrating, especially when your project works perfectly on one machine but fails on another. Let’s break down what’s happening and how to solve it.

What Causes This Error?

Laravel Mix is a wrapper around Webpack, designed to compile the assets in resources/ (CSS, JS, images) into the public/ directory. The mix() helper in Blade templates references these compiled assets using a special file: mix-manifest.json.

This error occurs when Laravel cannot find the compiled asset. Common reasons include:

  1. Assets are not compiled yet
    If you’ve just cloned a project, the public/assets folder might be empty. Laravel is looking for files that do not exist yet.
  2. mix-manifest.json is missing or outdated
    This file maps original asset paths to compiled paths. If it’s missing, Laravel Mix won’t know where to find your assets.
  3. Incorrect paths in Blade templates
    If your code is like:
    <link rel="stylesheet" href="{{ asset(mix('assets/vendor/css/rtl/core.css')) }}" />
    but the RTL folder or the file doesn’t exist, Mix will throw an exception.
  4. Wrong configuration
    Some themes use variables like $configData['rtlSupport'] to toggle right-to-left CSS. If it’s set incorrectly, Laravel will try to load files that don’t exist.
How to Fix It

Here’s a step-by-step solution:

1. Install NPM dependencies

Make sure you have Node.js installed, then run:

npm install

2. Compile your assets
  • Development mode (fast, unminified):
npm run dev
  • Production mode (optimized, minified):
npm run build

This will generate your CSS and JS files in the public folder and update mix-manifest.json.

3. Check mix-manifest.json

Ensure the manifest contains the file Laravel is looking for:

"/assets/vendor/css/rtl/core.css": "/assets/vendor/css/rtl/core.css" 4. Adjust Blade template paths

If you don’t use RTL, you can set:

$configData['rtlSupport'] = '';

so the code doesn’t try to load /rtl/core.css unnecessarily.

5. Clear caches

Laravel may cache old views and configs. Clear them:

php artisan view:clear
php artisan config:clear
php artisan cache:clear

Pro Tips
  • Always check if the file exists in public/assets/... after compiling.
  • If you move your project to another machine or server, you must run npm install and npm run dev again.
  • For production, make sure your server has Node.js and NPM installed, otherwise Laravel Mix cannot build the assets.
Conclusion

The “Unable to locate Mix file” error is not a bug in Laravel, but a result of missing compiled assets or misconfigured paths. Once you:

  1. install dependencies,
  2. compile assets,
  3. correct Blade paths, and
  4. clear caches,

your Laravel project should load CSS and JS files without issues.

Daniel García Moreno: GNOME Tour in openSUSE and welcome app

Tue, 21/10/2025 - 2:00 PM

As a follow-up to the Hackweek 24 project, I've continued working on the gnome-tour fork for openSUSE, with custom pages to replace the welcome application for openSUSE distributions.

GNOME Tour modifications

All the modifications are on top of upstream gnome-tour and stored in the openSUSE/gnome-tour repo:

  • Custom initial page

  • A new donations page. In openSUSE we remove the GNOME Shell donations popup, so it's fair to add it in this place.

  • A last page with custom openSUSE links; this one is the one used for the opensuse-welcome app.

opensuse-welcome package

The original opensuse-welcome is a Qt application used for all desktop environments, but it's more or less unmaintained and a replacement is being sought. We can use the gnome-tour fork as the default welcome app for all desktops without needing a custom app.

To provide a minimal, desktop-agnostic opensuse-welcome application, I've modified gnome-tour to also generate a second binary with just the last page.

The new opensuse-welcome rpm package is built as a subpackage of gnome-tour. This new application is minimal and doesn't have many requirements, but as it's a GTK4 application it requires gtk and libadwaita, and it also depends on gnome-tour-data to get the app's resources.

To improve this welcome app we need to review the translations, because I added three new pages to gnome-tour and those specific pages are not translated, so I should regenerate the .po files for all languages and upload them to openSUSE Weblate for translation.

Matthew Garrett: Where are we on X Chat security?

Tue, 21/10/2025 - 1:36 AM
AWS had an outage today and Signal was unavailable for some users for a while. This has confused some people, including Elon Musk, who are concerned that having a dependency on AWS means that Signal could somehow be compromised by anyone with sufficient influence over AWS (it can't). Which means we're back to the richest man in the world recommending his own "X Chat", saying "The messages are fully encrypted with no advertising hooks or strange 'AWS dependencies' such that I can’t read your messages even if someone put a gun to my head."

Elon is either uninformed about his own product, lying, or both.

As I wrote back in June, X Chat is genuinely end-to-end encrypted, but ownership of the keys is complicated. The encryption key is stored using the Juicebox protocol, sharded between multiple backends. Two of these are asserted to be HSM backed - a discussion of the commissioning ceremony was recently posted here. I have not watched the almost 7 hours of video to verify that this was performed correctly, and I also haven't been able to verify that the public keys included in the post were the keys generated during the ceremony, although that may be down to me just not finding the appropriate point in the video (sorry, Twitter's video hosting doesn't appear to have any skip feature and would frequently just sit spinning if I tried to seek too far; I should probably just download them and figure it out, but I'm not doing that now). With enough effort it would probably also have been possible to fake the entire thing - I have no reason to believe that this has happened, but it's not externally verifiable.

But let's assume these published public keys are legitimately the ones used in the HSM Juicebox realms[1] and that everything was done correctly. Does that prevent Elon from obtaining your key and decrypting your messages? No.

On startup, the X Chat client makes an API call called GetPublicKeysResult, and the public keys of the realms are returned. Right now when I make that call I get the public keys listed above, so there's at least some indication that I'm going to be communicating with actual HSMs. But what if that API call returned different keys? Could Elon stick a proxy in front of the HSMs and grab a cleartext portion of the key shards? Yes, he absolutely could, and then he'd be able to decrypt your messages.

(I will accept that there is a plausible argument that Elon is telling the truth in that even if you held a gun to his head he's not smart enough to be able to do this himself, but that'd be true even if there were no security whatsoever, so it still says nothing about the security of his product)

The solution to this is remote attestation - a process where the device you're speaking to proves its identity to you. In theory the endpoint could attest that it's an HSM running this specific code, and we could look at the Juicebox repo and verify that it's that code and hasn't been tampered with, and then we'd know that our communication channel was secure. Elon hasn't done that, despite it being table stakes for this sort of thing (Signal uses remote attestation to verify the enclave code used for private contact discovery, for instance, which ensures that the client will refuse to hand over any data until it's verified the identity and state of the enclave). There's no excuse whatsoever to build a new end-to-end encrypted messenger which relies on a network service for security without providing a trustworthy mechanism to verify you're speaking to the real service.

We know how to do this properly. We have done for years. Launching without it is unforgivable.

[1] There are three Juicebox realms overall, one of which doesn't appear to use HSMs, but you need at least two in order to obtain the key so at least part of the key will always be held in HSMs


Dorothy Kabarozi: Deploying a Simple HTML Project on Linode Using Nginx

Sat, 18/10/2025 - 6:14 PM
Deploying a Simple HTML Project on Linode Using Nginx: My Journey and Lessons Learned

Deploying web projects can seem intimidating at first, especially when working with a remote server like Linode. Recently, I decided to deploy a simple HTML project (index.html) on a Linode server using Nginx. Here’s a detailed account of the steps I took, the challenges I faced, and the solutions I applied.

Step 1: Accessing the Linode Server

The first step was to connect to my Linode server via SSH:

ssh root@<your-linode-ip>

Initially, I encountered a timeout issue, which reminded me to check network settings and ensure SSH access was enabled for my Linode instance. Once connected, I had access to the server terminal and could manage files and services.

Step 2: Preparing the Project

My project was simple—it only contained an index.html file. I uploaded it to the server under:

/var/www/hng13-stage0-devops

I verified the project folder structure with:

ls -l /var/www/hng13-stage0-devops

Since there was no public folder or PHP files, I knew I needed to adjust the Nginx configuration to serve directly from this folder.

Step 3: Setting Up Nginx

I opened the Nginx configuration for my site:

sudo nano /etc/nginx/sites-available/hng13

Initially, I mistakenly pointed root to a non-existent folder (public), which caused a 404 Not Found error. The correct configuration looked like this:

server {
    listen 80;
    server_name <your-linode-ip>;

    root /var/www/hng13-stage0-devops;  # points to folder containing index.html
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}

Step 4: Enabling the Site and Testing

After creating the configuration file, I enabled the site:

sudo ln -s /etc/nginx/sites-available/hng13 /etc/nginx/sites-enabled/

I also removed the default site to avoid conflicts:

sudo rm /etc/nginx/sites-enabled/default

Then I tested the configuration:

sudo nginx -t

If the syntax was OK, I reloaded Nginx:

sudo systemctl reload nginx

Step 5: Checking Permissions

Nginx must have access to the project files. I ensured the correct permissions:

sudo chown -R www-data:www-data /var/www/hng13-stage0-devops
sudo chmod -R 755 /var/www/hng13-stage0-devops

Step 6: Viewing the Site

Finally, I opened my browser and navigated to

http://<your-linode-ip>

And there it was—my index.html page served perfectly via Nginx.
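The same check can be done from the terminal (a sketch; replace the placeholder with your server's actual IP) by requesting only the response headers and looking for "200 OK":

# fetch only the HTTP response headers from the new site
curl -I http://<your-linode-ip>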

Challenges and Lessons Learned
  1. Nginx server_name Error
    • Error: "server_name" directive is not allowed here
    • Lesson: Always place server_name inside a server { ... } block.
  2. 404 Not Found
    • Cause: Nginx was pointing to a public folder that didn’t exist.
    • Solution: Update root to the folder containing index.html.
  3. Permissions Issues
    • Nginx could not read files initially.
    • Solution: Ensure ownership by www-data and proper read/execute permissions.
  4. SSH Timeout / Connection Issues
    • Double-check firewall rules and Linode network settings.
Key Takeaways
  • For static HTML projects, Nginx is simple and effective.
  • Always check the root folder matches your project structure.
  • Testing the Nginx config (nginx -t) before reload saves headaches.
  • Proper permissions are crucial for serving files correctly.

Deploying my project was a learning experience. Even small mistakes, like pointing to the wrong folder or placing directives in the wrong context, can break the site—but step-by-step debugging and understanding the errors helped me fix everything quickly. This has kick-started my DevOps journey, and I truly loved the challenge.

Sam Thursfield: Status update, 17/10/2025

Fri, 17/10/2025 - 6:16 PM

Greetings readers. I’m writing to you from a hotel room in Manchester which I’m currently sharing with a variant of COVID 19. We are listening to disco funk music.

This virus prevents me from working or socializing, but at least I have time to do some cyber-janitorial tasks like updating my “dotfiles” (which hold configuration for all the programs I use on Linux, stored in Git… for those who aren’t yet converts).

I also caught up with some big upcoming changes in the GNOME 50 release cycle — more on that below.

nvim

I picked up Vim as my text editor ten years ago while working on a very boring project. This article by Jon Beltran de Heredia, “Why, oh WHY, do those #?@! nutheads use vi?” sold me on the key ideas: you use “normal mode” for everything, which gives you powerful and composable edit operations. I printed out this Vim quick reference card by Michael Goerz and resolved to learn one new operation every day.

It worked and I’ve been a convert ever since. Doing consultancy work makes you a nomad: often working via SSH or WSL on other people’s computers. So I never had the luxury of setting up an IDE like GNOME Builder, or using something that isn’t packaged in 99% of distros. Luckily Vim is everywhere.

Over the years, I read a newsletter named Vimtricks and I picked up various Vim plugins like ALE, ctrlp, and sideways. But there’s a problem: some of these depend on extra Vim features like Python support. If a required feature is missing, you get an error message that appears on like… every keystroke:

In this case, on a Debian 12 build machine, I could work around it by installing the vim-gtk3 package. But it’s frustrating enough that I decided it was time to try Neovim.

The Neovim project began around the time I was switching to Vim, and is based on the premise that “Vim is, without question, the worst C codebase I have seen.”.

So far it’s been painless to switch and everything works a little better. The :terminal feels better integrated. I didn’t need to immediately disable mouse mode. I can link to online documentation! The ALE plugin (which provides language server integration) even comes packaged in Fedora.

I’d send a screenshot but my editor looks… exactly the same as before. Boring!

I also briefly tried out Helix, which appears to take the good bits of Vim (modal editing) and run in a different direction (visible selection and multiple cursors). I need a more boring project before I’ll be able to learn a completely new editor. Give me 10 years.

Endless OS 7

I’ve been working flat out on Endless OS 7, as I was last month. Now that the basics work and the system boots, we were mainly looking at integrating the Endless-specific Pay as you Go functionality that they use for affordable laptop programs.

I learned more than I wanted to about the Linux early boot process, particularly the dracut-ng initramfs generator (one of many Linux components that seems to be named after a town in Massachusetts).

GNOME OS actually dropped Dracut altogether, in “vm-secure: Get rid of dracut and use systemd’s ukify” by Valentin David, and now uses a simple Python script. A lot of Dracut’s features aren’t necessary for building atomic, image-based distros. For EOS we decided to stick with Dracut, at least for now.

So we get to deal with fun changes such as the initramfs growing from 90MB to 390MB after we updated to the latest Dracut. Something which is affecting Fedora too (LWN: “Last-minute /boot boost for Fedora 43”).

I requested time after the contract finishes to write up a technical article on the work we did, so I won’t go into more details yet. Watch this space!

GNOME 50

I haven’t had a minute to look at upstream GNOME this month, but there are some interesting things cooking there.

Jordan merged the GNOME OS openQA tests into the main gnome-build-meta repo. This is a simple solution to a number of basic questions we had around testing, such as, “how do we target tests to specific versions of GNOME?”.

We separated the tests out of gnome-build-meta because, at the time, each new CI pipeline would track new versions of each GNOME module. This meant, firstly that pipelines could take anywhere from 10 minutes to 4 hours rebuilding a disk image before the tests even started, and secondly that the system under test would change every time you ran the pipeline.

While that sounds dumb, it worked this way for historical reasons: GNOME OS has been an under-resourced, ad-hoc project ongoing since 2011, whose original goal was simply to continuously build: already a huge challenge if you remember GNOME in the early 2010s. Of course, such a CI pipeline is highly counterproductive if you’re trying to develop and review changes to the tests, and not the system: so the separate openqa-tests repo was a necessary step.

Thanks to Abderrahim’s work in 2022 (“Commit refs to the repository” and “Add script to update refs”), plus my work on a tool to run the openQA tests locally before pushing to CI (ssam_openqa), I hope we’re not going to have those kinds of problems any more. We enter a brave new world of testing!

The next thing the openQA tests need, in my opinion, is dedicated test infrastructure. The shared Gitlab CI runners we have are in high demand. The openQA tests have timeouts, as they ultimately are doing this in a loop:

  • Send an input event
  • Wait for the system under test to react

If a VM is running on a test runner with overloaded CPU or IO then tests will start to time out in unhelpful ways. So, if you want to have better testing for GNOME, finding some dedicated hardware to run tests would be a significant help.

There are also some changes cooking in Localsearch thanks to Carlos Garnacho:

The first of these is a nicely engineered way to allow searching files on removable disks like external HDs. This should be opt-in: so you can opt in to indexing your external hard drive full of music, but your machine wouldn’t be vulnerable to an attack where someone connects a malicious USB stick while your back is turned. (The sandboxing in localsearch makes it non-trivial to construct such an attack, but it would require a significantly greater level of security auditing before I’d make any guarantees about that).

The second of these changes is pretty big: in GNOME 50, localsearch will now consider everything in your homedir for indexing.

As Carlos notes in the commit message, he has spent years working on performance optimisations and bug fixes in localsearch to get to a point where he considers it reasonable to enable by default. From a design point of view, discussed in the issue “Be more encompassing about what get indexed“, it’s hard to justify a search feature that only surfaces a subset of your files.

I don’t know if it’s a great time to do this, but nothing is perfect and sometimes you have to take a few risks to move forwards.

There’s a design, testing and user support element to all of this, and it’s going to require help from the GNOME community and our various downstream distributors, particularly around:

  • Widely testing the new feature before the GNOME 50 release.
  • Making sure users are aware of the change and how to manage the search config.
  • Handling an expected increase in bug reports and support requests.
  • Highlighting how privacy-focused localsearch is.

I never got time to extend the openQA tests to cover media indexing; it’s not a trivial job. We will rely on volunteers and downstream testers to try out the config change as widely as possible over the next 6 months.

One thing that makes me support this change is that the indexer in Android devices already works like this: everything is scanned into a local cache, unless there’s a .nomedia file. Unfortunately Google don’t document how the Android media scanner works. But it’s not like this is GNOME treading a radical new path.

The localsearch index lives in the same filesystem as the data, and never leaves your PC. In a world where Microsoft Windows can now send your boss screenshots of everything you looked at, GNOME is still very much on your side. Let’s see if we can tell that story.

Gedit Technology blog: Mid-October News

Wed, 15/10/2025 - 12:00 PM

Misc news about the gedit text editor, mid-October edition! (Some sections are a bit technical).

Rework of the file loading and saving (continued)

The refactoring continues in the libgedit-gtksourceview module, this time to tackle a big class that takes on too many responsibilities. A utility is in development which will make it possible to delegate a part of the work.

The utility is about character encoding conversion, with support for invalid bytes. It takes as input a single GBytes (the file content), and transforms it into a list of chunks. A chunk contains either valid (successfully converted) bytes, or invalid bytes. The output format - the "list of chunks" - is subject to change to improve memory consumption and performance.

Note that invalid bytes are allowed so that gedit can open really any kind of file.

I must also note that this is quite sensitive work, at the heart of document loading for gedit. Normally all these refactorings and improvements will be worth it!

Progress in other modules

There has been some progress on other modules:

  • gedit: version 48.1.1 has been released with a few minor updates.
  • The Flatpak on Flathub: update to gedit 48.1.1 and the GNOME 49 runtime.
  • gspell: version 1.14.1 has been released, mainly to pick up the updated translations.
GitHub Sponsors

In addition to Liberapay, you can now support the work that I do on GitHub Sponsors. See the gedit donations page.

Thank you ❤️

Victor Ma: This is a test post

Wed, 15/10/2025 - 2:00 AM

Over the past few weeks, I’ve been working on improving some test code that I had written.

Refactoring time!

My first order of business was to refactor the test code. There was a lot of boilerplate, which made it difficult to add new tests, and also created visual clutter.

For example, have a look at this test case:

static void
test_egg_ipuz (void)
{
  g_autoptr (WordList) word_list = NULL;
  IpuzGrid *grid;
  g_autofree IpuzClue *clue = NULL;
  g_autoptr (WordArray) clue_matches = NULL;

  word_list = get_broda_word_list ();
  grid = create_grid (EGG_IPUZ_FILE_PATH);
  clue = get_clue (grid, IPUZ_CLUE_DIRECTION_ACROSS, 2);
  clue_matches = word_list_find_clue_matches (word_list, clue, grid);

  g_assert_cmpint (word_array_len (clue_matches), ==, 3);
  g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                               word_array_index (clue_matches, 0)),
                   ==, "EGGS");
  g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                               word_array_index (clue_matches, 1)),
                   ==, "EGGO");
  g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                               word_array_index (clue_matches, 2)),
                   ==, "EGGY");
}

That’s an awful lot of code just to say:

  1. Use the EGG_IPUZ_FILE_PATH file.
  2. Run the word_list_find_clue_matches() function on the 2-Across clue.
  3. Assert that the results are ["EGGS", "EGGO", "EGGY"].

And this was repeated in every test case, and needed to be repeated in every new test case I added. So, I knew that I had to refactor my code.

Fixtures and functions

My first step was to extract all of this setup code:

  g_autoptr (WordList) word_list = NULL;
  IpuzGrid *grid;
  g_autofree IpuzClue *clue = NULL;
  g_autoptr (WordArray) clue_matches = NULL;

  word_list = get_broda_word_list ();
  grid = create_grid (EGG_IPUZ_FILE_PATH);
  clue = get_clue (grid, IPUZ_CLUE_DIRECTION_ACROSS, 2);
  clue_matches = word_list_find_clue_matches (word_list, clue, grid);

To do this, I used a fixture:

typedef struct
{
  WordList *word_list;
  IpuzGrid *grid;
} Fixture;

static void
fixture_set_up (Fixture *fixture, gconstpointer user_data)
{
  const gchar *ipuz_file_path = (const gchar *) user_data;

  fixture->word_list = get_broda_word_list ();
  fixture->grid = create_grid (ipuz_file_path);
}

static void
fixture_tear_down (Fixture *fixture, gconstpointer user_data)
{
  g_object_unref (fixture->word_list);
}

My next step was to extract all of this assertion code:

  g_assert_cmpint (word_array_len (clue_matches), ==, 3);
  g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                               word_array_index (clue_matches, 0)),
                   ==, "EGGS");
  g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                               word_array_index (clue_matches, 1)),
                   ==, "EGGO");
  g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                               word_array_index (clue_matches, 2)),
                   ==, "EGGY");

To do this, I created a new function that runs word_list_find_clue_matches() and asserts that the result equals an expected_words parameter.

static void
test_clue_matches (WordList *word_list,
                   IpuzGrid *grid,
                   IpuzClueDirection clue_direction,
                   guint clue_index,
                   const gchar *expected_words[])
{
  const IpuzClue *clue = NULL;
  g_autoptr (WordArray) clue_matches = NULL;
  g_autoptr (WordArray) expected_word_array = NULL;

  clue = get_clue (grid, clue_direction, clue_index);
  clue_matches = word_list_find_clue_matches (word_list, clue, grid);
  expected_word_array = str_array_to_word_array (expected_words, word_list);

  g_assert_true (word_array_equals (clue_matches, expected_word_array));
}

After all that, here’s what my test case looked like:

static void
test_egg_ipuz (Fixture *fixture, gconstpointer user_data)
{
  test_clue_matches (fixture->word_list,
                     fixture->grid,
                     IPUZ_CLUE_DIRECTION_ACROSS,
                     2,
                     (const gchar*[]){"EGGS", "EGGO", "EGGY", NULL});
}

Much better!

Macro functions

But as great as that was, I knew that I could take it even further, with macro functions.

I created a macro function to simplify test case definitions:

#define ASSERT_CLUE_MATCHES(DIRECTION, INDEX, ...) \
  test_clue_matches (fixture->word_list,           \
                     fixture->grid,                \
                     DIRECTION,                    \
                     INDEX,                        \
                     (const gchar*[]){__VA_ARGS__, NULL})

Now, test_egg_ipuz() looked like this:

static void
test_egg_ipuz (Fixture *fixture, gconstpointer user_data)
{
  ASSERT_CLUE_MATCHES (IPUZ_CLUE_DIRECTION_ACROSS, 2, "EGGS", "EGGO", "EGGY");
}

I also made a macro function for the test case declarations:

#define ADD_IPUZ_TEST(test_name, file_name)       \
  g_test_add ("/clue_matches/" #test_name,        \
              Fixture,                            \
              "tests/clue-matches/" #file_name,   \
              fixture_set_up,                     \
              test_name,                          \
              fixture_tear_down)

Which turned this:

g_test_add ("/clue_matches/test_egg_ipuz",
            Fixture,
            EGG_IPUZ,
            fixture_set_up,
            test_egg_ipuz,
            fixture_tear_down);

Into this:

ADD_IPUZ_TEST (test_egg_ipuz, egg.ipuz);

An unfortunate bug

So, picture this: You’ve just finished refactoring your test code. You add some finishing touches, do a final test run, look over the diff one last time…and everything seems good. So, you open up an MR and start working on other things.

But then, the unthinkable happens—the CI pipeline fails! And apparently, it’s due to a test failure? But you ran your tests locally, and everything worked just fine. (You run them again just to be sure, and yup, they still pass.) And what’s more, it’s only the Flatpak CI tests that failed. The native CI tests succeeded.

So…what, then? What could be the cause of this? I mean, how do you even begin debugging a test failure that only happens in a particular CI job and nowhere else? Well, let’s just try running the CI pipeline again and see what happens. Maybe the problem will go away. Hopefully, the problem goes away.

Nope. Still fails.

Rats.

Well, I'll spare you the gory details of what it took for me to finally figure this one out. But the cause of the bug was me accidentally freeing an object that I should never have freed: the IpuzClue returned by get_clue() is owned by the grid, yet I had declared it g_autofree, so it was freed as soon as my helper returned.

This meant that the corresponding memory segment could be (but, importantly, did not necessarily have to be) reused and filled with garbage data. And this is why only the Flatpak job's test run failed…well, at first, anyway. By changing around some of the test cases, I was able to get the native CI tests and local tests to fail too, and that is what eventually clued me in to the true nature of this bug.

So, after spending the better part of two weeks, here is the fix I ended up with:

@@ -94,7 +94,7 @@ test_clue_matches (WordList *word_list,
                    guint clue_index,
                    const gchar *expected_words[])
 {
-  g_autofree IpuzClue *clue = NULL;
+  const IpuzClue *clue = NULL;
   g_autoptr (WordArray) clue_matches = NULL;
   g_autoptr (WordArray) expected_word_array = NULL;

Jordan Petridis: Nightly Flatpak CI gets a cache

Tue, 14/10/2025 - 8:00pm

Recently I got around to tackling a long-standing issue for good. There were multiple attempts in the past 6 years to cache flatpak-builder artifacts with GitLab, but none had worked so far.

On the technical side of things, flatpak-builder relies heavily on extended attributes (xattrs) on files to do cache validation. Using GitLab's built-in cache or artifacts mechanisms results in a plain zip archive, which strips all the attributes from the files, causing the cache to always be invalid once restored. Additionally, hardlinks and symlinks in the cache break. One workaround for this is to always tar the directories and then manually extract them after they are restored.

On the infrastructure side of things, we stumble once again into GitLab. When a cache or artifact is created, it's uploaded to the GitLab instance's storage so it can later be reused/redownloaded by any runner. While this is great, it also quickly ramps up the network egress bill we have to pay, along with storage. And since it's a public GitLab instance that anyone can make requests against, it gets out of hand fast.

A couple of weeks ago Bart pointed me to Flathub's workaround for this same problem. It comes down to making it someone else's problem, ideally someone who is willing to fund FOSS infrastructure. We can use ORAS to wrap files and directories into an OCI artifact and publish it to public registries. And it worked. Quite handy! OCI images are the new tarballs.

Now when a pipeline runs against your default branch (and assuming it's protected), it will create a cache artifact and upload it to the currently configured OCI registry. Afterwards, any build, including Merge Request pipelines, will download the image, extract the artifacts, and check how much of the cache is still valid.

From some quick tests and numbers, GNOME Builder went from a ~16-minute build to 6 minutes on our x86_64 runners, while on the AArch64 runner the impact was even bigger, going from 50 minutes to 16 minutes. Not bad. The more modules you are building in your manifest, the more noticeable it is.

Unlike BuildStream, there is no Content Addressable Server, and flatpak-builder itself isn't aware of the artifacts we publish, nor can it associate them with the cache keys. The OCI/ORAS cache artifacts are a manual and somewhat hacky solution, but it works well in practice until we have better tooling. To optimize for fewer cache misses, consider building modules from pinned commits/tags/tarballs and building modules from moving branches as late as possible.

If you are curious about the details, take a look at the related Merge Request in the templates repository and the follow-up commits.

Free Palestine

Bilal Elmoussaoui: Testing a Rust library - Code Coverage

Mon, 13/10/2025 - 2:00am

It has been a couple of years since I started working on a Rust library called oo7 as a Secret Service client implementation. The library ended up also having support for per-sandboxed-app keyrings using the Secret portal, with a seamless API that makes usage from the application side straightforward.

The project, with time, grew support for various components:

  • oo7-cli: A secret-tool replacement, but much better, as it allows interacting not only with the Secret service on the D-Bus session bus but also with any keyring. For example, oo7-cli --app-id com.belmoussaoui.Authenticator list allows you to read the keyring of the sandboxed app with app ID com.belmoussaoui.Authenticator and list its contents, something that is not possible with secret-tool.
  • oo7-portal: A server-side implementation of the Secret portal mentioned above. Straightforward, thanks to my other library ASHPD.
  • cargo-credential-oo7: A cargo credential provider built using oo7 instead of libsecret.
  • oo7-daemon: A server-side implementation of the Secret service.

The last component was kickstarted by Dhanuka Warusadura, as we already had the foundation for that in the client library, especially the file backend reimplementation of gnome-keyring. The project is slowly progressing, but it is almost there!

The problem with replacing such a sensitive component as gnome-keyring-daemon is that you have to make sure the very sensitive user data is not corrupted, lost, or made inaccessible. For that, we need to ensure that both the file backend implementation in the oo7 library and the daemon implementation itself are well tested.

That is why I spent my weekend, as well as a whole day off, working on improving the test suite of the wannabe core component of the Linux desktop.

Coverage Report

One metric that can give the developer some insight into which lines of code or functions of the codebase are executed when running the test suite is code coverage.

In order to get the coverage of a Rust project, you can use a project like Tarpaulin, which integrates with the Cargo build system. For a simple project, a command like this, after installing Tarpaulin, can give you an HTML report:

cargo tarpaulin \
  --package oo7 \
  --lib \
  --no-default-features \
  --features "tracing,tokio,native_crypto" \
  --ignore-panics \
  --out Html \
  --output-dir coverage

Except that in our use case it is slightly more complicated: the client library supports switching between native Rust cryptographic primitive crates and OpenSSL. We must ensure that both are tested.

For that, we can export our report in LCOV for native crypto and do the same for OpenSSL, then combine the results using a tool like grcov.

mkdir -p coverage-raw

cargo tarpaulin \
  --package oo7 \
  --lib \
  --no-default-features \
  --features "tracing,tokio,native_crypto" \
  --ignore-panics \
  --out Lcov \
  --output-dir coverage-raw
mv coverage-raw/lcov.info coverage-raw/native-tokio.info

cargo tarpaulin \
  --package oo7 \
  --lib \
  --no-default-features \
  --features "tracing,tokio,openssl_crypto" \
  --ignore-panics \
  --out Lcov \
  --output-dir coverage-raw
mv coverage-raw/lcov.info coverage-raw/openssl-tokio.info

and then combine the results with

cat coverage-raw/*.info > coverage-raw/combined.info

grcov coverage-raw/combined.info \
  --binary-path target/debug/ \
  --source-dir . \
  --output-type html \
  --output-path coverage \
  --branch \
  --ignore-not-existing \
  --ignore "**/portal/*" \
  --ignore "**/cli/*" \
  --ignore "**/tests/*" \
  --ignore "**/examples/*" \
  --ignore "**/target/*"

To make things easier, I added a bash script to the project repository that generates coverage for both the client library and the server implementation, as both are very sensitive and require intensive testing.

With that script in place, I also used it on CI to generate and upload the coverage reports at https://bilelmoussaoui.github.io/oo7/coverage/. The results were pretty bad when I started.

Testing

For the client side, most of the tests are straightforward to write; you just need to have a secret service implementation running on the DBus session bus. Things get quite complicated when the methods you have to test require a Prompt, a mechanism used in the spec to define a way for the user to be prompted for a password to unlock the keyring, create a new collection, and so on. The prompter is usually provided by a system component. For now, we just skipped those tests.

For the server side, it was mostly about setting up a peer-to-peer connection between the server and the client:

let guid = zbus::Guid::generate();
let (p0, p1) = tokio::net::UnixStream::pair().unwrap();

let (client_conn, server_conn) = tokio::try_join!(
    // Client
    zbus::connection::Builder::unix_stream(p0).p2p().build(),
    // Server
    zbus::connection::Builder::unix_stream(p1)
        .server(guid)
        .unwrap()
        .p2p()
        .build(),
)
.unwrap();

Thanks to the design of the client library, we keep the low-level APIs under oo7::dbus::api, which allowed me to straightforwardly write a bunch of server-side tests already.

There are still a lot of tests that need to be written and a few missing bits to ensure oo7-daemon is in an acceptable shape to be proposed as an alternative to gnome-keyring.

Don't overdo it

The coverage report is not meant to be targeted at 100%. It’s not a video game. You should focus only on the critical parts of your code that must be tested. Testing a Debug impl or a From trait (if it is straightforward) is not really useful, other than giving you a small dose of dopamine from "achieving" something.

Till then, may your coverage never reach 100%.

Hubert Figuière: Dev Log September 2025

Sat, 11/10/2025 - 2:00am

Not as much was done in September as I wanted.

libopenraw

Extracting more of the calibration values for colour correction on DNG. Currently working on fixing the purple colour cast.

Added Nikon ZR and EOS C50.

ExifTool

Submitted some metadata updates to ExifTool, because it's nice to have, and also because libopenraw uses some of these tables in autogenerated form: I have a Perl script to generate Rust code from it (it used to generate C++).

Niepce

Finally merged the develop branch with all the import dialog work, after having requested that it be removed from Damned Lies so as not to strain the translators, as there is a long way to go before we can freeze the strings.

Supporting cast

Among the packages I maintain / update on Flathub, LightZone is a digital photo editing application written in Java1. Updating to the latest runtime, 25.08, caused it to ignore the HiDPI setting. It will honour the GDK_SCALE environment variable, but this isn't set. So I wrote the small command line tool gdk-scale to output the value. See gdk-scale on GitLab. And another patch in the wrapper script.

HiDPI support remains a mess across the board. FLTK just recently gained support for it (it's used by a few audio plugins).

1

Don't try this at home.

Sebastian Wick: SO_PEERPIDFD Gets More Useful

Fri, 10/10/2025 - 7:04pm

A while ago I wrote about the limited usefulness of SO_PEERPIDFD for authenticating sandboxed applications. The core problem was simple: while pidfds gave us a race-free way to identify a process, we still had no standardized way to figure out what that process actually was - which sandbox it ran in, what application it represented, or what permissions it should have.

The situation has improved considerably since then.

cgroup xattrs

Cgroups now support user extended attributes. This feature allows arbitrary metadata to be attached to cgroup inodes using standard xattr calls.

We can change flatpak (or snap, or any other container engine) to create a cgroup for application instances it launches, and attach metadata to it using xattrs. This metadata can include the sandboxing engine, application ID, instance ID, and any other information the compositor or D-Bus service might need.
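As a rough illustration (not how flatpak does it today), here is a minimal C sketch of the launcher side. It assumes cgroup2 is mounted at /sys/fs/cgroup and that the engine is allowed to create a child cgroup in its delegated subtree; the cgroup path and the user.app.* xattr names are invented for the example, not an established convention.

/* Hypothetical launcher-side sketch: create a per-instance cgroup and
 * attach application metadata to it as user xattrs. The path and the
 * xattr names are illustrative only. */
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/xattr.h>

int
main (void)
{
  const char *cg = "/sys/fs/cgroup/user.slice/app-org.example.App-1234.scope";

  /* Create the per-instance cgroup; the launched process would then be
   * moved into it by writing its PID to <cg>/cgroup.procs. */
  if (mkdir (cg, 0755) != 0)
    perror ("mkdir");

  /* Attach metadata to the cgroup inode with standard xattr calls. */
  if (setxattr (cg, "user.app.engine", "flatpak", strlen ("flatpak"), 0) != 0 ||
      setxattr (cg, "user.app.id", "org.example.App", strlen ("org.example.App"), 0) != 0)
    perror ("setxattr");

  /* Anyone with access to the cgroup path can read the metadata back. */
  char buf[256];
  ssize_t n = getxattr (cg, "user.app.id", buf, sizeof (buf) - 1);
  if (n > 0)
    {
      buf[n] = '\0';
      printf ("app id: %s\n", buf);
    }

  return 0;
}

The part a real engine has to get right is the cgroup setup itself (delegation, moving the child into cgroup.procs); the metadata part is just plain setxattr()/getxattr().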

Every process belongs to a cgroup, and you can query which cgroup a process belongs to through its pidfd - completely race-free.

Standardized Authentication

Remember the complexity from the original post? Services had to implement different lookup mechanisms for different sandbox technologies:

  • For flatpak: look in /proc/$PID/root/.flatpak-info
  • For snap: shell out to snap routine portal-info
  • For firejail: no solution

All of this goes away. Now there’s a single path:

  1. Accept a connection on a socket
  2. Use SO_PEERPIDFD to get a pidfd for the client
  3. Query the client’s cgroup using the pidfd
  4. Read the cgroup’s user xattrs to get the sandbox metadata

This works the same way regardless of which sandbox engine launched the application.
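To make those four steps concrete, here is a rough, illustrative C sketch of what a compositor or D-Bus service could do for an accepted connection. It assumes Linux 6.5+ (for SO_PEERPIDFD), cgroup2 mounted at /sys/fs/cgroup, and the hypothetical user.app.id xattr from the sketch above; the helper name lookup_app_id and the error handling are made up and simplified.

/* Illustrative only: resolve an accepted AF_UNIX client to the app ID
 * stored as a user xattr on its cgroup. */
#define _GNU_SOURCE
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/xattr.h>

#ifndef SO_PEERPIDFD
#define SO_PEERPIDFD 77  /* asm-generic value; newer headers define it */
#endif

static int
lookup_app_id (int client_fd, char *app_id, size_t app_id_size)
{
  int pidfd = -1;
  socklen_t optlen = sizeof (pidfd);
  FILE *f = NULL;
  pid_t pid = -1;
  char line[256];

  /* 1. Get a race-free handle to the peer process. */
  if (getsockopt (client_fd, SOL_SOCKET, SO_PEERPIDFD, &pidfd, &optlen) != 0)
    return -1;

  /* 2. Resolve the numeric PID from the pidfd's fdinfo. */
  snprintf (line, sizeof (line), "/proc/self/fdinfo/%d", pidfd);
  if ((f = fopen (line, "r")) == NULL)
    goto err;
  while (fgets (line, sizeof (line), f))
    if (sscanf (line, "Pid: %d", &pid) == 1)
      break;
  fclose (f);
  if (pid <= 0)
    goto err;

  /* 3. Read the peer's unified cgroup ("0::/<path>") and map it to cgroupfs. */
  char cgroup_path[1024];
  snprintf (cgroup_path, sizeof (cgroup_path), "/proc/%d/cgroup", pid);
  if ((f = fopen (cgroup_path, "r")) == NULL)
    goto err;
  if (!fgets (line, sizeof (line), f) || strchr (line, '/') == NULL)
    {
      fclose (f);
      goto err;
    }
  fclose (f);
  line[strcspn (line, "\n")] = '\0';
  snprintf (cgroup_path, sizeof (cgroup_path), "/sys/fs/cgroup%s", strchr (line, '/'));

  /* 4. Read the metadata the sandbox engine attached to the cgroup. */
  ssize_t n = getxattr (cgroup_path, "user.app.id", app_id, app_id_size - 1);
  if (n < 0)
    goto err;
  app_id[n] = '\0';

  /* A pidfd becomes readable once the process it refers to has exited.
   * If the peer exited meanwhile, its PID could have been recycled and we
   * may have read someone else's cgroup, so discard the result. */
  struct pollfd pfd = { .fd = pidfd, .events = POLLIN };
  if (poll (&pfd, 1, 0) != 0)
    goto err;

  close (pidfd);
  return 0;

err:
  if (pidfd >= 0)
    close (pidfd);
  return -1;
}

The final poll() on the pidfd is what keeps the lookup race-free: as long as the peer is still alive at that point, the PID read from fdinfo cannot have been reused while /proc was being consulted.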

A Kernel Feature, Not a systemd One

It’s worth emphasizing: cgroups are a Linux kernel feature. They have no dependency on systemd or any other userspace component. Any process can manage cgroups and attach xattrs to them. The process only needs appropriate permissions and is restricted to a subtree determined by the cgroup namespace it is in. This makes the approach universally applicable across different init systems and distributions.

To support non-Linux systems, we might even be able to abstract away the cgroup details, by providing a varlink service to register and query running applications. On Linux, this service would use cgroups and xattrs internally.

Replacing Socket-Per-App

The old approach - creating dedicated Wayland, D-Bus, etc. sockets for each app instance and attaching metadata to the service, which then gets mapped to connections on that socket - can now be retired. The pidfd + cgroup xattr approach is simpler: one standardized lookup path instead of mounting special sockets. It works everywhere: any service can authenticate any client without special socket setup. And it's more flexible: metadata can be updated after process creation if needed.

For compositor and D-Bus service developers, this means you can finally implement proper sandboxed client authentication without needing to understand the internals of every container engine. For sandbox developers, it means you have a standardized way to communicate application identity without implementing custom socket mounting schemes.
