For those of you attending FOSDEM: we’re doing a GNOME OS hackfest, and we invite anyone who might be interested in our experiments with concepts such as the ‘anti-distro’, i.e. an OS with no distro packaging that integrates GNOME desktop patterns directly.
The hackfest runs from January 28th to January 29th. If you’re interested, feel free to respond in the comments. I don’t have an exact location yet.
We’ll likely have some kind of BigBlueButton set up so if you’re not available to come in-person you can join us remotely.
Agenda and attendees are linked here.
Capacity is likely to be limited, so acceptance will be “first come, first served”.
See you there!
Welcome to my regular weekly update on what’s been happening at the GNOME Foundation. As usual, this post just covers highlights, and there are plenty of smaller and in progress items that haven’t been included.
Board meeting
The Board of Directors had a regular meeting this week. Topics on the agenda included:
According to our new schedule, the next meeting will be on 9th February.
New finance platform
As mentioned last week, we started using a new platform for payments processing at the beginning of the year. Overall the new system brings a lot of great features which will make our processes more reliable and integrated. However, as we adopt the tool we are having to deal with some ongoing setup tasks, which are taking additional time in the short term.
GUADEC 2026 planning
Kristi has been extremely busy with GUADEC 2026 planning in recent weeks. She has been working closely with the local team to finalise arrangements for the venue and accommodation, as well as preparing the call for papers and sponsorship brochure.
If you or your organisation are interested in sponsoring this fantastic event, just reach out to me directly, or email guadec@gnome.org. We’d love to hear from you.
FOSDEM preparation
FOSDEM 2026 is happening over the weekend of 31st January and 1st February, and preparations for the event continue to be a focus. Maria has been organising the booth, and I have been arranging the details for the Advisory Board meeting which will happen on 30 January. Together we have also been hunting down a venue for a GNOME social event on the Saturday night.
Digital Wellbeing
This week the final two merge requests landed for the bedtime and screen time parental controls features. These features were implemented as part of our Digital Wellbeing program, and it’s great to see them come together in advance of the GNOME 50 release. More details can be found in gnome-shell!3980 and gnome-shell!3999.
Many thanks to Ignacy for seeing this work through to completion!
Flathub
Among other things, Bart recently wrapped up a chunk of work on Flathub’s build and publishing infrastructure, which he’s summarised in a blog post. It’s great to see all the improvements that have been made recently.
That’s it for this week. Thanks for reading, and have a great weekend!
gedit 49.0 has been released! Here are the highlights since version 48.0, which dates back to September 2024. (Some sections are a bit technical.)
File loading and saving enhancements
A lot of work went into this area. It's mostly under-the-hood changes in places where there was a lot of dusty code. The work isn't entirely finished, but there are already user-visible enhancements:
There is now a "Reset All..." button in the Preferences dialog. And it is now possible to configure the default language used by the spell-checker.
Python plugins removal
Initially due to an external factor, plugins implemented in Python are no longer supported.
For some time, a previous version of gedit was packaged on Flathub in a way that still enabled Python plugins, but that is no longer the case.
Even though the problem is fixable, having some plugins in Python meant dealing with a multi-language project, which is much harder to maintain for a single individual. So for now it's preferable to stick to the C language only.
So the bad news is that Python plugins support has not been re-enabled in this version, not even for third-party plugins.
Summary of changes for plugins
The following plugins have been removed:
Only Python plugins have been removed; the C plugins have been kept. The Code Comment plugin, which was written in Python, has been rewritten in C, so it has not disappeared. And it is planned (and desired) to bring back some of the removed plugins.
Summary of other news
The total number of commits in gedit and gedit-related git repositories in 2025 is 884. More precisely:

138 enter-tex
310 gedit
21 gedit-plugins
10 gspell
4 libgedit-amtk
41 libgedit-gfls
290 libgedit-gtksourceview
70 libgedit-tepl

It counts all contributions, translation updates included.
The list contains two apps, gedit and Enter TeX. The rest are shared libraries (re-usable code available to create other text editors).
If you compare with the numbers for 2024, you'll see that there are fewer commits; the only module with more commits is libgedit-gtksourceview. But 2025 was a good year nevertheless!
For future versions: superset of the subset
With Python plugins removed, the new gedit version is, roughly comparing feature lists, a subset of the previous version. In the future, we plan to have a superset of the subset: that is, to bring in new features and try hard not to remove any more functionality.
In fact, we have reached a point where we are no longer interested in removing any more features from gedit. So the good news is that gedit should be incrementally improved from now on, without major regressions. We really hope there won't be any new bad surprises due to external factors!
Side note: this "superset of the subset" resembles the evolution of C++, but in the reverse order. Modern C++ aims to be a subset of the superset, to get a language that is in practice (though not in theory) as safe as Rust (it relies on compiler flags to disable the unsafe parts).
Onward to 2026
Since some plugins have been removed, gedit is now a less advanced text editor. It has become a little less suitable for heavy programming workloads, but for that there are lots of alternatives.
Instead, gedit could become a text editor of choice for newcomers to the computer science field (students and self-learners). It can be a great tool for markup languages too. It can be your daily companion for quite a while, until your needs evolve toward something more complete at your workplace. Or it may be that you prefer its simplicity, its stay-out-of-your-way default setup, and the fact that it launches quickly. In short, there are a lot of reasons to still love gedit ❤️!
If you have any feedback, even on a small thing, I would like to hear from you :)! The best places are GNOME Discourse, or GitLab for more actionable tasks (see the Getting in Touch section).
Many years ago when I was a kid, I took typing lessons where they introduced me to a program called Mecawin. With it, I learned how to type, and it became a program I always appreciated not because it was fancy, but because it showed step by step how to work with a keyboard.
Now the circle of life comes back around: my kid will turn 10 this year. So I started searching for a good typing tutor for Linux. I installed and tried all of them, but didn’t like any. I also tried a couple of applications on macOS; some were OK-ish, but they didn’t work properly with Spanish keyboards. At this point, I decided to build something myself. Initially, I hacked on Keypunch, which is a very nice application, but I didn’t like the UI I came up with by modifying it. So in the end, I decided to write my own. Or better yet, to let Kiro write an application for me.
Mecalin is meant to be a simple application. The main purpose is teaching people how to type, and the Lessons view is what I’ll be focusing on most during development. Since I don’t have much time these days for new projects, I decided to take this opportunity to use Kiro to do most of the development for me. And to be honest, it did a pretty good job. Sure, there are things that could be better, but I definitely wouldn’t have finished it in this short time otherwise.
So if you are interested, give it a try: go to Flathub and install it: https://flathub.org/apps/io.github.nacho.mecalin
In this application, you’ll have several lessons that guide you step by step through the different rows of the keyboard, showing you what to type and how to type it.
This is an example of the lesson view.
You also have games.
The falling keys game: keys fall from top to bottom, and if one reaches the bottom of the window, you lose. This game can clearly be improved, and if anybody wants to enhance it, feel free to send a PR.
The scrolling lanes game: you have 4 rows where text moves from right to left. You need to type the words before they reach the leftmost side of the window, otherwise you lose.
For those who want to add support for their language, there are two JSON files you’ll need to add:
Note that the Spanish lesson is the source of truth; the English one is just a translation done by Kiro.
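To give a rough idea of what a lesson file might contain, here is a purely hypothetical sketch: the key names and structure are invented for illustration and are not Mecalin’s actual schema, so check the repository for the real format before contributing.

```json
{
  "language": "en",
  "lessons": [
    {
      "title": "Home row: a, s, d, f",
      "exercises": [
        "asdf asdf fdsa fdsa",
        "dad sad fad lad"
      ]
    }
  ]
}
```

The real files live in the repository alongside the Spanish originals, which are the reference to translate from.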
If you have any questions, feel free to contact me.
When I started writing this blog, I didn’t fully understand what “think about your audience” really meant. At first, it sounded like advice meant for marketers or professional writers. But over time, I’ve realized it’s one of the most important lessons I’m learning, not just for writing, but for building software and contributing to open source.
Who I’m Writing (and Building) For
When I sit down to write, I think about a few people.
I think about aspiring developers from non-traditional backgrounds, people who didn’t follow a straight path into tech, who might be self-taught, switching careers, or learning in community-driven programs. I think about people who feel like they don’t quite belong in tech yet, and are looking for proof that they do.
I also think about my past self from just a few months ago. Back then, everything felt overwhelming: the tools, the terminology, the imposter syndrome. I remember wishing I could read honest stories from people who were still in the process, not just those who had already “made it.”
And finally, I think about the open-source community I’m now part of: contributors, maintainers, and users who rely on the software we build.
Why My Audience Matters to My Work
Thinking about my audience has changed how I approach my work on Papers.
Papers isn’t just a codebase, it’s a tool used by researchers, students, and academics to manage references and organize their work. When I think about those users, I stop seeing bugs as abstract issues and start seeing them as real problems that affect real people’s workflows.
The same applies to documentation. Remembering how confusing things felt when I was a beginner pushes me to write clearer commit messages, better explanations, and more accessible documentation. I’m no longer writing just to “get the task done”. I’m writing so that someone else, maybe a first-time contributor, can understand and build on my work.
Even this blog is shaped by that mindset. After my first post, someone commented and shared how it resonated with them. That moment reminded me that words can matter just as much as code.
What My Audience Needs From Me
I’ve learned that people don’t just want success stories. They want honesty.
They want to hear about the struggle, the confusion, and the small wins in between. They want proof that non-traditional paths into tech are valid. They want practical lessons they can apply, not just motivation quotes.
Most of all, they want representation and reassurance. Seeing someone who looks like them, or comes from a similar background, navigating open source and learning in public can make the journey feel possible.
That’s a responsibility I take seriously.
How I’ve Adjusted Along the Way
Because I’m thinking about my audience, I’ve changed how I share my journey.
I explain things more clearly. I reflect more deeply on what I’m learning instead of just listing achievements. I’m more intentional about connecting my experiences (debugging a feature, reading unfamiliar code, asking questions in the GNOME community) to lessons others can take away.
Understanding the Papers user base has also influenced how I approach features and fixes. Understanding my blog audience has influenced how I communicate. In both cases, empathy plays a huge role.
Moving Forward
Thinking about my audience has taught me that good software and good writing have something in common: they’re built with people in mind.
As I continue this internship and this blog, I want to keep building tools that are accessible, contributing in ways that lower barriers, and sharing my journey honestly. If even one person reads this and feels more capable, or more encouraged to try, then it’s worth it.
That’s who I’m writing for. And that’s who I’m building for.
It has been almost a year since the switch to Vorarbeiter for building and publishing apps. We've made several improvements since then, and it's time to brag about them.
RunsOn
In the initial announcement, I mentioned we were using RunsOn, a just-in-time runner provisioning system, to build large apps such as Chromium. Since then, we have fully switched to RunsOn for all builds. The free GitHub runners available to open source projects are heavily overloaded, and there are limits on how many concurrent builds can run at a time. With RunsOn, we can request arbitrary amounts of CPU threads, memory and disk space, for less than if we were to use paid GitHub runners.
We also rely more on spot instances, which are even cheaper than the usual on-demand machines. The downside is that jobs sometimes get interrupted. To avoid spending too much time on retry ping-pong, builds retried with the special "bot, retry" command use on-demand instances from the get-go. The same catch applies to large builds, which are unlikely to finish before their spot instances are reclaimed.
The cost breakdown since May 2025 is as follows:
Once again, we are not actually paying for anything thanks to the AWS credits for open source projects program. Thank you RunsOn team and AWS for making this possible!
Caching
Vorarbeiter now supports caching downloads and ccache files between builds. Everything is an OCI image if you are feeling brave enough, and so we are storing the per-app cache with ORAS in GitHub Container Registry.
This is especially useful for cosmetic rebuilds and minor version bumps, where most of the source code remains the same. Your mileage may vary for anything more complex.
End-of-life without rebuilding
One of the Buildbot limitations was that it was difficult to handle pull requests marking apps as end-of-life without rebuilding them. Flat-manager itself has exposed an API call for this since 2019, but we could not really use it, as apps had to be in a buildable state just to deprecate them.
Vorarbeiter will now detect that a PR modifies only the end-of-life keys in the flathub.json file, skip the test and regular builds, and directly use the flat-manager API to republish the app with the EOL flag set post-merge.
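For context, an end-of-life marking in flathub.json looks roughly like this (a sketch based on Flathub's documented end-of-life keys; the message text and app ID are made-up examples):

```json
{
  "end-of-life": "This application is no longer maintained.",
  "end-of-life-rebase": "org.example.SuccessorApp"
}
```

A pull request that touches only these keys can now be published without going through a full build.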
Web UIGitHub's UI isn't really built for a centralized repository building other repositories. My love-hate relationship with Buildbot made me want to have a similar dashboard for Vorarbeiter.
The new web UI uses PicoCSS and HTMX to provide a tidy table of recent builds. It is unlikely to be particularly interesting to end users, but kinkshaming is not nice, okay? I like to know what's being built, and now you can too, here.
Reproducible builds
We have started testing binary reproducibility of x86_64 builds targeting the stable repository. This is possible thanks to flathub-repro-checker, a tool that does the necessary legwork to recreate the build environment and compare the result of the rebuild with what is published on Flathub.
While these tests have been running for a while now, we have recently restarted them from scratch after enabling S3 storage for diffoscope artifacts. The current status is on the reproducible builds page.
Failures are not currently acted on. Once we collect more results, we may start surfacing them to app maintainers for investigation. We also don't test direct uploads at the moment.
I, too, have (or as you can probably guess from the title of this post, had) a Facebook account. I only ever used it for two purposes.
Still, every now and then I get a glimpse of a post by the people I actively chose to follow. Specifically, a friend was pondering the behaviour of people who post happy-birthday messages on the profiles of deceased people. Like, if you have not kept up with someone enough to know that they are dead, why would you feel the need to post congratulations on their profile page?
I wrote a reply, which is replicated below. It is not word-for-word accurate, as it is a translation and I no longer have access to the original post.
Some of these might come via recommendations by AI assistants. Maybe in the future, AI bots of people who are themselves dead will carry on posting birthday congratulations on the profiles of other dead people. A sort of social media for the deceased, if you will.
Roughly one minute later my account was suspended. Let that be a lesson to you all: do not mention the Dead Internet Theory, for doing so threatens Facebook's ad revenue and is thus taboo. (A more probable explanation is that using the word "death" is prohibited by itself, regardless of context, leading to the idiotic phrasing in the style of "Person X was born on [date] and d!ed [other date]" that you see all over IG, FB and YT nowadays.)
Apparently to reactivate the account I would need to prove that "[I am] a human being". That might be a tall order given that there are days when I doubt that myself.
The reactivation service is designed in the usual deceptive way, where it does not tell you all the things you need to do in advance. Instead it bounces you from one task to another in the hope that the sunk cost fallacy makes you submit to ever more egregious demands. I got out when they demanded a full video selfie where I look around in different directions. You can make up your own theories as to why Meta, a known advocate for generative AI and all that garbage, would want high-resolution scans of people's faces. I mean, surely they would not use them for AI training without paying a single cent for usage rights to the original model. Right? Right?
The suspension email ends with this ultimatum.
If you think we suspended your account by mistake, you have 180 days to appeal our decision. If you miss this deadline your account will be permanently disabled.
Well, Mr. Zuckerberg, my response is the following:
Close it! Delete it! Burn it down to the ground! I'd do it myself this very moment, but I can't delete the account without reactivating it first.
Let it also be noted that this post is a much better way of proving that I am a human being than some video selfie thing that could be trivially faked with genAI.
If you maintain a Linux audio settings component, we now have a way to globally enable/disable mono audio for users who do not want stereo separation of their audio (for example, due to hearing loss in one ear). Read on for the details on how to do this.
Background
Most systems support stereo audio via their default speaker output or 3.5mm analog connector. These devices are exposed as stereo devices to applications, and applications typically render stereo content to these devices.
Visual media use stereo for directional cues, and music is usually produced using stereo effects to separate instruments, or provide a specific experience.
It is not uncommon for modern systems to provide a “mono audio” option that lets users have all stereo content mixed together and played to both output channels. The most common scenario for this is hearing loss in one ear.
PulseAudio and PipeWire have supported forcing mono audio on the system via configuration files for a while now. However, this is not easy to expose via user interfaces, and unfortunately remains a power-user feature.
Implementation
Recently, Julian Bouzas implemented a WirePlumber setting to force all hardware audio outputs to mono (MR 721 and 769). This lets the system run in stereo mode, but configures the audioadapter around the device node to mix down the final audio to mono.
This can be enabled using the WirePlumber settings via API, or using the command line with:
wpctl settings node.features.audio.mono true

The WirePlumber settings API allows you to query the current value, as well as clear the setting and restore the default state.
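If you prefer the setting to persist via configuration instead of the command line, a drop-in fragment should work too. This is a sketch assuming WirePlumber 0.5's wireplumber.conf.d drop-in mechanism and its wireplumber.settings section; consult the WirePlumber documentation for the authoritative syntax:

```
# ~/.config/wireplumber/wireplumber.conf.d/99-mono-audio.conf
# Mix all hardware audio outputs down to mono.
wireplumber.settings = {
  node.features.audio.mono = true
}
```

After adding the fragment, restart WirePlumber for it to take effect.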
I have also added (MR 2646 and 2655) a mechanism to set this using the PulseAudio API (via the messaging system). Assuming you are using pipewire-pulse, PipeWire’s PulseAudio emulation daemon, you can use pa_context_send_message_to_object() or the command line:
pactl send-message /core pipewire-pulse:force-mono-output true

This API allows for a few things:
This feature will become available in the next PipeWire releases (both 1.4.10 and 1.6.0).
I will be adding a toggle in Pavucontrol to expose this, and I hope that GNOME, KDE and other desktop environments will be able to pick this up before long.
Hit me up if you have any questions!