Planet GNOME

Planet GNOME - https://planet.gnome.org/

Allan Day: GNOME Foundation Update, 2026-01-23

Fri, 23/01/2026 - 6:07 pm

It’s Friday so it’s time for another GNOME Foundation update. Much of this week has been a continuation of items from last week’s update, so I’m going to keep it fairly short and sweet.

With FOSDEM happening next week (31st January to 1st February), preparation for the conference was the main standout item this week. There’s a lot happening around the conference for GNOME, including:

  • Three hackfests (GNOME OS, GTK, Board)
  • The Advisory Board meeting
  • A GNOME stand with merchandise
  • A social event on the Saturday
  • Plenty of GNOME-related talks on the schedule

We’ve created a pad to keep track of everything. Feel free to edit it if anything is missing or incorrect.

Other activities this week included:

  • Last week I reported that our Digital Wellbeing development program has completed its work. Ignacy provided a great writeup this week, with screenshots and a screencast of the new parental controls features. I’d like to take this opportunity to thank Endless for funding this important work which will make GNOME more accessible to young people and their carers.
  • On the infrastructure side, Bart landed a donate.gnome.org rewrite, which will make the site more maintainable. The rewrite also makes it possible to use the site’s core functionality to run other fundraisers, such as for Flathub or GIMP.
  • GUADEC 2026 planning continues, with a focus on securing arrangements for the venue and accommodation, as well as starting the sponsorship drive.
  • Accounting and systems work also continues in the run-up to the audit. We are currently working through another application round to unlock features in the new payments processing platform. There is also some work happening to phase out financial services that are no longer used, and we are working on some end-of-calendar-year tax reports.

That’s it for this update; I hope you found it interesting! Next week I will be busy at FOSDEM so there won’t be a regular weekly update, but hopefully the following week will contain a trip report from Brussels!

Luis Villa: two questions on software “sovereignty”

Fri, 23/01/2026 - 2:46 am

The EU looks to be getting more serious about software independence, often under the branding of “sovereignty”. India has been taking this path for a while. (A Wikipedia article on that needs a lot of love.) I don’t have coherent thoughts on this yet, but prompted by some recent discussions, two big questions:

First: does software sovereignty for a geopolitical entity mean:

  1. we wrote the software from the bottom up
  2. we can change the software as necessary (not just hypothetically, but concretely: the technical skills and organizational capacity exist and are experienced)
  3. we sysadmin it (again, concretely: real skills, not just the legal license to download it)
  4. we can download it

My understanding is that India increasingly demands #1 for important software systems, though apparently both their national desktop and mobile OSes are based on Ubuntu and Android, respectively, which would be more like #2. (FOSS only guarantees #4; it legally permits #2 and #3, but as I’ve said before, being legally permitted to do a thing is not the same as having the real capability to do it.)

As the EU tries to set open source policy it will be interesting to see whether they can coherently ask this question, much less answer it.

Second, and related: what would a Manhattan Project to make the EU reasonably independent in core operating system technologies (mobile, desktop, cloud) look like?

It feels like, if well-managed, such a project could have incredible spillovers for the EU. Besides no longer being held hostage when a US administration goes rogue, students would upskill; project management chops would be honed; new businesses would form. And (in the current moment) it could provide a real rationale and focus for the various EU AI Champions, which currently often feel like their purpose is to “be ChatGPT but not American”.

But it would be a near-impossible project to manage well: it risks becoming, as Mary Branscombe likes to say, “three SAPs in a trenchcoat”. (Perhaps a more reasonable goal is to be Airbus?)

Christian Schaller: Can AI help ‘fix’ the patent system?

Wed, 21/01/2026 - 7:35 pm

One thing I think anyone involved with software development over the last few decades can see is the problem of the “forest of bogus patents”. I have recently been trying to use AI to look at patents in various ways, and one idea I had was: could AI help improve the quality of patents and free us from obvious ones?

Let’s start with the justification for patents existing at all. The most common argument I hear for the patent system is this: “Patents require public disclosure of inventions in exchange for protection. Without patents, inventors would keep innovations as trade secrets, slowing overall technological progress.” This reasoning makes sense to me, but it is also screamingly obvious that, for it to hold true, you need to ensure the patents granted are genuinely inventions that would otherwise stay hidden as trade secrets. If you allow patents on things that are obvious to someone skilled in the art, you are not enhancing technological progress, you are hampering it, because the next person along will be blocked from doing the obvious thing.

Based on this justification, the question then becomes: does, for example, the US Patent Office do a good job of filtering out obvious patents? I believe the answer is no. Having worked in software for many decades now, it is very clear to me that the patent office does very little to avoid approving patents for obvious things. There are many reasons why that happens, I think. First of all, if you are an expert in a field, you would likely not be working as a case worker in the patent office; no disrespect to the people working there, but if you are a really good software engineer, there are much better paying options elsewhere, and the same goes for medical researchers, for example. So we have people with a limited understanding of a patent’s field trying to evaluate whether the patent is obvious or not.

Secondly, the patent office is not really incentivized to deny patents: it is a self-funded entity whose “success” and revenue are based on how many patent applications are submitted. The lower the bar for approval, the more patent applications get submitted, and thus the more money the patent office makes and the more “successful” it is. This is a problem, but if we fix the first issue I mentioned, it may be possible to reduce the impact of this second one.

So what can AI do to help here? Having been trained on these fields, it can help reviewers at the patent office evaluate patents for obviousness. If every patent application had to be run through an AI engine for review, the patent office reviewer would have to justify why he or she believes something is non-obvious after the AI has looked at it and concluded that it is obvious. My assumption is that they would very rarely do so.

To test this out, I asked Gemini to review an H.264 video patent that is still valid and has been used in litigation before, to see if it could do a decent job of reviewing the patent application and judging whether it is a true invention. Here is what Gemini came up with.

US Patent 7,916,782 (Samsung Electronics) is one of the most famous “battleground” patents in the history of smartphone litigation. It has been asserted against Apple, Huawei, and others.

It covers a specific part of the H.264 standard known as CAVLC (Context-Adaptive Variable Length Coding).

The Patent: US 7,916,782: Method and apparatus for video coding and decoding
The Mechanism: Efficiently coding the count of zero-valued pixels in a video block (“TotalZeros”).

The Problem
In video compression, you divide the image into small 4×4 pixel blocks. After processing (transform and quantization), most of the 16 pixels in the block end up being zero. You only have a few non-zero numbers left.
Goal: You need to tell the decoder how many zeros are in the block so it can reconstruct the empty space.
Constraint: You already told the decoder how many non-zero coefficients exist (let’s say you have 5 non-zeros).
The “Invention”
The patent claims a method where the encoder selects a specific lookup table (VLC table) to encode the “TotalZeros” value, and, crucially, this selection is based on the number of non-zero coefficients (TotalCoeff) that were just processed. If TotalCoeff is 1, use Table A (allows for many zeros). If TotalCoeff is 10, use Table B (allows for fewer zeros).

The “Skilled Engineer” Test
Imagine you are a video coding engineer in 2002. You are tasked with compressing the “TotalZeros” value. Here is the logical deduction chain you would inevitably follow:

  • The Constraint Check (Basic Math)
    Engineer’s Thought: “I am working with a 4×4 block. That’s 16 pixels total.”
    Engineer’s Thought: “I just transmitted the number of non-zero pixels. Let’s say there are 5 non-zeros.”
    Deduction: “Therefore, the maximum number of zeros possible is 16−5=11.”
  • The Optimization (Common Sense)
    Engineer’s Thought: “I need a Huffman table (lookup table) to compress the TotalZeros value.”
    Engineer’s Thought: “Should I use a generic table that goes from 0 to 16?”
    Deduction: “No, that’s wasteful. I know for a fact the value cannot be 12, 13, 14, 15, or 16. If I use a table that includes those impossible numbers, I’m wasting bits on shorter codes for values that will never happen.”
  • The Solution (The “Invention”)
    Engineer’s Thought: “I should use a smaller, custom table that only contains values 0 through 11.”
    Engineer’s Thought: “Actually, I should just make a different table for every possible number of non-zeros. If I have 15 non-zeros, the only possible zero count is 1. I don’t even need a table for that; it’s implicit!”

The Verdict: Valid or Broken?
Engineer’s View (Obvious): This is not an invention; it is constraint propagation. If you have a container of capacity 16, and you fill it with 5 rocks, it is “obvious” that the remaining space for air is 11. Creating a table sized to “11” isn’t a creative spark; it is the mathematical consequence of the block size. To an engineer, failing to do this would be incompetence, not a design choice.

Patent Office View (Novel): The USPTO granted this because the patent didn’t just claim “optimizing tables.” It claimed the specific structure of the solution: Check TotalCoeff. If TotalCoeff is 1, select Table_1. If TotalCoeff is 2, select Table_2. And so on. Because no prior paper had explicitly written down “Use these specific tables in this specific order for H.264,” it was deemed novel.

Conclusion: US 7,916,782 is a prime example of the “broken” system because it effectively patents arithmetic efficiency. By granting a monopoly on the logic “Use a small table when you have small space,” the patent office allowed Samsung to assert ownership over the only logical way to implement the H.264 standard efficiently. An engineer trying to solve this problem without seeing the patent would essentially be forced to “re-invent” it, because the math leaves no other optimal path.

Conclusion: I have a hard time believing a USPTO reviewer would have green-lit this patent after getting this feedback from the AI engine, and so, hopefully, having something like this in place could over time help us reduce the patent pool to things that genuinely deserve patent protection.
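As an aside, the constraint-propagation argument above can be written down in a few lines of C (my own illustrative sketch, not code from the patent or from Gemini's output; the table-numbering scheme is hypothetical):

```c
#include <assert.h>

#define BLOCK_SIZE 16 /* a 4x4 block has 16 coefficients */

/* Once TotalCoeff non-zero coefficients have been signalled, the
 * maximum possible TotalZeros value follows by simple subtraction. */
static int
max_total_zeros (int total_coeff)
{
  return BLOCK_SIZE - total_coeff;
}

/* "Selecting" the VLC table is then just indexing by TotalCoeff:
 * table N only needs to cover the values 0..(16 - N). */
static int
select_vlc_table (int total_coeff)
{
  /* hypothetical numbering: Table_1 for TotalCoeff == 1, etc. */
  return total_coeff;
}
```

The entire "inventive step" is the observation that `max_total_zeros (5)` is 11, so the table for TotalCoeff = 5 need not contain codes for 12 through 16.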

Sebastian Wick: Best Practices for Ownership in GLib

Wed, 21/01/2026 - 4:31 pm

For all the rightful criticisms that C gets, GLib does manage to alleviate at least some of them. If we can’t use a better language, we should at least make use of all the tools we have in C with GLib.

This post looks at the topic of ownership, and also how it applies to libdex fibers.

Ownership

In normal C usage, it is often not obvious at all if an object that gets returned from a function (either as a real return value or as an out-parameter) is owned by the caller or the callee:

MyThing *thing = my_thing_new ();

If thing is owned by the caller, then the caller also has to release the object thing. If it is owned by the callee, then the lifetime of the object thing has to be checked against its usage.

At this point, the documentation is usually being consulted with the hope that the developer of my_thing_new documented it somehow. With gobject-introspection, this documentation is standardized and you can usually read one of these:

The caller of the function takes ownership of the data, and is responsible for freeing it.

The returned data is owned by the instance.

If thing is owned by the caller, the caller now has to release the object or transfer ownership to another place. In normal C usage, both of those are hard issues. For releasing the object, one of two techniques are usually employed:

  1. single exit

MyThing *thing = my_thing_new ();
gboolean c;

c = my_thing_a (thing);
if (c)
  c = my_thing_b (thing);
if (c)
  my_thing_c (thing);

my_thing_release (thing); /* release thing */

  2. goto cleanup

MyThing *thing = my_thing_new ();

if (!my_thing_a (thing))
  goto out;
if (!my_thing_b (thing))
  goto out;
my_thing_c (thing);

out:
my_thing_release (thing); /* release thing */

Ownership Transfer

GLib provides automatic cleanup helpers (g_auto, g_autoptr, g_autofd, g_autolist). A macro associates the function to release the object with the type of the object (e.g. G_DEFINE_AUTOPTR_CLEANUP_FUNC). If they are being used, the single exit and goto cleanup approaches become unnecessary:

g_autoptr(MyThing) thing = my_thing_new ();

if (!my_thing_a (thing))
  return;
if (!my_thing_b (thing))
  return;
my_thing_c (thing);

The nice side effect of using automatic cleanup is that, for a reader of the code, the g_auto helpers become a definite mark that the variable they are applied to owns the object!

If we have a function which takes ownership over an object passed in (i.e. the called function will eventually release the resource itself) then in normal C usage this is indistinguishable from a function call which does not take ownership:

MyThing *thing = my_thing_new ();
my_thing_finish_thing (thing);

If my_thing_finish_thing takes ownership, then the code is correct, otherwise it leaks the object thing.

On the other hand, if automatic cleanup is used, there is only one correct way to handle either case.

A function call which does not take ownership is just a normal function call and the variable thing is not modified, so it keeps ownership:

g_autoptr(MyThing) thing = my_thing_new ();
my_thing_finish_thing (thing);

A function call which takes ownership on the other hand has to unset the variable thing to remove ownership from the variable and ensure the cleanup function is not called. This is done by “stealing” the object from the variable:

g_autoptr(MyThing) thing = my_thing_new ();
my_thing_finish_thing (g_steal_pointer (&thing));

By using g_steal_pointer and friends, the ownership transfer becomes obvious in the code, just like ownership of an object by a variable becomes obvious with g_autoptr.

Ownership Annotations

Now you could argue that the g_autoptr and g_steal_pointer combination without any conditional early exit is functionally exactly the same as the normal C usage example, and you would be right. It also requires more code and adds a tiny bit of runtime overhead.

I would still argue that it helps readers of the code immensely which makes it an acceptable trade-off in almost all situations. As long as you haven’t profiled and determined the overhead to be problematic, you should always use g_auto and g_steal!

The way I like to look at g_auto and g_steal is that it is not only a mechanism to release objects and unset variables, but also annotations about the ownership and ownership transfers.

Scoping

One pattern that is still somewhat pronounced in older code using GLib is the declaration of all variables at the top of a function:

static void
foobar (void)
{
  MyThing *thing = NULL;
  size_t i;

  for (i = 0; i < len; i++)
    {
      g_clear_pointer (&thing, my_thing_release);
      thing = my_thing_new (i);
      my_thing_bar (thing);
    }

  g_clear_pointer (&thing, my_thing_release);
}

We can still avoid mixing declarations and code, but we don’t have to do it at the granularity of a function; we can do it at the granularity of natural scopes:

static void
foobar (void)
{
  for (size_t i = 0; i < len; i++)
    {
      g_autoptr(MyThing) thing = NULL;

      thing = my_thing_new (i);
      my_thing_bar (thing);
    }
}

Similarly, we can introduce our own scopes which can be used to limit how long variables, and thus objects are alive:

static void
foobar (void)
{
  g_autoptr(MyOtherThing) other = NULL;

  {
    /* we only need `thing` to get `other` */
    g_autoptr(MyThing) thing = NULL;

    thing = my_thing_new ();
    other = my_thing_bar (thing);
  }

  my_other_thing_bar (other);
}

Fibers

When somewhat complex asynchronous patterns are required in a piece of GLib software, it becomes extremely advantageous to use libdex and the system of fibers it provides. They allow writing what looks like synchronous code, which suspends on await points:

g_autoptr(MyThing) thing = NULL;

thing = dex_await_object (my_thing_new_future (), NULL);

If this piece of code doesn’t make much sense to you, I suggest reading the libdex Additional Documentation.

Unfortunately the await points can also be a bit of a pitfall: the call to dex_await is semantically like calling g_main_loop_run on the thread default main context. If you use an object which is not owned across an await point, the lifetime of that object becomes critical. Often the lifetime is bound to another object which you might not control in that particular function. In that case, the pointer can point to an already released object when dex_await returns:

static DexFuture *
foobar (gpointer user_data)
{
  /* foo is owned by the context, so we do not use an autoptr */
  MyFoo *foo = context_get_foo ();
  g_autoptr(MyOtherThing) other = NULL;
  g_autoptr(MyThing) thing = NULL;

  thing = my_thing_new ();

  /* side effect of running g_main_loop_run */
  other = dex_await_object (my_thing_bar (thing, foo), NULL);
  if (!other)
    return dex_future_new_false ();

  /* foo here is not owned, and depending on the lifetime
   * (context might recreate foo in some circumstances),
   * foo might point to an already released object */
  dex_await (my_other_thing_foo_bar (other, foo), NULL);

  return dex_future_new_true ();
}

If we assume that context_get_foo returns a different object when the main loop runs, the code above will not work.

The fix is simple: own the objects that are being used across await points, or re-acquire an object. The correct choice depends on what semantic is required.

We can also combine this with improved scoping to only keep the objects alive for as long as required. Unnecessarily keeping objects alive across await points can keep resource usage high and might have unintended consequences.

static DexFuture *
foobar (gpointer user_data)
{
  /* we now own foo */
  g_autoptr(MyFoo) foo = g_object_ref (context_get_foo ());
  g_autoptr(MyOtherThing) other = NULL;

  {
    g_autoptr(MyThing) thing = NULL;

    thing = my_thing_new ();

    /* side effect of running g_main_loop_run */
    other = dex_await_object (my_thing_bar (thing, foo), NULL);
    if (!other)
      return dex_future_new_false ();
  }

  /* we own foo, so this always points to a valid object */
  dex_await (my_other_thing_foo_bar (other, foo), NULL);

  return dex_future_new_true ();
}

static DexFuture *
foobar (gpointer user_data)
{
  g_autoptr(MyOtherThing) other = NULL;

  {
    /* We do not own foo, but we only use it before an
     * await point.
     * The scope ensures it is not being used afterwards. */
    MyFoo *foo = context_get_foo ();
    g_autoptr(MyThing) thing = NULL;

    thing = my_thing_new ();

    /* side effect of running g_main_loop_run */
    other = dex_await_object (my_thing_bar (thing, foo), NULL);
    if (!other)
      return dex_future_new_false ();
  }

  {
    MyFoo *foo = context_get_foo ();

    dex_await (my_other_thing_foo_bar (other, foo), NULL);
  }

  return dex_future_new_true ();
}

One of the scenarios where re-acquiring an object is necessary is worker fibers, which operate continuously until the object gets disposed. If such a fiber owned the object (i.e. held a reference to it), the object would never get disposed: the fiber only finishes when the reference it holds gets released, which never happens because it holds that reference. The naive code also suspiciously doesn’t have any exit condition.

static DexFuture *
foobar (gpointer user_data)
{
  g_autoptr(MyThing) self = g_object_ref (MY_THING (user_data));

  for (;;)
    {
      g_autoptr(GBytes) bytes = NULL;

      /* stand-in for whatever future produces the next chunk of data */
      bytes = dex_await_boxed (my_source_next_bytes_future (), NULL);
      my_thing_write_bytes (self, bytes);
    }
}

So instead of owning the object, we need a way to re-acquire it. A weak-ref is perfect for this.

static DexFuture *
foobar (gpointer user_data)
{
  /* g_weak_ref_init in the caller somewhere */
  GWeakRef *self_wr = user_data;

  for (;;)
    {
      g_autoptr(GBytes) bytes = NULL;

      /* stand-in for whatever future produces the next chunk of data */
      bytes = dex_await_boxed (my_source_next_bytes_future (), NULL);

      {
        g_autoptr(MyThing) self = g_weak_ref_get (self_wr);

        if (!self)
          return dex_future_new_true ();

        my_thing_write_bytes (self, bytes);
      }
    }
}

Conclusion
  • Always use g_auto/g_steal helpers to mark ownership and ownership transfers (exceptions do apply)
  • Use scopes to limit the lifetime of objects
  • In fibers, always own objects you need across await points, or re-acquire them

Sam Thursfield: Status update, 21st January 2026

Wed, 21/01/2026 - 2:00 pm

Happy new year, ye bunch of good folks who follow my blog.

I ain’t got a huge bag of stuff to announce. It’s raining like January. I’ve been pretty busy with work amongst other things, doing stuff with operating systems but mostly internal work, and mostly management and planning at that.

We did make an actual OS last year though; here’s a nice blog post from Endless and a video interview about some of the work and why it’s cool: “Endless OS: A Conversation About What’s Changing and Why It Matters”.

I tried a new audio setup in advance of that video, using a pro interface and mic I had lying around. It didn’t work though and we recorded it through the laptop mic. Oh well.

Later I learned that, by default, a 16-channel interface will be treated by GNOME as a 7.1 surround setup or something mental. You can use the PipeWire loopback module to define a single mono source on the channel that you want to use, and then audio Just Works again. PipeWire has pretty good documentation now too!
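For reference, the loopback trick looks roughly like this, a sketch based on the PipeWire virtual-devices documentation; the node names are arbitrary, `target.object` is a placeholder for your actual ALSA device, and `AUX0` assumes the mic is on the interface’s first input channel:

```conf
# ~/.config/pipewire/pipewire.conf.d/mono-mic.conf
context.modules = [
  { name = libpipewire-module-loopback
    args = {
      node.description = "Interface Mic (mono)"
      capture.props = {
        node.name         = "capture.interface_mic"
        audio.position    = [ AUX0 ]  # the channel the mic is plugged into
        target.object     = "alsa_input.usb-YOUR_DEVICE_NAME"  # placeholder
        stream.dont-remix = true
        node.passive      = true
      }
      playback.props = {
        node.name      = "interface_mic_mono"
        media.class    = "Audio/Source"
        audio.position = [ MONO ]
      }
    }
  }
]
```

After restarting PipeWire, the mono source shows up as a normal input device that GNOME and apps can select.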

What else happened? Jordan and Bart finally migrated the GNOME openQA server off the ad-hoc VM setup that it ran on, and brought it into OpenShift, as the Lord intended. Hopefully you didn’t even notice. I updated the relevant wiki page.

The Linux QA monthly calls are still going, by the way. I handed over the reins to another participant, but I’m still going to the calls. The most active attendees are the Debian folk, who are heroically running an Outreachy internship right now to improve desktop testing in Debian. You can read a bit about it here: “Debian welcomes Outreachy interns for December 2025-March 2026 round”.

And it looks like Localsearch is going to do more comprehensive indexing in GNOME 50. Carlos announced this back in October 2025 (“A more comprehensive LocalSearch index for GNOME 50”) aiming to get some advance testing on this, and so far the feedback seems to be good.

That’s it from me I think. Have a good year!