Planet GNOME

Planet GNOME - https://planet.gnome.org/
Updated: 1 day 1 hour ago

Michael Meeks: 2026-04-07 Tuesday

Tue, 07/04/2026 - 11:00pm
  • Up early, mail chew, planning call, sync with Julie & Thorsten. Lunch, catch-up with Anna & Andras.
  • Poked at an update in response to TDF's blog with Chris.
  • TDF board call, Staff + Board + MC with no community questions - surreal, Dennis arrived but too late for the questions I suppose.
  • Read Die Lösung, which someone pointed me to.

Thibault Martin: TIL that Git can locally ignore files

Tue, 07/04/2026 - 7:35pm

When editing markdown, I love using Helix (best editor in the world). I rely on three language servers to help me do it:

  • rumdl to check markdown syntax and enforce the rules decided by a project
  • marksman to get assistance when creating links
  • harper-ls to check for spelling or grammar mistakes

All of these are configured in my ~/.config/helix/languages.toml configuration file, so it applies globally to all the markdown I edit. But when I edit This Week In Matrix at work, things are different.

To produce those posts, we let our community report their progress in a Matrix room, then collect the reports into a markdown file that we editorialize. This is a perfect fit for Helix (best editor in the world) and its language servers.

Helix has two features that make it a particularly good fit for the job:

  1. The diagnostics view
  2. Jumping to next error with ]d

It is possible to filter pickers, but doing so becomes tedious. For this project specifically, I want to disable harper-ls entirely. Helix supports per-project configuration via a .helix/languages.toml file at the project's root.
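A minimal sketch of what that per-project override could look like, assuming the language server names from my global config, keeping only the two servers I want:

```toml
# .helix/languages.toml — project-local override (server names assumed from above)
[[language]]
name = "markdown"
# harper-ls is omitted, so Helix won't start it for this project
language-servers = ["marksman", "rumdl"]
```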

It's a good solution to override my default config, but now I have an extra .helix directory that git wants to track. I could add it to the .gitignore, but that would also add it to everyone else's .gitignore, even if they don't use Helix (best editor in the world) yet.

It turns out that there is a local-only equivalent to .gitignore, and it's .git/info/exclude. The syntax is the same as .gitignore but it's not committed.
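In practice it looks like this (the .helix/ pattern is just my use case; any .gitignore-style pattern works):

```shell
# From the repository root: ignore .helix/ in this clone only.
# .git/info/exclude uses .gitignore syntax but is never committed.
echo '.helix/' >> .git/info/exclude

# git check-ignore confirms the pattern now applies
git check-ignore -v .helix/languages.toml
```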

Update: several people reached out to point out that there are global options to locally ignore files, if you don't need to do it per-project. Those options are:

  • The global ~/.config/git/ignore file, with the same syntax as .gitignore
  • The configuration variable core.excludesFile to specify a file that contains which patterns to ignore, like a .gitignore
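For completeness, a sketch of the two global variants (the file path in the second option is illustrative):

```shell
# Option 1: the default global ignore file, same syntax as .gitignore
mkdir -p ~/.config/git
echo '.helix/' >> ~/.config/git/ignore

# Option 2: point git at an ignore file of your choosing
git config --global core.excludesFile ~/.my-global-gitignore
```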

I can't believe I didn't need this earlier in my life.

Thibault Martin: TIL that you can filter Helix pickers

Tue, 07/04/2026 - 7:20pm

Helix has a system of pickers: pop-up windows used to open files or to browse diagnostics coming from a language server.

The diagnostics picker displays data in columns:

  • severity
  • source
  • code
  • message

Sometimes it can get very crowded, especially when you have plenty of hints but few actual errors. I didn't know it, but Helix supports filtering in pickers!

By typing %severity WARN I only get warnings. I can even shorten it to %se (and not %s, since source also starts with an s). The full syntax is well documented in the pickers documentation.

Andy Wingo: the value of a performance oracle

Tue, 07/04/2026 - 2:49pm

Over on his excellent blog, Matt Keeter posts some results from having ported a bytecode virtual machine to tail-calling style. He finds that his tail-calling interpreter written in Rust beats his switch-based interpreter, and even beats hand-coded assembly on some platforms.

He also compares tail-calling versus switch-based interpreters on WebAssembly, and concludes that performance of tail-calling interpreters in Wasm is terrible:

1.2× slower on Firefox, 3.7× slower on Chrome, and 4.6× slower in wasmtime. I guess patterns which generate good assembly don't map well to the WASM stack machine, and the JITs aren't smart enough to lower it to optimal machine code.

In this article, I would like to argue the opposite: patterns that generate good assembly map just fine to the Wasm stack machine, and the underperformance of V8, SpiderMonkey, and Wasmtime is an accident.

some numbers

I re-ran Matt’s experiment locally on my x86-64 machine (AMD Ryzen Threadripper PRO 5955WX). I tested three toolchains:

  • Compiled natively via cargo / rustc

  • Compiled to WebAssembly, then run with Wasmtime

  • Compiled to WebAssembly, then run with Wastrel

For each of these toolchains, I tested Raven as implemented in Rust in both “switch-based” and “tail-calling” modes. Additionally, Matt has a Raven implementation written directly in assembly; I test this as well, for the native toolchain. All results use nightly/git toolchains from 7 April 2026.

My results confirm Matt’s for the native and wasmtime toolchains, but wastrel puts them in context:

We can read this chart from left to right: a switch-based interpreter written in Rust is 1.5× slower than a tail-calling interpreter, and the tail-calling interpreter just about reaches the speed of hand-written assembler. (Testing on AArch64, Matt even sees the tail-calling interpreter beating his hand-written assembler.)

Then moving to WebAssembly run using Wasmtime, we see that Wasmtime takes 4.3× as much time to run the switch-based interpreter, compared to the fastest run from the hand-written assembler, and worse, actually shows 6.5× overhead for the tail-calling interpreter. Hence Matt’s conclusions: there must be something wrong with WebAssembly.

But if we compare to Wastrel, we see a different story: Wastrel runs the basic interpreter with 2.4× overhead, and the tail-calling interpreter improves on this marginally with a 2.3× overhead. Now, granted, two-point-whatever-x is not one; Matt’s Raven VM still runs slower in Wasm than when compiled natively. Still, a tail-calling interpreter is inherently a pretty good idea.

where does the time go

When I think about it, there’s no reason that the switch-based interpreter should be slower when compiled via Wastrel than when compiled via rustc. Memory accesses via Wasm should actually be cheaper due to 32-bit pointers, and all the rest of it should be pretty much the same. I looked at the assembly that Wastrel produces and I see most of the patterns that I would expect.

I do see, however, that Wastrel repeatedly reloads a struct memory value, containing the address (and size) of main memory. I need to figure out a way to keep this value in registers. I don’t know what’s up with the other Wasm implementations here; for Wastrel, I get 98% of time spent in the single interpreter function, and surely this is bread-and-butter for an optimizing compiler such as Cranelift. I tried pre-compilation in Wasmtime but it didn’t help. It could be that there is a different Wasmtime configuration that allows for higher performance.

Things are more nuanced for the tail-calling VM. When compiling natively, Matt is careful to use a preserve_none calling convention for the opcode-implementing functions, which allows LLVM to allocate more registers to function parameters; this is just as well, as it seems that his opcodes have around 9 parameters. Wastrel currently uses GCC’s default calling convention, which only has 6 registers for non-floating-point arguments on x86-64, leaving three values to be passed via global variables (described here); this obviously will be slower than the native build. Perhaps Wastrel should add the equivalent annotation to tail-calling functions.

On the one hand, Cranelift (and V8) are a bit more constrained than Wastrel by their function-at-a-time compilation model that privileges latency over throughput; and as they allow Wasm modules to be instantiated at run-time, functions are effectively closures, in which the “instance” is an additional hidden dynamic parameter. On the other hand, these compilers get to choose an ABI; last I looked into it, SpiderMonkey used the equivalent of preserve_none, which would allow it to allocate more registers to function parameters. But it doesn’t: you only get 6 register arguments on x86-64, and only 8 on AArch64. Something to fix, perhaps, in the Wasm engines, but also something to keep in mind when making tail-calling virtual machines: there are only so many registers available for VM state.

the value of time

Well friends, you know us compiler types: we walk a line between collegial and catty. In that regard, I won’t deny that I was delighted when I saw the Wastrel numbers coming in better than Wasmtime! Of course, most of the credit goes to GCC; Wastrel is a relatively small wrapper on top.

But my message is not about the relative worth of different Wasm implementations. Rather, it is that performance oracles are a public good: a fast implementation of a particular algorithm is of use to everyone who uses that algorithm, whether they use that implementation or not.

This happens in two ways. Firstly, faster implementations advance the state of the art, and through competition-driven convergence will in time result in better performance for all implementations. Someone in Google will see these benchmarks, turn them into an OKR, and golf their way to a faster web and also hopefully a bonus.

Secondly, there is a dialectic between the state of the art and our collective imagination of what is possible, and advancing one will eventually ratchet the other forward. We can forgive the conclusion that “patterns which generate good assembly don’t map well to the WASM stack machine” as long as Wasm implementations fall short; but having shown that good performance is possible, our toolkit of applicable patterns in source languages also expands to new horizons.

Well, that is all for today. Until next time, happy hacking!

Michael Meeks: 2026-04-06 Monday

Mon, 06/04/2026 - 11:00pm
  • Up early, out for a run with J. poked at mail, and web bits. hmm. Contemplated the latest barrage of oddity from TDF.
  • Set to cutting out cardboard pieces to position tools in the workshop with H. spent quite some time re-arranging the workshop to try to fit everything in. Binned un-needed junk, moved things around happily. Mended an old light - a wedding present from a friend.
  • Relaxed with J. and watched Upload in the evening, tired.

Jussi Pakkanen: Sorting performance rabbit hole

Mon, 06/04/2026 - 5:42pm

In an earlier blog post we found out that Pystd's simple sorting algorithm implementations were 5-10% slower than their libstdc++ counterparts. The obvious follow-up nerd snipe is to ask "can we make the Pystd implementation faster than libstdc++?"

For all tests below the data set used was 10 million consecutive 64 bit integers shuffled in a random order. The order was the same for all algorithms.

Stable sort

It turns out that the answer for stable sorting is "yes, surprisingly easily". I made a few obvious tweaks (whose details I don't even remember any more) and got the runtime down to 0.86 seconds. This is approximately 5% faster than std::stable_sort. Done. Onwards to unstable sort.

Unstable sort

This one was not, as they say, a picnic. I suspect that stdlib developers have spent more time optimizing std::sort than std::stable_sort simply because it is used a lot more.

After all the improvements I could think of were done, Pystd's implementation was consistently 5-10% slower. At this point I started cheating and examined how libstdc++'s implementation worked to see if there were any optimization ideas to steal. Indeed there were, but they did not help.

Pystd's insertion sort moves elements by pairwise swaps. libstdc++ does it by moving the last item to a temporary, shifting the array elements onwards and then moving the stored item into its final location. I implemented that. It made things slower.

libstdc++'s moves use memmove instead of copying (at least according to code comments). I implemented that. It made things slower.

Then I implemented shell sort to see if it made things faster. It didn't. It made them a lot slower.

Then I reworked the way pivot selection is done and realized that if you do it in a specific way, some elements move to their correct partitions as a side effect of median selection. I implemented that and it did not make things faster. It did not make them slower, either, but the end result should be more resistant against bad pivot selection so I left it in.

At some point the implementation grew a bug which only appeared with very large data sets. For debugging purposes I reduced the limit where introsort switches from quicksort to insertion sort from 16 to 8. I got the bug fixed, but the change made sorting a lot slower. As it should.

But this raises a question, namely would increasing the limit from 16 to 32 make things faster? It turns out that it did. A lot. Out of all perf improvements I implemented, this was the one that yielded the biggest improvement. By a lot. Going to 64 elements made it even faster, but that made other algorithms using insertion sort slower, so 32 it is. For now at least.

After a few final tweaks I managed to finally beat libstdc++. By how much, you ask? Pystd's best observed time was 0.754 seconds while libstdc++'s was 0.755 seconds. And it happened only once. But that's enough for me.

Michael Meeks: 2026-04-05 Sunday

Sun, 05/04/2026 - 11:00pm
  • Slept badly; All Saints in the morning - Easter day. Lovely to play with H. and Jenny - fun - lots of people there.
  • Back for a big roast-lamb lunch with visiting family. Tried to sleep somewhat, prepped for the evening service - preaching at the last minute too; ran that, back.
  • Caught up with S&C&boys a bit; bid them 'bye, A. staying; rested somewhat, headed to bed.

Jakub Steiner: Japan

Sun, 05/04/2026 - 2:00am

Last year we went to Japan to finally visit friends after two decades of planning to. Because they live in Fukuoka, we only ended up visiting Hiroshima, Kyoto and Osaka afterwards. We loved it there and, as soon as cheap flights became available, booked another one for Tokyo, to be legally allowed to cross off Japan as visited.

Now if I were to book the trip today, I probably wouldn't. It's quite a gamble given the geopolitical situation and Asia running out of oil. But having made it there and back, it's been as good as the first trip. Visiting only Tokyo, with a short trip to Kawaguchiko in the Sakura blooming season, worked out great.

At the start of the year I promised myself to shoot my Fuji more. And I don't mean the volcano, I mean my X-T20. I haven't kept the promise at all, always relying on the iPhone. Luckily for the trip I didn't chicken out of carrying the extra weight, and I think it paid off. I only took my 35mm, as the desire to carry gear has really faded over the years. As we walked over 120km in a few days, my back didn't feel very young even with the little gear I did have.

While the difference in quality isn't quite visible on Pixelfed or my photo website (I don't post to Instagram anymore), working through the set on a 4K display has been a pleasure. A bigger sensor is a bigger sensor.

Check out more photos on photo.jimmac.eu -- use arrow keys or swipe to navigate the set.

Weeklybeats #13.

I also managed to get both of my weeklybeats tracks done on the flight so that's a bonus too!

Japan is probably quite difficult to live in, but as a tourist you get so much to feast your eyes on. It's like another planet. I hope to find more time to draw some of the awesome little cars and signs and white tiles and electric cables everywhere.

Michael Meeks: 2026-04-04 Saturday

Sat, 04/04/2026 - 11:00pm
  • Up earlyish, poked at some work. We all drove to Aldeburgh with the family to begin the sad task of sorting through Bruce's things with Anne, S&C& boys.
  • A somewhat draining day; death is such a sad thing, but good wider family spirit.
  • Picked up fish & chips in Aldeburgh on the way back; tragically helped at the aftermath of an extremely grisly, run-over pedestrian in Aldeburgh high-street, even sadder.

Michael Meeks: TDF ejects its core developers

Thu, 02/04/2026 - 8:52pm

For a rather more polished write-up, complete with pretty pictures please see TDF ejects its core developers. Here is a more personal take. My feeling is that this action has been planned by the TDF rump board's majority for many months, if not for some years. While we have tried to avoid this outcome, it has been eventually forced on us.

Trends in TDF board membership

There are many great ways to contribute to FLOSS projects and coding is only one of them - let me underline that. However - coding is the primary production that drives many of the other valuable contributions: translation, marketing, etc. We have been blessed to have many excellent developers around LibreOffice, but coders' board representation has been declining. This means losing a valuable part of the board's perspective on the complexity of the problem. The elected board is (typically) ten people - seven full board members, and three deputies as spares if needed. Here is how that looks:

Another way of looking at board composition is to look at board members' affiliations over time. Of course affiliations change - sometimes during a board term, but this is the same graph broken down by (rough) affiliation:

What is easy to see is the huge drop in corporate affiliation - along with all the business experience that brings. The added '2026' is for the current 'rump' board, which continues to over-stay its term and has also lost several of its most popular developer members - including Eike, recently of RedHat (because of a personal tragedy), and Bjoern. In 'Interested' I include those with a business interest in LibreOffice who are not part of a larger corporate; Laszlo is the last coder on the board.

One of the major surprises of the 2024 election is the 'TDF' chunk, in which I bucket paid TDF staff and those closely related to them. The current chair of the TDF board (Eliane), who manages the Executive Director (ED), is curiously related to a staff member who is managed by the ED - arguably an extremely poor governance practice. Having three TDF-affiliated directors is also in contradiction of the statutes.

It is also worth noting that for over two years no Collaboran, nor anyone from our partners, has been on the TDF board. It was hoped that this would give ample time and space to address any of the issues left from previous boards.

Meritocracy - do-ers decide

TDF is defined as a meritocracy in its statutes. Why is that? The experience we had from the OpenOffice project was that often those who were doing the work were excluded from decision making. That made it hard to get teams to scale and to make quick decisions, hard to let leaders grow in their areas, and it removed an incentive to contribute more - among many other reasons.

Some claim that the sole manifestation of the statute's requirement for meritocracy is a flat entry / membership criteria (as every other organization has). This seems to me to be near to the root of the problems here. Those used to functioning FLOSS projects find it hard to understand why you wouldn't at least listen to those who are working hardest to improve things in whatever area. These days some at TDF seem to emphasize equality instead.

It is interesting then to see the (controversially appointed) Membership Committee overturning the last election - ejecting people, without any thanks or apology, who have contributed so very much over so many years. We built a quick tool to count those. This excludes the long-departed Sun Microsystems release engineers who committed many other people's patches for them - and it struggles to 'see' into the CVS era where branches were flattened; but ... as far as git and gitdm-alias files can easily tell, this is the picture of the top committers to the largest 'core' code repository over all time.

Name                    Commits   Last commit   Affiliation
Caolán McNamara         37,556                  Collabora
Stephan Bergmann        21,732                  Collabora
Noel Grandin            20,851                  Collabora
Miklos Vajna            10,466                  Collabora
Tor Lillqvist            9,233                  Collabora
Michael Stahl            8,742                  Collabora
Kohei Yoshida            5,655                  Collabora
Eike Rathke              5,398                  Volunteer/RedHat
Markus Mohrhard          5,230                  Volunteer
Frank Schönheit          5,025    2011          Sun/Oracle
Michael Weghorn          4,956                  TDF
Mike Kaganski            4,864                  Collabora
Andrea Gelmini           4,582                  Volunteer
Xisco Fauli              4,215                  TDF
Julien Nabet             4,031                  Volunteer
Tomaž Vajngerl           3,797                  Collabora
David Tardon             3,648    2021          RedHat
Luboš Luňák              3,201                  Collabora
Hans-Joachim Lankenau    3,007    2011          Sun/Oracle
Ocke Janssen             2,852    2011          Sun/Oracle
Oliver Specht            2,699                  Sun/Oracle
Jan Holesovsky           2,689                  Collabora
Mathias Bauer            2,580    2011          Sun/Oracle
Olivier Hallot           2,561                  TDF
Michael Meeks            2,553                  Collabora
Bjoern Michaelsen        2,503                  Volunteer/Canonical
Norbert Thiebaud         2,176    2017          Volunteer
Thomas Arnhold           2,176    2014          Volunteer
Andras Timar             2,099                  Collabora
Philipp Lohmann          2,096    2011          Sun/Oracle

It is a humbling privilege for me to serve in such a dedicated team of people who have contributed so much. Take just one example - Caolán has worked from StarDivision to Sun to Oracle to RedHat to Collabora; 37,000 commits in ~25 years - ~four per day sustained, every day. Glancing through his commits you can quickly see that this is far more than a job - over 6,000 of those were made at the weekend, and of course commits don't show the reviews, mentoring, love and care and more. That is just one contributor - but the passion scales across the rest of the team.

Why remove individuals ?

While I was writing this, a response from TDF showed up. While there are things in it to welcome, it seems that this speculative concern about individual contributors is at its core:

"people made decisions in the interest of their employers rather than in the interest of The Document Foundation."

Really!? The primary privilege that members of TDF have is voting for their representatives in elections, and this right is earned only by contribution. Elections are secret ballots. So it seems the most plausible reason for disenfranchising so many is an unhealthy fear of the electorate. Is it possible that the board majority wants to avoid accountability for its actions at the next election (which is already delayed without adequate explanation), like this:

I have no idea how our staff voted in past elections - but I have to assume they did this with integrity and for the best for TDF as they saw it at the time. It seems that a more plausible reason to remove such long term contributors is electoral gerrymandering.

Some thank yous

After 15+ years of service with LibreOffice, it is unfortunate to be ejected. It is possible to imagine a counter-factual world where this might actually be necessary. But even in that case - to do so with no thank-you or apology is unconscionable. It is great to see the team making up for that by publicly thanking their colleagues as they are kicked out. I found it deeply encouraging to remember and celebrate all the fantastic work that has been contributed - let me add my own big thank you to everyone!

Where we are now

Well, much more can be said - perhaps I'll update this later with more details as they emerge - but for now we're re-focusing on making Collabora Office great, getting our gerrit and CI humming smoothly, and starting to dung-out bits of the code-base we are not using. If you're interested in getting involved, have a wave in #cool-dev:matrix.org and join in - we welcome anyone. Thanks for reading and trying to understand this tangled topic!