Thanks to the work of Christian Gmeiner, support for annotating time regions using Sysprof marks has landed in Mesa.
That means you’ll be able to open captures with Sysprof and see the data alongside other useful information, including callgraphs and flamegraphs.
I do think there is a lot more we can do around better visualizations in Sysprof. If that is something you’re interested in working on please stop by #gnome-hackers on Libera.chat or drop me an email and I can find things for you to work on.
See the merge request here.
Hello. It is May, my favourite month. I’m in Manchester, mainly as I’m moving projects at work, and it’s useful to do that face-to-face.
For the last 2 and a half years, my job has mostly involved a huge, old application inside a big company, which I can’t tell you anything about. I learned a lot about how to tackle really, really big software problems where nobody can tell you how the system works and nobody can clearly describe the problem they want you to solve. It was the first time in a long time that I worked on production infrastructure, in that, we could have caused major outages if we rolled out bad changes. Our team didn’t cause any major outages in all that time. I will take that as a sign of success. (There’s still plenty of legacy application to decommission, but it’s no longer my problem).
During that project I tried to make time to work on end-to-end testing of GNOME using openQA as well… with some success, in the sense that GNOME OS still has working openQA tests, but I didn’t do very well at making improvements, and I still don’t know if or when I’ll ever have time to look further at end-to-end testing for graphical desktops. We did at least have a great Outreachy internship, with Tanju and Dorothy adding quite a few new tests.
Several distros test GNOME downstream, but we still don’t have much of a story for how they could collaborate upstream. We do still have the monthly Linux QA call, so we have a space to coordinate work in that area… but we need people who can do the work.
My job now, for the moment, involves a Linux-based operating system that is intended to be used in safety-critical contexts. I know a bit about operating systems and not much about functional safety. I have seen enough to know there is nothing magic about a “safety certificate” — it represents some thinking about risks and how to detect and mitigate them. I know Codethink is doing some original thinking in this area. It’s interesting to join in and learn about what we did so far and where it’s all going.
Giving credit to people

The new GNOME website, which I really like, describes the project as “An independent computing platform for everyone”.
There is something political about that statement: it’s implying that we should work towards equal access to computer technology. Something which is not currently very equal. Writing software isn’t going to solve that on its own, but it feels like a necessary part of the puzzle.
If I was writing a more literal tagline for the GNOME project, I might write: “A largely anarchic group maintaining complex software used by millions of people, often for little or no money.” I suppose that describes many open source projects.
Something that always bugs me is how a lot of this work is invisible. That’s a problem everywhere: from big companies and governments, down to families and local community groups, there’s usually somebody who does more work than they get credit for.
But we can work to give credit where credit is due. And recently several people have done that!
Outgoing ED Richard Littauer in “So Long and Thanks For All the Fish” shouted out a load of people who work hard in the GNOME Foundation to make stuff work.
Then incoming GNOME ED, Steven Deobald wrote a very detailed “2025-05-09 Foundation Report” (well done for using the correct date format, as well), giving you some idea about how much time it takes to onboard a new director, and how many people are involved.
And then Georges wrote about some people working hard on accessibility in “In celebration of accessibility”.
Giving credit is important and helpful. In fact, that’s just given me an idea, but explaining that will have to wait til next month.
Greets all! Another brief note today. I have gotten Guile working with one of the Nofl-based collectors, specifically the one that scans all edges conservatively (heap-conservative-mmc / heap-conservative-parallel-mmc). Hurrah!
It was a pleasant surprise how easy it was to switch—from the user’s point of view, you just pass --with-gc=heap-conservative-parallel-mmc to Guile’s build (on the wip-whippet branch); when developing I also pass --with-gc-debug, and I had a couple bugs to fix—but, but, there are still some issues. Today’s note thinks through the ones related to heap sizing heuristics.
growable heaps

Whippet has three heap sizing strategies: fixed, growable, and adaptive (MemBalancer). The adaptive policy is the one I would like in the long term; it will grow the heap for processes with a high allocation rate, and shrink when they go idle. However I won’t really be able to test heap shrinking until I get precise tracing of heap edges, which will allow me to evacuate sparse blocks.
So for now, Guile uses the growable policy, which attempts to size the heap so it is at least as large as the live data size, times some multiplier. The multiplier currently defaults to 1.75×, but can be set on the command line via the GUILE_GC_OPTIONS environment variable. For example to set an initial heap size of 10 megabytes and a 4× multiplier, you would set GUILE_GC_OPTIONS=heap-size-multiplier=4,heap-size=10M.
Anyway, I have run into problems! The fundamental issue is fragmentation. Consider a 10MB growable heap with a 2× multiplier, consisting of a sequence of 16-byte objects followed by 16-byte holes. You go to allocate a 32-byte object. This is a small object (8192 bytes or less), and so it goes in the Nofl space. A Nofl mutator holds on to a block from the list of sweepable blocks, and will sequentially scan that block to find holes. However, each hole is only 16 bytes, so we can’t fit our 32-byte object: we finish with the current block, grab another one, repeat until no blocks are left and we cause GC. GC runs, and after collection we have an opportunity to grow the heap: but the heap size is already twice the live object size, so the heuristics say we’re all good, no resize needed, leading to the same sweep again, leading to a livelock.
I actually ran into this case during Guile’s bootstrap, while allocating a 7072-byte vector. So it’s a thing that needs fixing!
observations

The root of the problem is fragmentation. One way to solve the problem is to remove fragmentation; using a semi-space collector comprehensively resolves the issue, modulo any block-level fragmentation.
However, let’s say you have to live with fragmentation, for example because your heap has ambiguous edges that need to be traced conservatively. What can we do? Raising the heap multiplier is an effective mitigation, as it increases the average hole size, but for it to be a comprehensive solution in e.g. the case of 16-byte live objects equally interspersed with holes, you would need a multiplier of 512× to ensure that the largest 8192-byte “small” objects will find a hole. I could live with 2× or something, but 512× is too much.
We could consider changing the heap organization entirely. For example, most mark-sweep collectors (BDW-GC included) partition the heap into blocks whose allocations are of the same size, so you might have some blocks that only hold 16-byte allocations. It is theoretically possible to run into the same issue, though, if each block only has one live object, and the necessary multiplier that would “allow” for more empty blocks to be allocated is of the same order (256× for 4096-byte blocks each with a single 16-byte allocation, or even 4096× if your blocks are page-sized and you have 64kB pages).
My conclusion is that practically speaking, if you can’t deal with fragmentation, then it is impossible to just rely on a heap multiplier to size your heap. It is certainly an error to live-lock the process, hoping that some other thread mutates the graph in such a way to free up a suitable hole. At the same time, if you have configured your heap to be growable at run-time, it would be bad policy to fail an allocation, just because you calculated that the heap is big enough already.
It’s a shame, because we lose a mooring on reality: “how big will my heap get” becomes an unanswerable question because the heap might grow in response to fragmentation, which is not deterministic if there are threads around, and so we can’t reliably compare performance between different configurations. Ah well. If reliability is a goal, I think one needs to allow for evacuation, one way or another.
for nofl?

In this concrete case, I am still working on a solution. It’s going to be heuristic, which is a bit of a disappointment, but here we are.
My initial thought has two parts. Firstly, if the heap is growable but cannot defragment, then we need to reserve some empty blocks after each collection, even if reserving them would grow the heap beyond the configured heap size multiplier. In that way we will always be able to allocate into the Nofl space after a collection, because there will always be some empty blocks. How many empties? Who knows. Currently Nofl blocks are 64 kB, and the largest “small object” is 8kB. I’ll probably try some constant multiplier of the heap size.
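As a rough sketch of that first idea (hypothetical names and a made-up fraction-based policy, not Whippet’s actual code), the reservation could be computed along these lines:

    #include <stddef.h>

    #define NOFL_BLOCK_SIZE (64 * 1024)   /* Nofl blocks are 64 kB */

    /* After each collection, keep at least this many blocks empty, even if
       that means growing the heap past the multiplier-derived target. */
    static size_t
    empty_blocks_to_reserve (size_t heap_bytes, double reserve_fraction)
    {
      size_t heap_blocks = heap_bytes / NOFL_BLOCK_SIZE;
      size_t reserve = (size_t) (heap_blocks * reserve_fraction);
      return reserve > 0 ? reserve : 1;   /* always keep at least one */
    }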
The second thought is that searching through the entire heap for a hole is a silly way for the mutator to spend its time. Immix will reserve a block for overflow allocation: if a medium-sized allocation (more than 256B and less than 8192B) fails because no hole in the current block is big enough—note that Immix’s holes have 128B granularity—then the allocation goes to a dedicated overflow block, which is taken from the empty block set. This reduces fragmentation (holes which were not used for allocation because they were too small).
Nofl should probably do the same, but given its finer granularity, it might be better to sweep over a variable number of blocks, for example based on the logarithm of the allocation size; one could instead sweep over clz(min-size)–clz(size) blocks before taking from the empty block list, which would at least bound the sweeping work of any given allocation.
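To make the arithmetic concrete, here is a sketch of that bound, assuming a 16-byte minimum allocation granule (an assumption on my part) and the GCC/Clang clz builtin; it is illustrative only, not Whippet’s implementation:

    #include <assert.h>
    #include <stddef.h>

    #define MIN_ALLOC_SIZE 16   /* assumed smallest Nofl allocation granule */

    /* clz(min-size) - clz(size) is roughly log2(size / min-size): a 16-byte
       allocation gets a budget of 0 blocks before falling back to the empty
       block list, while the largest 8192-byte "small" object gets 9. */
    static size_t
    sweep_budget_in_blocks (size_t alloc_size)
    {
      assert (alloc_size >= MIN_ALLOC_SIZE);
      return (size_t) (__builtin_clzl (MIN_ALLOC_SIZE)
                       - __builtin_clzl (alloc_size));
    }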
fin

Welp, just wanted to get this out of my head. So far, my experience with this Nofl-based heap configuration is mostly colored by live-locks, and otherwise its implementation of a growable heap sizing policy seems to be more tight-fisted regarding memory allocation than BDW-GC’s implementation. I am optimistic though that I will be able to get precise tracing sometime soon, as measured in development time; the problem as always is fragmentation, in that I don’t have a hole in my calendar at the moment. Until then, sweep on Wayne, cons on Garth, onwards and upwards!
I savored every episode, knowing this was going to be one of those rare shows, like Severance season one, that you only get to experience for the first time once. It pulls you into a vivid, immersive world that’s equal parts mesmerizing and unsettling. A place you’re fascinated by, but would never want to be put in. The atmosphere seeps into you — the sound design, the environments, the way it all just lingers under your skin. You can’t shake it off.
And now I’ve watched the twelfth and final episode and I already miss it. So I need to say: watch it. It’s something special.
The series is a full-length expansion of the short Scavengers by Joseph Bennett and Charles Huettner (with visible improvements across the board). They’ve cited Nausicaä as a major influence, but if you’re into Akira, you’ll catch a few visual nods there too. It’s brutal. It’s gorgeous. And honestly, I haven’t been this excited about an animated series in a long time.
Neither Netflix nor HBO wanted to greenlight a second season. But the show has come to a very satisfying conclusion, so I’m not complaining.
★★★★★
First of all, what's outlined here should be available in libinput 1.29 but I'm not 100% certain on all the details yet so any feedback (in the libinput issue tracker) would be appreciated. Right now this is all still sitting in the libinput!1192 merge request. I'd specifically like to see some feedback from people familiar with Lua APIs. With this out of the way:
Come libinput 1.29, libinput will support plugins written in Lua. These plugins sit logically between the kernel and libinput and allow modifying the evdev device and its events before libinput gets to see them.
The motivation for this is a few unfixable issues - issues we knew how to fix but cannot actually implement and/or ship the fixes for without breaking other devices. One example is the inverted Logitech MX Master 3S horizontal wheel. libinput ships quirks for the USB/Bluetooth connection but not for the Bolt receiver. Unlike the Unifying Receiver, the Bolt receiver doesn't give the kernel sufficient information to know which device is currently connected. That means our quirks could only apply to the Bolt receiver (and thus to any mouse connected to it) - a rather bad idea though, since we'd break every other mouse using the same receiver. Another example is an issue with worn-out mouse buttons - on that device the behavior was predictable enough, but any heuristic would also catch a lot of legitimate button presses. That's fine when you know your mouse is slightly broken and at least it works again, but it's not something we can ship as a general solution. There are plenty more examples like that - custom pointer deceleration, different disable-while-typing behavior, etc.
libinput has quirks but they are internal API and subject to change without notice at any time. They're very definitely not for configuring a device and the local quirk file libinput parses is merely to bridge over the time until libinput ships the (hopefully upstreamed) quirk.
So the obvious solution is: let the users fix it themselves. And this is where the plugins come in. They do not get full access into libinput; they are closer to udev-hid-bpf in userspace. Logically they sit between the kernel event devices and libinput: input events are read from the kernel device, passed to the plugins, then passed to libinput. A plugin can look at and modify devices (add/remove buttons, for example) and look at and modify the event stream as it comes from the kernel device. For this, libinput changed internally to now process something called an "evdev frame", which is a struct that contains all struct input_events up to the terminating SYN_REPORT. This is the logical grouping of events anyway, but so far we didn't explicitly carry those around as such. Now we do, and we can pass them through to the plugin(s) to be modified.
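To make the idea concrete, an evdev frame is conceptually little more than the following (an illustrative sketch only, not libinput's actual internal type; the fixed capacity is arbitrary):

    #include <linux/input.h>   /* struct input_event */
    #include <stddef.h>

    /* Hypothetical sketch: a frame groups all events of one hardware
       report, up to and including the terminating SYN_REPORT. */
    struct evdev_frame {
        struct input_event events[64];  /* arbitrary capacity for the sketch */
        size_t nevents;                 /* includes the SYN_REPORT event */
    };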
The aforementioned Logitech MX master plugin would look like this: it registers itself with a version number, then sets a callback for the "new-evdev-device" notification and (where the device matches) we connect that device's "evdev-frame" notification to our actual code:

    libinput:register(1) -- register plugin version 1
    libinput:connect("new-evdev-device", function (_, device)
        if device:vid() == 0x046D and device:pid() == 0xC548 then
            device:connect("evdev-frame", function (_, frame)
                for _, event in ipairs(frame.events) do
                    if event.type == evdev.EV_REL and
                       (event.code == evdev.REL_HWHEEL or
                        event.code == evdev.REL_HWHEEL_HI_RES) then
                        event.value = -event.value
                    end
                end
                return frame
            end)
        end
    end)

This file can be dropped into /etc/libinput/plugins/10-mx-master.lua and will be loaded on context creation. I'm hoping the approach using named signals (similar to e.g. GObject) makes it easy to add different calls in future versions. Plugins also have access to a timer so you can filter events and re-send them at a later point in time. This is useful for implementing something like disable-while-typing based on certain conditions.
So why Lua? Because it's very easy to sandbox. I very explicitly did not want the plugins to be a side-channel to get into the internals of libinput - specifically no IO access to anything. This ruled out using C (or anything that's a .so file, really) because those would run a) in the address space of the compositor and b) be unrestricted in what they can do. Lua solves this easily. And, as a nice side-effect, it's also very easy to write plugins in.[1]
Whether plugins are loaded or not will depend on the compositor: an explicit call to set up the paths to load from and to actually load the plugins is required. No run-time plugin changes at this point either, they're loaded on libinput context creation and that's it. Otherwise, all the usual implementation details apply: files are sorted and if there are files with identical names the one from the highest-precedence directory will be used. Plugins that are buggy will be unloaded immediately.
If all this sounds interesting, please have a try and report back any APIs that are broken, or missing, or generally ideas of the good or bad persuasion. Ideally before we ship it and the API is stable forever :)
[1] Benjamin Tissoires actually had a go at WASM plugins (via Rust). But ... a lot of effort for rather small gains over Lua.
We are putting out a call for participation to create a small team to promote End of 10. Supporting the End of 10 campaign gives us a unique opportunity to reach disaffected Windows users who are forced to buy a new computer to use the latest Windows 11.
What

This team will work on promoting the End Of 10 Project (and its website) and encourage migration from Windows to GNOME or a sister desktop project. The most important goals are to grow our user base and our app ecosystem.
Who

We are looking for a diverse, global team of 4-5 people who can help with creating a promotional campaign and coordinating with the KDE and End Of 10 teams.
Why

The promotion will be used to educate the public on re-using their Windows 10 computers in new ways by running a community-supported operating system.
How

You can participate by reaching out to us on #engagement:gnome.org on Matrix.
If you are interested in End Of 10 but don’t feel you can commit to the Promo Team, you are welcome to join #endof10-en:kde.org on Matrix.
I’m not a professional web designer. I’ve been making websites for decades, but I haven’t kept up with the latest browser quirks and common approaches. What I do have is a solid grasp of the web’s foundations—thanks to my time teaching IP networking at the university.
My journey with Linux started when I struggled to get PHP running on Windows. (To my surprise, my student side project autoroku.cz kept running in production for years.)
At SUSE I got a taste of the DRY principle while working on a Rails project, SUSE Studio. I left PHP behind and embraced static site generators like Middleman, then Jekyll as it was integrated into GitHub. But over time, maintenance fatigue pushed me further—back to basics. No SASS. No site generators. Just clean, modern HTML and CSS.
People are often surprised to see major projects like gnome.org, brand.gnome.org, circle.gnome.org and my own jimmac.eu built with plain HTML. Yes, you do repeat yourself, and inconsistencies creep in. But with integrated version control and web-based editors, fixes are a click away. More people can edit plain HTML than any bespoke stack.
Do I miss some form of include()? Sure. Would I reach for Jekyll+markdown when someone else is responsible for the content? Probably. But for focused, small sites, nothing beats good old HTML.
With Foundry I want to make LSP management much easier than it currently is in Builder.
We have the foundry lsp run python3 command where python3 can be replaced with any language for which there is an installed LSP plugin. This will start an LSP using all the abstractions (including cross-container execution) and provide it via stdin/stdout.
But what happens when you have a half-dozen language servers for Python with new ones added every week? There is a simple builtin tool now.
Keep in mind the language identifiers should match GtkSourceView language identifiers.
    # Make clangd the preferred LSP for C
    foundry lsp prefer clangd c

    # Make sourcekit-lsp the preferred LSP for C++
    foundry lsp prefer sourcekit-lsp cpp

    # Make ruff the preferred LSP for Python3
    foundry lsp prefer ruff python3

If there is a clear LSP that all contributors to your project should be using, add --project and it will update the value in the project's settings.
It’s everyone’s favorite time of year, election season! …Okay, maybe not the most exciting thing—but an extremely important one nonetheless.
For anyone who doesn’t know, GNOME is made up of many parts: individual contributors and maintainers, ad hoc teams of volunteers, a bunch of open source software in the form of apps and libraries, a whole bunch of infrastructure, and—importantly—a nonprofit foundation. The GNOME Foundation exists to help manage and support the organizational side of GNOME, act as the official face of the project to third parties, and delegate authority when/where it makes the most sense. The GNOME Foundation itself is governed by its elected Board of Directors.
If you contribute to GNOME, you’re eligible to become a member of the GNOME Foundation, which gets you some perks (like an @gnome.org email address and Matrix account, blog hosting and syndication, and access to Nextcloud and video conferencing tools)—but most importantly, GNOME Foundation members vote to elect the Board of Directors. If you contribute to GNOME, I highly recommend you become a member: it looks good for you, but it also helps ensure the GNOME Foundation is directly influenced and governed by contributors themselves.
I’m Running for the Board!

I realized today I never actually announced this on my blog (just via social media), but this past March I was appointed to the GNOME Foundation Board of Directors to fill a vacancy.
However, the seat I filled was up for re-election in the very next cycle, so I’m happy to officially announce: I’m running for the GNOME Foundation Board of Directors! As part of announcing my candidacy, I was asked to share why I would like to serve on the board. I posted this on the GNOME Discourse, but for convenience, I’ve copied it below:
Hey everyone,
I’m Cassidy (cassidyjames pretty much everywhere)! I have been involved in GNOME design since 2015, and was a contributor to the wider FreeDesktop ecosystem before that via elementary OS since around 2010. I am employed by Endless, where I am the community architect/experience lead.
I am particularly proud of my work in early design, communication, and advocacy around both the FreeDesktop color scheme (i.e. dark style) and accent color preferences, both of which are now widely supported across FreeDesktop OSes and the app ecosystem. At elementary I coordinated volunteer projects, led the user experience design, launched and managed OEM partnerships, and largely maintained our communication by writing and editing regular update announcements and other blog posts. Over the past year I helped organize GUADEC 2024 in Denver, and continue to contribute to the GNOME design team and Flathub documentation and curation.
I was appointed to the GNOME Foundation board in March to fill a vacancy, and I am excited to earn your vote to continue my work on the board. If elected, I will continue my focus on:
Clearer and more frequent communication from the GNOME Foundation, including by helping write and edit blog posts and announcements
Exploring and supporting fundraising opportunities including with users, OEMs, and downstream projects
Ensuring Flathub continues to be recognized as the premier Linux app store, especially as it moves to enable financially supporting the developers of FOSS apps
More widely communicating the impact, influence, and importance of GNOME and Flathub to raise awareness beyond the existing contributor community
Helping ensure that the Foundation reflects the interests of the contributor community
I feel like the GNOME Foundation is at an important transformation point, and I look forward to helping steer things in the right direction for an effective, sustainable organization in support of the GNOME community. Regardless of whether I am elected, I will continue to contribute to design and communication as much as I’m able.
Thank you for your consideration!
Become a Member, and Vote!

Voting will be open for two weeks beginning June 5, 2025. If you contribute to GNOME, now is a great time to ensure you’re a member so you can vote in time; check the GNOME Discourse announcement for all of the specific dates and details. And don’t forget to actually vote once it begins. :)
We’re happy to have released gst-dots-viewer, a new development tool that makes it easier to visualize and debug GStreamer pipelines. This tool, included in GStreamer 1.26, provides a web-based interface for viewing pipeline graphs in real-time as your application runs and lets you easily request that all pipelines be dumped at any time.
What is gst-dots-viewer?

gst-dots-viewer is a server application that monitors a directory for .dot files generated by GStreamer’s pipeline visualization system and displays them in your web browser. It automatically updates the visualization whenever new .dot files are created, making it simpler to debug complex applications and understand the evolution of the pipelines at runtime.
Key Features

The web page will automatically update whenever new pipelines are dumped, and you will be able to dump all pipelines from the web page.
New Dots Tracer

As part of this release, we’ve also introduced a new dots tracer that replaces the previous manual approach of specifying where to dump pipelines. The tracer can be activated simply by setting the GST_TRACERS=dots environment variable.
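For context, the manual approach has traditionally meant setting the GST_DEBUG_DUMP_DOT_DIR environment variable and calling the dump macro yourself from application code, roughly like this (a minimal sketch of the classic mechanism, which the dots tracer makes unnecessary):

    #include <gst/gst.h>

    /* Classic manual dump: writes my-pipeline.dot into the directory named
       by the GST_DEBUG_DUMP_DOT_DIR environment variable. */
    static void
    dump_pipeline (GstElement *pipeline)
    {
      GST_DEBUG_BIN_TO_DOT_FILE (GST_BIN (pipeline),
                                 GST_DEBUG_GRAPH_SHOW_ALL,
                                 "my-pipeline");
    }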
Interactive Pipeline Dumps

The dots tracer integrates with the pipeline-snapshot tracer to provide real-time pipeline visualization control. Through a WebSocket connection, the web interface allows you to trigger pipeline dumps. This means you can dump pipelines exactly when you need them during debugging or development, from your browser.
Future Improvements

We plan on adding more features and have this list of possibilities:
This could transform gst-dots-viewer into a more complete debugging and monitoring dashboard for GStreamer applications.
Demo