Planet GNOME - https://planet.gnome.org/

Gedit Technology blog: Mid-September News

Mon, 15/09/2025 - 12:00pm

Misc news about the gedit text editor, mid-September edition! (Some sections are a bit technical).

Next version will be released when Ready

While the release of GNOME 49.0 was approaching (it's this week!), I came to the conclusion that it's best for gedit to wait a bit longer, and to follow the Debian way of releasing software: when it's Ready. "Ready" with an uppercase letter 'R'!

So the question is: what is not ready? Two main things:

  • The rework of the file loading and saving: it is something that takes time, and I prefer to be sure that it'll be a solid solution.
  • The question about the Python support for implementing plugins. Time will tell what the answer is.
Rework of the file loading and saving (next steps)

Work continues to refactor that part of the code, both in libgedit-gtksourceview and gedit.

I won't go into too many technical details this time. But what the previous developer (Ignacio Casal Quinteiro, aka nacho) wrote (in 2011) in a comment at the top of a class is "welcome to a really big headache."

And naturally, I want to improve the situation. For a long time this class was used as a black box, using only its interface. The time has come to change things! It takes time, but I already see the end of the tunnel and I have high hopes that the code will be better structured. I intend to write more about it once finished.

But I can reveal that there is already a visible improvement: loading a big file (e.g. 200 MB) is now super fast! Previously, it could take one minute to load such a file, with a progress bar shown and a Cancel button. Now there is not enough time to even click on (or to see) the Cancel button! (I'm talking about local files; for remote files with a slow network connection, the progress bar is still useful.)

To be continued...

If you appreciate the work that I do, you can send a thank-you donation. Your support is much appreciated! For years to come, it will be useful for the project.

Alley Chaggar: Final Report

Sat, 13/09/2025 - 2:00am
Intro:

Hi everyone, it's the end of GSoC! I had a great experience throughout this whole process. I've learned so much. This is essentially the 'final report' for GSoC, but not my final report for this project in general by a long shot. I still have so much more I want to do, but here is what I've done so far.

Project:

JSON, YAML, and/or XML emitting and parsing integration into Vala’s compiler.

Mentor:

I would like to thank Lorenz Wildberg for being my mentor for this project, as well as the Vala community.

Description:

The main objective of this project is to integrate direct syntax support for parsing and emitting JSON, XML, and/or YAML formats in Vala. This will cut back on boilerplate code, making it more user-friendly and efficient for developers working with these formats.

What I’ve done: Research
  • I’ve done significant research in both JSON and YAML parsing and emitting in various languages like C#, Java, Rust and Python.
  • Looked into how Vala currently handles JSON using JSON-GLib classes, and then modelled the C code after the examples I collected.
  • Modelled the JSON module after other modules in the codegen, mainly the D-Bus, GVariant, GObject, and GTK ones.
Custom JSON Overrides and Attribute
  • Created Vala syntactic sugar, specifically a [JSON] attribute that triggers serialization.
  • Built support for custom overrides, i.e. mapping JSON keys to differently named fields/properties.
  • Reduced boilerplate by generating C code behind the scenes.
Structs
  • I’ve created both Vala functions to deserialize and serialize structs using JSON boxed functions.
  • I created a Vala generate_struct_serialize_func function to create a C code function called _%s_serialize_func to serialize fields.
  • I then created a Vala function generate_struct_to_json to create a C code function called _json_%s_serialize_mystruct to fully serialize the struct by using boxed serialize functions.

  • I created a Vala generate_struct_deserialize_func function to create a C code function called _%s_deserialize_func to deserialize fields.
  • I then created a Vala function generate_struct_from_json to create a C code function called _json_%s_deserialize_mystruct to fully deserialize the struct by using boxed deserialize functions.
GObjects
  • I’ve created both Vala functions to deserialize and serialize GObjects using json_gobject_serialize and JSON generator.
  • I then created a Vala function generate_gclass_to_json to create a C code function called _json_%s_serialize_gobject_myclass to fully serialize GObjects.

  • I created a Vala generate_gclass_from_json function to create a C code function called _json_%s_deserialize_class to deserialize fields.
Non-GObjects
  • I've implemented serialization of non-GObjects using JSON-GLib's builder functions.
  • I then created a Vala function generate_class_to_json to create a C code function called _json_%s_serialize_myclass to fully serialize non-objects that don't inherit from Object or Json.Serializable.
Future Work: Research
  • Research still needs to be put into integrating XML and determining which library to use.
  • The integration of YAML and other formats, not only JSON and XML.
Custom Overrides and Attributes
  • I want to create more specialized attributes for JSON that only do serialization or deserialization, such as [JsonDeserialize] and [JsonSerialize] or something similar.
  • The [JSON] attribute needs to do both deserializing and serializing, and at the moment the deserializing code has problems.
  • XML, YAML, and other formatting languages will follow very similar attribute patterns: [Yaml], [Xml], [Json].
Bugs
  • The unref C code functions are being called with NULLs, which shouldn't be the case. They need proper types going through.
  • Deserializing prompts a redefinition that needs to be corrected.
  • Overridden GObject properties need to have setters made to be able to get the values.
Links

Alice Mikhaylenko: Libadwaita 1.8

Fri, 12/09/2025 - 2:00am

Another six months have passed, and with that comes another libadwaita release to go with GNOME 49.

This cycle doesn't have a lot of changes due to numerous IRL circumstances I've been dealing with, but let's look at them anyway.

Shortcuts dialog

Last cycle GTK deprecated GtkShortcutsWindow and all of the related classes. Unfortunately, that left it without a replacement, despite being widely used. So, now there is a replacement: AdwShortcutsDialog. Like the shortcuts window, it has a very minimal API and is intended to be static and constructed from UI files.

Structure

While the new dialog has a similar feature set to the old one, it has a very different organization, and is not a drop-in replacement.

The old dialog was structured as: GtkShortcutsWindow → GtkShortcutsSection → GtkShortcutsGroup → GtkShortcutsShortcut.

Most apps only have a single shortcuts section, but those that have multiple would have them shown in a dropdown in the dialog's header bar, as seen in Builder:

Each section would have one or more shortcuts groups. When a section has too many groups, it would be paginated. Each group has a title and optionally a view; we'll talk about that a bit later.

Finally, each group contains shortcuts. Or shortcuts shortcuts, I suppose - which describe the actual shortcuts.

When sections and groups specify a view, the dialog can be launched while only showing a subset of shortcuts. This can be seen in Clocks, but was never very widely used. And specifically in Clocks it was also a bit silly, since the dialog actually becomes shorter when the button is clicked.

The new dialog drops the rarely used sections and views, so it has a simpler structure: AdwShortcutsDialog → AdwShortcutsSection → AdwShortcutsItem.

Sections here are closer to the old groups, but are slightly different. Their titles are optional, and sections without titles behave as if they were a part of the previous section with an extra gap. This makes it possible to subdivide the sections further without adding an extra level of hierarchy when it's not necessary.

Since shortcuts are shown as boxed lists, apps should avoid having too many in a single section. It was already not great with the old dialog, but is much worse in the new one.

Finally, AdwShortcutsItem is functionally identical to GtkShortcutsShortcut, except it doesn't support specifying gestures and icons.

Why not gestures?

This feature was always rather questionable, and sometimes did more harm than good. For example, take these two apps - the old and the current image viewer, also known as Eye of GNOME and Loupe respectively:

Both of them specify a two-finger swipe left/right to go to the next/previous image. Well, does it work? The answer depends on what input device you're using.

In Loupe it will work on a touchpad, but not touchscreen: on a touchscreen you use one finger instead.

Meanwhile, in EoG it only works on a touchscreen. On a touchpad, a two-finger swipe scrolls the current image if it's zoomed in.

So - while both of these apps have a swipe gesture, they are completely different - yet the dialog makes no distinction between them.

It's also not discoverable. The HIG recommends naming the menu entry Keyboard Shortcuts, and it doesn't make a lot of sense that these gestures would be in there too - they have nothing to do with the keyboard or shortcuts.

A much better place to document this would be help pages. And of course, ideally apps should have all of the typical gestures people are used to from other systems (pinch to zoom and rotate, double tap to zoom, swipes to navigate, long press to open context menus when it's not available via other means), and clear feedback while those gestures are performed - so that there's less of a need to remember which app has which gestures in the first place and they can be documented system-wide instead.

Why not icons?

As for icons, the only app I'm aware of that did this was gnome-games - it used them to show gamepad navigation:

This was problematic in a similar way, but also there was no way to open this dialog using a gamepad in the first place. A much better solution (and pretty much the standard for gamepad navigation) would have been always visible hints at the bottom of the window or inline.

Auto-loading

Most apps using GtkShortcutsWindow weren't creating it programmatically - GtkApplication loads it automatically and creates an action for it. So, we do the same thing: if a resource with the name shortcuts-dialog.ui is present in the resource base path, AdwApplication will create the app.shortcuts action which will create and show the dialog in the active window when activated.

Some apps were already using an action with this name; in those cases no action will be created.

One thing that's not possible anymore is overriding the dialog for specific windows (gtk_application_window_set_help_overlay()). This feature was extremely rarely used, and apps that really want different dialogs for different windows can just create the dialogs themselves instead of using auto-loading - this is just convenience API for the most common case.

Shortcut label

One of the widgets that was deprecated is GtkShortcutLabel. However, it had uses outside of the shortcuts dialog as well. So, libadwaita has a replacement as well - AdwShortcutLabel. Unlike the dialog itself, this is a direct fork of the GTK widget, and works the same way - though the separation between individual keycaps looks a bit different now, hopefully making it clearer:

It also has a slightly different style, but that has mostly been backported to GtkShortcutLabel as well.

And, unlike the shortcuts dialog, AdwShortcutLabel is a drop-in replacement.

CSS improvements

Media queries

This cycle, GTK has added support for CSS media queries, allowing styles for light and dark, as well as regular and high contrast, to be defined in the same file.

Media queries are fully supported on the libadwaita side, and apps are encouraged to use them instead of style-dark.css, style-hc.css and style-hc-dark.css. Since this happened right at the end of the cycle (after the feature and API freeze, in fact, since GTK doesn't follow it), the old files are not deprecated just yet, but will be early next cycle.

Since we now have support for both variables and media queries, it's possible to do things like this now:

:root {
  --card-border: var(--card-shade-color);
}

@media (prefers-contrast: more) {
  :root {
    --card-border: var(--border-color);
  }
}

.card-separator {
  background: var(--card-border);
}

Typography

Last cycle, I added document and monospace font variables and mentioned that the document font may change in the future to be distinct from the UI font.

This has happened now, and it is actually distinct - Adwaita Sans 12pt instead of 11pt.

So - to mirror .monospace, there's now a .document style class as well. It uses the document font, and also increases the line height for better readability.

Additionally, the formerly mostly useless .body style class increases line height as well now, instead of just setting the default font size and weight. Apps should use it when displaying medium-long text, and libadwaita is using it in a bunch of standard widgets, such as in preferences group and status page descriptions, alert dialog body, or various pages in the about dialog.

Fractal and Podcasts are already making use of both, and hopefully soon more apps will follow suit.

Other changes

Future

While this cycle was pretty short and unexciting, there's a thing in the works for the next cycle.

One of the most glaring omissions right now is sidebars. While we have split views, we don't have anything pre-built that could go into the sidebar pane - it's up to the apps to invent something using GtkListBox or GtkListView, combined with the .navigation-sidebar style class.

This is a lot messier than it may seem, and results in every app having sidebars that look and behave slightly differently. We have helpers for boxed lists, so why not sidebars too?

There is also GtkStackSidebar, but it's not flexible at all and doesn't play well with mobile phones.

Additionally, sidebars look and behave extremely out of place on mobile in particular, and it would be nice to do something about that - e.g. use boxed lists instead.

So, next cycle we'll (hopefully) have both a generic sidebar widget, and a stack sidebar replacement. They won't cover all of the use cases (I expect it to be useful for Builder's preferences dialog but not the main window), but a lot of apps don't do anything extraordinary and it should save them a lot of effort.

Thanks to the GNOME STF Team for providing the funding for this work. Also thanks to the GNOME Foundation for their support and thanks to all the contributors who made this release possible.

Varun R Mallya: PythonBPF - Writing eBPF Programs in Pure Python

Fri, 12/09/2025 - 2:00am
Introduction

Python-BPF offers a new way to write eBPF programs entirely in Python, compiling them into real object files. This project is open-source and available on GitHub and PyPI. I wrote it alongside R41k0u.

Update: This article has now taken off on Hacker News.

Published Library with Future Plans

Python-BPF is a published Python library with plans for further development towards production-ready use.
You can pip install pythonbpf, but it's certainly not at all production-ready, and the code is hacky at best, with more bugs than I could count. (This was a hackathon project after all. We plan to fix it after we are done with the hackathon.)

The Old Way: Before Python-BPF

Before Python-BPF, writing eBPF programs in Python typically involved embedding C code within multiline strings, often using libraries like bcc. eBPF allows small programs to run in response to kernel events, similar to kernel modules.

Here’s an example of how it used to be:

from bcc import BPF
from bcc.utils import printb

# define BPF program
prog = """
int hello(void *ctx) {
    bpf_trace_printk("Hello, World!\\n");
    return 0;
}
"""

# load BPF program
b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")

# header
print("%-18s %-16s %-6s %s" % ("TIME(s)", "COMM", "PID", "MESSAGE"))

# format output
while 1:
    try:
        (task, pid, cpu, flags, ts, msg) = b.trace_fields()
    except ValueError:
        continue
    except KeyboardInterrupt:
        exit()
    printb(b"%-18.9f %-16s %-6d %s" % (ts, task, pid, msg))

This approach, while functional, meant writing C code within Python, lacking support from modern Python development tools like linters.

Features of the Multiline C Program Approach

# load BPF program
b = BPF(text="""
#include <uapi/linux/ptrace.h>

BPF_HASH(last);

int do_trace(struct pt_regs *ctx) {
    u64 ts, *tsp, delta, key = 0;

    // attempt to read stored timestamp
    tsp = last.lookup(&key);
    if (tsp != NULL) {
        delta = bpf_ktime_get_ns() - *tsp;
        if (delta < 1000000000) {
            // output if time is less than 1 second
            bpf_trace_printk("%d\\n", delta / 1000000);
        }
        last.delete(&key);
    }

    // update stored timestamp
    ts = bpf_ktime_get_ns();
    last.update(&key, &ts);
    return 0;
}
""")

The multiline C program approach allowed for features like BPF MAPS (hashmap type), map lookup, update, and delete, BPF helper functions (e.g., bpf_ktime_get_ns, bpf_printk), control flow, assignment, binary operations, sections, and tracepoints.

Similar Program in Reduced C

For production environments, eBPF programs are typically written in pure C, compiled by clang into a bpf target object file, and loaded into the kernel with tools like libbpf. This approach features map sections, license global variables, and section macros specifying tracepoints.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define u64 unsigned long long
#define u32 unsigned int

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1);
    __type(key, u32);
    __type(value, u64);
} last SEC(".maps");

SEC("tracepoint/syscalls/sys_enter_execve")
int hello(struct pt_regs *ctx) {
    bpf_printk("Hello, World!\\n");
    return 0;
}

char LICENSE[] SEC("license") = "GPL";

Finally! Python-BPF

Python-BPF brings the true eBPF experience to Python by allowing the exact same functionality to be expressed as valid Python code. This is a significant improvement over multiline C strings, offering support from existing Python tools.

from pythonbpf import bpf, map, section, bpfglobal, compile
from ctypes import c_void_p, c_int64, c_int32, c_uint64
from pythonbpf.helpers import ktime
from pythonbpf.maps import HashMap

@bpf
@map
def last() -> HashMap:
    return HashMap(key_type=c_uint64, value_type=c_uint64, max_entries=1)

@bpf
@section("tracepoint/syscalls/sys_enter_execve")
def hello(ctx: c_void_p) -> c_int32:
    print("entered")
    return c_int32(0)

@bpf
@section("tracepoint/syscalls/sys_exit_execve")
def hello_again(ctx: c_void_p) -> c_int64:
    print("exited")
    key = 0
    last().update(key)
    ts = ktime()
    return c_int64(0)

@bpf
@bpfglobal
def LICENSE() -> str:
    return "GPL"

compile()

Python-BPF uses ctypes to preserve compatibility, employs decorators to separate the BPF program from other Python code, allows intuitive creation of global variables, and defines sections and tracepoints similar to its C counterpart. It also provides an interface to compile and run in the same file.

How it Works Under the Hood
  1. Step 1: Generate the AST. The Python ast module is used to generate the Abstract Syntax Tree (AST).

  2. Step 2: Emit LLVM IR. llvmlite (from Numba) emits LLVM Intermediate Representation (IR) and debug information for specific parts like BPF maps. The .py file is converted into LLVM IR.

  3. Step 3: Compile the LLVM IR. The .ll file, containing all code written under the @bpf decorator, is compiled using llc -march=bpf -O2. (A rough sketch of the whole pipeline follows below.)
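
To make the three steps concrete, here is a minimal, self-contained sketch of the same pipeline. This is illustrative only, not PythonBPF's actual code: every function is emitted as a stub that just returns 0, and the -filetype=obj flag is an addition so that llc writes an object file rather than assembly.

import ast
import subprocess
from llvmlite import ir

source = """
def hello(ctx):
    return 0
"""

# Step 1: parse the Python source and collect the function definitions.
tree = ast.parse(source)
functions = [node for node in ast.walk(tree) if isinstance(node, ast.FunctionDef)]

# Step 2: emit LLVM IR with llvmlite (here every function becomes a stub
# returning 0; a real code generator would translate the function body).
module = ir.Module(name="pythonbpf_sketch")
for func in functions:
    fnty = ir.FunctionType(ir.IntType(32), [ir.IntType(64)])  # i32 f(i64 ctx)
    fn = ir.Function(module, fnty, name=func.name)
    builder = ir.IRBuilder(fn.append_basic_block(name="entry"))
    builder.ret(ir.Constant(ir.IntType(32), 0))

with open("sketch.ll", "w") as f:
    f.write(str(module))

# Step 3: compile the IR for the BPF target, as described in the post.
subprocess.run(["llc", "-march=bpf", "-O2", "-filetype=obj",
                "-o", "sketch.o", "sketch.ll"], check=True)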

Salient Features

Previous Python options for eBPF relied on bcc for compilation, which is not ideal for production use. The only two real options for production-quality eBPF programs were aya in Rust and Clang with kernel headers in C. Python-BPF introduces a third, new option, expanding the horizons for eBPF development.

It currently supports:

  • Control flow
  • Hash maps (with plans to add support for other map types)
  • Binary operations
  • Helper functions for map manipulation
  • Kernel trace printing functions
  • Timestamp helpers
  • Global variables (implemented as maps internally with syntactical differences)
TL;DR
  • Python-BPF allows writing eBPF programs directly in Python.
  • This library compiles Python eBPF code into actual object files.
  • Previously, eBPF programs in Python were written as C code strings.
  • Python-BPF simplifies eBPF development with Python decorators.
  • It offers a new option for production quality BPF programs in Python.
  • The tool supports BPF maps, helper functions, and control flow, with plans to extend to completeness later.

Thanks for reading my poorly written blog :)

Debarshi Ray: Toolbx — about version numbers

Wed, 10/09/2025 - 11:49pm

Those of you who follow the Toolbx project might have noticed something odd about our latest release that came out a month ago. The version number looked shorter than usual even though it only had relatively conservative and urgent bug-fixes, and no new enhancements.

If you were wondering about this, then, yes, you are right. Toolbx will continue to use these shorter version numbers from now on.

The following is a brief history of how the Toolbx version numbers evolved over time since the beginning of the project till this present moment.

Toolbx started out with a MAJOR.MINOR.MICRO versioning scheme, e.g., 0.0.1, 0.0.2, etc. Back then, the project was known as fedora-toolbox, was implemented in POSIX shell, and this versioning scheme was meant to indicate the nascent nature of the project and the ideas behind it.

To put it mildly, I had absolutely no idea what I was doing. I was so unsure that for several weeks or a few months before the first Git commit in August 2018, it was literally a single file on my laptop that implemented the fedora-toolbox(1) executable, plus a Dockerfile for the fedora-toolbox image, that I would email around to those who were interested.

A nano version was reserved for releases to address brown paper bag bugs or other critical issues, and for release candidates. E.g., several releases between 0.0.98 and 0.1.0 used it to act as an extended set of release candidates for the dot-zero 0.1.0 release. More on that later.

After two years, in version 0.0.90, Toolbx switched from the POSIX shell implementation to a Go implementation authored by Ondřej Míchal. The idea was to do a few more 0.0.9x releases to shake out as many bugs in the new code as possible, implement some of the bigger items on our list that had gotten ignored due to the Go rewrite, and follow it up with a dot-zero 0.1.0 release. That was in May 2020.

Things went according to plan until the beginning of 2021, when a combination of factors put a spanner in the works, and it became difficult to freeze development and roll out the dot-zero release. It was partly because we kept getting an endless stream of bugs and feature requests that had to be addressed; partly because real life and shifting priorities got in the way for the primary maintainers of the project; and partly because I was too tied to the sanctity of the first dot-zero release. This is how we ended up doing the extended set of release candidates with a nano version that I mentioned above.

Eventually, version 0.1.0 arrived in October 2024, and since then we have had three more releases — 0.1.1, 0.1.2 and 0.2. Today, the Toolbx project is seven years old, and some things have changed enough that it requires an update to the versioning scheme.

First, both Toolbx and the ideas that it implements are a lot more mature and widely adopted than they were at the beginning. So much so, that there are a few independent reimplementations of it. It’s time for the project to stop hiding behind a micro version.

Second, the practice of bundling and statically linking the Go dependencies sometimes makes it necessary to update the dependencies to address security bugs or other critical issues. It’s more convenient to do this as part of an upstream release than through downstream patches by distributors. So far, we have managed to avoid the need to do minimal releases targeting only specific issues for conservative downstream distributors, but the recent NVIDIAScape or CVE-2025-23266 and CVE-2025-23267 in the NVIDIA Container Toolkit gave me pause. We managed to escape this time too, but it’s clear that we need a plan to deal with these scenarios.

Hence, from now on, Toolbx releases will default to not having a micro version and use a MAJOR.MINOR versioning scheme. A micro version will be reserved for the same purposes that a nano version was reserved for until now — to address critical issues and for release candidates.

It’s easier to read and remember a shorter MAJOR.MINOR version than a longer one, and appropriately conveys the maturity of the project. When a micro version is needed, it will also be easier to read and remember than a longer one with a nano version. Being easy to read and remember is important for version numbers, because it separates them from Git commit hashes.

So, this is why the latest release is 0.2, not 0.1.3.

Development blog for GNOME Shell and Mutter: GNOME Kiosk Updates

Wed, 10/09/2025 - 9:03am

GNOME Kiosk is a separate Wayland compositor built on the same core components as GNOME Shell, such as Mutter.

While it does not provide a desktop UI, it is intended for kiosk and appliance use cases.

Originally designed to run a single application in fullscreen mode, recent development has expanded its scope toward more versatile window management and system integration.

Recent Releases Overview

47

  • Support for Shell introspection API (in --unsafe-mode).

48

  • Initial support for configurable windows via window-config.ini.
  • Added Shell Screenshot D-Bus API.

49

  • Extended window configuration: set-on-monitor, set-window-type, window tags.
  • Added support for remote sessions (Systemd).
  • Fixes for GrabAccelerators, media keys, and compositor shortcut inhibition.
Window Configuration and Tagged Clients

One of the main recent areas of development has been window configuration.

  • In GNOME 48, Kiosk gained initial support for configuring windows via a static configuration file (window-config.ini).
  • In GNOME 49, this functionality was extended with additional options:
    • set-on-monitor: place windows on a specific monitor.
    • set-window-type: assign specific roles to windows (e.g. desktop, dock, splash).
    • Matching based on window tags: allows selecting windows based on toplevel tags, a new feature in wayland-protocols 1.43.

Additionally, with the new gnome-service-client utility (in mutter from GNOME 49), toplevel window tags can be assigned to clients at launch, making it possible to configure their behavior in Kiosk without modifying the client.

Example: configuring a tagged client in Kiosk

GNOME Kiosk searches for the window configuration file window-config.ini in the following locations:

  • The base directory for user-specific application configuration, usually $HOME/.config/gnome-kiosk/window-config.ini
  • The system-wide list of directories for application data, $XDG_DATA_DIRS. This list usually includes:
    • /var/lib/flatpak/exports/share/gnome-kiosk/window-config.ini
    • /usr/local/share/gnome-kiosk/window-config.ini
    • /usr/share/gnome-kiosk/window-config.ini

Therefore, for a user configuration, edit $HOME/.config/gnome-kiosk/window-config.ini to read:

[all]
set-fullscreen=false
set-above=false

[desktop]
match-tag=desktop
set-window-type=desktop
set-fullscreen=true

With this configuration, GNOME Kiosk will treat any surface with the toplevel tag desktop as a „desktop“ type of window.

Launching a tagged client:

gnome-service-client -t desktop weston-simple-shm

This command starts the weston-simple-shm client and associates the tag desktop with its surface.

The end result is the weston-simple-shm window running as a background window placed at the bottom of the window stack.


This combination makes it possible to build structured kiosk environments, with different Wayland clients used as docks or as desktop windows for implementing root menus.

Accessibility and Input

Several improvements have been made to input handling and accessibility:

  • Fixes for GrabAccelerators support.
  • Support for media keys in Systemd sessions.
  • Ability to inhibit compositor shortcuts.
  • Compatibility with screen reader usage.
Remote Sessions

As of GNOME 49, Kiosk supports remote sessions when run under Systemd. This allows kiosk sessions to be used not only on local displays but also in remote session contexts.

D-Bus APIs

Although GNOME Kiosk is a separate compositor, it implements selected D-Bus APIs also available in GNOME Shell for compatibility purposes. These include:

  • Screenshot API (added in 48).
  • Shell introspection when started with --unsafe-mode (added in 47).

This makes it possible to use existing GNOME testing and automation frameworks such as Ponytail and Dogtail with kiosk sessions.

These APIs allow automation scripts to inspect and interact with the user interface, enabling the creation of automated tests and demonstrations for kiosk applications (using tools like GNOME Ponytail and Dogtail).
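
As an illustration, here is a minimal Python/PyGObject sketch of calling the Screenshot API over D-Bus. It assumes the Kiosk session exposes the same org.gnome.Shell.Screenshot bus name, object path and Screenshot(include_cursor, flash, filename) method as GNOME Shell; the output path is only an example.

import gi
gi.require_version("Gio", "2.0")
from gi.repository import Gio, GLib

# Connect to the session bus and call the Shell-compatible Screenshot method.
bus = Gio.bus_get_sync(Gio.BusType.SESSION, None)
reply = bus.call_sync(
    "org.gnome.Shell.Screenshot",            # bus name (as in GNOME Shell)
    "/org/gnome/Shell/Screenshot",           # object path
    "org.gnome.Shell.Screenshot",            # interface
    "Screenshot",                            # method
    GLib.Variant("(bbs)", (False, False, "/tmp/kiosk-screenshot.png")),
    GLib.VariantType.new("(bs)"),            # expected reply: (success, filename used)
    Gio.DBusCallFlags.NONE,
    -1,                                      # default timeout
    None,
)
success, filename_used = reply.unpack()
print(success, filename_used)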

GNOME Kiosk is the Wayland compositor used with the Wayland-enabled version of Anaconda, the installer for Fedora (and Red Hat Enterprise Linux as well). The support for introspection and screenshots is used by anabot, the framework for automated testing of the installer.

Development Direction

Future development of GNOME Kiosk is expected to continue along the following lines:

  • Configuration refinement: further improving flexibility of the window configuration system.
  • Accessibility: ensuring kiosk sessions benefit from GNOME’s accessibility technologies.

The goal remains to provide a focused, reliable compositor for kiosk and appliance deployments, without implementing the full desktop UI features of GNOME Shell.

Marcus Lundblad: Maps and GNOME 49

Wed, 10/09/2025 - 12:43am

    

As the release of GNOME 49 is approaching, I thought I should probably put together a small recap post covering some of the new things in Maps.

 

 Metro Station Symbols

The map style now supports showing localized symbols for rail and metro stations (relying on places being tagged with a reference to the network's entry in Wikidata).




 Highway Symbols in Place Details

The existing code for showing custom highway shields in the map view (based on code from the OpenStreetMap Americana project) has been extended to expose the necessary bits to use it more generally as icon surfaces in a GtkImage widget. So now custom shields are shown in place details when clicking on a road label.



 Adwaita Shortcuts Dialog

The keyboard shortcuts help dialog was ported by Maximiliano to use AdwShortcutsDialog, improving adaptivity.

 


 Showing OSM Account Avatars in OSM Account Dialog

If a user has set up OAuth for an OpenStreetMap account, and has set a personal profile picture in their OSM account, this is now shown in place of the generic „face“ icon.


And speaking of editing points of interest, the edit dialog has been compacted a bit to better accommodate smaller screen sizes.


 This screenshot also showcases the (fairly) new mobile form-factor emulation option in the GTK inspector.

 

Softer Labels

Some smaller adjustments have also been made to the map style, such as using a slightly softer color for the place labels of towns and cities, rather than pitch black (or bright white for dark mode).



 Marker Alignments

Thanks to work done by Corentin Noël for libshumate 1.5, the center point for map markers can now be adjusted.

This means the place markers in Maps can now actually point to the actual coordinate (e.g. having the “tip of the needle” at the actual location).


Updating the Highway Shields Definitions

And finally, one of the last changes before the release was updating the definitions for custom highway shields from OpenStreetMap Americana. So now, among others, we support shields for national and regional highways in Argentina.

And that's some of the highlights from the 49 release cycle!
 

Christian Schaller: More adventures in the land of AI and Open Source

Tue, 09/09/2025 - 4:39pm

I've been doing a lot of work with AI recently, both as part of a couple of projects I am part of at work and out of a personal interest in understanding the current state and what is possible. My favourite AI tool currently is Claude.ai. Anyway, I have a Prusa Core One 3D printer now that I also love playing with, and one thing I've been wanting to do is print some multicolor prints with it. The Prusa Core One is a single-extruder printer, which means it only has one filament loaded at any given time. Other printers on the market, like the Prusa XL, have 5 extruders, so they can have 5 filaments or colors loaded at the same time.

Prusa Single Extruder Multimaterial setting


The thing is that the Prusa Slicer (the slicer is the software that takes a 3d model and prepares the instructions for the printer based on that 3d model) got this feature called Single Extruder Multi Material. And while it is a process that wastes a lot of filament and takes a lot of manual intervention during the print, it does basically work.

What I quickly discovered was that using this feature is non-trivial. First of all, I had to manually add some G Code to the model to actually get it to ask me to switch filament for each color in my print. But the bigger issue is that while the printer will ask you to change the color or filament, you have no way of knowing which one to switch to: for my model I had 15 filament changes and no simple way of knowing which order to switch in. People were solving this, among other things, by looking through the print layer by layer and writing down the color changes, but I thought that this must be possible to automate with an application. So I opened Claude and started working on this thing I ended up calling Prusa Color Mate.

The idea for the application was simple enough: have it analyze the project file, extract information about the order of color changes, and display them for the user in a way that allows them to manually check off each color as it's inserted. So I started off with a simple Python script that would just print to the console. It quickly turned out that the hard part of this project was parsing the input files, and it was made worse by my ignorance. What I learned the hard way is that if you store a project in Prusa Slicer it will use this format called 3mf. So my thought was: let's just analyze the 3mf file and extract the information I need. It took me quite a bit of back and forth with Claude, feeding Claude source code from Prusa's implementation and PDF files with specifications, but eventually the application did spit out a list of 15 tool changes and the colors associated with them. So I happily tried to use it to print my model, and quickly discovered that the color ordering was all wrong.

After even more back and forth with Claude and reading online, I realized that the 3mf file is a format for storing 3D models, but that is not what is being fed to your 3D printer; the file provided to the printer is a bgcode file. And while the 3mf file did contain the information that you had to change filament 15 times, the information about the order is simply not stored in the 3mf file, as that is something chosen as part of composing your print. That print composition file uses a file format called bgcode. So I now had to extract the information from the bgcode file, which took me basically a full day to figure out with the help of Claude. I could probably have gotten over the finish line sooner by making some better choices along the way, but the extreme optimism of the AI probably led me to believe it was going to be easier than it was to, for instance, just do everything in Python.
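
The 3mf side, at least, is easy to poke at by hand. Here is a minimal sketch (my own illustration, not code from Prusa Color Mate), assuming only that a .3mf project is a standard ZIP package as the 3MF spec defines; the file name and the conventional 3D/3dmodel.model path are assumptions for the example.

import zipfile

# A .3mf project is a ZIP container; list its parts and read the model XML.
# "my_project.3mf" is a placeholder name; the model part conventionally lives
# at 3D/3dmodel.model (a robust reader would resolve it via _rels/.rels).
with zipfile.ZipFile("my_project.3mf") as archive:
    for name in archive.namelist():
        print(name)
    model_xml = archive.read("3D/3dmodel.model").decode("utf-8")

Of course, as explained above, the filament-change order isn't stored in there at all; it only lives in the exported bgcode file.
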
At first I tried using the libbgcode library written in C++, but I had a lot of issues getting Claude to incorporate it properly into my project, with Meson and CMake interaction issues (in retrospect I should have just made a quick RPM of libbgcode and used that). After a lot of struggles with this, Claude thought that parsing the bgcode file natively in Python would be easier than trying to use the C++ library, so I went down that route. I started by feeding Claude a description of the format that I found online and asked it to write me a parser for it. It didn't work very well and I ended up having a lot of back and forth, testing and debugging, and finding more documentation, including a blog post about the meatpack format used inside the file, but it still didn't really work very well. In the end, what probably helped the most was asking it to use the relevant files from libbgcode and Prusa Slicer as documentation, because even if that too took a lot of back and forth, eventually I had a working application that was able to extract the tool change data and associated colors from the file. I ended up using one external dependency, the heatshrink2 library that I pip installed, but while that worked correctly, it took a long time for me and Claude to figure out exactly what parameters to feed it to work with the Prusa-generated file.

Screenshot of Prusa Color Mate

So now I had the working application going and was able to verify it with my first print. I even polished it up a little by adding detection of the manual filament change code, so that people who try to use the application will be made aware they need to add that through Prusa Slicer. Maybe I could bake that into the tool, but at the moment I only have bgcode decoders, not encoders, in my project.

Warning shown for missing G Code; dialog that gives detailed instructions for how to add G Code

So to conclude, it probably took me 2.5 days to write this application using Claude. It is a fairly niche tool, so I don't expect a lot of users, but I made it to solve a problem for myself. If I had to write this pre-AI it would have taken me weeks; figuring out the different formats and how the library APIs worked would have taken me a long time. I am not an especially proficient coder, so a better coder than me could probably put this together quicker than I did, but I think this is part of what will change with AI: even with limited time and technical skills you can put together simple applications like this to solve your own problems.

If you are a Prusa Core One user and would like to play with multicolor prints you can find Prusa Color Mate on GitLab. I have not tested it on any other system or printer than my own, so I don't even know if it will work with other non-Core One Prusa printers. There are RPMs for Fedora you can download in the packaging directory of the GitLab repo, which also includes an RPM for the heatshrink2 library.

As for future plans for this application I don’t really have any. It solves my issue the way it is today, but if there turns out to be an interested user community out there maybe I will try to clean it up and create a proper flatpak for it.

Hubert Figuière: Dev Log August 2025

Sun, 07/09/2025 - 2:00am

Some of the stuff I did in August.

AbiWord

More memory leak fixing.

gudev-rs

Updated gudev-rs to the latest glib-rs, as a requirement to port any code using it to the latest glib-rs.

libopenraw

A minor fix so that it can be used to thumbnail JPEG files by extracting the preview.

Released alpha.12.

Converted the x-trans interpolation to use floats. Also removed a few unnecessary unsafe blocks.

Niepce

A lot of work on the importer. Finally finished that UI bit I had in progress for a while, and all the fallout from it. It is in the develop branch, which means it will be merged to main. This includes some UI layout changes to the dialog.

Then I fixed the camera importer, which was assuming everyone followed the DCIM specification (narrator: no they didn't). This meant it was broken on the iPhone 14 and the Fujifilm X-T3, which has two card slots (really, use a card reader if the camera uses memory cards). Also sped it up; it's still really slow.

Also improved handling of the asynchronous tasks running on a thread, like thumbnailing or listing camera import content. I'm almost ready to move on.

Tore out code using gdk-pixbuf, for many reasons: it's incompatible with multiple threads, and GDK textures were already being created from raw buffers. This simplifies a lot of things.

Aryan Kaushik: GNOME Outreachy Dec 2025 Cohort

Sat, 06/09/2025 - 10:19pm

The GNOME Foundation is interested in participating in the December-March cohort of Outreachy and is looking for 1 intern.

If you are interested in mentoring AND have a project idea in mind, please visit the Internship project ideas repository and submit your proposal by 10th September 2025. All proposals are triaged by Allan Day, Matthias Clasen and Sri Ramkrishna before approval.

We are always on the lookout for project ideas that move the GNOME project forward.

If you have any questions, please feel free to e-mail soc-admins@gnome.org, which is a private mailing list with the GNOME internship coordinators, or join our Matrix channel at #internships:gnome.org.

Looking forward to your proposals!