Planet GNOME - https://planet.gnome.org/

Jordan Petridis: Thoughts on employing PGO and BOLT on the GNOME stack

Tue, 26/03/2024 - 4:42pm

Christian was looking at PGO and BOLT recently, so I figured I’d write down my notes from the discussions we had on how we’d go about making things faster on our stack, since I don’t have the time or resources to pursue those plans myself at the moment.

First off, let’s start with the basics: PGO (profile-guided optimization) and BOLT (Binary Optimization and Layout Tool) work in similar ways. You capture one or more “profiles” of a workload that’s representative of a use case of your code, and then the tools do their magic to make the common hot paths more efficient/cache-friendly/etc. Afterwards they produce a new binary that is hopefully faster than the old one and functionally identical, so you can just replace it.

Now, two issues already arise here:

First of all we don’t really have any benchmarks in our stack, let alone, ones that are rounded enough to account for the majority of usecases. Additionally we need better instrumentation to capture stats like frames, frame-times, and export them both for sysprof and so we can make the benchmark runners more useful.

Once we have the benchmarks we can use them to create the profiles for optimizations and to verify that any changes have the desired effect. We will need multiple profiles of all the different hardware/software configurations.

For example, for GTK we’d ideally want a matrix of profiles for the different render backends (NGL/Vulkan), along with the Mesa drivers they’d use depending on the hardware (AMD/Intel), and then also different architectures, so additional profiles for the Raspberry Pi 5 and Asahi stacks. We might also want to add a profile captured under qemu+virtio while we are at it.

Maintaining the benchmarks and profiles would be a lot of work, and very tailored to each project, so they would all have to live in their upstream repositories.

On the other hand, the optimization itself has to be done during the tree/userland/OS composition, and we’d have to aggregate all the profiles from all the projects to apply them. This is easily done when you are in control of the whole deployment, as we are for the GNOME Flatpak Runtime. It’s also easy if you are targeting an embedded deployment, where most of the time you have custom images that you are in full control of and know exactly the workload you will be running.

If we want distros to also apply these optimizations, and for this to be done at scale, we’d have to make the whole process automatic and part of the usual compilation process, so there would be no room for error during integration. The downside is that we’d have far fewer opportunities for aggregating different use cases/profiles, as projects would either have to own optimization of the stack beneath them (e.g., GTK being the one relinking Pango) or only relink their own libraries.

To conclude, post-link-time optimization would be a great avenue to explore, as it seems to be one of the lower-hanging fruits when it comes to optimizing the whole stack. But it would also be quite the effort and require a decent amount of work committed to it. It would be worth it in the long run.

Andy Wingo: hacking v8 with guix, bis

Tue, 26/03/2024 - 12:51pm

Good day, hackers. Today, a pragmatic note, on hacking on V8 from a Guix system.

I’m going to skip a lot of the background because, as it turns out, I wrote about this already almost a decade ago. But following that piece, I mostly gave up on doing V8 hacking from a Guix machine—it was more important to just go with the flow of the ever-evolving upstream toolchain. In fact, I ended up installing Ubuntu LTS on my main workstations for precisely this reason, which has worked fine; I still get Guix in user-space, which is better than nothing.

Since then, though, Guix has grown to the point that it’s easier to create an environment that can run a complicated upstream source management project like V8’s. This is mainly guix shell in the --container --emulate-fhs mode. This article is a step-by-step for how to get started with V8 hacking using Guix.

get the code

You would think this would be the easy part: just git clone the V8 source. But no, the build wants a number of other Google-hosted dependencies to be vendored into the source tree. To perform the initial fetch for those dependencies and to keep them up to date, you use helpers from the depot_tools project. You also use depot_tools to submit patches to code review.

When you live in the Guix world, you might be tempted to look into what depot_tools actually does, and to replicate its functionality in a more minimal, Guix-like way. Which, sure, perhaps this is a good approach for packaging V8 or Chromium or something, but when you want to work on V8, you need to learn some humility and just go with the flow. (It’s hard for the kind of person that uses Guix. But it’s what you do.)

You can make some small adaptations, though. depot_tools is mostly written in Python, and it actually bundles its own virtualenv support for using a specific python version. This isn’t strictly needed, so we can set the funny environment variable VPYTHON_BYPASS="manually managed python not supported by chrome operations" to just use python from the environment.

Sometimes depot_tools will want to run some prebuilt binaries. Usually on Guix this is anathema—we always build from source—but there’s only so much time in the day and the build system is not our circus, not our monkeys. So we get Guix to set up the environment using a container in --emulate-fhs mode; this lets us run third-party pre-built binaries. Note, these binaries are indeed free software! We can run them just fine if we trust Google, which you have to when working on V8.

no, really, get the code

Enough with the introduction. The first thing to do is to check out depot_tools.

mkdir src
cd src
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git

I’m assuming you have git in your Guix environment already.

Then you need to initialize depot_tools. For that you run a python script, which needs to run other binaries – so we need to make a specific environment in which it can run. This starts with a manifest of packages, conventionally placed in a file named manifest.scm in the project’s working directory; you don’t have one yet, though, so you can just write it into v8.scm or something, anywhere:

(use-modules (guix packages)
             (gnu packages gcc))

(concatenate-manifests
 (list (specifications->manifest
        '("bash" "binutils" "clang-toolchain" "coreutils" "diffutils"
          "findutils" "git" "glib" "glibc" "glibc-locales" "grep"
          "less" "ld-gold-wrapper" "make" "nss-certs" "nss-mdns"
          "openssh" "patch" "pkg-config" "procps" "python"
          "python-google-api-client" "python-httplib2" "python-pyparsing"
          "python-requests" "python-tzdata" "sed" "tar" "wget"
          "which" "xz"))
       (packages->manifest
        `((,gcc "lib")))))

Then, you guix shell -m v8.scm. But you actually do more than that, because we need to set up a container so that we can expose a standard /lib, /bin, and so on:

guix shell --container --network \
    --share=$XDG_RUNTIME_DIR --share=$HOME \
    --preserve=TERM --preserve=SSH_AUTH_SOCK \
    --emulate-fhs \
    --manifest=v8.scm

Let’s go through these options one by one.

  • --container: This is what lets us run pre-built binaries, because it uses Linux namespaces to remap the composed packages to /bin, /lib, and so on.

  • --network: Depot tools are going to want to download things, so we give them net access.

  • --share: By default, the container shares the current working directory with the “host”. But we need not only the checkout for V8 but also the sibling checkout for depot tools (more on this in a minute); let’s just share the whole home directory. Also, we share the /run/user/1000 directory, which is $XDG_RUNTIME_DIR, which lets us access the SSH agent, so we can check out over SSH.

  • --preserve: By default, the container gets a pruned environment. This lets us pass some environment variables through.

  • --emulate-fhs: The crucial piece that lets us bridge the gap between Guix and the world.

  • --manifest: Here we specify the list of packages to use when composing the environment.

We can use short arguments to make this a bit less verbose:

guix shell -CNF --share=$XDG_RUNTIME_DIR --share=$HOME \
    -ETERM -ESSH_AUTH_SOCK -m manifest.scm

I would like it if all of these arguments could somehow be optional, so that a bare guix shell invocation in this directory would just apply them. Perhaps some day.

Running guix shell like this drops you into a terminal. So let’s initialize depot tools:

cd $HOME/src
export VPYTHON_BYPASS="manually managed python not supported by chrome operations"
export PATH=$HOME/src/depot_tools:$PATH
export SSL_CERT_DIR=/etc/ssl/certs/
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
gclient

This should download a bunch of things; I don’t know what. But at this point we’re ready to go:

fetch v8

This checks out V8, which is about 1.3 GB, and then probably about as much again in dependencies.

build v8

You can build V8 directly:

# note caveat below!
cd v8
tools/dev/gm.py x64.release

This will build fine... and then fail to link. The precise reason is obscure to me: it would seem that by default, V8 uses a whole Debian sysroot for Some Noble Purpose, and ends up linking against it. But it compiles against system glibc, which seems to have replaced fcntl64 with a versioned symbol, or some such nonsense. It smells like V8 built against a too-new glibc and then failed trying to link to an old glibc.

To fix this, you need to go into the args.gn that was generated in out/x64.release and then add use_sysroot = false, so that it links to system glibc instead of the downloaded one.

echo 'use_sysroot = false' >> out/x64.release/args.gn
tools/dev/gm.py x64.release

You probably want to put the commands needed to set up your environment into some shell scripts. For Guix you could make guix-env:

#!/bin/sh
guix shell -CNF --share=$XDG_RUNTIME_DIR --share=$HOME \
    -ETERM -ESSH_AUTH_SOCK -m manifest.scm -- "$@"

Then inside the container you need to set the PATH and such, so we could put this into the V8 checkout as env:

#!/bin/sh
# Look for depot_tools in sibling directory.
depot_tools=`cd $(dirname $0)/../depot_tools && pwd`
export PATH=$depot_tools:$PATH
export VPYTHON_BYPASS="manually managed python not supported by chrome operations"
export SSL_CERT_DIR=/etc/ssl/certs/
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
exec "$@"

This way you can run ./guix-env ./env tools/dev/gm.py x64.release and not have to “enter” the container so much.

notes

This all works fine enough, but I do have some meta-reflections.

I would prefer it if I didn’t have to use containers, for two main reasons. One is that the resulting build artifacts have to be run in the container, because they are dynamically linked to e.g. /lib, at least for the ELF loader. It would be better if I could run them on the host (with the host debugger, for example). Using Guix to make the container is better than e.g. docker, though, because I can ensure that the same tools are available in the guest as I use on the host. But also, I don’t like adding “modes” to my terminals: are you in or out of this or that environment. Being in a container is not like being in a vanilla guix shell, and that’s annoying.

The build process uses many downloaded tools and artifacts, including clang itself. This is a feature, in that I am using the same compiler that colleagues at Google use, which is important. But it’s also annoying and it would be nice if I could choose. (Having the same clang-format though is an absolute requirement.)

There are two tests failing in this configuration, somehow related to time zones. I have no idea why, but I just ignore them.

If the build system were any weirder, I would think harder about maybe using Docker or something like that. Colleagues point to distrobox as being a useful wrapper. It is annoying though, because such a docker image becomes like a little stateful thing to do sysadmin work on, and I would like to avoid that if I can.

Welp, that’s all for today. Hopefully if you are contemplating installing Guix as your operating system (rather than just in user-space), this can give you a bit more information as to what it might mean when working on third-party projects. Happy hacking and until next time!

Jan Lukas Gernert: Newsflash 3.2

Mon, 25/03/2024 - 11:34pm

Another small feature update just in time for GNOME 46.

Subscribe via CLI

Let’s start with something that already went into version 3.1.4: you can subscribe to feeds via CLI now. The idea is that this is a building block for seamlessly subscribing to websites from within a browser or something similar. Let’s see how this develops further.

Scrape all new Articles of a Feed

If Gitlab upvotes are a valid metric, this feature was the most requested one so far. Feed settings gained a new toggle to scrape the content of new articles. The sync will complete normally, and in a second operation Newsflash tries to download the full content of all new articles in the background.

This is especially useful when there is no permanent internet connection. Now you can let Newsflash sync & download content while on WiFi and read the complete articles later even without an internet connection.

Update Feed URL

The local RSS backend gained the ability to update the URL where the feed is located (see the screenshot above). Sadly none of the other services support this via their APIs as far as I know.

Clean Database

The preferences dialog gained the ability to drop all old articles and “vacuum” the database right away. Depending on the size of the database file this can take a few seconds, which is why it is not yet done in the background during normal operation.

(btw: I’m not sure if I should keep the button as “destructive-action”) Internal Refactoring

Just a heads up that a lot of code managing the loading of the article list and keeping track of the displayed article and its state was refactored. If there are any regressions, please let me know.

Profiling

Christian Hergert’s constant stream of profiling blog posts finally got to me. So I fired up sysprof, fully expecting not to be knowledgeable enough to draw any meaningful conclusions from the data. After all, the app is pretty snappy on my machine ™, so any improvements must be hard to find and even harder to solve. But much to my surprise, about 30 minutes later, two absolutely noticeable low-hanging-fruit performance problems were discovered and fixed.

So I encourage everyone to just try profiling your code. You may be surprised what you find.

Adwaita Dialogs & Removing Configurable Shortcuts

Of course this release makes use of the new Adwaita Dialogs. For all the dialogs but one:

Configuring custom keybindings still spawns a new modal window. Multiple overlapping dialogs isn’t the greatest thing in the world. This and another annoying issue made me think about removing the feature from Newsflash completely.

The problem is that all shortcuts need to be disabled whenever the user is about to enter text. Otherwise keybindings consisting of a single letter would trigger instead of that letter being entered as text.

All major feed readers (Feedly, Inoreader, etc.) have a fixed set of cohesive keyboard shortcuts. I’ve been thinking about either having 2-3 shortcut configurations to choose from or just hard-coding keybindings altogether.

I’d like to hear your thoughts. Do you use custom shortcuts? Would you be fine with a well thought out but hard-coded set of shortcuts? Would you prefer to choose from a few pre-defined shorcut configurations? Let me know, and help me find the best keybindings for all the actions that can be triggered via keyboard.

Dave Patrick Caberto: Kooha 2.3 Released!

Sun, 24/03/2024 - 5:29am

Kooha is a simple screen recorder for Linux with a minimal interface. You can simply click the record button without having to configure a bunch of settings.

While we strive to keep Kooha simple, we also want to make it better. This release, composed of over 300 commits, is focused on quality-of-life improvements and bug fixes.

This release includes a refined interface, improved area selection, more informative notifications, and other changes. Read on to learn more about the new features and improvements.

New Features and Improvements

Refined Interface

The main screen now has a more polished look. It now shows the selected format and FPS. This makes it easier to see the current settings at a glance, without having to open the settings window.

Other than that, progress is now shown when flushing the recording. This gives a better indication when encoding or saving is taking longer than expected.

Furthermore, the preferences window is also improved. It is now more descriptive and selecting FPS is now easier with a dropdown menu.

Improved Area Selection

The area selection window is now resizable. You can now resize the window to fit your screen better. Additionally, the previously selected area is now remembered across sessions. This means that if you close Kooha and open it again, the area you selected will be remembered. Other improvements include improved focus handling, sizing fixes, better performance, and a new style.

More Informative Notifications

Record-done notifications now show the duration and size of the recorded video. This is inspired by GNOME Shell screencast notifications.

Moreover, the notification actions now work even when the application is closed.

Other Changes

Besides the mentioned features, this release also includes:

  • Logout and idle are now inhibited while recording.
  • The audio no longer stutters and gets corrupted when recording for a long time.
  • The audio is now recorded in stereo instead of mono when possible.
  • The recordings are no longer deleted when flushing is canceled.
  • Incorrect output video orientation on certain compositors is now fixed.
  • Performance and stability are improved.
Getting Kooha 2.3

Kooha is available on Flathub. You can install it from there, and since all of our code is open-source and can be freely modified and distributed according to the license, you can also download and build it from source.

Closing Words

Thanks to everyone who has supported Kooha, be it through donations, bug reports, translations, or just using it. Your support is what keeps this project going. Enjoy the new release!

Tobias Bernard: Mini GUADEC 2024: We have a Venue!

Fri, 22/03/2024 - 7:51pm

We’ve had a lot of questions from people planning to attend this year’s edition of the Berlin Mini GUADEC from outside Berlin about where it’s going to happen, so they can book accommodation nearby. We have two good news on that front: First, we have secured (pending a few last organizational details) a very cool venue, and second: The venue has a hostel next to it, so there’s the possibility to stay very close by for cheap :)

Come join us at Regenbogenfabrik

The event will happen at Regenbogenfabrik in Kreuzberg (Lausitzerstraße 21a). The venue is a self-organized cultural center with a fascinating history, and consists of, in addition to the event space, a hostel, bike repair and woodworking workshops, and a kindergarten (luckily for us, closed during the GUADEC days).

The courtyard at Regenbogenfabrik

Some of the perks of this venue:

  • Centrally located (a few blocks from Kottbusser Tor)
  • We can stay as late as we want (no being kicked out at 6pm!)
  • Plenty of space for hacking
  • Lots of restaurants, bars, and cafes nearby
  • Right next to the Landwehrkanal and close to Görlitzer Park
  • There’s a ping pong table!

Regenbogenfabrik on Openstreetmap

Stay at the venue

If you’re coming to Berlin from outside and would like to stay close to the venue there’s no better option than staying directly at the venue: We’ve talked to the Regebogenfabrik Hostel, and there’s still somewhere around a dozen spots available during the GUADEC days (in rooms for 2, 3, or 8 people).

Prices range between 20 and 75 Euro per person per night, depending on the size of the room. You can book using the form here (German, but Firefox Translate works well these days :) ).

As the organizing team we don’t have the capacity to get directly involved in booking the accommodations, but we’re in touch with the hostel people and can help with coordination.

Note: If you’re interested in staying at the hostel act fast, because spots are limited. To be sure to get one of the open spots, please book by next Tuesday (March 26th) and mention the codeword “GNOME” so they know to put you in rooms with other GUADEC attendees.

Also, if you’re coming don’t forget to add your name to the attendee list on Hedgedoc, so we know roughly how many people are coming :)

If you have any other questions feel free to join our Matrix room.

See you in Berlin!

Sam Thursfield: Status update, 20/03/2024 – TinySPARQL and Tracker Miners

Wed, 20/03/2024 - 5:59pm

GNOME 46 was just released, and with it come TinySPARQL 3.7 (aka Tracker SPARQL) and Tracker Miners 3.7. Here’s what I’ve been involved with this month in those projects.

Google Summer of Code

It wasn’t my intention to prepare another internship before the last one was even finished. It seems that in GNOME we have fewer projects and mentors than ever – only eight ideas this year, compared to fourteen confirmed projects back in 2020. So I proposed an idea for TinySPARQL, and here we are.

The idea, in brief: I’ve been working a bit with GraphQL recently, which doesn’t live up to the hype, but does have nice query frontends such as GraphQL Playground and graphiql that let you develop and test queries in realtime. This is a screenshot of graphiql.

In TinySPARQL, we have a commandline tool, tracker3 sparql, which can run queries and print the results. This is handy for developing and testing queries independently of the app logic, but it’s only useful if you’re already something of a SPARQL expert.

What if TinySPARQL had a web interface similar to the GraphQL Playground?

Besides running queries and showing the output, this could have example queries, resource browsing, as-you-type error checks, integrated documentation, and more fun things listed in this issue. My hope is this would encourage more folks to play around with the data, running interesting queries, and would help to visualize what you can do with a detailed metadata index of your local content. I think a lot of people see Tracker Miner FS as a black box that does basic string matching, and not the flexible database that it actually is.

Lots of schools teach HTML and JavaScript so this project seems like a great opportunity for an intern to take ownership of and show their skills. Applications are open until 2nd April, and we’ll be running a couple of online meetups later this week (Thursday 21st and/or Friday 22nd March) to help you create a good application. Join the #tracker:gnome.org Matrix room if you’re interested.

By the way, it’s only recently been possible to separate your queries from the rest of your app’s code. I wrote about this here: Standalone SPARQL Queries. The TrackerSparqlStatement class is flexible and fun and you can read your SPARQL statements straight from a GResource file. If you used libtracker-sparql around 1.x you’ll remember a horrible thing named TrackerSparqlBuilder – the query developer experience has come a long way since then.

New security features

There are some new features this cycle thanks to hard work by Carlos. I’ll let him write up the fun parts. One part that’s not much fun, is the increased security protections for tracker-extract. The background here is that tracker-extract uses many different media parsing libraries, and if any one of those libraries shipped by your distro contains a vulnerability, that could potentially be exploited by getting you to download a malicious file which would then be processed by tracker-extract.

We have no evidence that anyone’s ever actually done this. But there was a recent writeup on how it could happen, using a vulnerability in a library named libcue (which nobody is maintaining), including a clever bypass of the existing SECCOMP protection. Carlos did a writeup of this on his blog: On CVE-2023-43641.

With Tracker Miners 3.7, Carlos extended the existing SECCOMP sandbox to cover the entire extractor process rather than just the processing thread, which prevents that theoretical line of attack. And, he added an additional layer of sandboxing using a new kernel API called Landlock, which lets a process block itself from accessing any files except those it specifically needs.
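To illustrate the pattern (this is a generic sketch of the Landlock API, not the actual tracker-extract code, and the directory path is made up), a process restricts itself roughly like this:

#include <fcntl.h>
#include <linux/landlock.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int
main (void)
{
  /* Declare which access types the ruleset handles; anything handled
   * but not explicitly allowed below gets denied. (Error handling
   * omitted for brevity.) */
  struct landlock_ruleset_attr ruleset_attr = {
    .handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE,
  };
  int ruleset_fd = syscall (SYS_landlock_create_ruleset,
                            &ruleset_attr, sizeof (ruleset_attr), 0);

  /* Allow reading files beneath a single directory. */
  struct landlock_path_beneath_attr path_beneath = {
    .allowed_access = LANDLOCK_ACCESS_FS_READ_FILE,
    .parent_fd = open ("/path/to/be/extracted", O_PATH | O_CLOEXEC),
  };
  syscall (SYS_landlock_add_rule, ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
           &path_beneath, 0);

  /* Lock it in: there is no way back for this process after this. */
  prctl (PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
  syscall (SYS_landlock_restrict_self, ruleset_fd, 0);

  return 0;
}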

From my perspective it’s rather draining to help maintain the sandboxing. When it works, nobody notices. When the sandboxing causes issues, we hear about it straight away. And there are plenty of issues! Even the build-time configuration for Landlock seems to need hours of debate.

SECCOMP works by denying access to any kernel APIs except those legitimately needed by the extractor process and the libraries it uses. Linux has 450+ syscalls and counting, and we maintain an explicit allowlist. Any change to GLibc, GIO, GStreamer or any media parsing library may then change what syscall gets used. If an unexpected syscall is called the tracker-extract process is killed with SIGSYS, which gets reported as a crash in just the same way as segfaults caused by programming errors.
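The shape of such an allowlist, sketched with libseccomp (a toy subset; the real tracker-extract list is far longer):

#include <seccomp.h>

static void
install_allowlist (void)
{
  /* Any syscall not on the allowlist raises SIGSYS, which is what
   * gets reported as a crash. (Error handling omitted.) */
  scmp_filter_ctx ctx = seccomp_init (SCMP_ACT_TRAP);

  /* A few syscalls the process legitimately needs... */
  seccomp_rule_add (ctx, SCMP_ACT_ALLOW, SCMP_SYS (read), 0);
  seccomp_rule_add (ctx, SCMP_ACT_ALLOW, SCMP_SYS (write), 0);
  seccomp_rule_add (ctx, SCMP_ACT_ALLOW, SCMP_SYS (mmap), 0);
  seccomp_rule_add (ctx, SCMP_ACT_ALLOW, SCMP_SYS (exit_group), 0);
  /* ...and many more entries like these, one per expected syscall. */

  seccomp_load (ctx);
  seccomp_release (ctx);
}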

It’s draining to support something that can break randomly by things that are out of our control. What else can we do though?

What’s next?

It might seem like openQA testing and desktop search are unrelated, but there is a clear connection.

Making reproducible integration tests for a search engine is a very hard problem. Back last decade I worked on the project’s Gitlab CI setup and “functional tests”. These tests live in the tracker-miners.git source tree, and run the real crawler and extractor, testing that we can create a file named hello.txt, wait for it to be indexed, and search for its contents. Quite a step forwards from the unreproducible “works on my machine” testing that came before, but not representative of real use cases.

Real GNOME users do not have a single file in their home dir named hello.txt. Rather they have GBs or TBs of content to be indexed, and they have expectations about what constitutes the “best match” for a given search term.

I’m not interested in working to solve this kind of thing until we can build regression tests so that things don’t just work, but keep working in the long term. Hence, the work-in-progress gnome_search test for openQA, and the example-desktop-content repo. This is at the “working prototype” stage, and is now ready for some deeper thinking about what specific scenarios we want to test.

Some other things that may or may not happen next cycle in desktop search, depending on whether people care to help push them forwards:

  • beginning the rename: this won’t happen all at once, but we want to start calling the database TinySPARQL, and the indexer something else, still to be decided. (Ideas welcome!)
  • a ‘limiter’ to detect when a directory contains so much content that the indexer would burn significant CPU and IO resource trying to index everything up front (which requires corresponding UI changes so that there’s a way to “opt in” to indexing such locations on demand)
  • indexing the whole $HOME directory (which I personally don’t want to land without the ‘limiter’ in place, but let’s see)

One thing is certain: next month things are going to slow down for me… I’m on holiday for two full weeks over Easter, spring is coming, and I plan to spend most of my time relaxing in a hammock. Hopefully we’ve sown a lot of seeds this month which will soon turn into flowers.

Ondřej Holý: What’s new in GVfs for GNOME 46?

Wed, 20/03/2024 - 8:05am

It has been 3 years since my last post with release news for GVfs. This is mainly because previous releases were more or less just bug fixes. In contrast, GVfs 1.54 comes with two new backends. Let’s take a look at them.

OneDrive

One of the backends adds OneDrive support thanks to Jan-Michael Brummer. This requires setting up a Microsoft 365 account through the Online Accounts panel in the Settings application. Then the OneDrive share can be accessed from the sidebar of the Files application.

However, creating the account is a bit tricky now. You need to register on the Microsoft Entra portal to get a client ID. The specific steps can be found in the gnome-online-accounts#308 issue. Efforts are underway to register a client ID for GNOME, so this step will soon be unnecessary.

WS-Discovery

The other backend brings WS-Discovery support. It automatically discovers the shared SMB folders of the Windows devices available on your network. You can find them in the Other Locations view of the Files application. This has not worked since the NT1 protocol was deprecated. For more information on this topic, see my previous post.

You won’t find the Windows Network folder in the Other Locations view, all the discovered shares are directly listed in the Networks section now.

Finally, I would like to thank all the GVfs contributors. Let me know in the comments if you like the new backends. I hope the next releases will also bring some great news.

Arun Raghavan: Asymptotic: A 2023 Review

Tue, 19/03/2024 - 3:54pm

It’s been a busy few several months, but now that we have some breathing room, I wanted to take stock of what we have done over the last year or so.

This is a good thing for most people and companies to do of course, but being a scrappy, (questionably) young organisation, it’s doubly important for us to introspect. This allows us to both recognise our achievements and ensure that we are accomplishing what we have set out to do.

One thing that is clear to me is that we have been lagging in writing about some of the interesting things that we have had the opportunity to work on, so you can expect to see some more posts expanding on what you find below, as well as some of the newer work that we have begun.

(note: I write about our open source contributions below, but needless to say, none of it is possible without the collaboration, input, and reviews of members of the community)

WHIP/WHEP client and server for GStreamer

If you’re in the WebRTC world, you likely have not missed the excitement around standardisation of HTTP-based signalling protocols, culminating in the WHIP and WHEP specifications.

Tarun has been driving our client and server implementations for both these protocols, and in the process has been refactoring some of the webrtcsink and webrtcsrc code to make it easier to add more signaller implementations. You can find out more about this work in his talk at GstConf 2023 and we’ll be writing more about the ongoing effort here as well.

Low-latency embedded audio with PipeWire

Some of our work involves implementing a framework for very low-latency audio processing on an embedded device. PipeWire is a good fit for this sort of application, but we have had to implement a couple of features to make it work.

It turns out that doing timer-based scheduling can be more CPU intensive than ALSA period interrupts at low latencies, so we implemented an IRQ-based scheduling mode for PipeWire. This is now used by default when a pro-audio profile is selected for an ALSA device.

In addition to this, we also implemented rate adaptation for USB gadget devices using the USB Audio Class “feedback control” mechanism. This allows USB gadget devices to adapt their playback/capture rates to the graph’s rate without having to perform resampling on the device, saving valuable CPU and latency.

There is likely still some room to optimise things, so expect to hear more on this front soon.

Compress offload in PipeWire

Sanchayan has written about the work we did to add support in PipeWire for offloading compressed audio. This is something we explored in PulseAudio (there’s even an implementation out there), but it’s a testament to the PipeWire design that we were able to get this done without any protocol changes.

This should be useful in various embedded devices that have both the hardware and firmware to make use of this power-saving feature.

GStreamer LC3 encoder and decoder

Tarun wrote a GStreamer plugin implementing the LC3 codec using the liblc3 library. This is the primary codec for the next generation of wireless audio devices implementing the Bluetooth LE Audio specification. The plugin is upstream and can already be used to encode and decode LC3 data, but will likely become more useful once the existing Bluetooth plugins that talk to Bluetooth devices gain LE Audio support.
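As a rough sketch of what using it could look like (the lc3enc/lc3dec element names here are my assumption; check gst-inspect-1.0 for the actual names and caps):

#include <gst/gst.h>

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  /* Encode a test tone to LC3 and decode it straight back. */
  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch (
      "audiotestsrc num-buffers=300 ! audioconvert ! audioresample ! "
      "lc3enc ! lc3dec ! audioconvert ! autoaudiosink",
      &error);
  if (!pipeline)
    g_error ("Failed to build pipeline: %s", error->message);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Block until the stream finishes or errors out. */
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  if (msg)
    gst_message_unref (msg);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (bus);
  gst_object_unref (pipeline);
  return 0;
}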

QUIC plugins for GStreamer

Sanchayan implemented a QUIC source and sink plugin in Rust, allowing us to start experimenting with the next generation of network transports. For the curious, the plugins sit on top of the Quinn implementation of the QUIC protocol.

There is a merge request open that should land soon, and we’re already seeing folks using these plugins.

AWS S3 plugins

We’ve been fleshing out the AWS S3 plugins over the years, and we’ve added a new awss3putobjectsink. This provides a better way to push small or sparse data to S3 (subtitles, for example), without potentially losing data in case of a pipeline crash.

We’ll also be expecting this to look a little more like multifilesink, allowing us to arbitrary split up data and write to S3 directly as multiple objects.

Update to webrtc-audio-processing

We also updated the webrtc-audio-processing library, based on more recent upstream libwebrtc. This is one of those things that becomes surprisingly hard as you get into it — packaging an API-unstable library correctly, while supporting a plethora of operating system and architecture combinations.

Clients

We can’t always speak publicly of the work we are doing with our clients, but there have been a few interesting developments we can (and have spoken about).

Both Sanchayan and I spoke a bit about our work with the WebRTC-as-a-service provider Daily. My talk at the GStreamer Conference was a summary of what I wrote about previously: what we learned while building Daily’s live streaming, recording, and other backend services. We worked with other clients during the year and had similar experiences.

Sanchayan spoke about the interesting approach to building SIP support that we took for Daily. This was a pretty fun project, allowing us to build a modern server-side SIP client with GStreamer and SIP.js.

An ongoing project we are working on is building AES67 support using GStreamer for FreeSWITCH, which essentially allows bridging low-latency network audio equipment with existing SIP and related infrastructure.

As you might have noticed from previous sections, we are also working on a low-latency audio appliance using PipeWire.

Retrospective

All in all, we’ve had a reasonably productive 2023. There are things I know we can do better in our upstream efforts to help move merge requests and issues, and I hope to address this in 2024.

We have ideas for larger projects that we would like to take on. Some of these we might be able to find clients who would be willing to pay for. For the ideas that we think are useful but may not find any funding, we will continue to spend our spare time to push forward.

If you made it this far, thank you, and look out for more updates!

Christian Schaller: PipeWire camera handling is now happening!

Fri, 15/03/2024 - 5:30pm

We hit a major milestone this week, with the long-worked-on adoption of PipeWire camera support finally starting to land!

Not long ago Firefox was released with experimental PipeWire camera support thanks to the great work by Jan Grulich.

Then this week OBS Studio shipped with PipeWire camera support thanks to the great work of Georges Stavracas, who cleaned up the patches and pushed to get them merged, based on earlier work by himself, Wim Taymans, and Columbarius. This means we now have two major applications out there that can use PipeWire for camera handling, and thus two applications whose video streams can be interacted with through patchbay applications like Helvum and qpwgraph.
These applications are important and central enough that having them use PipeWire is in itself useful, but they will now also provide two examples of how to do it for application developers looking at how to add PipeWire camera support to their own applications; there is no better documentation than working code.

The PipeWire support is also paired with camera portal support. The use of the portal also means we are getting closer to being able to fully sandbox media applications in Flatpaks, which is an important goal in itself. Which reminds me: to test out the new PipeWire support, be sure to grab the official OBS Studio Flatpak from Flathub.

PipeWire camera handling with OBS Studio, Firefox and Helvum.


Let me explain what is going on in the screenshot above, as it is a lot. First of all you see Helvum there on the right showing all the connections made through PipeWire, both the audio and, in yellow, the video. So you can see how my Logitech BRIO camera is feeding a camera video stream into both OBS Studio and Firefox. You also see my Magewell HDMI capture card feeding a video stream into OBS Studio, and finally gnome-shell providing a screen capture feed that is being fed into OBS Studio. On the left you see, at the top, Firefox running their WebRTC test app capturing my video; just below that you see the OBS Studio image with the direct camera feed in the top left corner, the screencast of Firefox just below it, and finally the ‘no signal’ image from my HDMI capture card, since I had no HDMI device connected to it as I was testing this.

For those wondering, work is also underway to bring this into the Chromium and Google Chrome browsers, where Michael Olbrich from Pengutronix has been pushing to get patches written and merged. He did a talk about this work at FOSDEM last year, as you can see from these slides, with this patch being the last step to get this working there too.

The move to PipeWire also prepared us for the new generation of MIPI cameras being rolled out in new laptops, and helps push work on supporting those cameras towards libcamera, the new library for dealing with the new generation of complex cameras. This of course ties well into the work that Hans de Goede and Kate Hsuan have been doing recently, along with Bryan O’Donoghue from Linaro, on providing an open source driver for MIPI cameras, and of course the incredible work by Laurent Pinchart and Kieran Bingham from Ideas on Board on libcamera itself.

The PipeWire support is of course fresh, and I am sure we will find bugs and corner cases that need fixing as more people test out the functionality in both Firefox and OBS Studio; there are also some interface annoyances we are working to resolve. For instance, since PipeWire supports both V4L2 and libcamera as backends, you do currently get double entries in your selection dialogs for most of your cameras. WirePlumber has implemented de-duplication code which will ensure only the libcamera listing shows for cameras supported by both V4L2 and libcamera, but it is only part of the development version of WirePlumber and thus will land in Fedora Workstation 40, so until that is out you will have to deal with the duplicate options.

Camera selection dialog


We are also trying to figure out how to better deal with the infrared cameras that are part of many modern webcams. Obviously you usually do not want to use an IR camera for your video calls, so we need to figure out the best way to identify them and ensure they are clearly marked and not used by default.

Another good recent PipeWire tidbit: with the PipeWire 1.0.4 release, PipeWire maintainer Wim Taymans also fixed up the FireWire FFADO support. The FFADO support had been in there for some time, but after seeing Venn Stone do some thorough tests and find issues, we decided it was time to bite the bullet and buy some second-hand FireWire hardware so Wim would be able to test and verify it himself.

Focusrite FireWire device

Once the Focusrite device I bought landed at Wim’s house he got to work, cleaned up the FFADO support, and made it both work and perform well.
For those unaware, FFADO is a way to use FireWire devices without going through ALSA, and is popular among pro-audio folks because it gives lower latencies. FireWire is of course a relatively old technology at this point, but the audio equipment is still great and many audio engineers have a lot of these devices, so with this fixed you can plop a FireWire PCI card into your PC and suddenly all those old FireWire devices get a new lease on life on your Linux system. And you can buy these devices on places like eBay or Facebook Marketplace for a fraction of their original cost. In some sense this demonstrates the same strength of PipeWire as the libcamera support: in the libcamera case it allows Linux applications a way to smoothly transition to a new generation of hardware, and in this FireWire case it allows Linux applications to keep using older hardware with new applications.

So all in all it’s been a great few weeks for PipeWire and for Linux audio AND video, and if you are an application maintainer, be sure to look at how you can add PipeWire camera support to your application, and of course get that application packaged up as a Flatpak for people using Fedora Workstation and other distributions to consume.

Alice Mikhaylenko: Libadwaita 1.5

Fri, 15/03/2024 - 4:58pm

Well, another cycle has passed.

This one was fairly slow, but nevertheless has a major new feature.

Adaptive Dialogs

The biggest feature this time is the new dialog widgetry.

Traditionally, dialogs have been separate windows. While this approach generally works, we never figured out how to reasonably support that on mobile. There was a downstream patch for auto-maximizing dialogs, which in turn required them to be resizable, which is not great on desktop, and the patch was hacky and never really supported upstream.

Another problem is close buttons – we want to keep them in dialogs instead of needing to go to the overview to close every dialog, and that’s why mobile gnome-shell doesn’t hide close buttons at all at the moment. Ideally we want to keep them in dialogs, but be able to remove them everywhere else.

While it would be possible to have the shell present dialogs differently, another approach is to move them to the client instead. That’s not a new approach; here are some existing examples:

This has both upsides and downsides. One upside is that the toolkit/app has much more control over them. For example, it’s very easy to ensure their size doesn’t exceed the parent window. While this is possible with windows (AdwMessageDialog does this), it’s hacky and can still break fairly easily with e.g. maximize – in fact, I’m not confident it works across compositors and in both Wayland and X11.

Having dialogs not exceed the parent’s size means not needing to limit their size quite so aggressively – previously it was needed so that the dialog doesn’t get ridiculously large on top of a small window.

The dimming behind the dialog can also vary between light and dark styles – shell cannot do that because it doesn’t know if this particular window is light or dark, only what the whole system prefers.

In future this should also allow us to support per-tab dialogs. For apps like web browsers, a background tab spawning a dialog that takes over the whole window is not great.

Meanwhile the main downside is the same thing that was listed in the upsides: these dialogs cannot exceed the parent window’s size. Sometimes that’s still needed, e.g. if the parent window is really small.

Bottom Sheets

So, how does that help on mobile? Well, aside from just implementing the existing size constraints of AdwMessageDialog more cleanly, it allows presenting these dialogs as bottom sheets on mobile, instead of centered floating sheets.

A previous design presented dialogs as pages with back buttons, but that had many other problems, especially in small windows on desktop. For example, what happens if you close the window? A dialog and a “regular” subpage would look identical, so you’d probably expect the close button to close the entire window? But what if it’s floating above a larger window?

Bottom sheets avoid this issue – you still see the parent window with its own close button, so it’s obvious that they are closed separately – while still being allowed to take full width like a subpage.

They can also be swiped down, though because of GTK limitations this does not work together with scrolling content. It’s still possible to swipe down from header bar or the empty space above the sheet.

And the fact they are attached to the bottom edge makes them easier to reach on huge phones.

Meanwhile, AdwHeaderBar always shows a close button within dialogs, regardless of the system layout. The only hint it takes from the system is whether to display the close button on the right or left side.

API

For the most part they are used similarly to GtkWindow. The main differences are with presenting and closing dialogs.

The :transient-for property has been replaced with a parameter in adw_dialog_present(). It also doesn’t necessarily take a window anymore, but can accept any widget within that window as well. Currently it just fetches the root widget, but once we have per-tab dialogs, that can be controlled with a simple flag instead of needing a new variant of adw_dialog_present() that would take a tab page instead of a window.

The ::close-request signal has been replaced as well. Because the dialogs can be swiped down on mobile, we need to know if they can be closed before the gesture starts. So, instead there’s a :can-close property that apps set ahead of time if there’s unsaved data or some other reason to prevent closing.

For close confirmation, there’s a ::close-attempt signal, which will be fired when trying to close a dialog using a close button or a shortcut while :can-close is set to FALSE (or calling adw_dialog_close()). For actual closing, there’s ::closed instead.

Finally, adw_dialog_force_close() closes the dialog while ignoring :can-close. It can be used to close the dialog after confirmation without needing to fiddle with :can-close or repeat ::close-attempt emissions.
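Putting the pieces together, typical usage looks roughly like this (a minimal sketch; a real close-attempt handler would normally ask the user before force-closing):

#include <adwaita.h>

static void
on_close_attempt (AdwDialog *dialog, gpointer user_data)
{
  /* Normally you’d show a confirmation here and only close if the
   * user agrees; force_close ignores :can-close. */
  adw_dialog_force_close (dialog);
}

static void
show_dialog (GtkWidget *parent)
{
  AdwDialog *dialog = adw_dialog_new ();

  adw_dialog_set_title (dialog, "Example");

  /* Pretend there’s unsaved state, so plain closing is blocked. */
  adw_dialog_set_can_close (dialog, FALSE);
  g_signal_connect (dialog, "close-attempt",
                    G_CALLBACK (on_close_attempt), NULL);

  /* Present on whatever window contains `parent`; any widget
   * within the window is accepted. */
  adw_dialog_present (dialog, parent);
}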

If this works well, AdwWindow may have something similar in future.

The rest is fairly straightforward and is modelled after GtkWindow. See AdwDialog docs and migration guide for more details.

Since AdwPreferencesWindow and other widgets can’t be ported to the new dialogs without a significant API break, they have been replaced: AdwPreferencesWindow by AdwPreferencesDialog, AdwMessageDialog by AdwAlertDialog, and AdwAboutWindow by AdwAboutDialog.

For the most part they are identical, with a few differences:

  • AdwPreferencesDialog has search disabled by default, and gets rid of deprecated subpage API
  • AdwAlertDialog can scroll contents, so apps that add their own scrolled windows may want to remove them

Since the new widgets landed right at the end of the cycle, the old widgets are not deprecated yet. However, they will be deprecated next cycle, so it’s recommended to migrate your apps anyway.

Standalone bottom sheets (like in audio players) are not available yet either, but will be in future.

Esc to Close

Traditionally, dialogs were done via GtkDialog, which handled this automatically. But for the last few years apps have been steadily moving away from GtkDialog, and by now it’s deprecated. While that’s not really a problem on its own, one thing GtkDialog was doing automatically that custom dialogs don’t is closing when pressing Esc. While it’s pretty easy to add that manually, a lot of apps forget to do so.
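For reference, the manual version in plain GTK4 is a few lines with a shortcut controller (a sketch that should drop into any custom dialog window):

#include <gtk/gtk.h>

/* Make Escape activate the built-in window.close action. */
static void
add_esc_to_close (GtkWindow *window)
{
  GtkEventController *controller = gtk_shortcut_controller_new ();

  gtk_shortcut_controller_add_shortcut (
      GTK_SHORTCUT_CONTROLLER (controller),
      gtk_shortcut_new (gtk_keyval_trigger_new (GDK_KEY_Escape, 0),
                        gtk_named_action_new ("window.close")));

  gtk_widget_add_controller (GTK_WIDGET (window), controller);
}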

But since we have dedicated dialog API again, Esc to close is once again automatic.

What about standalone dialogs?

Some dialogs don’t have a parent window. Those are still presented as a window. Note that it still doesn’t work well on mobile: while there will be a close button, the sizing will work just as badly as before, so it’s recommended to avoid them.

Dialogs will also be presented as a window if you try to add them to a parent that can’t host dialogs (anything that’s not an AdwWindow or AdwApplicationWindow), or if the parent is not resizable. The reason for the last one is to accommodate apps like Emblem, which has a small non-resizable window where dialogs won’t fully fit, and since it’s non-resizable, it doesn’t work on mobile anyway.

What about “Attach Modal Dialogs”?

Since we have the window-backed mode, it would be fairly easy to support that preference… except there’s no way to read it from sandboxed apps.

What about portals?

This approach obviously doesn’t work for portals, since they are in a separate process. We do have a plan for them, involving a private protocol in mutter, but it didn’t make it for 46. So, next time.

What about GTK built-in dialogs?

Those will be replaced as well, but it takes time. For now, yes, GtkShortcutsWindow etc. won’t match other dialogs.

Other Changes

As usual, there are some smaller changes.

As always, thanks to all the contributors who helped to make this release happen.

Marcus Lundblad: Maps and GNOME 46

Pre, 15/03/2024 - 1:35md

It's that time again, a new GNOME release is just around the corner.

The news in Maps for GNOME 46

A lot of the new things we’ve been working on for the 46 release have already been covered, but here are a few recaps.

The new map style

The map style used for the vector-based, client-side rendered map (still considered experimental in 46) has been switched over to our new “GNOME-themed” style, which also supports a dark mode (enabled when the global dark mode is enabled).

The vector map still needs to be explicitly enabled via the “layers menu” (the second headerbar button from the left). This also requires the backing installation of libshumate to be built with vector renderer support (which is the case when using the Flatpak from Flathub; libshumate also defaults to building the vector renderer as of the 1.2.0 release, so distributions should likely have it enabled in their 46 installations).

The current plan is leaning towards flipping it on by default after the 46 release, so by 47 the old raster tiles from openstreetmap.org will probably be retired.

Also, icons on the map (such as POIs) are now directly clickable. And labels should be localized to the user’s language (when the appropriate language tags are available in the OpenStreetMap data).


Other visual improvements

For 46 the zoom control buttons have been revamped (again) and put in the lower corner (as also shown in the screenshots above).

The pin used to mark places selected from search results, and other things like pin-pointed locations in GeoJSON files, has gained a new modernized design by Jakub Steiner.


The dialog for adding an OpenStreetMap account to edit POIs gained a refresh sporting the new libadwaita dialog and widgets by Felipe Kinoshita.

Also, information about which floor a place is located on is now shown in the place bubbles when available. This can be useful for finding your way around, for example, big shopping malls and the like (this was an idea that came up when looking for a café in a galleria in Riga last summer…).


The favorites menu has also gotten a revamp. Instead of just showing a greyed-out inactive button when there are no favored places, it now has an “empty state” hinting at the ability to “star” places.


And favorites can be removed directly from the list without having to open them (and animate to that place to show the bubble).


Looking further on

For the next cycle, aside from continuing the refinements to the new map style and making the vector map the main thing, another cool project that was initiated during FOSDEM in February has caught my attention: Transitous.

Transitous aims to set up a free and open public transit routing service: https://github.com/public-transport/transitous

It is using the MOTIS project (https://github.com/motis-project/motis) as the backend, with a crowd-sourcing approach to collecting data feeds for timetable data.
The routing can already be tested out at https://transitous.org. Currently it only handles “station to station” routing, so there is not yet support for walking instructions.
Also, unlike the current public transit plugin support we have in Maps, with Transitous you would be able to do cross-border planning (utilizing timetables from different data feeds).
When it becomes a bit more mature we should make use of it in Maps ☺.
So this is another area to help out, by creating PRs adding transit schedule feeds for your local area; that could potentially benefit both Maps and other FOSS projects (such as KDE Itinerary).
Problems ahead

And now to something of a problem.

The location service backend that we are using, GeoClue (used not just by Maps, but also by other parts of the stack like Weather and automatic timezone handling), has been relying on Mozilla’s location service API (MLS). This will unfortunately be retired (https://github.com/mozilla/ichnaea/issues/2065), so there will be a need to come up with alternative solutions (https://gitlab.freedesktop.org/geoclue/geoclue/-/issues/186). Maybe, in the worst case, we’d have to disable showing the current location in Maps unless the device has an actual GPS unit.

Matthew Garrett: Digital forgeries are hard

Thu, 14/03/2024 - 10:11am
Closing arguments in the trial between various people and Craig Wright over whether he's Satoshi Nakamoto are wrapping up today, amongst a bewildering array of presented evidence. But one utterly astonishing aspect of this lawsuit is that expert witnesses for both sides agreed that much of the digital evidence provided by Craig Wright was unreliable in one way or another, generally including indications that it wasn't produced at the point in time it claimed to be. And it's fascinating reading through the subtle (and, in some cases, not so subtle) ways that that's revealed.

One of the pieces of evidence entered is screenshots of data from Mind Your Own Business, a business management product that's been around for some time. Craig Wright relied on screenshots of various entries from this product to support his claims around having controlled a meaningful number of bitcoin before he was publicly linked to being Satoshi. If these were authentic then they'd be strong evidence linking him to the mining of coins before Bitcoin's public availability. Unfortunately the screenshots themselves weren't contemporary - the metadata shows them being created in 2020. This wouldn't fundamentally be a problem (it's entirely reasonable to create new screenshots of old material), as long as it's possible to establish that the material shown in the screenshots was created at that point. Sadly, well.

One part of the disclosed information was an email that contained a zip file that contained a raw database in the format used by MYOB. Importing that into the tool allowed an audit record to be extracted - this record showed that the relevant entries had been added to the database in 2020, shortly before the screenshots were created. This was, obviously, not strong evidence that Craig had held Bitcoin in 2009. This evidence was reported, and was responded to with a couple of additional databases that had an audit trail that was consistent with the dates in the records in question. Well, partially. The audit record included session data, showing an administrator logging into the database in 2011 and then, uh, logging out in 2023, which is rather more consistent with someone changing their system clock to 2011 to create an entry, and switching it back to present day before logging out. In addition, the audit log included fields that didn't exist in versions of the product released before 2016, strongly suggesting that the entries dated 2009-2011 were created in software released after 2016. And even worse, the order of insertions into the database didn't line up with calendar time - an entry dated before another entry may appear in the database afterwards, indicating that it was created later. But even more obvious? The database schema used for these old entries corresponded to a version of the software released in 2023.

This is all consistent with the idea that these records were created after the fact and backdated to 2009-2011, and that after this evidence was made available further evidence was created and backdated to obfuscate that. In an unusual turn of events, during the trial Craig Wright introduced further evidence in the form of a chain of emails to his former lawyers that indicated he had provided them with login details to his MYOB instance in 2019 - before the metadata associated with the screenshots. The implication isn't entirely clear, but it suggests that either they had an opportunity to examine this data before the metadata suggests it was created, or that they faked the data? So, well, the obvious thing happened, and his former lawyers were asked whether they received these emails. The chain consisted of three emails, two of which they confirmed they'd received. And they received a third email in the chain, but it was different to the one entered in evidence. And, uh, weirdly, they'd received a copy of the email that was submitted - but they'd received it a few days earlier. In 2024.

And again, the forensic evidence is helpful here! It turns out that the email client used associates a timestamp with any attachments, which in this case included an image in the email footer - and the mysterious time travelling email had a timestamp in 2024, not 2019. This was created by the client, so was consistent with the email having been sent in 2024, not being sent in 2019 and somehow getting stuck somewhere before delivery. The date header indicates 2019, as do encoded timestamps in the MIME headers - consistent with the mail being sent by a computer with the clock set to 2019.
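As a rough illustration of this kind of cross-check, here's a minimal Python sketch using the standard library's email module, run over a hypothetical message.eml. Which headers and attachment timestamps actually exist varies by client; the RFC 2183 creation-date parameter shown here is just one place a sending machine's clock can leak through.

import email
from email import policy
from email.utils import parsedate_to_datetime

with open("message.eml", "rb") as f:
    msg = email.message_from_binary_file(f, policy=policy.default)

# The Date header records whatever the sending machine's clock claimed.
print("Date header:", parsedate_to_datetime(msg["Date"]))

# Some clients also stamp timestamps onto attachments, e.g. the RFC 2183
# creation-date / modification-date parameters on Content-Disposition.
# Inconsistencies between these and the Date header are exactly the kind
# of anachronism described above.
for part in msg.iter_attachments():
    for param in ("creation-date", "modification-date"):
        value = part.get_param(param, header="content-disposition")
        if value:
            print(part.get_filename(), param, "=",
                  parsedate_to_datetime(value))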

But there's a very weird difference between the copy of the email that was submitted in evidence and the copy that was located afterwards! The former contained a header inserted by gmail with a 2019 timestamp, while the latter had a 2024 timestamp. Is there a way to determine which of these could be the truth? It turns out there is! The format of that header changed in 2022, and the version in the email is the new version. The version with the 2019 timestamp is anachronistic - the format simply doesn't match the header that gmail would have introduced in 2019, suggesting that an email sent in 2022 or later was modified to include a timestamp of 2019.

This is by no means the only indication that Craig Wright's evidence may be misleading (there's the whole argument over whether the Bitcoin white paper was written in LaTeX, when the general consensus is that it was written in OpenOffice, given that's what the metadata claims), but it's a lovely example of a more general issue.

Our technology chains are complicated. So many moving parts end up influencing the content of the data we generate, and those parts develop over time. It's fantastically difficult to generate an artifact now that precisely corresponds to how it would have looked in the past, even if we go to the effort of installing an old OS on an old PC and setting the clock appropriately (are you sure you're going to be able to mimic an entirely period-appropriate patch level?). Even the version of the font you use in a document may indicate it's anachronistic. I'm pretty good at computers and I no longer have any belief I could fake an old document.

(References: this Dropbox, under "Expert reports", "Patrick Madden". Initial MYOB data is in "Appendix PM7", further analysis is in "Appendix PM42", email analysis is "Sixth Expert Report of Mr Patrick Madden")


Martín Abente Lahaye: Gameeky 0.6.0

Mër, 13/03/2024 - 5:54md

After a busy month, a new release is out! This new release comes with improved compatibility with other platforms and several usability additions and improvements.

It's no longer necessary to run terminal commands: the most noticeable change in this release is the addition of a properly-integrated development environment for Python. With this, the LOGO-like user experience has been greatly improved.

The LOGO-like programming interface is also a bit richer. A new Rotate action was added, and the general interface was simplified to further improve the user experience.

It’s easier to share projects. A simple dialog to export and import projects was added, available through the redesigned project cards in the launcher.

Gameeky now has a cute desktop icon, thanks to @jimmac and @bertob.

It should be easier to run Gameeky on other platforms now. Under the hood, many things have changed to support other platforms, e.g., macOS. The sound backend was changed to GStreamer, the communication protocol was simplified, and the use of WebKit is now optional.

There are no installers for other platforms yet but, if anyone is experienced and interested in making these, that would be an awesome contribution.

As a small addition, it's now possible to select a different entity as the user's character. Recently, my nephews decided they wanted their character to be a small boulder. They had a blast with their boulder-hero narrative, and it convinced me there should be more additions like that.

There’s more, so check the full list of changes.

On the community side of things, I have already started building alliances with different organizations, e.g., the first-ever Gameeky workshop is planned for March 23 in Encarnación, Paraguay, organized by the local Python community.

If you’re in Paraguay or nearby in Argentina, feel free to contact me to participate!

Dorothy Kabarozi: Overall experience: My Outreachy internship with GNOME

Mar, 12/03/2024 - 5:16md

Embarking on an Outreachy internship is a great start into the heart of open source, a journey I've long wished to undertake. December 2023 to March 2024 marked this exhilarating chapter of my life, where I had the honor of diving deep into the GNOME world as an Outreachy intern. In this blog, I'm happy to share my experiences, painting a vivid picture of the growth, challenges, and invaluable experiences that have shaped my journey.

Discovering GNOME: A Gateway to Open-Source Excellence

At its core, GNOME (GNU Network Object Model Environment) is a graphical user interface (GUI) and a set of desktop applications for users of the Linux operating system. GNOME brings companies, volunteers, professionals, and non-profits together from around the world.

We make GNOME, a completely free software solution for everyone.

Why GNOME Captured My Heart

The Outreachy internship presented a couple of projects to choose from, but my fascination with operating system functionalities—booting, scheduling, memory management, user interfaces and beyond—drew me irresistibly to GNOME. My mission? To work on the implementation of end-to-end tests, a challenge I embraced head-on as I dived into the project documentation to understand the project better.

From the moment I introduced myself on the GNOME community channel in the first days of contribution phase, the warmth and promptness of their welcome were unmatched, shattering the myth of the “busy, distant mentor.” This immediate sense of belonging fueled my determination, despite the initial difficulties of setup procedures and technical trials.

My advice to future Outreachy aspirants

From my experience: start early, zero in on a project, and try to get set up early, as this took me almost two weeks before I could finally make a merge request to the project.

Secondly, ask questions publicly, as this helps you get unblocked faster when your mentor is busy.

Milestones and Mastery: The GNOME Journey

Our collective goal for the internship was to implement tests for accessibility features on the GNOME desktop and also to test some core apps on mobile. The creation of the gnome_accessibility test suite marked our first victory, followed by the genesis of the gnome-locales and gnome_mobile test suites. Daily stand-ups and weekly mentor meetings became our compass, guiding our efforts and honing our focus on the different tasks. Check out more details here and share any feedback with us on Discourse.

Technically, I learned a lot about version control and Git workflows, how to actually contribute to a project with a large code base, writing clean, readable and efficient code, and ensuring code is thoroughly tested for bugs and errors before pushing it. Some of the soft skills I learned were collaboration, communication skills, the continuous desire to learn new things, and being teachable.

Overcoming Obstacles: Hardware Hurdles and Beyond

The revelation that my macOS-based machine was ill-equipped for the task at hand was a stark challenge. The lesson was clear: understanding project specifications is crucial, and adaptability is key. This obstacle, while daunting, taught me the value of preparation and the importance of choosing the right tools for the task.

Beyond Coding: Community, Engagement, and Impact

I have not only interacted with my mentors on the project but also participated in sharing the work we have done on TWIG, where I highlighted the tests we wrote for accessibility features, i.e., High Contrast, Large Text, Overlay Scrollbars, Screen Reader, Zoom, Over-Amplification, Visual Alerts, and On-Screen Keyboard, and added more details on the Discourse channel too.

I have also had public engagements on contributing to Outreachy: over Twitter Spaces in my community, I shared how to apply to Outreachy and how to prepare for the contribution phase, and during the GNOME AFRICA Preparatory Boot Camp for GSoC & Outreachy I shared more about my internship with GNOME. Check out my presentation here, where I talk about how to stand out as an Outreachy applicant and my experience working with GNOME. These experiences have not only boosted my technical skills but have also embedded in me a sense of community and the courage to tackle the unknown.

A Heartfelt Thank You

As this chapter of my journey with GNOME and Outreachy draws to a close, I am overwhelmed with gratitude. To my selfless mentors, Sam Thursfield and Sonny Piers: thank you for the guidance and mentorship; I appreciate you for what you have planted in us. To Tanjuate: you have been the most amazing co-intern I could ever ask for. To Kristi Progri and Felipe Borges: thank you for coordinating this internship with Outreachy and the GNOME community.

To Outreachy, thank you for this opportunity. And to every soul who has walked this path with me: your support has been amazing. As I look forward to converging paths at GUADEC in July and beyond, I carry with me not just skills and knowledge, but a heart full of memories, ready to embark on new adventures in the open-source world.

Here’s to infinite learning, enduring friendships, and the unwavering spirit of contribution. May the journey continue to unfold, with success, learning, and boundless possibilities.

Here are some of the accessibility tests we added to the gnome_accessibility test suite during the internship with GNOME.

Click here to take a more detailed look.

Peter Hutterer: Enforcing a touchscreen mapping in GNOME

Mar, 12/03/2024 - 5:33pd

Touchscreens are quite prevalent by now but one of the not-so-hidden secrets is that they're actually two devices: the monitor and the actual touch input device. Surprisingly, users want the touch input device to work on the underlying monitor which means your desktop environment needs to somehow figure out which of the monitors belongs to which touch input device. Often these two devices come from two different vendors, so mutter needs to use ... */me holds torch under face* .... HEURISTICS! :scary face:

Those heuristics are actually quite simple: same vendor/product ID? same dimensions? is one of the monitors a built-in one? [1] But unfortunately in some cases those heuristics don't produce the correct result. In particular external touchscreens seem to be getting more common again and plugging those into a (non-touch) laptop means you usually get that external screen mapped to the internal display.

Luckily mutter does have a configuration for this, though it is not exposed in GNOME Settings (yet). But you, my $age $jedirank, can access it via a commandline interface to at least work around the immediate issue. But first: we need to know the monitor details, and you need to know about gsettings relocatable schemas.

Finding the right monitor information is relatively trivial: look at $HOME/.config/monitors.xml and get your monitor's vendor, product and serial from there. e.g. in my case this is:

<monitors version="2">
  <configuration>
    <logicalmonitor>
      <x>0</x>
      <y>0</y>
      <scale>1</scale>
      <monitor>
        <monitorspec>
          <connector>DP-2</connector>
          <vendor>DEL</vendor>               <!-- this one -->
          <product>DELL S2722QC</product>    <!-- this one -->
          <serial>59PKLD3</serial>           <!-- and this one -->
        </monitorspec>
        <mode>
          <width>3840</width>
          <height>2160</height>
          <rate>59.997</rate>
        </mode>
      </monitor>
    </logicalmonitor>
    <logicalmonitor>
      <x>928</x>
      <y>2160</y>
      <scale>1</scale>
      <primary>yes</primary>
      <monitor>
        <monitorspec>
          <connector>eDP-1</connector>
          <vendor>IVO</vendor>
          <product>0x057d</product>
          <serial>0x00000000</serial>
        </monitorspec>
        <mode>
          <width>1920</width>
          <height>1080</height>
          <rate>60.010</rate>
        </mode>
      </monitor>
    </logicalmonitor>
  </configuration>
</monitors>

Well, so we know the monitor details we want. Note there are two monitors listed here; in this case I want to map the touchscreen to the external Dell monitor.
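If you have several monitors and don't fancy eyeballing the XML, a quick Python sketch pulls the relevant fields out of every monitorspec (element names as in the file above; the path assumes the standard per-user location):

import xml.etree.ElementTree as ET
from pathlib import Path

# Parse the per-user monitor configuration written by mutter.
tree = ET.parse(Path.home() / ".config" / "monitors.xml")

# Print connector, vendor, product, and serial for each configured monitor.
for spec in tree.iter("monitorspec"):
    print(spec.findtext("connector"),
          spec.findtext("vendor"),
          spec.findtext("product"),
          spec.findtext("serial"))

With the vendor, product, and serial in hand, let's move on to gsettings.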

gsettings is of course the configuration storage wrapper GNOME uses (and the CLI tool with the same name). GSettings follow a specific schema, i.e. a description of a schema name and possible keys and values for each key. You can list all those, set them, look up the available values, etc.:

$ gsettings list-recursively
... lots of output ...
$ gsettings set org.gnome.desktop.peripherals.touchpad click-method 'areas'
$ gsettings range org.gnome.desktop.peripherals.touchpad click-method
enum
'default'
'none'
'areas'
'fingers'

Now, schemas work fine as-is as long as there is only one instance. Where the same schema is used for different devices (like touchscreens) we use a so-called "relocatable schema", and that requires also specifying a path - and this is where it gets tricky. I'm not aware of any functionality to get the specific path for a relocatable schema, so often it's down to reading the source. In the case of touchscreens, the path includes the USB vendor and product ID (in lowercase), e.g. in my case the path is:

/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/

In your case you can get the touchscreen details from lsusb, libinput record, /proc/bus/input/devices, etc. Once you have it, gsettings takes a schema:path argument like this:

$ gsettings list-recursively org.gnome.desktop.peripherals.touchscreen:/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/
org.gnome.desktop.peripherals.touchscreen output ['', '', '']

Looks like the touchscreen is bound to no monitor. Let's bind it with the data from above:

$ gsettings set org.gnome.desktop.peripherals.touchscreen:/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/ output "['DEL', 'DELL S2722QC', '59PKLD3']"

Note the quotes so your shell doesn't misinterpret things.

And that's it. Now I have my internal touchscreen mapped to my external monitor which makes no sense at all but shows that you can map a touchscreen to any screen if you want to.
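If you'd rather do this from code than from a shell, the same relocatable schema can be addressed through Gio.Settings. Here's a minimal Python sketch, assuming the same 04f3:2d4a touchscreen and Dell monitor as above:

from gi.repository import Gio

# A relocatable schema must be paired with an explicit path; for
# touchscreens the path embeds the USB vendor:product ID in lowercase.
settings = Gio.Settings.new_with_path(
    "org.gnome.desktop.peripherals.touchscreen",
    "/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/",
)

# The 'output' key is an array of strings [vendor, product, serial],
# matching the values from monitors.xml.
settings.set_strv("output", ["DEL", "DELL S2722QC", "59PKLD3"])

# Writes are buffered; flush them before a short-lived script exits.
Gio.Settings.sync()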

[1] Probably the one that most commonly takes effect since it's the vast vast majority of devices

Ismael Olea: Modelling protected areas talks

Hën, 11/03/2024 - 12:00pd

Just catching up on a talk I gave twice last year. I'm very proud of the work, but I never shared it here nor on my Wikidata User:Olea page.

As a brief introduction: for some time I did significant work importing into Wikidata the CDDA database of European protected areas, as I found we have them completely underrepresented. I have previous experience with historical heritage, but this proved to be harder work. I had collected some thoughts about lessons learned and a potential standardizing proposal for natural protected areas, but never structured them in a comprehensive way until being invited to give a couple of talks about this.

The first talk was in Lisboa, invited and sponsored by our friends at Wikimedia Portugal, at their Wikidata Days 2023. To be honest, my talk was a bit of a disaster because I didn't prepare it with enough time, but at least I could present a complete draft of the idea.

Then I had the opportunity to talk about the same issue again at the Data Modelling Days 2023 virtual event.

I shared the session with VIGNERON: he talked about historical heritage and I talked about natural heritage/natural protected areas. For this session I was able to rewrite my proposal with the quality a presentation requires. The video recording of the full session is now available:

And my slides, which contain the final text of my intended proposal:

In conclusion: yes, I should promote this in Wikidata, but the amount of work it requires (edits and discussions) is, for the moment, more than I want to spend my free time on.

Emmanuele Bassi: Accessibility improvements in GTK 4.14

Pre, 08/03/2024 - 10:30pd

GTK 4.14 brings various improvements on the accessibility front, especially for applications showing complex, formatted text; for WebKitGTK; and for notifications.

Accessible text interface

The accessibility rewrite for 4.0 provided an implementation for complex, selectable, and formatted text in widgets provided by GTK, like GtkTextView, but out-of-tree widgets were not able to do the same, as the API was kept private while we discussed what ATs (assistive technologies) actually needed, and while we were looking at non-Linux implementations. For GTK 4.14 we finally have a public interface that out-of-tree widgets can implement to provide complex, formatted text to ATs: GtkAccessibleText.

GtkAccessibleText allows widgets to provide the text contents at given offsets; the text attributes applied to the contents; and to notify assistive technologies of changes in the text, caret position, or selection boundaries.

Text widgets implementing GtkAccessibleText should notify ATs in these cases: when the text contents change; when the caret position moves; and when the selection boundaries change.

Text attributes are mainly left to applications to implement—both in naming and serialization; GTK provides support for common text attributes already in use by various toolkits and assistive technologies, and they are available as constants under the GTK_ACCESSIBLE_ATTRIBUTE_* prefix in the API reference.

The GtkAccessibleText interface is a requirement for implementing the accessibility of virtual terminals; the most common GTK-based library for virtual terminals, VTE, has been ported to GTK4 thanks to the efforts of Christian Hergert and in GNOME 46 will support accessibility through the new GTK interface.

Bridging AT-SPI trees

There are cases when a library or an application implements its own accessible tree using AT-SPI, whether in the same process or out of process. One such library is WebKitGTK, which generates the accessible object tree from the web tree inside separate processes. These processes do not use GTK, so they cannot use the GtkAccessible API to describe their contents.

Thanks to the work of Georges Stavracas GTK now can bridge those accessibility object trees under the GTK widget’s own, allowing ATs to navigate into a web page using WebKit from the UI.

Currently, like the rest of the accessibility API in GTK, this is specific to the AT-SPI protocol on Linux, which means it requires libraries and applications that wish to take advantage of it to ensure that the API is available at compile time, through the use of a pkg-config file and a separate C header, similarly to how the printing API is exposed.

Notifications

Applications using in-app notifications that are decoupled from the current widget's focus, like AdwToast in libadwaita, can now raise the notification message to ATs via the gtk_accessible_announce() method, thanks to Lukáš Tyrychtr, in a way that is respectful of the current ATs output.
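A minimal sketch of what the call looks like from application code (Python here, assuming the GTK 4.14 bindings; the widget and message are invented for illustration):

import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk

def on_document_saved(toast_widget: Gtk.Widget) -> None:
    # announce() maps to gtk_accessible_announce() and is available on any
    # GtkAccessible; the priority hints whether ATs may interrupt whatever
    # they are currently reading out.
    toast_widget.announce(
        "Document saved",
        Gtk.AccessibleAnnouncementPriority.MEDIUM,
    )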

Other improvements

GTK 4.12 ensured that the computed accessible labels and descriptions were up to date with the ARIA specification; GTK 4.14 iterates on those improvements, by removing special cases and duplicates.

Thanks to the work of Michael Weghorn from The Document Foundation, there are new roles for text-related accessible objects, like paragraphs and comments, as well as various fixes in the AT-SPI implementation of the accessibility API.

The accessibility support in GTK4 is incrementally improving with every cycle, thanks to the contributions of many people; ideally, these improvements should also lead to a better, more efficient protocol for toolkits and assistive technologies to share.

We are still exploring the possibility of adding backends for other accessibility platforms, like UIAutomation; and for other libraries, like AccessKit.
