
Feed aggregator

Matthew Garrett: Where are we on X Chat security?

Planet GNOME - Tue, 21/10/2025 - 1:36 AM
AWS had an outage today and Signal was unavailable for some users for a while. This has confused some people, including Elon Musk, who are concerned that having a dependency on AWS means that Signal could somehow be compromised by anyone with sufficient influence over AWS (it can't). Which means we're back to the richest man in the world recommending his own "X Chat", saying “The messages are fully encrypted with no advertising hooks or strange ‘AWS dependencies’ such that I can’t read your messages even if someone put a gun to my head.”

Elon is either uninformed about his own product, lying, or both.

As I wrote back in June, X Chat is genuinely end-to-end encrypted, but ownership of the keys is complicated. The encryption key is stored using the Juicebox protocol, sharded between multiple backends. Two of these are asserted to be HSM backed - a discussion of the commissioning ceremony was recently posted here. I have not watched the almost 7 hours of video to verify that this was performed correctly, and I also haven't been able to verify that the public keys included in the post were the keys generated during the ceremony, although that may be down to me just not finding the appropriate point in the video (sorry, Twitter's video hosting doesn't appear to have any skip feature and would frequently just sit spinning if I tried to seek too far; I should probably just download the videos and figure it out, but I'm not doing that now). With enough effort it would probably also have been possible to fake the entire thing - I have no reason to believe that this has happened, but it's not externally verifiable.

But let's assume these published public keys are legitimately the ones used in the HSM Juicebox realms[1] and that everything was done correctly. Does that prevent Elon from obtaining your key and decrypting your messages? No.

On startup, the X Chat client makes an API call called GetPublicKeysResult, and the public keys of the realms are returned. Right now when I make that call I get the public keys listed above, so there's at least some indication that I'm going to be communicating with actual HSMs. But what if that API call returned different keys? Could Elon stick a proxy in front of the HSMs and grab a cleartext portion of the key shards? Yes, he absolutely could, and then he'd be able to decrypt your messages.

(I will accept that there is a plausible argument that Elon is telling the truth in that even if you held a gun to his head he's not smart enough to be able to do this himself, but that'd be true even if there were no security whatsoever, so it still says nothing about the security of his product)

The solution to this is remote attestation - a process where the device you're speaking to proves its identity to you. In theory the endpoint could attest that it's an HSM running this specific code, and we could look at the Juicebox repo and verify that it's that code and hasn't been tampered with, and then we'd know that our communication channel was secure. Elon hasn't done that, despite it being table stakes for this sort of thing (Signal uses remote attestation to verify the enclave code used for private contact discovery, for instance, which ensures that the client will refuse to hand over any data until it's verified the identity and state of the enclave). There's no excuse whatsoever to build a new end-to-end encrypted messenger which relies on a network service for security without providing a trustworthy mechanism to verify you're speaking to the real service.

We know how to do this properly. We have done for years. Launching without it is unforgivable.

[1] There are three Juicebox realms overall, one of which doesn't appear to use HSMs, but you need at least two in order to obtain the key, so at least part of the key will always be held in HSMs.


Dorothy Kabarozi: Deploying a Simple HTML Project on Linode Using Nginx

Planet GNOME - Sat, 18/10/2025 - 6:14 PM
Deploying a Simple HTML Project on Linode Using Nginx: My Journey and Lessons Learned

Deploying web projects can seem intimidating at first, especially when working with a remote server like Linode. Recently, I decided to deploy a simple HTML project (index.html) on a Linode server using Nginx. Here’s a detailed account of the steps I took, the challenges I faced, and the solutions I applied.

Step 1: Accessing the Linode Server

The first step was to connect to my Linode server via SSH:

ssh root@<your-linode-ip>

Initially, I encountered a timeout issue, which reminded me to check network settings and ensure SSH access was enabled for my Linode instance. Once connected, I had access to the server terminal and could manage files and services.

Step 2: Preparing the Project

My project was simple—it only contained an index.html file. I uploaded it to the server under:

/var/www/hng13-stage0-devops

I verified the project folder structure with:

ls -l /var/www/hng13-stage0-devops

Since there was no public folder or PHP files, I knew I needed to adjust the Nginx configuration to serve directly from this folder.

Step 3: Setting Up Nginx

I opened the Nginx configuration for my site:

sudo nano /etc/nginx/sites-available/hng13

Initially, I mistakenly pointed root to a non-existent folder (public), which caused a 404 Not Found error. The correct configuration looked like this:

server {
    listen 80;
    server_name <your-linode-ip>;

    root /var/www/hng13-stage0-devops;   # points to folder containing index.html
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}

Step 4: Enabling the Site and Testing

After creating the configuration file, I enabled the site:

sudo ln -s /etc/nginx/sites-available/hng13 /etc/nginx/sites-enabled/

I also removed the default site to avoid conflicts:

sudo rm /etc/nginx/sites-enabled/default

Then I tested the configuration:

sudo nginx -t

If the syntax was OK, I reloaded Nginx:

sudo systemctl reload nginx

Step 5: Checking Permissions

Nginx must have access to the project files. I ensured the correct permissions:

sudo chown -R www-data:www-data /var/www/hng13-stage0-devops
sudo chmod -R 755 /var/www/hng13-stage0-devops

Step 6: Viewing the Site

Finally, I opened my browser and navigated to

http://<your-linode-ip>

And there it was—my index.html page served perfectly via Nginx.
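As an extra check from the terminal, you can also ask Nginx for just the response headers with curl (using the same placeholder IP as above):

curl -I http://<your-linode-ip>

A 200 OK response with a text/html content type confirms that the server block is serving index.html.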

Challenges and Lessons Learned
  1. Nginx server_name Error
    • Error: "server_name" directive is not allowed here
    • Lesson: Always place server_name inside a server { ... } block.
  2. 404 Not Found
    • Cause: Nginx was pointing to a public folder that didn’t exist.
    • Solution: Update root to the folder containing index.html.
  3. Permissions Issues
    • Nginx could not read files initially.
    • Solution: Ensure ownership by www-data and proper read/execute permissions.
  4. SSH Timeout / Connection Issues
    • Double-check firewall rules and Linode network settings (a quick firewall check is sketched after this list).
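For the SSH timeout in particular, here is a minimal check, assuming the server uses ufw (Linode Cloud Firewall or another tool would have its own equivalent):

sudo ufw status verbose
sudo ufw allow OpenSSH

If the firewall reports port 22 as blocked, allowing the OpenSSH profile (or the matching rule in whatever firewall is in use) restores SSH access.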
Key Takeaways
  • For static HTML projects, Nginx is simple and effective.
  • Always check the root folder matches your project structure.
  • Testing the Nginx config (nginx -t) before reload saves headaches.
  • Proper permissions are crucial for serving files correctly.

Deploying my project was a learning experience. Even small mistakes like pointing to the wrong folder or placing directives in the wrong context can break the site—but step-by-step debugging and understanding the errors helped me fix everything quickly. This has kick-started my DevOps journey, and I truly loved the challenge.

Sam Thursfield: Status update, 17/10/2025

Planet GNOME - Fri, 17/10/2025 - 6:16 PM

Greetings readers. I’m writing to you from a hotel room in Manchester which I’m currently sharing with a variant of COVID 19. We are listening to disco funk music.

This virus prevents me from working or socializing, but at least I have time to do some cyber-janitorial tasks like updating my “dotfiles” (which hold configuration for all the programs I use on Linux, stored in Git… for those who aren’t yet converts).

I also caught up with some big upcoming changes in the GNOME 50 release cycle — more on that below.

nvim

I picked up Vim as my text editor ten years ago while working on a very boring project. This article by Jon Beltran de Heredia, “Why, oh WHY, do those #?@! nutheads use vi?” sold me on the key ideas: you use “normal mode” for everything, which gives you powerful and composable edit operations. I printed out this Vim quick reference card by Michael Goerz and resolved to learn one new operation every day.

It worked and I’ve been a convert ever since. Doing consultancy work makes you a nomad: often working via SSH or WSL on other people’s computers. So I never had the luxury of setting up an IDE like GNOME Builder, or using something that isn’t packaged in 99% of distros. Luckily Vim is everywhere.

Over the years, I read a newsletter named Vimtricks and I picked up various Vim plugins like ALE, ctrlp, and sideways. But there’s a problem: some of these depend on extra Vim features like Python support. If a required feature is missing, you get an error message that appears on, like, every keystroke.

In this case, on a Debian 12 build machine, I could work around by installing the vim-gtk3 package. But it’s frustrating enough that I decided it was time to try Neovim.

The Neovim project began around the time I was switching to Vim, and is based on the premise that “Vim is, without question, the worst C codebase I have seen.”

So far it’s been painless to switch and everything works a little better. The :terminal feels better integrated. I didn’t need to immediately disable mouse mode. I can link to online documentation! The ALE plugin (which provides language server integration) is even ready packaged in Fedora.

I’d send a screenshot but my editor looks… exactly the same as before. Boring!

I also briefly tried out Helix, which appears to take the good bits of Vim (modal editing) and run in a different direction (visible selection and multiple cursors). I need a more boring project before I’ll be able to learn a completely new editor. Give me 10 years.

Endless OS 7

I’ve been working flat out on Endless OS 7, same as last month. Now that the basics work and the system boots, we were mainly looking at integrating the Endless-specific Pay as you Go functionality that they use for affordable laptop programs.

I learned more than I wanted to about the Linux early boot process, particularly the dracut-ng initramfs generator (one of many Linux components that seems to be named after a town in Massachusetts).

GNOME OS actually dropped Dracut altogether, in “vm-secure: Get rid of dracut and use systemd’s ukify” by Valentin David, and now uses a simple Python script. A lot of Dracut’s features aren’t necessary for building atomic, image-based distros. For EOS we decided to stick with Dracut, at least for now.

So we get to deal with fun changes such as the initramfs growing from 90MB to 390MB after we updated to latest Dracut. Something which is affecting Fedora too (LWN: “Last-minute /boot boost for Fedora 43”).

I requested time after the contract finishes to write up a technical article on the work we did, so I won’t go into more details yet. Watch this space!

GNOME 50

I haven’t had a minute to look at upstream GNOME this month, but there are some interesting things cooking there.

Jordan merged the GNOME OS openQA tests into the main gnome-build-meta repo. This is a simple solution to a number of basic questions we had around testing, such as, “how do we target tests to specific versions of GNOME?”.

We separated the tests out of gnome-build-meta because, at the time, each new CI pipeline would track new versions of each GNOME module. This meant, firstly that pipelines could take anywhere from 10 minutes to 4 hours rebuilding a disk image before the tests even started, and secondly that the system under test would change every time you ran the pipeline.

While that sounds dumb, it worked this way for historical reasons: GNOME OS has been an under-resourced ad-hoc project ongoing since 2011, whose original goal was simply to continuously build: already a huge challenge if you remember GNOME in the early 2010s. Of course, such a CI pipeline is highly counterproductive if you’re trying to develop and review changes to the tests, and not the system: so the separate openqa-tests repo was a necessary step.

Thanks to Abderrahim’s work in 2022 (“Commit refs to the repository” and “Add script to update refs”), plus my work on a tool to run the openQA tests locally before pushing to CI (ssam_openqa), I hope we’re not going to have those kinds of problems any more. We enter a brave new world of testing!

The next thing the openQA tests need, in my opinion, is dedicated test infrastructure. The shared Gitlab CI runners we have are in high demand. The openQA tests have timeouts, as they ultimately are doing this in a loop:

  • Send an input event
  • Wait for the system under test to react

If a VM is running on a test runner with overloaded CPU or IO then tests will start to time out in unhelpful ways. So, if you want to have better testing for GNOME, finding some dedicated hardware to run tests would be a significant help.

There are also some changes cooking in Localsearch thanks to Carlos Garnacho:

The first of these is a nicely engineered way to allow searching files on removable disks like external HDs. This should be opt-in: so you can opt in to indexing your external hard drive full of music, but your machine wouldn’t be vulnerable to an attack where someone connects a malicious USB stick while your back is turned. (The sandboxing in localsearch makes it non-trivial to construct such an attack, but it would require a significantly greater level of security auditing before I’d make any guarantees about that).

The second of these changes is pretty big: in GNOME 50, localsearch will now consider everything in your homedir for indexing.

As Carlos notes in the commit message, he has spent years working on performance optimisations and bug fixes in localsearch to get to a point where he considers it reasonable to enable by default. From a design point of view, discussed in the issue “Be more encompassing about what get indexed“, it’s hard to justify a search feature that only surfaces a subset of your files.

I don’t know if it’s a great time to do this, but nothing is perfect and sometimes you have to take a few risks to move forwards.

There’s a design, testing and user support element to all of this, and it’s going to require help from the GNOME community and our various downstream distributors, particularly around:

  • Widely testing the new feature before the GNOME 50 release.
  • Making sure users are aware of the change and how to manage the search config.
  • Handling an expected increase in bug reports and support requests.
  • Highlighting how privacy-focused localsearch is.

I never got time to extend the openQA tests to cover media indexing; it’s not a trivial job. We will rely on volunteers and downstream testers to try out the config change as widely as possible over the next 6 months.

One thing that makes me support this change is that the indexer in Android devices already works like this: everything is scanned into a local cache, unless there’s a .nomedia file. Unfortunately Google don’t document how the Android media scanner works. But it’s not like this is GNOME treading a radical new path.

The localsearch index lives in the same filesystem as the data, and never leaves your PC. In a world where Microsoft Windows can now send your boss screenshots of everything you looked at, GNOME is still very much on your side. Let’s see if we can tell that story.

Gedit Technology blog: Mid-October News

Planet GNOME - Wed, 15/10/2025 - 12:00 PM

Misc news about the gedit text editor, mid-October edition! (Some sections are a bit technical).

Rework of the file loading and saving (continued)

The refactoring continues in the libgedit-gtksourceview module, this time to tackle a big class that has too many responsibilities. A utility is in development that will make it possible to delegate part of the work.

The utility is about character encoding conversion, with support for invalid bytes. It takes as input a single GBytes (the file content), and transforms it into a list of chunks. A chunk contains either valid (successfully converted) bytes, or invalid bytes. The output format - the "list of chunks" - is subject to change to improve memory consumption and performance.

Note that invalid bytes are allowed so that gedit can open really any kind of file.

I must also note that this is quite sensitive work, at the heart of document loading for gedit. All going well, these refactorings and improvements will be worth it!

Progress in other modules

There has been some progress on other modules:

  • gedit: version 48.1.1 has been released with a few minor updates.
  • The Flatpak on Flathub: update to gedit 48.1.1 and the GNOME 49 runtime.
  • gspell: version 1.14.1 has been released, mainly to pick up the updated translations.
GitHub Sponsors

In addition to Liberapay, you can now support the work that I do on GitHub Sponsors. See the gedit donations page.

Thank you ❤️

Victor Ma: This is a test post

Planet GNOME - Wed, 15/10/2025 - 2:00 AM

Over the past few weeks, I’ve been working on improving some test code that I had written.

Refactoring time!

My first order of business was to refactor the test code. There was a lot of boilerplate, which made it difficult to add new tests, and also created visual clutter.

For example, have a look at this test case:

static void
test_egg_ipuz (void)
{
  g_autoptr (WordList) word_list = NULL;
  IpuzGrid *grid;
  g_autofree IpuzClue *clue = NULL;
  g_autoptr (WordArray) clue_matches = NULL;

  word_list = get_broda_word_list ();
  grid = create_grid (EGG_IPUZ_FILE_PATH);
  clue = get_clue (grid, IPUZ_CLUE_DIRECTION_ACROSS, 2);
  clue_matches = word_list_find_clue_matches (word_list, clue, grid);

  g_assert_cmpint (word_array_len (clue_matches), ==, 3);
  g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                               word_array_index (clue_matches, 0)),
                   ==, "EGGS");
  g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                               word_array_index (clue_matches, 1)),
                   ==, "EGGO");
  g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                               word_array_index (clue_matches, 2)),
                   ==, "EGGY");
}

That’s an awful lot of code just to say:

  1. Use the EGG_IPUZ_FILE_PATH file.
  2. Run the word_list_find_clue_matches() function on the 2-Across clue.
  3. Assert that the results are ["EGGS", "EGGO", "EGGY"].

And this was repeated in every test case, and needed to be repeated in every new test case I added. So, I knew that I had to refactor my code.

Fixtures and functions

My first step was to extract all of this setup code:

g_autoptr (WordList) word_list = NULL;
IpuzGrid *grid;
g_autofree IpuzClue *clue = NULL;
g_autoptr (WordArray) clue_matches = NULL;

word_list = get_broda_word_list ();
grid = create_grid (EGG_IPUZ_FILE_PATH);
clue = get_clue (grid, IPUZ_CLUE_DIRECTION_ACROSS, 2);
clue_matches = word_list_find_clue_matches (word_list, clue, grid);

To do this, I used a fixture:

typedef struct {
  WordList *word_list;
  IpuzGrid *grid;
} Fixture;

static void
fixture_set_up (Fixture *fixture, gconstpointer user_data)
{
  const gchar *ipuz_file_path = (const gchar *) user_data;

  fixture->word_list = get_broda_word_list ();
  fixture->grid = create_grid (ipuz_file_path);
}

static void
fixture_tear_down (Fixture *fixture, gconstpointer user_data)
{
  g_object_unref (fixture->word_list);
}

My next step was to extract all of this assertion code:

g_assert_cmpint (word_array_len (clue_matches), ==, 3);
g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                             word_array_index (clue_matches, 0)),
                 ==, "EGGS");
g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                             word_array_index (clue_matches, 1)),
                 ==, "EGGO");
g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                             word_array_index (clue_matches, 2)),
                 ==, "EGGY");

To do this, I created a new function that runs word_list_find_clue_matches() and asserts that the result equals an expected_words parameter.

static void
test_clue_matches (WordList *word_list,
                   IpuzGrid *grid,
                   IpuzClueDirection clue_direction,
                   guint clue_index,
                   const gchar *expected_words[])
{
  const IpuzClue *clue = NULL;
  g_autoptr (WordArray) clue_matches = NULL;
  g_autoptr (WordArray) expected_word_array = NULL;

  clue = get_clue (grid, clue_direction, clue_index);
  clue_matches = word_list_find_clue_matches (word_list, clue, grid);
  expected_word_array = str_array_to_word_array (expected_words, word_list);

  g_assert_true (word_array_equals (clue_matches, expected_word_array));
}

After all that, here’s what my test case looked like:

static void
test_egg_ipuz (Fixture *fixture, gconstpointer user_data)
{
  test_clue_matches (fixture->word_list,
                     fixture->grid,
                     IPUZ_CLUE_DIRECTION_ACROSS,
                     2,
                     (const gchar*[]){"EGGS", "EGGO", "EGGY", NULL});
}

Much better!

Macro functions

But as great as that was, I knew that I could take it even further, with macro functions.

I created a macro function to simplify test case definitions:

#define ASSERT_CLUE_MATCHES(DIRECTION, INDEX, ...)  \
  test_clue_matches (fixture->word_list,            \
                     fixture->grid,                 \
                     DIRECTION,                     \
                     INDEX,                         \
                     (const gchar*[]){__VA_ARGS__, NULL})

Now, test_egg_ipuz() looked like this:

static void
test_egg_ipuz (Fixture *fixture, gconstpointer user_data)
{
  ASSERT_CLUE_MATCHES (IPUZ_CLUE_DIRECTION_ACROSS, 2, "EGGS", "EGGO", "EGGY");
}

I also made a macro function for the test case declarations:

#define ADD_IPUZ_TEST(test_name, file_name)     \
  g_test_add ("/clue_matches/" #test_name,      \
              Fixture,                          \
              "tests/clue-matches/" #file_name, \
              fixture_set_up,                   \
              test_name,                        \
              fixture_tear_down)

Which turned this:

g_test_add ("/clue_matches/test_egg_ipuz", Fixture, EGG_IPUZ, fixture_set_up, test_egg_ipuz, fixture_tear_down);

Into this:

ADD_IPUZ_TEST (test_egg_ipuz, egg.ipuz);

An unfortunate bug

So, picture this: You’ve just finished refactoring your test code. You add some finishing touches, do a final test run, look over the diff one last time…and everything seems good. So, you open up an MR and start working on other things.

But then, the unthinkable happens—the CI pipeline fails! And apparently, it’s due to a test failure? But you ran your tests locally, and everything worked just fine. (You run them again just to be sure, and yup, they still pass.) And what’s more, it’s only the Flatpak CI tests that failed. The native CI tests succeeded.

So…what, then? What could be the cause of this? I mean, how do you even begin debugging a test failure that only happens in a particular CI job and nowhere else? Well, let’s just try running the CI pipeline again and see what happens. Maybe the problem will go away. Hopefully, the problem goes away.

Nope. Still fails.

Rats.

Well, I’ll spare you the gory details that it took for me to finally figure this one out. But the cause of the bug was me accidentally freeing an object that I should never have freed.

This meant that the corresponding memory segment could be—but, importantly, did not necessarily have to be—filled with garbage data. And this is why only the Flatpak job’s test run failed…well, at first, anyway. By changing around some of the test cases, I was able to get the native CI tests and local tests to fail. And this is what eventually clued me into the true nature of this bug.

So, after spending the better part of two weeks, here is the fix I ended up with:

@@ -94,7 +94,7 @@ test_clue_matches (WordList *word_list,
                    guint clue_index,
                    const gchar *expected_words[])
 {
-  g_autofree IpuzClue *clue = NULL;
+  const IpuzClue *clue = NULL;
   g_autoptr (WordArray) clue_matches = NULL;
   g_autoptr (WordArray) expected_word_array = NULL;

Jordan Petridis: Nightly Flatpak CI gets a cache

Planet GNOME - Tue, 14/10/2025 - 8:00 PM

Recently I got around to tackling a long-standing issue for good. There were multiple attempts in the past 6 years to cache flatpak-builder artifacts with Gitlab, but none had worked so far.

On the technical side of things, flatpak-builder relies heavily on extended attributes (xattrs) on files to do cache validation. Using gitlab’s built-in cache or artifacts mechanisms results in a plain zip archive which strips all the attributes from the files, causing the cache to always be invalid once restored. Additionally the hardlinks/symlinks in the cache break. One workaround for this is to always tar the directories and then manually extract them after they are restored.

On the infrastructure side of things we stumble once again into Gitlab. When a cache or artifact is created, it's uploaded to the Gitlab instance's storage so it can later be reused/redownloaded on any runner. While this is great, it also quickly ramps up the network egress bill we have to pay, along with storage. And since it's a public Gitlab instance that anyone can make requests against, it gets out of hand fast.

A couple of weeks ago Bart pointed me to Flathub’s workaround for the same problem. It comes down to making it someone else’s problem, ideally someone who is willing to fund FOSS infrastructure. We can use ORAS to wrap files and directories into an OCI wrapper and publish it to public registries. And it worked. Quite handy! OCI images are the new tarballs.
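To make the idea concrete, here is a rough sketch of the flow (the registry and repository names are made up for illustration, not the ones the templates actually use):

tar --create --xattrs --file flatpak-builder-cache.tar .flatpak-builder
oras push registry.example.org/my-project/build-cache:main flatpak-builder-cache.tar

# and on the consuming side
oras pull registry.example.org/my-project/build-cache:main
tar --extract --xattrs --file flatpak-builder-cache.tar

Tarring with --xattrs is what keeps the extended attributes that flatpak-builder relies on intact.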

Now when a pipeline runs against your default branch (assuming it’s protected), it creates a cache artifact and uploads it to the currently configured OCI registry. Afterwards, any build, including Merge Request pipelines, will download the image, extract the artifacts and check how much of it is still valid.

From some quick tests and numbers, GNOME Builder went from a ~16 minute build to 6 minutes for our x86_64 runners. While on the AArch64 runner the impact was even bigger, going from 50 minutes to 16 minutes. Not bad. The more modules you are building in your manifest, the more noticeable it is.

Unlike Buildstream, there is no Content Addressable Server, and flatpak-builder itself isn’t aware of the artifacts we publish, nor can it associate them with its cache keys. The OCI/ORAS cache artifacts are a manual and somewhat hacky solution, but it works well in practice until we have better tooling. To optimize for fewer cache misses, consider building modules from pinned commits/tags/tarballs, and build modules that track moving branches as late as possible.

If you are curious in the details, take a look at the related Merge Request in the templates repository and the follow up commits.

Free Palestine

Bilal Elmoussaoui: Testing a Rust library - Code Coverage

Planet GNOME - Mon, 13/10/2025 - 2:00 AM

It has been a couple of years since I started working on a Rust library called oo7 as a Secret Service client implementation. The library ended up also having support for per-sandboxed app keyring using the Secret portal with a seamless API for end-users that makes usage from the application side straightforward.

The project, with time, grew support for various components:

  • oo7-cli: A secret-tool replacement but much better, as it allows not only interacting with the Secret service on the DBus session bus but also with any keyring. oo7-cli --app-id com.belmoussaoui.Authenticator list, for example, allows you to read the sandboxed app with app-id com.belmoussaoui.Authenticator's keyring and list its contents, something that is not possible with secret-tool.
  • oo7-portal: A server-side implementation of the Secret portal mentioned above. Straightforward, thanks to my other library ASHPD.
  • cargo-credential-oo7: A cargo credential provider built using oo7 instead of libsecret.
  • oo7-daemon: A server-side implementation of the Secret service.

The last component was kickstarted by Dhanuka Warusadura, as we already had the foundation for that in the client library, especially the file backend reimplementation of gnome-keyring. The project is slowly progressing, but it is almost there!

The problem with replacing such a sensitive component as gnome-keyring-daemon is that you have to make sure the very sensitive user data is not corrupted, lost, or made inaccessible. For that, we need to ensure that both the file backend implementation in the oo7 library and the daemon implementation itself are well tested.

That is why I spent my weekend, as well as a whole day off, working on improving the test suite of the wannabe core component of the Linux desktop.

Coverage Report

One metric that can give the developer some insight into which lines of code or functions of the codebase are executed when running the test suite is code coverage.

In order to get the coverage of a Rust project, you can use a project like Tarpaulin, which integrates with the Cargo build system. For a simple project, a command like this, after installing Tarpaulin, can give you an HTML report:

cargo tarpaulin \
    --package oo7 \
    --lib \
    --no-default-features \
    --features "tracing,tokio,native_crypto" \
    --ignore-panics \
    --out Html \
    --output-dir coverage

Except in our use case, it is slightly more complicated. The client library supports switching between Rust native cryptographic primitives crates or using OpenSSL. We must ensure that both are tested.

For that, we can export our report in LCOV for native crypto and do the same for OpenSSL, then combine the results using a tool like grcov.

mkdir -p coverage-raw

cargo tarpaulin \
    --package oo7 \
    --lib \
    --no-default-features \
    --features "tracing,tokio,native_crypto" \
    --ignore-panics \
    --out Lcov \
    --output-dir coverage-raw
mv coverage-raw/lcov.info coverage-raw/native-tokio.info

cargo tarpaulin \
    --package oo7 \
    --lib \
    --no-default-features \
    --features "tracing,tokio,openssl_crypto" \
    --ignore-panics \
    --out Lcov \
    --output-dir coverage-raw
mv coverage-raw/lcov.info coverage-raw/openssl-tokio.info

and then combine the results with

cat coverage-raw/*.info > coverage-raw/combined.info

grcov coverage-raw/combined.info \
    --binary-path target/debug/ \
    --source-dir . \
    --output-type html \
    --output-path coverage \
    --branch \
    --ignore-not-existing \
    --ignore "**/portal/*" \
    --ignore "**/cli/*" \
    --ignore "**/tests/*" \
    --ignore "**/examples/*" \
    --ignore "**/target/*"

To make things easier, I added a bash script to the project repository that generates coverage for both the client library and the server implementation, as both are very sensitive and require intensive testing.

With that script in place, I also used it on CI to generate and upload the coverage reports at https://bilelmoussaoui.github.io/oo7/coverage/. The results were pretty bad when I started.

Testing

For the client side, most of the tests are straightforward to write; you just need to have a secret service implementation running on the DBus session bus. Things get quite complicated when the methods you have to test require a Prompt, a mechanism used in the spec to define a way for the user to be prompted for a password to unlock the keyring, create a new collection, and so on. The prompter is usually provided by a system component. For now, we just skipped those tests.

For the server side, it was mostly about setting up a peer-to-peer connection between the server and the client:

let guid = zbus::Guid::generate();
let (p0, p1) = tokio::net::UnixStream::pair().unwrap();
let (client_conn, server_conn) = tokio::try_join!(
    // Client
    zbus::connection::Builder::unix_stream(p0).p2p().build(),
    // Server
    zbus::connection::Builder::unix_stream(p1)
        .server(guid)
        .unwrap()
        .p2p()
        .build(),
)
.unwrap();

Thanks to the design of the client library, we keep the low-level APIs under oo7::dbus::api, which allowed me to straightforwardly write a bunch of server-side tests already.

There are still a lot of tests that need to be written and a few missing bits to ensure oo7-daemon is in an acceptable shape to be proposed as an alternative to gnome-keyring.

Don't overdo it

The coverage report is not meant to be targeted at 100%. It’s not a video game. You should focus only on the critical parts of your code that must be tested. Testing a Debug impl or a From trait (if it is straightforward) is not really useful, other than giving you a small dose of dopamine from "achieving" something.

Till then, may your coverage never reach 100%.

Hubert Figuière: Dev Log September 2025

Planet GNOME - Sat, 11/10/2025 - 2:00 AM

Not as much as I wanted to do was done in September.

libopenraw

Extracting more of the calibration values for colour correction on DNG. Currently working on fixing the purple colour cast.

Added Nikon ZR and EOS C50.

ExifTool

Submitted some metadata updates to ExifTool. Because it's nice to have, and also because libopenraw uses some of this data in autogenerated form: I have a Perl script to generate Rust code from it (it used to generate C++).

Niepce

Finally merged the develop branch with all the import dialog work, after having requested that it be removed from Damned Lies so as not to strain the translators, as there is a long way to go before we can freeze the strings.

Supporting cast

Among the packages I maintain / update on Flathub, LightZone is a digital photo editing application written in Java[1]. Updating to the latest runtime 25.08 caused it to ignore the HiDPI setting. It will honour the GDK_SCALE environment variable, but this isn't set. So I wrote the small command-line tool gdk-scale to output the value. See gdk-scale on gitlab. And another patch in the wrapper script.

HiDPI support remains a mess across the board. Fltk just recently gained support for it (it's used by a few audio plugins).

[1] Don't try this at home.

Sebastian Wick: SO_PEERPIDFD Gets More Useful

Planet GNOME - Fri, 10/10/2025 - 7:04 PM

A while ago I wrote about the limited usefulness of SO_PEERPIDFD for authenticating sandboxed applications. The core problem was simple: while pidfds gave us a race-free way to identify a process, we still had no standardized way to figure out what that process actually was - which sandbox it ran in, what application it represented, or what permissions it should have.

The situation has improved considerably since then.

cgroup xattrs

Cgroups now support user extended attributes. This feature allows arbitrary metadata to be attached to cgroup inodes using standard xattr calls.

We can change flatpak (or snap, or any other container engine) to create a cgroup for application instances it launches, and attach metadata to it using xattrs. This metadata can include the sandboxing engine, application ID, instance ID, and any other information the compositor or D-Bus service might need.

Every process belongs to a cgroup, and you can query which cgroup a process belongs to through its pidfd - completely race-free.
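As a rough sketch of what this could look like from a shell (the cgroup path and attribute name below are made up for illustration), the metadata is just ordinary user xattrs:

# launcher side: tag the application instance's cgroup
setfattr -n user.app-id -v org.example.App /sys/fs/cgroup/user.slice/app-instance.scope

# service side: resolve the client's cgroup (via the pidfd) and read the tag back
getfattr --absolute-names -n user.app-id /sys/fs/cgroup/user.slice/app-instance.scope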

Standardized Authentication

Remember the complexity from the original post? Services had to implement different lookup mechanisms for different sandbox technologies:

  • For flatpak: look in /proc/$PID/root/.flatpak-info
  • For snap: shell out to snap routine portal-info
  • For firejail: no solution

All of this goes away. Now there’s a single path:

  1. Accept a connection on a socket
  2. Use SO_PEERPIDFD to get a pidfd for the client
  3. Query the client’s cgroup using the pidfd
  4. Read the cgroup’s user xattrs to get the sandbox metadata

This works the same way regardless of which sandbox engine launched the application.

A Kernel Feature, Not a systemd One

It’s worth emphasizing: cgroups are a Linux kernel feature. They have no dependency on systemd or any other userspace component. Any process can manage cgroups and attach xattrs to them. The process only needs appropriate permissions and is restricted to a subtree determined by the cgroup namespace it is in. This makes the approach universally applicable across different init systems and distributions.

To support non-Linux systems, we might even be able to abstract away the cgroup details, by providing a varlink service to register and query running applications. On Linux, this service would use cgroups and xattrs internally.

Replacing Socket-Per-App

The old approach - creating dedicated wayland, D-Bus, etc. sockets for each app instance and attaching metadata to the service which gets mapped to connections on that socket - can now be retired. The pidfd + cgroup xattr approach is simpler: one standardized lookup path instead of mounting special sockets. It works everywhere: any service can authenticate any client without special socket setup. And it’s more flexible: metadata can be updated after process creation if needed.

For compositor and D-Bus service developers, this means you can finally implement proper sandboxed client authentication without needing to understand the internals of every container engine. For sandbox developers, it means you have a standardized way to communicate application identity without implementing custom socket mounting schemes.

Jiri Eischmann: Fedora & CentOS at LinuxDays 2025

Planet GNOME - Tue, 07/10/2025 - 6:23 PM

Another edition of LinuxDays took place in Prague last weekend – the country’s largest Linux event, drawing more than 1200 attendees – and, as every year, we had a Fedora booth there; this time we were also representing CentOS.

I was really glad that Tomáš Hrčka helped me staff the booth. I’m focused on the desktop part of Fedora and don’t follow the rest of the project in such detail. As a member of FESCo and the Fedora infra team, he has a great overview of what is going on in the project, and our knowledge complemented each other very well when answering visitors’ questions. I’d also like to thank Adellaide Mikova, who helped us tremendously despite not being a technical person.

This year I took our heavy 4K HDR display and showcased HDR support in Fedora Linux whose implementation was a multi-year effort for our team. I played HDR videos in two different video players (one that supports HDR and one that doesn’t), so that people could see a difference, and explained what needed to be implemented to make it work.

Another highlight of our booth were the laptops that run Fedora exceptionally well: Slimbook and especially Framework Laptop. Visitors were checking them out and we spoke about how the Fedora community works with the vendors to make sure Fedora Linux runs flawlessly on their laptops.

We also got a lot of questions about CentOS. We met quite a few people who were surprised that CentOS still exists. We explained to them that it lives on in the form of CentOS Stream and tried to dispel some of common misconceptions surrounding it.

Exhausting as it is, I really enjoy going to LinuxDays, and it’s also a great opportunity to explain things and get direct feedback from the community.

Ignacio Casal Quinteiro: Servo GTK

Planet GNOME - Wed, 01/10/2025 - 3:49 PM

I just checked and it seems that it has been 9 years since my last post in this blog :O

As part of my job at Amazon I started working on a GTK widget which will allow embedding a Servo webview inside a GTK application. This was mostly a research project, just to understand the current state of Servo and whether it is at a good enough state to migrate from WebKitGTK to it. I have to admit that it is always a pleasure to work with Rust and the great gtk-rs bindings. Servo, while not yet ready for production (or at least not for what we need in our product), was simple to embed, and I got something running in just a few days. The community is also amazing; I had some problems along the way and they provided good suggestions to get me unblocked in no time.

This project can be found in the following git repo: https://github.com/nacho/servo-gtk

I also created some Issues with some tasks that can be done to improve the project in case that anyone is interested.

Finally, I leave you with the usual mandatory screenshot:

Debarshi Ray: Ollama on Fedora Silverblue

Planet GNOME - Wed, 01/10/2025 - 2:30 AM

I found myself dealing with various rough edges and questions around running Ollama on Fedora Silverblue for the past few months. These arise from the fact that there are a few different ways of installing Ollama, /usr is a read-only mount point on Silverblue, people have different kinds of GPUs or none at all, the program that’s using Ollama might be a graphical application in a Flatpak or part of the operating system image, and so on. So, I thought I’ll document a few different use-cases in one place for future reference or maybe someone will find it useful.

Different ways of installing Ollama

There are at least three different ways of installing Ollama on Fedora Silverblue. Each of those have their own nuances and trade-offs that we will explore later.

First, there’s the popular single command POSIX shell script installer:

$ curl -fsSL https://ollama.com/install.sh | sh

There is a manual step by step variant for those who are uncomfortable with running a script straight off the Internet. They both install Ollama in the operating system’s /usr/local or /usr or / prefix, depending on which one comes first in the PATH environment variable, and attempts to enable and activate a systemd service unit that runs ollama serve.

Second, there’s a docker.io/ollama/ollama OCI image that can be used to put Ollama in a container. The container runs ollama serve by default.

Finally, there’s Fedora’s ollama RPM.

Surprise

Astute readers might be wondering why I mentioned the shell script installer in the context of Fedora Silverblue, because /usr is a read-only mount point. Won’t it break the script? Not really, or the script breaks but not in the way one might expect.

Even though, /usr is read-only on Silverblue, /usr/local is not, because it’s a symbolic link to /var/usrlocal, and Fedora defaults to putting /usr/local/bin earlier in the PATH environment variable than the other prefixes that the installer attempts to use, as long as pkexec(1) isn’t being used. This happy coincidence allows the installer to place the Ollama binaries in their right places.

The script does fail eventually when attempting to create the systemd service unit to run ollama serve, because it tries to create an ollama user with /usr/share/ollama as its home directory. However, this half-baked installation works surprisingly well as long as nobody is trying to use an AMD GPU.

NVIDIA GPUs work, if the proprietary driver and nvidia-smi(1) are present in the operating system, which are provided by the kmod-nvidia and xorg-x11-drv-nvidia-cuda packages from RPM Fusion; and so does CPU fallback.

Unfortunately, the results would be the same if the shell script installer is used inside a Toolbx container. It will fail to create the systemd service unit because it can’t connect to the system-wide instance of systemd.

Using AMD GPUs with Ollama is an important use-case. So, let’s see if we can do better than trying to manually work around the hurdles faced by the script.

OCI image

The docker.io/ollama/ollama OCI image requires the user to know what processing hardware they have or want to use. To use it only with the CPU without any GPU acceleration:

$ podman run \
    --name ollama \
    --publish 11434:11434 \
    --rm \
    --security-opt label=disable \
    --volume ~/.ollama:/root/.ollama \
    docker.io/ollama/ollama:latest

This will be used as the baseline to enable different kinds of GPUs. Port 11434 is the default port on which the Ollama server listens, and ~/.ollama is the default directory where it stores its SSH keys and artificial intelligence models.
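Once the container is running, a quick way to confirm that the server is reachable is to query the Ollama HTTP API from the host, for example by listing the locally available models:

curl http://localhost:11434/api/tags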

To enable NVIDIA GPUs, the proprietary driver and nvidia-smi(1) must be present on the host operating system, as provided by the kmod-nvidia and xorg-x11-drv-nvidia-cuda packages from RPM Fusion. The user space driver has to be injected into the container from the host using NVIDIA Container Toolkit, provided by the nvidia-container-toolkit package from Fedora, for Ollama to be able to use the GPUs.

The first step is to generate a Container Device Interface (or CDI) specification for the user space driver:

$ sudo nvidia-ctk cdi generate --output /etc/cdi/nvidia.yaml
…

Then the container needs to be run with access to the GPUs, by adding the --gpus option to the baseline command above:

$ podman run \
    --gpus all \
    --name ollama \
    --publish 11434:11434 \
    --rm \
    --security-opt label=disable \
    --volume ~/.ollama:/root/.ollama \
    docker.io/ollama/ollama:latest

AMD GPUs don’t need the driver to be injected into the container from the host, because it can be bundled with the OCI image. Therefore, instead of generating a CDI specification for them, an image that bundles the driver must be used. This is done by using the rocm tag for the docker.io/ollama/ollama image.

Then the container needs to be run with access to the GPUs. However, the --gpus option only works for NVIDIA GPUs. So, the specific devices need to be spelled out by adding --device options to the baseline command above:

$ podman run \
    --device /dev/dri \
    --device /dev/kfd \
    --name ollama \
    --publish 11434:11434 \
    --rm \
    --security-opt label=disable \
    --volume ~/.ollama:/root/.ollama \
    docker.io/ollama/ollama:rocm

However, because of how AMD GPUs are programmed with ROCm, it’s possible that some decent GPUs might not be supported by the docker.io/ollama/ollama:rocm image. The ROCm compiler needs to explicitly support the GPU in question, and Ollama needs to be built with such a compiler. Unfortunately, the binaries in the image leave out support for some GPUs that would otherwise work. For example, my AMD Radeon RX 6700 XT isn’t supported.

This can be verified with nvtop(1) in a Toolbx container. If there’s no spike in the GPU and its memory, then it's not being used.

It will be good to support as many AMD GPUs as possible with Ollama. So, let’s see if we can do better.

Fedora’s ollama RPM

Fedora offers a very capable ollama RPM, as far as AMD GPUs are concerned, because Fedora’s ROCm stack supports a lot more GPUs than other builds out there. It’s possible to check if a GPU is supported either by using the RPM and keeping an eye on nvtop(1), or by comparing the name of the GPU shown by rocminfo with those listed in the rocm-rpm-macros RPM.

For example, according to rocminfo, the name for my AMD Radeon RX 6700 XT is gfx1031, which is listed in rocm-rpm-macros:

$ rocminfo
ROCk module is loaded
=====================
HSA System Attributes
=====================
Runtime Version:         1.1
Runtime Ext Version:     1.6
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE
System Endianness:       LITTLE
Mwaitx:                  DISABLED
DMAbuf Support:          YES

==========
HSA Agents
==========
*******
Agent 1
*******
  Name:                    AMD Ryzen 7 5800X 8-Core Processor
  Uuid:                    CPU-XX
  Marketing Name:          AMD Ryzen 7 5800X 8-Core Processor
  Vendor Name:             CPU
  Feature:                 None specified
  Profile:                 FULL_PROFILE
  Float Round Mode:        NEAR
  Max Queue Number:        0(0x0)
  Queue Min Size:          0(0x0)
  Queue Max Size:          0(0x0)
  Queue Type:              MULTI
  Node:                    0
  Device Type:             CPU
  …
*******
Agent 2
*******
  Name:                    gfx1031
  Uuid:                    GPU-XX
  Marketing Name:          AMD Radeon RX 6700 XT
  Vendor Name:             AMD
  Feature:                 KERNEL_DISPATCH
  Profile:                 BASE_PROFILE
  Float Round Mode:        NEAR
  Max Queue Number:        128(0x80)
  Queue Min Size:          64(0x40)
  Queue Max Size:          131072(0x20000)
  Queue Type:              MULTI
  Node:                    1
  Device Type:             GPU
  …

The ollama RPM can be installed inside a Toolbx container, or it can be layered on top of the base registry.fedoraproject.org/fedora image to replace the docker.io/ollama/ollama:rocm image:

FROM registry.fedoraproject.org/fedora:42

RUN dnf --assumeyes upgrade
RUN dnf --assumeyes install ollama
RUN dnf clean all

ENV OLLAMA_HOST=0.0.0.0:11434

EXPOSE 11434

ENTRYPOINT ["/usr/bin/ollama"]
CMD ["serve"]
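Here is a rough sketch of how such an image could be built and run with Podman (the image name is arbitrary), mirroring the AMD invocation above:

$ podman build --tag localhost/ollama-fedora .

$ podman run \
    --device /dev/dri \
    --device /dev/kfd \
    --name ollama \
    --publish 11434:11434 \
    --rm \
    --security-opt label=disable \
    --volume ~/.ollama:/root/.ollama \
    localhost/ollama-fedora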

Unfortunately, for obvious reasons, Fedora’s ollama RPM doesn’t support NVIDIA GPUs.

Conclusion

From the puristic perspective of not touching the operating system’s OSTree image, and being able to easily remove or upgrade Ollama, using an OCI container is the best option for using Ollama on Fedora Silverblue. Tools like Podman offer a suite of features to manage OCI containers and images that are far beyond what the POSIX shell script installer can hope to offer.

It seems that the realities of GPUs from AMD and NVIDIA prevent the use of the same OCI image, if we want to maximize our hardware support, and force the use of slightly different Podman commands and associated set-up. We have to create our own image using Fedora’s ollama RPM for AMD, and the docker.io/ollama/ollama:latest image with NVIDIA Container Toolkit for NVIDIA.

Hans de Goede: Fedora 43 will ship with FOSS Meteor, Lunar and Arrow Lake MIPI camera support

Planet GNOME - Tue, 30/09/2025 - 8:55 PM
Good news: the just-released 6.17 kernel has support for the IPU7 CSI2 receiver, and the missing USBIO drivers have recently landed in linux-next. I have backported the USBIO drivers + a few other camera fixes to the Fedora 6.17 kernel.

I've also prepared an updated libcamera-0.5.2 Fedora package with support for IPU7 (Lunar Lake) CSI2 receivers as well as backporting a set of upstream SwStats and AGC fixes, fixing various crashes as well as the bad flicker MIPI camera users have been hitting with libcamera 0.5.2.

Together these 2 updates should make Fedora 43's FOSS MIPI camera support work on most Meteor Lake, Lunar Lake and Arrow Lake laptops!

If you want to give this a try, install / upgrade to Fedora 43 beta and install all updates. If you've installed rpmfusion's binary IPU6 stack please run:

sudo dnf remove akmod-intel-ipu6 'kmod-intel-ipu6*'

to remove it as it may interfere with the FOSS stack and finally reboot. Please first try with qcam:

sudo dnf install libcamera-qcam
qcam

which only tests libcamera. After that, give apps which use the camera through PipeWire a try, like GNOME's "Camera" app (snapshot) or video-conferencing in Firefox.

Note snapshot on Lunar Lake triggers a bug in the LNL Vulkan code, to avoid this start snapshot from a terminal with:

GSK_RENDERER=gl snapshot

If you have a MIPI camera which still does not work please file a bug following these instructions and drop me an email with the bugzilla link at hansg@kernel.org.


Matthew Garrett: Investigating a forged PDF

Planet GNOME - Thu, 25/09/2025 - 12:22 AM
I had to rent a house for a couple of months recently, which is long enough in California that it pushes you into proper tenant protection law. As landlords tend to do, they failed to return my security deposit within the 21 days required by law, having already failed to provide the required notification that I was entitled to an inspection before moving out. Cue some tedious argumentation with the letting agency, and eventually me threatening to take them to small claims court.

This post is not about that.

Now, under Californian law, the onus is on the landlord to hold and return the security deposit - the agency has no role in this. The only reason I was talking to them is that my lease didn't mention the name or address of the landlord (another legal violation, but the outcome is just that you get to serve the landlord via the agency). So it was a bit surprising when I received an email from the owner of the agency informing me that they did not hold the deposit and so were not liable - I already knew this.

The odd bit about this, though, is that they sent me another copy of the contract, asserting that it made it clear that the landlord held the deposit. I read it, and instead found a clause reading SECURITY: The security deposit will secure the performance of Tenant’s obligations. IER may, but will not be obligated to, apply all portions of said deposit on account of Tenant’s obligations. Any balance remaining upon termination will be returned to Tenant. Tenant will not have the right to apply the security deposit in payment of the last month’s rent. Security deposit held at IER Trust Account., where IER is International Executive Rentals, the agency in question. Why send me a contract that says you hold the money while you're telling me you don't? And then I read further down and found this:

Ok, fair enough, there's an addendum that says the landlord has it (I've removed the landlord's name, it's present in the original).

Except. I had no recollection of that addendum. I went back to the copy of the contract I had and discovered:

Huh! But obviously I could just have edited that to remove it (there's no obvious reason for me to, but whatever), and then it'd be my word against theirs. However, I'd been sent the document via RightSignature, an online document signing platform, and they'd added a certification page that looked like this:

Interestingly, the certificate page was identical in both documents, including the checksums, despite the content being different. So, how do I show which one is legitimate? You'd think given this certificate page this would be trivial, but RightSignature provides no documented mechanism whatsoever for anyone to verify any of the fields in the certificate, which is annoying but let's see what we can do anyway.

First up, let's look at the PDF metadata. pdftk has a dump_data command that dumps the metadata in the document, including the creation date and the modification date. My file had both set to identical timestamps in June, both listed in UTC, corresponding to the time I'd signed the document. The file containing the addendum? The same creation time, but a modification time of this Monday, shortly before it was sent to me. This time, the modification timestamp was in Pacific Daylight Time, the timezone currently observed in California. In addition, the data included two ID fields, ID0 and ID1. In my document both were identical, in the one with the addendum ID0 matched mine but ID1 was different.
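For reference, the invocation looks roughly like this (the filename is a placeholder); the PdfID0 and PdfID1 lines in the output are the ID fields discussed below:

pdftk contract.pdf dump_data | grep -E -A 1 'CreationDate|ModDate|PdfID'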

These ID tags are intended to be some form of representation (such as a hash) of the document. ID0 is set when the document is created and should not be modified afterwards; ID1 is initially identical to ID0, but changes when the document is modified. This is intended to allow tooling to identify whether two documents are modified versions of the same document. The identical ID0 indicated that the document with the addendum was originally identical to mine, and the different ID1 that it had been modified.

Well, ok, that seems like a pretty strong demonstration. I had the "I have a very particular set of skills" conversation with the agency and pointed these facts out, that they were an extremely strong indication that my copy was authentic and their one wasn't, and they responded that the document was "re-sealed" every time it was downloaded from RightSignature and that would explain the modifications. This doesn't seem plausible, but it's an argument. Let's go further.

My next move was pdfalyzer, which allows you to pull a PDF apart into its component pieces. This revealed that the documents were identical, other than page 3, the one with the addendum. This page included tags entitled "touchUp_TextEdit", evidence that the page had been modified using Acrobat. But in itself, that doesn't prove anything - obviously it had been edited at some point to insert the landlord's name, it doesn't prove whether it happened before or after the signing.

But in the process of editing, Acrobat appeared to have renamed all the font references on that page into a different format. Every other page had a consistent naming scheme for the fonts, and they matched the scheme in the page 3 I had. Again, that doesn't tell us whether the renaming happened before or after the signing. Or does it?

You see, when I completed my signing, RightSignature inserted my name into the document, and did so using a font that wasn't otherwise present in the document (Courier, in this case). That font was named identically throughout the document, except on page 3, where it was named in the same manner as every other font that Acrobat had renamed. Given the font wasn't present in the document until after I'd signed it, this is proof that the page was edited after signing.

But eh this is all very convoluted. Surely there's an easier way? Thankfully yes, although I hate it. RightSignature had sent me a link to view my signed copy of the document. When I went there it presented it to me as the original PDF with my signature overlaid on top. Hitting F12 gave me the network tab, and I could see a reference to a base.pdf. Downloading that gave me the original PDF, pre-signature. Running sha256sum on it gave me an identical hash to the "Original checksum" field. Needless to say, it did not contain the addendum.

Why do this? The only explanation I can come up with (and I am obviously guessing here, I may be incorrect!) is that International Executive Rentals realised that they'd sent me a contract which could mean that they were liable for the return of my deposit, even though they'd already given it to my landlord, and after realising this added the addendum, sent it to me, and assumed that I just wouldn't notice (or that, if I did, I wouldn't be able to prove anything). In the process they went from an extremely unlikely possibility of having civil liability for a few thousand dollars (even if they were holding the deposit it's still the landlord's legal duty to return it, as far as I can tell) to doing something that looks extremely like forgery.

There's a hilarious followup. After this happened, the agency offered to do a screenshare with me showing them logging into RightSignature and showing the signed file with the addendum, and then proceeded to do so. One minor problem - the "Send for signature" button was still there, just below a field saying "Uploaded: 09/22/25". I asked them to search for my name, and it popped up two hits - one marked draft, one marked completed. The one marked completed? Didn't contain the addendum.


Arun Raghavan: Asymptotic on hiatus

Planet GNOME - Wed, 24/09/2025 - 8:14 PM

Asymptotic was started 6 years ago, when I wanted to build something that would be larger than just myself.

We’ve worked with some incredible clients in this time, on a wide range of projects. I would be remiss to not thank all the teams that put their trust in us.

In addition to working on interesting challenges, our goal was to make sure we were making a positive impact on the open source projects that we are part of. I think we truly punched above our weight class (pardon the boxing metaphor), on this front – all the upstream work we have done stands testament to that.

Of course, the biggest single contributor to what we were able to achieve is our team. My partner, Deepa, was instrumental in shaping how the company was formed and run. Sanchayan (who took a leap of faith in joining us first), and Taruntej were stellar colleagues and friends on this journey.

It’s been an incredibly rewarding experience, but the time has come to move on to other things, and we have now paused operations. I’ll soon write about some recent work and what’s next.
