When you’re looking at source code it can be helpful to have some evidence indicating who wrote it. Author tags give a surface-level indication, but it turns out you can just lie in them, and if someone isn’t paying attention when merging, there’s certainly a risk that a commit lands with an author field that doesn’t represent reality. Account compromise can make this even worse - a PR opened by a compromised user is going to be hard to distinguish from one opened by the authentic user. In a world where supply chain security is an increasing concern, it’s easy to understand why people would want more evidence that code was actually written by the person it’s attributed to.
git has support for cryptographically signing commits and tags. Because git is about choice even if Linux isn’t, you can do this signing with OpenPGP keys, X.509 certificates, or SSH keys. You’re probably going to be unsurprised about my feelings around OpenPGP and the web of trust, and X.509 certificates are an absolute nightmare. That leaves SSH keys, but bare cryptographic keys aren’t terribly helpful in isolation - you need some way to decide which keys you trust. If you’re using something like GitHub you can extract that information from the set of keys associated with a user account[1], but that means a compromised GitHub account is now also a way to alter the set of trusted keys. And when was the last time you audited your keys, and how certain are you that every trusted key there is still 100% under your control? Surely there’s a better way.
SSH Certificates

And, thankfully, there is. OpenSSH supports certificates: an SSH public key that’s been signed by some trusted party, which lets you assert that the key is trustworthy in some form. SSH certificates also carry metadata in the form of Principals, a list of identities that the trusted party included in the certificate. These might simply be usernames, but they might also provide information about group membership. There’s also, unsurprisingly, native support in SSH for forwarding them (using the agent forwarding protocol), so you can keep your keys on your local system, ssh into your actual dev system, and have access to them without any additional complexity.
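For concreteness, issuing a certificate looks something like this (the file names, identity, and principals here are purely illustrative):

```
# Sign a user's public key with the CA's private key; this produces
# id_ed25519-cert.pub alongside the original public key.
ssh-keygen -s ca_key \
    -I "alice@example.com" \
    -n "alice,group:developers" \
    -V +52w \
    id_ed25519.pub
```

The -n principals and the -V validity window are what give a verifier something to reason about beyond the bare key.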
And, wonderfully, you can use them in git! Let’s find out how.
Local config

There are two main parameters you need to set. First,
```
git config set gpg.format ssh
```

because, unfortunately, for historical reasons all the git signing config is under the gpg namespace even if you’re not using OpenPGP. Yes, this makes me sad. But you’re also going to need something else. Either user.signingkey needs to be set to the path of your certificate, or you need to set gpg.ssh.defaultKeyCommand to a command that will talk to an SSH agent and find the certificate for you (this can be helpful if it’s stored on a smartcard or something rather than on disk). Thankfully for you, I’ve written one. It will talk to an SSH agent (either whatever’s pointed at by the SSH_AUTH_SOCK environment variable, or one given with the -agent argument), find a certificate signed by the key provided with the -ca argument, and then pass that back to git. Now you can simply pass -S to git commit and various other commands, and you’ll have a signature.
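Concretely, the two options look like this (the helper command name below is a placeholder, not a real tool - substitute whatever defaultKeyCommand helper you use):

```
# Option 1: point git directly at the certificate file on disk
git config set user.signingkey ~/.ssh/id_ed25519-cert.pub

# Option 2: ask an agent for the certificate instead
# ("ssh-cert-finder" is a hypothetical name)
git config set gpg.ssh.defaultKeyCommand "ssh-cert-finder -ca ~/.ssh/ca.pub"

# Either way, signing is then just:
git commit -S -m "Make the widgets frobnicate"
```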
Validating signatures

This is a bit more annoying. Using native git tooling ends up calling out to ssh-keygen[2], which validates signatures against a file in a format that looks somewhat like authorized_keys. This lets you add something like:
```
* cert-authority ssh-rsa AAAA…
```

which will match all principals (the wildcard) and succeed if the signature is made with a certificate that’s signed by the key following cert-authority. I recommend you don’t read the code that does this in git, because I made that mistake myself, but it does work. Unfortunately it doesn’t provide a lot of granularity around things like “Does the certificate need to be valid at this specific time” or “Should the user only be able to modify specific files”. But if you’re using GitHub or GitLab you wouldn’t need to do this at all, because they’ll just do this magically and put a “verified” tag against anything with a valid signature, right?
Haha. No.
Unfortunately while both GitHub and GitLab support using SSH certificates for authentication (so a user can’t push to a repo unless they have a certificate signed by the configured CA), there’s currently no way to say “Trust all commits with an SSH certificate signed by this CA”. I am unclear on why. So, I wrote my own. It takes a range of commits, and verifies that each one is signed with either a certificate signed by the key in CA_PUB_KEY or (optionally) an OpenPGP key provided in ALLOWED_PGP_KEYS. Why OpenPGP? Because even if you sign all of your own commits with an SSH certificate, anyone using the API or web interface will end up with their commits signed by an OpenPGP key, and if you want to have those commits validate you’ll need to handle that.
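My tool does its own certificate checks, but the rough shape of this kind of gate can be sketched with nothing but stock git (the allowed signers path and the BASE/HEAD variables are illustrative):

```
#!/bin/sh
# Minimal sketch: fail if any commit in the range lacks a valid
# signature. Assumes gpg.format is ssh and that the allowed signers
# file contains the cert-authority line shown earlier.
set -e

git config set gpg.ssh.allowedSignersFile ~/.config/git/allowed_signers

for commit in $(git rev-list "$BASE..$HEAD"); do
    git verify-commit "$commit"
done
```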
In any case, this should be easy enough to integrate into whatever CI pipeline you have. This is currently very much a proof of concept and I wouldn’t recommend deploying it anywhere, but I am interested in merging support for additional policy around things like expiry dates or group membership.
Doing it in hardware

Of course, certificates don’t buy you any additional security if an attacker is able to steal your private key material - they can steal the certificate at the same time. This can be avoided on almost all modern hardware by storing the private key in a separate cryptographic coprocessor - a Trusted Platform Module on PCs, or the Secure Enclave on Macs. If you’re on a Mac then Secretive has been around for some time, but things are a little harder on Windows and Linux - there’s various things you can do with PKCS#11, but you’ll hate yourself even more than you’ll hate me for suggesting it in the first place, and there’s ssh-tpm-agent, except it’s very much tied to Linux.
So, obviously, I wrote my own. This makes use of the go-attestation library my team at Google wrote, and is able to generate TPM-backed keys and export them over the SSH agent protocol. It’s also able to proxy requests back to an existing agent, so you can just have it take care of your TPM-backed keys and continue using your existing agent for everything else. In theory it should also work on Windows[3], but this is all in preparation for a talk I only found out I was giving about two weeks beforehand, so I haven’t actually had time to test anything other than that it builds.
And, delightfully, because the agent protocol doesn’t care about where the keys are actually stored, this still works just fine with forwarding - you can ssh into a remote system and sign something using a private key that’s stored in your local TPM or Secure Enclave. Remote use can be as transparent as local use.
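In practice that’s just ordinary agent forwarding (the host name is illustrative):

```
# Forward the local agent - TPM-backed keys included - to the dev box
ssh -A devbox

# Then, on devbox, signing works as normal; the actual signing
# operation happens back on the local machine's TPM
git commit -S -m "Signed without the key ever leaving my laptop"
```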
Wait, attestation?

Ah yes, you may be wondering why I’m using go-attestation and why the term “attestation” is in my agent’s name. It’s because when I’m generating the key I’m also generating all the artifacts required to prove that the key was generated on a particular TPM. I haven’t actually implemented the other end of that yet, but once implemented this would allow you to verify that a key was generated in hardware before you issue it an SSH certificate - and in an age of agentic bots accidentally exfiltrating whatever they find on disk, that gives you a lot more confidence that a commit was signed on hardware you own.
Conclusion

Using SSH certificates for git commit signing is great - the tooling is a bit rough, but otherwise they’re basically better than every other alternative. And if you already have infrastructure for issuing SSH certificates then you can just reuse it[4] and everyone wins.
[1] Did you know you can just download people’s SSH pubkeys from GitHub at https://github.com/<username>.keys? Now you do.

[2] Yes, it is somewhat confusing that the keygen command does things other than generate keys.

[3] This is more difficult than it sounds.

[4] And if you don’t, by implementing this you now have infrastructure for issuing SSH certificates, and can use that for SSH authentication as well.
Hello and welcome to another update on what’s been happening at the GNOME Foundation. It’s been two weeks since my last update, and there’s been plenty going on, so let’s dive straight in.
GNOME 50!

My update wouldn’t be complete without mentioning this week’s GNOME 50 release. It looks like an amazing release with lots of great improvements! Many thanks to everyone who contributed and made it such a success.
The Foundation plays a critical role in these releases, whether it’s providing development infrastructure, organising events where planning takes place, or providing development funding. If you are reading this and have the means, please consider signing up as a Friend of GNOME. Even small regular donations make a huge difference.
Board Meeting

The Board of Directors had its regular monthly meeting on March 9th, and we had a full agenda. Highlights from the meeting included:
The Travel Committee met both this week and last week, as it processed the initial batch of GUADEC sponsorship applications. As a result of this work, the first set of approvals has been sent out. Documentation has also been provided for those who are applying for visas for their travel.
The membership of the current committee is quite new, and it is having to figure out processes and decision-making principles as it goes, which makes its work more intensive than it might normally be. We are starting to write up guidelines for future funding rounds, to help smooth the process.
Huge thanks to our committee members Asmit, Anisa, Julian, Maria, and Nirbeek, for taking on this important work.
Conferences

Planning and preparation for the 2026 editions of LAS and GUADEC have continued over the past fortnight. The call for papers for both events is a particular focus right now, and there are a couple of important deadlines to be aware of:
There are teams behind each of these calls, reviewing and selecting proposals. Many thanks to the volunteers doing this work!
We are also excited to have sponsors come forward to support GUADEC.
Accounting

The Foundation has been undertaking a program of improvements to our accounting and finance systems in recent months. Those were put on hold for the audit fieldwork that took place at the beginning of March, but now that’s done, attention has turned to the remaining work items.
We’ve been migrating to a new payments processing platform since the beginning of the year, and setup work has continued, including configuration to make it integrate correctly with our accounting software, migrating credit cards over from our previous solution, and creating new web forms which are going to be used for reimbursement requests in future.
There are a number of significant advantages to the new system, like the accounting integration, which are already helping to reduce workloads, and I’m looking forward to having the final pieces of the new system in place.
Another major change currently underway is a move from a quarterly to a monthly accounting cadence. This is the cycle on which we “complete” the accounts, with all data entered and reconciled by the end of each cycle. The move to a monthly cycle means we will be generating finance reports more frequently, giving the Board a closer view of the organisation’s finances.
Finally, this week we also had our regular monthly “books” call with our accountant and finance advisor. This was our usual opportunity to resolve any questions that have come up in relation to the accounts, but we also discussed progress on the improvements that we’ve been making.
Infrastructure

On the infrastructure side, the main highlight in recent weeks has been the migration from Anubis to Fastly’s Next-Gen Web Application Firewall (WAF) for protecting our infrastructure. The result of this migration will be an increased level of protection from bots, while not getting in people’s way when they use our infra. The Fastly product provides sophisticated threat detection plus the ability for us to write our own fine-grained detection rules, so we can adjust firewall behaviour as we go.
Huge thanks to Fastly for providing us with sponsorship for this service – it is a major improvement for our community and would not have been possible without their help.
That’s it for this update. Thanks for reading and be on the lookout for the next update, probably in two weeks!
One of the most optimized algorithms in any standard library is sorting. It is used everywhere, so it must be fast. Thousands upon thousands of developer hours have been sunk into inventing new algorithms and making sort implementations faster. Pystd has a different design philosophy, where fast compilation times and readability of the implementation have higher priority than absolute performance. Performance still very much matters - it has to be fast - but not at the cost of a 10x compilation time.
This leads to the natural question of how much slower such an implementation would be compared to a production quality one. Could it even be faster? (Spoilers: no) The only way to find out is to run performance benchmarks on actual code.
To keep things simple there is only one test set: sorting 10'000'000 consecutive 64-bit integers that have been shuffled into a random order, which is the same for all algorithms. This is not an exhaustive test by any means, but you have to start somewhere. All tests used GCC 15.2 with -O2 optimization. The Pystd code was not thoroughly hand-optimized; I only fixed (some of the) obvious hotspots.
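A minimal harness matching that description looks something like this (a sketch, not the actual benchmark code):

```cpp
// 10'000'000 consecutive 64-bit integers, shuffled with a fixed seed
// so every algorithm under test sees the same permutation.
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    std::vector<int64_t> data(10'000'000);
    std::iota(data.begin(), data.end(), 0);                       // consecutive values
    std::shuffle(data.begin(), data.end(), std::mt19937_64{42});  // fixed seed

    auto start = std::chrono::steady_clock::now();
    std::stable_sort(data.begin(), data.end()); // or std::sort, or Pystd's sorts
    auto end = std::chrono::steady_clock::now();

    std::chrono::duration<double> elapsed = end - start;
    std::printf("sorted in %.2f s\n", elapsed.count());
}
```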
Stable sort

Pystd uses merge sort for stable sorting. The way the C++ standard specifies stable sort means that most implementations probably use it as well; I did not dive into the code to find out. Pystd's merge sort implementation consists of ~220 lines of code. It can be read on this page.
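For reference, the core of a top-down merge sort is small. A deliberately naive sketch (emphatically not Pystd's actual implementation, which handles iterators, small-run cutoffs and allocation far more carefully):

```cpp
#include <cstddef>
#include <vector>

template <typename T>
static void merge_sort_impl(std::vector<T>& a, std::vector<T>& buf,
                            size_t lo, size_t hi) {
    if (hi - lo < 2)
        return;
    const size_t mid = lo + (hi - lo) / 2;
    merge_sort_impl(a, buf, lo, mid);
    merge_sort_impl(a, buf, mid, hi);
    // Merge the two sorted halves into buf, then copy back.
    size_t i = lo, j = mid, k = lo;
    while (i < mid && j < hi)
        buf[k++] = (a[j] < a[i]) ? a[j++] : a[i++]; // strict < keeps it stable
    while (i < mid) buf[k++] = a[i++];
    while (j < hi)  buf[k++] = a[j++];
    for (size_t m = lo; m < hi; ++m)
        a[m] = buf[m];
}

template <typename T>
void merge_sort(std::vector<T>& a) {
    std::vector<T> buf(a.size());
    merge_sort_impl(a, buf, 0, a.size());
}
```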
libstdc++ can do the sort in 0.9 seconds whereas Pystd takes 0.94 seconds. Getting to within 5% with such a simple implementation is actually quite astonishing, even when considering all the usual caveats, such as the possibility that it completely falls over with a different input data distribution.
Regular sort

Both libstdc++ and Pystd use introsort. Pystd's implementation is ~150 lines of code, but it also uses heapsort, which is a further ~100 lines of code. Code for introsort is here, and heapsort is here.
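The control flow that makes introsort tick is easy to sketch: quicksort as the main driver, heapsort as the escape hatch once recursion gets suspiciously deep, and insertion sort for small ranges. Again, this is an illustration, not Pystd's (or libstdc++'s) actual code:

```cpp
#include <algorithm> // make_heap, sort_heap
#include <bit>       // bit_width (C++20)
#include <cstddef>
#include <utility>   // swap
#include <vector>

template <typename T>
static void insertion_sort(std::vector<T>& a, size_t lo, size_t hi) {
    for (size_t i = lo + 1; i < hi; ++i)
        for (size_t j = i; j > lo && a[j] < a[j - 1]; --j)
            std::swap(a[j], a[j - 1]);
}

template <typename T>
static void introsort_impl(std::vector<T>& a, size_t lo, size_t hi, int depth) {
    while (hi - lo > 16) {
        if (depth-- == 0) {
            // Quicksort is degenerating; heapsort guarantees O(n log n).
            std::make_heap(a.begin() + lo, a.begin() + hi);
            std::sort_heap(a.begin() + lo, a.begin() + hi);
            return;
        }
        // Hoare partition around the middle element.
        T pivot = a[lo + (hi - lo) / 2];
        size_t i = lo, j = hi - 1;
        while (true) {
            while (a[i] < pivot) ++i;
            while (pivot < a[j]) --j;
            if (i >= j) break;
            std::swap(a[i], a[j]);
            ++i;
            --j;
        }
        introsort_impl(a, j + 1, hi, depth); // recurse into the right half
        hi = j + 1;                          // loop on the left half
    }
    insertion_sort(a, lo, hi);
}

template <typename T>
void introsort(std::vector<T>& a) {
    const int depth = 2 * static_cast<int>(std::bit_width(a.size()));
    introsort_impl(a, 0, a.size(), depth);
}
```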
libstdc++ gets the sort done in 0.76 seconds whereas Pystd takes 0.82 seconds, making it approximately 8% slower. That's not great, but getting within 10% with a few evenings' work is still a pretty good result. Especially since, and I'm speculating here, std::sort has probably seen a lot more optimization work than std::stable_sort because it is used more.
For heavy duty number crunching this would be way too slow. But for moderate data set sizes the performance difference might be insignificant for many use cases.
Note that all of these are faster (note: I did not measure) than libc's qsort, because qsort requires an indirect function call for every comparison, i.e. the comparison method cannot be inlined.
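The reason is visible right in qsort's interface: the comparator is an opaque function pointer, so the compiler (usually) cannot inline it into the sort loop, while std::sort's comparison is visible at compile time:

```cpp
#include <cstdint>
#include <cstdlib> // qsort

// Passed to qsort as a function pointer: one indirect call per comparison.
static int cmp_i64(const void* a, const void* b) {
    const int64_t x = *static_cast<const int64_t*>(a);
    const int64_t y = *static_cast<const int64_t*>(b);
    return (x > y) - (x < y); // avoids the overflow of returning x - y
}

// qsort(data, n, sizeof(int64_t), cmp_i64); // indirect call per comparison
// std::sort(data, data + n);                // operator< gets inlined
```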
Where does the time go?

Valgrind will tell you that quite easily.
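For example, with callgrind (the binary name is illustrative):

```
valgrind --tool=callgrind ./sort_bench
callgrind_annotate callgrind.out.*
```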
The resulting profile shows quite clearly why big O notation can be misleading: both quicksort (the inner loop of introsort) and heapsort have "the same" average time complexity, but every call to heapsort takes approximately 4.5 times as long.
Two years have passed since I last shared my Friday app icon sketches, but the sketching itself hasn't stopped.
For me, it's the best way to figure out the right metaphors before we move to final pixels. These sketches are just one part of the GNOME Design Team's wider effort to keep our icons consistent and meaningful—it is an endeavor that’s been going on for years.
If you design a GNOME app following the GNOME Design Guidelines, feel free to request an icon to be made for you. If you are serious and apply for inclusion in GNOME Circle, you are way more likely to get a designer's attention.
It’s clear LLMs are one of the biggest changes in technology ever. The rate of progress is astounding: recently, due to a configuration mistake, I accidentally used Claude Sonnet 3.5 (released ~2 years ago) instead of Opus 4.6 for a task, looked at the output, and thought “what is this garbage?”
But daily now: Opus 4.6 is able to generate reasonable PoC-level Rust code for complex tasks for me. It’s not perfect – it’s a combination of exhausting and exhilarating to find the 10% absolutely bonkers/broken code that still makes it past subagents.
So yes I use LLMs every day, but I will be clear: if I could push a button to “un-invent” them I absolutely would because I think the long term issues in larger society (not being able to trust any media, and many of the things from Dario’s recent blog etc.) will outweigh the benefits.
But since we can’t un-invent them: here’s my opinion on how they should be used. As a baseline, I agree with a lot from this doc from Oxide about LLMs. What I want to talk about is especially around some of the norms/tools that I see as important for LLM use, following principles similar to those.
On framing: there’s “core” software vs “bespoke”. An entirely new capability, of course, is for e.g. a nontechnical restaurant owner to use an LLM to generate (“vibe code”) a website (hopefully excepting online ordering and payments!). I’m not overly concerned about this.
Whereas “core” software is what organizations/businesses provide and maintain for others. I work for a company (Red Hat) that produces a lot of this. I am sure no one would want to run, for real, an operating system, cluster filesystem, web browser, or monitoring system that was primarily “vibe coded”.
And while I respect people and groups that are trying to entirely ban LLM use, I don’t think that’s viable for at least my space.
Hence the subject of this blog is my perspective on how LLMs should be used for “core” software: not vibe coding, but using LLMs responsibly and intelligently – and always under human control and review.
Agents should amplify and be controlled by humans

I think most of the industry would agree we can’t give responsibility to LLMs. That means they must be overseen by humans. And if they’re overseen by a human, then I think they should amplify what that human thinks and does as a baseline – intersected with the constraints of the task, of course.
On “amplification”: everyone using an LLM to generate content should inject their own system prompt (e.g. AGENTS.md) or equivalent. Here’s mine – notice I turn off all the emoji etc. and try hard to tune down bulleted lists, because that’s not my style. This is a truly baseline thing to do.
Now, most LLM-generated content targeted at core software is still going to need review, but ensuring the baseline matches what the human does helps ensure alignment.
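As a flavor of what such a file can contain (an illustrative sketch, not my actual prompt, which is linked above):

```
# AGENTS.md
## Style
- No emoji anywhere: not in code, comments, commit messages, or replies.
- Prefer prose paragraphs; use a bulleted list only when it is clearly better.
- Commit messages: imperative mood, body wrapped at 72 columns.
- If intent is ambiguous, ask a question instead of guessing.
```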
Pull request reviews

Let’s focus on a very classic problem: pull request reviews. Many projects have wired up a flow such that when a PR comes in, it gets reviewed by a model automatically. Many projects and tools pitch this; we use one on some of my projects.
But I want to get away from this, because in my experience these reviews are a combination of genuinely useful findings and plausible-sounding noise. In practice, we just want the first, of course.
How I think it should work: the review should be driven by the human reviewer, who invokes an LLM on demand, with their own customized prompt, to assist their review – rather than having a bot fire automatically on every PR.
I wrote this agent skill to try to make this work well, and if you search you can see it in action in a few places, though I haven’t truly tried to scale this up.
I think the above matches the vision of LLMs amplifying humans.
Code Generation

There’s no doubt that LLMs can be amazing code generators, and I use them every day for that. But for any “core” software I work on, I absolutely review all of the output – not just superficially, and changes to core algorithms very closely.
At least in my experience the reality is still there’s that percentage of the time when the agent decided to reimplement base64 encoding for no reason, or disable the tests claiming “the environment didn’t support it” etc.
And to me it’s still a baseline requirement for “core” software that a second human reviews before merge (per above!), with their own customized LLM assisting them (ideally a different model, etc.).
FOSS vs closed

Of course, my position here is biased a bit by working on FOSS – I still very much believe in it, and working in a FOSS context can be quite different from working in a “closed environment”, where a company/organization may reasonably want to (and be able to) apply uniform rules across a codebase.
While for sure LLMs allow organizations to create their own Linux kernel filesystems or bespoke Kubernetes forks or virtual machine runtime or whatever – it’s not clear to me that it is a good idea for most to do so. I think shared (FOSS) infrastructure that is productized by various companies, provided as a service and maintained by human experts in that problem domain still makes sense. And how we develop that matters a lot.