
Christian Schaller: Using AI to create some hardware tools and bring back the past

Planet GNOME - Mon, 23/03/2026 - 5:07 PM

As I have mentioned in a couple of blog posts now, I have been working a lot with AI recently as part of my day-to-day job at Red Hat, but I have also been spending a lot of evenings and weekends on it (sorry kids, pappa has switched to 1950s mode for now). One of the things I have spent time on is trying to figure out what the limitations of AI models are and what kinds of uses they can have for open source developers.

One thing to mention before I start talking about some of my concrete efforts is that I have more and more come to the conclusion that AI is an incredible tool to hypercharge someone in their work, but I feel it tends to fall short for fully autonomous systems. In my experiments AI can do things many, many times faster than you ordinarily could; I am speaking specifically in the context of coding here, which is what is most relevant for those of us in the open source community.

One annoyance I have had for years as a Linux user is that I get new hardware with features that are not easily available to me on Linux. So I have tried using AI to create such applications for some of my hardware, which includes an Elgato light and a Dell UltraSharp webcam.

I found, and this is based on using Google Gemini, Claude Sonnet and Opus, and OpenAI Codex, that they all required me to direct and steer the AI continuously. If I let the AI just work on its own, more often than not it would end up going in circles, diverging from the route it was supposed to take, or taking shortcuts that made the wanted output useless. On the other hand, if I kept on top of the AI and intervened to point it in the right direction, it could put things together for me in very short time spans.
My projects are also mostly what I would describe as leaf nodes, the kind of projects that are already one-person projects in the community for the most part. There are extra considerations when contributing to bigger efforts, and a point I have seen made by others in the community too is that you need to own the patches you submit, meaning that even if an AI helped you write a patch you still need to ensure that what you submit is in a state where it can be helpful and is mergeable. I know that some people feel that means you need to be capable of reviewing the proposed patch and ensuring it is clean and tidy before submitting it, and I agree that if you expect your patch to get merged that has to be the case. On the other hand, I don't think AI patches are useless even if you are not able to validate them beyond 'does it fix my issue'.

My friend and PipeWire maintainer Wim Taymans and I were talking a few years ago about what I described at the time as the problem of 'bad quality patches', and this was long before AI-generated code was a thing. Wim's response, which I have often thought about since, was "a bad patch is often a great bug report". And that holds true for AI-generated patches too. If someone makes a patch using AI, a patch they don't have the ability to review themselves, but they test it and it fixes their problem, it might function as a clearer bug report than just a written description from the user submitting the report. Of course they should be clear in their bug report that they don't have the skills to review the patch themselves, but that they hope it can be useful as a tool for pinpointing what isn't working in the current codebase.

Anyway, let me talk about the projects I made. They are all found on my personal website, Linuxrising.org, a site that I also used AI to update after not having touched it in years.

Elgato Light GNOME Shell extension


The first project I worked on is a GNOME Shell extension for controlling my Elgato Key Light Wifi lamp. The Elgato lamp is basically meant for podcasters and people doing a lot of video calls, so they can easily configure the light in their room to make a good recording. The lamp announces itself over mDNS, and can thus be discovered via Avahi. For Windows and Mac the vendor provides software to control the lamp, but unfortunately not for Linux.
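For context, the lamp's control protocol is a small HTTP/JSON API on port 9123 that the community has reverse-engineered (it is not an official Elgato spec, so the endpoint and payload below are based on that community documentation, and the IP address is a placeholder). A minimal sketch of driving it from Python:

```python
import json
import urllib.request

# Community-documented Key Light endpoint (not an official Elgato API).
API_PORT = 9123
API_PATH = "/elgato/lights"

def build_payload(on: bool, brightness: int, temperature: int) -> dict:
    """Build the JSON body the lamp expects.

    brightness is clamped to 0-100; temperature is in the lamp's own
    units (roughly 143-344 on the Key Light, per community docs).
    """
    return {
        "numberOfLights": 1,
        "lights": [{
            "on": 1 if on else 0,
            "brightness": max(0, min(100, brightness)),
            "temperature": temperature,
        }],
    }

def set_light(ip: str, on: bool, brightness: int = 50, temperature: int = 200) -> None:
    """PUT the new state to a lamp whose IP was found via mDNS/Avahi."""
    body = json.dumps(build_payload(on, brightness, temperature)).encode()
    req = urllib.request.Request(
        f"http://{ip}:{API_PORT}{API_PATH}", data=body, method="PUT",
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        resp.read()

# Example (hypothetical address): set_light("192.168.1.42", on=True, brightness=30)
```

A GNOME Shell extension would do the same thing in JavaScript with Soup, but the wire format is identical.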

There had been GNOME Shell extensions for controlling the lamp in the past, but they had not been kept up to date and their feature set was quite limited. Anyway, I grabbed one of these old extensions and told Claude to update it for the latest version of GNOME. It took a few iterations of testing, but we eventually got there and I had a simple GNOME Shell extension that could turn the lamp off and on and adjust hue and brightness. This was a quite straightforward process because I had code that had been working at some point; it just needed some adjustments to work with the current generation of GNOME Shell.

Once I had the basic version done I decided to take it a bit further and try to recreate the configuration dialog that the Windows application offers for the full feature set, which took quite a bit of back and forth with Claude. I found that if I ask Claude to re-implement from a screenshot, it recreates the visual layout of the user interface first, meaning that it makes sure that if the screenshot has 10 buttons, you get a GUI with 10 buttons. You then have to iterate both on the UI design, for example telling Claude that I wanted a dark UI style to match the GNOME Shell, and on each bit of functionality in the UI. For example, most of the buttons in the UI didn't really do anything at the start, but when you go back and ask Claude to add specific functionality per button, it is usually able to do so.

Elgato Light Settings Application

So this was probably a fairly easy thing for the AI, because all the functionality of the lamp could be queried over Avahi; there were no 'secret' USB registers to be set or things like that.
Since the application was meant to be part of the GNOME Shell extension I didn't want it to have any dependency requirements that the Shell extension itself didn't have, so I asked Claude to write this application in JavaScript, and I have to say that so far I haven't seen any major differences in the AI's ability to generate code in different languages. The application now reproduces most of the functionality of the Windows application. Looking back, I think it took me a couple of days in total to put this tool together.

Dell Ultrasharp Webcam 4K

Dell UltraSharp 4K settings application for Linux

The second application on the list is a controller application for my Dell UltraSharp Webcam 4K UHD (WB7022). This is a high-end webcam that I have been using for a while, comparable to something like the Logitech BRIO 4K. It has mostly worked since I got it with the generic UVC driver, and I have been using it for my Google Meet calls and similar, but since there was no native Linux control application I could not easily access a lot of the camera's features. To address this I downloaded the Windows application installer, installed it under Windows, and took a bunch of screenshots showcasing all the features of the application. I then fed the screenshots into Claude and told it I wanted a GTK+ version of this application for Linux. I originally wanted to have Claude write it in Rust, but after hitting some issues in the PipeWire Rust bindings I decided to just use C instead.

It took me probably 3-4 days of intermittent work to get this application working, and Claude turned out to be really good at digging into Windows binaries and finding things like USB property values. Claude was also able to analyze the screenshots and figure out the features the application needed to have. It was a lot of trial and error writing the application, but one way I was able to automate it was by building a screenshot option into the application, allowing it to programmatically take screenshots of itself. That allowed me to tell Claude to try fixing something and then check the screenshot to see if it worked, without me having to interact with the prompt. Also, to get the user interface looking nicer, once I had all the functionality in I asked Claude to tweak the user interface to follow the GNOME Human Interface Guidelines, which greatly improved the quality of the UI.

At this point my application should have almost all the features of the Windows application. Since it is using PipeWire underneath it is also tightly integrated with the PipeWire media graph, allowing you to see it connect and work with your applications in PipeWire patchbay tools like Helvum. The remaining features are software features of Dell's application, like background removal and so on, but I think that if I decide to implement those it should be as a standalone PipeWire tool that can be used with any camera, not tied to this specific one.

Red Hat Planet

The application shows the world's Red Hat offices and includes links to the latest Red Hat news.


The next application on my list is called Red Hat Planet. It is mostly a fun toy, but I made it partly to revisit the Xtraceroute modernisation I blogged about earlier. As I mentioned in that blog post, Xtraceroute, while cute, isn't really very useful IMHO, since with the way the modern internet works your packets rarely jump around the world. Anyway, as people pointed out after I posted about the port, it wasn't an actual Vulkan application; it was a GTK+ application using the GTK+ Vulkan backend. The globe animation itself was all software rendered.

I decided that if I was going to revisit the Vulkan problem I wanted to use a different application idea than traceroute. The idea I had was once again a 3D-rendered globe, but this one reading the coordinates of Red Hat's global offices from a file and rendering them on the globe, and alongside that providing clickable links to recent Red Hat news items. So once again maybe not the world's most useful application, but I thought it was a cute idea and hoped it would allow me to create it using actual Vulkan rendering this time.

Creating this turned out to be quite the challenge (although it seems to have gotten easier since I started this effort), with Claude Opus 4.6 being more capable at writing Vulkan code than Claude Sonnet, Google Gemini or OpenAI Codex were when I started trying to create this application.
When I started this project I had to keep extremely close tabs on the AI and what it was doing in order to force it to keep working on this as a Vulkan application, as it kept wanting to simplify with software rendering or OpenGL and sometimes would start down that route without even asking me. That hasn't happened more recently, so maybe that was a problem of the AI of five months ago.

I also discovered as part of this that rendering Vulkan inside a GTK4 application is far from trivial, and would ideally need the GTK4 developers to create such a widget to get rendering timings and similar correct. It is one of the few times I have had Claude outright say that writing a widget like that was beyond its capabilities (I haven't tried again, so I don't know if I would get the same response today). So I moved the application to SDL3 first, which worked in that I got a spinning globe with red dots on it, but it came with its own issues, in the sense that SDL is not a UI toolkit as such. So while I got the globe rendered and working, the AI struggled badly with the news area when using SDL.

So I ended up trying to port the application to Qt, which again turned out to be non-trivial in terms of how much trial and error it took to get right. In my mind I had a working globe using Vulkan, so how hard could it be to move it from SDL3 to Qt? But there were a million rendering issues. In the end I used the Qt Vulkan rendering example as a starting point and 'ported' the globe over bit by bit, testing at each step, to finally get a working version. The current version is a Vulkan+Qt app and it basically works, although it seems the planet is not spinning correctly on AMD systems at the moment, while it works well on Intel and NVIDIA systems.

WMDock

WmDock fullscreen with config application.


This project came out of a chat with Matthias Clasen over lunch, where I mused about whether Claude would be able to bring the old Window Maker dockapps to GNOME and Wayland. It turns out the answer is yes, although the method of doing so changed as I worked on it.

My initial thought was for Claude to create a shim that the old dockapps could be compiled against, without any changes. That worked, but then I had a ton of dockapps showing up in things like the Alt+Tab menu. It also required me to restart my GNOME Shell session all the time while testing the extension housing the dockapps. In the end I decided that since a lot of the old dockapps don't work with modern Linux versions anyway, and thus would need to be actively ported, I should accept shipping the dockapps with the tool and port them to modern Linux technologies. This worked well and is what I currently have in the repo. I think the wildest port was taking the old webcam dockapp from V4L1 to PipeWire, although updating the sound controller from ESD to PulseAudio was also a generational jump.

XMMS resuscitated

XMMS brought back to life


So the last effort was reviving the old XMMS media player. I had been asking Claude to do this for months and it kept failing, but with Opus 4.6 it plowed through and had something working in a couple of hours, with no input from me beyond kicking it off. This was a big lift, moving it from GTK2 and Esound to GTK4, GStreamer and PipeWire. One thing I realized is that a challenge with bringing an old app back is that, since keeping the themeable UI is a big part of this specific application, adding new features is a little kludgy. Anyway, I did set it up to be able to use network speakers through PipeWire, and you can also import your Spotify playlists and play them, although you need to run the Spotify application in the background to be able to play the sound on your local device.

Monkey Bubble

Monkey Bubble was a game created in the heyday of GNOME 2, and while I always thought it was a well-made little game it had never been updated to newer technologies. So I asked Claude to port it to GTK4 and use GStreamer for audio. This port was fairly straightforward, with Claude having few problems with it. I also asked Claude to add high scores using the libmanette library and network game discovery with Avahi. So some nice little improvements.

All the applications are available either as Flatpaks or Fedora RPMs through the GitLab project page, so I hope people enjoy these applications and tools, and enjoy the blasts from the past as much as I did.

Worries about Artificial Intelligence

When I speak to people both inside Red Hat and outside in the community I often come across negativity, or sometimes even anger, towards artificial intelligence in the coding space. And to be clear, I too worry about where things could be heading and how it will affect my livelihood, so I am not unsympathetic to those worries at all. I probably worry about these things at least a few times a day. At the same time, I don't think we can hide from or avoid this change; it is happening with or without us. We have to adapt to a world where this tool exists, just like our ancestors adapted to jobs changing due to industrialization and science before us. So do I worry about the future? Yes, I do. Do I worry about how I might personally be affected by this? Yes, I do. Do I worry about how society might change for the worse due to this? Yes, I do. But I also remind myself that I don't know the future, that people have found ways to move forward before, and that society has survived and thrived. What I can control is staying on top of these changes myself and taking advantage of them where I can, and that is my recommendation to the wider open source community too: leverage these tools to move open source forward, and at the same time put our weight on the scale towards the best practices and policies around artificial intelligence.

The Next Test, and where AI might have hit a limit for me

So all these previous efforts taught me a lot of tricks and helped me understand how I can work with an AI agent like Claude, but especially after the success with the webcam I decided to up the stakes and see if I could use Claude to help me create a driver for my Plustek OpticFilm 8200i scanner. I have zero background in any kind of driver development, and probably less than zero in the field of scanner drivers specifically. So I ended up going down a long row of dead ends on this journey, and to this day I have not been able to get a single scan out of the scanner that even remotely resembles the images I am trying to scan.

My idea was to have Claude analyse the Windows and Mac drivers and build me a SANE driver based on that, which turned out to be horribly naive and led nowhere. One thing I realized is that I would need to capture USB traffic to help Claude contextualize some of the findings it had from looking at the Windows and Mac drivers. I started out with Wireshark, feeding Claude the Wireshark capture logs. Claude quite soon concluded that the Wireshark logs weren't good enough and that I needed lower-level traffic capture. Buying a USB packet analyzer isn't cheap, so I had the idea that I could use one of the ARM development boards floating around the house as a USB relay, allowing me to perfectly capture the USB traffic. With some work I did manage to get my LibreComputer Solitude AML-S905D3-CC ARM board going, setting it in device mode, and I had a USB relay daemon running on the board. After a lot of back and forth, and even at one point asking Claude to implement a missing feature in the USB kernel stack, I realized this would never work, and I ended up ordering a Beagle USB 480 hardware analyzer.

At about the same time I came across the chipset documentation for the Genesys Logic GL845 chip in the scanner. I assumed that between my new USB analyzer and the chipset docs it would be easy going from here on, but so far no luck. I even had Claude decompile the Windows driver using Ghidra and then try to extract the needed information from the decompiled code.
I also bought a network-controlled electric outlet so that Claude can cycle the power of the scanner on its own.

So the problem here is that with zero scanner driver knowledge I don't even know what I should be looking for, or where I should point Claude, so I kept trying to brute force it by trial and error. I managed to make SANE detect the scanner and I got motor and lamp control going, but that is about it. I can hear the scanner motor running when I ask for a scan, but I don't know if it moves correctly. I can see light turning on and off inside the scanner, but I once again don't know if it is happening at the correct times and for the correct durations. And Claude of course has no way of knowing either, relying on me to tell it whether something seems to have improved compared to how it was.

I have now used Claude to create two tools for Claude to use: one using a camera to detect what is happening with the light inside the scanner, and the other recording sound, to compare the sounds this driver makes with the sounds of a working scan done with the macOS application. I don't know if this will take me to the promised land eventually, but so far I consider my scanner driver attempt a giant failure. At the same time, I do believe that if someone actually skilled in scanner driver development were doing this, they could have guided Claude to do the right things and probably would have had a working driver by now.
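The light-detection tool can be reduced to a simple idea: threshold the brightness of camera frames and record when the lamp toggles. The sketch below is my own illustration of that idea, not the actual tool; the function names and the flat-pixel-list frame format are assumptions for the example.

```python
def mean_brightness(frame):
    """Mean of 8-bit grayscale pixel values (0-255) for one frame."""
    return sum(frame) / len(frame)

def lamp_events(frames, threshold=60.0):
    """Yield (frame_index, 'on'|'off') transitions.

    frames is an iterable of grayscale frames (flat pixel lists),
    e.g. captured from a webcam pointed into the scanner.
    """
    lit = False
    for i, frame in enumerate(frames):
        now_lit = mean_brightness(frame) > threshold
        if now_lit != lit:
            yield (i, "on" if now_lit else "off")
            lit = now_lit
```

Comparing the event timeline produced while exercising the in-progress driver against the timeline recorded during a known-good scan gives the AI a concrete signal ("the lamp came on two frames too late") instead of a human impression.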

So I don't know if I have hit the kind of thing that will always be hard for an AI, as it has to interact with things existing in the real world, or if newer versions of Claude, Gemini or Codex will suddenly get past a threshold and make this seem easy, but this is where things stand for me at the moment.

Reddit Is Weighing Identity Verification Methods To Combat Its Bot Problem

Slashdot - Mon, 23/03/2026 - 3:34 PM
An anonymous reader quotes a report from Engadget: There could be one more step required before creating an account and posting on Reddit in the future. According to Reddit's CEO, Steve Huffman, the social media platform is exploring different ways to verify a user is human and not a bot. When asked by the TBPN podcast how to confirm that it's a human using Reddit, Huffman responded with several verification methods with varying degrees of heavy-handedness. "The most lightweight way is with something like Face ID or Touch ID," Huffman said during the interview. "They actually require a human presence, like a human has to touch, or do or look at something, so that actually just proves there's a person there or gets you pretty far." Besides these passkey methods that use biometrics data, Huffman said there are other options like relying on third-party services that are decentralized or don't require ID. On the other end of the spectrum, Huffman also mentioned more burdensome options, like ID-checking services. [...] "Part of our promise for our users is we don't know your name but we do want to know you're a person," Huffman said. "It'll be an evolution for us for a while, and probably every platform to find the right middle ground here." Reddit co-founder and former executive chair, Alexis Ohanian, said on X that Reddit requiring Face ID wasn't something he expected but agreed that something had to be done about the fake content from bots, adding that, "I just don't know how to sell face-scanning to Redditors or even lurkers." We reached out to Reddit's communications team and will update the story when we hear back. The Digg beta shut down earlier this month after failing to fight the overwhelming influx of AI-driven bots and spam. "The internet is now populated, in meaningful part, by sophisticated AI agents and automated accounts," said CEO Justin Mezzell. 
"We knew bots were part of the landscape, but we didn't appreciate the scale, sophistication, or speed at which they'd find us." "We banned tens of thousands of accounts. We deployed internal tooling and industry-standard external vendors. None of it was enough. When you can't trust that the votes, the comments, and the engagement you're seeing are real, you've lost the foundation a community platform is built on."


Jussi Pakkanen: Everything old is new again: memory optimization

Planet GNOME - Mon, 23/03/2026 - 3:06 PM

At this point in history, AI sociopaths have purchased all the world's RAM in order to run their copyright infringement factories at full blast. Thus the amount of memory in consumer computers and phones seems to be going down. After decades of not having to care about memory usage, reducing it has very much become a thing.

Relevant questions to this state of things include a) is it really worth it and b) what sort of improvements are even possible. The answers to these depend on the task and data set at hand. Let's examine one such case. It might be a bit contrived, unrepresentative and unfair, but on the other hand it's the one I already had available.

Suppose you have to write a script that opens a text file, parses it as UTF-8, splits it into words according to whitespace, counts the number of times each word appears, and prints the words and counts in decreasing order (most common first).
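That specification can be sketched in a few lines of Python (the post's own script may differ in detail, but the shape is roughly this):

```python
from collections import Counter

def word_counts(path):
    """Read a UTF-8 text file and count whitespace-separated words.

    Returns (word, count) pairs, most common first.
    """
    with open(path, encoding="utf-8") as f:
        counts = Counter(f.read().split())
    return counts.most_common()

def print_counts(path):
    for word, n in word_counts(path):
        print(f"{n}\t{word}")
```

Note that every word here becomes a separate Python string object, plus the dict entries in the Counter; that is the allocation behavior the native version below avoids.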

The Python baseline

This sounds like a job for Python. Indeed, an implementation takes fewer than 30 lines of code. Its memory consumption on a small text file looks like this.

Peak memory consumption is 1.3 MB. At this point you might want to stop reading and make a guess on how much memory a native code version of the same functionality would use.

The native version

A fully native C++ version using Pystd requires 60 lines of code to implement the same thing. If you ignore the boilerplate, the core functionality fits in 20 lines. The steps needed are straightforward:

  1. Mmap the input file to memory.
  2. Validate that it is UTF-8.
  3. Convert the raw data into a UTF-8 view.
  4. Split the view into words lazily.
  5. Compute the result into a hash table whose keys are string views, not strings.

The main advantage of this is that there are no string objects. The only dynamic memory allocations are for the hash table and the final vector used for sorting and printing. All text operations use string views, which are basically just a pointer + size.

In code this looks like the following:

Its memory usage looks like this.

Peak consumption is ~100 kB in this implementation. It uses only 7.7% of the amount of memory required by the Python version.

Isn't this a bit unfair towards Python?

In a way it is. The Python runtime has a hefty startup cost but in return you get a lot of functionality for free. But if you don't need said functionality, things start looking very different.

But we can make this comparison even more unfair towards Python. If you look at the memory consumption graph you'll quite easily see that 70 kB is used by the C++ runtime. It reserves a bunch of memory up front so that it can do stack unwinding and exception handling even when the process is out of memory. It should be possible to build this code without exception support, in which case the total memory usage would be a mere 21 kB. Such a version would yield a 98.4% reduction in memory usage.

Colin Walters: Agent security is just security

Planet GNOME - Mon, 23/03/2026 - 2:51 PM

Suddenly I have been hearing the term Landlock more in (agent) security circles. To me this is a bit weird, because while Landlock is absolutely a useful Linux security tool, it has been a bit obscure, and that's for good reason. It feels to me a lot like how the weird prevalence of the word "delve" became a clear tipoff that an LLM was the one writing, not a human.

Here’s my opinion: Agentic LLM AI security is just security.

We do not need to reinvent any fundamental technologies for this. Most uses of agents one hears about provide the ability to execute arbitrary code as a feature. It’s how OpenCode, Claude Code, Cursor, OpenClaw and many more work.

Especially, let me emphasize, since OpenClaw is popular for some reason right now: you should absolutely not give any LLM tool blanket read and write access to your full user account on your computer. There are many issues with that, but everyone using an LLM needs to understand just how dangerous prompt injection can be. This post is just one of many examples. Even global read access is dangerous, because an attacker could exfiltrate your browser cookies or other files.

Let's go back to Landlock – one prominent place I've seen it mentioned is nono.sh, a project that pitches itself as a new sandbox for agents. It's not the only one, but it indeed leans heavily on Landlock on Linux. Let's dig into this blog post from the author. First of all, I'm glad they are working on agentic security. We both agree: unsandboxed OpenClaw (and other tools!) is a bad idea.

Here’s where we disagree:

With AI agents, the core issue is access without boundaries. We give agents our full filesystem permissions because that’s how Unix works. We give them network access because they need to call APIs. We give them access to our SSH keys, our cloud credentials, our shell history, our browser cookies – not because they need any of that, but because we haven’t built the tooling to say “you can have this, but not that.”

No. We have had usable tooling for "you can have this, but not that" for well over a decade. Docker kicked off a revolution for a reason: docker run <app> is "reasonably completely isolated" from the host system. Since then, of course, there have been many OCI runtime implementations, from podman to apple/container on macOS and more.

If you want to provide the app some credentials, you can just use bind mounts to provide them, like docker|podman|ctr -v ~/.config/somecred.json:/etc/cred.json:ro. Notice the ro there, which makes it read-only. Yes, it's that straightforward to have "this but not that".

Other tools like Flatpak on Linux have leveraged Linux kernel namespacing similar to this to streamline running GUI apps in an isolated way from the host. For a decade.

There's far more sophisticated tooling built on top of similar container runtimes since then, from having them transparently backed by virtual machines to Kubernetes and similar projects, which are all about running containers at scale with lots of built-up security knowledge.

That doesn’t need reinventing. It’s generic workload technology, and agentic AI is just another workload from the perspective of kernel/host level isolation. There absolutely are some new, novel risks and issues of course: but again the core principle here is we don’t need to reinvent anything from the kernel level up.

Security here really needs to start from defaulting to fully isolating (from the host and other apps), and then only allow-listing in what is needed. That’s again how docker run worked from the start. Also on this topic, Flatpak portals are a cool technology for dynamic resource access on a single host system.

So why do I think Landlock is obscure? Basically because most workloads should already be isolated per the above, and Landlock has heavy overlap with the wide variety of Linux kernel security mechanisms already in use in containers.

The primary pitch of Landlock is more for an application to further isolate itself – it's at its best as a complement to coarse-grained isolation techniques like virtualization or containers. One way to think of it is that container runtimes often don't grant the privileges an application would need to spawn its own sub-containers (for kernel attack-surface reasons), but Landlock is absolutely a reasonable thing for an app to use to, e.g., disable networking for a sub-process that doesn't need it.

Of course the challenge is that not every app is easy to run in a container or virtual machine. Some workloads are most convenient with that “ambient access” to all of your data (like an IDE or just a file browser).

But giving that ambient access by default to agentic AI is a terrible idea. So don’t do it: use (OCI) containers and allowlist in what you need.

(There are other things nono is doing here that I find dubious or duplicative; for example, I don't see the need for a new filesystem snapshotting system when we have both git and OCI.)

But I'm not specifically trying to pick on nono – just in the last two weeks I had to point out similar problems in two different projects I saw go by, also pitched for AI security. One used bubblewrap, but with insufficient sandboxing, and the other was also trying to use Landlock.

On the other hand, I do think the credential problem (which nono and others are trying to address in different ways) is somewhat specific to agentic AI, and likely does need new tooling. When deploying a typical containerized app, usually one just provisions a few relatively static credentials. In contrast, developer/user agentic AI is often a lot more freeform and dynamic, and while it's hard to get most apps to leak credentials without completely compromising them, it's much easier with agentic AI and prompt injection. I have thoughts on credentials, and absolutely more work here is needed.

It's great that people want to work on FOSS security, and AI could certainly use more people thinking about security. But I don't think we need "next generation" security here: we should build on top of the "previous generation". I actually use plain separate Unix users for isolation for some things, which works quite well! Running OpenClaw in a secondary user account where one only logs into a select few things (i.e. not your email and online banking) is much more reasonable, although clearly a lot of care is still needed. Landlock is a fine technology, but it is just not there as a replacement for other sandboxing techniques. So just use containers and virtual machines, because these are proven technologies. And if you take one message away from this: absolutely don't wire up an LLM via OpenClaw or a similar tool to your complete digital life with no sandboxing.

Will AI Force Source Code to Evolve - Or Make it Extinct?

Slashdot - Hën, 23/03/2026 - 11:34pd
Will there be an AI-optimized programming language at the expense of human readability? There's now been experiments with minimizing tokens for "LLM efficiency, without any concern for how it would serve human developers." This new article asks if AI will force source code to evolve — or make it extinct, noting that Stephen Cass, the special projects editor at IEEE Spectrum, has even been asking the ultimate question about our future. "Could we get our AIs to go straight from prompt to an intermediate language that could be fed into the interpreter or compiler of our choice? Do we need high-level languages at all in that future?" Cass acknowledged the obvious downsides. ("True, this would turn programs into inscrutable black boxes, but they could still be divided into modular testable units for sanity and quality checks.") But "instead of trying to read or maintain source code, programmers would just tweak their prompts and generate software afresh." This leads to some mind-boggling hypotheticals, like "What's the role of the programmer in a future without source code?" Cass asked the question and announced "an emergency interactive session" in October to discuss whether AI is signaling the end of distinct programming languages as we know them. In that webinar, Cass said he believes programmers in this future would still suggest interfaces, select algorithms, and make other architecture design choices. And obviously the resulting code would need to pass tests, Cass said, and "has to be able to explain what it's doing." But what kind of abstractions could go away? And then "What happens when we really let AIs off the hook on this?" Cass asked — when we "stop bothering" to have them code in high-level languages. (Since, after all, high-level languages "are a tool for human beings.") "What if we let the machines go directly into creating intermediate code?" 
(Cass thinks the machine-language level would be too far down the stack, "because you do want a compile layer too for different architecture....") In this future, the question might become "What if you make fewer mistakes, but they're different mistakes?" Cass said he's keeping an eye out for research papers on designing languages for AI, although he agreed that it's not a "tomorrow" thing — since, after all, we're still digesting "vibe coding" right now. But "I can see this becoming an area of active research." The article also quotes Andrea Griffiths, a senior developer advocate at GitHub and a writer for the newsletter Main Branch, who's seen attempts at "AI-first" languages, but nothing yet with meaningful adoption. So maybe AI coding agents will just make it easier to use our existing languages — especially typed languages with built-in safety advantages. And Scott Hanselman's podcast recently dubbed Chris Lattner's Mojo "a programming language for an AI world," just in the way it's designed to harness the computing power of today's multi-core chips.

Read more of this story at Slashdot.

GrapheneOS Refuses to Comply with Age-Verification Laws

Slashdot - Hën, 23/03/2026 - 8:34pd
An anonymous reader shared this report from Tom's Hardware: GrapheneOS, the privacy-focused Android fork, said in a post on X on Friday that it will not comply with emerging laws requiring operating systems to collect user age data at setup. "GrapheneOS will remain usable by anyone around the world without requiring personal information, identification or an account," the project stated. "If GrapheneOS devices can't be sold in a region due to their regulations, so be it." The statement came after Brazil's Digital ECA (Law 15.211) took effect on March 17, imposing fines of up to R$50 million (roughly $9.5 million) per violation on operating system providers that fail to implement age verification... Motorola and GrapheneOS announced a long-term partnership at MWC on March 2, to bring the hardened OS to future Motorola hardware, ending GrapheneOS's long-standing exclusivity to Google Pixel devices. A GrapheneOS-powered Motorola phone is expected in 2027. If Motorola sells devices with GrapheneOS pre-installed, those devices would need to comply with local regulations in every market where they ship, or Motorola may need to restrict sales geographically. Or, "People can buy the devices without GrapheneOS and install it themselves in any region where that's an issue," according to a post on the GrapheneOS BlueSky account. "Motorola devices with GrapheneOS preinstalled is something we want but it doesn't have to happen right away and doesn't need to happen everywhere for the partnership to be highly successful. Pixels are sold in 33 countries which doesn't include many countries outside North America and Europe." Tom's Hardware also notes that GrapheneOS "isn't the first and won't be the last company to outright refuse compliance with incoming age verification laws."
"The developers of open-source calculator firmware DB48X issued a legal notice recently, stating that their software 'does not, cannot and will not implement age verification,' while MidnightBSD updated its license to ban users in Brazil."

Read more of this story at Slashdot.

Some Microsoft Insiders Fight to Drop Windows 11's Microsoft Account Requirements

Slashdot - Hën, 23/03/2026 - 5:34pd
Yes, Microsoft announced it's fixing common Windows 11 complaints. But what about getting rid of that requirement to have a Microsoft account before installing Windows 11? While Microsoft didn't mention that at all, the senior editor at the blog Windows Central reports there's "a number of people" internally pushing at Microsoft to relax that requirement: Microsoft Vice President and overall developer legend Scott Hanselman has posted on X in response to someone asking him about possibly relaxing the Microsoft account requirements, saying "Ya I hate that. Working on it...." [Hanselman made that remark Friday, to his 328,200 followers.] The blog notes "It would be very easy for Microsoft to remove this requirement from a technical perspective, it's just whether or not the company can agree to make the change that needs to be decided." Elsewhere on X someone told Hanselman they wanted to see Windows "cut out the borderline malware tactics we've seen in recent years to push things like Edge, Bing, ads into the start menu, etc." Hanselman's reply? "Yes a calmer and more chill OS with fewer upsells is a goal." Q: When will we see first changes? for now it's just words... Hanselman: This month and every month this year.

Read more of this story at Slashdot.

Walmart Announces Digital Price Labels for Every Store in the U.S. By the End of 2026

Slashdot - Hën, 23/03/2026 - 2:34pd
Walmart is "rolling out digital price tags to replace the old paper ones," reports CNBC, planning to implement them in all U.S. stores by the end of the year: Amanda Bailey, a team leader in electronics who works at a Walmart in West Chester, Ohio, estimates that the digital shelf labels — known as DSLs — have cut the time she used to spend on pricing duties by 75%, time that has freed her up to help customers. She also said the DSLs are a game-changer because Walmart's Spark delivery drivers looking for an item will see a flashing DSL so they can more easily find the product... Sean Turner, chief technology officer of Swiftly, a retail technology and media platform serving the grocery industry, said that while it makes sense that people are raising questions about dynamic pricing, the real issue is store-level efficiency. "Digital shelf labels solve some very real operational headaches. They cut down on manual price changes, reduce checkout discrepancies, and make it easier to keep in-store and digital promotions aligned," Turner said. All of that can mean fewer surprises at the register for shoppers and better-tailored promotions. "For consumers, the biggest benefit is accuracy and consistency," Benedict said. "Shoppers want to know the price they see is the price they pay. Digital labels can also make it easier for stores to mark down perishable items in real time, which can lower food waste and create savings opportunities." A Walmart spokeswoman promised CNBC that "the price you see is the same for everyone in any given store." But the article also notes that several U.S. states "are looking to ban dynamic pricing. Pennsylvania became one of the latest states to introduce a bill outlawing the practice, following New York's Algorithmic Pricing Disclosure Act, which became law in November." And at the federal level, U.S. 
Senator Ben Ray Luján recently introduced the "Stop Price Gouging in Grocery Stores" act, which would ban digital labels in any grocery store over 10,000 square feet, while Congresswoman Val Hoyle is sponsoring similar legislation in the House. "There needs to be laws and enforcement to protect consumers," Hoyle tells CNBC, "and until then, I'd like to see them banned outright." CNBC adds that "While there is no reported use of digital shelf labeling being tied to surge pricing yet," in Hoyle's view "it's only a matter of time."

Read more of this story at Slashdot.

Trapped! Inside a Self-Driving Car During an Anti-Robot Attack

Slashdot - Dje, 22/03/2026 - 11:55md
A man crossing the street one San Francisco night spotted a self-driving car — and decided to confront its passenger, 37-year-old tech worker Doug Fulop. The New York Times reports the man yelled that "he wanted to kill Fulop and the other two passengers for giving money to a robot." A taxi driver would have simply driven away. But Fulop's vehicle had no driver — it was a self-driving Waymo... Self-driving cars are designed to stop moving if a person is nearby. People can take advantage of that function to harass and threaten their passengers.... It was unsettling to be trapped inside a Waymo during an attack, Fulop said. "If he had kept hammering on one window instead of alternating, I'm sure he would have eventually broken through," he said. The attacker did not appear to be on drugs or otherwise impaired, but seemed to be overtaken by extreme anger at the self-driving car, Fulop said. It did not seem safe to get out and run, he added, since the man was trying to open the locked doors and said he wanted to kill the passengers. They called 911 and Waymo's support line, Fulop said. Waymo told them that it would not manually direct the car away if someone was standing nearby, and that the passengers would be OK with the doors locked. The car's software does not allow riders to jump into the driver's seat and take over during an incident. The attack lasted around six minutes. By then, bystanders had begun cheering on the man, Fulop said. That distracted the man, who moved far enough away from the car that it could finally drive away... Fulop said he had stopped using Waymo for a time after the January attack and would avoid the service at night unless the company changed its policy of not intervening when a hostile person threatened riders. "As passengers, we deserve more safety than that if someone is trying to attack us," he said. "This can't be the policy to be trapped there." 
The article recalls other incidents — including a 2024 video showing three women screaming as their autonomous taxi is spray-painted by vandals. And technology author/speaker Anders Sorman-Nilsson says in Los Angeles five men on e-bikes surrounded his Waymo and forced it to stop. The author felt safe inside the vehicle, according to the Times, which adds "He felt reassured knowing that Waymo's many exterior cameras were recording the men. After around five minutes, he said, they gave up and rode away."

Read more of this story at Slashdot.

Elon Musk Announces $20B 'Terafab' Chip Plant in Texas To Supply His Companies

Slashdot - Dje, 22/03/2026 - 10:55md
"Billionaire Elon Musk has announced plans to build a $20 billion chip plant in Austin, Texas" reports a local news station: Musk announced on Saturday night during a livestream on his social media platform X that the plant, called "Terafab," will be built near Tesla's campus and gigafactory in eastern Travis County. The long-anticipated project is a joint venture between Musk-owned properties Tesla, SpaceX and xAI... The Terafab plant is expected to begin production in 2027. Musk "has said the semiconductor industry is moving too slow to keep up with the supply of chips he expects to need," writes Bloomberg — quoting Musk as saying "We either build the Terafab or we don't have the chips, and we need the chips, so we build the Terafab." Musk detailed some specific plans, including producing chips that can support 100 to 200 gigawatts a year of computing power on Earth, and chips that can support a terawatt in space, but gave no timelines for the facility or its output... The facility is expected to make two types of chips, one of which will be optimized for edge and inference, primarily for his vehicle, robotaxi and Optimus humanoid robots. The other will be a high-power chip, designed for space that could be used by SpaceX and xAI... Musk said he expects xAI to use the vast majority of the chips. During the presentation, Musk also unveiled a speculative rendering of a future "mini" AI data center satellite, one piece of a much larger satellite system that he wants SpaceX to build to do complex computing in space. In January, SpaceX requested a license from the Federal Communications Commission to launch one million data center satellites into orbit around Earth. Musk said that the mini satellite he revealed would have the capacity for 100 kilowatts of power. "We expect future satellites to probably go to the megawatt range," Musk said. 
Raising money to build and launch AI data centers in space is one of the driving forces behind SpaceX's planned IPO later this year. SpaceX is expected to raise as much as $50 billion in a record-setting IPO this summer which could value it at more than $1.75 trillion, Bloomberg News reported earlier.

Read more of this story at Slashdot.

7.0-rc5: mainline

Kernel Linux - Dje, 22/03/2026 - 10:42md
Version:7.0-rc5 (mainline) Released:2026-03-22 Source:linux-7.0-rc5.tar.gz Patch:full (incremental)

Tech Leaders Support California Bill to Stop 'Dominant Platforms' From Blocking Competition

Slashdot - Dje, 22/03/2026 - 9:34md
A new bill proposed in California "goes after big tech companies" writes Semafor. Supported by Y Combinator, Cory Doctorow, and the nonprofit advocacy group Fight for the Future, it's called the "BASED" act — an acronym which stands for "Blocking Anticompetitive Self-preferencing by Entrenched Dominant platforms." As announced by San Francisco state senator Scott Wiener, the bill "will restore competition to the digital marketplace by prohibiting any digital platform with a market capitalization greater than $1 trillion and serving 100 million or more monthly users in the U.S., from favoring their own products and services on the platforms they operate." More from Scott Wiener's announcement: For years, giant digital platforms like Apple, Amazon, Google, and Meta have used their immense power to promote their own products and services while stifling competitors — a practice also known as self-preferencing. The result has been higher prices, diminished service, fewer options for consumers, and less innovation across the technology ecosystem. Self-preferencing also locks startups and mid-sized companies out of the online marketplace unless they play by rules set by their competitors. As a new generation of AI-powered startups seeks to enter the marketplace, their success — and public access to the innovations they produce — depends on their ability to compete on an even playing field. "Anticompetitive behavior is everywhere on the internet," said Senator Wiener, "from rigged search results, to manipulative nudges boosting the 'house' product, to anti-discount policies that raise prices, to the dreaded green bubble that 'breaks' the group chat. When the world's largest digital platforms rig the game to favor their own products and services, we all lose. By prohibiting these anticompetitive practices, the BASED Act will protect competition online, empower consumers and startups, and promote innovations to improve all our lives." 
The announcement includes a quote from Teri Olle, VP of the nonprofit Economic Security California Action, saying the act would "safeguard merit-based market competition. This legislation stands for a simple principle: owning the stadium doesn't mean that you get to rig the game." Conduct prohibited by the proposed bill includes manipulating the order of search results to favor a provider's products or services, irrespective of a merit-based process, and using non-public data generated by third-party sellers — including sales volumes, pricing, and customer behavior — to develop competing products that are subsequently boosted above the third-party sellers' product... And the announcement also notes that "under the terms of the bill, providers could not prevent consumers from obtaining a portable copy of their own data or restrict voluntary data sharing (by consumers) with third parties." Read on for reactions from DuckDuckGo, Proton, Yelp, Y Combinator, and Cory Doctorow.

Read more of this story at Slashdot.

Why Apple Temporarily Blocked Popular Vibe Coding Apps

Slashdot - Dje, 22/03/2026 - 8:19md
An anonymous reader shared this report from the tech-news blog Neowin: Apple appears to have temporarily prevented apps, including Replit and Vibecode, from pushing new updates. Apple seems bothered by how apps like Replit present vibe-coded apps in a web view within the original app. This process virtually allows the app to become something else. And the new app isn't distributed via the App Store, but it still runs on the user's device... [S]uch apps would also bypass the App Store Review process that ensures that apps are safe to use and meet Apple's design and performance standards... According to the publication (via MacRumors), Apple was close to approving pending updates for such apps if they changed how they work. For instance, Replit would get the green light if its developers configure the app to open vibe-coded apps in an external browser rather than the in-app web view. Vibecode is also close to being approved if it removes features, such as the ability to develop apps specifically for the App Store.

Read more of this story at Slashdot.

William Shatner Celebrates 95th Birthday, Smokes Cigar, Revisits 'Rocket Man' and Tests X Money

Slashdot - Dje, 22/03/2026 - 6:55md
It was 60 years ago when William Shatner — born in 1931 — portrayed Captain Kirk in the TV series Star Trek. Shatner turns 95 today — and celebrated by posting a picture of himself smoking a cigar. "At 95, I'm still smokin'!" Shatner joked, adding that in life he'd learned two things. "Never waste a good cigar. Never trust anyone who says you should 'act your age.'" For more celebrations, Paramount's free/ad-supported streaming platform Pluto TV announced a "Trek TV takeover birthday celebration" that will run through April 3rd, according to TrekMovie.com, with a marathon of Star Trek movies and TV shows — and even that time he was roasted on Comedy Central. ("Free‽ My favorite price!" Shatner quipped on X.com.) Shatner still remains a popular celebrity, even travelling to space five years ago on a Blue Origin flight past the Kármán line. Since then he's led a cruise to Antarctica — and even performed an alternate take of Captain Kirk's final scene on the Jimmy Fallon show. And this week Shatner (along with hundreds of thousands of attendees) appeared at Orlando's MegaCon — and shared stories about his life with Orlando Weekly: Shatner: Last month, I was on board a cruise ship, and they said the only thing I had to do over the next three days, "before we let you go home," is sing "Rocket Man." So I thought, "I'm not going to sing 'Rocket Man' the same way that what's-his-name did. ... So, I looked at the song very carefully to see if I could find what actors call a throughline. What is the character singing? What is he singing about? And so I look through all of these weird lyrics, and all of a sudden, the word sticks out to me: "alone." So I say to the band members, "OK, let's make this song about being alone in space." And I work on it with the band and the musicians, and again on a Saturday night, I perform the number, and 4,000 people stand up and applaud "Rocket Man." And they won't let me off the stage, again and again. Four times, I get a standing ovation, wild. 
And that's the progression for me, of science fiction for me, as exemplified by this song. The song went from superficial to something of depth and meaning... It touched people enough for them to stand up and applaud, and I realized that is the story of science fiction... Science fiction with all its great technology has evolved into great storytelling that reaches people in a manner that is very difficult for other types of drama to do. Shatner answered questions from Slashdot readers in 2002 ("My life is my statement...") and again in 2011. ("I used to try to assemble computers way back when and they came out looking like a skateboard...") And judging by his X.com posts, Shatner is now involved in early testing of the site's upcoming digital payment system X Money.

Read more of this story at Slashdot.

A CNN Producer Explores the 'Magic AI' Workout Mirror

Slashdot - Dje, 22/03/2026 - 5:34md
CNN looks at "the Magic AI fitness mirror," a new product "watching you, and giving you feedback automatically," while sometimes playing footage of a recorded personal trainer. Long-time Slashdot reader destinyland describes CNN's video report: CNN says the device "tracks form, counts reps, and corrects technique in real-time — and it doesn't go easy on you." (Although the company's CEO/cofounder, Varun Bhanot, says "we're not trying to completely replace personal trainers. What we are providing is a more accessible alternative.") CNN calls the company "more a computer-vision firm than a fitness company, building the tech for this mirror from the ground up." CEO Bhanot tells CNN he'd hired a personal trainer in his 20s to get fit, but "Going through that journey, I realized how old-fashioned personal training was. Dumbbells were still dumb. There was no data or augmentation for the whole process!" "The AI fitness and wellness market is already huge — and it's growing," CNN adds. "In 2025 the global market was worth $11 billion, according to [market research firm] Insightace Analytic. By 2035, this market is expected to reach just shy of $58 billion. And Magic AI is far from alone. Form, Total, Speediance, and Echelon, to name a few, are all brands vying for a slice of this market." Even the most purely physical of activities — exercising your body — now gets "enhanced" with AI accessories...

Read more of this story at Slashdot.

Google Search Is Now Sometimes Using AI To Replace Headlines

Slashdot - Dje, 22/03/2026 - 4:34md
"Google is beginning to replace news headlines in its search results with ones that are AI-generated," reports the Verge: After doing something similar in its Google Discover news feed, it's starting to mess with headlines in the traditional "10 blue links," too. We've found multiple examples where Google replaced headlines we wrote with ones we did not, sometimes changing their meaning in the process. For example, Google reduced our headline "I used the 'cheat on everything' AI tool and it didn't help me cheat on anything" to just five words: "'Cheat on everything' AI tool." It almost sounds like we're endorsing a product we do not recommend at all. What we are seeing is a "small" and "narrow" experiment, one that's not yet approved for a fuller launch, Google spokespeople Jennifer Kutz, Mallory De Leon, and Ned Adriance tell The Verge. They would not say how "small" that experiment actually is. Over the past few months, multiple Verge staffers have seen examples of headlines that we never wrote appear in Google Search results — headlines that do not follow our editorial style, and without any indication that Google replaced the words we chose. And Google says it's tweaking how other websites show up in search, too, not just news. The good news, for now, is that these changed headlines seem to be few and far between, and they're not yet the kind of tripe we've seen in Google Discover. (For example, Google Discover told me this week that the PlayStation Portal was getting a 1080p streaming mode, when it actually got a higher bitrate mode instead.) Compared to that and other lying Google Discover headlines like "US reverses foreign drone ban" — on a story reporting the opposite — the nonsense headlines we're seeing in Google Search are downright tame. The article points out that Google "originally told us its AI headlines in Google Discover were an experiment too. A month later, it told us those AI headlines are now a feature..." 
"Google confirmed that the test uses generative AI, but claimed that 'if we were to actually launch something based on this experiment, it would not be using a generative model and we would not be creating headlines with gen AI'..."

Read more of this story at Slashdot.

Amazon Plans to Test Four-Legged Robots on Wheels for Deliveries

Slashdot - Dje, 22/03/2026 - 3:34md
CNBC reports: Amazon has acquired Rivr, a Swiss robotics company developing machines for "doorstep delivery," the company confirmed Thursday... It announced the deal in a notice sent to third-party delivery contractors... "We believe this technology, when working alongside your [delivery associates], has the potential to further improve safety outcomes and the overall customer experience, particularly in the last steps of the delivery process...." In its notice to delivery service partner owners, Amazon said Rivr's technology, which includes a four-legged robot on wheels, will allow it to research and test how the devices can be integrated into delivery operations, including "helping [delivery associates] carry packages from delivery vehicles to customer doorsteps."

Read more of this story at Slashdot.

US Cable TV Industry Faces 'Dramatic Collapse' as Local Operators Shut Down - or Become ISPs

Slashdot - Dje, 22/03/2026 - 12:34md
America's cable TV industry "is undergoing its most dramatic collapse in history," reports Cord Cutters News, "with operators large and small waving the white flag on traditional TV service and pointing their customers toward streaming platforms instead." Just in 2025 Comcast lost 1.25 million pay-TV subscribers (ending the year with just 11.3 million), while Charter Spectrum also lost hundreds of thousands of customers each quarter. But "for smaller regional operators, who lack the scale and diversified revenue streams of giants like Comcast, those kinds of losses are simply unsurvivable," they write. And "the companies that once delivered hundreds of channels through coaxial cables are now either shutting down entirely or reinventing themselves as internet providers." Pay-TV subscriptions have plummeted from nearly 90% of U.S. households in the mid-2010s to roughly half by the end of 2025, resulting in billions in lost revenue and forcing many smaller operators to conclude that continuing linear TV services is no longer viable... [This year over 50 U.S. cable TV companies — primarily smaller and midsize providers — are "expected to cease operations entirely or shut down their television services," Cord Cutters News reported earlier.] YouTube TV's pricing is so competitive that the platform is projected to have close to 12.6 million subscribers by the end of 2026, positioning it to become the largest paid TV distributor in the United States. Exclusive content deals, such as YouTube TV's acquisition of NFL Sunday Ticket rights, have further eroded the value proposition of traditional cable at every level of the market... As older cable subscribers age out of the market, there is no new generation of customers waiting to replace them... [Cable TV] operators like WOW! are betting that their physical infrastructure — now increasingly upgraded to fiber — is more valuable as an internet delivery system than as a cable TV platform. [WOW! 
serves customers across Michigan, Ohio, Illinois, and Alabama — but is "phasing out its proprietary streaming live TV service and directing all customers toward YouTube TV," the article notes.] Industry observers see this as part of a broader trend: operators shedding unprofitable video segments to focus on broadband, where returns and network investments are prioritized. By the end of 2026, non-pay-TV households are expected to surge to 80.7 million, outnumbering traditional pay-TV subscribers at 54.3 million — a milestone that would have seemed unthinkable just a decade ago. For the cable companies still standing, the math is now inescapable: the era of the cable bundle is ending, and the only real question left is how gracefully each operator manages its exit.

Read more of this story at Slashdot.

Meteor Rumbles Over Houston, as Six-Pound Fragment Crashes Into a Texas Home

Slashdot - Dje, 22/03/2026 - 8:34pd
"It is the talk of the town today — the loud boom, the flash of light in the sky experienced by a lot of folks across the Houston area this afternoon," says a local Texas newscaster. "And then there was this — a home in northwest Harris county hit by something that crashed through their roof." Travelling at very high speed, the six-pound meteorite crashed through their roof and through their attic, crashing again through the ceiling of the floor below. It then bounced off the floor, hit the ceiling again — and then fell onto the bed. CBS News reports: NASA said in a social media post that the meteor became visible at 49 miles above Stagecoach, northwest of Houston, at 4:40 p.m. local time. The meteor moved southeast at 35,000 miles per hour, breaking apart 29 miles above Bammel, just west of Cypress Station, NASA said. "The fragmentation of the meteor — which weighed about a ton with a diameter of 3 feet — created a pressure wave that caused booms heard by some in the area," NASA said in the post. Across the Houston area, residents described hearing a low, rumbling sound that many compared to thunder, even though the skies were clear, according to CBS affiliate KHOU. Earlier this week, an asteroid weighing about 7 tons and traveling at 45,000 mph traveled over multiple states. And last June, a bright meteor was seen across the southeastern U.S. and exploded over Georgia, creating similar booms heard by residents in the area.

Read more of this story at Slashdot.

Matthew Garrett: SSH certificates and git signing

Planet GNOME - Sht, 21/03/2026 - 8:38md

When you’re looking at source code it can be helpful to have some evidence indicating who wrote it. Author tags give a surface level indication, but it turns out you can just lie and if someone isn’t paying attention when merging stuff there’s certainly a risk that a commit could be merged with an author field that doesn’t represent reality. Account compromise can make this even worse - a PR being opened by a compromised user is going to be hard to distinguish from the authentic user. In a world where supply chain security is an increasing concern, it’s easy to understand why people would want more evidence that code was actually written by the person it’s attributed to.

git has support for cryptographically signing commits and tags. Because git is about choice even if Linux isn’t, you can do this signing with OpenPGP keys, X.509 certificates, or SSH keys. You’re probably going to be unsurprised about my feelings around OpenPGP and the web of trust, and X.509 certificates are an absolute nightmare. That leaves SSH keys, but bare cryptographic keys aren’t terribly helpful in isolation - you need some way to make a determination about which keys you trust. If you’re using something like GitHub you can extract that information from the set of keys associated with a user account, but that means that a compromised GitHub account is now also a way to alter the set of trusted keys - and when was the last time you audited your keys, and how certain are you that every trusted key there is still 100% under your control? Surely there’s a better way.

SSH Certificates

And, thankfully, there is. OpenSSH supports certificates: an SSH public key that's been signed by some trusted party, so you can assert that it's trustworthy in some form. SSH certificates also contain metadata in the form of Principals, a list of identities that the trusted party included in the certificate. These might simply be usernames, but they might also provide information about group membership. There's also, unsurprisingly, native support in SSH for forwarding them (using the agent forwarding protocol), so you can keep your keys on your local system, ssh into your actual dev system, and have access to them without any additional complexity.
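To make the moving parts concrete, here is a minimal sketch of issuing and inspecting such a certificate with stock `ssh-keygen`. The filenames, identity, and principals are illustrative, and `-N ""` produces an unencrypted key, which is fine for a demo but not for a real CA:

```shell
# Generate a CA key and a user key (illustrative filenames; -N "" means
# no passphrase, acceptable only for a throwaway demo)
ssh-keygen -q -t ed25519 -N "" -f ca_key -C "example CA"
ssh-keygen -q -t ed25519 -N "" -f id_ed25519 -C "user key"

# Sign the user key with the CA: -I sets the key identity, -n the
# comma-separated principal list, -V the validity window. This writes
# id_ed25519-cert.pub alongside the public key.
ssh-keygen -q -s ca_key -I "mjg59" -n "mjg59,developers" -V +52w id_ed25519.pub

# Inspect the certificate to see its principals and validity period
ssh-keygen -L -f id_ed25519-cert.pub
```

The `-L` output lists the signing CA, the principals, and the validity dates, which is exactly the metadata verification relies on later.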

And, wonderfully, you can use them in git! Let’s find out how.

Local config

There are two main parameters you need to set. First,

git config set gpg.format ssh

because unfortunately for historical reasons all the git signing config is under the gpg namespace even if you’re not using OpenPGP. Yes, this makes me sad. But you’re also going to need something else. Either user.signingkey needs to be set to the path of your certificate, or you need to set gpg.ssh.defaultKeyCommand to a command that will talk to an SSH agent and find the certificate for you (this can be helpful if it’s stored on a smartcard or something rather than on disk). Thankfully for you, I’ve written one. It will talk to an SSH agent (either whatever’s pointed at by the SSH_AUTH_SOCK environment variable or with the -agent argument), find a certificate signed with the key provided with the -ca argument, and then pass that back to git. Now you can simply pass -S to git commit and various other commands, and you’ll have a signature.
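Put together, the local setup might look like the following sketch. The `git config set` syntax needs a recent git (older versions use `git config --global <key> <value>`), the paths are illustrative, and the helper command name here is a placeholder rather than the author's actual tool:

```shell
# Tell git to sign with SSH rather than OpenPGP
git config set --global gpg.format ssh

# Either point user.signingkey at the certificate on disk...
git config set --global user.signingkey ~/.ssh/id_ed25519-cert.pub

# ...or have a helper find the certificate in a running agent
# (hypothetical helper name; substitute the author's tool)
# git config set --global gpg.ssh.defaultKeyCommand "find-ssh-cert -ca ~/.ssh/ca.pub"

# Then sign an individual commit with -S...
git commit -S -m "signed commit"

# ...or turn signing on for every commit by default
git config set --global commit.gpgsign true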

Validating signatures

This is a bit more annoying. Using native git tooling ends up calling out to ssh-keygen2, which validates signatures against a file in a format that looks somewhat like authorized-keys. This lets you add something like:

* cert-authority ssh-rsa AAAA…

which will match all principals (the wildcard) and succeed if the signature is made with a certificate that's signed by the key following cert-authority. I recommend you don't read the code that does this in git because I made that mistake myself, but it does work. Unfortunately it doesn't provide a lot of granularity around things like "Does the certificate need to be valid at this specific time" and "Should the user only be able to modify specific files" and that kind of thing, but also if you're using GitHub or GitLab you wouldn't need to do this at all because they'll just do this magically and put a "verified" tag against anything with a valid signature, right?
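As a sketch, wiring that file up looks something like this (the file location is illustrative; git just needs `gpg.ssh.allowedSignersFile` to point at it):

```shell
# Trust any principal (*) whose certificate was issued by the CA key
# (the key text is truncated here; paste the real CA public key)
mkdir -p ~/.config/git
cat > ~/.config/git/allowed_signers <<'EOF'
* cert-authority ssh-ed25519 AAAA…
EOF

# Tell git where the allowed signers live
git config set --global gpg.ssh.allowedSignersFile ~/.config/git/allowed_signers

# After that, the normal verification tooling works
git verify-commit HEAD
git log --show-signature -1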

Haha. No.

Unfortunately while both GitHub and GitLab support using SSH certificates for authentication (so a user can’t push to a repo unless they have a certificate signed by the configured CA), there’s currently no way to say “Trust all commits with an SSH certificate signed by this CA”. I am unclear on why. So, I wrote my own. It takes a range of commits, and verifies that each one is signed with either a certificate signed by the key in CA_PUB_KEY or (optionally) an OpenPGP key provided in ALLOWED_PGP_KEYS. Why OpenPGP? Because even if you sign all of your own commits with an SSH certificate, anyone using the API or web interface will end up with their commits signed by an OpenPGP key, and if you want to have those commits validate you’ll need to handle that.

In any case, this should be easy enough to integrate into whatever CI pipeline you have. This is currently very much a proof of concept and I wouldn’t recommend deploying it anywhere, but I am interested in merging support for additional policy around things like expiry dates or group membership.
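The author's verifier does more, but the core of such a CI step can be sketched with stock git alone. The commit range and the allowed_signers location are illustrative, and this checks only "does each signature validate", none of the extra policy discussed above:

```shell
# Fail the pipeline if any commit in the pushed range has a missing
# or invalid signature (range and file path are illustrative)
git config gpg.ssh.allowedSignersFile "$PWD/allowed_signers"

for commit in $(git rev-list origin/main..HEAD); do
    if ! git verify-commit "$commit" >/dev/null 2>&1; then
        echo "commit $commit failed signature verification" >&2
        exit 1
    fi
done
echo "all commits verified"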

Doing it in hardware

Of course, certificates don't buy you any additional security if an attacker is able to steal your private key material - they can steal the certificate at the same time. This can be avoided on almost all modern hardware by storing the private key in a separate cryptographic coprocessor - a Trusted Platform Module on PCs, or the Secure Enclave on Macs. If you're on a Mac then Secretive has been around for some time, but things are a little harder on Windows and Linux - there are various things you can do with PKCS#11 but you'll hate yourself even more than you'll hate me for suggesting it in the first place, and there's ssh-tpm-agent, except it's quite tied to Linux.

So, obviously, I wrote my own. This makes use of the go-attestation library my team at Google wrote, and is able to generate TPM-backed keys and export them over the SSH agent protocol. It’s also able to proxy requests back to an existing agent, so you can just have it take care of your TPM-backed keys and continue using your existing agent for everything else. In theory it should also work on Windows3 but this is all in preparation for a talk I only found out I was giving about two weeks beforehand, so I haven’t actually had time to test anything other than that it builds.

And, delightfully, because the agent protocol doesn’t care about where the keys are actually stored, this still works just fine with forwarding - you can ssh into a remote system and sign something using a private key that’s stored in your local TPM or Secure Enclave. Remote use can be as transparent as local use.
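In practice that transparency looks something like the following sketch ("devbox" is an illustrative host name):

```shell
# On the workstation: forward the local agent to the dev box
ssh -A devbox

# On devbox: the forwarded agent is reachable through SSH_AUTH_SOCK,
# so the locally held keys and certificates show up...
ssh-add -l

# ...and signing works with no private key material on devbox at all
git commit -S -m "signed via the forwarded agent"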

Wait, attestation?

Ah yes you may be wondering why I’m using go-attestation and why the term “attestation” is in my agent’s name. It’s because when I’m generating the key I’m also generating all the artifacts required to prove that the key was generated on a particular TPM. I haven’t actually implemented the other end of that yet, but if implemented this would allow you to verify that a key was generated in hardware before you issue it with an SSH certificate - and in an age of agentic bots accidentally exfiltrating whatever they find on disk, that gives you a lot more confidence that a commit was signed on hardware you own.

Conclusion

Using SSH certificates for git commit signing is great - the tooling is a bit rough but otherwise they’re basically better than every other alternative, and also if you already have infrastructure for issuing SSH certificates then you can just reuse it4 and everyone wins.

  1. Did you know you can just download people’s SSH pubkeys from GitHub at https://github.com/<username>.keys? Now you do ↩︎

  2. Yes it is somewhat confusing that the keygen command does things other than generate keys ↩︎

  3. This is more difficult than it sounds ↩︎

  4. And if you don’t, by implementing this you now have infrastructure for issuing SSH certificates and can use that for SSH authentication as well. ↩︎
