Feed aggregator

Sam Thursfield: Status update, 23rd April 2026

Planet GNOME - Thu, 23/04/2026 - 10:48 PM

Hello there,

You thought I’d given up on “status update” blog posts, didn’t you? I haven’t, despite my better judgement; this one is just even later than usual.

Recently I’ve been using my rather obscure platform as a blogger to theorize about AI and the future of the tech industry, mixed with the occasional life update, couched in vague terms, perhaps due to the increasing number of weirdos in the world who think doxxing and sending death threats to open source contributors is a meaningful use of their time.

In fact I do have some theories about how George Orwell (in “Why I Write”) and Italo Calvino (in “If On a Winter’s Night a Traveller”) made some good guesses from the 20th century about how easy access to LLMs would affect communication, politics and art here in the 21st. But I’ll leave that for another time.

It’s also 8 years since I moved to this new country where I live now, driving off the boat in a rusty transit van to enjoy a series of unexpected and amazing opportunities. Next week I’m going to mark the occasion with a five day bike ride through the mountains of Asturias, something I’ve been dreaming of doing for several years.

The original idea of writing a monthly post was to keep tabs on various open source software projects I sometimes manage to contribute to, and perhaps even to motivate me to do more such volunteering. Well, that part didn’t work: house renovations and an unexpectedly successful gig playing synth and trombone took over all my free time. But after many years of working on corporate consultancy and doing a little open source in the background, I’m trying to make a space at work to contribute in the open again.

I could tell the whole story here of how Codethink became “the build system people”. Maybe I will, actually. It all started with BuildStream. In fact, that’s not even true: it all started in 2011 when some colleagues working with MeeGo and Yocto thought, “This is horrible, isn’t it?”

They set out to create something better, and produced Baserock, which unfortunately turned out even worse. But it did have some good ideas. The concept of “cache keys” to identify build inputs and content-addressed storage to hold build outputs began there, as did the idea of opening a “workspace” to make drive-by changes in build inputs within a large project.

BuildStream took this core idea, extended it to support arbitrary source kinds and element kinds defined by plugins, and added a shiny interface on top. It used OSTree to store and distribute build artifacts initially, later migrating to the Google REAPI with the goal of supporting Enterprise(TM) infrastructure. You can even use it alongside Bazel, if you like having three thousand commandline options at your disposal.

Unfortunately it was 2016, so we wrote the whole thing in Python. (In our defence, the Rust programming language had only recently hit 1.0 and crates.io was still a ghost town, and we’d probably still be rewriting the ruamel.yaml package in Rust if we had taken that road.) But the company did make some great decisions, particularly making it a condition of success for the BuildStream project that it could unify the 5 different build+integration systems that the GNOME release team was maintaining. And that success meant not making a prototype, but the release team actually using BuildStream to make releases. Tristan even ended up joining the GNOME release team for a while. We discussed it all at the 2017 Manchester GUADEC, coincidentally. It was a great time. (Aside from the 6 months leading up to the conference.)

At this point, the Freedesktop SDK already existed, with the same rather terrible name that it has today, and was already the base runtime for this new app container tool that was named… xdg-app. (At least that eventually gained a better name). However, if you can remember 8 years ago, it had a very different form than today. Now, my memory of what happened next is especially hazy at this point, because like I told you in the beginning, I was on a boat with my transit van heading towards a new life in Spain. All I have to go on 8 years later is the Git history, but somehow the Freedesktop SDK grew a 3-stage compiler bootstrap, over 600 reusable BuildStream elements, its own Gitlab namespace, and even some controversial stickers. As a parting gift I apparently added support for building VMs, the idea being that we’d reinstate the old GNOME Continuous CI system that had unfortunately died of neglect several years earlier. This idea got somewhat out of hand, let’s say.

It took me a while to realize this, but today Freedesktop SDK is effectively the BuildStream reference distribution: what Poky is to BitBake in the Yocto project, Freedesktop SDK is to BuildStream. And this is a pretty important insight. It explains the problem you may have experienced with the BuildStream documentation: you want to build some Linux package, so you read through the manual right to the end, and then you still have no fucking idea how to integrate that package.

This isn’t a failure on the part of the authors; rather, the issue is that your princess is in another castle. Every BuildStream project I’ve ever worked on has junctioned freedesktop-sdk.git and re-used the elements, plugins, aliases, configurations and conventions defined there, all of which are rigorously undocumented. The Freedesktop SDK Guide, for reasons that I won’t go into, doesn’t venture much further than reminding you how to call Make targets.

And this is something of a point of inflection. The BuildStream + Freedesktop SDK ecosystem has clearly not displaced Yocto, nor for that matter Linux Mint. But, like many of my favourite musicians, it has been quietly thriving in obscurity. People I don’t know are using it to do things that I don’t completely understand. I’ve seen it in comparison articles, and even job adverts. ChatGPT can generate credible BuildStream elements about as well as it can generate Dockerfiles (i.e. not very well, but it indicates a certain level of ubiquity). There have been conferences, drama, mistakes, neglect. It’s been through an 8-person corporate team hyper-optimizing the code, it’s been through a mini dark age where volunteers thanklessly kept the lights on almost single-handedly, and it’s even survived its transition to the Apache Foundation.

Through all of this, the secret to its success is probably that it’s just a really nice tool to work with. As much as you can enjoy software integration, I enjoy using BuildStream to do it; things rarely break, when they do it’s rarely difficult to fix them, and most importantly the UI is really colourful! I’m now using it to build embedded system images for a product named CTRL, which you can think of as… a Linux distribution. There are some technical details to this which I’m working to improve, but I won’t bore you with them here.

I also won’t bore you with the topic of community governance this month, but that’s what’s currently on my mind. If you’ve been part of the GNOME Foundation for a few years, you’ll know this is something that’s usually boring and occasionally becomes of almost life-or-death importance. The “let’s just be really sound” model works great, until one day when you least expect it, it suddenly really doesn’t. There is no perfect defence against this, and in open source communities it’s our diversity that brings the most resilience. When GNOME loses, KDE gains, and that way at least we still don’t have to use Windows. Indeed, this is one argument for investing in BuildStream even if it remains forever something of a minority sport. I guess I just need to remember that when you have to start thinking hard about governance, that’s a sign of success.

Sebastian Wick: How Hard Is It To Open a File?

Planet GNOME - Thu, 23/04/2026 - 10:41 PM

It’s a question I had to ask myself multiple times over the last few months. Depending on the context the answer can be:

  • very simple, just call the standard library function
  • extremely hard, don’t trust anything

If you are an app developer, you’re lucky and it’s almost always the first answer. If you develop something with a security boundary which involves files in any way, the correct answer is very likely the second one.

Opening a File, the Hard Way

Like so often, the details depend on the specifics, but in the worst-case scenario there is a process on each side of the security boundary, and both operate on a filesystem tree that is shared between them.

Let’s say that the process with more privileges operates on a file on behalf of the process with less privileges. You might want to restrict this to files in a certain directory, to prevent the less privileged process from, for example, stealing your SSH key, and thus take a subpath that is relative to that directory.

The first obvious problem is that the subpath can refer to files outside of the directory if it contains “..” components. If the privileged process gets called with a subpath of ../.ssh/id_ed25519, you are in trouble. Easy fix: normalize the path, and if it ever goes outside of the directory, fail.

The next issue is that every component of the path might be a symlink. If the privileged process gets called with a subpath of link, and link is a symlink to ../.ssh/id_ed25519, you might be in trouble. If the process with less privileges cannot create files in that part of the tree, it cannot create a malicious symlink, and everything is fine. In all other scenarios, nothing is fine. Easy fix: resolve the symlinks, expand the path, then normalize it.

This is usually where most people think we’re done, opening a file is not that hard after all, we can all do more fun things now. Really, this is where the fun begins.

The fix above works as long as the less privileged process cannot change the filesystem tree anywhere in the file’s path while the more privileged process tries to access it. This is usually the case if you unpack an attacker-provided archive into a directory the attacker does not have access to. If it can change the tree, however, we have a classic TOCTOU (time-of-check to time-of-use) race.

We have the path foo/id_ed25519, we resolve the symlinks, we expand the path, we normalize it, and while we did all of that, the other process just replaced the regular directory foo that we just checked with a symlink which points to ../.ssh. We checked that the path resolves to a path inside the target directory, though, and so we happily open the path foo/id_ed25519, which now points to your SSH key. Not an easy fix.

So, what is the fundamental issue here? A path string like /home/user/.local/share/flatpak/app/org.example.App/deploy describes a location in a filesystem namespace. It is not a reference to a file. By the time you finish speaking the path aloud, the thing it names may have changed.

The safe primitive is the file descriptor. Once you have an fd pointing at an inode, the kernel pins that inode. The directory can be unlinked, renamed, or replaced with a symlink; the fd does not care. A common misconception is that file descriptors always represent open files. They can, but fds opened with O_PATH do not open the file for I/O, yet still provide a stable reference to an inode.

The lesson that should be learned here is that you should not call any privileged process with a path. Period. Passing in file descriptors also has the benefit that they serve as proof that the calling process actually has access to the resource.

Another important lesson is that dropping down from a file descriptor to a path makes everything racy again. For example, let’s say that we want to bind mount something based on a file descriptor, and we only have the traditional mount API, so we convert the fd to a path, and pass that to mount. Unfortunately for the user, the kernel resolves the symlinks in the path that an attacker might have managed to place there. Sometimes it’s possible to detect the issue after the fact, for example by checking that the inode and device of the mounted file and the file descriptor match.

With that being said, sometimes using paths is not entirely avoidable, so let’s look into that as well!

In the scenario above, we have a directory inside which we want all the paths to resolve, and which the attacker does not control. We can thus open it with O_PATH and get a file descriptor for it without the attacker being able to redirect it somewhere else.

With the openat syscall, we can open a path relative to the fd we just opened. It has all the same issues we discussed above, except that we can also pass O_NOFOLLOW. With that flag set, if the last segment of the path is a symlink, it does not follow it and instead opens the actual symlink inode. All the other components can still be symlinks, and they will still be followed. We can, however, split up the path and open a file descriptor for each path segment in turn, resolving symlinks manually, until we have done so for the entire path.

libglnx chase

libglnx is a utility library for GNOME C projects that provides fd-based filesystem operations as its primary API. Functions like glnx_openat_rdonly, glnx_file_replace_contents_at, and glnx_tmpfile_link_at all take directory fds and operate relative to them. The library is built around the discipline of “always have an fd, never use an absolute path when you can use an fd.”

The most recent addition is glnx_chaseat, which provides safe path traversal. It was inspired by systemd’s chase() and does precisely what was described above.

int glnx_chaseat (int dirfd, const char *path, GlnxChaseFlags flags, GError **error);

It returns an O_PATH | O_CLOEXEC fd for the resolved path, or -1 on error. The real magic is in the flags:

typedef enum _GlnxChaseFlags {
  /* Default */
  GLNX_CHASE_DEFAULT = 0,
  /* Disable triggering of automounts */
  GLNX_CHASE_NO_AUTOMOUNT = 1 << 1,
  /* Do not follow the path's right-most component. When the path's
   * right-most component refers to a symlink, return an O_PATH fd of
   * the symlink. */
  GLNX_CHASE_NOFOLLOW = 1 << 2,
  /* Do not permit the path resolution to succeed if any component of
   * the resolution is not a descendant of the directory indicated by
   * dirfd. */
  GLNX_CHASE_RESOLVE_BENEATH = 1 << 3,
  /* Symlinks are resolved relative to the given dirfd instead of root. */
  GLNX_CHASE_RESOLVE_IN_ROOT = 1 << 4,
  /* Fail if any symlink is encountered. */
  GLNX_CHASE_RESOLVE_NO_SYMLINKS = 1 << 5,
  /* Fail if the path's right-most component is not a regular file. */
  GLNX_CHASE_MUST_BE_REGULAR = 1 << 6,
  /* Fail if the path's right-most component is not a directory. */
  GLNX_CHASE_MUST_BE_DIRECTORY = 1 << 7,
  /* Fail if the path's right-most component is not a socket. */
  GLNX_CHASE_MUST_BE_SOCKET = 1 << 8,
} GlnxChaseFlags;

While it doesn’t sound too complicated to implement, a lot of details are quite hairy. The implementation uses openat2, open_tree and openat depending on what is available and what behavior was requested, it handles auto-mount behavior, ensures that previously visited paths have not changed, and a few other things.

An Aside on Standard Libraries

The POSIX APIs are not great at dealing with the issue. The GLib/Gio APIs (GFile, etc.) are even worse and only accept paths. Granted, they also serve as a cross-platform abstraction where file descriptors are not a universal concept. Unfortunately, Rust also has this cross-platform abstraction which is based entirely on paths.

If you use any of those APIs, you very likely created a vulnerability. The deeper issue is that those path-based APIs are often the standard way to interact with files. This makes it impossible to reason about the security of composed code. You can audit your own code meticulously, open everything with O_PATH | O_NOFOLLOW, chain *at() calls carefully — and then call a third-party library that calls open(path) internally. The security property you established in your code does not compose through that library call.

This means that any system-level code that cares about filesystem security has to audit all transitive dependencies or avoid them in the first place.

So what would a better GLib cross-platform API look like? I would say not too different from chaseat(), but returning opaque handles instead of file descriptors, which on Unix would carry the O_PATH file descriptor and a path that can be used for printing, debugging and things like that. You would open files from those handles, which would yield another kind of opaque handle for reading, writing, and so on.

The current GFile was also designed to implement GVfs: g_file_new_for_uri("smb://server/share/file") gives you a GFile you can g_file_read() just like a local file. This is the right goal, but the wrong abstraction layer. Instead, this kind of access should be provided by FUSE, and the URI should be translated to a path on a specific FUSE mount. This would provide a few benefits:

  • The fd-chasing approach works everywhere because it is a real filesystem managed by the kernel
  • The filesystem becomes independent of GLib and can be used for example from Rust as well
  • It stacks with other FUSE filesystems, such as the XDG Desktop Document Portal used by Flatpak

Wait, Why Are You Talking About This?

Nowadays I maintain a small project called Flatpak. Codean Labs recently did a security analysis on it and found a number of issues. Even though Flatpak developers were aware of the dangers of filesystem races, and created libglnx because of them, most of the discovered issues were in exactly this area. One of them (CVE-2026-34078) was a complete sandbox escape.

flatpak run was designed as a command-line tool for trusted users. When you type flatpak run org.example.App, you control the arguments. The code that processes the arguments was written assuming the caller is legitimate. It accepted path strings, because that’s what command-line tools accept.

The Flatpak portal was then built as a D-Bus service that sandboxed apps could call to start subsandboxes — and it did this by effectively constructing a flatpak run invocation and executing it. This connected a component designed for trusted input directly to an untrusted caller (the sandboxed app).

Once that connection exists, every assumption baked into flatpak run about caller trustworthiness becomes a potential vulnerability. The fix wasn’t “change one function” — it was “audit the entire call chain from portal request to bubblewrap execution and replace every path string with an fd.” That’s commits touching the portal, flatpak-run, flatpak_run_app, flatpak_run_setup_base_argv, and the bwrap argument construction, plus new options (--app-fd, --usr-fd, --bind-fd, --ro-bind-fd) threaded through all of them.

If the GLib standard file and path APIs were secure, we would not have had this issue.

Another annoyance here is that the entire subsandboxing approach in Flatpak comes from 15 years ago, when unprivileged user namespaces were not common. Nowadays we could (and should) let apps use kernel-native unprivileged user namespaces to create their own subsandboxes.

Unfortunately with rather large changes comes a high likelihood of something going wrong. For a few days we scrambled to fix a few regressions that prevented Steam, WebKit, and Chromium-based apps from launching. Huge thanks to Simon McVittie!

In the end, we managed to fix everything, made Flatpak more secure, the ecosystem is now better equipped to handle this class of issues, and hopefully you learned something as well.

China's CATL Reveals 621-Mile EV Battery, Under-7-Minute Charging

Slashdot - Wed, 22/04/2026 - 6:00 PM
CATL unveiled a new wave of EV battery tech, "including a lighter battery pack rated for a 1,000-km (621-mile) driving range and an upgraded fast-charging battery that can go from 10 percent to 98 percent in under seven minutes," reports Interesting Engineering. From the report: The launches were made during a 90-minute event in Beijing ahead of the Beijing Auto Show, where automakers are expected to showcase next-generation EVs and connected technologies. CATL said its latest Qilin battery -- a high-energy-density pack often paired with nickel manganese cobalt (NMC) cells for long range and improved space efficiency -- can deliver a 1,000-km (621-mile) driving range. It is designed to deliver long range while reducing battery pack weight. The company said the product is aimed at automakers facing tighter efficiency rules in China and other markets. It also rolled out an upgraded Shenxing battery -- CATL's fast-charging lithium iron phosphate (LFP) pack -- that targets one of the biggest barriers to EV adoption: charging time. CATL said the pack can recharge from 10 percent to 98 percent in less than seven minutes. The new Shenxing battery marks a significant improvement over CATL's previous version, which charged from 5 percent to 80 percent in 15 minutes, according to Financial Times. [...] The company also announced plans to begin mass delivery of sodium-ion batteries in the fourth quarter. Sodium-ion technology is seen as a lower-cost alternative that could reduce dependence on lithium, cobalt, and nickel.

Read more of this story at Slashdot.

Pentagon Wants $54 Billion For Drones

Slashdot - Wed, 22/04/2026 - 5:00 PM
An anonymous reader quotes a report from Ars Technica: The US military's massive $1.5 trillion budget request for the next fiscal year includes what Pentagon officials described as the largest investment in drone warfare and counter-drone technology in US history. The proposed spending on drone and autonomous warfare technologies within the FY2027 budget proposal for the US Department of Defense would surpass most countries' defense budgets and rank among the top 10 in the world for military spending, ahead of countries such as Ukraine, South Korea, and Israel. Specifically, the Pentagon is requesting $53.6 billion to boost US production and procurement of drones, train drone operators, build out a logistics network for sustaining drone deployments, and expand counter-drone systems to defend more US military sites. The funding request is budgeted under the Defense Autonomous Warfare Group (DAWG), an organization established in late 2025 that would see a massive budget increase after receiving about $226 million in the 2026 fiscal year budget. [...] Another $20.6 billion would help purchase one-way attack drones and drone aircraft developed through the US Air Force's Collaborative Combat Aircraft program, which is building drone prototypes capable of teaming up with human-piloted fighter jets. Part of this funding would also go toward defensive systems for countering small drones and the US Navy's Boeing MQ-25 drone designed to perform midair refueling of carrier-borne fighter aircraft to extend their strike ranges. Such drone-related spending even rivals the entire budget of the US Marine Corps. But the Pentagon has not said that it is creating a dedicated drone branch of the US military similar to the standalone Space Force. 
Pentagon officials emphasized that most of the money would go toward procuring drone and autonomous warfare technologies that already exist, and is largely separate from additional funding that would bolster US domestic manufacturing capacity to build such weapon systems. "That $70 billion is all going into existing systems and technologies," said Hurst. "The industrial base support is entirely separate." "The evolution we've seen in the battlefield is this evolution of technologies in the timeframe of weeks, not the typical years we see with our defense production," said Lt. Gen. Steven Whitney, director of force structure, resources, and assessment for the Pentagon's Joint Chiefs of Staff, during a Pentagon press briefing. "So it's really critical we work with industry to get that capability fielded."


next-20260422: linux-next

Linux Kernel - Wed, 22/04/2026 - 2:45 PM
Version: next-20260422 (linux-next)
Released: 2026-04-22

7.0.1: stable

Linux Kernel - Wed, 22/04/2026 - 1:33 PM
Version: 7.0.1 (stable)
Released: 2026-04-22
Source: linux-7.0.1.tar.xz
PGP Signature: linux-7.0.1.tar.sign
Patch: full
ChangeLog: ChangeLog-7.0.1

6.19.14: stable

Linux Kernel - Wed, 22/04/2026 - 1:31 PM
Version: 6.19.14 (EOL) (stable)
Released: 2026-04-22
Source: linux-6.19.14.tar.xz
PGP Signature: linux-6.19.14.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-6.19.14

6.18.24: longterm

Linux Kernel - Wed, 22/04/2026 - 1:22 PM
Version: 6.18.24 (longterm)
Released: 2026-04-22
Source: linux-6.18.24.tar.xz
PGP Signature: linux-6.18.24.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-6.18.24

6.12.83: longterm

Linux Kernel - Wed, 22/04/2026 - 1:20 PM
Version: 6.12.83 (longterm)
Released: 2026-04-22
Source: linux-6.12.83.tar.xz
PGP Signature: linux-6.12.83.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-6.12.83

Mars Rover Detects Never-Before-Seen Organic Compounds In New Experiment

Slashdot - Wed, 22/04/2026 - 1:00 PM
NASA's Curiosity rover has identified a diverse set of organic molecules on Mars, including a nitrogen-bearing compound similar in structure to DNA precursors. The finding strengthens the case that ancient organic material can survive in the Martian subsurface, though it does not prove past life because the compounds could also come from geology or meteorites. Phys.org reports: The study was led by Amy Williams, Ph.D., a professor of geological sciences at the University of Florida and a scientist on the Curiosity and Perseverance Mars rover missions. Curiosity landed on Mars in 2012 to find evidence that ancient Mars had conditions that could support microbial life billions of years ago; the Perseverance rover, which landed in 2021, was sent to look for signs of any ancient life that might have formed. Among the 20-plus chemicals identified by the experiment, Curiosity spotted a nitrogen-bearing molecule with a structure similar to DNA precursors -- a chemical never before spotted on Mars. The rover also identified benzothiophene, a large, double-ringed, sulfurous chemical often delivered to planets by meteorites. "The same stuff that rained down on Mars from meteorites is what rained down on Earth, and it probably provided the building blocks for life as we know it on our planet," Williams said. The findings have been published in the journal Nature Communications.


FBI Looks Into Dead or Missing Scientists Tied To Sensitive US Research

Slashdot - Wed, 22/04/2026 - 9:00 AM
Federal authorities are now reviewing a string of deaths and disappearances involving scientists tied to sensitive U.S. aerospace and nuclear work, though officials have not established any confirmed link between the cases. The FBI says it "is spearheading the effort to look for connections into the missing and deceased scientists," adding that it "is working with the Department of Energy, Department of War, and with our state ... and local law enforcement partners to find answers." The Republican-led House Oversight Committee also announced an investigation into the reports. CNN reports: A nuclear physicist and MIT professor fatally shot outside his Massachusetts residence. A retired Air Force general missing from his New Mexico home. An aerospace engineer who disappeared during a hike in Los Angeles. These are among at least 10 individuals connected to sensitive US nuclear and aerospace research who have died or disappeared in recent years, prompting concerns whether they are connected and fueling speculation online about the possibility of nefarious activity. [...] The Defense Department said only that it would respond to the committee directly, and the Department of Energy referred questions to the White House. In a post on X, NASA said it is "coordinating and cooperating with the relevant agencies" in relation to the scientists. "At this time, nothing related to NASA indicates a national security threat," NASA spokesperson Bethany Stevens said. The cases vary widely in circumstance. Some involve unsolved homicides, while others are missing persons cases with no signs of foul play. In at least two instances, families have pointed to preexisting medical conditions or personal struggles as explanations. Authorities have not established any links between the cases. 
The White House said last week it is also working with federal agencies to probe any potential links between the deaths and disappearances, with President Donald Trump referring to the matter as "pretty serious stuff." "The United States has thousands of nuclear scientists and nuclear experts," said Rep. James Walkinshaw, a Democrat who also serves on the Oversight Committee. "It's not the kind of nuclear program that potentially a foreign adversary could significantly impact by targeting 10 individuals."


SpaceX Strikes Deal With Coding Startup Cursor For $60 Billion

Slashdot - Wed, 22/04/2026 - 5:30 AM
An anonymous reader quotes a report from the New York Times: SpaceX, Elon Musk's rocket and satellite company, said on Tuesday that it had struck a deal with the artificial intelligence start-up Cursor that could result in its acquiring the young company for $60 billion. SpaceX is making the deal just as it prepares to go public in what is likely to be one of the largest initial public offerings ever. In a social media post, SpaceX said the combination with Cursor, which makes code-writing software, would "allow us to build the world's most useful" A.I. models. SpaceX added that the agreement gave it the option "to acquire Cursor later this year for $60 billion or pay $10 billion for our work together." It is unclear if the companies plan to consummate the deal before or after SpaceX's I.P.O., which could happen as early as June. [...] Cursor, which has raised more than $3 billion in funding, was founded in 2022 and made waves as a fast-growing A.I. start-up. It was under pressure in recent months after OpenAI and Anthropic announced competing code-writing products that were embraced by tech companies. Cursor had been in talks to raise funding in recent weeks.


Florida Launches Criminal Investigation Into ChatGPT Over School Shooting

Slashdot - Wed, 22/04/2026 - 1:00 AM
Florida's attorney general has launched a criminal investigation into OpenAI over allegations that the accused gunman in a shooting at Florida State University last year used ChatGPT to help plan the attack. OpenAI says the chatbot is "not responsible for this terrible crime" and only provided factual information available from public sources. NPR reports: The Republican attorney general, James Uthmeier, said at a press conference in Tampa on Tuesday that accused gunman Phoenix Ikner consulted ChatGPT for advice before the shooting, including what type of gun to use, what ammunition went with it, and what time to go to campus to encounter more people, according to an initial review of Ikner's chat logs. "My prosecutors have looked at this and they've told me, if it was a person on the other end of that screen, we would be charging them with murder," Uthmeier said. "We cannot have AI bots that are advising people on how to kill others." Uthmeier's office is issuing subpoenas to OpenAI seeking information about its policies and internal training materials related to user threats of harm and how it cooperates with and reports crimes to law enforcement, dating back to March 2024. At the press conference, Uthmeier acknowledged the investigation is entering into uncharted territory and is uncertain about whether OpenAI has criminal liability. "We are going to look at who knew what, designed what, or should have done what," he said. "And if it is clear that individuals knew that this type of dangerous behavior might take place, that these types of unfortunate, tragic events might take place, and nevertheless still turned to profit, still allowed this business to operate, then people need to be held accountable." [...] Ikner, 21, is facing multiple charges of murder and attempted murder for the April 2025 shooting near the student union on FSU's Tallahassee campus, where he was a student at the time. His trial is set to begin on Oct. 19. 
According to court filings, more than 200 AI messages have been entered into evidence in the case.

Read more of this story at Slashdot.

Mozilla Uses Anthropic's Mythos To Fix 271 Bugs In Firefox

Slashdot - Wed, 22/04/2026 - 12:00am
BrianFagioli writes: Mozilla says it used an early version of Anthropic's Claude Mythos Preview to comb through Firefox's code, and the results were hard to ignore. In Firefox 150, the team fixed 271 vulnerabilities identified during this effort, a number that would have been unthinkable not long ago. Instead of relying only on fuzzing tools or human review, the AI was able to reason through code and surface issues that typically require highly specialized expertise. The bigger implication is less about one release and more about where this is heading. Security has long favored attackers, since they only need to find a single flaw while defenders have to protect everything. If AI can scale vulnerability discovery for defenders, that dynamic could start to shift. It does not mean zero days disappear overnight, but it suggests a future where bugs are found and fixed faster than attackers can weaponize them. "Computers were completely incapable of doing this a few months ago, and now they excel at it," says Mozilla in a blog post. "We have many years of experience picking apart the work of the world's best security researchers, and Mythos Preview is every bit as capable. So far we've found no category or complexity of vulnerability that humans can find that this model can't." The company concluded: "The defects are finite, and we are entering a world where we can finally find them all."

Read more of this story at Slashdot.

Framework Laptop 13 Pro Is a Major Overhaul For the Modular, Upgradeable Laptop

Slashdot - Tue, 21/04/2026 - 11:00pm
An anonymous reader quotes a report from Ars Technica: Framework has been selling and shipping its modular, repairable, upgradable Laptop 13 for five years now, and in that time, it has released six distinct versions of its system board, each using fresh versions of Intel and AMD processors (seven versions, if you count this RISC-V one). The laptop around those components has gradually gotten better, too. Over the years, Framework has added higher-resolution screens in both matte and glossy finishes, a slightly larger battery, and other tweaked components that refine the original design. But so far, all of those parts have been totally interchangeable, and the fundamentals of the Laptop 13 design haven't changed much. That changes today with the Framework Laptop 13 Pro, which, despite its name, is less an offshoot of the original Laptop 13 and closer to a ground-up redesign. It includes new Core Ultra Series 3 chips (codenamed Panther Lake), Framework's first touchscreen, a new black aluminum color option, a larger battery, and other significant changes. And while it sacrifices some component compatibility with the original Laptop 13, displays and motherboards remain interchangeable, so Framework Laptop owners can buy the new Core Ultra board and owners of older Framework Laptop boards can pop one into a Pro to benefit from the new battery and screen. At 1.4kg (about 3 pounds), the Laptop 13 Pro is slightly heavier than the Laptop 13's 1.3kg, but it still stacks up well against the 14-inch M5 MacBook Pro (1.55kg, or 3.4 pounds). The Framework Laptop Pro will start at $1,199 for a DIY edition with a Core Ultra 5 325 processor, and no RAM, SSD, or operating system. A prebuilt version with Ubuntu Linux installed will start at $1,499, and Windows 11 will cost another $100 on top of that. A Core Ultra X7 358H version starts at $1,599 for a DIY edition, and a "limited batch" Core Ultra X9 388H version starts at $1,799. 
A bare motherboard with the Core Ultra 5 325 starts at $449, while a Core Ultra X7 358H board will cost $799. Pre-orders are available now, and begin shipping in June.

Read more of this story at Slashdot.

Michael Meeks: 2026-04-21 Tuesday

Planet GNOME - Tue, 21/04/2026 - 11:00pm
  • Up early, off to HCL Engage in a football stadium for Richard's keynote, Jason's flashy Domino / AI demo, product management bits, and of course Collabora Online integration announced.
  • Gave talk on COOL, handed out huge numbers of beavers, quick-start guides, stickers and more. Great to talk to lots of excited people engaged with Sovereign alternatives.
  • Dinner in the evening, met more interesting people.

Jussi Pakkanen: CapyPDF is approaching feature sufficiency

Planet GNOME - Tue, 21/04/2026 - 10:09pm

In the past I have written many blog posts on implementing various PDF features in CapyPDF. Typically they explain the feature being implemented, how confusing the documentation is, what perverse undocumented quirks one has to work around to get things working and so on. To save the effort of me writing and you reading yet another post of the same type, let me just say that you can now use CapyPDF to generate PDF forms that have widgets like text fields and radio buttons.

What makes this post special is that forms and widget annotations were pretty much the last major missing PDF feature. Does that mean CapyPDF now supports everything? No, of course not. There is a whole bunch of subtlety to consider. Let's start with the fact that the PDF spec is massive, close to 1000 pages. Among its pages are features that are either no longer used or have been replaced by other features and deprecated.

The implementation principle of CapyPDF thus far has been "implement everything that needs special tracking, but only to the minimal level needed". This seems complicated but is in fact quite simple. As an example the PDF spec defines over 20 different kinds of annotations. Specifying them requires tracking each one and writing out appropriate entries in the document metadata structures. However once you have implemented that for one annotation type, the same code will work for all annotation types. Thus CapyPDF has only implemented a few of the most common annotations and the rest can be added later when someone actually needs them.

Many objects have lots of configuration options which are defined by adding keys and values to existing dictionaries. Again, only the most common ones are implemented, the rest are mostly a matter of adding functions to set those keys. There is no cross-referencing code that needs to be updated or so on. If nobody ever needs to specify the color with which a trim box should be drawn in a prepress preview application, there's no point in spending effort to make it happen.
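The principle described above can be illustrated with a small sketch. This is not CapyPDF's actual code or API, just a hypothetical Python example of the idea: once the machinery for tracking one annotation type and writing out its dictionary exists, a new annotation subtype or an uncommon configuration key is mostly just another entry in a dictionary.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    subtype: str   # e.g. "Text", "Link", "Widget" (over 20 kinds in the spec)
    rect: tuple    # (x1, y1, x2, y2) in PDF user-space units
    # Rarely used configuration options are just extra key/value pairs.
    extra: dict = field(default_factory=dict)

def serialize_annotation(obj_num: int, a: Annotation) -> str:
    """Emit the annotation as a PDF indirect object; the same writer
    works for every subtype, so adding a new one needs no new code."""
    entries = {"Type": "/Annot", "Subtype": f"/{a.subtype}",
               "Rect": "[{} {} {} {}]".format(*a.rect)}
    entries.update(a.extra)
    body = " ".join(f"/{k} {v}" for k, v in entries.items())
    return f"{obj_num} 0 obj\n<< {body} >>\nendobj"

# A form text-field widget (/FT /Tx) falls out of the same path:
print(serialize_annotation(7, Annotation("Widget", (0, 0, 100, 20),
                                         {"FT": "/Tx"})))
```

The key names (`/Annot`, `/Subtype`, `/Rect`, `/FT`) come from the PDF spec; everything else here is invented for illustration.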

The API should be mostly done, especially for drawing operations. The API for widgets probably needs to change, especially since form submission actions are not implemented yet. I don't know if anything actually uses those, though. That work can be done based on user feedback.

Job Cuts Driven By AI Are Rising On Wall Street

Slashdot - Tue, 21/04/2026 - 10:00pm
Firms like Bank of America, Citi, Wells Fargo, and others are reporting strong profits while reducing head count and automating more work. "All of them credited A.I. to some degree ... in areas ranging from the so-called back office, where tens of thousands of employees fill out paperwork to comply with various laws and regulations, to the front office, where seven-figure salaried professionals put together complicated financial transactions for corporate clients," reports the New York Times. From the report: Less than four months ago, Bank of America's chief executive, Brian T. Moynihan, volunteered in a TV interview what he would say to his 210,000 employees about the chance of artificial intelligence replacing human work. "You don't have to worry," he said. "It's not a threat to their jobs." Last week, after Bank of America reported $8.6 billion in profit for the first quarter -- $1.6 billion more than the same period a year earlier -- Mr. Moynihan struck a different tone. The bank's bottom line, he said, was helped by shedding 1,000 jobs through attrition by "eliminating work and applying technology," which he repeatedly specified was artificial intelligence. He predicted more of that in the months and years to come. "A.I. gives us places to go we haven't gone," Mr. Moynihan said. The veneer of Wall Street's longstanding assertion -- that A.I. will enhance human work, not replace it -- is rapidly peeling away, as evidenced by the current quarterly earnings season. JPMorgan Chase, Citi, Bank of America, Goldman Sachs, Morgan Stanley and Wells Fargo racked up $47 billion in collective profits, up 18 percent, while shedding 15,000 employees. All of them credited A.I. 
to some degree with helping cut jobs and automate work in areas ranging from the so-called back office, where tens of thousands of employees fill out paperwork to comply with various laws and regulations, to the front office, where seven-figure salaried professionals put together complicated financial transactions for corporate clients. Unlike executives in Silicon Valley, few major financial figures are stating outright that A.I. is eliminating jobs. Citi, for example, has pledged to shrink its work force by 20,000 people through what one executive described to financial analysts last week as the company's "productivity and efficiency journey." The bank is paying for A.I. software from Anthropic, Google, Microsoft and OpenAI, to automatically read legal documents, approve account openings, send invoices for trades and organize sensitive customer data, among other tasks, according to public statements by bank executives and two people familiar with Citi's systems. Among the recent job cuts at Citi were scores of employees who were part of the bank's "A.I. Champions and Accelerators" program, according to the two people, who were not permitted by the bank to speak publicly. The program involves Citi employees who perform their day jobs while also working to persuade their colleagues to adopt A.I. technologies.

Read more of this story at Slashdot.

Meta To Start Capturing Employee Mouse Movements, Keystrokes For AI Training Data

Slashdot - Tue, 21/04/2026 - 9:00pm
Reuters reports that Meta plans to start collecting U.S.-based employees' mouse movements, clicks, keystrokes, and occasional screen snapshots to train AI agents that can better learn how humans use computers. The tool, called Model Capability Initiative (MCI), will reportedly "not be used for performance assessments or any other purpose besides model training and that safeguards were in place to protect 'sensitive content.'" From the report: Meta CTO Andrew Bosworth told employees in a separate memo shared on Monday that the company would step up internal data collection as part of those "AI for Work" efforts, now re-branded as Agent Transformation Accelerator (ATA). "The vision we are building towards is one where our agents primarily do the work and our role is to direct, review and help them improve," Bosworth said. The aim, he added, was for agents to "automatically see where we felt the need to intervene so they can be better next time." Bosworth did not explicitly spell out how those agents would be trained, but said Meta would be "rigorous" about "building up data and evals for all the types of interactions we have as we go about our work." Meta spokesperson Andy Stone acknowledged that the MCI data would be among the inputs. [...] "If we're building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them -- things like mouse movements, clicking buttons, and navigating dropdown menus," said Stone.

Read more of this story at Slashdot.

Google's Internal Politics Leave It Playing Catch-Up On AI Coding

Slashdot - Tue, 21/04/2026 - 8:00pm
An anonymous reader quotes a report from Bloomberg: At Google, leaders are anxious about falling behind in the race to offer AI coding tools, especially as rivals like Anthropic PBC offer more effective and popular tools to businesses, according to people familiar with the matter. The search giant is now working to unite some of its coding initiatives under one banner to speed progress and take advantage of a surge in customer interest. In some corners of Alphabet's Google, particularly AI lab DeepMind, concerns about the company's position are mounting, according to current and former employees and executives, who declined to be named because they weren't authorized to speak publicly. Businesses are just starting to realize that AI coding tools can enable anyone to build products by prompting a chatbot. But Google doesn't have a clear solution for them. Its Gemini model's capabilities are sprinkled across half a dozen different coding products with different branding, indicating how the company's lack of focus and competing internal efforts have hampered success, the people said. Even internally, some Google engineers prefer to use Anthropic's Claude Code, they said. More concerning, the people said, are the engineers who are struggling to adopt AI coding at all. [...] Google's emphasis on its own technology has also complicated the push to catch up. Most employees are banned from using competing tools such as Claude Code or Codex due to security concerns, but Googlers can request exceptions if they can demonstrate they have a business case, one former employee said. Some teams at DeepMind, including those working on the Gemini model, internal applications, and open source models, use Claude Code, according to three former employees. "You want the best people to use the best tool, even inside Google," one of the former employees said. [...] In recent years, DeepMind has tried to tighten control over how its AI breakthroughs are woven into Google products. 
Last year, Google appointed Kavukcuoglu to a new position as chief AI architect, a role in which he is charged with folding generative AI into Google products. Yet confusion about who is leading the charge on AI coding persists. Along with DeepMind, Google Cloud, Google Core, Google Labs and Android are all pushing AI coding in different ways, one of the people said. [...] Within the Googleplex, there is a philosophical clash between AI researchers who want to move as quickly as possible and more traditional senior engineers who have exacting standards for code quality, former employees say. AI usage is factored into performance reviews, according to a former employee. But engineers who try to use internal AI coding tools often hit capacity constraints due to competition for computing power, the former employee said.

Read more of this story at Slashdot.

Pages

Subscribe to the AlbLinux aggregator