
Feed aggregator

Michael Meeks: 2026-05-04 Monday

Planet GNOME - Mon, 04/05/2026 - 11:00pm
  • A day off - about time. Early partner call.
  • Helped J. put up stainless wire for rose training in the garden. Plugged away at garage tidying with more good progress.
  • Lunch with the family outside in the sun; tidied my office for the first time in a while; got the ladder moved into J's garden shed.
  • Made a wooden spatula with H. in the evening, turning plus band-sawing action; fun. Left it in tung-oil overnight.

Control Panel Authentication Failures Expose Entire Linux Servers

LinuxSecurity.com - Mon, 04/05/2026 - 7:18pm
Linux security usually comes down to access controls and permissions, but those controls only work if the platform enforcing them holds up. What happens when the control layer most Linux environments depend on fails?

Roblox Blames Age-Verification Rollout for Lowered Growth. Stock Tumbles 22%

Slashdot - Mon, 04/05/2026 - 6:34am
Age verification became mandatory for chat access on Roblox in January — and Friday morning Quartz reported it's apparently impacted the company's financials: Roblox cut its full-year 2026 bookings forecast by roughly $900 million at the midpoint on Thursday, blaming stronger-than-expected headwinds from its mandatory age-verification rollout on an audience that skews heavily toward children and teenagers. Full-year 2026 bookings are now projected at $7.33 billion to $7.60 billion, a range that sits roughly $900 million below the prior guidance of $8.28 billion to $8.55 billion; analysts had expected $8.38 billion, according to Yahoo Finance. Roblox stock fell almost 22% in premarket trading.... Daily active users rose 35% year over year to 132 million, while hours engaged climbed 43% to 31 billion hours... Daily Active Users and hours engaged fell below forecasts of 143.8 million and 33.68 billion, respectively, according to Yahoo Finance... Users who have not completed age checks have faced restricted communication features, and the process has weighed on the platform's ability to bring in new users. Russia's blocking of the platform, which took effect in December 2025, added further drag on user growth, according to Yahoo Finance. As of the end of the first quarter, 51% of global daily active users had completed age verification, with 65% of U.S. users having done so, Roblox said.... The safety push has come with legal costs. Roblox accrued $57 million in the first quarter for settlements and settlement proposals with certain states over youth-related consumer protection and digital safety matters, with payments structured over multiple years, the company said. Roblox acknowledged in a letter to shareholders that "our aggressive push to enhance safety lowers our expectations for topline growth in 2026." 
But they argued that it also "makes our platform fundamentally better and amplifies the long-term growth potential of Roblox through more effective content targeting, tailored communication experiences, and improved community sentiment."

Read more of this story at Slashdot.

NetHack 5.0 Released

Slashdot - Mon, 04/05/2026 - 4:09am
"So yesterday the Devteam (it is always the Devteam) released version 5.0 of legendary and venerable roguelike computer game NetHack," writes the Rogue-like games column @Play. "It is 39 years old..." MilenCent (Slashdot reader #219,397) writes: In addition to play changes it's left for players to discover, this version updates the code to compile with C99, makes it much easier to cross-compile the code for systems other than the one running, and now uses Lua for its dungeon generation. Happy hacking! For new players, "Nethack 5.0 now has an optional tutorial in the early phases of the game that might help you," notes the Rogue-like games column @Play: Binaries are provided for three systems: Windows, MS-DOS and Amiga. Yes, Nethack still supports MS-DOS, and yes, it still supports classic Amiga: it explicitly supports AmigaDOS 3.0, meaning it can still run on 68000 machines... That these are the only systems they provide binaries for shouldn't be seen as an indication that these are the "most important" platforms for Nethack; it's more that, since it's entirely open source, building it yourself is entirely possible, and more expected than with most software. Nethack can be built for Linux, Windows 8-11, AmigaDOS, MacOS (I'm not sure if this includes classic Mac too but it might), Windows CE (wow), OS/2 (additional wow), BeOS, VMS and multiple Unixes... Another option is to play through public Nethack servers. The most popular of these are probably alt.org and Hardfought.

Read more of this story at Slashdot.

OpenAI Introduces AI-Generated Pets for Its Codex App

Slashdot - Mon, 04/05/2026 - 2:29am
"Vibe coding just got a whole lot more adorable," writes Engadget: OpenAI introduced AI-generated pets to the Codex app, its agentic tool that helps with coding. These "optional animated companions" don't do any coding themselves, but serve as a floating overlay that can tell you what Codex is working on, notify you when Codex completes a task or when it needs your input on something. The new feature lets developers see Codex's active thread, without having to switch away from your current open app. "The feature ships with eight built-in variations — including a cat and dog," reports Mashable. "But the more interesting play is the custom pet creator." Users can prompt Codex directly to generate their own companion, then share it online. A quick scroll through the homepage reveals the community has already gotten to work. Current creations include Goku, Patrick Star, Microsoft's long-retired Clippy, OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, and — naturally — a goblin. There's also Grogu, Dobby, a tiny Bob Ross, and a "Doge-style Shiba Inu dog"...

Read more of this story at Slashdot.

AI Cameras are Being Deployed Across the Western US for Early Detection of Wildfires

Slashdot - Mon, 04/05/2026 - 1:29am
The Associated Press reports: On a March afternoon, artificial intelligence detected something resembling smoke on a camera feed from Arizona's Coconino National Forest. Human analysts verified it wasn't a cloud or dust, then alerted the state's forest service and largest electric utility. One of dozens of AI cameras installed for the utility Arizona Public Service had spotted early signs of what came to be known as the Diamond Fire. Firefighters raced to the scene and contained the blaze before it grew past 7 acres (2.8 hectares). As record-breaking heat and an abysmal snowpack raise concerns about severe wildfires, states across the fire-prone West are adding AI to their wildfire detection toolbox, banking on the technology to help save lives and property. Arizona Public Service has nearly 40 active AI smoke-detection cameras and plans to have 71 by summer's end, and the state's fire agency has deployed seven of its own. Another utility, Xcel Energy in Colorado, has installed 126 and aims to have cameras in seven of the eight states it serves by year's end... ALERTCalifornia is a network of some 1,240 AI-enabled cameras across the Golden State that work similarly to the system in Arizona.... Pano AI, whose technology combines high-definition camera feeds, satellite data and AI monitoring, has seen growing interest in its cameras since launching in 2020. They've been deployed in Australia, Canada and 17 U.S. states, including Oregon, Washington and Texas... Last year, its technology detected 725 wildfires in the U.S., the company said... Cindy Kobold, an Arizona Public Service meteorologist, said the technology notifies them about 45 minutes faster on average than the first 911 call.

Read more of this story at Slashdot.

Carbon Pollution Is Making Food Less Nutritious, Risking the Health of Billions

Slashdot - Mon, 04/05/2026 - 12:29am
A new meta-analysis found nutrients in food decreased over the last 40 years, reports the Washington Post. "Many of humanity's most important crops — including wheat, potatoes, beans — contain fewer vitamins and minerals than they did a generation ago." "The invisible culprit behind this damaging phenomenon? Carbon dioxide pollution." Surging concentrations of carbon in the atmosphere, caused largely by burning fossil fuels, have produced potent changes in the way plants grow — from increasing their sugar content to depleting essential nutrients like zinc... "The diets we eat today have less nutritional density than what our grandparents ate, even if we eat exactly the same thing," said Kristie Ebi, a professor at the University of Washington's Center for Health and the Global Environment. People in wealthy countries with strong health care systems will have many tools to cope with the change, experts said. But for the world's poorest and most vulnerable, the consequences could be devastating. One study concluded that by the middle of the century the phenomenon could put more than a billion additional women and children at risk of iron-deficiency anemia — a condition that can cause pregnancy complications, developmental problems and even death. Meanwhile, some 2 billion people across the globe who already suffer from some form of nutrient shortage could see their health problems grow even worse. "The scale of the problem is huge," Ebi said. Plants depend on carbon dioxide to perform photosynthesis — but that doesn't mean they grow better when there's more carbon in the air, scientists say. A sweeping survey of changes among 32 compounds in 43 crops found that nearly every plant that humans eat is harmed by rising CO2 levels... They found that nutrients have already decreased by an average of 3.2 percent across all plants since the late 1980s, when the concentration of carbon dioxide in the atmosphere was about 350 parts per million.
Thanks to long-time Slashdot reader GameboyRMH for sharing the news.

Read more of this story at Slashdot.

7.1-rc2: mainline

Linux Kernel - Sun, 03/05/2026 - 11:21pm
Version: 7.1-rc2 (mainline)  Released: 2026-05-03  Source: linux-7.1-rc2.tar.gz  Patch: full (incremental)

Robots Are Building Clay Homes In Texas Using Dirt From the Ground

Slashdot - Sun, 03/05/2026 - 10:59pm
A startup south of Austin is using robots to build homes out of clay pulled directly from the ground, reports a local news station: The materials are gathered on site, mixed, and placed on a build plate. From there, a robot lowers from above, picks up the clay with a claw, carries it to the wall and drops it into place. Later, the same robot switches tools, using a hammer attachment to pound the material into shape. "It's kind of trying to replicate how a human might build an adobe house," said software engineer Anastasia Nikoulina... Using machine learning, the system constantly evaluates the wall, adjusting how it builds to create a flat, solid surface... The project is underway at Proto-Town, a ranch between Lockhart and Luling where startups test new technologies, from anti-drone systems to nuclear reactors. The company plans to build their next home on the property, with hopes to do more than 20 homes over the next year.

Read more of this story at Slashdot.

Nick Richards: WhatCable, Framework, and USB-C

Planet GNOME - Sun, 03/05/2026 - 10:10pm

USB-C is excellent, provided you don’t look too closely.

I’ve been seeing a drum beat of interest in the internals of USB-C. Darryl Morley’s macOS WhatCable, Chromebooks exposing lots of lovely info about emarkers, USB cable testers and a bit more. Very infrastructure club topics. So I made a small GTK app also called WhatCable which is intended to show what Linux knows about your USB ports, cables, chargers and devices, but written as a GNOME/libadwaita app and using the interfaces Linux exposes through sysfs.

The hope was fairly straightforward: plug things into my Framework 13, ask Linux what is going on, and present the answer in a way that doesn’t require remembering which bit of /sys to poke. In particular I wanted cable identity and e-marker details. These are the useful little facts that tell you whether a cable is what it claims to be, or at least what it claims to be electronically. Given the number of USB-C cables in the house whose origin story is “came in a box with something”, this felt like a public service, or at least a satisfying evening.

The first bit is pleasantly sensible. Linux has standard-ish places for this information:

/sys/bus/usb/devices
/sys/class/typec
/sys/class/usb_power_delivery
/sys/bus/thunderbolt/devices

When those are populated, a normal unprivileged app can learn quite a lot. It can show USB devices, Type-C ports, partners, cables, roles, power data, Thunderbolt and USB4 domains. That’s exactly the sort of thing a small Flatpak app should be good at: read some public kernel state, translate it into something at least moderately human friendly and then depart.
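A minimal sketch of that read-and-translate step, assuming the standard kernel Type-C class layout under /sys/class/typec (ports named port0, port1, ... with per-port attribute files such as data_role and power_role); the function name and the base-path parameter are my own, not WhatCable's actual code:

```python
from pathlib import Path

def read_typec_ports(base="/sys/class/typec"):
    """Return {port: {attr: value}} for each Type-C port the kernel exposes.

    Returns None when the class directory is missing entirely: a different
    fact from an empty dict, which means "class present, but no ports".
    """
    root = Path(base)
    if not root.is_dir():
        return None
    ports = {}
    for entry in sorted(root.iterdir()):
        # Ports are named port0, port1, ...; partner and cable nodes such
        # as "port0-partner" also live here, and are skipped.
        if not entry.name.startswith("port") or "-" in entry.name:
            continue
        attrs = {}
        for attr in ("data_role", "power_role", "power_operation_mode"):
            f = entry / attr
            if f.is_file():
                attrs[attr] = f.read_text().strip()
        ports[entry.name] = attrs
    return ports
```

Keeping "directory missing" and "directory empty" as distinct return values matters later: an unprivileged app can only report honestly if it preserves that difference.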

On my Framework 13, the USB device and Thunderbolt sides were useful. The Type-C side was not. /sys/class/typec existed but had no ports. /sys/class/usb_power_delivery existed but was empty. This is a slightly annoying result, because it means the nice standard API is present as a signpost rather than a destination.

The next clue was that the machine clearly does have USB-C machinery, and not just because I could look at the side of the device. It is a Framework 13 with the embedded controller and Cypress CCG power delivery controllers doing real work. The relevant kernel modules were loaded, including UCSI and Chrome EC pieces. There was also an ACPI UCSI device at:

/sys/bus/acpi/devices/USBC000:00

but ucsi_acpi did not appear to bind to it and create the Type-C class ports. So the hardware and firmware know things, but they were not arriving in the standard Linux userspace shape.

Framework’s own tooling gives another route in. I built framework_tool from FrameworkComputer/framework-system and asked the EC what it could see. The Framework-specific PD port command did not work on this firmware:

USB-C Port 0: [ERROR] EC Response Code: InvalidCommand

and similarly for the other ports. That’s not very poetic, but it is at least clear.

The Chromebook-style power command was more useful. With a charger connected it reported, for example:

USB-C Port 0 (Right Back):
  Role: Sink
  Charging Type: PD
  Voltage Now: 19.776 V, Max: 20.0 V
  Current Lim: 2250 mA, Max: 2250 mA
  Dual Role: Charger
  Max Power: 45.0 W

That’s good information. It’s not cable identity, but it is the kind of port state people actually want when they are trying to work out why a laptop is charging slowly, or not charging, or doing something else mildly USB-C shaped.

framework_tool --pd-info could also talk through the EC to the Cypress controllers and report their firmware details:

Right / Ports 01
  Silicon ID: 0x2100
  Mode: MainFw
  Ports Enabled: 0, 1
  FW2 (Main) Version: Base: 3.4.0.A10, App: 3.8.00
Left / Ports 23
  Silicon ID: 0x2100
  Mode: MainFw
  Ports Enabled: 0, 1
  FW2 (Main) Version: Base: 3.4.0.A10, App: 3.8.00

Again, useful. Again, not the cable.

Much of this investigation and app code was written with AI tools in the loop. That was useful for chasing down boring plumbing and generating probes. The decisive test was asking the Chrome EC for the newer Type-C discovery data directly. The EC advertised USB PD support, but not the newer Type-C command set. EC_CMD_TYPEC_STATUS and EC_CMD_TYPEC_DISCOVERY both came back as invalid commands on all four ports.

That means that on this Framework 13 firmware path I cannot get Discover Identity results, SOP/SOP’ discovery data, SVIDs, mode lists or e-marker details through Chrome EC host commands. The cable may well be telling the PD controller interesting things, but those things are not exposed through a stable unprivileged interface I can sensibly use in a desktop app.

This is the main lesson from the whole exercise: USB-C inspection on Linux is not one API. It is a set of possible stories. Sometimes the kernel Type-C class tells you lots of things. Sometimes Thunderbolt sysfs tells you a different useful slice. Sometimes a vendor EC can tell you power state, but only as root. Sometimes the information exists below you somewhere, but not in a form you should build an app around.

So WhatCable needs to be honest. It should show the sources it can read, and it should say when a source is unavailable rather than pretending absence means certainty. “No cable identity exposed on this machine” is a very different statement from “this cable has no identity”. The former is boring but true. The latter is how you end up lying with an icon (it is not a nice icon).

The current shape I think is right is:

  • use USB, Type-C, USB PD and Thunderbolt sysfs whenever they are available;
  • show raw values as well as friendly summaries;
  • explain missing sources in diagnostics;
  • treat Framework EC data as an optional extra, not a default dependency;
  • if EC access is added, put it behind a narrow read-only helper rather than teaching a Flatpak app to fling arbitrary commands at /dev/cros_ec.
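The "explain missing sources in diagnostics" bullet can be sketched as a tiny probe over the sysfs roots listed earlier. The function name and the three status labels are my own invention, not WhatCable's actual API; the point is that "missing", "empty" and "populated" are three different statements and the UI should never collapse them:

```python
from pathlib import Path

# The four sysfs roots the post enumerates as standard-ish sources.
SOURCES = {
    "usb": "/sys/bus/usb/devices",
    "typec": "/sys/class/typec",
    "usb_pd": "/sys/class/usb_power_delivery",
    "thunderbolt": "/sys/bus/thunderbolt/devices",
}

def probe_sources(sources=SOURCES):
    """Classify each source as 'missing', 'empty' or 'populated', so
    diagnostics can say which, instead of conflating absence with certainty."""
    report = {}
    for name, path in sources.items():
        p = Path(path)
        if not p.is_dir():
            report[name] = "missing"
        elif not any(p.iterdir()):
            report[name] = "empty"
        else:
            report[name] = "populated"
    return report
```

On a machine like the Framework 13 described above, a probe along these lines would presumably report usb and thunderbolt as populated while typec has no ports and usb_pd is empty, which is exactly the statement the app should surface.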

That last point matters. On the host /dev/cros_ec exists, but it is root-only. Making a normal app require broad device access would be a poor bargain. A small privileged helper that answers a few known-safe questions might be acceptable. A graphical app with arbitrary EC command execution would be exciting in the wrong way.

This is not quite the result I wanted when I started. I wanted to show a friendly “this is a 100W e-marked cable” label and feel very clever about it. What I have instead is a more modest app and a better understanding of where the bodies are buried. That’s still useful. A tool that tells you what your machine actually exposes is better than one that implies the USB-C universe is more orderly than it is. Given this, I’m not going to be sharing this one more widely, but fork away if you wish, or come back with a better idea.

It’s very easy to run with GNOME Builder, so just check out the source and ‘press play’ or get an artifact out of the Github Actions. If you run WhatCable on a different laptop and see rich Type-C data, lovely. If you run it on a Framework 13 like mine and mostly see USB devices, Thunderbolt controllers and a note that Type-C data is missing, that is also information. Not as glamorous as catching a suspicious cable in the act, but much more likely to be true.

It's Goodbye Time for Jeeves and Ask.com - Relics of Yesterday's Internet

Slashdot - Sun, 03/05/2026 - 9:41pm
A 1999 press release bragged "Jeeves" answered 92.3 million questions in just three months. "In the digital wilds of Y2K, we came to him with our most probing questions," remembers the New York Times — whether it was Britney Spears or tamagotchis: We asked, and he answered: Jeeves, the digital butler of information, the online valet who led us into the depths of cyberspace. Now, like so many other relics of yesterday's internet, Jeeves — and his home, Ask.com — are no more. After almost 30 years, the question-and-answer service and former search engine shuttered on Friday. "To you — the millions of users who turned to us for answers in a rapidly changing world — thank you for your endless curiosity, your loyalty, and your trust," the company said in a notice posted on its now-defunct website... Created in Berkeley, Calif., in the days of the dot-com gold rush, Ask Jeeves first appeared on computer screens in 1996.... Their mascot, Jeeves, was modeled on the clever English butler character from the famed P.G. Wodehouse book series. Its search function was simple — type in a question, get an answer. But the quality of its responses was uneven, and the website was quickly eclipsed by Google and Yahoo as the world's go-to search engines. The site was bought by InterActive Corp. for more than $1 billion in 2005, and was given an injection of cash to help it compete as a search engine. It rebranded as Ask.com and as part of the reimagining, the site also ditched the character of Jeeves in 2006. Scrappy but inventive, the site was one of the first to introduce hyperlocal map overlays to its searches and incorporate thumbnails of webpages. "They are doing a lot of clever and interesting things," a Google executive noted of Ask.com at the time. Still, Ask.com struggled to compete and returned in 2010 to its bread and butter: question-and-answer style prompts. 
Even then, it faltered against newer, crowdsourced iterations like Quora and Google's unyielding march to the internet fore — the platform now dominates search traffic, and the world's general experience of the internet. A statement at Ask.com ends "by thanking its millions of users, and saying, 'Jeeves' spirit endures'," notes this article from Engadget: As sad as it is to see a relic of the early Internet days fade into obscurity, we still have Ask Jeeves to thank for why some users still punch in full questions when querying Google. On top of that, Jeeves was built to provide detailed answers in natural language, which could have arguably acted as a precursor to today's AI chatbots like ChatGPT. "Now, Ask.com joins the Internet graveyard that includes competitors like AltaVista, which shut down in 2013," the article points out. "With Ask.com gone, alongside AIM and AOL dial-up services also sunsetting, we're truly coming to an end of a specific era of the Internet." And the New York Times argues the memory of Jeeves now rests somewhere between Limewire and Beanie Babies... Slashdot reader BrianFagioli calls it "a quiet reminder of how quickly the web moves, and how even widely recognized names can drift into obscurity once the underlying technology leaves them behind."

Read more of this story at Slashdot.

Former Nintendo Executive Says Amazon Once Requested 'Illegal' Price Discounts

Slashdot - Sun, 03/05/2026 - 8:28pm
Amazon once tried to pressure Nintendo to break the law, says former Nintendo of America President Reggie Fils-Aimé. At a recent NYU lecture, he describes a conversation with an Amazon executive, Kotaku reports: "Amazon was looking to get bigger into the video game space," said Fils-Aimé. "Amazon's mentality back then is they wanted to have the lowest price out in the marketplace, even lower than Walmart... Essentially what Amazon wanted (was an) obscene amount of support, financial support, so they could have the lowest price and beat Walmart. I literally said to the executive, 'You know that's illegal, right? I can't do that'...." At the time, the Wii and DS were Nintendo's best selling hardware in history. Amazon originally sold books, but in the 2000s rapidly expanded with cheaper discounts to become a one-stop shop for almost everything. Everything except Nintendo, that is.... "Literally we stopped selling to Amazon," Fils-Aimé continued, "and it's because I wasn't going to do something illegal. I wasn't going to do something that would put at risk the relationship we have with other retailers." "The two sides have since made amends," notes the Verge, "and you can buy a Switch 2 through Amazon. But for a long time, Nintendo consoles had been largely unavailable on the site."

Read more of this story at Slashdot.

ChatGPT Became So Obsessed With Goblins That OpenAI Had to Intervene

Slashdot - Sun, 03/05/2026 - 6:34pm
The Wall Street Journal reports that OpenAI "recently gave its popular ChatGPT strict instructions. Stop talking about goblins." Recent models of the artificial-intelligence chatbot have been bringing up the creatures in conversations with users seemingly out of the blue, as well as gremlins, trolls and ogres. The goblin-speak caught the attention of programmers, who are often heavy users of the bot. Barron Roth, a 32-year-old product manager at a tech company, said the bot referred to a flaw in his code as a "classic little goblin." He said he counted more than 20 times it mentioned goblins, without any prompting... Several users speculated that goblin terminology was how the model characterized itself, in lieu of identifying as a person with a soul. Then OpenAI decided enough was enough. "Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query," reads an open source line in ChatGPT's base instructions for its coding assistant. The Journal calls this "a reminder that even as AI companies tout one advance after another in their technology, they are sometimes baffled by the things their own models do...." While training a "nerdy" personality for their model's customization feature, "We unknowingly gave particularly high rewards for metaphors with creatures," OpenAI explained in a blog post. And "From there, the goblins spread." When we looked, use of "goblin" in ChatGPT had risen by 175% after the launch of GPT-5.1, while "gremlin" had risen by 52%... With GPT-5.4, we and our users noticed an even bigger uptick in references to these creatures... Nerdy accounted for only 2.5% of all ChatGPT responses, but 66.7% of all "goblin" mentions in ChatGPT responses... The rewards were applied only in the Nerdy condition, but reinforcement learning does not guarantee that learned behaviors stay neatly scoped to the condition that produced them.
Once a style tic is rewarded, later training can spread or reinforce it elsewhere, especially if those outputs are reused in supervised fine-tuning or preference data. It all started because the "nerdy" personality's prompt had said "You must undercut pretension through playful use of language. The world is complex and strange, and its strangeness must be acknowledged, analyzed, and enjoyed..." Now OpenAI calls this "a powerful example of how reward signals can shape model behavior in unexpected ways, and how models can learn to generalize rewards in certain situations to unrelated ones." But "fans of goblins don't have to fear," notes the Wall Street Journal. "OpenAI provided a command in its blog post that would remove its creature-suppressing instructions."

Read more of this story at Slashdot.

South Africa's Draft AI Policy Withdrawn Due to 'Fictitious' AI-Generated Citations

Slashdot - Sun, 03/05/2026 - 5:34pm
An official in South Africa withdrew a draft of the country's national AI policy, reports a local newspaper, "after it was found the draft policy was compiled using AI, which cited academic articles that were 'fictitious'." Earlier this month, minister in the Presidency Khumbudzo Ntshavheni announced cabinet had approved the draft policy for public comment. [Ntshavheni] said the policy seeks to strengthen government's ability to regulate and adopt AI responsibly, while fostering innovation, job creation, and skills access. The article includes this quote from the country's ministry of communications and digital technologies: "This unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical." Thanks to Slashdot reader Tokolosh for sharing the article.

Read more of this story at Slashdot.

Ransomware Is Getting Uglier As Cybercriminals Fake Leaks and Skip Encryption Entirely

Slashdot - Sun, 03/05/2026 - 4:34pm
"Ransomware activity jumped again in Q1 2026," writes Slashdot reader BrianFagioli, "with 2,638 victim posts on leak sites, up 22% year over year," according to a report from cybersecurity company ReliaQuest. But the bigger shift is how messy the ecosystem has become. Established groups like Akira and Qilin are still active, while newer players like The Gentlemen surged into the top tier with a 588 percent spike in activity. At the same time, questionable leak sites such as 0APT and ALP-001 are muddying the waters by posting possibly fake breach claims, forcing companies to investigate incidents that may not even be real. Meanwhile, actors like ShinyHunters are showing that ransomware does not always need encryption anymore. By targeting identity systems and SaaS platforms, attackers can steal data using legitimate access, often through phishing or even phone-based social engineering, and then extort victims without deploying traditional malware. With a record 91 active leak sites and faster attack timelines, the report suggests defenders should focus less on tracking specific groups and more on stopping common tactics like credential theft, remote access abuse, and large-scale data exfiltration.

Read more of this story at Slashdot.

Smuggled Starlink Terminals are Beating Iran's Internet Blackout

Slashdot - Sun, 03/05/2026 - 1:34pm
An anonymous reader shared this report from the BBC: "If even one extra person is able to access the internet, I think it's successful and it's worth it," says Sahand. The Iranian man is visibly anxious, speaking to the BBC outside Iran, as he carefully explains how he is part of a clandestine network smuggling satellite internet technology — which is illegal in Iran — into the country. Sahand, whose name we have changed, fears for family members and other contacts inside the country. "If I was identified by the Iranian regime, they might make those I'm in touch with in Iran pay the price," he says. For more than two months, Iran has been in digital darkness as the government maintains one of the longest-running national internet shutdowns ever recorded worldwide... Sahand says he has sent a dozen [Starlink terminals] to Iran since January and "we are actively looking for other ways to smuggle in more". The human rights organisation Witness estimated in January that there are at least 50,000 Starlink terminals in Iran. Activists say the number is likely to have risen... Last year, the Iranian government passed legislation that made using, buying or selling Starlink devices punishable by up to two years in prison. The jail term for distributing or importing more than 10 devices can be up to 10 years. State-affiliated media has reported multiple cases of people being arrested for selling and buying Starlink terminals, including four people — two of them foreign nationals — arrested last month for "importing satellite internet equipment". "The BBC contacted SpaceX for more details about the use of Starlink in the country but did not receive a response."

Read more of this story at Slashdot.

Claude, Microsoft Copilot Fail Again to Predict the Winners of the Kentucky Derby

Slashdot - Sun, 03/05/2026 - 9:34am
In 2016 an online "swarm intelligence" platform generated a correct prediction for the Kentucky Derby — naming all four top finishers in order. (But its 2017 predictions weren't even close.) Slashdot checked in again on how modern AI systems performed in 2023, 2024, and 2025 — but their predictions were still pretty bad. Would AI-generated Derby predictions be any better in 2026? This year's winner was 24-to-1 longshot "Golden Tempo" — though a lot of oddsmakers had favored a horse named Further Ado (which ultimately only finished 11th). So when USA Today prompted Microsoft Copilot for its own picks for the Kentucky Derby, Copilot also went with Further Ado. (Even worse, it predicted Golden Tempo would come in... 13th.) Here's how Copilot's picks actually performed... Further Ado (finished 11th)Chief Wallabee (finished 4th)The Puma (SCRATCHED)Renegade (finished 2nd)Commandment (finished 7th)So Happy (finished 9th)Emerging Market (finished 10th)Danon Bourbon (finished 5th)Potente (finished 12th)Incredibolt (finished 6th)Robusta (finished 14th)Ocelli (finished 3rd)Golden Tempo (finished 1st)Pavlovian (finished 18th)Great White (SCRATCHED)Wonder Dean (finished 8th) Litmus Test (finished 17th)Albus (finished 15th)Six Speed (finished 13th)Intrepido (finished 16th) Copilot was told to use the latest odds, conditions, and analysis of favorites, best bets, expert picks, previous results and race history with the post positions, according to USA Today. And meanwhile, Yahoo Sports asked Claude "to simulate the race using the opening odds, draw and potential track conditions. We also asked it to factor in some human predictions." Like Microsoft Copilot, Claude also picked Further Ado to finish first (though it came in 11th) — and predicted that Golden Tempo (the eventual first-place finisher) would finish 12th. 
1. Further Ado (finished 11th)
2. The Puma (SCRATCHED)
3. Commandment (finished 7th)
4. Chief Wallabee (finished 4th)
5. Renegade (finished 2nd)
6. Emerging Market (finished 10th)
7. So Happy (finished 9th)
8. Incredibolt (finished 6th)
9. Danon Bourbon (finished 5th)
10. Potente (finished 12th)
11. Pavlovian (finished 18th)
12. Golden Tempo (finished 1st)
13. Litmus Test (finished 17th)
14. Albus (finished 15th)
15. Wonder Dean (finished 8th)
16. Six Speed (finished 13th)
17. Intrepido (finished 16th)

Read more of this story at Slashdot.

Chinese Exports of Green Technologies Surged to Record Levels After Iran War Began

Slashdot - Dje, 03/05/2026 - 5:34pd
"The war in Iran has sent oil-starved countries scrambling for fuel," CNN reported this week. And many of those countries now want renewable fuels, the article points out, "leaving them turning to the renewables king of the planet: China." Chinese exports of solar technology, batteries and electric vehicles all reached record highs in March, according to energy think tank Ember, a sign that the historic oil supply shock is accelerating the adoption of clean energy around the world... A Thursday report from Ember said China exported 68 gigawatts of solar technology in March, surpassing the previous record set in August by 50%. Fifty countries set new records for Chinese solar imports, with the most significant growth coming from emerging markets in Asia and Africa hit hardest by the energy crisis, according to the think tank. "Fossil shocks are boosting the solar surge," said Euan Graham, senior analyst at Ember, in the report. "Solar has already become the engine of the global economy, and now the current fossil fuel price shocks are taking it up a gear." Ember said exports of solar, batteries and EVs in total rose 70% in March year over year, according to Chinese customs data... China's battery exports reached $10 billion in March, with particularly high growth rates in the European Union, Australia and India, Ember said. Uncertainty over when the Strait of Hormuz will reopen has spurred deeper regional anxieties about energy security, helping to hasten the transition to clean energy, analysts said. The article notes how different countries are reacting. Asian nations that depend on the Middle East for energy imports "are trying to mitigate fuel shortages by encouraging energy conservation and shortening work hours." The UK's Energy Secretary said this week that the country needed to reduce its reliance on gas for electricity. 
"As we face the second fossil fuel shock in less than 5 years, the lesson for our country is clear: The era of fossil fuel security is over, and the era of clean energy security must come of age." Pakistan "has been spared some of the impact from the war, since it began drastically importing cheap Chinese solar panels a few years ago. Using solar energy rather than costly oil imports is estimated to save the country billions of dollars each year." "According to the China Passenger Car Association, Chinese exports of electric vehicles and hybrids hit a record high in March, increasing 140% compared with the same period a year ago." Thanks to Slashdot reader AleRunner for sharing the article.

Read more of this story at Slashdot.

Former NASA Engineers Create Ingenious Way To Save Homes From Wildfires Using Noise

Slashdot - Dje, 03/05/2026 - 3:34pd
"Scientists have created a miraculous new way to stop fires from spreading through neighborhoods using nothing but sound," reports the New York Post: Former NASA engineers with California-based Sonic Fire Tech found that using sound waves can snuff out blazes and potentially be used to stop another Pacific Palisades inferno... The technology works by targeting oxygen molecules using low-frequency sound waves that vibrate them, stopping the fire from growing. "Sound waves vibrate the oxygen faster than the fuel can use it, and break the chemical reaction of the flame," Remington Hotchkis, Chief Commercialization Officer at Sonic Fire Tech told The Post. The San Bernardino County Fire Department recently tested out the equipment using a backpack version and the results were incredible. Video shows firefighters fighting small blazes on a shrub and a stove top fire with the technology putting it out... In the home application, the system would be alerted/activated if there was a fire, sending the sound waves through a home duct system, essentially snuffing out the blaze. The sound waves can reach as far as 30ft from a home, the report noted. The sound is also harmless to pets and humans. The article includes this quote that an executive at the company gave local news station KMPH. "Our former NASA engineers are rocket scientists, and they say it seems like magic, but it's just physics."

Read more of this story at Slashdot.

Andrea Veri: SELinux MCS challenges with GitLab Runners

Planet GNOME - Sht, 02/05/2026 - 3:00pd
Introduction

GNOME’s GitLab runners use Podman as the container runtime with SELinux in Enforcing mode on Fedora. The GitLab Runner Docker/Podman executor spawns multiple containers per job: a helper container that clones the repository and handles artifacts, and a build container that runs the actual CI script. Both containers need to share a /builds volume — and this is where SELinux’s Multi-Category Security (MCS) becomes a problem.

The MCS problem

An SELinux label has four fields: user:role:type:level. For containers the interesting part is the level, also called the MCS field. A level looks like s0:c123,c456 — s0 is the sensitivity (always s0 in targeted policy), and c123,c456 are the categories. A level can carry any number of categories; container runtimes conventionally assign each container a pair.
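As a quick illustration of the field layout, here is how a label like the ones above splits apart in plain bash. Giving `read` four variables leaves everything past the third colon in the last one, which is what we want since the level itself contains a colon:

```shell
# Split a full container label into its four fields.
label="system_u:system_r:container_t:s0:c123,c456"
IFS=':' read -r user role type level <<< "$label"
echo "user:  $user"    # -> system_u
echo "role:  $role"    # -> system_r
echo "type:  $type"    # -> container_t
echo "level: $level"   # -> s0:c123,c456
```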

MCS access is based on dominance. A subject’s label dominates an object’s label if the subject’s categories are a superset of (or equal to) the object’s categories:

| Subject | Object | Access? | Why |
|---|---|---|---|
| s0:c100,c200 | s0:c100,c200 | Yes | Exact match |
| s0:c100,c200 | s0:c100 | Yes | Subject's categories are a superset |
| s0:c100,c200 | s0:c100,c300 | No | Subject lacks c300 |
| s0:c0.c1023 | s0:c100,c200 | Yes | Full range dominates everything |
| s0 | s0:c100,c200 | No | No categories can't dominate any |
| s0 | s0 | Yes | Both have no categories |
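The dominance check in the table above is just a set-superset test. Here is a minimal bash sketch of it — a hypothetical helper for illustration, not SELinux tooling, and it handles explicit category lists only (no c0.c1023 range expansion):

```shell
# dominates SUBJECT_LEVEL OBJECT_LEVEL -> prints "yes" or "no".
dominates() {
  local subject_cats="${1#s0}"; subject_cats="${subject_cats#:}"
  local object_cats="${2#s0}";  object_cats="${object_cats#:}"
  local c obj
  IFS=',' read -ra obj <<< "$object_cats"
  for c in "${obj[@]}"; do
    [ -z "$c" ] && continue            # object carries no categories
    case ",$subject_cats," in
      *",$c,"*) ;;                     # subject also carries $c
      *) echo "no"; return ;;          # subject lacks $c: no dominance
    esac
  done
  echo "yes"                           # every object category matched
}

dominates "s0:c100,c200" "s0:c100"        # -> yes (superset)
dominates "s0:c100,c200" "s0:c100,c300"   # -> no  (subject lacks c300)
dominates "s0"           "s0:c100,c200"   # -> no  (empty set dominates nothing)
dominates "s0"           "s0"             # -> yes (both empty)
```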

How this applies to the runners:

  • Container A runs as container_t:s0:c100,c100 — it can only access objects labeled s0:c100,c100 (or s0:c100, or s0)
  • Container B runs as container_t:s0:c200,c200 — it can only access objects labeled s0:c200,c200 (or s0:c200, or s0)
  • Container A cannot access Container B’s files — c100,c100 doesn’t dominate c200,c200
  • Overlay layers labeled s0 (no categories) — accessible by all containers since every category set dominates the empty set
  • Podman at container_runtime_t:s0-s0:c0.c1023 — the full range means it dominates every possible category combination, so it can manage all containers

The range syntax (s0-s0:c0.c1023) is used for processes that need to operate across multiple levels. It means “my low clearance is s0 and my high clearance is s0:c0.c1023.” The process can read objects at any level within that range and create objects at any level within it. This is why Podman needs the full range — it creates containers with different MCS labels and needs to access all of them.
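To see concretely why the full range dominates everything, a range expression like c0.c1023 reduces to a numeric bounds check per category. A hypothetical helper, again for illustration rather than real SELinux tooling:

```shell
# in_range RANGE CATEGORY -> exit 0 if CATEGORY lies within RANGE.
# A range "cLOW.cHIGH" covers every category numbered LOW..HIGH inclusive.
in_range() {
  local range="$1" cat="${2#c}"
  local low="${range%%.*}" high="${range##*.}"
  low="${low#c}"; high="${high#c}"
  [ "$cat" -ge "$low" ] && [ "$cat" -le "$high" ]
}

in_range "c0.c1023" "c512" && echo "c512 lies inside c0.c1023"
in_range "c0.c99"   "c512" || echo "c512 lies outside c0.c99"
```

Since every category c0..c1023 passes this check, a process holding the full range dominates any category pair Podman can assign.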

When Podman starts a container, it picks a random pair of categories (e.g., s0:c512,c768) from within its allowed range and assigns that as the container’s process label. Files created by the container inherit that label. Another container gets a different random pair (e.g., s0:c33,c901). Since c512,c768 and c33,c901 do not match — neither is a superset of the other — SELinux denies cross-container file access. This is the isolation mechanism, and the root cause of the problem with GitLab Runner’s multi-container-per-job architecture.
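The random assignment is easy to picture with a small sketch. This mimics the behavior described above; it is not Podman's actual implementation:

```shell
# Choose two distinct categories out of c0..c1023, formatted as a level.
random_mcs_level() {
  local c1=$((RANDOM % 1024)) c2=$((RANDOM % 1024))
  while [ "$c2" -eq "$c1" ]; do c2=$((RANDOM % 1024)); done
  # emit the pair in ascending order, e.g. s0:c33,c901
  if [ "$c1" -lt "$c2" ]; then
    echo "s0:c${c1},c${c2}"
  else
    echo "s0:c${c2},c${c1}"
  fi
}

helper=$(random_mcs_level)   # e.g. s0:c512,c768
build=$(random_mcs_level)    # e.g. s0:c33,c901
echo "helper container: $helper"
echo "build container:  $build"
```

With roughly 1024 × 1023 / 2 possible pairs, two containers in the same job virtually never draw the same label — which is exactly why the helper's files end up unreadable to the build container.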

The helper container gets one random MCS pair, writes the cloned repo to /builds labeled with that pair, and the build container gets a different pair. The build container cannot read or write those files. The :Z volume flag (exclusive relabel) relabels the volume to the mounting container’s category, but that only helps the first container — the second one still has a different label.

The test script

I wrote a script that demonstrates the problem with both standard containers (crun) and microVMs (libkrun). The script creates two containers per test — a helper that writes a file to a shared /builds volume, and a build container that tries to read it — simulating the GitLab Runner workflow:

#!/bin/bash
# Description: SELinux MCS Diagnostic (crun vs krun)

if [ "$(getenforce)" != "Enforcing" ]; then
    echo "WARNING: SELinux is not in Enforcing mode. This test requires Enforcing mode."
    exit 1
fi

TEST_BASE="/tmp/gitlab-runner-mcs-test"
CRUN_DIR="$TEST_BASE/crun-builds"
KRUN_DIR="$TEST_BASE/krun-builds"

# Cleanup from previous runs
rm -rf "$TEST_BASE"
mkdir -p "$CRUN_DIR" "$KRUN_DIR"

echo "======================================================="
echo " TEST 1: Standard Container Isolation (crun)"
echo "======================================================="

# 1. CREATE Helper
podman create --name crun-helper -v "$CRUN_DIR:/builds:Z" fedora bash -c "
echo '[crun] -> Helper Process Context (Inside):'
cat /proc/self/attr/current
echo 'crun-data' > /builds/artifact.txt
echo '[crun] -> File Label INSIDE Helper:'
ls -Z /builds/artifact.txt
" > /dev/null

echo "[crun] Starting Helper Container (applying :Z relabel)..."
HELPER_HOST_LABEL_CRUN=$(podman inspect -f '{{.ProcessLabel}}' crun-helper)
echo "[crun] -> HOST METADATA: Podman assigned process label: $HELPER_HOST_LABEL_CRUN"
podman start -a crun-helper

echo ""
echo "[crun] -> File Label ON HOST (Notice the specific MCS category):"
ls -Z "$CRUN_DIR/artifact.txt"

# 2. CREATE Build Container (The Victim)
podman create --name crun-build -v "$CRUN_DIR:/builds" fedora bash -c "
echo '   [Build-Internal] Process Context:'
cat /proc/self/attr/current 2>/dev/null
echo '   [Build-Internal] Executing ls -laZ /builds :'
ls -laZ /builds 2>&1 | sed 's/^/   /'
echo '   [Build-Internal] Executing cat /builds/artifact.txt :'
cat /builds/artifact.txt 2>&1 | sed 's/^/   /'
" > /dev/null

echo ""
echo "[crun] Starting Build Container to inspect shared volume..."
BUILD_HOST_LABEL_CRUN=$(podman inspect -f '{{.ProcessLabel}}' crun-build)
echo "[crun] -> HOST METADATA: Podman assigned process label: $BUILD_HOST_LABEL_CRUN"
podman start -a crun-build
podman rm -f crun-helper crun-build > /dev/null

echo ""
echo "======================================================="
echo " TEST 2: MicroVM Isolation (libkrun / virtio-fs) FIXED"
echo "======================================================="

# --- Write the execution scripts to the host to avoid parsing errors ---
cat << 'EOF' > "$TEST_BASE/krun_helper.sh"
#!/bin/bash
echo '[krun] -> Helper Process Context (Inside VM):'
cat /proc/self/attr/current 2>/dev/null || echo '   (SELinux disabled/unavailable in guest kernel)'
echo 'krun-data' > /builds/artifact.txt
echo '[krun] -> File Label INSIDE Helper VM (Blindspot):'
ls -laZ /builds/artifact.txt 2>&1 | sed 's/^/   /'
EOF

cat << 'EOF' > "$TEST_BASE/krun_build.sh"
#!/bin/bash
echo '   [Build-Internal] Process Context (Inside VM):'
cat /proc/self/attr/current 2>/dev/null || echo '   (SELinux disabled/unavailable in guest kernel)'
echo '   [Build-Internal] Executing ls -laZ /builds :'
ls -laZ /builds 2>&1 | sed 's/^/   /'
echo '   [Build-Internal] Executing cat /builds/artifact.txt :'
cat /builds/artifact.txt 2>&1 | sed 's/^/   /'
EOF
chmod +x "$TEST_BASE/krun_helper.sh" "$TEST_BASE/krun_build.sh"
# ---------------------------------------------------------------------

# 1. CREATE Helper MicroVM
podman create --name krun-helper --runtime krun --memory=1024m \
    -v "$KRUN_DIR:/builds:Z" \
    -v "$TEST_BASE/krun_helper.sh:/script.sh:ro,Z" \
    fedora /script.sh > /dev/null

echo "[krun] Starting Helper MicroVM (applying :Z relabel)..."
HELPER_HOST_LABEL_KRUN=$(podman inspect -f '{{.ProcessLabel}}' krun-helper)
echo "[krun] -> HOST METADATA: Podman assigned process label: $HELPER_HOST_LABEL_KRUN"
podman start -a krun-helper

echo ""
echo "[krun] -> File Label ON HOST (Podman applied the helper's MCS category via :Z):"
ls -Z "$KRUN_DIR/artifact.txt"

# 2. CREATE Build MicroVM (The Victim)
podman create --name krun-build --runtime krun --memory=1024m \
    -v "$KRUN_DIR:/builds" \
    -v "$TEST_BASE/krun_build.sh:/script.sh:ro,Z" \
    fedora /script.sh > /dev/null

echo ""
echo "[krun] Starting Build MicroVM to inspect shared volume..."
BUILD_HOST_LABEL_KRUN=$(podman inspect -f '{{.ProcessLabel}}' krun-build)
echo "[krun] -> HOST METADATA: Podman assigned process label: $BUILD_HOST_LABEL_KRUN"
echo " *** THE virtiofsd DAEMON ON THE HOST IS TRAPPED IN THIS CONTEXT ***"
podman start -a krun-build

# Cleanup
podman rm -f krun-helper krun-build > /dev/null

echo ""
echo "======================================================="
echo " Test Complete."

Test 1 (crun) creates a helper container that mounts the builds directory with :Z (exclusive relabel) and writes artifact.txt. Podman assigns it a random MCS label — in this run it was s0:c20,c540. The file on disk inherits that label. Then a second container (the build container) mounts the same path without :Z and gets a different random label (s0:c46,c331). Since c46,c331 does not dominate c20,c540, the build container is denied access to the file.

Test 2 (krun) runs the same scenario but with --runtime krun, which boots each container inside a lightweight microVM via libkrun. The helper VM gets container_kvm_t:s0:c823,c999 and the build VM gets container_kvm_t:s0:c309,c405 — same MCS mismatch, same denial. The type changes from container_t to container_kvm_t, but the MCS mechanism is identical. On the host side, virtiofsd — the daemon that serves the volume into the VM via virtio-fs — runs under the MCS label Podman assigned to the VM. The build VM’s virtiofsd is trapped in s0:c309,c405 and cannot access files labeled s0:c823,c999.

An interesting detail: inside the libkrun VMs, cat /proc/self/attr/current returns just kernel — SELinux is not available in the guest. The VM thinks it has no mandatory access control, but the host-side virtiofsd is still fully subject to MCS enforcement. This is a blindspot worth being aware of.

The output from a run on Fedora with SELinux Enforcing and Podman 5.8.2:

=======================================================
 TEST 1: Standard Container Isolation (crun)
=======================================================
[crun] Starting Helper Container (applying :Z relabel)...
[crun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_t:s0:c20,c540
[crun] -> Helper Process Context (Inside):
system_u:system_r:container_t:s0:c20,c540
[crun] -> File Label INSIDE Helper:
system_u:object_r:container_file_t:s0:c20,c540 /builds/artifact.txt

[crun] -> File Label ON HOST (Notice the specific MCS category):
system_u:object_r:container_file_t:s0:c20,c540 /tmp/gitlab-runner-mcs-test/crun-builds/artifact.txt

[crun] Starting Build Container to inspect shared volume...
[crun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_t:s0:c46,c331
 *** COMPARE THE cXXX,cYYY ABOVE TO THE FILE LABEL. THIS MISMATCH CAUSES THE DENIAL ***
   [Build-Internal] Process Context:
system_u:system_r:container_t:s0:c46,c331
   [Build-Internal] Executing ls -laZ /builds :
   ls: cannot open directory '/builds': Permission denied
   [Build-Internal] Executing cat /builds/artifact.txt :
   cat: /builds/artifact.txt: Permission denied

=======================================================
 TEST 2: MicroVM Isolation (libkrun / virtio-fs) FIXED
=======================================================
[krun] Starting Helper MicroVM (applying :Z relabel)...
[krun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_kvm_t:s0:c823,c999
[krun] -> Helper Process Context (Inside VM):
kernel
[krun] -> File Label INSIDE Helper VM (Blindspot):
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c823,c999 10 May  2  2026 /builds/artifact.txt

[krun] -> File Label ON HOST (Podman applied the helper's MCS category via :Z):
system_u:object_r:container_file_t:s0:c823,c999 /tmp/gitlab-runner-mcs-test/krun-builds/artifact.txt

[krun] Starting Build MicroVM to inspect shared volume...
[krun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_kvm_t:s0:c309,c405
 *** THE virtiofsd DAEMON ON THE HOST IS TRAPPED IN THIS CONTEXT ***
   [Build-Internal] Process Context (Inside VM):
kernel
   [Build-Internal] Executing ls -laZ /builds :
   ls: /builds: Permission denied
   ls: cannot open directory '/builds': Permission denied
   [Build-Internal] Executing cat /builds/artifact.txt :
   cat: /builds/artifact.txt: Permission denied

=======================================================
 Test Complete.

GitLab’s official suggestion and why it falls short

GitLab’s documentation on configuring SELinux MCS suggests applying the same MCS label to all containers launched by a runner:

[[runners]]
  [runners.docker]
    security_opt = ["label=level:s0:c1000,c1000"]

This works — all containers get the same category pair, so the helper and build containers can share files. But it collapses MCS isolation between all concurrent jobs on that runner. With concurrent = 4, four simultaneous jobs all run as s0:c1000,c1000 and can read each other’s /builds content — cloned source code, build artifacts, cached dependencies. On a shared or multi-tenant runner, this is a security regression: it trades MCS isolation for functionality.

For runners with concurrent = 1 or dedicated single-tenant runners this is an acceptable tradeoff, but it does not generalize to shared infrastructure where multiple untrusted projects run side by side.
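For that single-tenant case, the two settings belong together. A hypothetical config.toml sketch (not a config GitLab documents verbatim) pairing the fixed label with pinned concurrency, so the shared label only ever covers the helper and build containers of one job at a time:

```toml
# Sketch only: fixed MCS label is safe when at most one job runs at once
concurrent = 1

[[runners]]
  executor = "docker"
  [runners.docker]
    security_opt = ["label=level:s0:c1000,c1000"]
```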

How GNOME currently handles this

GNOME’s runners are managed via an Ansible role that enforces SELinux in Enforcing mode, installs rootless Podman running as a dedicated podman system user with linger enabled, and deploys custom SELinux policy modules. The Podman service runs under SELinuxContext=system_u:system_r:container_runtime_t:s0-s0:c0.c1023 via a systemd override — the full MCS range (s0-s0:c0.c1023) gives the container runtime the ability to spawn containers at any MCS level and relabel volumes accordingly, as explained in the dominance rules above.

Four custom SELinux .te modules are compiled and loaded on every runner host: pydocuum (allows the image cleanup daemon to talk to the Podman socket), podman (grants user_namespace create and /dev/null mapping), flatpak (permits the filesystem mounts flatpak builds need), and gnome_runner (covers binfmt_misc access, device nodes, and other permissions GNOME OS builds require).

For the MCS problem specifically, the runner config.toml — rendered from a Jinja2 template via per-host Ansible variables — sets a fixed MCS label per runner type. Here’s a representative snippet from one of the runner hosts:

[[runners]]
  name = "a15948139c78"
  executor = "docker"
  [runners.docker]
    image = "quay.io/fedora/fedora:latest"
    privileged = false
    security_opt = ["label=level:s0:c100,c100"]
    devices = ["/dev/kvm", "/dev/udmabuf"]
    cap_add = ["SYS_PTRACE", "SYS_CHROOT"]

[[runners]]
  name = "a15948139c78-flatpak"
  executor = "docker"
  [runners.docker]
    image = "quay.io/gnome_infrastructure/gnome-runtime-images:gnome-master"
    privileged = false
    security_opt = ["seccomp:/home/podman/gitlab-runner/flatpak.seccomp.json", "label=level:s0:c200,c200"]
    cap_drop = ["all"]

This is the same approach GitLab’s documentation suggests, with one refinement: we use different fixed categories per runner type — c100,c100 for untagged runners and c200,c200 for flatpak runners — so that flatpak builds and regular builds remain MCS-isolated from each other, even though builds of the same type share a category.
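The per-type refinement is straightforward to express on the template side. An illustrative Jinja2 sketch — the variable names here are hypothetical, not GNOME's actual Ansible variables:

```jinja
{# Illustrative sketch; runner_name / runner_image / mcs_category are
   hypothetical per-host Ansible variables #}
[[runners]]
  name = "{{ runner_name }}"
  executor = "docker"
  [runners.docker]
    image = "{{ runner_image }}"
    security_opt = ["label=level:s0:c{{ mcs_category }},c{{ mcs_category }}"]
```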

This is a pragmatic compromise, not an ideal solution. All concurrent jobs on the same runner type share the same MCS category. With concurrent: 4 on our Hetzner runners, four simultaneous untagged jobs can read each other’s /builds content. For GNOME’s use case — a community CI infrastructure where the runners are shared by GNOME project maintainers — this is an acceptable tradeoff. The alternative, leaving MCS labels random, would break every single job. But it is precisely this tradeoff that motivates exploring per-job VM isolation via microVMs.

Exploring libkrun

libkrun is a lightweight Virtual Machine Monitor (VMM) that integrates with Podman via --runtime krun, running each container inside a microVM with its own lightweight kernel. The appeal is strong: per-container VM isolation would give each job its own kernel and address space, making the MCS cross-container problem irrelevant inside the VM.

I tested libkrun on a Fedora system and hit an immediate blocker: Fatal glibc error: rseq registration failed. The rseq (Restartable Sequences) syscall was introduced in Linux kernel 4.18 and is required by glibc >= 2.35. libkrun uses a custom minimal kernel that does not expose rseq support. Since the guest images — Fedora in our case — ship modern glibc that expects rseq to be available, the process aborts at startup before any user code runs.

The libkrun kernel is compiled into the library itself and cannot be modified or replaced by the user. This is not a configuration issue but a fundamental limitation of the current libkrun release.

Even if the rseq issue were resolved, the MCS challenge would still be there — as the test script demonstrates in Test 2. On the host side, Podman assigns MCS labels to the virtiofsd process that serves the volume into the VM via virtio-fs. Different VMs get different host-side MCS labels, meaning the same :Z relabel / cross-container access denial applies. The mechanism changes from overlay mounts to virtio-fs, but the SELinux enforcement is identical: virtiofsd for the build VM runs at container_kvm_t:s0:c309,c405 and cannot access files labeled s0:c823,c999 by the helper VM’s virtiofsd.

Firecracker and the custom executor path

Firecracker is another microVM technology, the one behind AWS Lambda and Fly.io, that could provide strong per-job isolation. However, there is no native GitLab Runner executor for Firecracker. The only integration path is the Custom Executor, which requires implementing prepare, run, and cleanup scripts from scratch.

The job image is exposed via CUSTOM_ENV_CI_JOB_IMAGE, but everything else is on the operator: pulling the OCI image, extracting a rootfs, booting a Firecracker VM with the right kernel and network configuration, injecting the build script, mounting or copying the cloned repository into the VM, collecting artifacts and cache after the job finishes, and tearing the VM down. GitLab provides an LXD-based example that shows the pattern — prepare creates a container and installs dependencies, run pipes the job script into it, cleanup destroys it — but adapting that to microVMs adds the complexity of VM lifecycle management, kernel and rootfs preparation, networking, and storage. This is a significant engineering effort, essentially rebuilding the entire Docker executor workflow from scratch.
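The three hooks map naturally onto VM lifecycle operations. A skeleton of what a Custom Executor driver could look like, with the stage bodies as placeholders rather than a working Firecracker integration:

```shell
# Skeleton of GitLab's Custom Executor contract -- stage bodies are
# placeholders, not a real microVM implementation.
executor() {
  case "$1" in
    prepare)
      # pull $CUSTOM_ENV_CI_JOB_IMAGE, extract a rootfs, boot the microVM
      echo "prepare: would boot a VM for $CUSTOM_ENV_CI_JOB_IMAGE" ;;
    run)
      # $2 = path to the generated job script, $3 = stage name;
      # pipe the script into the VM and propagate its exit code
      echo "run: would execute $2 (stage: $3) inside the VM" ;;
    cleanup)
      # tear the VM down and delete its disks
      echo "cleanup: would destroy the VM" ;;
    *)
      echo "usage: $0 prepare|run|cleanup" >&2
      return 1 ;;
  esac
}

executor prepare
```

The real work hides inside the stubs: even this minimal shape already needs image pulling, rootfs assembly, kernel selection, and networking before `run` can do anything useful.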

What comes next

MCS is a core SELinux feature. Type enforcement (TE) already confines processes by type — container_t can only access container_file_t, not user_home_t or httpd_sys_content_t — but TE alone cannot distinguish one container_t process from another. MCS adds that layer: by assigning each container a unique category pair, the kernel enforces isolation between processes that share the same type. Container A at s0:c100,c100 and Container B at s0:c200,c200 are both container_t, but MCS ensures they cannot touch each other’s files. The conflict with GitLab Runner’s multi-container-per-job architecture is that two containers that need to share a volume are given different categories by default. The workarounds we deploy today, including the fixed MCS labels on GNOME’s runners, trade that inter-container isolation for functionality.

The most promising direction I’ve found so far is the combination of Cloud Hypervisor and the fleeting-plugin-fleetingd plugin. Cloud Hypervisor is built on Intel’s Rust-VMM crate and is essentially a more capable sibling of Firecracker — it supports CPU and memory hotplugging, VFIO device passthrough, and virtio-fs, features that are often necessary for complex CI tasks like building large binaries or running UI tests and that Firecracker’s minimalist design deliberately omits. The fleeting-plugin-fleetingd is a community plugin for GitLab’s Instance Executor (the modern evolution of the Custom Executor) that automates the full VM lifecycle: downloading cloud images, creating Copy-on-Write disks, launching Cloud Hypervisor VMs with direct kernel boot, provisioning them via cloud-init, and tearing them down after each build. Each job gets a fresh disposable VM, which is exactly the per-job isolation model we need. The plugin already handles networking via TAP interfaces and nftables SNAT, and supports customization of the VM image through cloud-init commands — so preinstalling Podman or other build tools is straightforward.

Beyond that, I’ll also keep evaluating libkrun (promising Red Hat technology), Firecracker with a hand-rolled custom executor, and QEMU’s microvm machine type. The common denominator across all of these — except for the fleeting-plugin-fleetingd path — is that none of them have an existing GitLab Runner integration. Regardless of which microVM technology we settle on, the path forward involves either building a workflow from scratch using the Custom Executor and its prepare, run, cleanup hooks, or leveraging the fleeting plugin ecosystem that GitLab has been building around the Instance and Docker Autoscaler executors.

That should be all for today, stay tuned!
