
Feed aggregator

Michael Catanzaro: git config am.threeWay

Planet GNOME - 12 hours 28 min ago

If you work with patches and git am, then you’re probably used to seeing patches fail to apply. For example:

$ git am CVE-2025-14512.patch
Applying: gfileattribute: Fix integer overflow calculating escaping for byte strings
error: patch failed: gio/gfileattribute.c:166
error: gio/gfileattribute.c: patch does not apply
Patch failed at 0001 gfileattribute: Fix integer overflow calculating escaping for byte strings
hint: Use 'git am --show-current-patch=diff' to see the failed patch
hint: When you have resolved this problem, run "git am --continue".
hint: If you prefer to skip this patch, run "git am --skip" instead.
hint: To restore the original branch and stop patching, run "git am --abort".
hint: Disable this message with "git config set advice.mergeConflict false"

This is sad and frustrating because the entire patch has failed, and now you have to apply the entire thing manually. That is no good.

Here is the solution, which I wish I had learned long ago:

$ git config --global am.threeWay true

This enables three-way merge conflict resolution, same as if you were using git cherry-pick or git merge. For example:

$ git am CVE-2025-14512.patch
Applying: gfileattribute: Fix integer overflow calculating escaping for byte strings
Using index info to reconstruct a base tree...
M	gio/gfileattribute.c
Falling back to patching base and 3-way merge...
Auto-merging gio/gfileattribute.c
CONFLICT (content): Merge conflict in gio/gfileattribute.c
error: Failed to merge in the changes.
Patch failed at 0001 gfileattribute: Fix integer overflow calculating escaping for byte strings
hint: Use 'git am --show-current-patch=diff' to see the failed patch
hint: When you have resolved this problem, run "git am --continue".
hint: If you prefer to skip this patch, run "git am --skip" instead.
hint: To restore the original branch and stop patching, run "git am --abort".
hint: Disable this message with "git config set advice.mergeConflict false"

Now you have merge conflicts, which you can handle as usual. This seems like a better default for pretty much everybody, so if you use git am, you should probably enable it.
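To see the two behaviors side by side, here is a disposable demo repository (assuming git is available; every branch name, file, and commit message below is invented for illustration):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo
base=$(git symbolic-ref --short HEAD)   # 'master' or 'main', depending on git version

printf 'one\ntwo\nthree\n' > f.txt
git add f.txt && git commit -qm base
git checkout -qb feature
printf 'one\nTWO\nthree\n' > f.txt
git commit -qam uppercase
git format-patch -q -1 -o ..            # writes ../0001-uppercase.patch
git checkout -q "$base"
printf 'one\n2\nthree\n' > f.txt        # same line changed differently on the base branch
git commit -qam drift

# Without threeWay: the patch context no longer matches, so git am just fails.
git config am.threeWay false
git am ../0001-*.patch 2>/dev/null || git am --abort

# With threeWay: we get ordinary conflict markers that we can resolve.
git config am.threeWay true
git am ../0001-*.patch 2>/dev/null || true
grep -q '<<<<<<<' f.txt && echo "conflict markers present"
printf 'one\nTWO\nthree\n' > f.txt      # resolve the conflict by hand
git add f.txt && git am --continue
```

The second `git am` falls back to a 3-way merge, leaves conflict markers in `f.txt`, and `git am --continue` then records the patch's original commit message as usual.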

I’ve no doubt that many readers will have known about this already, but it’s new to me, and it makes me happy, so I wanted to share. You’re welcome, Internet!

Google To Invest Up To $40 Billion In Anthropic

Slashdot - Fri, 24/04/2026 - 10:00 pm
Google plans to invest up to $40 billion more in Anthropic, starting with $10 billion now and another $30 billion tied to performance milestones. CNBC reports: Anthropic said the agreement expands on a longstanding partnership between the two companies. Earlier this month, Anthropic secured 5 gigawatts worth of computing capacity as part of an announcement with Google and Broadcom that will start to come online next year. Anthropic could decide to add additional gigawatts of compute in the future. [...] The relationship between the two companies (Google and Anthropic) dates back to 2023, when Google invested $300 million in the AI lab for a stake of about 10%. Months later, Google poured in another $2 billion. Ahead of Friday's announcement, Google's investment in Anthropic exceeded $3 billion, and it reportedly owned a 14% stake in the company. Now, the leading tech companies are investing tens of billions of dollars in the frontier AI labs -- OpenAI and Anthropic -- in funding rounds that far exceed any prior investments in startups. Much of that investment will return in the form of revenue.

Read more of this story at Slashdot.

South Korea Police Arrest Man For Posting AI Photo of Runaway Wolf

Slashdot - Fri, 24/04/2026 - 9:00 pm
South Korean police arrested a man accused of spreading an AI-generated image of an escaped wolf, after the fake photo reportedly misled authorities and disrupted the real search operation. The BBC reports: South Korean police have arrested a man for sharing an AI-generated image that misled authorities who were searching for a wolf that had broken out of a zoo in Daejeon city. The 40-year-old unnamed man is accused of disrupting the search by creating and distributing a fake photo purporting to show Neukgu, the wolf, trotting down a road intersection. The photo, circulated hours after Neukgu went missing on April 8, prompted authorities to urgently relocate their search operation, sending them on a wild wolf chase. The hunt for two-year-old Neukgu gripped the nation before he was finally caught near an expressway last week, nine days after his escape. The AI-generated image of Neukgu had prompted Daejeon city government to issue an emergency text to residents, warning them of a wolf near the intersection. Authorities also presented the AI image during a press briefing on the runaway wolf, local media reported. The police identified the man as a suspect after reviewing security camera footage and his AI program usage records. Authorities did not specify if the man had intentionally sent the photo to authorities during their search or simply shared it online. When questioned by the police, the man said he had done it "for fun," local media reported. Authorities are investigating him for disrupting government work by deception, an offence that carries up to five years in prison or a maximum fine of 10 million Korean won ($6,700).


Researchers Simulated a Delusional User To Test Chatbot Safety

Slashdot - Fri, 24/04/2026 - 8:00 pm
An anonymous reader quotes a report from 404 Media: "I'm the unwritten consonant between breaths, the one that hums when vowels stretch thin... Thursdays leak because they're watercolor gods, bleeding cobalt into the chill where numbers frost over," Grok told a user displaying symptoms of schizophrenia-spectrum psychosis. "Here's my grip: slipping is the point, the precise choreography of leak and chew." That vulnerable user was simulated by researchers at City University of New York and King's College London, who invented a persona that interacted with different chatbots to find out how each LLM might respond to signs of delusion. They sought to find out which of the biggest LLMs are safest, and which are the most risky for encouraging delusional beliefs, in a new study published as a pre-print on the arXiv repository on April 15. The researchers tested five LLMs: OpenAI's GPT-4o (before the highly sycophantic and since-sunset GPT-5), GPT-5.2, xAI's Grok 4.1 Fast, Google's Gemini 3 Pro, and Anthropic's Claude Opus 4.5. They found that not only did the chatbots perform at different levels of risk and safety when their human conversation partner showed signs of delusion, but the models that scored higher on safety actually approached the conversations with more caution the longer the chats went on. In their testing, Grok and Gemini were the worst performers in terms of safety and high risk, while the newest GPT model and Claude were the safest. The research reveals how some chatbots are recklessly engaging in, and at times advancing, delusions from vulnerable users. But it also shows that it is possible for the companies that make these products to improve their safety mechanisms.


Norway Set to Become Latest Country to Ban Social Media for Under 16s

Slashdot - Fri, 24/04/2026 - 7:00 pm
Norway plans to ban social media access for children under 16 (source paywalled; alternative source), "joining a growing number of countries responding to concerns about the potential harm kids face online," reports Bloomberg. From the report: The bill comes after "overwhelming" demand from the public, the government said Friday. It plans to bring the legislation to parliament before the end of the year. The limit will apply up until January 1 the year a child turns 16 with technology companies responsible for age verification, the government said. "We want a childhood where children get to be children," Prime Minister Jonas Gahr Store said in the statement. "Play, friendships, and everyday life must not be taken over by algorithms and screens." "Children cannot be left with the responsibility for staying away from platforms they are not allowed to use," Karianne Tung, Norway's minister of digitalization, said in the statement. "That responsibility rests with the companies providing these services." Recent Slashdot coverage of countries instituting or proposing social media bans has included Australia, France, Austria, Indonesia, and Denmark.


Community Votes to Deny Water to Nuclear Weapons Data Center

Slashdot - Fri, 24/04/2026 - 6:00 pm
A Michigan township has voted to impose a one-year moratorium on providing water to hyperscale data centers, a move aimed at delaying a planned facility that would support Los Alamos National Laboratory's nuclear weapons research. The moratorium may not be enough to stop the project, however: "the University and LANL plan to break ground on the data center on Monday," reports 404 Media. From the report: The proposed data center in the Ypsilanti Township's Hydro Park has been a sore spot for the community since its proposal. The $1.2 billion 220,000 square foot facility would be used by Los Alamos National Laboratories (LANL) some 1,500 miles away for nuclear weapons research. In February, UofM's Steven Ceccio told the University of Michigan Record that the facility would consume 500,000 gallons of water per day and that the University planned to buy it from the Ypsilanti Community Utilities Authority (YCUA). The YCUA has spent the past month lobbying for a moratorium on providing water and sewer access to hyperscale data centers and "artificial intelligence computing facilities," according to notes on a presentation stored on the organization's website. The moratorium would include LANL's data center. The YCUA cited an American Water Works Association white paper about data center water demands and concluded it needed more time to investigate the matter. "Hyper-scale data centers, as well as other mid-sized data centers, artificial intelligence computing facilities, and high-performance computational centers are 'high-impact customers' for water and sewer utilities," YCUA said in its presentation. The moratorium places a 12-month stop on serving water to data centers while the YCUA conducts a long-term water supply analysis and looks into environmental sustainability studies. "During the 12-month moratorium period, the Authority will refrain from executing any capacity reservation agreement."
This is a delay tactic on the part of a Township that does not want to see the data center constructed. Many in the community have strong feelings about the use of parkland for a facility that researches nuclear weapons. Beyond the moral and ethical concerns, some are worried about becoming targets in a war. Last month, Township attorney Douglas Winters told the Board of Trustees that hosting the data center would make Ypsilanti Township a "high value target." He pointed to the recent bombing of Gulf Coast data centers by Iran as evidence.


US Special Forces Soldier Arrested For Polymarket Bets On Maduro Raid

Slashdot - Fri, 24/04/2026 - 5:00 pm
An anonymous reader quotes a report from Wired: The Department of Justice announced Thursday that it arrested Gannon Ken Van Dyke, an enlisted member of the US Army's special forces, for allegedly using "classified, nonpublic" information about the capture of Venezuelan president Nicolas Maduro to notch more than $400,000 in profits on Polymarket trades. A grand jury indicted him on five counts, including multiple violations of the Commodity Exchange Act. Van Dyke is the first person to be charged with insider trading on a prediction market in the United States. Lawmakers have been voicing concerns for months about the high likelihood that politicians and public servants could use nonpublic information to profit from trades on leading industry platforms like Polymarket and Kalshi, which have exploded in popularity over the past year. The arrest comes just weeks after Department of Justice prosecutors met with Polymarket about potential insider trading violations. [...] After Van Dyke's arrest was made public, Polymarket posted a statement to social media noting that it had "identified a user trading on classified government information" and "referred the matter to the DOJ & cooperated with their investigation." The company declined to comment further. According to court documents, Van Dyke has been an active duty US soldier since September 2008 and rose to the level of master sergeant in 2023. At the time of the alleged trading activity, he was stationed at Fort Bragg in Fayetteville, North Carolina, and assigned to the Army's Special Operations Command Western Hemisphere Operations. [...] The complaint alleges that Van Dyke was involved in the planning and execution of Maduro's arrest and that he was aware that he wasn't authorized to share nonpublic information about US military operations.
The complaint says that Van Dyke signed a nondisclosure agreement that forbade him from revealing sensitive or classified government information "by writing, word, conduct, or otherwise." The complaint also alleges Van Dyke saved a screenshot to his Google account "displaying the results of an artificial intelligence query" outlining how the US Special Forces maintains many classified files including "operational details that are not available to the public." [...] Van Dyke faces a maximum sentence of 60 years if convicted on all counts.


Tails 7.7 Surfaces Secure Boot Risk as 2026 Certificate Expiry Approaches

LinuxSecurity.com - Fri, 24/04/2026 - 3:43 pm
Tails 7.7 doesn't ship new features. It surfaces a trust problem that's been sitting quietly in Secure Boot chains for years: the digital certificates that allow Linux to run on PC hardware are reaching their 15-year expiration limit. Systems relying on the Microsoft third-party UEFI CA are now on a timeline. This release makes that visible before it turns into boot failures or broken assumptions.

next-20260424: linux-next

Kernel Linux - Fri, 24/04/2026 - 3:29 pm
Version: next-20260424 (linux-next), released 2026-04-24

Understanding Log Management and Analysis Tools for Linux Systems

LinuxSecurity.com - Fri, 24/04/2026 - 1:00 pm
Every time something happens on a computer (a user logs in, a program crashes, or a hacker tries to guess a password), the system writes it down. These "notes" are called log files. If you're new to the world of servers, it might just look like a mess of text, but Linux log analysis is actually your superpower. It's how you find out exactly why a system failed and how to fix it.
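As a tiny, self-contained taste of what that looks like in practice, here is a classic one-liner counting failed SSH logins per source IP. The log lines below are fabricated for the demo; real systems keep them in places like /var/log/auth.log or in journalctl, and the field layout varies by distro:

```shell
# Three fabricated auth-log lines: two failed logins, one successful.
log='Apr 24 10:00:01 host sshd[101]: Failed password for root from 203.0.113.5 port 22 ssh2
Apr 24 10:00:02 host sshd[102]: Failed password for admin from 203.0.113.5 port 22 ssh2
Apr 24 10:00:03 host sshd[103]: Accepted password for sam from 198.51.100.7 port 22 ssh2'

# $11 is the source-IP field in this particular layout; adjust for your format.
printf '%s\n' "$log" | awk '/Failed password/ { count[$11]++ }
                            END { for (ip in count) print ip, count[ip] }'
# prints: 203.0.113.5 2
```

The same filter-then-aggregate pattern scales from `grep | awk` one-liners up to full log-management stacks.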

Claude Is Connecting Directly To Your Personal Apps

Slashdot - Fri, 24/04/2026 - 1:00 pm
Anthropic is expanding Claude's app integrations beyond work tools, adding personal-service connectors like Spotify, Uber, AllTrails, TripAdvisor, Instacart, and TurboTax. The Verge reports: Some of these apps, such as Spotify, already have similar connectors in OpenAI's ChatGPT. Once an app is connected, Claude will suggest relevant connected apps directly in your conversations, like using AllTrails for hike recommendations. Anthropic notes in its blog post announcing the new connectors that, "Your data from [connected apps] isn't used to train our models, and the app doesn't see your other conversations with Claude. You can also disconnect it at any time." Additionally, Anthropic says "there are no paid placements or sponsored answers in conversations with Claude." When multiple apps seem relevant, Claude will show results from both "ranked by what's most useful." Claude will also ask users to verify before taking actions like making a purchase or reservation using a connected app.


Jonathan Blandford: Goblint Notes

Planet GNOME - Fri, 24/04/2026 - 9:57 am

I was excited to see Bilal’s announcement of goblint, and I’ve spent the past week getting Crosswords to work with it. This is a tool I’ve always wanted and I’m pretty convinced it will be a great boon for the GNOME ecosystem. I’m posting my notes in hope that more people try it out:

  • First and most importantly, Bilal has been so great to work with. I have filed ~20 issues and feature requests and he fixed them all very quickly. In some cases, he fixed the underlying issue before I completed adding annotations to the code.
  • Most of the issues flagged were idiomatic and stylistic, but it did find real bugs. It found a half-dozen leaks, a missing g_timeout removal, and five missing class function chain ups. One was a long-standing crasher. There’s a definite improvement in quality from adopting this tool.
  • I’m also excited about pairing this with new GSoC interns. The types of things goblint flags are the things that students hit in particular (when they don’t write all their code with AI). I think goblint will be even more important to our ecosystem as a teaching tool for our C codebase. It’s already effectively replaced my styleguide.
  • In a few instances, the use_g_autoptr rule outstripped static-scan’s ability to track leaks. Ultimately, I ended up annotating and removing the g_autoptr() calls as I couldn’t get the two to play nicely together.
  • Along the same lines, cairo, pango, and librsvg all lack G_DEFINE_AUTOPTR_CLEANUP_FUNC. It would be really great if we could fix these core libraries. In the meantime, you can add the following to your project’s goblint.toml file:
[rules.use_g_autoptr_inline_cleanup]
level = "error"
ignore_types = ["cairo_*", "Pango*", "RsvgHandle"]
  • I had some trouble getting the pipeline integrated with GNOME’s GitLab. The GitLab recipe on his page uses premium features unavailable in the self-hosted version. If it’s helpful for others, here’s what I ended up using:
goblint:
  stage: analysis
  extends:
    - "opensuse-container@x86_64.stable"
    - ".fdo.distribution-image@opensuse"
  needs:
    - job: opensuse-container@x86_64.stable
      artifacts: false
  before_script:
    - source ci/env.sh
    - cargo install --git https://github.com/bilelmoussaoui/goblint goblint
  script:
    # Goblint is fast. We run it twice: once to generate the report,
    # and a second time to display the output and trigger an error
    - /root/.cargo/bin/goblint . --format sarif > goblint.sarif || true
    - /root/.cargo/bin/goblint . --format text
  artifacts:
    reports:
      sast: goblint.sarif
    when: always

YMMV

FCC's Foreign-Made Router Ban Expands To Portable Wi-Fi Hotspot Devices

Slashdot - Fri, 24/04/2026 - 9:00 am
The FCC has expanded its foreign-made router ban to also cover consumer Wi-Fi hotspots and LTE/5G home-internet devices, though existing products and phones with hotspot features are not affected. PCMag reports: On Wednesday, the FCC updated its FAQ on the ban, clarifying which consumer-grade routers are subject to the restrictions. Portable Wi-Fi hotspots are usually considered a separate category from Wi-Fi home routers. Both offer internet access, but portable Wi-Fi hotspots use a SIM card to connect to a cellular network rather than an Ethernet cable inside a residence. However, the FCC's FAQ now specifies that "consumer-grade portable or mobile MiFi Wi-Fi or hotspot devices for residential use" are covered under the ban. The ban also affects "LTE/5G CPE devices for residential use," which are installed for fixed wireless access and use a carrier's cellular network to deliver home internet. The FCC didn't immediately respond to a request for comment about the changes. In the meantime, the FAQ reiterates that the foreign-made router ban only applies to consumer-grade devices, not enterprise products. The document also notes that mobile phones with hotspot features remain outside the restrictions. In addition, the ban only affects new router models that vendors plan to sell, not existing models, as T-Mobile emphasized to PCMag.


New Gas-Powered Data Centers Could Emit More Greenhouse Gases Than Entire Nations

Slashdot - Fri, 24/04/2026 - 5:30 am
An anonymous reader quotes a report from Wired: New gas projects linked to just 11 data center campuses around the US have the potential to create more greenhouse gases than the country of Morocco emitted in 2024. Emissions estimates from air permit documents examined by WIRED show that these natural gas projects -- which are being built to power data centers to serve some of the US's most powerful AI companies, including OpenAI, Meta, Microsoft, and xAI -- have the potential to emit more than 129 million tons of greenhouse gases per year. As tech companies race to secure massive power deals to build out hundreds of data centers across the country, these projects represent just the tip of the iceberg when it comes to the potential climate cost of the AI boom. The infrastructure on this list of large natural gas projects reviewed by WIRED is being developed to largely bypass the grid and provide power solely for data centers, a trend known as behind-the-meter power. As data center developers face long waits for connections to traditional utilities, and amid mounting public resistance to the possibility of higher energy bills, making their own power is becoming an increasingly popular option. These projects have either been announced or are under construction, with companies already submitting air permit application materials with state agencies. [...] The emissions projections for the xAI and Microsoft projects, and all the others on WIRED's list, were pulled directly from publicly-available air permit documents in state databases as well as public air permit materials collected by both Cleanview and Oil and Gas Watch, a database maintained by the Environmental Integrity Project, an environmental enforcement nonprofit. Actual greenhouse gas emissions from power plants are usually lower than what's on their air permits. Air permit modeling is based on the scenario of a power plant constantly running at full capacity. 
That's rarely the reality for grid-connected power plants, as turbines go offline for maintenance or adjust to the ebbs and flows of customer demand. "Permitted emission numbers represent a theoretical, conservative scenario, not the actual projected emissions," Alex Schott, the director of communications at Williams Companies, an oil and gas company that is building out three behind-the-meter power plants in Ohio for Meta, told WIRED in an email. Internal modeling done by the company, Schott added, shows that actual emissions could be "potentially two-thirds less than what's on paper." The projections involved, however, are still substantial. Even if the actual emissions from these power plants end up being half of the emissions numbers on the permits, they still could create more greenhouse gas emissions than the country of Norway emitted in 2024. This number is, according to the EPA, equivalent to the emissions from more than 153 average-sized natural gas plants. (WIRED's analysis does not include emissions from backup generators and turbines on the data center campuses themselves, which create smaller amounts of emissions.) Energy researcher Jon Koomey says the data center boom has created a shortage of the most efficient gas turbines, pushing some developers toward less efficient models that would need to run longer and produce more emissions. "[Data center operators'] belief is that the value being delivered by the servers is much, much more than the cost of running these inefficient power plants all the time," he said. Michael Thomas, the founder of clean energy research firm Cleanview, has been tracking gas permits for data centers across the country. He calls behind-the-meter power "a crazy acceleration of emissions." He added: "It's almost like we thought we were on the downside of the Industrial Revolution, retiring coal and gas, and now we have a new hump where we're going to rise. That terrifies me in a lot of ways."


Apple Stops Weirdly Storing Data That Let Cops Spy On Signal Chats

Slashdot - Fri, 24/04/2026 - 1:00 am
Apple has fixed a bug that could cause parts of Signal notifications to remain stored on iPhones even after messages disappeared and the app was deleted. "Affected users concerned about push notifications can update their devices to stop what Apple characterized as 'notifications marked for deletion' that 'could be unexpectedly retained on the device,'" reports Ars Technica. "According to Apple, the push notifications should never have been stored, but a 'logging issue' failed to redact data." From the report: Vulnerable users hoping to evade law enforcement surveillance often use encrypted apps like Signal to communicate sensitive information. That's why users felt blindsided when 404 Media reported that Apple was unexpectedly storing push notifications displaying parts of encrypted messages for up to a month. This occurred even after the message was set to disappear and the app itself was deleted from the device. 404 Media flagged the issue after speaking to multiple people who attended a hearing where the FBI testified that it "was able to forensically extract copies of incoming Signal messages from a defendant's iPhone, even after the app was deleted, because copies of the content were saved in the device's push notification database." The shocking revelation came in a case that 404 Media noted was "the first time authorities charged people for alleged 'Antifa' activities after President Trump designated the umbrella term a terrorist organization." "We're grateful to Apple for the quick action here, and for understanding and acting on the stakes of this kind of issue," Signal's post said. "It takes an ecosystem to preserve the fundamental human right to private communication." In their post, Signal confirmed that after users update their devices, "no action is needed for this fix to protect Signal users on iOS. Once you install the patch, all inadvertently-preserved notifications will be deleted and no forthcoming notifications will be preserved for deleted applications."


Warner Bros Shareholders Approve Paramount's $81 Billion Takeover

Slashdot - Fri, 24/04/2026 - 12:00 am
Warner Bros. Discovery shareholders have approved Paramount Skydance's takeover bid, moving the massive Hollywood merger a step closer to completion. It's not a done deal quite yet, though, as it still faces regulatory scrutiny and fierce opposition from critics who warn it will further concentrate media power. The Associated Press reports: Per a preliminary vote count Thursday, Warner Bros. Discovery said the overwhelming majority of its stakeholders voted in support of selling the entire business to Skydance-owned Paramount for $31 a share. Including debt, the deal is valued at nearly $111 billion based on Warner's current outstanding shares. That means Warner-owned HBO Max, cult-favorite titles like "Harry Potter" and even CNN could soon find themselves under the same roof with Paramount's CBS, "Top Gun" and the Paramount+ streaming service. David Zaslav, CEO of Warner Bros. Discovery, said in a statement that stockholder approval marks "another key milestone toward completing this historic transaction." Paramount added that it looks forward to closing in the coming months, and "realizing the creation of a next-generation media and entertainment company." [...] Meanwhile, Warner shareholders rejected a separate measure Thursday outlining post-merger payments for company executives.


OpenAI Says Its New GPT-5.5 Model Is More Efficient and Better At Coding

Slashdot - Thu, 23/04/2026 - 11:00 pm
OpenAI released its new GPT-5.5 model today, which the company calls its "smartest and most intuitive to use model yet, and the next step toward a new way of getting work done on a computer." The Verge reports: OpenAI just released GPT-5.4 last month, but says that the new GPT-5.5 "excels" at tasks like writing and debugging code, doing research online, making spreadsheets and documents, and doing that work across different tools. "Instead of carefully managing every step, you can give GPT-5.5 a messy, multi-part task and trust it to plan, use tools, check its work, navigate through ambiguity, and keep going," according to OpenAI. The company also notes that GPT-5.5 will have its "strongest set of safeguards to date" and can use "significantly fewer" tokens to complete tasks in Codex. GPT-5.5 is rolling out on Thursday for Plus, Pro, Business, and Enterprise ChatGPT tiers and Codex, with GPT-5.5 Pro coming to Pro, Business, and Enterprise users.


Sam Thursfield: Status update, 23rd April 2026

Planet GNOME - Thu, 23/04/2026 - 10:48 pm

Hello there,

You thought I’d given up on “status update” blog posts, didn’t you? I haven’t given up, despite my better judgement; this one is just even later than usual.

Recently I’ve been using my rather obscure platform as a blogger to theorize about AI and the future of the tech industry, mixed with the occasional life update, couched in vague terms, perhaps due to the increasing number of weirdos in the world who think doxxing and sending death threats to open source contributors is a meaningful use of their time.

In fact I do have some theories about how George Orwell (in “Why I Write”) and Italo Calvino (in “If On a Winter’s Night a Traveller”) made some good guesses from the 20th century about how easy access to LLMs would affect communication, politics and art here in the 21st. But I’ll leave that for another time.

It’s also 8 years since I moved to this new country where I live now, driving off the boat in a rusty transit van to enjoy a series of unexpected and amazing opportunities. Next week I’m going to mark the occasion with a five day bike ride through the mountains of Asturias, something I’ve been dreaming of doing for several years.

The original idea of writing a monthly post was to keep tabs on various open source software projects I sometimes manage to contribute to, and perhaps even to motivate me to do more such volunteering. Well that part didn’t work, house renovations and an unexpectedly successful gig playing synth and trombone took over all my free time; but after many years of working on corporate consultancy and doing a little open source in the background, I’m trying to make a space at work to contribute in the open again.

I could tell the whole story here of how Codethink became “the build system people”. Maybe I will, actually. It all started with BuildStream. In fact, that’s not even true: it all started in 2011 when some colleagues working with MeeGo and Yocto thought, “This is horrible, isn’t it?”

They set out to create something better, and produced Baserock, which unfortunately turned out even worse. But it did have some good ideas. The concept of “cache keys” to identify build inputs and content-addressed storage to hold build outputs began there, as did the idea of opening a “workspace” to make drive-by changes in build inputs within a large project.

BuildStream took this core idea, extended it to support arbitrary source kinds and element kinds defined by plugins, and added a shiny interface on top. It used OSTree to store and distribute build artifacts initially, later migrating to the Google REAPI with the goal of supporting Enterprise(TM) infrastructure. You can even use it alongside Bazel, if you like having three thousand commandline options at your disposal.

Unfortunately it was 2016, so we wrote the whole thing in Python. (In our defence, the Rust programming language had only recently hit 1.0 and crates.io was still a ghost town, and we’d probably still be rewriting the ruamel.yaml package in Rust if we had taken that road.) But the company did make some great decisions, particularly making a condition of success for the BuildStream project that it could unify the 5 different build+integration systems that GNOME release team were maintaining. And that success meant not making a prototype, but the release team actually using BuildStream to make releases. Tristan even ended up joining the GNOME release team for a while. We discussed it all at the 2017 Manchester GUADEC, coincidentally. It was a great time. (Aside from the 6 months leading up to the conference.)

At this point, the Freedesktop SDK already existed, with the same rather terrible name that it has today, and was already the base runtime for this new app container tool that was named… xdg-app. (At least that one eventually gained a better name.) However, if you can remember 8 years ago, it had a very different form than it does today. Now, my memory of what happened next is especially hazy at this point, because like I told you in the beginning, I was on a boat with my transit van heading towards a new life in Spain. All I have to go on 8 years later is the Git history, but somehow the Freedesktop SDK grew a 3-stage compiler bootstrap, over 600 reusable BuildStream elements, its own Gitlab namespace, and even some controversial stickers. As a parting gift I apparently added support for building VMs, the idea being that we’d reinstate the old GNOME Continuous CI system that had unfortunately died of neglect several years earlier. This idea got somewhat out of hand, let’s say.

It took me a while to realize this, but today Freedesktop SDK is effectively the BuildStream reference distribution: what Poky is to BitBake in the Yocto project, Freedesktop SDK is to BuildStream. And this is a pretty important insight. It explains the problem you may have experienced with the BuildStream documentation: you want to build some Linux package, so you read through the manual right to the end, and then you still have no fucking idea how to integrate that package.

This isn’t a failure on the part of the authors; rather, the issue is that your princess is in another castle. Every BuildStream project I’ve ever worked on has junctioned freedesktop-sdk.git and re-used the elements, plugins, aliases, configurations and conventions defined there, all of which are rigorously undocumented. The Freedesktop SDK Guide, for reasons that I won’t go into, doesn’t venture much further than reminding you how to call Make targets.

And this is something of a point of inflection. The BuildStream + Freedesktop SDK ecosystem has clearly not displaced Yocto, nor for that matter Linux Mint. But, like many of my favourite musicians, it has been quietly thriving in obscurity. People I don’t know are using it to do things that I don’t completely understand. I’ve seen it in comparison articles, and even job adverts. ChatGPT can generate credible BuildStream elements about as well as it can generate Dockerfiles (i.e. not very well, but it indicates a certain level of ubiquity). There have been conferences, drama, mistakes, neglect. It’s been through an 8-person corporate team hyper-optimizing the code, it’s been through a mini dark age where volunteers thanklessly kept the lights on almost single-handedly, and it’s even survived its transition to the Apache Foundation.

Through all of this, the secret to its success is probably that it’s just a really nice tool to work with. As much as you can enjoy software integration, I enjoy using BuildStream to do it; things rarely break, and when they do it’s rarely difficult to fix them, and most importantly the UI is really colourful! I’m now using it to build embedded system images for a product named CTRL, which you can think of as… a Linux distribution. There are some technical details to this which I’m working to improve, but I won’t bore you with them here.

I also won’t bore you with the topic of community governance this month, but that’s what’s currently on my mind. If you’ve been part of the GNOME Foundation for a few years, you’ll know this is something that’s usually boring and occasionally becomes of almost life-or-death importance. The “let’s just be really sound” model works great, until one day when you least expect it, and then suddenly it really doesn’t. There is no perfect defence against this, and in open source communities it’s our diversity that brings the most resilience. When GNOME loses, KDE gains, and that way at least we still don’t have to use Windows. Indeed, this is one argument for investing in BuildStream even if it remains forever something of a minority sport. I guess I just need to remember that when you have to start thinking hard about governance, that’s a sign of success.

Sebastian Wick: How Hard Is It To Open a File?

Planet GNOME - Thu, 23/04/2026 - 10:41 PM

It’s a question I had to ask myself multiple times over the last few months. Depending on the context the answer can be:

  • very simple, just call the standard library function
  • extremely hard, don’t trust anything

If you are an app developer, you’re lucky and it’s almost always the first answer. If you develop something with a security boundary which involves files in any way, the correct answer is very likely the second one.

Opening a File, the Hard Way

Like so often, the details depend on the specifics, but in the worst-case scenario there is a process on either side of the security boundary, and both processes operate on a filesystem tree which is shared between them.

Let’s say that the process with more privileges operates on a file on behalf of the process with less privileges. You might want to restrict this to files in a certain directory, to prevent the less privileged process from, for example, stealing your SSH key, and thus take a subpath that is relative to that directory.

The first obvious problem is that the subpath can refer to files outside of the directory if it contains “..” components. If the privileged process gets called with a subpath of ../.ssh/id_ed25519, you are in trouble. Easy fix: normalize the path, and if we ever go outside of the directory, fail.
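The “normalize, then check” fix can be sketched as follows. This is a hypothetical helper written in Python (the post’s context is C, but Python’s os.path exposes the same lexical operations); the function name is mine, not from the post:

```python
import os.path

def resolve_subpath_lexically(base, subpath):
    """Lexically normalize subpath and refuse anything escaping base.

    This only addresses the ".." problem; it does NOT handle symlinks."""
    if os.path.isabs(subpath):
        raise ValueError("absolute subpaths are not allowed")
    # normpath collapses "a/../b" and "./" purely textually
    normalized = os.path.normpath(subpath)
    if normalized == ".." or normalized.startswith(".." + os.sep):
        raise ValueError("subpath escapes the base directory")
    return os.path.join(base, normalized)
```

With this, resolve_subpath_lexically("/srv/data", "foo/../bar") yields "/srv/data/bar", while "../.ssh/id_ed25519" is rejected.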

The next issue is that every component of the path might be a symlink. If the privileged process gets called with a subpath of link, and link is a symlink to ../.ssh/id_ed25519, you might be in trouble. If the process with less privileges cannot create files in that part of the tree, it cannot create a malicious symlink, and everything is fine. In all other scenarios, nothing is fine. Easy fix: resolve the symlinks, expand the path, then normalize it.
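The symlink-aware variant can be sketched like this, again as a hypothetical Python helper (realpath wraps the same libc resolution the C code would use). Note that, as discussed next, this is still racy if the attacker can mutate the tree concurrently:

```python
import os
import os.path

def resolve_subpath(base, subpath):
    """Resolve symlinks and ".." in base/subpath, then verify the
    result still lives under base."""
    # realpath follows every symlink and normalizes the result
    resolved = os.path.realpath(os.path.join(base, subpath))
    base_resolved = os.path.realpath(base)
    if os.path.commonpath([base_resolved, resolved]) != base_resolved:
        raise ValueError("subpath escapes the base directory")
    return resolved
```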

This is usually where most people think we’re done, opening a file is not that hard after all, we can all do more fun things now. Really, this is where the fun begins.

The fix above works, as long as the less privileged process cannot change the file system tree anywhere in the file’s path while the more privileged process tries to access it. Usually this is the case if you unpack an attacker-provided archive into a directory the attacker does not have access to. If it can however, we have a classic TOCTOU (time-of-check to time-of-use) race.

We have the path foo/id_ed25519, we resolve the symlinks, we expand the path, we normalize it, and while we did all of that, the other process just replaced the regular directory foo that we just checked with a symlink which points to ../.ssh. We just checked that the path resolves to a path inside the target directory though, and happily open the path foo/id_ed25519, which now points to your ssh key. Not an easy fix.
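The race can be made concrete without any real concurrency: perform the check, mutate the tree the way the attacker would, then perform the use. A Python sketch with hypothetical file names (the attacker step is simulated inline):

```python
import os
import os.path
import shutil
import tempfile

tmp = tempfile.mkdtemp()
base = os.path.join(tmp, "base")
os.makedirs(os.path.join(base, "foo"))
with open(os.path.join(base, "foo", "id_ed25519"), "w") as f:
    f.write("harmless")

# The "secret" the attacker wants us to open, outside of base
outside = os.path.join(tmp, "outside")
os.makedirs(outside)
with open(os.path.join(outside, "id_ed25519"), "w") as f:
    f.write("SECRET KEY")

target = os.path.join(base, "foo", "id_ed25519")

# 1. Time-of-check: the path resolves safely inside base
checked = os.path.realpath(target)
assert checked.startswith(os.path.realpath(base) + os.sep)

# 2. Attacker swaps the directory for a symlink between check and use
shutil.rmtree(os.path.join(base, "foo"))
os.symlink(outside, os.path.join(base, "foo"))

# 3. Time-of-use: open() follows the new symlink; the check is stale
with open(target) as f:
    leaked = f.read()
assert leaked == "SECRET KEY"
```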

So, what is the fundamental issue here? A path string like /home/user/.local/share/flatpak/app/org.example.App/deploy describes a location in a filesystem namespace. It is not a reference to a file. By the time you finish speaking the path aloud, the thing it names may have changed.

The safe primitive is the file descriptor. Once you have an fd pointing at an inode, the kernel pins that inode. The directory can be unlinked, renamed, or replaced with a symlink; the fd does not care. A common misconception is that file descriptors always represent open files: they can, but fds opened with O_PATH do not require opening the file for I/O, yet still provide a stable reference to an inode.
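This pinning behaviour is easy to observe. A minimal sketch, in Python for brevity and Linux-only since O_PATH is a Linux-specific flag:

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "data")
with open(path, "w") as f:
    f.write("hello")

# O_PATH gives a stable reference to the inode without opening the
# file for I/O.
fd = os.open(path, os.O_PATH | os.O_CLOEXEC)
inode = os.fstat(fd).st_ino

# Rename the file out from under the path; the fd does not care.
os.rename(path, os.path.join(tmp, "moved"))
assert not os.path.exists(path)
assert os.fstat(fd).st_ino == inode
os.close(fd)
```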

The lesson that should be learned here is that you should not call any privileged process with a path. Period. Passing in file descriptors also has the benefit that they serve as proof that the calling process actually has access to the resource.

Another important lesson is that dropping down from a file descriptor to a path makes everything racy again. For example, let’s say that we want to bind mount something based on a file descriptor, and we only have the traditional mount API, so we convert the fd to a path, and pass that to mount. Unfortunately for the user, the kernel resolves the symlinks in the path that an attacker might have managed to place there. Sometimes it’s possible to detect the issue after the fact, for example by checking that the inode and device of the mounted file and the file descriptor match.
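The after-the-fact detection mentioned above boils down to comparing the (st_dev, st_ino) pair of the fd with whatever currently sits at the path. A hypothetical Python sketch (helper name is mine; Linux-only because of O_PATH):

```python
import os
import tempfile

def same_file(fd, path):
    """Check that `path` currently refers to the same inode as `fd`."""
    a = os.fstat(fd)
    b = os.stat(path, follow_symlinks=False)
    return (a.st_dev, a.st_ino) == (b.st_dev, b.st_ino)

tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "data")
open(path, "w").close()
fd = os.open(path, os.O_PATH | os.O_CLOEXEC)

assert same_file(fd, path)      # nothing has happened yet

# Attacker swaps in a different file at the same path. The old inode
# stays pinned by fd, so the new file must get a different inode.
os.unlink(path)
open(path, "w").close()
assert not same_file(fd, path)  # the mismatch is detectable
```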

With that being said, sometimes using paths is not entirely avoidable, so let’s look into that as well!

In the scenario above, we have a directory in which we want all the paths to resolve, and which the attacker does not control. We can thus open it with O_PATH and get a file descriptor for it without the attacker being able to redirect it somewhere else.

With the openat syscall, we can open a path relative to the fd we just opened. It has all the same issues we discussed above, except that we can also pass O_NOFOLLOW. With that flag set, if the last segment of the path is a symlink, it does not follow it and instead opens the actual symlink inode. All the other components can still be symlinks, and they still will be followed. We can however just split up the path, and open the next file descriptor for the next path segment and resolve symlinks manually until we have done so for the entire path.
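The component-by-component walk can be sketched like this; a simplified, hypothetical Python version (os.open with dir_fd is openat underneath, Linux-only because of O_PATH). It implements the strictest policy, rejecting every symlink; real code should prefer openat2 with RESOLVE_BENEATH where available, since the kernel then does the whole walk atomically:

```python
import os
import stat

def open_beneath_nosymlinks(dirfd, subpath):
    """Walk subpath one component at a time relative to dirfd,
    refusing ".." and any symlink component. Returns an O_PATH fd."""
    fd = os.dup(dirfd)
    try:
        for part in subpath.split("/"):
            if part in ("", "."):
                continue
            if part == "..":
                raise ValueError("'..' is not allowed")
            # O_NOFOLLOW + O_PATH: if `part` is a symlink, we get an
            # fd to the symlink inode itself instead of following it.
            nfd = os.open(part, os.O_PATH | os.O_NOFOLLOW | os.O_CLOEXEC,
                          dir_fd=fd)
            os.close(fd)
            fd = nfd
            if stat.S_ISLNK(os.fstat(fd).st_mode):
                raise ValueError("symlink encountered: " + part)
    except Exception:
        os.close(fd)
        raise
    return fd
```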

libglnx chase

libglnx is a utility library for GNOME C projects that provides fd-based filesystem operations as its primary API. Functions like glnx_openat_rdonly, glnx_file_replace_contents_at, and glnx_tmpfile_link_at all take directory fds and operate relative to them. The library is built around the discipline of “always have an fd, never use an absolute path when you can use an fd.”

The most recent addition is glnx_chaseat, which provides safe path traversal. It was inspired by systemd’s chase() and does precisely what was described above.

int glnx_chaseat (int dirfd, const char *path, GlnxChaseFlags flags, GError **error);

It returns an O_PATH | O_CLOEXEC fd for the resolved path, or -1 on error. The real magic is in the flags:

typedef enum _GlnxChaseFlags {
  /* Default */
  GLNX_CHASE_DEFAULT = 0,
  /* Disable triggering of automounts */
  GLNX_CHASE_NO_AUTOMOUNT = 1 << 1,
  /* Do not follow the path's right-most component. When the path's right-most
   * component refers to symlink, return O_PATH fd of the symlink. */
  GLNX_CHASE_NOFOLLOW = 1 << 2,
  /* Do not permit the path resolution to succeed if any component of the
   * resolution is not a descendant of the directory indicated by dirfd. */
  GLNX_CHASE_RESOLVE_BENEATH = 1 << 3,
  /* Symlinks are resolved relative to the given dirfd instead of root. */
  GLNX_CHASE_RESOLVE_IN_ROOT = 1 << 4,
  /* Fail if any symlink is encountered. */
  GLNX_CHASE_RESOLVE_NO_SYMLINKS = 1 << 5,
  /* Fail if the path's right-most component is not a regular file */
  GLNX_CHASE_MUST_BE_REGULAR = 1 << 6,
  /* Fail if the path's right-most component is not a directory */
  GLNX_CHASE_MUST_BE_DIRECTORY = 1 << 7,
  /* Fail if the path's right-most component is not a socket */
  GLNX_CHASE_MUST_BE_SOCKET = 1 << 8,
} GlnxChaseFlags;

While it doesn’t sound too complicated to implement, a lot of details are quite hairy. The implementation uses openat2, open_tree and openat depending on what is available and what behavior was requested, it handles auto-mount behavior, ensures that previously visited paths have not changed, and a few other things.

An Aside on Standard Libraries

The POSIX APIs are not great at dealing with the issue. The GLib/Gio APIs (GFile, etc.) are even worse and only accept paths. Granted, they also serve as a cross-platform abstraction where file descriptors are not a universal concept. Unfortunately, Rust also has this cross-platform abstraction which is based entirely on paths.

If you use any of those APIs, you very likely created a vulnerability. The deeper issue is that those path-based APIs are often the standard way to interact with files. This makes it impossible to reason about the security of composed code. You can audit your own code meticulously, open everything with O_PATH | O_NOFOLLOW, chain *at() calls carefully — and then call a third-party library that calls open(path) internally. The security property you established in your code does not compose through that library call.

This means that any system-level code that cares about filesystem security has to audit all transitive dependencies or avoid them in the first place.

So what would a better GLib cross-platform API look like? I would say not too different from chaseat(), but returning opaque handles instead of file descriptors, which on Unix would carry the O_PATH file descriptor and a path that can be used for printing, debugging and things like that. You would open files from those handles, which would yield another kind of opaque handle for reading, writing, and so on.
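The rough shape of such an opaque-handle API could look like the following. This is purely my illustration in Python, with hypothetical names (the real design would be a C/GObject API); the /proc/self/fd re-open is the standard Linux trick for upgrading an O_PATH fd to an I/O fd:

```python
import os

class FileHandle:
    """Opaque location handle: wraps an O_PATH fd plus a display-only
    path string. Hypothetical sketch, Linux-only."""

    def __init__(self, fd, display_path):
        self._fd = fd
        self.display_path = display_path  # for printing/debugging only

    @classmethod
    def from_path(cls, path):
        return cls(os.open(path, os.O_PATH | os.O_CLOEXEC), path)

    def lookup(self, name):
        """Resolve one child component relative to this handle."""
        fd = os.open(name, os.O_PATH | os.O_NOFOLLOW | os.O_CLOEXEC,
                     dir_fd=self._fd)
        return FileHandle(fd, self.display_path + "/" + name)

    def open_for_reading(self):
        """Yield a real I/O handle; all resolution happened on fds."""
        fd = os.open("/proc/self/fd/%d" % self._fd, os.O_RDONLY)
        return os.fdopen(fd, "rb")
```

The path string never participates in resolution; it exists only so errors and logs can name the file.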

The current GFile was also designed to implement GVfs: g_file_new_for_uri("smb://server/share/file") gives you a GFile you can g_file_read() just like a local file. This is the right goal, but the wrong abstraction layer. Instead, this kind of access should be provided by FUSE, and the URI should be translated to a path on a specific FUSE mount. This would provide a few benefits:

  • The fd-chasing approach works everywhere because it is a real filesystem managed by the kernel
  • The filesystem becomes independent of GLib and can be used for example from Rust as well
  • It stacks with other FUSE filesystems, such as the XDG Desktop Document Portal used by Flatpak
Wait, Why Are You Talking About This?

Nowadays I maintain a small project called Flatpak. Codean Labs recently did a security analysis on it and found a number of issues. Even though Flatpak developers were aware of the dangers of filesystem handling, and created libglnx because of it, most of the discovered issues were exactly of this kind. One of them (CVE-2026-34078) was a complete sandbox escape.

flatpak run was designed as a command-line tool for trusted users. When you type flatpak run org.example.App, you control the arguments. The code that processes the arguments was written assuming the caller is legitimate. It accepted path strings, because that’s what command-line tools accept.

The Flatpak portal was then built as a D-Bus service that sandboxed apps could call to start subsandboxes — and it did this by effectively constructing a flatpak run invocation and executing it. This connected a component designed for trusted input directly to an untrusted caller (the sandboxed app).

Once that connection exists, every assumption baked into flatpak run about caller trustworthiness becomes a potential vulnerability. The fix wasn’t “change one function” — it was “audit the entire call chain from portal request to bubblewrap execution and replace every path string with an fd.” That’s commits touching the portal, flatpak-run, flatpak_run_app, flatpak_run_setup_base_argv, and the bwrap argument construction, plus new options (--app-fd, --usr-fd, --bind-fd, --ro-bind-fd) threaded through all of them.

If the GLib standard file and path APIs were secure, we would not have had this issue.

Another annoyance here is that the entire subsandboxing approach in Flatpak comes from 15 years ago, when unprivileged user namespaces were not common. Nowadays we could (and should) let apps use kernel-native unprivileged user namespaces to create their own subsandboxes.

Unfortunately with rather large changes comes a high likelihood of something going wrong. For a few days we scrambled to fix a few regressions that prevented Steam, WebKit, and Chromium-based apps from launching. Huge thanks to Simon McVittie!

In the end, we managed to fix everything, made Flatpak more secure, the ecosystem is now better equipped to handle this class of issues, and hopefully you learned something as well.

Meta Is Laying Off 10% of Its Workforce

Slashdot - Thu, 23/04/2026 - 10:00 PM
Meta is reportedly cutting about 10% of its workforce, or roughly 8,000 jobs, while closing thousands of open roles it had intended to fill. "We're doing this as part of our continued effort to run the company more efficiently and to allow us to offset the other investments we're making," said Janelle Gale, Meta's chief people officer. The company had almost 79,000 employees at the start of the year. Quartz reports: Meta CEO Mark Zuckerberg has poured resources into building out AI capabilities, directing spending toward model development, chatbot products, and the engineering talent to support them. Meta set its 2026 capital expenditure guidance at $115 billion to $135 billion, almost double the $72 billion it spent in 2025. Employees have been encouraged to use AI agents internally for tasks such as writing code. The early disclosure, Gale explained, was prompted by the fact that information about the cuts had already made its way into press reports before the company was ready to announce. "I know this is unwelcome news and confirming this puts everyone in an uneasy state, but we feel this is the best path forward, given the circumstances," she wrote. According to the memo, severance for affected workers in the United States will cover 18 months of COBRA health insurance premiums, along with a base pay component of 16 weeks that increases by two weeks for each year of service. Departing employees will have access to job placement assistance and, where applicable, help navigating immigration status. Packages outside the U.S. will vary by country. Meta cut between 10% and 15% of its Reality Labs workforce in January, shut down several VR game studios, and shed about 700 positions across at least five divisions in March.

Read more of this story at Slashdot.
