
Feed aggregator

6.1.169: longterm

Linux Kernel - Sat, 18/04/2026 - 10:38am
Version: 6.1.169 (longterm)  Released: 2026-04-18  Source: linux-6.1.169.tar.xz  PGP Signature: linux-6.1.169.tar.sign  Patch: full (incremental)  ChangeLog: ChangeLog-6.1.169

5.15.203: longterm

Linux Kernel - Sat, 18/04/2026 - 10:35am
Version: 5.15.203 (longterm)  Released: 2026-04-18  Source: linux-5.15.203.tar.xz  PGP Signature: linux-5.15.203.tar.sign  Patch: full (incremental)  ChangeLog: ChangeLog-5.15.203

5.10.253: longterm

Linux Kernel - Sat, 18/04/2026 - 10:32am
Version: 5.10.253 (longterm)  Released: 2026-04-18  Source: linux-5.10.253.tar.xz  PGP Signature: linux-5.10.253.tar.sign  Patch: full (incremental)  ChangeLog: ChangeLog-5.10.253

Matthias Klumpp: Hello old new “Projects” directory!

Planet GNOME - Sat, 18/04/2026 - 10:06am

If you have recently installed a very up-to-date Linux distribution with a desktop environment, or upgraded your system on a rolling-release distribution, you might have noticed that your home directory has a new folder: “Projects”.

Why?

With the recent 0.20 release of xdg-user-dirs we enabled the “Projects” directory by default. Support for it has existed since 2007, but it was never formally enabled. This closes a more than 11-year-old bug report that asked for this feature.

The purpose of the Projects directory is to give applications a default location for project files that do not cleanly belong in any of the existing categories (Documents, Music, Pictures, Videos). Examples include software engineering projects, scientific projects, 3D printing projects, CAD designs, or even video editing projects, where the project files would end up in the “Projects” directory while the output video is more at home in “Videos”.

By enabling this by default, and subsequently adding support to GLib, Flatpak, desktops, and applications that want to make use of it over the coming months, we hope to give applications that operate in a “project-centric” manner with mixed media a better default storage location. As of now, those tools either default to the home directory or clutter the “Documents” folder, neither of which is ideal. It also gives users a default organizational structure, hopefully leading to less clutter overall and better storage layouts.

This sucks, I don’t like it!

As usual, you are in control and can modify your system’s behavior. If you do not like the “Projects” folder, simply delete it! The xdg-user-dirs utility will not try to create it again, and will instead point the default location for this directory at your home directory. If you want more control, you can influence exactly what goes where by editing your ~/.config/user-dirs.dirs configuration file.
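
For illustration, entries in ~/.config/user-dirs.dirs are simple shell-style variable assignments. A minimal sketch might look like the following; note that the exact XDG_PROJECTS_DIR key name for the new directory is an assumption here, so check the file xdg-user-dirs generates on your system for the real name:

# ~/.config/user-dirs.dirs -- per-user overrides read by xdg-user-dirs
XDG_DOCUMENTS_DIR="$HOME/Documents"
# Assumed key for the new directory; pointing it at $HOME opts out.
XDG_PROJECTS_DIR="$HOME/Projects"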

If you are a system administrator or distribution vendor and want to set default locations for the default XDG directories, you can edit the /etc/xdg/user-dirs.defaults file to set global defaults that affect all users on the system (users can still adjust the settings however they like though).
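
As a rough sketch of what such a system-wide default could look like (again, the PROJECTS key name is an assumption; entries are paths relative to each user's home directory):

# /etc/xdg/user-dirs.defaults -- system-wide defaults for all users
DOWNLOAD=Downloads
DOCUMENTS=Documents
# Assumed key for the new directory
PROJECTS=Projects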

What else is new?

Besides this change, the 0.20 release of xdg-user-dirs brings full support for the Meson build system (dropping Automake), translation updates, and some robustness improvements to its code. We also fixed the “arbitrary code execution from unsanitized input” bug that the Arch Linux Wiki mentions here for the xdg-user-dirs utility, by replacing the shell script with a C binary.

Thanks to everyone who contributed to this release!

NASA Restarts Work To Support Europe's Uncrewed Trip To Mars After Years of Setbacks

Slashdot - Sat, 18/04/2026 - 9:00am
NASA has revived support for the European Space Agency's long-delayed Rosalind Franklin Mars rover mission. According to the space agency, the current plan is to launch via a SpaceX Falcon Heavy no earlier than 2028. Engadget reports: This is a partnership between NASA and the ESA, with the European agency providing the rover, the spacecraft and the lander. The US will provide braking engines for the lander, heater units for the rover's internal systems and, of course, assistance with the actual launch. The rover will be outfitted with scientific instruments to look for signs of ancient life on the red planet. These include a state-of-the-art mass spectrometer and an organic molecule analyzer, which will come in handy as the vehicle collects samples at the Oxia Planum landing site. The mission has been stuck in development limbo since 2001, with delays caused by budget problems, technical issues, shifting international partners, and geopolitical fallout. After NASA dropped out, Russia stepped in, then was cut loose after invading Ukraine, and now -- despite NASA rejoining in 2024 and fresh political budget threats -- the rover is tentatively back on track for a 2028 launch.

Critical Atlantic Current Significantly More Likely To Collapse Than Thought

Slashdot - Sat, 18/04/2026 - 5:30am
An anonymous reader quotes a report from the Guardian: The critical Atlantic current system appears significantly more likely to collapse than previously thought after new research found that climate models predicting the biggest slowdown are the most realistic. Scientists called the new finding "very concerning" as a collapse would have catastrophic consequences for Europe, Africa and the Americas. The Atlantic meridional overturning circulation (Amoc) is a major part of the global climate system and was already known to be at its weakest for 1,600 years as a result of the climate crisis. Scientists spotted warning signs of a tipping point in 2021 and know that the Amoc has collapsed in the Earth's past. Climate scientists use dozens of different computer models to assess the future climate. However, for the complex Amoc system, these produce widely varying results, ranging from some that indicate no further slowdown by 2100 to those suggesting a huge deceleration of about 65%, even when carbon emissions from fossil fuel burning are gradually cut to net zero. The research combined real-world ocean observations with the models to determine the most reliable, and this hugely reduced the spread of uncertainty. They found an estimated slowdown of 42% to 58% in 2100, a level almost certain to end in collapse. The Amoc brings sun-warmed tropical water to Europe and the Arctic, where it cools and sinks to form a deep return current. A collapse would shift the tropical rainfall belt on which many millions of people rely to grow their food, plunge western Europe into extreme cold winters and summer droughts, and add 50-100cm to already rising sea levels around the Atlantic. The slowdown has to do with the Arctic's rapidly rising temperatures from global warming. "Warmer water is less dense and therefore sinks into the depths more slowly," explains the Guardian. "This slowing allows more rainfall to accumulate in the salty surface waters, also making it less dense, and further slowing the sinking and forming an Amoc feedback loop." The new research has been published in the journal Science Advances.

Online Personalities and Comedians Overtake TV and Newspapers as Primary News Sources

Slashdot - Sat, 18/04/2026 - 1:00am
A new Ipsos poll finds Americans are increasingly getting news from online personalities and comedians instead of traditional TV or newspapers. The survey says nearly 70% get news online in a given week, versus 55% from TV and 25% from newspapers, with figures like Joe Rogan, Greg Gutfeld, Sean Hannity, and late-night hosts ranking prominently depending on political leanings. From the Hollywood Reporter: The poll, which was conducted in March, actually found that conservative politicians and cabinet members, including President Trump, were the top news influencers. When politicos were excluded, Joe Rogan led the list, followed by Fox News personalities Greg Gutfeld and Sean Hannity, and then Tucker Carlson and Ben Shapiro. The only three influencers to crack 10 percent were Trump, Rogan, and JD Vance. Among people who voted for Kamala Harris, the top news personalities were late-night hosts, led by ABC's Jimmy Kimmel, followed by CBS Late Show host Stephen Colbert, and Daily Show host Jon Stewart. Just under 70 percent of respondents said they get their news online in a given week, compared to 55 percent for TV, and 25 percent for newspapers. [...] Of traditional media outlets, TV dominated, with Fox News, the broadcast networks, and CNN topping the list of sources. Facebook, YouTube and Instagram were the most popular online news sources. "On these platforms opinionated personalities and comedians appear to drown out anyone who would fit in the traditional journalist category," said assistant professor of practice and Jordan Center Executive Director Steven L. Herman. "Even in the late 19th and early 20th centuries, sensationalist and polarizing voices in print and later on air were among the most influential in the political landscape -- such as political satirist Mark Twain and populist Father Charles Coughlin."

NIST Limits CVE Enrichment After 263% Surge In Vulnerability Submissions

Slashdot - Sat, 18/04/2026 - 12:00am
NIST is narrowing how it handles CVEs in the National Vulnerability Database (NVD), saying it will only automatically enrich higher-priority vulnerabilities. "CVEs that do not meet those criteria will still be listed in the NVD but will not automatically be enriched by NIST," it said. "This change is driven by a surge in CVE submissions, which increased 263% between 2020 and 2025. We don't expect this trend to let up anytime soon." The Hacker News reports: The prioritization criteria outlined by NIST, which went into effect on April 15, 2026, are as follows:

  • CVEs appearing in the U.S. Cybersecurity and Infrastructure Security Agency's (CISA) Known Exploited Vulnerabilities (KEV) catalog.
  • CVEs for software used within the federal government.
  • CVEs for critical software as defined by Executive Order 14028: this includes software that's designed to run with elevated privilege or manage privileges, has privileged access to networking or computing resources, controls access to data or operational technology, and operates outside of normal trust boundaries with elevated access.

Any CVE submission that doesn't meet these thresholds will be marked as "Not Scheduled." The idea, NIST said, is to focus on CVEs that have the maximum potential for widespread impact. "While CVEs that do not meet these criteria may have a significant impact on affected systems, they generally do not present the same level of systemic risk as those in the prioritized categories," it added. [...] Changes have also been instituted for various other aspects of NVD operations. These include:

  • NIST will no longer routinely provide a separate severity score for a CVE where the CVE Numbering Authority has already provided a severity score.
  • A modified CVE will be reanalyzed only if it "materially impacts" the enrichment data. Users can request specific CVEs to be reanalyzed by sending an email to the same address listed above.
  • All unenriched CVEs currently in backlog with an NVD publish date earlier than March 1, 2026, will be moved into the "Not Scheduled" category. This does not apply to CVEs that are already in the KEV catalog.
  • NIST has updated the CVE status labels and descriptions, as well as the NVD Dashboard, to accurately reflect the status of all CVEs and other statistics in real time.

Gazing Into Sam Altman's Orb Could Solve Ticket Scalping

Slashdot - Fri, 17/04/2026 - 11:00pm
An anonymous reader quotes a report from Wired: Sam Altman's iris-scanning, humanity-verifying World project announced at an event in San Francisco on Friday that Tinder users around the globe can now put a digital badge on their profiles signaling to potential suitors that they're a real human, provided they've already stared into one of World's glossy white Orbs and allowed their eyes to be scanned. The announcement follows a pilot project for Tinder verification that World previously conducted in Japan. [...] In addition to the Tinder global expansion, Tools for Humanity, the company behind World, announced a number of other consumer and enterprise partnerships on Friday at its Lift Off event in San Francisco. The startup says Tinder users who verify with their World ID will receive five free "boosts," typically a paid feature that increases the number of users who see a profile by up to 10 times for 30 minutes. The videoconferencing platform Zoom also says that users can now require other participants to verify their identity with World before joining a call. Docusign, the contract signing software, will allow users to require World's identity verification technology. Tiago Sada, Tools for Humanity's chief product officer, tells WIRED the company sees major platform partnerships as key to helping World become a mainstream identity-verification technology. Sada said he's especially interested in working with social media companies in the future, and was encouraged to see that Reddit has started testing World as a solution to help users distinguish bots from real people. [...] World is also launching a tool called Concert Kit, which lets artists reserve concert tickets for verified humans, a pitch aimed squarely at the bot-driven scalping problem that critics say has plagued sites like TicketMaster. World will test the feature on the upcoming Bruno Mars World Tour featuring Anderson .Paak, who is scheduled to play a verified-humans-only show under his alias DJ Pee .Wee in San Francisco on Friday night. "The idea that World ID is not just private, but it's one of the most private things you've ever used, that's not obvious," says Sada. "We're just not used to this kind of technology. Many people used to tape their [iPhone's sensor used to enable] Face ID when it came out, then we got used to it."

Mozilla 'Thunderbolt' Is an Open-Source AI Client Focused On Control and Self-Hosting

Slashdot - Fri, 17/04/2026 - 10:00pm
BrianFagioli writes: Mozilla's email subsidiary MZLA Technologies just introduced Thunderbolt, an open-source AI client aimed at organizations that want to run AI on their own infrastructure instead of relying entirely on cloud services. The idea is to give companies full control over their data, models, and workflows while still offering things like chat, research tools, automation, and integration with enterprise systems through the Haystack AI framework. Native apps are planned for Windows, macOS, Linux, iOS, and Android. Thunderbolt allows organizations to do the following:

  • Run AI with their choice of models, from leading commercial providers to open-source and local models
  • Connect to systems and data: integrate with pipelines and open protocols, including deepset's Haystack platform, Model Context Protocol (MCP) servers, and agents with the Agent Client Protocol (ACP)
  • Automate workflows and recurring tasks: generate daily briefings, monitor topics, compile reports, or trigger actions based on events and schedules
  • Work seamlessly across devices with native applications for Windows, macOS, Linux, iOS, and Android
  • Maintain security with self-hosted deployment, optional end-to-end encryption, and device-level access controls

Amazon's New Fire TV Sticks No Longer Support Sideloading

Slashdot - Fri, 17/04/2026 - 9:00pm
Amazon's newest Fire TV Sticks are dropping support for normal sideloading, blocking apps from outside the Amazon Appstore unless the device is registered with developers. Cord Cutters News reports: This week, Amazon announced the upcoming launch of a new Fire TV Stick HD. The new model will run on Amazon's Vega OS, rather than Android, so most streaming apps will be supported, but users won't be able to add third-party apps. Now, on the product page to preorder the new Fire Stick, some Amazon customers are getting a message warning them that the new model won't allow sideloading. Interestingly, not all customers are getting the message, whether signed in to an Amazon account or not. The message, shown in a screenshot in the report, says: "For enhanced security, this device prevents sideloading or installing apps from unknown sources. Only apps from the Amazon Appstore are available for download." [...] The Fire TV Stick Select, announced in September 2025, also runs on Vega and some customers will see the same message about sideloading on that product page. [...] While Amazon continues to be a "multi-OS company," we should expect that future Fire TV models will also be built with Vega OS, limiting the apps users can access with their streaming devices to those from the Amazon Appstore.

OpenAI Starts Offering a Biology-Tuned LLM

Slashdot - Fri, 17/04/2026 - 8:00pm
An anonymous reader quotes a report from Ars Technica: On Thursday, OpenAI announced it had developed a large language model specifically trained on common biology workflows. Called GPT-Rosalind after Rosalind Franklin, the model appears to differ from most science-focused models from major tech companies, which have generally taken a more generic approach that works for various fields. In a press briefing, Yunyun Wang, OpenAI's Life Sciences Product Lead, said the system was designed to tackle two major roadblocks faced by current biology researchers. One is the massive datasets created by decades of genome sequencing and protein biochemistry, which can be too much for any one researcher to take in. The second is that biology has many highly specialized subfields, each with its own techniques and jargon. So, for example, a geneticist who finds themselves working on a gene that's active in brain cells might struggle to understand the immense neurobiological literature. Wang said the company had taken an LLM and trained it on 50 of the most common biological workflows, as well as on how to access the major public databases of biological information. Further training has resulted in a system that can suggest likely biological pathways and prioritize potential drug targets. "We're connecting genotype to phenotype through known pathways and regulatory mechanisms, infer likely structural or functional properties of proteins, and really leveraging this mechanistic understanding," Wang said. To address LLMs' tendencies toward sycophancy and overenthusiasm, OpenAI says it has tuned the model to be more skeptical, so it's more likely to tell you when something is a bad drug target. There was a lot of talk about GPT-Rosalind's "reasoning" and "expert-level" abilities. We were told that the former was defined as being able to work through complex, multi-step processes, while the latter was derived from the model's performance on a handful of benchmarks. Access to GPT-Rosalind is currently limited "due to concerns about the model's potential for harmful outputs if asked to do something like optimize a virus's infectivity," notes Ars. Only U.S.-based organizations can request access at the moment.

Microsoft Increases the FAT32 Limit From 32GB To 2TB

Slashdot - Fri, 17/04/2026 - 7:00pm
Longtime Slashdot reader AmiMoJo writes: Windows has limited the creation of FAT32 partitions to a maximum of 32GB for decades now. When memory cards and USB drives exceeded 32GB in size, the only options were exFAT or NTFS. Neither option was well supported on other platforms at first, although exFAT support is fairly widespread now. In its latest blog post, Microsoft announced that the limit for FAT32 partitions is being increased to 2TB. Of course, that doesn't mean that every device that supports FAT32 will work flawlessly with a 2TB partition size, but at least there is a decent chance that older devices that don't support exFAT will now be usable with memory cards over 32GB.

Newly Unsealed Records Reveal Amazon's Price-Fixing Tactics

Slashdot - Fri, 17/04/2026 - 6:00pm
Newly unsealed records in California's antitrust case against Amazon allegedly show the company pressured third-party sellers to raise prices on rival sites like Walmart, Target, and Wayfair so Amazon could maintain the appearance of offering the lowest price. California says Amazon used tools like Buy Box suppression to punish cheaper listings elsewhere. The Guardian reports: [...] In one previously redacted deposition, marked "highly confidential," Mayer Handler, owner of a clothing company called Leveret, testified that he received an email in October 2022 from Amazon notifying him that one of his products was "no longer eligible to be a featured offer" through Amazon's Buy Box. The tech giant, he testified, had suppressed the item, a tiger-themed, toddler's pajama set, because his company was selling it for $19.99 on Amazon, a single cent higher than what his company was offering it for on Walmart. Afterwards, Handler testified, his company "changed pricing on Walmart to match or exceed Amazon's price" or changed the item's product code to try to throw off Amazon's price tracking system. In response to a question from the Guardian, Handler criticized Amazon for tracking prices across the internet and "shadow" blocking his company's products -- tactics which he said were depriving consumers of "lower prices." "Maybe that's capitalism," he wrote. "Or that's a monopoly causing price hikes on the consumer." In another unsealed deposition, Terry Esbenshade, a Pennsylvania garden store supplier, testified in October 2024 that whenever his products lost Amazon's Buy Box because of lower prices elsewhere on the internet, his sales on Amazon would plummet by about 80%. This financial reality forced him to try to raise his products' prices with other retailers elsewhere, he said. In one instance, Esbenshade testified, he discovered that one of his company's better-selling patio tables had "become suppressed" on Amazon. Esbenshade wasn't sure why, he recalled, until someone at Amazon suggested he look at Wayfair, another online retailer that happened to be selling his patio table below Amazon's price. The businessman went online and set up a new minimum advertised price for the table on Wayfair to ensure it was higher than Amazon's. "So that raised the price up, and, voila, my product came back" on Amazon, he said, thanks to the reinstatement of the Buy Box.

eBPF for Runtime Threat Detection: What Linux Admins Are Actually Deploying

LinuxSecurity.com - Fri, 17/04/2026 - 5:44pm
Runtime security has moved from "nice to have" to an operational baseline in Linux environments. Most teams learned the hard way that logs and post-event alerts don't catch what actually runs on the system in real time. Attackers don't wait for indexing pipelines or SIEM correlation.

Allan Day: GNOME Foundation Update, 2026-04-17

Planet GNOME - Fri, 17/04/2026 - 5:22pm

Welcome to another update about everything that’s been happening at the GNOME Foundation. It’s been four weeks since my last post, due to a vacation and public holidays, so there’s lots to cover. This period included a major announcement, but there’s also been a lot of other notable work behind the scenes.

Fellowship & Fundraising

The really big news from the last four weeks was the launch of our new Fellowship program. This is something that the Board has been discussing for quite some time, so we were thrilled to be able to make the program a reality. We are optimistic that it will make a significant difference to the GNOME project.

If you didn’t see it already, check out the announcement for details. Also, if you want to apply to be our first Fellow, you have just three days until the application deadline on 20th April!

donate.gnome.org has been a great success for the GNOME Foundation, and it is only through the support of our existing donors that the Fellowship was possible. Despite these amazing contributions, the GNOME Foundation needs to grow our donations if we are going to be able to support future Fellowship rounds while simultaneously sustaining the organisation.

To this end, there’s an effort under way to build up our marketing and fundraising capacity. This is primarily taking place in the GNOME Engagement Team, and we would love help from the community to boost our outbound comms. If you are interested, please join the Engagement space and look out for announcements.

Also, if you haven’t already, and are able to do so: please donate!

Conferences

We have two major events coming up, with Linux App Summit in May and GUADEC in July, so right now is a busy time for conferences.

The schedules for both of these upcoming events are currently being worked on, and arrangements for catering, photographers, and audio visual services are all in the process of being finalized.

The Travel Committee has also been busy handling GUADEC travel requests, and has sent out the first batch of approvals. There are some budget pressures right now due to rising flight prices, but budget has been put aside for more GUADEC travel, so please apply if you want to attend and need support.

April 2026 Board Meeting

This week was the Board’s regular monthly meeting for April. Highlights from the meeting included:

  • I gave a general report on the Foundation’s activities, and we discussed progress on programs and initiatives, including the new Fellowship program and fundraising.
  • Deepa gave a finance report for October to December 2025.
  • Andrea Veri joined us to give an update on the Membership & Elections Committee, as well as the Infrastructure team. Andrea has been doing this work for a long time and has been instrumental in helping to keep the Foundation running, so this was a great opportunity to thank him for his work.
  • One key takeaway from this month’s discussion was the very high level of support that GNOME receives from our infrastructure partners, particularly AWS and also Fastly. We are hugely appreciative of this support, which represents a major financial contribution to GNOME, and want to make sure that these partners get positive exposure from us and feel appreciated.
  • We reviewed the timeline for the upcoming 2026 board elections, which we are tweaking a little this year in order to ensure that there is an opportunity to discuss every candidacy, and to reduce some unnecessary delay in the final result.
Infrastructure

As usual, plenty has been happening on the infrastructure side over the past month. This has included:

  • Ongoing work to tune our Fastly configuration and managing the resource usage of GNOME’s infra.
  • Deployment of a LiberaForms instance on GNOME infrastructure. This is hooked up to GNOME’s SSO, so is available to anyone with an account who wants to use it – just head over to forms.gnome.org to give it a try.
  • Changes to the Foundation’s internal email setup, to allow easier management of the generic contact email addresses, as well as better organisation of the role-based email addresses that we have.
  • New translation support for donate.gnome.org.
  • Ongoing work in Flathub, around OAuth and flat-manager.
Admin & Finance

On the accounting side, the team has been busy catching up on regular work that was put to one side during last month’s audit. This caused some significant delays to our accounting processes, but we are now almost up to date.

Reorganisation of many of our finance processes has also continued over the past four weeks. Progress has included a new structure and cadence for our internal accounting calls, continued configuration of our new payments platform, and new forms for handling reimbursement requests.

Finally, we have officially kicked off the process of migrating to our new physical mail service. Work on this is ongoing and will take some time to complete. Our new address is on the website, if anyone needs it.

That’s it for this report! Thanks for reading, and feel free to use the comments if you have questions!

US To Create High-Tech Manufacturing Zone In Philippines

Slashdot - Fri, 17/04/2026 - 5:00pm
An anonymous reader quotes a report from the Wall Street Journal: An agreement with the Philippines to establish a high-tech industrial hub is the Trump administration's latest effort to lessen China's dominance over global supply chains. The deal to build up American manufacturing across a stretch of the island of Luzon, signed Thursday, will offer U.S. companies access to essential inputs such as critical minerals that bypass Beijing's control. The artificial-intelligence-powered manufacturing hub is planned for a 4,000-acre site given to the U.S. by Manila, said undersecretary of State for Economic Affairs Jacob Helberg. The U.S. will occupy the site rent-free and administer it as a special economic zone. The hub will have diplomatic immunity, such as the protections afforded to an American embassy, and operate under U.S. common law -- the first arrangement of its kind anywhere in the world. The two-year lease is renewable for 99 years. [...] "You can't build anything in Ohio if the minerals and the process materials are controlled by an adversary who can cut you off tomorrow," Helberg said in an interview. [...] The planned manufacturing hub is largely conceptual at this stage, and details, including which American companies will participate and just what they will build in the Philippines, are yet to be determined. [...] The administration will ask companies to put forward proposals to compete for a spot in building out the hub, giving priority to bids that will help move critical minerals processing and manufacturing off Chinese suppliers. Investment will have to come from private-sector companies -- not the U.S. government. Factories approved for operation in the hub will be highly automated, Helberg said, using autonomous systems to operate around the clock. The Philippines has a history of robust manufacturing, particularly in semiconductors, but that has stagnated in recent decades because of high energy and logistics costs. Companies will have to address in their proposals how they will contend with energy costs and workforce needs; they can send American workers overseas or hire locally, Helberg said.

Andrea Veri: GNOME GitLab Git traffic caching

Planet GNOME - Fri, 17/04/2026 - 4:00pm
Introduction

One of the most visible signs that GNOME’s infrastructure has grown over the years is the amount of CI traffic that flows through gitlab.gnome.org on any given day. Hundreds of pipelines run in parallel, most of them starting with a git clone or git fetch of the same repository, often at the same commit. All that traffic was landing directly on GitLab’s webservice pods, generating redundant load for work that was essentially identical.

GNOME’s infrastructure runs on AWS, which generously provides credits to the project. Even so, data transfer is one of the largest cost drivers we face, and we have to operate within a defined budget regardless of those credits. The bandwidth costs associated with this Git traffic grew significant enough that for a period of time we redirected unauthenticated HTTPS Git pulls to our GitHub mirrors as a short-term cost mitigation. That measure bought us some breathing room, but it was never meant to be permanent: sending users to a third-party platform for what is essentially a core infrastructure operation is not a position we wanted to stay in. The goal was always to find a proper solution on our own infrastructure.

This post documents the caching layer we built to address that problem. The solution sits between the client and GitLab, intercepts Git fetch traffic, and routes it through Fastly’s CDN so that repeated fetches of the same content are served from cache rather than generating a fresh pack every time.

The problem

The Git smart HTTP protocol uses two endpoints: info/refs for capability advertisement and ref discovery, and git-upload-pack for the actual pack generation. The second one is the expensive one. When a CI job runs git fetch origin main, GitLab has to compute and send the entire pack for that fetch negotiation. If ten jobs run the same fetch within a short window, GitLab does that work ten times.

The tricky part is that git-upload-pack is a POST request with a binary body that encodes what the client already has (have lines) and what it wants (want lines). Traditional HTTP caches ignore POST bodies entirely. Building a cache that actually understands those bodies and deduplicates identical fetches requires some work at the edge.

For a fresh clone the body contains only want lines — one per ref the client is requesting:

0032want 7d20e995c3c98644eb1c58a136628b12e9f00a78
0032want 93e944c9f728a4b9da506e622592e4e3688a805c
0032want ef2cbad5843a607236b45e5f50fa4318e0580e04
...

For an incremental fetch the body is a mix of want lines (what the client needs) and have lines (commits the client already has locally), which the server uses to compute the smallest possible packfile delta:

00a4want 51a117587524cbdd59e43567e6cbd5a76e6a39ff
0000
0032have 8282cff4b31dce12e100d4d6c78d30b1f4689dd3
0032have be83e3dae8265fdc4c91f11d5778b20ceb4e2479
0032have 7d46abdf9c5a3f119f645c8de6d87efffe3889b8
...

The leading four hex characters on each line are the pkt-line length prefix. The server walks back through history from the wanted commits until it finds a common ancestor with the have set, then packages everything in between into a packfile. Two CI jobs running the same pipeline at the same commit will produce byte-for-byte identical request bodies and therefore identical responses — exactly the property a cache can help with.
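
To make the negotiation format more tangible, here is a minimal Python sketch (not part of the deployed system; it ignores protocol v2 framing and capability lines) that assembles a simplified fetch body out of pkt-lines and hashes it. The helper names are made up for illustration; the point is that identical want/have sets always produce the same bytes and therefore the same SHA-256:

import hashlib

def pkt_line(payload: str) -> bytes:
    # A pkt-line is four hex digits giving the total length, then the payload.
    data = payload.encode() + b"\n"
    return f"{len(data) + 4:04x}".encode() + data

def fetch_body(wants: list[str], haves: list[str]) -> bytes:
    # Roughly mimic a git-upload-pack negotiation body (heavily simplified).
    lines = [pkt_line("want " + sha) for sha in wants]
    lines.append(b"0000")  # flush-pkt separating wants from haves
    lines += [pkt_line("have " + sha) for sha in haves]
    lines.append(pkt_line("done"))
    return b"".join(lines)

body = fetch_body(
    wants=["7d20e995c3c98644eb1c58a136628b12e9f00a78"],
    haves=["8282cff4b31dce12e100d4d6c78d30b1f4689dd3"],
)
# Two jobs negotiating the same wants and haves produce the same body,
# hence the same hash -- the property the cache key relies on.
print(hashlib.sha256(body).hexdigest())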

Architecture overview

The overall setup involves four components:

  • OpenResty (Nginx + LuaJIT) running as a reverse proxy in front of GitLab’s webservice
  • Fastly acting as the CDN, with custom VCL to handle the non-standard caching behaviour
  • Valkey (a Redis-compatible store) holding the denylist of private repositories
  • gitlab-git-cache-webhook, a small Python/FastAPI service that keeps the denylist in sync with GitLab
flowchart TD client["Git client / CI runner"] gitlab_gnome["gitlab.gnome.org (Nginx reverse proxy)"] nginx["OpenResty Nginx"] lua["Lua: git_upload_pack.lua"] cdn_origin["/cdn-origin internal location"] fastly_cdn["Fastly CDN"] origin["gitlab.gnome.org via its origin (second pass)"] gitlab["GitLab webservice"] valkey["Valkey denylist"] webhook["gitlab-git-cache-webhook"] gitlab_events["GitLab project events"] client --> gitlab_gnome gitlab_gnome --> nginx nginx --> lua lua -- "check denylist" --> valkey lua -- "private repo: BYPASS" --> gitlab lua -- "public/internal: internal redirect" --> cdn_origin cdn_origin --> fastly_cdn fastly_cdn -- "HIT" --> cdn_origin fastly_cdn -- "MISS: origin fetch" --> origin origin --> gitlab gitlab_events --> webhook webhook -- "SET/DEL git:deny:" --> valkey

The request path for a public or internal repository looks like this:

  1. The Git client runs git fetch or git clone. Git’s smart HTTP protocol translates this into two HTTP requests: a GET /Namespace/Project.git/info/refs?service=git-upload-pack for ref discovery, followed by a POST /Namespace/Project.git/git-upload-pack carrying the negotiation body. It is that second request — the expensive pack-generating one — that the cache targets.
  2. It arrives at gitlab.gnome.org’s Nginx server, which acts as the reverse proxy in front of GitLab’s webservice.
  3. The git-upload-pack location runs a Lua script that parses the repo path, reads the request body, and SHA256-hashes it. The hash is the foundation of the cache key: because the body encodes the exact set of want and have SHAs the client is negotiating, two jobs fetching the same commit from the same repository will produce byte-for-byte identical bodies and therefore the same hash — making the cached packfile safe to reuse.
  4. Lua checks Valkey: is this repo in the denylist? If yes, the request is proxied directly to GitLab with no caching.
  5. For public/internal repos, Lua strips the Authorization header, builds a cache key, converts the POST to a GET, and does an internal redirect to /cdn-origin. The POST-to-GET conversion is necessary because Fastly does not apply consistent hashing to POST requests — each of the hundreds of nodes within a POP maintains its own independent cache storage, so the same POST request hitting different nodes will always be a miss. By converting to a GET, Fastly’s consistent hashing kicks in and routes requests with the same cache key to the same node, which means the cache is actually shared across all concurrent jobs hitting that POP (a small Python sketch of this transformation follows this list).
  6. The /cdn-origin location proxies to the Fastly git cache CDN with the X-Git-Cache-Key header set.
  7. Fastly’s VCL sees the key and does a cache lookup. On a HIT it returns the cached pack. On a MISS it fetches from gitlab.gnome.org directly via its origin (bypassing the CDN to avoid a loop) — the same Nginx instance — and caches the response for 30 days.
  8. On that second pass (origin fetch), Nginx detects the X-Git-Cache-Internal header, decodes the original POST body from X-Git-Original-Body, restores the request method, and proxies to GitLab.
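
To make steps 3 and 5 more concrete, here is a rough Python equivalent of the transformation the Lua code performs before handing the request to Fastly. It is a sketch only: the function name and the example repository path are illustrative, and the real logic lives in git_upload_pack.lua, shown below.

import base64
import hashlib

def build_cdn_headers(repo_path: str, post_body: bytes) -> dict:
    # First-pass transformation: hash the negotiation body, derive the
    # versioned cache key, and pack the original POST body into a header
    # so the request can travel to Fastly as a GET.
    body_hash = hashlib.sha256(post_body).hexdigest()
    return {
        "X-Git-Cache-Key": "v2:" + repo_path + ":" + body_hash,
        # Restored into a POST body by Nginx on the second (origin-fetch) pass.
        "X-Git-Original-Body": base64.b64encode(post_body).decode(),
    }

headers = build_cdn_headers("GNOME/glib.git", b"0032want 7d20e995c3...")
print(headers["X-Git-Cache-Key"])
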
The Nginx and Lua layer

The Nginx configuration exposes two relevant locations. The first is the internal one used for the CDN proxy leg:

location ^~ /cdn-origin/ {
    internal;

    rewrite ^/cdn-origin(/.*)$ $1 break;

    proxy_pass $cdn_upstream;
    proxy_ssl_server_name on;
    proxy_ssl_name <cdn-hostname>;
    proxy_set_header Host <cdn-hostname>;
    proxy_set_header Accept-Encoding "";
    proxy_http_version 1.1;
    proxy_buffering on;
    proxy_request_buffering on;
    proxy_connect_timeout 10s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;

    header_filter_by_lua_block {
        ngx.header["X-Git-Cache-Key"] = ngx.req.get_headers()["X-Git-Cache-Key"]
        ngx.header["X-Git-Body-Hash"] = ngx.req.get_headers()["X-Git-Body-Hash"]

        local xcache = ngx.header["X-Cache"] or ""
        if xcache:find("HIT") then
            ngx.header["X-Git-Cache-Status"] = "HIT"
        else
            ngx.header["X-Git-Cache-Status"] = "MISS"
        end
    }
}

The header_filter_by_lua_block here is doing something specific: it reads X-Cache from the response Fastly returns and translates it into a clean X-Git-Cache-Status header for observability. The X-Git-Cache-Key and X-Git-Body-Hash are also passed through so that callers can see what cache entry was involved.

The second location is git-upload-pack itself, which delegates all the logic to a Lua file:

location ~ /git-upload-pack$ {
    client_body_buffer_size 5m;
    client_max_body_size 5m;

    access_by_lua_file /etc/nginx/lua/git_upload_pack.lua;

    header_filter_by_lua_block {
        local key = ngx.req.get_headers()["X-Git-Cache-Key"]
        if key then
            ngx.header["X-Git-Cache-Key"] = key
        end
    }

    proxy_pass http://gitlab-webservice;
    proxy_http_version 1.1;
    proxy_set_header Host gitlab.gnome.org;
    proxy_set_header X-Real-IP $http_fastly_client_ip;
    proxy_set_header X-Forwarded-For $http_fastly_client_ip;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-Port 443;
    proxy_set_header X-Forwarded-Ssl on;
    proxy_set_header Connection "";
    proxy_buffering on;
    proxy_request_buffering on;
    proxy_connect_timeout 10s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;
}

The access_by_lua_file directive runs before the request is proxied. If the Lua script calls ngx.exec("/cdn-origin" .. uri), Nginx performs an internal redirect to the CDN location and the proxy_pass to GitLab is never reached. If the script returns normally (for private repos or non-fetch commands), the request falls through to the proxy_pass.

Building the cache key

The full Lua script that runs in access_by_lua_file handles both passes of the request. The first pass (client → nginx) does the heavy lifting:

local resty_sha256 = require("resty.sha256")
local resty_str = require("resty.string")
local redis_helper = require("redis_helper")

local redis_host = os.getenv("REDIS_HOST") or "localhost"
local redis_port = os.getenv("REDIS_PORT") or "6379"

-- Second pass: request arriving from CDN origin fetch.
-- Decode the original POST body from the header and restore the method.
if ngx.req.get_headers()["X-Git-Cache-Internal"] then
    local encoded_body = ngx.req.get_headers()["X-Git-Original-Body"]
    if encoded_body then
        ngx.req.read_body()
        local body = ngx.decode_base64(encoded_body)
        ngx.req.set_method(ngx.HTTP_POST)
        ngx.req.set_body_data(body)
        ngx.req.set_header("Content-Length", tostring(#body))
        ngx.req.clear_header("X-Git-Original-Body")
    end
    return
end

The second-pass guard is at the top of the script. When Fastly’s origin fetch arrives, it will carry X-Git-Cache-Internal: 1. The script detects that, reconstructs the POST body from the base64-encoded header, restores the POST method, and returns — allowing Nginx to proxy the real request to GitLab.

For the first pass, the script parses the repo path from the URI, reads and buffers the full request body, and computes a SHA256 over it:

-- Only cache "fetch" commands; ls-refs responses are small, fast, and
-- become stale on every push (the body hash is constant so a long TTL
-- would serve outdated ref listings).
if not body:find("command=fetch", 1, true) then
    ngx.header["X-Git-Cache-Status"] = "BYPASS"
    return
end

-- Hash the body
local sha256 = resty_sha256:new()
sha256:update(body)
local body_hash = resty_str.to_hex(sha256:final())

-- Build cache key: cache_versioning + repo path + body hash
local cache_key = "v2:" .. repo_path .. ":" .. body_hash

A few things worth noting here. The ls-refs command is explicitly excluded from caching. The reason is that ls-refs is used to list references and its request body is essentially static (just a capability advertisement). If we cached it with a 30-day TTL, a push to the repository would not invalidate the cache — the key would be the same — and clients would get stale ref listings. Fetch bodies, on the other hand, encode exactly the SHAs the client wants and already has. The same set of want/have lines always maps to the same pack, which makes them safe to cache for a long time.

The v2: prefix is a cache version string. It makes it straightforward to invalidate all existing cache entries if we ever need to change the key scheme, without touching Fastly’s purge API.

The POST-to-GET conversion

This is probably the most unusual part of the design:

-- Carry the POST body as a base64 header and convert to GET so that
-- Fastly's intra-POP consistent hashing routes identical cache keys
-- to the same server (Fastly only does this for GET, not POST).
ngx.req.set_header("X-Git-Original-Body", ngx.encode_base64(body))
ngx.req.set_method(ngx.HTTP_GET)
ngx.req.set_body_data("")

return ngx.exec("/cdn-origin" .. uri)

Fastly’s shield feature routes cache misses through a designated intra-POP “shield” node before going to origin. When two different edge nodes both get a MISS for the same cache key simultaneously, the shield node collapses them into a single origin request. This is important for us because without it, a burst of CI jobs fetching the same commit would all miss, all go to origin in parallel, and GitLab would end up generating the same pack multiple times anyway.

The catch is that Fastly’s consistent hashing and shield routing only works for GET requests. POST requests always go straight to origin. Fastly does provide a way to force POST responses into the cache — by returning pass in vcl_recv and setting beresp.cacheable in vcl_fetch — but it is a blunt instrument: there is no consistent hashing, no shield collapsing, and no guarantee that two nodes in the same POP will ever share the cached result. By converting the POST to a GET and encoding the body in a header, we get consistent hashing and shield-level request collapsing for free.

The VCL on the Fastly side uses the X-Git-Cache-Key header (not the URL or method) as the cache key, so the GET conversion is invisible to the caching logic.

Protecting private repositories

We cannot route private repository traffic through an external CDN — that would mean sending authenticated git content to a third-party cache. The way we prevent this is a denylist stored in Valkey. Before doing anything else, the Lua script checks whether the repository is listed there:

local denied, err = redis_helper.is_denied(redis_host, redis_port, repo_path)
if err then
    ngx.log(ngx.ERR, "git-cache: Redis error for ", repo_path, ": ", err,
            " — cannot verify project visibility, bypassing CDN")
    ngx.header["X-Git-Cache-Status"] = "BYPASS"
    return
end

if denied then
    ngx.header["X-Git-Cache-Status"] = "BYPASS"
    ngx.header["X-Git-Body-Hash"] = body_hash:sub(1, 12)
    return
end

-- Public/internal repo: strip credentials before routing through CDN
ngx.req.clear_header("Authorization")

If Valkey is unreachable, the script logs an error and bypasses the CDN entirely, treating the repository as if it were private. This is the safe default: the cost of a Redis failure is slightly increased load on GitLab, not the risk of routing private repository content through an external cache. In practice, Valkey runs alongside Nginx on the same node, so true availability failures are uncommon.

The denylist is maintained by gitlab-git-cache-webhook, a small FastAPI service. It listens for GitLab system hooks on project_create and project_update events:

HANDLED_EVENTS = {"project_create", "project_update"}

@router.post("/webhook")
async def webhook(request: Request, ...) -> Response:
    ...
    event = body.get("event_name", "")
    if event not in HANDLED_EVENTS:
        return Response(status_code=204)

    project = body.get("project", {})
    path = project.get("path_with_namespace", "")
    visibility_level = project.get("visibility_level")

    if visibility_level == 0:
        await deny_repo(path)
    else:
        removed = await allow_repo(path)

    return Response(status_code=204)

GitLab’s visibility_level is 0 for private, 10 for internal, and 20 for public. Internal repositories are intentionally treated the same as public ones here: they are accessible to any authenticated user on the instance, so routing them through the CDN is acceptable. Only truly private repositories go into the denylist.

The key format in Valkey is git:deny:<path_with_namespace>. The Lua redis_helper module does an EXISTS check on that key. The webhook service also ships a reconciliation command (python -m app.reconcile) that does a full resync of all private repositories via the GitLab API, which is useful to run on first deployment or after any extended Valkey downtime.
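
The deny_repo and allow_repo helpers referenced in the webhook snippet are not shown in the post; the following is a minimal sketch of what they might look like using redis-py's asyncio client, assuming the git:deny:<path_with_namespace> key format described above (the connection parameters are placeholders):

from redis.asyncio import Redis

valkey = Redis(host="localhost", port=6379, decode_responses=True)

async def deny_repo(path: str) -> None:
    # Mark a private repository so Nginx bypasses the CDN for it.
    # The value is irrelevant: the Lua side only runs an EXISTS check.
    await valkey.set("git:deny:" + path, "1")

async def allow_repo(path: str) -> bool:
    # Drop the denylist entry when a project becomes public or internal;
    # returns True if an entry was actually removed.
    return await valkey.delete("git:deny:" + path) > 0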

The Fastly VCL

On the Fastly side, three VCL subroutines carry the relevant logic. In vcl_recv:

if (req.url ~ "/info/refs") {
    return(pass);
}

if (req.http.X-Git-Cache-Key) {
    set req.backend = F_Host_1;
    if (req.restarts == 0) {
        set req.backend = fastly.try_select_shield(ssl_shield_iad_va_us, F_Host_1);
    }
    return(lookup);
}

/info/refs is always passed through uncached — it is the capability advertisement step and caching it would cause problems with protocol negotiation. Requests carrying X-Git-Cache-Key get an explicit lookup directive and are routed through the shield. Everything else falls through to Fastly’s default behaviour.

In vcl_hash, the cache key overrides the default URL-based key:

if (req.http.X-Git-Cache-Key) {
    set req.hash += req.http.X-Git-Cache-Key;
    return(hash);
}

And in vcl_fetch, responses are marked cacheable when they come back with a 200 and a non-empty body:

if (req.http.X-Git-Cache-Key && beresp.status == 200) {
    if (beresp.http.Content-Length == "0") {
        set beresp.ttl = 0s;
        set beresp.cacheable = false;
        return(deliver);
    }

    set beresp.cacheable = true;
    set beresp.ttl = 30d;
    set beresp.http.X-Git-Cache-Key = req.http.X-Git-Cache-Key;

    unset beresp.http.Cache-Control;
    unset beresp.http.Pragma;
    unset beresp.http.Expires;
    unset beresp.http.Set-Cookie;

    return(deliver);
}

The 30-day TTL is deliberately long. Git pack data is content-addressed: a pack for a given set of want/have lines will always be the same. As long as the objects exist in the repository, the cached pack is valid. The only case where a cached pack could be wrong is if objects were deleted (force-push that drops history, for instance), which is rare and, on GNOME’s GitLab, made even rarer by the Gitaly custom hooks we run to prevent force-pushes and history rewrites on protected namespaces. In those cases the cache version prefix would force a key change rather than relying on TTL expiry.

Empty responses (Content-Length: 0) are explicitly not cached. GitLab can return an empty body in edge cases and caching that would break all subsequent fetches for that key.

Conclusions

The system has been running in production for a few days now and the cache hit rate on fetch traffic has been consistently high, at over 80%. If something goes wrong with the cache layer, the worst case is that requests fall back to BYPASS and GitLab handles them directly, which is how things worked before. This also means we no longer redirect any traffic to github.com.

That should be all for today, stay tuned!

next-20260417: linux-next

Linux Kernel - Fri, 17/04/2026 - 2:44pm
Version: next-20260417 (linux-next)  Released: 2026-04-17

Reed Hastings Is Leaving Netflix After 29 Years

Slashdot - Fri, 17/04/2026 - 1:00pm
Reed Hastings is stepping down from Netflix's board in June, ending a 29-year run at the company he co-founded and helped transform from a DVD-by-mail business into a global streaming giant. Hastings said in a shareholder letter (PDF) that he's stepping down to focus on "his philanthropy and other pursuits." Engadget reports: Hastings has served as chairman of Netflix's board since 2023, a role he assumed after stepping down as co-CEO and promoting Greg Peters in his place. "Netflix changed my life in so many ways, and my all-time favorite memory was January 2016, when we enabled nearly the entire planet to enjoy our service," Hastings said in a statement. "My real contribution at Netflix wasn't a single decision; it was a focus on member joy, building a culture that others could inherit and improve, and building a company that could be both beloved by members and wildly successful for generations to come. A special thanks to Greg and Ted, whose commitment to Netflix's greatness is so strong that I can now focus on new things."
