
Allan Day: GNOME Foundation Update, 2026-04-17

Planet GNOME - Fri, 17/04/2026 - 5:22pm

Welcome to another update about everything that’s been happening at the GNOME Foundation. It’s been four weeks since my last post, due to a vacation and public holidays, so there’s lots to cover. This period included a major announcement, but there’s also been a lot of other notable work behind the scenes.

Fellowship & Fundraising

The really big news from the last four weeks was the launch of our new Fellowship program. This is something that the Board has been discussing for quite some time, so we were thrilled to be able to make the program a reality. We are optimistic that it will make a significant difference to the GNOME project.

If you didn’t see it already, check out the announcement for details. Also, if you want to apply to be our first Fellow, you have just three days until the application deadline on 20th April!

donate.gnome.org has been a great success for the GNOME Foundation, and it is only through the support of our existing donors that the Fellowship was possible. Despite these amazing contributions, the GNOME Foundation needs to grow our donations if we are going to be able to support future Fellowship rounds while simultaneously sustaining the organisation.

To this end, there's an effort underway to build up our marketing and fundraising capacity. This is primarily taking place in the GNOME Engagement Team, and we would love help from the community to boost our outbound comms. If you are interested, please join the Engagement space and look out for announcements.

Also, if you haven’t already, and are able to do so: please donate!

Conferences

We have two major events coming up, with Linux App Summit in May and GUADEC in July, so right now is a busy time for conferences.

The schedules for both of these upcoming events are currently being worked on, and arrangements for catering, photographers, and audio visual services are all in the process of being finalized.

The Travel Committee has also been busy handling GUADEC travel requests, and has sent out the first batch of approvals. There are some budget pressures right now due to rising flight prices, but budget has been put aside for more GUADEC travel, so please apply if you want to attend and need support.

April 2026 Board Meeting

This week was the Board’s regular monthly meeting for April. Highlights from the meeting included:

  • I gave a general report on the Foundation’s activities, and we discussed progress on programs and initiatives, including the new Fellowship program and fundraising.
  • Deepa gave a finance report for October to December 2025.
  • Andrea Veri joined us to give an update on the Membership & Elections Committee, as well as the Infrastructure team. Andrea has been doing this work for a long time and has been instrumental in helping to keep the Foundation running, so this was a great opportunity to thank him for his work.
  • One key takeaway from this month’s discussion was the very high level of support that GNOME receives from our infrastructure partners, particularly AWS and also Fastly. We are hugely appreciative of this support, which represents a major financial contribution to GNOME, and want to make sure that these partners get positive exposure from us and feel appreciated.
  • We reviewed the timeline for the upcoming 2026 board elections, which we are tweaking a little this year, in order to ensure that there is an opportunity to discuss every candidacy, and to reduce unnecessary delay before the final result.

Infrastructure

As usual, plenty has been happening on the infrastructure side over the past month. This has included:

  • Ongoing work to tune our Fastly configuration and manage the resource usage of GNOME’s infra.
  • Deployment of a LiberaForms instance on GNOME infrastructure. This is hooked up to GNOME’s SSO, so is available to anyone with an account who wants to use it – just head over to forms.gnome.org to give it a try.
  • Changes to the Foundation’s internal email setup, to allow easier management of the generic contact email addresses, as well as better organisation of the role-based email addresses that we have.
  • New translation support for donate.gnome.org.
  • Ongoing work in Flathub, around OAuth and flat-manager.

Admin & Finance

On the accounting side, the team has been busy catching up on regular work that got put to one side during last month’s audit. There were some significant delays to our accounting processes as a result of this, but we are now almost up to date.

Reorganisation of many of our finance processes has also continued over the past four weeks. Progress has included a new structure and cadence for our internal accounting calls, continued configuration of our new payments platform, and new forms for handling reimbursement requests.

Finally, we have officially kicked off the process of migrating to our new physical mail service. Work on this is ongoing and will take some time to complete. Our new address is on the website, if anyone needs it.

That’s it for this report! Thanks for reading, and feel free to use the comments if you have questions!

Andrea Veri: GNOME GitLab Git traffic caching

Planet GNOME - Fri, 17/04/2026 - 4:00pm
Introduction

One of the most visible signs that GNOME’s infrastructure has grown over the years is the amount of CI traffic that flows through gitlab.gnome.org on any given day. Hundreds of pipelines run in parallel, most of them starting with a git clone or git fetch of the same repository, often at the same commit. All that traffic was landing directly on GitLab’s webservice pods, generating redundant load for work that was essentially identical.

GNOME’s infrastructure runs on AWS, which generously provides credits to the project. Even so, data transfer is one of the largest cost drivers we face, and we have to operate within a defined budget regardless of those credits. The bandwidth costs associated with this Git traffic grew significant enough that for a period of time we redirected unauthenticated HTTPS Git pulls to our GitHub mirrors as a short-term cost mitigation. That measure bought us some breathing room, but it was never meant to be permanent: sending users to a third-party platform for what is essentially a core infrastructure operation is not a position we wanted to stay in. The goal was always to find a proper solution on our own infrastructure.

This post documents the caching layer we built to address that problem. The solution sits between the client and GitLab, intercepts Git fetch traffic, and routes it through Fastly’s CDN so that repeated fetches of the same content are served from cache rather than generating a fresh pack every time.

The problem

The Git smart HTTP protocol uses two endpoints: info/refs for capability advertisement and ref discovery, and git-upload-pack for the actual pack generation. The second one is the expensive one. When a CI job runs git fetch origin main, GitLab has to compute and send the entire pack for that fetch negotiation. If ten jobs run the same fetch within a short window, GitLab does that work ten times.

The tricky part is that git-upload-pack is a POST request with a binary body that encodes what the client already has (have lines) and what it wants (want lines). Traditional HTTP caches ignore POST bodies entirely. Building a cache that actually understands those bodies and deduplicates identical fetches requires some work at the edge.

For a fresh clone the body contains only want lines — one per ref the client is requesting:

```
0032want 7d20e995c3c98644eb1c58a136628b12e9f00a78
0032want 93e944c9f728a4b9da506e622592e4e3688a805c
0032want ef2cbad5843a607236b45e5f50fa4318e0580e04
...
```

For an incremental fetch the body is a mix of want lines (what the client needs) and have lines (commits the client already has locally), which the server uses to compute the smallest possible packfile delta:

```
00a4want 51a117587524cbdd59e43567e6cbd5a76e6a39ff
0000
0032have 8282cff4b31dce12e100d4d6c78d30b1f4689dd3
0032have be83e3dae8265fdc4c91f11d5778b20ceb4e2479
0032have 7d46abdf9c5a3f119f645c8de6d87efffe3889b8
...
```

The leading four hex characters on each line are the pkt-line length prefix. The server walks back through history from the wanted commits until it finds a common ancestor with the have set, then packages everything in between into a packfile. Two CI jobs running the same pipeline at the same commit will produce byte-for-byte identical request bodies and therefore identical responses — exactly the property a cache can help with.
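To make the framing concrete, here is a short Python sketch of a pkt-line parser. This is a simplified illustration of the wire format described above, not code from GitLab or Git itself:

```python
def parse_pkt_lines(body: bytes) -> list[bytes]:
    """Split a Git smart-protocol body into pkt-lines.

    Each pkt-line starts with a 4-hex-digit length prefix that counts
    the prefix itself; "0000" is a flush packet (represented here as b"").
    """
    lines = []
    i = 0
    while i < len(body):
        length = int(body[i:i + 4], 16)
        if length == 0:          # flush-pkt delimiter
            lines.append(b"")
            i += 4
            continue
        lines.append(body[i + 4:i + length])
        i += length
    return lines

body = (b"0032want 7d20e995c3c98644eb1c58a136628b12e9f00a78\n"
        b"0000")
print(parse_pkt_lines(body))
# → [b'want 7d20e995c3c98644eb1c58a136628b12e9f00a78\n', b'']
```

Note how "0032" (50 in decimal) covers the prefix, the `want` keyword, the 40-character SHA and the trailing newline, which is exactly why two identical negotiations produce byte-for-byte identical bodies.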

Architecture overview

The overall setup involves four components:

  • OpenResty (Nginx + LuaJIT) running as a reverse proxy in front of GitLab’s webservice
  • Fastly acting as the CDN, with custom VCL to handle the non-standard caching behaviour
  • Valkey (a Redis-compatible store) holding the denylist of private repositories
  • gitlab-git-cache-webhook, a small Python/FastAPI service that keeps the denylist in sync with GitLab

```mermaid
flowchart TD
    client["Git client / CI runner"]
    gitlab_gnome["gitlab.gnome.org (Nginx reverse proxy)"]
    nginx["OpenResty Nginx"]
    lua["Lua: git_upload_pack.lua"]
    cdn_origin["/cdn-origin internal location"]
    fastly_cdn["Fastly CDN"]
    origin["gitlab.gnome.org via its origin (second pass)"]
    gitlab["GitLab webservice"]
    valkey["Valkey denylist"]
    webhook["gitlab-git-cache-webhook"]
    gitlab_events["GitLab project events"]

    client --> gitlab_gnome
    gitlab_gnome --> nginx
    nginx --> lua
    lua -- "check denylist" --> valkey
    lua -- "private repo: BYPASS" --> gitlab
    lua -- "public/internal: internal redirect" --> cdn_origin
    cdn_origin --> fastly_cdn
    fastly_cdn -- "HIT" --> cdn_origin
    fastly_cdn -- "MISS: origin fetch" --> origin
    origin --> gitlab
    gitlab_events --> webhook
    webhook -- "SET/DEL git:deny:" --> valkey
```

The request path for a public or internal repository looks like this:

  1. The Git client runs git fetch or git clone. Git’s smart HTTP protocol translates this into two HTTP requests: a GET /Namespace/Project.git/info/refs?service=git-upload-pack for ref discovery, followed by a POST /Namespace/Project.git/git-upload-pack carrying the negotiation body. It is that second request — the expensive pack-generating one — that the cache targets.
  2. It arrives at gitlab.gnome.org’s Nginx server, which acts as the reverse proxy in front of GitLab’s webservice.
  3. The git-upload-pack location runs a Lua script that parses the repo path, reads the request body, and SHA256-hashes it. The hash is the foundation of the cache key: because the body encodes the exact set of want and have SHAs the client is negotiating, two jobs fetching the same commit from the same repository will produce byte-for-byte identical bodies and therefore the same hash — making the cached packfile safe to reuse.
  4. Lua checks Valkey: is this repo in the denylist? If yes, the request is proxied directly to GitLab with no caching.
  5. For public/internal repos, Lua strips the Authorization header, builds a cache key, converts the POST to a GET, and does an internal redirect to /cdn-origin. The POST-to-GET conversion is necessary because Fastly does not apply consistent hashing to POST requests — each of the hundreds of nodes within a POP maintains its own independent cache storage, so the same POST request hitting different nodes will always be a miss. By converting to a GET, Fastly’s consistent hashing kicks in and routes requests with the same cache key to the same node, which means the cache is actually shared across all concurrent jobs hitting that POP.
  6. The /cdn-origin location proxies to the Fastly git cache CDN with the X-Git-Cache-Key header set.
  7. Fastly’s VCL sees the key and does a cache lookup. On a HIT it returns the cached pack. On a MISS it fetches from gitlab.gnome.org directly via its origin (bypassing the CDN to avoid a loop) — the same Nginx instance — and caches the response for 30 days.
  8. On that second pass (origin fetch), Nginx detects the X-Git-Cache-Internal header, decodes the original POST body from X-Git-Original-Body, restores the request method, and proxies to GitLab.
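Steps 3, 5 and 8 amount to a simple round trip: hash the body for the cache key, stash the body in a header, and restore it on the second pass. A Python sketch of that contract (repo path and body are illustrative; the production logic is the Lua shown later in this post):

```python
import base64
import hashlib

# First pass: hash the negotiation body for the cache key and carry
# the body itself in a header so the request can travel as a GET.
body = b"0011command=fetch0032want 51a117587524cbdd59e43567e6cbd5a76e6a39ff"
headers = {
    "X-Git-Original-Body": base64.b64encode(body).decode(),
    "X-Git-Cache-Key": "v2:GNOME/example.git:" + hashlib.sha256(body).hexdigest(),
}

# Second pass (Fastly's origin fetch): recover the POST body byte-for-byte.
restored = base64.b64decode(headers["X-Git-Original-Body"])
assert restored == body

# Identical bodies always yield identical keys -- the property that makes
# the cached packfile safe to share between concurrent CI jobs.
assert headers["X-Git-Cache-Key"].endswith(hashlib.sha256(restored).hexdigest())
```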

The Nginx and Lua layer

The Nginx configuration exposes two relevant locations. The first is the internal one used for the CDN proxy leg:

```nginx
location ^~ /cdn-origin/ {
    internal;
    rewrite ^/cdn-origin(/.*)$ $1 break;

    proxy_pass $cdn_upstream;
    proxy_ssl_server_name on;
    proxy_ssl_name <cdn-hostname>;
    proxy_set_header Host <cdn-hostname>;
    proxy_set_header Accept-Encoding "";
    proxy_http_version 1.1;
    proxy_buffering on;
    proxy_request_buffering on;
    proxy_connect_timeout 10s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;

    header_filter_by_lua_block {
        ngx.header["X-Git-Cache-Key"] = ngx.req.get_headers()["X-Git-Cache-Key"]
        ngx.header["X-Git-Body-Hash"] = ngx.req.get_headers()["X-Git-Body-Hash"]
        local xcache = ngx.header["X-Cache"] or ""
        if xcache:find("HIT") then
            ngx.header["X-Git-Cache-Status"] = "HIT"
        else
            ngx.header["X-Git-Cache-Status"] = "MISS"
        end
    }
}
```

The header_filter_by_lua_block here is doing something specific: it reads X-Cache from the response Fastly returns and translates it into a clean X-Git-Cache-Status header for observability. The X-Git-Cache-Key and X-Git-Body-Hash are also passed through so that callers can see what cache entry was involved.

The second location is git-upload-pack itself, which delegates all the logic to a Lua file:

```nginx
location ~ /git-upload-pack$ {
    client_body_buffer_size 5m;
    client_max_body_size 5m;

    access_by_lua_file /etc/nginx/lua/git_upload_pack.lua;

    header_filter_by_lua_block {
        local key = ngx.req.get_headers()["X-Git-Cache-Key"]
        if key then
            ngx.header["X-Git-Cache-Key"] = key
        end
    }

    proxy_pass http://gitlab-webservice;
    proxy_http_version 1.1;
    proxy_set_header Host gitlab.gnome.org;
    proxy_set_header X-Real-IP $http_fastly_client_ip;
    proxy_set_header X-Forwarded-For $http_fastly_client_ip;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-Port 443;
    proxy_set_header X-Forwarded-Ssl on;
    proxy_set_header Connection "";
    proxy_buffering on;
    proxy_request_buffering on;
    proxy_connect_timeout 10s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;
}
```

The access_by_lua_file directive runs before the request is proxied. If the Lua script calls ngx.exec("/cdn-origin" .. uri), Nginx performs an internal redirect to the CDN location and the proxy_pass to GitLab is never reached. If the script returns normally (for private repos or non-fetch commands), the request falls through to the proxy_pass.

Building the cache key

The full Lua script that runs in access_by_lua_file handles both passes of the request. The first pass (client → nginx) does the heavy lifting:

```lua
local resty_sha256 = require("resty.sha256")
local resty_str = require("resty.string")
local redis_helper = require("redis_helper")

local redis_host = os.getenv("REDIS_HOST") or "localhost"
local redis_port = os.getenv("REDIS_PORT") or "6379"

-- Second pass: request arriving from CDN origin fetch.
-- Decode the original POST body from the header and restore the method.
if ngx.req.get_headers()["X-Git-Cache-Internal"] then
    local encoded_body = ngx.req.get_headers()["X-Git-Original-Body"]
    if encoded_body then
        ngx.req.read_body()
        local body = ngx.decode_base64(encoded_body)
        ngx.req.set_method(ngx.HTTP_POST)
        ngx.req.set_body_data(body)
        ngx.req.set_header("Content-Length", tostring(#body))
        ngx.req.clear_header("X-Git-Original-Body")
    end
    return
end
```

The second-pass guard is at the top of the script. When Fastly’s origin fetch arrives, it will carry X-Git-Cache-Internal: 1. The script detects that, reconstructs the POST body from the base64-encoded header, restores the POST method, and returns — allowing Nginx to proxy the real request to GitLab.

For the first pass, the script parses the repo path from the URI, reads and buffers the full request body, and computes a SHA256 over it:

```lua
-- Only cache "fetch" commands; ls-refs responses are small, fast, and
-- become stale on every push (the body hash is constant so a long TTL
-- would serve outdated ref listings).
if not body:find("command=fetch", 1, true) then
    ngx.header["X-Git-Cache-Status"] = "BYPASS"
    return
end

-- Hash the body
local sha256 = resty_sha256:new()
sha256:update(body)
local body_hash = resty_str.to_hex(sha256:final())

-- Build cache key: cache_versioning + repo path + body hash
local cache_key = "v2:" .. repo_path .. ":" .. body_hash
```

A few things worth noting here. The ls-refs command is explicitly excluded from caching. The reason is that ls-refs is used to list references and its request body is essentially static (just a capability advertisement). If we cached it with a 30-day TTL, a push to the repository would not invalidate the cache — the key would be the same — and clients would get stale ref listings. Fetch bodies, on the other hand, encode exactly the SHAs the client wants and already has. The same set of want/have lines always maps to the same pack, which makes them safe to cache for a long time.

The v2: prefix is a cache version string. It makes it straightforward to invalidate all existing cache entries if we ever need to change the key scheme, without touching Fastly’s purge API.

The POST-to-GET conversion

This is probably the most unusual part of the design:

```lua
-- Carry the POST body as a base64 header and convert to GET so that
-- Fastly's intra-POP consistent hashing routes identical cache keys
-- to the same server (Fastly only does this for GET, not POST).
ngx.req.set_header("X-Git-Original-Body", ngx.encode_base64(body))
ngx.req.set_method(ngx.HTTP_GET)
ngx.req.set_body_data("")
return ngx.exec("/cdn-origin" .. uri)
```

Fastly’s shield feature routes cache misses through a designated intra-POP “shield” node before going to origin. When two different edge nodes both get a MISS for the same cache key simultaneously, the shield node collapses them into a single origin request. This is important for us because without it, a burst of CI jobs fetching the same commit would all miss, all go to origin in parallel, and GitLab would end up generating the same pack multiple times anyway.

The catch is that Fastly’s consistent hashing and shield routing only work for GET requests. POST requests always go straight to origin. Fastly does provide a way to force POST responses into the cache — by returning pass in vcl_recv and setting beresp.cacheable in vcl_fetch — but it is a blunt instrument: there is no consistent hashing, no shield collapsing, and no guarantee that two nodes in the same POP will ever share the cached result. By converting the POST to a GET and encoding the body in a header, we get consistent hashing and shield-level request collapsing for free.

The VCL on the Fastly side uses the X-Git-Cache-Key header (not the URL or method) as the cache key, so the GET conversion is invisible to the caching logic.

Protecting private repositories

We cannot route private repository traffic through an external CDN — that would mean sending authenticated git content to a third-party cache. The way we prevent this is a denylist stored in Valkey. Before doing anything else, the Lua script checks whether the repository is listed there:

```lua
local denied, err = redis_helper.is_denied(redis_host, redis_port, repo_path)
if err then
    ngx.log(ngx.ERR, "git-cache: Redis error for ", repo_path, ": ", err,
            " — cannot verify project visibility, bypassing CDN")
    ngx.header["X-Git-Cache-Status"] = "BYPASS"
    return
end

if denied then
    ngx.header["X-Git-Cache-Status"] = "BYPASS"
    ngx.header["X-Git-Body-Hash"] = body_hash:sub(1, 12)
    return
end

-- Public/internal repo: strip credentials before routing through CDN
ngx.req.clear_header("Authorization")
```

If Valkey is unreachable, the script logs an error and bypasses the CDN entirely, treating the repository as if it were private. This is the safe default: the cost of a Redis failure is slightly increased load on GitLab, not the risk of routing private repository content through an external cache. In practice, Valkey runs alongside Nginx on the same node, so true availability failures are uncommon.

The denylist is maintained by gitlab-git-cache-webhook, a small FastAPI service. It listens for GitLab system hooks on project_create and project_update events:

```python
HANDLED_EVENTS = {"project_create", "project_update"}

@router.post("/webhook")
async def webhook(request: Request, ...) -> Response:
    ...
    event = body.get("event_name", "")
    if event not in HANDLED_EVENTS:
        return Response(status_code=204)

    project = body.get("project", {})
    path = project.get("path_with_namespace", "")
    visibility_level = project.get("visibility_level")

    if visibility_level == 0:
        await deny_repo(path)
    else:
        removed = await allow_repo(path)

    return Response(status_code=204)
```

GitLab’s visibility_level is 0 for private, 10 for internal, and 20 for public. Internal repositories are intentionally treated the same as public ones here: they are accessible to any authenticated user on the instance, so routing them through the CDN is acceptable. Only truly private repositories go into the denylist.

The key format in Valkey is git:deny:<path_with_namespace>. The Lua redis_helper module does an EXISTS check on that key. The webhook service also ships a reconciliation command (python -m app.reconcile) that does a full resync of all private repositories via the GitLab API, which is useful to run on first deployment or after any extended Valkey downtime.
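The contract between the webhook and the Lua check is small enough to sketch in a few lines of Python. This uses an in-memory set as a stand-in for Valkey (the real helper does an EXISTS on the same key format; function names here are illustrative):

```python
# In-memory stand-in for the Valkey denylist used by the Lua helper.
DENY_PREFIX = "git:deny:"
store: set[str] = set()

def deny_repo(path: str) -> None:
    """Called when a project becomes private (visibility_level == 0)."""
    store.add(DENY_PREFIX + path)

def allow_repo(path: str) -> None:
    """Called when a project becomes internal or public."""
    store.discard(DENY_PREFIX + path)

def is_denied(path: str) -> bool:
    """What the Lua redis_helper's EXISTS check amounts to."""
    return (DENY_PREFIX + path) in store

deny_repo("Example/private-repo")
assert is_denied("Example/private-repo")     # request bypasses the CDN
allow_repo("Example/private-repo")
assert not is_denied("Example/private-repo") # request is cacheable again
```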

The Fastly VCL

On the Fastly side, three VCL subroutines carry the relevant logic. In vcl_recv:

```vcl
if (req.url ~ "/info/refs") {
    return(pass);
}

if (req.http.X-Git-Cache-Key) {
    set req.backend = F_Host_1;
    if (req.restarts == 0) {
        set req.backend = fastly.try_select_shield(ssl_shield_iad_va_us, F_Host_1);
    }
    return(lookup);
}
```

/info/refs is always passed through uncached — it is the capability advertisement step and caching it would cause problems with protocol negotiation. Requests carrying X-Git-Cache-Key get an explicit lookup directive and are routed through the shield. Everything else falls through to Fastly’s default behaviour.

In vcl_hash, the cache key overrides the default URL-based key:

```vcl
if (req.http.X-Git-Cache-Key) {
    set req.hash += req.http.X-Git-Cache-Key;
    return(hash);
}
```

And in vcl_fetch, responses are marked cacheable when they come back with a 200 and a non-empty body:

```vcl
if (req.http.X-Git-Cache-Key && beresp.status == 200) {
    if (beresp.http.Content-Length == "0") {
        set beresp.ttl = 0s;
        set beresp.cacheable = false;
        return(deliver);
    }

    set beresp.cacheable = true;
    set beresp.ttl = 30d;
    set beresp.http.X-Git-Cache-Key = req.http.X-Git-Cache-Key;

    unset beresp.http.Cache-Control;
    unset beresp.http.Pragma;
    unset beresp.http.Expires;
    unset beresp.http.Set-Cookie;

    return(deliver);
}
```

The 30-day TTL is deliberately long. Git pack data is content-addressed: a pack for a given set of want/have lines will always be the same. As long as the objects exist in the repository, the cached pack is valid. The only case where a cached pack could be wrong is if objects were deleted (force-push that drops history, for instance), which is rare and, on GNOME’s GitLab, made even rarer by the Gitaly custom hooks we run to prevent force-pushes and history rewrites on protected namespaces. In those cases the cache version prefix would force a key change rather than relying on TTL expiry.

Empty responses (Content-Length: 0) are explicitly not cached. GitLab can return an empty body in edge cases and caching that would break all subsequent fetches for that key.

Conclusions

The system has been running in production for a few days now and the cache hit rate on fetch traffic has been consistently high (over 80%). If something goes wrong with the cache layer, the worst case is that requests fall back to BYPASS and GitLab handles them directly, which is how things worked before. This also means we don’t redirect any traffic to github.com anymore.

That should be all for today, stay tuned!

Jussi Pakkanen: Multi merge sort, or when optimizations aren't

Planet GNOME - Fri, 17/04/2026 - 12:41pm

In our previous episode we wrote a merge sort implementation that runs a bit faster than the one in stdlibc++. The question then becomes: could it be made even faster? If you go through the relevant literature, one potential improvement is to do a multiway merge. That is, instead of merging two arrays into one, you merge four into one using, for example, a priority queue.

This seems like a slam dunk for performance.

  • Doubling the number of arrays to merge at a time halves the number of total passes needed
  • The priority queue has a known static maximum size, so it can be put on the stack, which is guaranteed to be in the cache all the time
  • Processing an element takes only log(#lists) comparisons
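The k-way merge being described can be sketched with a size-k heap. A Python sketch of the idea (not the author's C++ implementation, which is linked below):

```python
import heapq

def multiway_merge(lists):
    """Merge k sorted lists using a heap of at most k entries.

    Each heap entry is (value, list_index, element_index); every pop
    costs O(log k) comparisons per output element, versus one pass of
    a classic 2-way merge per doubling of the merge width.
    """
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        value, i, j = heapq.heappop(heap)
        out.append(value)
        if j + 1 < len(lists[i]):
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return out

print(multiway_merge([[1, 5, 9], [2, 6], [3, 7, 8], [4]]))
# → [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Python's standard library ships this exact primitive as heapq.merge; the sketch just makes the per-element heap traffic, which the post goes on to blame, explicit.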

Implementing multimerge was conceptually straightforward, but getting all the gritty details right took a fair bit of time. Once I got it working, the end result was slower. And not by a little, either, but more than 30% slower. Trying some optimizations made it a bit faster, but not noticeably so.

Why is this so? Maybe there are bugs that cause it to do extra work? Assuming that is not the case, what is actually going on? Measuring seems to indicate that a notable fraction of the runtime is spent in the priority queue code. Beyond that, the measurements told me very little.

The best hypothesis I could come up with has to do with the number of comparisons made. A classical merge sort does two if statements per output element. One determines which of the two lists has the smaller element at the front, and one checks whether removing the element exhausted the list. The former is basically random and the latter is always false except when the last element is processed. This amounts to 0.5 mispredicted branches per element per round.

A priority queue has to do a bunch more work to preserve the heap property. The first iteration needs to check the root and its two children. That's three value comparisons plus two checks of whether the children actually exist. Those are much less predictable than the comparisons in merge sort. Computers are really efficient at doing simple things, so it may be that the additional bookkeeping is expensive enough to negate the advantage of fewer rounds.

Or maybe it's something else. Who's to say? Certainly not me. If someone wants to play with the code, the implementation is here. I'll probably delete it at some point, as it does not really have any advantage over the regular merge sort.

EU Age Verification App Announced To Protect Children Online

Slashdot - Thu, 16/04/2026 - 6:00pm
The EU says a new age-verification app is technically ready and could let users prove they are old enough to access restricted online content without revealing their identity or personal data. Deutsche Welle reports: Once released, users will be able to download the app from an app store and set it up using proof of identity, such as a passport or national ID card. They can then use it to confirm they are above a certain age when accessing restricted content, without revealing their identity. According to the Commission, the system is similar to the digital certificates used during the COVID-19 pandemic, which allowed people to prove their vaccination status. The app is expected to support enforcement of the bloc's Digital Services Act, which aims to better regulate online platforms. This includes restricting access to content such as pornography, gambling and alcohol-related services. Officials say the app will be "completely anonymous" and built on open-source technology, meaning it could also be adopted outside the EU. [...] While there is no binding EU-wide law yet, the European Parliament has called for a minimum age of 16 for social media access. For now, enforcement would largely fall to individual member states, but the new app is intended to help platforms comply with future national and EU rules.

Read more of this story at Slashdot.

Researchers Induce Smells With Ultrasound, No Chemical Cartridges Required

Slashdot - Thu, 16/04/2026 - 5:00pm
An anonymous reader quotes a report from UploadVR: A group of independent researchers built a device that can artificially induce smell using ultrasound, with no consumable cartridges required. [...] The team of four are Lev Chizhov, Albert Yan-Huang, Thomas Ribeiro, Aayush Gupta. Chizhov is a neurotech entrepreneur with a background in math and physics, Yan-Huang is a researcher at Caltech with a background in computation and neural systems, and Ribeiro and Gupta are co-researchers on the project with software engineering and AI expertise. Instead of targeting your nose at all, the device directly targets the olfactory bulb in your brain with "focused ultrasound through the skull." The researchers say that as far as they're aware, no one has ever done this before, even in animals. A challenge in targeting the olfactory bulb is that it's buried behind the top of your nose, and your nose doesn't provide a flat surface for an emitter. Ultrasound also doesn't travel well through air. The solution the researchers came up with was to place the emitter on your forehead instead, with a "solid, jello-like pad for stability and general comfort," and the ultrasound directed downward towards the olfactory bulb. To determine the best placement, they say they used an MRI of one of their skulls to "roughly determine where the transducer would point and how the focal region (where ultrasound waves actually concentrate) aligned with the olfactory bulb (the target for stimulation)". [...] According to the researchers, they were able to induce the sensation of fresh air "with a lot of oxygen", the smell of garbage "like few-day-old fruit peels," an ozone-like sensation "like you're next to an air ionizer," and a campfire smell of burning wood. While technically head-mounted, the current device does require being held up with two hands. But as with all such prototypes, it likely could be significantly miniaturized.

Read more of this story at Slashdot.

next-20260416: linux-next

Kernel Linux - Thu, 16/04/2026 - 3:55pm
Version: next-20260416 (linux-next)
Released: 2026-04-16

Bullet Train Upgrade Brings 5G Windows, Noise-Cancelling Cabins To Japan

Slashdot - Thu, 16/04/2026 - 1:00pm
Some Japanese bullet trains will soon support premium private suites this October, featuring windows with embedded 5G antennas for steadier onboard Wi-Fi and NTT noise-cancelling cabin tech to reduce train noise. The 5G window antennas are designed to maintain line-of-sight connections as trains race past base stations at up to 285 km/h. The Register reports: Rail operator JR Central announced the new tech late last month and will initially deploy a couple of the suites on six trains. The carrier explained that the antennas come from a Japanese company called AGC that weaves microscopic wires through glass to form an antenna. JR Central will connect the windows to an on-train Wi-Fi router. AGC says rival tech relies on 5G signals reaching a train and then bouncing around inside before reaching the Wi-Fi unit. The company says antennas woven into train windows maintain line of sight to nearby 5G base stations. That matters because JR Central's Shinkansen can achieve speeds of up to 285 km/h, which means they speed past cellular network base stations so quickly that it's frequently necessary to reconnect to another radio. AGC says keeping a line of sight connection means its antennas allow increased 5G signal strength, so Wi-Fi service on board trains should be more stable and speedy. The sound-deadening kit JR Central will deploy is called Personalized Sound Zone (PSZ) and comes from Japan's tech giant NTT. The tech uses the same principles applied to noise-cancelling headphones -- determine the waveform of sound and project an inversion of that waveform that cancels out ambient noise.

Read more of this story at Slashdot.

Thibault Martin: TIL that Pagefind does great client-side search

Planet GNOME - Thu, 16/04/2026 - 12:00pm

I post more and more content on my website. What was once visible at a glance is now harder to find. I wanted to implement search, but mine is a static website. That means everything is built once, then published somewhere as final, immutable pages. I can't send a search request to a server and get results back.

Or that's what I thought! Pagefind is a neat JavaScript library that does two things:

  1. It produces an index of the content right after building the static site.
  2. It provides two web components to insert in my pages: <pagefind-modal>, which is the search modal itself, hidden by default, and <pagefind-modal-trigger>, which looks like a search field and opens the modal.

The pagefind-modal component looks up the index as the user types a query. The index is a static file, so there is no need for a backend to process queries. Of course this only works for basic queries, but it's a great tool already!

Pagefind is also easy to customize via a list of CSS variables. Adding it to this website was very straightforward.
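The core idea, an index generated once at build time and queried entirely in the browser, can be sketched independently of Pagefind. Pagefind's real index is chunked and compressed so only fragments are fetched per query; the toy inverted index below is illustrative only, and none of its names come from Pagefind's API.

```python
import json
import re

def build_index(pages):
    """Build-time step: map each word to the list of page URLs containing it."""
    index = {}
    for url, text in pages.items():
        for word in set(re.findall(r"[a-z0-9]+", text.lower())):
            index.setdefault(word, []).append(url)
    return index

def search(index, query):
    """Client-side step: pure lookups against the prebuilt index, no backend."""
    hits = [set(index.get(w, [])) for w in re.findall(r"[a-z0-9]+", query.lower())]
    return sorted(set.intersection(*hits)) if hits else []

pages = {
    "/posts/search.html": "Adding client-side search to a static website",
    "/posts/css.html": "Customizing components with CSS variables",
}

# The index is serialized at build time and shipped as just another static file.
blob = json.dumps(build_index(pages))

# In the browser, the search component fetches the blob and queries it locally.
index = json.loads(blob)
print(search(index, "static search"))  # ['/posts/search.html']
```

The trade-off is exactly the one the post notes: with no server to interpret the query, only simple matching is possible, but for a personal site that is plenty.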

UK Households To Be Urged To Use More Power This Summer As Renewables Soar

Slashdot - Thu, 16/04/2026 - 9:00am
Longtime Slashdot reader AmiMoJo shares a report from the Guardian: Households will be called on to boost their consumption of Great Britain's record renewable energy this summer to help balance the power grid and lower energy bills. Under the new plans, people could be encouraged to run dishwashers and washing machines or charge up their electric vehicles when there is more wind and solar power than the electricity grid needs. The plan will be delivered with the help of energy suppliers, which may choose to offer heavily discounted or free electricity to their customers during specific periods when the energy system operator predicts there will be a surplus of electricity. Many suppliers already offer more than 2 million households the opportunity to pay lower rates for electricity used during off-peak hours but this will be the first time that the system operator will use this tool to help balance the grid. The National Energy System Operator (Neso) hopes that by issuing a market notice to call on energy users to increase their consumption it can avoid making hefty payments to turn wind and solar farms off when demand for electricity is low, which are ultimately paid for through energy bills.

Read more of this story at Slashdot.

Nature Is Still Molding Human Genes, Study Finds

Slashdot - Thu, 16/04/2026 - 5:30am
An anonymous reader quotes a report from the New York Times: Many scientists have contended that humans have evolved very little over the past 10,000 years. A few hundred generations was just a blink of the evolutionary eye, it seemed. Besides, our cultural evolution -- our technology, agriculture and the rest -- must have overwhelmed our biological evolution by now. A vast study, published on Wednesday in the journal Nature, suggests the opposite. Examining DNA from 15,836 ancient human remains, scientists found 479 genetic variants that appeared to have been favored by natural selection in just the past 10,000 years. The researchers also concluded that thousands of additional genetic variants have probably experienced natural selection. Before the new study, scientists had identified only a few dozen variants. "There are so many of them that it's hard to wrap one's mind around them," said David Reich, a geneticist at Harvard Medical School and an author of the new study. He and his colleagues found that a mutation that is a major risk factor for celiac disease, for example, appeared just 4,000 years ago, meaning the condition may be younger than the Egyptian pyramids. The mutation became ever more common. Today, an estimated 80 million people worldwide have celiac disease, in which the immune system attacks gluten and damages the intestines. The steady rise of the mutation came about through natural selection, the scientists argue. For some reason, people with the mutation had more descendants than people without it -- even though it put them at risk of an autoimmune disorder. Other findings are even more puzzling. The researchers found that genetic variants that raise the odds of a smoking habit have been getting steadily rarer in Europe for the past 10,000 years. Something is working against those variants -- but it can't be the harm from smoking. Europeans have been smoking tobacco for only about 460 years. 
The scientists can't see from their research so far what forces might be making these variants more or less common. "My short answer is, I don't know," said Ali Akbari, a senior staff scientist at Harvard and an author of the study. The researchers also found that some variants, like the one linked to Type B blood, became much more common in Europe around 6,000 years ago, while others changed direction over time. For example, a TYK2 immune gene variant that may have once been beneficial later became harmful because it increased tuberculosis risk. The study also found signs of natural selection in 44 out of 563 traits. Variants linked to Type 2 diabetes, wider waists, and higher body fat have become less common, possibly because farming and carbohydrate-heavy diets made once-useful fat-storing traits more harmful. Other findings, such as selection favoring genes linked to more years of schooling, are harder to interpret.

Read more of this story at Slashdot.

Boston Dynamics' Robot Dog Can Now Read Gauges, Spot Spills, and Reason

Slashdot - Thu, 16/04/2026 - 1:00am
Boston Dynamics has integrated Google DeepMind into its robotic dog Spot, giving it more autonomous reasoning for industrial inspections like spotting spills and reading gauges. Spot can also now recognize when to call on other AI tools. IEEE Spectrum reports: Boston Dynamics is one of the few companies to commercially deploy legged robots at any appreciable scale; there are now several thousand hard at work. Today the company is announcing that its quadruped robot Spot is now equipped with Google DeepMind's Gemini Robotics-ER 1.6, a high-level embodied reasoning model that brings usability and intelligence to complex tasks. [T]he focus of this partnership is on one of the very few applications where legged robots have proven themselves to be commercially viable: inspection. That is, wandering around industrial facilities, checking to make sure that nothing is imminently exploding. With the new AI onboard, Spot is now able to autonomously look for dangerous debris or spills, read complex gauges and sight glasses, and call on tools like vision-language-action models when it needs help understanding what's going on in the environment around it. "Advances like Gemini Robotics-ER 1.6 mark an important step toward robots that can better understand and operate in the physical world," Marco da Silva, vice president and general manager of Spot at Boston Dynamics, says in a press release. "Capabilities like instrument reading and more reliable task reasoning will enable Spot to see, understand, and react to real-world challenges completely autonomously." You can watch a demo of Spot's new capabilities on YouTube.

Read more of this story at Slashdot.

US Jobs Too Important To Risk Chinese Car Imports, Says Ford CEO

Slashdot - Thu, 16/04/2026 - 12:00am
In an interview with Fox News, Ford CEO Jim Farley warned that allowing Chinese vehicle imports could put nearly a million U.S. jobs at risk. He said China's heavily subsidized auto industry has enough excess capacity to supply the entire U.S. market, while also raising serious cybersecurity concerns given how much data modern connected cars collect. Ars Technica reports: "First of all, the Chinese have huge direct support for their auto companies," Farley said, while noting that China has the ability to build an additional 21 million vehicles a year on top of the 29 million that are expected to roll off Chinese production lines in 2026. "They have enough capacity in China to cover all the manufacturing, all the vehicle sales in the United States," Farley said. "Manufacturing is the heart and soul of our country, and for us to lose those exports would be devastating for our country," he continued, before pointing out the cybersecurity worries about Chinese cars. "All the vehicles have 10 cameras. They can collect a lot of data," he said. Farley has praised Chinese EVs like the Xiaomi SU7, even going on podcasts to sing its praises. But he believes Ford's forthcoming affordable Kentucky-built EVs, due to start hitting dealerships next year, have what it takes to be competitive. When asked about new car prices rising an average of 2 percent last year, Farley repeatedly said that Ford had "worked with the administration" so that there's "essentially no big impact" of the Trump tariffs. The CEO justified the rising costs by pointing to the F-150's sales as proof of its value.

Read more of this story at Slashdot.

Cal.com Is Going Closed Source Because of AI

Slashdot - Wed, 15/04/2026 - 11:00pm
Cal is moving its flagship scheduling software from open source to a proprietary license, arguing that AI coding tools now make it much easier for attackers to scan public codebases for vulnerabilities. "Open source security always relied on people to find and fix any problems," said Peer Richelsen, co-founder of Cal. "Now AI attackers are flaunting that transparency." CEO Bailey Pumfleet added: "Open-source code is basically like handing out the blueprint to a bank vault. And now there are 100x more hackers studying the blueprint." The company says it still supports open source and is releasing a separate Cal.diy version for hobbyists, but doesn't want to risk customer booking data in its commercial product. ZDNet reports: When Cal was founded in 2022, Bailey Pumfleet, the CEO and co-founder, wrote, "Cal.com would be an open-source project [because] limitations of existing scheduling products could only be solved by open source." Since Cal was successful and now claims to be the largest Next.js project, he was on to something. Today, however, Pumfleet tells me that AI programs such as "Claude Opus can scour the code to find vulnerabilities," so the company is moving the project from the GNU Affero General Public License (AGPL) to a proprietary license to defend the program's security. [...] Cal also quoted Huzaifa Ahmad, CEO of Hex Security, "Open-source applications are 5-10x easier to exploit than closed-source ones. The result, where Cal sits, is a fundamental shift in the software economy. Companies with open code will be forced to risk customer data or close public access to their code." "We are committed to protecting sensitive data," Pumfleet said. "We want to be a scheduling company, not a cybersecurity company." He added, "Cal.com handles sensitive booking data for our users. We won't risk that for our love of open source." While its commercial program is no longer open source, Cal has released Cal.diy. 
This is a fully open-source version of its platform for hobbyists. The open project will enable experimentation outside the closed application that handles high-stakes data. Pumfleet concluded, "This decision is entirely around the vulnerability that open source introduces. We still firmly love open source, and if the situation were to change, we'd open source again. It's just that right now, we can't risk the customer data."

Read more of this story at Slashdot.

Live Nation Illegally Monopolized Ticketing Market, Jury Finds

Slashdot - Wed, 15/04/2026 - 10:00pm
A Manhattan federal jury found that Live Nation and Ticketmaster illegally maintained monopoly power in the ticketing market. The findings follow an antitrust case brought by states after a separate DOJ settlement. CNN reports: The verdict was reached following a lengthy trial in New York federal court that included testimony from top executives in the music and entertainment industries. Jurors began deliberating on Friday. The Justice Department and 39 state attorneys general, including California and New York, and Washington, DC, sued Live Nation in 2024 alleging its combination with Ticketmaster and control of "virtually every aspect of the live music ecosystem" have harmed fans, artists, and venues. During the second week of trial, in a move that surprised even the judge, the Justice Department reached a secret settlement with Live Nation. A handful of states signed onto the deal, but more than two dozen proceeded to trial. Under the DOJ deal, Live Nation agreed to allow competitors, like SeatGeek or StubHub, to offer tickets to its events, cap ticketing service fees at 15%, and divest exclusive booking agreements with 13 amphitheaters. The deal includes a $280 million settlement fund for state damages claims for the handful of states that signed onto the deal. The DOJ settlement requires the judge's approval.

Read more of this story at Slashdot.

Anna's Archive Loses $322 Million Spotify Piracy Case Without a Fight

Slashdot - Wed, 15/04/2026 - 9:00pm
An anonymous reader quotes a report from TorrentFreak: Spotify and several major record labels, including UMG, Sony, and Warner, secured a $322 million default judgment against the unknown operators of Anna's Archive. The shadow library failed to appear in court and briefly released millions of tracks that were scraped from Spotify via BitTorrent. In addition to the monetary penalty, a permanent injunction required domain registrars and other parties to suspend the site's domain names. [...] The music labels get the statutory maximum of $150,000 in damages for around 50 works. Spotify adds a DMCA circumvention claim of $2,500 for 120,000 music files, bringing the total to more than $322 million. The plaintiffs previously described their damages request as "extremely conservative." The DMCA claim is based only on the 120,000 files, not the full 2.8 million that were released. Had they applied the $2,500 rate to all released files, the damages figure would exceed $7 billion. Anna's Archive did not show up in court, and the operators of the site remain unidentified. The judgment attempts to address this directly, by ordering Anna's Archive to file a compliance report within ten business days, under penalty of perjury, that includes valid contact information for the site and its managing agents. Whether the site will comply with this order is highly uncertain. For now, the monetary judgment is mostly a victory on paper, as recouping money from an unknown entity is impossible. For this reason, the music companies also requested a permanent injunction. In addition to the damages award, [Judge Jed Rakoff] entered a permanent worldwide injunction covering ten Anna's Archive domains: annas-archive.org, .li, .se, .in, .pm, .gl, .ch, .pk, .gd, and .vg. 
Domain registries and registrars of record, along with hosting and internet service providers, are ordered to permanently disable access to those domains, disable authoritative nameservers, cease hosting services, and preserve evidence that could identify the site's operators. The judgment names specific third parties bound by those obligations, including Public Interest Registry, Cloudflare, Switch Foundation, The Swedish Internet Foundation, Njalla SRL, IQWeb FZ-LLC, Immaterialism Ltd., Hosting Concepts B.V., Tucows Domains Inc., and OwnRegistrar, Inc. Anna's Archive is also ordered to destroy all copies of works scraped from Spotify and to file a compliance report within ten business days, under penalty of perjury, including valid contact information for the site and its managing agents. That last requirement could prove significant, given that the identity of the site's operators remains unknown.

Read more of this story at Slashdot.

Snapchat Blames AI As It Cuts 1,000 Jobs

Slashdot - Wed, 15/04/2026 - 8:00pm
Snap is laying off about 1,000 employees, or 16% of its workforce, while closing 300 open roles as it tries to cut costs and push toward profitability with more AI-driven efficiency. "While these changes are necessary to realize Snap's long-term potential, we believe that rapid advancements in artificial intelligence enable our teams to reduce repetitive work, increase velocity, and better support our community, partners, and advertisers," CEO Evan Spiegel wrote in a memo, which was included in the company's 8-K filing (PDF). "We have already witnessed small squads leveraging AI tools to drive meaningful progress across several important initiatives." The Verge reports: The changes are expected to save Snap $500 million by the second half of 2026. Snap had about 5,261 full-time employees as of December 2025, and now joins the growing list of tech companies that have already announced significant layoffs this year, including Meta, Amazon, Oracle, GoPro, and Jack Dorsey's Block. "Last fall, I described Snap as facing a crucible moment, requiring a new way of working that is faster and more efficient, while pivoting towards profitable growth," Spiegel wrote. "Over the past several months, we have carefully reviewed the work required to best serve our community and partners, and made tough choices to prioritize the investments we believe are most likely to create long-term value."

Read more of this story at Slashdot.

Struggling Shoe Retailer Allbirds Pivots To AI, Stock Explodes More Than 700%

Slashdot - Wed, 15/04/2026 - 7:00pm
Allbirds made a surprise announcement this morning: it's pivoting from sustainable shoes to AI compute infrastructure, rebranding as NewBird AI after selling its brand assets and closing its U.S. full-price stores. The move sent shares soaring more than 700%. CNBC reports: The move boosted shares of the minuscule market cap company -- it was valued at about $21 million at Tuesday's close -- by more than 700%. The shares, which were under $3 a day ago, jumped to above $17. [...] The new company, which expects to be called NewBird AI, announced a deal to raise up to $50 million in funding, expected to close in the second quarter of 2026. Allbirds announced a deal with American Exchange Group to sell its intellectual property and other assets for $39 million last month. "The Company will initially seek to acquire high-performance, low-latency AI compute hardware and provide access under long-term lease arrangements, meeting customer demand that spot markets and hyperscalers are unable to reliably service," the company said in the announcement.

Read more of this story at Slashdot.

Kubernetes Container Security Misconfigurations Leading to Threats

LinuxSecurity.com - Wed, 15/04/2026 - 6:00pm
Container security failures rarely come from zero-days. They come from the configuration. Misconfigurations don't trigger alerts. They don't crash systems. Most of the time, they sit quietly in production until something starts probing from the outside or moving laterally from the inside.
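A few of the quiet misconfigurations described above can be caught with a simple static check over a container spec before it ever reaches production. The field names below (`privileged`, `runAsUser`, `runAsNonRoot`, `readOnlyRootFilesystem`, `allowPrivilegeEscalation`) are real Kubernetes `securityContext` fields, but this rule set is a minimal illustrative sketch, not an exhaustive audit.

```python
def audit_container(container):
    """Flag common risky securityContext settings in a Kubernetes container spec."""
    findings = []
    sc = container.get("securityContext", {})
    if sc.get("privileged"):
        findings.append("privileged: container gets near-host-level access")
    if sc.get("runAsUser", 0) == 0 and not sc.get("runAsNonRoot"):
        findings.append("container may run as root (no runAsNonRoot guard)")
    if not sc.get("readOnlyRootFilesystem"):
        findings.append("writable root filesystem eases persistence after compromise")
    if sc.get("allowPrivilegeEscalation", True):
        findings.append("privilege escalation not explicitly disabled")
    return findings

# A spec like this deploys without errors or alerts -- the quiet kind of failure.
container = {"name": "web", "image": "nginx", "securityContext": {"privileged": True}}
for finding in audit_container(container):
    print(finding)
```

Dedicated policy engines do this at admission time; the point of the sketch is that every one of these findings is invisible at runtime until someone probes for it.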

Rivian's Illinois Factory Will Run On Recycled EV Batteries

Slashdot - Wed, 15/04/2026 - 6:00pm
An anonymous reader quotes a report from the Wall Street Journal: Rivian is joining with Redwood Materials to reuse EV batteries for energy storage -- the largest repurposed-battery energy storage system for an automotive manufacturer in the U.S., executives told The Wall Street Journal. Redwood Materials is a battery-recycling firm started by Tesla co-founder JB Straubel. Once completed later this year, Rivian's plant in Normal, Ill., will draw electricity from more than 100 Rivian EV batteries in an area the size of a small parking lot. It will reduce Rivian's dependence on the power grid during peak demand hours. "It saves Rivian money on what it takes to run the plant. It reduces the demand on the grid, which is great," Rivian Chief Executive Officer RJ Scaringe said in an interview. In the Rivian project, the batteries will come from either its test vehicles or from vehicles that have viable batteries but can no longer drive. Those batteries get sent off to Redwood, which integrates them into power storage units. Both companies declined to specify the cost of this project. The setup is expected to initially provide 10 megawatt-hours of energy, equivalent to about 1,000 home-energy battery storage units linked together, Redwood's Straubel said. "These batteries are already built," he said. "We need to integrate them and connect them together, but that can happen quite fast. They don't have to get imported from some other place." [...] Scaringe said that while branching into battery energy storage systems is "not a focus for us as a business right now," Rivian hopes to do more at its sites with Redwood. "There's hopefully a lot more, and there's going to be a lot of batteries we'll have access to," he said.

Read more of this story at Slashdot.

Norway Man Cured of HIV With Brother's Stem Cells

Slashdot - Wed, 15/04/2026 - 5:00pm
A 63-year-old man in Norway appears to be cured of HIV after receiving a stem cell transplant from his brother, who turned out to have a rare mutation that makes immune cells resistant to HIV. "Four years after the transplant, and two years after the man stopped antiretroviral therapy, he still appears to be free of the infection," reports Gizmodo. From the report: According to the report, the man was first diagnosed with myelodysplastic syndrome, a type of cancer that weakens blood cell production from bone marrow, in 2018. Though he seemed to initially respond to treatment, the cancer returned after two years, and doctors decided to perform a stem cell transplant. Because the man also had HIV (diagnosed in 2006), the doctors were hoping to treat both conditions at once, though they knew their chances were low. Most of these cases have involved the use of stem cells taken from people with two copies of a particular mutation in their CCR5 gene, which encodes the CCR5 receptor on white blood cells. This mutation, named CCR5-delta 32, makes immune cells naturally resistant to infection from strains of HIV-1 (the most common type of the virus). However, only about 1% of the population carries two copies of the mutation. After initial screening failed to find someone who both possessed the mutation and had compatible bone marrow, the doctors decided to move ahead with the man's brother, who was already known to have compatible bone marrow. But to everyone's surprise, testing on the day of the transplant showed that the brother also had the mutation. Though the man did experience some complications from the procedure, his body successfully started to produce new blood cells with the mutation. The doctors decided to take him off antiretroviral medication two years after the transplant. And in the two years since then, regular follow-up tests have failed to show any signs of the virus in his system. [...] 
According to AFP, there have only been roughly 10 cases worldwide involving an HIV cure through stem cell transplantation. This is the first to involve a family donor.

Read more of this story at Slashdot.
