
Feed aggregator

US To Create High-Tech Manufacturing Zone In Philippines

Slashdot - Fri, 17/04/2026 - 5:00pm
An anonymous reader quotes a report from the Wall Street Journal: An agreement with the Philippines to establish a high-tech industrial hub is the Trump administration's latest effort to lessen China's dominance over global supply chains. The deal to build up American manufacturing across a stretch of the island of Luzon, signed Thursday, will offer U.S. companies access to essential inputs such as critical minerals that bypass Beijing's control. The artificial-intelligence-powered manufacturing hub is planned for a 4,000-acre site given to the U.S. by Manila, said undersecretary of State for Economic Affairs Jacob Helberg. The U.S. will occupy the site rent-free and administer it as a special economic zone. The hub will have diplomatic immunity, such as the protections afforded to an American embassy, and operate under U.S. common law -- the first arrangement of its kind anywhere in the world. The two-year lease is renewable for 99 years. [...] "You can't build anything in Ohio if the minerals and the process materials are controlled by an adversary who can cut you off tomorrow," Helberg said in an interview. [...] The planned manufacturing hub is largely conceptual at this stage, and details, including which American companies will participate and just what they will build in the Philippines, are yet to be determined. [...] The administration will ask companies to put forward proposals to compete for a spot in building out the hub, giving priority to bids that will help move critical minerals processing and manufacturing off Chinese suppliers. Investment will have to come from private-sector companies -- not the U.S. government. Factories approved for operation in the hub will be highly automated, Helberg said, using autonomous systems to operate around the clock. The Philippines has a history of robust manufacturing, particularly in semiconductors, but that has stagnated in recent decades because of high energy and logistics costs. 
Companies will have to address in their proposals how they will contend with energy costs and workforce needs; they can send American workers overseas or hire locally, Helberg said.

Read more of this story at Slashdot.

Andrea Veri: GNOME GitLab Git traffic caching

Planet GNOME - Fri, 17/04/2026 - 4:00pm
Introduction

One of the most visible signs that GNOME’s infrastructure has grown over the years is the amount of CI traffic that flows through gitlab.gnome.org on any given day. Hundreds of pipelines run in parallel, most of them starting with a git clone or git fetch of the same repository, often at the same commit. All that traffic was landing directly on GitLab’s webservice pods, generating redundant load for work that was essentially identical.

GNOME’s infrastructure runs on AWS, which generously provides credits to the project. Even so, data transfer is one of the largest cost drivers we face, and we have to operate within a defined budget regardless of those credits. The bandwidth costs associated with this Git traffic grew significant enough that for a period of time we redirected unauthenticated HTTPS Git pulls to our GitHub mirrors as a short-term cost mitigation. That measure bought us some breathing room, but it was never meant to be permanent: sending users to a third-party platform for what is essentially a core infrastructure operation is not a position we wanted to stay in. The goal was always to find a proper solution on our own infrastructure.

This post documents the caching layer we built to address that problem. The solution sits between the client and GitLab, intercepts Git fetch traffic, and routes it through Fastly’s CDN so that repeated fetches of the same content are served from cache rather than generating a fresh pack every time.

The problem

The Git smart HTTP protocol uses two endpoints: info/refs for capability advertisement and ref discovery, and git-upload-pack for the actual pack generation. The second one is the expensive one. When a CI job runs git fetch origin main, GitLab has to compute and send the entire pack for that fetch negotiation. If ten jobs run the same fetch within a short window, GitLab does that work ten times.

The tricky part is that git-upload-pack is a POST request with a binary body that encodes what the client already has (have lines) and what it wants (want lines). Traditional HTTP caches ignore POST bodies entirely. Building a cache that actually understands those bodies and deduplicates identical fetches requires some work at the edge.

For a fresh clone the body contains only want lines — one per ref the client is requesting:

0032want 7d20e995c3c98644eb1c58a136628b12e9f00a78
0032want 93e944c9f728a4b9da506e622592e4e3688a805c
0032want ef2cbad5843a607236b45e5f50fa4318e0580e04
...

For an incremental fetch the body is a mix of want lines (what the client needs) and have lines (commits the client already has locally), which the server uses to compute the smallest possible packfile delta:

00a4want 51a117587524cbdd59e43567e6cbd5a76e6a39ff
0000
0032have 8282cff4b31dce12e100d4d6c78d30b1f4689dd3
0032have be83e3dae8265fdc4c91f11d5778b20ceb4e2479
0032have 7d46abdf9c5a3f119f645c8de6d87efffe3889b8
...

The leading four hex characters on each line are the pkt-line length prefix. The server walks back through history from the wanted commits until it finds a common ancestor with the have set, then packages everything in between into a packfile. Two CI jobs running the same pipeline at the same commit will produce byte-for-byte identical request bodies and therefore identical responses — exactly the property a cache can help with.
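The framing is simple enough to sketch. The following Python helpers are purely illustrative (the names `pkt_line` and `pkt_parse` are made up, not from Git or GitLab), but they show why the example lines carry a `0032` prefix: four bytes of hex length prefix plus the payload and its trailing newline.

```python
# Illustrative pkt-line framing: each line's 4-hex-digit prefix is the
# total length of the line, including the prefix itself and the LF.
# Hypothetical helper names, not part of any Git library.

def pkt_line(payload: str) -> str:
    """Frame one payload line; the trailing LF counts toward the length."""
    data = payload + "\n"
    return f"{len(data) + 4:04x}{data}"

def pkt_parse(stream: str) -> list[str]:
    """Split a stream of pkt-lines back into payloads ('0000' is a flush packet)."""
    out, i = [], 0
    while i < len(stream):
        length = int(stream[i:i + 4], 16)
        if length == 0:          # flush-pkt: delimiter with no payload
            out.append("")
            i += 4
            continue
        out.append(stream[i + 4:i + length].rstrip("\n"))
        i += length
    return out

line = pkt_line("want 7d20e995c3c98644eb1c58a136628b12e9f00a78")
print(line[:4])  # → 0032  (4 prefix + 45 payload + 1 newline = 50 = 0x32)
```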

Architecture overview

The overall setup involves four components:

  • OpenResty (Nginx + LuaJIT) running as a reverse proxy in front of GitLab’s webservice
  • Fastly acting as the CDN, with custom VCL to handle the non-standard caching behaviour
  • Valkey (a Redis-compatible store) holding the denylist of private repositories
  • gitlab-git-cache-webhook, a small Python/FastAPI service that keeps the denylist in sync with GitLab
flowchart TD
    client["Git client / CI runner"]
    gitlab_gnome["gitlab.gnome.org (Nginx reverse proxy)"]
    nginx["OpenResty Nginx"]
    lua["Lua: git_upload_pack.lua"]
    cdn_origin["/cdn-origin internal location"]
    fastly_cdn["Fastly CDN"]
    origin["gitlab.gnome.org via its origin (second pass)"]
    gitlab["GitLab webservice"]
    valkey["Valkey denylist"]
    webhook["gitlab-git-cache-webhook"]
    gitlab_events["GitLab project events"]

    client --> gitlab_gnome
    gitlab_gnome --> nginx
    nginx --> lua
    lua -- "check denylist" --> valkey
    lua -- "private repo: BYPASS" --> gitlab
    lua -- "public/internal: internal redirect" --> cdn_origin
    cdn_origin --> fastly_cdn
    fastly_cdn -- "HIT" --> cdn_origin
    fastly_cdn -- "MISS: origin fetch" --> origin
    origin --> gitlab
    gitlab_events --> webhook
    webhook -- "SET/DEL git:deny:" --> valkey

The request path for a public or internal repository looks like this:

  1. The Git client runs git fetch or git clone. Git’s smart HTTP protocol translates this into two HTTP requests: a GET /Namespace/Project.git/info/refs?service=git-upload-pack for ref discovery, followed by a POST /Namespace/Project.git/git-upload-pack carrying the negotiation body. It is that second request — the expensive pack-generating one — that the cache targets.
  2. It arrives at gitlab.gnome.org’s Nginx server, which acts as the reverse proxy in front of GitLab’s webservice.
  3. The git-upload-pack location runs a Lua script that parses the repo path, reads the request body, and SHA256-hashes it. The hash is the foundation of the cache key: because the body encodes the exact set of want and have SHAs the client is negotiating, two jobs fetching the same commit from the same repository will produce byte-for-byte identical bodies and therefore the same hash — making the cached packfile safe to reuse.
  4. Lua checks Valkey: is this repo in the denylist? If yes, the request is proxied directly to GitLab with no caching.
  5. For public/internal repos, Lua strips the Authorization header, builds a cache key, converts the POST to a GET, and does an internal redirect to /cdn-origin. The POST-to-GET conversion is necessary because Fastly does not apply consistent hashing to POST requests — each of the hundreds of nodes within a POP maintains its own independent cache storage, so the same POST request hitting different nodes will always be a miss. By converting to a GET, Fastly’s consistent hashing kicks in and routes requests with the same cache key to the same node, which means the cache is actually shared across all concurrent jobs hitting that POP.
  6. The /cdn-origin location proxies to the Fastly git cache CDN with the X-Git-Cache-Key header set.
  7. Fastly’s VCL sees the key and does a cache lookup. On a HIT it returns the cached pack. On a MISS it fetches from gitlab.gnome.org directly via its origin (bypassing the CDN to avoid a loop) — the same Nginx instance — and caches the response for 30 days.
  8. On that second pass (origin fetch), Nginx detects the X-Git-Cache-Internal header, decodes the original POST body from X-Git-Original-Body, restores the request method, and proxies to GitLab.
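Step 3 is the heart of the scheme: the cache key is a versioned repo path plus a SHA256 of the negotiation body. A minimal Python sketch of that derivation (the helper name is illustrative; the production logic lives in the nginx Lua script):

```python
# Sketch of the cache-key derivation: identical negotiation bodies hash
# to identical keys, so concurrent CI jobs share one cached packfile.
# build_cache_key is a hypothetical name; the "v2:" layout follows the post.
import hashlib

def build_cache_key(repo_path: str, body: bytes) -> str:
    body_hash = hashlib.sha256(body).hexdigest()
    return f"v2:{repo_path}:{body_hash}"

body = b"0032want 7d20e995c3c98644eb1c58a136628b12e9f00a78"
k1 = build_cache_key("GNOME/glib.git", body)
k2 = build_cache_key("GNOME/glib.git", body)
assert k1 == k2  # same body, same key: the second fetch is a cache hit
```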
The Nginx and Lua layer

The Nginx configuration exposes two relevant locations. The first is the internal one used for the CDN proxy leg:

location ^~ /cdn-origin/ {
    internal;
    rewrite ^/cdn-origin(/.*)$ $1 break;

    proxy_pass $cdn_upstream;
    proxy_ssl_server_name on;
    proxy_ssl_name <cdn-hostname>;
    proxy_set_header Host <cdn-hostname>;
    proxy_set_header Accept-Encoding "";
    proxy_http_version 1.1;
    proxy_buffering on;
    proxy_request_buffering on;
    proxy_connect_timeout 10s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;

    header_filter_by_lua_block {
        ngx.header["X-Git-Cache-Key"] = ngx.req.get_headers()["X-Git-Cache-Key"]
        ngx.header["X-Git-Body-Hash"] = ngx.req.get_headers()["X-Git-Body-Hash"]
        local xcache = ngx.header["X-Cache"] or ""
        if xcache:find("HIT") then
            ngx.header["X-Git-Cache-Status"] = "HIT"
        else
            ngx.header["X-Git-Cache-Status"] = "MISS"
        end
    }
}

The header_filter_by_lua_block here is doing something specific: it reads X-Cache from the response Fastly returns and translates it into a clean X-Git-Cache-Status header for observability. The X-Git-Cache-Key and X-Git-Body-Hash are also passed through so that callers can see what cache entry was involved.

The second location is git-upload-pack itself, which delegates all the logic to a Lua file:

location ~ /git-upload-pack$ {
    client_body_buffer_size 5m;
    client_max_body_size 5m;

    access_by_lua_file /etc/nginx/lua/git_upload_pack.lua;

    header_filter_by_lua_block {
        local key = ngx.req.get_headers()["X-Git-Cache-Key"]
        if key then
            ngx.header["X-Git-Cache-Key"] = key
        end
    }

    proxy_pass http://gitlab-webservice;
    proxy_http_version 1.1;
    proxy_set_header Host gitlab.gnome.org;
    proxy_set_header X-Real-IP $http_fastly_client_ip;
    proxy_set_header X-Forwarded-For $http_fastly_client_ip;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-Port 443;
    proxy_set_header X-Forwarded-Ssl on;
    proxy_set_header Connection "";
    proxy_buffering on;
    proxy_request_buffering on;
    proxy_connect_timeout 10s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;
}

The access_by_lua_file directive runs before the request is proxied. If the Lua script calls ngx.exec("/cdn-origin" .. uri), Nginx performs an internal redirect to the CDN location and the proxy_pass to GitLab is never reached. If the script returns normally (for private repos or non-fetch commands), the request falls through to the proxy_pass.

Building the cache key

The full Lua script that runs in access_by_lua_file handles both passes of the request. The first pass (client → nginx) does the heavy lifting:

local resty_sha256 = require("resty.sha256")
local resty_str = require("resty.string")
local redis_helper = require("redis_helper")

local redis_host = os.getenv("REDIS_HOST") or "localhost"
local redis_port = os.getenv("REDIS_PORT") or "6379"

-- Second pass: request arriving from CDN origin fetch.
-- Decode the original POST body from the header and restore the method.
if ngx.req.get_headers()["X-Git-Cache-Internal"] then
    local encoded_body = ngx.req.get_headers()["X-Git-Original-Body"]
    if encoded_body then
        ngx.req.read_body()
        local body = ngx.decode_base64(encoded_body)
        ngx.req.set_method(ngx.HTTP_POST)
        ngx.req.set_body_data(body)
        ngx.req.set_header("Content-Length", tostring(#body))
        ngx.req.clear_header("X-Git-Original-Body")
    end
    return
end

The second-pass guard is at the top of the script. When Fastly’s origin fetch arrives, it will carry X-Git-Cache-Internal: 1. The script detects that, reconstructs the POST body from the base64-encoded header, restores the POST method, and returns — allowing Nginx to proxy the real request to GitLab.

For the first pass, the script parses the repo path from the URI, reads and buffers the full request body, and computes a SHA256 over it:

-- Only cache "fetch" commands; ls-refs responses are small, fast, and
-- become stale on every push (the body hash is constant so a long TTL
-- would serve outdated ref listings).
if not body:find("command=fetch", 1, true) then
    ngx.header["X-Git-Cache-Status"] = "BYPASS"
    return
end

-- Hash the body
local sha256 = resty_sha256:new()
sha256:update(body)
local body_hash = resty_str.to_hex(sha256:final())

-- Build cache key: cache_versioning + repo path + body hash
local cache_key = "v2:" .. repo_path .. ":" .. body_hash

A few things worth noting here. The ls-refs command is explicitly excluded from caching. The reason is that ls-refs is used to list references and its request body is essentially static (just a capability advertisement). If we cached it with a 30-day TTL, a push to the repository would not invalidate the cache — the key would be the same — and clients would get stale ref listings. Fetch bodies, on the other hand, encode exactly the SHAs the client wants and already has. The same set of want/have lines always maps to the same pack, which makes them safe to cache for a long time.

The v2: prefix is a cache version string. It makes it straightforward to invalidate all existing cache entries if we ever need to change the key scheme, without touching Fastly’s purge API.

The POST-to-GET conversion

This is probably the most unusual part of the design:

-- Carry the POST body as a base64 header and convert to GET so that
-- Fastly's intra-POP consistent hashing routes identical cache keys
-- to the same server (Fastly only does this for GET, not POST).
ngx.req.set_header("X-Git-Original-Body", ngx.encode_base64(body))
ngx.req.set_method(ngx.HTTP_GET)
ngx.req.set_body_data("")

return ngx.exec("/cdn-origin" .. uri)

Fastly’s shield feature routes cache misses through a designated intra-POP “shield” node before going to origin. When two different edge nodes both get a MISS for the same cache key simultaneously, the shield node collapses them into a single origin request. This is important for us because without it, a burst of CI jobs fetching the same commit would all miss, all go to origin in parallel, and GitLab would end up generating the same pack multiple times anyway.

The catch is that Fastly’s consistent hashing and shield routing only work for GET requests. POST requests always go straight to origin. Fastly does provide a way to force POST responses into the cache — by returning pass in vcl_recv and setting beresp.cacheable in vcl_fetch — but it is a blunt instrument: there is no consistent hashing, no shield collapsing, and no guarantee that two nodes in the same POP will ever share the cached result. By converting the POST to a GET and encoding the body in a header, we get consistent hashing and shield-level request collapsing for free.
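The round trip can be sketched without any of the nginx machinery. This is an illustrative Python model, with plain dicts standing in for real request objects:

```python
# Sketch of the body-in-header round trip: the first pass base64-encodes
# the POST body into X-Git-Original-Body and converts the request to GET;
# the second pass (origin fetch) decodes it and restores the POST.
import base64

def first_pass(body: bytes) -> dict:
    # POST -> GET so Fastly's consistent hashing applies; body rides
    # along as a header for the second pass to restore.
    return {
        "method": "GET",
        "headers": {
            "X-Git-Cache-Internal": "1",
            "X-Git-Original-Body": base64.b64encode(body).decode(),
        },
        "body": b"",
    }

def second_pass(req: dict) -> dict:
    # origin fetch arrives back at Nginx: rebuild the original POST
    body = base64.b64decode(req["headers"]["X-Git-Original-Body"])
    return {"method": "POST", "body": body}

original = b"0032want 7d20e995c3c98644eb1c58a136628b12e9f00a78"
restored = second_pass(first_pass(original))
assert restored == {"method": "POST", "body": original}
```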

The VCL on the Fastly side uses the X-Git-Cache-Key header (not the URL or method) as the cache key, so the GET conversion is invisible to the caching logic.

Protecting private repositories

We cannot route private repository traffic through an external CDN — that would mean sending authenticated git content to a third-party cache. The way we prevent this is a denylist stored in Valkey. Before doing anything else, the Lua script checks whether the repository is listed there:

local denied, err = redis_helper.is_denied(redis_host, redis_port, repo_path)
if err then
    ngx.log(ngx.ERR, "git-cache: Redis error for ", repo_path, ": ", err,
            " — cannot verify project visibility, bypassing CDN")
    ngx.header["X-Git-Cache-Status"] = "BYPASS"
    return
end
if denied then
    ngx.header["X-Git-Cache-Status"] = "BYPASS"
    ngx.header["X-Git-Body-Hash"] = body_hash:sub(1, 12)
    return
end

-- Public/internal repo: strip credentials before routing through CDN
ngx.req.clear_header("Authorization")

If Valkey is unreachable, the script logs an error and bypasses the CDN entirely, treating the repository as if it were private. This is the safe default: the cost of a Redis failure is slightly increased load on GitLab, not the risk of routing private repository content through an external cache. In practice, Valkey runs alongside Nginx on the same node, so true availability failures are uncommon.
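The fail-closed behaviour is worth spelling out. Here is a hedged Python sketch of the decision logic only (`DenylistError`, `route`, and the callback shape are illustrative, not the real redis_helper API):

```python
# Fail-closed routing sketch: a denylist hit OR any error while checking
# the denylist both bypass the CDN, so unverified repository content
# never leaves the origin. Names here are hypothetical.

class DenylistError(Exception):
    """Stands in for any Redis/Valkey connectivity failure."""

def route(repo_path: str, is_denied) -> str:
    try:
        denied = is_denied(repo_path)
    except DenylistError:
        return "BYPASS"            # cannot verify visibility: safe default
    return "BYPASS" if denied else "CDN"

def broken(_path):
    raise DenylistError("connection refused")

assert route("GNOME/private.git", lambda p: True) == "BYPASS"   # denylisted
assert route("GNOME/glib.git", lambda p: False) == "CDN"        # public
assert route("GNOME/glib.git", broken) == "BYPASS"              # Valkey down
```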

The denylist is maintained by gitlab-git-cache-webhook, a small FastAPI service. It listens for GitLab system hooks on project_create and project_update events:

HANDLED_EVENTS = {"project_create", "project_update"}

@router.post("/webhook")
async def webhook(request: Request, ...) -> Response:
    ...
    event = body.get("event_name", "")
    if event not in HANDLED_EVENTS:
        return Response(status_code=204)

    project = body.get("project", {})
    path = project.get("path_with_namespace", "")
    visibility_level = project.get("visibility_level")

    if visibility_level == 0:
        await deny_repo(path)
    else:
        removed = await allow_repo(path)

    return Response(status_code=204)

GitLab’s visibility_level is 0 for private, 10 for internal, and 20 for public. Internal repositories are intentionally treated the same as public ones here: they are accessible to any authenticated user on the instance, so routing them through the CDN is acceptable. Only truly private repositories go into the denylist.

The key format in Valkey is git:deny:<path_with_namespace>. The Lua redis_helper module does an EXISTS check on that key. The webhook service also ships a reconciliation command (python -m app.reconcile) that does a full resync of all private repositories via the GitLab API, which is useful to run on first deployment or after any extended Valkey downtime.

The Fastly VCL

On the Fastly side, three VCL subroutines carry the relevant logic. In vcl_recv:

if (req.url ~ "/info/refs") {
    return(pass);
}

if (req.http.X-Git-Cache-Key) {
    set req.backend = F_Host_1;
    if (req.restarts == 0) {
        set req.backend = fastly.try_select_shield(ssl_shield_iad_va_us, F_Host_1);
    }
    return(lookup);
}

/info/refs is always passed through uncached — it is the capability advertisement step and caching it would cause problems with protocol negotiation. Requests carrying X-Git-Cache-Key get an explicit lookup directive and are routed through the shield. Everything else falls through to Fastly’s default behaviour.

In vcl_hash, the cache key overrides the default URL-based key:

if (req.http.X-Git-Cache-Key) {
    set req.hash += req.http.X-Git-Cache-Key;
    return(hash);
}

And in vcl_fetch, responses are marked cacheable when they come back with a 200 and a non-empty body:

if (req.http.X-Git-Cache-Key && beresp.status == 200) {
    if (beresp.http.Content-Length == "0") {
        set beresp.ttl = 0s;
        set beresp.cacheable = false;
        return(deliver);
    }
    set beresp.cacheable = true;
    set beresp.ttl = 30d;
    set beresp.http.X-Git-Cache-Key = req.http.X-Git-Cache-Key;
    unset beresp.http.Cache-Control;
    unset beresp.http.Pragma;
    unset beresp.http.Expires;
    unset beresp.http.Set-Cookie;
    return(deliver);
}

The 30-day TTL is deliberately long. Git pack data is content-addressed: a pack for a given set of want/have lines will always be the same. As long as the objects exist in the repository, the cached pack is valid. The only case where a cached pack could be wrong is if objects were deleted (force-push that drops history, for instance), which is rare and, on GNOME’s GitLab, made even rarer by the Gitaly custom hooks we run to prevent force-pushes and history rewrites on protected namespaces. In those cases the cache version prefix would force a key change rather than relying on TTL expiry.

Empty responses (Content-Length: 0) are explicitly not cached. GitLab can return an empty body in edge cases and caching that would break all subsequent fetches for that key.

Conclusions

The system has been running in production for a few days now and the cache hit rate on fetch traffic has been consistently high (over 80%). If something goes wrong with the cache layer, the worst case is that requests fall back to BYPASS and GitLab handles them directly, which is how things worked before. This also means we no longer redirect any traffic to github.com.

That should be all for today, stay tuned!

next-20260417: linux-next

Kernel Linux - Fri, 17/04/2026 - 2:44pm
Version: next-20260417 (linux-next)
Released: 2026-04-17

Reed Hastings Is Leaving Netflix After 29 Years

Slashdot - Fri, 17/04/2026 - 1:00pm
Reed Hastings is stepping down from Netflix's board in June, ending a 29-year run at the company he co-founded and helped transform from a DVD-by-mail business into a global streaming giant. Hastings said in a shareholder (PDF) letter that he's stepping down to focus on "his philanthropy and other pursuits." Engadget reports: Hastings has served as chairman of Netflix's board since 2023, a role he assumed after stepping down as co-CEO and promoting Greg Peters in his place. "Netflix changed my life in so many ways, and my all-time favorite memory was January 2016, when we enabled nearly the entire planet to enjoy our service," Hastings said in a statement. "My real contribution at Netflix wasn't a single decision; it was a focus on member joy, building a culture that others could inherit and improve, and building a company that could be both beloved by members and wildly successful for generations to come. A special thanks to Greg and Ted, whose commitment to Netflix's greatness is so strong that I can now focus on new things."


Jussi Pakkanen: Multi merge sort, or when optimizations aren't

Planet GNOME - Fri, 17/04/2026 - 12:41pm

In our previous episode we wrote a merge sort implementation that runs a bit faster than the one in stdlibc++. The question then becomes, could it be made even faster. If you go through the relevant literature one potential improvement is to do a multiway merge. That is, instead of merging two arrays into one, you merge four into one using, for example, a priority queue.

This seems like a slam dunk for performance.

  • Doubling the number of arrays to merge at a time halves the number of total passes needed
  • The priority queue has a known static maximum size, so it can be put on the stack, which is guaranteed to be in the cache all the time
  • Processing an element takes only log(#lists) comparisons
Implementing multimerge was conceptually straightforward but getting all the gritty details right took a fair bit of time. Once I got it working the end result was slower. And not by a little, either, but more than 30% slower. Trying some optimizations made it a bit faster but not noticeably so.
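For reference, the k-way merge idea can be sketched in a few lines of Python using a binary heap. This is the textbook version, not the C++ implementation discussed here:

```python
# k-way merge via a priority queue: pop the smallest front element,
# then push the next element from the same source list.
import heapq

def multi_merge(lists):
    # seed the heap with each non-empty list's front element;
    # (value, list_index, element_index) keeps ties deterministic
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        value, i, j = heapq.heappop(heap)
        out.append(value)
        if j + 1 < len(lists[i]):
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return out

merged = multi_merge([[1, 5, 9], [2, 6], [0, 7, 8], [3, 4]])
print(merged)  # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```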

Why is this so? Maybe there are bugs that cause it to do extra work? Assuming that is not the case, what is actually going on? Measuring seems to indicate that a notable fraction of the runtime is spent in the priority queue code. Beyond that, the measurements told me very little.

The best hypothesis I could come up with has to do with the number of comparisons made. A classical merge sort does two if statements per output element: one to determine which of the two lists has the smaller element at the front, and one to see whether removing the element exhausted the list. The former is basically random and the latter is always false except when the last element is processed. This amounts to 0.5 mispredicted branches per element per round.

A priority queue has to do a bunch more work to preserve the heap property. The first iteration needs to check the root and its two children. That's three value comparisons and two checks of whether the children actually exist. Those are much less predictable than the comparisons in merge sort. Computers are really efficient at doing simple things, so it may be that the additional bookkeeping is so expensive that it negates the advantage of fewer rounds.
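As a rough way to probe the hypothesis, one can count element comparisons under both strategies on the same data. This is an illustrative Python experiment, not the author's C++ benchmark, and the absolute numbers will differ from any real implementation:

```python
# Count element comparisons: a classical 2-way merge cascade versus a
# heap-based 4-way merge (heapq.merge) over the same four sorted runs.
import heapq
import random

counter = {"n": 0}

class Key:
    """Wrapper that counts every < comparison on its value."""
    __slots__ = ("v",)
    def __init__(self, v):
        self.v = v
    def __lt__(self, other):
        counter["n"] += 1
        return self.v < other.v

def merge2(a, b):
    # textbook two-way merge: one comparison per output element
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if b[j] < a[i]:
            out.append(b[j]); j += 1
        else:
            out.append(a[i]); i += 1
    return out + a[i:] + b[j:]

random.seed(1)
runs = [sorted((Key(random.random()) for _ in range(1000))) for _ in range(4)]

counter["n"] = 0
merge2(merge2(runs[0], runs[1]), merge2(runs[2], runs[3]))
two_way = counter["n"]

counter["n"] = 0
list(heapq.merge(*runs))
four_way = counter["n"]

print(two_way, four_way)  # compare the comparison counts yourself
```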

Or maybe it's something else. Who's to say? Certainly not me. If someone wants to play with the code, the implementation is here. I'll probably delete it at some point as it does not have really any advantage over the regular merge sort.

Zero Trust for Email: Implementing Advanced Protections on Linux

LinuxSecurity.com - Fri, 17/04/2026 - 11:01am
Email threats have long outgrown spam and obvious phishing. Attackers now exploit trust itself. They impersonate internal users, hijack legitimate threads, and abuse misconfigurations. Defenses like perimeter filtering or static rules are no longer adequate. A Zero Trust model reframes the problem by eliminating implicit trust at every phase of email processing. This shift is especially important in modern Linux mail environments, where services are often modular, network-exposed, and heavily dependent on correct configuration across multiple components.

Intel's New Core Series 3 Is Its Answer To the MacBook Neo

Slashdot - Fri, 17/04/2026 - 9:00am
Intel has launched a new budget-focused Core Series 3 processor line for lower-cost laptops -- "Intel's response to budget CPUs that are appearing in laptops like the Apple MacBook Neo," writes PCWorld's Mark Hachman. From the report: Intel unexpectedly launched the Core Series 3, based on its excellent "Panther Lake" (Core Ultra Series 3) architecture and 18A manufacturing, for devices for home consumers and small business on Thursday. Intel announced that a number of partners will launch laptops based upon the chip, including Acer, Asus, HP, Lenovo, and others. Although those laptops will be available beginning today, a number of them will begin shipping later this year, the partners said. All of it -- from the specifications down to the messaging -- feels extremely aimed at trimming the fat and delivering to users just what they'll want. Intel's new Core Series 3 family includes just two "Cougar Cove" performance cores and four low-power efficiency "Darkmont" cores, with two Xe graphics cores on top of it. Intel isn't really worrying about AI, with an NPU capable of just 17 TOPS, though the company claims the CPU, NPU, and GPU combined reach 40 TOPS of performance. Yes, laptops will use pricey DDR5 memory, but at the lower end: just DDR5-6400 speeds. Support for three external displays will be included, though, maximizing multiple screens for maximum productivity. Intel used the term "all day battery life" without elaboration. [...] Intel Core Series 3 delivers up to 47 percent better single-thread performance, up to 41 percent better multithreaded performance, and up to 2.8x better GPU AI performance, Intel said. Compared against Intel's older Core 7 150U, Intel is saying that the new chip will outperform it by 2.1 times in content creation and deliver 2.7 times the AI performance. [...] We still don't know what Intel will charge for the chip, nor do we know what you'll be able to buy a Core Series 3 laptop for.


Sperm Whales' Communication Closely Parallels Human Language, Study Finds

Slashdot - Fri, 17/04/2026 - 5:30am
An anonymous reader quotes a report from the Guardian: We may appear to have little in common with sperm whales – enormous, ocean-dwelling animals that last shared a common ancestor with humans more than 90 million years ago. But the whales' vocalized communications are remarkably similar to our own, researchers have discovered. Not only do sperm whales have a form of "alphabet" and form vowels within their vocalizations, but the structure of these vowels behaves in the same way as human speech, the new study has found. Sperm whales communicate in a series of short clicks called codas. Analysis of these clicks shows that the whales can differentiate vowels through short or elongated clicks or through rising or falling tones, using patterns similar to languages such as Mandarin, Latin and Slovenian. The structure of the whales' communication has "close parallels in the phonetics and phonology of human languages, suggesting independent evolution," the paper, published in the Proceedings B journal, states. Sperm whale coda vocalizations are "highly complex and represent one of the closest parallels to human phonology of any analyzed animal communication system," it added. [...] The new study shows that "sperm whale communication isn't just about patterns of clicks -- it involves multiple interacting layers of structure," said Mauricio Cantor, a behavioral ecologist at the Marine Mammal Institute who was not involved in the research. "With this study, we're starting to see that these signals are organized in ways we didn't fully appreciate before." The latest discovery around sperm whale speech has inched forward the possibility of someday fully understanding the creatures and even communicating with them. Project CETI has set a goal of being able to comprehend 20 different vocalized expressions, relating to actions such as diving and sleeping, within the next five years.
A future where we're able to fully understand what the whales are saying and be able to have a conversation with them is "totally within our grasp," said David Gruber, founder and president of Project CETI. "We've already got a lot further than I thought we could. But it will take time, and funding. At the moment we are like a two-year-old, just saying a few words. In a few years' time, maybe we will be more like a five-year-old."


'TotalRecall Reloaded' Tool Finds a Side Entrance To Windows 11 Recall Database

Slashdot - Fri, 17/04/2026 - 1:00am
An anonymous reader quotes a report from Ars Technica: Two years ago, Microsoft launched its first wave of "Copilot+" Windows PCs with a handful of exclusive features that could take advantage of the neural processing unit (NPU) hardware being built into newer laptop processors. These NPUs could enable AI and machine learning features that could run locally rather than in someone's cloud, theoretically enhancing security and privacy. One of the first Copilot+ features was Recall, a feature that promised to track all your PC usage via screenshot to help you remember your past activity. But as originally implemented, Recall was neither private nor secure; the feature stored its screenshots plus a giant database of all user activity in totally unencrypted files on the user's disk, making it trivial for anyone with remote or local access to grab days, weeks, or even months of sensitive data, depending on the age of the user's Recall database. After journalists and security researchers discovered and detailed these flaws, Microsoft delayed the Recall rollout by almost a year and substantially overhauled its security. All locally stored data would now be encrypted and viewable only with Windows Hello authentication; the feature now did a better job detecting and excluding sensitive information, including financial information, from its database; and Recall would be turned off by default, rather than enabled on every PC that supported it. The reconstituted Recall was a big improvement, but having a feature that records the vast majority of your PC usage is still a security and privacy risk. Security researcher Alexander Hagenah was the author of the original "TotalRecall" tool that made it trivially simple to grab the Recall information on any Windows PC, and an updated "TotalRecall Reloaded" version exposes what Hagenah believes are additional vulnerabilities. 
The problem, as detailed by Hagenah on the TotalRecall GitHub page, isn't with the security around the Recall database, which he calls "rock solid." The problem is that, once the user has authenticated, the system passes Recall data to another system process called AIXHost.exe, and that process doesn't benefit from the same security protections as the rest of Recall. "The vault is solid," Hagenah writes. "The delivery truck is not." The TotalRecall Reloaded tool uses an executable file to inject a DLL file into AIXHost.exe, something that can be done without administrator privileges. It then waits in the background for the user to open Recall and authenticate using Windows Hello. Once this is done, the tool can intercept screenshots, OCR'd text, and other metadata that Recall sends to the AIXHost.exe process, which can continue even after the user closes their Recall session. "The VBS enclave won't decrypt anything without Windows Hello," Hagenah writes. "The tool doesn't bypass that. It makes the user do it, silently rides along when the user does it, or waits for the user to do it." A handful of tasks, including grabbing the most recent Recall screenshot, capturing select metadata about the Recall database, and deleting the user's entire Recall database, can be done with no Windows Hello authentication. Once authenticated, Hagenah says the TotalRecall Reloaded tool can access both new information recorded to the Recall database as well as data Recall has previously recorded. "We appreciate Alexander Hagenah for identifying and responsibly reporting this issue. After careful investigation, we determined that the access patterns demonstrated are consistent with intended protections and existing controls, and do not represent a bypass of a security boundary or unauthorized access to data," a Microsoft spokesperson told Ars. "The authorization period has a timeout and anti-hammering protection that limit the impact of malicious queries."
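Hagenah's "solid vault, weak delivery truck" observation is easiest to see in miniature. The sketch below is a purely illustrative toy model in Python, not Windows code: the class names, the hook mechanism, and the data are invented stand-ins for the Recall enclave, AIXHost.exe, and an injected DLL. It shows why encrypting data at rest helps little once an authenticated session hands plaintext to a process that unprivileged code can attach to.

```python
# Toy model of the "solid vault, weak delivery truck" pattern Hagenah
# describes. Names and structure are illustrative only -- this is not
# the Windows Recall/AIXHost.exe API, just a sketch of the trust gap.

class Vault:
    """Releases plaintext only after an authentication check."""
    def __init__(self, secret):
        self._secret = secret

    def read(self, authenticated: bool):
        if not authenticated:
            raise PermissionError("Windows Hello-style check failed")
        return self._secret  # decrypted only after auth


class DeliveryProcess:
    """Forwards decrypted data; any same-user hook can observe it."""
    def __init__(self):
        self._hooks = []

    def register_hook(self, fn):       # loosely analogous to DLL injection:
        self._hooks.append(fn)         # no elevated privileges needed

    def deliver(self, plaintext):
        for hook in self._hooks:       # injected code rides along
            hook(plaintext)
        return plaintext


vault = Vault("screenshot + OCR text")
truck = DeliveryProcess()
captured = []
truck.register_hook(captured.append)   # attacker hooks in, then waits

# The user authenticates for a legitimate session...
data = vault.read(authenticated=True)
truck.deliver(data)

print(captured)  # the hook saw everything the vault released
```

The vault's gate works exactly as designed; the leak happens one step later, which is why Microsoft can say no security boundary was bypassed while Hagenah can still read the data.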

Read more of this story at Slashdot.

EU Age Verification App Announced To Protect Children Online

Slashdot - Thu, 16/04/2026 - 6:00pm
The EU says a new age-verification app is technically ready and could let users prove they are old enough to access restricted online content without revealing their identity or personal data. Deutsche Welle reports: Once released, users will be able to download the app from an app store and set it up using proof of identity, such as a passport or national ID card. They can then use it to confirm they are above a certain age when accessing restricted content, without revealing their identity. According to the Commission, the system is similar to the digital certificates used during the COVID-19 pandemic, which allowed people to prove their vaccination status. The app is expected to support enforcement of the bloc's Digital Services Act, which aims to better regulate online platforms. This includes restricting access to content such as pornography, gambling and alcohol-related services. Officials say the app will be "completely anonymous" and built on open-source technology, meaning it could also be adopted outside the EU. [...] While there is no binding EU-wide law yet, the European Parliament has called for a minimum age of 16 for social media access. For now, enforcement would largely fall to individual member states, but the new app is intended to help platforms comply with future national and EU rules.
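The underlying pattern, an issuer vouching for a single claim without disclosing identity, can be sketched in a few lines. This toy Python example is a deliberate simplification: real systems such as the EU wallet use asymmetric signatures and selective-disclosure or zero-knowledge techniques, whereas this sketch uses stdlib HMAC with a single issuer key purely to show that a token can carry the claim "over 18" and nothing else.

```python
# Minimal sketch of an age attestation that reveals no identity.
# Real deployments use asymmetric signatures and selective disclosure;
# HMAC with a shared key here is an illustrative simplification.
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by the ID-verifying issuer

def issue_token(user_is_over_18: bool) -> dict:
    # The token carries only the boolean claim and a random nonce --
    # no name, no birth date, no document number.
    claim = {"over_18": user_is_over_18, "nonce": secrets.token_hex(8)}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_token(token: dict) -> bool:
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and token["claim"]["over_18"]

token = issue_token(True)
print(verify_token(token))  # True: age proven, identity never shared
```

A platform checking the token learns one bit (old enough or not); tampering with the claim invalidates the tag.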


Researchers Induce Smells With Ultrasound, No Chemical Cartridges Required

Slashdot - Thu, 16/04/2026 - 5:00pm
An anonymous reader quotes a report from UploadVR: A group of independent researchers built a device that can artificially induce smell using ultrasound, with no consumable cartridges required. [...] The team of four is Lev Chizhov, Albert Yan-Huang, Thomas Ribeiro, and Aayush Gupta. Chizhov is a neurotech entrepreneur with a background in math and physics, Yan-Huang is a researcher at Caltech with a background in computation and neural systems, and Ribeiro and Gupta are co-researchers on the project with software engineering and AI expertise. Instead of targeting your nose at all, the device directly targets the olfactory bulb in your brain with "focused ultrasound through the skull." The researchers say that as far as they're aware, no one has ever done this before, even in animals. A challenge in targeting the olfactory bulb is that it's buried behind the top of your nose, and your nose doesn't provide a flat surface for an emitter. Ultrasound also doesn't travel well through air. The solution the researchers came up with was to place the emitter on your forehead instead, with a "solid, jello-like pad for stability and general comfort," and the ultrasound directed downward towards the olfactory bulb. To determine the best placement, they say they used an MRI of one of their skulls to "roughly determine where the transducer would point and how the focal region (where ultrasound waves actually concentrate) aligned with the olfactory bulb (the target for stimulation)". [...] According to the researchers, they were able to induce the sensation of fresh air "with a lot of oxygen", the smell of garbage "like few-day-old fruit peels," an ozone-like sensation "like you're next to an air ionizer," and a campfire smell of burning wood. While technically head-mounted, the current device does require being held up with two hands. But as with all such prototypes, it likely could be significantly miniaturized.
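Focusing ultrasound through the skull is typically done by firing array elements at staggered times so their wavefronts converge on one point. The geometry, element layout, and tissue sound speed below are illustrative assumptions, not details of the researchers' device; the sketch only shows the time-of-flight arithmetic behind "focused ultrasound."

```python
# Sketch of how a phased ultrasound array focuses at a point: fire the
# farther elements first so all wavefronts arrive together. Geometry
# and numbers are illustrative, not the researchers' actual hardware.
import math

SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, a typical soft-tissue value

def focusing_delays(elements, target):
    """Per-element firing delays (s) so wavefronts converge on target."""
    tofs = [math.dist(e, target) / SPEED_OF_SOUND_TISSUE for e in elements]
    latest = max(tofs)
    # The element with the longest path fires at t=0; the rest wait.
    return [latest - t for t in tofs]

# A small linear array on the "forehead" (x spans +/-1 cm, y = 0),
# focusing ~6 cm down toward an olfactory-bulb-like target.
elements = [(x / 1000.0, 0.0) for x in range(-10, 11, 5)]  # metres
target = (0.0, -0.06)
delays = focusing_delays(elements, target)

# The centre element has the shortest path, so it waits the longest.
print([round(d * 1e6, 2) for d in delays])  # delays in microseconds
```

The delays come out symmetric around the array centre, with the edge elements firing first, which is exactly the converging-lens behaviour the focal region depends on.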


next-20260416: linux-next

Linux Kernel - Thu, 16/04/2026 - 3:55pm
Version: next-20260416 (linux-next) Released: 2026-04-16

Bullet Train Upgrade Brings 5G Windows, Noise-Cancelling Cabins To Japan

Slashdot - Thu, 16/04/2026 - 1:00pm
Some Japanese bullet trains will offer premium private suites starting this October, featuring windows with embedded 5G antennas for steadier onboard Wi-Fi and NTT noise-cancelling cabin tech to reduce train noise. The 5G window antennas are designed to maintain line-of-sight connections as trains race past base stations at up to 285 km/h. The Register reports: Rail operator JR Central announced the new tech late last month and will initially deploy a couple of the suites on six trains. The carrier explained that the antennas come from a Japanese company called AGC that weaves microscopic wires through glass to form an antenna. JR Central will connect the windows to an on-train Wi-Fi router. AGC says rival tech relies on 5G signals reaching a train and then bouncing around inside before reaching the Wi-Fi unit. The company says antennas woven into train windows maintain line of sight to nearby 5G base stations. That matters because JR Central's Shinkansen can achieve speeds of up to 285 km/h, which means they speed past cellular network base stations so quickly that it's frequently necessary to reconnect to another radio. AGC says keeping a line-of-sight connection means its antennas allow increased 5G signal strength, so Wi-Fi service on board trains should be more stable and speedy. The sound-deadening kit JR Central will deploy is called Personalized Sound Zone (PSZ) and comes from Japan's tech giant NTT. The tech uses the same principle as noise-cancelling headphones -- determine the waveform of sound and project an inversion of that waveform that cancels out ambient noise.
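The "project an inversion of the waveform" principle reduces to one line of arithmetic: a sample plus its negation is silence. A minimal sketch (frequency and sample rate invented; real systems must measure the noise and emit the anti-wave with near-zero latency, so cancellation is never this perfect):

```python
# The principle behind NTT's Personalized Sound Zone is the same one
# used in noise-cancelling headphones: emit the inverse of the ambient
# waveform so the two cancel. A minimal one-dimensional sketch.
import math

SAMPLE_RATE = 8000            # samples per second (illustrative)
FREQ = 120.0                  # low rumble, like rolling-stock noise

def tone(n):
    return [math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE) for i in range(n)]

noise = tone(SAMPLE_RATE // 10)          # 100 ms of "train noise"
anti_noise = [-s for s in noise]         # phase-inverted copy
residual = [a + b for a, b in zip(noise, anti_noise)]

print(max(abs(r) for r in residual))     # 0.0: perfect cancellation
```

In practice the anti-wave is an estimate, so cancellation is partial and works best on steady low-frequency noise, which is precisely what a train cabin produces.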


Thibault Martin: TIL that Pagefind does great client-side search

Planet GNOME - Thu, 16/04/2026 - 12:00pm

I post more and more content on my website. What was once visible at a glance is now harder to find. I wanted to implement search, but this is a static website: everything is built once and then published somewhere as final, immutable pages. I can't send a search request to a server and get results in return.

Or so I thought! Pagefind is a neat JavaScript library that does two things:

  1. It produces an index of the content right after building the static site.
  2. It provides two web components to insert in my pages: <pagefind-modal>, which is the search modal itself (hidden by default), and <pagefind-modal-trigger>, which looks like a search field and opens the modal.

The pagefind-modal component looks up the index when the user types a query. The index is a static file, so there is no need for a backend to process queries. Of course this only works for basic queries, but it's a great tool already!

Pagefind is also easy to customize via a list of CSS variables. Adding it to this website was very straightforward.
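The build-once, query-statically idea can be sketched in a few lines. This is a toy Python model of the concept only; it does not reflect Pagefind's actual index format or API:

```python
# The core idea behind client-side search on a static site: build an
# inverted index once at site-build time, ship it as a static file,
# and answer queries entirely on the client. Toy sketch, not Pagefind.
import json
import re
from collections import defaultdict

pages = {
    "/til-pagefind": "Pagefind does great client side search",
    "/static-sites": "A static site is built once into immutable pages",
}

# Build step: word -> list of page URLs, serialised like a static asset.
index = defaultdict(list)
for url, text in pages.items():
    for word in set(re.findall(r"[a-z]+", text.lower())):
        index[word].append(url)
index_file = json.dumps(index)           # would sit next to the HTML

# "Client" step: load the static index and look a term up -- no backend.
loaded = json.loads(index_file)
print(loaded.get("static", []))          # pages matching "static"
```

Everything after the build step is a plain file fetch plus a dictionary lookup, which is why no query-processing backend is needed.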

UK Households To Be Urged To Use More Power This Summer As Renewables Soar

Slashdot - Thu, 16/04/2026 - 9:00am
Longtime Slashdot reader AmiMoJo shares a report from the Guardian: Households will be called on to boost their consumption of Great Britain's record renewable energy this summer to help balance the power grid and lower energy bills. Under the new plans, people could be encouraged to run dishwashers and washing machines or charge up their electric vehicles when there is more wind and solar power than the electricity grid needs. The plan will be delivered with the help of energy suppliers, which may choose to offer heavily discounted or free electricity to their customers during specific periods when the energy system operator predicts there will be a surplus of electricity. Many suppliers already offer more than 2 million households the opportunity to pay lower rates for electricity used during off-peak hours but this will be the first time that the system operator will use this tool to help balance the grid. The National Energy System Operator (Neso) hopes that by issuing a market notice to call on energy users to increase their consumption it can avoid making hefty payments to turn wind and solar farms off when demand for electricity is low, which are ultimately paid for through energy bills.
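The scheduling problem behind such tariffs is simple to sketch: given an hourly price forecast in which surplus periods are cheap or free, slide a flexible load into the cheapest window. Prices and window length below are invented for illustration:

```python
# Sketch of the demand-shifting idea: given an hourly price forecast
# (surplus hours priced low or free), schedule a flexible load such as
# an EV charge into the cheapest contiguous window. Prices invented.

def cheapest_window(prices, hours_needed):
    """Start index and cost of the cheapest run of consecutive hours."""
    best_start, best_cost = 0, float("inf")
    for start in range(len(prices) - hours_needed + 1):
        cost = sum(prices[start:start + hours_needed])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

# Pence/kWh for a windy afternoon: hours 13-15 are a surplus period.
forecast = [28, 26, 25, 24, 22, 20, 18, 15, 9, 5, 3, 2, 2, 0, 0, 1, 6, 14]
start, cost = cheapest_window(forecast, 3)
print(start, cost)  # run the 3-hour load starting at the surplus hours
```

A supplier's free-electricity notice effectively sets some hours to zero, and smart tariffs automate exactly this lookup on the customer's behalf.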


Nature Is Still Molding Human Genes, Study Finds

Slashdot - Thu, 16/04/2026 - 5:30am
An anonymous reader quotes a report from the New York Times: Many scientists have contended that humans have evolved very little over the past 10,000 years. A few hundred generations was just a blink of the evolutionary eye, it seemed. Besides, our cultural evolution -- our technology, agriculture and the rest -- must have overwhelmed our biological evolution by now. A vast study, published on Wednesday in the journal Nature, suggests the opposite. Examining DNA from 15,836 ancient human remains, scientists found 479 genetic variants that appeared to have been favored by natural selection in just the past 10,000 years. The researchers also concluded that thousands of additional genetic variants have probably experienced natural selection. Before the new study, scientists had identified only a few dozen variants. "There are so many of them that it's hard to wrap one's mind around them," said David Reich, a geneticist at Harvard Medical School and an author of the new study. He and his colleagues found that a mutation that is a major risk factor for celiac disease, for example, appeared just 4,000 years ago, meaning the condition may be younger than the Egyptian pyramids. The mutation became ever more common. Today, an estimated 80 million people worldwide have celiac disease, in which the immune system attacks gluten and damages the intestines. The steady rise of the mutation came about through natural selection, the scientists argue. For some reason, people with the mutation had more descendants than people without it -- even though it put them at risk of an autoimmune disorder. Other findings are even more puzzling. The researchers found that genetic variants that raise the odds of a smoking habit have been getting steadily rarer in Europe for the past 10,000 years. Something is working against those variants -- but it can't be the harm from smoking. Europeans have been smoking tobacco for only about 460 years. 
The scientists can't see from their research so far what forces might be making these variants more or less common. "My short answer is, I don't know," said Ali Akbari, a senior staff scientist at Harvard and an author of the study. The researchers also found that some variants, like the one linked to Type B blood, became much more common in Europe around 6,000 years ago, while others changed direction over time. For example, a TYK2 immune gene variant that may have once been beneficial later became harmful because it increased tuberculosis risk. The study also found signs of natural selection in 44 out of 563 traits. Variants linked to Type 2 diabetes, wider waists, and higher body fat have become less common, possibly because farming and carbohydrate-heavy diets made once-useful fat-storing traits more harmful. Other findings, such as selection favoring genes linked to more years of schooling, are harder to interpret.
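The speed of such sweeps is less surprising once you run the numbers. The sketch below iterates the standard single-locus selection recursion p' = p(1+s)/(1+ps); the selection coefficient and generation time are illustrative assumptions, not estimates from the study:

```python
# How a variant can rise from rare to common in a few thousand years:
# iterate the textbook selection recursion p' = p(1 + s) / (1 + p*s)
# for a variant with fitness advantage s. Numbers are illustrative.

def allele_frequency(p0, s, generations):
    """Frequency after `generations` rounds of selection coefficient s."""
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)
    return p

# 4,000 years at ~25 years per generation is about 160 generations.
p = allele_frequency(p0=0.01, s=0.05, generations=160)
print(round(p, 3))  # a 5% advantage carries a rare variant to high frequency
```

Even a much smaller advantage compounds over hundreds of generations, which is why 10,000 years is plenty of time for natural selection to leave the hundreds of signals the study reports.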


Boston Dynamics' Robot Dog Can Now Read Gauges, Spot Spills, and Reason

Slashdot - Thu, 16/04/2026 - 1:00am
Boston Dynamics has integrated Google DeepMind into its robotic dog Spot, giving it more autonomous reasoning for industrial inspections like spotting spills and reading gauges. Spot can also now recognize when to call on other AI tools. IEEE Spectrum reports: Boston Dynamics is one of the few companies to commercially deploy legged robots at any appreciable scale; there are now several thousand hard at work. Today the company is announcing that its quadruped robot Spot is now equipped with Google DeepMind's Gemini Robotics-ER 1.6, a high-level embodied reasoning model that brings usability and intelligence to complex tasks. [T]he focus of this partnership is on one of the very few applications where legged robots have proven themselves to be commercially viable: inspection. That is, wandering around industrial facilities, checking to make sure that nothing is imminently exploding. With the new AI onboard, Spot is now able to autonomously look for dangerous debris or spills, read complex gauges and sight glasses, and call on tools like vision-language-action models when it needs help understanding what's going on in the environment around it. "Advances like Gemini Robotics-ER 1.6 mark an important step toward robots that can better understand and operate in the physical world," Marco da Silva, vice president and general manager of Spot at Boston Dynamics, says in a press release. "Capabilities like instrument reading and more reliable task reasoning will enable Spot to see, understand, and react to real-world challenges completely autonomously." You can watch a demo of Spot's new capabilities on YouTube.


US Jobs Too Important To Risk Chinese Car Imports, Says Ford CEO

Slashdot - Thu, 16/04/2026 - 12:00am
In an interview with Fox News, Ford CEO Jim Farley warned that allowing Chinese vehicle imports could put nearly a million U.S. jobs at risk. He said China's heavily subsidized auto industry has enough excess capacity to supply the entire U.S. market, while also raising serious cybersecurity concerns given how much data modern connected cars collect. Ars Technica reports: "First of all, the Chinese have huge direct support for their auto companies," Farley said, while noting that China has the ability to build an additional 21 million vehicles a year on top of the 29 million that are expected to roll off Chinese production lines in 2026. "They have enough capacity in China to cover all the manufacturing, all the vehicle sales in the United States," Farley said. "Manufacturing is the heart and soul of our country, and for us to lose those exports would be devastating for our country," he continued, before pointing out the cybersecurity worries about Chinese cars. "All the vehicles have 10 cameras. They can collect a lot of data," he said. Farley has praised Chinese EVs like the Xiaomi SU7, even going on podcasts to sing its praises. But he believes Ford's forthcoming affordable Kentucky-built EVs, due to start hitting dealerships next year, have what it takes to be competitive. When asked about new car prices rising an average of 2 percent last year, Farley repeatedly said that Ford had "worked with the administration" so that there's "essentially no big impact" of the Trump tariffs. The CEO justified the rising costs by pointing to the F-150's sales as proof of its value.


Cal.com Is Going Closed Source Because of AI

Slashdot - Wed, 15/04/2026 - 11:00pm
Cal is moving its flagship scheduling software from open source to a proprietary license, arguing that AI coding tools now make it much easier for attackers to scan public codebases for vulnerabilities. "Open source security always relied on people to find and fix any problems," said Peer Richelsen, co-founder of Cal. "Now AI attackers are flaunting that transparency." CEO Bailey Pumfleet added: "Open-source code is basically like handing out the blueprint to a bank vault. And now there are 100x more hackers studying the blueprint." The company says it still supports open source and is releasing a separate Cal.diy version for hobbyists, but doesn't want to risk customer booking data in its commercial product. ZDNet reports: When Cal was founded in 2022, Bailey Pumfleet, the CEO and co-founder, wrote, "Cal.com would be an open-source project [because] limitations of existing scheduling products could only be solved by open source." Since Cal was successful and now claims to be the largest Next.js project, he was on to something. Today, however, Pumfleet tells me that AI programs such as "Claude Opus can scour the code to find vulnerabilities," so the company is moving the project from the GNU Affero General Public License (AGPL) to a proprietary license to defend the program's security. [...] Cal also quoted Huzaifa Ahmad, CEO of Hex Security, "Open-source applications are 5-10x easier to exploit than closed-source ones. The result, where Cal sits, is a fundamental shift in the software economy. Companies with open code will be forced to risk customer data or close public access to their code." "We are committed to protecting sensitive data," Pumfleet said. "We want to be a scheduling company, not a cybersecurity company." He added, "Cal.com handles sensitive booking data for our users. We won't risk that for our love of open source." While its commercial program is no longer open source, Cal has released Cal.diy. 
This is a fully open-source version of its platform for hobbyists. The open project will enable experimentation outside the closed application that handles high-stakes data. Pumfleet concluded, "This decision is entirely around the vulnerability that open source introduces. We still firmly love open source, and if the situation were to change, we'd open source again. It's just that right now, we can't risk the customer data."


Live Nation Illegally Monopolized Ticketing Market, Jury Finds

Slashdot - Wed, 15/04/2026 - 10:00pm
A Manhattan federal jury found that Live Nation and Ticketmaster illegally maintained monopoly power in the ticketing market. The findings follow an antitrust case brought by states after a separate DOJ settlement. CNN reports: The verdict was reached following a lengthy trial in New York federal court that included testimony from top executives in the music and entertainment industries. Jurors began deliberating on Friday. The Justice Department and 39 state attorneys general, including California and New York, and Washington, DC, sued Live Nation in 2024 alleging its combination with Ticketmaster and control of "virtually every aspect of the live music ecosystem" have harmed fans, artists, and venues. During the second week of trial, in a move that surprised even the judge, the Justice Department reached a secret settlement with Live Nation. A handful of states signed onto the deal, but more than two dozen proceeded to trial. Under the DOJ deal, Live Nation agreed to allow competitors, like SeatGeek or StubHub, to offer tickets to its events, cap ticketing service fees at 15%, and divest exclusive booking agreements with 13 amphitheaters. The deal includes a $280 million settlement fund for state damages claims for the handful of states that signed onto the deal. The DOJ settlement requires the judge's approval.

