Feed aggregator

A CNN Producer Explores the 'Magic AI' Workout Mirror

Slashdot - Sun, 22/03/2026 - 5:34pm
CNN looks at "the Magic AI fitness mirror," a new product "watching you, and giving you feedback automatically," while sometimes playing footage of a recorded personal trainer. Long-time Slashdot reader destinyland describes CNN's video report: CNN says the device "tracks form, counts reps, and corrects technique in real-time — and it doesn't go easy on you." (Although the company's CEO/cofounder, Varun Bhanot, says "we're not trying to completely replace personal trainers. What we are providing is a more accessible alternative.") CNN calls the company "more a computer-vision firm than a fitness company, building the tech for this mirror from the ground up." CEO Bhanot tells CNN he'd hired a personal trainer in his 20s to get fit, but "Going through that journey, I realized how old-fashioned personal training was. Dumbbells were still dumb. There was no data or augmentation for the whole process!" "The AI fitness and wellness market is already huge — and it's growing," CNN adds. "In 2025 the global market was worth $11 billion, according to [market research firm] Insightace Analytic. By 2035, this market is expected to reach just shy of $58 billion. And Magic AI is far from alone. Form, Total, Speediance, and Echelon, to name a few, are all brands vying for a slice of this market." Even the most purely physical of activities — exercising your body — now gets "enhanced" with AI accessories...

Google Search Is Now Sometimes Using AI To Replace Headlines

Slashdot - Sun, 22/03/2026 - 4:34pm
"Google is beginning to replace news headlines in its search results with ones that are AI-generated," reports the Verge: After doing something similar in its Google Discover news feed, it's starting to mess with headlines in the traditional "10 blue links," too. We've found multiple examples where Google replaced headlines we wrote with ones we did not, sometimes changing their meaning in the process. For example, Google reduced our headline "I used the 'cheat on everything' AI tool and it didn't help me cheat on anything" to just five words: "'Cheat on everything' AI tool." It almost sounds like we're endorsing a product we do not recommend at all. What we are seeing is a "small" and "narrow" experiment, one that's not yet approved for a fuller launch, Google spokespeople Jennifer Kutz, Mallory De Leon, and Ned Adriance tell The Verge. They would not say how "small" that experiment actually is. Over the past few months, multiple Verge staffers have seen examples of headlines that we never wrote appear in Google Search results — headlines that do not follow our editorial style, and without any indication that Google replaced the words we chose. And Google says it's tweaking how other websites show up in search, too, not just news. The good news, for now, is that these changed headlines seem to be few and far between, and they're not yet the kind of tripe we've seen in Google Discover. (For example, Google Discover told me this week that the PlayStation Portal was getting a 1080p streaming mode, when it actually got a higher bitrate mode instead.) Compared to that and other lying Google Discover headlines like "US reverses foreign drone ban" — on a story reporting the opposite — the nonsense headlines we're seeing in Google Search are downright tame. The article points out that Google "originally told us its AI headlines in Google Discover were an experiment too. A month later, it told us those AI headlines are now a feature..." "Google confirmed that the test uses generative AI, but claimed that 'if we were to actually launch something based on this experiment, it would not be using a generative model and we would not be creating headlines with gen AI'..."

Amazon Plans to Test Four-Legged Robots on Wheels for Deliveries

Slashdot - Sun, 22/03/2026 - 3:34pm
CNBC reports: Amazon has acquired Rivr, a Swiss robotics company developing machines for "doorstep delivery," the company confirmed Thursday... It announced the deal in a notice sent to third-party delivery contractors... "We believe this technology, when working alongside your [delivery associates], has the potential to further improve safety outcomes and the overall customer experience, particularly in the last steps of the delivery process...." In its notice to delivery service partner owners, Amazon said Rivr's technology, which includes a four-legged robot on wheels, will allow it to research and test how the devices can be integrated into delivery operations, including "helping [delivery associates] carry packages from delivery vehicles to customer doorsteps."

US Cable TV Industry Faces 'Dramatic Collapse' as Local Operators Shut Down - or Become ISPs

Slashdot - Sun, 22/03/2026 - 12:34pm
America's cable TV industry "is undergoing its most dramatic collapse in history," reports Cord Cutters News, "with operators large and small waving the white flag on traditional TV service and pointing their customers toward streaming platforms instead." Just in 2025 Comcast lost 1.25 million pay-TV subscribers (ending the year with just 11.3 million), while Charter Spectrum also lost hundreds of thousands of customers each quarter. But "for smaller regional operators, who lack the scale and diversified revenue streams of giants like Comcast, those kinds of losses are simply unsurvivable," they write. And "the companies that once delivered hundreds of channels through coaxial cables are now either shutting down entirely or reinventing themselves as internet providers." Pay-TV subscriptions have plummeted from nearly 90% of U.S. households in the mid-2010s to roughly half by the end of 2025, resulting in billions in lost revenue and forcing many smaller operators to conclude that continuing linear TV services is no longer viable... [This year over 50 U.S. cable TV companies — primarily smaller and midsize providers — are "expected to cease operations entirely or shut down their television services," Cord Cutters News reported earlier.] YouTube TV's pricing is so competitive that the platform is projected to have close to 12.6 million subscribers by the end of 2026, positioning it to become the largest paid TV distributor in the United States. Exclusive content deals, such as YouTube TV's acquisition of NFL Sunday Ticket rights, have further eroded the value proposition of traditional cable at every level of the market... As older cable subscribers age out of the market, there is no new generation of customers waiting to replace them... [Cable TV] operators like WOW! are betting that their physical infrastructure — now increasingly upgraded to fiber — is more valuable as an internet delivery system than as a cable TV platform. [WOW! serves customers across Michigan, Ohio, Illinois, and Alabama — but is "phasing out its proprietary streaming live TV service and directing all customers toward YouTube TV," the article notes.] Industry observers see this as part of a broader trend: operators shedding unprofitable video segments to focus on broadband, where returns and network investments are prioritized. By the end of 2026, non-pay-TV households are expected to surge to 80.7 million, outnumbering traditional pay-TV subscribers at 54.3 million — a milestone that would have seemed unthinkable just a decade ago. For the cable companies still standing, the math is now inescapable: the era of the cable bundle is ending, and the only real question left is how gracefully each operator manages its exit.

Meteor Rumbles Over Houston, as Six-Pound Fragment Crashes Into a Texas Home

Slashdot - Sun, 22/03/2026 - 8:34am
"It is the talk of the town today — the loud boom, the flash of light in the sky experienced by a lot of folks across the Houston area this afternoon," says a local Texas newscaster. "And then there was this — a home in northwest Harris county hit by something that crashed through their roof." Travelling at very high speed, the six-pound meteorite crashed through their roof and through their attic, crashing again through the ceiling of the floor below. It then bounced off the floor, hit the ceiling again — and then fell onto the bed. CBS News reports: NASA said in a social media post that the meteor became visible at 49 miles above Stagecoach, northwest of Houston, at 4:40 p.m. local time. The meteor moved southeast at 35,000 miles per hour, breaking apart 29 miles above Bammel, just west of Cypress Station, NASA said. "The fragmentation of the meteor — which weighed about a ton with a diameter of 3 feet — created a pressure wave that caused booms heard by some in the area," NASA said in the post. Across the Houston area, residents described hearing a low, rumbling sound that many compared to thunder, even though the skies were clear, according to CBS affiliate KHOU. Earlier this week, an asteroid weighing about 7 tons and traveling at 45,000 mph traveled over multiple states. And last June, a bright meteor was seen across the southeastern U.S. and exploded over Georgia, creating similar booms heard by residents in the area.

Matthew Garrett: SSH certificates and git signing

Planet GNOME - Sat, 21/03/2026 - 8:38pm

When you’re looking at source code it can be helpful to have some evidence indicating who wrote it. Author tags give a surface-level indication, but it turns out you can just lie, and if someone isn’t paying attention when merging stuff there’s certainly a risk that a commit could be merged with an author field that doesn’t represent reality. Account compromise can make this even worse - a PR opened by a compromised account is going to be hard to distinguish from one opened by the authentic user. In a world where supply chain security is an increasing concern, it’s easy to understand why people would want more evidence that code was actually written by the person it’s attributed to.

git has support for cryptographically signing commits and tags. Because git is about choice even if Linux isn’t, you can do this signing with OpenPGP keys, X.509 certificates, or SSH keys. You’re probably going to be unsurprised about my feelings around OpenPGP and the web of trust, and X.509 certificates are an absolute nightmare. That leaves SSH keys, but bare cryptographic keys aren’t terribly helpful in isolation - you need some way to make a determination about which keys you trust. If you’re using something like GitHub you can extract that information from the set of keys associated with a user account[1], but that means that a compromised GitHub account is now also a way to alter the set of trusted keys - and when was the last time you audited your keys, and how certain are you that every trusted key there is still 100% under your control? Surely there’s a better way.

SSH Certificates

And, thankfully, there is. OpenSSH supports certificates: an SSH public key that’s been signed by some trusted party, so you can assert that it’s trustworthy in some form. SSH certificates also contain metadata in the form of Principals, a list of identities that the trusted party included in the certificate. These might simply be usernames, but they might also provide information about group membership. There’s also, unsurprisingly, native support in SSH for forwarding them (using the agent forwarding protocol), so you can keep your keys on your local system, ssh into your actual dev system, and have access to them without any additional complexity.
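
To make this concrete, here is a minimal sketch of how a CA might mint such a certificate with ssh-keygen; the file names, identity, principal, and validity period below are made up for illustration:

# sign the user's public key with the CA's private key (names and validity are illustrative)
ssh-keygen -s ca_key -I "example-identity" -n exampleuser -V +52w id_ed25519.pub
# inspect the resulting certificate's principals and validity window
ssh-keygen -L -f id_ed25519-cert.pub

The first command writes id_ed25519-cert.pub alongside the original public key; the second prints the certificate contents so you can check which principals and validity window it carries.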

And, wonderfully, you can use them in git! Let’s find out how.

Local config

There are two main parameters you need to set. First,

git config set gpg.format ssh

because unfortunately for historical reasons all the git signing config is under the gpg namespace even if you’re not using OpenPGP. Yes, this makes me sad. But you’re also going to need something else. Either user.signingkey needs to be set to the path of your certificate, or you need to set gpg.ssh.defaultKeyCommand to a command that will talk to an SSH agent and find the certificate for you (this can be helpful if it’s stored on a smartcard or something rather than on disk). Thankfully for you, I’ve written one. It will talk to an SSH agent (either whatever’s pointed at by the SSH_AUTH_SOCK environment variable or with the -agent argument), find a certificate signed with the key provided with the -ca argument, and then pass that back to git. Now you can simply pass -S to git commit and various other commands, and you’ll have a signature.
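
Putting that together, a minimal sketch of the local setup might look like the following; the certificate path is hypothetical, and the commented-out helper command name is a placeholder rather than the actual tool mentioned above:

git config set gpg.format ssh
# point git at the certificate file on disk...
git config set user.signingkey ~/.ssh/id_ed25519-cert.pub
# ...or, alternatively, have a helper fetch it from the agent (placeholder command name):
# git config set gpg.ssh.defaultKeyCommand "my-cert-helper -ca ~/.ssh/ca.pub"
git commit -S -m "a signed commit"

With either option in place, git commit -S (and git tag -s) will produce SSH signatures using the certificate.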

Validating signatures

This is a bit more annoying. Using native git tooling ends up calling out to ssh-keygen[2], which validates signatures against a file in a format that looks somewhat like authorized_keys. This lets you add something like:

* cert-authority ssh-rsa AAAA…

which will match all principals (the wildcard) and succeed if the signature is made with a certificate that’s signed by the key following cert-authority. I recommend you don’t read the code that does this in git because I made that mistake myself, but it does work. Unfortunately it doesn’t provide a lot of granularity around things like “Does the certificate need to be valid at this specific time” and “Should the user only be able to modify specific files” and that kind of thing, but also if you’re using GitHub or GitLab you wouldn’t need to do this at all because they’ll just do this magically and put a “verified” tag against anything with a valid signature, right?

Haha. No.

Unfortunately while both GitHub and GitLab support using SSH certificates for authentication (so a user can’t push to a repo unless they have a certificate signed by the configured CA), there’s currently no way to say “Trust all commits with an SSH certificate signed by this CA”. I am unclear on why. So, I wrote my own. It takes a range of commits, and verifies that each one is signed with either a certificate signed by the key in CA_PUB_KEY or (optionally) an OpenPGP key provided in ALLOWED_PGP_KEYS. Why OpenPGP? Because even if you sign all of your own commits with an SSH certificate, anyone using the API or web interface will end up with their commits signed by an OpenPGP key, and if you want to have those commits validate you’ll need to handle that.

In any case, this should be easy enough to integrate into whatever CI pipeline you have. This is currently very much a proof of concept and I wouldn’t recommend deploying it anywhere, but I am interested in merging support for additional policy around things like expiry dates or group membership.
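
If you just want something quick in the meantime, a rough sketch using only native git tooling (not the tool above) could verify a range of commits like this; the allowed-signers path and the commit range are illustrative:

git config set gpg.ssh.allowedSignersFile ~/.config/git/allowed_signers
# fail on the first commit without a valid signature
for c in $(git rev-list origin/main..HEAD); do
    git verify-commit "$c" || exit 1
done

This gives you the cert-authority matching described earlier, but none of the extra policy (expiry handling, group membership, OpenPGP fallback) that the dedicated tool is aiming for.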

Doing it in hardware

Of course, certificates don’t buy you any additional security if an attacker is able to steal your private key material - they can steal the certificate at the same time. This can be avoided on almost all modern hardware by storing the private key in a separate cryptographic coprocessor - a Trusted Platform Module on PCs, or the Secure Enclave on Macs. If you’re on a Mac then Secretive has been around for some time, but things are a little harder on Windows and Linux - there are various things you can do with PKCS#11 but you’ll hate yourself even more than you’ll hate me for suggesting it in the first place, and there’s ssh-tpm-agent except it’s quite tied to Linux.

So, obviously, I wrote my own. This makes use of the go-attestation library my team at Google wrote, and is able to generate TPM-backed keys and export them over the SSH agent protocol. It’s also able to proxy requests back to an existing agent, so you can just have it take care of your TPM-backed keys and continue using your existing agent for everything else. In theory it should also work on Windows[3] but this is all in preparation for a talk I only found out I was giving about two weeks beforehand, so I haven’t actually had time to test anything other than that it builds.

And, delightfully, because the agent protocol doesn’t care about where the keys are actually stored, this still works just fine with forwarding - you can ssh into a remote system and sign something using a private key that’s stored in your local TPM or Secure Enclave. Remote use can be as transparent as local use.
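
As a concrete illustration (the hostname is made up, and the remote machine still needs the git configuration described earlier, with the certificate reachable via the forwarded agent):

# forward the local agent, which is fronting the TPM/Secure Enclave-backed key
ssh -A devbox.example.com
# then, on the remote machine, signing works as if the key were local
git commit -S -m "signed remotely with a locally held key"

The private key never leaves the local machine; the remote git invocation just asks the forwarded agent for signatures.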

Wait, attestation?

Ah yes you may be wondering why I’m using go-attestation and why the term “attestation” is in my agent’s name. It’s because when I’m generating the key I’m also generating all the artifacts required to prove that the key was generated on a particular TPM. I haven’t actually implemented the other end of that yet, but if implemented this would allow you to verify that a key was generated in hardware before you issue it with an SSH certificate - and in an age of agentic bots accidentally exfiltrating whatever they find on disk, that gives you a lot more confidence that a commit was signed on hardware you own.

Conclusion

Using SSH certificates for git commit signing is great - the tooling is a bit rough but otherwise they’re basically better than every other alternative, and also if you already have infrastructure for issuing SSH certificates then you can just reuse it[4] and everyone wins.

  1. Did you know you can just download people’s SSH pubkeys from github at https://github.com/<username>.keys? Now you do

  2. Yes it is somewhat confusing that the keygen command does things other than generate keys

  3. This is more difficult than it sounds

  4. And if you don’t, by implementing this you now have infrastructure for issuing SSH certificates and can use that for SSH authentication as well.

Allan Day: GNOME Foundation Update, 2026-03-20

Planet GNOME - Fri, 20/03/2026 - 4:42pm

Hello and welcome to another update on what’s been happening at the GNOME Foundation. It’s been two weeks since my last update, and there’s been plenty going on, so let’s dive straight in.

GNOME 50!

My update wouldn’t be complete without mentioning this week’s GNOME 50 release. It looks like an amazing release with lots of great improvements! Many thanks to everyone who contributed and made it such a success.

The Foundation plays a critical role in these releases, whether it’s providing development infrastructure, organising events where planning takes place, or providing development funding. If you are reading this and have the means, please consider signing up as a Friend of GNOME. Even small regular donations make a huge difference.

Board Meeting

The Board of Directors had its regular monthly meeting on March 9th, and we had a full agenda. Highlights from the meeting included:

  • The Board agreed to sign the Keep Android Open letter, as well as endorsing the United Nations Open Source Principles.
  • We heard reports from a number of committees, including the Executive Committee, Finance Committee, Travel Committee, and Code of Conduct Committee. Committee presentations are a new addition to the Board meeting format, with the goal of pushing more activity out to committees, with the Board providing high-level oversight and coordination.
  • Creation of a new bank account was authorized, which is needed as part of our ongoing finance and accounting development effort.
  • The main discussion topic was Flathub and what the organizational arrangements could be for it in the future. There weren’t any concrete decisions made here, but the Board indicated that it’s open to different options and sees Flathub’s success as the main priority rather than being attached to any particular organisation type or location.
  • The next regular Board meeting will be on April 13th.
Travel

The Travel Committee met both this week and last week, as it processed the initial batch of GUADEC sponsorship applications. As a result of this work the first set of approvals have been sent out. Documentation has also been provided for those who are applying for visas for their travel.

The membership of the current committee is quite new and it is having to figure out processes and decision-making principles as it goes, which is making its work more intensive than might normally be the case. We are starting to write up guidelines for future funding rounds, to help smooth the process.

Huge thanks to our committee members Asmit, Anisa, Julian, Maria, and Nirbeek, for taking on this important work.

Conferences

Planning and preparation for the 2026 editions of LAS and GUADEC have continued over the past fortnight. The call for papers for both events is a particular focus right now, and there are a couple of important deadlines to be aware of:

  • If you want to speak at LAS 2026, the deadline for proposals is 23 March – that’s in just three days.
  • The GUADEC 2026 call for abstracts has been extended to 27 March, so there is one more week to submit a talk.

There are teams behind each of these calls, reviewing and selecting proposals. Many thanks to the volunteers doing this work!

We are also excited to have sponsors come forward to support GUADEC.

Accounting

The Foundation has been undertaking a program of improvements to our accounting and finance systems in recent months. Those were put on hold for the audit fieldwork that took place at the beginning of March, but now that’s done, attention has turned to the remaining work items there.

We’ve been migrating to a new payments processing platform since the beginning of the year, and setup work has continued, including configuration to make it integrate correctly with our accounting software, migrating credit cards over from our previous solution, and creating new web forms which are going to be used for reimbursement requests in future.

There are a number of significant advantages to the new system, like the accounting integration, which are already helping to reduce workloads, and I’m looking forward to having the final pieces of the new system in place.

Another major change that is currently ongoing is that we are moving from a quarterly to a monthly cadence for our accounting. This is the cycle on which we “complete” the accounts, with all data entered and reconciled by the end of each period. The move to a monthly cycle will mean that we generate finance reports more frequently, which will allow the Board to have a closer view of the organisation’s finances.

Finally, this week we also had our regular monthly “books” call with our accountant and finance advisor. This was our usual opportunity to resolve any questions that have come up in relation to the accounts, but we also discussed progress on the improvements that we’ve been making.

Infrastructure

On the infrastructure side, the main highlight in recent weeks has been the migration from Anubis to Fastly’s Next-Gen Web Application Firewall (WAF) for protecting our infrastructure. The result of this migration will be an increased level of protection from bots, while not getting in people’s way when they’re using our infra. The Fastly product provides sophisticated detection of threats plus the ability for us to write our own fine-grained detection rules, so we can adjust firewall behaviour as we go.

Huge thanks to Fastly for providing us with sponsorship for this service – it is a major improvement for our community and would not have been possible without their help.

That’s it for this update. Thanks for reading and be on the lookout for the next update, probably in two weeks!

next-20260320: linux-next

Kernel Linux - Fri, 20/03/2026 - 4:08pm
Version: next-20260320 (linux-next) Released: 2026-03-20

Port Scanning Explained: Tools, Techniques, and Best Open-Source Port Scanners for Linux

LinuxSecurity.com - Fri, 20/03/2026 - 8:12am
Most Linux admins assume they know which TCP/IP ports their servers expose, until a scan reveals something unexpected. A database port listening on all interfaces, a forgotten development service, or a management interface that was meant to stay internal can easily appear once you look from the network side.

Port Scanning Explained: What Port Scanners Are, How Linux Systems Actually Respond, and Why It Matters

LinuxSecurity.com - Thu, 19/03/2026 - 6:29pm
What is a port scan? A port scan is a diagnostic or reconnaissance technique used to identify open communication ports on a remote system. By sending packets to specific destinations and observing how the system responds, it becomes possible to map which services are reachable and how a host presents itself from the outside.

Most Linux admins assume they already know that answer. Until a scan shows otherwise.

From the system itself, everything looks controlled. Configuration files define what should be running, and local tools like netstat or ss confirm which services are active. But from the network, that same Linux system can tell a very different story.

Port scanning makes that gap visible. It shows what is actually reachable, how services respond under external pressure, and whether that exposure lines up with what was intended.
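
As a quick illustration of that gap, you can compare the local view of listening sockets with what an external scan actually reaches; the hostname below is a placeholder, and you should only scan systems you are authorized to test:

# local view: sockets this host believes are listening
ss -tlnp
# network view, run from a different machine: what is actually reachable
nmap -sT -p- server.example.com

Anything reachable from the network that you did not expect to see locally, or bound more widely than intended, is exactly the kind of exposure described above.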

6.19.9: stable

Kernel Linux - Thu, 19/03/2026 - 4:18pm
Version: 6.19.9 (stable) Released: 2026-03-19 Source: linux-6.19.9.tar.xz PGP Signature: linux-6.19.9.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-6.19.9

6.18.19: longterm

Kernel Linux - Thu, 19/03/2026 - 4:10pm
Version: 6.18.19 (longterm) Released: 2026-03-19 Source: linux-6.18.19.tar.xz PGP Signature: linux-6.18.19.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-6.18.19

Jussi Pakkanen: Simple sort implementations vs production quality ones

Planet GNOME - Thu, 19/03/2026 - 2:49pm

One of the most optimized algorithms in any standard library is sorting. It is used everywhere so it must be fast. Thousands upon thousands of developer hours have been sunk into inventing new algorithms and making sort implementations faster. Pystd has a different design philosophy where fast compilation times and readability of the implementation have higher priority than absolute performance. Perf still very much matters; it has to be fast, but not at the cost of 10x compilation time.

This leads to the natural question of how much slower such an implementation would be compared to a production quality one. Could it even be faster? (Spoilers: no) The only way to find out is to run performance benchmarks on actual code.

To keep things simple there is only one test set, sorting 10'000'000 consecutive 64-bit integers that have been shuffled to a random order which is the same for all algorithms. This is not an exhaustive test by any means but you have to start somewhere. All tests used GCC 15.2 with -O2 optimization. Pystd code was not thoroughly hand optimized; I only fixed (some of the) obvious hotspots.

Stable sort

Pystd uses mergesort for stable sorting. The way the C++ standard specifies stable sort means that most implementations probably use it as well. I did not dive into the code to find out. Pystd's merge sort implementation consists of ~220 lines of code. It can be read on this page.

Stdlibc++ can do the sort in 0.9 seconds whereas Pystd takes 0.94 seconds. Getting to within 5% with such a simple implementation is actually quite astonishing. Even when considering all the usual caveats where it might completely fall over with a different input data distribution and all that.

Regular sort

Both stdlibc++ and Pystd use introsort. Pystd's implementation has ~150 lines of code, but it also uses heapsort, which has a further 100 lines of code. Code for introsort is here, and heapsort is here.

Stdlibc++ gets the sort done in 0.76 seconds whereas Pystd takes 0.82 seconds. This makes it approximately 8% slower. It's not great, but getting within 10% with a few evenings' work is still a pretty good result. Especially since, and I'm speculating here, std::sort has seen a lot more optimization work than std::stable_sort because it is used more.

For heavy duty number crunching this would be way too slow. But for moderate data set sizes the performance difference might be insignificant for many use cases.

Note that all of these are faster (note: did not measure) than libc's qsort, because it requires an indirect function call on every comparison, i.e. the comparison method cannot be inlined.

Where does the time go?

Valgrind will tell you that quite easily.

This picture shows quite clearly why big O notation can be misleading. Both quicksort (the inner loop of introsort) and heapsort have "the same" average time complexity but every call to heapsort takes approximately 4.5 times as long.

Pardoned Nikola Fraudster Is Raising Funds For AI-Powered Planes He Claims Will Reshape Aviation

Slashdot - Thu, 19/03/2026 - 8:00am
Trevor Milton, the pardoned founder of Nikola, is seeking $1 billion for AI-powered autonomous planes through a new venture called SyberJet. The Tech Buzz reports: "Autonomous planes will be 10 times harder than Nikola ever was," Milton told the Wall Street Journal in a rare interview. It's a remarkable admission from someone whose last venture collapsed under the weight of securities fraud charges after he overstated the capabilities of Nikola's electric and hydrogen-powered trucks. Milton was convicted in 2022 on three counts of fraud for misleading investors about Nikola's technology, including staging a video that made it appear a truck prototype was driving under its own power when it was actually rolling downhill. The conviction sent him to prison and turned Nikola into a cautionary tale about startup hype culture. His pardon, which came earlier this year, sparked immediate controversy in venture capital and legal circles. Now he's betting that AI and autonomous aviation represent a clean slate. SyberJet appears focused on developing artificial intelligence systems capable of piloting aircraft without human intervention - a technical challenge that's stumped even well-funded players like Boeing and Airbus. [...] Milton hasn't detailed SyberJet's technical approach or revealed who's backing the venture. The company's website remains sparse, and aviation industry sources say they haven't seen concrete demonstrations of the technology. That opacity echoes the early days of Nikola, when Milton made sweeping claims about revolutionary trucks that existed mostly in renderings and promotional videos. If you need a quick refresher on the Nikola saga, here's a timeline of key events:

  • June 2016: Nikola Motor Receives Over 7,000 Preorders Worth Over $2.3 Billion For Its Electric Truck
  • December 2016: Nikola Motor Company Reveals Hydrogen Fuel Cell Truck With Range of 1,200 Miles
  • February 2020: Nikola Motors Unveils Hybrid Fuel-Cell Concept Truck With 600-Mile Range
  • June 2020: Nikola Founder Exaggerated the Capability of His Debut Truck
  • September 2020: Nikola Motors Accused of Massive Fraud, Ocean of Lies
  • September 2020: Nikola Admits Prototype Was Rolling Downhill In Promo Video
  • September 2020: Nikola Founder Trevor Milton Steps Down as Chairman in Battle With Short Seller
  • October 2020: Nikola Stock Falls 14 Percent After CEO Downplays Badger Truck Plans
  • November 2020: Nikola Stock Plunges As Company Cancels Badger Pickup Truck
  • July 2021: Nikola Founder Trevor Milton Indicted on Three Counts of Fraud
  • December 2021: EV Startup Nikola Agrees To $125 Million Settlement
  • September 2022: Nikola Founder Lied To Investors About Tech, Prosecutor Says in Fraud Trial

FBI Is Buying Location Data To Track US Citizens, Director Confirms

Slashdot - Thu, 19/03/2026 - 4:30am
An anonymous reader quotes a report from TechCrunch: The FBI has resumed purchasing reams of Americans' data and location histories to aid federal investigations, the agency's director, Kash Patel, testified to lawmakers on Wednesday. This is the first time since 2023 that the FBI has confirmed it was buying access to people's data collected from data brokers, who source much of their information -- including location data -- from ordinary consumer phone apps and games, per Politico. At the time, then-FBI director Christopher Wray told senators that the agency had bought access to people's location data in the past but that it was not actively purchasing it. When asked by U.S. Senator Ron Wyden, Democrat of Oregon, if the FBI would commit to not buying Americans' location data, Patel said that the agency "uses all tools ... to do our mission." "We do purchase commercially available information that is consistent with the Constitution and the laws under the Electronic Communications Privacy Act -- and it has led to some valuable intelligence for us," Patel testified Wednesday. Wyden said buying information on Americans without obtaining a warrant was an "outrageous end-run around the Fourth Amendment," referring to the constitutional law that protects people in America from device searches and data seizures.

Jakub Steiner: Friday Sketches (part 2)

Planet GNOME - Thu, 19/03/2026 - 1:00am

Two years have passed since I last shared my Friday app icon sketches, but the sketching itself hasn't stopped.

For me, it's the best way to figure out the right metaphors before we move to final pixels. These sketches are just one part of the GNOME Design Team's wider effort to keep our icons consistent and meaningful—it is an endeavor that’s been going on for years.

If you design a GNOME app following the GNOME Design Guidelines, feel free to request an icon to be made for you. If you are serious and apply for inclusion in GNOME Circle, you are way more likely to get a designer's attention.

Cloudflare Appeals Piracy Shield Fine, Hopes To Kill Italy's Site-Blocking Law

Slashdot - Thu, 19/03/2026 - 12:00am
Cloudflare is appealing a 14.2 million-euro fine from Italy for refusing to comply with its "Piracy Shield" law, which requires blocking access to websites on its 1.1.1.1 DNS service within 30 minutes. The company argues the system lacks oversight, risks widespread overblocking, and could undermine core Internet infrastructure. Ars Technica's Jon Brodkin reports: Piracy Shield is "a misguided Italian regulatory scheme designed to protect large rightsholder interests at the expense of the broader Internet," Cloudflare said in a blog post this week. "After Cloudflare resisted registering for Piracy Shield and challenged it in court, the Italian communications regulator, AGCOM, fined Cloudflare... We appealed that fine on March 8, and we continue to challenge the legality of Piracy Shield itself." Cloudflare called the fine of 14.2 million euros ($16.4 million) "staggering." AGCOM issued the penalty in January 2026, saying Cloudflare flouted requirements to disable DNS resolution of domain names and routing of traffic to IP addresses reported by copyright holders. Cloudflare had previously resisted a blocking order it received in February 2025, arguing that it would require installing a filter on DNS requests that would raise latency and negatively affect DNS resolution for sites that aren't subject to the dispute over piracy. Cloudflare co-founder and CEO Matthew Prince said that censoring the 1.1.1.1 DNS resolver would force the firm "not just to censor the content in Italy but globally." Piracy Shield was designed to combat pirated streams of live sports events, requiring network operators to block domain names and IP addresses within 30 minutes of receiving a copyright notification. Cloudflare said the fine should have been capped at 140,000 euros ($161,000), or 2 percent of its Italian earnings, but that "AGCOM calculated the fine based on our global revenue, resulting in a penalty nearly 100 times higher than the legal limit." Despite its complaints about the size of the fine, Cloudflare said the principles at stake "are even larger" than the financial penalty. "Piracy Shield is an unsupervised electronic portal through which an unidentified set of Italian media companies can submit websites and IP addresses that online service providers registered with Piracy Shield are then required to block within 30 minutes," Cloudflare said. Cloudflare is pushing for the law to be struck down, arguing that it is "incompatible with EU law, most notably the Digital Services Act (DSA), which requires that any content restriction be proportionate and subject to strict procedural safeguards." In addition to appealing the fine, Cloudflare says it will continue to challenge Piracy Shield in Italian courts, engage with EU officials, and seek full access to AGCOM's Piracy Shield records.

Google Is Trying To Make 'Vibe Design' Happen

Slashdot - Wed, 18/03/2026 - 11:00pm
With today's latest Stitch updates, Google is trying to make "vibe design" happen, reports The Verge's Jay Peters. The AI-native design platform encourages users to describe goals, feelings, or inspiration in "natural language," rather than starting with traditional blueprints. In a blog post, Google Labs Product Manager Rustin Banks says that Stitch can turn those inputs into interactive prototypes, automatically map user flows, and support real-time iteration. It introduces voice capabilities that allow users to "speak directly to [the] canvas" for feedback or changes. Tools like DESIGN.md also help users create reusable design systems across various projects.

New Windows 11 Bug Breaks Samsung PCs, Blocking Access To C: Drive

Slashdot - Wed, 18/03/2026 - 10:00pm
Longtime Slashdot reader UnknowingFool writes: Users of Samsung PCs are reporting the inability to access the C: drive after the Windows 11 February update. The bug seems to be in connection with the Samsung Galaxy Connect app, which allows Samsung phones and tablets to connect to Windows machines. [A previous stable version of the app has been re-released to prevent this problem from spreading.] This parody explains the situation with humor. The issue stems from update KB5077181 and is impacting Samsung PCs running Windows 11 25H2 or 24H2. Microsoft and Samsung have confirmed the issue and published a workaround, but as PCWorld notes, it will take some time. The workaround "requires removing the Samsung application, then asking Windows to repair the drive permissions and assigning a new owner, then restoring the Windows default permissions, including patching in some custom code that Microsoft wrote."

Colin Walters: LLMs and core software: human driven

Planet GNOME - Wed, 18/03/2026 - 9:17pm

It’s clear LLMs are one of the biggest changes in technology ever. The rate of progress is astounding: recently due to a configuration mistake I accidentally used Claude Sonnet 3.5 (released ~2 years ago) instead of Opus 4.6 for a task and looked at the output and thought “what is this garbage”?

But daily now: Opus 4.6 is able to generate reasonable PoC level Rust code for complex tasks for me. It’s not perfect – it’s a combination of exhausting and exhilarating to find the 10% absolutely bonkers/broken code that still makes it past subagents.

So yes I use LLMs every day, but I will be clear: if I could push a button to “un-invent” them I absolutely would because I think the long term issues in larger society (not being able to trust any media, and many of the things from Dario’s recent blog etc.) will outweigh the benefits.

But since we can’t un-invent them: here’s my opinion on how they should be used. As a baseline, I agree with a lot from this doc from Oxide about LLMs. What I want to talk about is especially around some of the norms/tools that I see as important for LLM use, following principles similar to those.

On framing: there’s “core” software vs “bespoke”. An entirely new capability of course is for e.g. a nontechnical restaurant owner to use an LLM to generate (“vibe code”) a website (excepting hopefully online ordering and payments!). I’m not overly concerned about this.

Whereas “core” software is what organizations/businesses provide/maintain for others. I work for a company (Red Hat) that produces a lot of this. I am sure no one would want to run for real an operating system, cluster filesystem, web browser, monitoring system etc. that was primarily “vibe coded”.

And while I respect people and groups that are trying to entirely ban LLM use, I don’t think that’s viable for at least my space.

Hence the subject of this blog is my perspective on how LLMs should be used for “core” software: not vibe coding, but using LLMs responsibly and intelligently – and always under human control and review.

Agents should amplify and be controlled by humans

I think most of the industry would agree we can’t give responsibility to LLMs. That means they must be overseen by humans. If they’re overseen by a human, then I think they should be amplifying what that human thinks/does as a baseline – intersected with the constraints of the task of course.

On “amplification”: Everyone using an LLM to generate content should inject their own system prompt (e.g. AGENTS.md) or equivalent. Here’s mine – notice I turn off all the emoji etc. and try hard to tune down bulleted lists because that’s not my style. This is a truly baseline thing to do.

Now most LLM generated content targeted for core software is still going to need review, but just ensuring that the baseline matches what the human does helps ensure alignment.

Pull request reviews

Let’s focus on a very classic problem: pull request reviews. Many projects have wired up a flow such that when a PR comes in, it gets reviewed by a model automatically. Many projects and tools pitch this. We use one on some of my projects.

But I want to get away from this because in my experience these reviews are a combination of:

  • Extremely insightful and correct things (there’s some amazing fine-tuning and tool use that must have happened to find some issues pointed out by some of these)
  • Annoying nitpicks that no one cares about (not handling spaces in a filename in a shell script used for tests)
  • Broken stuff like getting confused by things that happened after its training cutoff (e.g. Gemini especially seems to get confused by referencing the current date, and also is unaware of newer Rust features, etc)

In practice, we just want the first of course.

How I think it should work:

  • A pull request comes in
  • It gets auto-assigned to a human on the team for review
  • A human contributing to that project is running their own agents (wherever: could be local or in the cloud) using their own configuration (but of course still honoring the project’s default development setup and the project’s AGENTS.md etc)
  • A new containerized/sandboxed agent may be spawned automatically, or perhaps the human needs to click a button to do so – or perhaps the human sees the PR come in and thinks “this one needs a deeper review, didn’t we hit a perf issue with the database before?” and adds that to a prompt for the agent.
  • The agent prepares a draft review that only the human can see.
  • The human reviews/edits the draft PR review, and has the opportunity to remove confabulations, add their own content etc. And to send the agent back to look more closely at some code (i.e. this part can be a loop)
  • When the human is happy they click the “submit review” button.
  • Goal: it is 100% clear what parts are LLM generated vs human generated for the reader.

I wrote this agent skill to try to make this work well, and if you search you can see it in action in a few places, though I haven’t truly tried to scale this up.

I think the above matches the vision of LLMs amplifying humans.

Code Generation

There’s no doubt that LLMs can be amazing code generators, and I use them every day for that. But for any “core” software I work on, I absolutely review all of the output – not just superficially, and changes to core algorithms very closely.

At least in my experience the reality is still there’s that percentage of the time when the agent decided to reimplement base64 encoding for no reason, or disable the tests claiming “the environment didn’t support it” etc.

And to me it’s still a baseline for “core” software to require another human review to merge (per above!) with their own customized LLM assisting them (ideally a different model, etc).

FOSS vs closed

Of course, my position here is biased a bit by working on FOSS – I still very much believe in that, and working in a FOSS context can be quite different than working in a “closed environment” where a company/organization may reasonably want to (and be able to) apply uniform rules across a codebase.

While for sure LLMs allow organizations to create their own Linux kernel filesystems or bespoke Kubernetes forks or virtual machine runtime or whatever – it’s not clear to me that it is a good idea for most to do so. I think shared (FOSS) infrastructure that is productized by various companies, provided as a service and maintained by human experts in that problem domain still makes sense. And how we develop that matters a lot.
