
Slashdot

News for nerds, stuff that matters
Updated: 19 hours 56 min ago

Walmart Announces Drone Delivery, Integration with Google's AI Chatbot Gemini

Yesterday, 11/01/2026 - 9:29 PM
Alphabet-owned Wing "is expanding its drone delivery service to an additional 150 Walmart stores across the U.S.," reports Axios: [T]he future is already here if you live in Dallas — where some Walmart customers order delivery by Wing three times a week. By the end of 2026, some 40 million Americans, or about 12 percent of the U.S. population, will be able to take advantage of the convenience, the companies claim... Once the items are picked and packed in a small cardboard basket, they are loaded onto a drone inside a fenced area in the Walmart parking lot. Drones fly autonomously to the designated address, with human pilots monitoring each flight from a central operations hub.... For now, Wing deliveries are free. "The goal is to expose folks to the wonders of drone delivery," explains Wing's chief business officer, Heather Rivera... Over time, she said Wing expects delivery fees to be comparable to other delivery options, but faster and more convenient. Service began recently in Atlanta and Charlotte, and it's coming soon to Los Angeles, Houston, Cincinnati, St. Louis, Miami and other major U.S. cities to be announced later, according to the article. "By 2027, Walmart and Wing say they'll have a network of more than 270 drone delivery locations nationwide." Walmart also announced a new deal today with Google's Gemini, allowing customers to purchase Walmart products from within Gemini. (Walmart announced a similar deal for ChatGPT in October.) Slashdot reader BrianFagioli calls this "a defensive angle that Walmart does not quite say out loud." As AI models answer more questions directly, retailers risk losing customers before they ever hit a website. If Gemini recommends a product from someone else first, Walmart loses the sale before it starts. By planting itself inside the AI, Walmart keeps a seat at the table while the internet shifts under everyone's feet. Google clearly benefits too. 
Gemini gets a more functional purpose than just telling you how to boil pasta or summarize recipes. Now it can carry someone from the moment they wonder what they need to the moment the order is placed. That makes the assistant stickier and a bit more practical than generic chat. Walmart's incoming CEO John Furner says the company wants to shape this new pattern instead of being dragged into it later. Sundar Pichai calls Walmart an early partner in what he sees as a broader wave of agent style commerce, where AI starts doing the errands people used to handle themselves. The article concludes "This partnership serves as a snapshot of where retail seems to be heading..."

Read more of this story at Slashdot.

Gentoo Linux Plans Migration from GitHub Over 'Attempts to Force Copilot Usage for Our Repositories'

Yesterday, 11/01/2026 - 8:29 PM
Gentoo Linux posted its 2025 project retrospective this week. Some interesting details: Mostly because of the continuous attempts to force Copilot usage for our repositories, Gentoo currently considers and plans the migration of our repository mirrors and pull request contributions to Codeberg. Codeberg is a site based on Forgejo, maintained by a non-profit organization, and located in Berlin, Germany. Gentoo continues to host its own primary git, bugs, etc infrastructure and has no plans to change that... We now publish weekly Gentoo images for Windows Subsystem for Linux (WSL), based on the amd64 stages, see our mirrors. While these images are not present in the Microsoft store yet, that's something we intend to fix soon... Given the unfortunate fracturing of the GnuPG / OpenPGP / LibrePGP ecosystem due to competing standards, we now provide an alternatives mechanism to choose the system gpg provider and ease compatibility testing... We have added a bootstrap path for Rust from C++ using Mutabah's Rust compiler mrustc, which alleviates the need for pre-built binaries and makes it significantly easier to support more configurations. Similarly, Ada and D support in gcc now have clean bootstrap paths, which makes enabling these in the compiler as easy as switching the useflags on gcc and running emerge. Other interesting statistics for the year:

- Gentoo currently consists of 31,663 ebuilds for 19,174 different packages.
- For amd64 (x86-64), there are 89 GBytes of binary packages available on the mirrors.
- Gentoo each week builds 154 distinct installation stages for different processor architectures and system configurations, with an overwhelming part of these fully up-to-date.
- The number of commits to the main ::gentoo repository has remained at an overall high level in 2025, with a slight decrease from 123,942 to 112,927.
- The number of commits by external contributors was 9,396, now across 377 unique external authors.
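As a concrete sketch of the "switching the useflags on gcc and running emerge" step: the fragment below is a hypothetical /etc/portage/package.use entry. The exact flag names for the Ada and D frontends on the gcc ebuild are assumptions here; the ebuild itself (or `equery uses gcc`) is the authoritative source.

```
# /etc/portage/package.use/gcc  (hypothetical entry; flag names assumed)
sys-devel/gcc ada d
```

With the entry in place, running `emerge --oneshot sys-devel/gcc` would rebuild the compiler with those frontends enabled.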
Thanks to long-time Slashdot reader Heraklit for sharing the 2025 retrospective.

Read more of this story at Slashdot.

Personal Info on 17.5 Million Users May Have Leaked to Dark Web After 2024 Instagram Breach

Yesterday, 11/01/2026 - 6:34 PM
An anonymous reader shared this report from Engadget: If you received a bunch of password reset requests from Instagram recently, you're not alone. As reported by Malwarebytes, an antivirus software company, there was a data breach revealing the "sensitive information" of 17.5 million Instagram users. Malwarebytes added that the leak included Instagram usernames, physical addresses, phone numbers, email addresses and more. The company added that the "data is available for sale on the dark web and can be abused by cybercriminals." Malwarebytes noted in an email to its customers that it discovered the breach during its routine dark web scan and that it's tied to a potential incident related to an Instagram API exposure from 2024.

Read more of this story at Slashdot.

China Tests a Supercritical CO2 Generator in Commercial Operation

Yesterday, 11/01/2026 - 5:34 PM
"China recently placed a supercritical carbon dioxide power generator into commercial operation," writes CleanTechnica, "and the announcement was widely framed as a technological breakthrough." The system, referred to as Chaotan One, is installed at a steel plant in Guizhou province in mountainous southwest China and is designed to recover industrial waste heat and convert it into electricity. Each unit is reported to be rated at roughly 15 MW, with public statements describing configurations totaling around 30 MW. Claimed efficiency improvements range from 20% to more than 30% higher heat to power conversion compared with conventional steam based waste heat recovery systems. These are big numbers, typical of claims for this type of generator, and they deserve serious attention. China doing something first, however, has never been a reliable indicator that the thing will prove durable, economic, or widely replicable. China is large enough to try almost everything. It routinely builds first of a kind systems precisely because it can afford to learn by doing, discarding what does not work and scaling what does. This approach is often described inside China as crossing the river by feeling for stones. It produces valuable learning, but it also produces many dead ends. The question raised by the supercritical CO2 deployment is not whether China is capable of building it, but whether the technology is likely to hold up under real operating conditions for long enough to justify broad adoption. A more skeptical reading is warranted because Western advocates of specific technologies routinely point to China's limited deployments as evidence that their preferred technologies are viable, when the scale of those deployments actually argues the opposite. China has built a single small modular reactor and a single experimental molten salt reactor, not fleets of them, despite having the capital, supply chains, and regulatory capacity to do so if they made economic sense... 
If small modular reactors or hydrogen transportation actually worked at scale and cost, China would already be building many more of them, and the fact that it is not should be taken seriously rather than pointing to very small numbers of trials compared to China's very large denominators... What is notably absent from publicly available information is detailed disclosure of materials, operating margins, impurity controls, and maintenance assumptions. This is not unusual for early commercial deployments in China. It does mean that external observers cannot independently assess long term durability claims. The article notes America's Energy Department funded a carbon dioxide turbine in Texas rated at roughly 10 MW electric that "reached initial power generation in 2024 after several years of construction and commissioning." But for both these efforts, the article warns that "early efficiency claims should be treated as provisional. A system that starts at 15 MW and delivers 13 MW after several years with rising maintenance costs is not a breakthrough. It is an expensive way to recover waste heat compared with mature steam based alternatives that already operate for decades with predictable degradation..." "If both the Chinese and U.S. installations run for five years without significant reductions in performance and without high maintenance costs, I will be surprised. In that case, it would be worth revisiting this assessment and potentially changing my mind." Thanks to long-time Slashdot reader cusco for sharing the article.

Read more of this story at Slashdot.

That Bell Labs 'Unix' Tape from 1974: From a Closet to Computing History

Yesterday, 11/01/2026 - 4:34 PM
Remember that re-discovered computer tape with one of the earliest versions of Unix from the early 1970s? This week several local news outlets in Utah reported on the find, with KSL creating a video report with shots of the tape arriving at Silicon Valley's Computer History Museum, the closet where it was found, and even its handwritten label. The Salt Lake Tribune reports that the closet where it was found also contained "old cords from unknown sources and mountains of papers that had been dumped from a former professor's file cabinet, including old drawings from his kids and saved plane ticket stubs." (Their report also includes a photo of the University of Utah team that found the tape — the University's Flux Research Group). Professor Robert Ricci believes only 20 copies were ever produced of the version of Unix on that tape: At the time, in the 1970s, Ricci estimates there would have been maybe two or three of those computers — called a PDP-11, or programmed data processor — in Utah that could have run UNIX V4, including the one at the U. Having that technology is part of why he believes the U. got a copy of the rare software. The other part was the distinguished computing faculty at the school. The new UNIX operating system would've been announced at conferences in the early 1970s, and a U. professor at the time named Martin Newell frequently attended those because of his own recognized work in the field, Ricci said. In another box, stuffed in under manila envelopes, [researcher Aleks] Maricq found a 1974 letter written to Newell from Ken Thompson at Bell Labs that said as soon as "a new batch comes from the printers, I will send you the system." Ricci and Maricq are unsure if the software was ever used. They reached out to Newell, who is now 72 and retired, as well as some of his former students. None of them recalled actually running it through the PDP-11... 
The late Jay Lepreau also worked at the U.'s computing department and created the Flux Research Group that Ricci, Maricq and [engineering research associate Jon] Duerig are now part of. Lepreau overlapped just barely with Newell's tenure. In 1978, Lepreau and a team at the U. worked with a group at the University of California, Berkeley. Together, they built their own clone of the UNIX operating system. They called it BSD, or Berkeley Software Distribution. Steve Jobs, the former CEO of Apple, worked with BSD, too, and it influenced his work. Ultimately, it was Lepreau who saved the 9-track tape with the UNIX system on it in his U. office. And he's why the university still has it today. "He seems to have found it and decided it was worth keeping," Ricci said... The U. will also get the tape back from the museum. Maricq said it will likely be displayed in the university's new engineering building that's set to open in January 2027. That's why, the research associate said, he was cleaning out the storage room to begin with — to try to prepare for the move. He was mostly just excited to see the floor again. "I thought we'd find some old stuff, but I didn't think it'd be anything like this," he said. And Maricq still has boxes to go through, including more believed to be from Lepreau's office. Local news station KMYU captured the thoughts of some of the University researchers who found the tape: "When you see the very first beginnings of something, and you go from seed to sapling, that's what we saw here," [engineering research associate Jon] Duerig said. "We see this thing in the moment of flux. We see the signs of all the things changing — of all the things developing that we now see today." Duerig also gave this comment to local news station KSL. "The coolest thing is that anybody, anywhere in the world can now access this, right? People can go on the internet archive and download the raw tape file and simulate running it," Duerig said. 
"People have posted browsable directory trees of the whole thing." One of the museum's directors said the tape's recovery marked a big day for the museum "One of the things that was pretty exciting to us is that just that there is this huge community of people around the world who were excited to jump on the opportunity to look at this piece of history," Ricci said. "And it was really cool that we were able to share that." Duerig said while there weren't many comments or footnotes from the programmers of that time, they did discovery more unexpected content having to do with Bell Labs on the tape. "There were survey results of them actually asking survey questions of their employees at these operator centers," he said. Thanks to long-time Slashdot reader walterbyrd for sharing the news.

Read more of this story at Slashdot.

Cory Doctorow: Legalising Reverse Engineering Could End 'Enshittification'

Yesterday, 11/01/2026 - 1:34 PM
Scifi author/tech activist Cory Doctorow has decried the "enshittification" of our technologies to extract more profit. But Saturday he also described what could be "the beginning of the end for enshittification" in a new article for the Guardian — "our chance to make tech good again". There is only one reason the world isn't bursting with wildly profitable products and projects that disenshittify the US's defective products: its (former) trading partners were bullied into passing an "anti-circumvention" law that bans the kind of reverse-engineering that is the necessary prelude to modifying an existing product to make it work better for its users (at the expense of its manufacturer)... Post-Brexit, the UK is uniquely able to seize this moment. Unlike our European cousins, we needn't wait for the copyright directive to be repealed before we can strike article 6 off our own law books and thereby salvage something good out of Brexit... Until we repeal the anti-circumvention law, we can't reverse-engineer the US's cloud software, whether it's a database, a word processor or a tractor, in order to swap out proprietary, American code for robust, open, auditable alternatives that will safeguard our digital sovereignty. The same goes for any technology tethered to servers operated by any government that might have interests adverse to ours — say, the solar inverters and batteries we buy from China. This is the state of play at the dawn of 2026. The digital rights movement has two powerful potential coalition partners in the fight to reclaim the right of people to change how their devices work, to claw back privacy and a fair deal from tech: investors and national security hawks. Admittedly, the door is only open a crack, but it's been locked tight since the turn of the century. When it comes to a better technology future, "open a crack" is the most exciting proposition I've heard in decades. Thanks to Slashdot reader Bruce66423 for sharing the article.

Read more of this story at Slashdot.

C# (and C) Grew in Popularity in 2025, Says TIOBE

Yesterday, 11/01/2026 - 9:34 AM
For a quarter century, the TIOBE Index has attempted to rank the popularity of programming languages by the number of search engine results they bring up — and this week they had an announcement. Over the last year the language showing the largest increase in its share of TIOBE's results was C#. TIOBE founder/CEO Paul Jansen looks back at how C# evolved: From a language-design perspective, C# has often been an early adopter of new trends among mainstream languages. At the same time, it successfully made two major paradigm shifts: from Windows-only to cross-platform, and from Microsoft-owned to open source. C# has consistently evolved at the right moment. For many years now, there has been a direct battle between Java and C# for dominance in the business software market. I always assumed Java would eventually prevail, but after all this time the contest remains undecided. It is an open question whether Java — with its verbose, boilerplate-heavy style and Oracle ownership — can continue to keep C# at bay. While C# remains stuck in the same #5 position it was in a year ago, its share of TIOBE's results rose 2.94% — the largest increase of the 100 languages in their rankings. But TIOBE's CEO notes that his rankings for the top 10 highest-scoring languages delivered "some interesting movements" in 2025: C and C++ swapped positions. [C rose to the #2 position — behind Python — while C++ dropped from #2 to the #4 rank that C held in January of 2025]. Although C++ is evolving faster than ever, some of its more radical changes — such as the modules concept — have yet to see widespread industry adoption. Meanwhile, C remains simple, fast, and extremely well suited to the ever-growing market of small embedded systems. Even Rust has struggled to penetrate this space, despite reaching an all-time high of position #13 this month. So who were the other winners of 2025, besides C#? Perl made a surprising comeback, jumping from position #32 to #11 and re-entering the top 20. 
Another language returning to the top 10 is R, driven largely by continued growth in data science and statistical computing. Of course, where there are winners, there are also losers. Go appears to have permanently lost its place in the top 10 during 2025. The same seems true for Ruby, which fell out of the top 20 and is unlikely to return anytime soon. What can we expect from 2026? I have a long history of making incorrect predictions, but I suspect that TypeScript will finally break into the top 20. Additionally, Zig, which climbed from position #61 to #42 in 2025, looks like a strong candidate to enter the TIOBE top 30. Here's how TIOBE estimated the 10 most popular programming languages at the end of 2025: Python, C, Java, C++, C#, JavaScript, Visual Basic, SQL, Delphi/Object Pascal, and R.

Read more of this story at Slashdot.

Elon Musk: X's New Algorithm Will Be Made Open Source in Seven Days

Yesterday, 11/01/2026 - 6:34 AM
"We will make the new ð algorithm...open source in 7 days," Elon Musk posted Saturday on X.com. Musk says this is "including all code used to determine what organic and advertising posts are recommended to users," and "This will be repeated every 4 weeks, with comprehensive developer notes, to help you understand what changed." Some context from Engadget: Musk has been making promises of open-sourcing the algorithm since his takeover of Twitter, and in 2023 published the code for the site's "For You" feed on GitHub. But the code wasn't all that revealing, leaving out key details, according to analyses at the time. And it hasn't been kept up to date. Bloomberg also reported on Saturday's announcement: The billionaire didn't say why X was making its algorithm open source. He and the company have clashed several times with regulators over content being shown to users. Some X users had previously complained that they were receiving fewer posts on the social media platform from people they follow. In October, Musk confirmed in a post on X that the company had found a "significant bug" in the platform's "For You" algorithm and pledged a fix. The company has also been working to incorporate more artificial intelligence into its recommendation algorithm for X, using Grok, Musk's artificial intelligence chatbot... In September, Musk wrote that the goal was for X's recommendation engine to "be purely AI" and that the company would share its open source algorithm about every two weeks. "To the degree that people are seeing improvements in their feed, it is not due to the actions of specific individuals changing heuristics, but rather increasing use of Grok and other AI tools," Musk wrote in October. The company was working to have all of the more than 100 million daily posts published to X evaluated by Grok, which would then offer individual users the posts most likely to interest them, Musk wrote. "This will profoundly improve the quality of your feed." 
He added that the company was planning to roll out the new features by November.

Read more of this story at Slashdot.

Nature-Inspired Computers Are Shockingly Good At Math

Yesterday, 11/01/2026 - 3:34 AM
An R&D lab under America's Energy Department announced this week that "Neuromorphic computers, inspired by the architecture of the human brain, are proving surprisingly adept at solving complex mathematical problems that underpin scientific and engineering challenges." Phys.org publishes the announcement from Sandia National Lab: In a paper published in Nature Machine Intelligence, Sandia National Laboratories computational neuroscientists Brad Theilman and Brad Aimone describe a novel algorithm that enables neuromorphic hardware to tackle partial differential equations, or PDEs — the mathematical foundation for modeling phenomena such as fluid dynamics, electromagnetic fields and structural mechanics. The findings show that neuromorphic computing can not only handle these equations, but do so with remarkable efficiency. The work could pave the way for the world's first neuromorphic supercomputer, potentially revolutionizing energy-efficient computing for national security applications and beyond... "We're just starting to have computational systems that can exhibit intelligent-like behavior. But they look nothing like the brain, and the amount of resources that they require is ridiculous, frankly," Theilman said. For decades, experts have believed that neuromorphic computers were best suited for tasks like recognizing patterns or accelerating artificial neural networks. These systems weren't expected to excel at solving rigorous mathematical problems like PDEs, which are typically tackled by traditional supercomputers. But for Aimone and Theilman, the results weren't surprising. The researchers believe the brain itself performs complex computations constantly, even if we don't consciously realize it. "Pick any sort of motor control task — like hitting a tennis ball or swinging a bat at a baseball," Aimone said. "These are very sophisticated computations. They are exascale-level problems that our brains are capable of doing very cheaply..." 
Their research also raises intriguing questions about the nature of intelligence and computation. The algorithm developed by Theilman and Aimone retains strong similarities to the structure and dynamics of cortical networks in the brain. "We based our circuit on a relatively well-known model in the computational neuroscience world," Theilman said. "We've shown the model has a natural but non-obvious link to PDEs, and that link hasn't been made until now — 12 years after the model was introduced." The researchers believe that neuromorphic computing could help bridge the gap between neuroscience and applied mathematics, offering new insights into how the brain processes information. "Diseases of the brain could be diseases of computation," Aimone said. "But we don't have a solid grasp on how the brain performs computations yet." If their hunch is correct, neuromorphic computing could offer clues to better understand and treat neurological conditions like Alzheimer's and Parkinson's.
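To make "partial differential equations" concrete for readers: below is a minimal textbook finite-difference sketch of the 1D heat equation, one of the simplest members of the PDE family the article mentions. This is a conventional illustration of what such a solver does, not the Sandia neuromorphic algorithm itself.

```python
import numpy as np

# Solve u_t = alpha * u_xx on [0, 1] with fixed zero boundaries, using
# explicit finite differences. A textbook PDE sketch for comparison only;
# it is not the neuromorphic method from the Nature Machine Intelligence paper.
def heat_step(u, alpha, dx, dt):
    """Advance the temperature profile u by one explicit Euler time step."""
    lap = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2   # discrete Laplacian
    u_next = u.copy()
    u_next[1:-1] += dt * alpha * lap                 # interior points only
    return u_next

nx, alpha = 101, 1.0
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha        # within the stability bound dt <= dx^2 / (2*alpha)
u = np.zeros(nx)
u[nx // 2] = 1.0                # initial spike of heat in the middle
for _ in range(200):
    u = heat_step(u, alpha, dx, dt)
# The spike spreads into a smooth bump; total heat is (nearly) conserved.
```

Running many such steps diffuses the initial spike outward, which is exactly the behavior a fluid-dynamics or thermal solver reproduces at much larger scale.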

Read more of this story at Slashdot.

Four More Tech Bloggers Are Switching to Linux

Saturday, 10/01/2026 - 11:34 PM
Is there a trend? This week four different articles appeared on various tech-news sites with an author bragging about switching to Linux. "Greetings from the year of Linux on my desktop," quipped the Verge's senior reviews editor, who finally "got fed up and said screw it, I'm installing Linux." They switched to CachyOS — just like this writer for the videogame magazine Escapist: I've had a fantastic time gaming on Linux. Valve's Windows-to-Linux translation layer, Proton, and even CachyOS' bundled fork have been working just fine. Of course, it's not perfect, and there's been a couple of instances where I've had to problem-solve something, but most of the time, any issues gaming on Linux have been fixed by swapping to another version of Proton. If you're deep in online games like Fortnite, Call of Duty, Destiny 2, GTAV or Battlefield 6, it might not be the best option to switch. These games feature anti-cheats that look for versions of Windows or even the heart of the OS, the kernel, to verify the system isn't going to mess up someone's game.... CachyOS is thankfully pre-packed with Nvidia drivers, meaning I didn't have to dance around trying to find them.... Certain titles will perform worse than their counterparts, simply due to how the bods at Nvidia are handling the drivers for Linux. This said, I'm still not complaining when I'm pushing nearly 144fps or more in newer games. The performance hit is there, but it's nowhere near enough to stave off even an attempt to mess about with Linux. Do you know how bizarre it is to say it's "nice to have a taskbar again"? I use macOS daily for a lot of my work, which uses a design baked back in the 1990s through NeXT. Seeing just a normal taskbar that doesn't try to advertise to me or crash because an update killed it for some reason is fantastic. That's how bad it is out there right now for Windows. "I run Artix, by the way," joked a senior tech writer at Notebookcheck (adding "There. 
That's out of the way...") I dual-booted a Linux partition for a few weeks. After a Windows update (that I didn't choose to do) wiped that partition and, consequently, the Linux installation, I decided to go whole-hog: I deleted Windows 11 and used the entire drive for Linux... Artix differs from Arch in that it does not use SystemD as its init system. I won't go down the rabbit hole of init systems here, but suffice it to say that Artix boots lightning quick (less than 10 seconds from a cold power on) and is pretty light on system resources. However, it didn't come "fully assembled..." The biggest problem I ran into after installing Artix on the [MacBook] Air was the lack of wireless drivers, which meant that WiFi did not work out of the box. The resolution was simple: I needed to download the appropriate WiFi drivers (Broadcom drivers, to be exact) from Artix's main repository. This is a straightforward process handled by a single command in the Terminal, but it requires an internet connection... which my laptop did not have. Ultimately, I connected a USB-to-Ethernet adapter, plugged the laptop directly into my router, and installed the WiFi drivers that way. The whole process took about 10 minutes, but it was annoying nonetheless. For the record, my desktop (an AMD Ryzen 7 6800H-based system) worked flawlessly out-of-the-box, even with my second monitor's uncommon resolution (1680x1050, vertical orientation). I did run into issues with installing some packages on both machines. Trying to install the KDE desktop environment (essentially a different GUI for the main OS) resulted in strange artifacts that put white text on white backgrounds in the menus, and every resolution I tried failed to correct this bug. After reverting to XFCE4 (the default desktop environment for my Artix install), the WiFi signal indicator in the taskbar disappeared. 
This led to me having to uninstall a network manager installed by KDE and re-linking the default network manager to the runit services startup folder. If that sentence sounds confusing, the process was much more so. It has been resolved, and I have a WiFi indicator that lets me select wireless networks again, but only after about 45 minutes of reading manuals and forum posts. Other issues are inherent to Linux. Not all games on Steam that are deemed Linux compatible actually are. Civilization III Complete is a good example: launching the game results in the map turning completely black. (Running the game through an application called Lutris resolved this issue.) Not all the software I used on Windows is available in Linux, such as Greenshot for screenshots or uMark for watermarking photos in bulk. There are alternatives to these, but they don't have the same features or require me to relearn workflows... Linux is not a "one and done" silver bullet to solve all your computer issues. It is like any other operating system in that it will require users to learn its methods and quirks. Admittedly, it does require a little bit more technical knowledge to dive into the nitty-gritty of the OS and fully unlock its potential, but many distributions (such as Mint) are ready to go out of the box and may never require someone to open a command line... [T]he issues I ran into on Linux were, for the most part, my fault. On Windows or macOS, most problems I run into are caused by a restriction or bug in the OS. Linux gives me the freedom to break my machine and fix it again, teaching me along the way. With Microsoft's refusal (either from pride or ignorance) to improve (or at least not crapify) Windows 11 despite loud user outrage, switching to Linux is becoming a popular option. It's one you should consider doing, and if you've been thinking about it for any length of time, it's time to dive in. 
And tinkerer Kevin Wammer switched from MacOS to Linux, saying "Linux has come a long way" after more than 30 years — but "Windows still sucks..."

Read more of this story at Slashdot.

Why Care About Debt-to-GDP?

Friday, 09/01/2026 - 3:59 PM
Abstract of a paper on NBER: We construct an international panel data set comprising three distinct yet plausible measures of government indebtedness: the debt-to-GDP, the interest-to-GDP, and the debt-to-equity ratios. Our analysis reveals that these measures yield differing conclusions about recent trends in government indebtedness. While the debt-to-GDP ratio has reached historically high levels, the other two indicators show either no clear trend or a declining pattern over recent decades. We argue for the development of stronger theoretical foundations for the measures employed in the literature, suggesting that, without such grounding, assertions about debt (un)sustainability may be premature.
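The abstract's central claim, that the different measures can trend in opposite directions, is easy to see with toy numbers (all figures below are hypothetical, not from the paper): if the stock of debt doubles while the average interest rate paid on it falls far enough, debt-to-GDP hits a record while interest-to-GDP declines.

```python
# Hypothetical figures only: show how debt-to-GDP and interest-to-GDP can
# move in opposite directions when interest rates fall.
def measures(debt, gdp, avg_rate):
    """Return (debt/GDP, interest/GDP) for a given debt stock, GDP,
    and average interest rate paid on the debt."""
    return debt / gdp, (debt * avg_rate) / gdp

then_debt_gdp, then_int_gdp = measures(debt=60.0, gdp=100.0, avg_rate=0.05)
now_debt_gdp, now_int_gdp = measures(debt=120.0, gdp=100.0, avg_rate=0.02)

print(then_debt_gdp, now_debt_gdp)   # debt/GDP: 0.6 -> 1.2, a record high
print(then_int_gdp, now_int_gdp)     # interest/GDP: 0.03 -> 0.024, declining
```

Which measure you pick therefore drives the conclusion, which is why the authors call for stronger theoretical grounding before declaring debt (un)sustainable.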

Read more of this story at Slashdot.

Record Ocean Heat is Intensifying Climate Disasters, Data Shows

Friday, 09/01/2026 - 3:07 PM
The world's oceans absorbed yet another record-breaking amount of heat in 2025, continuing an almost unbroken streak of annual records since the start of the millennium and fueling increasingly extreme weather events around the globe. More than 90% of the heat trapped by humanity's carbon emissions ends up in the oceans, making ocean heat content one of the clearest indicators of the climate crisis's trajectory. The analysis, published in the journal Advances in Atmospheric Sciences, drew on temperature data collected across the oceans and collated by three independent research teams. The measurements cover the top 2,000 meters of ocean depth, where most heat absorption occurs. The amount of heat absorbed is equivalent to more than 200 times the total electricity used by humans worldwide. This extra thermal energy intensifies hurricanes and typhoons, produces heavier rainfall and greater flooding, and results in longer marine heatwaves that decimate ocean life. The oceans are likely at their hottest in at least 1,000 years and heating faster than at any point in the past 2,000 years.

Read more of this story at Slashdot.

Fusion Physicists Found a Way Around a Long-Standing Density Limit

Fri, 09/01/2026 - 11:00am
alternative_right shares a report from ScienceAlert: At the Experimental Advanced Superconducting Tokamak (EAST), physicists successfully exceeded what is known as the Greenwald limit, a practical density boundary beyond which plasmas tend to violently destabilize, often damaging reactor components. For a long time, the Greenwald limit was accepted as a given and incorporated into fusion reactor engineering. The new work shows that precise control over how the plasma is created and interacts with the reactor walls can push it beyond this limit into what physicists call a 'density-limit-free' regime. [...] A team led by physicists Ping Zhu of Huazhong University of Science and Technology and Ning Yan of the Chinese Academy of Sciences designed an experiment to take this theory further, based on a simple premise: that the density limit is strongly influenced by the initial plasma-wall interactions as the reactor starts up. In their experiment, the researchers wanted to see if they could deliberately steer the outcome of this interaction. They carefully controlled the pressure of the fuel gas during tokamak startup and added a burst of heating called electron cyclotron resonance heating. These changes altered how the plasma interacted with the tokamak walls, producing a cooler plasma boundary that dramatically reduced the degree to which wall impurities entered the plasma. Under this regime, the researchers were able to reach densities up to about 65 percent higher than the tokamak's Greenwald limit. This doesn't mean that magnetically confined plasmas can now operate with no density limits whatsoever. However, it does show that the Greenwald limit is not a fundamental barrier and that tweaking operational processes could lead to more effective fusion reactors. The findings have been published in Science Advances.
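The Greenwald limit itself is a simple empirical scaling: n_G = I_p / (πa²), giving a density in units of 10²⁰ m⁻³ for a plasma current I_p in megaamperes and a minor radius a in meters. A minimal sketch of the headroom the experiment reports (the example parameters are illustrative, not EAST's actual operating values):

```python
import math

def greenwald_density(plasma_current_ma: float, minor_radius_m: float) -> float:
    """Greenwald density limit, n_G = I_p / (pi * a^2),
    in units of 1e20 particles per cubic meter."""
    return plasma_current_ma / (math.pi * minor_radius_m ** 2)

# Illustrative tokamak: 1 MA plasma current, 0.45 m minor radius
n_g = greenwald_density(1.0, 0.45)   # ~1.57 (x 1e20 m^-3)
n_achieved = 1.65 * n_g              # the ~65% exceedance reported in the experiment
```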

Read more of this story at Slashdot.

Ultimate Camouflage Tech Mimics Octopus In Scientific First

Fri, 09/01/2026 - 8:00am
Researchers at Stanford University have created a programmable synthetic "skin" that can independently change color and texture, "a feat previously only available within the animal kingdom," reports The Register. From the report: The technique employs electron beams to write patterns and add optical layers that create color effects. When exposed to water, the film swells to reveal texture and colors independently, depending on which side of the material is exposed, according to a paper published in the scientific journal Nature this week. In an accompanying article, University of Stuttgart's Benjamin Renz and Na Liu said the researchers' "most striking achievement was a photonic skin in which color and texture could be independently controlled, mirroring the separate regulation... in octopuses." The research team used the polymer PEDOT:PSS, which can swell in water, as the basis for their material. Its reaction to water can be controlled by irradiating it with electrons, creating textures and patterns in the film. By adding thin layers of gold, the researchers turned surface texture into tunable optical effects. A single layer could be used to scatter light, giving the shiny metal a matte, textured appearance. To control color, a polymer film was sandwiched between two layers of gold, forming an optical cavity, which selectively reflects light.

Read more of this story at Slashdot.

Some Super-Smart Dogs Can Learn New Words Just By Eavesdropping

Fri, 09/01/2026 - 4:30am
An anonymous reader quotes a report from NPR: [I]t turns out that some genius dogs can learn a brand new word, like the name of an unfamiliar toy, by just overhearing brief interactions between two people. What's more, these "gifted" dogs can learn the name of a new toy even if they first hear this word when the toy is out of sight -- as long as their favorite human is looking at the spot where the toy is hidden. That's according to a new study in the journal Science. "What we found in this study is that the dogs are using social communication. They're using these social cues to understand what the owners are talking about," says cognitive scientist Shany Dror of Eötvös Loránd University and the University of Veterinary Medicine, Vienna. "This tells us that the ability to use social information is actually something that humans probably had before they had language," she says, "and language was kind of hitchhiking on these social abilities." [...] "There's only a very small group of dogs that are able to learn this differentiation and then can learn that certain labels refer to specific objects," she says. "It's quite hard to train this and some dogs seem to just be able to do it." [...] To explore the various ways that these dogs are capable of learning new words, Dror and some colleagues conducted a study that involved two people interacting while their dog sat nearby and watched. One person would show the other a brand new toy and talk about it, with the toy's name embedded into sentences, such as "This is your armadillo. It has armadillo ears, little armadillo feet. It has a tail, like an armadillo tail." Even though none of this language was directed at the dogs, it turns out the super-learners registered the new toy's name and were later able to pick it out of a pile, at the owner's request. To do this, the dogs had to go into a separate room where the pile was located, so the humans couldn't give them any hints.
Dror says that as she watched the dogs on camera from the other room, she was "honestly surprised" because they seemed to have so much confidence. "Sometimes they just immediately went to the new toy, knowing what they're supposed to do," she says. "Their performance was really, really high." She and her colleagues wondered if what mattered was the dog being able to see the toy while its name was said aloud, even if the words weren't explicitly directed at the dog. So they did another experiment that created a delay between the dog seeing a new toy and hearing its name. The dogs got to see the unfamiliar toy and then the owner dropped the toy in a bucket, so it was out of sight. Then the owner would talk to the dog, and mention the toy's name, while glancing down at the bucket. While this was more difficult for dogs, overall they still could use this information to learn the name of the toy and later retrieve it when asked. "This shows us how flexible they are able to learn," says Dror. "They can use different mechanisms and learn under different conditions."

Read more of this story at Slashdot.

YouTube Will Now Let You Filter Shorts Out of Search Results

Fri, 09/01/2026 - 3:10am
YouTube is updating search filters so users can explicitly choose between Shorts and long-form videos. The change also replaces view-count sorting with a new "Popularity" filter and removes underperforming options like "Sort by Rating." The Verge reports: Right now, a filter-less search shows a mix of long-form and short-form videos, which can be annoying if you just want to see videos in one format or the other. But in the new search filters, among other options, you can choose to see "Videos," which in my testing has only shown a list of long-form videos, or "Shorts," which shows just Shorts. YouTube is also removing the "Upload Date - Last Hour" and "Sort by Rating" filters because they "were not working as expected and had contributed to user complaints." The company will still offer other "Upload Date" filters, like "Today," "This week," "This Month," and "This Year," and you can also find popular videos with the new "Popularity" filter, which is replacing the "View count" sort option. (With the new "Popularity" filter, YouTube says that "our systems assess a video's view count and other relevance signals, such as watch time, to determine its popularity for that specific query.")
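The described behavior amounts to a format filter plus a blended ranking signal. A minimal sketch, where the field names, the three-minute duration cutoff used as a proxy for the Shorts format, and the popularity weights are all assumptions, not YouTube's actual implementation:

```python
# Illustrative sketch of format filtering plus a blended "popularity" sort.
# Field names, the duration-based Shorts proxy, and the weights are assumed.

SHORTS_MAX_SECONDS = 180  # Shorts run up to 3 minutes

def filter_by_format(results, fmt):
    """Return only Shorts, only long-form videos, or everything."""
    if fmt == "shorts":
        return [r for r in results if r["duration_s"] <= SHORTS_MAX_SECONDS]
    if fmt == "videos":
        return [r for r in results if r["duration_s"] > SHORTS_MAX_SECONDS]
    return results

def popularity_key(r):
    """Blend view count with a watch-time signal, per the quoted description."""
    return 0.5 * r["views"] + 100 * r["watch_hours"]

results = [
    {"title": "tutorial", "duration_s": 900, "views": 5_000, "watch_hours": 800},
    {"title": "clip", "duration_s": 45, "views": 90_000, "watch_hours": 300},
]
long_form = sorted(filter_by_format(results, "videos"),
                   key=popularity_key, reverse=True)
```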

Read more of this story at Slashdot.

Lawsuit Over OpenAI For-Profit Conversion Can Head To Trial, US Judge Says

Fri, 09/01/2026 - 2:30am
Longtime Slashdot reader schwit1 shares a report from Reuters: Billionaire entrepreneur Elon Musk persuaded a judge on Wednesday to allow a jury trial on his allegations that ChatGPT maker OpenAI violated its founding mission in its high-profile restructuring to a for-profit entity. Musk was a cofounder of OpenAI in 2015 but left in 2018 and now runs an AI company that competes with it. U.S. District Judge Yvonne Gonzalez Rogers in Oakland, California, said at a hearing that there was "plenty of evidence" suggesting OpenAI's leaders made assurances that its original nonprofit structure was going to be maintained. The judge said there were enough disputed facts to let a jury consider the claims at a trial scheduled for March, rather than decide the issues herself. She said she would issue a written order after the hearing that addresses OpenAI's bid to throw out the case. [...] Musk contends he contributed about $38 million, roughly 60% of OpenAI's early funding, along with strategic guidance and credibility, based on assurances that the organization would remain a nonprofit dedicated to the public benefit. The lawsuit accuses OpenAI co-founders Sam Altman and Greg Brockman of plotting a for-profit switch to enrich themselves, culminating in multibillion-dollar deals with Microsoft and a recent restructuring. OpenAI, Altman and Brockman have denied the claims, and they called Musk "a frustrated commercial competitor seeking to slow down a mission-driven market leader." Microsoft is also a defendant and has urged the judge to toss Musk's lawsuit. A lawyer for Microsoft said there was no evidence that the company "aided and abetted" OpenAI. OpenAI in a statement after the hearing said: "Mr Musk's lawsuit continues to be baseless and a part of his ongoing pattern of harassment, and we look forward to demonstrating this at trial."

Read more of this story at Slashdot.

Illinois Health Department Exposed Over 700,000 Residents' Personal Data For Years

Fri, 09/01/2026 - 1:50am
The Illinois Department of Human Services disclosed that a misconfigured internal mapping website exposed sensitive personal data for more than 700,000 Illinois residents for over four years, from April 2021 to September 2025. Officials say they can't confirm whether the publicly accessible data was ever viewed. TechCrunch reports: Officials said the exposed data included personal information on 672,616 individuals who are Medicaid and Medicare Savings Program recipients. The data included their addresses, case numbers, and demographic data -- but not individuals' names. The exposed data also included names, addresses, case statuses, and other information relating to 32,401 individuals in receipt of services from the department's Division of Rehabilitation Services.

Read more of this story at Slashdot.

Google Is Adding an 'AI Inbox' To Gmail That Summarizes Emails

Fri, 09/01/2026 - 1:10am
An anonymous reader quotes a report from Wired: Google is putting even more generative AI tools into Gmail as part of its goal to further personalize user inboxes and streamline searches. On Thursday, the company announced a new "AI Inbox" tab, currently in a beta testing phase, that reads every message in a user's Gmail and suggests a list of to-dos and key topics, based on what it summarizes. In Google's example of what this AI Inbox could look like in Gmail, the new tab takes context from a user's messages and suggests they reschedule their dentist appointment, reply to a request from their child's sports coach, and pay an upcoming fee before the deadline. Also under the AI Inbox tab is a list of important topics worth browsing, nestled beneath the action items at the top. Each suggested to-do and topic links back to the original email for more context and for verification. [...] For users who are concerned about their privacy, the information Google gleans by skimming through inboxes will not be used to improve the company's foundational AI models. "We didn't just bolt AI onto Gmail," says Blake Barnes, who leads the project for Google. "We built a secure privacy architecture, specifically for this moment." He emphasizes that users can turn off Gmail's new AI tools if they don't want them. At the same time Google announced its AI Inbox, the company made several Gemini features that were previously available only to paying subscribers free for all Gmail users. This includes the Help Me Write tool, which generates emails from a user prompt, as well as AI Overviews for email threads, which essentially posts a TL;DR summary at the top of long message threads. Subscribers to Google's Ultra and Pro plans, which start at $20 a month, get two additional new features in their Gmail inbox. First, an AI proofreading tool that suggests more polished grammar and sentence structures.
And second, an AI Overviews tool that can search your whole inbox and create relevant summaries on a topic, rather than just summarizing a single email thread.

Read more of this story at Slashdot.

French Court Orders Google DNS to Block Pirate Sites, Dismisses 'Cloudflare-First' Defense

Fri, 09/01/2026 - 12:30am
The Paris Judicial Court ordered Google to block additional pirate sports-streaming domains at the DNS level, rejecting Google's argument that enforcement should target upstream providers like Cloudflare first. "The blockade was requested by Canal+ and aims to stop pirate streams of Champions League games," notes TorrentFreak. From the report: Most recently, Google was compelled to take action following a complaint from French broadcaster Canal+ and its subsidiaries regarding Champions League piracy. Like previous blocking cases, the request is grounded in Article L. 333-10 of the French Sports Code, which enables rightsholders to seek court orders against any entity that can help to stop 'serious and repeated' sports piracy. After reviewing the evidence and hearing arguments from both sides, the Paris Court granted the blocking request, ordering Google to block nineteen domain names, including antenashop.site, daddylive3.com, livetv860.me, streamysport.org and vavoo.to. The latest blocking order covers the entire 2025/2026 Champions League series, which ends on May 30, 2026. It's a dynamic order too, which means that if these sites switch to new domains, as verified by ARCOM, those have to be blocked as well. Google objected to the blocking request. Among other things, it argued that several domains were linked to Cloudflare's CDN. Therefore, suspending the sites on the CDN level would be more effective, as that would render them inaccessible. Based on the subsidiarity principle, Google argued that blocking measures should only be ordered if attempts to block the pirate sites through more direct means have failed. The court dismissed these arguments, noting that intermediaries cannot dictate the enforcement strategy or blocking order. Intermediaries cannot require "prior steps" against other technical intermediaries, especially given the "irremediable" character of live sports piracy.
The judge found the block proportional because Google remains free to choose the technical method, even if the result is mandated. Internet providers, search engines, CDNs, and DNS resolvers can all be required to block, irrespective of what other measures were taken previously. Google further argued that the blocking measures were disproportionate because they were complex, costly, easily bypassed, and had effects beyond the borders of France. The Paris court rejected these claims. It argued that Google failed to demonstrate that implementing these blocking measures would result in "important costs" or technical impossibilities. Additionally, the court recognized that there would still be options for people to bypass these blocking measures. However, the blocks are a necessary step to "completely cease" the infringing activities.
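DNS-level blocking of this kind can be pictured as a resolver consulting a blocklist before answering. A minimal sketch, where the blocked domains come from the court order but the resolver logic is illustrative (real resolvers typically answer blocked queries with NXDOMAIN or REFUSED rather than returning nothing):

```python
# Minimal sketch of DNS-level blocking as a resolver might implement it.
# Blocklist entries are taken from the court order; the logic is illustrative.

BLOCKLIST = {"daddylive3.com", "vavoo.to"}

def resolve(domain, upstream):
    """Return an address from the upstream lookup, or None if blocked."""
    labels = domain.lower().rstrip(".").split(".")
    # Check the domain and every parent suffix, so switching to a subdomain
    # such as stream.vavoo.to does not evade the block.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return None  # a real resolver would synthesize NXDOMAIN here
    return upstream(domain)

# usage with a stub upstream lookup
blocked = resolve("vavoo.to", upstream=lambda d: "203.0.113.7")
allowed = resolve("example.org", upstream=lambda d: "203.0.113.7")
```

The dynamic part of the order then amounts to ARCOM adding verified replacement domains to the blocklist over time.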

Read more of this story at Slashdot.
