Feed aggregator

Paleontologists Identify Tiny Three-Eyed 'Sea Moth' Predator in Fossils

Slashdot - Sat, 17/05/2025 - 5:34 PM
"With the help of more than five dozen fossils, paleontologists have uncovered a tiny three-eyed predator nicknamed the 'sea moth'," reports CNN, "that swam in Earth's oceans 506 million years ago." Tiny as in 15 to 61 mm in total body length. (That's 0.60 to 2.4 inches...) But check out the illustration in CNN's article... Mosura fentoni, as the species is known, belongs to a group called radiodonts, an early offshoot of the arthropod evolutionary tree, according to a new study published Tuesday in the journal Royal Society Open Science. While radiodonts are now extinct, studying their fossilized remains can illuminate how modern arthropods such as insects, spiders and crabs evolved. One of the most diverse animal groups, arthropods are believed to account for more than 80% of living animal species, said lead study author Dr. Joe Moysiuk, curator of paleontology and geology at the Manitoba Museum in Winnipeg. Well-preserved specimens of the previously unknown Mosura fentoni also reveal something that's never been seen in any other radiodont: an abdomen-like body region with 16 segments that include gills at its rear. This part of the creature's anatomy is similar to a batch of segments bearing respiratory organs at the rear of the body found in distant modern radiodont relatives like horseshoe crabs, woodlice and insects, Moysiuk said.... No animal living today quite looks like Mosura fentoni, Moysiuk said, although it had jointed claws similar to those of modern insects and crustaceans. But unlike those critters, which can have two or four additional eyes used to help maintain orientation, Mosura had a larger and more conspicuous third eye in the middle of its head. "Although not closely related, Mosura probably swam in a similar way to a ray, undulating its multiple sets of swimming flaps up and down, like flying underwater," Moysiuk said in an email. 
"It also had a mouth shaped like a pencil sharpener and lined with rows of serrated plates, unlike any living animal." About the size of an adult human's index finger, Mosura and its swimming flaps vaguely resemble a moth, which led researchers to call it the "sea moth." The Royal Society publication notes the etymology of the species name (Mosura fentoni is "from the name of the fictional Japanese monster, or kaiju... also known as 'Mothra'...in reference to the moth-like appearance of the animal." Thanks to long-time Slashdot reader walterbyrd for sharing the news.

Read more of this story at Slashdot.

Rust Creator Graydon Hoare Thanks Its Many Stakeholders - and Mozilla - on Rust's 10th Anniversary

Slashdot - Sat, 17/05/2025 - 4:34 PM
Thursday marked the 10-year anniversary of Rust's first stable release. "To say I'm surprised by its trajectory would be a vast understatement," writes Rust's original creator Graydon Hoare. "I can only thank, congratulate, and celebrate everyone involved... In my view, Rust is a story about a large community of stakeholders coming together to design, build, maintain, and expand shared technical infrastructure." It's a story with many actors:

- The population of developers the language serves, who express their needs and constraints through discussion, debate, testing, and bug reports arising from their experience writing libraries and applications.
- The language designers and implementers who work to satisfy those needs and constraints while wrestling with the unexpected consequences of each decision.
- The authors, educators, speakers, translators, illustrators, and others who work to expand the set of people able to use the infrastructure and work on the infrastructure.
- The institutions investing in the project who provide the long-term funding and support necessary to sustain all this work over decades.

All these actors have a common interest in infrastructure. Rather than just "systems programming," Hoare sees Rust as a tool for building infrastructure itself, "the robust and reliable necessities that enable us to get our work done" — a wide range that includes everything from embedded and IoT systems to multi-core systems. So the story of "Rust's initial implementation, its sustained investment, and its remarkable resonance and uptake all happened because the world needs robust and reliable infrastructure, and the infrastructure we had was not up to the task." Put simply: it failed too often, in spectacular and expensive ways: crashes and downtime in the best cases, and security vulnerabilities in the worst. 
Efficient "infrastructure-building" languages existed but they were very hard to use, and nearly impossible to use safely, especially when writing concurrent code. This produced an infrastructure deficit many people felt, if not everyone could name, and it was growing worse by the year as we placed ever-greater demands on computers to work in ever more challenging environments... We were stuck with the tools we had because building better tools like Rust was going to require an extraordinary investment of time, effort, and money. The bootstrap Rust compiler I initially wrote was just a few tens of thousands of lines of code; that was nearing the limits of what an unfunded solo hobby project can typically accomplish. Mozilla's decision to invest in Rust in 2009 immediately quadrupled the size of the team — it created a team in the first place — and then doubled it again, and again in subsequent years. Mozilla sustained this very unusual, very improbable investment in Rust from 2009-2020, as well as funding an entire browser engine written in Rust — Servo — from 2012 onwards, which served as a crucial testbed for Rust language features. Rust and Servo had multiple contributors at Samsung, Hoare acknowledges, and Amazon, Facebook, Google, Microsoft, Huawei, and others "hired key developers and contributed hardware and management resources to its ongoing development." Rust itself "sits atop LLVM" (developed by researchers at UIUC and later funded by Apple, Qualcomm, Google, ARM, Huawei, and many other organizations), while Rust's safe memory model "derives directly from decades of research in academia, as well as academic-industrial projects like Cyclone, built by AT&T Bell Labs and Cornell." And there were contributions from "interns, researchers, and professors at top academic research programming-language departments, including CMU, NEU, IU, MPI-SWS, and many others." 
JetBrains and the Rust-Analyzer OpenCollective essentially paid for two additional interactive-incremental reimplementations of the Rust frontend to provide language services to IDEs — critical tools for productive, day-to-day programming. Hundreds of companies and other institutions contributed time and money to evaluate Rust for production, write Rust programs, test them, file bugs related to them, and pay their staff to fix or improve any shortcomings they found. Last but very much not least: Rust has had thousands and thousands of volunteers donating years of their labor to the project. While it might seem tempting to think this is all "free", it's being paid for! Just less visibly than if it were part of a corporate budget. All this investment, despite the long time horizon, paid off. We're all better for it. He looks ahead with hope for a future with new contributors, "steady and diversified streams of support," and continued reliability and compatibility (including "investment in ever-greater reliability technology, including the many emerging formal methods projects built on Rust"). And he closes by saying Rust's "sustained, controlled, and frankly astonishing throughput of work" has "set a new standard for what good tools, good processes, and reliable infrastructure software should be like." "Everyone involved should be proud of what they've built."
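
Hoare's point about writing concurrent code safely is concrete in even a few lines of Rust: the compiler rejects unsynchronized shared mutation across threads at compile time, so sharing has to be spelled out with types like `Arc` and `Mutex`. A minimal sketch (the function name `parallel_count` is purely illustrative):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `n` threads that each increment a shared counter once.
// Dropping the Mutex (or the Arc) here is a compile-time error rather
// than a latent data race: this is the class of bug Rust rules out.
fn parallel_count(n: usize) -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let result = *counter.lock().unwrap();
    result
}

fn main() {
    println!("{}", parallel_count(4)); // prints 4
}
```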

The Top Fell Off Australia's First Orbital-Class Rocket, Delaying Its Launch

Slashdot - Sat, 17/05/2025 - 12:00 PM
Australia's first orbital-class rocket launch was delayed after the nose cone of Gilmour Space's Eris rocket unexpectedly detached due to an electrical fault during final preparations. Although no damage occurred and no payload was onboard, the company is postponing the launch to investigate and replace the fairing before attempting another test flight. Ars Technica reports: Gilmour, the Australian startup that developed the Eris rocket, announced the setback in a post to the company's social media accounts Thursday. "During final launch preparations last night, an electrical fault triggered the system that opens the rocket's nose cone (the payload fairing)," Gilmour posted on LinkedIn. "This happened before any fuel was loaded into the vehicle. Most importantly, no one was injured, and early checks show no damage to the rocket or the launch pad." Gilmour was gearing up for a launch attempt from a privately owned spaceport in the Australian state of Queensland early Friday, local time (Thursday in the United States). The company's Eris rocket, which was poised for its first test flight, stands about 82 feet (25 meters) tall with its payload fairing intact. It's designed to haul a payload of about 670 pounds (305 kilograms) to low-Earth orbit. While Gilmour didn't release any photos of the accident, a company spokesperson confirmed to Ars that the payload fairing "deployed" after the unexpected electrical issue triggered the separation system. Payload fairings are like clamshells that enclose the satellites mounted to the top of their launch vehicle, protecting them from weather on the launch pad and from airflow as the rocket accelerates to supersonic speeds. Once in space, the rocket releases the payload shroud, usually in two halves. There were no satellites aboard the rocket as Gilmour prepared for its first test flight. The report notes that the Eris rocket is aiming to "become the first all-Australian launcher to reach orbit."

NASA Resurrects Voyager 1 Interstellar Spacecraft's Thrusters After 20 Years

Slashdot - Sat, 17/05/2025 - 9:00 AM
NASA engineers have successfully revived Voyager 1's backup thrusters, unused since 2004 and once considered defunct. Space.com reports: This remarkable feat became necessary because the spacecraft's primary thrusters, which control its orientation, have been degrading due to residue buildup. If its thrusters fail completely, Voyager 1 could lose its ability to point its antenna toward Earth, thereby cutting off communication after nearly 50 years of operation. To make matters more urgent, the team faced a strict deadline while trying to remedy the thruster situation. After May 4, the Earth-based antenna that sends commands to Voyager 1 -- and its twin, Voyager 2 -- was scheduled to go offline for months of upgrades. This would have made timely intervention impossible. To solve the problem, NASA's team had to reactivate Voyager 1's long-dormant backup roll thrusters and then attempt to restart the heaters that keep them operational. If the star tracker drifted too far from its guide star during this process, the roll thrusters would automatically fire as a safety measure -- but if the heaters weren't back online by then, firing the thrusters could cause a dangerous pressure spike. So, the team had to precisely realign the star tracker before the thrusters engaged. Because Voyager is so incredibly distant, the team faced an agonizing 23-hour wait for the radio signal to travel all the way back to Earth. If the test had failed, Voyager might have already been in serious trouble. Then, on March 20, their patience was finally rewarded when Voyager responded perfectly to their commands. Within 20 minutes of receiving the signal, the team saw the thruster heaters' temperature soar -- a clear sign that the backup thrusters were firing as planned. "It was such a glorious moment. Team morale was very high that day," Todd Barber, the mission's propulsion lead at JPL, said in the statement. "These thrusters were considered dead. And that was a legitimate conclusion. It's just that one of our engineers had this insight that maybe there was this other possible cause, and it was fixable. It was yet another miracle save for Voyager."

FDA Clears First Blood Test To Help Diagnose Alzheimer's Disease

Slashdot - Sat, 17/05/2025 - 5:30 AM
An anonymous reader quotes a report from the Associated Press: U.S. health officials on Friday endorsed the first blood test that can help diagnose Alzheimer's and identify patients who may benefit from drugs that can modestly slow the memory-destroying disease. The test can aid doctors in determining whether a patient's memory problems are due to Alzheimer's or a number of other medical conditions that can cause cognitive difficulties. The Food and Drug Administration cleared it for patients 55 and older who are showing early signs of the disease. The new test, from Fujirebio Diagnostics, Inc., identifies a sticky brain plaque, known as beta-amyloid, that is a key marker for Alzheimer's. Previously, the only FDA-approved methods for detecting amyloid were invasive tests of spinal fluid or expensive PET scans. The lower costs and convenience of a blood test could also help expand use of two new drugs, Leqembi and Kisunla, which have been shown to slightly slow the progression of Alzheimer's by clearing amyloid from the brain. Doctors are required to test patients for the plaque before prescribing the drugs, which require regular IV infusions. [...] A number of specialty hospitals and laboratories have already developed their own in-house tests for amyloid in recent years. But those tests aren't reviewed by the FDA and generally aren't covered by insurance. Doctors have also had little data to judge which tests are reliable and accurate, leading to an unregulated marketplace that some have called a "wild west." Several larger diagnostic and drug companies are also developing their own tests for FDA approval, including Roche, Eli Lilly and C2N Diagnostics. The tests can only be ordered by a doctor and aren't intended for people who don't yet have any symptoms.

Microsoft's Command Palette is a Powerful Launcher For Apps, Search

Slashdot - Sat, 17/05/2025 - 3:35 AM
Microsoft has released Command Palette, an enhanced version of its PowerToys Run launcher introduced five years ago. The utility, aimed at power users and developers, provides quick access to applications, files, calculations, and system commands through a Spotlight-like interface. Command Palette integrates the previously separate Window Walker functionality for switching between open windows and supports launching command prompts, executing web searches, and navigating folder structures. Unlike its predecessor, the new launcher offers full customization via extensions, allowing users to implement additional commands beyond the default capabilities. Available through the PowerToys application since early April, Command Palette can be triggered using Win+Alt+Space after installation.

Walmart Prepares for a Future Where AI Shops for Consumers

Slashdot - Sat, 17/05/2025 - 2:50 AM
Walmart is preparing for a future where AI agents shop on behalf of consumers by adapting its systems to serve both humans and autonomous bots. As major players like Visa and PayPal also invest in agentic commerce, Walmart is positioning itself as a leader by developing its own AI agents and supporting broader industry integration. PYMNTS reports: Instead of scrolling through ads or comparing product reviews, future consumers may rely on digital assistants, like OpenAI's Operator, to manage their shopping lists, from replenishing household essentials to selecting the best TV based on personal preferences, according to the report (paywalled). "It will be different," Walmart U.S. Chief Technology Officer Hari Vasudev said, per the report. "Advertising will have to evolve." The emergence of AI-generated summaries in search results has already altered the way consumers gather product information, the report said. However, autonomous shopping agents represent a bigger transformation. These bots could not only find products but also finalize purchases, including payments, without the user ever lifting a finger. [...] Retail experts say agentic commerce will require companies to overhaul how they market and present their products online, the WSJ report said. They may need to redesign product pages and pricing strategies to cater to algorithmic buyers. The customer relationship could shift away from retailers if purchases are completed through third-party agents. [...] To prepare, Walmart is developing its own AI shopping agents, accessible through its website and app, according to the WSJ report. These bots can already handle basic tasks like reordering groceries, and they're being trained to respond to broader prompts, such as planning a themed birthday party. 
Walmart is working toward a future in which outside agents can seamlessly communicate with the retailer's own systems -- something Vasudev told the WSJ he expects to be governed by industry-wide protocols that are still under development. [...] Third-party shopping bots may also act independently, crawling retailers' websites much like consumers browse stores without engaging sales associates, the WSJ report said. In those cases, the retailer has little control over how its products are evaluated. Whether consumers instruct their AI to shop specifically at Walmart or ask for the best deal available, the outcomes will increasingly be shaped by algorithms, per the report. Operator, for example, considers search ranking, sponsored content and user preferences when making recommendations. That's a far cry from how humans shop. Bots don't respond to eye-catching visuals or emotionally driven branding in the same way people do. This means retailers must optimize their content not just for people but for machine readers as well, the report said. Pricing strategies could also shift as companies may need to make rapid pricing decisions and determine whether it's worth offering AI agents exclusive discounts to keep them from choosing a competitor's lower-priced item, according to the report.

UK Needs More Nuclear To Power AI, Says Amazon Boss

Slashdot - Sat, 17/05/2025 - 2:10 AM
In an exclusive interview with the BBC, AWS CEO Matt Garman said the UK must expand nuclear energy to meet the soaring electricity demands of AI-driven data centers. From the report: Amazon Web Services (AWS), which is part of the retail giant Amazon, plans to spend 8 billion pounds on new data centers in the UK over the next four years. Matt Garman, chief executive of AWS, told the BBC nuclear is a "great solution" to data centres' energy needs as "an excellent source of zero carbon, 24/7 power." AWS is the single largest corporate buyer of renewable energy in the world and has funded more than 40 renewable solar and wind farm projects in the UK. The UK's 500 data centres currently consume 2.5% of all electricity in the UK, while Ireland's 80 hoover up 21% of the country's total power, with those numbers projected to hit 6% and 30% respectively by 2030. The body that runs the UK's power grid estimates that by 2050 data centers alone will use nearly as much energy as all industrial users consume today. In an exclusive interview with the BBC, Matt Garman said that future energy needs were central to AWS planning process. "It's something we plan many years out," he said. "We invest ahead. I think the world is going to have to build new technologies. I believe nuclear is a big part of that particularly as we look 10 years out."

Linux Swap Table Code Shows The Potential For Huge Performance Gains

Slashdot - Sat, 17/05/2025 - 1:30 AM
A new set of 27 Linux kernel patches introduces a "Swap Tables" mechanism aimed at enhancing virtual memory management. As Phoronix's Michael Larabel reports, "the hope is for lower memory use, higher performance, dynamic swap allocation and growth, greater extensibility, and other improvements over the existing swap code within the Linux kernel." From the report: Engineer Kairui Song with Tencent posted the Swap Table patch series today for implementing the design ideas discussed in recent months by kernel developers. The results are very exciting so let's get straight to it: "With this series, swap subsystem will have a ~20-30% performance gain from basic sequential swap to heavy workloads, for both 4K and mTHP folios. The idle memory usage is already much lower, the average memory consumption is still the same or will also be even lower (with further works). And this enables many more future optimizations, with better defined swap operations." "The patches also clean-up and address various historical issues with the SWAP subsystem," notes Larabel. Context: In Linux, swap space acts as an overflow for RAM, storing inactive memory pages on disk to free up RAM for active processes. Traditional swap mechanisms are limited in flexibility and performance. The proposed "Swap Tables" aim to address these issues by allowing more efficient and dynamic management of swap space, potentially leading to better system responsiveness and resource utilization.
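
The numbers the swap subsystem manages are visible from user space: tools like `free` compute swap usage from the `SwapTotal` and `SwapFree` fields the kernel exposes in `/proc/meminfo`. A small self-contained sketch, run against a canned sample rather than the live file (the helper name `swap_used_kb` and the sample figures are illustrative):

```rust
// Parse the SwapTotal/SwapFree fields of a /proc/meminfo-style buffer
// and return the amount of swap currently in use, in kilobytes.
fn swap_used_kb(meminfo: &str) -> Option<u64> {
    // Grab the numeric value (second whitespace-separated token)
    // of the first line starting with `name`.
    let field = |name: &str| {
        meminfo
            .lines()
            .find(|l| l.starts_with(name))?
            .split_whitespace()
            .nth(1)?
            .parse::<u64>()
            .ok()
    };
    Some(field("SwapTotal:")? - field("SwapFree:")?)
}

fn main() {
    // Canned sample; on a real system, read /proc/meminfo instead:
    // let meminfo = std::fs::read_to_string("/proc/meminfo").unwrap();
    let sample = "SwapTotal:     8388604 kB\nSwapFree:      8123392 kB\n";
    println!("{} kB of swap in use", swap_used_kb(sample).unwrap());
}
```

Pointing the same function at `std::fs::read_to_string("/proc/meminfo")` gives live figures on any Linux system, including one running these patches.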

Apple's New CarPlay 'Ultra' Won't Fix the Biggest Problem of Phone-Connected Cars

Slashdot - Sat, 17/05/2025 - 12:50 AM
An anonymous reader quotes a report from Gizmodo: Apple's next step for CarPlay is a version you'll only get to try if you're a fan of luxury cruisers or a popular spy film franchise. CarPlay Ultra, with its new suite of exclusive features like custom gauges, is coming first to Aston Martin vehicles with the largest, most blaring dash screens. The more advanced version of CarPlay won't necessarily fix the lingering issues the software has with some modern vehicles. Segmenting CarPlay into newer and older systems may make things worse for those with aging cars. Apple's CarPlay Ultra includes a new kind of dashboard alongside real-time information that can include car diagnostics -- like tire pressure -- or dashboard gauges. You should be able to control temperature and other car-based features as well. The new version of the software includes options for dashboards or console screens, and it will work with on-screen controls, Siri, and "physical buttons." CarPlay Ultra was supposed to launch in 2024, but Apple missed its release date by close to half a year. The new feature suite was first revealed at WWDC in 2022, where Apple promised a "unified and consistent" suite of informational dashboards offering more control over radio and AC "without ever leaving the CarPlay experience." Last year, Apple showed off "the next generation" of its car-focused app that included custom gauges and other layouts made for a variety of automakers. It lacked much of the full-width, busy design of the initial iteration from two years prior. [...] To entice more manufacturers, CarPlay Ultra is supposed to adapt to multiple screen sizes thanks to a modular layout system with more options for companies to adhere to their own brand identity. Apple promised carmakers they could resize and reorient gauges on a dashboard like you do widgets on your iPhone. 
Users can change up various gauges on the dash and bring up apps like Apple Music or Maps in between the temperature gauge and speedometer. Aston Martin showed off these features on an Aston Martin DBX, a luxury SUV that costs more than $250,000. Apple said these features should be coming to the U.S. and Canada first, with more Aston Martins getting these features through software updates from local dealerships. Apple said it's still trying to bring these features to brands like Hyundai, Kia, and Genesis. Maybe we'll see Ultra on a vehicle regular folk can afford. "The customizable dashboards are a way for Apple to let each carmaker have their say in how their vehicles look, but they won't help all those who are stuck with regular CarPlay on their aging beaters," concludes Gizmodo's Kyle Barr. "The new version will inevitably create a distinction between those with new software and others with legacy software..."

MIT Asks arXiv To Take Down Preprint Paper On AI and Scientific Discovery

Slashdot - Sat, 17/05/2025 - 12:10 AM
MIT has formally requested the withdrawal of a preprint paper on AI and scientific discovery due to serious concerns about the integrity and validity of its data and findings. It didn't provide specific details on what it believes is wrong with the paper. From a post: "Earlier this year, the COD conducted a confidential internal review based upon allegations it received regarding certain aspects of this paper. While student privacy laws and MIT policy prohibit the disclosure of the outcome of this review, we are writing to inform you that MIT has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper. Based upon this finding, we also believe that the inclusion of this paper in arXiv may violate arXiv's Code of Conduct. "Our understanding is that only authors of papers appearing on arXiv can submit withdrawal requests. We have directed the author to submit such a request, but to date, the author has not done so. Therefore, in an effort to clarify the research record, MIT respectfully request that the paper be marked as withdrawn from arXiv as soon as possible." Preprints, by definition, have not yet undergone peer review. MIT took this step in light of the publication's prominence in the research conversation and because it was a formal step it could take to mitigate the effects of misconduct. The author is no longer at MIT. [...] "We are making this information public because we are concerned that, even in its non-published form, the paper is having an impact on discussions and projections about the effects of AI on science. Ensuring an accurate research record is important to MIT. We therefore would like to set the record straight and share our view that at this point the findings reported in this paper should not be relied on in academic or public discussions of these topics." 
The paper in question, titled "Artificial Intelligence, Scientific Discovery, and Product Innovation" and authored by Aidan Toner-Rodgers, investigated the effects of introducing an AI-driven materials discovery tool to 1,018 scientists in a U.S. R&D lab. The study reported that AI-assisted researchers discovered 44% more materials, filed 39% more patents, and achieved a 17% increase in product innovation. These gains were primarily attributed to AI automating 57% of idea-generation tasks, allowing top-performing scientists to focus on evaluating AI-generated suggestions effectively. However, the benefits were unevenly distributed; lower-performing scientists saw minimal improvements, and 82% of participants reported decreased job satisfaction due to reduced creativity and skill utilization. The Wall Street Journal reported on MIT's statement.

Updated Debian 12: 12.11 released

Debian.org - Sat, 17/05/2025 - 12:00 AM
The Debian project is pleased to announce the eleventh update of its stable distribution Debian 12 (codename bookworm). This point release mainly adds corrections for security issues, along with a few adjustments for serious problems. Security advisories have already been published separately and are referenced where available.

OpenAI Launches Codex, an AI Coding Agent, In ChatGPT

Slashdot - Fri, 16/05/2025 - 11:30 PM
OpenAI has launched Codex, a powerful AI coding agent in ChatGPT that autonomously handles tasks like writing features, fixing bugs, and testing code in a cloud-based environment. TechCrunch reports: Codex is powered by codex-1, a version of the company's o3 AI reasoning model optimized for software engineering tasks. OpenAI says codex-1 produces "cleaner" code than o3, adheres more precisely to instructions, and will iteratively run tests on its code until passing results are achieved. The Codex agent runs in a sandboxed, virtual computer in the cloud. By connecting with GitHub, Codex's environment can come preloaded with your code repositories. OpenAI says the AI coding agent will take anywhere from one to 30 minutes to write simple features, fix bugs, answer questions about your codebase, and run tests, among other tasks. Codex can handle multiple software engineering tasks simultaneously, says OpenAI, and it doesn't limit users from accessing their computer and browser while it's running. Codex is rolling out starting today to subscribers to ChatGPT Pro, Enterprise, and Team. OpenAI says users will have "generous access" to Codex to start, but in the coming weeks, the company will implement rate limits for the tool. Users will then have the option to purchase additional credits to use Codex, an OpenAI spokesperson tells TechCrunch. OpenAI plans to expand Codex access to ChatGPT Plus and Edu users soon.

Meta Argues Enshittification Isn't Real

Slashdot - Fri, 16/05/2025 - 10:50 PM
An anonymous reader quotes a report from Ars Technica: Meta thinks there's no reason to carry on with its defense after the Federal Trade Commission closed its monopoly case, and the company has moved to end the trial early by claiming that the FTC utterly failed to prove its case. "The FTC has no proof that Meta has monopoly power," Meta's motion for judgment (PDF) filed Thursday said, "and therefore the court should rule in favor of Meta." According to Meta, the FTC failed to show evidence that "the overall quality of Meta's apps has declined" or that the company shows too many ads to users. Meta says that's "fatal" to the FTC's case that the company wielded monopoly power to pursue more ad revenue while degrading user experience over time (an Internet trend known as "enshittification"). And on top of allegedly showing no evidence of "ad load, privacy, integrity, and features" degradation on Meta apps, Meta argued there's no precedent for an antitrust claim rooted in this alleged harm. "Meta knows of no case finding monopoly power based solely on a claimed degradation in product quality, and the FTC has cited none," Meta argued. Meta has maintained throughout the trial that its users actually like seeing ads. In the company's recent motion, Meta argued that the FTC provided no insights into what "the right number of ads" should be, "let alone" provide proof that "Meta showed more ads" than it would in a competitive market where users could easily switch services if ad load became overwhelming. Further, Meta argued that the FTC did not show evidence that users sharing friends-and-family content were shown more ads. Meta noted that it "does not profit by showing more ads to users who do not click on them," so it only shows more ads to users who click ads. Meta also insisted that there's "nothing but speculation" showing that Instagram or WhatsApp would have been better off or grown into rivals had Meta not acquired them. 
The company claimed that without Meta's resources, Instagram may have died off. Meta noted that Instagram co-founder Kevin Systrom testified that his app was "pretty broken and duct-taped" together, making it "vulnerable to spam" before Meta bought it. Rather than enshittification, what Meta did to Instagram could be considered "a consumer-welfare bonanza," Meta argued, while dismissing "smoking gun" emails from Mark Zuckerberg discussing buying Instagram to bury it as "legally irrelevant." Dismissing these as "a few dated emails," Meta argued that "efforts to litigate Mr. Zuckerberg's state of mind before the acquisition in 2012 are pointless." "What matters is what Meta did," Meta argued, which was pump Instagram with resources that allowed it "to 'thrive' -- adding many new features, attracting hundreds of millions and then billions of users, and monetizing with great success." In the case of WhatsApp, Meta argued that nobody thinks WhatsApp had any intention to pivot to social media when the founders testified that their goal was to never add social features, preferring to offer a simple, clean messaging app. And Meta disputed any claim that it feared Google might buy WhatsApp as the basis for creating a Facebook rival, arguing that "the sole Meta witness to (supposedly) learn of Google's acquisition efforts testified that he did not have that worry." In sum: A ruling in Meta's favor could prevent a breakup of its apps, while a denial would push the trial toward a possible order to divest Instagram and WhatsApp.

Read more of this story at Slashdot.

Verizon Secures FCC Approval for $9.6 Billion Frontier Acquisition

Slashdot - Fri, 16/05/2025 - 10:02pm
The Federal Communications Commission has approved Verizon's $9.6 billion acquisition of Frontier Communications, valuing the Dallas-based company at $20 billion including debt. The approval comes after Verizon agreed to scale back diversity initiatives to comply with Trump administration policies. FCC Chairman Brendan Carr, who previously threatened to block mergers over DEI practices, praised the deal for its potential to "unleash billions in new infrastructure builds" and "accelerate the transition away from old, copper line networks to modern, high-speed ones." The acquisition positions America's largest phone carrier to expand its high-speed internet footprint across Frontier's 25-state network. Verizon plans to deploy fiber to more than one million U.S. homes annually following the transaction.


Charter To Buy Cox For $21.9 Billion Amid Escalating War With Wireless

Slashdot - Fri, 16/05/2025 - 9:22pm
Charter Communications announced a $21.9 billion deal Friday to acquire Cox Communications, combining two major cable providers as they face mounting competition from wireless carriers offering 5G home internet. The transaction merges Charter's 31.4 million customers with Cox's 6.3 million, creating a larger entity to defend against aggressive expansion from Verizon and T-Mobile. Charter lost 60,000 internet customers in the March quarter, underscoring the industry's vulnerability as traditional cable broadband growth stalls. Wireless carriers have successfully marketed their fixed wireless access services at lower price points while delivering competitive speeds, turning what was once cable's most profitable segment into contested territory. The combined company, which will be headquartered in Stamford, Connecticut, plans to adopt the Cox Communications name within a year of closing while retaining Spectrum as its consumer-facing brand.


next-20250516: linux-next

Linux Kernel - Fri, 16/05/2025 - 12:22pm
Version: next-20250516 (linux-next) Released: 2025-05-16

Telegram Bans $35 Billion Black Markets Used To Sell Stolen Data, Launder Crypto

Slashdot - Thu, 15/05/2025 - 10:07pm
An anonymous reader quotes a report from Ars Technica: On Thursday, Telegram announced it had removed two huge black markets estimated to have generated more than $35 billion since 2021 by serving cybercriminals and scammers. Blockchain research firm Elliptic told Reuters that the Chinese-language markets Xinbi Guarantee and Huione Guarantee together were far more lucrative than Silk Road, the illegal drug marketplace the FBI notoriously seized in 2013, which was valued at about $3.4 billion. Both markets were forced offline on Tuesday, Elliptic reported, and Huione Guarantee has already confirmed that its market will cease to operate entirely due to the Telegram removal. The disruption of both markets will be "a big blow for online fraudsters," Elliptic said, cutting them off from a dependable source of "stolen data, money laundering services, and telecoms infrastructure." [...] Elliptic reported that Telegram connected the black markets with an audience of a billion users, noting that Telegram tried to remove several Huione Guarantee channels earlier this year, but "the marketplace was ready" with backups and remained online until this week. Wired suggested that Huione Guarantee "operated in plain sight" on Telegram for years, but Telegram maintains it only recently discovered the activity. Huione Guarantee is a subsidiary of Huione Group, which was recently sanctioned by the U.S. Treasury for supporting "criminal syndicates who have stolen billions of dollars from Americans." According to Reuters, that included allegedly laundering "at least $37 million in crypto from cyber heists by North Korea and $36 million of crypto from so-called 'pig butchering' scams."


Uber Expects More Drivers Amid Robotaxi Push

Slashdot - Thu, 15/05/2025 - 9:03pm
Uber's autonomous vehicle chief Andrew Macdonald predicted this week that the company will employ more human drivers in a decade despite aggressively expanding robotaxi operations. Speaking at the Financial Times' Future of the Car conference, Macdonald outlined a "hybrid marketplace" where autonomous vehicles dominate city centers while human drivers serve areas beyond robotaxi coverage, handle airport runs, and respond during extreme weather events. "I am almost certain that there will be more Uber drivers in 10 years, not less, because I think the world will move from individual car ownership to mobility as a service," Macdonald said. The ride-hailing giant has struck partnerships with Waymo, Volkswagen, Wayve, WeRide, and Pony AI. Robotaxis are already operational in Austin and Phoenix, with CEO Dara Khosrowshahi claiming Waymo vehicles in Austin are busier than "99%" of human drivers.


American Schools Were Deeply Unprepared for ChatGPT, Public Records Show

Slashdot - Thu, 15/05/2025 - 8:20pm
School districts across the United States were woefully unprepared for ChatGPT's impact on education, according to thousands of pages of public records obtained by 404 Media. Documents from early 2023, the publication reports, show a "total crapshoot" in responses, with some state education departments admitting they hadn't considered ChatGPT's implications while others hired pro-AI consultants to train educators. In California, when principals sought guidance, state officials responded that "unfortunately, the topic of ChatGPT has not come up in our circles." One California official admitted, "I have never heard of ChatGPT prior to your email." Meanwhile, Louisiana's education department circulated presentations suggesting AI "is like giving a computer a brain" and warning that "going back to writing essays - only in class - can hurt struggling learners." Some administrators accepted the technology enthusiastically, with one Idaho curriculum head calling ChatGPT "AMAZING" and comparing resistance to early reactions against spell-check.

