
Evolving Linux Malware Threats: A Guide for Admins in Cloud-Native Contexts

LinuxSecurity.com - Thu, 22/01/2026 - 3:49 AM
For a long time, Linux malware followed a familiar pattern. A compromised host. A binary written to disk. Persistence through cron, systemd, or a quiet modification that survived reboots. If you hardened the system and watched for changes, you felt reasonably in control. That model no longer matches how Linux is actually run. Modern Linux malware increasingly assumes it is landing in environments where hosts are disposable, workloads are short-lived, and the real authority sits somewhere above the operating system.

FBI's Washington Post Investigation Shows How Your Printer Can Snitch On You

Slashdot - Thu, 22/01/2026 - 3:02 AM
alternative_right quotes a report from The Intercept: Federal prosecutors on January 9 charged Aurelio Luis Perez-Lugones, an IT specialist for an unnamed government contractor, with "the offense of unlawful retention of national defense information," according to an FBI affidavit (PDF). The case attracted national attention after federal agents investigating Perez-Lugones searched the home of a Washington Post reporter. But overlooked so far in the media coverage is the fact that a surprising surveillance tool pointed investigators toward Perez-Lugones: an office printer with a photographic memory. News of the investigation broke when the Washington Post reported that investigators seized the work laptop, personal laptop, phone, and smartwatch of journalist Hannah Natanson, who has covered the Trump administration's impact on the federal government and recently wrote about developing more than 1,000 government sources. A Justice Department official told the Post that Perez-Lugones had been messaging Natanson to discuss classified information. The affidavit does not allege that Perez-Lugones disseminated national defense information, only that he unlawfully retained it. The affidavit provides insight into how Perez-Lugones allegedly attempted to exfiltrate information from a Secure Compartmented Information Facility, or SCIF, and the unexpected way his employer took notice. According to the FBI, Perez-Lugones printed a classified intelligence report, albeit in a roundabout fashion. It's standard for workplace printers to log certain information, such as the names of files they print and the users who printed them. In an apparent attempt to avoid detection, Perez-Lugones, according to the affidavit, took screenshots of classified materials, cropped the screenshots, and pasted them into a Microsoft Word document. By using screenshots instead of text, there would be no record of a classified report printed from the specific workstation. 
(Depending on the employer's chosen data loss prevention monitoring software, access logs might show a specific user had opened the file and perhaps even tracked whether they took screenshots). Perez-Lugones allegedly gave the file an innocuous name, "Microsoft Word - Document1," that might not stand out if printer logs were later audited. In this case, however, the affidavit reveals that Perez-Lugones's employer could see not only the typical metadata stored by printers, such as file names, file sizes, and time of printing, but it could also view the actual contents of the printed materials -- in this case, prosecutors say, the screenshots themselves. As the affidavit points out, "Perez-Lugones' employer can retrieve records of print activity on classified systems, including copies of printed documents." [...] Aside from attempting to surreptitiously print a document, Perez-Lugones, investigators say, was also seen allegedly opening a classified document and taking notes, looking "back and forth between the screen corresponding the classified system and the notepad, all the while writing on the notepad." The affidavit doesn't state how this observation was made, but it strongly suggests a video surveillance system was also in play.

Read more of this story at Slashdot.

'America Is Slow-Walking Into a Polymarket Disaster'

Slashdot - Thu, 22/01/2026 - 2:25 AM
In an opinion piece for The Atlantic, senior editor Saahil Desai argues that media outlets are increasingly treating prediction markets like Polymarket and Kalshi as legitimate signals of reality. The risk, as Desai warns, is a future where news coverage amplifies manipulable betting odds and turns politics, geopolitics, and even tragedy into speculative gambling theater. Here's an excerpt from the report: [...] The problem is that prediction markets are ushering in a world in which news becomes as much about gambling as about the event itself. This kind of thing has already happened to sports, where the language of "parlays" and "covering the spread" has infiltrated every inch of commentary. ESPN partners with DraftKings to bring its odds to SportsCenter and Monday Night Football; CBS Sports has a betting vertical; FanDuel runs its own streaming network. But the stakes of Greenland's future are more consequential than the NFL playoffs. The more that prediction markets are treated like news, especially heading into another election, the more every dip and swing in the odds may end up wildly misleading people about what might happen, or influencing what happens in the real world. Yet it's unclear whether these sites are meaningful predictors of anything. After the Golden Globes, Polymarket CEO Shayne Coplan excitedly posted that his site had correctly predicted 26 of 28 winners, which seems impressive -- but Hollywood awards shows are generally predictable. One recent study found that Polymarket's forecasts in the weeks before the 2024 election were not much better than chance. These markets are also manipulable. In 2012, one bettor on the now-defunct prediction market Intrade placed a series of huge wagers on Mitt Romney in the two weeks preceding the election, generating a betting line indicative of a tight race. The bettor did not seem motivated by financial gain, according to two researchers who examined the trades. 
"More plausibly, this trader could have been attempting to manipulate beliefs about the odds of victory in an attempt to boost fundraising, campaign morale, and turnout," they wrote. The trader lost at least $4 million but might have shaped media attention of the race for less than the price of a prime-time ad, they concluded. [...] The irony of prediction markets is that they are supposed to be a more trustworthy way of gleaning the future than internet clickbait and half-baked punditry, but they risk shredding whatever shared trust we still have left. The suspiciously well-timed bets that one Polymarket user placed right before the capture of Nicolas Maduro may have been just a stroke of phenomenal luck that netted a roughly $400,000 payout. Or maybe someone with inside information was looking for easy money. [...] As Tarek Mansour, Kalshi's CEO, has said, his long-term goal is to "financialize everything and create a tradable asset out of any difference in opinion." (Kalshi means "everything" in Arabic.) What could go wrong? As one viral post on X recently put it, "Got a buddy who is praying for world war 3 so he can win $390 on Polymarket." It's a joke. I think.

Read more of this story at Slashdot.

Apple Reportedly Replacing Siri Interface With Actual Chatbot Experience For iOS 27

Slashdot - Thu, 22/01/2026 - 1:45 AM
According to Bloomberg's Mark Gurman, Apple is reportedly planning a major Siri overhaul in iOS 27 and macOS 27 where the current assistant interface will be replaced with a deeply integrated, ChatGPT-style chatbot experience. "Users will be able to summon the new service the same way they open Siri now, by speaking the 'Siri' command or holding down the side button on their iPhone or iPad," says Gurman. "More significantly, Siri will be integrated into all of the company's core apps, including ones for mail, music, podcasts, TV, Xcode programming software and photos. That will allow users to do much more with just their voice." 9to5Mac reports: The unannounced Siri overhaul will reportedly be revealed at WWDC in June as the flagship feature for iOS 27 and macOS 27. Its release is expected in September when Apple typically ships major software updates. While Apple plans to release an improved version of Siri and Apple Intelligence this spring, that version will use the existing Siri interface. The big difference is that Google's Gemini models will power the intelligence. With the bigger update planned for iOS 27, the iOS 26 upgrade to Siri and Apple Intelligence sounds more like the first step to a long overdue modernization. Gurman reports that the major Siri overhaul will "allow users to search the web for information, create content, generate images, summarize information and analyze uploaded files" while using "personal data to complete tasks, being able to more easily locate specific files, songs, calendar events and text messages." People are already familiar with conversational interactions with AI, and Bloomberg says the bigger update to Siri will support both text and voice. Siri already uses these input methods, but there's no real continuity between sessions.

Read more of this story at Slashdot.

Spotify Lawsuit Triggered Anna's Archive Domain Name Suspensions

Slashdot - Thu, 22/01/2026 - 1:02 AM
An anonymous reader quotes a report from TorrentFreak: Spotify and several major record labels, including UMG, Sony, and Warner, have taken legal action against the unknown operators of Anna's Archive. The action follows the shadow library's announcement that it would release hundreds of terabytes of scraped Spotify data. Unsealed documents reveal that the court already issued a broad preliminary injunction, ordering hosting companies, Cloudflare, and domain name services, to take action. [...] All these documents were filed under seal, as the shadow library might otherwise be tipped off and take countermeasures. These documents were filed ex-parte and kept away from Anna's Archive. According to Spotify and the labels, this is needed "so that Anna's Archive cannot pre-emptively frustrate" the countermeasures they seek. The lawsuit (PDF), which was unsealed recently, explains directly why Anna's Archive lost several of its domain names over the past weeks. The .ORG domain was suspended by the U.S.-based Public Interest Registry (PIR) in early January, while a domain registrar took the .SE variant offline a few days later. "We don't believe this has to do with our Spotify backup," AnnaArchivist said at the time, but court records prove them wrong. The unsealed paperwork shows that the court granted a temporary restraining order (TRO) on January 2, which aimed to target Anna's Archive hosting and domain names. The sealed nature of this order also explains why the .ORG registry informed us that it could not comment on the suspension last week. While the .ORG and the .SE domains are suspended now, other domains remain operational. This suggests that the responsible registrars and registries do not automatically comply with U.S. court orders. [...] While the unsealed documents resolve the domain suspension mystery, it is only the start of the legal battle in court. 
It is expected that Spotify and the music companies will do everything in their power to take further action, if needed. Interestingly, however, it appears that the music industry lawsuit may have already reached its goal. A few days ago, the dedicated Spotify download section was removed by Anna's Archive. Whether this removal is linked to the legal troubles is unknown. However, it appears that Anna's Archive stopped the specific distribution of Spotify content alleged in the complaint, seemingly in partial compliance with the injunction's ban on 'making available' the scraped files.

Read more of this story at Slashdot.

Apple Developing AI Wearable Pin

Slashdot - Thu, 22/01/2026 - 12:20 AM
According to a report by The Information (paywalled), Apple is reportedly developing an AirTag-sized, camera-equipped AI wearable pin that could arrive as early as 2027. "Apple's pin, which is a thin, flat, circular disc with an aluminum-and-glass shell, features two cameras -- a standard lens and a wide-angle lens -- on its front face, designed to capture photos and videos of the user's surroundings," reports The Information, citing people familiar with the device. "It also includes three microphones to pick up sounds in the area surrounding the person wearing it. It has a speaker, a physical button along one of its edges and a magnetic inductive charging interface on its back, similar to the one used on the Apple Watch..." 9to5Mac reports: The Information also notes that Apple is attempting to speed up development in hopes of competing with OpenAI's first wearable (slated to debut in 2026), and that it is not immediately clear whether this wearable would work in conjunction with other products, such as AirPods or Apple's reported upcoming smart glasses. Today's report also notes that this has been a challenging market for new companies, citing the recent failure of Humane's AI Pin as an example.

Read more of this story at Slashdot.

Nova Launcher Gets a New Owner and Ads

Slashdot - Wed, 21/01/2026 - 11:40 PM
Nova Launcher has been acquired by Instabridge, which says it will keep the app maintained but is evaluating ad-supported options for the free version. Android Authority reports: Today, Nova Launcher announced that the Swedish company Instabridge has acquired it from Branch Metrics. Instabridge claims it wants to be a responsible owner of Nova and does not want to reinvent the launcher overnight. However, the launcher still needs a sustainable business model to support ongoing development and maintenance. To this end, Instabridge is exploring different options, including paid tiers and ad-supported options for the free version. The new owners claim that if ads are introduced, Nova Prime will remain ad-free. However, this is misleading, as ads are already here for some users. Last year, the founder and original programmer of Nova Launcher left the company, signaling its "death" as he had been the sole developer working on the launcher for the past year.

Read more of this story at Slashdot.

HAM Radio Operators In Belarus Arrested, Face the Death Penalty

Slashdot - Wed, 21/01/2026 - 11:02 PM
An anonymous reader quotes a report from 404 Media: The Belarusian government is threatening three HAM radio operators with the death penalty, has detained at least seven people, and has accused them of "intercepting state secrets," according to Belarusian state media, independent media outside of Belarus, and the Belarusian human rights organization Viasna. The arrests are an extreme attack on what is most often a wholesome hobby that has a history of being vilified by authoritarian governments, in part because the technology is quite censorship resistant. The detentions were announced last week on Belarusian state TV, which claimed the men were part of a network of more than 50 people participating in the amateur radio hobby and have been accused of both "espionage" and "treason." Authorities there said they seized more than 500 pieces of radio equipment. The men were accused on state TV of using radio to spy on the movement of government planes, though no actual evidence of this has been produced. State TV claimed they were associated with the Belarusian Federation of Radioamateurs and Radiosportsmen (BFRR), a long-running amateur radio club and nonprofit that holds amateur radio competitions, meetups, trainings, and forums. Siarhei Besarab, a Belarusian HAM radio operator, posted a plea for support from others in the r/amateurradio subreddit. "I am writing this because my local community is being systematically liquidated in what I can only describe as a targeted intellectual genocide," Besarab wrote. "I beg you to amplify this signal and help us spread this information. Please show this to any journalist you know, send it to human rights organizations, and share it with your local radio associations."

Read more of this story at Slashdot.

Ozempic is Reshaping the Fast Food Industry

Slashdot - Wed, 21/01/2026 - 10:22 PM
New research from Cornell University has tracked how households change their spending after someone starts taking GLP-1 medications like Ozempic and Wegovy, and the numbers are material enough to explain why food industry earnings calls keep blaming everything except the obvious culprit. The study analyzed transaction data from 150,000 households linked to survey responses on medication adoption. Households cut grocery spending by 5.3% within six months of a member starting GLP-1s; high-income households cut by 8.2%. Fast food spending fell 8.0%. Savory snacks took the biggest hit at 10.1%, followed by sweets and baked goods. Yogurt was the only category to see a statistically significant increase. As of July 2024, 16.3% of U.S. households had at least one GLP-1 user. Nearly half of adopters reported taking the medication specifically for weight loss rather than diabetes management. About 34% of users discontinue within the sample period, and when they stop, candy and chocolate purchases rise 11.4% above pre-adoption levels. Further reading: Weighing the Cost of Smaller Appetites.

Read more of this story at Slashdot.

Half of World's CO2 Emissions Come From Just 32 Fossil Fuel Firms, Study Shows

Slashdot - Wed, 21/01/2026 - 9:45 PM
Just 32 fossil fuel companies were responsible for half the global carbon dioxide emissions driving the climate crisis in 2024, down from 36 a year earlier, a report has revealed. The Guardian: Saudi Aramco was the biggest state-controlled polluter and ExxonMobil was the largest investor-owned polluter. Critics accused the leading fossil fuel companies of "sabotaging climate action" and "being on the wrong side of history" but said the emissions data was increasingly being used to hold the companies accountable. State-owned fossil fuel producers made up 17 of the top 20 emitters in the Carbon Majors report, which the authors said underscored the political barriers to tackling global heating. All 17 are controlled by countries that opposed a proposed fossil fuel phaseout at the Cop30 UN climate summit in December, including Saudi Arabia, Russia, China, Iran, the United Arab Emirates and India. More than 80 other nations had backed the phaseout plan.

Read more of this story at Slashdot.

Adobe Acrobat Now Lets You Edit Files Using Prompts, Generate Podcast Summaries

Slashdot - Wed, 21/01/2026 - 9:01 PM
Adobe has added a suite of AI-powered features to Acrobat that enable users to edit documents through natural language prompts, generate podcast-style audio summaries of their files, and create presentations by pulling content from multiple documents stored in a single workspace. The prompt-based editing supports 12 distinct actions: removing pages, text, comments, and images; finding and replacing words and phrases; and adding e-signatures and passwords. The presentation feature builds on Adobe Spaces, a collaborative file and notes collection the company launched last year. Users can point Acrobat's AI assistant at files in a Space and have it generate an editable pitch deck, then style it using Adobe Express themes and stock imagery. Shared files in Spaces now include AI-generated summaries that cite specific locations in the source document. Users can also choose from preset AI assistant personas -- "analyst," "entertainer," or "instructor" -- or create custom assistants using their own prompts.

Read more of this story at Slashdot.

The Gold Plating of American Water

Slashdot - Wed, 21/01/2026 - 8:22 PM
The price of water and sewer services for American households has more than doubled since the early 1980s after adjusting for inflation, even though per-capita water use has actually decreased over that period. Households in large cities now spend about $1,300 a year on water and sewer charges, approaching the roughly $1,600 they spend on electricity. The main driver is federal regulation. Since the Clean Water Act of 1972 and the Safe Drinking Water Act of 1974, the U.S. has spent approximately $5 trillion in contemporary dollars fighting water pollution -- about 0.8% of annual GDP across that period. The EPA itself admits that surface water regulations are the one category of environmental rules where estimated costs exceed estimated benefits. New York City was required to build a filtration plant to address two minor parasites in water from its Croton aqueduct. The project took a decade longer than expected and cost $3.2 billion, more than double the original estimate. After the plant opened in 2015, the city's Commissioner of Environmental Protection noted that the water would basically be "the same" to the public. Jefferson County, Alabama, meanwhile, descended into what was then the largest municipal bankruptcy in U.S. history in 2011 after EPA-mandated sewer upgrades pushed its debt from $300 million to over $3 billion.

Read more of this story at Slashdot.

AI Company Eightfold Sued For Helping Companies Secretly Score Job Seekers

Slashdot - Wed, 21/01/2026 - 7:44 PM
Eightfold AI, a venture capital-backed AI hiring platform used by Microsoft, PayPal and many other Fortune 500 companies, is being sued in California for allegedly compiling reports used to screen job applicants without their knowledge. From a report: The lawsuit, filed on Tuesday and accusing Eightfold of violating the Fair Credit Reporting Act, shows how consumer advocates are seeking to apply existing law to AI systems capable of drawing inferences about individuals based on vast amounts of data. Santa Clara, California-based Eightfold provides tools that promise to speed up the hiring process by assessing job applicants and predicting whether they would be a good fit for a job using massive amounts of data from online resumes and job listings. But candidates who apply for jobs at companies that use those tools are not given notice and a chance to dispute errors, job applicants Erin Kistler and Sruti Bhaumik allege in their proposed class action. Because of that, they claim Eightfold violated the FCRA and a California law that gives consumers the right to view and challenge credit reports used in lending and hiring.

Read more of this story at Slashdot.

Christian Schaller: Can AI help ‘fix’ the patent system?

Planet GNOME - Wed, 21/01/2026 - 7:35 PM

So one thing I think anyone involved with software development over the last few decades can see is the problem of the “forest of bogus patents”. I have recently been trying to use AI to look at patents in various ways. So one idea I had was: could AI help improve the quality of patents and free us from obvious ones?

Let’s start with the justification for patents existing at all. The most common argument for the patent system I hear is this one: “Patents require public disclosure of inventions in exchange for protection. Without patents, inventors would keep innovations as trade secrets, slowing overall technological progress.” This reasoning makes sense to me, but it is also screamingly obvious to me that for it to hold true you need to ensure the patents granted are genuinely inventions that would otherwise stay hidden as trade secrets. If you allow patents on things that are obvious to someone skilled in the art, you are not enhancing technological progress, you are hampering it, because the next person along will be blocked from doing it.

So based on this justification, the question becomes: does, for example, the US Patent Office do a good job of filtering out obvious patents? I believe the answer is “No, they don’t”. Having worked in software for many decades now, it is very clear to me that the patent office does very little to stop obvious things from being patented. And there are many reasons why that happens, I think. First of all, if you are an expert in a field you would likely not be working as a case worker at the patent office; no disrespect to the people working there, but if you are a really good software engineer there are much better-paying options to be found elsewhere, and the same goes for medical researchers, for example. So we have people with a limited understanding of the field of a patent trying to evaluate whether the patent is obvious or not.

Secondly, the patent office is not really incentivized to deny patents, as it is a self-funded entity whose ‘success’ and revenue are based on how many patent applications are submitted. So the lower the bar for approval, the more patent applications get submitted, and thus the more money the patent office makes and the more “successful” it is. This is a problem, but if we fix the first issue I mentioned, it may be possible to reduce the impact of this second one.

So what can AI do to help here? It can help reviewers at the patent office evaluate patents for obviousness, since the models have been trained on these fields. If we required every patent application to be run through an AI engine for review, the patent office reviewer would have to justify why he or she believes something is non-obvious after the AI has looked at it and concluded it is not. My assumption is that they would very rarely do so.

To test this out, I decided to ask Gemini to review an H.264 video patent that is still valid and has been used in litigation before, to see if it could do a decent job of reviewing the patent application and judging whether this is a true invention or not. Here is what Gemini came up with.

US Patent 7,916,782 (Samsung Electronics) is one of the most famous “battleground” patents in the history of smartphone litigation. It has been asserted against Apple, Huawei, and others.

It covers a specific part of the H.264 standard known as CAVLC (Context-Adaptive Variable Length Coding).

The Patent: US 7,916,782, “Method and apparatus for video coding and decoding”
The Mechanism: Efficiently coding the count of zero-valued pixels in a video block (“TotalZeros”).

The Problem
In video compression, you divide the image into small 4×4 pixel blocks. After processing (transform and quantization), most of the 16 pixels in the block end up being zero. You only have a few non-zero numbers left.
Goal: You need to tell the decoder how many zeros are in the block so it can reconstruct the empty space.
Constraint: You already told the decoder how many non-zero coefficients exist (let’s say you have 5 non-zeros).
The “Invention”
The patent claims a method where the encoder selects a specific lookup table (VLC table) to encode the “TotalZeros” value, and—crucially—this selection is based on the number of non-zero coefficients (TotalCoeff) that were just processed. If TotalCoeff is 1, use Table A (allows for many zeros). If TotalCoeff is 10, use Table B (allows for fewer zeros).

The “Skilled Engineer” Test
Imagine you are a video coding engineer in 2002. You are tasked with compressing the “TotalZeros” value. Here is the logical deduction chain you would inevitably follow:

  • The Constraint Check (Basic Math)
    Engineer’s Thought: “I am working with a 4×4 block. That’s 16 pixels total.”
    Engineer’s Thought: “I just transmitted the number of non-zero pixels. Let’s say there are 5 non-zeros.”
    Deduction: “Therefore, the maximum number of zeros possible is 16−5=11.”
  • The Optimization (Common Sense)
    Engineer’s Thought: “I need a Huffman table (lookup table) to compress the TotalZeros value.”
    Engineer’s Thought: “Should I use a generic table that goes from 0 to 16?”
    Deduction: “No, that’s wasteful. I know for a fact the value cannot be 12, 13, 14, 15, or 16. If I use a table that includes those impossible numbers, I’m wasting bits on shorter codes for values that will never happen.”
  • The Solution (The “Invention”)
    Engineer’s Thought: “I should use a smaller, custom table that only contains values 0 through 11.”
    Engineer’s Thought: “Actually, I should just make a different table for every possible number of non-zeros. If I have 15 non-zeros, the only possible zero count is 1. I don’t even need a table for that; it’s implicit!”

    The Verdict: Valid or Broken?
    Engineer’s View (Obvious): This is not an invention; it is constraint propagation. If you have a container of capacity 16, and you fill it with 5 rocks, it is “obvious” that the remaining space for air is 11. Creating a table sized to “11” isn’t a creative spark; it is the mathematical consequence of the block size. To an engineer, failing to do this would be incompetence, not a design choice.

    Patent Office View (Novel): The USPTO granted this because the patent didn’t just claim “optimizing tables.” It claimed the specific structure of the solution: Check TotalCoeff. If TotalCoeff is 1, select Table_1. If TotalCoeff is 2, select Table_2. …and so on. Because no prior paper had explicitly written down “Use these specific tables in this specific order for H.264,” it was deemed novel.

    Conclusion US 7,916,782 is a prime example of the “broken” system because it effectively patents arithmetic efficiency. By granting a monopoly on the logic “Use a small table when you have small space,” the patent office allowed Samsung to assert ownership over the only logical way to implement the H.264 standard efficiently. An engineer trying to solve this problem without seeing the patent would essentially be forced to “re-invent” it, because the math leaves no other optimal path.

    Conclusion: I have a hard time believing a USPTO reviewer would have green-lighted this patent after getting this feedback from the AI engine. Hopefully, having something like this in place could over time help us reduce the patent pool to things that genuinely deserve patent protection.

Ubisoft Cancels Six Games, Slashes Guidance in Restructuring

Slashdot - Wed, 21/01/2026 - 7:04 PM
Ubisoft is canceling game projects, shutting down studios and cutting its guidance as the Assassin's Creed maker restructures its business into five units. From a report: The French gaming firm expects earnings before interest and tax to be a loss of $1.2 billion for the fiscal year 2025-2026 as a result of the restructuring, driven by a one-off writedown of about $761 million, the company said in a statement on Wednesday. Ubisoft also expects net bookings of around $1.76 billion for the year, with a $386 million gross margin reduction compared to previous guidance, it said. Six games, including a remake of Prince of Persia: The Sands of Time, have been discontinued and seven other unidentified games are delayed, the company said. The measures are part of a broader plan to streamline operations, including closing studios in Stockholm and Halifax, Canada. Ubisoft said it will have cut at least $117 million in fixed costs compared to the latest financial year by March, a year ahead of target, and has set a goal to slash an additional $234 million over the next two years.

Read more of this story at Slashdot.

Ireland Wants To Give Its Cops Spyware, Ability To Crack Encrypted Messages

Slashdot - Wed, 21/01/2026 - 6:25 PM
The Irish government is planning to bolster its police's ability to intercept communications, including encrypted messages, and provide a legal basis for spyware use. From a report: The Communications (Interception and Lawful Access) Bill is being framed as a replacement for the current legislation that governs digital communication interception. The Department of Justice, Home Affairs, and Migration said in an announcement this week the existing Postal Packets and Telecommunications Messages (Regulation) Act 1993 "predates the telecoms revolution of the last 20 years." As well as updating laws passed more than two decades ago, the government was keen to emphasize that a key ambition for the bill is to empower law enforcement to intercept all forms of communications. The Bill will bring communications from IoT devices, email services, and electronic messaging platforms into scope, "whether encrypted or not." In a similar way to how certain other governments want to compel encrypted messaging services to unscramble packets of interest, Ireland's announcement also failed to explain exactly how it plans to do this. However, it promised to implement a robust legal framework, alongside all necessary privacy and security safeguards, if these proposals do ultimately become law. It also vowed to establish structures to ensure "the maximum possible degree of technical cooperation between state agencies and communication service providers."

Read more of this story at Slashdot.

next-20260121: linux-next

Kernel Linux - Mër, 21/01/2026 - 6:14md
Version: next-20260121 (linux-next) Released: 2026-01-21

Sebastian Wick: Best Practices for Ownership in GLib

Planet GNOME - Mër, 21/01/2026 - 4:31md

For all the rightful criticisms that C gets, GLib does manage to alleviate at least some of it. If we can’t use a better language, we should at least make use of all the tools we have in C with GLib.

This post looks at the topic of ownership, and also how it applies to libdex fibers.

Ownership

In normal C usage, it is often not obvious at all if an object that gets returned from a function (either as a real return value or as an out-parameter) is owned by the caller or the callee:

MyThing *thing = my_thing_new ();

If thing is owned by the caller, then the caller also has to release the object thing. If it is owned by the callee, then the lifetime of the object thing has to be checked against its usage.

At this point, the documentation is usually consulted in the hope that the developer of my_thing_new documented it somehow. With gobject-introspection, this documentation is standardized and you can usually read one of these:

The caller of the function takes ownership of the data, and is responsible for freeing it.

The returned data is owned by the instance.

If thing is owned by the caller, the caller now has to release the object or transfer ownership to another place. In normal C usage, both of those are hard issues. For releasing the object, one of two techniques is usually employed:

  1. single exit

MyThing *thing = my_thing_new ();
gboolean c;

c = my_thing_a (thing);
if (c)
  c = my_thing_b (thing);
if (c)
  my_thing_c (thing);

my_thing_release (thing); /* release thing */

  2. goto cleanup

MyThing *thing = my_thing_new ();

if (!my_thing_a (thing))
  goto out;
if (!my_thing_b (thing))
  goto out;
my_thing_c (thing);

out:
my_thing_release (thing); /* release thing */

Ownership Transfer

GLib provides automatic cleanup helpers (g_auto, g_autoptr, g_autofd, g_autolist). A macro associates the function to release the object with the type of the object (e.g. G_DEFINE_AUTOPTR_CLEANUP_FUNC). If they are being used, the single exit and goto cleanup approaches become unnecessary:

g_autoptr(MyThing) thing = my_thing_new ();

if (!my_thing_a (thing))
  return;
if (!my_thing_b (thing))
  return;
my_thing_c (thing);

The nice side effect of using automatic cleanup is that, for a reader of the code, the g_auto helpers become a definite mark that the variable they are applied to owns the object!

If we have a function which takes ownership of an object passed in (i.e. the called function will eventually release the resource itself), then in normal C usage this is indistinguishable from a function call which does not take ownership:

MyThing *thing = my_thing_new ();

my_thing_finish_thing (thing);

If my_thing_finish_thing takes ownership, then the code is correct, otherwise it leaks the object thing.

On the other hand, if automatic cleanup is used, there is only one correct way to handle either case.

A function call which does not take ownership is just a normal function call and the variable thing is not modified, so it keeps ownership:

g_autoptr(MyThing) thing = my_thing_new ();

my_thing_finish_thing (thing);

A function call which takes ownership on the other hand has to unset the variable thing to remove ownership from the variable and ensure the cleanup function is not called. This is done by “stealing” the object from the variable:

g_autoptr(MyThing) thing = my_thing_new ();

my_thing_finish_thing (g_steal_pointer (&thing));

By using g_steal_pointer and friends, the ownership transfer becomes obvious in the code, just like ownership of an object by a variable becomes obvious with g_autoptr.

Ownership Annotations

Now you could argue that the g_autoptr and g_steal_pointer combination without any conditional early exit is functionally exactly the same as the example with the normal C usage, and you would be right. We also need more code and it adds a tiny bit of runtime overhead.

I would still argue that it helps readers of the code immensely, which makes it an acceptable trade-off in almost all situations. As long as you haven't profiled and determined the overhead to be problematic, you should always use g_auto and g_steal!

The way I like to look at g_auto and g_steal is that they are not only a mechanism to release objects and unset variables, but also an annotation of ownership and ownership transfers.

Scoping

One pattern that is still somewhat pronounced in older code using GLib is the declaration of all variables at the top of a function:

static void
foobar (void)
{
  MyThing *thing = NULL;
  size_t i;

  for (i = 0; i < len; i++)
    {
      g_clear_pointer (&thing, my_thing_release);
      thing = my_thing_new (i);
      my_thing_bar (thing);
    }
}

We can still avoid mixing declarations and code, but we don't have to do it at the granularity of a function; we can do it at the granularity of natural scopes:

static void
foobar (void)
{
  for (size_t i = 0; i < len; i++)
    {
      g_autoptr(MyThing) thing = NULL;

      thing = my_thing_new (i);
      my_thing_bar (thing);
    }
}

Similarly, we can introduce our own scopes, which can be used to limit how long variables, and thus objects, are alive:

static void
foobar (void)
{
  g_autoptr(MyOtherThing) other = NULL;

  {
    /* we only need `thing` to get `other` */
    g_autoptr(MyThing) thing = NULL;

    thing = my_thing_new ();
    other = my_thing_bar (thing);
  }

  my_other_thing_bar (other);
}

Fibers

When somewhat complex asynchronous patterns are required in a piece of GLib software, it becomes extremely advantageous to use libdex and the system of fibers it provides. They allow writing what looks like synchronous code, which suspends on await points:

g_autoptr(MyThing) thing = NULL;

thing = dex_await_object (my_thing_new_future (), NULL);

If this piece of code doesn’t make much sense to you, I suggest reading the libdex Additional Documentation.

Unfortunately, the await points can also be a bit of a pitfall: the call to dex_await is semantically like calling g_main_loop_run on the thread-default main context. If you use an object across an await point without owning it, the lifetime of that object becomes critical. Often the lifetime is bound to another object which you might not control in that particular function. In that case, the pointer can point to an already released object when dex_await returns:

static DexFuture *
foobar (gpointer user_data)
{
  /* foo is owned by the context, so we do not use an autoptr */
  MyFoo *foo = context_get_foo ();
  g_autoptr(MyOtherThing) other = NULL;
  g_autoptr(MyThing) thing = NULL;

  thing = my_thing_new ();

  /* side effect of running g_main_loop_run */
  other = dex_await_object (my_thing_bar (thing, foo), NULL);
  if (!other)
    return dex_future_new_false ();

  /* foo here is not owned, and depending on the lifetime
   * (context might recreate foo in some circumstances),
   * foo might point to an already released object */
  dex_await (my_other_thing_foo_bar (other, foo), NULL);

  return dex_future_new_true ();
}

If we assume that context_get_foo returns a different object when the main loop runs, the code above will not work.

The fix is simple: own the objects that are being used across await points, or re-acquire an object. The correct choice depends on what semantic is required.

We can also combine this with improved scoping to only keep the objects alive for as long as required. Unnecessarily keeping objects alive across await points can keep resource usage high and might have unintended consequences.

static DexFuture *
foobar (gpointer user_data)
{
  /* we now own foo */
  g_autoptr(MyFoo) foo = g_object_ref (context_get_foo ());
  g_autoptr(MyOtherThing) other = NULL;

  {
    g_autoptr(MyThing) thing = NULL;

    thing = my_thing_new ();

    /* side effect of running g_main_loop_run */
    other = dex_await_object (my_thing_bar (thing, foo), NULL);
    if (!other)
      return dex_future_new_false ();
  }

  /* we own foo, so this always points to a valid object */
  dex_await (my_other_thing_bar (other, foo), NULL);

  return dex_future_new_true ();
}

static DexFuture *
foobar (gpointer user_data)
{
  g_autoptr(MyOtherThing) other = NULL;

  {
    /* We do not own foo, but we only use it before an
     * await point.
     * The scope ensures it is not being used afterwards. */
    MyFoo *foo = context_get_foo ();
    g_autoptr(MyThing) thing = NULL;

    thing = my_thing_new ();

    /* side effect of running g_main_loop_run */
    other = dex_await_object (my_thing_bar (thing, foo), NULL);
    if (!other)
      return dex_future_new_false ();
  }

  {
    MyFoo *foo = context_get_foo ();

    dex_await (my_other_thing_bar (other, foo), NULL);
  }

  return dex_future_new_true ();
}

One of the scenarios where re-acquiring an object is necessary is a worker fiber which operates continuously until the object gets disposed. Now, if this fiber owns the object (i.e. holds a reference to the object), the object will never get disposed, because the fiber would only finish when the reference it holds gets released, which doesn't happen because it holds a reference. The naive code also suspiciously doesn't have any exit condition.

static DexFuture *
foobar (gpointer user_data)
{
  g_autoptr(MyThing) self = g_object_ref (MY_THING (user_data));

  for (;;)
    {
      g_autoptr(GBytes) bytes = NULL;

      bytes = dex_await_boxed (my_other_thing_bar (other, foo), NULL);
      my_thing_write_bytes (self, bytes);
    }
}

So instead of owning the object, we need a way to re-acquire it. A weak-ref is perfect for this.

static DexFuture *
foobar (gpointer user_data)
{
  /* g_weak_ref_init in the caller somewhere */
  GWeakRef *self_wr = user_data;

  for (;;)
    {
      g_autoptr(GBytes) bytes = NULL;

      bytes = dex_await_boxed (my_other_thing_bar (other, foo), NULL);

      {
        g_autoptr(MyThing) self = g_weak_ref_get (self_wr);

        if (!self)
          return dex_future_new_true ();

        my_thing_write_bytes (self, bytes);
      }
    }
}

Conclusion
  • Always use g_auto/g_steal helpers to mark ownership and ownership transfers (exceptions do apply)
  • Use scopes to limit the lifetime of objects
  • In fibers, always own objects you need across await points, or re-acquire them

Sam Thursfield: Status update, 21st January 2026

Planet GNOME - Mër, 21/01/2026 - 2:00md

Happy new year, ye bunch of good folks who follow my blog.

I ain’t got a huge bag of stuff to announce. It’s raining like January. I’ve been pretty busy with work amongst other things, doing stuff with operating systems but mostly internal work, and mostly management and planning at that.

We did make an actual OS last year though, here's a nice blog post from Endless and a video interview about some of the work and why it's cool: "Endless OS: A Conversation About What's Changing and Why It Matters".

I tried a new audio setup in advance of that video, using a pro interface and mic I had lying around. It didn’t work though and we recorded it through the laptop mic. Oh well.

Later I learned that, by default, a 16-channel interface will be treated by GNOME as a 7.1 surround setup or something mental. You can use the PipeWire loopback module to define a single mono source on the channel that you want to use, and now audio Just Works again. PipeWire has pretty good documentation now too!
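A setup like this can be sketched as a PipeWire config drop-in. This is a hedged example, not the exact configuration used here: the module name and property keys come from PipeWire's libpipewire-module-loopback, while the node names, the channel label, and the target device string are placeholders you would replace with your own values:

```
# ~/.config/pipewire/pipewire.conf.d/mono-mic.conf
# Sketch: expose one channel of a multichannel interface as a
# mono capture source. "alsa_input.your-interface" is a placeholder
# for the real node name (see `pw-cli list-objects Node`).
context.modules = [
    {   name = libpipewire-module-loopback
        args = {
            node.description = "Mono Mic"
            capture.props = {
                node.name         = "capture.mono-mic"
                target.object     = "alsa_input.your-interface"
                audio.position    = [ AUX0 ]
                stream.dont-remix = true
            }
            playback.props = {
                node.name      = "mono-mic"
                media.class    = Audio/Source
                audio.position = [ MONO ]
            }
        }
    }
]
```

The capture side attaches to one input channel of the hardware device, and the playback side re-exposes it as a plain mono source that applications see as a normal microphone.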

What else happened? Jordan and Bart finally migrated the GNOME openQA server off the ad-hoc VM setup that it ran on, and brought it into OpenShift, as the Lord intended. Hopefully you didn’t even notice. I updated the relevant wiki page.

The Linux QA monthly calls are still going, by the way. I handed over the reins to another participant, but I’m still going to the calls. The most active attendees are the Debian folk, who are heroically running an Outreachy internship right now to improve desktop testing in Debian. You can read a bit about it here: “Debian welcomes Outreachy interns for December 2025-March 2026 round”.

And it looks like Localsearch is going to do more comprehensive indexing in GNOME 50. Carlos announced this back in October 2025 (“A more comprehensive LocalSearch index for GNOME 50”) aiming to get some advance testing on this, and so far the feedback seems to be good.

That’s it from me I think. Have a good year!


Ocean Damage Nearly Doubles the Cost of Climate Change

Slashdot - Mar, 20/01/2026 - 11:00pd
A new study from Scripps Institution of Oceanography finds that factoring ocean damage into climate economics nearly doubles the estimated global cost of climate change, adding close to $2 trillion per year from losses to fisheries, coral reefs, and coastal infrastructure. "It is the first time a social cost of carbon (SCC) assessment -- a key measure of economic harm caused by climate change -- has included damages to the ocean," reports Inside Climate News. From the report: "For decades, we've been estimating the economic cost of climate change while effectively assigning a value of zero to the ocean," said Bernardo Bastien-Olvera, who led the study during his postdoctoral fellowship at Scripps. "Ocean loss is not just an environmental issue, but a central part of the economic story of climate change." The social cost of carbon is an accounting method for working out the monetary cost of each ton of carbon dioxide released into the atmosphere. "[It] is one of the most efficient tools we have for internalizing climate damages into economic decision-making," said Amy Campbell, a United Nations climate advisor and former British government COP negotiator. Calculations have historically been used by international organizations and state departments like the U.S. Environmental Protection Agency to assess policy proposals -- though a 2025 White House memo from the Trump administration instructed federal agencies to ignore the data during cost-benefit analyses unless required by law. "It becomes politically contentious when deciding whose damages are counted, which sectors are included and most importantly how future and retrospective harms are valued," Campbell said. Excluding ocean harm, the social cost of carbon is $51 per ton of carbon dioxide emitted. This increases to $97.20 per ton when the ocean, which covers 70 percent of the planet, is included. 
In 2024, global CO2 emissions were estimated to be 41.6 billion tons, making the 91 percent cost increase significant. Using greenhouse gas emission predictions, the report estimates the annual damages to traditional markets alone will be $1.66 trillion by 2100.

Read more of this story at Slashdot.
