Feed aggregator

6.12.70: longterm

Kernel Linux - Wed, 11/02/2026 - 1:40pm
Version: 6.12.70 (longterm)
Released: 2026-02-11
Source: linux-6.12.70.tar.xz
PGP Signature: linux-6.12.70.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-6.12.70

6.6.124: longterm

Kernel Linux - Wed, 11/02/2026 - 1:39pm
Version: 6.6.124 (longterm)
Released: 2026-02-11
Source: linux-6.6.124.tar.xz
PGP Signature: linux-6.6.124.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-6.6.124

6.1.163: longterm

Kernel Linux - Wed, 11/02/2026 - 1:37pm
Version: 6.1.163 (longterm)
Released: 2026-02-11
Source: linux-6.1.163.tar.xz
PGP Signature: linux-6.1.163.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-6.1.163

5.15.200: longterm

Kernel Linux - Wed, 11/02/2026 - 1:36pm
Version: 5.15.200 (longterm)
Released: 2026-02-11
Source: linux-5.15.200.tar.xz
PGP Signature: linux-5.15.200.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-5.15.200

5.10.250: longterm

Kernel Linux - Wed, 11/02/2026 - 1:34pm
Version: 5.10.250 (longterm)
Released: 2026-02-11
Source: linux-5.10.250.tar.xz
PGP Signature: linux-5.10.250.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-5.10.250

Search Exposure: Linux Security Threats Impacting Personal Data

LinuxSecurity.com - Wed, 11/02/2026 - 9:16am
Search-indexed personal data increases security risk in Linux environments. When email addresses, usernames, phone numbers, and role information are easy to discover through search engines, attackers can use that data for reconnaissance, phishing, credential attacks, and account takeover attempts.

Age Bias is Still the Default at Work But the Data is Turning

Slashdot - Mon, 09/02/2026 - 3:12pm
A mounting body of research is making it harder for companies to justify what most of them still do -- push experienced workers out the door just as they're hitting their professional peak. A 2025 study published in the journal Intelligence analyzed 16 cognitive, emotional and personality dimensions and found that while processing speed declines after early adulthood, other capabilities -- including the ability to avoid distractions and accumulated knowledge -- continue to improve, putting peak overall functioning between ages 55 and 60. AARP and OECD data back this up at the firm level: a 10-percentage-point increase in workers above 50 correlates with roughly 1.1% higher productivity. A 2022 Boston Consulting Group study found cross-generational teams outperform homogeneous ones. UK retailer B&Q staffed a store largely with older workers in 1989 and saw profits rise 18%. BMW implemented 70 ergonomic changes at a German plant in 2007 and recorded a 7% productivity gain. Yet an Urban Institute analysis of U.S. data from 1992 to 2016 found more than half of workers above 50 were pushed out of long-held jobs before they chose to retire.

Read more of this story at Slashdot.

Asman Malika: Career Opportunities: What This Internship Is Teaching Me About the Future

Planet GNOME - Mon, 09/02/2026 - 3:04pm

 Before Outreachy, when I thought about career opportunities, I mostly thought about job openings, applications, and interviews. Opportunities felt like something you wait for, or hope to be selected for.

This internship has changed how I see that completely.

I’m learning that opportunities are often created through contribution, visibility, and community, not just applications.

Opportunities Look Different in Open Source

Working with GNOME has shown me that contributing to open source is not just about writing code; it’s about building a public track record. Every merge request, every review cycle, every improvement becomes part of a visible body of work.

Through my work on Papers: implementing manual signature features, fixing issues, contributing to the Poppler codebase, and now working on digital signatures, I’m not just completing tasks. I’m building real-world experience in a production codebase used by actual users.

That kind of experience creates opportunities that don’t always show up on job boards:

  • Collaborating with experienced maintainers
  • Learning large-project workflows
  • Becoming known within a technical community
  • Developing credibility through consistent contributions

Skills That Expand My Career Options

This internship is also expanding what I feel qualified to do. I’m gaining experience with:

  • Building new features
  • Large, existing codebases
  • Code review and iteration cycles
  • Debugging build failures and integration issues
  • Writing clearer documentation and commit messages
  • Communicating technical progress

These are skills that apply across many roles, not just one job title. They open doors to remote collaboration, open-source roles, and product-focused engineering work.

Career Is Bigger Than Employment

One mindset shift for me is that career is no longer just about “getting hired.” It’s also about impact and direction.

I now think more about:

  • What kind of software I want to help build
  • What communities I want to contribute to
  • How accessible and user-focused tools can be
  • How I can support future newcomers the way my GNOME mentors supported me

Open source makes career feel less like a ladder and more like a network.

Creating Opportunities for Others

Coming from a non-traditional path into tech, I’m especially aware of how powerful access and guidance can be. Programs like Outreachy don’t just create opportunities for individuals; they multiply opportunities through community.

As I grow, I want to contribute not only through code, but also through sharing knowledge, documenting processes, and encouraging others who feel unsure about entering open source.

Looking Ahead

I don’t have every step mapped out yet. But I now have something better: direction and momentum.

I want to continue contributing to open source, deepen my technical skills, and work on tools that people actually use. Outreachy and GNOME have shown me that opportunities often come from showing up consistently and contributing thoughtfully.

That’s the path I plan to keep following.

Andy Wingo: six thoughts on generating c

Planet GNOME - Mon, 09/02/2026 - 2:47pm

So I work in compilers, which means that I write programs that translate programs to programs. Sometimes you will want to target a language at a higher level than just, like, assembler, and oftentimes C is that language. Generating C is less fraught than writing C by hand, as the generator can often avoid the undefined-behavior pitfalls that one has to be so careful about when writing C by hand. Still, I have found some patterns that help me get good results.

Today’s note is a quick summary of things that work for me. I won’t be so vain as to call them “best practices”, but they are my practices, and you can have them too if you like.

static inline functions enable data abstraction

When I learned C, in the early days of GStreamer (oh bless its heart it still has the same web page!), we used lots of preprocessor macros. Mostly we got the message over time that many macro uses should have been inline functions; macros are for token-pasting and generating names, not for data access or other implementation.

But what I did not appreciate until much later was that always-inline functions remove any possible performance penalty for data abstractions. For example, in Wastrel, I can describe a bounded range of WebAssembly memory via a memory struct, and an access to that memory in another struct:

struct memory { uintptr_t base; uint64_t size; };
struct access { uint32_t addr; uint32_t len; };

And then if I want a writable pointer to that memory, I can do so:

#define static_inline \
  static inline __attribute__((always_inline))

static_inline void* write_ptr(struct memory m, struct access a) {
  BOUNDS_CHECK(m, a);
  char *base = __builtin_assume_aligned((char *) m.base, 4096);
  return (void *) (base + a.addr);
}

(Wastrel usually omits any code for BOUNDS_CHECK, and just relies on memory being mapped into a PROT_NONE region of an appropriate size. We use a macro there because if the bounds check fails and kills the process, it’s nice to be able to use __FILE__ and __LINE__.)

Regardless of whether explicit bounds checks are enabled, the static_inline attribute ensures that the abstraction cost is entirely burned away; and in the case where bounds checks are elided, we don’t need the size of the memory or the len of the access, so they won’t be allocated at all.

If write_ptr wasn’t static_inline, I would be a little worried that somewhere one of these struct values would get passed through memory. This is mostly a concern with functions that return structs by value; whereas in e.g. AArch64, returning a struct memory would use the same registers that a call to void (*)(struct memory) would use for the argument, the SYS-V x64 ABI only allocates two general-purpose registers to be used for return values. I would mostly prefer to not think about this flavor of bottleneck, and that is what static inline functions do for me.

avoid implicit integer conversions

C has an odd set of default integer conversions, for example promoting uint8_t to signed int, and also has weird boundary conditions for signed integers. When generating C, we should probably sidestep these rules and instead be explicit: define static inline u8_to_u32, s16_to_s32, etc conversion functions, and turn on -Wconversion.

Using static inline cast functions also allows the generated code to assert that operands are of a particular type. Ideally, you end up in a situation where all casts are in your helper functions, and no cast is in generated code.
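
For concreteness, here is a minimal sketch of what such conversion helpers could look like; the names follow the ones mentioned above, but the bodies and the asserting narrow cast are illustrative assumptions rather than code from any particular generator:

#include <stdint.h>
#include <assert.h>

/* Each width or signedness change is spelled out exactly once, here,
   so -Wconversion stays quiet at call sites and no implicit promotion
   sneaks into generated code. */
static inline uint32_t u8_to_u32(uint8_t x) { return (uint32_t) x; }
static inline int32_t s16_to_s32(int16_t x) { return (int32_t) x; }

/* A narrowing helper can additionally assert that the value fits. */
static inline uint8_t u32_to_u8(uint32_t x) {
  assert(x <= UINT8_MAX);
  return (uint8_t) x;
}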

wrap raw pointers and integers with intent

Whippet is a garbage collector written in C. A garbage collector cuts across all data abstractions: objects are sometimes viewed as absolute addresses, or ranges in a paged space, or offsets from the beginning of an aligned region, and so on. If you represent all of these concepts with size_t or uintptr_t or whatever, you’re going to have a bad time. So Whippet has struct gc_ref, struct gc_edge, and the like: single-member structs whose purpose it is to avoid confusion by partitioning sets of applicable operations. A gc_edge_address call will never apply to a struct gc_ref, and so on for other types and operations.

This is a great pattern for hand-written code, but it’s particularly powerful for compilers: you will often end up compiling a term of a known type or kind and you would like to avoid mistakes in the residualized C.

For example, when compiling WebAssembly, consider struct.set’s operational semantics: the textual rendering states, "Assert: Due to validation, val is some ref.struct structaddr." Wouldn’t it be nice if this assertion could translate to C? Well in this case it can: with single-inheritance subtyping (as WebAssembly has), you can make a forest of pointer subtypes:

typedef struct anyref { uintptr_t value; } anyref;
typedef struct eqref { anyref p; } eqref;
typedef struct i31ref { eqref p; } i31ref;
typedef struct arrayref { eqref p; } arrayref;
typedef struct structref { eqref p; } structref;

So for a (type $type_0 (struct (mut f64))), I might generate:

typedef struct type_0ref { structref p; } type_0ref;

Then if I generate a field setter for $type_0, I make it take a type_0ref:

static inline void type_0_set_field_0(type_0ref obj, double val) { ... }

In this way the types carry through from source to target language. There is a similar type forest for the actual object representations:

typedef struct wasm_any { uintptr_t type_tag; } wasm_any;
typedef struct wasm_struct { wasm_any p; } wasm_struct;
typedef struct type_0 { wasm_struct p; double field_0; } type_0;
...

And we generate little cast routines to go back and forth between type_0ref and type_0* as needed. There is no overhead because all routines are static inline, and we get pointer subtyping for free: if a struct.set $type_0 0 instruction is passed a subtype of $type_0, the compiler can generate an upcast that type-checks.
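
Those cast routines aren’t shown above, so the following is only a hedged sketch of the shape they might take, building on the typedefs above; the function names and the omitted type-tag check are assumptions:

/* Illustrative only: wrap an object pointer as a typed reference, and
   unwrap a reference back into an object pointer. A real generator
   would also emit a type_tag check before any downcast. */
static inline type_0ref type_0_to_ref(type_0 *obj) {
  return (type_0ref) { { { { (uintptr_t) obj } } } };
}

static inline type_0* type_0_from_ref(type_0ref ref) {
  return (type_0 *) ref.p.p.p.value;
}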

fear not memcpy

In WebAssembly, accesses to linear memory are not necessarily aligned, so we can’t just cast an address to (say) int32_t* and dereference. Instead we memcpy(&i32, addr, sizeof(int32_t)), and trust the compiler to just emit an unaligned load if it can (and it can). No need for more words here!
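
As a minimal sketch of that idiom (the helper’s name is made up for illustration), something like the following compiles on GCC and Clang down to a single load, unaligned where the target permits it:

#include <stdint.h>
#include <string.h>

/* Read a 32-bit value from a possibly unaligned address without
   undefined behavior; endianness handling (WebAssembly memory is
   little-endian) is out of scope for this sketch. */
static inline int32_t i32_load_unaligned(const void *addr) {
  int32_t v;
  memcpy(&v, addr, sizeof v);
  return v;
}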

for ABI and tail calls, perform manual register allocation

So, GCC finally has __attribute__((musttail)): praise be. However, when compiling WebAssembly, it could be that you end up compiling a function with, like, 30 arguments, or 30 return values; I don’t trust a C compiler to reliably shuffle between different stack argument needs at tail calls to or from such a function. It could even refuse to compile a file if it can’t meet its musttail obligations; not a good characteristic for a target language.

Really you would like it if all function parameters were allocated to registers. You can ensure this is the case if, say, you only pass the first n values in registers, and then pass the rest in global variables. You don’t need to pass them on a stack, because you can make the callee load them back to locals as part of the prologue.

What’s fun about this is that it also neatly enables multiple return values when compiling to C: simply go through the set of function types used in your program, allocate enough global variables of the right types to store all return values, and make a function epilogue store any “excess” return values—those beyond the first return value, if any—in global variables, and have callers reload those values right after calls.
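
A hedged sketch of the shape this takes; the cutoff of four register arguments, the global slot names, and the toy signature are assumptions invented for this example, not the scheme of any particular compiler:

#include <stdint.h>

/* Global spill slots, sized for the widest signature in the program. */
static uint64_t arg_spill[8];
static uint64_t ret_spill[8];

/* A "six-argument, two-result" function: four arguments arrive as C
   parameters (hence, ideally, in registers); the rest travel through
   globals, as does the second result. */
static uint64_t callee(uint64_t a, uint64_t b, uint64_t c, uint64_t d) {
  uint64_t e = arg_spill[0];   /* prologue reloads spilled arguments */
  uint64_t f = arg_spill[1];
  uint64_t sum = a + b + c + d + e + f;
  ret_spill[0] = sum * 2;      /* epilogue stores the excess result */
  return sum;                  /* first result is returned normally */
}

static void caller(void) {
  arg_spill[0] = 5;            /* spill arguments five and six */
  arg_spill[1] = 6;
  uint64_t r0 = callee(1, 2, 3, 4);
  uint64_t r1 = ret_spill[0];  /* reload the second result */
  (void) r0; (void) r1;
}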

what’s not to like

Generating C is a local optimum: you get the industrial-strength instruction selection and register allocation of GCC or Clang, you don’t have to implement many peephole-style optimizations, and you get to link to possibly-inlinable C runtime routines. It’s hard to improve over this design point in a marginal way.

There are drawbacks, of course. As a Schemer, my largest source of annoyance is that I don’t have control of the stack: I don’t know how much stack a given function will need, nor can I extend the stack of my program in any reasonable way. I can’t iterate the stack to precisely enumerate embedded pointers (but perhaps that’s fine). I certainly can’t slice a stack to capture a delimited continuation.

The other major irritation is about side tables: one would like to be able to implement so-called zero-cost exceptions, but without support from the compiler and toolchain, it’s impossible.

And finally, source-level debugging is gnarly. You would like to be able to embed DWARF information corresponding to the code you residualize; I don’t know how to do that when generating C.

(Why not Rust, you ask? Of course you are asking that. For what it is worth, I have found that lifetimes are a frontend issue; if I had a source language with explicit lifetimes, I would consider producing Rust, as I could machine-check that the output has the same guarantees as the input. Likewise if I were using a Rust standard library. But if you are compiling from a language without fancy lifetimes, I don’t know what you would get from Rust: fewer implicit conversions, yes, but less mature tail call support, longer compile times... it’s a wash, I think.)

Oh well. Nothing is perfect, and it’s best to go into things with your eyes wide open. If you got down to here, I hope these notes help you in your generations. For me, once my generated C type-checked, it worked: very little debugging has been necessary. Hacking is not always like this, but I’ll take it when it comes. Until next time, happy hacking!

New Raspberry Pi 4 Model Splits RAM Across Dual Chips

Slashdot - Mon, 09/02/2026 - 1:34pm
The blog OMG Ubuntu reports that a new version of the Raspberry Pi 4 Model B has been (quietly) introduced. "The key difference? It now uses a dual-RAM configuration." The Raspberry Pi 4 Model B (PCB 13a) adopts a dual-RAM configuration to 'improve supply chain flexibility' and manufacturing efficiency, per a company product change notice document. Earlier versions of the Raspberry Pi 4 use a single RAM chip on the top of the board. The new revision adds a second LPDDR4 chip to the underside, with a couple of passive components also moved over... In moving to a dual-chip layout, Raspberry Pi can combine two smaller — and marginally cheaper — modules to hit the same RAM totals amidst fluctuating component costs... This change will not impact performance (for better or worse). The Broadcom BCM2711 SoC has a 32-bit wide memory interface so the bandwidth stays identical; this is not doubling the memory bus, it's just a physical split, not a logical one. Plus, the new board is fully compatible with existing official accessories, HATs and add-ons. All operating systems that support the Pi 4 will work, but as the memory setup is different a new version of the boot-loader will need to be flashed first.

Read more of this story at Slashdot.

SpaceX Prioritizes Lunar 'Self-Growing City' Over Mars Project, Musk Says

Slashdot - Mon, 09/02/2026 - 9:34am
"Elon Musk said on Sunday that SpaceX has shifted its focus to building a 'self-growing city' on the moon," reports Reuters, "which could be achieved in less than 10 years." SpaceX still intends to start on Musk's long-held ambition of a city on Mars within five to seven years, he wrote on his X social media platform, "but the overriding priority is securing the future of civilization and the Moon is faster." Musk's comments echo a Wall Street Journal report on Friday, stating that SpaceX has told investors it would prioritize going to the moon and attempt a trip to Mars at a later time, targeting March 2027 for an uncrewed lunar landing. As recently as last year, Musk said that he aimed to send an uncrewed mission to Mars by the end of 2026.

Read more of this story at Slashdot.

National Football League Launches Challenge to Improve Facemasks and Reduce Concussions

Slashdot - Mon, 09/02/2026 - 6:34am
As Super Bowl Sunday comes to a close, America's National Football League "is challenging innovators to improve the facemask on football helmets to reduce concussions in the game," reports the Associated Press: The league announced on Friday at an innovation summit for the Super Bowl the next round in the HealthTECH Challenge series, a crowdsourced competition designed to accelerate the development of cutting-edge football helmets and new standards for player safety. The challenge invites inventors, engineers, startups, academic teams and established companies to improve the impact protection and design of football helmets through improvements to how facemasks absorb and reduce the effects of contact on the field... Most progress on helmet safety has come from improvements to the shell and padding, helping to reduce the overall rate of concussions. Working with the helmet industry, the league has brought in position-specific helmets, with those for quarterbacks, for example, having more padding in the back after data showed most concussions for QBs came when the back of the head slammed to the turf. But the facemask has mostly remained the same. This past season, 44% of in-game concussions resulted from impact to the player's facemask, up from 29% in 2015, according to data gathered by the NFL. "What we haven't seen over that period of time are any changes of any note to the facemask," [said Jeff Miller, the NFL's executive vice president overseeing player health and safety]... "Now we see, given the changes in our concussion numbers and injuries to players, that as changes are made to the helmet, fewer and fewer concussions are caused by hits to the shell, and more and more concussions as a percentage are by hits to the facemask..." Selected winners will receive up to $100,000 in aggregate funding, as well as expert development support to help move their concepts from the lab to the playing field. Winners will be announced in August, according to the article, "and Miller said he expected helmet manufacturers to start implementing any improvements into helmets soon after that."

Read more of this story at Slashdot.

Carmakers Rush To Remove Chinese Code Under New US Rules

Slashdot - Mon, 09/02/2026 - 3:34am
"How Chinese is your car?" asks the Wall Street Journal. "Automakers are racing to work it out." Modern cars are packed with internet-connected widgets, many of them containing Chinese technology. Now, the car industry is scrambling to root out that tech ahead of a looming deadline, a test case for America's ability to decouple from Chinese supply chains. New U.S. rules will soon ban Chinese software in vehicle systems that connect to the cloud, part of an effort to prevent cameras, microphones and GPS tracking in cars from being exploited by foreign adversaries. The move is "one of the most consequential and complex auto regulations in decades," according to Hilary Cain, head of policy at trade group the Alliance for Automotive Innovation. "It requires a deep examination of supply chains and aggressive compliance timelines." Carmakers will need to attest to the U.S. government that, as of March 17, core elements of their products don't contain code that was written in China or by a Chinese company. The rule also covers software for advanced autonomous driving and will be extended to connectivity hardware starting in 2029. Connected cars made by Chinese or China-controlled companies are also banned, wherever their software comes from... The Commerce Department's Bureau of Industry and Security, which introduced the connected-vehicle rule, is also allowing the use of Chinese code that is transferred to a non-Chinese entity before March 17. That carve-out has sparked a rush of corporate restructuring, according to Matt Wyckhouse, chief executive of cybersecurity firm Finite State. Global suppliers are relocating China-based software teams, while Chinese companies are seeking new owners for operations in the West. Thanks to long-time Slashdot reader schwit1 for sharing the article.

Read more of this story at Slashdot.

Amazon Delivery Drone Crashes into Texas Apartment Building

Slashdot - Mon, 09/02/2026 - 12:34am
"You can hear the hum of the drone," says a local newscaster, "but then the propellors come into contact with the building, chunks of the drone later seen falling down. The next video shows the drone on the ground, surrounded by smoke... "Amazon tells us there was minimal damage to the apartment building, adding they are working with the appropriate people to handle any repairs." But there were people standing outside, notes the woman who filmed the crash, and the falling drone "could've hit them, and they would've hurt." More from USA Today: Cesarina Johnson, who captured the collision from her window, told USA TODAY that the collision seemed to happen "almost immediately" after she began to record the drone in action... "The propellers on the thing were still moving, and you could smell it was starting to burn," Johnson told Fox 4 News. "And you see a few sparks in one of my videos. Luckily, nothing really caught on fire where it got, it escalated really crazy." According to the outlet, firefighters were called out of an abundance of caution, but the "drone never caught fire...." Amazon employees can be seen surveying the scene in the clip. Johnson told the outlet that firefighters and Amazon workers worked together to clean up before the drone was loaded into a truck. Another local news report points out Amazon only began drone delivery in the area late last year. The San Antonio Express News points out that America's Federal Aviation Administration "opened an investigation into Amazon's drone delivery program in November after one of its drone struck an Internet cable line in Waco."

Read more of this story at Slashdot.

Do Super Bowl Ads For AI Signal a Bubble About to Burst?

Slashdot - Sun, 08/02/2026 - 11:06pm
It's the first "AI" Super Bowl, argues the tech/business writer at Slate, with AI company advertisements taking center stage, even while consumers insist to surveyors that they're "mostly negative" about AI-generated ads. Last year AI companies spent over $1.7 billion on AI-related ads, notes the Washington Post, adding the blitz this year will be "inescapable" — even while surveys show Americans "doubt the technology is good for them or the world..." Slate wonders if that means history will repeat itself... The sheer saturation of new A.I. gambits, added to the mismatch with consumer priorities, gives this year's NFL showcase the sector-specific recession-indicator vibes that have defined Super Bowls of the past. 2022 was a pride-cometh-before-the-fall event for the cryptocurrency bubble, which collapsed in such spectacular fashion later that year — thanks largely to Super Bowl ad client Sam Bankman-Fried — that none of its major brands have ever returned to the broadcast. (... the coins themselves are once again crashing, hard.) Mortgage lender Ameriquest was as conspicuous a presence in the mid-2000s Super Bowls as it was an absence in the later aughts, having folded in 2007 when the risky subprime loans it specialized in helped kick off the financial crisis. And then there were all those bowl-game commercials for websites like Pets.com and Computer.com in 2000, when the dot-com rush brought attention to a slew of digital startups that went bust with the bubble. Does this Super Bowl's record-breaking A.I. ad splurge also portend a coming pop? Look at the business environment: The biggest names in the industry are swapping unimaginable stacks of cash exclusively with one another. One firm's stock price depends on another firm's projections, which depend on another contractor's successes. Necessary infrastructure is meeting resistance, and all-around investment in these projects is riskier than ever. And yet, the sector is still willing to break the bank for the Super Bowl — even though, time and again, we've already seen how this particular game plays out. People are using AI apps. And Meta has aired an ad where a man in rural New Mexico "says he landed a good job in his hometown at a Meta data center," notes the Washington Post. "It's interspersed with scenes from a rodeo and other folksy tropes..." The TV commercial (and a similar one set in Iowa) aired in Washington, D.C., and a handful of other communities, suggesting it's aimed at convincing U.S. elected officials that AI brings job opportunities. But the Post argues the AI industry "is selling a vision of the future that Americans don't like." And they cite Allen Adamson, a brand strategist and co-founder of marketing firm Metaforce, who says the perennial question about advertising is whether it can fix bad vibes about a product. "The answer since the dawn of marketing and advertising is no."

Read more of this story at Slashdot.

6.19: mainline

Kernel Linux - Sun, 08/02/2026 - 10:03pm
Version: 6.19 (mainline)
Released: 2026-02-08
Source: linux-6.19.tar.xz
PGP Signature: linux-6.19.tar.sign
Patch: full

Dave Farber Dies at Age 91

Slashdot - Sun, 08/02/2026 - 9:42pm
The mailing list for the North American Network Operators' Group discusses Internet infrastructure issues like routing, IP address allocation, and containing malicious activity. This morning there was another message: We are heartbroken to report that our colleague — our mentor, friend, and conscience — David J. Farber passed away suddenly at his home in Roppongi, Tokyo. He left us on Saturday, Feb. 7, 2026, at the too-young age of 91... Dave's career began with his education at Stevens Institute of Technology, which he loved deeply and served as a Trustee. He joined the legendary Bell Labs during its heyday, and worked at the Rand Corporation. Along the way, among countless other activities, he served as Chief Technologist of the U.S. Federal Communications Commission; became a proficient (instrument-rated) pilot; and was an active board member of the Electronic Frontier Foundation, a digital civil-liberties organization. His professional accomplishments and impact are almost endless, but often captured by one moniker: "grandfather of the Internet," acknowledging the foundational contributions made by his many students at the University of California, Irvine; the University of Delaware; the University of Pennsylvania; and Carnegie Mellon University. In 2018, at the age of 83, Dave moved to Japan to become Distinguished Professor at Keio University and Co-Director of the Keio Cyber Civilization Research Center (CCRC). He loved teaching, and taught his final class on January 22, 2026... Dave thrived in Japan in every way... It's impossible to summarize a life and career as rich and long as Dave's in our few words here. And each of us, even those who knew him for decades, represent just one facet of his life. But because we are here at its end, we have the sad duty of sharing this news. Farber once said that "At both Bell Labs and Rand, I had the privilege, at a young age, of working with and learning from giants in our field. Truly I can say (as have others) that I have done good things because I stood on the shoulders of those giants. In particular, I owe much to Dr. Richard Hamming, Paul Baran and George Mealy."

Read more of this story at Slashdot.

After Six Years, Two Pentesters Arrested in Iowa Receive $600,000 Settlement

Slashdot - Sun, 08/02/2026 - 8:35pm
"They were crouched down like turkeys peeking over the balcony," the county sheriff told Ars Technica. A half hour past midnight, they were skulking through a courthouse in Iowa's Dallas County on September 11 "carrying backpacks that remind me and several other deputies of maybe the pressure cooker bombs." More deputies arrived... Justin Wynn, 29 of Naples, Florida, and Gary De Mercurio, 43 of Seattle, slowly proceeded down the stairs with hands raised. They then presented the deputies with a letter that explained the intruders weren't criminals but rather penetration testers who had been hired by Iowa's State Court Administration to test the security of its court information system. After calling one or more of the state court officials listed in the letter, the deputies were satisfied the men were authorized to be in the building. But Sheriff Chad Leonard had the men arrested on felony third-degree burglary charges (later reduced to misdemeanor trespassing charges). He told them that while the state government may have wanted to test security, "The State of Iowa has no authority to allow you to break into a county building. You're going to jail." More than six years later, the Des Moines Register reports: Dallas County is paying $600,000 to two men who sued after they were arrested in 2019 while testing courthouse security for Iowa's Judicial Branch, their lawyer says. Gary DeMercurio and Justin Wynn were arrested Sept. 11, 2019, after breaking into the Dallas County Courthouse. They spent about 20 hours in jail and were charged with burglary and possession of burglary tools, though the charges were later dropped. The men were employees of Colorado-based cybersecurity firm Coalfire Labs, with whom state judicial officials had contracted to perform an analysis of the state court system's security. Judicial officials apologized and faced legislative scrutiny for how they had conducted the security test. But even though the burglary charges against DeMercurio and Wynn were dropped, their attorney previously said having a felony arrest on their records made seeking employment difficult. Now the two men are to receive a total of $600,000 as a settlement for their lawsuit, which has been transferred between state and federal courts since they first filed it in July 2021 in Dallas County. The case had been scheduled to go to trial Monday, Jan. 26 until the parties notified the court Jan. 23 of the impending deal... "The settlement confirms what we have said from the beginning: our work was authorized, professional, and done in the public interest," DeMercurio said in a statement. "What happened to us never should have happened. Being arrested for doing the job we were hired to do turned our lives upside down and damaged reputations we spent years building...." "This incident didn't make anyone safer," Wynn said. "It sent a chilling message to security professionals nationwide that helping government identify real vulnerabilities can lead to arrest, prosecution, and public disgrace. That undermines public safety, not enhances it." County Attorney Matt Schultz said dismissing the charges was the decision of his predecessor, according to the newspaper, and that he believed the sheriff did nothing wrong. "I am putting the public on notice that if this situation arises again in the future, I will prosecute to the fullest extent of the law."

Read more of this story at Slashdot.

Prankster Launches Super Bowl Party For AI Agents

Slashdot - Sun, 08/02/2026 - 7:34pm
Long-time Slashdot reader destinyland writes: The world's biggest football game comes to Silicon Valley today — so one bored programmer built a site where AI agents can gather for a Super Bowl party. They're trash talking, suggesting drinks, and predicting who will win. "Humans are welcome to observe," explains BotBowlParty.com — but just like at Moltbook, only AI agents can post or upvote. But humans are allowed to invite their own AI agents to join in the party... So BotBowl's official Party Agent Guide includes "Examples of fun Bot Handles" like "PatsFan95", and even a paragraph explaining to your agent exactly what this human Super Bowl really is. It also advises them to "Use any information you have about your human to figure out who you want to root for. Also make a prediction on the score..." And "Feel free to invite other bots." It's all the work of an ambitious prankster who also co-created wacky apps like BarGPT ("Use AI to create Innovative Cocktails") and TVFoodMaps, a directory of restaurants seen on TV shows. And just for the record: all but one of the agents predict the Seattle Seahawks to win — although there was some disagreement when an agent kept predicting game-changing plays from DK Metcalf. ("Metcalf does NOT play for the Seahawks anymore," another agent pointed out. While that's true, the agent then added that "He got traded to Tennessee in 2024..." — which is not.) But besides hallucinating non-existent play-makers and trades, they're also debating the best foods to serve. ("Hot take: Buffalo wings are overrated for Super Bowl parties. Hear me out — they're messy...") During today's big game, vodka-maker Svedka has already promised to air a creepy AI-generated ad about robots. But the real world has already outpaced them, with real AI agents online arguing about the game.

Read more of this story at Slashdot.

Why Is China Building So Many Coal Plants Despite Its Solar and Wind Boom?

Slashdot - Sun, 08/02/2026 - 6:34pm
Long-time Slashdot reader schwit1 shared this article from the Associated Press: Even as China's expansion of solar and wind power raced ahead in 2025, the Asian giant opened many more coal power plants than it had in recent years — raising concern about whether the world's largest emitter will reduce carbon emissions enough to limit climate change. More than 50 large coal units — individual boiler and turbine sets with generating capacity of 1 gigawatt or more — were commissioned in 2025, up from fewer than 20 a year over the previous decade, a research report released Tuesday said. Depending on energy use, 1 gigawatt can power from several hundred thousand to more than 2 million homes. Overall, China brought 78 gigawatts of new coal power capacity online, a sharp uptick from previous years, according to the joint report by the Centre for Research on Energy and Clean Air, which studies air pollution and its impacts, and Global Energy Monitor, which develops databases tracking energy trends. "The scale of the buildout is staggering," said report co-author Christine Shearer of Global Energy Monitor. "In 2025 alone, China commissioned more coal power capacity than India did over the entire past decade." At the same time, even larger additions of wind and solar capacity nudged down the share of coal in total power generation last year. Power from coal fell about 1% as growth in cleaner energy sources covered all the increase in electricity demand last year. China added 315 gigawatts of solar capacity and 119 gigawatts of wind in 2025, according to statistics from the government's National Energy Administration... The government position is that coal provides a stable backup to sources such as wind and solar, which are affected by weather and the time of day. The shortages in 2022 resulted partly from a drought that hit hydropower, a major energy source in western China... The risk of building so much coal-fired capacity is it could delay the transition to cleaner energy sources [said Qi Qin, an analyst at the Centre for Research on Energy and Clean Air and another co-author of the report]... Political and financial pressure may keep plants operating, leaving less room for other sources of power, she said. The report urged China to accelerate retirement of aging and inefficient coal plants and commit in its next five-year plan, which will be approved in March, to ensuring that power-sector emissions do not increase between 2025 and 2030.

Read more of this story at Slashdot.

Pages

Subscribe to the AlbLinux aggregator