Feed aggregator

Matthew Garrett: What is a PC compatible?

Planet GNOME - Yesterday, 04/01/2026 - 4:11am

Wikipedia says “An IBM PC compatible is any personal computer that is hardware- and software-compatible with the IBM Personal Computer (IBM PC) and its subsequent models”. But what does this actually mean? The obvious literal interpretation is that, for a device to be PC compatible, all software originally written for the IBM 5150 must run on it. Is this a reasonable definition? Is it one that any modern hardware can meet?

Before we dig into that, let’s go back to the early days of the x86 industry. IBM had launched the PC built almost entirely around off-the-shelf Intel components, and shipped full schematics in the IBM PC Technical Reference Manual. Anyone could buy the same parts from Intel and build a compatible board. They’d still need an operating system, but Microsoft was happy to sell MS-DOS to anyone who’d turn up with money. The only thing stopping people from cloning the entire board was the BIOS, the component that sat between the raw hardware and much of the software running on it. The concept of a BIOS originated in CP/M, an operating system originally written in the 70s for systems based on the Intel 8080. At that point in time there was no meaningful standardisation - systems might use the same CPU but otherwise have entirely different hardware, and any software that made assumptions about the underlying hardware wouldn’t run elsewhere. CP/M’s BIOS was effectively an abstraction layer, a set of code that could be modified to suit the specific underlying hardware without needing to modify the rest of the OS. As long as applications only called BIOS functions, they didn’t need to care about the underlying hardware and would run on all systems that had a working CP/M port.
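
To make that abstraction concrete with the machine this article is actually about: on the IBM PC the same idea survives as interrupt-based BIOS services. A minimal sketch, assuming a DOS-era Borland-style compiler that provides int86() in dos.h, printing a character via the INT 10h teletype service instead of writing to video memory directly:

    /* Minimal sketch, assuming a DOS-era Borland-style compiler with dos.h.
     * Prints a character through the BIOS (INT 10h, AH=0Eh "teletype output")
     * rather than poking video memory, so it works on MDA, CGA, or anything
     * else with a conforming BIOS. */
    #include <dos.h>

    void bios_putchar(char c)
    {
        union REGS regs;
        regs.h.ah = 0x0E;            /* teletype output service */
        regs.h.al = c;               /* character to print */
        regs.h.bh = 0;               /* display page 0 */
        int86(0x10, &regs, &regs);   /* call the video BIOS */
    }

    int main(void)
    {
        bios_putchar('P');
        bios_putchar('C');
        return 0;
    }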

By 1979, boards based on the 8086, Intel’s successor to the 8080, were hitting the market. The 8086 wasn’t machine code compatible with the 8080, but 8080 assembly code could be assembled to 8086 instructions to simplify porting old code. Despite this, the 8086 version of CP/M was taking some time to appear, and a company called Seattle Computer Products started producing a new OS closely modelled on CP/M and using the same BIOS abstraction layer concept. When IBM started looking for an OS for their upcoming 8088 (an 8086 with an 8-bit data bus rather than a 16-bit one) based PC, a complicated chain of events resulted in Microsoft paying a one-off fee to Seattle Computer Products, porting their OS to IBM’s hardware, and the rest is history.

But one key part of this was that, even though what was now MS-DOS existed only to support IBM’s hardware, the BIOS abstraction remained, and the BIOS was owned by the hardware vendor - in this case, IBM. One key difference, though, was that while CP/M systems typically included the BIOS on boot media, IBM integrated it into ROM. This meant that MS-DOS floppies didn’t include all the code needed to run on a PC - you needed IBM’s BIOS. To begin with this wasn’t obviously a problem in the US market since, in a way that seems extremely odd from where we are now in history, it wasn’t clear that machine code was actually copyrightable. In 1982 Williams v. Artic determined that it could be, even when fixed in ROM - this ended up having broader industry impact in Apple v. Franklin, and it became clear that clone machines making use of the original vendor’s ROM code weren’t going to fly. Anyone wanting to make hardware compatible with the PC was going to have to find another way.

And here’s where things diverge somewhat. Compaq famously performed clean-room reverse engineering of the IBM BIOS to produce a functionally equivalent implementation without violating copyright. Other vendors, well, were less fastidious - they came up with BIOS implementations that either implemented a subset of IBM’s functionality, or didn’t implement all the same behavioural quirks, and compatibility was restricted. In this era several vendors shipped customised versions of MS-DOS that supported different hardware (which you’d think wouldn’t be necessary given that’s what the BIOS was for, but still), and the set of PC software that would run on their hardware varied wildly. This was the era where vendors even shipped systems based on the Intel 80186, an improved 8086 that was both faster than the 8086 at the same clock speed and was also available at higher clock speeds. Clone vendors saw an opportunity to ship hardware that outperformed the PC, and some of them went for it.

You’d think that IBM would have immediately jumped on this as well, but no - the 80186 integrated many components that were separate chips on 8086 (and 8088) based platforms, but crucially didn’t maintain compatibility. As long as everything went via the BIOS this shouldn’t have mattered, but there were many cases where going via the BIOS introduced performance overhead or simply didn’t offer the functionality that people wanted, and since this was the era of single-user operating systems with no memory protection, there was nothing stopping developers from just hitting the hardware directly to get what they wanted. Changing the underlying hardware would break them.

And that’s what happened. IBM was the biggest player, so people targeted IBM’s platform. When BIOS interfaces weren’t sufficient they hit the hardware directly - and even if they weren’t doing that, they’d end up depending on behavioural quirks of IBM’s BIOS implementation. The market for machines that were DOS-compatible but not PC-compatible mostly vanished, although there were notable exceptions - in Japan the PC-98 platform achieved significant success, largely as a result of the Japanese market being pretty distinct from the rest of the world at that point in time, but also because it actually handled Japanese at a point where the PC platform was basically restricted to ASCII or minor variants thereof.

So, things remained fairly stable for some time. Underlying hardware changed - the 80286 introduced the ability to access more than a megabyte of address space and would promptly have broken a bunch of things except IBM came up with an utterly terrifying hack that bit me back in 2009, and which ended up sufficiently codified into Intel design that it was one mechanism for breaking the original XBox security. The first 286 PC even introduced a new keyboard controller that supported better keyboards but which remained backwards compatible with the original PC to avoid breaking software. Even when IBM launched the PS/2, the first significant rearchitecture of the PC platform with a brand new expansion bus and associated patents to prevent people cloning it without paying off IBM, they made sure that all the hardware was backwards compatible. For decades, PC compatibility meant not only supporting the officially supported interfaces, it meant supporting the underlying hardware. This is what made it possible to ship install media that was expected to work on any PC, even if you’d need some additional media for hardware-specific drivers. It’s something that still distinguishes the PC market from the ARM desktop market. But it’s not as true as it used to be, and it’s interesting to think about whether it ever was as true as people thought.

Let’s take an extreme case. If I buy a modern laptop, can I run 1981-era DOS on it? The answer is clearly no. First, modern systems largely don’t implement the legacy BIOS. The entire abstraction layer that DOS relies on isn’t there, having been replaced with UEFI. When UEFI first appeared it generally shipped with a Compatibility Support Module (CSM), a layer that would translate BIOS interrupts into UEFI calls, allowing vendors to ship hardware with more modern firmware and drivers without having to duplicate them to support older operating systems[1]. Is this system PC compatible? By the strictest of definitions, no.

Ok. But the hardware is broadly the same, right? There’s projects like CSMWrap that allow a CSM to be implemented on top of stock UEFI, so everything that hits BIOS should work just fine. And well yes, assuming they implement the BIOS interfaces fully, anything using the BIOS interfaces will be happy. But what about stuff that doesn’t? Old software is going to expect that my Sound Blaster is going to be on a limited set of IRQs, and is going to assume that it’s going to be able to install its own interrupt handler and ACK those on the interrupt controller itself, and that’s really not going to work when you have a PCI card that’s been mapped onto some APIC vector. Also, if your keyboard is attached via USB or SPI then reading it via the CSM will work (because it’s calling into UEFI to get the actual data) but trying to read the keyboard controller directly won’t[2], so you’re still actually relying on the firmware to do the right thing - but it’s not, because the average person who wants to run DOS on a modern computer owns three fursuits and some knee length socks, and while you are important and vital and I love you all, you’re not enough to actually convince a transglobal megacorp to flip the bit in the chipset that makes all this old stuff work.
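
For a sense of what “hitting the hardware directly” meant in practice, here is a sketch of the sort of thing DOS-era software did, assuming a Borland-style compiler (setvect(), getvect(), outportb() and inportb() from dos.h): hook the Sound Blaster’s IRQ yourself, acknowledge it at the 8259 interrupt controller, and read scancodes straight from port 0x60. None of this goes through the BIOS, which is exactly why it stops working once the underlying hardware is only pretending to be there.

    /* Sketch of DOS-era "hit the hardware directly" code, assuming a
     * Borland-style compiler. IRQ 5 (a common Sound Blaster default) lives
     * at interrupt vector 0x0D, and the 8259 PIC expects an end-of-interrupt
     * write to port 0x20. */
    #include <dos.h>

    static void interrupt (*old_handler)(void);

    static void interrupt soundblaster_isr(void)
    {
        /* ... service the card at its I/O base (0x220 by default) ... */
        outportb(0x20, 0x20);          /* EOI straight to the 8259 */
    }

    void install(void)
    {
        old_handler = getvect(0x0D);   /* IRQ 5 -> vector 0x0D */
        setvect(0x0D, soundblaster_isr);
    }

    unsigned char read_scancode(void)
    {
        return inportb(0x60);          /* keyboard controller data port */
    }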

But imagine you are, or imagine you’re the sort of person who (like me) thinks writing their own firmware for their weird Chinese Thinkpad knockoff motherboard is a good and sensible use of their time - can you make this work fully? Haha no of course not. Yes, you can probably make sure that the PCI Sound Blaster that’s plugged into a Thunderbolt dock has interrupt routing to something that is absolutely no longer an 8259 but is pretending to be so you can just handle IRQ 5 yourself, and you can probably still even write some SMM code that will make your keyboard work, but what about the corner cases? What if you’re trying to run something built with IBM Pascal 1.0? There’s a risk that it’ll assume that trying to access an address just over 1MB will give it the data stored just above 0, and now it’ll break. It’d work fine on an actual PC, and it won’t work here, so are we PC compatible?
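
The arithmetic behind that assumption is worth spelling out. A real-mode address is segment * 16 + offset, which can name bytes just past 1MB; an 8088 only has 20 address lines, so those accesses silently wrap to the bottom of memory, while a 286 with the extra address line ungated (this is the A20 business, presumably the terrifying hack mentioned a few paragraphs up) really does reach above 1MB. A tiny sketch of the sum, in ordinary portable code:

    /* Real-mode addresses are segment*16 + offset, so FFFF:0010 names byte
     * 0x100000, just past 1MB. An 8088 only has 20 address lines, so the top
     * bit falls off and you get byte 0 again; a 286 with the A20 line enabled
     * really accesses 0x100000, and code relying on the wraparound breaks. */
    #include <cstdio>

    int main()
    {
        unsigned long segment = 0xFFFF, offset = 0x0010;
        unsigned long linear  = segment * 16 + offset;   /* 0x100000 */
        unsigned long on_8088 = linear & 0xFFFFF;        /* wraps to 0x00000 */

        std::printf("linear %#lx, after 20-bit wrap %#lx\n", linear, on_8088);
        return 0;
    }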

That’s a very interesting abstract question and I’m going to entirely ignore it. Let’s talk about PC graphics[3]. The original PC shipped with two different optional graphics cards - the Monochrome Display Adapter and the Color Graphics Adapter. If you wanted to run games you were doing it on CGA, because MDA had no mechanism to address individual pixels so you could only render full characters. So, even on the original PC, there was software that would run on some hardware but not on other hardware.

Things got worse from there. CGA was, to put it mildly, shit. Even IBM knew this - in 1984 they launched the PCjr, intended to make the PC platform more attractive to home users. As well as maybe the worst keyboard ever to be associated with the IBM brand, IBM added some new video modes that allowed displaying more than 4 colours on screen at once[4], and software that depended on that wouldn’t display correctly on an original PC. Of course, because the PCjr was a complete commercial failure, it wouldn’t display correctly on any future PCs either. This is going to become a theme.

There’s never been a properly specified PC graphics platform. BIOS support for advanced graphics modes[5] ended up specified by VESA rather than IBM, and even then getting good performance involved hitting hardware directly. It wasn’t until Microsoft specced DirectX that anything was broadly usable even if you limited yourself to Microsoft platforms, and this was an OS-level API rather than a hardware one. If you stick to BIOS interfaces then CGA-era code will work fine on graphics hardware produced up until the 20-teens, but if you try to hit CGA hardware registers directly then you’re going to have a bad time. This isn’t even a new thing - even if we restrict ourselves to the authentic IBM PC range (and ignore the PCjr), by the time we get to the Enhanced Graphics Adapter we’re not entirely CGA compatible. Is an IBM PC/AT with EGA PC compatible? You’d likely say “yes”, but there’s software written for the original PC that won’t work there.

And, well, let’s go even more basic. The original PC had a well defined CPU frequency and a well defined CPU that would take a well defined number of cycles to execute any given instruction. People could write software that depended on that. When CPUs got faster, some software broke. This resulted in systems with a Turbo Button - a button that would drop the clock rate to something approximating the original PC so stuff would stop breaking. It’s fine, we’d later end up with Windows crashing on fast machines because hardware details will absolutely bleed through.
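
One common shape of that breakage is the startup delay calibration: spin a fixed loop, see how many BIOS timer ticks went by, then divide to get a delay constant. A sketch of the pattern, again assuming a Borland-style DOS compiler; on a fast enough CPU the loop completes within a single tick, the elapsed count is zero, and the division faults - the same class of bug as the famous Windows-on-fast-machines crashes.

    /* Sketch of a DOS-era delay calibration, Borland-style compiler assumed.
     * The BIOS keeps an 18.2 Hz tick counter at 0040:006C; time a fixed
     * busy-loop against it and derive "iterations per tick" for later delays.
     * On a fast CPU the loop finishes inside one tick and the divide faults. */
    #include <dos.h>

    static unsigned long bios_ticks(void)
    {
        return *(volatile unsigned long far *) MK_FP(0x0040, 0x006C);
    }

    unsigned long iterations_per_tick(void)
    {
        const unsigned long iterations = 100000UL;
        unsigned long start = bios_ticks();
        volatile unsigned long i;
        for (i = 0; i < iterations; i++)
            ;                                        /* burn time */
        return iterations / (bios_ticks() - start);  /* elapsed 0 -> divide fault */
    }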

So, what’s a PC compatible? No modern PC will run the DOS that the original PC ran. If you try hard enough you can get it into a state where it’ll run most old software, as long as it doesn’t have assumptions about memory segmentation or your CPU or want to talk to your GPU directly. And even then it’ll potentially be unusable or crash because time is hard.

The truth is that there’s no way we can technically describe a PC Compatible now - or, honestly, ever. If you sent a modern PC back to 1981 the media would be amazed and also point out that it didn’t run Flight Simulator. “PC Compatible” is a socially defined construct, just like “Woman”. We can get hung up on the details or we can just chill.

  1. Windows 7 is entirely happy to boot on UEFI systems except that it relies on being able to use a BIOS call to set the video mode during boot, which has resulted in things like UEFISeven existing to make that work on modern systems that don’t provide BIOS compatibility.

  2. Back in the 90s and early 2000s operating systems didn’t necessarily have native drivers for USB input devices, so there was hardware support for trapping OS accesses to the keyboard controller and redirecting that into System Management Mode, where some software that was invisible to the OS would speak to the USB controller and then fake a response. Anyway, that’s how I made a laptop that could boot unmodified Mac OS X.

  3. (my name will not be Wolfwings Shadowflight)

  4. Yes, yes, ok, 8088 MPH demonstrates that if you really want to you can do better than that on CGA.

  5. And by advanced we’re still talking about the 90s, don’t get excited.

Christian Hergert: pgsql-glib

Planet GNOME - Fri, 02/01/2026 - 9:54pm

Much like the s3-glib library I put together recently, I had another itch to scratch. What would it look like to have a PostgreSQL driver that used futures and fibers with libdex? This was something I wondered about more than a decade ago when writing the libmongoc network driver for 10gen (later MongoDB).

pgsql-glib is such a library: one I made to wrap the venerable libpq PostgreSQL state-machine library. It runs operations on fibers and awaits FD I/O to make something that feels synchronous even though it is not.
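
For context, this is roughly the nonblocking libpq pattern any such wrapper ends up driving: send the query, then wait on the connection’s file descriptor until libpq says the result is complete. The sketch below is plain libpq and poll(), not the pgsql-glib API; the difference is that pgsql-glib awaits that FD on a libdex fiber instead, so the calling code reads as if it were synchronous.

    /* Plain nonblocking libpq, the state machine being wrapped: issue a
     * query, wait for the socket to become readable, feed libpq, and repeat
     * until the result is complete. A fiber-based wrapper awaits the FD
     * instead of poll()ing it. */
    #include <poll.h>
    #include <cstdio>
    #include <libpq-fe.h>

    int main()
    {
        PGconn *conn = PQconnectdb("dbname=postgres");
        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        PQsendQuery(conn, "SELECT now()");

        while (PQisBusy(conn)) {
            struct pollfd pfd;
            pfd.fd = PQsocket(conn);
            pfd.events = POLLIN;
            poll(&pfd, 1, -1);                 /* the wait a fiber would await */
            PQconsumeInput(conn);
        }

        PGresult *res = PQgetResult(conn);
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            std::printf("%s\n", PQgetvalue(res, 0, 0));
        PQclear(res);
        while ((res = PQgetResult(conn)) != NULL)  /* drain trailing results */
            PQclear(res);
        PQfinish(conn);
        return 0;
    }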

It also allows for something more “RAII-like” using g_autoptr(), which interacts very nicely with fibers.

API Documentation can be found here.

Ghana Tries To Regulate Online Prophecies

Slashdot - Fri, 02/01/2026 - 4:01pm
Ghana has decided to deal with the viral spread of prophetic content on social media by setting up an official reporting mechanism for sensitive predictions, a move triggered by the August 2025 helicopter crash that killed the country's defence and environment ministers along with six others. After the accident, TikTok clips circulated showing pastors who claimed to have foreseen the disaster before it happened. Elvis Ankrah, the presidential envoy for inter-faith and ecumenical relations, now asks prophets to submit their predictions for review. Charismatic preacher-prophets have been a fixture of Ghanaian public life since Pentecostalism arrived in the 1980s, but social media has amplified their reach and made their claims increasingly outlandish. Police have threatened to arrest prophets who cannot prove their predictions eventually came true. Some two-thirds of Ghanaians favor giving divine intervention a role in politics. Ankrah recently declared that most prophecies submitted to him are "total bunk."

The Atlanta Journal-Constitution Prints Final Newspaper, Shifts To All-Digital Format

Slashdot - Fri, 02/01/2026 - 3:00pm
CBS News: The Atlanta Journal-Constitution has printed its final newspaper, marking the end of a 157-year chapter in Georgia history and officially transitioning the longtime publication into a fully digital news outlet. The front-page story of the final print edition asks a fitting question: "What is the future of local media in Atlanta?" The historic last issue is also being sold for $8, a significant increase from the typical $2.00 price. Wednesday, Dec. 31, marks the last day The AJC will be delivered to driveways across metro Atlanta. Starting Jan. 1, 2026, the newspaper will exist exclusively online, a move its leadership says reflects how readers now consume news and ensures the organization's future. AJC President and Publisher Andrew Morse said the decision was not made lightly, especially given how deeply the paper is woven into daily life for generations of readers. The move makes Atlanta the only major U.S. city without a daily printed newspaper.

Felipe Borges: Looking for Mentors for Google Summer of Code 2026

Planet GNOME - Fri, 02/01/2026 - 1:39pm

It is once again that pre-GSoC time of year where I go around asking GNOME developers for project ideas they are willing to mentor during Google Summer of Code. GSoC is approaching fast, and we should aim to get a preliminary list of project ideas by the end of January.

Internships offer an opportunity for new contributors to join our community and help us build the software we love.

@Mentors, please submit new proposals in our Project Ideas GitLab repository.

Proposals will be reviewed by the GNOME Internship Committee and posted at https://gsoc.gnome.org/2026. If you have any questions, please don’t hesitate to contact us.

How Nokia Went From iPhone Victim To $1 Billion Nvidia Deal

Slashdot - Fri, 02/01/2026 - 1:00pm
Nokia, the Finnish company whose iconic ringtone was played an estimated 1.8 billion times daily at the height of its mobile phone dominance and whose 3310 "brick" sold 126 million units, has reinvented itself again -- this time as a key piece of AI infrastructure. In October, Nvidia announced a $1 billion investment in Nokia and a strategic partnership to incorporate AI into telecommunications networks. The company that was once worth $335 billion and controlled more than a quarter of the global handset market seemed destined for irrelevance after the iPhone's 2007 arrival. A last-ditch bet on Microsoft's Windows phone system in 2011 failed, and Nokia sold its devices division to Microsoft for $6.34 billion in 2014. Revenues had fallen from $44.27 billion in 2007 to $12.56 billion. Nokia rebuilt around its $2 billion acquisition of Siemens' networks stake in 2013, then added French network provider Alcatel-Lucent for $18.32 billion in 2015. Current CEO Justin Hotard, who took over in April, has pushed the company further into cloud services, data centers and optical networks. Nokia acquired optical specialist Infinera for $2.3 billion in February. The company's optical technology enables information to pass between data centers, and it produces routers for cloud-based services.

6.18.3: stable

Kernel Linux - Fri, 02/01/2026 - 12:57pm
Version: 6.18.3 (stable)
Released: 2026-01-02
Source: linux-6.18.3.tar.xz
PGP Signature: linux-6.18.3.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-6.18.3

Jussi Pakkanen: New year, new Pystd epoch, or evolving an API without breaking it

Planet GNOME - Fri, 02/01/2026 - 12:00pm

One of the core design points of Pystd has been that it maintains perfect API and ABI stability while also making it possible to improve the code in arbitrary ways. To see how that can be achieved, let's look at what creating a new "year epoch" looks like. It's quite simple. First you run this script.

Then you add the new files to Meson build targets (I was too lazy to implement that in the script). Done. For extra points there is also a new test that mixes types of pystd2025 and pystd2026 just to verify that things work.

As everything is inside a yearly namespace (and macros have the corresponding prefix) the symbols do not clash with each other.
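
A trivial sketch of why (the type name here is a stand-in, not necessarily a real Pystd class): the two epochs are entirely separate symbols, so both can be linked into the same binary, which is exactly what the mixing test exercises.

    // Stand-in types, not real Pystd classes - just showing that yearly
    // namespaces make the two epochs distinct symbols that can be mixed.
    namespace pystd2025 { struct String { /* frozen 2025 layout */ }; }
    namespace pystd2026 { struct String { /* free to change */ }; }

    int main()
    {
        pystd2025::String old_api;   // keeps working forever, bug fixes only
        pystd2026::String new_api;   // no backwards compatibility guarantees
        (void) old_api;
        (void) new_api;
        return 0;
    }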

At this point in time pystd2025 is frozen so old apps (of which there are, to be honest, approximately zero) keep working forever. It won't get any new features, only bug fixes. Pystd2026, on the other hand, is free to make any changes it pleases as it has zero backwards compatibility guarantees.

Isn't code duplication terribly slow and inefficient?

It can be. Rather than handwaving about it, let's measure. I used my desktop computer, which has an AMD Ryzen 7 3700X.

Compiling Pystd from scratch and running the test suite (with code for both 2025 and 2026) in both debug and optimized modes takes 3 seconds in total (1s for debug, 2s for optimized). This amounts to 2*13 compiler invocations, 2 static linker invocations and 2*5 dynamic linker invocations.

Compiling a helloworld with standard C++ using -O2 -g also takes 3 seconds. This amounts to a single compiler invocation.

ASUS Announces Price Hikes Starting January 5

Slashdot - Fri, 02/01/2026 - 9:30am
ASUS has informed its partners that prices on certain products will increase starting January 5, just days before the company is expected to unveil new hardware at CES. In a letter dated December 30 and obtained by Digitimes, the Taiwanese manufacturer pointed to rising costs for memory and storage components as the primary driver behind the adjustment. The company specifically called out DRAM, NAND, and SSD pricing pressure stemming from what it described as "structural volatility" in the global supply chain tied to AI-driven demand. ASUS also cited shifts in capacity allocation by upstream suppliers and higher investment costs for advanced manufacturing processes.

Australia's Biggest Pension Fund To Cut Global Stocks Allocation on AI Concerns

Slashdot - Fri, 02/01/2026 - 6:31am
Australia's largest pension fund is planning to reduce its allocation to global equities this year, amid signs that the AI boom in the US stock market could be running out of steam. Financial Times: John Normand, head of investment strategy at the A$400bn (US$264bn) AustralianSuper, told the Financial Times that not only did valuations of big US tech companies look high relative to history, but the leverage being used to fund AI investment was increasing "very rapidly," as was the pace of fundraising through mergers, venture capital and public listings. "I can see some forces lining up that we are looking for less public equity allocation at some point next year. It's the basic intersection of the maturing AI cycle with a shift towards Fed[eral Reserve] tightening in 2027," Normand said in an interview.

No Standard iPhone 18 Launch This Year, Reports Suggest

Slashdot - Fri, 02/01/2026 - 3:30am
MacRumors: Apple is not expected to release a standard iPhone 18 model this year, according to a growing number of reports that suggest the company is planning a significant change to its long-standing annual iPhone launch cycle. Despite the immense success of the iPhone 17 in 2025, the iPhone 18 is not expected to arrive until the spring of 2027, leaving the iPhone 17 in the lineup as the latest standard model for over 18 months. This would mark the first time Apple skips an entire calendar year without releasing a new generation of its flagship non-Pro iPhone.

IDC Estimates Apple Shipped Just 45,000 Vision Pros Last Quarter

Slashdot - Fri, 02/01/2026 - 1:02am
Apple's Chinese manufacturing partner Luxshare halted production of the Vision Pro headset at the start of 2025, according to market research firm IDC, after the device shipped 390,000 units during its 2024 launch year. The $3,499 headset has also seen its digital advertising budget cut by more than 95% year to date in the US and UK, according to market intelligence group Sensor Tower. IDC expects Apple to ship just 45,000 new units in the fourth quarter of 2025. Apple launched an upgraded M5 version in October featuring a more powerful chip, extended battery life, and a redesigned headband. The company sells the device directly in 13 countries and did not expand availability in 2025.

Some of Your Cells Are Not Genetically Yours

Slashdot - Thu, 01/01/2026 - 11:30pm
Every human body contains a small population of cells that are not genetically its own -- cells that crossed the placenta during pregnancy and that persist for decades after birth. These "microchimeric" cells, named after the lion-goat-serpent hybrid of Greek mythology, have been found in every organ studied so far, though they are exceedingly rare: one such cell exists for every 10,000 to 1 million of a person's own cells. The cells were first noticed in the late 1800s when pathologist Georg Schmorl described placenta-like "giant cells" in the lungs of people who had died from eclampsia. In 1969, researchers detected Y-chromosome-containing white blood cells in people who would later give birth to boys. For more than two decades, scientists presumed these cells were temporary. That changed in 1993 when geneticist Diana Bianchi found Y-chromosome cells in women who had given birth to sons up to 27 years earlier. The cells appear to have regenerative properties, transforming into blood vessels or skin cells to promote wound healing. They also challenge a central assumption of immunology -- that the immune system classifies cells as either "self" or "non-self" and rejects foreign material. Microchimeric cells should trigger rejection but do not. Higher-than-typical concentrations have been found in people with autoimmune conditions including diabetes, lupus, and scleroderma.

'The Cult of Costco'

Slashdot - Thu, 01/01/2026 - 10:10pm
Costco's consistency -- from its $1.50 hot dog and drink combo to its functional shopping carts and satisfied employees -- has produced what The Atlantic calls a "cultlike loyalty" among members at more than 600 locations across the U.S. Its annual membership costs $65. The model traces back to Fedco, a nonprofit wholesale collective for federal employees founded in Los Angeles in the 1940s. Costco's private label Kirkland Signature has become one of the world's largest consumer packaged goods brands while maintaining deliberately understated branding. The company relies on word-of-mouth marketing from satisfied members rather than traditional advertising. Atlantic staff writer Jake Lundberg, who shops at the Granger, Indiana location, describes the stores as spaces of "cooperation, courtesy, and grown-ups mostly acting like grown-ups." Shoppers follow unwritten rules: move along, don't block the way, step aside to check your phone. Checkout lines form orderly queues. The exceptions come near sample stations and before major holidays, when spatial awareness and common courtesy break down.

Iran Offers To Sell Advanced Weapons Systems For Crypto

Slashdot - Thu, 01/01/2026 - 9:11pm
Iran is offering to sell advanced weapons systems including ballistic missiles, drones and warships to foreign governments for cryptocurrency, in a bid to use digital assets to bypass western financial controls. From a report: Iran's Ministry of Defence Export Center, known as Mindex, says it is prepared to negotiate military contracts that allow payment in digital currencies, as well as through barter arrangements and Iranian rials, according to promotional documents and payment terms analysed by the Financial Times. The offer, introduced during the past year, appears to mark one of the first known instances in which a nation state has publicly indicated its willingness to accept cryptocurrency as payment for the export of strategic military hardware. Mindex, a state-run body responsible for Iran's overseas defence sales, says it has client relationships with 35 countries and advertises a catalogue of weapons that includes Emad ballistic missiles, Shahed drones, Shahid Soleimani-class warships and short-range air defence systems.

'IPv6 Just Turned 30 and Still Hasn't Taken Over the World, But Don't Call It a Failure'

Slashdot - Thu, 01/01/2026 - 7:40pm
Three decades after RFC 1883 promised to future-proof the internet by expanding the available pool of IP addresses from around 4.3 billion to over 340 undecillion, IPv6 has yet to achieve the dominance its creators envisioned. Data from Google, APNIC and Cloudflare analyzed by The Register shows less than half of all internet users rely on IPv6 today. "IPv6 was an extremely conservative protocol that changed as little as possible," APNIC chief scientist Geoff Huston told The Register. "It was a classic case of mis-design by committee." The protocol's lack of backward compatibility with IPv4 meant users had to choose one or run both in parallel. Network address translation, which allows thousands of devices to share a single public IPv4 address, gave operators an easier path forward. "These days the Domain Name Service (DNS) is the service selector, not the IP address," Huston added. "The entire security framework of today's Internet is name based and the world of authentication and channel encryption is based on service names, not IP addresses." "So folk use IPv6 these days based on cost: If the cost of obtaining more IPv4 addresses to fuel bigger NATs is too high, then they deploy IPv6. Not because it's better, but if they are confident that they can work around IPv6's weaknesses then in a largely name based world there is no real issue in using one addressing protocol or another as the transport underlay." But calling IPv6 a failure misses the point. "IPv4's continued viability is largely because IPv6 absorbed that growth pressure elsewhere -- particularly in mobile, broadband, and cloud environments," said John Curran, president and CEO of the American Registry for Internet Numbers. "In that sense, IPv6 succeeded where it was needed most." Huawei has sought 2.56 decillion IPv6 addresses and Starlink appears to have acquired 150 sextillion.

DHS Says REAL ID, Which DHS Certifies, Is Too Unreliable To Confirm US Citizenship

Slashdot - Thu, 01/01/2026 - 7:11pm
An anonymous reader shares a report: Only the government could spend 20 years creating a national ID that no one wanted and that apparently doesn't even work as a national ID. But that's what the federal government has accomplished with the REAL ID, which the Department of Homeland Security (DHS) now considers unreliable, even though getting one requires providing proof of citizenship or lawful status in the country. In a December 11 court filing [PDF], Philip Lavoie, the acting assistant special agent in charge of DHS' Mobile, Alabama, office, stated that, "REAL ID can be unreliable to confirm U.S. citizenship." Lavoie's declaration was in response to a federal civil rights lawsuit filed in October by the Institute for Justice, a public-interest law firm, on behalf of Leo Garcia Venegas, an Alabama construction worker. Venegas was detained twice in May and June during immigration raids on private construction sites, despite being a U.S. citizen. In both instances, Venegas' lawsuit says, masked federal immigration officers entered the private sites without a warrant and began detaining workers based solely on their apparent ethnicity. And in both instances officers allegedly retrieved Venegas' Alabama-issued REAL ID from his pocket but claimed it could be fake. Venegas was kept handcuffed and detained for an hour the first time and "between 20 and 30 minutes" the second time before officers ran his information and released him.

Public Domain Day 2026 Brings Betty Boop, Nancy Drew and 'I Got Rhythm' Into the Commons

Slashdot - Thu, 01/01/2026 - 6:12pm
As the calendar flips to January 1, 2026, thousands of copyrighted works from 1930 are entering the US public domain alongside sound recordings from 1925, making them free to copy, share, remix and build upon without permission or licensing fees. The literary haul includes William Faulkner's As I Lay Dying, Dashiell Hammett's full novel The Maltese Falcon, Agatha Christie's first Miss Marple mystery The Murder at the Vicarage, and the first four Nancy Drew books. The popular illustrated version of The Little Engine That Could also joins the commons. Betty Boop makes her public domain debut through her first appearance in the Fleischer Studios cartoon Dizzy Dishes. The original iteration of Disney's Pluto -- then named Rover -- enters as well. Nine additional Mickey Mouse cartoons and ten Silly Symphonies from 1930 are now available for reuse. Films entering the public domain include the Academy Award-winning All Quiet on the Western Front, the Marx Brothers' Animal Crackers, and John Wayne's first leading role in The Big Trail. Musical compositions going public include George and Ira Gershwin's "I Got Rhythm," Hoagy Carmichael's "Georgia on My Mind," and "Dream a Little Dream of Me." Sound recordings from 1925 now available include Bessie Smith and Louis Armstrong's "The St. Louis Blues" and Marian Anderson's "Nobody Knows the Trouble I've Seen." Piet Mondrian's Composition with Red, Blue, and Yellow rounds out the artistic entries.

European Space Agency Acknowledges Another Breach as Criminals Claim 200 GB Data Haul

Slashdot - Thu, 01/01/2026 - 5:01pm
The European Space Agency has acknowledged yet another security incident after a cybercriminal posted an offer on BreachForums the day after Christmas claiming to have stolen over 20GB of data including source code, confidential documents, API tokens and credentials. The attacker claims they gained access to ESA-linked external servers on December 18 and remained connected for about a week, during which they allegedly exfiltrated private Bitbucket repositories, CI/CD pipelines, Terraform files and hardcoded credentials. ESA said that the breach may have affected only "a very small number of external servers" used for unclassified engineering and scientific collaboration, and that it has initiated a forensic security analysis.

The Man Taking Over the Large Hadron Collider

Slashdot - Thu, 01/01/2026 - 4:00pm
Mark Thomson, a professor of experimental particle physics at the University of Cambridge, takes over as CERN's director general this week, and one of his first major decisions during his five-year tenure will be shutting down the Large Hadron Collider for an extended upgrade. The shutdown starts in June to make way for the high-luminosity LHC -- a major overhaul involving powerful new superconducting magnets that will squeeze the collider's proton beams and increase their brightness. The upgrade will raise the collision rate tenfold and strengthen the detectors to better capture subtle signs of new physics. The machine won't restart until Thomson's term is nearly over. Thomson is far from disconsolate about the downtime. "The machine is running brilliantly and we're recording huge amounts of data," he told The Guardian. "There's going to be plenty to analyse over the period." Beyond the upgrade, Thomson must shepherd CERN's plans for the Future Circular Collider, a proposed 91km machine more than three times the size of the current collider. Member states vote on the project in 2028; the first phase carries an estimated price tag of 15 billion Swiss francs (nearly $19 billion).
