Feed aggregator

Congressman Introduces Legislation To Criminalize Insider Trading On Prediction Markets

Slashdot - 12 hours 33 min ago
Ritchie Torres has introduced a bill to ban government officials from using insider information to trade on political prediction markets like Polymarket. The bill was prompted by reports that traders on Polymarket made large profits betting on Nicolas Maduro's removal, raising suspicions that some wagers were placed using material non-public information. "While such insider trading in capital markets is already illegal and often prosecuted by the Justice Department and Securities and Exchange Commission, online prediction markets are far less regulated," notes Axios. From the report: Rep. Ritchie Torres' (D-N.Y.) three-page bill, a copy of which was obtained by Axios, is called the Public Integrity in Financial Prediction Markets Act of 2026. It would ban federal elected officials, political appointees and bureaucrats from making insider trades on prediction sites such as Polymarket. Specifically, the bill prohibits such government officials from trading based on information that is not publicly available and that "a reasonable investor would consider important in making an investment decision." [...] It's not clear if House Speaker Mike Johnson (R-La.) would put Torres' bill to a vote in the House or if President Trump would sign it. "We're looking at the specifics of the bill, but we already ban the activity it cites and are in support of means to prevent this type of activity," said Elisabeth Diana, a spokesperson for the prediction website Kalshi. Diana added that the "activity from the past few days" did not occur on their platform.

Read more of this story at Slashdot.

Daiki Ueno: GNOME.Asia Summit 2025

Planet GNOME - 13 hours 56 min ago

Last month, I attended the GNOME.Asia Summit 2025 held at the IIJ office in Tokyo. This was my fourth time attending the summit, following previous events in Taipei (2010), Beijing (2015), and Delhi (2016).

As I live near Tokyo, this year’s conference was a unique experience for me: an opportunity to welcome the international GNOME community to my home city rather than traveling abroad. Reconnecting with the community after several years provided a helpful perspective on how our ecosystem has evolved.

Addressing the post-quantum transition

During the summit, I delivered a keynote address on post-quantum cryptography (PQC) and the desktop. The core of my presentation focused on the “Harvest Now, Decrypt Later” (HNDL) class of threats, where encrypted data is collected today with the intent of decrypting it once quantum computing matures. The talk then covered the history and current status of PQC support in crypto libraries including OpenSSL, GnuTLS, and NSS, and concluded with recommended next steps for users and developers.

It is important to recognize that classical public key cryptography, which is vulnerable to quantum attacks, plays an integral role on the modern desktop: from secure web browsing to the underlying verification of system updates. Given that major government timelines (such as NIST and the NSA’s CNSA 2.0) are pushing for a full migration to quantum-resistant algorithms between 2027 and 2035, the GNU/Linux desktop should prioritize “crypto-agility” to remain secure in the coming decade.
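
To make the notion of “crypto-agility” concrete, here is a minimal C sketch assuming a hypothetical algorithm registry: the application names its key-exchange algorithm through configuration, so a classical primitive can be swapped for a post-quantum one without touching call sites. The function and algorithm names here are placeholders, not a real library API.

#include <stdio.h>
#include <string.h>

/* Hypothetical key-generation entry points; a real deployment would
 * call into a crypto library such as GnuTLS or OpenSSL instead. */
static int x25519_keygen(void)   { puts("classical X25519");        return 0; }
static int mlkem768_keygen(void) { puts("post-quantum ML-KEM-768"); return 0; }

static const struct {
    const char *name;
    int (*keygen)(void);
} registry[] = {
    { "x25519",   x25519_keygen },
    { "mlkem768", mlkem768_keygen },
};

int main(int argc, char **argv)
{
    /* The algorithm comes from configuration (here: argv), so moving to
     * a quantum-resistant default is a policy change, not a code change. */
    const char *want = argc > 1 ? argv[1] : "mlkem768";
    for (size_t i = 0; i < sizeof registry / sizeof registry[0]; i++)
        if (strcmp(registry[i].name, want) == 0)
            return registry[i].keygen();
    fprintf(stderr, "unknown algorithm: %s\n", want);
    return 1;
}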

From discussion to implementation: Crypto Usage Analyzer

One of the tools I discussed during my talk was crypto-auditing, a project designed to help developers identify and update legacy cryptography usage. At the time of the summit, the tool was limited to a command-line interface, which I noted was a barrier to wider adoption.

Inspired by the energy of the summit, I spent part of the recent holiday break developing a GUI for crypto-auditing. By utilizing AI-assisted development tools, I was able to rapidly prototype an application, which I call “Crypto Usage Analyzer”, that makes the auditing data more accessible.

Conclusion

The summit in Tokyo had a relatively small audience, which resulted in a cozy and professional atmosphere. This smaller scale proved beneficial for technical exchange, as it allowed for more focused discussions on desktop-related topics than is often possible at larger conferences.

Attending GNOME.Asia 2025 was a reminder of the steady work required to keep the desktop secure and relevant. I appreciate the efforts of the organizing committee in bringing the summit to Tokyo, and I look forward to continuing my work on making security libraries and tools more accessible for our users and developers.

An AI-Generated NWS Map Invented Fake Towns In Idaho

Slashdot - 15 hours 33 min ago
The National Weather Service pulled an AI-generated forecast graphic after it hallucinated fake town names in Idaho. "The blunder -- not the first of its kind to be posted by the NWS in the past year -- comes as the agency experiments with a wide range of AI uses, from advanced forecasting to graphic design," reports the Washington Post. "Experts worry that without properly trained officials, mistakes could erode trust in the agency and the technology." From the report: At first glance, there was nothing out of the ordinary about Saturday's wind forecast for Camas Prairie, Idaho. "Hold onto your hats!" said a social media post from the local weather office in Missoula, Montana. "Orangeotild" had a 10 percent chance of high winds, while just south, "Whata Bod" would be spared larger gusts. The problem? Neither of those places exist. Nor do a handful of the other spots marked on the National Weather Service's forecast graphic, riddled with spelling and geographical errors that the agency confirmed were linked to the use of generative AI. NWS said AI is not commonly used for public-facing content, nor is its use prohibited. The agency said it is exploring ways to employ AI to inform the public and acknowledged mistakes have been made. "Recently, a local office used AI to create a base map to display forecast information, however the map inadvertently displayed illegible city names," said NWS spokeswoman Erica Grow Cei. "The map was quickly corrected and updated social media posts were distributed." A post with the inaccurate map was deleted Monday, the same day The Washington Post contacted officials with questions about the image. Cei added that "NWS is exploring strategic ways to continue optimizing our service delivery for Americans, including the implementation of AI where it makes sense. NWS will continue to carefully evaluate results in cases where AI is implemented to ensure accuracy and efficiency, and will discontinue use in scenarios where AI is not effective." A Nov. 25 tweet out of the Rapid City, South Dakota, office also had misspelled locations and the Google Gemini logo in its forecast. NWS did not confirm whether the Rapid City image was made with generative AI.

Read more of this story at Slashdot.

next-20260107: linux-next

Linux Kernel - 16 hours 1 min ago
Version: next-20260107 (linux-next) Released: 2026-01-07

Creator of Claude Code Reveals His Workflow

Slashdot - 19 hours 3 min ago
Boris Cherny, the creator of Claude Code at Anthropic, revealed a deceptively simple workflow that uses parallel AI agents, verification loops, and shared memory to let one developer operate with the output of an entire engineering team. "I run 5 Claudes in parallel in my terminal," Cherny wrote. "I number my tabs 1-5, and use system notifications to know when a Claude needs input." He also runs "5-10 Claudes on claude.ai" in his browser, using a "teleport" command to hand off work between the web and his local machine. This validates the "do more with less" strategy Anthropic's President Daniela Amodei recently pitched during an interview with CNBC. VentureBeat reports: For the past week, the engineering community has been dissecting a thread on X from Boris Cherny, the creator and head of Claude Code at Anthropic. What began as a casual sharing of his personal terminal setup has spiraled into a viral manifesto on the future of software development, with industry insiders calling it a watershed moment for the startup. "If you're not reading the Claude Code best practices straight from its creator, you're behind as a programmer," wrote Jeff Tang, a prominent voice in the developer community. Kyle McNease, another industry observer, went further, declaring that with Cherny's "game-changing updates," Anthropic is "on fire," potentially facing "their ChatGPT moment." The excitement stems from a paradox: Cherny's workflow is surprisingly simple, yet it allows a single human to operate with the output capacity of a small engineering department. As one user noted on X after implementing Cherny's setup, the experience "feels more like Starcraft" than traditional coding -- a shift from typing syntax to commanding autonomous units.

Read more of this story at Slashdot.

Red Team Blue Team Insights for Linux Admins: Key Security Roles Explained

LinuxSecurity.com - 19 hours 19 min ago
If you manage Linux systems long enough, you start to notice that most security conversations are not really about attackers or tools. They are about pressure. Uptime targets that do not move. Patch windows that keep shrinking. Audits that ask for proof you did the right thing six months ago. Incidents that blur together because the alerts never quite stop.

Discord Files Confidentially For IPO

Slashdot - 20 hours 31 min ago
According to Bloomberg, Discord has confidentially filed for a U.S. IPO. Reuters reports: The U.S. IPO market regained momentum in 2025 after nearly three years of sluggish activity, but hopes for a stronger rebound were tempered by tariff-driven volatility, a prolonged government shutdown and a late-year selloff in artificial intelligence stocks. Discord, which was founded in 2015, offers voice, video and text chatting capabilities aimed at gamers and streamers. According to a statement in December, the platform has more than 200 million monthly users.

Read more of this story at Slashdot.

NYC Wegmans Is Storing Biometric Data On Shoppers' Eyes, Voices and Faces

Slashdot - 21 hours 8 min ago
schwit1 shares a report from Gothamist: Wegmans in New York City has begun collecting biometric data from anyone who enters its supermarkets, according to new signage posted at the chain's Manhattan and Brooklyn locations earlier this month. Anyone entering the store could have data on their face, eyes and voice collected and stored by the Rochester-headquartered supermarket chain. The information is used to "protect the safety and security of our patrons and employees," according to the signage. The new scanning policy is an expansion of a 2024 pilot. The chain had initially said that the scanning system was only for a small group of employees and promised to delete any biometric data it collected from shoppers during the pilot rollout. The new notice makes no such assurances. Wegmans representatives did not reply to questions about how the data would be stored, why it changed its policy or if it would share the data with law enforcement.

Read more of this story at Slashdot.

Utah Allows AI To Renew Medical Prescriptions

Slashdot - 21 hours 48 min ago
sinij shares a news release from the Utah Department of Commerce: The state of Utah, through the Utah Department of Commerce's Office of Artificial Intelligence Policy, today announced a first-of-its-kind partnership with Doctronic, the AI-native health platform, to give patients with chronic conditions a faster, automated way to renew medications. This agreement marks the first state-approved program in the country that allows an AI system to legally participate in medical decision-making for prescription renewals, an emerging model that could reshape access to care and ultimately improve care outcomes. Politico provides additional context in its reporting: In data shared with Utah regulators, Doctronic compared its AI system with human clinicians across 500 urgent care cases. The results showed the AI's treatment plan matched the physicians' 99.2 percent of the time, according to the company. "The AI is actually better than doctors at doing this," said Dr. Adam Oskowitz, Doctronic co-founder and an associate professor of surgery at the University of California San Francisco. "When you go see a doctor, it's not going to do all the checks that the AI is doing." Oskowitz said the AI is designed to err on the side of safety, automatically escalating cases to a physician if there's any uncertainty. Human doctors will also review the first 250 prescriptions issued in each medication class to validate the AI's performance. Once that threshold is met, subsequent renewals in that class will be handled autonomously. The company has also secured a one-of-a-kind malpractice insurance policy covering an AI system, which means the system is insured and held to the same level of responsibility as a doctor would be. Doctronic also runs a nationwide telehealth practice that directs patients to doctors after an AI consultation. In Utah, patients who use the system will visit a webpage that verifies they are physically in the state. Then the system will pull the patient's prescription history and offer a list of medications eligible for renewal. The AI walks the patient through the same clinical questions a physician would ask to determine whether a refill is appropriate. If the system clears the renewal, the prescription is sent directly to a pharmacy. The program is limited to 190 commonly prescribed medications. Some medications -- including pain management and ADHD drugs as well as injectables -- are excluded for safety reasons.

Read more of this story at Slashdot.

Nvidia Details New AI Chips and Autonomous Car Project With Mercedes

Slashdot - 22 hours 31 min ago
An anonymous reader quotes a report from the New York Times: On Monday, [Jensen Huang, the chief executive of the chip-making giant Nvidia] said the company would begin shipping a new A.I. chip later this year, one that can do more computing with less power than previous generations of chips could. Known as the Vera Rubin, the chip has been in development for three years and is designed to fulfill A.I. requests more quickly and cheaply than its predecessors. Mr. Huang, who spoke during CES, an annual tech conference in Las Vegas, also discussed Nvidia's surprisingly ambitious work around autonomous vehicles. This year, Mercedes-Benz will begin shipping cars equipped with Nvidia self-driving technology comparable to Tesla's Autopilot. Nvidia's new Rubin chips are being manufactured and will be shipped to customers, including Microsoft and Amazon, in the second half of the year, fulfilling a promise Mr. Huang made last March when he first described the chip at the company's annual conference in San Jose, Calif. Companies will be able to train A.I. models with one-quarter as many Rubin chips as they would need of its predecessor, the Blackwell. It can provide information for chatbots and other A.I. products for one-tenth of the cost. They will also be able to install the chips in data centers more quickly, courtesy of redesigned supercomputers that feature fewer cables. If the new chips live up to their promise, they could allow companies to develop A.I. at a lower cost and at least begin to respond to the soaring electrical demands of data centers being built around the world. [...] On Monday, he said Nvidia had developed new A.I. software that would allow customers like Uber and Lucid to develop cars that navigate roads autonomously. It will share the system, called Alpamayo, to spread its influence and the appeal of Nvidia's chip technology. Since 2020, Nvidia has been working with Mercedes to develop a class of self-driving cars. They will begin shipping an early example of their collaboration when Mercedes CLA cars become available in the first half of the year in Europe and the United States. Mr. Huang said the company started working on self-driving technology eight years ago. It has more than a thousand people working on the project. "Our vision is that someday, every single car, every single truck, will be autonomous," Mr. Huang said. The Rubin chips are named for Vera Rubin, a pioneering astronomer who helped find powerful evidence of dark matter.

Read more of this story at Slashdot.

Google Will Now Only Release Android Source Code Twice a Year

Slashdot - 23 hours 13 min ago
Google will begin releasing Android Open Source Project (AOSP) source code only twice a year starting in 2026. "In the past, Google would release the source code for every quarterly Android release, of which there are four each year," notes Android Authority. From the report: Google told Android Authority that, effective 2026, Google will publish new source code to AOSP in Q2 and Q4. The reason is to ensure platform stability for the Android ecosystem and better align with Android's trunk-stable development model. Developers navigating to source.android.com today will see a banner confirming the change that reads as follows: "Effective in 2026, to align with our trunk-stable development model and ensure platform stability for the ecosystem, we will publish source code to AOSP in Q2 and Q4. For building and contributing to AOSP, we recommend utilizing android-latest-release instead of aosp-main. The aosp-latest-release manifest branch will always reference the most recent release pushed to AOSP. For more information, see Changes to AOSP." A spokesperson for Google offered some additional context on this decision, stating that it helps simplify development, eliminates the complexity of managing multiple code branches, and allows them to deliver more stable and secure code to Android platform developers. The spokesperson also reiterated that Google's commitment to AOSP is unchanged and that this new release schedule helps the company build a more robust and secure foundation for the Android ecosystem. Finally, Google told us that its process for security patch releases will not change and that the company will keep publishing security patches each month on a dedicated security-only branch for relevant OS releases just as it does today.

Read more of this story at Slashdot.

Vietnam Bans Unskippable Ads

Slashdot - Tue, 06/01/2026 - 11:40pm
Vietnam will begin enforcing new online advertising rules in February 2026 that ban forced video ads longer than five seconds and require that users be able to close ads with a single tap. "Furthermore, platforms must provide clear icons and instructions for users to report advertisements that violate the law, and allow them to opt out, turn off, or stop viewing inappropriate ads," reports a local news outlet (translated to English). "These reports must be received and processed promptly, and the results communicated to users as required." From the report: In cases where the entity posting the infringing advertisement cannot be identified or where specialized laws do not have specific regulations, the Ministry of Culture, Sports and Tourism is the focal agency to receive notifications and send requests to block or remove the advertisement to organizations and businesses providing online advertising services in Vietnam. Advertisers, advertising service providers, and advertising transmission and distribution units are responsible for blocking and removing infringing advertisements within 24 hours of receiving a request from the competent authority. For advertisements that infringe on national security, the blocking and removal must be carried out immediately, no later than 24 hours. In case of non-compliance, the Ministry of Culture, Sports and Tourism, in coordination with the Ministry of Public Security, will apply technical measures to block infringing advertisements and services and handle the matter according to the law. Telecommunications companies and Internet service providers must also implement technical measures to block access to infringing advertisements within 24 hours of receiving a request.

Read more of this story at Slashdot.

Intel Is Making Its Own Handheld Gaming PC Chips At CES 2026

Slashdot - Tue, 06/01/2026 - 11:02pm
An anonymous reader quotes a report from IGN: Last year, Intel had the best iGPU on the market. This year, it has beaten that record by over 70% with Panther Lake, and it's a huge win for handhelds. "We've overdelivered" is how Intel CEO Lip-Bu Tan characterized the Panther Lake launch during the company's CES 2026 keynote address, and that really does seem to be the case. But the real highlight of the keynote speech wasn't the engineering behind Panther Lake, but rather the iGPU and the "handheld ecosystem" Intel is building to capitalize on the iGPU's performance gains. Formerly known as the 12 Xe-core variant, the new Intel Arc B390 iGPU offers up to 77% faster gaming performance over Lunar Lake's Arc 140V graphics chip. Intel's VP and General Manager of PC Products, Dan Rogers, detailed the Arc B390's performance gains and announced a "whole ecosystem" of gaming handhelds. That ecosystem includes partnerships with MSI, Acer, Microsoft, CPD, Foxconn, and Pegatron. So we'll finally see more Intel handhelds hit the market. [...] Since Intel's Core Ultra 300 Panther Lake chip is built on Intel's proprietary 18A foundry process node, it can be cut in a variety of different die slices. According to sources at Intel close to the matter, the company is planning a hardware-specific variant or variants of the Panther Lake CPU die. Currently branded as "Intel Core G3," these processors will be custom-built for handhelds. That means Intel can spec the chips to offer better performance on the GPU where you want it, with potential for even better performance than the current Arc B390 expectations.

Read more of this story at Slashdot.

Study Casts Doubt on Potential For Life on Jupiter's Moon Europa

Slashdot - Tue, 06/01/2026 - 10:22pm
Jupiter's moon Europa is on the short list of places in our solar system seen as promising in the search for life beyond Earth, with a large subsurface ocean thought to be hidden under an outer shell of ice. But new research is raising questions about whether Europa in fact has what it takes for habitability. Reuters: The study assessed the potential on Europa's ocean bottom for tectonic and volcanic activity, which on Earth facilitate interactions between rock and seawater that generate essential nutrients and chemical energy for life. After modeling Europa's conditions, the researchers concluded that its rocky seafloor is likely mechanically too strong to allow such activity. The researchers considered factors including Europa's size, the makeup of its rocky core and the gravitational forces exerted by Jupiter, the solar system's largest planet. Their evaluation that there probably is little to no active faulting at Europa's seafloor suggests this moon is barren of life. "On Earth, tectonic activity such as fracturing and faulting exposes fresh rock to the environment where chemical reactions, principally involving water, generate chemicals such as methane that microbial life can use," said planetary scientist Paul Byrne of Washington University in St. Louis, lead author of the study published on Tuesday in the journal Nature Communications. "Without such activity, those reactions are harder to establish and sustain, making Europa's seafloor a challenging environment for life," Byrne added.

Read more of this story at Slashdot.

Nvidia's New G-Sync Pulsar Monitors Target Motion Blur at the Human Retina Level

Slashdot - Tue, 06/01/2026 - 9:42pm
Nvidia's G-Sync Pulsar technology, first announced nearly two years ago as a solution to display motion blur caused by old images persisting on the viewer's retina, is finally arriving in consumer monitors this week. The first four Pulsar-equipped displays -- from Acer, AOC, Asus and MSI -- hit select retailers on Wednesday, all sharing the same core specs: 27-inch IPS panels running at 1440p resolution and up to 360 Hz refresh rates. Nvidia claims the technology delivers the "effective motion clarity of a theoretical 1,000 Hz monitor." The system uses a rolling scan scheme that pulses the backlight for one-quarter of a frame just before pixels are overwritten, giving them time to fully transition between colors before illumination. The approach also reduces how long old pixels persist on the viewer's retina. Previous "Ultra Low Motion Blur" features on other monitors worked only at fixed refresh rates, but Pulsar syncs its pulses to G-Sync's variable refresh rate. Early reviews are mixed. The Monitors Unboxed YouTube channel called it "clearly the best solution currently available" for limiting motion blur, while PC Magazine described the improvements as "minor in the grand scheme of things" and potentially hard for casual viewers to notice.
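
For a rough sense of where such equivalence claims come from, the following C snippet runs the naive persistence arithmetic for the quarter-frame pulse described above. Real motion clarity also depends on pixel response times and backlight duty-cycle details, so treat this as ballpark math rather than Nvidia's published methodology.

#include <stdio.h>

int main(void)
{
    double refresh_hz = 360.0;
    double frame_ms = 1000.0 / refresh_hz;  /* ~2.78 ms per frame */
    double pulse_ms = frame_ms / 4.0;       /* backlight lit for ~0.69 ms */
    /* A sample-and-hold display shows each frame for the whole frame
     * time, so matching this persistence would take roughly: */
    double equivalent_hz = 1000.0 / pulse_ms;
    printf("frame %.2f ms, pulse %.2f ms, ~%.0f Hz hold-type equivalent\n",
           frame_ms, pulse_ms, equivalent_hz);
    return 0;
}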

Read more of this story at Slashdot.

Lego Unveils Smart Bricks, Its 'Most Significant Evolution' in 50 years

Slashdot - Tue, 06/01/2026 - 9:01pm
The Lego Group today unveiled the Smart Brick, a tiny computer that fits entirely inside a classic 2x4 brick and which the company is calling the most significant evolution in its building system since the introduction of the minifigure in 1978. The Smart Brick contains a custom ASIC smaller than a single Lego stud and includes light and sound output, light sensors, inertial sensors for detecting movement and tilt, and a microphone that functions as a virtual button rather than a recording device. The bricks detect NFC-equipped smart tags embedded in new tiles and minifigures, and they form a Bluetooth mesh network to sense each other's position and orientation. They charge wirelessly on a pad that can handle multiple bricks simultaneously. The first Smart Brick sets ship March 1 and are all Star Wars themed, ranging from a $70 Darth Vader's TIE Fighter at 473 pieces to a $160 Darth Vader's Throne Room Duel at 962 pieces. Lego confirmed there is no AI or camera in the product. The company quietly piloted the technology in a 2024 Lego City set and says Smart Play will continue to expand through new updates and launches.

Read more of this story at Slashdot.

Elite Colleges Are Back at the Top of the List For Company Recruiters

Slashdot - Tue, 06/01/2026 - 8:21pm
The "talent is everywhere" approach that U.S. employers adopted during the white-hot pandemic job market is quietly giving way to something much older and more familiar: recruiting almost exclusively from a small set of elite and nearby universities. A 2025 survey of more than 150 companies by Veris Insights found that 26% were exclusively recruiting from a shortlist of schools, up from 17% in 2022. Diversity as a priority for school recruiting selection dropped to 31% of employers surveyed in 2025, down from nearly 60% in 2022. GE Appliances once sent recruiters on one or two passes through 45 to 50 schools each year; now the company attends four or five events per semester at just 15 universities, including Purdue and Auburn. McKinsey, the consulting firm that expanded recruitment well beyond the Ivy League after George Floyd's murder, recently removed language from its career page that said "We hire people, not degrees." The firm now hosts in-person events at a shortlist of about 20 core schools, including Vanderbilt and Notre Dame. Most companies now recruit at up to 30 American colleges out of about 4,000, said William Chichester III, who has directed entry-level recruiting at Target and Peloton. For students outside elite schools or those located near company headquarters? "God help you," he said.

Read more of this story at Slashdot.

HarperCollins Will Use AI To Translate Harlequin Romance Novels

Slashdot - Tue, 06/01/2026 - 7:42pm
Book publisher HarperCollins said it will start translating romance novels under its famous Harlequin label in France using AI, reducing or eliminating the pay for the team of human contract translators who previously did this work. 404Media: Publishers Weekly broke the news in English after French outlets reported on the story in December. According to a joint statement from the French Association of Literary Translators (ATLF) and En Chair et en Os (In Flesh and Bone) -- an anti-AI activist group of French translators -- HarperCollins France has been contacting its translators to tell them they're being replaced with machines in 2026. The ATLF/En Chair et en Os statement explained that HarperCollins France would use a third-party company called Fluent Planet to run Harlequin romance novels through a machine translation system. The books would then be checked for errors and finalized by a team of freelancers. The ATLF and En Chair et en Os called on writers, book workers, and readers to refuse this machine-translated future. They begged people to "reaffirm our unconditional commitment to human texts, created by human beings, in dignified working conditions."

Read more of this story at Slashdot.

Sebastian Wick: Improving the Flatpak Graphics Drivers Situation

Planet GNOME - Tue, 06/01/2026 - 12:30am

Graphics drivers in Flatpak have been a bit of a pain point. The drivers have to be built against the runtime to work in the runtime. This usually isn’t much of an issue but it breaks down in two cases:

  1. If the driver depends on a specific kernel version
  2. If the runtime is end-of-life (EOL)

The first issue is what the proprietary Nvidia drivers exhibit. A specific user space driver requires a specific kernel driver. For drivers in Mesa, this isn’t an issue. In the medium term, we might get lucky here and the Mesa-provided Nova driver might become competitive with the proprietary driver. Not all hardware will be supported though, and some people might need CUDA or other proprietary features, so this problem likely won’t go away completely.

Currently we have runtime extensions for every Nvidia driver version which gets matched up with the kernel version, but this isn’t great.

The second issue is even worse, because we don’t even have a somewhat working solution to it. A runtime which is EOL doesn’t receive updates, and neither does the runtime extension providing GL and Vulkan drivers. New GPU hardware just won’t be supported and the software rendering fallback will kick in.

How we deal with this is rather primitive: keep updating apps, don’t depend on EOL runtimes. This is in general a good strategy. An EOL runtime also doesn’t receive security updates, so users should not use them. Users will be users though, and if they have a goal which involves running an app which uses an EOL runtime, that’s what they will do. From a software archival perspective, it is also desirable to keep things working, even if they should be strongly discouraged.

In all those cases, the user most likely still has a working graphics driver, just not in the flatpak runtime, but on the host system. So one naturally asks oneself: why not just use that driver?

That’s a load-bearing “just”. Let’s explore our options.

Exploration

Attempt #1: Bind mount the drivers into the runtime.

Cool, we got the driver’s shared libraries and ICDs from the host in the runtime. If we run a program, it might work. It might also not work. The shared libraries have dependencies and because we are in a completely different runtime than the host, they most likely will be mismatched. Yikes.

Attempt #2: Bind mount the dependencies.

We got all the dependencies of the driver in the runtime. They are satisfied and the driver will work. But your app most likely won’t. It has dependencies that we just changed under its nose. Yikes.

Attempt #3: Linker magic.

Up to this point everything is pretty obvious, but it turns out that linkers are actually quite capable and support what’s called linker namespaces. In a single process one can load two completely different sets of shared libraries which will not interfere with each other. We can bind mount the host shared libraries into the runtime, and dlmopen the driver into its own namespace. This is exactly what libcapsule does. It does have some issues though, one being that the libc can’t be loaded into multiple linker namespaces because it manages global resources. We can use the runtime’s libc, but the host driver might require a newer libc. We can use the host libc, but now we contaminate the app’s linker namespace with a dependency from the host.
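
For the curious, here is a minimal glibc demo of a linker namespace; dlmopen with LM_ID_NEWLM is the primitive that libcapsule builds on. Loading libm twice is just a stand-in for loading a driver’s library stack.

/* Build with: cc demo.c -o demo (add -ldl on older glibc). */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Default namespace... */
    void *def = dlopen("libm.so.6", RTLD_NOW | RTLD_LOCAL);
    /* ...and a fresh namespace with its own independent copy,
     * dependencies included. */
    void *ns = dlmopen(LM_ID_NEWLM, "libm.so.6", RTLD_NOW | RTLD_LOCAL);
    if (!def || !ns) {
        fprintf(stderr, "load failed: %s\n", dlerror());
        return 1;
    }
    /* The same symbol name resolves to two distinct copies of the code. */
    void *cos_def = dlsym(def, "cos");
    void *cos_ns  = dlsym(ns, "cos");
    printf("default namespace cos: %p, new namespace cos: %p\n",
           cos_def, cos_ns);
    dlclose(ns);
    dlclose(def);
    return 0;
}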

Attempt #4: Virtualization.

All of the previous attempts try to load the host shared objects into the app. Besides the issues mentioned above, this has a few more fundamental issues:

  1. The Flatpak runtimes support i386 apps; those would require an i386 driver on the host, but modern systems only ship amd64 code.
  2. We might want to support emulation of other architectures later.
  3. It leaks an awful lot of the host system into the sandbox.
  4. It breaks the strict separation of the host system and the runtime.

If we avoid getting code from the host into the runtime, all of those issues just go away, and GPU virtualization via Virtio-GPU with Venus allows us to do exactly that.

The VM uses the Venus driver to record and serialize the Vulkan commands and sends them to the hypervisor via the virtio-gpu kernel driver. The host uses virglrenderer to deserialize and execute the commands.

This makes sense for VMs, but we don’t have a VM, and we might not have the virtio-gpu kernel module, and we might not be able to load it without privileges. Not great.

It turns out however that the developers of virglrenderer also don’t want to have to run a VM to run and test their project and thus added vtest, which uses a Unix socket to transport the commands from the Mesa Venus driver to virglrenderer.

It also turns out that I’m not the first one who noticed this, and there is some glue code which allows Podman to make use of virgl.

You can most likely test this approach right now on your system by running two commands:

rendernodes=(/dev/dri/render*)
virgl_test_server --venus --use-gles --socket-path /tmp/flatpak-virgl.sock --rendernode "${rendernodes[0]}" &
flatpak run --nodevice=dri --filesystem=/tmp/flatpak-virgl.sock --env=VN_DEBUG=vtest --env=VTEST_SOCKET_NAME=/tmp/flatpak-virgl.sock org.gnome.clocks

If we integrate this well, the existing driver selection will ensure that this virtualization path is only used if there isn’t a suitable driver in the runtime.

Implementation

Obviously the commands above are a hack. Flatpak should automatically do all of this, based on the availability of the dri permission.

We actually already start a host program and stop it when the app exits: xdg-dbus-proxy. It’s a bit involved because we have to wait for the program (in our case virgl_test_server) to provide the service before starting the app. We also have to shut it down when the app exits, but flatpak is not a supervisor. You won’t see it in the output of ps because it just execs bubblewrap (bwrap) and ceases to exist before the app has even started. So instead we have to use the kernel’s automatic cleanup of kernel resources to signal to virgl_test_server that it is time to shut down.

The way this is usually done is via a so-called sync fd. If you have a pipe and poll the file descriptor of one end, it becomes readable as soon as the other end writes to it or is closed. Bubblewrap supports this kind of sync fd: you can hand in one end of a pipe, and it ensures the kernel will close the fd once the app exits.

One small problem: only one of those sync fds is supported in bwrap at the moment, but we can add support for multiple in Bubblewrap and Flatpak.

For waiting for the service to start, we can reuse the same pipe, but write to the other end in the service, and wait for the fd to become readable in Flatpak, before exec’ing bwrap with the same fd. Also not too much code.
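
To make that concrete, here is a self-contained C sketch of the pipe handshake, with the child standing in for virgl_test_server: it writes one byte to signal readiness, then polls its end of the pipe, which reports an error once the kernel has closed every copy of the other end. This illustrates the pattern; it is not Flatpak’s actual code.

#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fds[2]; /* fds[0]: read end (launcher), fds[1]: write end (service) */
    if (pipe(fds) != 0)
        return 1;

    if (fork() == 0) {
        /* Service side: signal readiness, then wait for the peer to die. */
        close(fds[0]);
        write(fds[1], "R", 1);                        /* "I'm up" */
        struct pollfd pfd = { .fd = fds[1], .events = 0 };
        poll(&pfd, 1, -1);  /* wakes with POLLERR once fds[0] is gone */
        fprintf(stderr, "peer exited, shutting down\n");
        _exit(0);
    }

    /* Launcher side: block until the service has signalled readiness. */
    close(fds[1]);
    char byte;
    read(fds[0], &byte, 1);
    /* Here Flatpak would exec bwrap, handing over fds[0] as the sync fd;
     * the kernel closes it automatically when the sandboxed app exits
     * (in this demo, when main() returns). */
    printf("service ready, launching app\n");
    return 0;
}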

Finally, virglrenderer needs to learn how to use a sync fd. Also pretty trivial. There is an older MR which adds something similar for the Podman hook, but it misses the code which allows Flatpak to wait for the service to come up, and it never got merged.

Overall, this is pretty straightforward.

Conclusion

The virtualization approach should be a robust fallback for all the cases where we don’t get a working GPU driver in the Flatpak runtime, but there are a bunch of issues and unknowns as well.

It is not entirely clear how forwards and backwards compatible vtest is, whether it is even supposed to be used in production, and whether it provides a strong security boundary.

None of that is a fundamental issue though and we could work out those issues.

It’s also not optimal to start virgl_test_server for every Flatpak app instance.

Given that we’re trying to move away from blanket dri access to a more granular and dynamic access to GPU hardware via a new daemon, it might make sense to use this new daemon to start the virgl_test_server on demand and only for allowed devices.

Andy Wingo: pre-tenuring in v8

Planet GNOME - Mon, 05/01/2026 - 4:38pm

Hey hey happy new year, friends! Today I was going over some V8 code that touched pre-tenuring: allocating objects directly in the old space instead of the nursery. I knew the theory here but I had never looked into the mechanism. Today’s post is a quick overview of how it’s done.

allocation sites

In a JavaScript program, there are a number of source code locations that allocate. Statistically speaking, any given allocation is likely to be short-lived, so generational garbage collection partitions freshly-allocated objects into their own space. In that way, when the system runs out of memory, it can preferentially reclaim memory from the nursery space instead of groveling over the whole heap.

But you know what they say: there are lies, damn lies, and statistics. Some programs are outliers, allocating objects in such a way that they don’t die young, or at least not young enough. In those cases, allocating into the nursery is just overhead, because minor collection won’t reclaim much memory (because too many objects survive), and because of useless copying as the object is scavenged within the nursery or promoted into the old generation. It would have been better to eagerly tenure such allocations into the old generation in the first place. (The more I think about it, the funnier pre-tenuring is as a term; what if some PhD programs could pre-allocate their graduates into named chairs? Is going straight to industry the equivalent of dying young? Does collaborating on a paper with a full professor imply a write barrier? But I digress.)

Among the set of allocation sites in a program, a subset should pre-tenure their objects. How can we know which ones? There is a literature of static techniques, but this is JavaScript, so the answer in general is dynamic: we should observe how many objects survive collection, organized by allocation site, then optimize to assume that the future will be like the past, falling back to a general path if the assumptions fail to hold.

my runtime doth object

The high-level overview of how V8 implements pre-tenuring is based on per-program-point AllocationSite objects, and per-allocation AllocationMemento objects that point back to their corresponding AllocationSite. Initially, V8 doesn’t know what program points would profit from pre-tenuring, and instead allocates everything in the nursery. Here’s a quick picture:

[Figure: a linear allocation buffer containing objects allocated with allocation mementos]

Here we show that there are two allocation sites, Site1 and Site2. V8 is currently allocating into a linear allocation buffer (LAB) in the nursery, and has allocated three objects. After each of these objects is an AllocationMemento; in this example, M1 and M3 are AllocationMemento objects that point to Site1 and M2 points to Site2. When V8 allocates an object, it increments the “created” counter on the corresponding AllocationSite (if available; it’s possible an allocation comes from C++ or something where we don’t have an AllocationSite).

When the free space in the LAB is too small for an allocation, V8 gets another LAB, or collects if there are no more LABs in the nursery. When V8 does a minor collection, as the scavenger visits objects, it will look to see if the object is followed by an AllocationMemento. If so, it dereferences the memento to find the AllocationSite, then increments its “found” counter, and adds the AllocationSite to a set. Once an AllocationSite has had 100 allocations, it is enqueued for a pre-tenuring decision; sites with 85% survival get marked for pre-tenuring.

If an allocation site is marked as needing pre-tenuring, the code in which it is embedded will get de-optimized, and then the next time it is optimized, the code generator arranges to allocate into the old generation instead of the default nursery.

Finally, if a major collection collects more than 90% of the old generation, V8 resets all pre-tenured allocation sites, under the assumption that pre-tenuring was actually premature.
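
Putting the numbers from the last few paragraphs together, here is a back-of-the-envelope C sketch of that bookkeeping: a created counter bumped at allocation time, a found counter bumped by the scavenger, a pre-tenuring decision after 100 allocations with 85% survival, and a global reset when a major collection reclaims more than 90% of the old generation. The names are illustrative, not V8’s actual internals.

#include <stdbool.h>
#include <stdio.h>

struct AllocationSite {
    unsigned created; /* bumped when an object + memento is allocated */
    unsigned found;   /* bumped when the scavenger sees the memento */
    bool pretenure;
};

static void maybe_decide(struct AllocationSite *site)
{
    /* Enough samples and a high enough survival rate: pre-tenure
     * (in V8 this triggers deoptimization and reoptimization). */
    if (site->created >= 100 && site->found * 100 >= site->created * 85)
        site->pretenure = true;
}

static void on_major_gc(struct AllocationSite *sites, int n, double collected)
{
    /* If the old generation was mostly garbage, pre-tenuring was
     * premature: reset all decisions. */
    if (collected > 0.90)
        for (int i = 0; i < n; i++)
            sites[i].pretenure = false;
}

int main(void)
{
    struct AllocationSite site = { .created = 120, .found = 108, .pretenure = false };
    maybe_decide(&site);
    printf("pretenure: %s\n", site.pretenure ? "yes" : "no");   /* yes */
    on_major_gc(&site, 1, 0.95);
    printf("after reset: %s\n", site.pretenure ? "yes" : "no"); /* no */
    return 0;
}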

tenure for me but not for thee

What kinds of allocation sites are eligible for pre-tenuring? Sometimes it depends on object kind; wasm memories, for example, are almost always long-lived, so they are always pre-tenured. Sometimes it depends on who is doing the allocation; allocations from the bootstrapper, literals allocated by the parser, and many allocations from C++ go straight to the old generation. And sometimes the compiler has enough information to determine that pre-tenuring might be a good idea, as when it generates a store of a fresh object to a field in a known-old object.

But otherwise I thought that the whole AllocationSite mechanism would apply generally, to any object creation. It turns out, nope: it seems to only apply to object literals, array literals, and new Array. Weird, right? I guess it makes sense in that these are the ways to create objects that also create the field values at creation-time, allowing the whole block to be allocated to the same space. If instead you make a pre-tenured object and then initialize it via a sequence of stores, this would likely create old-to-new edges, preventing the new objects from dying young while incurring the penalty of copying and write barriers. Still, I think there is probably some juice to squeeze here for pre-tenuring of class-style allocations, at least in the optimizing compiler or in short inline caches.

I suspect this state of affairs is somewhat historical, as the AllocationSite mechanism seems to have originated with typed array storage strategies and V8’s “boilerplate” object literal allocators; both of these predate per-AllocationSite pre-tenuring decisions.

fin

Well that’s adaptive pre-tenuring in V8! I find the “just stick a memento after the object” approach pleasantly simple, and if you are only bumping creation counters from baseline compilation tiers, it likely amortizes out to a win. But does the restricted application to literals point to a fundamental constraint, or is it just accident? If you have any insight, let me know :) Until then, happy hacking!
