
Feed aggregator

Hackers Hijack npm Packages With 2 Billion Weekly Downloads in Supply Chain Attack

Slashdot - Mon, 08/09/2025 - 9:25 PM
An anonymous reader shares a report: In what is being called the largest supply chain attack in history, attackers have injected malware into NPM packages with over 2.6 billion weekly downloads after compromising a maintainer's account in a phishing attack. The package maintainer whose accounts were hijacked in this supply-chain attack confirmed the incident earlier today, stating that he was aware of the compromise and adding that the phishing email came from support [at] npmjs [dot] help, a domain that hosts a website impersonating the legitimate npmjs.com domain. In the emails, the attackers threatened that the targeted maintainers' accounts would be locked on September 10th, 2025, as a scare tactic to get them to click on the link redirecting them to the phishing sites.

Read more of this story at Slashdot.

What Are Container Escape Vulnerabilities?

LinuxSecurity.com - Mon, 08/09/2025 - 1:24 PM
For Linux admins and security professionals, containers are often the backbone of modern infrastructure. They're lightweight, portable, and isolate applications in predictable, well-defined environments. But here's the flipside: containers aren't impermeable fortresses. They have their flaws, and one of the most critical is the container escape vulnerability: a type of weakness that lets an attacker break out of the container's restricted environment and gain unauthorized access to the host system.
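To make the risk concrete, here is a minimal sketch of the best-known misconfiguration-based escape: a container started with --privileged can see the host's block devices and simply mount the host filesystem. It assumes Docker, an Alpine image, and a host root filesystem on /dev/sda1:

$ docker run --rm -it --privileged alpine sh
/ # mount /dev/sda1 /mnt    # the host's root filesystem is now readable and writable from inside the container

Kernel and runtime bugs can produce the same outcome without any misconfiguration, which is why both dropping unneeded privileges and patching the host matter.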

6.17-rc5: mainline

Kernel Linux - Sun, 07/09/2025 - 11:22 PM
Version: 6.17-rc5 (mainline)  Released: 2025-09-07  Source: linux-6.17-rc5.tar.gz  Patch: full (incremental)

Apple's Vision Pro Gaining Traction in Some Niches of Business

Slashdot - Sat, 06/09/2025 - 3:25 AM
Apple's $3,500 Vision Pro is finding real traction in niche enterprise use, like CAE's pilot training, Lowe's kitchen design visualization, and Dassault's engineering workflows. "Over the last few weeks, I had an opportunity to try out some of those applications, and they are game-changers, albeit within their specific domains," writes Steven Rosenbush via the Wall Street Journal. "Companies should pay attention now to what's going on in these niche markets. Based on what I saw, these systems are having an impact on the way users integrate content development and engineering, which has implications for the way companies approach roles, teams and workflow." From the report: Home-improvement retailer Lowe's has deployed the Vision Pro at five locations in the San Francisco Bay Area and five locations in the Austin, Texas area. Customers use them to visualize how design ideas will look in their actual kitchen. The company plans to scale the effort to 100 of approximately 1,700 stores by the end of the year, eventually ramping up to 400 locations in markets with sufficient scale to justify the investment, Chief Digital and Information Officer Seemantini Godbole told me. [...] Dassault Systemes, the French industrial software company, has long created virtual worlds for commercial use. Scientists, manufacturing experts, product managers and others use its platforms to design and engineer molecules for drug development, as well as data centers, factories, aircraft and electric cars. The 3DExperience platform was launched more than a decade ago, pulling together a range of Dassault brands including 3DExcite on the premise that "everything is going to become an experience," 3DExcite Chief Executive Tom Acland said. In February, Dassault Systemes and Apple announced a collaboration to produce the 3DLive App, which went live February 7. Users include Hyundai, Virgin Galactic and Deutsche Aircraft, he said. [...] Canadian aircraft training company CAE is using Vision Pro to provide pilot training that complements full-motion flight simulator experience required for certification and recurrent checks, according to Chief Technology and Product Officer Emmanuel Levitte. The company has employed mixed reality and immersive training for at least 10 years. The Vision Pro has unlocked new capabilities, he said. The display is as sharp and readable as the controls in a real cockpit, which Levitte found not to be the case with other devices. The haptic feedback and audio quality also contribute to a more realistic training experience, he said. Remote crew members will also be able to be co-located virtually, enabling training that was previously only possible when individuals were physically in the same cockpit, according to Levitte.

Read more of this story at Slashdot.

America's First Sodium-Ion Battery Manufacturer Ceases Operations

Slashdot - Sat, 06/09/2025 - 2:45 AM
Grady Martin writes: Natron Energy has announced the immediate cessation of all operations, including its manufacturing plant in Holland, Michigan, and its plans to build a $1.4 billion "gigafactory" in North Carolina. A company representative cited "efforts to raise sufficient new funding [being] unsuccessful" as the rationale for the decision. When previously covered by Slashdot, comments on the merits of sodium-ion included the ability to use aluminum in lieu of heavier, more expensive copper anodes; a charge rate ten times that of lithium-ion; and Earth's abundance of sodium -- though at least one anonymous coward predicted the cancellation of the project.

Read more of this story at Slashdot.

Canada Delaying Plan To Force Automakers To Hit EV Sales Targets

Slashdot - Sat, 06/09/2025 - 2:02 AM
Longtime Slashdot reader sinij shares a report from CBC News: Prime Minister Mark Carney is delaying a plan to force automakers to hit minimum sales levels for electric vehicles. The move is part of a series of measures the government announced Friday to help the sectors most affected by U.S. President Donald Trump's tariffs. The EV mandate will be paused as the government conducts a 60-day review of the policy, and will be waived for 2026 models. Sources told CBC News that the review will look at the entire mandate and next steps. "We have an auto sector which, because of the massive change in U.S. policy, is under extreme pressure. We recognize that," Carney said at a news conference in Mississauga, Ont. "They've got enough on their plate right now. So we're taking that off." The government is using the review as part of a broader look at all of the government's climate measures, he added. [...] Brian Kingston, president of the Canadian Vehicle Manufacturers' Association, called it "an important first step." "The EV mandate imposes unsustainable costs on auto manufacturers, putting at risk Canadian jobs and investment in this critical sector of the economy," he said in a statement. "A full repeal of the regulation is the most effective way to provide immediate relief to the industry and keep it competitive."

Read more of this story at Slashdot.

Trump To Impose Tariffs On Semiconductor Imports From Firms Not Moving Production To US

Slashdot - Sat, 06/09/2025 - 1:20 AM
An anonymous reader quotes a report from Reuters: President Donald Trump said on Thursday his administration would impose tariffs on semiconductor imports from companies not shifting production to the U.S., speaking ahead of a dinner with major technology company CEOs. "Yeah, I have discussed it with the people here. Chips and semiconductors -- we will be putting tariffs on companies that aren't coming in. We will be putting a tariff very shortly," Trump said without giving an exact time or rate. "We will be putting a very substantial tariff, not that high, but fairly substantial tariff with the understanding that if they come into the country, if they are coming in, building, planning to come in, there will not be a tariff," Trump told reporters. "If they are not coming in, there is a tariff," Trump said in his comments on semiconductors. "Like, I would say (Apple CEO) Tim Cook would be in pretty good shape," he added, as Cook sat across the table. Further reading: Trump Basks in Tech Leaders' Spending Vows at White House Dinner

Read more of this story at Slashdot.

Firefox Ending 32-bit Linux Support Next Year

Slashdot - Sat, 06/09/2025 - 12:40 AM
Mozilla announced today that they will end 32-bit Linux support for Firefox in 2026, with version 144 being the last release and ESR 140 as the fallback option. Phoronix reports: Firefox has continued providing 32-bit Linux binaries even with most other web browsers and operating systems going all-in on x86_64 support. But given that 32-bit Linux support is waning among distributions and the vast majority of distributions aren't even shipping i686 install images anymore, they will be removing 32-bit Linux builds in 2026.

Read more of this story at Slashdot.

Boffins Build Automated Android Bug Hunting System

Slashdot - Sat, 06/09/2025 - 12:00 AM
Researchers from Nanjing University and the University of Sydney developed an AI-powered bug-hunting agent that mimics human vulnerability discovery, validating flaws with proof-of-concept exploits. The Register reports: Ziyue Wang (Nanjing) and Liyi Zhou (Sydney) have expanded upon prior work dubbed A1, an AI agent that can develop exploits for cryptocurrency smart contracts, with A2, an AI agent capable of vulnerability discovery and validation in Android apps. They describe A2 in a preprint paper titled "Agentic Discovery and Validation of Android App Vulnerabilities." The authors claim that the A2 system achieves 78.3 percent coverage on the Ghera benchmark, surpassing static analyzers like APKHunt (30.0 percent). And they say that, when they used A2 on 169 production APKs, they found "104 true-positive zero-day vulnerabilities," 57 of which were self-validated via automatically generated proof-of-concept (PoC) exploits. One of these included a medium-severity flaw in an Android app with over 10 million installs.

Read more of this story at Slashdot.

Updated Debian 12: 12.12 released

Debian.org - Sat, 06/09/2025 - 12:00 AM
The Debian project is pleased to announce the twelfth update of its oldstable distribution Debian 12 (codename bookworm). This point release mainly adds corrections for security issues, along with a few adjustments for serious problems. Security advisories have already been published separately and are referenced where available.
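For systems already running bookworm, no reinstall is needed; pointing apt at an up-to-date mirror and upgrading picks up the point release. A minimal sketch:

$ sudo apt update
$ sudo apt full-upgrade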

Updated Debian 13: 13.1 released

Debian.org - Sat, 06/09/2025 - 12:00 AM
The Debian project is pleased to announce the first update of its stable distribution Debian 13 (codename trixie). This point release mainly adds corrections for security issues, along with a few adjustments for serious problems. Security advisories have already been published separately and are referenced where available.

Anthropic Agrees To Pay Record $1.5 Billion To Settle Authors' AI Lawsuit

Slashdot - Fri, 05/09/2025 - 11:20 PM
An anonymous reader quotes a report from Deadline: Anthropic has agreed to pay at least $1.5 billion into a class action fund as part of a settlement of litigation brought by a group of book authors. The sum, disclosed in a court filing on Friday, "will be the largest publicly reported copyright recovery in history, larger than any other copyright class action settlement or any individual copyright case litigated to final judgment," the attorneys for the authors wrote. The settlement also includes a provision that releases Anthropic only for its conduct up to August 25, meaning that new claims could be filed over future conduct, according to the filing. Anthropic also has agreed to destroy the datasets used in its models. The settlement figure amounts to about $3,000 per class work, according to the filing. You can read the terms of Anthropic's copyright settlement here (PDF). A hearing in the case is scheduled for Sept. 8.

Read more of this story at Slashdot.

Thibault Martin: TIL that You can spot base64 encoded JSON, certificates, and private keys

Planet GNOME - Tue, 05/08/2025 - 3:00 PM

I was working on my homelab and examined a file that was supposed to contain encrypted content that I could safely commit on a Github repository. The file looked like this

{ "serial": 13, "lineage": "24d431ee-3da9-4407-b649-b0d2c0ca2d67", "meta": { "key_provider.pbkdf2.password_key": "eyJzYWx0IjoianpHUlpMVkFOZUZKcEpSeGo4UlhnNDhGZk9vQisrR0YvSG9ubTZzSUY5WT0iLCJpdGVyYXRpb25zIjo2MDAwMDAsImhhc2hfZnVuY3Rpb24iOiJzaGE1MTIiLCJrZXlfbGVuZ3RoIjozMn0=" }, "encrypted_data": "ONXZsJhz37eJA[...]", "encryption_version": "v0" }

Hm, key provider? Password key? In an encrypted file? That doesn't sound right. The problem is that this file is generated by taking a password, deriving a key from it, and encrypting the content with that key. I don't know what the derived key could look like, but it could be that long indecipherable string.

I asked a colleague to have a look and he said "Oh that? It looks like a base64 encoded JSON. Give it a go to see what's inside."

I was incredulous but gave it a go, and it worked!!

$ echo "eyJzYW[...]" | base64 -d {"salt":"jzGRZLVANeFJpJRxj8RXg48FfOoB++GF/Honm6sIF9Y=","iterations":600000,"hash_function":"sha512","key_length":32}

I couldn't believe my colleague had decoded the base64 string on the fly, so I asked. "What gave it away? Was it the trailing equal signs at the end for padding? But how did you know it was base64 encoded JSON and not just a base64 string?"

He replied,

Whenever you see ey, that's {" and then if it's followed by a letter, you'll get J followed by a letter.

I did a few tests in my terminal, and he was right! You can spot base64-encoded JSON with the naked eye, and you don't need to decode it on the fly!

$ echo "{" | base64 ewo= $ echo "{\"" | base64 eyIK $ echo "{\"s" | base64 eyJzCg== $ echo "{\"a" | base64 eyJhCg== $ echo "{\"word\"" | base64 eyJ3b3JkIgo=

But it gets even better! As tyzbit reported on the fediverse, you can even spot base64 encoded certificates and private keys! They all start with LS, which is reminiscent of the LS in "TLS certificate."

$ echo -en "-----BEGIN CERTIFICATE-----" | base64
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t

Errata

As pointed out by gnabgib and athorax on Hacker News, this actually detects the leading dashes of the PEM format, commonly used for certificates; a YAML file that starts with --- will yield the same result:

$ echo "---\n" | base64 LS0tXG4K

This is not a silver bullet!

Thanks Davide and Denis for showing me this simple but pretty useful trick, and thanks tyzbit for completing it with certs and private keys!

Matthew Garrett: Cordoomceps - replacing an Amiga's brain with Doom

Planet GNOME - Tue, 05/08/2025 - 2:30 AM
There's a lovely device called a pistorm, an adapter board that glues a Raspberry Pi GPIO bus to a Motorola 68000 bus. The intended use case is that you plug it into a 68000 device and then run an emulator that reads instructions from hardware (ROM or RAM) and emulates them. You're still limited by the ~7MHz bus that the hardware is running at, but you can run the instructions as fast as you want.

These days you're supposed to run a custom built OS on the Pi that just does 68000 emulation, but initially it ran Linux on the Pi and a userland 68000 emulator process. And, well, that got me thinking. The emulator takes 68000 instructions, emulates them, and then talks to the hardware to implement the effects of those instructions. What if we, well, just don't? What if we just run all of our code in Linux on an ARM core and then talk to the Amiga hardware?

We're going to ignore x86 here, because it's weird - but most hardware that wants software to be able to communicate with it maps itself into the same address space that RAM is in. You can write to a byte of RAM, or you can write to a piece of hardware that's effectively pretending to be RAM[1]. The Amiga wasn't unusual in this respect in the 80s, and to talk to the graphics hardware you speak to a special address range that gets sent to that hardware instead of to RAM. The CPU knows nothing about this. It just indicates it wants to write to an address, and then sends the data.

So, if we are the CPU, we can just indicate that we want to write to an address, and provide the data. And those addresses can correspond to the hardware. So, we can write to the RAM that belongs to the Amiga, and we can write to the hardware that isn't RAM but pretends to be. And that means we can run whatever we want on the Pi and then access Amiga hardware.

And, obviously, the thing we want to run is Doom, because that's what everyone runs in fucked up hardware situations.

Doom was Amiga kryptonite. Its entire graphical model was based on memory directly representing the contents of your display, and being able to modify that by just moving pixels around. This worked because at the time VGA displays supported having a memory layout where each pixel on your screen was represented by a byte in memory containing an 8 bit value that corresponded to a lookup table containing the RGB value for that pixel.

The Amiga was, well, not good at this. Back in the 80s, when the Amiga hardware was developed, memory was expensive. Dedicating that much RAM to the video hardware was unthinkable - the Amiga 1000 initially shipped with only 256K of RAM, and you could fill all of that with a sufficiently colourful picture. So instead of having the idea of each pixel being associated with a specific area of memory, the Amiga used bitmaps. A bitmap is an area of memory that represents the screen, but only represents one bit of the colour depth. If you have a black and white display, you only need one bitmap. If you want to display four colours, you need two. More colours, more bitmaps. And each bitmap is stored in an independent area of RAM. You never use more memory than you need to display the number of colours you want to.

But that means that each bitplane contains packed information - every byte of data in a bitplane contains the bit value for 8 different pixels, because each bitplane contains one bit of information per pixel. To update one pixel on screen, you need to read from every bitmap, update one bit, and write it back, and that's a lot of additional memory accesses. Doom, but on the Amiga, was slow not just because the CPU was slow, but because there was a lot of manipulation of data to turn it into the format the Amiga wanted and then push that over a fairly slow memory bus to have it displayed.

The CDTV was an aesthetically pleasing piece of hardware that absolutely sucked. It was an Amiga 500 in a hi-fi box with a caddy-loading CD drive, and it ran software that was just awful. There's no path to remediation here. No compelling apps were ever released. It's a terrible device. I love it. I bought one in 1996 because a local computer store had one and I pointed out that the company selling it had gone bankrupt some years earlier and literally nobody in my farming town was ever going to have any interest in buying a CD player that made a whirring noise when you turned it on because it had a fan and eventually they just sold it to me for not much money, and ever since then I wanted to have a CD player that ran Linux and well spoiler 30 years later I'm nearly there. That CDTV is going to be our test subject. We're going to try to get Doom running on it without executing any 68000 instructions.

We're facing two main problems here. The first is that all Amigas have a firmware ROM called Kickstart that runs at powerup. No matter how little you care about using any OS functionality, you can't start running your code until Kickstart has run. This means even documentation describing bare metal Amiga programming assumes that the hardware is already in the state that Kickstart left it in. This will become important later. The second is that we're going to need to actually write the code to use the Amiga hardware.

First, let's talk about Amiga graphics. We've already covered bitmaps, but for anyone used to modern hardware that's not the weirdest thing about what we're dealing with here. The CDTV's chipset supports a maximum of 64 colours in a mode called "Extra Half-Brite", or EHB, where you have 32 colours arbitrarily chosen from a palette and then 32 more colours that are identical but with half the intensity. For 64 colours we need 6 bitplanes, each of which can be located arbitrarily in the region of RAM accessible to the chipset ("chip RAM", distinguished from "fast ram" that's only accessible to the CPU). We tell the chipset where our bitplanes are and it displays them. Or, well, it does for a frame - after that the registers that pointed at our bitplanes no longer do, because when the hardware was DMAing through the bitplanes to display them it was incrementing those registers to point at the next address to DMA from. Which means that every frame we need to set those registers back.

Making sure you have code that's called every frame just to make your graphics work sounds intensely irritating, so Commodore gave us a way to avoid doing that. The chipset includes a coprocessor called "copper". Copper doesn't have a large set of features - in fact, it only has three. The first is that it can program chipset registers. The second is that it can wait for a specific point in screen scanout. The third (which we don't care about here) is that it can optionally skip an instruction if a certain point in screen scanout has already been reached. We can write a program (a "copper list") for the copper that tells it to program the chipset registers with the locations of our bitplanes and then wait until the end of the frame, at which point it will repeat the process. Now our bitplane pointers are always valid at the start of a frame.

Ok! We know how to display stuff. Now we just need to deal with not having 256 colours, and the whole "Doom expects pixels" thing. For the first of these, I stole code from ADoom, the only Amiga doom port I could easily find source for. This looks at the 256 colour palette loaded by Doom and calculates the closest approximation it can within the constraints of EHB. ADoom also includes a bunch of CPU-specific assembly optimisation for converting the "chunky" Doom graphic buffer into the "planar" Amiga bitplanes, none of which I used because (a) it's all for 68000 series CPUs and we're running on ARM, and (b) I have a quad core CPU running at 1.4GHz and I'm going to be pushing all the graphics over a 7.14MHz bus, the graphics mode conversion is not going to be the bottleneck here. Instead I just wrote a series of nested for loops that iterate through each pixel and update each bitplane and called it a day. The set of bitplanes I'm operating on here is allocated on the Linux side so I can read and write to them without being restricted by the speed of the Amiga bus (remember, each byte in each bitplane is going to be updated 8 times per frame, because it holds bits associated with 8 pixels), and then copied over to the Amiga's RAM once the frame is complete.

And, kind of astonishingly, this works! Once I'd figured out where I was going wrong with RGB ordering and which order the bitplanes go in, I had a recognisable copy of Doom running. Unfortunately there were weird graphical glitches - sometimes blocks would be entirely the wrong colour. It took me a while to figure out what was going on and then I felt stupid. Recording the screen and watching in slow motion revealed that the glitches often showed parts of two frames displaying at once. The Amiga hardware is taking responsibility for scanning out the frames, and the code on the Linux side isn't synchronised with it at all. That means I could update the bitplanes while the Amiga was scanning them out, resulting in a mashup of planes from two different Doom frames being used as one Amiga frame. One approach to avoid this would be to tie the Doom event loop to the Amiga, blocking my writes until the end of scanout. The other is to use double-buffering - have two sets of bitplanes, one being displayed and the other being written to. This consumes more RAM but since I'm not using the Amiga RAM for anything else that's not a problem. With this approach I have two copper lists, one for each set of bitplanes, and switch between them on each frame. This improved things a lot but not entirely, and there's still glitches when the palette is being updated (because there's only one set of colour registers), something Doom does rather a lot, so I'm going to need to implement proper synchronisation.

Except. This was only working if I ran a 68K emulator first in order to run Kickstart. If I tried accessing the hardware without doing that, things were in a weird state. I could update the colour registers, but accessing RAM didn't work - I could read stuff out, but anything I wrote vanished. Some more digging cleared that up. When you turn on a CPU it needs to start executing code from somewhere. On modern x86 systems it starts from a hardcoded address of 0xFFFFFFF0, which was traditionally a long way from any RAM. The 68000 family instead reads its start address from address 0x00000004, which overlaps with where the Amiga chip RAM is. We can't write anything to RAM until we're executing code, and we can't execute code until we tell the CPU where the code is, which seems like a problem. This is solved on the Amiga by powering up in a state where the Kickstart ROM is "overlayed" onto address 0. The CPU reads the start address from the ROM, which causes it to jump into the ROM and start executing code there. Early on, the code tells the hardware to stop overlaying the ROM onto the low addresses, and now the RAM is available. This is poorly documented because it's not something you need to care about if you execute Kickstart, which every actual Amiga does, and I'm only in this position because I've made poor life choices, but ok that explained things. To turn off the overlay you write to a register in one of the Complex Interface Adaptor (CIA) chips, and things start working like you'd expect.

Except, they don't. Writing to that register did nothing for me. I assumed that there was some other register I needed to write to first, and went to the extent of tracing every register access that occurred when running the emulator and replaying those in my code. Nope, still broken. What I finally discovered is that you need to pulse the reset line on the board before some of the hardware starts working - powering it up doesn't put you in a well defined state, but resetting it does.

So, I now have a slightly graphically glitchy copy of Doom running without any sound, displaying on an Amiga whose brain has been replaced with a parasitic Linux. Further updates will likely make things even worse. Code is, of course, available.

[1] This is why we had trouble with late era 32 bit systems and 4GB of RAM - a bunch of your hardware wanted to be in the same address space and so you couldn't put RAM there so you ended up with less than 4GB of RAM


Victor Ma: It's alive!

Planet GNOME - Tue, 05/08/2025 - 2:00 AM

In the last two weeks, I’ve been working on my lookahead-based word suggestion algorithm. And it’s finally functional! There’s still a lot more work to be done, but it’s great to see that the original problem I set out to solve is now solved by my new algorithm.

Without my changes

Here’s what the upstream Crosswords Editor looks like, with a problematic grid:

The editor suggests words like WORD and WORM for the 4-Across slot. But none of the suggestions are valid, because the grid is actually unfillable. This means that there are no possible word suggestions for the grid.

The words that the editor suggests do work for 4-Across. But they do not work for 4-Down. They all cause 4-Down to become a nonsensical word.

The problem here is that the current word suggestion algorithm only looks at the row and column where the cursor is. So it sees 4-Across and 1-Down—but it has no idea about 4-Down. If it could see 4-Down, then it would realize that no word that fits in 4-Across also fits in 4-Down—and it would return an empty word suggestion list.

With my changes

My algorithm fixes the problem by considering every intersecting slot of the current slot. In the example grid, the current slot is 4-Across. So, my algorithm looks at 1-Down, 2-Down, 3-Down, and 4-Down. When it reaches 4-Down, it sees that no letter fits in the empty cell: every possible letter causes 4-Across, 4-Down, or both to contain an invalid word. So, my algorithm correctly returns an empty list of word suggestions.

Julian Hofer: Git Forges Made Simple: gh & glab

Planet GNOME - Mon, 04/08/2025 - 2:00 AM

When I set the goal for myself to contribute to open source back in 2018, I mostly struggled with two technical challenges:

  • Python virtual environments, and
  • Git together with GitHub.

Solving the former is nowadays my job, so let me write up my current workflow for the latter.

Most people use Git in combination with modern Git forges like GitHub and GitLab. Git doesn’t know anything about these forges, which is why CLI tools exist to close that gap. It’s still good to know how to handle things without them, so I will also explain how to do things with only Git. For GitHub there’s gh and for GitLab there’s glab. Both of them are Go binaries without any dependencies that work on Linux, macOS and Windows. If you don’t like any of the provided installation methods, you can simply download the binary, make it executable and put it in your PATH.
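For example, assuming the downloaded binary is called gh and ~/.local/bin is already on your PATH, that amounts to something like:

chmod +x gh
mv gh ~/.local/bin/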

Luckily, they also have mostly the same command line interface. First, you have to login with the command that corresponds to your git forge:
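That is, for GitHub:

gh auth login

and for GitLab:

glab auth login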

In the case of gh this even authenticates Git with GitHub. With GitLab, you still have to set up authentication via SSH.

Working Solo

The simplest way to use Git is to use it like a backup system. First, you create a new repository on either GitHub or GitLab. Then you clone the repository:

git clone <REPO>

From that point on, all you have to do is:

  • do some work
  • commit
  • push
  • repeat

On its own there aren’t a lot of reasons to choose this approach over a file syncing service like Nextcloud. The main reason to do this is that you are either already familiar with the Git workflow or want to get used to it.
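In practice, that loop is just the standard trio of Git commands (the commit message is whatever describes your change):

git add --all
git commit -m "Describe your change"
git push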

Contributing

Git truly shines as soon as you start collaborating with others. On a high level this works like this:

  • You modify some files in a Git repository,
  • you propose your changes via the Git forge,
  • maintainers of the repository review your changes, and
  • as soon as they are happy with your changes, they will integrate your changes into the main branch of the repository.
Starting Out With a Fresh Branch

Let’s go over the exact commands.

  1. You will want to start out with the latest upstream changes in the default branch. You can find out its name by running the following command:

    git ls-remote --symref origin HEAD
  2. Chances are it displays either refs/heads/main or refs/heads/master. The last component is the branch, so the default branch will be called either main or master. Before you start a new branch, you will run the following two commands to make sure you start with the latest state of the repository:

    git switch <DEFAULT-BRANCH>
    git pull
  3. You switch and create a new branch with:

    git switch --create <BRANCH>

    That way you can work on multiple features at the same time and easily keep your default branch synchronized with the remote repository.

Open a Pull Request

The next step is to open a pull request on GitHub or merge request on GitLab. Even though they are named differently, they are exactly the same thing. Therefore, I will call both of them pull requests from now on. The idea of a pull request is to integrate the changes from one branch into another branch (typically the default branch). However, you don’t necessarily want to give every potential contributor the power to create new branches on your repository. That is why the concept of forks exists. Forks are copies of a repository that are hosted on the same Git forge. Contributors can now create branches on their own forks and open pull requests based on these branches.

  1. If you don’t have push access to the repository, now it’s time to create your own fork.

  2. Then, you open the pull request; both steps can be done straight from the command line, as sketched below.
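A minimal sketch of those two steps with the CLI tools, run from inside your clone (the exact prompts and flags differ slightly between gh and glab):

gh repo fork --remote    # GitHub: fork the repository and add your fork as a remote
gh pr create             # open a pull request from the current branch

glab repo fork           # GitLab: fork the repository
glab mr create           # open a merge request from the current branch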

Checking out Pull Requests

Often, you want to check out a pull request on your own machine to verify that it works as expected.
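Both tools can do this by number, fetching the pull request's branch into a local branch (1234 is just a placeholder here):

gh pr checkout 1234
glab mr checkout 1234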

Emmanuele Bassi: Governance in GNOME

Planet GNOME - Sun, 03/08/2025 - 10:48 PM
How do things happen in GNOME?

Things happen in GNOME? Could have fooled me, right?

Of course, things happen in GNOME. After all, we have been releasing every six months, on the dot, for nearly 25 years. Assuming we’re not constantly re-releasing the same source files, then we have to come to the conclusion that things change inside each project that makes GNOME, and thus things happen that involve more than one project.

So let’s roll back a bit.

GNOME’s original sin

We all know Havoc Pennington’s essay on preferences; it’s one of GNOME’s foundational texts, we refer to it pretty much constantly both inside and outside the contributors community. It has guided our decisions and taste for over 20 years. As far as foundational text goes, though, it applies to design philosophy, not to project governance.

When talking about the inception and technical direction of the GNOME project there are really two foundational texts that describe the goals of GNOME, as well as the mechanisms that are employed to achieve those goals.

The first one is, of course, Miguel’s announcement of the GNOME project itself, sent to the GTK, Guile, and (for good measure) the KDE mailing lists:

We will try to reuse the existing code for GNU programs as much as possible, while adhering to the guidelines of the project. Putting nice and consistent user interfaces over all-time favorites will be one of the projects. — Miguel de Icaza, “The GNOME Desktop project.” announcement email

Once again, everyone related to the GNOME project is (or should be) familiar with this text.

The second foundational text is not as familiar, outside of the core group of people that were around at the time. I am referring to Derek Glidden’s description of the differences between GNOME and KDE, written five years after the inception of the project. I isolated a small fragment of it:

Development strategies are generally determined by whatever light show happens to be going on at the moment, when one of the developers will leap up and scream “I WANT IT TO LOOK JUST LIKE THAT” and then straight-arm his laptop against the wall in an hallucinogenic frenzy before vomiting copiously, passing out and falling face-down in the middle of the dance floor. — Derek Glidden, “GNOME vs KDE”

What both texts have in common is subtle, but explains the origin of the project. You may not notice it immediately, but once you see it you can’t unsee it: it’s the over-reliance on personal projects and taste, to be sublimated into a shared vision. A “bottom up” approach, with “nice and consistent user interfaces” bolted on top of “all-time favorites”, with zero indication of how those nice and consistent UIs would work on extant code bases, all driven by somebody with a vision—drug induced or otherwise—who decides to lead the project towards its implementation.

It’s been nearly 30 years, but GNOME still works that way.

Sure, we’ve had a HIG for 25 years, and the shared development resources that the project provides tend to mask this, to the point that everyone outside the project assumes that all people with access to the GNOME commit bit work on the whole project, as a single unit. If you are here, listening (or reading) to this, you know it’s not true. In fact, it is so comically removed from the lived experience of everyone involved in the project that we generally joke about it.

Herding cats and vectors sum

During my first GUADEC, back in 2005, I saw a great slide from Seth Nickell, one of the original GNOME designers. It showed GNOME contributors represented as a jumble of vectors going in all directions, cancelling each component out; and the occasional movement in the project was the result of somebody pulling/pushing harder in their direction.

Of course, this is not the exclusive province of GNOME: you could take most complex free and open source software projects and draw a similar diagram. I contend, though, that when it comes to GNOME this is not emergent behaviour but it’s baked into the project from its very inception: a loosey-goosey collection of cats, herded together by whoever shows up with “a vision”, but, also, a collection of loosely coupled projects. Over the years we tried to put to rest the notion that GNOME is a box of LEGO, meant to be assembled together by distributors and users in the way they most like it; while our software stack has graduated from the “thrown together at the last minute” quality of its first decade, our community is still very much following that very same model; the only way it seems to work is because we have a few people maintaining a lot of components.

On maintainers

I am a software nerd, and one of the side effects of this terminal condition is that I like optimisation problems. Optimising software is inherently boring, though, so I end up trying to optimise processes and people. The fundamental truth of process optimisation, just like software, is to avoid unnecessary work—which, in some cases, means optimising away the people involved.

I am afraid I will have to be blunt, here, so I am going to ask for your forgiveness in advance.

Let’s say you are a maintainer inside a community of maintainers. Dealing with people is hard, and the lord forbid you talk to other people about what you’re doing, what they are doing, and what you can do together, so you only have a few options available.

The first one is: you carve out your niche. You start, or take over, a project, or an aspect of a project, and you try very hard to make yourself indispensable, so that everything ends up passing through you, and everyone has to defer to your taste, opinion, or edict.

Another option: API design is opinionated, and reflects the thoughts of the person behind it. By designing platform API, you try to replicate your thoughts, taste, and opinions into the minds of the people using it, like the eggs of a parasitic wasp; because if everybody thinks like you, then there won’t be conflicts, and you won’t have to deal with details, like “how to make this application work”, or “how to share functionality”; or, you know, having to develop a theory of mind for relating to other people.

Another option: you try to reimplement the entirety of a platform by yourself. You start a bunch of projects, which require starting a bunch of dependencies, which require refactoring a bunch of libraries, which ends up cascading into half of the stack. Of course, since you’re by yourself, you end up with a consistent approach to everything. Everything is as it ought to be: fast, lean, efficient, a reflection of your taste, commitment, and ethos. You made everyone else redundant, which means people depend on you, but also nobody is interested in helping you out, because you are now taken for granted, on the one hand, and nobody is able to get a word in edgewise about what you made on the other.

I purposefully did not name names, even though we can all recognise somebody in these examples. For instance, I recognise myself. I have been all of these examples, at one point or another over the past 20 years.

Painting a target on your back

But if this is what it looks like from within a project, what it looks like from the outside is even worse.

Once you start dragging other people along, you raise your visibility; people start learning your name, because you appear in the issue tracker, on Matrix/IRC, on Discourse and Planet GNOME. Youtubers and journalists start asking you questions about the project. Randos on web forums start associating you to everything GNOME does, or does not; to features, design, and bugs. You become responsible for every decision, whether you are or not, and this leads to being the embodiment of all evil the project does. You’ll get hate mail, you’ll be harassed, your words will be used against you and the project for ever and ever.

Burnout and you

Of course, that ends up burning people out; it would be absurd if it didn’t. Even in the best case possible, you’ll end up burning out just by reaching empathy fatigue, because everyone has access to you, and everyone has their own problems and bugs and features and wouldn’t it be great to solve every problem in the world? This is similar to working for non profits as opposed to the typical corporate burnout: you get into a feedback loop where you don’t want to distance yourself from the work you do because the work you do gives meaning to yourself and to the people that use it; and yet working on it hurts you. It also empowers bad faith actors to hound you down to the ends of the earth, until you realise that turning sand into computers was a terrible mistake, and we should have torched the first personal computer down on sight.

Governance

We want to have structure, so that people know what to expect and how to navigate the decision making process inside the project; we also want to avoid having a sacrificial lamb that takes on all the problems in the world on their shoulders until we burn them down to a cinder and they have to leave. We’re 28 years too late to have a benevolent dictator, self-appointed or otherwise, and we don’t want to have a public consultation every time we want to deal with a systemic feature. What do we do?

Examples

What do other projects have to teach us about governance? We are not the only complex free software project in existence, and it would be an appalling measure of narcissism to believe that we’re special in any way, shape or form.

Python

We should all know what a Python PEP is, but if you are not familiar with the process I strongly recommend going through it. It’s well documented, and pretty much the de facto standard for any complex free and open source project that has achieved escape velocity from a centralised figure in charge of the whole decision making process. The real achievement of the Python community is that it adopted this policy long before their centralised figure called it quits. The interesting thing of the PEP process is that it is used to codify the governance of the project itself; the PEP template is a PEP; teams are defined through PEPs; target platforms are defined through PEPs; deprecations are defined through PEPs; all project-wide processes are defined through PEPs.

Rust

Rust has a similar process for language, tooling, and standard library changes, called “RFC”. The RFC process is more lightweight on the formalities than Python’s PEPs, but it’s still very well defined. Rust, being a project that came into existence in a Post-PEP world, adopted the same type of process, and used it to codify teams, governance, and any and all project-wide processes.

Fedora

Fedora change proposals exist to discuss and document both self-contained changes (usually fairly uncontroversial, given that they are proposed by the owners of the module being changed) and system-wide changes. The main difference between them is that most of the elements of a system-wide change proposal are required, whereas for self-contained proposals they can be optional; for instance, a system-wide change must have a contingency plan, a way to test it, and the impact on documentation and release notes, whereas a self-contained change does not.

GNOME

Turns out that we once did have “GNOME Enhancement Proposals” (GEP), mainly modelled on Python’s PEP from 2002. If this comes as a surprise, that’s because they lasted for about a year, mainly because it was a reactionary process to try and funnel some of the large controversies of the 2.0 development cycle into a productive outlet that didn’t involve flames and people dramatically quitting the project. GEPs failed once the community fractured, and people started working in silos, either under their own direction or, more likely, under their management’s direction. What’s the point of discussing a project-wide change, when that change was going to be implemented by people already working together?

The GEP process mutated into the lightweight “module proposal” process, where people discussed adding and removing dependencies on the desktop development mailing list—something we also lost over the 2.x cycle, mainly because the amount of discussions over time tended towards zero. The people involved with the change knew what those modules brought to the release, and people unfamiliar with them were either giving out unsolicited advice, or were simply not reached by the desktop development mailing list. The discussions turned into external dependencies notifications, which also dried up because apparently asking to compose an email to notify the release team that a new dependency was needed to build a core module was far too much of a bother for project maintainers.

The creation and failure of GEP and module proposals is both an indication of the need for structure inside GNOME, and how this need collides with the expectation that project maintainers have not just complete control over every aspect of their domain, but that they can also drag out the process until all the energy behind it has dissipated. Being in charge for the long run allows people to just run out the clock on everybody else.

Goals

So, what should be the goal of a proper technical governance model for the GNOME project?

Diffusing responsibilities

This should be goal zero of any attempt at structuring the technical governance of GNOME. We have too few people in too many critical positions. We can call it “efficiency”, we can call it “bus factor”, we can call it “bottleneck”, but the result is the same: the responsibility for anything is too concentrated. This is how you get conflict. This is how you get burnout. This is how you paralyse a whole project. By having too few people in positions of responsibility, we don’t have enough slack in the governance model; it’s an illusion of efficiency.

Responsibility is not something to hoard: it’s something to distribute.

Empowering the community

The community of contributors should be able to know when and how a decision is made; it should be able to know what to do once a decision is made. Right now, the process is opaque because it’s done inside a million different rooms, and, more importantly, it is not recorded for posterity. Random GitLab issues should not be the only place where people can be informed that some decision was taken.

Empowering individuals

Individuals should be able to contribute to a decision without necessarily becoming responsible for a whole project. It’s daunting, and requires a measure of hubris that cannot be allowed to exist in a shared space. In a similar fashion, we should empower people that want to contribute to the project by reducing the amount of fluff coming from people with zero stakes in it, and are interested only in giving out an opinion on their perfectly spherical, frictionless desktop environment.

It is free and open source software, not free and open mic night down at the pub.

Actual decision making process

We say we work by rough consensus, but if a single person is responsible for multiple modules inside the project, we’re just deceiving ourselves. I should not be able to design something on my own, commit it to all projects I maintain, and then go home, regardless of whether what I designed is good or necessary.

Proposed GNOME Changes✝

✝ Name subject to change

PGCs

We have better tools than what the GEP used to use and be. We have better communication venues in 2025; we have better validation; we have better publishing mechanisms.

We can take a lightweight approach, with a well-defined process, and use it not for actual design or decision-making, but for discussion and documentation. If you are trying to design something and you use this process, you are by definition Doing It Wrong™. You should have a design ready, and series of steps to achieve it, as part of a proposal. You should already know the projects involved, and already have an idea of the effort needed to make something happen.

Once you have a formal proposal, you present it to the various stakeholders, and iterate over it to improve it, clarify it, and amend it, until you have something that has a rough consensus among all the parties involved. Once that’s done, the proposal is now in effect, and people can refer to it during the implementation, and in the future. This way, we don’t have to ask people to remember a decision made six months, two years, ten years ago: it’s already available.

Editorial team

Proposals need to be valid, in order to be presented to the community at large; that validation comes from an editorial team. The editors of the proposals are not there to evaluate its contents: they are there to ensure that the proposal is going through the expected steps, and that discussions related to it remain relevant and constrained within the accepted period and scope. They are there to steer the discussion, and avoid architecture astronauts parachuting into the issue tracker or Discourse to give their unwarranted opinion.

Once the proposal is open, the editorial team is responsible for its inclusion in the public website, and for keeping track of its state.

Steering group

The steering group is the final arbiter of a proposal. They are responsible for accepting it, or rejecting it, depending on the feedback from the various stakeholders. The steering group does not design or direct GNOME as a whole: they are the ones that ensure that communication between the parts happens in a meaningful manner, and that rough consensus is achieved.

The steering group is also, by design, not the release team: it is made of representatives from all the teams related to technical matters.

Is this enough?

Sadly, no.

Reviving a process for proposing changes in GNOME without addressing the shortcomings of its first iteration would inevitably lead to a repeat of its results.

We have better tooling, but the problem is still that we’re demanding that each project maintainer gets on board with a process that has no mechanism to enforce compliance.

Once again, the problem is that we have a bunch of fiefdoms that need to be opened up to ensure that more people can work on them.

Whither maintainers

In what was, in retrospect, possibly one of my least gracious and yet most prophetic moments on the desktop development mailing list, I once said that, if it were possible, I would have already replaced all GNOME maintainers with a shell script. Turns out that we did replace a lot of what maintainers used to do, and we used a large Python service to do that.

Individual maintainers should not exist in a complex project—for both the project’s and the contributors’ sake. They are inefficiency made manifest, a bottleneck, a point of contention in a distributed environment like GNOME. Luckily for us, we almost made them entirely redundant already! Thanks to the release service and CI pipelines, we don’t need a person spinning up a release archive and uploading it into a file server. We just need somebody to tag the source code repository, and anybody with the right permissions could do that.

We need people to review contributions; we need people to write release notes; we need people to triage the issue tracker; we need people to contribute features and bug fixes. None of those tasks require the “maintainer” role.

So, let’s get rid of maintainers once and for all. We can delegate the actual release tagging of core projects and applications to the GNOME release team; they are already releasing GNOME anyway, so what’s the point in having them wait every time for somebody else to do individual releases? All people need to do is to write down what changed in a release, and that should be part of a change itself; we have centralised release notes, and we can easily extract the list of bug fixes from the commit log. If you can ensure that a commit message is correct, you can also get in the habit of updating the NEWS file as part of a merge request.

Additional benefits of having all core releases done by a central authority are that we get people to update the release notes every time something changes; and that we can sign all releases with a GNOME key that downstreams can rely on.

Embracing special interest groups

But it’s still not enough.

Especially when it comes to the application development platform, we have already a bunch of components with an informal scheme of shared responsibility. Why not make that scheme official?

Let’s create the SDK special interest group; take all the developers for the base libraries that are part of GNOME—GLib, Pango, GTK, libadwaita—and formalise the group of people that currently does things like development, review, bug fixing, and documentation writing. Everyone in the group should feel empowered to work on all the projects that belong to that group. We already are, except we end up deferring to somebody that is usually too busy to cover every single module.

Other special interest groups should be formed around the desktop, the core applications, the development tools, the OS integration, the accessibility stack, the local search engine, the system settings.

Adding more people to these groups is not going to be complicated, or introduce instability, because the responsibility is now shared; we would not be taking somebody that is already overworked, or even potentially new to the community, and plopping them into the hot seat, ready for a burnout.

Each special interest group would have a representative in the steering group, alongside teams like documentation, design, and localisation, thus ensuring that each aspect of the project technical direction is included in any discussion. Each special interest group could also have additional sub-groups, like a web services group in the system settings group; or a networking group in the OS integration group.

What happens if I say no?

I get it. You like being in charge. You want to be the one calling the shots. You feel responsible for your project, and you don’t want other people to tell you what to do.

If this is how you feel, then there’s nothing wrong with parting ways with the GNOME project.

GNOME depends on a ton of projects hosted outside GNOME’s own infrastructure, and we communicate with people maintaining those projects every day. It’s 2025, not 1997: there’s no shortage of code hosting services in the world, we don’t need to have them all on GNOME infrastructure.

If you want to play with the other children, if you want to be part of GNOME, you get to play with a shared set of rules; and that means sharing all the toys, and not hoarding them for yourself.

Civil service

What we really want GNOME to be is a group of people working together. We already are, somewhat, but we can be better at it. We don’t want rule and design by committee, but we do need structure, and we need that structure to be based on expertise; to have distinct spheres of competence; to have continuity across time; and to be based on rules. We need something flexible, to take into account the needs of GNOME as a project, and be capable of growing in complexity so that nobody can be singled out, brigaded on, or burnt to a cinder on the sacrificial altar.

Our days of passing out in the middle of the dance floor are long gone. We might not all be old—actually, I’m fairly sure we aren’t—but GNOME has long ceased to be something we can throw together at the last minute just because somebody assumed the mantle of a protean ruler, and managed to involve themselves with every single project until they are the literal embodiment of an autocratic force capable of dragging everybody else towards a goal, until they burn out and have to leave for their own sake.

We can do better than this. We must do better.

To sum up

Stop releasing individual projects, and let the release team do it when needed.

Create teams to manage areas of interest, instead of single projects.

Create a steering group from representatives of those teams.

Every change that affects one or more teams has to be discussed and documented in a public setting among contributors, and then published for future reference.

None of this should be controversial because, outside of the publishing bit, it’s how we are already doing things. This proposal aims at making it official so that people can actually rely on it, instead of having to divine the process out of thin air.

The next steps

We’re close to the GNOME 49 release, now that GUADEC 2025 has ended, so people are busy tagging releases and fixing bugs, and work on the release notes has started. Nevertheless, we can already start planning for an implementation of a new governance model for GNOME in the next cycle.

First of all, we need to create teams and special interest groups. We don’t have a formal process for that, so this is also a great chance to introduce the change proposal process as a mechanism for structuring the community, just like the Python and Rust communities do. Teams will need their own space for discussing issues and sharing the load. The first team I’d like to start is an “introspection and language bindings” group, for all bindings hosted on GNOME infrastructure; it would act as a point of reference for all decisions involving projects that consume the GNOME software development platform through its machine-readable ABI description. Another group I’d like to create is an editorial group for the developer and user documentation; documentation benefits from a consistent editorial voice, while the process of writing documentation should be open to everybody in the community.

A very real issue that was raised during GUADEC is bootstrapping the steering committee: who gets to be on it, what the committee’s remit is, and how it works. There are options, but if we want the steering committee to be a representation of the technical expertise of the GNOME community, it also has to be established by that very same community; in this sense, the board of directors, as representatives of the community, could work on defining the powers and composition of this committee.

There are many more issues we are going to face, but I think we can start from these and evaluate our own version of a technical governance model that works for GNOME, and that can grow with the project. In the next couple of weeks I’ll start publishing drafts for team governance and the power/composition/procedure of the steering committee, mainly for iteration and comments.

Tobias Bernard: GUADEC 2025

Planet GNOME - Dje, 03/08/2025 - 12:27md

Last week was this year’s GUADEC, the first ever in Italy! Here are a few impressions.

Local-First

One of my main focus areas this year was local-first, since that’s what we’re working on right now with the Reflection project (see the previous blog post). Together with Julian and Andreas we did two lightning talks (one on local-first generally, and one on Reflection in particular), and two BoF sessions.

Local-First BoF

At the BoFs we did a bit of Reflection testing, and reached a new record of people simultaneously trying the app:

This also uncovered a padding bug in the users popover :)

Andreas also explained the p2panda stack in detail using a new diagram we made a few weeks back, which visualizes how the various components fit together in a real app.

p2panda stack diagram

We also discussed some longer-term plans, particularly around having a system-level sync service. The motivation for this is twofold: We want to make it as easy as possible for app developers to add sync to their app. It’s never going to be “free”, but if we can at least abstract some of this in a useful way that’s a big win for developer experience. More importantly though, from a security/privacy point of view we really don’t want every app to have unrestricted access to network, Bluetooth, etc. which would be required if every app does its own p2p sync.

One option being discussed is taking the networking part of p2panda (including iroh for p2p networking) and making it a portal API which apps can use to talk to other instances of themselves on other devices.

Another idea was a more high-level portal that works more like a file “share” system that can sync arbitrary files by just attaching the sync context to files as xattrs and having a centralized service handle all the syncing. This would have the advantage of not requiring special UI in apps, just a portal and some integration in Files. Real-time collaboration would of course not be possible without actual app integration, but for many use cases that’s not needed anyway, so perhaps we could have both a high- and low-level API to cover different scenarios?

There are still a lot of open questions here, but it’s cool to see how things get a little bit more concrete every time :)

If you’re interested in the details, check out the full notes from both BoF sessions.

Design

Jakub and I gave the traditional design team talk – a bit underprepared and last-minute (thanks Jakub for doing most of the heavy lifting), but it was cool to see in retrospect how much we got done in the past year despite how much energy unfortunately went into unrelated things. The all-new slate of websites is especially cool after so many years of gnome.org et al looking very stale. You can find the slides here.

Jakub giving the design talk

Inspired by the Summer of GNOME OS challenge many of us are doing, we worked on concepts for a new app to make testing sysexts of merge requests (and nightly Flatpaks) easier. The working title is “Test Ride” (a more sustainable version of Apple’s “Test Flight” :P) and we had fun drawing bicycles for the icon.

Test Ride app mockup

Jakub and I also worked on new designs for Georges’ presentation app Spiel (which is being rebranded to “Podium” to avoid the name clash with the a11y project). The idea is to make the app more resilient and data more future-proof, by going with a file-based approach and simple syntax on top of Markdown for (limited) layout customization.

Georges and Jakub discussing Podium designs

Miscellaneous
  • There was a lot of energy and excitement around GNOME OS. It feels like we’ve turned a corner, finally leaving the “science experiment” stage and moving towards “daily-drivable beta”.
  • I was very happy that the community appreciation award went to Alice Mikhaylenko this year. The impact her libadwaita work has had on the growth of the app ecosystem over the past 5 years can not be overstated. Not only did she build dozens of useful and beautiful new adaptive widgets, she also has a great sense for designing APIs in a way that will get people to adopt them, which is no small thing. Kudos, very well deserved!
  • While some of the Foundation conflicts of the past year remain unresolved, I was happy to see that Steven’s and the board’s plans are going in the right direction.
Brescia

The conference was really well-organized (thanks to Pietro and the local team!), and the venue and city of Brescia had a number of advantages that were not always present at previous GUADECs:

  • The city center is small and walkable, and everyone was staying relatively close by
  • The university is 20 min by metro from the city center, so it didn’t feel like a huge ordeal to go back and forth
  • Multiple vegan lunch options within a few minutes walk from the university
  • Lots of tables (with electrical outlets!) for hacking at the venue
  • Lots of nice places for dinner/drinks outdoors in the city center
  • Many dope ice cream places
Piazza della Loggia at sunset

A few (minor) points that could be improved next time:

  • The timetable started veeery early every day, which contributed to a general lack of sleep. Realistically people are not going to sleep before 02:00, so starting the program at 09:00 is just too early. My experience from multi-day events in Berlin is that 12:00 is a good time to start if you want everyone to be awake :)
  • The BoFs could have been spread out a bit more over the two days, there were slots with three parallel ones and times with nothing on the program.
  • The venue closing at 19:00 is not ideal when people are in the zone hacking. Doesn’t have to be all night, but the option to hack until after dinner (e.g. 22:00) would be nice.
  • Since the conference is a week long, accommodation can get a bit expensive, which is not ideal given that most people are paying for their own travel and accommodation nowadays. It’d have been great if there was a more affordable option for accommodation, e.g. at student dorms, like at previous GUADECs.
  • A frequent topic was how it’s not ideal to have everyone be traveling and mostly unavailable for reviews a week before feature freeze. It’s also not ideal because any plans you make at GUADEC are not going to make it into the September release, but will have to wait for March. What if the conference was closer to the beginning of the cycle, e.g. in May or June?

A few more random photos:

Matthias showing us fancy new dynamic icon stuff
Dinner on the first night feat. yours truly, Robert, Jordan, Antonio, Maximiliano, Sam, Javier, Julian, Adrian, Markus, Adrien, and Andreas
Adrian and Javier having an ad-hoc Buildstream BoF at the pizzeria
Robert and Maximiliano hacking on Snapshot

Ubuntu Studio: Ubuntu Studio 24.10 Has Reached End-Of-Life (EOL)

Planet Ubuntu - Enj, 10/07/2025 - 2:00md

As of July 10, 2025, all flavors of Ubuntu 24.10, including Ubuntu Studio 24.10, codenamed “Oracular Oriole”, have reached end-of-life (EOL). There will be no more updates of any kind, including security updates, for this release of Ubuntu.

If you have not already done so, please upgrade to Ubuntu Studio 25.04 via the instructions provided here. If you do not do so as soon as possible, you will lose the ability to upgrade without additional advanced configuration.

No single release of any operating system can be supported indefinitely, and Ubuntu Studio is no exception to this rule.

Regular Ubuntu releases, meaning those that fall between the Long-Term Support releases, are supported for nine months; users are expected to upgrade after every release, with a three-month buffer following each new release.

Long-Term Support releases are identified by an even-numbered year-of-release and a month-of-release of April (04). Hence, the most recent Long-Term Support release is 24.04 (YY.MM = 2024.April), and the next Long-Term Support release will be 26.04 (2026.April). LTS releases of official Ubuntu flavors (unlike Ubuntu Desktop and Server, which are supported for five years) are supported for three years, meaning LTS users are expected to upgrade after every LTS release, with a one-year buffer.

Stuart Langridge: Making a Discord activity with PHP

Planet Ubuntu - Mar, 08/07/2025 - 9:11pd

Another post in what is slowly becoming a series, after describing how to make a Discord bot with PHP; today we're looking at how to make a Discord activity the same way.

An activity is simpler than a bot; Discord activities are basically a web page which loads in an iframe, and can do what it likes in there. You're supposed to use them for games and the like, but I suspect that it might be useful to do quite a few bot-like tasks with activities instead; they take up more of your screen while you're using them, but it's much, much easier to create a user-friendly experience with an activity than it is with a bot. The user interface for bots tends to look a lot like the command line, which appeals to nerds, but having to type !mybot -opt 1 -opt 2 is incomprehensible gibberish to real people. Build a little web UI, you know it makes sense.

Anyway, I have not yet actually published one of these activities, and I suspect that there is a whole bunch of complexity around that which I'm not going to get into yet. So this will get you up and running with a Discord activity that you can test, yourself. Making it available to others is step 2: keep an eye out for a post on that.

There are lots of "frameworks" out there for building Discord activities, most of which are all about "use React!" and "have this complicated build environment!" and "deploy a node.js server!", when all you actually need is an SPA web page1, a JS library, a small PHP file, and that's it. No build step required, no deploying a node.js server, just host it in any web space that does PHP (i.e., all of them). Keep it simple, folks. Much nicer.

Step 1: set up a Discord app

To have an activity, it's gotta be tied to a Discord app. Get one of these as follows:

  • Create an application at discord.com/developers/applications. Call it whatever you want
  • Copy the "Application ID" from "General Information" and make a secrets.php file; add the application ID as $clientid = "whatever";
  • In "OAuth2", "Reset Secret" under Client Secret and store it in secrets.php as $clientsecret
  • In "OAuth2", "Add Redirect": this URL doesn't get used but there has to be one, so fill it in as some URL you like (http://127.0.0.1 works fine)
  • Get the URL of your activity web app (let's say it's https://myserver/myapp/). Under URL Mappings, add myserver/myapp (no https://) as the Root Mapping. This tells Discord where your activity is
  • Under Settings, tick Enable Activities. (Also tick "iOS" and "Android" if you want it to work in the phone app)
  • Under Installation > Install Link, copy the Discord Provided Link. Open it in a browser. This will switch to the Discord desktop app. Add this app to the server of your choice (not to everywhere), and choose the server you want to add it to
  • In the Discord desktop client, click the Activities button (it looks like a playstation controller, at the end of the message entry textbox). Your app should now be in "Apps in this Server". Choose it and say Launch. Confirm that you're happy to trust it because you're running it for the first time

And this will then launch your activity in a window in your Discord app. It won't do anything yet because you haven't written it, but it's now loading.
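
As an aside, the secrets.php file created in the steps above doesn't need anything fancy. Here's a minimal sketch; the values below are placeholders standing in for the Application ID and Client Secret you copied from the developer portal, and both index.php and token.php (shown next) read their credentials from this file:

<?php
// secrets.php -- keep this file out of public view and version control
// Placeholder values: paste in the Application ID and Client Secret
// you copied from the Discord developer portal in step 1
$clientid = "YOUR_APPLICATION_ID";
$clientsecret = "YOUR_CLIENT_SECRET";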

Step 2: write an activity
  • You'll need the Discord Embedded SDK JS library. Go off to jsdelivr and see the URL it wants you to use (at time of writing this is https://cdn.jsdelivr.net/npm/@discord/embedded-app-sdk@2.0.0/+esm but check). Download this URL to get a JS file, which you should call discordsdk.js. (Note: do not link to this directly. Discord activities can't download external resources without some semi-complex setup. Just download the JS file)
  • Now write the home page for your app -- index.php is likely to be ideal for this, because you need the client ID that you put in secrets.php. A very basic one, which works out who the user is, looks something like this:
<?php require_once("secrets.php"); /* pulls in $clientid (and $clientsecret) from step 1 */ ?>
<html>
<body>
I am an activity! You are <output id="username">...?</output>
<script type="module">
import {DiscordSDK} from './discordsdk.js';
const clientid = '<?php echo $clientid; ?>';

async function setup() {
    const discordSdk = new DiscordSDK(clientid);
    // Wait for READY payload from the discord client
    await discordSdk.ready();
    // Pop open the OAuth permission modal and request access to the scopes listed in the scope array below
    const {code} = await discordSdk.commands.authorize({
        client_id: clientid,
        response_type: 'code',
        state: '',
        prompt: 'none',
        scope: ['identify'],
    });
    const response = await fetch('/.proxy/token.php?code=' + code);
    const {access_token} = await response.json();
    const auth = await discordSdk.commands.authenticate({access_token});
    document.getElementById("username").textContent = auth.user.username;
    /* other properties you may find useful:
       server ID: discordSdk.guildId
       user ID: auth.user.id
       channel ID: discordSdk.channelId */
}
setup();
</script>
</body>
</html>

You will see that in the middle of this, we call token.php to get an access token from the code that discordSdk.commands.authorize gives you. While the URL is /.proxy/token.php, that's just a token.php file right next to index.php; the .proxy stuff is because Discord puts all your requests through their proxy, which is OK. So you need this file to exist. Following the Discord instructions for authenticating users with OAuth, it should look something like this:

<?php require_once("secrets.php"); $postdata = http_build_query( array( "client_id" => $clientid, "client_secret" => $clientsecret, "grant_type" => "authorization_code", "code" => $_GET["code"] ) ); $opts = array('http' => array( 'method' => 'POST', 'header' => [ 'Content-Type: application/x-www-form-urlencoded', 'User-Agent: mybot/1.00' ], 'content' => $postdata, 'ignore_errors' => true ) ); $context = stream_context_create($opts); $result_json = file_get_contents('https://discord.com/api/oauth2/token', false, $context); if ($result_json == FALSE) { echo json_encode(array("error"=>"no response")); die(); } $result = json_decode($result_json, true); if (!array_key_exists("access_token", $result)) { error_log("Got JSON response from /token without access_token $result_json"); echo json_encode(array("error"=>"no token")); die(); } $access_token = $result["access_token"]; echo json_encode(array("access_token" => $access_token));

And... that's all. At this point, if you Launch your activity from Discord, it should load, and should work out who the running user is (and which channel and server they're in) and that's pretty much all you need. Hopefully that's a relatively simple way to get started.

  1. it's gotta be an SPA. Discord does not like it when the page navigates around
