There are many things we take for granted in this world, and one of them is undoubtedly the ability to clean up our files - imagine a world where you can't just throw away all those disk-space-hungry things you no longer find useful. That might sound impossible, but it turns out some people have encountered a particularly interesting bug that resulted in Nautilus silently sweeping the Trash under the rug instead of emptying it. Since I was blessed to run into that issue myself, I decided to fix it and shed some light on the fun.
Trash after emptying in Nautilus, are the files really gone?
It all started with a 2009 Ubuntu Launchpad ticket, reported against Nautilus. The user found 70 GB worth of files with a disk analyzer in the ~/.local/share/Trash/expunged directory, even though they had emptied the Trash with the graphical interface. They did realize the offending files belonged to another user; however, they couldn't reproduce it easily at first. After all, when you try to move to trash a file or directory not belonging to you, you would usually be correctly informed that you don't have the necessary permissions, and perhaps even offered the option to permanently delete it instead. So what was so special about this case?
First, let's get a better view of when we can and can't permanently delete files, something that is done at the end of a successful trash-emptying operation. We'll focus only on the owners of the relevant files, since other factors, such as file read/write/execute permissions, can be adjusted freely by their owners, and that's what trash implementations will do for you. Here are the cases where you CAN delete files:
- when a file is in a directory owned by you, you can always delete it
- when a directory is in a directory owned by you and it's owned by you, you can obviously delete it
- when a directory is in a directory owned by you but you don't own it, and it's empty, you can surprisingly delete it as well
So to summarize: no matter who owns a file or directory, if it's in a directory owned by you, you can get rid of it. There is one exception - the directory must be empty; otherwise, you will be able to remove neither it nor the files it contains. Which takes us to an analogous list of cases where you CANNOT delete files:
- when a directory is in a directory owned by you but you don't own it, and it's not empty, you can't delete it.
- when a file is in a directory NOT owned by you, you can't delete it
- when a directory is in a directory NOT owned by you, you can't delete it either
In contrast with removing files in a directory you own, when you are not the owner of the parent directory, you cannot delete any of the child files and directories, without exception. This is actually the reason for the one case where you can't remove something from a directory you own - to remove a non-empty directory, you first need to recursively delete all of the files and directories it contains, and you can't do that if the directory is not owned by you.
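The key insight behind these rules is that deletion is governed by the parent directory, not by the file being deleted. Here is a minimal sketch of that (demonstrating different ownership would need root, so this uses permission bits on the file instead to make the same point):

```python
import os
import tempfile

# Deleting a file requires write permission on its PARENT directory;
# the permissions on the file itself don't matter. Even a file with
# every permission bit stripped can be unlinked, as long as we can
# write to the directory that contains it.
parent = tempfile.mkdtemp()
victim = os.path.join(parent, "file")
open(victim, "w").close()
os.chmod(victim, 0o000)          # no read/write/execute on the file
os.unlink(victim)                # still succeeds: we can write to the parent
print(os.path.exists(victim))    # False
os.rmdir(parent)
```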
Now let's look inside the trash can, or rather at how it functions. The reason for separating the permanent-deletion and trashing operations is obvious - users are expected to change their mind and be able to get their files back on a whim, so there's a need for a middle step. That's where the Trash specification comes in, providing a common way in which all "trash can" implementations should store, list, and restore trashed files, even across different filesystems - the Nautilus Trash feature is one of the possible implementations. Trashing actually works by moving files to the $XDG_DATA_HOME/Trash/files directory and setting up some metadata to track their original location, so they can be restored if needed. Only when the user empties the trash are they actually deleted. If it's all about moving files, specifically outside their previous parent directory (i.e. to the Trash), let's look at the cases where you CAN move files:
- when a file is in a directory owned by you, you can move it
- when a directory is in a directory owned by you and you own it, you can obviously move it
We can see that the only exception when moving files out of a directory you own is when the directory you're moving doesn't belong to you, in which case you will be correctly informed that you don't have permissions. In the remaining cases, users are able to move files and therefore trash them. Now what about the cases where you CANNOT move files?
- when a directory is in a directory owned by you but you don't own it, you can't move it
- when a file is in a directory NOT owned by you, you can't move it either
- when a directory is in a directory NOT owned by you, you still can't move it
In those cases Nautilus will either not expose the ability to trash files, or will tell the user about the error, and the system works well - even if moving them were possible, permanently deleting files in a directory not owned by you is not supported anyway.
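As an aside, the metadata that tracks a trashed file's original location lives next to the files, under $XDG_DATA_HOME/Trash/info, with one .trashinfo entry per trashed item. Per the Trash specification it looks roughly like this (the path and date here are made-up examples):

```ini
[Trash Info]
Path=/home/user/projects/old-build
DeletionDate=2025-06-12T21:14:08
```

This is what lets implementations like Nautilus offer "Restore" - the file itself sits in Trash/files, and the .trashinfo records where to put it back.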
So, where's the catch? What are we missing? We've got two different operations, moving (trashing) and deleting, that can succeed or fail under different circumstances. We need to find a situation where moving a file is possible but deleting it is not - and such an overlap exists, by chaining the following two rules:
- when a directory A is in a directory owned by you and it's owned by you, you can obviously move it
- when a directory B is in a directory A owned by you but you don't own it, and it's not empty, you can't delete it.
And so a simple way to reproduce was found:
mkdir -p test/root
touch test/root/file
sudo chown root:root test/root
Afterwards, trashing and emptying in Nautilus or with the gio trash command will result in the files not being deleted, but left in ~/.local/share/Trash/expunged, which is used by gvfsd-trash as an intermediary during the emptying operation. The situations where that can happen are very rare, but they do exist - personally, I encountered this when manually cleaning container files created by podman in ~/.local/share/containers, which arguably I shouldn't be doing in the first place, rather leaving it up to podman itself. Nevertheless, it's still possible from the user's perspective, and should be handled and prevented correctly. That's exactly what was done: a ticket was submitted and moved to the appropriate place, which turned out to be glib itself, and I submitted a MR that was merged - now both Nautilus and gio trash will recursively check for this case, and prevent you from doing this. You can expect it in the next glib release, 2.85.1.
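The shape of such a recursive check can be sketched as follows (a simplified, hypothetical version for illustration only - not the actual GLib code):

```python
import os

def tree_fully_deletable(root):
    """Return True if the current user could permanently delete
    everything under root. Following the rules above, a non-empty
    directory we don't own blocks deletion of its contents, and
    therefore of the whole tree."""
    euid = os.geteuid()
    for dirpath, dirnames, filenames in os.walk(root):
        st = os.stat(dirpath)
        if st.st_uid != euid and (dirnames or filenames):
            # A directory we don't own that still has entries:
            # we can't empty it, so we can't remove it either.
            return False
    return True

# On the test/root reproducer above (run as a regular user), this
# returns False, because test/root is owned by root and non-empty.
```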
On an ending note, I want to thank the glib maintainer Philip Withnall, who walked me through the required changes and reviewed them, and to ask you one thing: is your ~/.local/share/Trash/expunged really empty? :)
The All Systems Go! 2025 Call for Participation Closes Tomorrow!
The Call for Participation (CFP) for All Systems Go! 2025 will close tomorrow, on the 13th of June! We’d like to invite you to submit your proposals for consideration via the CFP submission site soon!
Good evening, hackfolk. A quick note this evening to record a waypoint in my efforts to improve Guile’s memory manager.
So, I got Guile running on top of the Whippet API. This API can be implemented by a number of concrete garbage collector implementations. The implementation backed by the Boehm collector is fine, as expected. The implementation that uses the bump-pointer-allocation-into-holes strategy is less good. The minor reason is heap sizing heuristics; I still get it wrong about when to grow the heap and when not to do so. But the major reason is that non-moving Immix collectors appear to have pathological fragmentation characteristics.
Fragmentation, for our purposes, is memory under the control of the GC which was free after the previous collection, but which the current cycle failed to use for allocation. I have the feeling that for the non-moving Immix-family collector implementations, fragmentation is much higher than for size-segregated freelist-based mark-sweep collectors. For an allocation of, say, 1024 bytes, the collector might have to scan over many smaller holes until it finds a hole that is big enough. This wastes free memory. Fragmented memory is not gone—it is still available for allocation!—but it won’t be allocatable until after the current cycle, when we visit all holes again. In Immix, fragmentation wastes allocatable memory during a cycle, hastening collection and causing more frequent whole-heap traversals.
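To make that concrete, here is a toy model (purely illustrative, and nothing to do with Whippet's actual code) of allocating fixed-size requests into the holes left free by a previous collection:

```python
# Toy model of bump-pointer-allocation-into-holes: the allocator sweeps
# to the next hole when the current one can't fit the request. Holes
# that are too small stay free, but can't serve any allocation until
# the next collection cycle revisits them.
def allocate(holes, request):
    served = 0      # requests satisfied this cycle
    wasted = 0      # bytes skipped over: fragmentation
    for hole in holes:
        if hole >= request:
            served += hole // request   # bump-allocate within the hole
        else:
            wasted += hole              # free, but unusable this cycle
    return served, wasted

holes = [256, 512, 2048, 128, 4096, 768]   # free bytes after last GC
served, wasted = allocate(holes, 1024)
print(served, wasted)   # 6 allocations fit; 1664 free bytes are fragmentation
```

Nearly a quarter of the free memory in this example is unusable for 1024-byte requests, which is the effect described above: the heap fills faster and collections come sooner.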
The value proposition of Immix is that if there is too much fragmentation, you can just go into evacuating mode, and probably improve things. I still buy it. However, I don’t think that non-moving Immix is a winner. I still need to do more science to know for sure. I need to fix Guile to support the stack-conservative, heap-precise version of the Immix-family collector, which will allow for evacuation.
So that’s where I’m at: a load of gnarly Guile refactors to allow for precise tracing of the heap. I probably have another couple weeks left until I can run some tests. Fingers crossed; we’ll see!
Damned Lies is the name of GNOME’s web application for managing localization (l10n) across its projects. But why is it named like this?
Damned Lies about GNOME

On the About page of GNOME’s localization site, the only explanation given for the name Damned Lies is a link to a Wikipedia article called “Lies, damned lies, and statistics”.
“Damned Lies” comes from the saying “Lies, damned lies, and statistics”, a 19th-century phrase used to describe the persuasive power of statistics to bolster weak arguments, as described on Wikipedia. One of its earliest known uses appeared in an 1891 letter to the National Observer, which categorised lies into three types:
“Sir, —It has been wittily remarked that there are three kinds of falsehood: the first is a ‘fib,’ the second is a downright lie, and the third and most aggravated is statistics. It is on statistics and on the absence of statistics that the advocate of national pensions relies …”
To find out more, I asked in GNOME’s i18n Matrix room, where Alexandre Franke helped a lot. He said:
Stats are indeed lies, in many ways. Like if GNOME 48 gets 100% translated in your language on Damned Lies, it doesn’t mean the version of GNOME 48 you have installed on your system is 100% translated, because the former is a real time stat for the branch and the latter is a snapshot (tarball) at a specific time. So 48.1 gets released while the translation is at 99%, and then the translators complete the work, but you won’t get the missing translations until 48.2 gets released. It works the other way around too: the translation is at 100% at the time of the release, but then there’s a freeze exception and the stats go to 99% while the released version is at 100%. Or you are looking at an old version of GNOME for which there won’t be any new release, which wasn’t fully translated by the time of the latest release, but then a translator decided that they wanted to see 100% because the incomplete translation was not looking as nice as they’d like, and you end up with Damned Lies telling you that version of GNOME was fully translated when it never was and never will be. All that to say that translators need to learn to work smart, at the right time, on the right modules, and not focus on the stats.

So there you have it: Damned Lies is a name that reminds us that numbers and statistics can be misleading, even on GNOME’s l10n web application.
Sysprof is a tool used to profile applications on Linux. It tracks function calls and other events in the system to provide a detailed view of what is happening, and it can help developers optimize their applications and understand performance issues. Visit Sysprof for more information.
sysprof-ebpf

This is a project I am working on as part of GSoC 2025, mentored by Christian Hergert. The goal is to create a new backend for Sysprof that uses eBPF to collect profiling data. This will mostly serve as groundwork for the coming eBPF capabilities that will be added to Sysprof, and hopefully also as design documentation for anyone reading the Sysprof-eBPF code in the future.
Testing

If you want to test out the current state of the code, you can do so by following these steps:
sysprof-ebpf will be a subprocess created by sysprofd when the user selects the eBPF backend in the UI. I will be adding an options menu in the UI to choose which tracers to activate after I am done with the initial implementation. You can find my current dirty code here. As of writing this blog, the MR has the following capabilities:
I planned on making this a single-threaded process initially, but it dawned on me that not all ring buffers will update at the same time, and polling them sequentially would certainly block IO, so I figured I’d put each tracer in its own DexFuture to do the capture asynchronously. This has not been implemented as of writing this blog, though.
The eBPF programs will follow this block diagram in general. I haven’t made the config hashmap part of it yet, and I think I’ll only add it if it’s required in the future. None of the currently planned features require this config map, but it will certainly be useful when I need to make the program cross-platform or cross-kernel. This will be one of the last things I implement in the project.
Conclusion

I hope to make this a valuable addition to Sysprof. I will be writing more blogs as I make progress on the project. If you have any questions or suggestions, feel free to reach out to me on GitLab or Twitter. I’d absolutely LOVE suggestions on how to improve the design of this project; I am still learning and open to anything that can make it better.
Kinda… GNOME doesn’t have a formal, well-defined policy in place about systemd. The rule of thumb is that GNOME doesn’t strictly depend on systemd for critical desktop functionality, but individual features may break without it.
GNOME does strongly depend on logind, systemd’s session and seat management service. GNOME first introduced support for logind in 2011, then in 2015 ConsoleKit support was removed and logind became a requirement. However, logind can exist in isolation from systemd: the modern elogind service does just that, and even back in 2015 there were alternatives available. Some distributors chose to patch ConsoleKit support back into GNOME. This way, GNOME can run in environments without systemd, including the BSDs.
While GNOME can run with other init systems, most upstream GNOME developers are not testing GNOME in these situations. Our automated testing infrastructure (i.e. GNOME OS) doesn’t test any non-systemd codepaths. And many modules that have non-systemd codepaths do so with the expectation that someone else will maintain them and fix them when they break.
What’s changing?

GNOME is about to gain a few strong dependencies on systemd, and this will make running GNOME harder in environments that don’t have systemd available.
Let’s start with the easier of the changes. GDM is gaining a dependency on systemd’s userdb infrastructure. GNOME and systemd do not support running more than one graphical session under the same user account, but GDM supports multi-seat configurations and Remote Login with RDP. This means that GDM may try to display multiple login screens at once, and thus multiple graphical sessions at once. At the moment, GDM relies on legacy behaviors and straight-up hacks to get this working, but this solution is incompatible with the modern dbus-broker and so we’re looking to clean this up. To that end, GDM now leverages systemd-userdb to dynamically allocate user accounts, and then runs each login screen as a unique user.
In the future, we plan to further depend on userdb by dropping the AccountsService daemon, which was designed to be a stop-gap measure for the lack of a rich user database. 15 years later, this “temporary” solution is still in use. Now that systemd’s userdb enables rich user records, we can start work on replacing AccountsService.
Next, the bigger change. Since GNOME 3.34, gnome-session uses the systemd user instance to start and manage the various GNOME session services. When systemd is unavailable, gnome-session falls back to a builtin service manager. This builtin service manager uses .desktop files to start up the various GNOME session services, and then monitors them for failure. This code was initially implemented for GNOME 2.24, and is starting to show its age. It has received very minimal attention in the 17 years since it was first written. Really, there’s no reason to keep maintaining a bespoke and somewhat primitive service manager when we have systemd at our disposal. The only reason this code hasn’t completely bit rotted is the fact that GDM’s aforementioned hacks break systemd and so we rely on the builtin service manager to launch the login screen.
Well, that has now changed. The hacks in GDM are gone, and the login screen’s session is managed by systemd. This means that the builtin service manager will now be completely unused and untested. Moreover: we’d like to implement a session save/restore feature, but the builtin service manager interferes with that. For this reason, the code is being removed.
So what should distros without systemd do?

First, consider using GNOME with systemd. You’d be running in a configuration supported, endorsed, and understood by upstream. Failing that, though, you’ll need to implement replacements for more systemd components, similar to what you have done with elogind and eudev.
To help you out, I’ve put a temporary alternate code path into GDM that makes it possible to run GDM without an implementation of userdb. When compiled against elogind, instead of trying to allocate dynamic users GDM will look-up and use the gdm-greeter user for the first login screen it spawns, gdm-greeter-2 for the second, and gdm-greeter-N for the Nth. GDM will have similar behavior with the gnome-initial-setup[-N] users. You can statically allocate as many of these users as necessary, and GDM will work with them for now. It’s quite likely that this will be necessary for GNOME 49.
Next: you’ll need to deal with the removal of gnome-session’s builtin service manager. If you don’t have a service manager running in the user session, you’ll need to get one. Just like system services, GNOME session services now install systemd unit files, and you’ll have to replace these unit files with your own service manager’s definitions. Next, you’ll need to replace the “session leader” process: this is the main gnome-session binary that’s launched by GDM to kick off session startup. The upstream session leader just talks to systemd over D-Bus to upload its environment variables and then start a unit, so you’ll need to replace that with something that communicates with your service manager instead. Finally, you’ll probably need to replace “gnome-session-ctl”, which is a tiny helper binary that’s used to coordinate between the session leader, the main D-Bus service, and systemd. It is also quite likely that this will be needed for GNOME 49.
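For illustration, a session service unit hooked into the systemd user instance might look roughly like this (the service name, bus name, and binary path here are made up; the units real GNOME modules install differ):

```ini
# Hypothetical /usr/lib/systemd/user/example-session-service.service
[Unit]
Description=Example GNOME session service (illustrative only)
# Tie the service's lifetime to the graphical session
PartOf=graphical-session.target
After=graphical-session-pre.target

[Service]
Type=dbus
BusName=org.example.SessionService
ExecStart=/usr/libexec/example-session-service
Restart=on-failure
```

A replacement service manager would need equivalent definitions: start the process when the session comes up, track it via its bus name or PID, and tear it down with the session.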
Finally: You should implement the necessary infrastructure for the userdb Varlink API to function. Once AccountsService is dropped and GNOME starts to depend more on userdb, the alternate code path will be removed from GDM. This will happen in some future GNOME release (50 or later). By then, you’ll need at the very least:
Apologies for the short timeline, but this blog post could only be published after I knew exactly how I’m splitting up gnome-session into separate launcher and main D-Bus service processes. Keep in mind that GNOME 48 will continue to receive security and bug fixes until GNOME 50 is released. Thus, if you cannot address these changes in time, you have the option of holding back the GNOME version. If you can’t do that, you might be able to get GNOME 49 running with gnome-session 48, though this is a configuration that won’t be tested or supported upstream, so your mileage will vary (much like running GNOME on other init systems). Still, patching that scenario to work may buy you more time to upgrade to gnome-session 49.
And that should be all for now!
A couple weeks ago I was playing around with a multiple architecture CI setup with another team, and that led me to pull out my StarFive VisionFive 2 SBC again to see where I could make it this time with an install.
I left off about a year ago when I succeeded in getting an older version of Debian on it, but attempts to get the tooling to install a more broadly supported version of U-Boot to the SPI flash were unsuccessful. Then I got pulled away to other things, effectively just bringing my VF2 around to events as a prop for my multiarch talks – which it did beautifully! I even had one conference attendee buy one to play with while sitting in the audience of my talk. Cool.
I was delighted to learn how much progress had been made since I last looked. Canonical has published more formalized documentation: Install Ubuntu on the StarFive VisionFive 2 in the place of what had been a rather cluttered wiki page. So I got all hooked up and began my latest attempt.
My first step was to grab the pre-installed server image. I got that installed, but struggled a little with persistence once I unplugged the USB UART adapter and rebooted. I then decided just to move forward with the Install U-Boot to the SPI flash instructions. I struggled a bit here for two reasons:
And then I had to fly across the country. We’re spending a couple weeks around spring break here at our vacation house in Philadelphia, but the good thing about SBCs is that they’re incredibly portable and I just tossed my gear into my backpack and brought it along.
Thanks to Emil Renner Berthing (esmil) on the Ubuntu Matrix server for providing me with enough guidance to figure out where I had gone wrong above, getting me on my way just a few days after we arrived in Philly.
With the newer U-Boot installed, I was able to use the Ubuntu 24.04 livecd image on a micro SD Card to install Ubuntu 24.04 on an NVMe drive! That’s another new change since I last looked at installation, using my little NVMe drive as a target was a lot simpler than it would have been a year ago. In fact, it was rather anticlimactic, hah!
And with that, I was fully logged in to my new system.
elizabeth@r2kt:~$ cat /proc/cpuinfo
processor : 0
hart : 2
isa : rv64imafdc_zicntr_zicsr_zifencei_zihpm_zba_zbb
mmu : sv39
uarch : sifive,u74-mc
mvendorid : 0x489
marchid : 0x8000000000000007
mimpid : 0x4210427
hart isa : rv64imafdc_zicntr_zicsr_zifencei_zihpm_zba_zbb
It has 4 cores, so here’s the full output: vf2-cpus.txt
What will I do with this little single board computer? I don’t know yet. I joked with my husband that I’d “install Debian on it and forget about it like everything else” but I really would like to get past that. I have my little multiarch demo CI project in the wings, and I’ll probably loop it into that.
Since we were in Philly, I had a look over at my long-neglected Raspberry Pi 1B that I have here. When we first moved in, I used it as an ssh tunnel to get to this network from California. It was great for that! But now we have a more sophisticated network setup between the houses with a VLAN that connects them, so the ssh tunnel is unnecessary. In fact, my poor Raspberry Pi fell off the WiFi network when we switched to 802.1X just over a year ago and I never got around to getting it back on the network. I connected it to a keyboard and monitor and started some investigation. Honestly, I’m surprised the little guy was still running, but it’s doing fine!
And it had been chugging along running Raspbian based on Debian 9. Well, that’s worth an upgrade. But not just an in-place upgrade: I didn’t want to stress the device and SD card, so I figured flashing it with the latest version of Raspberry Pi OS was the right way to go. It turns out, it’s been a long time since I’ve done a Raspberry Pi install.
I grabbed the Raspberry Pi Imager and went on my way. It’s really nice. I went with the Raspberry Pi OS Lite install since it’s a Pi 1B and I didn’t want a GUI. The imager asked the usual installation questions, loaded up my SSH key, and I was ready to boot it on my Pi.
The only thing I still need to sort out is networking. The old USB WiFi adapter I have on it doesn’t initialize until after boot, so wpa_supplicant can’t negotiate with the access point during startup. I’ll have to play around with it. And what will I use this for once I do, now that it’s not an SSH tunnel? I’m not sure yet.
I realize this blog post isn’t very deep or technical, but I guess that’s the point. We’ve come a long way in recent years in support for non-x86 architectures, so installation has gotten a lot easier across several of them. If you’re new to playing around with architectures, I’d say it’s a really good time to start. You can hit the ground running with some wins, and then play around as you go with various things you want to help get working. It’s a lot of fun, and the years I spent playing around with Debian on Sparc back in the day definitely laid the groundwork for the job I have at IBM working on mainframes. You never know where a bit of technical curiosity will get you.