If you work with patches and git am, then you’re probably used to seeing patches fail to apply. For example:
```
$ git am CVE-2025-14512.patch
Applying: gfileattribute: Fix integer overflow calculating escaping for byte strings
error: patch failed: gio/gfileattribute.c:166
error: gio/gfileattribute.c: patch does not apply
Patch failed at 0001 gfileattribute: Fix integer overflow calculating escaping for byte strings
hint: Use 'git am --show-current-patch=diff' to see the failed patch
hint: When you have resolved this problem, run "git am --continue".
hint: If you prefer to skip this patch, run "git am --skip" instead.
hint: To restore the original branch and stop patching, run "git am --abort".
hint: Disable this message with "git config set advice.mergeConflict false"
```

This is sad and frustrating because the entire patch has failed, and now you have to apply the entire thing manually. That is no good.
Here is the solution, which I wish I had learned long ago:
```
$ git config --global am.threeWay true
```

This enables three-way merge conflict resolution, the same as if you were using git cherry-pick or git merge. For example:
```
$ git am CVE-2025-14512.patch
Applying: gfileattribute: Fix integer overflow calculating escaping for byte strings
Using index info to reconstruct a base tree...
M	gio/gfileattribute.c
Falling back to patching base and 3-way merge...
Auto-merging gio/gfileattribute.c
CONFLICT (content): Merge conflict in gio/gfileattribute.c
error: Failed to merge in the changes.
Patch failed at 0001 gfileattribute: Fix integer overflow calculating escaping for byte strings
hint: Use 'git am --show-current-patch=diff' to see the failed patch
hint: When you have resolved this problem, run "git am --continue".
hint: If you prefer to skip this patch, run "git am --skip" instead.
hint: To restore the original branch and stop patching, run "git am --abort".
hint: Disable this message with "git config set advice.mergeConflict false"
```

Now you have merge conflicts, which you can handle as usual. This seems like a better default for pretty much everybody, so if you use git am, you should probably enable it.
I’ve no doubt that many readers will have known about this already, but it’s new to me, and it makes me happy, so I wanted to share. You’re welcome, Internet!
I was excited to see Bilal’s announcement of goblint, and I’ve spent the past week getting Crosswords to work with it. This is a tool I’ve always wanted and I’m pretty convinced it will be a great boon for the GNOME ecosystem. I’m posting my notes in hope that more people try it out:
YMMV
Hello there,
You thought I’d given up on “status update” blog posts, didn’t you? I haven’t given up, despite my better judgement; this one is just even later than usual.
Recently I’ve been using my rather obscure platform as a blogger to theorize about AI and the future of the tech industry, mixed with the occasional life update, couched in vague terms, perhaps due to the increasing number of weirdos in the world who think doxxing and sending death threats to open source contributors is a meaningful use of their time.
In fact I do have some theories about how George Orwell (in “Why I Write”) and Italo Calvino (in “If On a Winter’s Night a Traveller”) made some good guesses from the 20th century about how easy access to LLMs would affect communication, politics and art here in the 21st. But I’ll leave that for another time.
It’s also 8 years since I moved to this new country where I live now, driving off the boat in a rusty transit van to enjoy a series of unexpected and amazing opportunities. Next week I’m going to mark the occasion with a five day bike ride through the mountains of Asturias, something I’ve been dreaming of doing for several years.
The original idea of writing a monthly post was to keep tabs on various open source software projects I sometimes manage to contribute to, and perhaps even to motivate me to do more such volunteering. Well that part didn’t work, house renovations and an unexpectedly successful gig playing synth and trombone took over all my free time; but after many years of working on corporate consultancy and doing a little open source in the background, I’m trying to make a space at work to contribute in the open again.
I could tell the whole story here of how Codethink became “the build system people”. Maybe I will actually. It all started with BuildStream. In fact, that’s not even true. It all started in 2011 when some colleagues working with MeeGo and Yocto thought, “This is horrible, isn’t it?”
They set out to create something better, and produced Baserock, which unfortunately turned out even worse. But it did have some good ideas. The concept of “cache keys” to identify build inputs and content-addressed storage to hold build outputs began there, as did the idea of opening a “workspace” to make drive-by changes in build inputs within a large project.
BuildStream took this core idea, extended it to support arbitrary source kinds and element kinds defined by plugins, and added a shiny interface on top. It used OSTree to store and distribute build artifacts initially, later migrating to the Google REAPI with the goal of supporting Enterprise(TM) infrastructure. You can even use it alongside Bazel, if you like having three thousand commandline options at your disposal.
Unfortunately it was 2016, so we wrote the whole thing in Python. (In our defence, the Rust programming language had only recently hit 1.0 and crates.io was still a ghost town, and we’d probably still be rewriting the ruamel.yaml package in Rust if we had taken that road.) But the company did make some great decisions, particularly making it a condition of success for the BuildStream project that it could unify the 5 different build+integration systems that the GNOME release team was maintaining. And that success meant not making a prototype, but the release team actually using BuildStream to make releases. Tristan even ended up joining the GNOME release team for a while. We discussed it all at the 2017 Manchester GUADEC, coincidentally. It was a great time. (Aside from the 6 months leading up to the conference.)
At this point, the Freedesktop SDK already existed, with the same rather terrible name that it has today, and was already the base runtime for this new app container tool that was named… xdg-app. (At least that eventually gained a better name). However, if you can remember 8 years ago, it had a very different form than today. Now, my memory of what happened next is especially hazy at this point, because like I told you in the beginning, I was on a boat with my transit van heading towards a new life in Spain. All I have to go on 8 years later is the Git history, but somehow the Freedesktop SDK grew a 3-stage compiler bootstrap, over 600 reusable BuildStream elements, its own Gitlab namespace, and even some controversial stickers. As a parting gift I apparently added support for building VMs, the idea being that we’d reinstate the old GNOME Continuous CI system that had unfortunately died of neglect several years earlier. This idea got somewhat out of hand, let’s say.
It took me a while to realize this, but today Freedesktop SDK is effectively the BuildStream reference distribution. What Poky is to BitBake in the Yocto project, Freedesktop SDK is to BuildStream. And this is a pretty important insight. It explains the problem you may have experienced with the BuildStream documentation: you want to build some Linux package, so you read through the manual right to the end, and then you still have no fucking idea how to integrate that package.
This isn’t a failure on the part of the authors; instead, the issue is that your princess is in another castle. Every BuildStream project I’ve ever worked on has junctioned freedesktop-sdk.git and re-used the elements, plugins, aliases, configurations and conventions defined there, all of which are rigorously undocumented. The Freedesktop SDK Guide, for reasons that I won’t go into, doesn’t venture much further than reminding you how to call Make targets.
And this is something of a point of inflection. The BuildStream + Freedesktop SDK ecosystem has clearly not displaced Yocto, nor for that matter Linux Mint. But, like many of my favourite musicians, it has been quietly thriving in obscurity. People I don’t know are using it to do things that I don’t completely understand. I’ve seen it in comparison articles, and even job adverts. ChatGPT can generate credible BuildStream elements about as well as it can generate Dockerfiles (i.e. not very well, but it indicates a certain level of ubiquity). There have been conferences, drama, mistakes, neglect. It’s been through an 8-person corporate team hyper-optimizing the code, and it’s been through a mini dark age where volunteers thanklessly kept the lights on almost single-handedly, and it’s even survived its transition to the Apache Foundation.
Through all of this, the secret to its success is probably that it’s just a really nice tool to work with. As much as you can enjoy software integration, I enjoy using BuildStream to do it; things rarely break, when they do it’s rarely difficult to fix them, and most importantly the UI is really colourful! I’m now using it to build embedded system images for a product named CTRL, which you can think of as… a Linux distribution. There are some technical details to this which I’m working to improve, which I won’t bore you with here.
I also won’t bore you with the topic of community governance this month, but that’s what’s currently on my mind. If you’ve been part of the GNOME Foundation for a few years, you’ll know this is something that’s usually boring and occasionally becomes of almost life-or-death importance. The “let’s just be really sound” model works great, until one day when you least expect it, and then suddenly it really doesn’t. There is no perfect defence against this, and in open source communities it’s our diversity that brings the most resilience. When GNOME loses, KDE gains, and that way at least we still don’t have to use Windows. Indeed, this is one argument for investing in BuildStream even if it remains forever something of a minority sport. I guess I just need to remember that when you have to start thinking hard about governance, that’s a sign of success.
It’s a question I had to ask myself multiple times over the last few months. Depending on the context the answer can be:
If you are an app developer, you’re lucky and it’s almost always the first answer. If you develop something with a security boundary which involves files in any way, the correct answer is very likely the second one.
## Opening a File, the Hard Way

Like so often, the details depend on the specifics, but in the worst-case scenario there is a process on either side of the security boundary, and both operate on a filesystem tree that is shared between them.
Let’s say that the process with more privileges operates on a file on behalf of the process with less privileges. You might want to restrict this to files in a certain directory, to prevent the less privileged process from, for example, stealing your SSH key, and thus take a subpath that is relative to that directory.
The first obvious problem is that the subpath can refer to files outside of the directory if it contains “..” components. If the privileged process gets called with a subpath of ../.ssh/id_ed25519, you are in trouble. Easy fix: normalize the path, and fail if it ever escapes the directory.
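That fix can be sketched in a few lines of Python (resolve_subpath is a hypothetical helper name, not from any real library). Note that this deliberately handles only “..” components; the symlink problem still applies.

```python
import os.path

def resolve_subpath(root: str, subpath: str) -> str:
    """Join subpath onto root and fail if the result escapes root.

    This only defends against ".." traversal; it does NOT protect
    against symlinks inside the tree.
    """
    candidate = os.path.normpath(os.path.join(root, subpath))
    if candidate != root and not candidate.startswith(root + os.sep):
        raise ValueError(f"path escapes {root}: {subpath!r}")
    return candidate
```

Handing resolve_subpath("/srv/data", "../.ssh/id_ed25519") to this raises, because the normalized result "/srv/.ssh/id_ed25519" no longer lives under "/srv/data".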
The next issue is that every component of the path might be a symlink. If the privileged process gets called with a subpath of link, and link is a symlink to ../.ssh/id_ed25519, you might be in trouble. If the process with less privileges cannot create files in that part of the tree, it cannot create a malicious symlink, and everything is fine. In all other scenarios, nothing is fine. Easy fix: resolve the symlinks, expand the path, then normalize it.
This is usually where most people think we’re done: opening a file is not that hard after all, and we can all go do more fun things now. Really, this is where the fun begins.
The fix above works as long as the less privileged process cannot change the filesystem tree anywhere in the file’s path while the more privileged process tries to access it. This is usually the case if you unpack an attacker-provided archive into a directory the attacker does not have access to. If it can, however, we have a classic TOCTOU (time-of-check to time-of-use) race.
We have the path foo/id_ed25519: we resolve the symlinks, we expand the path, we normalize it, and while we did all of that, the other process just replaced the regular directory foo that we just checked with a symlink pointing to ../.ssh. We just checked that the path resolves to a location inside the target directory, though, and happily open the path foo/id_ed25519, which now points to your SSH key. Not an easy fix.
So, what is the fundamental issue here? A path string like /home/user/.local/share/flatpak/app/org.example.App/deploy describes a location in a filesystem namespace. It is not a reference to a file. By the time you finish speaking the path aloud, the thing it names may have changed.
The safe primitive is the file descriptor. Once you have an fd pointing at an inode, the kernel pins that inode. The directory can be unlinked, renamed, or replaced with a symlink; the fd does not care. A common misconception is that file descriptors always represent open files: an fd opened with O_PATH does not open the file for I/O, yet still provides a stable reference to its inode.
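You can see the pinning in action with a few lines of Python (a standalone illustration, Unix-only):

```python
import os
import tempfile

# Create a file, grab an fd, then remove the file's only name.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.unlink(path)                 # the path no longer resolves...
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 5)           # ...but the fd still reaches the inode
os.close(fd)
assert data == b"hello"
assert not os.path.exists(path)
```

The name is gone from the namespace, yet the inode stays alive and readable for as long as the fd is held open.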
The lesson that should be learned here is that you should not call any privileged process with a path. Period. Passing in file descriptors also has the benefit that they serve as proof that the calling process actually has access to the resource.
Another important lesson is that dropping down from a file descriptor to a path makes everything racy again. For example, let’s say that we want to bind mount something based on a file descriptor, and we only have the traditional mount API, so we convert the fd to a path and pass that to mount. Unfortunately for the user, the kernel resolves any symlinks an attacker might have managed to place in that path. Sometimes it’s possible to detect the issue after the fact, for example by checking that the inode and device of the mounted file and the file descriptor match.
With that being said, sometimes using paths is not entirely avoidable, so let’s look into that as well!
In the scenario above, we have a directory within which we want all the paths to resolve, and which the attacker does not control. We can thus open it with O_PATH and get a file descriptor for it, without the attacker being able to redirect it somewhere else.
With the openat syscall, we can open a path relative to the fd we just opened. It has all the same issues we discussed above, except that we can also pass O_NOFOLLOW. With that flag set, if the last segment of the path is a symlink, it is not followed; instead the symlink inode itself is opened. All the other components can still be symlinks, and they will still be followed. We can, however, split up the path and open a new file descriptor for each path segment in turn, resolving symlinks manually, until we have done so for the entire path.
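The component-by-component walk can be sketched in Python, which exposes openat via the dir_fd parameter of os.open (Linux-only because of O_PATH; open_beneath is a hypothetical helper name, and real code should prefer openat2 with RESOLVE_BENEATH where available):

```python
import os
import stat

def open_beneath(dirfd: int, path: str) -> int:
    """Walk `path` one component at a time relative to `dirfd`,
    rejecting ".." and any symlink along the way. Returns an
    O_PATH fd for the final component."""
    fd = os.dup(dirfd)
    try:
        for part in path.split("/"):
            if part in ("", "."):
                continue
            if part == "..":
                raise ValueError("'..' components are not allowed")
            nfd = os.open(part, os.O_PATH | os.O_NOFOLLOW | os.O_CLOEXEC,
                          dir_fd=fd)
            os.close(fd)
            fd = nfd
            # With O_PATH|O_NOFOLLOW a symlink component is opened as the
            # symlink inode itself rather than followed; detect and reject.
            if stat.S_ISLNK(os.fstat(fd).st_mode):
                raise OSError(f"symlink encountered at {part!r}")
    except BaseException:
        os.close(fd)
        raise
    return fd
```

Because every step is relative to an fd we already hold, an attacker renaming or replacing directories mid-walk can no longer redirect us outside the tree; the worst they can do is make the open fail.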
## libglnx chase

libglnx is a utility library for GNOME C projects that provides fd-based filesystem operations as its primary API. Functions like glnx_openat_rdonly, glnx_file_replace_contents_at, and glnx_tmpfile_link_at all take directory fds and operate relative to them. The library is built around the discipline of “always have an fd, never use an absolute path when you can use an fd.”
The most recent addition is glnx_chaseat, which provides safe path traversal. It was inspired by systemd’s chase() and does precisely what was described above.
```c
int glnx_chaseat (int dirfd, const char *path, GlnxChaseFlags flags, GError **error);
```

It returns an O_PATH | O_CLOEXEC fd for the resolved path, or -1 on error. The real magic is in the flags:
```c
typedef enum _GlnxChaseFlags {
  /* Default */
  GLNX_CHASE_DEFAULT = 0,

  /* Disable triggering of automounts */
  GLNX_CHASE_NO_AUTOMOUNT = 1 << 1,

  /* Do not follow the path's right-most component. When the path's right-most
   * component refers to symlink, return O_PATH fd of the symlink. */
  GLNX_CHASE_NOFOLLOW = 1 << 2,

  /* Do not permit the path resolution to succeed if any component of the
   * resolution is not a descendant of the directory indicated by dirfd. */
  GLNX_CHASE_RESOLVE_BENEATH = 1 << 3,

  /* Symlinks are resolved relative to the given dirfd instead of root. */
  GLNX_CHASE_RESOLVE_IN_ROOT = 1 << 4,

  /* Fail if any symlink is encountered. */
  GLNX_CHASE_RESOLVE_NO_SYMLINKS = 1 << 5,

  /* Fail if the path's right-most component is not a regular file */
  GLNX_CHASE_MUST_BE_REGULAR = 1 << 6,

  /* Fail if the path's right-most component is not a directory */
  GLNX_CHASE_MUST_BE_DIRECTORY = 1 << 7,

  /* Fail if the path's right-most component is not a socket */
  GLNX_CHASE_MUST_BE_SOCKET = 1 << 8,
} GlnxChaseFlags;
```

While it doesn’t sound too complicated to implement, a lot of the details are quite hairy. The implementation uses openat2, open_tree and openat depending on what is available and what behavior was requested; it handles auto-mount behavior, ensures that previously visited paths have not changed, and a few other things.
## An Aside on Standard Libraries

The POSIX APIs are not great at dealing with the issue. The GLib/GIO APIs (GFile, etc.) are even worse and only accept paths. Granted, they also serve as a cross-platform abstraction where file descriptors are not a universal concept. Unfortunately, Rust also has this cross-platform abstraction, which is based entirely on paths.
If you use any of those APIs, you very likely created a vulnerability. The deeper issue is that those path-based APIs are often the standard way to interact with files. This makes it impossible to reason about the security of composed code. You can audit your own code meticulously, open everything with O_PATH | O_NOFOLLOW, chain *at() calls carefully — and then call a third-party library that calls open(path) internally. The security property you established in your code does not compose through that library call.
This means that any system-level code that cares about filesystem security has to audit all transitive dependencies or avoid them in the first place.
So what would a better GLib cross-platform API look like? I would say not too different from chaseat(), but returning opaque handles instead of file descriptors, which on Unix would carry the O_PATH file descriptor and a path that can be used for printing, debugging and things like that. You would open files from those handles, which would yield another kind of opaque handle for reading, writing, and so on.
The current GFile was also designed to implement GVfs: g_file_new_for_uri("smb://server/share/file") gives you a GFile you can g_file_read() just like a local file. This is the right goal, but the wrong abstraction layer. Instead, this kind of access should be provided by FUSE, and the URI should be translated to a path on a specific FUSE mount. This would provide a few benefits:
Nowadays I maintain a small project called Flatpak. Codean Labs recently did a security analysis of it and found a number of issues. Even though Flatpak’s developers were aware of the dangers of filesystems, and created libglnx because of them, most of the discovered issues were of exactly this kind. One of them (CVE-2026-34078) was a complete sandbox escape.
flatpak run was designed as a command-line tool for trusted users. When you type flatpak run org.example.App, you control the arguments. The code that processes the arguments was written assuming the caller is legitimate. It accepted path strings, because that’s what command-line tools accept.
The Flatpak portal was then built as a D-Bus service that sandboxed apps could call to start subsandboxes — and it did this by effectively constructing a flatpak run invocation and executing it. This connected a component designed for trusted input directly to an untrusted caller (the sandboxed app).
Once that connection exists, every assumption baked into flatpak run about caller trustworthiness becomes a potential vulnerability. The fix wasn’t “change one function” — it was “audit the entire call chain from portal request to bubblewrap execution and replace every path string with an fd.” That’s commits touching the portal, flatpak-run, flatpak_run_app, flatpak_run_setup_base_argv, and the bwrap argument construction, plus new options (--app-fd, --usr-fd, --bind-fd, --ro-bind-fd) threaded through all of them.
If the GLib standard file and path APIs were secure, we would not have had this issue.
Another annoyance here is that the entire subsandboxing approach in Flatpak comes from 15 years ago, when unprivileged user namespaces were not common. Nowadays we could (and should) let apps use kernel-native unprivileged user namespaces to create their own subsandboxes.
Unfortunately with rather large changes comes a high likelihood of something going wrong. For a few days we scrambled to fix a few regressions that prevented Steam, WebKit, and Chromium-based apps from launching. Huge thanks to Simon McVittie!
In the end, we managed to fix everything, made Flatpak more secure, the ecosystem is now better equipped to handle this class of issues, and hopefully you learned something as well.
In the past I have written many blog posts on implementing various PDF features in CapyPDF. Typically they explain the feature being implemented, how confusing the documentation is, what perverse undocumented quirks one has to work around to get things working and so on. To save the effort of me writing and you reading yet another post of the same type, let me just say that you can now use CapyPDF to generate PDF forms that have widgets like text fields and radio buttons.
What makes this post special is that forms and widget annotations were pretty much the last major missing PDF feature. Does that mean that it supports everything? No. Of course not. There is a whole bunch of subtlety to consider. Let's start with the fact that the PDF spec is massive, close to 1000 pages. Among its pages are features that are either not used or have been replaced by other features and deprecated.
The implementation principle of CapyPDF thus far has been "implement everything that needs special tracking, but only to the minimal level needed". This seems complicated but is in fact quite simple. As an example the PDF spec defines over 20 different kinds of annotations. Specifying them requires tracking each one and writing out appropriate entries in the document metadata structures. However once you have implemented that for one annotation type, the same code will work for all annotation types. Thus CapyPDF has only implemented a few of the most common annotations and the rest can be added later when someone actually needs them.
Many objects have lots of configuration options which are defined by adding keys and values to existing dictionaries. Again, only the most common ones are implemented, the rest are mostly a matter of adding functions to set those keys. There is no cross-referencing code that needs to be updated or so on. If nobody ever needs to specify the color with which a trim box should be drawn in a prepress preview application, there's no point in spending effort to make it happen.
The API should be mostly done, especially for drawing operations. The API for widgets probably needs to change, especially since form submission actions are not done. I don't know if anything actually uses those, though. That work can be done based on user feedback.
When I have to play with a container image I have never met before, I like to deploy it on a test cluster to poke and prod it. I usually did that on a k3s cluster, but recently I've moved to Minikube to bring my test cluster with me when I'm on the go.
Minikube is a tiny one-node Kubernetes cluster meant to run on development machines. It's useful to test Deployments or StatefulSets with images you are not familiar with and build proper helm charts from them.
It provides volumes of the hostPath type by default. The major caveat of hostPath volumes is that they're owned by root by default.
I usually handle mismatched ownership with a securityContext like the following to instruct the container to run with a specific UID and GID, and to make the volume owned by a specific group.
Typically in a StatefulSet it looks like this:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
  # [...]
spec:
  # [...]
  template:
    # [...]
    spec:
      securityContext:
        runAsUser: 10001
        runAsGroup: 10001
        fsGroup: 10001
      containers:
        - name: myapp
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        # [...]
```

In this configuration:
The securityContext usually solves the problem, but that's not how hostPath works. For hostPath volumes, the securityContext.fsGroup property is silently ignored.
## Init Container to the Rescue!
The solution in this specific case is to use an initContainer running as root to chown the volume mounts to the unprivileged user.
In practice it will look like this.
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
  # [...]
spec:
  # [...]
  template:
    # [...]
    spec:
      securityContext:
        runAsUser: 10001
        runAsGroup: 10001
        fsGroup: 10001
      initContainers:
        - name: fix-perms
          image: busybox
          command: ["sh", "-c", "chown -R 10001:10001 /data"]
          securityContext:
            runAsUser: 0
          volumeMounts:
            - name: data
              mountPath: /data
      containers:
        - name: myapp
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        # [...]
```

It took me a little while to figure it out, because I was used to testing my StatefulSets on k3s. K3s uses a local path provisioner, which gives me local volumes, not hostPath ones like Minikube.
In production I don't need the initContainer to fix permissions since I'm deploying this on an EKS cluster.
After wrapping up a four-part series on free trade and the left, I thought I was done with neoliberalism. I had come to the conclusion that neoliberals were simply not serious people: instead of placing value in literally any human concern, they value only a network of trade, and as such, cannot say anything of value. They should be ignored in public debate; we can find economists elsewhere.
I based this conclusion partly on Quinn Slobodian’s Globalists (2020), which describes Friedrich Hayek’s fascination with cybernetics in the latter part of his life. But Hayek himself died before the birth of the WTO, NAFTA, all the institutions “we” fought in Seattle; we fought his ghost, living on past its time.
Well, like I say, I thought I was done, but then a copy of Slobodian’s Hayek’s Bastards (2025) arrived in the post. The book contests the narrative that the right-wing “populism” we have seen in the last couple of decades is an exogenous reaction to elite technocratic management under high neoliberalism, arguing instead that it proceeds from a faction of the neoliberal project. It’s easy to infer a connection when we look at, say, Javier Milei’s background and cohort, but Slobodian delicately unpicks the weft to expose the tensile fibers linking the core neoliberal institutions to the alt-right. Tonight’s note is a book review of sorts.
## after hayek

Let’s back up a bit. Slobodian’s argument in Globalists was that neoliberalism is not really about laissez-faire as such: it is a project to design institutions of international law to encase the world economy, to protect it from state power (democratic or otherwise) in any given country. It is paradoxical, because such an encasement requires state power, but it is what it is.
Hayek’s Bastards is also about encasement, but instead of protection from the state, the economy was to be protected from debasement by the unworthy. (Also there is a chapter on goldbugs, but that’s not what I want to talk about.)
The book identifies two major crises that push a faction of neoliberals to ally themselves with a culturally reactionary political program. The first is the civil rights movement of the 1960s and 1970s, together with decolonization. To put it crudely, whereas before, neoliberal economists could see themselves as acting in everyone’s best interest, having more black people in the polity made some of these white economists feel like their project was being perverted.
Faced with this “crisis”, at first the reactionary neoliberals reached out to race: the infant post-colonial nations were unfit to participate in the market because their peoples lacked the cultural advancement of the West. Already Globalists traced a line through Wilhelm Röpke’s full-throated defense of apartheid, but the subjects of Hayek’s Bastards (Lew Rockwell, Charles Murray, Murray Rothbard, et al) were more subtle: instead of directly stating that black people were unfit to govern, Murray et al argued that intelligence was the most important quality in a country’s elite. It just so happened that they also argued, clothed in the language of evolutionary psychology and genetics, that black people are less intelligent than white people, and so it is natural that they not occupy these elite roles, that they be marginalized.
Before proceeding, three parentheses:
Some words have a taste. Miscegenation tastes like the juice at the bottom of a garbage bag left out in the sun: to racists, because of the visceral horror they feel at the touch of the other, and to the rest of us, because of the revulsion the very idea provokes.
I harbor an enmity toward Sylvia Plath because of The Bell Curve. She bears no responsibility; her book was The Bell Jar. I know this in my head but my heart will not listen.
I do not remember the context, but I remember a professor in university telling me that the notion of “race” is a social construction without biological basis; it was an offhand remark that was new to me then, and one that I still believe now. Let’s make sure the kids hear the good word now too; stories don’t tell themselves.
The second crisis of neoliberalism was the fall of the Berlin Wall: some wondered if the negative program of deregulation and removal of state intervention was missing a positive putty with which to re-encase the market. It’s easy to stand up on a stage with a chainsaw, but without a constructive program, neoliberal wins in one administration are fragile in the next.
The reactionary faction of neoliberalism’s turn to “family values” responds to this objective need, and dovetails with the reaction to the civil rights movement: to protect the market from the unworthy, neo-reactionaries worked to re-orient the discourse, and then state policy, away from “equality” and the idea that We Should Improve Society, Somewhat. Moldbug’s neofeudalism is an excessive rhetorical joust, but one that has successfully moved the window of acceptable opinions. The “populism” of the AfD or the recent Alex Karp drivel is not a reaction, then, to neoliberalism, but a reaction by a faction of neoliberals to the void left after communism. (And when you get down to it, what is the difference between Moldbug nihilistically rehashing Murray’s “black people are low-IQ” and Larry Summers’ “countries in Africa are vastly UNDER-polluted”?)
## thots

Slobodian shows remarkable stomach: his object of study is revolting. He has truly done the work.
For all that, Hayek’s Bastards left me with a feeling of indigestion: why bother with the racism? Hayek himself had a thesis of sorts, woven through his long career, that there is none of us that is smarter than the market, and that in many (most?) cases, the state should curb its hubris, step back, and let the spice flow. Prices are a signal, axons firing in an ineffable network of value, sort of thing. This is a good thesis! I’m not saying it’s right, but it’s interesting, and I’m happy to engage with it and its partisans.
So why do Hayek’s bastards reach for racism? My first thought is that they are simply not worthy: Charles Murray et al are intellectually lazy and moreover base. My lip curls to think about them in any serious way. I can’t help but recall the DARVO tactic of abusers; neo-reactionaries blame “diversity” for “debasing the West”, but it is their ignorant appeals to “race science” that are without basis.
Then I wonder: to what extent is this all an overworked intellectual retro-justification for something they wanted all along? When Mises rejoiced in the violent defeat of the 1927 strike, he was certainly not against state power per se; but was he for the market, or was he just against a notion of equality?
I can only conclude that things are confusing. “Mathematical” neoliberals exist, and don’t need to lean on racism to support their arguments. There are also the alt-right/neo-reactionaries, who grew out of neoliberalism, not in opposition to it: no seasteader is a partisan of autarky. They go to the same conferences. It is a baffling situation.
While it is all the more reason to ignore them both, intellectually, Slobodian’s book shows that politically we on the left have our work cut out for us, both in deconstructing the new racism of the alt-right, and in advocating for a positive program of equality to take its place.
I am very happy to announce a new version of Casilda!
A simple Wayland compositor widget for Gtk 4.
This release comes with several new features, bug fixes and extra polish that make it start to feel like a proper compositor.
It all started with a quick 1.2 release to port it to wlroots 0.19, because 0.18 was removed from Debian. While doing this on my new laptop I was able to reproduce a texture leak crash, which led to 1.2.1 and a fix in Gtk by Benjamin to support Vulkan drivers that return dmabufs with fewer fds than planes.
At this point I was invested, so I decided to fix the rest of the issues in the backlog…
Fractional scale
Casilda only supported integer scales, not fractional ones, so you could set your display scale to 200% but not 125%.
For reference, this is how gtk4-demo looks at 100%, or scale 1, where one application/logical pixel corresponds to one device/display pixel.
*** Keep in mind it’s preferable to view all the following images at full size, without fractional scaling applied to them ***
Clients would render at the next integer scale if the application was started with a fractional scale set…
Or the client would render at scale 1 and look blurry if you switched from 1 to a fractional scale.
In both cases the input did not match the rendered window, making the application really broken.
So if the client application draws a 4 logical pixel border, it will be 5 pixels in the backing texture; this means that 1 logical pixel corresponds to 1.25 device pixels. So in order for things to look sharp, CasildaCompositor needs to make sure the coordinates it uses for positioning the client window align with the device pixel grid.
My first attempt was to do
((int)x * scale) / scale

but that still looked blurry, and that is because I assumed window coordinate 0,0 was the same as its backing surface coordinate 0,0; that is not the case, because I forgot about the window shadow. Luckily there is API to get the offset; then all you have to do is add the logical position of the compositor widget and you get the surface origin coordinates:
gtk_native_get_surface_transform (GTK_NATIVE (root), &surface_origin_x, &surface_origin_y);

/* Add widget offset */
if (gtk_widget_compute_point (self, GTK_WIDGET (root),
                              &GRAPHENE_POINT_INIT (0, 0), &out_point))
  {
    surface_origin_x += out_point.x;
    surface_origin_y += out_point.y;
  }

Once I had that I could finally calculate the right position:
/* Snap logical coordinates to device pixel grid */
if (scale > 1.0)
  {
    x = floorf ((x + surface_origin_x) * scale) / scale - surface_origin_x;
    y = floorf ((y + surface_origin_y) * scale) / scale - surface_origin_y;
  }

And this is how it looks now with 1.25 fractional scale.
Keyboard layouts
Another missing feature was support for different keyboard layouts, so that switching layouts would work on clients too. Not really important for Cambalache, but definitely necessary for a generic compositor.
Popup positioners
Casilda now sends clients all the necessary information for positioning popups so that they do not get cut off at the edge of the display area, which is a nice thing to have.
Cursor shape protocol
Current versions of Gtk 4 require the cursor shape protocol on Wayland; otherwise they fall back to 32×32 pixel cursors, which might not be the same size as your system cursors and look blurry with fractional scales.
With this protocol the client sends a cursor id instead of a pixel buffer when it wants to change the cursor.
This was really easy to implement, as all I had to do was call

gtk_widget_set_cursor_from_name (compositor, wlr_cursor_shape_v1_name (event->shape));

Greetings
As usual this would not have been possible without the help of the community; special thanks to emersion, Matthias and Benjamin for their help and support.
Release Notes
Source code lives on GNOME gitlab here

git clone https://gitlab.gnome.org/jpu/casilda.git

Matrix channel
Have any questions? Come chat with us at #cambalache:gnome.org

Mastodon
Follow me on Mastodon @xjuan to get news related to Casilda and Cambalache development.
Happy coding!
If you have recently installed a very up-to-date Linux distribution with a desktop environment, or upgraded your system on a rolling-release distribution, you might have noticed that your home directory has a new folder: “Projects”.
Why?
With the recent 0.20 release of xdg-user-dirs we enabled the “Projects” directory by default. Support for this has already existed since 2007, but was never formally enabled. This closes a more than 11 year old bug report that asked for this feature.
The purpose of the Projects directory is to give applications a default location to place project files that do not cleanly belong in one of the existing categories (Documents, Music, Pictures, Videos). Examples of this are software engineering projects, scientific projects, 3D printing projects, CAD design or even things like video editing projects, where project files would end up in the “Projects” directory, with the output video being more at home in “Videos”.
By enabling this by default, and subsequently in the coming months adding support to GLib, Flatpak, desktops and applications that want to make use of it, we hope to give applications that operate in a “project-centric” manner with mixed media a better default storage location. As of now, those tools either default to the home directory, or clutter the “Documents” folder, neither of which is ideal. It also gives users a default organization structure, hopefully leading to less clutter overall and better storage layouts.
This sucks, I don’t like it!
As usual, you are in control and can modify your system’s behavior. If you do not like the “Projects” folder, simply delete it! The xdg-user-dirs utility will not try to create it again, and will instead adjust the default location for this directory to your home directory. If you want more control, you can influence exactly what goes where by editing your ~/.config/user-dirs.dirs configuration file.
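For reference, user-dirs.dirs holds shell-style assignments, one per directory. A sketch of what relocating the new directory might look like (the path is purely illustrative):

```shell
# ~/.config/user-dirs.dirs — values are shell-expanded, so $HOME works.
XDG_DOCUMENTS_DIR="$HOME/Documents"
# Illustrative: point the new Projects directory somewhere else.
XDG_PROJECTS_DIR="$HOME/work/projects"
```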
If you are a system administrator or distribution vendor and want to set default locations for the default XDG directories, you can edit the /etc/xdg/user-dirs.defaults file to set global defaults that affect all users on the system (users can still adjust the settings however they like though).
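The system-wide defaults file uses plain KEY=path entries instead, with paths relative to each user's home directory. A sketch of such an edit (entries shown are examples, not the full shipped file):

```shell
# /etc/xdg/user-dirs.defaults — paths are relative to each user's $HOME.
DOCUMENTS=Documents
PROJECTS=Projects
```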
What else is new?
Besides this change, the 0.20 release of xdg-user-dirs brings full support for the Meson build system (dropping Automake), translation updates, and some robustness improvements to its code. We also fixed the “arbitrary code execution from unsanitized input” bug that the Arch Linux Wiki mentions for the xdg-user-dirs utility, by replacing the shell script with a C binary.
Thanks to everyone who contributed to this release!