There are many classes of primes, such as left-truncatable primes, twin primes, Mersenne primes, palindromic primes, emirps and so on. The Wikipedia page on primes lists many more. Recently I got to thinking (as one is wont to do) how difficult it would be to come up with a brand new one. The only reliable way to find out is to try it yourself.
The basic loop

The method I used was fairly straightforward:
In magic terminology, a Faro shuffle is one that cuts a deck of cards in half and then interleaves the results. It is also known as a perfect shuffle. There are two different types of Faro shuffle, an in shuffle and an out shuffle. They have the peculiar property that if you keep repeating the same operation, eventually the deck returns to the original order.
A prime p is a Faro prime if all numbers obtained by applying Faro shuffles (either in or out shuffles, but only one type) to its decimal representation are also prime. A Faro prime can be a Faro in prime, a Faro out prime or both. As an example, 19 is a Faro in prime, because a single in shuffle returns it to its original form. It is not a Faro out prime, because out shuffling it produces 91, which is not a prime (91 = 7*13).
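To make the definition concrete, here is a small Python sketch. This is my own code, not from the original post; the naming of in and out shuffles follows the convention implied by the post's examples (an in shuffle leaves 19 unchanged, an out shuffle turns it into 91):

```python
def is_prime(n):
    # simple trial division; fine for small numbers
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def in_shuffle(s):
    # interleave the two halves with the first half leading (1103 -> 1013)
    h = len(s) // 2
    return "".join(a + b for a, b in zip(s[:h], s[h:]))

def out_shuffle(s):
    # interleave with the second half leading (19 -> 91, 111533 -> 513131)
    h = len(s) // 2
    return "".join(b + a for a, b in zip(s[:h], s[h:]))

def shuffle_orbit(n, shuffle):
    # every digit string reachable by repeating the same shuffle, until it loops
    seen, s = [], str(n)
    while s not in seen:
        seen.append(s)
        s = shuffle(s)
    return [int(x) for x in seen]

def is_faro_prime(n, shuffle):
    return all(is_prime(m) for m in shuffle_orbit(n, shuffle))
```

With this, is_faro_prime(19, in_shuffle) holds while is_faro_prime(19, out_shuffle) does not, matching the 19/91 example above.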
My search for prior art was not rigorous, but at least OEIS does not recognize this class.
Statistics

I only used primes with an even number of digits. For an odd number of digits you'd first need to decide how in and out shuffles should work. This is left as an exercise for the reader.
Within the first one million primes, there are 7492 in primes, 775 out primes and 38 that are both in and out primes.
The numbers with one or two digits are not particularly interesting. The first "actual" Faro in prime is 1103. It can be in shuffled once yielding 1013.
For the first out shuffle prime you need to go to 111533, which shuffles to 513131 and then 153113.
The first prime longer than 2 digits that qualifies as both a Faro in and a Faro out prime is 151673. Its in shuffle primes are 165713, 176153 and 117563. The corresponding out shuffle primes are 151673, 617531 and 563117.
Within the first one million primes the largest in shuffle prime is 15484627, the largest out shuffle prime is 11911111 and the largest in and out prime is 987793.
Further questions

As is typical in maths, finding out something immediately raises more questions. For example:
Why are there so many fewer out primes than in primes?
How would this look for primes with an odd number of digits?
Is it possible to build primes by a mixture of in and out shuffles?
Most of the primes do not complete a "full shuffle", that is, they repeat faster than a deck of fully unique playing cards would. For any number n, can you find a Faro prime that requires exactly n shuffles, or is there an upper limit to the number of shuffles?
In my last post on this topic, I explained the history of SVG in GTK, and how I tricked myself into working on an SVG renderer in 2025.
Now we are in 2026, and on the verge of the GTK 4.22 release. A good time to review how far we’ve come.
Testsuites

While working on this over the last year, I was constantly looking for good tests to check my renderer against.
Eventually, I found the resvg testsuite, which has broad coverage and is refreshingly easy to work with. In my unscientific self-evaluation, GtkSvg passes 1250 of the 1616 tests in this testsuite now, which puts GTK one tier below where the web browsers are. It would be nice to catch up with them, but that will require closing some gaps in our rendering infrastructure to support more complex filters.
The resvg testsuite only covers static SVG.
Another testsuite that I’ve used a lot is the much older SVG 1.1 testsuite, which covers SVG animation. GtkSvg passes most of these tests as well, which I am happy about — animation was one of my motivations when going into this work.
https://blogs.gnome.org/gtk/files/2026/02/Screencast-From-2026-02-24-20-45-33.webm

Benchmarks

But doing a perfect job of rendering complex SVG doesn’t do us much good if it slows GTK applications down too much. Recently, we’ve started to look at the performance implications of SVG rendering.
We have a ‘scrolling wall of icons’ benchmark in our gtk4-demo app, which is naturally a good place to test the performance impact of icon rendering changes. When switching it over to GtkSvg, it initially dropped from 60fps to around 40 on my laptop. We’ve since done some optimizations and regained most of the lost fps.
The performance impact on typical applications will be much smaller, since they don’t usually present walls of icons in their UI.
Stressing our rendering infrastructure with some more demanding content was another motivation when I started to work on SVG, so I think I can declare success here.
Content Creators

The new SVG renderer needs new SVGs to take advantage of the new capabilities. Thankfully, Jakub Steiner has been hard at work to update many of the symbolic icons in GNOME.
Others are exploring what we can do with the animation capabilities of the new renderer. Expect these things to start showing up in apps over the next cycle.
Future work

Feature-wise, GtkSvg is more than good enough for all our icon rendering needs, so making it cover more obscure SVG features may not be a big priority in the short term.
GtkSvg will be available in GTK 4.22, but we will not use it for every SVG icon yet — we still have a much simpler symbolic icon parser which is used for icons that are looked up by icon name from an icontheme. Switching over to using GtkSvg for everything is on the agenda for the next development cycle, after we’ve convinced ourselves that we can do this without adverse effects on performance or resource consumption of apps.
Ongoing improvements of our rendering infrastructure will help ensure that that is the case.
Where you can help

One of the most useful contributions is feedback on what does or doesn’t work, so please: try out GtkSvg, and tell us if you find SVGs that are rendered badly or with poor performance!
Update: GtkSvg is an unsandboxed, in-process SVG parser written in C, so we don’t recommend using it for untrusted content — it is meant for trusted content such as icons, logos and other application resources. If you want to load a random SVG of unknown provenance, please use a proper image loading framework like glycin (but still, tell us if you find SVGs that crash GtkSvg).
Of course, contributions to GtkSvg itself are more than welcome too. Here is a list of possible things to work on.
If you are interested in working on an application, the simple icon editor that ships with GTK really needs to be moved to its own project and under separate maintainership. If that sounds appealing to you, please get in touch.
If you would like to support the GNOME Foundation, whose infrastructure and hosting GTK relies on, please donate.
It’s moments of change that remain striking in your memory when you look back. I feel like I’m in a long period of change, and if, like me, you participate in the tech industry and open source, then you probably feel the same. It’s going to be a wild time to look back on.
As humans we’re naturally drawn to exciting new changes. It’s not just the tech industry. The Spanish transport minister recently announced ambitious plans to run trains at record speeds of 350km/h. Then two tragic accidents happened, apparently due to careless infrastructure maintenance. It’s easy (and valid) to criticise the situation. But I can sympathise too. You don’t see many news reports saying “Infrastructure is being maintained really well at the moment and there haven’t been any accidents for years”. We all just take that shit for granted.
This is a “middle aged man states obvious truths” post, so here’s another one we forget in the software world: Automating work doesn’t make it go away. Let’s say you automate a 10 step release process which takes an hour to do manually. That’s pretty great, now at release time you just push a button and wait. Maybe you can get on with some other work meanwhile — except you still need to check that the automation finished and the release published correctly. What if step 5 fails? Now you have to drop your other work again, push that out of your brain and try to remember how the release process worked, which will be hazy enough if you’ve stopped doing releases manually.
Sometimes I’ll take an hour of manual work each month in preference to maintaining a complex, bespoke automation system.
Over time we do build great tools and successfully automate bits of our jobs. Forty or fifty years ago, most computer programmers could write assembly code and do register allocation in their heads. I can’t remember the last time I needed that skill. The C compiler does it for me.
The work of CPU register allocation hasn’t gone away, though. I’ve outsourced the cognitive load to researchers and compiler teams working at places like IBM / Red Hat, Embecosm and Apple who maintain GCC and LLVM.
When I first got into computer programming, at the tail end of the “MOV AX, 10h; INT 13h” era, part of the fun was this idea that you could have wild ideas and simply create them yourself piece by piece, making your own tools, and pulling yourself up by your bootstraps. Look at this teenager who created his own 3D game engine! Look at this crazy dude who made an entire operating system! Now I’m gonna do something cool that will change the world, and then ideally retire.
It took me the longest time to see that this “rock star” development model is all mythology. Just like actual rock stars, in fact. When a musician appears with stylish clothes and a bunch of great songs, the “origin story” is a carefully curated myth. The music world is a diverse community of artists, stylists, mentors, coaches, co-writers, producers, technicians, drivers, promoters, photographers, session musicians and social media experts, constantly trading their skills and ideas and collaborating to make them a reality. Nobody just walks out of their bedroom onto a stage and changes the world. But that doesn’t make for a good press release, does it?
The AI bubble is built on this same myth of the individual creator. I think LLMs are a transformative tool, and computer programming will never be the same; the first time you input some vaguely worded English prompt and get back a working unit test, you see a shining road ahead paved with automation, where you can finally turn ideas into products within days or weeks instead of having to chisel away at them painfully for years.
But here’s the reality: our monkey brains are still the same size. If your new automation is flaky, then you’re going to spend as much time debugging and fixing things as you always did. Doing things the old way may take longer, but the limiting factor was never our typing speed; it was our capacity to understand and communicate new ideas.
“The future belongs to idea guys who can just do things”. No it doesn’t mate, the past, present and future belong to diverse groups of people whose skills and abilities complement each other and who have collectively agreed on some sort of common goal. But that idea doesn’t sell very well.
If and when we do land on a genuinely transformative new tool — something like a C compiler, or hypertext — then I promise you, everyone’s going to be on it in no time. How long did it take for ChatGPT to go from 0 to 1 billion users worldwide?
In all of this, I’ve had an intense few months in a new role at Codethink. It’s been an intense winter too — by some measures Galicia is literally the wettest place on earth right now — so I guess it was a good time to learn new things. Since I rejoined back in 2021 I’ve nearly always been outsourced on different client projects. What I’m learning now is how the company’s R&D division works.
You all know that librsvg is developed in gitlab.gnome.org, not in GitHub. The README prominently says, "PLEASE DO NOT SEND PULL REQUESTS TO GITHUB".
So, of course, today librsvg got its first AI slop pull request and later a second one, both on GitHub. Fortunately (?) they were closed by the same account that opened them, four minutes and one minute after opening them, respectively.
I looked.
There is compiled Python code (nope, that's how you get another xz attack).
There are uncomfortably large Python scripts with jewels like subprocess.run("a single formatted string") (nope, learn to call commands correctly).
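For contrast, the safe pattern is to pass the command as an argument list, so no shell ever parses the string. A minimal sketch (the command here is just a stand-in):

```python
import subprocess
import sys

# Pass an argument list, not a formatted string: the program is executed
# directly, so spaces or shell metacharacters in arguments cannot be
# reinterpreted as extra commands or options.
result = subprocess.run(
    [sys.executable, "-c", "print('hello world')"],
    capture_output=True,
    text=True,
    check=True,  # raise if the command exits nonzero
)
print(result.stdout.strip())
```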
There are two vast JSON files with "suggestions" for branches to make changes to the code, with jewels like:
Suggestions to call standard library functions that do not even exist. The proposed code does not even use the nonexistent standard library function.
Adding enum variants to SVG-specific constructs for things that are not in the SVG spec.
Adding incorrect "safety checks", e.g. replacing assert!(!c_string.is_null()) with if c_string.is_null() { return ""; }.
Fixing a "floating-point overflow"... which is already handled correctly, and with a suggestion to use a function that does not exist.
Adding a cache for something that does not need caching (without an eviction policy (so it is a memory leak)).
Parallelizing the entire rendering process through a 4-line function. Of course this does not work.
Adding two "missing" filters from the SVG spec (they are already implemented), and the implementation is todo!().
It's all like that. I stopped looking, and reported both PRs for spam.
Welcome to another GNOME Foundation update post, covering highlights from the past two weeks (this week and last week). It’s been a busy time, particularly due to conference planning and our upcoming audit – read on to find out more!
Linux App Summit 2026

We were thrilled to be able to announce the location and dates of this year’s Linux App Summit this week. The conference will happen in Berlin on the 16th and 17th of May, at Betahaus Berlin. More information is available on the LAS website.
As usual, we are very pleased to be collaborating with KDE on this year’s LAS. Our partnership on LAS has been a real success that we hope to continue.
Travel sponsorship for LAS 2026 is available for Foundation members through the Travel Committee, so head over to the travel page if you would like to attend and need financial support.
February’s Board meeting

The Board of Directors held its regular monthly meeting last week, on 9th February. Highlights from the meeting included:
The next Board meeting is scheduled for March 9th.
Audit submissions

As I’ve mentioned in previous updates, the GNOME Foundation is due to be audited very soon. This is a routine occurrence for non-profits like us, but this is our first formal audit, so there’s a good deal of learning and setup to be done.
Last week was the deadline to submit all the documentation for the audit, which meant that many of us were extremely busy finalising numbers, filling in spreadsheets, and tidying up other documentation ready to send it all to the auditors.
Our finance team *really* went the extra mile for us to get everything ready on time, so I’d like to give them a huge thank you for helping us out.
The audit inspection itself will happen in the first week of March, so preparations continue, as we assemble and organise our records, update our policies, and so on.
GUADEC 2026

Planning for this summer’s conference has continued over the past two weeks. In case you missed it, the location and dates have been announced, and accommodation bookings are open at a reduced rate. In the background we are gearing up to open the call for papers, and the sponsorship effort is under way. Now is a good time to start thinking about any talk proposals that you’d like to submit.
Membership certificates

A cool community effort is currently underway to provide certificates for GNOME Foundation members. This is a great idea in my opinion, as it will allow contributors to get official recognition which can be used for job applications and so on. More volunteers to help out would definitely be welcome.
That’s it for this week. Thanks for reading, and feel free to ask questions in the comments.
A week ago we discussed free trade, and specifically took a look at the classical mechanism by which free trade is supposed to improve overall outcomes, as measured by GDP.
As I described it, the value proposition of free trade is ambiguous at best: there is an intangible sense that a country might have a higher GDP with lower trade barriers, but with a side serving of misery as international competition forces some local industries to close, and without any guarantee about how that trade advantage would be distributed among the population. Why bother? And why is my news feed full of EU commissioners signing new trade agreements? Where are these ideas coming from?
stave 2

I asked around, placed some orders, and a week later a copy of Marc-William Palen’s Pax Economica came in the mail. I was hoping for a definitive, reasoned argument for or against free trade, from a leftist’s perspective (whatever that means). The book was both more and less than that. Less, in the sense that its narrative is most tightly woven in the century before the end of the second World War, becoming loose and frayed as it breezes through decolonization, the rise of neoliberalism, the end of history, and what it describes as our current neomercantilist moment. Less, also, in that Palen felt no need to summarize the classic economic arguments for free trade, leaving me to clumsily do so in the previous stave. And yet, the story it tells fills in gaps in my understanding that I didn’t even know I had.
To pick up one thread from the book, let’s go back to 1815. British parliament passes the Corn Laws, establishing a price floor for imported grain. This trade barrier essentially imposes a significant tax on all people who eat, to the profit of a small number of landowners. A movement to oppose these laws develops over the next 30 years, with Richard Cobden as its leading advocate. One of the arguments of the Anti-Corn Law League, which was actually a thing, is that cheaper food is good for workers; though wages might decline as bosses realize they don’t need to pay so much to keep their workers alive, relatively speaking workers will do better. More money left over at the end of the month also means more demand for other manufactured products, which is good for growing industry.
In the end, bad harvests in 1845 led to shortages and famine (yes, that one) and eventually a full repeal of the laws in 1846. Perhaps the Anti-Corn Law League’s success was inevitable: a bad harvest year is a stochastic certainty, and not having enough food is the kind of problem no government can ignore. In any case, the episode does prove the Corn Laws to be a front in a class war, and their repeal was a victory for the left, even if it occurred under a Conservative government, and even if the campaign was essentially middle-class Liberal in character.
The repeal campaign was not just about domestic cost of living, however. Its exponents argued that free trade among nations was an essential step to a world of peaceful international cooperation. Palen’s book puts Cobden in context by comparison to Friedrich List, who, inspired by a stint in America in the 1820s, starts from the premise that for a nation to be great, it needs an empire of colonies to exploit, and to conquer and defend colonies, it needs a developed domestic industry and navy; and for a nation to develop its own industry, it needs protectionism. The natural state of empire is not exactly one of free trade with its neighbors.
The “commercial peace” movement that Palen describes cuts List’s argument short at “empire”: because there is no empire without war, a peace movement should scratch away at the causes of war, and the causes of the causes; a world living in pax economica would avoid imperial conflict by tying nations together through trade. It sounds idealistic, and it is, but it’s easy to forget that today we wage war through trade also: blockades and sanctions are often followed by bombs. Palen’s book draws clear lines from Cobdenism through such disparate groups as women’s peace societies, Christian internationalists, pre-war German socialists, and Lenin himself.
Marx understood history as a process of development, consisting of stages through which a society must necessarily pass on its way to socialism. This allies him with capitalism in many ways; he viewed free trade as a step towards a higher form of capitalism, which would necessarily lead to socialism. This, to me, is not a convincing argument in 2026: not only has the mechanistic vision of history failed to bear fruit, but its mechanism of plant closures and capital flight can be cruel and hard to campaign for politically. And yet, I think we do need a healthy dose of internationalism to remedy the ills of the present day: a jolt of ideals and even idealism to remind us that we are all travelling together on this spaceship Earth, and that those on the other side of a political line are just as much our brothers and sisters as those on “our” side.
i went seeking clarity

When you tend Marxist, you know in your bones that although the road to socialism is rough and winding, the winds of history are always at your back; there is an in-baked inevitability of success that softens defeat. There is something similar in the Christian and feminist narrative strands that Palen weaves: a sense not that victory is inevitable, at least in this lifetime, but that fighting for it is a moral imperative, and that God is on your side. The campaign for free trade was a means to a moral end, one of international peace and shared prosperity. And this, in 2026, sounds... good, actually?
Again from our 2026 perspective, I cannot help but agree that a trade barrier is often an act of war; preliminary, yes, but on the spectrum. I have had enough freedom fries in my life to have developed an allergy to anything that tastes of my-side-of-the-line-is-better-than-yours. Though I have not yet read Klein and Pettis’s deliciously titled Trade Wars are Class Wars, I do know that among the 1.5 million people who died as a result of the sanctions on Iraq in the 1990s, Saddam Hussein was not on the list. Sometimes I feel like we learned the lessons of Cobdenism backwards: in order to keep the people starving, we must impose anew the Corn Laws.
Palen’s book leaves me with one doubt, and one big question. The doubt is, to what extent do the lessons of the early 1800s apply today? Ricardo’s contemporary comparative advantage theories presupposed that capital was relatively fixed in space; nowadays this is much less the case. The threat of moving the plant elsewhere is always present in all union drives everywhere. Though history rhymes, it does not repeat; it will take some creativity to transplant pax economica to the soils of the 21st century.
The bigger question, though, is as regards the morality of protectionism as practiced by more and less developed economies: when is it morally right for a country to erect trade barriers? Palen’s book does not pretend to answer this question. And yet, this issue was foremost in our minds, as we shut down Seattle in 1999, as we died in Genoa in 2001. (Forgive the collectivism, if you aren’t of this tribe (yet?), but it was a lived experience.) Free trade was a moral cause in 1835; how did it become immoral in 1995, at least to us?
world without end

Well. To answer that question, we need a history that picks up where Palen leaves off, and we have something like it in Quinn Slobodian’s Globalists, which we will look at next time. But before we go, two reflections.
One, in Europe we have kept the Corn Laws on the books, in a way, in the form of the EU Common Agricultural Policy (CAP). In France the dominant discourse is very much in opposition to the free trade agreement with Mercosur, and the main reason is the threat to French farmers. The tradeoff to get the Mercosur agreement over the line was additional subsidies under the CAP, which are a form of trade barrier. And yet, the way the CAP is structured allocates most of the money in proportion to the surface area of a farm, which is to say, to the largest agribusinesses and to the largest landowners. Greenpeace just put out an excellent briefing arguing that the CAP is just a subsidy to the heirs of the Duchess of Alba and their ilk. Again, are we running the 19th century in reverse?
Secondly, and harder to explain... in the 2000s I listened a lot to an anarchist radio show hosted by Lyn Gerry, Unwelcome Guests. (Have you heard the eponymous tune? It makes me shiver every time.) Anyway I remember one episode which discussed the gift economy and hunter-gatherer economics, in which a researcher asked a member of that community what he would do if he came into a lot of food at one time: the response, as I recall, was that he would store it “in his brother”. He would give it to others. One day, if he needed it, they would give to him.
I know that our world does not work this way, but there is an element of truth here, in that it’s not reasonable for France to grow everything that it eats, to never trade what it grows, to make all its own solar panels, to write all software used within its borders. We live richer lives when we share and learn from each other, without regards to which side of the line our home is.
next

Still here? Gosh me too. Next time we will look at what the kids call the “1900s” and perhaps approach the present day. Until then, commercial peace be with you!
For a few days leading up to FOSDEM 2026, the GNOME OS developers met for a GNOME OS hackfest. Here are some of the things we talked about!
Stable

The first big topic on our to-do list was GNOME OS stable. We started by defining the milestone: we can call GNOME OS “stable” when we settle on a configuration that we’re willing to support long-term. The most important blocker here is systemd-homed: we know that we want the stable release of GNOME OS to use systemd-homed, and we don’t want to have to support pre-homed GNOME OS installations forever. We discussed the possibility of building a migration script to move people onto systemd-homed once it’s ready, but it’s simply too difficult and dangerous to deploy this in practice.
We did, however, agree that we can already start promoting GNOME OS a bit more heavily, provided that we make very clear that this is an unstable product for very early adopters, who are willing to occasionally reinstall their system (or manually migrate it).
We also discussed the importance of project documentation. GNOME OS’s documentation isn’t in a great state at the moment, and this makes it especially difficult to start contributing. BuildStream, which is GNOME OS’s build system, has a workflow that is unfamiliar to most people who may want to contribute. Despite its comprehensive documentation, there’s no easy “quick start” reference for the most common tasks, and so it is ultimately a source of friction for potential contributors. This is especially unfortunate given the current excitement around building next-gen “distroless” operating systems. Our user documentation is also pretty sparse. Finally, the little documentation we do have is spread across different places (markdown committed to git, GitLab Wiki pages, the GNOME OS website, etc.) and this makes it very difficult for people to find.
Fixing /etc

Next we talked about the situation with /etc on GNOME OS. /etc has been a bit of an unsolved problem in the UAPI group’s model of immutability: ideally all default configuration can be loaded from /usr, and so /etc would remain entirely for overrides by the system administrator. Unfortunately, this isn’t currently the case, so we must have some solution to keep track of both upstream defaults and local changes in /etc.
So far, GNOME OS had a complicated set-up where parts of /usr would be symlinked into /etc. To change any of these files, the user would have to break the symlinks and replace them with normal files, potentially requiring copies of entire directories. This would then cause loads of issues, where the broken symlinks cause /etc to slowly drift away from the changing defaults in /usr.
For years, we’ve known that the solution would be overlayfs. This kernel filesystem allows us to mount the OS’s defaults underneath a writable layer for administrator overrides. For various reasons, however, we’ve struggled to deploy this in practice.
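As a rough illustration of the overlayfs idea (the paths below are hypothetical, not GNOME OS’s actual layout): the vendor defaults sit in a read-only lower layer, and administrator edits land in a writable upper layer that shadows them.

```shell
# Hypothetical paths: vendor defaults below, admin overrides above.
# Reads fall through to the lower layer until a file is modified;
# modified copies live in the upper layer and shadow the defaults.
mount -t overlay overlay \
    -o lowerdir=/usr/share/factory/etc,upperdir=/var/lib/etc-overlay/upper,workdir=/var/lib/etc-overlay/work \
    /etc
```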
Modern systemd has native support for this arrangement via systemd-confext, and we decided to just give it a try at the hackfest. A few hours later, Valentin had a merge request to transition us to the new scheme. We’ve now fully rolled this out, and so the issue is solved in the latest GNOME OS nightlies.
FEX and Flatpak

Next, we discussed integrating FEX with Flatpak so that we can run x86 apps on ARM64 devices.
Abderrahim kicked off the topic by telling us about fexwrap, a script that grafts two different Flatpak runtimes together to successfully run apps via FEX. After studying this implementation, we discussed what proper upstream support might look like.
Ultimately, we decided that the first step will be a new Flatpak runtime extension that bundles FEX, the required extra libraries, and the “thunks” (glue libraries that let x86 apps call into native ARM GPU drivers). From there, we’ll have to experiment and see what integrations Flatpak itself needs to make everything work seamlessly.
Abderrahim has already started hacking on this upstream.
Amutable

The Amutable crew were in Brussels for FOSDEM, and a few of them stopped in to attend our hackfest. We had some very interesting conversations! From a GNOME OS perspective, we’re quite excited about the potential overlap between our work and theirs.
We also used the opportunity to discuss GNOME OS, of course! For instance, we were able to resolve some kernel VFS blockers for GNOME OS delta updates and Flatpak v2.
mkosi

For a few years, we’ve been exploring ways to factor out GNOME OS’s image build scripts into a reusable component. This would make it trivial for other BuildStream-based projects to distribute themselves as UAPI.3 DDIs. It would also allow us to ship device-specific builds of GNOME OS, which are necessary to target mobile devices like the Fairphone 5.
At Boiling the Ocean 7, we decided to try an alternative approach. What if we could drop our bespoke image build steps, and just use mkosi? There, we threw together a prototype and successfully booted to login. With the concept proven, I put together a better prototype in the intervening months. This prompted a discussion with Daan, the maintainer of mkosi, and we ultimately decided that mkosi should just have native BuildStream support upstream.
At the hackfest, Daan put together a prototype for this native support. We were able to use his modified build of mkosi to build a freedesktop-sdk BuildStream image, package it up as a DDI, boot it in a virtual machine, set the machine up via systemd-firstboot, and log into a shell. Daan has since opened a pull request, and we’ll continue iterating on this approach in the coming months.
Overall, this hackfest was extremely productive! I think it’s pretty likely that we’ll organize something like this again next year!
Today, a very quick note on dynamic instance type checks in virtual machines with single inheritance.
The problem is that given an object o whose type is t, you want to check if o actually is of some more specific type u. To my knowledge, there are two sensible ways to implement these type checks.
if the set of types is fixed: dfs numbering

Consider a set of types T := {t, u, ...} and a set of edges S := {<t|ε, u>, ...} indicating that t is the direct supertype of u, or ε if u is a top type. S should not contain cycles and is thus a directed acyclic graph rooted at ε.
First, compute a pre-order and post-order numbering for each t in the graph by doing a depth-first search over S from ε. Something like this:
```python
def visit(t, counter):
    t.pre_order = counter
    counter = counter + 1
    for u in S[t]:
        counter = visit(u, counter)
    t.post_order = counter
    return counter
```

Then at run-time, when making an object of type t, you arrange to store the type’s pre-order number (its tag) in the object itself. To test if the object is of type u, you extract the tag from the object and check if (tag − u.pre_order) mod 2^n < u.post_order − u.pre_order.
Two notes, probably obvious but anyway: one, you know the numbering for u at compile-time and so can embed those variables as immediates. Also, if the type has no subtypes, it can be a simple equality check.
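To make the check concrete, here is a self-contained sketch (class and field names are my own invention) that numbers a small hierarchy and performs the wraparound range test; the modulo simulates the unsigned subtraction a compiler would emit:

```python
class Type:
    def __init__(self, name, subtypes=()):
        self.name = name
        self.subtypes = list(subtypes)

def number(t, counter=0):
    # pre_order assigned on entry, post_order on exit: every subtype's
    # tag lands in the half-open interval [t.pre_order, t.post_order)
    t.pre_order = counter
    counter += 1
    for u in t.subtypes:
        counter = number(u, counter)
    t.post_order = counter
    return counter

def is_instance(tag, u, bits=32):
    # unsigned wraparound folds "pre <= tag < post" into one comparison
    return (tag - u.pre_order) % (1 << bits) < u.post_order - u.pre_order

# Tiny hierarchy: top is the supertype of a and b; a is the supertype of c.
c = Type("c")
a = Type("a", [c])
b = Type("b")
top = Type("top", [a, b])
number(top)
```

For example, is_instance(c.pre_order, a) is true while is_instance(b.pre_order, a) is false, since only c's tag falls in a's interval.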
Note that this approach applies only if the set of types T is fixed. This is the case when statically compiling a WebAssembly module in a system that doesn’t allow modules to be instantiated at run-time, like Wastrel. Interestingly, it can also be the case in JIT compilers, when modeling types inside the optimizer.
if the set of types is unbounded: the display hack

If types may be added to a system at run-time, maintaining a sorted set of type tags may be too much to ask. In that case, the standard solution is something I learned of as the display hack, but whose name is apparently ungooglable. It is described in a 4-page technical note by Norman H. Cohen, from 1991: Type-Extension Type Tests Can Be Performed In Constant Time.
The basic idea is that each type t should have an associated sorted array of supertypes, starting with its top type and ending with t itself. Each t also has a depth, indicating the number of edges between it and its top type. A type u is then a subtype of t if t.depth <= u.depth and u[t.depth] = t.
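Here is a minimal sketch of the scheme (class and field names are mine): each type copies its parent’s supertype array and appends itself, so the test is one depth comparison plus one indexed load, regardless of how deep the hierarchy grows.

```python
class Type:
    def __init__(self, name, parent=None):
        self.name = name
        self.depth = 0 if parent is None else parent.depth + 1
        # display[d] is this type's ancestor at depth d; the array
        # starts at the top type and ends with the type itself
        self.display = ([] if parent is None else list(parent.display)) + [self]

def is_subtype(u, t):
    # constant time: bounds check, then a single array load
    return t.depth <= u.depth and u.display[t.depth] is t

# Example hierarchy: top > a > c, and top > b
top = Type("top")
a = Type("a", top)
b = Type("b", top)
c = Type("c", a)
```

With this, is_subtype(c, a) and is_subtype(c, top) hold, while is_subtype(b, a) does not: b’s display entry at depth 1 is b itself, not a.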
There are some tricks one can do to optimize out the depth check, but it’s probably a wash given the check performs a memory access or two on the way. But the essence of the whole thing is in Cohen’s paper; go take a look!
Jan Vitek notes in a followup paper (Efficient Type Inclusion Tests) that Christian Queinnec discovered the technique around the same time. Vitek also mentions the DFS technique, but as prior art, apparently already deployed in DEC Modula-3 systems. The term “display” was bouncing around in the 80s to describe some uses of arrays; I learned it from Dybvig’s implementation of flat closures, who learned it from Cardelli. I don’t know though where “display hack” comes from.
That’s it! If you know of any other standard techniques for type checks with single-inheritance subtyping, do let me know in the comments. Until next time, happy hacking!