
GNOME Shell and Mutter Development: What is new in GNOME Kiosk 50

Planet GNOME - Wed, 01/04/2026 - 11:06 AM

GNOME Kiosk, the lightweight, specialized compositor, continues to evolve in GNOME 50, adding new configuration options and improving accessibility.

Window configuration

User configuration file monitoring

The user configuration file gets reloaded when it changes on disk, so that it is not necessary to restart the session.

New placement options

New configuration options to constrain windows to monitors or regions on screen have been added:

  • lock-on-monitor: lock a window to a monitor.
  • lock-on-monitor-area: lock to an area relative to a monitor.
  • lock-on-area: lock to an absolute area.

These options are intended to replicate the legacy "Zaphod" mode from X11, where windows could be tied to a specific monitor. They go even further, allowing windows to be locked to a specific area on screen.

The window/monitor association is also preserved when a monitor is disconnected. Consider a multi-monitor setup where each monitor shows a different timetable. If one of the monitors is disconnected (for whatever reason), the timetable shown on that monitor should not move to a remaining monitor. The lock-on-monitor option prevents that.
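For illustration, a per-window section in the user configuration file could combine these options roughly like this. This is a hypothetical sketch: the section name, the monitor value, and the exact key syntax shown here are illustrative placeholders, and the authoritative format is documented in the project's CONFIG.md.

```ini
# Hypothetical window-config fragment; see CONFIG.md for the real syntax.
[org.example.Timetable]
# Keep this window on the second monitor, even if other monitors are unplugged
lock-on-monitor=DP-2
```

Since the user configuration file is reloaded when it changes on disk, an edit like this takes effect without restarting the session.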

Initial map behavior was tightened

Clients can resize or change their state before the window is mapped, so the size, position, and fullscreen state set in the configuration could be skipped. Kiosk now makes sure to apply the configured size, position, and fullscreen state on first map when the initial configuration was not applied reliably.

Auto-fullscreen heuristics were adjusted
  • Only normal windows are considered when checking whether another window already covers the monitor (avoids false positives from e.g. xwaylandvideobridge).
  • The current window is excluded when scanning “other” fullscreen sized windows (fixes Firefox restoring monitor-sized geometry).
  • Maximized or fullscreen windows are no longer treated as non-resizable, so toggling fullscreen still works when the client had already maximized itself.
Compositor behavior and command-line options

New command line options have been added:

  • --no-cursor: hides the pointer.
  • --force-animations: forces animations to be enabled.
  • --enable-vt-switch: restores VT switching with the keyboard.

The --no-cursor option can be used to hide the pointer cursor entirely for setups where user input does not involve a pointing device (it is similar to the -nocursor option in Xorg).

Animations can now be disabled using the desktop settings, and will also be automatically disabled for performance reasons when the backend reports no hardware-accelerated rendering. The --force-animations option can be used to forcibly enable animations in that case, similar to GNOME Shell.

The native keybindings, which include the VT switching keyboard shortcuts, are now disabled by default for kiosk hardening. Applications that rely on the user being able to switch to another console VT on Linux, such as Anaconda, will need to explicitly re-enable VT switching using --enable-vt-switch in their session.

These options need to be passed on the command line starting gnome-kiosk, which implies updating the systemd unit files or, better, creating custom ones (modeled on those provided with the GNOME Kiosk sessions).
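For example, a systemd user drop-in could append options to the compositor's command line. This is a sketch with a hypothetical unit name and binary path; the real names are those of the unit files shipped with the GNOME Kiosk sessions.

```ini
# ~/.config/systemd/user/gnome-kiosk.service.d/options.conf (hypothetical path)
[Service]
# An empty ExecStart= first clears the original command line,
# which is how systemd drop-ins override ExecStart.
ExecStart=
ExecStart=/usr/bin/gnome-kiosk --no-cursor
```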

Accessibility

Accessibility panel

An example of an accessibility panel is now included, to control the platform accessibility settings with a GUI. It is a simple Python application using GTK4.

(The gsettings options are also documented in the CONFIG.md file.)

Screen magnifier

Desktop magnification is now implemented, using the same settings as the rest of the GNOME desktop (namely screen-magnifier-enabled, mag-factor, see the CONFIG.md file for details).

It can be enabled from the accessibility panel or with keyboard shortcuts through the gnome-settings-daemon “media-keys” plugin.

Accessibility settings

The default systemd session units now start the gnome-settings-daemon accessibility plugin so that Orca (the screen reader) can be enabled through the dedicated keyboard shortcut.

Notifications
  • A new, optional notification daemon implements org.freedesktop.Notifications and org.gtk.Notifications using GTK 4 and libadwaita.
  • A small utility to send notifications via org.gtk.Notifications is also provided.
Input sources

GNOME Kiosk was ported to Mutter’s new keymap API, which allows remote desktop servers to mirror the keyboard layout used on the client side.

Session files and systemd
    • X-GDM-SessionRegister is now set to false in kiosk sessions as GNOME Kiosk does not register the session itself (unlike GNOME Shell). That fixes a hang when terminating the session.
    • Script session: systemd is no longer instructed to restart the session when the script exits, so that users can logout of the script session when the script terminates.

Matthew Garrett: Self hosting as much of my online presence as practical

Planet GNOME - Wed, 01/04/2026 - 4:35 AM

Because I am bad at giving up on things, I’ve been running my own email server for over 20 years. Some of that time it’s been a PC at the end of a DSL line, some of that time it’s been a Mac Mini in a data centre, and some of that time it’s been a hosted VM. Last year I decided to bring it in house, and since then I’ve been gradually consolidating as much of the rest of my online presence as possible on it. I mentioned this on Mastodon and a couple of people asked for more details, so here we are.

First: my ISP doesn’t guarantee a static IPv4 unless I’m on a business plan and that seems like it’d cost a bunch more, so I’m doing what I described here: running a Wireguard link between a box that sits in a cupboard in my living room and the smallest OVH instance I can, with an additional IP address allocated to the VM and NATted over the VPN link. The practical outcome of this is that my home IP address is irrelevant and can change as much as it wants - my DNS points at the OVH IP, and traffic to that all ends up hitting my server.
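That forwarding can be sketched roughly as follows. All addresses and key placeholders here are illustrative, not the actual setup; the rules assume iptables on the OVH VM, and IP forwarding (net.ipv4.ip_forward=1) also has to be enabled there.

```ini
# Hypothetical /etc/wireguard/wg0.conf on the OVH VM
[Interface]
Address = 10.0.0.1/24
PrivateKey = <vm-private-key>
ListenPort = 51820
# Send traffic arriving on the additional public IP (203.0.113.50 here)
# down the tunnel, and rewrite replies so they appear to come from it
PostUp = iptables -t nat -A PREROUTING -d 203.0.113.50 -j DNAT --to-destination 10.0.0.2
PostUp = iptables -t nat -A POSTROUTING -s 10.0.0.2 -o eth0 -j SNAT --to-source 203.0.113.50

[Peer]
# The box in the living-room cupboard
PublicKey = <home-server-public-key>
AllowedIPs = 10.0.0.2/32
```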

The server itself is pretty uninteresting. It’s a refurbished HP EliteDesk which idles at 10W or so, along with 2TB of NVMe and 32GB of RAM that I found under a pile of laptops in my office. We’re not talking rackmount Xeon levels of performance, but it’s entirely adequate for everything I’m doing here.

So. Let’s talk about the services I’m hosting.

Web

This one’s trivial. I’m not really hosting much of a website right now, but what there is is served via Apache with a Let’s Encrypt certificate. Nothing interesting at all here, other than the proxying that’s going to be relevant later.

Email

Inbound email is easy enough. I’m running Postfix with a pretty stock configuration, and my MX records point at me. The same Let’s Encrypt certificate is there for TLS delivery. I’m using Dovecot as an IMAP server (again with the same cert). You can find plenty of guides on setting this up.

Outbound email? That’s harder. I’m on a residential IP address, so if I send email directly nobody’s going to deliver it. Going via my OVH address isn’t going to be a lot better. I have a Google Workspace, so in the end I just made use of Google’s SMTP relay service. There are various commercial alternatives available, I just chose this one because it didn’t cost me anything more than I’m already paying.

Blog

My blog is largely static content generated by Hugo. Comments are Remark42 running in a Docker container. If you don’t want to handle even that level of dynamic content you can use a third party comment provider like Disqus.

Mastodon

I’m deploying Mastodon pretty much along the lines of the upstream compose file. Apache is proxying /api/v1/streaming to the websocket provided by the streaming container and / to the actual Mastodon service. The only thing I tripped over for a while was the need to set the “X-Forwarded-Proto” header since otherwise you get stuck in a redirect loop of Mastodon receiving a request over http (because TLS termination is being done by the Apache proxy) and redirecting to https, except that’s where we just came from.
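A minimal Apache fragment for that arrangement might look like this. It is a sketch: the hostname is a placeholder, the upstream ports are the Mastodon compose-file defaults rather than anything confirmed by the post, and mod_proxy, mod_proxy_wstunnel, and mod_headers need to be enabled.

```apacheconf
<VirtualHost *:443>
    ServerName example.social
    SSLEngine on

    # TLS terminates here, so tell Mastodon the original scheme
    # to avoid the http -> https redirect loop
    RequestHeader set X-Forwarded-Proto "https"
    ProxyPreserveHost On

    # Streaming API goes to the websocket container
    ProxyPass /api/v1/streaming ws://127.0.0.1:4000/api/v1/streaming
    # Everything else goes to the web container
    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
```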

Mastodon is easily the heaviest part of all of this, using around 5GB of RAM and 60GB of disk for an instance with 3 users. This is more a point of principle than an especially good idea.

Bluesky

I’m arguably cheating here. Bluesky’s federation model is quite different to Mastodon - while running a Mastodon service implies running the webview and other infrastructure associated with it, Bluesky has split that into multiple parts. User data is stored on Personal Data Servers, then aggregated from those by Relays, and then displayed on Appviews. Third parties can run any of these, but a user’s actual posts are stored on a PDS. There are various reasons to run the others, for instance to implement alternative moderation policies, but if all you want is to ensure that you have control over your data, running a PDS is sufficient. I followed these instructions, other than using Apache as the frontend proxy rather than nginx, and it’s all been working fine since then. In terms of ensuring that my data remains under my control, it’s sufficient.

Backups

I’m using borgmatic, backing up to a local Synology NAS and also to my parents’ home (where I have another HP EliteDesk set up with an equivalent OVH IPv4 fronting setup). At some point I’ll check that I’m actually able to restore them.

Conclusion

Most of what I post is now stored on a system that’s happily living under a TV, but is available to the rest of the world just as visibly as if I used a hosted provider. Is this necessary? No. Does it improve my life? In no practical way. Does it generate additional complexity? Absolutely. Should you do it? Oh good heavens no. But you can, and once it’s working it largely just keeps working, and there’s a certain sense of comfort in knowing that my online presence is carefully contained in a small box making a gentle whirring noise.

Andy Wingo: wastrelly wabbits

Planet GNOME - Tue, 31/03/2026 - 10:34 PM

Good day! Today (tonight), some notes on the last couple months of Wastrel, my ahead-of-time WebAssembly compiler.

Back in the beginning of February, I showed Wastrel running programs that use garbage collection, using an embedded copy of the Whippet collector, specialized to the types present in the Wasm program. But, the two synthetic GC-using programs I tested on were just ported microbenchmarks, and didn’t reflect the output of any real toolchain.

In this cycle I worked on compiling the output from the Hoot Scheme-to-Wasm compiler. There were some interesting challenges!

bignums

When I originally wrote the Hoot compiler, it targeted the browser, which already has a bignum implementation in the form of BigInt, which I worked on back in the day. Hoot-generated Wasm files use host bigints via externref (though wrapped in structs to allow for hashing and identity).

In Wastrel, then, I implemented the imports that implement bignum operations: addition, multiplication, and so on. I did so using mini-gmp, a stripped-down implementation of the workhorse GNU multi-precision library. At some point if bignums become important, this gives me the option to link to the full GMP instead.

Bignums were the first managed data type in Wastrel that wasn’t defined as part of the Wasm module itself, instead hiding behind externref, so I had to add a facility to allocate type codes to these “host” data types. More types will come in time: weak maps, ephemerons, and so on.

I think bignums would be a great proposal for the Wasm standard, similar to stringref ideally (sniff!), possibly in an attenuated form.

exception handling

Hoot used to emit a pre-standardization form of exception handling, and hadn’t gotten around to updating to the newer version that was standardized last July. I updated Hoot to emit the newer kind of exceptions, as it was easier to implement them in Wastrel that way.

Some of the problems Chris Fallin contended with in Wasmtime don’t apply in the Wastrel case: since the set of instances is known at compile-time, we can statically allocate type codes for exception tags. Also, I didn’t really have to do the back-end: I can just use setjmp and longjmp.

This whole paragraph was meant to be a bit of an aside in which I briefly mentioned why just using setjmp was fine. Indeed, because Wastrel never re-uses a temporary, relying entirely on GCC to “re-use” the register / stack slot on our behalf, I had thought that I didn’t need to worry about the “volatile problem”. From the C99 specification:

[...] values of objects of automatic storage duration that are local to the function containing the invocation of the corresponding setjmp macro that do not have volatile-qualified type and have been changed between the setjmp invocation and longjmp call are indeterminate.

My thought was, though I might set a value between setjmp and longjmp, that would only be the case for values whose lifetime did not reach the longjmp (i.e., whose last possible use was before the jump). Wastrel didn’t introduce any such cases, so I was good.

However, I forgot about local.set: mutations of locals (ahem, objects of automatic storage duration) in the source Wasm file could run afoul of this rule. So, because of writing this blog post, I went back and did an analysis pass on each function to determine the set of locals which are mutated inside a try_block. Thank you, rubber duck readers!

bugs

Oh my goodness there were many bugs. Lacunae, if we are being generous; things not implemented quite right, which resulted in errors either when generating C or when compiling the C. The type-preserving translation strategy does seem to have borne fruit, in that I have spent very little time in GDB: once things compile, they work.

coevolution

Sometimes Hoot would use a browser facility where it was convenient, but for which in a better world we would just do our own thing. Such was the case for the number->string operation on floating-point numbers: we did something awful but expedient.

I didn’t have this facility in Wastrel, so instead we moved to do float-to-string conversions in Scheme. This turns out to have been a good test for bignums too; the algorithm we use is a bit dated and relies on bignums to do its thing. The move to Scheme also allows for printing floating-point numbers in other radices.

There are a few more Hoot patches that were inspired by Wastrel, about which more later; it has been good for both to work on the two at the same time.

tail calls

My plan for Wasm’s return_call and friends was to use the new musttail annotation for calls, which has been in clang for a while and was recently added to GCC. I was careful to limit the number of function parameters such that no call should require stack allocation, and therefore a compiler should have no reason to reject any particular tail call.

However, there were bugs. Funny ones, at first: attributes applying to a preceding label instead of the following call, or the need to insert if (1) before the tail call. More dire ones, in which tail callers inlined into their callees would cause the tail calls to fail, worked around with judicious application of noinline. Thanks to GCC’s Andrew Pinski for help debugging these and other issues; with GCC things are fine now.

I did have to change the code I emitted to return “top types only”: if you have a function returning type T, you can tail-call a function returning U if U is a subtype of T, but there is no nice way to encode this into the C type system. Instead, we return the top type of T (or U, it’s the same), e.g. anyref, and insert downcasts at call sites to recover the precise types. Not so nice, but it’s what we got.

Trying tail calls on clang, I ran into a funny restriction: clang not only requires that return types match, but requires that tail caller and tail callee have the same parameters as well. I can see why they did this (it requires no stack shuffling and thus such a tail call is always possible, even with 500 arguments), but it’s not the design point that I need. Fortunately there are discussions about moving to a different constraint.

scale

I spent way more time than I had planned on improving the speed of Wastrel itself. My initial idea was to just emit one big C file, and that would provide the maximum possibility for GCC to just go and do its thing: it can see everything, everything is static, there are loads of always_inline helpers that should compile away to single instructions, that sort of thing. But, this doesn’t scale, in a few ways.

In the first obvious way, consider whitequark’s llvm.wasm. This is all of LLVM in one 70 megabyte Wasm file. Wastrel made a huuuuuuge C file, then GCC chugged on it forever; 80 minutes at -O1, and I wasn’t aiming for -O1.

I realized that in many ways, GCC wasn’t designed to be a compiler target. The shape of code that one might emit from a Wasm-to-C compiler like Wastrel is different from what one would write by hand. I even ran into a segfault compiling with -Wall, because GCC accidentally recursed instead of iterated in the -Winfinite-recursion pass.

So, I dealt with this in a few ways. After many hours spent pleading and bargaining with different -O options, I bit the bullet and made Wastrel emit multiple C files. It will compute a DAG forest of all the functions in a module, where edges are direct calls, and go through that forest, greedily consuming (and possibly splitting) subtrees until we have “enough” code to split out a partition, as measured by number of Wasm instructions. They say that -flto makes this a fine approach, but one never knows when a translation unit boundary will turn out to be important. I compute needed symbol visibilities as much as I can so as to declare functions that don’t escape their compilation unit as static; who knows if this is of value. Anyway, this partitioning introduced no performance regression in my limited tests so far, and compiles are much much much faster.

scale, bis

A brief observation: Wastrel used to emit indented code, because it could, and what does it matter, anyway. However, consider Wasm’s br_table: it takes an array of n labels and an integer operand, and will branch to the nth label, or the last if the operand is out of range. To set up a label in Wasm, you make a block, of which there are a handful of kinds; the label is visible in the block, and for n labels, the br_table will be the most nested expression in the n nested blocks.

Now consider that block indentation is proportional to n. This means, the file size of an indented C file is quadratic in the number of branch targets of the br_table.

Yes, this actually bit me; there are br_table instances with tens of thousands of targets. No, wastrel does not indent any more.

scale, ter

Right now, the long pole in Wastrel is the compile-to-C phase; the C-to-native phase parallelises very well and is less of an issue. So, one might think: OK, you have partitioned the functions in this Wasm module into a number of files, why not emit the files in parallel?

I gave this a go. It did not speed up C generation. From my cursory investigations, I think this is because the bottleneck is garbage collection in Wastrel itself; Wastrel is written in Guile, and Guile still uses the Boehm-Demers-Weiser collector, which does not parallelize well for multiple mutators. It’s terrible but I ripped out parallelization and things are fine. Someone on Mastodon suggested fork; they’re not wrong, but also not Right either. I’ll just keep this as a nice test case for the Guile-on-Whippet branch I want to poke later this year.

scale, quator

Finally, I had another realization: GCC was having trouble compiling the C that Wastrel emitted, because Hoot had emitted bad WebAssembly. Not bad as in “invalid”; rather, “not good”.

There were two cases in which Hoot emitted ginormous (technical term) functions. One, for an odd debugging feature: Hoot does a CPS transform on its code, and allocates return continuations on a stack. This is a gnarly technique but it gets us delimited continuations and all that goodness even before stack switching has landed, so it’s here for now. It also gives us a reified return stack of funcref values, which lets us print Scheme-level backtraces.

Or it would, if we could associate data with a funcref. Unfortunately func is not a subtype of eq, so we can’t. Unless... we pass the funcref out to the embedder (e.g. JavaScript), and the embedder checks the funcref for equality (e.g. using ===); then we can map a funcref to an index, and use that index to map to other properties.

How to pass that funcref/index map to the host? When I initially wrote Hoot, I didn’t want to just, you know, put the funcrefs of interest into a table and let the index of a function’s slot be the value in the key-value mapping; that would be useless memory usage. Instead, we emitted functions that took an integer, and which would return a funcref. Yes, these used br_table, and yes, there could be tens of thousands of cases, depending on what you were compiling.

Then to map the integer index to, say, a function name, likewise I didn’t want a table; that would force eager allocation of all strings. Instead I emitted a function with a br_table whose branches would return string.const values.

Except, of course, stringref didn’t become a thing, and so instead we would end up lowering to allocate string constants as globals.

Except, of course, Wasm’s idea of what a “constant” is is quite restricted, so we have a pass that moves non-constant global initializers to the “start” function. This results in an enormous start function. The straightforward solution was to partition global initializations into separate functions, called by the start function.

For the funcref debugging, the solution was more intricate: firstly, we represent the funcref-to-index mapping just as a table. It’s fine. Then for the side table mapping indices to function names and sources, we emit DWARF, and attach a special attribute to each “introspectable” function. In this way, reading the DWARF sequentially, we reconstruct a mapping from index to DWARF entry, and thus to a byte range in the Wasm code section, and thus to source information in the .debug_line section. It sounds gnarly but Guile already used DWARF as its own debugging representation; switching to emit it in Hoot was not a huge deal, and as we only need to consume the DWARF that we emit, we only needed some 400 lines of JS for the web/node run-time support code.

This switch to data instead of code removed the last really long pole from the GCC part of Wastrel’s pipeline. What’s more, Wastrel can now implement the code_name and code_source imports for Hoot programs ahead of time: it can parse the DWARF at compile-time, and generate functions that look up functions by address in a sorted array to return their names and source locations. As of today, this works!

fin

There are still a few things that Hoot wants from a host that Wastrel has stubbed out: weak refs and so on. I’ll get to this soon; my goal is a proper Scheme REPL. Today’s note is a waypoint on the journey. Until next time, happy hacking!

Euro-Office Wants To Replace Google Docs and Microsoft Office

Slashdot - Tue, 31/03/2026 - 6:00 PM
Euro-Office is a new open-source project supported by several European companies that aims to offer a "truly open, transparent and sovereign solution for collaborate document editing," using OnlyOffice as a starting point. The project is positioned around European digital independence and familiar Office-style editing, though it has already drawn pushback from OnlyOffice over alleged licensing violations. "The company behind OnlyOffice is also based in Russia, and Russia is still heavily sanctioned by most European nations due to the country's ongoing invasion of Ukraine," adds How-To Geek. From the report: Euro-Office is a new open-source project supported by Nextcloud, EuroStack, Wiki, Proton, Soverin, Abilian, and other companies based in Europe. The goal is to build an online office suite that can open and edit standard Microsoft Office documents (DOCX, PPTX, XLSX) and the OpenDocument format (ODS, ODT, ODP) used by LibreOffice and OpenOffice. The current design is remarkably close to Microsoft Office and its tabbed toolbars, so there shouldn't be much of a learning curve for anyone used to Word, Excel, or PowerPoint. Importantly, Euro-Office is only the document editing component. It's designed to be added to cloud storage services, online wikis, project management tools, and other software. For example, you could have some Word documents in your Nextcloud file storage, and clicking them in a browser could open the Euro-Office editor. That way, Nextcloud (or Proton, or anyone else) doesn't have to build its own document editor from scratch. Euro-Office is based on OnlyOffice, which is open-source under the AGPL license. The project explained that "Contributing is impossible or greatly discouraged" with OnlyOffice's developers, with outside code changes rarely accepted, so a hard fork was required. 
The company behind OnlyOffice is also based in Russia, and Russia is still heavily sanctioned by most European nations due to the country's ongoing invasion of Ukraine. The project's home page explains, "A lot of users and customers require software that is not potentially influenced or controlled by the Russian government." As for why OnlyOffice was chosen over LibreOffice, the project simply said: "We believe open source is about collaboration, and we look for opportunities to integrate and collaborate with the LibreOffice community and companies like Collabora." UPDATE: Slashdot reader Elektroschock shares a statement from OnlyOffice CEO Lev Bannov, expressing his concerns about the Euro-Office inclusion of its software with trademarks removed: "We liked the AGPL v3 license because its 7th clause allows us to ensure that our code retains its original attributes, so that users are able to clearly identify the developers and the brand behind the program..." Bannov continued: "The core issue here isn't just about what the AGPL license states, but about the additional provisions we, as the authors, have included. This is a critical distinction, even if some may argue otherwise. We firmly assert that the Euro-Office project is currently infringing on our copyright in a deliberate and unacceptable manner." "As the creators of ONLYOFFICE, we want to make our position unequivocally clear: we do not grant anyone the right to remove our branding or alter our open-source code without proper attribution. This principle is non-negotiable and will never change. We demand that the Euro-Office project either restore our branding and attributions or roll back all forks of our project, refraining from using our code without proper acknowledgment of ONLYOFFICE."

Read more of this story at Slashdot.

US Paves Way For Private Assets To Be Included In 401(k) Retirement Plans

Slashdot - Tue, 31/03/2026 - 5:00 PM
An anonymous reader quotes a report from Reuters: The Trump administration on Monday issued a long-awaited proposed rule to open up retirement plans to alternative assets, paving the way for private equity and cryptocurrencies to be added to 401(k) accounts. The measure, announced by the U.S. Department of Labor, is intended to ease longstanding barriers to incorporating these less liquid and less transparent assets into American retirement plans. It follows an executive order from President Donald Trump last summer and could clear the way for alternative asset management firms to tap a large new source of capital. Industry groups have argued private market investments can enhance long-term returns and diversification for retirement savers, while skeptics warn higher fees, complexity and limited liquidity could limit those gains and pose risks for retail investors. Some private market funds that are already available to wealthier individual investors have shown signs of strain in recent months. Private credit funds known as business development companies have seen a wave of withdrawals. Treasury Secretary Scott Bessent said the proposed rule was "an initial step" and aimed to be "mindful of the importance of protecting retirement assets." The guidance lays out how plan trustees, who have a legal fiduciary duty to act in the best interest of members, can incorporate these assets. They would have to "objectively, thoroughly, and analytically consider, and make determinations on factors including performance, fees, liquidity, valuation, performance benchmarks, and complexity," the DOL said. Trustees who abide by them will be granted safe harbor that protects them from lawsuits, it added. The Supreme Court agreed earlier this year to hear one such case filed in 2019 by a former Intel employee claiming trustees made "imprudent" decisions by investing in hedge funds and private equity funds.


next-20260331: linux-next

Kernel Linux - Tue, 31/03/2026 - 4:07 PM
Version: next-20260331 (linux-next). Released: 2026-03-31.

Quadratic Gravity Theory Reshapes Quantum View of Big Bang

Slashdot - Tue, 31/03/2026 - 1:00 PM
Researchers at the University of Waterloo say a new "quadratic quantum gravity" framework could explain the universe's rapid early expansion without adding extra ingredients to Einstein's theory by hand. The idea is especially notable because it makes testable predictions, including a minimum level of primordial gravitational waves that future experiments may be able to detect. "Even though this model deals with incredibly high energies, it leads to clear predictions that today's experiments can actually look for," said Dr. Niayesh Afshordi, professor of physics and astronomy at the University of Waterloo and Perimeter Institute (PI). "That direct link between quantum gravity and real data is rare and exciting." Phys.org reports: The research team found that the Big Bang's rapid early expansion can emerge naturally from this simple, consistent theory of quantum gravity, without adding any extra ingredients. This early burst of expansion, often called inflation, is a central idea in modern cosmology because it explains why the universe looks the way it does today. Their model also predicts a minimum amount of primordial gravitational waves, which are tiny ripples in spacetime geometry created in the first moments after the Big Bang. These signals may be detectable in upcoming experiments, offering a rare chance to test ideas about the universe's quantum origins. [...] The team plans to refine their predictions for upcoming experiments to explore how their framework connects to particle physics and other puzzles about the early universe. Their long-term goal is to strengthen the bridge between quantum gravity and observational cosmology. The research has been published in the journal Physical Review Letters.


Thibault Martin: TIL that Sveltia is a good CMS for Astro

Planet GNOME - Tue, 31/03/2026 - 11:00 AM

This website is built with the static site generator Astro. All my content is written in markdown and uploaded to a git repository. Once the content is merged into the main branch, Cloudflare deploys it publicly. The process to publish involves:

  1. Creating a new markdown file.
  2. Filling it with thoughts.
  3. Pushing it to a new branch.
  4. Waiting for CI to check my content respects some rules.
  5. Pressing the merge button.

This is pretty involved and of course requires access to a computer. This goes directly against the goal I’ve set for myself to reduce friction to publish.

I wanted a simple solution to write and publish short posts directly from mobile, without hosting an additional service.

Such an app is called a git-based headless CMS. Decap CMS is the most frequently cited solution for git-based content management, but it has two show-stoppers for me:

  1. It’s not mobile friendly (a “yet” that has been pending since 2017), although there are community workarounds.
  2. It’s not entirely client-side: you need to host a serverless script, e.g. on a Cloudflare Worker, to complete authentication.

Because my website is completely static, it’s easy to take it off GitHub and Cloudflare and move it elsewhere. I want the CMS solution I choose to be purely client-side, so it doesn’t get in the way of moving elsewhere.

It turns out that Sveltia, an API-compatible and self-proclaimed successor to Decap, is a good fit for this job, with a few caveats.

Sveltia is a mobile-friendly Progressive Web App (PWA) that doesn’t require a backend. It's a static app that can be added to my static website. It has a simple configuration file to describe what fields each post expects (title, publication date, body, etc.).

Once the configuration and authentication are done, I have access to a lightweight PWA that lets me create new posts.

The authentication is straightforward for technical people. I need to paste a GitHub Personal Access Token (PAT) in the login page, and that's it. Sveltia will fetch the existing content and display it.

The PWA itself is also easy to deploy: I need to add a page served under the /admin route that imports the app. I could just import it from a third-party CDN, but there’s also an npm package for it. That allows me to serve the JavaScript as a first party instead, while easily staying up to date.

I installed it with

$ pnpm add @sveltia/cms

I then created an Astro page under src/pages/admin/index.astro with the following content

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Content Manager – ergaster.org</title>
    <script>
      import { init } from "@sveltia/cms";
      init();
    </script>
  </head>
  <body></body>
</html>

I also created the config file under public/admin/config.yml with Sveltia Entry Collections matching my Astro content collections. The setup is straightforward and well documented.
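For illustration, a minimal public/admin/config.yml for a posts collection could look like the sketch below. The repository name, branch, folder, and field names are assumptions matching a typical Astro content collection, not my actual configuration; the structure follows the Decap-compatible format Sveltia documents.

```yaml
# public/admin/config.yml -- illustrative Sveltia/Decap-style configuration.
# Repo, branch, folders, and fields are placeholders, not the real setup.
backend:
  name: github
  repo: example-user/example-site   # hypothetical repository
  branch: drafts                    # push drafts here, merge to main manually

media_folder: public/media
public_folder: /media

collections:
  - name: posts
    label: Posts
    folder: src/content/posts
    create: true
    fields:
      - { name: title, label: Title, widget: string }
      - { name: pubDate, label: Publication date, widget: datetime }
      - { name: body, label: Body, widget: markdown }
```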

Sveltia has a few caveats though:

  1. It can only work on a single branch, and cannot create a new branch per post. According to the maintainer, it should be possible to create new branches with “Editorial Workflow” by Q2 or Q3 this year.
  2. It pushes content directly to its target branch, including drafts. I still want to run CI checks before merging my content, so I’ve created a drafts branch and configured Sveltia to push content there. Once the CI checks have passed I merge the branch manually from the GitHub mobile app.
  3. Having a single target branch also means I can only have one draft coming from Sveltia at a time. If I edited two drafts concurrently on the drafts branch, they would both be published the next time I merged drafts into main.
  4. It’s clunky to rename a picture uploaded via Sveltia.
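The drafts-branch workaround from caveat 2 boils down to a small git routine. The demo below reproduces it locally in a throwaway repository; in reality Sveltia does the pushing and the merge happens from the GitHub mobile app once CI is green.

```shell
# Demo of the drafts-branch flow; branch names follow the description above.
set -e
repo=$(mktemp -d)
git -C "$repo" init -q -b main
git -C "$repo" -c user.email=demo@example.org -c user.name=Demo \
  commit -q --allow-empty -m "init"

# Sveltia is configured to push new content to a dedicated 'drafts' branch
git -C "$repo" switch -q -c drafts
echo "A short draft post." > "$repo/draft.md"
git -C "$repo" add -A
git -C "$repo" -c user.email=demo@example.org -c user.name=Demo \
  commit -qm "Add draft"

# Once CI has passed, 'drafts' is merged into 'main' manually
git -C "$repo" switch -q main
git -C "$repo" merge -q --no-edit drafts
```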

Those are not deal breakers to me. The maintainer seems responsive, and the Editorial Workflow feature coming in Q2 or Q3 will fix the remaining clunkiness.

Scientists Shocked To Find Lab Gloves May Be Skewing Microplastics Data

Slashdot - Tue, 31/03/2026 - 9:00am
Researchers found that common nitrile and latex lab gloves can shed stearate particles that closely resemble microplastics, potentially "increasing the risk of false positives when studying microplastic pollution," reports ScienceDaily. "We may be overestimating microplastics, but there should be none," said Anne McNeil, senior author of the study and U-M professor of chemistry, macromolecular science and engineering. "There's still a lot out there, and that's the problem." From the report: Researchers found that these gloves can unintentionally transfer particles onto lab tools used to analyze air, water, and other environmental samples. The contamination comes from stearates, which are not plastics but can closely resemble them during testing. Because of this, scientists may be detecting particles that are not true microplastics. To reduce this issue, U-M researchers Madeline Clough and Anne McNeil recommend using cleanroom gloves, which release far fewer particles. Stearates are salt-based, soap-like substances added to disposable gloves to help them separate easily from molds during manufacturing. However, their chemical similarity to certain plastics makes them difficult to distinguish in lab analyses, increasing the risk of false positives when studying microplastic pollution. "For microplastics researchers who have these impacted datasets, there's still hope to recover them and find a true quantity of microplastics," said researcher and recent doctoral graduate Madeline Clough. "This field is very challenging to work in because there's plastic everywhere," McNeil said. "But that's why we need chemists and people who understand chemical structure to be working in this field." The findings have been published in the journal Analytical Methods.


AI Data Centers Can Warm Surrounding Areas By Up To 9.1C

Slashdot - Tue, 31/03/2026 - 5:30am
An anonymous reader quotes a report from New Scientist: Andrea Marinoni at the University of Cambridge, UK, and his colleagues saw that the amount of energy needed to run a data centre had been steadily increasing of late and was likely to "explode" in the coming years, so they wanted to quantify the impact. The researchers took satellite measurements of land surface temperatures over the past 20 years and cross-referenced them against the geographical coordinates of more than 8400 AI data centers. Recognizing that surface temperature could be affected by other factors, the researchers chose to focus their investigation on data centers located away from densely populated areas. They discovered that land surface temperatures increased by an average of 2C (3.6F) in the months after an AI data center started operations. In the most extreme cases, the increase in temperature was 9.1C (16.4F). The effect wasn't limited to the immediate surroundings of the data centers: the team found increased temperatures up to 10 kilometers away. Seven kilometers away, there was only a 30 percent reduction in the intensity. "The results we had were quite surprising," says Marinoni. "This could become a huge problem." Using population data, the researchers estimate that more than 340 million people live within 10 kilometers of data centers, and so live in a place that is warmer than it would be if the data centre hadn't been built there. Marinoni says that areas including the Bajio region in Mexico and the Aragon province in Spain saw a 2C (3.6F) temperature increase in the 20 years between 2004 and 2024 that couldn't otherwise be explained. University of Bristol researcher Chris Preist said the findings may be more complicated than they look. "It would be worth doing follow-up research to understand to what extent it's the heat generated from computation versus the heat generated from the building itself," he says. For example, the building being heated by sunlight may be part of the effect.
The findings of the study, which has not yet been peer-reviewed, can be found on arXiv.


Microsoft Plans To Build 100% Native Apps For Windows 11

Slashdot - Tue, 31/03/2026 - 1:00am
Microsoft is reportedly shifting Windows 11 app development back toward fully native apps. Rudy Huyn, a Partner Architect at Microsoft working on the Store and File Explorer, said in a post on X that he is building a new team to work on Windows apps. "You don't need prior experience with the platform... what matters most is strong product thinking and a deep focus on the customer," he wrote. "If you've built great apps on any platform and care about crafting meaningful user experiences, I'd love to hear from you." Huyn later said in a reply on X that the new Windows 11 apps will be "100% native." TechSpot reports: The description stands out at a time when many of Microsoft's built-in tools, including Clipchamp and Copilot, rely on web technologies and Progressive Web App architectures. The company's commitment to native performance suggests that some long-standing frustrations around responsiveness, memory use, and interface consistency could finally be addressed. For Windows developers, Huyn's comments hint at a change in direction. Microsoft's recent development priorities have leaned heavily on web-based approaches, with Progressive Web Apps (PWAs) replacing or supplementing many native programs. [...] Exactly which applications will be rebuilt, or how strictly "100% native" will be enforced, remains unclear. Some current Microsoft apps classified as native still depend on WebView for specific features. But the renewed emphasis already has developers paying attention.


After 16 Years and $8 Billion, the Military's New GPS Software Still Doesn't Work

Slashdot - Tue, 31/03/2026 - 12:00am
An anonymous reader quotes a report from Ars Technica: Last year, just before the Fourth of July holiday, the US Space Force officially took ownership of a new operating system for the GPS navigation network, raising hopes that one of the military's most troubled space programs might finally bear fruit. The GPS Next-Generation Operational Control System, or OCX, is designed for command and control of the military's constellation of more than 30 GPS satellites. It consists of software to handle new signals and jam-resistant capabilities of the latest generation of GPS satellites, GPS III, which started launching in 2018. The ground segment also includes two master control stations and upgrades to ground monitoring stations around the world, among other hardware elements. RTX Corporation, formerly known as Raytheon, won a Pentagon contract in 2010 to develop and deliver the control system. The program was supposed to be complete in 2016 at a cost of $3.7 billion. Today, the official cost for the ground system for the GPS III satellites stands at $7.6 billion. RTX is developing an OCX augmentation projected to cost more than $400 million to support a new series of GPS IIIF satellites set to begin launching next year, bringing the total effort to $8 billion. Although RTX delivered OCX to the Space Force last July, the ground segment remains nonoperational. Nine months later, the Pentagon may soon call it quits on the program. Thomas Ainsworth, assistant secretary of the Air Force for space acquisition and integration, told Congress last week that OCX is still struggling. The GAO found the OCX program was undermined by "poor acquisition decisions and a slow recognition of development problems." By 2016, it had blown past cost and schedule targets badly enough to trigger a Pentagon review for possible cancellation. 
Officials also pointed to cybersecurity software issues, a "persistently high software development defect rate," the government's lack of software expertise, and Raytheon's "poor systems engineering" practices. Even after the military restructured the program, it kept running into delays and overruns, with Ainsworth telling lawmakers, "It's a very stressing program" and adding, "We are still considering how to ensure we move forward."


Samsung Is Bringing AirDrop-Style Sharing to Older Galaxy Devices

Slashdot - Mon, 30/03/2026 - 11:00pm
Samsung is reportedly planning to roll out AirDrop-style file sharing for older Galaxy phones via a Quick Share update. Early reports suggest the feature is appearing on devices from the Galaxy S22 through the S25, though it is not actually working yet. Android Central reports: As spotted by Reddit users (via Tarun Vats on X), a Quick Share app update is rolling out via the Galaxy Store on older Samsung devices that appears to add support for AirDrop file sharing with Apple devices. Users report seeing the same new "Share with Apple devices" section we first saw on Galaxy S26 devices in the Settings app after updating Quick Share. The update is reportedly showing up on Galaxy models ranging from the Galaxy S22 to last year's Galaxy S25 series. The catch, however, is that the feature doesn't seem to be working yet. It's appearing on devices running One UI 8 as well as the One UI 8.5 beta, but enabling the toggle doesn't activate the functionality for now. Users say that turning on the feature doesn't make their device visible to Apple devices, and no Apple devices show up in Quick Share either. It's possible Samsung or Google still needs to enable it server-side, but it does confirm that broader rollout to older Galaxy devices is coming. The feature could arrive fully with the One UI 8.5 update.


OkCupid Settles FTC Case On Alleged Misuse of Its Users' Personal Data

Slashdot - Mon, 30/03/2026 - 10:00pm
OkCupid and parent company Match Group settled an FTC case dating back to 2014 over allegations that the dating app shared users' photos and other personal data with a third party without proper disclosure or opt-out rights. Engadget reports: According to the FTC, OkCupid's privacy policy at the time noted that the company wouldn't share a user's personal information with others, except for some cases including "service providers, business partners, other entities within its family of businesses." However, the lawsuit accused OkCupid of sharing three million photos of its users with Clarifai, which the FTC claims is an "unrelated third party" that didn't fall under the allowed entities. On top of that, the lawsuit alleged that OkCupid didn't inform its users of this data sharing, nor give them a chance to opt out. Moving forward, the settlement would "permanently prohibit" Match Group, which owns OkCupid, and Humor Rainbow, which operates OkCupid, from misrepresenting what kind of personal information it collects, the purpose for collecting the data and any consumer choices to prevent data collection. Even after the 2014 incident, OkCupid was found to have security flaws that could've exposed user account info, but these were quickly patched in 2020.


Life With AI Causing Human Brain 'Fry'

Slashdot - Mon, 30/03/2026 - 9:00pm
fjo3 shares a report from France 24: Too many lines of code to analyze, armies of AI assistants to wrangle, and lengthy prompts to draft are among the laments by hard-core AI adopters. Consultants at Boston Consulting Group (BCG) have dubbed the phenomenon "AI brain fry," a state of mental exhaustion stemming "from the excessive use or supervision of artificial intelligence tools, pushed beyond our cognitive limits." The rise of AI agents that tend to computer tasks on demand has put users in the position of managing smart, fast digital workers rather than having to grind through jobs themselves. "It's a brand-new kind of cognitive load," said Ben Wigler, co-founder of the start-up LoveMind AI. "You have to really babysit these models." [...] "There is a unique kind of reward hacking that can go on when you have productivity at the scale that encourages even later hours," Wigler said. [Adam Mackintosh, a programmer for a Canadian company] recalled spending 15 consecutive hours fine-tuning around 25,000 lines of code in an application. "At the end, I felt like I couldn't code anymore," he recalled. "I could tell my dopamine was shot because I was irritable and didn't want to answer basic questions about my day." BCG recommends in a recently published study that company leaders establish clear limits regarding employee use and supervision of AI. However, "That self-care piece is not really an American workplace value," Wigler said. "So, I am very skeptical as to whether or not it's going to be healthy or even high quality in the long term." Notably, the report says everyone interviewed for the article "expressed overall positive views of AI despite the downsides." In fact, a recent BCG study actually found a decline in burnout rates when AI took over repetitive work tasks.


Judge Allows BitTorrent Seeding Claims Against Meta, Despite Lawyers 'Lame Excuses'

Slashdot - Mon, 30/03/2026 - 8:00pm
An anonymous reader quotes a report from TorrentFreak: In an effort to gather material for its LLM training, Meta used BitTorrent to download pirated books from Anna's Archive and other shadow libraries. According to several authors, Meta facilitated the infringement of others by "seeding" these torrents. This week, the court granted the authors permission to add these claims to their complaint, despite openly scolding their counsel for "lame excuses" and "Meta bashing." [...] The judge acknowledged that the contributory infringement claim could and should have been added back in November 2024, when the authors amended their complaint to include the distribution claim. After all, both claims arise from the same factual allegations about Meta's torrenting activity. "The lawyers for the named plaintiffs have no excuse for neglecting to add a contributory infringement claim based on these allegations back in November 2024," Judge Chhabria wrote. The lawyers of the book authors claimed that the delay was the result of newly produced evidence that had "crystallized" their understanding of Meta's uploading activity. However, that did not impress the judge. He called it a "lame excuse" and "a bunch of doubletalk," noting that if the missing discovery truly prevented the contributory claim from being added in November 2024, the same logic would have prevented the distribution claim from being added at that time as well. "Rather than blaming Meta for producing discovery late, the plaintiffs' lawyers should have been candid with the Court, explaining that they missed an issue in a case of first impression..," the order reads. Judge Chhabria went further, noting that the authors' law firm, Boies Schiller, showed "an ongoing pattern" of distracting from its own mistakes by attacking Meta. He pointed specifically to the dispute over when Meta disclosed its fair use defense to the distribution claim, which we covered here recently, characterizing it as a false distraction. 
"The lawyers for the plaintiffs seem so intent on bashing Meta that they are unable to exercise proper judgment about how to represent the interests of their clients and the proposed class members," the order reads. Despite the criticism, Chhabria granted the motion. [...] For now, the case moves forward with a fourth amended complaint, three new loan-out companies added as named plaintiffs, and a growing list of BitTorrent-related claims for Judge Chhabria to resolve.


Microsoft Copilot Is Now Injecting Ads Into Pull Requests On GitHub

Slashdot - Mon, 30/03/2026 - 7:00pm
Microsoft Copilot is reportedly injecting promotional "tips" into GitHub pull requests, with Neowin claiming more than 1.5 million PRs have been affected by messages advertising integrations like Raycast, Slack, Teams, and various IDEs. From the report: According to Melbourne-based software developer Zach Manson, a team member used the AI to fix a simple typo in a pull request. Copilot did the job, but it also took the liberty of editing the PR's description to include this message: "Quickly spin up Copilot coding agent tasks from anywhere on your macOS or Windows machine with Raycast." A quick search of that phrase on GitHub shows that the same promotional text appears in over 11,000 pull requests across thousands of repositories. Even merge requests on GitLab aren't safe from the injection. So what's happening? Well, Raycast has a Copilot extension that can do things like create pull requests from a natural language command. The ad directly names Raycast, so you might think that Raycast is injecting the promo into the PRs to market its own app. But it is more likely that Microsoft is the one doing the injecting. If you look at the raw markdown of the affected pull requests, there is a hidden HTML comment, "START COPILOT CODING AGENT TIPS", placed just before the ad tip. This suggests Microsoft is using the comment to insert a "tip" that points back to its own developer ecosystem or partner integrations. UPDATE: Following backlash from developers, Microsoft has removed Copilot's ability to insert "tips" into pull requests. Tim Rogers, principal product manager for Copilot at GitHub, said the move was intended "to help developers learn new ways to use the agent in their workflow." "On reflection," Rogers said he has since realized that letting Copilot make changes to PRs written by a human without their knowledge "was the wrong judgement call."
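Because the marker is plain text in the PR body's raw markdown, grepping for it is enough to spot an affected description. The body below is a fabricated example for demonstration; fetching real PR bodies from the GitHub API is left out to keep the sketch self-contained.

```shell
# Scan a locally saved PR body for the hidden Copilot marker.
# The body text here is made up, not taken from a real pull request.
body='Fixes a small typo.

<!-- START COPILOT CODING AGENT TIPS -->
Tip: Quickly spin up Copilot coding agent tasks with Raycast.'

if printf '%s\n' "$body" | grep -q 'START COPILOT CODING AGENT TIPS'; then
  echo "Copilot tip marker found"
fi
```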


Sony Shuts Down Nearly Its Entire Memory Card Business Due To SSD Shortage

Slashdot - Mon, 30/03/2026 - 6:00pm
For the "foreseeable future," Sony says it has stopped accepting new orders for most of its CFexpress and SD memory card lines due to an ongoing memory supply shortage. "Due to the global shortage of semiconductors (memory) and other factors, it is anticipated that supply will not be able to meet demand for CFexpress memory cards and SD memory cards for the foreseeable future," the company said in a notice. "Therefore, we have decided to temporarily suspend the acceptance of orders from our authorized dealers and from customers at the Sony Store from March 27, 2026 onwards." PetaPixel reports: The suspension includes all of Sony's memory card lines, including CFexpress Type A, CFexpress Type B, and SD cards. The 240GB, 480GB, 960GB, and 1920GB capacity Type A cards have been suspended, as have the 480GB and 240GB Type B cards. The full gamut of Sony's high-end SD cards has also been suspended, including the 256GB, 128GB, and 64GB TOUGH-branded cards and the lower-end 512GB, 256GB, 128GB, and 256GB plainly-branded Sony cards, which cap out at V60 speeds. Even Sony's lower-end, V30 128GB and 64GB SD cards have been suspended, showcasing that the SSD shortage affects all types of solid state storage, not just the high-end ones. It appears that only the 960GB CFexpress Type B card and the lowest-end SF-UZ series SD cards remain in production. However, those UHS-I SD cards are discontinued in the United States outside of a scant few retailers and resellers. "We sincerely apologize for any inconvenience this may cause our customers," Sony concludes.


Tech CEOs Suddenly Love Blaming AI For Mass Job Cuts

Slashdot - Mon, 30/03/2026 - 5:00pm
An anonymous reader quotes a report from the BBC: Sweeping job cuts at Big Tech companies have become an annual tradition. How executives explain those decisions, however, has changed. Out are buzzwords like efficiency, over-hiring, and too many management layers. Today, all explanations stem from artificial intelligence (AI). In recent weeks, giants including Google, Amazon, Meta, as well as smaller firms such as Pinterest and Atlassian, have all announced or warned of plans to shrink their workforce, pointing to developments in AI that they say are allowing their firms to do more with fewer people. [...] But explaining cuts by pointing to advances in AI sounds better than citing cost pressures or a desire to please shareholders, says tech investor Terrence Rohan, who has had a seat on many company boards. "Pointing to AI makes a better blog post," Rohan says. "Or it at least doesn't make you seem as much the bad guy who just wants to cut people for cost-effectiveness." That does not mean there is no substance behind the words, Rohan added. Some of the companies he's backing are using code that is 25% to 75% AI-generated. That is a sign of the real threat that AI tools for writing code represent to jobs such as software developer, computer engineer and programmer, posts once considered a near-guarantee of highly paid, stable careers. "Some of it is that the narrative is changing, some of it is that we really are starting to see step changes in productivity," Anne Hoecker, a partner at Bain who leads the consultancy's technology practice, says of the recent job cuts. "Leaders more recently are seeing these tools are good enough that you really can do the same amount of work with fundamentally less people." There is another way that AI is driving job cuts -- and it has nothing to do with the technical abilities of coding tools and chatbots. Amazon, Meta, Google and Microsoft are collectively planning to pour $650 billion into AI in the coming year. 
As executives hunt for ways to try to ease investor shock at those costs, many are landing on payroll, typically tech firms' single biggest expense. [...] Although the expense of, for example, 30,000 corporate Amazon employees is dwarfed by that company's AI spending plans, firms of this size will now take any opportunity to cut costs, Rohan says. "They're playing a game of inches," Rohan says of cuts at Big Tech firms. "If you can even slightly tune the machine, that is helpful." Hoecker says cutting jobs also signals to stock market investors worried about the "real and huge" cost of AI development that executives are not blithely writing blank cheques. "It shows some discipline," says Hoecker. "Maybe laying off people isn't going to make much of a dent in that bill, but by creating a little bit of cashflow, it helps."


FortiClient EMS SQL Injection Risk on Linux Systems CVE-2026-21643

LinuxSecurity.com - Mon, 30/03/2026 - 3:41pm
One unauthenticated HTTP request is all it takes. From there, attackers can move from the edge straight into your internal network, operating from a system your Linux servers already trust. CVE-2026-21643 in FortiClient EMS isn't just another SQL injection. It turns a management server into a pivot point, giving attackers the same access paths your administrators rely on.
