Feed aggregator

New Company Hopes to Build Age-Verification Tech into Vape Cartridges

Slashdot - Mon, 30/03/2026 - 1:34pm
Their goal is to use biometric data and blockchain to build age-verification measures directly into disposable vape cartridges. Wired reports on a partnership between vape/cartridge manufacturer Ispire Technology and regulatory consulting company Chemular (which specializes in the nicotine market) — which they've named "Ike Tech": [Using blockchain-based security, the e-cig cartridge] would use a camera to scan some form of ID and then also take a video of the user's face. Once it verifies your identity and determines you're old enough to vape, it translates that information into anonymized tokens. That info goes to an identity service like ID.me or Clear. If approved, it bounces back to the app, which then uses a Bluetooth signal to give the vape the OK to turn on. "Everything is tokenized," [says Ispire CEO Michael Wang]. "As a result of this process, we don't communicate consumer personal private information." He says the process takes about a minute and a half... After that onetime check, the Bluetooth connection on the phone will recognize when the vape cartridge is nearby and keep it unlocked. Move the vape too far away from the phone, and it shuts off again. Based on testing, the companies behind Ike Tech claim this process has a 100 percent success rate in age verification, more or less calling the tech infallible. "The FDA told us it's the holy grail technology they were looking for," Wang says. "That's word-for-word what they said when we met with them...." Wang says the goal is to implement additional features in the verification process, like geo-fencing, which would force the vape to shut off while near a school or on an airplane. In the future, the plan is to license this biometric verification tech to other e-cig companies. The tech may also grow to include fingerprint readers and expand to other product categories; Wang suggests guns, which have a long history of age-verification features not quite working.

Read more of this story at Slashdot.

Thibault Martin: Gear review: Garmin Forerunner 165

Planet GNOME - Mon, 30/03/2026 - 2:00am

Last year I bought one of those rock-solid, simple, sturdy Casio watches that are supposed to last you a long time. I still love it with all my heart and sometimes wear it, but my primary watch is now the Garmin Forerunner 165.

A dozen years ago I got into running and installed an app on my phone to track my progress. I kept pushing harder to beat my previous records, and eventually injured myself badly. I used to think that the quantified self was the root of all evil and the reason I overtrained. It was actually ego, and it turns out that the quantified self, combined with coaching to make sense of the data, can yield amazing results.

[!success] I got a Garmin Forerunner 165 and I am very happy with it.

I recommend caution nonetheless: the watch is a great tool to gather metrics about how you run, but it is a terrible replacement for a coach. Working with a professional will give you much better results.

Why I got it

In my mid 30s I realized that my metabolism wasn't what it used to be. I started gaining weight and felt uneasy in my body. After two kids and today's geopolitics, my mental health started to degrade too.

I decided to get back into running to help with both aspects. Exercising makes the body burn calories and produce endorphins. Both help you feel better. Yes, it's only in my mid 30s that I realized that exercising was a physiological need. A need I had neglected for too long.

But the last time I got into running I injured myself badly. I'm the least competitive person against others, but I'm very competitive against myself. This means I'm subject to overtraining. I needed something to keep me on track.

Several friends had Garmin watches and told me that their watches actually prevented them from overtraining. That was my cue: I would buy one, and use it to get back in shape. Even better: the model I wanted, the Forerunner 165, could last 11 days on a single charge. It means I could wear it at night to follow my sleep patterns, and it would vibrate gently on my wrist to wake me up silently. This promised to be a low maintenance and useful watch.

[!info] In summary

I bought the Forerunner 165 to:

  • Get back into running.
  • Prevent overtraining.
  • Follow my sleep patterns.
  • Not babysit the watch or be constantly nagged by it.
First impressions

I bought the watch at the end of August 2025, and I started running immediately with it. In addition to the watch, I also bought a pair of Merrell Trail Glove 7: "barefoot" shoes that have such a thin sole that you have to land on the front of the foot and not on the heel.

I installed the Garmin Connect app on my phone, and I was delighted to see that I didn't need any subscription to start a coaching plan. I enrolled in a Garmin Coach program for beginners, and I started following it. At the end of each run, the watch would ask me how I felt, and each run was more painful than the last. I thought it was just muscles building up, so I kept following the program. And this is how I injured myself.

I went to a physiotherapist, and we started a specific training plan for barefoot shoes. Of course I quit the Garmin Coach one. He taught me that barefoot shoes require a higher cadence (number of steps per minute) than regular running shoes. I could configure the watch to keep track of my cadence during my runs. A gauge would tell me if I ran too few or too many steps per minute.

After the physiotherapy sessions ended, I could keep running normally. As of writing, I go running three times a week. Each session is between 6 and 12km long.

What I like

I didn't want a smart watch because I don't want it to pester me, and I don't want to charge it every day. On those two fronts the Forerunner delivered. I don't receive any of my phone notifications on my watch, and it doesn't pester me with anything during the day. I use my watch when I need it, not when it needs me.

As for the battery, it is fantastic for this type of watch. With 3 runs a week, my watch lasts 9 to 10 days on a single charge. I keep my watch on at night so it monitors my sleep too. And charging is fast: I can charge it from 10 to 100% in about 1.5 hours.

I can confidently go to sleep with it and know it will still have plenty of battery to wake me up the next day with a gentle vibration on my wrist. I hate alarms that scream at you in the morning. This gentle nudge is infinitely better.

I have pathologically bad sleep and the watch does a good job at tracking it. It even helped me detect sleep apnea, which doctors later confirmed. The watch gives you several metrics for the night: how long you've slept, your average and resting heart rate, your average and lowest respiration rate, and more. It can also measure your pulse oximetry, but that drains the battery about twice as fast. The watch also supports tracking naps.

When it comes to exercising, I only use it for running. I can't say anything about its accuracy, but my physiotherapist seemed to believe that all the measures were plausible.

The wristband has many holes, making it easy to adjust. It is comfortable to wear, even during exercise when the wrist can swell and sweat a bit. It is also slightly elastic, so it can stretch a bit for extra comfort.

It is possible to use the watch only to track how you run, or to configure workouts in the app depending on your objectives. During my physiotherapy training I would make it track my cadence, but you can track a lot more metrics. You can also configure several steps, e.g. warm-up, light run, fast run, series, etc.

There are also built-in, free Garmin Coach programs depending on your needs, but I can't say I had a positive experience with them. If you're new to running, I really recommend going to a professional coach or physiotherapist to get you started.

At €230, the watch is not cheap, but seems fairly priced for the amount of value I get from it. I also don't expect to replace it anytime soon.

What I don't like

The watch has an odd recovery time metric that can be difficult to understand: apparently you can work out lightly to recover. To this day I'm not entirely sure what it does.

Beyond that it's a good watch I can't complain about!

Conclusion

I’m very happy with my Forerunner 165. It’s important to bear in mind it’s just a tool, not something that can replace a human coach. If your knees or tendons hurt and the watch tells you to go running, don’t. Go see a professional.

7.0-rc6: mainline

Kernel Linux - Mon, 30/03/2026 - 12:40am
Version: 7.0-rc6 (mainline) Released: 2026-03-29 Source: linux-7.0-rc6.tar.gz Patch: full (incremental)

Thibault Martin: I realized that you don't care

Planet GNOME - Sun, 29/03/2026 - 6:00pm

Quite a few of us maintain our own websites and publish our thoughts. We play in hard mode:

  • We need to build our website before even publishing our first post.
  • We don’t benefit from the network effect of bigger platforms to get eyeballs on our writing.
  • LLMs aggressively scrape the web and can serve our thoughts or expertise to their users without them visiting our websites.

And on top of that, you don’t care.

And I don’t expect you to care. Like the rest of us, you are flooded with information constantly. You’re fed so many words that you read the equivalent of whole books every day. How entitled would I be to expect you to care about my words when you have to filter through every story you’re bombarded with?

So why do we keep the small web alive?

I can’t speak for others, but I know why I maintain my website and why I publish my thoughts there. By increasing order of importance:

  1. I keep my web development skills reasonably up to date.
  2. I can shape my website to adapt to my content, and not the other way around.
  3. I have freedom of tone and vocabulary. I don’t have to censor words like "suicide" or "sex".
  4. I write long form posts that help me shape my thoughts, develop ideas, and receive feedback from my peers and readers.

If you can afford to, I can only encourage you to write and publish your thoughts on your own platform, as long as you don’t expect others to care in return.

Gedit Technology: gedit 50.0 released

Planet GNOME - Sat, 28/03/2026 - 11:00am

gedit 50.0 has been released! Here are the highlights since version 49.0 from January. (Some sections are a bit technical).

No Large Language Models / AI tools

The gedit project now disallows the use of LLMs for contributions.

The rationales:

Programming can be seen as a discipline between art and engineering. Both art and engineering require practice. It's the action of doing - modifying the code - that permits a deep understanding of it, to ensure correctness and quality.

When generating source code with an LLM tool, the real sources are the inputs given to it: the training dataset, plus the human commands.

Adding something generated to the version control system (e.g., Git) is usually frowned upon. Moreover, we aim for reproducible results (to follow the best practices of reproducible builds, and reproducible science more generally). Modifying something generated after the fact is also bad practice.

Releasing earlier, releasing more often

To follow the release early, release often mantra more closely, gedit aims for a faster release cadence in 2026, with smaller deltas between each version. The future will tell how it goes.

The website is now responsive

Since last time, we've put some effort into the website. Small-screen-device readers should have a more pleasant experience.

libgedit-amtk becomes "The Good Morning Toolkit"

Amtk originally stands for "Actions, Menus and Toolbars Kit". There was a desire to expand it to include other GTK extras that are useful for gedit needs.

A more appropriate name would be libgedit-gtk-extras. But renaming the module - not to mention the project namespace - is more work. So we've chosen to simply continue with the name Amtk, just changing its scope and definition. And - while at it - sprinkle a bit of fun :-)

So there are now four libgedit-* modules:

  • libgedit-gfls, aka "libgedit-glib-extras", currently for "File Loading and Saving";
  • libgedit-amtk, aka "libgedit-gtk-extras" - it extends GTK for gedit needs, with the exception of GtkTextView;
  • libgedit-gtksourceview - it extends GtkTextView and is a fork of GtkSourceView, to evolve the library for gedit needs;
  • libgedit-tepl - the Text Editor Product Line library, it provides a high-level API, including an application framework for creating more easily new text editors.

Note that all of these are still very much under construction.

Some code overhaul

Work continues steadily inside libgedit-gfls and libgedit-gtksourceview to streamline document loading.

You might think this is a problem that was solved many years ago, but that's actually not the case for gedit. Many improvements are still possible.

Another area of interest is the completion framework (part of libgedit-gtksourceview), where changes are still needed to make it fully functional under Wayland. The popup windows are sometimes misplaced. So between gedit 49.0 and 50.0 some progress has been made on this. The Word Completion gedit plugin works fine under Wayland, while the LaTeX completion with Enter TeX is still buggy since it uses more features from the completion system.

Thibault Martin: I realized that I created too much friction to publish

Planet GNOME - Sat, 28/03/2026 - 11:00am

I love writing on my blog. I love taking a complex topic, breaking it down, understanding how things work, and writing about how things clicked for me. It serves a double purpose:

  1. I can organize my thoughts, ensure I understood the topic fully, and explain it to others.
  2. It helps my future self: if I forgot about the topic, I can read about what made it click for me.

But as of writing, the last time I published something on my blog was 5 months ago.

The blogging process

My blog posts tend to be lengthy. My writing and publishing process is the following.

  1. Take a nontrivial topic, something I didn't know about or didn't know how to do.
  2. Understand it, break it down, and get a clear picture of how things work.
  3. Write an outline for the post with the key points.
  4. Ask my smarter friends if the outline makes sense.
  5. Flesh out the outline into a proper blog post, with all the details, code snippets, and screenshots.
  6. Ask my smarter friends to review the post again.
  7. Get an illustrator to create a banner for the post, that also serves as an opengraph preview image.
  8. Publish the post.

That is a lot of work. I have many posts stuck between steps 3 and 5, because they take quite a bit of time. Asking an illustrator to create a banner for the post also adds friction: obviously I need to pay the illustrator, but I also need to wait for him to be done with the illustration.

Not everything has to be a blog post

Sometimes I have quick thoughts that I want to jot down and share with the rest of the world, and I want to be able to find them again later. There are two people I follow who write a lot, often in short format.

  1. John Gruber on his blog Daring Fireball.
  2. Simon Willison, on his Weblog.

Both of them have very short format notes. Willison even blogged about what he thinks people should write about.

Reducing friction and just posting

I don't think friction should be avoided at all costs. Take emails for example: there's a delay between when you send a message and your peer receives it, or the other way around. That friction encourages longer form messages, which gives more time to organize thoughts.

I also welcome the friction I have created for my own posts: I go through a proper review process and publish higher-quality posts.

But there's also room for spontaneity. So I've updated my website to let me publish two smaller formats:

  • TILs. Those are short posts about something I've learned and found interesting.
  • Thoughts. Those are shorter posts I jot down in less than 20 minutes to develop simple thoughts.

What Is a Checksum? Meaning, Examples & Why You Should Use Them

LinuxSecurity.com - Fri, 27/03/2026 - 12:00pm
A checksum is a calculated value that represents the exact contents of a file or message. If the file changes, even by a single byte, the checksum changes as well. That's why it's often described as a digital fingerprint for data integrity.
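
To make that concrete, here is a minimal sketch in Rust of the verification step (assuming the third-party sha2 crate; the program name and arguments are hypothetical): it reads a file, computes the SHA-256 digest, and compares the hex string against a published checksum.

```rust
// Minimal sketch: compute a file's SHA-256 checksum and optionally compare it
// to a published value. Assumes the third-party `sha2` crate; the binary name
// and arguments are hypothetical.
use sha2::{Digest, Sha256};
use std::{env, fs};

fn main() -> std::io::Result<()> {
    let path = env::args().nth(1).expect("usage: checksum <file> [expected-hex]");
    let expected = env::args().nth(2);

    let bytes = fs::read(&path)?;        // the exact contents of the file
    let digest = Sha256::digest(&bytes); // 32-byte "digital fingerprint"
    let hex: String = digest.iter().map(|b| format!("{b:02x}")).collect();

    println!("{hex}  {path}");
    if let Some(expected) = expected {
        // Any single-byte change in the file yields a completely different digest.
        println!("match: {}", hex == expected.to_lowercase());
    }
    Ok(())
}
```

This is the same check that command-line tools such as sha256sum perform when you verify a downloaded file against a vendor's published hash.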

6.12.79: longterm

Kernel Linux - Fri, 27/03/2026 - 10:52am
Version: 6.12.79 (longterm) Released: 2026-03-27 Source: linux-6.12.79.tar.xz PGP Signature: linux-6.12.79.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-6.12.79

Sebastian Wick: Three Little Rust Crates

Planet GNOME - Fri, 27/03/2026 - 1:15am

I published three Rust crates:

  • name-to-handle-at: Safe, low-level Rust bindings for Linux name_to_handle_at and open_by_handle_at system calls
  • pidfd-util: Safe Rust wrapper for Linux process file descriptors (pidfd)
  • listen-fds: A Rust library for handling systemd socket activation

They might seem like rather arbitrary, unconnected things – but there is a connection!

systemd socket activation passes file descriptors and a bit of metadata as environment variables to the activated process. If the activated process exec’s another program, the file descriptors get passed along because they are not CLOEXEC. If that process then picks them up, things could go very wrong. So, the activated process is supposed to mark the file descriptors CLOEXEC, and unset the socket activation environment variables. If a process doesn’t do this for whatever reason however, the same problems can arise. So there is another mechanism to help prevent it: another bit of metadata contains the PID of the target. Processes can check it against their own PID to figure out if they were the target of the activation, without having to depend on all other processes doing the right thing.
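
To illustrate the consumer side of that protocol, here is a rough sketch using only the Rust standard library and the libc crate (this is not the API of the listen-fds crate above; the function name and structure are made up for illustration): the process compares $LISTEN_PID with its own PID, marks the inherited descriptors CLOEXEC, and clears the environment variables.

```rust
// Rough sketch of the consumer side of socket activation, using only std plus
// the libc crate. Not the listen-fds crate's API; names are illustrative.
use std::env;
use std::os::fd::RawFd;

const SD_LISTEN_FDS_START: RawFd = 3; // by convention, activated fds start at 3

fn take_activated_fds() -> Vec<RawFd> {
    // Only trust the fds if $LISTEN_PID names *this* process.
    let listen_pid: u32 = match env::var("LISTEN_PID").ok().and_then(|v| v.parse().ok()) {
        Some(pid) => pid,
        None => return Vec::new(),
    };
    if listen_pid != std::process::id() {
        return Vec::new(); // we were not the target of this activation
    }

    let n: i32 = env::var("LISTEN_FDS").ok().and_then(|v| v.parse().ok()).unwrap_or(0);
    let fds: Vec<RawFd> = (0..n).map(|i| SD_LISTEN_FDS_START + i).collect();

    for &fd in &fds {
        // Mark CLOEXEC so the fds don't leak into anything we exec later.
        unsafe { libc::fcntl(fd, libc::F_SETFD, libc::FD_CLOEXEC) };
    }

    // Clear the variables so child processes can't misinterpret them.
    // (env mutation is an unsafe fn as of the Rust 2024 edition)
    unsafe {
        env::remove_var("LISTEN_PID");
        env::remove_var("LISTEN_FDS");
        env::remove_var("LISTEN_FDNAMES");
    }
    fds
}

fn main() {
    println!("activated fds: {:?}", take_activated_fds());
}
```

Real code would handle $LISTEN_FDNAMES and error cases more carefully; sd_listen_fds(3) documents the full contract.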

PIDs however are racy because they wrap around pretty fast, and that’s why nowadays we have pidfds. They are file descriptors which act as a stable handle to a process and avoid the ID wrap-around issue. Socket activation with systemd nowadays also passes a pidfd ID. A pidfd ID however is not the same as a pidfd file descriptor! It is the 64 bit inode of the pidfd file descriptor on the pidfd filesystem. This has the advantage that systemd doesn’t have to install another file descriptor in the target process which might not get closed. It can just put the pidfd ID number into the $LISTEN_PIDFDID environment variable.

Getting the inode of a file descriptor doesn’t sound hard. fstat(2) fills out struct stat which has the st_ino field. The problem is that it has a type of ino_t, which is 32 bits on some systems so we might end up with a process identifier which wraps around pretty fast again.

We can however use the name_to_handle syscall on the pidfd to get a struct file_handle with a f_handle field. The man page helpfully says that “the caller should treat the file_handle structure as an opaque data type”. We’re going to ignore that, though, because at least on the pidfd filesystem, the first 64 bits are the 64 bit inode. With systemd already depending on this and the kernel rule of “don’t break user-space”, this is now API, no matter what the man page tells you.
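
Here is a hedged sketch of that trick, done with raw syscalls through the libc crate rather than the crates listed above (which presumably wrap this more safely); the struct layout mirrors struct file_handle, the function name is mine, and it assumes a kernel recent enough that pidfds support file handles.

```rust
// Hedged sketch: get a pidfd for the current process, ask name_to_handle_at()
// for its handle, and read the first 64 bits of f_handle as the pidfd ID.
use std::io;

// Mirrors the kernel's variable-length struct file_handle with a fixed,
// maximally sized buffer (MAX_HANDLE_SZ is 128).
#[repr(C)]
struct FileHandleBuf {
    handle_bytes: u32, // in: size of f_handle; out: bytes actually used
    handle_type: i32,
    f_handle: [u8; 128],
}

fn pidfd_id_of_self() -> io::Result<u64> {
    // pidfd_open(getpid(), 0)
    let pidfd = unsafe { libc::syscall(libc::SYS_pidfd_open, libc::getpid(), 0) } as libc::c_int;
    if pidfd < 0 {
        return Err(io::Error::last_os_error());
    }

    let mut fh = FileHandleBuf { handle_bytes: 128, handle_type: 0, f_handle: [0u8; 128] };
    let mut mount_id: libc::c_int = 0;
    // Empty path + AT_EMPTY_PATH: "give me the handle of the fd itself".
    let ret = unsafe {
        libc::syscall(
            libc::SYS_name_to_handle_at,
            pidfd,
            b"\0".as_ptr() as *const libc::c_char,
            &mut fh as *mut FileHandleBuf,
            &mut mount_id as *mut libc::c_int,
            libc::AT_EMPTY_PATH,
        )
    };
    unsafe { libc::close(pidfd) };
    if ret < 0 {
        return Err(io::Error::last_os_error());
    }
    // "Opaque", says the man page; in practice the first 64 bits are the inode,
    // i.e. the same number systemd puts in $LISTEN_PIDFDID.
    Ok(u64::from_ne_bytes(fh.f_handle[..8].try_into().unwrap()))
}

fn main() {
    println!("pidfd ID of this process: {:?}", pidfd_id_of_self());
}
```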

So there you have it. It’s all connected.

Obviously both pidfds and name_to_handle have more exciting uses, many of which serve my broader goal: making Varlink services a first-class citizen. More about that another time.

Andy Wingo: free trade and the left, quater: witches

Planet GNOME - Thu, 26/03/2026 - 11:03pm

Good evening. Tonight, we wrap up our series on free trade and the left. To recap where we were, I started by retelling the story that free trade improves overall productivity, but expressed reservations about the way in which it does so: plant closures and threats thereof, regulatory arbitrage, and so on. Then we went back in history, discussing the progressive roots of free trade as a cause of the peace-and-justice crowd in the 19th century. Then we looked at the leading exponents of free trade in the 20th century, the neoliberals, ending in an odd place: instead of free trade being a means for the end of peace and prosperity, neoliberalism turns this on its head, instead holding that war, immiseration, apartheid, dictatorship, ecological disaster, all are justified if they serve the ends of the “free market”, of which free trade is a component.

When I make this list of evils I find myself back in 1999, certain that “we” were right then to shut down the WTO meetings in Seattle. With the distance of time, I start to wonder, not about then, but about now: for all the evil of our days, Trump at least has the virtue of making clear that trade barriers have a positive dot-product with acts of war. As someone who lives in the banlieue of Geneva, I am always amused when I find myself tut-tutting over the defunding of this or that institution of international collaboration.

I started this series by calling out four works. Pax Economica and Globalists have had adequate treatment. The third, Webs of Power, by Starhawk, is one that I have long seen as a bit of an oddball; forgive my normie white boy (derogatory) sensibilities, but I have often wondered how a book by a voice of “earth-based spirituality and Goddess religion” has ended up on my shelf. I am an atheist. How much woo is allowed to me?

choice of axiom

Conventional wisdom is to treat economists seriously, and Wiccans less so. In this instance, I have my doubts. The issue is that a neoliberal is at the same time a true believer in markets, and a skilled jurist. In service of the belief, any rhetorical device is permissible, if it works; if someone comes now and tries to tell me that the EU-Mercosur agreement is a good thing because of its effect on capybara populations, my first reaction is to doubt them, because maybe they are a neoliberal, and if so they would literally say anything.

Whereas if Starhawk has this Earth-mother-spiritual vibe... who am I to say? Yes, I think religion on the whole is a predatory force on vulnerable people, but that doesn’t mean that her interpretation of the web of life as divine is any less legitimate than neoliberal awe of the market. Let’s hear her argument and get on with things.

Starhawk’s book has three parts. The first is an as-I-lived-it chronicle, going from Seattle to Washington to Prague to Quebec City to Genoa, and thence to 9/11 and its aftermath, describing what it was like to be an activist seeking to disrupt the various WTO-adjacent meetings, seeking to build something else. She follows this up with 80 pages of contemporary-to-2002 topics such as hierarchy within the movement, nonviolence vs black blocs, ecological principles, cultural appropriation, and so on.

These first two sections inform the final 20 pages, in which Starhawk attempts to synthesize what it is that “we” wanted, as a kind of memento and hopefully a generator of actions to come. She comes up with a list of nine principles, which I’ll just quote here because I don’t have an editor (the joke’s on all of us!):

  1. We must protect the viability of the life-sustaining systems of the planet, which are everywhere under attack.
  2. A realm of the sacred exists, of things too precious to be commodified, and must be respected.
  3. Communities must control their own resources and destinies.
  4. The rights and heritages of indigenous communities must be acknowledged and respected.
  5. Enterprises must be rooted in communities and be responsible to communities and to future generations.
  6. Opportunity for human beings to meet their needs and fulfill their dreams and aspirations should be open to all.
  7. Labor deserves just compensation, security, and dignity.
  8. The human community has a collective responsibility to assure the basic means of life, growth, and development for all its members.
  9. Democracy means that all people have a voice in the decisions that affect them, including economic decisions.

Now friends, this is Starhawk’s list, not mine, and a quarter-century-old list at that. I’m not here to judge it, though I think it’s not bad; what I find interesting is its multifaceted nature, and that, contrasted with the cybernetic awe of late neoliberalism, it is actually the Witch who has the more down-to-earth concerns: a planet to live on, a Rawlsian concern with justice, and control of the economic by the people.

which leaves us

Former European Central Bank president Mario Draghi published a report some 18 months ago diagnosing a European malaise and proposing a number of specific remedies. I find that we on my part of the left are oft ill-equipped to engage with the problem he identifies, not to mention the solutions. The whole question of productivity is very technical, to the extent that we might consider it owned by our enemies: our instinct is to deflect, “productivity for what”, that sort of thing. Worse, if we do concede the problem, we haven’t spent as much time sparring in the gyms of comparative advantage; we risk a first-round knockout. We come with Starhawk’s list in hand, and they smile at us condescendingly: “very nice but we need to focus on the economy, you know,” and we lose again.

But Starhawk was not wrong. We do need a set of principles that we can use to analyze the present and plot a course to the future. I do not pretend to offer such a set today, but after having looked into the free trade question over the last couple months, I have reached two simple conclusions, which I will share with you now.

The first is that, from an intellectual point of view, we should just ignore the neoliberals; they are not serious people. That’s not a value judgment on the price mechanism, but rather one on those that value nothing else: that whereas classical liberalism was a means to an end, neoliberalism admits no other end than commerce, and admits any means that furthers its end. And so, we can just ignore them. If neoliberals were the only ones thinking about productivity, well, we might need new branches of economics. Fortunately that’s not the case. Productivity is but one dimension of the good, and it is our collective political task to choose a point from the space of the possible according to our collective desires.

The second conclusion is that we should take back free trade from our enemies on the right. We are one people, but divided into states by historical accident. Although there is a productivity argument for trade, we don’t have to limit ourselves to it: the bond that one might feel between Colorado and Wyoming should be the same between Italy and Tunisia, between Canada and Mexico, indeed between France and Brasil. One people, differentiated but together, sharing ideas and, yes, things. Internationalism, not nationalism.

There is no reason to treat free trade as the sole criterion against which to judge a policy. States are heterogeneous: what works for the US might not be right for Haiti; states differ in the degree that they internalize environmental impacts; and they differ as regards public services. We can take these into account via policy, but our goal should be progress for all.

So while Thomas Piketty is right to decry a kind of absolutism among European decisionmakers regarding free trade, I can’t help but notice a chauvinist division being set up in the way we leftists are inclined to treat these questions: we in Europe are one bloc, despite e.g. very different carbon impacts of producing a dishwasher in Poland versus Spain, whereas a dishwasher from China belongs to a different, worse, more sinful category.

and mercosur?

To paraphrase Marley’s ghost, mankind is my business. I want an ever closer union with my brothers and sisters in Uruguay and Zambia and Cambodia and Palestine. Trade is a part of it. All things being equal, we should want to trade with Chile. We on the left should not oppose free trade with Mercosur out of a principle that goods produced far away are necessarily a bad thing.

All this is not to say that we should just doux it (although, gosh, Karthik is such a worthy foe); we can still participate in collective carrot-and-stick exercises such as carbon taxes and the like, and this appreciation of free trade would not have trumped the campaign to boycott apartheid South Africa, nor would it for apartheid Israel. But our default position should be to support free trade with Mercosur, in such a way that it improves the lot of all humanity.

I don’t know what to think about the concrete elements of the EU-Mercosur deal. The neoliberal play is to design legal structures that encase commerce, and a free trade deal risks subordinating the political to the economic. But unlike some of my comrades on the left, I am starting to think that we should want free trade with Bolivia, and that’s already quite a change from where I was 25 years ago.

fin

Emily Saliers famously went seeking clarity; I fear I have brought little. We are still firmly in the world of the political, and like Starhawk, still need a framework of pre-thunk thoughts to orient us when some Draghi comes with a new four-score-page manifesto. Good luck and godspeed.

But it is easier to find a solution if we cull the dimensionality of the problem. The neoliberals had their day, but perhaps these staves may be of use to you in exorcising their discursive domination; it is time we cut them off. Internationalist trade was ours anyway, and it should resume its place as a means to our ends.

And what ends? As with prices, we discover them on the margin, in each political choice we make. Some are easy; some less so. And while a list like Starhawk’s is fine enough, I keep coming back to a simpler question: which side are you on? The sheriff or the union? ICE or the immigrant? Which side are you on? The question cuts fine. For the WTO in Seattle, to me it said to shut it all down. For EU-Mercosur, to me it says, “let’s talk.”

Ubuntu's GRUB Change: Fixing a Problem… or Creating One

LinuxSecurity.com - Thu, 26/03/2026 - 2:28pm
At some point, it stopped being “load kernel and go” and turned into this thing that tries to understand every filesystem, every storage setup, encryption, all of it, before the system is even running. And that's where it keeps biting people. If you've dealt with GRUB breaking, it's almost never the basic path. It's trying to read something slightly non-standard and just falling over. Btrfs layouts, LVM stacking, and encrypted setups: stuff that works fine once the kernel is up, but GRUB has to guess at it first. The more GRUB understands, the more it can get wrong. This isn't about “GRUB is bad,” it's that GRUB turned into something way bigger than a bootloader, and now it carries all the risk that comes with that.

Chandra Resolves Why Black Holes Hit the Brakes On Growth

Slashdot - Wed, 25/03/2026 - 12:00pm
alternative_right shares a report from Phys.org: Astronomers have an answer for a long-running mystery in astrophysics: why is the growth of supermassive black holes so much lower today than in the past? A study using NASA's Chandra X-ray Observatory and other X-ray telescopes found that supermassive black holes are unable to consume material as rapidly as they did in the distant past. The results appeared in the December 2025 issue of The Astrophysical Journal. [...] The team ran tests of the three main possible scenarios currently being considered for the slowdown of black hole growth. These options were: could the decline in black hole growth be caused by less efficient rates of consumption, or by smaller typical black hole masses, or by fewer actively growing black holes? Their analysis of the data, extending over billions of years of cosmic history, led them to the conclusion that black holes are indeed consuming material less rapidly the later they are found after the Big Bang. The researchers expect this trend of slower-growing black holes to continue into the future.

Read more of this story at Slashdot.

6.19.10: stable

Kernel Linux - Wed, 25/03/2026 - 11:13am
Version: 6.19.10 (stable) Released: 2026-03-25 Source: linux-6.19.10.tar.xz PGP Signature: linux-6.19.10.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-6.19.10

6.18.20: longterm

Kernel Linux - Wed, 25/03/2026 - 11:11am
Version: 6.18.20 (longterm) Released: 2026-03-25 Source: linux-6.18.20.tar.xz PGP Signature: linux-6.18.20.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-6.18.20

6.12.78: longterm

Kernel Linux - Wed, 25/03/2026 - 11:09am
Version: 6.12.78 (longterm) Released: 2026-03-25 Source: linux-6.12.78.tar.xz PGP Signature: linux-6.12.78.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-6.12.78

6.6.130: longterm

Kernel Linux - Wed, 25/03/2026 - 11:06am
Version: 6.6.130 (longterm) Released: 2026-03-25 Source: linux-6.6.130.tar.xz PGP Signature: linux-6.6.130.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-6.6.130

6.1.167: longterm

Kernel Linux - Wed, 25/03/2026 - 11:03am
Version: 6.1.167 (longterm) Released: 2026-03-25 Source: linux-6.1.167.tar.xz PGP Signature: linux-6.1.167.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-6.1.167

Thibault Martin: TIL that Proxmox can provision Kubernetes Persistent Volumes

Planet GNOME - Wed, 25/03/2026 - 11:00am

I wanted to dip my toes into Kubernetes for my homelab, but I knew I would need some flexibility to experiment. So instead of deploying k3s directly on my server, I

  1. Installed a base Debian on my server, encrypting the disk with LUKS and using LVM to partition it.
  2. Installed the Proxmox hypervisor on that base Debian.
  3. Spun up a Debian VM, and installed k3s on it.

Proxmox supports several storage plugins. It allows me to create LVM Logical Volumes for the VM disks, for example.

This setup allows me to spin up fresh VMs for my experiments, all while leaving my production k3s intact. This is great, but it came with two problems:

  1. When I provision the VM for k3s I need to allocate it a massive amount of disk space. This is because k3s uses a local path provisioner to provision new Persistent Volumes directly on the VM.
  2. I can't take snapshots of the Persistent Volumes when doing backups. There's a risk that the data will change while I perform the backup.

The situation looks like the following.

On the LVM disk of the host, I create a VM for k3s. This VM has a virtual disk that doesn't rely on LVM, so it can't create LVM Logical Volumes. The local provisioner can only create volumes on the virtual disk, because it can't escape the VM to create volumes on the Proxmox host.

Because the volumes are created on the virtual disk that doesn't rely on LVM, I can't use LVM snapshots to take snapshots of my volumes.

[!question] Why not LVM Thin?

One solution to address the massive disk requirement could be to use LVM Thin: it would allow me to allocate a lot of space in theory, while in practice it only fills up as the VM storage gets used.

I don't want to use LVM Thin because it puts me at risk of overprovisioning. I could allocate more storage than I actually have, and it would be difficult to realize that my disks are filling up before it's too late.

My colleague Quentin mentioned the Proxmox CSI Plugin. It is a plugin that replaces k3s' local path provisioner. Instead of creating the Kubernetes Persistent Volumes inside the VM, it calls the Proxmox host, asks it to create an LVM Logical Volume, and binds it to a Persistent Volume in Kubernetes.

Using the Proxmox CSI plugin, the situation would look like this.

It solves the two problems for me:

  1. I now only need to provision a small disk for the k3s VM, since the Persistent Volumes will be created outside of the VM.
  2. Since Proxmox will create LVM Logical Volumes to provision the Persistent Volumes, I can either do an LVM snapshot from Proxmox or use Kubernetes' Volume Snapshot feature, with some caveats.

Setting up the Proxmox-CSI-Plugin for k3s can be a bit involved, but I'm writing a longer blog post about it.

NASA Halts Work On Gateway To Develop a Lunar Base

Slashdot - Wed, 25/03/2026 - 8:00am
NASA is reportedly halting work on the lunar Gateway in favor of a more direct push to build a lunar base. The new plan would cost tens of billions over the next decade, though the change could face hurdles because Congress previously funded Gateway specifically. SpaceNews reports: "Starting today, we're building humanity's first deep space outpost," said Carlos Garcia-Galan, program executive for NASA's moon base effort. The lunar base effort will proceed in three phases. Phase 1, running from 2026 to 2028, "is all about getting to the moon reliably," he said. That includes a significant increase in the cadence of lander missions through the Commercial Lunar Payload Services and other programs. It will also focus on developing enabling technologies and getting "ground truth" for potential base locations at the lunar south pole. Phase 2, from 2029 through 2031, starts building the base, he said. That would include building out communications, navigation, power and other infrastructure, developing large CLPS cargo landers and supporting two crewed missions a year. Phase 3, beginning 2032, will enable "long distance and long duration human exploration" on the moon, he said, with routine logistics missions to the moon and uncrewed cargo return missions from the moon. Garcia-Galan said NASA foresees spending $10 billion each on Phases 1 and 2. Phase 3, lasting to at least 2036, would cost an additional $10 billion or more. The base would leverage existing programs, although with some changes. NASA is planning to revamp the Lunar Terrain Vehicle program after concluding the current approach would take too long to get a crew-capable rover to the moon. "We were projecting a delivery on the lunar surface by 2030," he said. The agency is instead issuing a draft request for proposals for simplified rovers that could be quicker and easier to develop but could be upgraded later. The base, though, would include some new capabilities and technologies. One example Garcia-Galan provided was MoonFall, a drone that would be able to hop from one location to another on the lunar surface. The drones will be "built on the legacy" of Ingenuity, the small Mars helicopter. "We're going to take everything that we learned from Ingenuity's systems, the avionics, all of that, to build this."

Read more of this story at Slashdot.

Hong Kong Police Can Demand Passwords Under New National Security Rules

Slashdot - Wed, 25/03/2026 - 4:30am
An anonymous reader quotes a report from the BBC: Hong Kong police can now demand phone or computer passwords from those who are suspected of breaching the wide-ranging National Security Law (NSL). Those who refuse could face up to a year in jail and a fine of up to $12,700, and individuals who provide "false or misleading information" could face up to three years in jail. It comes as part of new amendments to a bylaw under the NSL that the government gazetted on Monday. The NSL was introduced in Hong Kong in 2020, in the wake of massive pro-democracy protests the year before. Authorities say the laws, which target acts like terrorism and secession, are necessary for stability -- but critics say they are tools to quash dissent. The new amendments also give customs officials the power to seize items that they deem to "have seditious intention." Monday's amendments ensure that "activities endangering national security can be effectively prevented, suppressed and punished, and at the same time the lawful rights and interests of individuals and organizations are adequately protected," Hong Kong authorities said on Monday. Changes to the bylaw were announced by the city's leader, John Lee, bypassing the city's legislative council. The NSL also allows for some trials to be heard behind closed doors.

Read more of this story at Slashdot.
