Planet GNOME
https://planet.gnome.org/

Jiri Eischmann: Fedora & CentOS at LinuxDays 2025

Tue, 07/10/2025 - 6:23 PM

Another edition of LinuxDays took place in Prague last weekend – the country’s largest Linux event, drawing more than 1200 attendees. As every year, we had a Fedora booth there – this time also representing CentOS.

I was really glad that Tomáš Hrčka helped me staff the booth. I’m focused on the desktop part of Fedora and don’t follow the rest of the project in such detail. As a member of FESCo and the Fedora infra team, he has a great overview of what is going on in the project, and our knowledge complemented each other very well when answering visitors’ questions. I’d also like to thank Adellaide Mikova, who helped us tremendously despite not being a technical person.

This year I took our heavy 4K HDR display and showcased HDR support in Fedora Linux, whose implementation was a multi-year effort for our team. I played HDR videos in two different video players (one that supports HDR and one that doesn’t) so that people could see the difference, and explained what needed to be implemented to make it work.

Another highlight of our booth was the laptops that run Fedora exceptionally well: Slimbook and especially the Framework Laptop. Visitors were checking them out, and we spoke about how the Fedora community works with the vendors to make sure Fedora Linux runs flawlessly on their laptops.

We also got a lot of questions about CentOS. We met quite a few people who were surprised that CentOS still exists. We explained to them that it lives on in the form of CentOS Stream and tried to dispel some of the common misconceptions surrounding it.

Exhausting as it is, I really enjoy going to LinuxDays. It’s also a great opportunity to explain things and get direct feedback from the community.

Ignacio Casal Quinteiro: Servo GTK

Wed, 01/10/2025 - 3:49 PM

I just checked and it seems that it has been 9 years since my last post in this blog :O

As part of my job at Amazon, I started working on a GTK widget that allows embedding a Servo webview inside a GTK application. This was mostly a research project to understand the current state of Servo and whether it is in good enough shape to migrate to it from WebKitGTK. I have to admit that it is always a pleasure to work with Rust and the great gtk-rs bindings. As for Servo, while it is not yet ready for production, or at least not for what we need in our product, it was simple to embed and I got something running in just a few days. The community is also amazing: I had some problems along the way, and they provided good suggestions to get me unblocked in no time.

This project can be found in the following git repo: https://github.com/nacho/servo-gtk

I also created some issues with tasks that could be done to improve the project, in case anyone is interested.

Finally, I leave you here the usual mandatory screenshot:

Debarshi Ray: Ollama on Fedora Silverblue

Wed, 01/10/2025 - 2:30 AM

I found myself dealing with various rough edges and questions around running Ollama on Fedora Silverblue over the past few months. These arise from the fact that there are a few different ways of installing Ollama, /usr is a read-only mount point on Silverblue, people have different kinds of GPUs or none at all, the program that’s using Ollama might be a graphical application in a Flatpak or part of the operating system image, and so on. So, I thought I’d document a few different use-cases in one place for future reference, or maybe someone else will find it useful.

Different ways of installing Ollama

There are at least three different ways of installing Ollama on Fedora Silverblue. Each of them has its own nuances and trade-offs that we will explore later.

First, there’s the popular single-command POSIX shell script installer:

$ curl -fsSL https://ollama.com/install.sh | sh

There is a manual step-by-step variant for those who are uncomfortable with running a script straight off the Internet. Both install Ollama in the operating system’s /usr/local, /usr or / prefix, depending on which one comes first in the PATH environment variable, and attempt to enable and activate a systemd service unit that runs ollama serve.

Second, there’s a docker.io/ollama/ollama OCI image that can be used to put Ollama in a container. The container runs ollama serve by default.

Finally, there’s Fedora’s ollama RPM.

Surprise

Astute readers might be wondering why I mentioned the shell script installer in the context of Fedora Silverblue, where /usr is a read-only mount point. Won’t that break the script? Not really; or rather, the script does break, but not in the way one might expect.

Even though /usr is read-only on Silverblue, /usr/local is not, because it’s a symbolic link to /var/usrlocal, and Fedora defaults to putting /usr/local/bin earlier in the PATH environment variable than the other prefixes that the installer attempts to use, as long as pkexec(1) isn’t being used. This happy coincidence allows the installer to place the Ollama binaries in their right places.
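
A minimal sketch of checking both details on a Silverblue installation (the PATH contents will vary between set-ups):

# /usr/local resolves into the writable /var tree
$ readlink -f /usr/local
/var/usrlocal

# /usr/local/bin should appear before /usr/bin here
$ echo "$PATH"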

The script does fail eventually when attempting to create the systemd service unit to run ollama serve, because it tries to create an ollama user with /usr/share/ollama as its home directory. However, this half-baked installation works surprisingly well as long as nobody is trying to use an AMD GPU.

NVIDIA GPUs work if the proprietary driver and nvidia-smi(1) are present in the operating system, as provided by the kmod-nvidia and xorg-x11-drv-nvidia-cuda packages from RPM Fusion; and so does CPU fallback.

Unfortunately, the results would be the same if the shell script installer is used inside a Toolbx container. It will fail to create the systemd service unit because it can’t connect to the system-wide instance of systemd.

Using AMD GPUs with Ollama is an important use-case. So, let’s see if we can do better than trying to manually work around the hurdles faced by the script.

OCI image

The docker.io/ollama/ollama OCI image requires the user to know what processing hardware they have or want to use. To use it only with the CPU without any GPU acceleration:

$ podman run \
    --name ollama \
    --publish 11434:11434 \
    --rm \
    --security-opt label=disable \
    --volume ~/.ollama:/root/.ollama \
    docker.io/ollama/ollama:latest

This will be used as the baseline to enable different kinds of GPUs. Port 11434 is the default port on which the Ollama server listens, and ~/.ollama is the default directory where it stores its SSH keys and artificial intelligence models.
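
Once the container is running, a quick way to confirm that the server is reachable and can serve a model is to query its HTTP API and exec into the container; a minimal sketch, where the model name is only an example:

$ curl http://localhost:11434/api/version
$ podman exec -it ollama ollama run llama3.2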

To enable NVIDIA GPUs, the proprietary driver and nvidia-smi(1) must be present on the host operating system, as provided by the kmod-nvidia and xorg-x11-drv-nvidia-cuda packages from RPM Fusion. The user space driver has to be injected into the container from the host using NVIDIA Container Toolkit, provided by the nvidia-container-toolkit package from Fedora, for Ollama to be able to use the GPUs.

The first step is to generate a Container Device Interface (or CDI) specification for the user space driver:

$ sudo nvidia-ctk cdi generate --output /etc/cdi/nvidia.yaml
…
…
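
To sanity-check the result, the devices described by the generated specification can be listed; if everything worked, an entry like nvidia.com/gpu=all should typically show up:

$ nvidia-ctk cdi list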

Then the container needs to be run with access to the GPUs, by adding the --gpus option to the baseline command above:

$ podman run \
    --gpus all \
    --name ollama \
    --publish 11434:11434 \
    --rm \
    --security-opt label=disable \
    --volume ~/.ollama:/root/.ollama \
    docker.io/ollama/ollama:latest

AMD GPUs don’t need the driver to be injected into the container from the host, because it can be bundled with the OCI image. Therefore, instead of generating a CDI specification for them, an image that bundles the driver must be used. This is done by using the rocm tag for the docker.io/ollama/ollama image.

Then the container needs to be run with access to the GPUs. However, the --gpus option only works for NVIDIA GPUs, so the specific devices need to be spelled out by adding the --device option to the baseline command above:

$ podman run \
    --device /dev/dri \
    --device /dev/kfd \
    --name ollama \
    --publish 11434:11434 \
    --rm \
    --security-opt label=disable \
    --volume ~/.ollama:/root/.ollama \
    docker.io/ollama/ollama:rocm

However, because of how AMD GPUs are programmed with ROCm, it’s possible that some decent GPUs might not be supported by the docker.io/ollama/ollama:rocm image. The ROCm compiler needs to explicitly support the GPU in question, and Ollama needs to be built with such a compiler. Unfortunately, the binaries in the image leave out support for some GPUs that would otherwise work. For example, my AMD Radeon RX 6700 XT isn’t supported.

This can be verified with nvtop(1) in a Toolbx container. If there’s no spike in the GPU and its memory, then it’s not being used.
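
A minimal sketch of that check, assuming a default Fedora Toolbx container:

$ toolbox create
$ toolbox enter
$ sudo dnf install nvtop
$ nvtop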

It would be good to support as many AMD GPUs as possible with Ollama. So, let’s see if we can do better.

Fedora’s ollama RPM

Fedora offers a very capable ollama RPM, as far as AMD GPUs are concerned, because Fedora’s ROCm stack supports a lot more GPUs than other builds out there. It’s possible to check if a GPU is supported either by using the RPM and keeping an eye on nvtop(1), or by comparing the name of the GPU shown by rocminfo with those listed in the rocm-rpm-macros RPM.

For example, according to rocminfo, the name for my AMD Radeon RX 6700 XT is gfx1031, which is listed in rocm-rpm-macros:

$ rocminfo
ROCk module is loaded
=====================
HSA System Attributes
=====================
Runtime Version:         1.1
Runtime Ext Version:     1.6
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE
System Endianness:       LITTLE
Mwaitx:                  DISABLED
DMAbuf Support:          YES

==========
HSA Agents
==========
*******
Agent 1
*******
  Name:                    AMD Ryzen 7 5800X 8-Core Processor
  Uuid:                    CPU-XX
  Marketing Name:          AMD Ryzen 7 5800X 8-Core Processor
  Vendor Name:             CPU
  Feature:                 None specified
  Profile:                 FULL_PROFILE
  Float Round Mode:        NEAR
  Max Queue Number:        0(0x0)
  Queue Min Size:          0(0x0)
  Queue Max Size:          0(0x0)
  Queue Type:              MULTI
  Node:                    0
  Device Type:             CPU
  …
  …
*******
Agent 2
*******
  Name:                    gfx1031
  Uuid:                    GPU-XX
  Marketing Name:          AMD Radeon RX 6700 XT
  Vendor Name:             AMD
  Feature:                 KERNEL_DISPATCH
  Profile:                 BASE_PROFILE
  Float Round Mode:        NEAR
  Max Queue Number:        128(0x80)
  Queue Min Size:          64(0x40)
  Queue Max Size:          131072(0x20000)
  Queue Type:              MULTI
  Node:                    1
  Device Type:             GPU
  …
  …
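
One hedged way to cross-check this, assuming the rocm-rpm-macros package is installed, is to grep its files for the gfx name reported by rocminfo:

$ rocminfo | grep -o -m 1 'gfx[0-9a-f]*'
$ rpm -ql rocm-rpm-macros | xargs grep -l gfx1031 2>/dev/null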

The ollama RPM can be installed inside a Toolbx container, or it can be layered on top of the base registry.fedoraproject.org/fedora image to replace the docker.io/ollama/ollama:rocm image:

FROM registry.fedoraproject.org/fedora:42
RUN dnf --assumeyes upgrade
RUN dnf --assumeyes install ollama
RUN dnf clean all
ENV OLLAMA_HOST=0.0.0.0:11434
EXPOSE 11434
ENTRYPOINT ["/usr/bin/ollama"]
CMD ["serve"]
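
A minimal sketch of building this Containerfile and swapping the resulting image into the ROCm command from above; the localhost/ollama:rocm tag is just an example:

$ podman build --tag localhost/ollama:rocm .
$ podman run \
    --device /dev/dri \
    --device /dev/kfd \
    --name ollama \
    --publish 11434:11434 \
    --rm \
    --security-opt label=disable \
    --volume ~/.ollama:/root/.ollama \
    localhost/ollama:rocm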

Unfortunately, for obvious reasons, Fedora’s ollama RPM doesn’t support NVIDIA GPUs.

Conclusion

From the purist perspective of not touching the operating system’s OSTree image, and being able to easily remove or upgrade Ollama, using an OCI container is the best option for using Ollama on Fedora Silverblue. Tools like Podman offer a suite of features to manage OCI containers and images that are far beyond what the POSIX shell script installer can hope to offer.

It seems that the realities of GPUs from AMD and NVIDIA prevent the use of the same OCI image, if we want to maximize our hardware support, and force the use of slightly different Podman commands and associated set-up. We have to create our own image using Fedora’s ollama RPM for AMD, and the docker.io/ollama/ollama:latest image with NVIDIA Container Toolkit for NVIDIA.

Hans de Goede: Fedora 43 will ship with FOSS Meteor, Lunar and Arrow Lake MIPI camera support

Tue, 30/09/2025 - 8:55 PM
Good news: the just-released 6.17 kernel has support for the IPU7 CSI2 receiver, and the missing USBIO drivers have recently landed in linux-next. I have backported the USBIO drivers plus a few other camera fixes to the Fedora 6.17 kernel.

I've also prepared an updated libcamera-0.5.2 Fedora package with support for IPU7 (Lunar Lake) CSI2 receivers, as well as a set of backported upstream SwStats and AGC fixes that fix various crashes and the bad flicker MIPI camera users have been hitting with libcamera 0.5.2.

Together these 2 updates should make Fedora 43's FOSS MIPI camera support work on most Meteor Lake, Lunar Lake and Arrow Lake laptops!

If you want to give this a try, install or upgrade to Fedora 43 beta and install all updates. If you've installed RPM Fusion's binary IPU6 stack, please run:

sudo dnf remove akmod-intel-ipu6 'kmod-intel-ipu6*'

to remove it, as it may interfere with the FOSS stack, and finally reboot. Please first try with qcam:

sudo dnf install libcamera-qcam
qcam

which only tests libcamera. After that, give apps which use the camera through PipeWire a try, like GNOME's "Camera" app (Snapshot) or video-conferencing in Firefox.
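
If qcam doesn't show any cameras, it can help to first check whether libcamera enumerates the sensor at all; a hedged sketch, assuming the cam utility from the libcamera-tools package:

$ sudo dnf install libcamera-tools
$ cam --list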

Note that Snapshot on Lunar Lake triggers a bug in the LNL Vulkan code; to avoid this, start Snapshot from a terminal with:

GSK_RENDERER=gl snapshot

If you have a MIPI camera which still does not work please file a bug following these instructions and drop me an email with the bugzilla link at hansg@kernel.org.


Matthew Garrett: Investigating a forged PDF

Thu, 25/09/2025 - 12:22 AM
I had to rent a house for a couple of months recently, which is long enough in California that it pushes you into proper tenant protection law. As landlords tend to do, they failed to return my security deposit within the 21 days required by law, having already failed to provide the required notification that I was entitled to an inspection before moving out. Cue some tedious argumentation with the letting agency, and eventually me threatening to take them to small claims court.

This post is not about that.

Now, under Californian law, the onus is on the landlord to hold and return the security deposit - the agency has no role in this. The only reason I was talking to them is that my lease didn't mention the name or address of the landlord (another legal violation, but the outcome is just that you get to serve the landlord via the agency). So it was a bit surprising when I received an email from the owner of the agency informing me that they did not hold the deposit and so were not liable - I already knew this.

The odd bit about this, though, is that they sent me another copy of the contract, asserting that it made it clear that the landlord held the deposit. I read it, and instead found a clause reading "SECURITY: The security deposit will secure the performance of Tenant’s obligations. IER may, but will not be obligated to, apply all portions of said deposit on account of Tenant’s obligations. Any balance remaining upon termination will be returned to Tenant. Tenant will not have the right to apply the security deposit in payment of the last month’s rent. Security deposit held at IER Trust Account.", where IER is International Executive Rentals, the agency in question. Why send me a contract that says you hold the money while you're telling me you don't? And then I read further down and found this:

Ok, fair enough, there's an addendum that says the landlord has it (I've removed the landlord's name, it's present in the original).

Except. I had no recollection of that addendum. I went back to the copy of the contract I had and discovered:

Huh! But obviously I could just have edited that to remove it (there's no obvious reason for me to, but whatever), and then it'd be my word against theirs. However, I'd been sent the document via RightSignature, an online document signing platform, and they'd added a certification page that looked like this:

Interestingly, the certificate page was identical in both documents, including the checksums, despite the content being different. So, how do I show which one is legitimate? You'd think given this certificate page this would be trivial, but RightSignature provides no documented mechanism whatsoever for anyone to verify any of the fields in the certificate, which is annoying but let's see what we can do anyway.

First up, let's look at the PDF metadata. pdftk has a dump_data command that dumps the metadata in the document, including the creation date and the modification date. My file had both set to identical timestamps in June, both listed in UTC, corresponding to the time I'd signed the document. The file containing the addendum? The same creation time, but a modification time of this Monday, shortly before it was sent to me. This time, the modification timestamp was in Pacific Daylight Time, the timezone currently observed in California. In addition, the data included two ID fields, ID0 and ID1. In my document both were identical, in the one with the addendum ID0 matched mine but ID1 was different.
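
For reference, a hedged sketch of those checks with a hypothetical file name; pdftk reports the dates as InfoKey/InfoValue pairs and the two IDs as PdfID0 and PdfID1:

$ pdftk contract.pdf dump_data | grep -A 1 -E 'CreationDate|ModDate'
$ pdftk contract.pdf dump_data | grep PdfID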

These ID tags are intended to be some form of representation (such as a hash) of the document. ID0 is set when the document is created and should not be modified afterwards; ID1 is initially identical to ID0, but changes when the document is modified. This is intended to allow tooling to identify whether two documents are modified versions of the same document. The identical ID0 indicated that the document with the addendum was originally identical to mine, and the different ID1 that it had been modified.

Well, ok, that seems like a pretty strong demonstration. I had the "I have a very particular set of skills" conversation with the agency and pointed these facts out, that they were an extremely strong indication that my copy was authentic and their one wasn't, and they responded that the document was "re-sealed" every time it was downloaded from RightSignature and that would explain the modifications. This doesn't seem plausible, but it's an argument. Let's go further.

My next move was pdfalyzer, which allows you to pull a PDF apart into its component pieces. This revealed that the documents were identical, other than page 3, the one with the addendum. This page included tags entitled "touchUp_TextEdit", evidence that the page had been modified using Acrobat. But in itself, that doesn't prove anything - obviously it had been edited at some point to insert the landlord's name, it doesn't prove whether it happened before or after the signing.
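
One hedged way to spot such a difference, with hypothetical file names, is to dump both documents with pdfalyzer and diff the results:

$ pdfalyzer original.pdf > original.txt
$ pdfalyzer addendum.pdf > addendum.txt
$ diff original.txt addendum.txt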

But in the process of editing, Acrobat appeared to have renamed all the font references on that page into a different format. Every other page had a consistent naming scheme for the fonts, and they matched the scheme in the page 3 I had. Again, that doesn't tell us whether the renaming happened before or after the signing. Or does it?

You see, when I completed my signing, RightSignature inserted my name into the document, and did so using a font that wasn't otherwise present in the document (Courier, in this case). That font was named identically throughout the document, except on page 3, where it was named in the same manner as every other font that Acrobat had renamed. Given the font wasn't present in the document until after I'd signed it, this is proof that the page was edited after signing.

But eh this is all very convoluted. Surely there's an easier way? Thankfully yes, although I hate it. RightSignature had sent me a link to view my signed copy of the document. When I went there it presented it to me as the original PDF with my signature overlaid on top. Hitting F12 gave me the network tab, and I could see a reference to a base.pdf. Downloading that gave me the original PDF, pre-signature. Running sha256sum on it gave me an identical hash to the "Original checksum" field. Needless to say, it did not contain the addendum.

Why do this? The only explanation I can come up with (and I am obviously guessing here, I may be incorrect!) is that International Executive Rentals realised that they'd sent me a contract which could mean that they were liable for the return of my deposit, even though they'd already given it to my landlord, and after realising this added the addendum, sent it to me, and assumed that I just wouldn't notice (or that, if I did, I wouldn't be able to prove anything). In the process they went from an extremely unlikely possibility of having civil liability for a few thousand dollars (even if they were holding the deposit it's still the landlord's legal duty to return it, as far as I can tell) to doing something that looks extremely like forgery.

There's a hilarious followup. After this happened, the agency offered to do a screenshare with me showing them logging into RightSignature and showing the signed file with the addendum, and then proceeded to do so. One minor problem - the "Send for signature" button was still there, just below a field saying "Uploaded: 09/22/25". I asked them to search for my name, and it popped up two hits - one marked draft, one marked completed. The one marked completed? Didn't contain the addendum.


Arun Raghavan: Asymptotic on hiatus

Wed, 24/09/2025 - 8:14 PM

Asymptotic was started 6 years ago, when I wanted to build something that would be larger than just myself.

We’ve worked with some incredible clients in this time, on a wide range of projects. I would be remiss to not thank all the teams that put their trust in us.

In addition to working on interesting challenges, our goal was to make sure we were making a positive impact on the open source projects that we are part of. I think we truly punched above our weight class (pardon the boxing metaphor) on this front – all the upstream work we have done stands testament to that.

Of course, the biggest single contributor to what we were able to achieve is our team. My partner, Deepa, was instrumental in shaping how the company was formed and run. Sanchayan (who took a leap of faith in joining us first), and Taruntej were stellar colleagues and friends on this journey.

It’s been an incredibly rewarding experience, but the time has come to move on to other things, and we have now paused operations. I’ll soon write about some recent work and what’s next.

Alley Chaggar: Final Report

Sat, 13/09/2025 - 2:00 AM
Intro:

Hi everyone, it’s the end of GSoC! I had a great experience throughout this whole process. I’ve learned so much. This is essentially the ‘final report’ for GSoC, but not my final report for this project in general by a long shot. I still have so much more I want to do, but here is what I’ve done so far.

Project:

JSON, YAML, and/or XML emitting and parsing integration into Vala’s compiler.

Mentor:

I would like to thank Lorenz Wildberg for being my mentor for this project, as well as the Vala community.

Description:

The main objective of this project is to integrate direct syntax support for parsing and emitting JSON, XML, and/or YAML formats in Vala. This will cut back the boilerplate code, making it more user-friendly and efficient for developers working with these formatting languages.

What I’ve done: Research
  • I’ve done significant research in both JSON and YAML parsing and emitting in various languages like C#, Java, Rust and Python.
  • Looked into how Vala currently handles JSON using JSON GLib classes, and I then modelled the C code after the examples I collected.
  • Modelled the JSON module after other modules in the codegen, mainly after D-Bus, GVariant, GObject, and GTK.
Custom JSON Overrides and Attribute
  • Created Vala syntax sugar, specifically a [JSON] attribute for serialization.
  • Built support for custom overrides, i.e. mapping JSON keys to differently named fields/properties.
  • Reduced boilerplate by generating C code behind the scenes.
Structs
  • I’ve created Vala functions to both serialize and deserialize structs using JSON boxed functions.
  • I created a Vala generate_struct_serialize_func function to create a C code function called _%s_serialize_func to serialize fields.
  • I then created a Vala function generate_struct_to_json to create a C code function called _json_%s_serialize_mystruct to fully serialize the struct by using boxed serialize functions.

  • I created a Vala generate_struct_deserialize_func function to create a C code function called _%s_deserialize_func to deserialize fields.
  • I then created a Vala function generate_struct_from_json to create a C code function called _json_%s_deserialize_mystruct to fully deserialize the struct by using boxed deserialize functions.
GObjects
  • I’ve created Vala functions to both serialize and deserialize GObjects using json_gobject_serialize and the JSON generator.
  • I then created a Vala function generate_gclass_to_json to create a C code function called _json_%s_serialize_gobject_myclass to fully serialize GObjects.

  • I created a Vala generate_gclass_from_json function to create a C code function called _json_%s_deserialize_class to deserialize fields.
Non-GObjects
  • I’ve implemented serialization of non-GObjects using JSON GLib’s builder functions.
  • I then created a Vala function generate_class_to_json to create a C code function called _json_%s_serialize_myclass to fully serialize classes that don’t inherit from Object or Json.Serializable.
Future Work: Research
  • Research still needs to be put into integrating XML and determining which library to use.
  • The integration of YAML and other formatting languages beyond JSON and XML.
Custom Overrides and Attributes
  • I want to create more specialized attributes for JSON that only do serialization or deserialization, such as [JsonDeserialize] and [JsonSerialize] or something similar.
  • The [JSON] attribute needs to do both deserializing and serializing, and at the moment, the deserializing code has problems.
  • XML, YAML, and other formatting languages will follow very similar attribute patterns: [Yaml], [Xml], [Json].
Bugs
  • The unref C code functions are being called with nulls, which shouldn’t be the case. They need proper types going through.
  • Deserializing prompts a redefinition that needs to be corrected.
  • Overridden GObject properties need to have setters made to be able to get the values.
Links
