Planet GNOME

Planet GNOME - http://planet.gnome.org/
Updated: 3 days 8 hours ago

Daniel García Moreno: mdl

Fri, 03/08/2018 - 1:22 PM

Last month I wrote a blog post about the LMDB cache database and my wish to use it in Fractal. To summarize, LMDB is a memory-mapped key-value database that persists the data to the filesystem. I want to use it in the Fractal desktop application to replace the current state storage system (we're using simple JSON files), and as a side effect we can use this storage system to share data between threads, because currently we're using a big AppOp struct shared with Arc<Mutex<AppOp>>, and this causes some problems because we need to share, lock and update the state there.

The main goal is to define an app data model with smaller structs and store them using LMDB; then we can read the same data by querying LMDB and update the app state by storing to it.

With this change we don't need to share these structs; we only need to query LMDB to get the data and work with that, which should simplify our code. The other main benefit is that we'll have this state in the filesystem by default, so when we open the app after closing it, we'll be back in the same state.

Take a look at the gtk TODO example app to see how to use mdl with signals in a real gtk app.

What is mdl

mdl is a data model library to share app state between threads and processes and to persist the data in the filesystem. It implements a simple way to store struct instances in an LMDB database, and it also provides other backends, like a BTreeMap.

I started to play with the LMDB Rust binding and wrote some simple tests. After those tests, I decided to write a small abstraction to hide the LMDB internals and provide simple data storage, and to do that I created the mdl crate.

The idea is to be able to define your app model as simple Rust structs. LMDB is a key-value database, so every struct instance will have a unique key under which it is stored in the cache.

The keys are stored in the cache in order, so we can use some techniques to store related objects and to retrieve all objects of a kind; we only need to build keys correctly, following a scheme. For example, for Fractal we can store rooms, members and messages like this:

  • rooms with key "room:roomid", to store all the room information, title, topic, icon, unread msgs, etc.
  • members with key "member:roomid:userid", to store all member information.
  • messages with key "msg:roomid:msgid" to store room messages.

Following this key scheme we can iterate over all rooms by querying all objects whose keys start with "room", and in the same way we can get all members and all messages of a room. The sketch below illustrates the idea.
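
Here is a minimal sketch of how such keys could be built; the helper functions (room_key, member_key, msg_key) and the ids are illustrative only and not part of mdl:

fn room_key(roomid: &str) -> String {
    format!("room:{}", roomid)
}

fn member_key(roomid: &str, userid: &str) -> String {
    format!("member:{}:{}", roomid, userid)
}

fn msg_key(roomid: &str, msgid: &str) -> String {
    format!("msg:{}:{}", roomid, msgid)
}

fn main() {
    // Every key for the same room shares a common prefix, so a prefix
    // query like "msg:!abc" would return every message of that room.
    println!("{}", room_key("!abc"));          // room:!abc
    println!("{}", member_key("!abc", "@me")); // member:!abc:@me
    println!("{}", msg_key("!abc", "$m1"));    // msg:!abc:$m1
}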

This has some inconveniences, because we can't query a message directly by id if we don't know the roomid. If we need that kind of query, we need to think about another key scheme, or maybe we should duplicate data. Key-value stores are simple databases, so we don't have the power of relational databases.

Internals

LMDB is fast and efficient because it's memory-mapped, so using this cache won't add a lot of overhead, but to make it simple to use I had to add some overhead of my own, so mdl is easy by default and can be tuned to be really fast.

This crate has three main modules with traits to implement:

  • model: This contains the Model trait, which every struct that we want to make cacheable should implement.
  • store: This contains the Store trait that's implemented by all the cache systems.
  • signal: This contains the Signaler trait and two structs that allow us to emit/subscribe to "key" signals.

And two more modules that implement the current two cache systems:

  • cache: LMDB cache that implements the Store trait.
  • bcache: BTreeMap cache that implements the Store trait. This is a good example of another cache system that can be used; it doesn't persist to the filesystem.

So we have two main concepts here, the Store and the Model. The model is the plain data and the store is the data container. We can add models to the store or query the store to get stored models. We store our models as key-value pairs where the key is a String and the value is a Vec<u8>, so every model should be serializable.

This serialization is the biggest overhead added. We need it because we have to be able to store the data in the LMDB database. Every request creates a copy of the object in the database, so we're not working with the same data. This could be tuned to use pointers to the real data, but that would require unsafe code, and I don't think the performance we would gain deserves the complexity it would add.

By default, the Model trait provides two methods, fromb and tob, to serialize and deserialize using bincode, so any struct that implements the Model trait and doesn't reimplement these two methods should implement Serialize and Deserialize from serde.

The signal system is an addition that lets us register callbacks for key modifications in the store, so we can do something when an object is added, modified or deleted. The signaler is optional and has to be used explicitly.

How to use it

First of all, you should define your data model, the struct that you want to be able to store in the database:

#[derive(Serialize, Deserialize, Debug)]
struct A {
    pub p1: String,
    pub p2: u32,
}

In this example we define a struct called A with two attributes: p1, a String, and p2, a u32. We derive Serialize and Deserialize because we're using the default fromb and tob from the Model trait.

Then we need to implement the Model trait:

impl Model for A {
    fn key(&self) -> String {
        format!("{}:{}", self.p1, self.p2)
    }
}

We only reimplement the key method to build a key for every instance of A. In this case the key will be the String followed by the number, so for example if we have something like let a = A { p1: "myk".to_string(), p2: 42 }; the key will be "myk:42".

Then, to use this, we need a Store; in this example we'll use the LMDB store, which is the Cache struct:

// initializing the cache. This str will be the fs persistence path
let db = "/tmp/mydb.lmdb";
let cache = Cache::new(db).unwrap();

We pass the filesystem path where we want to persist the cache as the first argument; in this example we persist to "/tmp/mydb.lmdb". When we run the program for the first time, a directory is created there. The next time, that cache is reused with the information from the previous execution.

Then, with this cache object, we can instantiate an A object and store it in the cache:

// create a new *object* and storing in the cache
let a = A { p1: "hello".to_string(), p2: 42 };
let r = a.store(&cache);
assert!(r.is_ok());

The store method will serialize the object and store a copy of that in the cache.

After the store, we can query for this object from another process, using the same LMDB path, or from the same process using the cache:

// querying the cache by key and getting a new *instance*
let a1: A = A::get(&cache, "hello:42").unwrap();
assert_eq!(a1.p1, a.p1);
assert_eq!(a1.p2, a.p2);

We'll get a copy of the original one.

This is the full example:

extern crate mdl;
#[macro_use]
extern crate serde_derive;

use mdl::Cache;
use mdl::Model;
use mdl::Continue;

#[derive(Serialize, Deserialize, Debug)]
struct A {
    pub p1: String,
    pub p2: u32,
}

impl Model for A {
    fn key(&self) -> String {
        format!("{}:{}", self.p1, self.p2)
    }
}

fn main() {
    // initializing the cache. This str will be the fs persistence path
    let db = "/tmp/mydb.lmdb";
    let cache = Cache::new(db).unwrap();

    // create a new *object* and storing in the cache
    let a = A { p1: "hello".to_string(), p2: 42 };
    let r = a.store(&cache);
    assert!(r.is_ok());

    // querying the cache by key and getting a new *instance*
    let a1: A = A::get(&cache, "hello:42").unwrap();
    assert_eq!(a1.p1, a.p1);
    assert_eq!(a1.p2, a.p2);
}

Iterations

When we store objects with the same key prefix, we can iterate over all of them even though we don't know the full key of every object.

Currently there are two ways to iterate over all objects with the same prefix in a Store:

  • all

This is the simpler way: calling the all method we receive a Vec<T>, so we have all the objects in a vector.

let hellows: Vec<A> = A::all(&cache, "hello").unwrap();

for h in hellows {
    println!("hellow: {}", h.p2);
}

This has a little problem: if we have a lot of objects, the vector will use a lot of memory and we'll end up iterating over all objects twice. To solve this, the iter method was created.

  • iter

The iter method provides a way to call a closure for every object whose key has this prefix. The closure should return a Continue(bool) that indicates whether we should continue iterating or stop here.

A::iter(&cache, "hello", |h| {
    println!("hellow: {}", h.p2);
    Continue(true)
}).unwrap();

Using Continue we can avoid iterating over all the objects, for example when we're searching for one concrete object, as in the sketch below.
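
For instance, a sketch (continuing the example above) that stops as soon as a matching object is found; the p2 == 42 check is only an illustration:

// Stop at the first object that matches instead of walking the whole prefix.
A::iter(&cache, "hello", |h| {
    if h.p2 == 42 {
        println!("found: {}", h.p2);
        return Continue(false); // stop the iteration here
    }
    Continue(true) // keep looking
}).unwrap();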

We're still copying every object, but the iter method is better than all, because if we don't copy or move the object out of the closure, the copy only lives in the closure scope, so we use less memory and we only iterate once. If we use all, we iterate over all objects with that prefix to build the vector, so iterating over that vector again afterwards costs more than the iter version.

Signal system

As I said before, the signal system provides a way to register callbacks for key modifications. The signal system is independent of the Model and the Store and can be used on its own:

extern crate mdl;

use mdl::Signaler;
use mdl::SignalerAsync;
use mdl::SigType;

use std::sync::{Arc, Mutex};
use std::{thread, time};

fn main() {
    let sig = SignalerAsync::new();
    sig.signal_loop();
    let counter = Arc::new(Mutex::new(0));

    // one thread for receive signals
    let sig1 = sig.clone();
    let c1 = counter.clone();
    let t1: thread::JoinHandle<_> = thread::spawn(move || {
        let _ = sig1.subscribe("signal", Box::new(move |_sig| {
            *c1.lock().unwrap() += 1;
        }));
    });

    // waiting for threads to finish
    t1.join().unwrap();

    // one thread for emit signals
    let sig2 = sig.clone();
    let t2: thread::JoinHandle<_> = thread::spawn(move || {
        sig2.emit(SigType::Update, "signal").unwrap();
        sig2.emit(SigType::Update, "signal:2").unwrap();
        sig2.emit(SigType::Update, "signal:2:3").unwrap();
    });

    // waiting for threads to finish
    t2.join().unwrap();

    let ten_millis = time::Duration::from_millis(10);
    thread::sleep(ten_millis);

    assert_eq!(*counter.lock().unwrap(), 3);
}

In this example we create a SignalerAsync that can emit signals and to which we can subscribe callbacks. The sig.signal_loop(); call starts the signal loop thread, which waits for signals and calls any subscribed callback when a signal arrives.

let _ = sig1.subscribe("signal", Box::new(move |_sig| {
    *c1.lock().unwrap() += 1;
}));

We subscribe a callback to the signaler. The signaler can be cloned and the list of callbacks stays the same: if you emit a signal from one clone and subscribe in another clone, that signal will still trigger the callback.

Then we emit some signals:

sig2.emit(SigType::Update, "signal").unwrap();
sig2.emit(SigType::Update, "signal:2").unwrap();
sig2.emit(SigType::Update, "signal:2:3").unwrap();

All three of these signals will trigger the previous callback, because a subscription matches any signal name that starts with the subscribed prefix. Following the keys described earlier, this lets us subscribe to all new room message insertions by subscribing to "msg:roomid"; and if we only want a callback to be called when one concrete message is updated, we can subscribe to "msg:roomid:msgid" and that callback won't be triggered for other messages. The sketch below shows both kinds of subscription.
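
Here is a minimal sketch of both kinds of subscription, using only the SignalerAsync API shown in the example above; the "msg:roomid" and "msg:roomid:msgid" names are placeholders for real ids:

extern crate mdl;

use mdl::SignalerAsync;
use mdl::SigType;
use std::{thread, time};

fn main() {
    let sig = SignalerAsync::new();
    sig.signal_loop();

    // Triggered for every message in the room, because subscriptions match
    // any signal name starting with the subscribed prefix.
    let _ = sig.subscribe("msg:roomid", Box::new(|s| {
        println!("something changed in the room: {}", s.name);
    }));

    // Triggered only for this concrete message.
    let _ = sig.subscribe("msg:roomid:msgid", Box::new(|s| {
        println!("this message was updated: {:?}", s.type_);
    }));

    // Fires both callbacks above.
    sig.emit(SigType::Update, "msg:roomid:msgid").unwrap();

    // Give the signal loop thread time to deliver the callbacks.
    thread::sleep(time::Duration::from_millis(10));
}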

The callback should be a Box<Fn(Signal)>, where Signal is the following struct:

#[derive(Clone, Debug)]
pub enum SigType {
    Update,
    Delete,
}

#[derive(Clone, Debug)]
pub struct Signal {
    pub type_: SigType,
    pub name: String,
}

Currently only Update and Delete signal types are supported.

Signaler in gtk main loop

All UI operations in a gtk app should be executed in the gtk main loop, so we can't use the SignalerAsync in a gtk app: this signaler creates one thread for the callbacks, so all callbacks must implement the Send trait, and if we want to modify, for example, a gtk::Label in a callback, that callback won't implement Send, because a gtk::Label can't be sent between threads safely.

To solve this problem, I've added the SignalerSync, which doesn't launch any threads; all operations, even the callbacks, run in the same thread. This is a problem if one of your callbacks blocks the thread, because in a gtk app that will freeze your interface, so any callback used with the sync signaler should be non-blocking.

This signaler is used in a different way: we should call the signal_loop_sync method from time to time, which checks for new signals and triggers any subscribed callbacks. This signaler doesn't have a signal_loop because we have to do the loop in our own thread.

This is an example of how to run the signaler loop inside a gtk app:

let sig = SignalerSync::new();
let sig1 = sig.clone();

gtk::timeout_add(50, move || {
    gtk::Continue(sig1.signal_loop_sync())
});

// We can subscribe callbacks using the sig here

In this example code we register a timeout callback: every 50ms this closure is called from the gtk main thread, and signal_loop_sync checks for signals and calls the needed callbacks.

This method returns a bool that becomes false when the signaler stops. You can stop the signaler by calling the stop method.

Point of extension

I've tried to make this crate generic so it can be extended in the future to provide other kinds of caches that can be used with little code change in the apps that use mdl.

This is the main reason to use traits to implement the store. The first point of extension is to add more cache systems; there are currently two, LMDB and BTreeMap, but it would be easy to add more key-value storages, like memcached, UnQLite, MongoDB, Redis, CouchDB, etc.

The signaler is really simple, so maybe we can start to think about new signalers that use Futures and other kinds of callback registration.

As I said before, mdl makes a copy of the data on every write and every read, so it would be interesting to explore the performance implications of these copies and to find ways to reduce this overhead.

Nick Richards: Pinpoint Flatpak

Fri, 03/08/2018 - 11:53 AM

A while back I made a Pinpoint COPR repo in order to get access to this marvelous tool in Fedora. Well, now I work for Endless and the only way you can run apps on our system is in a Flatpak container. So I whipped up a quick Pinpoint Flatpak in order to give a talk at GUADEC this year.

Flatpak is actually very helpful here, since the libraries required are rapidly becoming antique, and carrying them around on your base system is gross as well as somewhat insecure. There isn’t a GUI to create or open files, and it’s somewhat awkward to use if you’re not already an expert, so I didn’t submit the app to Flathub; however, you can easily download and install the bundle locally. I hope the two people for whom this is useful find it as useful as I did to make.

Nick Richards: Pinpoint COPR Repo

Fri, 03/08/2018 - 11:53 AM

A few years ago I worked with a number of my former colleagues to create Pinpoint, a quick hack that made it easier for us to give presentations that didn’t suck. Now that I’m at Collabora I have a couple of presentations to make and using pinpoint was a natural choice. I’ve been updating our internal templates to use our shiny new brand and wanted to use some newer features that weren’t available in Fedora’s version of pinpoint.

There hasn’t been an official release for a little while and a few useful patches have built up on the master branch. I’ve packaged a git snapshot and created a COPR repo for Fedora so you can use these snapshots yourself. They’re good.

Matthew Garrett: Porting Coreboot to the 51NB X210

Fri, 03/08/2018 - 3:35 AM
The X210 is a strange machine. A set of Chinese enthusiasts developed a series of motherboards that slot into old Thinkpad chassis, providing significantly more up to date hardware. The X210 has a Kabylake CPU, supports up to 32GB of RAM, has an NVMe-capable M.2 slot and has eDP support - and it fits into an X200 or X201 chassis, which means it also comes with a classic Thinkpad keyboard. We ordered some from a Facebook page (a process that involved wiring a large chunk of money to a Chinese bank which wasn't at all stressful), and a couple of weeks later they arrived. Once I'd put mine together I had a quad-core i7-8550U with 16GB of RAM, a 512GB NVMe drive and a 1920x1200 display. I'd transplanted over the drive from my XPS13, so I was running stock Fedora for most of this development process.

The other fun thing about it is that none of the firmware flashing protection is enabled, including Intel Boot Guard. This means running a custom firmware image is possible, and what would a ridiculous custom Thinkpad be without ridiculous custom firmware? A shadow of its potential, that's what. So, I read the Coreboot[1] motherboard porting guide and set to.

My life was made a great deal easier by the existence of a port for the Purism Librem 13v2. This is a Skylake system, and Skylake and Kabylake are very similar platforms. So, the first job was to just copy that into a new directory and start from there. The first step was to update the Inteltool utility so it understood the chipset - this commit shows what was necessary there. It's mostly just adding new PCI IDs, but it also needed some adjustment to account for the GPIO allocation being different on mobile parts when compared to desktop ones. One thing that bit me - Inteltool relies on being able to mmap() arbitrary bits of physical address space, and the kernel doesn't allow that if CONFIG_STRICT_DEVMEM is enabled. I had to disable that first.

The GPIO pins got dropped into gpio.h. I ended up just pushing the raw values into there rather than parsing them back into more semantically meaningful definitions, partly because I don't understand what these things do that well and largely because I'm lazy. Once that was done, on to the next step.

High Definition Audio devices (or HDA) have a standard interface, but the codecs attached to the HDA device vary - both in terms of their own configuration, and in terms of dealing with how the board designer may have laid things out. Thankfully the existing configuration could be copied from /sys/class/sound/card0/hwC0D0/init_pin_configs[2] and then hda_verb.h could be updated.

One more piece of hardware-specific configuration is the Video BIOS Table, or VBT. This contains information used by the graphics drivers (firmware or OS-level) to configure the display correctly, and again is somewhat system-specific. This can be grabbed from /sys/kernel/debug/dri/0/i915_vbt.

A lot of the remaining platform-specific configuration has been split out into board-specific config files, and this also needed updating. Most stuff was the same, but I confirmed the GPE and genx_dec register values by using Inteltool to dump them from the vendor system and copy them over. lspci -t gave me the bus topology and told me which PCIe root ports were in use, and lsusb -t gave me port numbers for USB. That let me update the root port and USB tables.

The final code update required was to tell the OS how to communicate with the embedded controller. Various ACPI functions are actually handled by this autonomous device, but it's still necessary for the OS to know how to obtain information from it. This involves writing some ACPI code, but that's largely a matter of cutting and pasting from the vendor firmware - the EC layout depends on the EC firmware rather than the system firmware, and we weren't planning on changing the EC firmware in any way. Using ifdtool told me that the vendor firmware image wasn't using the EC region of the flash, so my assumption was that the EC had its own firmware stored somewhere else. I was ready to flash.

The first attempt involved isis' machine, using their Beaglebone Black as a flashing device - the lack of protection in the firmware meant we ought to be able to get away with using flashrom directly on the host SPI controller, but using an external flasher meant we stood a better chance of being able to recover if something went wrong. We flashed, plugged in the power and… nothing. Literally. The power LED didn't turn on. The machine was very, very dead.

Things like managing battery charging and status indicators are up to the EC, and the complete absence of anything going on here meant that the EC wasn't running. The most likely reason for that was that the system flash did contain the EC's firmware even though the descriptor said it didn't, and now the system was very unhappy. Worse, the flash wouldn't speak to us any more - the power supply from the Beaglebone to the flash chip was sufficient to power up the EC, and the EC was then holding onto the SPI bus desperately trying to read its firmware. Bother. This was made rather more embarrassing because isis had explicitly raised concern about flashing an image that didn't contain any EC firmware, and now I'd killed their laptop.

After some digging I was able to find EC firmware for a related 51NB system, and looking at that gave me a bunch of strings that seemed reasonably identifiable. Looking at the original vendor ROM showed very similar code located at offset 0x00200000 into the image, so I added a small tool to inject the EC firmware (basing it on an existing tool that does something similar for the EC in some HP laptops). I now had an image that I was reasonably confident would get further, but we couldn't flash it. Next step seemed like it was going to involve desoldering the flash from the board, which is a colossal pain. Time to sleep on the problem.

The next morning we were able to borrow a Dediprog SPI flasher. These are much faster than doing SPI over GPIO lines, and also support running the flash at different voltages. At 3.5V the behaviour was the same as we'd seen the previous night - nothing. According to the datasheet, the flash required at least 2.7V to run, but flashrom listed 1.8V as the next lower voltage so we tried. And, amazingly, it worked - not reliably, but sufficiently. Our hypothesis is that the chip is marginally able to run at that voltage, but that the EC isn't - we were no longer powering the EC up, so we could communicate with the flash. After a couple of attempts we were able to write enough that we had EC firmware on there, at which point we could shift back to flashing at 3.5V because the EC was leaving the flash alone.

So, we flashed again. And, amazingly, we ended up staring at a UEFI shell prompt[3]. USB wasn't working, and nor was the onboard keyboard, but we had graphics and were executing actual firmware code. I was able to get USB working fairly quickly - it turns out that Linux numbers USB ports from 1 and the FSP numbers them from 0, and fixing that up gave us working USB. We were able to boot Linux! Except there were a whole bunch of errors complaining about EC timeouts, and also we only had half the RAM we should.

After some discussion on the Coreboot IRC channel, we figured out the RAM issue - the Librem13 only has one DIMM slot. The FSP expects to be given a set of i2c addresses to probe, one for each DIMM socket. It is then able to read back the DIMM configuration and configure the memory controller appropriately. Running i2cdetect against the system SMBus gave us a range of devices, including one at 0x50 and one at 0x52. The detected DIMM was at 0x50, which made 0x52 seem like a reasonable bet - and grepping the tree showed that several other systems used 0x52 as the address for their second socket. Adding that to the list of addresses and passing it to the FSP gave us all our RAM.

So, now we just had to deal with the EC. One thing we noticed was that if we flashed the vendor firmware, ran it, flashed Coreboot and then rebooted without cutting the power, the EC worked. This strongly suggested that there was some setup code happening in the vendor firmware that configured the EC appropriately, and if we duplicated that it would probably work. Unfortunately, figuring out what that code was was difficult. I ended up dumping the PCI device configuration for the vendor firmware and for Coreboot in case that would give us any clues, but the only thing that seemed relevant at all was that the LPC controller was configured to pass io ports 0x4e and 0x4f to the LPC bus with the vendor firmware, but not with Coreboot. Unfortunately the EC was supposed to be listening on 0x62 and 0x66, so this wasn't the problem.

I ended up solving this by using UEFITool to extract all the code from the vendor firmware, and then disassembled every object and grepped them for port io. x86 systems have two separate io buses - memory and port IO. Port IO is well suited to simple devices that don't need a lot of bandwidth, and the EC is definitely one of these - there's no way to talk to it other than using port IO, so any configuration was almost certainly happening that way. I found a whole bunch of stuff that touched the EC, but was clearly depending on it already having been enabled. I found a wide range of cases where port IO was being used for early PCI configuration. And, finally, I found some code that reconfigured the LPC bridge to route 0x4e and 0x4f to the LPC bus (explaining the configuration change I'd seen earlier), and then wrote a bunch of values to those addresses. I mimicked those, and suddenly the EC started responding.

It turns out that the writes that made this work weren't terribly magic. PCs used to have a SuperIO chip that provided most of the legacy port functionality, including the floppy drive controller and parallel and serial ports. Individual components (called logical devices, or LDNs) could be enabled and disabled using a sequence of writes that was fairly consistent between vendors. Someone on the Coreboot IRC channel recognised that the writes that enabled the EC were simply using that protocol to enable a series of LDNs, which apparently correspond to things like "Working EC" and "Working keyboard". And with that, we were done.

Coreboot doesn't currently have ACPI support for the latest Intel graphics chipsets, so right now my image doesn't have working backlight control. Backlight control also turned out to be interesting. Most modern Intel systems handle the backlight via registers in the GPU, but the X210 uses the embedded controller (possibly because it supports both LVDS and eDP panels). This means that adding a simple display stub is sufficient - all we have to do on a backlight set request is store the value in the EC, and it does the rest.

Other than that, everything seems to work (although there's probably a bunch of power management optimisation to do). I started this process knowing almost nothing about Coreboot, but thanks to the help of people on IRC I was able to get things working in about two days of work[4] and now have firmware that's about as custom as my laptop.

[1] Why not Libreboot? Because modern Intel SoCs haven't had their memory initialisation code reverse engineered, so the only way to boot them is to use the proprietary Intel Firmware Support Package.
[2] Card 0, device 0
[3] After a few false starts - it turns out that the initial memory training can take a surprisingly long time, and we kept giving up before that had happened
[4] Spread over 5 or so days of real time


Matthias Clasen: On Flatpak updates

Thu, 02/08/2018 - 7:21 PM

Maybe you remember times when updating your system was risky business – your web browser might crash or start to behave funny because the update pulled data files or fonts out from underneath the running process, leading to fireworks or, more likely, crashes.

Flatpak updates, on the other hand, are 100% safe. You can call

flatpak update

and the running instances are not affected in any way. Flatpak keeps existing deployments around until the last user is gone. If you quit the application and restart it, you will get the updated version, though.

This is very nice, and works just fine. But maybe we can do even better?

Improving the system

It would be great if the system was aware of the running instances, and offered to restart them to take advantage of the new version that is now available. There is a good chance that GNOME Software will gain this feature before too long.

But for now, it does not have it.

Do it yourself

Many apps, in particular those that are not native to the Linux distro world, expect to update themselves, and we have had requests to enable this functionality in flatpak. We do think that updating software is a system responsibility that should be controlled by global policies and be under the user’s control, so we haven’t quite followed the request.

But Flatpak 1.0 does have an API that is useful in this context, the “Flatpak portal”. It has a Spawn method that allows applications to launch a process in a new sandbox.

Spawn (IN ay cwd_path, IN aay argv, IN a{uh} fds, IN a{ss} envs, IN u flags, IN a{sv} options, OUT u pid)

There are several use cases for this, from sandboxing thumbnailers (which create thumbnails for possibly untrusted content files) to sandboxing web browser tabs individually. The use case we are interested in here is restarting the latest version of the app itself.

One complication that I’ve run into when trying this out is the “unique application” pattern that is built into GApplication and similar application classes: since there is already an owner for the application ID on the session bus, my newly spawned version will just back off and exit, which is clearly not what I intended in this case.

Make it stop

The workaround I came up with is not very pretty, but functional. It requires several parts.

First, I need a “quit” action exported on the session bus. The newly spawned version will activate this action of the running instance to convince it to go away. Thankfully, my example app already had this action, for the Quit item in the app menu.

I don’t want this to happen unconditionally, but only if I am spawning a new version. To achieve this, I made my app only activate “quit” if the --replace option is present, and add that option to the commandline that I pass to the “Spawn” call.

The code for this part is less pretty than it could be, since GApplication gets in the way a bit. I have to manually check for the --replace option and do the “quit” D-Bus call by hand.

Doing the “quit” call synchronously is not quite enough to avoid a race condition between the running instance dropping the bus name and my new instance attempting to take it. Therefore, I explicitly wait for the bus name to become unowned before entering g_application_run().

https://blogs.gnome.org/mclasen/files/2018/08/Screencast-from-08-02-2018-124710-PM.webm

But it all works fine. To test it, I exported a “restart” action and added it to the app menu.

Tell me about it

But who can remember to open the app menu and click “Restart”? That is just too cumbersome. Thankfully, flatpak has a solution for this: when you update an app that is running, it creates a marker file named

/app/.updated

inside the sandbox for each running instance.

That makes it very easy for the app to find out when it has been updated, by just monitoring this file. Once the file appears, it can pop up a dialog that offers the user a restart into the newer version of the app. A good-quality implementation of this will of course save and restore the state when doing this.
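
As a rough illustration of the idea (and not the code from this post), here is a minimal Rust sketch that simply polls for the marker file; a real GTK app would instead watch the file with a file monitor on its main loop and show the restart dialog rather than print:

use std::path::Path;
use std::{thread, time};

fn main() {
    // flatpak creates this marker inside the sandbox of each running
    // instance once the app has been updated.
    let marker = Path::new("/app/.updated");
    while !marker.exists() {
        thread::sleep(time::Duration::from_secs(5));
    }
    println!("A newer version is installed; offer the user a restart.");
}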

https://blogs.gnome.org/mclasen/files/2018/08/Screencast-from-08-02-2018-125142-PM.webm

Voilà, updates made easy!

You can find the working example in the portal-test repository.

Fabián Orccón: How I built pipewire from source code in Linux with systemd

Wed, 01/08/2018 - 8:08 PM

Lately, I have been interested in contributing to Pipewire. One interesting thing about it is that it allows you to use the same video device (for example your webcam) in different applications at the same time.

If you have a webcam, try for example in your terminal:

gst-launch-1.0 v4l2src ! xvimagesink

It will just open a window with the capture from your webcam… don’t close it yet! If you run gst-launch-1.0 v4l2src ! xvimagesink again in another terminal, you will get an error.

And you realize that your webcam can only be used by one application at a time, at least until you read the rest of this post.

As I said, I was interested in contributing to pipewire, so you will see how to build it from source code. But if you just want to use the version already provided by your distro repositories, just skip to the final part.

cd ~
git clone https://github.com/PipeWire/pipewire
cd pipewire

Then you can build pipewire, but before that: at the time I tried it (2018-08-01), I had the problem that pipewire wanted to overwrite my local pipewire installation (the one I had already installed from the official Fedora packages) even when I set a prefix.

This error, as you may guess, is because my user does not have permission to write into /usr/lib/systemd/user. That’s fine. Actually, I am not sure how good or bad it would be to overwrite my system-wide pipewire installation. Luckily, a guy from the #pipewire channel with the nickname jadahl had a patch.

So if you have the same problem, use that patch. To apply it, just do:

git checkout -b systemduserunitdir
curl -O http://cfoch.github.io/assets/posts/2018-08-01-how-i-built-pipewire-from-source/patch/systemduserunitdir.diff
git apply systemduserunitdir.diff

Now we will build pipewire from source… but first, I create a directory where the pipewire files will be installed:

mkdir -p ~/env/pipewire

and I also create the following directory, because the .socket and .service unit files should be installed there instead of /usr/lib/systemd/user:

mkdir ~/.local/share/systemd/user

Now, to build pipewire:

mkdir builddir
cd builddir
meson --prefix=$HOME/env/pipewire ..
meson configure -Dsystemd_user_unit_dir=$HOME/.local/share/systemd/user
ninja
ninja install

If you have all the required dependencies, pipewire will now be installed. In your ~/.bashrc file, you may want to add the following lines:

export PIPEWIRE_PREFIX=$HOME/env/pipewire
export PATH=$PIPEWIRE_PREFIX/bin:$PATH
export LD_LIBRARY_PATH=$PIPEWIRE_PREFIX/lib64:$LD_LIBRARY_PATH
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:$PIPEWIRE_PREFIX/lib64/pkgconfig
export GST_PLUGIN_PATH_1_0=$PIPEWIRE_PREFIX/lib64/gstreamer-1.0/:$GST_PLUGIN_PATH_1_0

In my case, I didn’t add these lines to my .bashrc, because I have a script called env.sh in ~/env/pipewire/env.sh, so whenever I want to build an application that uses pipewire from master I just run source ~/env/pipewire/env.sh to enter my environment. You can download my script from here, but I warn you that it may contain things you don’t need!

Now you have pipewire installed from master. You have to start the daemon, but you don’t want to start the daemon from the pipewire that is installed from your Linux distribution’s repositories.

The solution is to tell systemd to start the daemon for the current user session:

systemctl --user start pipewire

However, when I did that, I got an error.

I checked my log with journalctl --user -u pipewire and the problem was an “error while loading shared libraries: libpipewire-0.2.so”.

For some reason, systemd wasn’t recognizing my environment. Looking at its manual page, I was really happy when I found an option to set an environment variable. Before executing the following command, make sure you don’t already have LD_LIBRARY_PATH set in systemd; you can check that with systemctl --user show-environment | grep LD_LIBRARY_PATH. If the output is empty, run this command:

systemctl --user set-environment LD_LIBRARY_PATH=$HOME/env/pipewire/lib64

Now you can start the daemon:

systemctl --user start pipewire

Also, you will note that the output says that pipewire is loaded from the systemd user unit directory, which in my case is /home/cfoch/.local/share/systemd/user/pipewire.service. That is exactly what I want.

To test your webcam being shared by multiple applications you can try the following command in two or more terminals:

gst-launch-1.0 pipewiresrc ! videoconvert ! xvimagesink

You can also try one of the pipewire examples:

cd ~/pipewire/builddir/src/examples/
./video-play

Well… that’s it!

Christian Schaller: Supporting developers on Patreon (and similar)

Wed, 01/08/2018 - 4:35 PM

For some time now I have been supporting two Linux developers on Patreon: Ryan Gordon, of Linux game porting and SDL development fame, and Tanu Kaskinen, who is a lead developer on PulseAudio these days.

One of the things I often think about is how we can enable more people to make a living from working on the Linux desktop and related technologies. If you’re reading my blog, there is a good chance that you are already enabling people to make a living working on the Linux desktop by paying for RHEL Workstation subscriptions through your work. So a big thank you for that. The fact that Red Hat has paying customers for our desktop products is critical to our ability to do so much of the maintenance and development work we do around the Linux desktop and the Linux graphics stack.

That said, I do feel we need more venues than just employment by companies such as Red Hat, and this is where I would love to see more people supporting their favourite projects and developers through, for instance, Patreon. Unlike one-off funding campaigns, repeat crowdfunding like Patreon can give developers a predictable income, which means they don’t have to worry about how to pay their rent or how to feed their kids.

So of the two Patreons I support, Ryan is probably the closest to being able to rely on it for his livelihood, but of course more Patreon supporters would enable Ryan to be even less reliant on payments from game makers. Tanu’s Patreon income at the moment is helping him spend quite a bit of time on PulseAudio, but it is definitely not providing him with a living income. So if you are reading this, I strongly recommend that you support Ryan Gordon and Tanu Kaskinen on Patreon. You don’t need to pledge a lot; I think it is in fact better to have many people pledging 10 dollars a month than a few pledging hundreds, because the impact of one person coming or going is then a lot smaller. And of course this is not just limited to Ryan and Tanu: search around and see if any projects or developers you personally care deeply about are using crowdfunding, and support them, because if more of us did so, more people would be able to make a living developing our favourite open source software.

Update: It seems I wasn’t the only one thinking about this. Flatpak announced today that application devs can put their crowdfunding information into their flatpaks and it will be advertised in GNOME Software.

Jonathan Kang: GUADEC 2018

Wed, 01/08/2018 - 11:22 AM

It’s been a few weeks after I got back from GUADEC 2018, which was hosted in Almeria, Spain. And I finally manage to find some time to write this blog.

First Impression

It’s my first time in Spain. And my first impression is that it’s hot here. I arrived at Malaga Airport around 13:30. The moment I stepped out of the airport, I thought: WOW, this is as hot as Beijing. But it turns out Almeria is a lot better. Cheers!

After a roughly 20-hour flight (with a 4-hour layover at CDG), I felt very tired, and another 3.5-hour bus ride was waiting for me. Unexpectedly, the scenery along the coastline kept me awake the whole time. It IS very beautiful, and it’s definitely worthwhile to fly to Malaga and take the bus to Almeria.

Talks

Talks I enjoyed are:

  • GTK4 Lightning Talks – It’s good to know how things are going in GTK+4, and I’m looking forward to using it.
  • GNOME Foundation: Looking into the Future – GNOME is expanding! Exciting things are ahead!
  • Migrating from JHBuild to BuildStream – I’ve been using JHBuild since I started contributing to GNOME (in 2014). It’s done its job well, but sometimes it’s painful to use as well. I tried BuildStream a few months ago; it was good, except that applications built using it don’t have access to your local filesystem (they are sandboxed, in other words). That makes Logs basically useless, so I’m still using the old-fashioned JHBuild.
  • Intern and Newcomer Lightning Talks – It’s always good to see new contributors at GUADEC. I hope they keep contributing to GNOME after GSoC.

BOF

I participated in the GitLab CI/CD BoF and the Settings BoF. The Settings BoF was really productive: we had lots of discussions and came up with some TODOs. I’ll start working on cleaning up the code in the network panel to separate the UI code from the backend code interacting with NM.

Social Events

Various social events make GUADEC my favourite conference. The castle tour and the flamenco show were my top two picks. Emm, wait, the beach party made it into the top three as well. I enjoyed it a lot, although I can’t swim. It definitely encourages me to learn to swim.

Here are some photos

Finally, I’d like to thank the GNOME Foundation for sponsoring my trip and my employer SUSE for sponsoring my time.

Peter Hutterer: A Fedora COPR for libinput git master

Wed, 01/08/2018 - 3:07 AM

To make testing libinput git master easier, I set up a whot/libinput-git Fedora COPR yesterday. This repo gets the push triggers directly from GitLab so it will rebuild with whatever is currently on git master.

To use the COPR, simply run:


sudo dnf copr enable whot/libinput-git
sudo dnf upgrade libinput

This will give you the libinput package from git. It'll have a date/time/git sha based NVR, e.g. libinput-1.11.901-201807310551git22faa97.fc28.x86_64. Easy to spot at least.

To revert back to the regular Fedora package run:


sudo dnf copr disable whot/libinput-git
sudo dnf distro-sync "libinput-*"

Disclaimer: This is an automated build so not every package is tested. I'm running git master exclusively (from a ninja install) and I don't push to master unless the test suite succeeds. So the risk of ending up with a broken system is low.

On that note: if you are maintaining a similar repo for other distributions and would like me to add a push trigger in GitLab for automatic rebuilds, let me know.

Daniel Espinosa: GNOME Data Access 6.0

Wed, 01/08/2018 - 12:22 AM

In master there is a set of fixes for the GDA library and its GTK+ widgets, its Control Center for data source management, and its powerful GDA Browser.

The next major release, 6.0, breaks API/ABI compatibility with older releases in order to improve GObject Introspection bindings, including the Vala ones.

A step forward towards using the Meson build system has been taken too. Indeed, that work helps to speed up development.

Please try out the GDA graphical user interfaces before the final release; that will expose more bugs and issues.

Screenshots

GDA Browser Initial Connection:

GDA Browser adding values to a database table:

GDA Browser Data Manager using a table as source:

GDA Browser running SQL queries; this includes selects, creating databases, creating tables and views, inserting data, and anything else supported by the provider.

The GDA Control Center allows you to pre-configure connections to a database server and save all required connection parameters, with an easy-to-use UI:

Use in your application

All the screenshots above use a set of predefined GTK+ 3.0 widgets, so you can embed any of them in your own application, while GDA Browser, though powerful, is just for demonstration.

Jim Hall: The next step in open data is open source

Wed, 01/08/2018 - 12:08 AM
Governments at all levels are moving to embrace open data, where governments share public data proactively with citizens. Open data can be used, reused, mixed, and shared by anyone.

For example, the US Government has an open data portal that publishes data on various topics, including agriculture, education, energy, finance, and other public data sets. Where I work (Ramsey County, Minn.), we established an open data portal that shares expenses and other public data about the county that users can access in different views.

Through open data, governments become more transparent. We have seen this in several instances. The Oakland Police Department used a 2016 open data study from Stanford University to address racial bias in how officers behave towards African Americans versus Caucasians during routine traffic stops. In 2017, Steve Ballmer launched the USAFacts website that uses open data to reveal how governments spend tax dollars to benefit citizens. Also from 2017, Los Angeles, California’s comprehensive “Clean Streets LA” initiative uses open data to assess and improve the cleanliness of public streets.

Governments at all levels have recognized that open data feeds citizen engagement. By sharing data in a way that encourages citizens to remix and transform open data to provide new insights, governments and citizens move closer together. According to the Open Data Barometer, many municipalities already provide open data for geographic information, transportation, trade, health, and education, with a mix of other open data sets. Those governments that do not yet provide an open data portal are likely working to provide one.

What is the next step beyond open data? After sharing data, what is the next evolution for governments to engage with citizens?

I believe that next step is open source. Where we provide government data sets for anyone to view, adapt, and remix, we need to offer government source code for others to view, adapt, and remix.

While there is a balance to be made in moving to government open source, the default should be to share as much source code as possible. Just as governments found a balance in providing open data, government open source must consider what software can and cannot be shared as open source software. In the same way that some data needs to remain private because it identifies individuals or because it contains certain nonpublic data, some government source code may need to remain “closed source.”

In adopting government open source, we should follow the open data model. The default in government open data is to share as much data as possible, to release public data for public consumption. That should be the same with government open source. In cases where government application development teams write custom software, we should make as much of our source code available to the public as possible.

Some government agencies are already moving to an open source model, and that is good. In August 2016, US Chief Information Officer Tony Scott and US Chief Acquisitions Officer Anne Rung issued instructions for federal departments and agencies “to release at least 20 percent of new custom-developed code as Open Source Software (OSS) for three years.” In support of this directive, the US Government established an open source portal at Code.gov to share government source code under the Creative Commons Zero (CC0) and other open source software licenses. Via the open source portal, users can download open source projects, toolkits, installer profiles, online forms, JavaScript widgets, and other code samples.

The challenge we face in moving to government open source is not technical, but cultural. Many governments have relied on proprietary or “closed source” software for decades. Through the lens of these government IT departments, all software is proprietary. This view often extends to software that is custom-developed by municipalities.

It will take a culture shift for governments to release their source code for public access. But governments faced that same culture shift in moving to open data, and we overcame that cultural inertia. We can do the same with open source.

The benefits to adopting a government open source model are many. Like open data, government open source will provide additional transparency to citizens. Users will be free to investigate the source code, and re-use it for other purposes. Motivated citizen developers may modify the source code to fix bugs or add new features, and contribute those improvements back to the government. This last example is the ideal model, providing a feedback loop of engagement where the government partners with its citizens to improve services.

I believe the next iteration from open data is open source. I encourage government Chief Information Officers at all levels to investigate how software created by government application development teams can be made available to outside users. Use the US open source portal as a model to set goals and measure progress. Finally, establish relationships with partners most likely to engage in government open source, including local universities and businesses.

Through open data, governments became more transparent to citizens. With government open source, Chief Information Officers have an opportunity to lead the next evolution in citizen engagement. Through open source, we can take government transparency to the next level.

Jiri Eischmann: Story of GNOME Shell Extensions

Tue, 31/07/2018 - 4:07 PM

A long time ago (exactly 10 years ago) it was decided that the shell for GNOME would be written in JavaScript. GNOME 3 was still looking for its new face, a lot of UI experimentation was taking place, and JavaScript looked like the best candidate for it. Moreover it was a popular language on the web, so barriers to entry for new contributors would be significantly lowered.

When you have the shell written in JavaScript you can very easily patch it and alter its look and behaviour. And that’s what people started doing. Upstream was not very keen to officially support extensions due to their nature: they’re just hot patching the GNOME Shell code. They have virtually unlimited possibilities in changing look and behaviour, but also in introducing instability.

But tweaking the shell became really popular. Why wouldn’t it? You can tweak your desktop by simply clicking buttons in your browser. No recompilations, no restarts. So extensions.gnome.org was introduced.

The number of available extensions grew to hundreds, and the instability some of them occasionally introduced seemed like a fair price for the unlimited tweakability. In the end, when the Shell crashed it was just a blink: Xorg held up the session with its open clients, the Shell/Mutter was restarted, and the show could go on.

In 2016 GNOME switched to Wayland by default. No Xorg, and also nothing to hold up the session with open clients when the Shell crashes. There is only Mutter as a Wayland compositor, but unfortunately it runs in the same process as GNOME Shell (a decision also made 10 years ago, when it too looked like a good idea). If the Shell goes down, so does Mutter. Suddenly harmless blinks became desktop crashes that lose all unsaved data in open applications.

I read user feedback and problems users are having with Fedora Workstation (and the Linux desktop in general) a lot on the Internet. And desktop crashes caused by GS extensions are by far the most frequent problem I’ve seen recently. I read stories like “I upgraded my Fedora to 28 and suddenly my desktop crashes 5 times a day. I can’t take it any more and I’m out of ideas” on a daily basis. If someone doesn’t step in and say: “Hey, do you have any GS extensions installed? If so, disable them and see if it keeps crashing. The extensions are not harmless; any error in them or incompatibility between them and the current version of GS can take the whole desktop down”, users usually leave with the impression of an unstable Linux desktop. It hurts our reputation really badly.

Are there any ways to fix or at least improve the situation? Certainly:

  1. Extensions used to be disabled when the Shell crashed hard (couldn’t be restarted). Since on Wayland this is the result of every crash, we should do that after every GS crash. And when the user goes back to GNOME Tweak Tool to enable the extensions again, she/he should be told that it was most likely one of the 3rd party extensions that made the desktop crash, and she/he should be careful when enabling them.
  2. Decoupling GNOME Shell and Mutter or/and other steps that would bring back the same behaviour like on Xorg: GS crash would not take everything down. This would require major changes in the architecture and a lot of work and GNOME Shell and Mutter developer community has already a lot on their plates.
  3. Discontinuing the unlimited extensions, introducing a limited API they can use instead of hot patching the GS code itself. This would be a very unpopular step because it’d mean that many of the existing extensions would be impossible to implement again. But it may become inevitable in the future.

Christian Hergert: Using Leak Sanitizer with JHBuild

Tue, 31/07/2018 - 5:52 AM

For a subset of GNOME modules, I’m still using jhbuild. I also spend a great deal of time tracking down memory bugs in various libraries. So it is very handy to have libasan.so working with meson -Db_sanitize=address.

To make things work, you currently need to:

  • dnf install libasan liblsan (or similar for your OS).
  • Use meson from git (0.48 development), for this bug fix.
  • Configure your meson projects with -Db_sanitize=address.
  • Create a suppression file for leaks out of our control.
  • Set some environment variables in ~/.config/jhbuildrc.

Here is an example of what I put in ~/.config/lsan_suppressions.txt.

leak:FcCharSetCreate
leak:FcLangSetCreate
leak:__nptl_deallocate_tsd
leak:_g_info_new_full
leak:dconf_engine_watch_fast
leak:g_get_language_names_with_category
leak:g_intern_string
leak:g_io_module_new
leak:g_quark_init
leak:libfontconfig.so.1

And add this to ~/.config/jhbuildrc.

import os

os.environ['LSAN_OPTIONS'] = 'suppressions=' + \
    os.path.expanduser('~/.config/lsan_suppressions.txt')

This has helped me track down a number of bugs in various modules this week and it would be useful if other people were doing it too.