Feed aggregator

next-20180807: linux-next

Linux Kernel - Tue, 07/08/2018 - 1:33pm
Version:next-20180807 (linux-next) Released:2018-08-07

SingHealth Attack Potentially State-Linked

LinuxSecurity.com - Tue, 07/08/2018 - 11:39am
LinuxSecurity.com: Last month's cyber-attack on SingHealth, which resulted in the breach of 1.5 million health records, might have been the work of an advanced persistent threat group, according to information disclosed by S. Iswaran, Singapore's minister for communications and information in Parliament today.

Linux kernel bug: TCP flaw lets remote attackers stall devices with tiny DoS attack

LinuxSecurity.com - Tue, 07/08/2018 - 11:34am
LinuxSecurity.com: Security researchers are warning Linux system users of a bug in the Linux kernel version 4.9 and up that could be used to hit systems with a denial-of-service attack on networking kit.

Lubuntu Blog: This Week in Lubuntu Development #8

Planet Ubuntu - Tue, 07/08/2018 - 12:09am
Here is the eighth issue of This Week in Lubuntu Development. You can read the last issue here. Translated into: español Changes General Lubuntu 18.04.1 has been released! Lubuntu 16.04.5 has been released! We’re taking a new direction. The past couple of weeks have been focused on more desktop polish and some heavy infrastructure and […]

4.4.146: longterm

Linux Kernel - Mon, 06/08/2018 - 4:24pm
Version:4.4.146 (longterm) Released:2018-08-06 Source:linux-4.4.146.tar.xz PGP Signature:linux-4.4.146.tar.sign Patch:full (incremental) ChangeLog:ChangeLog-4.4.146

4.9.118: longterm

Linux Kernel - Mon, 06/08/2018 - 4:23pm
Version:4.9.118 (longterm) Released:2018-08-06 Source:linux-4.9.118.tar.xz PGP Signature:linux-4.9.118.tar.sign Patch:full (incremental) ChangeLog:ChangeLog-4.9.118

4.14.61: longterm

Linux Kernel - Mon, 06/08/2018 - 4:20pm
Version:4.14.61 (longterm) Released:2018-08-06 Source:linux-4.14.61.tar.xz PGP Signature:linux-4.14.61.tar.sign Patch:full (incremental) ChangeLog:ChangeLog-4.14.61

4.17.13: stable

Linux Kernel - Mon, 06/08/2018 - 4:19pm
Version:4.17.13 (stable) Released:2018-08-06 Source:linux-4.17.13.tar.xz PGP Signature:linux-4.17.13.tar.sign Patch:full (incremental) ChangeLog:ChangeLog-4.17.13

Valorie Zimmerman: In my heart

Planet Ubuntu - Mon, 06/08/2018 - 11:57am
Last night we were living outside as usual. It had cooled a bit and a stiff cool breeze began blowing, so we moved inside for the first time in a week. We had a wonderful discussion about the state of the world (worrying) and what we might do about it beyond working for freedom in our KDE work. I think I'm not alone in being concerned about visiting Austria since politics there turned "populist". Since I'm living in a country where the same is true at least on the Federal level, that might seem hypocritical. Perhaps it is, but I'm not the only one working to expand the scope of people we welcome, rather than the reverse. I believe the most fortunate--including me--should pay the highest taxes, to provide public goods to all: excellent schools, medical and social care, fine public transport, free libraries, and free software.

We can only do that last bit well with a healthy KDE community. This means uniting around our goals, contributing to the community along with the software: by creating good documentation, helping promote news, contributing timely information for release announcements, joining a working group or the e.V. itself, and, most important, living up to our Code of Conduct. Our Code of Conduct is one of the best and most positive in free software, and is a key reason I came to KDE and stayed to contribute. It is of little value, however, unless we occasionally re-read it, resolve to personally hold ourselves to a high standard of conduct, and dare to step up to help resolve situations where it requires courage to do so. This is an important bit:
If you witness others being attacked, think first about how you can offer them personal support. If you feel that the situation is beyond your ability to help individually, go privately to the victim and ask if some form of official intervention is needed. Similarly you should support anyone who appears to be in danger of burning out, either through work-related stress or personal problems.

It is sometimes very difficult and discouraging to confront distressing situations, when those whom you respect and even love deeply disappoint. However if we are to grow and thrive as a family, and we are a huge family, this must be done.

I've recently stolen from Boud and Irina's huge library In Search of the Indo-Europeans: Language, Archaeology and Myth by J.P. Mallory. A bit old, but a lovely survey of Eurasia up to historical times. Just this morning with my breakfast I read:
In what did the Proto-Indo-Europeans believe, or, to use their own words, to what did they 'put in their hearts'? This archaic expression is still preserved in a roundabout way in English where the Latin verb credo 'I believe' has been borrowed to fashion our English creed.

After our talk last night, this passage prompted me to write today.


More photos from Deventer:
Flower cheese!
Sage, parsley
Sunset
IPA even in Deventer!

Chipmaker TSMC Hit by Virus Outbreak

LinuxSecurity.com - Mon, 06/08/2018 - 11:31am
LinuxSecurity.com: Taiwanese semiconductor firm TSMC has revealed that a malware outbreak which affected its IT systems last week could result in a 3% hit to revenue.

Privacy International Takes Police Phone 'Hacking' Case to IPC

LinuxSecurity.com - Mon, 06/08/2018 - 11:27am
LinuxSecurity.com: Privacy International has written to the investigatory powers commissioner (IPC) requesting an urgent review into potentially unlawful use by the UK police of mobile phone extraction (MPE) technology.

4.18-rc8: mainline

Linux Kernel - Sun, 05/08/2018 - 9:37pm
Version:4.18-rc8 (mainline) Released:2018-08-05 Source:linux-4.18-rc8.tar.gz Patch:full (incremental)

Sam Hewitt: Moving Beyond Themes

Planet Ubuntu - Sun, 05/08/2018 - 5:00pm

FreeDesktop platforms have come a long way in terms of usability and as we strive to make them better platforms for application developers, I think it’s time to shed one more shackle that slows that down: themes.

Now, coming from me that view may be a surprise (because of all those themes that I call personal projects) but I do feel it’s necessary mainly because the level of visual customisation that is being done at the distribution level has led to widespread visual fragmentation which impacts both user- and developer-friendliness.

Letting the Past Go

Themes used to be sets of presets or configuration files that would only tweak details of the user interface, such as the window borders or how buttons and scrollbars looked, while the overall layout and function stayed the same.

But user interfaces of the past were much simpler, there were fewer window states, fewer points of interaction, less visual feedback, and just plain fewer pixels. These limitations in old toolkits meant that they largely stayed the same from theme to theme and things were relatively stable.

Fast-forward to today, where modern toolkits like GTK+ 3 have more complex visuals and detailed interactions. This means that without the same level of quality control that you find at the toolkit level, maintaining a separate theme is a very fiddly and potentially buggy prospect. Not to mention that getting all the details right matters for both usability and accessibility.

“Look and Feel” as a Toolkit Component

It’s unfortunate that “Adwaita” is thought of as a theme when in fact it is a core component of the toolkit, but this is mostly a holdover from how we’re used to thinking about look and feel as it relates to the user interface. Adwaita is as closely tied to GTK+ as Aqua is to the macOS user interface, and as a result it has broad implications for applications built with GTK+.

The reality is that GTK+ 3 has no theme framework (there is no API or documentation for “themes”) and “Adwaita” is simply the name of the stylesheet deeply integrated in GTK+. So when third-party developers build GNOME apps, they rely on this stylesheet when determining the look and feel of their apps and, if necessary, use it as a reference when writing their own custom stylesheets (since it is a core toolkit component).

Today’s themes aren’t themes

GTK+ 3 themes are not themes in the traditional sense. They are not packages of presets designed to work with the user interface toolkit, they are more like custom stylesheets which exist outside of the application-UI framework and only work by essentially overriding the toolkit-level stylesheet (and quite often only the toolkit-level stylesheet).

When GTK+ 3 applications are used under third-party themes, what is being broken is the boundary an application developer has set up to control both the quality of their application and how it looks and feels, and this becomes really problematic when applications have custom CSS.

In order for third party themes to work properly and not cause cascading visual bugs, they have to either become monolithic and start incorporating all the custom stylesheets for all the applications that have them, or work with application developers to include stylesheets in their applications that support their themes. Neither of these solutions is good for platform or application development, since either becomes a task of never-ending maintenance.

Visual Fragmentation

Across the GNOME desktop ecosystem exists “visual fragmentation” and it’s a very real problem for app developers. Since very few distributions ship GNOME as-is, it is hard to determine what the visual identity of GNOME is and therefore it’s difficult to know which visual system to build your application for.

Integrating the stylesheet with the user interface toolkit, in theory, should have solved many issues regarding visual inconsistency across the GNOME platform, but that’s an unsolvable problem so long as themes persist.

The biggest offenders continue to be downstream projects that theme GNOME extensively by overriding the default icons and stylesheet, and insist that that’s part of their own brand identity, but so long as that practice carries on then this fragmentation will continue.

Upstream vs. Downstream Identity

It is extremely rare for a Linux distribution to also be the platform vendor, so it can be said that nearly all distros that ship a desktop platform (like GNOME) are “downstream” vendors.

Platforms like GNOME and KDE exist irrespective of distributions and they have their own visual and brand identities, and own guidelines around the user interface. On the other hand, distribution vendors see a need to have unique identities and some decide to extend that to the look and feel of the desktop and apply themes.

But this practice raises questions about whether it is right or not for distributions to cut out or override the upstream platform vendor’s identity to favour their own. Should distributions that ship GNOME be asked to leave the default look and feel and experience intact? I think yes.

A similar situation exists on Android, where Google is trying to control the look and feel of Android while hardware OEMs all over the place are skinning it for their phones, but the blame for issues gets conflated with issues in Android itself (unless you make a monumental branding effort and effectively erase Android, like Samsung).

Distributions owe a lot to the desktop platforms; as such, I think effort should be made to respect the platform’s intended experience. Not to mention, the same concerns for quality assurance regarding applications also apply to the platform: GNOME developers lose out when they're forced to dedicate time and resources to dealing with bugs created by downstream theming and deviations.

The Future

If ending the wild west of visual customisation (which would probably end all of those projects of mine) on GNOME is necessary to grow the ecosystem, so be it.

I would rather see GNOME evolve as a platform and become a little less developer-hostile by dropping support for third-party themes, than stagnate. Doing so would also bring us in line with how the major (successful) platforms maintain a consistent look and feel and respect app developers’ control over their apps and their rights to their brand identities.

That said, I doubt such a hardline position will be widely or warmly received, but I would like to see a more closed approach to look and feel. Then again, perhaps the best solution would be to actually build some sort of framework for custom stylesheets (so that downstreams can have their unique visual identities) that doesn’t involve totally overriding the one at the toolkit level.

Amnesty International spearphished with government spyware

LinuxSecurity.com - Sun, 05/08/2018 - 11:46am
LinuxSecurity.com: Amnesty International has been spearphished by a WhatsApp message bearing links to what the organization believes to be malicious, powerful spyware: specifically, Pegasus, which has been called History's Most Sophisticated Tracker Program.

DEF CON Invites Kids to 'Hack the Election'

LinuxSecurity.com - Sun, 05/08/2018 - 11:39am
LinuxSecurity.com: DEF CON is kicking its Voting Village hacking event up a notch this year with a contest for kids to try their hand at hacking into replica election-results websites to change vote tallies and election results.

Daniel García Moreno: mdl

Planet GNOME - Fri, 03/08/2018 - 1:22pm

Last month I wrote a blog post about the LMDB cache database and my wish to use it in Fractal. To summarize, LMDB is a memory-mapped key-value database that persists the data to the filesystem. I want to use this in the Fractal desktop application to replace the current state storage system (we're using simple json files), and as a side effect we can use this storage system to share data between threads, because currently we're using a big AppOp struct shared with Arc<Mutex<AppOp>>, and this causes some problems because we need to share, lock and update the state there.

The main goal is to define an app data model with smaller structs and store it using LMDB; then we can access the same data by querying LMDB and update the app state by storing to LMDB.

With this change we don't need to share these structs; we only need to query LMDB to get the data and then work with that, which should simplify our code. The other main benefit is that we'll have this state in the filesystem by default, so when we reopen the app after closing it, we'll stay in the same state.

Take a look at the gtk TODO example app to see how to use mdl with signals in a real gtk app.

What is mdl

mdl is a data model library to share app state between threads and processes and to persist the data in the filesystem. It implements a simple way to store struct instances in an LMDB database, along with other backends such as a BTreeMap.

I started to play with the LMDB rust binding by writing some simple tests. After those tests, I decided to write a simple abstraction to hide the LMDB internals and provide straightforward data storage, and to do that I created the mdl crate.

The idea is to be able to define your app model as simple rust structs. LMDB is a key-value database, so every struct instance will have a unique key under which it is stored in the cache.

The keys are stored in the cache in order, so we can use some techniques to store related objects and to retrieve all objects of a kind; we only need to build keys correctly, following a scheme. For example, for fractal we can store rooms, members and messages like this:

  • rooms with key "room:roomid", to store all the room information, title, topic, icon, unread msgs, etc.
  • members with key "member:roomid:userid", to store all member information.
  • messages with key "msg:roomid:msgid" to store room messages.

Following this key assignment we can iterate over all rooms by querying all objects whose keys start with "room", and we can get all members and all messages of a room.

This has some inconveniences, because we can't query a message directly by id if we don't know the roomid. If we need that kind of query, we need to think about another key assignment, or maybe we should duplicate data. Key-value stores are simple databases, so we don't have the power of relational databases.
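As a small sketch of that scheme (the helper functions and ids here are illustrative, not part of mdl), the keys could be built like this:

fn room_key(room_id: &str) -> String {
    format!("room:{}", room_id)
}

fn member_key(room_id: &str, user_id: &str) -> String {
    format!("member:{}:{}", room_id, user_id)
}

fn msg_key(room_id: &str, msg_id: &str) -> String {
    format!("msg:{}:{}", room_id, msg_id)
}

fn main() {
    // iterating with the "msg:roomid" prefix returns every message in that room,
    // but finding a message by id alone requires knowing the room first
    println!("{}", msg_key("roomid", "msgid"));
}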

Internals

LMDB is fast and efficient because it's memory mapped, so using this cache won't add a lot of overhead. To make it simple to use I've had to add some overhead of my own, so mdl is easy by default and can be tuned to be really fast.

This crate has three main modules with traits to implement:

  • model: This contains the Model trait that should be implemented by every struct that we want to make cacheable.
  • store: This contains the Store trait that's implemented by all the cache systems.
  • signal: This contains the Signaler trait and two structs that allow us to emit/subscribe to "key" signals.

And two more modules that implement the current two cache systems:

  • cache: LMDB cache that implements the Store trait.
  • bcache: BTreeMap cache that implements the Store trait. This is a good example of another cache system that can be used; it doesn't persist to the filesystem.

So we have two main concepts here, the Store and the Model. The model is the plain data and the store is the container of data. We can add models to the store or query the store to get stored models. We store our models as key-value pairs where the key is a String and the value is a Vec<u8>, so every model should be serializable.

This serialization is the biggest overhead added. We need to do it because we need to be able to store the data in the LMDB database. Every request will create a copy of the object in the database, so we're not using the same data. This could be tuned to use pointers to the real data, but to do that we'd need to use unsafe code, and I think the performance we'd gain isn't worth the complexity it would add.

By default, the Model trait has two methods fromb and tob to serialize and deserialize using bincode, so any struct that implements the Model trait and doesn't reimplement these two methods should implement Serialize and Deserialize from serde.

The signal system is an addition that lets us register callbacks for key modifications in the store, so we can do something when a new object is added, modified or deleted. The signaler is optional and we use it in an explicit way.

How to use it

First of all, you should define your data model, the struct that you want to be able to store in the database:

#[derive(Serialize, Deserialize, Debug)]
struct A {
    pub p1: String,
    pub p2: u32,
}

In this example we'll define a struct called A with two attributes: p1, a String, and p2, a u32. We derive Serialize and Deserialize because we're using the default fromb and tob from the Model trait.

Then we need to implement the Model trait:

impl Model for A {
    fn key(&self) -> String {
        format!("{}:{}", self.p1, self.p2)
    }
}

We only reimplement the key method to build a key for every instance of A. In this case our key will be the String followed by the number, so for example if we have something like let a = A { p1: "myk".to_string(), p2: 42 }; the key will be "myk:42".

Then, to use this we need a Store; in this example we'll use the LMDB store, which is the Cache struct:

// initializing the cache. This str will be the fs persistence path
let db = "/tmp/mydb.lmdb";
let cache = Cache::new(db).unwrap();

We pass the filesystem path where we want to persist the cache as the first argument; in this example we'll persist to "/tmp/mydb.lmdb". When we run the program for the first time a directory will be created there. The next time, that cache will be used, with the information from the previous execution.

Then, with this cache object we can instantiate an A object and store it in the cache:

// create a new *object* and storing in the cache
let a = A{ p1: "hello".to_string(), p2: 42 };
let r = a.store(&cache);
assert!(r.is_ok());

The store method will serialize the object and store a copy of that in the cache.

After the store, we can query for this object from another process, using the same lmdb path, or from the same process using the cache:

// querying the cache by key and getting a new *instance*
let a1: A = A::get(&cache, "hello:42").unwrap();
assert_eq!(a1.p1, a.p1);
assert_eq!(a1.p2, a.p2);

We'll get a copy of the original one.

This is the full example:

extern crate mdl;
#[macro_use]
extern crate serde_derive;

use mdl::Cache;
use mdl::Model;
use mdl::Continue;

#[derive(Serialize, Deserialize, Debug)]
struct A {
    pub p1: String,
    pub p2: u32,
}

impl Model for A {
    fn key(&self) -> String {
        format!("{}:{}", self.p1, self.p2)
    }
}

fn main() {
    // initializing the cache. This str will be the fs persistence path
    let db = "/tmp/mydb.lmdb";
    let cache = Cache::new(db).unwrap();

    // create a new *object* and storing in the cache
    let a = A{ p1: "hello".to_string(), p2: 42 };
    let r = a.store(&cache);
    assert!(r.is_ok());

    // querying the cache by key and getting a new *instance*
    let a1: A = A::get(&cache, "hello:42").unwrap();
    assert_eq!(a1.p1, a.p1);
    assert_eq!(a1.p2, a.p2);
}

Iterations

When we store objects with the same key prefix we can iterate over all of them, which is useful because we don't know the full key of every object.

Currently there are two ways to iterate over all objects with the same prefix in a Store:

  • all

This is the simpler way: calling the all method we receive a Vec<T>, so we have all the objects in a vector.

let hellows: Vec<A> = A::all(&cache, "hello").unwrap();
for h in hellows {
    println!("hellow: {}", h.p2);
}

This has a little problem: if we have a lot of objects, this will use a lot of memory for the vector and we'll be iterating over all objects twice. To solve this problem, the iter method was created.

  • iter

The iter method provides a way to call a closure for every object with this prefix in the key. This closure should return a Continue(bool) that indicates whether we should continue iterating or stop the iteration here.

A::iter(&cache, "hello", |h| { println!("hellow: {}", h.p2); Continue(true) }).unwrap();

Using the Continue we can avoid iterating over all the objects, for example when we're searching for one concrete object.
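For instance, here is a minimal sketch of an early exit, reusing the A struct and the cache from the full example above (the search condition is made up):

// stop iterating as soon as we find an object whose p2 is 42
A::iter(&cache, "hello", |h| {
    if h.p2 == 42 {
        println!("found: {}", h.p1);
        return Continue(false); // stop the iteration here
    }
    Continue(true)
}).unwrap();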

We're copying every object, but the iter method is still better than all, because if we don't copy or move the object out of the closure, this copy only lives in the closure scope, so we use less memory and we only iterate once. If we use all, we iterate over all objects with that prefix to build the vector, so iterating over that vector afterwards costs more than the iter version.

Signal system

As I said before, the signal system provides a way to register callbacks for key modifications. It is independent of the Model and the Store and can be used on its own:

extern crate mdl;

use mdl::Signaler;
use mdl::SignalerAsync;
use mdl::SigType;

use std::sync::{Arc, Mutex};
use std::{thread, time};

fn main() {
    let sig = SignalerAsync::new();
    sig.signal_loop();
    let counter = Arc::new(Mutex::new(0));

    // one thread for receive signals
    let sig1 = sig.clone();
    let c1 = counter.clone();
    let t1: thread::JoinHandle<_> = thread::spawn(move || {
        let _ = sig1.subscribe("signal", Box::new(move |_sig| {
            *c1.lock().unwrap() += 1;
        }));
    });

    // waiting for threads to finish
    t1.join().unwrap();

    // one thread for emit signals
    let sig2 = sig.clone();
    let t2: thread::JoinHandle<_> = thread::spawn(move || {
        sig2.emit(SigType::Update, "signal").unwrap();
        sig2.emit(SigType::Update, "signal:2").unwrap();
        sig2.emit(SigType::Update, "signal:2:3").unwrap();
    });

    // waiting for threads to finish
    t2.join().unwrap();

    let ten_millis = time::Duration::from_millis(10);
    thread::sleep(ten_millis);

    assert_eq!(*counter.lock().unwrap(), 3);
}

In this example we're creating a SignalerAsync that can emit signals and to which we can subscribe callbacks. The sig.signal_loop(); call starts the signal loop thread, which waits for signals and calls any subscribed callback when a signal arrives.

let _ = sig1.subscribe("signal", Box::new(move |_sig| {
    *c1.lock().unwrap() += 1;
}));

We subscribe a callback to the signaler. The signaler can be cloned and the list of callbacks stays the same: if you emit a signal in one clone and subscribe in another clone, that signal will trigger the callback.

Then we're emitting some signals:

sig2.emit(SigType::Update, "signal").unwrap();
sig2.emit(SigType::Update, "signal:2").unwrap();
sig2.emit(SigType::Update, "signal:2:3").unwrap();

All three of these signals will trigger the previous callback, because the subscription works as a "starts with" match on the signal name. If we follow the previously described keys, this allows us to subscribe to all new room message insertions by subscribing to "msg:roomid"; and if we only want a callback to be called when one specific message is updated, we can subscribe to "msg:roomid:msgid" and that callback won't be triggered for other messages.
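As a small illustrative sketch of those two kinds of subscription (the room and message ids are made up, but the API is the one shown above):

extern crate mdl;

use mdl::Signaler;
use mdl::SignalerAsync;
use mdl::SigType;

use std::{thread, time};

fn main() {
    let sig = SignalerAsync::new();
    sig.signal_loop();

    // fires for any key starting with "msg:room1": every message in that room
    let _ = sig.subscribe("msg:room1", Box::new(|_sig| {
        println!("something in room1 changed");
    }));

    // fires only for this one concrete message
    let _ = sig.subscribe("msg:room1:42", Box::new(|_sig| {
        println!("message 42 was updated");
    }));

    // this emit matches both subscriptions above
    sig.emit(SigType::Update, "msg:room1:42").unwrap();

    // give the signal loop thread a moment to deliver the callbacks
    thread::sleep(time::Duration::from_millis(10));
}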

The callback should be a Box<Fn(Signal)> where Signal is the following struct:

#[derive(Clone, Debug)]
pub enum SigType {
    Update,
    Delete,
}

#[derive(Clone, Debug)]
pub struct Signal {
    pub type_: SigType,
    pub name: String,
}

Currently only Update and Delete signal types are supported.

Signaler in gtk main loop

All UI operations in a gtk app should be executed in the gtk main loop, so we can't use the SignalerAsync in a gtk app: this signaler creates one thread for the callbacks, so all callbacks must implement the Send trait, and if we want to modify, for example, a gtk::Label in a callback, that callback won't implement Send, because gtk::Label can't be sent between threads safely.

To solve this problem, I've added the SignalerSync. It doesn't launch any threads; all operations run in the same thread, even the callbacks. This is a problem if one of your callbacks blocks the thread, because in a gtk app that will lock up your interface, so any callback in the sync signaler should be non-blocking.

This signaler is used in a different way: we should call the signal_loop_sync method from time to time, which checks for new signals and triggers any subscribed callbacks. This signaler doesn't have a signal_loop because we do the loop in our own thread.

This is an example of how to run the signaler loop inside a gtk app:

let sig = SignalerSync::new();
let sig1 = sig.clone();

gtk::timeout_add(50, move || {
    gtk::Continue(sig1.signal_loop_sync())
});

// We can subscribe callbacks using the sig here

In this example code we're registering a timeout callback: every 50ms this closure will be called from the gtk main thread, and signal_loop_sync will check for signals and call the needed callbacks.

This method returns a bool that's false when the signaler stops. You can stop the signaler by calling the stop method.

Point of extension

I've tried to make this crate generic so it can be extended in the future to provide other kinds of caches that can be used with little code change in the apps that use mdl.

This is the main reason to use traits to implement the store. The first point of extension is to add more cache systems; we currently have two, LMDB and BTreeMap, but it would be easy to add more key-value storages, like memcached, unqlite, mongodb, redis, couchdb, etc.

The signaler is really simple, so maybe we can start to think about new signalers that use Futures and other kinds of callback registration.

As I said before, mdl makes a copy of the data on every write and on every read, so it could be interesting to explore the performance implications of these copies and to look for ways to reduce this overhead.

Nick Richards: Pinpoint Flatpak

Planet GNOME - Fri, 03/08/2018 - 11:53am

A while back I made a Pinpoint COPR repo in order to get access to this marvelous tool in Fedora. Well, now I work for Endless and the only way you can run apps on our system is in a Flatpak container. So I whipped up a quick Pinpoint Flatpak in order to give a talk at GUADEC this year.

Flatpak is actually very helpful here, since the libraries required are rapidly becoming antique, and carrying them around on your base system is gross as well as somewhat insecure. There isn’t a GUI to create or open files, and it’s somewhat awkward to use if you’re not already an expert, so I didn’t submit the app to Flathub; however, you can easily download and install the bundle locally. I hope the two people for whom this is useful find it as useful as I found it to make.

Nick Richards: Pinpoint COPR Repo

Planet GNOME - Fri, 03/08/2018 - 11:53am

A few years ago I worked with a number of my former colleagues to create Pinpoint, a quick hack that made it easier for us to give presentations that didn’t suck. Now that I’m at Collabora I have a couple of presentations to make and using pinpoint was a natural choice. I’ve been updating our internal templates to use our shiny new brand and wanted to use some newer features that weren’t available in Fedora’s version of pinpoint.

There hasn’t been an official release for a little while and a few useful patches have built up on the master branch. I’ve packaged a git snapshot and created a COPR repo for Fedora so you can use these snapshots yourself. They’re good.

Matthew Garrett: Porting Coreboot to the 51NB X210

Planet GNOME - Fri, 03/08/2018 - 3:35am
The X210 is a strange machine. A set of Chinese enthusiasts developed a series of motherboards that slot into old Thinkpad chassis, providing significantly more up-to-date hardware. The X210 has a Kabylake CPU, supports up to 32GB of RAM, has an NVMe-capable M.2 slot and has eDP support - and it fits into an X200 or X201 chassis, which means it also comes with a classic Thinkpad keyboard. We ordered some from a Facebook page (a process that involved wiring a large chunk of money to a Chinese bank which wasn't at all stressful), and a couple of weeks later they arrived. Once I'd put mine together I had a quad-core i7-8550U with 16GB of RAM, a 512GB NVMe drive and a 1920x1200 display. I'd transplanted over the drive from my XPS13, so I was running stock Fedora for most of this development process.

The other fun thing about it is that none of the firmware flashing protection is enabled, including Intel Boot Guard. This means running a custom firmware image is possible, and what would a ridiculous custom Thinkpad be without ridiculous custom firmware? A shadow of its potential, that's what. So, I read the Coreboot[1] motherboard porting guide and set to.

My life was made a great deal easier by the existence of a port for the Purism Librem 13v2. This is a Skylake system, and Skylake and Kabylake are very similar platforms. So, the first job was to just copy that into a new directory and start from there. The first step was to update the Inteltool utility so it understood the chipset - this commit shows what was necessary there. It's mostly just adding new PCI IDs, but it also needed some adjustment to account for the GPIO allocation being different on mobile parts when compared to desktop ones. One thing that bit me - Inteltool relies on being able to mmap() arbitrary bits of physical address space, and the kernel doesn't allow that if CONFIG_STRICT_DEVMEM is enabled. I had to disable that first.

The GPIO pins got dropped into gpio.h. I ended up just pushing the raw values into there rather than parsing them back into more semantically meaningful definitions, partly because I don't understand what these things do that well and largely because I'm lazy. Once that was done, on to the next step.

High Definition Audio devices (or HDA) have a standard interface, but the codecs attached to the HDA device vary - both in terms of their own configuration, and in terms of dealing with how the board designer may have laid things out. Thankfully the existing configuration could be copied from /sys/class/sound/card0/hwC0D0/init_pin_configs[2] and then hda_verb.h could be updated.

One more piece of hardware-specific configuration is the Video BIOS Table, or VBT. This contains information used by the graphics drivers (firmware or OS-level) to configure the display correctly, and again is somewhat system-specific. This can be grabbed from /sys/kernel/debug/dri/0/i915_vbt.

A lot of the remaining platform-specific configuration has been split out into board-specific config files, and this also needed updating. Most stuff was the same, but I confirmed the GPE and genx_dec register values by using Inteltool to dump them from the vendor system and copy them over. lspci -t gave me the bus topology and told me which PCIe root ports were in use, and lsusb -t gave me port numbers for USB. That let me update the root port and USB tables.

The final code update required was to tell the OS how to communicate with the embedded controller. Various ACPI functions are actually handled by this autonomous device, but it's still necessary for the OS to know how to obtain information from it. This involves writing some ACPI code, but that's largely a matter of cutting and pasting from the vendor firmware - the EC layout depends on the EC firmware rather than the system firmware, and we weren't planning on changing the EC firmware in any way. Using ifdtool told me that the vendor firmware image wasn't using the EC region of the flash, so my assumption was that the EC had its own firmware stored somewhere else. I was ready to flash.

The first attempt involved isis' machine, using their Beaglebone Black as a flashing device - the lack of protection in the firmware meant we ought to be able to get away with using flashrom directly on the host SPI controller, but using an external flasher meant we stood a better chance of being able to recover if something went wrong. We flashed, plugged in the power and… nothing. Literally. The power LED didn't turn on. The machine was very, very dead.

Things like managing battery charging and status indicators are up to the EC, and the complete absence of anything going on here meant that the EC wasn't running. The most likely reason for that was that the system flash did contain the EC's firmware even though the descriptor said it didn't, and now the system was very unhappy. Worse, the flash wouldn't speak to us any more - the power supply from the Beaglebone to the flash chip was sufficient to power up the EC, and the EC was then holding onto the SPI bus desperately trying to read its firmware. Bother. This was made rather more embarrassing because isis had explicitly raised concern about flashing an image that didn't contain any EC firmware, and now I'd killed their laptop.

After some digging I was able to find EC firmware for a related 51NB system, and looking at that gave me a bunch of strings that seemed reasonably identifiable. Looking at the original vendor ROM showed very similar code located at offset 0x00200000 into the image, so I added a small tool to inject the EC firmware (basing it on an existing tool that does something similar for the EC in some HP laptops). I now had an image that I was reasonably confident would get further, but we couldn't flash it. Next step seemed like it was going to involve desoldering the flash from the board, which is a colossal pain. Time to sleep on the problem.

The next morning we were able to borrow a Dediprog SPI flasher. These are much faster than doing SPI over GPIO lines, and also support running the flash at different voltages. At 3.5V the behaviour was the same as we'd seen the previous night - nothing. According to the datasheet, the flash required at least 2.7V to run, but flashrom listed 1.8V as the next lower voltage so we tried. And, amazingly, it worked - not reliably, but sufficiently. Our hypothesis is that the chip is marginally able to run at that voltage, but that the EC isn't - we were no longer powering the EC up, so we could communicate with the flash. After a couple of attempts we were able to write enough that we had EC firmware on there, at which point we could shift back to flashing at 3.5V because the EC was leaving the flash alone.

So, we flashed again. And, amazingly, we ended up staring at a UEFI shell prompt[3]. USB wasn't working, and nor was the onboard keyboard, but we had graphics and were executing actual firmware code. I was able to get USB working fairly quickly - it turns out that Linux numbers USB ports from 1 and the FSP numbers them from 0, and fixing that up gave us working USB. We were able to boot Linux! Except there were a whole bunch of errors complaining about EC timeouts, and also we only had half the RAM we should.

After some discussion on the Coreboot IRC channel, we figured out the RAM issue - the Librem13 only has one DIMM slot. The FSP expects to be given a set of i2c addresses to probe, one for each DIMM socket. It is then able to read back the DIMM configuration and configure the memory controller appropriately. Running i2cdetect against the system SMBus gave us a range of devices, including one at 0x50 and one at 0x52. The detected DIMM was at 0x50, which made 0x52 seem like a reasonable bet - and grepping the tree showed that several other systems used 0x52 as the address for their second socket. Adding that to the list of addresses and passing it to the FSP gave us all our RAM.

So, now we just had to deal with the EC. One thing we noticed was that if we flashed the vendor firmware, ran it, flashed Coreboot and then rebooted without cutting the power, the EC worked. This strongly suggested that there was some setup code happening in the vendor firmware that configured the EC appropriately, and if we duplicated that it would probably work. Unfortunately, figuring out what that code was was difficult. I ended up dumping the PCI device configuration for the vendor firmware and for Coreboot in case that would give us any clues, but the only thing that seemed relevant at all was that the LPC controller was configured to pass io ports 0x4e and 0x4f to the LPC bus with the vendor firmware, but not with Coreboot. Unfortunately the EC was supposed to be listening on 0x62 and 0x66, so this wasn't the problem.

I ended up solving this by using UEFITool to extract all the code from the vendor firmware, and then disassembled every object and grepped them for port io. x86 systems have two separate io buses - memory and port IO. Port IO is well suited to simple devices that don't need a lot of bandwidth, and the EC is definitely one of these - there's no way to talk to it other than using port IO, so any configuration was almost certainly happening that way. I found a whole bunch of stuff that touched the EC, but was clearly depending on it already having been enabled. I found a wide range of cases where port IO was being used for early PCI configuration. And, finally, I found some code that reconfigured the LPC bridge to route 0x4e and 0x4f to the LPC bus (explaining the configuration change I'd seen earlier), and then wrote a bunch of values to those addresses. I mimicked those, and suddenly the EC started responding.

It turns out that the writes that made this work weren't terribly magic. PCs used to have a SuperIO chip that provided most of the legacy port functionality, including the floppy drive controller and parallel and serial ports. Individual components (called logical devices, or LDNs) could be enabled and disabled using a sequence of writes that was fairly consistent between vendors. Someone on the Coreboot IRC channel recognised that the writes that enabled the EC were simply using that protocol to enable a series of LDNs, which apparently correspond to things like "Working EC" and "Working keyboard". And with that, we were done.

Coreboot doesn't currently have ACPI support for the latest Intel graphics chipsets, so right now my image doesn't have working backlight control. Backlight control also turned out to be interesting. Most modern Intel systems handle the backlight via registers in the GPU, but the X210 uses the embedded controller (possibly because it supports both LVDS and eDP panels). This means that adding a simple display stub is sufficient - all we have to do on a backlight set request is store the value in the EC, and it does the rest.

Other than that, everything seems to work (although there's probably a bunch of power management optimisation to do). I started this process knowing almost nothing about Coreboot, but thanks to the help of people on IRC I was able to get things working in about two days of work[4] and now have firmware that's about as custom as my laptop.

[1] Why not Libreboot? Because modern Intel SoCs haven't had their memory initialisation code reverse engineered, so the only way to boot them is to use the proprietary Intel Firmware Support Package.
[2] Card 0, device 0
[3] After a few false starts - it turns out that the initial memory training can take a surprisingly long time, and we kept giving up before that had happened
[4] Spread over 5 or so days of real time


Matthias Clasen: On Flatpak updates

Planet GNOME - Thu, 02/08/2018 - 7:21pm

Maybe you remember times when updating your system was risky business – your web browser might crash or start to behave funny because the update pulled data files or fonts out from underneath the running process, leading to fireworks or, more likely, crashes.

Flatpak updates on the other hand are 100% safe. You can call

flatpak update

and the running instances are not affected in any way. Flatpak keeps existing deployments around until the last user is gone. If you quit the application and restart it, you will get the updated version, though.

This is very nice, and works just fine. But maybe we can do even better?

Improving the system

It would be great if the system was aware of the running instances, and offered to restart them to take advantage of the new version that is now available. There is a good chance that GNOME Software will gain this feature before too long.

But for now, it does not have it.

Do it yourself

Many apps, in particular those that are not native to the Linux distro world, expect to update themselves, and we have had requests to enable this functionality in flatpak. We do think that updating software is a system responsibility that should be controlled by global policies and be under the user's control, so we haven’t quite followed that request.

But Flatpak 1.0 does have an API that is useful in this context, the “Flatpak portal”. It has a Spawn method that allows applications to launch a process in a new sandbox.

Spawn (IN ay cwd_path, IN aay argv, IN a{uh} fds, IN a{ss} envs, IN u flags, IN a{sv} options, OUT u pid)

There are several use cases for this, from sandboxing thumbnailers (which create thumbnails for possibly untrusted content files) to sandboxing web browser tabs individually. The use case we are interested in here is restarting the latest version of the app itself.

One complication that I’ve run into when trying this out is the “unique application” pattern that is built into GApplication and similar application classes: Since there is already an owner for the application ID on the session bus, my newly spawned version will just back off and exit. Which is clearly not what I intended in this case.

Make it stop

The workaround I came up with is not very pretty, but functional. It requires several parts.

First, I need a “quit” action exported on the session bus. The newly spawned version will activate this action of the running instance to convince it to go away. Thankfully, my example app already had this action, for the Quit item in the app menu.

I don’t want this to happen unconditionally, but only if I am spawning a new version. To achieve this, I made my app only activate “quit” if the --replace option is present, and add that option to the commandline that I pass to the “Spawn” call.

The code for this part is less pretty than it could be, since GApplication gets in the way a bit. I have to manually check for the --replace option and do the “quit” D-Bus call by hand.

Doing the “quit” call synchronously is not quite enough to avoid a race condition between the running instance dropping the bus name and my new instance attempting to take it. Therefore, I explicitly wait for the bus name to become unowned before entering g_application_run().

https://blogs.gnome.org/mclasen/files/2018/08/Screencast-from-08-02-2018-124710-PM.webm

But it all works fine. To test it, I exported a “restart” action and added it to the app menu.

Tell me about it

But who can remember to open the app menu and click “Restart”? That is just too cumbersome. Thankfully, flatpak has a solution for this: when you update an app that is running, it creates a marker file named

/app/.updated

inside the sandbox for each running instance.

That makes it very easy for the app to find out when it has been updated, by just monitoring this file. Once the file appears, it can pop up a dialog that offers the user to restart into the newer version of the app. A good-quality implementation will of course save and restore the state when doing this.
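As a rough sketch (in Rust, and not the code from the example app), a running instance could watch for that marker with something as simple as a polling thread; a real GNOME app would more likely use a file monitor on the main loop instead:

use std::path::Path;
use std::thread;
use std::time::Duration;

fn main() {
    // flatpak creates this file inside the sandbox of a running instance
    // once a newer version of the app has been deployed
    let marker = Path::new("/app/.updated");

    thread::spawn(move || loop {
        if marker.exists() {
            // here the app would offer the user a restart into the new version
            println!("update installed, offer a restart");
            break;
        }
        thread::sleep(Duration::from_secs(10));
    });

    // ... the application's main loop would run here ...
    thread::sleep(Duration::from_secs(1));
}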

https://blogs.gnome.org/mclasen/files/2018/08/Screencast-from-08-02-2018-125142-PM.webm

Voilà, updates made easy!

You can find the working example in the portal-test repository.
