
Reproducible builds folks: Reproducible Builds: week 93 in Stretch cycle

Planet Debian - Sat, 11/02/2017 - 1:23pm

Here's what happened in the Reproducible Builds effort between Sunday January 29 and Saturday February 4 2017:

Media coverage

Dennis Gilmore and Holger Levsen presented "Reproducible Builds and Fedora" (Video, Slides) on February 27th 2017.

On February 1st, stretch/armhf reached 90% reproducible packages in our test framework, so that now all four tested architectures are ≥ 90% reproducible in stretch. Yay! For armhf this means 22472 reproducible source packages (in main); for amd64, arm64 and i386 these figures are 23363, 23062 and 22607 respectively.

Chris Lamb appeared on the Changelog podcast to talk about reproducible builds.

Holger Levsen pitched Reproducible Builds and our need for a logo in the "Open Source Design" room at FOSDEM 2017 (Video, 09:36 - 12:00).

Upcoming Events
  • The Reproducible Build Zoo will be presented by Vagrant Cascadian at the Embedded Linux Conference in Portland, Oregon, February 22nd.

  • Introduction to Reproducible Builds will be presented by Vagrant Cascadian at Scale15x in Pasadena, California, March 5th.

  • Verifying Software Freedom with Reproducible Builds will be presented by Vagrant Cascadian at Libreplanet2017 in Boston, March 25th-26th.

Reproducible work in other projects

We learned that the "slightly more secure" Heads firmware (a Coreboot payload) is now reproducibly built regardless of host system or build directory. A picture says more than a thousand words:

Docker started preliminary work on making image builds reproducible.

Toolchain development and fixes

Ximin Luo continued to write code and test cases for the BUILD_PATH_PREFIX_MAP environment variable. He also did extensive research on cross-platform and cross-language issues with environment variables, filesystem paths, and character encodings, and started preparing a draft specification document to describe all of this.

Chris Lamb asked CPython to implement an environment variable PYTHONREVERSEDICTKEYORDER to add an option to reverse the iteration order of items in a dict. However this was rejected because they are planning to formally fix this order in the next language version.

Bernhard Wiedemann and Florian Festi added support for our SOURCE_DATE_EPOCH environment variable to the RPM Package Manager.
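For packagers who want to try this once the support lands, the variable just needs to be exported before the build; the date and spec file below are made-up values for illustration:

export SOURCE_DATE_EPOCH=$(date -d '2017-02-04 00:00:00 UTC' +%s)   # pin "now" to a fixed point in time
rpmbuild -ba example.spec   # hypothetical package; tools honouring SOURCE_DATE_EPOCH clamp their timestamps to it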

James McCoy uploaded devscripts 2.17.1 with a change from Guillem Jover for dscverify(1), adding support for .buildinfo files. (Closes: #852801)

Piotr Ożarowski uploaded dh-python 2.20170125 with a change from Chris Lamb for a patch to fix #835805.

Chris Lamb added documentation to diffoscope, strip-nondeterminism, disorderfs, reprotest and trydiffoscope about uploading signed tarballs when releasing. He also added a link to these on our website's tools page.

Packages reviewed and bugs filed

Bugs filed:

Reviews of unreproducible packages

83 package reviews have been added, 86 have been updated and 276 have been removed this week, adding to our knowledge about identified issues.

2 issue types have been updated:

Weekly QA work

During our reproducibility testing, the following FTBFS bugs have been detected and reported by:

  • Chris Lamb (6)
diffoscope development

Work on the next version (71) continued in git this week:

  • Mattia Rizzolo:
    • Override a lintian warning.
  • Chris Lamb:
    • Update and consolidate documentation
    • Many test additions and improvements
    • Various code quality and software architecture improvements
  • anthraxx:
    • Update arch package, cdrkit -> cdrtools.
reproducible-website development

Daniel Shahaf added more notes on our "How to chair a meeting" document.

Holger unblacklisted pspp and tiledarray. If you think further packages should also be unblacklisted (possibly only on some architectures), please tell us.


This week's edition was written by Ximin Luo, Holger Levsen and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Mark Brown: We show up

Planet Debian - Sat, 11/02/2017 - 1:12pm

It’s really common for pitches to management within companies about Linux kernel upstreaming to focus on cost savings to vendors from getting their code into the kernel, especially in the embedded space. These benefits are definitely real, especially for vendors trying to address the general market or extend the lifetime of their devices, but they are only part of the story. The other big thing that happens as a result of engaging upstream is that this is a big part of how other upstream developers become aware of what sorts of hardware and use cases there are out there.

From this point of view it’s often the things that are most difficult to get upstream that are the most valuable to talk to upstream about. Of course it’s not quite that simple: a track record of engagement on the simpler drivers, and the knowledge and relationships built up in that process, make having discussions about harder issues a lot easier. There are engineering and cost benefits that come directly from having code upstream, but it’s not just that; the more straightforward upstreaming is also an investment in making it easier to work with the community to solve the more difficult problems.

Fundamentally Linux is made by and for the people and companies who show up and participate in the upstream community. The more ways people and companies do that the better Linux is likely to meet their needs.

Noah Meyerhans: Using FAI to customize and build your own cloud images

Planet Debian - Sat, 11/02/2017 - 8:42am

At this past November's Debian cloud sprint, we classified our image users into three broad buckets in order to help guide our discussions and ensure that we were covering the common use cases. Our users fit generally into one of the following groups:

  1. People who directly launch our image and treat it like a classic VPS. These users most likely will be logging into their instances via ssh and configuring it interactively, though they may also install and use a configuration management system at some point.
  2. People who directly launch our images but configure them automatically via launch-time configuration passed to the cloud-init process on the agent. This automatic configuration may optionally serve to bootstrap the instance into a more complete configuration management system. The user may or may not ever actually log in to the system at all.
  3. People who will not use our images directly at all, but will instead construct their own image based on ours. They may do this by launching an instance of our image, customizing it, and snapshotting it, or they may build a custom image from scratch by reusing and modifying the tools and configuration that we use to generate our images.

This post is intended to help people in the final category get started with building their own cloud images based on our tools and configuration. As I mentioned in my previous post on the subject, we are using the FAI project with configuration from the fai-cloud-images repository. It's probably a good idea to get familiar with FAI and our configs before proceeding, but it's not necessary.

You'll need to use FAI version 5.3.4 or greater. 5.3.4 is currently available in stretch and jessie-backports. Images can be generated locally on your non-cloud host, or on an existing cloud instance. You'll likely find it more convenient to use a cloud instance so you can avoid the overhead of having to copy disk images between hosts. For the most part, I'll assume throughout this document that you're generating your image on a cloud instance, but I'll highlight the steps where it actually matters. I'll also be describing the steps to target AWS, though the general workflow should be similar if you're targeting a different platform.

To get started, install the fai-server package on your instance and clone the fai-cloud-images git repository. (I'll assume the repository is cloned to /srv/fai/config.) In order to generate your own disk image that generally matches what we've been distributing, you'll use a command like:

sudo fai-diskimage --hostname stretch-image --size 8G \
    --class DEBIAN,STRETCH,AMD64,GRUB_PC,DEVEL,CLOUD,EC2 \
    /tmp/stretch-image.raw

This command will create an 8 GB raw disk image at /tmp/stretch-image.raw, create some partitions and filesystems within it, and install and configure a bunch of packages into it. Exactly what packages it installs and how it configures them will be determined by the FAI config tree and the classes provided on the command line. The package_config subdirectory of the FAI configuration contains several files, the names of which are FAI classes. Activating a given class by referencing it on the fai-diskimage command line instructs FAI to process the contents of the matching package_config file if such a file exists. The files use a simple grammar that provides you with the ability to request certain packages to be installed or removed.

Let's say for example that you'd like to build a custom image that looks mostly identical to Debian's images, but that also contains the Apache HTTP server. You might do that by introducing a new package_config/HTTPD file, as follows:

PACKAGES install apache2

Then, when running fai-diskimage, you'll add HTTPD to the list of classes:

sudo fai-diskimage --hostname stretch-image --size 8G \
    --class DEBIAN,STRETCH,AMD64,GRUB_PC,DEVEL,CLOUD,EC2,HTTPD \
    /tmp/stretch-image.raw

Aside from custom package installation, you're likely to also want custom configuration. FAI allows the use of pretty much any scripting language to perform modifications to your image. A common task that these scripts may want to perform is the installation of custom configuration files. FAI provides the fcopy tool to help with this. Fcopy is aware of FAI's class list and is able to select an appropriate file from the FAI config's files subdirectory based on classes. The scripts/EC2/10-apt script provides a basic example of using fcopy to select and install an apt sources.list file. The files/etc/apt/sources.list/ subdirectory contains both an EC2 and a GCE file. Since we've enabled the EC2 class on our command line, fcopy will find and install that file. You'll notice that the sources.list subdirectory also contains a preinst file, which fcopy can use to perform additional actions prior to actually installing the specified file. postinst scripts are also supported.
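To make this concrete, a customization script for the HTTPD class from the earlier example could look roughly like the sketch below; the script name and target path are hypothetical, only the fcopy call itself is FAI's:

#! /bin/bash
# hypothetical scripts/HTTPD/10-config customization script
# files/etc/apache2/sites-available/000-default.conf/HTTPD would hold the class-specific copy
fcopy -M /etc/apache2/sites-available/000-default.conf   # -M: default owner/permissions; picks the variant matching an active class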

Beyond package and file installation, FAI also provides mechanisms to support debconf preseeding, as well as hooks that are executed at various stages of the image generation process. I recommend following the examples in the fai-cloud-images repo, as well as the FAI guide for more details. I do have one caveat regarding the documentation, however: FAI was originally written to help provision bare-metal systems, and much of its documentation is written with that use case in mind. The cloud image generation process is able to ignore a lot of the complexity of these environments (for example, you don't need to worry about pxeboot and tftp!) However, this means that although you get to ignore probably half of the FAI Guide, it's not immediately obvious which half it is that you get to ignore.

Once you've generated your raw image, you can inspect it by telling Linux about the partitions contained within, and then mount and examine the filesystems. For example:

admin@ip-10-0-0-64:~$ sudo partx --show /tmp/stretch-image.raw
NR START      END  SECTORS SIZE NAME UUID
 1  2048 16777215 16775168   8G      ed093314-01
admin@ip-10-0-0-64:~$ sudo partx -a /tmp/stretch-image.raw
partx: /dev/loop0: error adding partition 1
admin@ip-10-0-0-64:~$ lsblk
NAME      MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
xvda      202:0    0      8G  0 disk
├─xvda1   202:1    0 1007.5K  0 part
└─xvda2   202:2    0      8G  0 part /
loop0       7:0    0      8G  0 loop
└─loop0p1 259:0    0      8G  0 loop
admin@ip-10-0-0-64:~$ sudo mount /dev/loop0p1 /mnt/
admin@ip-10-0-0-64:~$ ls /mnt/
bin/   dev/  home/  initrd.img.old@  lib64/  media/  opt/   root/  sbin/  sys/  usr/  vmlinuz@
boot/  etc/  initrd.img@  lib/  lost+found/  mnt/  proc/  run/  srv/  tmp/  var/  vmlinuz.old@

In order to actually use your image with your cloud provider, you'll need to register it with them. Strictly speaking, these are the only steps that are provider specific and need to be run on your provider's cloud infrastructure. AWS documents this process in the User Guide for Linux Instances. The basic workflow is:

  1. Attach a secondary EBS volume to your EC2 instance. It must be large enough to hold the raw disk image you created.
  2. Use dd to write your image to the secondary volume, e.g. sudo dd if=/tmp/stretch-image.raw of=/dev/xvdb
  3. Use the script in the fai-cloud-images repo to snapshot the volume and register the resulting snapshot with AWS as a new AMI. Example: ./ vol-04351c30c46d7dd6e

The script must be run with access to AWS credentials that grant access to several EC2 API calls: describe-snapshots, create-snapshot, and register-image. It recognizes a --help command-line flag and several options that modify characteristics of the AMI that it registers. When it completes, it will print the AMI ID of your new image. You can now work with this image using standard AWS workflows.

As always, we welcome feedback and contributions via the debian-cloud mailing list or #debian-cloud on IRC.

Emmanuele Bassi: Epoxy

Planet GNOME - Sat, 11/02/2017 - 2:34am

Epoxy is a small library that GTK+, and other projects, use in order to access the OpenGL API in somewhat sane fashion, hiding all the awful bits of craziness that actually need to happen because apparently somebody dosed the water supply at SGI with large quantities of LSD in the mid-‘90s, or something.

As an added advantage, Epoxy is also portable on different platforms, which is a plus for GTK+.

Since I’ve started using Meson for my personal (and some work-related) projects as well, I’ve been on the lookout for adding Meson build rules to other free and open source software projects, in order to improve both their build time and portability, and to improve Meson itself.

As a small, portable project, Epoxy sounded like a good candidate for the port of its build system from autotools to Meson.

To the Bat Build Machine!


Since you may be interested just in the numbers, building Epoxy with Meson on my four-core Kaby Lake Core i7 with an NVMe SSD takes about 45% less time than building it with autotools.

A fairly good fraction of the autotools time is spent going through the autogen and configure phases, because they both aren’t parallelised, and create a ton of shell invocations.

Conversely, Meson’s configuration phase is incredibly fast; the whole Meson build of Epoxy fits in the same time the autogen and configure scripts complete their run.


Epoxy is a simple library, which means it does not need a hugely complicated build system set up; it does have some interesting deviations, though, which made the porting an interesting challenge.

For instance, on Linux and similar operating systems Epoxy uses pkg-config to find things like the EGL availability and the X11 headers and libraries; on Windows, though, it relies on finding the opengl32 shared or static library object itself. This means that we get something straightforward in the former case, like:

# Optional dependencies
gl_dep = dependency('gl', required: false)
egl_dep = dependency('egl', required: false)

and something slightly less straightforward in the latter case:

if host_system == 'windows'
  # Required dependencies on Windows
  opengl32_dep = cc.find_library('opengl32', required: true)
  gdi32_dep = cc.find_library('gdi32', required: true)
endif

And, still, this is miles better than what you have to deal with when using autotools.

Let’s take a messy thing in autotools, like checking whether or not the compiler supports a set of arguments; usually, this involves some m4 macro that’s either part of autoconf-archive or some additional repository, like the xorg macros. Meson handles this in a much better way, out of the box:

# Use different flags depending on the compiler
if cc.get_id() == 'msvc'
  test_cflags = [
    '-W3',
    ...,
  ]
elif cc.get_id() == 'gcc'
  test_cflags = [
    '-Wpointer-arith',
    ...,
  ]
else
  test_cflags = [ ]
endif

common_cflags = []
foreach cflag: test_cflags
  if cc.has_argument(cflag)
    common_cflags += [ cflag ]
  endif
endforeach

In terms of speed, the configuration step could be made even faster by parallelising the compiler argument checks; right now, Meson has to do them all in a series, but nothing except some additional parsing effort would prevent Meson from running the whole set of checks in parallel, and gather the results at the end.

Generating code

In order to use the GL entry points without linking against libGL or libGLES*, Epoxy takes the XML description of the API from the Khronos repository and generates the code that ends up being compiled, using a Python script to parse the XML and generate header and source files.

Additionally, and unlike most libraries in the G* stack, Epoxy stores its public headers inside a separate directory from its sources:

libepoxy
├── cross
├── doc
├── include
│   └── epoxy
├── registry
├── src
└── test

The autotools build has the src/ script create both the source and the header file for each XML at the same time using a rule processed when recursing inside the src directory, and proceeds to put the generated header under $(top_builddir)/include/epoxy, and the generated source under $(top_builddir)/src. Each code generation rule in the Makefile manually creates the include/epoxy directory under the build root to make up for parallel dispatch of each rule.

Meson makes it harder to do this kind of spooky-action-at-a-distance build, so we need to generate the headers in one pass, and the source in another. This is a bit of a let down, to be honest, and yet a build that invokes the generator script twice for each API description file is still faster under Ninja than a build with the single invocation under Make.

There are still issues in this step that are being addressed by the Meson developers; for instance, right now we have to use a custom target for each generated header and source separately instead of declaring a generator and calling it multiple times. Hopefully, this will be fixed fairly soon.


Epoxy has a very small footprint, in terms of API, but it still benefits from having some documentation on its use. I decided to generate the API reference using Doxygen, as it’s not a G* library and does not need the additional features of gtk-doc. Sadly, Doxygen’s default style is absolutely terrible; it would be great if somebody could fix it to make it look half as good as the look gtk-doc gets out of the box.

Cross-compilation and native builds

Now we get into “interesting” territory.

Epoxy is portable; it works on Linux and *BSD systems; on macOS; and on Windows. Epoxy also works on both Intel Architecture and on ARM.

Making it run on Unix-like systems is not at all complicated. When it comes to Windows, though, things get weird fast.

Meson uses cross files to determine the environment and toolchain of the host machine, i.e. the machine where the result of the build will eventually run. These are simple text files with key/value pairs that you can either keep in a separate repository, in case you want to share among projects; or you can keep them in your own project’s repository, especially if you want to easily set up continuous integration of cross-compilation builds.

Each toolchain has its own; for instance, this is the description of a cross compilation done on Fedora with MingW:

[binaries]
c = '/usr/bin/x86_64-w64-mingw32-gcc'
cpp = '/usr/bin/x86_64-w64-mingw32-cpp'
ar = '/usr/bin/x86_64-w64-mingw32-ar'
strip = '/usr/bin/x86_64-w64-mingw32-strip'
pkgconfig = '/usr/bin/x86_64-w64-mingw32-pkg-config'
exe_wrapper = 'wine'

This section tells Meson where the binaries of the MingW toolchain are; the exe_wrapper key is useful to run the tests under Wine, in this case.

The cross file also has an additional section for things like special compiler and linker flags:

[properties]
root = '/usr/x86_64-w64-mingw32/sys-root/mingw'
c_args = [ '-pipe', '-Wp,-D_FORTIFY_SOURCE=2', '-fexceptions', '--param=ssp-buffer-size=4', '-I/usr/x86_64-w64-mingw32/sys-root/mingw/include' ]
c_link_args = [ '-L/usr/x86_64-w64-mingw32/sys-root/mingw/lib' ]

These values are taken from the equivalent bits that Fedora provides in their MingW RPMs.

Luckily, the tool that generates the headers and source files is written in Python, so we don’t need an additional layer of complexity, with a tool built and run on a different platform and architecture in order to generate files to be built and run on a different platform.

Continuous Integration

Of course, any decent process of porting, these days, should deal with continuous integration. CI gives us confidence as to whether or not any change whatsoever we make actually works — and not just on our own computer, and our own environment.

Since Epoxy is hosted on GitHub, the quickest way to deal with continuous integration is to use TravisCI, for Linux and macOS; and Appveyor for Windows.

The requirements for Meson are just Python3 and Ninja; Epoxy also requires Python 2.7, for the dispatch generation script, and the shared libraries for GL and the native API needed to create a GL context (GLX, EGL, or WGL); it also optionally needs the X11 libraries and headers and Xvfb for running the test suite.

Since Travis offers an older version of Ubuntu LTS as its base system, we cannot build Epoxy with Meson; additionally, running the test suite is a crapshoot because the Mesa version is hopelessly out of date and will either cause most of the tests to be skipped or, worse, make them segfault. To sidestep this particular issue, I’ve prepared a Docker image with its own harness, and I use it as the containerised environment for Travis.

On Appveyor, thanks to the contribution of Thomas Marrinan we just need to download Python3, Python2, and Ninja, and build everything inside its own root; as an added bonus, Appveyor allows us to take the build artefacts when building from a tag, and shoving them into a zip file that gets deployed to the release page on GitHub.


Most of this work has been done off and on over a couple of months; the rough Meson build conversion was done last December, with the cross-compilation and native builds taking up the last bit of work.

Since Eric does not have any more spare time to devote to Epoxy, he was kind enough to give me access to the original repository, and I’ve tried to reduce the amount of open pull requests and issues there.

I’ve also released version 1.4.0 and I plan to do a 1.4.1 release soon-ish, now that I’m positive Epoxy works on Windows.

I’d like to thank:

  • Eric Anholt, for writing Epoxy and helping out when I needed a hand with it
  • Jussi Pakkanen and Nirbheek Chauhan, for writing Meson and for helping me out with my dumb questions on #mesonbuild
  • Thomas Marrinan, for working on the Appveyor integration and testing Epoxy builds on Windows
  • Yaron Cohen-Tal, for maintaining Epoxy in the interim

Jon Nordby: Live programming IoT systems with MsgFlo+Flowhub

Planet GNOME - Fri, 10/02/2017 - 11:38pm

Last weekend at FOSDEM I presented in the Internet of Things (IoT) devroom,
showing how one can use MsgFlo with Flowhub to visually live-program devices that talk MQTT.

Video recording of MsgFlo Internet of Things presentation at FOSDEM 2017.

If the video does not work, try the alternatives here. See also full presentation notes, incl example code.


Since announcing MsgFlo in 2015, it has mostly been used to build scalable backend systems (“cloud”), using AMQP and RabbitMQ. At The Grid we’ve been processing hundreds of thousands of jobs each week, so that usecase is pretty well tested by now.

However, MsgFlo was designed from the beginning to support multiple messaging systems (including MQTT), as well as other kinds of distributed systems – like a network of embedded devices working together (one aspect of “IoT”). And in MsgFlo 0.10 this is starting to work pretty nicely.

Visual system architecture

Typical MQTT devices have the topic names hidden in code. Any documentation is typically kept in sync (or not…) by hand.
MsgFlo lets you represent your devices and services as FBP/dataflow “components”, and a system as a connected graph of component instances. Each device periodically sends a discovery message to the broker. This message describes the role name, as well as what ports exist (including the MQTT topic names). This leads to a system architecture which can be visualized easily:

Imaginary solution to a typically Norwegian problem: Avoiding your waterpipes freezing in the winter.

Rewire live system

In most MQTT devices, output is sent directly to the input of another device, by using the same MQTT topic name. This hardcodes the system functionality, reducing encapsulation and reusability.
With MsgFlo, each device *should* send output and receive input on topics namespaced to the device.
Connections between devices are handled on the layer above, by the broker/router binding different topics together. With Flowhub, one can change these connections while the system is running.

Change program values on the fly

Changing a parameter or configuration of an embedded device usually requires changing the code and flashing it. This means recompiling and usually being connected to the device over USB. This makes the iteration cycle pretty slow and tedious.
In MsgFlo, devices can (and should!) expose their parameters on MQTT and declare them as inports.
Then they can be changed in Flowhub, the device instantly reflecting the new values.

Great for exploratory coding; quickly trying out different values to find the right one.
Examples are when tweaking animations or game mechanics, as it is near impossible to know up front what feels right.

Add component as adapters

MsgFlo encourages devices to be fairly stupid, focused on a single generally-useful task like providing sensor data, or a way to cause actions in the real world. This lets us define “applications” without touching the individual devices, and adapt the behavior of the system over time.

Imagine we have a device which periodically sends the current temperature, as a floating-point number in Celsius. And a display device which can display text (for instance a small OLED). To show the current temperature, we could wire them directly:

Our display would show something like “22.3333333”. Not very friendly; how does one know what this number means?

Better add a component to do some formatting.

Adding a Python component

Component formatting incoming temperature number to a friendly text string

And then insert it before the display. This will create a new process, and route the data through it.

Our display would now show “Temperature: 22.3 C”

Over time maybe the system grows further

Added another sensor, output now like “Inside 22.2 C Outside: -5.5 C”.

Getting started with MsgFlo

If you have existing “things” that support MQTT, you can start using MsgFlo by either:
1) Modifying the code to also send the discovery message.
2) Use the msgflo-foreign-participant tool to provide discovery without code changes

If you have new things, using one of the MsgFlo libraries is a quick way to support MQTT and MsgFlo. Right now there are libraries for Python, C++11, Node.js, NoFlo and Arduino.

Henri Bergius: Working on an Android tablet, 2017 edition

Planet GNOME - Fri, 10/02/2017 - 7:22pm

Back in 2013 I was working exclusively on an Android tablet. Then with the NoFlo Kickstarter I needed a device with a desktop browser. What followed were brief periods working on a Chromebook, on a 12” MacBook, and even an iPad Pro.

But from April 2016 onwards I’ve been again working with an Android device. Some people have asked me about my setup, and so here is an update.

Why work on a tablet?

When I started on this path in 2013, using a tablet for “real work” was considered crazy. While every story on tablet productivity still brings out the people claiming it is not a real computer for real work, using tablets for real work is becoming more and more common.

A big contributor to this has been the plethora of work-oriented tablets and convertibles released since then. Microsoft’s popular Surface Pro line brought the PC to tablet form factor, and Apple’s iPad Pro devices gave the iPad a keyboard.

Here are a couple of great posts talking about how it feels to work on an iPad:

With all the activity going on, one could claim using a tablet for work has been normalized. But why work on a tablet instead of a “real computer”? Here are some reasons, at least for me:

Free of legacy cruft

Desktop operating systems have become clunky. Window management. File management. Multiple ways to discover, install, and uninstall applications. Broken notification mechanisms.

With a tablet you can bypass pretty much all of that, and jump into a simpler, cleaner interface designed for the modern connected world.

I think this is also the reason driving some developers back to Linux and tiling window managers — cutting manual tweaking and staying focused.

Amazing endurance

Admittedly, laptop battery life has increased a lot since 2013. But with some manufacturers using this as an excuse to ship thinner devices, tablets still win the endurance game.

With my current work tablet, I’m customarily getting 12 or more hours of usage. This means I can power through the typical long days of a startup founder without having to plug in. And when traveling, I really don’t have to care where power sockets are located on trains, airplanes, and conference centers.

Low power usage also means that I can really get a lot more runtime by utilizing the mobile battery pack I originally bought to use with my phone. While I’ve never actually had to try this, back-of-the-envelope math claims I should be able to get a full workweek from the combo without plugging in.

Work and play

The other aspect of using a tablet is that it becomes a very nice content consumption device after I’m done working. Simply disconnect the keyboard and lean back, and the same device you used for writing software becomes a great e-reader, video player, or a gaming machine.

This combined with the battery life has meant that I’ve actually stopped carrying a Kindle with me. While an e-ink screen is still nicer to read, not needing an extra device has its benefits, especially for a frequent one-bag traveller.

The setup

I’m writing this on a Pixel C, a 10.2” Android tablet made by Google. I got the device last spring when there were developer discounts available at ramp-up to the Android 7 release, and have been using it full-time since.


Surprisingly little has changed in my software use since 2013 — I still spend most of my time writing software in either Flowhub or a terminal. Here are the apps I use on a daily basis:

Looking back to the situation in early 2013, the biggest change is that Slack has pretty much killed work email.

Termux is a new app that has done a lot to improve the local development situation. By starting the app you get a very nice Linux chroot environment where a lot of software is only a quick apt install away.

Since much of my non-Flowhub work is done in tmux and vim, I get exactly the same working environment on both local chroot and cloud machines by simply installing my dotfiles on each of them.
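For illustration, bootstrapping that environment inside Termux is only a few commands; the dotfiles URL and install script below are placeholders rather than my actual setup:

apt update && apt install git tmux vim openssh
git clone https://example.com/me/dotfiles ~/dotfiles   # hypothetical dotfiles repository
~/dotfiles/install.sh                                  # assumed helper that symlinks the configs into place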


When I’m on the road I’m using the Pixel C keyboard. This doubles as a screen protector, and provides a reasonable laptop-like typing environment. It attaches to the tablet with very strong magnets and allows a good amount of flexibility on the screen angles.

However, when stationary, no laptop keyboard compares to a real mechanical keyboard. When I’m in the office I use a Filco MiniLa Air, a bluetooth keyboard with quiet-ish Cherry MX brown switches.

This tenkeyless (60%) keyboard is extremely comfortable to type on. However, the sturdy metal case means that it is a little too big and heavy to carry on a daily basis.

In practice I’ve only taken it with me when there has been a longer trip where I know that I’ll be doing a lot of typing. To solve this, I’m actually looking to build a more compact custom mechanical keyboard so I could always have it with me.

Comparison with iOS

So, why work on Android instead of getting an iPad Pro? I’ve actually worked on both, and here are my reasons:

  • Communications between apps: while iOS has extensions now, the ability to send data from an app to another is still a hit-or-miss. Android had intents from day one, meaning pretty much any app can talk to any other app
  • Standard charging: all of my other devices charge with the same USB-C chargers and cables. iPads still use the proprietary Lightning plug, requiring custom dongles for everything
  • Standard accessories: this boils down to USB-C just like charging. With Android I can plug in a network adapter or even a mouse, and it’ll just work
  • Ecosystem lock-in: we’re moving to a world where everything — from household electronics to cars — is either locked to the Apple ecosystem or following standards. I don’t want to be locked to a single vendor for everything digital
  • Browser choice: with iOS you only get one web renderer, the rather dated Safari. On Android I can choose between Chrome, Firefox, or any other browser that has been ported to the platform

Of course, iOS has its own benefits. Apple has a stronger stance on privacy than Google. And there is more well-made tablet software available for iPads than Android. But when almost everything I use is available on the web, this doesn’t matter that much.

The future

As a software developer working on Android tablets, the weakest point of the platform is still that there are no browser developer tools available. This was a problem in 2013, and it is still a problem now.

From my conversations with some Chrome developers, it seems Google has very little interest in addressing this. However, there is a bright spot: the new breed of convertible Chromebooks being released now. And they run Android apps:

Chrome OS is another clean, legacy free, modern computing interface. With these new devices you get the combination of a full desktop browser and the ability to run all Android tablet software.

The Samsung Chromebook Pro/Plus mentioned above is definitely interesting. A high-res 12” screen and a digital pen which I see as something very promising for visual programming purposes.

However, given that I already have a great mechanical keyboard, I’d love a device that shipped without an attached keyboard. We’ll see what kind of devices get out later this year.

Carlos Garcia Campos: Accelerated compositing in WebKitGTK+ 2.14.4

Planet GNOME - Fri, 10/02/2017 - 6:23pm

The WebKitGTK+ 2.14 release was very exciting for us: it finally introduced the threaded compositor to drastically improve accelerated compositing performance. However, the threaded compositor required accelerated compositing to always be enabled, even for non-accelerated content. Unfortunately, this caused different kinds of problems for several people, and proved that we are not ready to render everything with OpenGL yet. The most relevant problems reported were:

  • Memory usage increase: OpenGL contexts use a lot of memory, and we have the compositor in the web process, so we have at least one OpenGL context in every web process. The threaded compositor uses the coordinated graphics model, which also requires more memory than the simple mode we previously used. People who use a lot of tabs in epiphany quickly noticed that the amount of memory required was a lot more.
  • Startup and resize slowness: The threaded compositor makes everything smooth and performs quite well, except at startup or when the view is resized. At startup we need to create the OpenGL context, which is also quite slow by itself, but also need to create the compositing thread, so things are expected to be slower. Resizing the viewport is the only threaded compositor task that needs to be done synchronously, to ensure that everything is in sync, the web view in the UI process, the OpenGL viewport and the backing store surface. This means we need to wait until the threaded compositor has updated to the new size.
  • Rendering issues: some people reported rendering artifacts or even nothing rendered at all. In most of the cases they were not issues in WebKit itself, but in the graphic driver or library. It’s quite difficult for a general purpose web engine to support and deal with all possible GPUs, drivers and libraries. Chromium has a huge list of hardware exceptions to disable some OpenGL extensions or even hardware acceleration entirely.

Because of these issues people started to use different workarounds. Some people, and even applications like evolution, started to use WEBKIT_DISABLE_COMPOSITING_MODE environment variable, that was never meant for users, but for developers. Other people just started to build their own WebKitGTK+ with the threaded compositor disabled. We didn’t remove the build option because we anticipated some people using old hardware might have problems. However, it’s a code path that is not tested at all and will be removed for sure for 2.18.

All these issues are not really specific to the threaded compositor, but to the fact that it forced the accelerated compositing mode to be always enabled, using OpenGL unconditionally. It looked like a good idea, entering/leaving accelerated compositing mode was a source of bugs in the past, and all other WebKit ports have accelerated compositing mode forced too. Other ports use UI side compositing though, or target a very specific hardware, so the memory problems and the driver issues are not a problem for them. The imposition to force the accelerated compositing mode came from the switch to using coordinated graphics, because as I said other ports using coordinated graphics have accelerated compositing mode always enabled, so they didn’t care about the case of it being disabled.

There are a lot of long-term things we can do to improve all the issues, like moving the compositor to the UI (or a dedicated GPU) process to have a single GL context, implementing tab suspension, etc., but we really wanted to fix or at least improve the situation for 2.14 users. Switching back to using accelerated compositing mode on demand is something that we could do in the stable branch and it would improve things, at least to a level comparable to what we had before 2.14, but with the threaded compositor. Making it happen was a matter of fixing a lot of bugs, and the result is this 2.14.4 release. Of course, this will be the default in 2.16 too, where we have also added API to set a hardware acceleration policy.

We recommend that all 2.14 users upgrade to 2.14.4 and stop using the WEBKIT_DISABLE_COMPOSITING_MODE environment variable or building with the threaded compositor disabled. The new API in 2.16 will allow setting a policy for every web view, so if you still need to disable or force hardware acceleration, please use the API instead of WEBKIT_DISABLE_COMPOSITING_MODE and WEBKIT_FORCE_COMPOSITING_MODE.

We really hope this new release and the upcoming 2.16 will work much better for everybody.

Jonas Meurer: debian lts report 2017.01

Planet Debian - Fri, 10/02/2017 - 5:07pm
Debian LTS report for January 2017

January 2017 was my fifth month as a Debian LTS team member. I was allocated 12 hours and had 6,75 hours left over from December 2016. This makes a total of 18,75 hours. Unfortunately I found less time than expected to work on Debian LTS in January. In total, I spent 9 hours on the following security updates:

  • DLA 787-1: XSS protection via Content Security Policy for otrs2
  • DLA 788-1: fix vulnerability in pdns-recursor by dropping illegitimate long queries
  • DLA 798-1: fix multiple vulnerabilities in pdns

Jiri Eischmann: Dark Adwaita and HighContrast Themes for Qt

Planet GNOME - Fri, 10/02/2017 - 4:55pm

One of our goals for Fedora Workstation is to run Qt applications in GNOME as seamlessly as possible. Their look should be as close to their GTK+ counterparts as possible, and you shouldn’t have to set things in two different places just to make a change apply to both GTK+ and Qt applications.

A while back, we introduced the Adwaita theme for Qt and QGnomePlatform which makes sure all settings get translated from the GTK+ world to the Qt one. The original Adwaita theme was written from scratch. To write a theme for Qt is pretty complex and the look of Adwaita for Qt was close to Adwaita for GTK+, but not close enough. Then Martin Bříza, who is working on this, decided to change the approach and based the new version on the default KDE theme and kept changing it until he got a theme that is very similar to Adwaita for GTK+. And indeed it’s now much closer than the first version.

Martin also worked on the dark variant of Adwaita for Qt, so that if you switch to this variant, Qt apps still don’t look out of place. Or if there is a Qt app that uses a dark theme it can have a look that fits into GNOME.

Martin didn’t stop there. GNOME also offers a high contrast theme for those with a visual impairment which prevents them from using standard themes. They’re also not left behind: if you switch to the HighContrast theme in GNOME, Qt apps will switch to it, too.

On the video below, you can see a mix of Qt and GTK+ apps and how they change when you switch between different themes:

These changes should land in Fedora 26 Workstation, but you can already try them out. Martin created a Copr repository. Keep in mind it’s work in progress. If you’d like to report bugs or help with tuning the themes, all the code is on Github.

Alexander Larsson: Maintaining a flatpak repository

Planet GNOME - Fri, 10/02/2017 - 4:05pm

So, you’ve built a flatpak, using flatpak-builder, and now you have a directory called repo. How do you go from here to something that your users can install the application from?

To start with, lets make a minimal sample application, so that we have a repo to look at:

$ echo $'#!/bin/sh\necho hello world' >
$ flatpak build-init appdir com.example.App \
    org.freedesktop.Sdk \
    org.freedesktop.Platform 1.4
$ flatpak build appdir mkdir /app/bin
$ flatpak build appdir install --mode=750 /app/bin
$ flatpak build-finish appdir

This produces the following appdir directory:

├── export
├── files/bin/
└── metadata

We can now export this to the repo directory:

flatpak build-export repo appdir stable

If we copy this directory to a webserver, so that it is accessible via http, let's say as, then anyone can install your application like this:

flatpak remote-add --no-gpg-verify example \
flatpak install example com.example.App

Note: You can of course add a local remote using the path of the repo instead of a uri if you just want to test your build.

The anatomy of a repo

So, how does this work? Let's look a bit deeper at the repository. It looks something like this:

├── config
├── objects
│   ├── 44/6a0ef11b7cc167f3b...0b6c5488.dirmeta
│   ├── 6e/340b9cffb37a989ca...617afa01d.dirtree
│   ├── 74/cfc6e8dd69905a525...64fc0d5e5.dirtree
│   ├── 75/af15131ec214e5074...41858318a.commit
│   ├── c4/083227ca305c44cd6...ceeecd27c.dirtree
│   ├── d8/fdc0a16351fa8bcd8...ac90e63e2.filez
│   ├── e4/2aa65aae7c7b61539...5557cad40.filez
│   └── e9/c16c731270e0a8718...cc8be2601.dirtree
├── refs/heads/app/com.example.App/x86_64/stable
└── summary

If you’ve ever seen a git repository, this looks very similar. The application is referred to by what is called a ref, in this case app/com.example.App/x86_64/stable. If we look at the file with that name in the refs/heads directory we get latest commit id:

$ cat repo/refs/heads/app/com.example.App/x86_64/stable
75af15131ec214e5074...41858318a

When flatpak installs the app it starts by pulling from the repository, which means we download into a local repository the corresponding commit object (i.e. objects/75/af15131ec214e5074...41858318a.commit) and then follow that until all required objects are downloaded.

Another important feature is the summary file. This is a single file that contains information from all the refs files in the repository. It is regenerated from the refs files each time you export a new app or run flatpak build-update-repo. This file is needed because when the repository is accessed via http there is no standard way to list all the refs files, for instance when listing available apps.

This is basically all you have to do to get something running, just build it and copy the results to a webserver. However, there are some details that get important when you’re maintaining a production repository.

GPG signatures

For an official build you want to have a GPG key to sign the repository. In ostree (which is what flatpak uses) every commit is signed, as well as the summary file. When we pull from the repository the summary and the commit signatures are verified, to make sure nothing was modified. This is important because it allows us to use unencrypted http for faster download, and it allows secure use of mirrored repos.

The example repo above did not have signatures, so lets create some. If you have a GPG key already you can use it, although it is recommended that you create a custom key for the repository. That way you can transfer maintainership without having to give anyone else your private key.

So, lets create a custom key:

$ mkdir gpg
$ gpg2 --homedir gpg --quick-gen-key

This eventually creates a keyring in the gpg directory and prints it out:

pub   rsa2048 2017-02-09 [S]
      770B194227ED91BF6C9038B83F4BF10E09A00F3B
uid           [ultimate]
sub   rsa2048 2017-02-09 []

We can then sign the repo after the fact (you can also sign it at build-export time):

$ flatpak build-sign repo \
   --gpg-sign=770B194227ED91BF6C9038B83F4BF10E09A00F3B \
   --gpg-homedir=gpg

This will create the signature of the commit:


You also need to sign the summary file:

$ flatpak build-update-repo repo \
   --gpg-sign=770B194227ED91BF6C9038B83F4BF10E09A00F3B \
   --gpg-homedir=gpg

Which will recreate the summary file, now with a summary.sig file next to it.

If we export the public key for the repo we can give it to users:

$ gpg2 --homedir=gpg \
   --export 770B194227ED91BF6C9038B83F4BF10E09A00F3B > example.gpg

And you can then use this instead of --no-gpg-verify when adding the remote:

$ flatpak remote-add --gpg-import=example.gpg \
   example

Flatpakref files

Having to both add a remote (with the right gpg key file) and then install the application is not a great user experience. To make this easier for users, flatpak has something called flatpakref files. These contain both the information about the repository, including the GPG key, and the application id. The result is that you can just point flatpak at one file and it will install the application.

Let's create an example-app.flatpakref file for our app:

[Flatpak Ref]
Title=Example App
Name=com.example.App
Branch=stable
Url=
GPGKey=mQENBFicjMMBCAC...
IsRuntime=false
RuntimeRepo=

The GPGKey field is just a base64 version of the key, which you can get like this:

$ base64 example.gpg | tr -d '\n'

Most of the fields are obvious, but the last one needs a special explanation. It specifies a file that describes the repository containing the runtimes that the application uses. If this repository is not configured on the user's system it will be automatically added and the runtimes will be installed from it as needed.

Once the flatpakref file is put on the webpage your application can be installed in a single command:

flatpak install

The user can also click on the flatpakref link, and it will open in the default application for software installation (like gnome-software). There are also some additional fields that will be shown in gnome-software: Homepage, Comment, Description and Icon.

Static deltas

When downloading an application, flatpak will do individual requests for each file in the application. If you have a lot of small files this can be slow, so you want to make sure the HTTP keep-alive is enabled on the web server. But even then it can be slow. Additionally, as objects are whole files, we still have to download the entire file even if just one byte in the file changed.

In order to solve this flatpak supports something called static deltas. These are pre-generated delta files (using bsdiff) that contain all the data needed to go from one version to another (or from nothing to a version). If these files are available they are used instead of the individual objects, which allows pulls to be much more efficient.

You can generate static deltas for the latest versions of all apps by passing --generate-static-deltas to flatpak build-update-repo, and I recommend everyone do this for production repositories.
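In practice this is just one extra flag on the build-update-repo invocation shown earlier, for example:

$ flatpak build-update-repo repo \
   --generate-static-deltas \
   --gpg-sign=770B194227ED91BF6C9038B83F4BF10E09A00F3B \
   --gpg-homedir=gpg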


There are a couple of configurations in a flatpak repo that you can set via build-update-repo. One is the title (--title=TITLE). This will be used by default as the title when listing the configured remotes.

Another is the default branch (--default-branch=BRANCH). If this is set, then this will be used as the default branch name (the last part of the ref) when it is not specified. This is useful when you have multiple versions of the same app (for instance a nightly build and a stable build) in the same repository.

AppStream branches

In order to support nice graphical frontends like gnome-software, flatpak uses a metadata format called AppStream. The way this works is that each application ships an AppData xml file, and an icon. Then when you run flatpak build-update-repo, each such xml file and icon are extracted and put in a separate per-arch appdata branch (called e.g. appstream/x86_64). Flatpak then mirrors this branch locally for each remote, and this data is used by gnome-software.

So, make sure your application ships an appdata file, and make sure you run build-update-repo to update the branches whenever an application changes.

Syncing updates

If you build a new version of your app, you will get an updated repository locally that has both the old build and the new build. You then need to copy this updated repo to the webserver. Generally if you just copy the files over with e.g. rsync then things will work fine.

However, there are some race conditions if the new repo files are written at the same time as someone is pulling from the repository. For instance, if you write the summary file before all the objects are copied, then there is a short period where a new pull will not see a complete version of the commit. Another race is if you delete objects from an older version before the new commit is fully uploaded.

All these races are easily avoided by ordering the sync so that the summary file is uploaded after all the objects, and any deletes are done at the end. The ostree-releng-scripts repo has a script to do this.
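If you sync by hand instead, a rough sketch of a safe ordering looks like this (host and paths are placeholders):

rsync -a repo/objects/ example.com:/srv/repo/objects/          # objects first
rsync -a repo/refs/ example.com:/srv/repo/refs/                # then the refs
rsync -a repo/summary repo/summary.sig example.com:/srv/repo/  # summary (and signature) last
rsync -a --delete repo/ example.com:/srv/repo/                 # deletions only at the very end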


Many developers don’t run their own server to host things on, relying instead on services like github or gitlab to host releases. Unfortunately these are generally designed to host single-file releases (such as tarballs), rather than a multi-file repository.

Since flatpak repositories are just a bunch of static files, using Amazon S3 or similar services is very effective. I personally use S3 for the Skype and Spotify flatpak repos, and the cost is about $0.01 a month so far. So, in practice this is essentially free. However, you still need to register with Amazon and have a credit card.

I haven’t found any perfect completely free alternative, but there are some workarounds:

On github, you can use the github pages feature. To do this you create a new git repo and then you commit the flatpak repository into the git repo. Then you enable github pages for the repo and point it at the master branch.
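A rough sketch of that workflow, with placeholder repository names:

$ git init flatpak-repo && cd flatpak-repo
$ cp -r ../repo/* .                   # the ostree repo built earlier
$ git add . && git commit -m 'Update flatpak repository'
$ git remote add origin git@github.com:example/flatpak-repo.git   # hypothetical GitHub repository
$ git push -u origin master           # then enable GitHub Pages for the master branch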

Here is an example of using this:
With the flatpak repo itself available as:

The maximum size of a github pages site is 1GB, and there is a soft bandwidth limit at 100GB per month, so depending on the size of your application and how popular it is, this could be a useful approach.

On gitlab you can use the gitlab pages feature to do something similar.

Rhonda D'Vine: Anouk

Planet Debian - Fri, 10/02/2017 - 1:19pm

I need music to be more productive. Sitting in an open workspace, it also helps to shut off outside noise. And often enough I just turn cmus into shuffle mode and let it play what comes along. Yesterday I stumbled again upon a singer whose voice I fell in love with a long time ago. This is about Anouk.

The song was on a compilation series that I followed because it so easily brought great groups to my attention in a genre that I simply love. It was called "Crossing All Over!" and featured several groups that I dug further into and still love to listen to.

Anyway, don't want to delay the songs for you any longer, so here they are:

  • Nobody's Wife: The first song I heard from her, and her voice totally caught me.
  • Lost: A more quiet song for a break.
  • Modern World: A great song about the toxic beauty norms that society likes to paint. Lovely!

Like always, enjoy!


Dirk Eddelbuettel: anytime 0.2.1

Planet Debian - Fri, 10/02/2017 - 12:37pm

An updated anytime package arrived at CRAN yesterday. This is release number nine, and the first with a little gap to the prior release on Christmas Eve as the features are stabilizing, as is the implementation.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, ... format to either POSIXct or Date objects -- and to do so without requiring a format string. See the anytime page, or the GitHub for a few examples.

This release addresses two small things related to the anydate() and utcdate() conversion (see below) and adds one nice new format, besides some internal changes detailed below:

R> library(anytime)
R> anytime("Thu Sep 01 10:11:12 CDT 2016")
[1] "2016-09-01 10:11:12 CDT"
R> anytime("Thu Sep 01 10:11:12.123456 CDT 2016")  # with frac. seconds
[1] "2016-09-01 10:11:12.123456 CDT"
R>

Of course, all commands are also fully vectorised. See the anytime page, or the GitHub for more examples.

Changes in anytime version 0.2.1 (2017-02-09)
  • The new DatetimeVector class from Rcpp is now used, and proper versioned Depends: have been added (#43)

  • The anydate and utcdate functions convert again from factor and ordered (#46 closing #44)

  • A format similar to RFC 2822 but with additional timezone text can now be parsed (#48 closing #47)

  • Conversion from POSIXt to Date now also respects the timezone (#50 closing #49)

  • The internal .onLoad function was updated

  • The Travis setup uses https to fetch the run script

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Peter Hutterer: libinput knows about internal and external touchpads

Planet GNOME - Fri, 10/02/2017 - 1:27am

libinput has a couple of features that 'automagically' work on touchpads such as disable-while-typing and the lid switch triggered disabling of touchpads and disabling the touchpad when an external mouse is plugged in [1]. But not all of these features make sense on all touchpads. For example, an Apple Magic Trackpad doesn't need disable-while-typing because unless you have a creative arrangement of input devices [2], the touchpad won't be where your palm is likely to hit it. Likewise, a Logitech T650 connected over a unifying receiver shouldn't get disabled when the laptop lid closes.

For this to work, libinput has some code to figure out whether a touchpad is internal or external. Initially we had some code to detect this but eventually moved this to the ID_INPUT_TOUCHPAD_INTEGRATION property now set by udev's hwdb (systemd 231 and later). Having it in the hwdb makes it quite trivial to override locally where the current rules are insufficient (and until the hwdb is fixed, thanks for filing a bug). We still have the fallback code though in case the tag is missing. On a sufficiently modern distribution, udevadm info /sys/class/input/event4 for your touchpad device node should show something like ID_INPUT_TOUCHPAD_INTEGRATION=internal.
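As an illustration, a local override is a small hwdb fragment plus a rebuild of the hwdb index; the file name and especially the match pattern below are assumptions, so check the entries in systemd's 70-touchpad.hwdb for the exact syntax for your device:

# /etc/udev/hwdb.d/71-touchpad-local.hwdb (hypothetical local override)
touchpad:name:My External Touchpad:dmi:*
 ID_INPUT_TOUCHPAD_INTEGRATION=external

After editing, rebuild the index and re-trigger the device:

$ sudo systemd-hwdb update
$ sudo udevadm trigger --sysname-match=event4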

So for any feature that libinput adds for touchpads, we only enable it where it makes sense. That's why your external touchpad doesn't trigger disable-while-typing or the lid switch.

[1] ok, I admit, this is something we should've left to the client, but now we have the feature.
[2] yes, I'm sure there's at least one person out there that uses the touchpad upside down in front of the keyboard and is now angry that libinput doesn't allow arbitrary rotation of the device combined with configurable dwt. I think of you every night I cry myself to sleep.

Sam Thursfield: GUADEC 2017: Friday 28th July to Wednesday 2nd August in Manchester, UK

Planet GNOME - Thu, 09/02/2017 - 9:56pm

The GUADEC 2017 team is happy to officially announce the dates and location of this year’s conference.

GUADEC 2017 will run from Friday 28th July to Wednesday 2nd August. The first three days will include talks and social events, as well as the GNOME Foundation’s AGM. This part of the conference will also include a 20th anniversary celebration for the GNOME project.

The following three days (from Monday 31st July to Wednesday 2nd August) are unconference-style and will include space for hacking, project BoF sessions and possibly training workshops.

The conference days will be at Manchester Metropolitan University’s Brooks Building. The unconference days will be in a nearby University building named The Shed.

Registration and a call for papers will be open later this month. More details, including travel and accommodation tips, are available now at the conference website:

We are interested in running training workshops on Monday 31st July but nothing is planned yet. We would like to hear from anyone who is interested in helping to organise a training workshop.

Inside view of MMU Brooks Building


Charles Plessy: Beware of libinput 1.6.0-1

Planet Debian - Thu, 09/02/2017 - 2:22pm

Since I updated this evening, tap-to-click with my touchpad has been almost totally broken. Fortunately, a fix is pending.

Update: reinstalling the packages at version 1.5.5-4 works around the problem in the meantime.

Sven Hoexter: Limit host access based on LDAP groupOfUniqueNames with sssd

Planet Debian - Thu, 09/02/2017 - 1:02pm

From CentOS 4 to CentOS 6 we used pam_ldap to restrict host access to machines, based on groupOfUniqueNames entries listed in an OpenLDAP directory. With RHEL/CentOS 6, Red Hat had already deprecated pam_ldap and strongly recommended using sssd instead, and with RHEL/CentOS 7 they finally removed pam_ldap from the distribution.

Since pam_ldap supported groupOfUniqueNames for restricting logins, a sizeable collection of groupOfUniqueNames entries had been created to restrict access to all kinds of groups, projects and so on. But sssd is in general only able to filter based on an "ldap_access_filter" or to use the host attribute via "ldap_user_authorized_host", neither of which allows the use of "groupOfUniqueNames". So, to allow a smooth migration, I had to configure sssd in some way to still support groupOfUniqueNames. The configuration I ended up with looks like this:

[domain/hostacl]
autofs_provider = none
ldap_schema = rfc2307bis
# to work properly we've to keep the search_base at the highest level
ldap_search_base = ou=foo,ou=people,o=myorg
ldap_default_bind_dn = cn=ro,ou=ldapaccounts,ou=foo,ou=people,o=myorg
ldap_default_authtok = foobar
id_provider = ldap
auth_provider = ldap
chpass_provider = none
ldap_uri = ldaps://ldapserver:636
ldap_id_use_start_tls = false
cache_credentials = false
ldap_tls_cacertdir = /etc/pki/tls/certs
ldap_tls_cacert = /etc/pki/tls/certs/ca-bundle.crt
ldap_tls_reqcert = allow
ldap_group_object_class = groupOfUniqueNames
ldap_group_member = uniqueMember
access_provider = simple
simple_allow_groups = fraappmgmtt

[sssd]
domains = hostacl
services = nss, pam
config_file_version = 2

Important side note: With current sssd versions you're more or less forced to use ldaps with a validating CA chain, though hostnames are not required to match the CN/SAN so far.

Relevant are:

  • set the ldap_schema to rfc2307bis to use a schema that knows about groupOfUniqueNames at all
  • set the ldap_group_object_class to groupOfUniqueNames
  • set the ldap_group_member to uniqueMember
  • use the access_provider simple

In practice, what we do is match the members of the groupOfUniqueNames against sssd's internal group representation.

The best explanation of the several possible LDAP object classes for representing groups that I've found so far is unfortunately in a German blog post. Another explanation is in the LDAP wiki. In short: within a groupOfUniqueNames you'll find full DNs, while in a posixGroup you usually find login names. Different object classes require different handling.

Next step would be to move auth and nss functionality to sssd as well.

Vincent Bernat: Integration of a Go service with systemd

Planet Debian - Thu, 09/02/2017 - 9:32am

Unlike other programming languages, Go’s runtime doesn’t provide a way to reliably daemonize a service. A system daemon has to supply this functionality. Most distributions ship systemd which would fit the bill. A correct integration with systemd is quite straightforward. There are two interesting aspects: readiness & liveness.

As an example, we will daemonize this service whose goal is to answer requests with nifty 404 errors:

package main

import (
    "log"
    "net"
    "net/http"
)

func main() {
    l, err := net.Listen("tcp", ":8081")
    if err != nil {
        log.Panicf("cannot listen: %s", err)
    }
    http.Serve(l, nil)
}

You can build it with go build 404.go.

Here is the service file, 404.service [1]:

[Unit]
Description=404 micro-service

[Service]
Type=notify
ExecStart=/usr/bin/404
WatchdogSec=30s
Restart=on-failure

[Install]

Readiness

The classic way for a Unix daemon to signal its readiness is to daemonize. Technically, this is done by calling fork(2) twice (which also serves other purposes). This is a very common task and the BSD systems, as well as some other C libraries, supply a daemon(3) function for this purpose. Services are expected to daemonize only when they are ready (after reading configuration files and setting up a listening socket, for example). Then, a system can reliably initialize its services with a simple linear script:

syslogd
unbound
ntpd -s

Each daemon can rely on the previous one being ready to do its work. The sequence of actions is the following:

  1. syslogd reads its configuration, activates /dev/log, daemonizes.
  2. unbound reads its configuration, listens on its configured addresses, daemonizes.
  3. ntpd reads its configuration, connects to NTP peers, waits for the clock to be synchronized [2], daemonizes.

With systemd, we would use Type=fork in the service file. However, Go's runtime does not support that. Instead, we use Type=notify. In this case, systemd expects the daemon to signal its readiness with a message to a Unix socket. The go-systemd package handles the details for us:

package main

import (
    "log"
    "net"
    "net/http"

    "github.com/coreos/go-systemd/daemon"
)

func main() {
    l, err := net.Listen("tcp", ":8081")
    if err != nil {
        log.Panicf("cannot listen: %s", err)
    }
    daemon.SdNotify(false, "READY=1") // ❶
    http.Serve(l, nil)                // ❷
}

It’s important to place the notification after net.Listen() (in ❶): if the notification was sent earlier, a client would get “connection refused” when trying to use the service. When a daemon listens to a socket, connections are queued by the kernel until the daemon is able to accept them (in ❷).

If the service is not run through systemd, the added line is a no-op.
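Since the call also reports whether anything was actually sent, the no-op case can be logged. A minimal sketch, assuming the two-argument SdNotify of the go-systemd daemon package returns a sent flag and an error (notifyReady is just a name made up for this example):

package main

import (
    "log"

    "github.com/coreos/go-systemd/daemon"
)

// notifyReady signals readiness and logs why nothing was sent, if so.
func notifyReady() {
    sent, err := daemon.SdNotify(false, "READY=1")
    switch {
    case err != nil:
        log.Printf("readiness notification failed: %s", err)
    case !sent:
        // NOTIFY_SOCKET is unset: not started by systemd with Type=notify.
        log.Print("not running under systemd, readiness notification skipped")
    }
}

func main() {
    notifyReady()
}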

Liveness

Another interesting feature of systemd is to watch the service and restart it if it happens to crash (thanks to the Restart=on-failure directive). It's also possible to use a watchdog: the service sends watchdog keep-alives at regular intervals. If it fails to do so, systemd will restart it.

We could insert the following code just before the http.Serve() call:

go func() {
    interval, err := daemon.SdWatchdogEnabled(false)
    if err != nil || interval == 0 {
        return
    }
    for {
        daemon.SdNotify(false, "WATCHDOG=1")
        time.Sleep(interval / 3)
    }
}()

However, this doesn’t add much value: the goroutine is unrelated to the core business of the service. If for some reason, the HTTP part gets stuck, the goroutine will happily continue to send keep-alives to systemd.

In our example, we can just do an HTTP query before sending the keep-alive. The internal loop can be replaced with this code:

for {
    _, err := http.Get("http://127.0.0.1:8081") // ❸
    if err == nil {
        daemon.SdNotify(false, "WATCHDOG=1")
    }
    time.Sleep(interval / 3)
}

In ❸, we connect to the service to check if it’s still working. If we get some kind of answer, we send a watchdog keep-alive. If the service is unavailable or if http.Get() gets stuck, systemd will trigger a restart.

There is no universal recipe. However, checks can be split into two groups:

  • Before sending a keep-alive, you execute an active check on the components of your service. The keep-alive is sent only if all checks are successful. The checks can be internal (like in the above example) or external (for example, check with a query to the database).

  • Each component reports its status, telling if it's alive or not. Before sending a keep-alive, you check the reported status of all components (passive check; a sketch of this follows after this list). If some components are late or reported fatal errors, don't send the keep-alive.
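A minimal sketch of the passive approach, with all type and component names invented for the example and the go-systemd calls used as above: each component records a heartbeat, and the watchdog loop only sends the keep-alive when every recorded heartbeat is recent enough.

package main

import (
    "sync"
    "time"

    "github.com/coreos/go-systemd/daemon"
)

// healthBoard records the last time each component reported itself healthy.
type healthBoard struct {
    mu   sync.Mutex
    seen map[string]time.Time
}

func (h *healthBoard) beat(component string) {
    h.mu.Lock()
    defer h.mu.Unlock()
    h.seen[component] = time.Now()
}

func (h *healthBoard) allFresh(maxAge time.Duration) bool {
    h.mu.Lock()
    defer h.mu.Unlock()
    for _, last := range h.seen {
        if time.Since(last) > maxAge {
            return false
        }
    }
    return true
}

func watchdogLoop(h *healthBoard) {
    interval, err := daemon.SdWatchdogEnabled(false)
    if err != nil || interval == 0 {
        return
    }
    for {
        // Only claim liveness when every component reported recently.
        if h.allFresh(interval) {
            daemon.SdNotify(false, "WATCHDOG=1")
        }
        time.Sleep(interval / 3)
    }
}

func main() {
    h := &healthBoard{seen: map[string]time.Time{}}
    go watchdogLoop(h)
    for {
        // A real component would call this from its own processing loop.
        h.beat("http-frontend")
        time.Sleep(5 * time.Second)
    }
}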

If possible, recovery from errors (for example, with a backoff retry) and self-healing (for example, by reestablishing a network connection) is always better, but the watchdog is a good tool to handle the worst cases and avoid too complex recovery logic.

For example, if a component doesn't know how to recover from an exceptional condition [3], instead of using panic(), it could signal its situation before dying. Another dedicated component could try to resolve the situation by restarting the faulty component. If it fails to reach a healthy state in time, the watchdog timer will trigger and the whole service will be restarted.
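A rough sketch of that pattern, with every name invented for the example: the component pushes its unrecoverable condition on a channel instead of calling panic(), and a small supervisor goroutine restarts it. Combined with a watchdog loop like the one above, repeated failures would eventually show up as missing keep-alives and lead to a restart of the whole service.

package main

import (
    "errors"
    "log"
    "time"
)

// fatal carries unrecoverable conditions from components to the supervisor.
var fatal = make(chan error, 1)

func doWork() error {
    // Placeholder: imagine this failing once the limit on the number of
    // file descriptors is reached.
    return errors.New("too many open files")
}

// component reports its situation and returns instead of calling panic().
func component() {
    for {
        if err := doWork(); err != nil {
            fatal <- err
            return
        }
        time.Sleep(time.Second)
    }
}

// supervisor tries to resolve the situation by restarting the component.
func supervisor() {
    for err := range fatal {
        log.Printf("component died: %s; restarting it", err)
        time.Sleep(time.Second) // simple backoff
        go component()
    }
}

func main() {
    go supervisor()
    go component()
    select {}
}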

  1. Depending on the distribution, this should be installed in /lib/systemd/system or /usr/lib/systemd/system. Check with the output of the command pkg-config systemd --variable=systemdsystemunitdir. 

  2. This highly depends on the NTP daemon used. OpenNTPD doesn’t wait unless you use the -s option. ISC NTP doesn’t either unless you use the --wait-sync option. 

  3. An example of an exceptional condition is to reach the limit on the number of file descriptors. Self-healing from this situation is difficult and it’s easy to get stuck in a loop. 

Iustin Pop: Solarized colour theme

Planet Debian - Thu, 09/02/2017 - 1:18am

A while back I was looking for some information on the web, and happened upon a blog post about the subject. I don't remember what I was looking for, but on the same blog, there was a screen shot of what I then learned was the Solarized theme. This caught my eye enough that I decided to try it myself ASAP.

Up until last year, I had been using the 'black on light yellow' xterm scheme for many years. This is good during the day, but too strong at night, so on some machines I switched to 'white on black', but this was not entirely satisfying.

The solarized theme promises consistent colours over both light and dark backgrounds, which would finally make my setups consistent, and it extends to a number of programs. Amongst these, there are themes for mutt on both light and dark backgrounds using only 16 colours. This was good, as my current hand-built theme is based on 256 colours, which doesn't work well in the Linux console.

So I tried changing my terminal to the custom colours, played with it for about 10 minutes, then decided that its contrast was too low, bordering on unreadable. I switched to another desktop where I still had an xterm open using white-on-black, and—this being at night—my eyes immediately went 'no no no, too high contrast'. In about ten minutes I got so used to the new scheme that the old theme was really uncomfortable. There was no turning back now ☺

Interestingly, the light theme was not that much better than black-on-light-yellow, as that theme is already pretty well behaved. But I still migrated for consistency.


Starting from the home page and the internet, I found resources for:

  • Vim and Emacs (for which I use the debian package elpa-solarized-theme).
  • Midnight Commander, for which I currently use peel's theme, although I'm not happy with it; interestingly, the default theme almost works on 16-custom-colours light terminal scheme, but not quite on the dark one.
  • Mutt, which is both in the main combined repository but also on the separate one. I'm not really happy with mutt's theme either, but that seems mostly because I was using a quite different theme before. I'll try to improve what I feel is missing over time.
  • dircolors; I found this to be an absolute requirement for good readability of ls --color, as the defaults are too bad
  • I also took the opportunity to unify my git diff and colordiff theme, but this was not really something that I found and took 'as-is' from some repository; I basically built my own theme.
16 vs 256 colours

The solarized theme/configuration can be done in two ways:

  • by changing the Xresources/terminal 16 basic colours to custom RGB values, or:
  • by using approximations from the fixed 256 colours available in the xterm-256color terminfo

Upstream recommends the custom ones, as they are precisely tuned, instead of using the approximated ones; honestly I don't know if they would make a difference. It's too bad upstream went silent a few years back, as technically it's possible to also override colours above 16 in the 256-colour palette, but in any case, each of the two options has its own cons:

  • using customised 16-colour means that all terminal programs get the new colours scheme, even if they were designed (colour-wise) based on the standard values; this makes some things pretty unreadable (hence the need to fix dircolors), but at least somewhat consistent.
  • using the 256-colour palette, unchanged programs stay the same, but now they look very different from the programs that were updated to solarized; note though that I haven't tested this, but that's how I understand things would be.

So either way it's not perfect.

Desktop-wide consistency

Also not perfect is that for proper consistent look, many more programs would have to be changed; but I don't see that happening in today's world. I've seen for example 3 or 4 Midnight Commander themes, but none of them were actually in the spirit of solarized, even though they were tweaked for solarized.

Even between vim and emacs, which both have one canonical solarized theme, the look is close but not really the same (looking at the markdown source for this blog post: URLs, headers and spelling mistakes are all highlighted differently), though this might not necessarily be due to the theme itself.

So there is no global theme consistency (as I'd wish), but still, I find this much easier on the eyes and no less readable once I got used to it.

Thanks Ethan!

Manuel A. Fernandez Montecelo: FOSDEM 2017: People, RISC-V and ChaosKey

Planet Debian - Thu, 09/02/2017 - 12:52am

This year, for the first time, I attended FOSDEM.

There I met...


... including:

  • friends that I don't see very often;
  • old friends that I didn't expect to see there, some of whom decided to travel from far away in the last minute;
  • met people in person for the first time whom I had previously known only through the internet -- one of whom is a protagonist in a previous blog entry, about the Debian port for OpenRISC;

I met new people in:

  • bars/pubs,
  • restaurants,
  • breakfast tables at lodgings,
  • and public transport.

... from the first hour to the last hour of my stay in Brussels.

In summary, lots of people around.

I had also hoped to meet or spend some (more) time with a few people, but in the end I didn't catch them, or could not spend as much time with them as I would have wished.

For somebody like me who enjoys quiet time by himself, it was a bit too intense in terms of interacting with people. But overall it was a nice winter break, definitely worth attending, and a much better experience than I had expected.

Talks / Events

Of course, I also attended a few talks, some of which were very interesting; although the event is so (sometimes uncomfortably) crowded that the rooms were full more often than not, in which case it was either not possible to enter (the doors were closed) or there were very long queues.

And with so many talks crammed into a weekend, there were so many schedule clashes among the talks I had pre-selected as interesting that I ended up missing most of them.

In terms of technical content, I especially enjoyed the talk by Arun Thomas, RISC-V -- Open Hardware for Your Open Source Software, and some conversations about toolchain and other upstream matters, as well as about the Debian port for RISC-V.

The talk Resurrecting dinosaurs, what can possibly go wrong? -- How Containerised Applications could eat our users, by Richard Brown, was also very good.


Apart from that, I have witnessed a shady cash transaction in a bus from the city centre to FOSDEM in exchange for hardware, not very unlike what I had read about only days before.

So I could not help but get involved in a subsequent transaction myself, to lay my hands on a ChaosKey.

Steve Kemp: Old packages are interesting.

Planet Debian - Wed, 08/02/2017 - 11:00pm

Recently Vincent Bernat wrote about writing his own simple terminal, using vte. That was a fun read, as the sample code built really easily and was functional.

At the end of his post he said:

evilvte is quite customizable and can be lightweight. Consider it as a first alternative. Honestly, I don’t remember why I didn’t pick it.

That set me off looking at evilvte, and it was one of those rare projects which seems to be pretty stable, and also hasn't changed in any recent release of Debian GNU/Linux:

  • lenny had 0.4.3-1.
  • etch had nothing.
  • squeeze had 0.4.6-1.
  • wheezy has release 0.5.1-1.
  • jessie has release 0.5.1-1.
  • stretch has release 0.5.1-1.
  • sid has release 0.5.1-1.

I wonder if it would be possible to easily generate a list of packages which have the same revision in multiple distributions (see the sketch at the end of this post)? Anyway I had a look at the source, and unfortunately spotted that it didn't handle clicking on hyperlinks terribly well. Clicking on a link would pretty much run:

firefox '%s'

That meant there was an obvious security problem.

It is a great terminal though, and it just goes to show how short, simple, and readable such things can be. I enjoyed looking at the source, and furthermore enjoyed using it. Unfortunately due to a dependency issue it looks like this package will be removed from stretch.
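As an aside, coming back to the earlier question about packages that carry the same version across several releases, here is a rough sketch of one way to check a single package. It assumes the plain-text madison service at qa.debian.org (the endpoint rmadison talks to) and its "package | version | suite | architectures" line format; both the URL and the format are assumptions, not something from this post.

package main

import (
    "bufio"
    "fmt"
    "log"
    "net/http"
    "strings"
)

func main() {
    // Query the madison service for one package; the URL and the output
    // format used below are assumptions.
    resp, err := http.Get("https://qa.debian.org/madison.php?text=on&package=evilvte")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    suitesByVersion := map[string][]string{}
    scanner := bufio.NewScanner(resp.Body)
    for scanner.Scan() {
        // Expected line: " evilvte | 0.5.1-1 | stretch | source, amd64, ..."
        fields := strings.Split(scanner.Text(), "|")
        if len(fields) < 3 {
            continue
        }
        version := strings.TrimSpace(fields[1])
        suite := strings.TrimSpace(fields[2])
        suitesByVersion[version] = append(suitesByVersion[version], suite)
    }
    if err := scanner.Err(); err != nil {
        log.Fatal(err)
    }

    // Print every version that appears in more than one suite.
    for version, suites := range suitesByVersion {
        if len(suites) > 1 {
            fmt.Printf("%s is in: %s\n", version, strings.Join(suites, ", "))
        }
    }
}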

