Planet Debian

Steinar H. Gunderson: RC-bugginess

Thu, 21/03/2019 - 12:34am

The RMs very helpfully unblocked my Nageru upload so that a bunch of Futatabi bugfixes could go to buster. I figured this was a good time to find a long-standing RC bug to debug and fix in return.

(Granted, I didn't upload yet, so the bug isn't closed. But a patch should go a long way.)

Only 345 to go…

Simon Josefsson: Planning for a new OpenPGP key

Thu, 21/03/2019 - 12:05am

I’m in the process of migrating to a new OpenPGP key. I have been using GnuPG with keys stored on external hardware (smartcards) for a long time, and I’m firmly committed to that choice. Algorithm-wise, RSA was the best choice for me back when I created my key in 2002, and I used it successfully with a non-standard key size for many years. In 2014 it was time for me to move to a new, stronger key, and I still settled on RSA and a non-standard key size. My master key was 3744 bits instead of 1280 bits, and the smartcard subkeys were 2048 bits instead of 1024 bits. At that time, I had already moved from the OpenPGP smartcard to the NXP-based YubiKey NEO (version 3) that runs JavaCard applets. The primary relevant difference for me was the availability of source code for the OpenPGP implementation running on the device, in the ykneo-openpgp project. The device was still a proprietary hardware and firmware design, though.

Five years later, it is time for a new key again, and I allow myself to revisit some decisions that I made last time.

GnuPG has supported Curve25519/Ed25519 for some time, and today I prefer it over RSA. Infrastructure has been gradually introducing support for it as well, to the point that I now believe I can cut the ropes to the old RSA world. Having an offline master key is still a strong preference, so I will stick to that decision. You shouldn’t run around with your primary master key if it is possible to get by with subkeys for daily use, and that has worked well for me over the years.

Hardware smartcard support for Curve25519/Ed25519 has lagged behind software support. NIIBE Yutaka developed the FST-01 hardware device in 2011, and the more modern FST-01G device in 2016. He also wrote the Gnuk software implementation of the OpenPGP card specification that runs on the FST-01 hardware (and other devices). The FST-01 hardware design is open, and it only runs the Gnuk free software. You can buy the FST-01G device from the FSF. The device has not received the FSF Respects Your Freedom stamp, even though it is sold by the FSF, which seems a bit hypocritical. Devices running Gnuk are the only free software OpenPGP smartcards that support Curve25519/Ed25519 right now, to my knowledge. The physical form factor is not as slick as the YubiKey (especially the nano versions of the YubiKey that can be inserted almost entirely into the USB slot), but it is a trade-off I can live with. Niibe introduced the FST-01SZ at FOSDEM’19, but to me it does not appear to offer any features over the FST-01G and is not available for online purchase right now.

I have always generated keys in software using GnuPG. My arguments traditionally were that I 1) don’t trust closed-source RSA key generation implementations, and 2) want to be able to reproduce my setup with a brand new device. With Gnuk the first argument doesn’t hold any longer. However, I still prefer to generate keys with GnuPG on a Linux-based Debian machine, because that software stack is likely to receive more auditing than Gnuk. It is a delicate decision though, since GnuPG on Debian is many orders of magnitude more complex than the Gnuk software. My second argument is now the primary driver for this decision.

I prefer the SHA-2 family of hashes over SHA-1, and earlier had to configure GnuPG for this. Today I believe the defaults have been improved and this is no longer an issue.

Back in 2014, I had a goal of having a JPEG image embedded in my OpenPGP key. I never finished that process, and I have not been sorry for missing out on anything as a result. On the contrary, the size of the key with an embedded image would have been even more problematic than the already large key holding 4 embedded RSA public keys in it.

To summarize, my requirements for my OpenPGP key setup in 2019 are:

  • Curve25519/Ed25519 algorithms.
  • Master key on USB stick.
  • USB stick only used on an offline computer.
  • Subkeys for daily use (signature, encryption and authentication).
  • Keys are generated in GnuPG software and imported to the smartcard.
  • Smartcard is open hardware and running free software.
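
For concreteness, the key-generation step could be driven by a GnuPG unattended (batch) parameter file along these lines. This is only a sketch: the name, email and expiry below are placeholders, and the exact fields accepted depend on your GnuPG version, so check the "Unattended key generation" section of the GnuPG manual before relying on it:

```
%echo Generating an Ed25519 certify-only primary key (sketch)
Key-Type: eddsa
Key-Curve: ed25519
Key-Usage: cert
Name-Real: Jane Example
Name-Email: jane@example.org
Expire-Date: 2y
%commit
```

Signing, encryption and authentication subkeys could then be added with gpg --quick-add-key, and moved onto the smartcard with the keytocard command inside gpg --edit-key.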

Getting this setup up and running sadly requires quite some detailed work, which will be the topic of other posts… stay tuned!

Lucas Nussbaum: Call for help: graphing Debian trends

Wed, 20/03/2019 - 9:29pm

It has been raised in various discussions how difficult it is to make large-scale changes in Debian.

I think that one part of the problem is that we are not very good at tracking those large-scale changes, and I’d like to change that. A long time ago, I did some graphs about Debian (first in 2011, then in 2013, then again in 2015). An example from 2015 is given below, showing the market share of packaging helpers.

Those were generated using a custom script. Since then, classification tags were added to lintian, and I’d like to institutionalize that a bit, to make it easier to track more trends in Debian, and maybe motivate people to switch to new packaging standards. This could include stuff like the VCS used, salsa migration, debhelper compat levels, patch systems and source formats, but also stuff like systemd unit files vs traditional init scripts, hardening features, etc. The process would look like:

  1. Add classification tags to lintian for relevant stuff (maybe starting with being able to regenerate the graphs from 2015).
  2. Use lintian to scan all packages on, which stores all packages ever uploaded to Debian (well, since 2005), and generate a dataset
  3. Generate nice graphs
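
As a sketch of what step 2 could boil down to: lintian prints classification tags as "C:" lines, so assuming output of the usual C: <package>: <tag> <value> shape (the sample data and the debhelper-compat-level tag below are purely illustrative), a short awk pass already turns it into a dataset:

```shell
#!/bin/sh
# Illustrative sample of lintian classification output; in reality this
# would come from running lintian over the historical package archive.
cat > lintian-output.txt <<'EOF'
C: foo: debhelper-compat-level 9
C: bar: debhelper-compat-level 12
C: baz: debhelper-compat-level 12
EOF

# Aggregate one data point of the trend: how many packages use each
# debhelper compat level.
awk '$3 == "debhelper-compat-level" { count[$4]++ }
     END { for (v in count) print v, count[v] }' lintian-output.txt | sort
```

Run once per archive snapshot, counts like this are all the graphs would need as input.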

Given my limited time available for Debian, I would totally welcome some help. I can probably take care of the second step (I actually did it recently on a subset of packages to check feasibility), but I would need:

  • The help of someone with Perl knowledge, willing to modify lintian to add additional classification tags. There’s no need to be a Debian Developer, and lintian has an extensive test suite, which should make it quite fun to hack on. The code could either be integrated in lintian, or live in a lintian fork that would only be used to generate this data.
  • Ideally (but that’s less important at this stage), the help of someone with web skills to generate a nice website.

Let me know if you are interested.

Bits from Debian: DebConf19 registration is open!

Wed, 20/03/2019 - 8:30pm

Registration for DebConf19 is now open. The event will take place from July 21st to 28th, 2019 at the Central campus of Universidade Tecnológica Federal do Paraná - UTFPR, in Curitiba, Brazil, and will be preceded by DebCamp, from July 14th to 19th, and an Open Day on the 20th.

DebConf is an event open to everyone, no matter how you identify yourself or how others perceive you. We want to increase the visibility of our diversity and work towards inclusion in the Debian Project, drawing our attendees from people just starting their Debian journey to seasoned Debian Developers, as well as active contributors in different areas like packaging, translation, documentation, artwork, testing, specialized derivatives, user support and many others. In other words, all are welcome.

To register for the event, log into the registration system and fill out the form. You will be able to edit and update your registration at any point. However, in order to help the organisers get a better estimate of how many people will attend the event, we would appreciate it if you could access the system and confirm (or cancel) your participation in the Conference as soon as you know whether you will be able to come. The last day to confirm or cancel is June 14th, 2019 23:59:59 UTC. If you don't confirm, or you register after this date, you can still come to DebConf19, but we cannot guarantee availability of accommodation, food and swag (t-shirt, bag…).

For more information about registration, please visit Registration Information

Bursary for travel, accommodation and meals

In an effort to widen the diversity of DebConf attendees, the Debian Project allocates a part of the financial resources obtained through sponsorships to pay for bursaries (travel, accommodation, and/or meals) for participants who request this support when they register.

As resources are limited, we will examine the requests and decide who will receive the bursaries. They will be directed:

  • To active Debian contributors.
  • To promote diversity: newcomers to Debian and/or DebConf, especially from under-represented communities.

Giving a talk, organizing an event or helping during DebConf19 is taken into account when deciding upon your bursary, so please mention these in your bursary application. DebCamp plans can be entered in the usual Sprints page at the Debian wiki.

For more information about bursaries, please visit Applying for a Bursary to DebConf

Attention: registration for DebConf19 will remain open until the Conference, but the deadline to apply for bursaries using the registration form is April 15th, 2019 23:59:59 UTC. This deadline is necessary to give the organisers time to analyze the requests, and to give successful applicants time to prepare for the conference.

To register for the Conference, either with or without a bursary request, please visit:

DebConf would not be possible without the generous support of all our sponsors, especially our Platinum Sponsors Infomaniak and Google. DebConf19 is still accepting sponsors; if you are interested, or think you know of others who would be willing to help, please get in touch!

Jonathan Carter: GitLab and Debian

Wed, 20/03/2019 - 7:43pm

As part of my DPL campaign, I thought that I’d break a few items out into blog posts that don’t quite fit into my platform. This is the first post in that series.

When Debian was hunting for a new VCS-based collaboration suite in 2017, the administrators of the then-current platform, called Alioth (which was a FusionForge instance), strongly considered adopting Pagure, a git hosting framework from Fedora. I was a bit saddened that GitLab appeared to be losing the race, since I’ve been a big fan of the project for years already. At least Pagure would have been a huge improvement over the status quo, and it’s written in Python, which I considered a plus over GitLab, so at least it wasn’t going to be all horrible.

The whole discussion around GitLab vs Pagure turned out to be really fruitful though. GitLab did some introspection around its big non-technical problems, especially concerning their contributor licence agreement, and made some major improvements which made GitLab a lot more suitable for large free software projects, which shortly led to its adoption by both the Debian project and the GNOME project. I think it’s a great example of how open communication and engagement can help reduce friction and make things better for everyone. GitLab has since become even more popular and is now the de facto self-hosted git platform across all types of organisations.

Fun fact: I run a few GitLab instances myself, and often get annoyed with all my tab favicons looking the same, so the first thing I do is create a favicon for my GitLab instances. I’m also the creator of the favicon for the, it’s basically the GitLab logo re-arranged and mangled to be a crude representation of the debian swirl:

The move to GitLab had some consequences that I’m not sure were completely intended. For example, across the project we used to use a whole bunch of different version control systems (git, bzr, mercurial, etc.), but since GitLab only supports git, it has made git the gold standard in Debian too. For better or worse, I do think that it makes it easier for new contributors to get involved, since they can contribute to different teams without having to learn a different VCS for each one.

I don’t think it’s a problem that some teams don’t use salsa (or even git for that matter), but within salsa we have quite a number of team-specific workflows that I think can be documented a lot better and I think in doing so, may merge some of the cases a bit so that it’s more standardised.

When I started working on my DPL platform, I pondered whether I should host my platform in a git repository. I decided to go ahead and do so because it would make me more accountable since any changes I make can be tracked and tagged.

I also decided to run for DPL rather late, and prepared my platform under some pressure, making quite a few mistakes. In another twist of unintended consequences for using git, I woke up this morning with a pleasant surprise of 2 merge requests that fixed those mistakes.

I think GitLab is the best thing that has happened to Debian in a long time, and I think whoever becomes DPL should consider making both git and the a regular piece of the puzzle for new processes that are put in place. Git is becoming so ubiquitous that over time, it’s not even going to be something that an average person would need to learn anymore when getting involved in Debian and it makes sense to embrace it.

Jan Wagner: HAProxy - a journey into multithreading (and SSL)

Wed, 20/03/2019 - 7:39pm

I'm running some load balancers which are using HAProxy to distribute HTTP traffic to multiple systems.

While using SSL with HAProxy has been possible for some time, it wasn't in the early days. So for some customers who needed encryption, we decided to offload it to Apache.
When HAProxy later gained SSL support, keeping this setup still had benefits for larger sites, because HAProxy had a single-process model and doing encryption is indeed far more resource-consuming.
Still, using Apache for SSL offloading was a good choice because it comes with the Multi-Processing Modules worker and event, which are threading-capable. We chose the event mpm because it should deal better with the 'keep alive problem' in HTTP. So far so good.

Last year some large setups started having trouble accepting new connections out of the blue. Unfortunately I found nothing in the logs and also couldn't reproduce this behaviour. After some time I decided to try another Apache mpm and switched over to the worker model. And guess what ... the connection issues vanished.
Some days later I surprisingly learned about the Apache bug in the Debian BTS, "Event MPM listener thread may get blocked by SSL shutdowns", which was an exact description of my problem.

While being back in safe waters, I thought it would be good to have a look into HAProxy again, and learned that threading support was added in version 1.8 and received some more improvements in 1.9.
So we started to look into it on a system with a couple of real CPUs:

# grep processor /proc/cpuinfo | tail -1
processor : 19

At first we needed to install a newer version of HAProxy, since 1.8.x is available via backports but 1.9.x is only available via. I thought I should start with a simple configuration and keep 2 spare CPUs for other tasks:

global
    # one process
    nbproc 1
    # 18 threads
    nbthread 18
    # mapped to the first 18 CPU cores
    cpu-map auto:1/1-18 0-17

Now let's start:

# haproxy -c -V -f /etc/haproxy/haproxy.cfg
# service haproxy reload
# pstree haproxy
No processes found.
# grep "worker #1" /var/log/haproxy.log | tail -2
Mar 20 13:06:51 lb13 haproxy[22156]: [NOTICE] 078/130651 (22156) : New worker #1 (22157) forked
Mar 20 13:06:51 lb13 haproxy[22156]: [ALERT] 078/130651 (22156) : Current worker #1 (22157) exited with code 139 (Segmentation fault)

Okay .. cool! ;) So I started lowering the number of CPUs used, since without threading I was not experiencing segfaults. With 17 threads it seemed to be better:

# service haproxy restart
# pstree haproxy
haproxy---16*[{haproxy}]
# grep "worker #1" /var/log/haproxy.log | tail -2
Mar 20 13:06:51 lb13 haproxy[22156]: [ALERT] 078/130651 (22156) : Current worker #1 (22157) exited with code 139 (Segmentation fault)
Mar 20 13:14:33 lb13 haproxy[27001]: [NOTICE] 078/131433 (27001) : New worker #1 (27002) forked

Now I started to move traffic from Apache to HAProxy slowly, watching the logs carefully. As I shifted more and more traffic over, the number of SSL handshake failure entries went up. While it was possible these were just some clients not supporting our ciphers and/or TLS versions, I had some doubts, but our own monitoring was unsuspicious. So I started to have a look at external monitoring, and after some time I caught some interesting errors:

error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error
error:140943FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac

The last time I had issues I lowered the thread count, so I did this again. And you might have guessed it already: this worked out. With 12 threads I had no issues anymore:

global
    # one process
    nbproc 1
    # 12 threads
    nbthread 12
    # mapped to the first 12 CPU cores (with more than 17 cpus haproxy
    # segfaults, with 16 cpus we have a high rate of ssl errors)
    cpu-map auto:1/1-12 0-11

So we got rid of SSL offloading and the proxy on localhost, with the downside that HAProxy fails 1 of the 146 h2spec tests (h2spec is a conformance testing tool for HTTP/2 implementations), whereas Apache did not fail a single test.

Antoine Beaupré: Securing registration email

Wed, 20/03/2019 - 4:28pm

I've been running my own email server basically forever. Recently, I've been thinking about possible attack vectors against my personal email. There's of course a lot of private information in that mailbox, and if someone manages to compromise my email account, they will see all of it. That's somewhat worrisome, but there are possibly more serious problems to worry about.

TL;DR: if you can, create a second email address for registering on websites, and use stronger protections on that account than on your regular mail.

Hacking accounts through email

Strangely, what keeps me up at night is more the kind of damage an attacker could do to other accounts I hold with that email address. Because basically every online service is backed by an email address, if someone controls my email address, they can do a password reset on every account I have online. In fact, some authentication systems just gave up on passwords altogether and use the email system itself for authentication, essentially using the "password reset" feature as the authentication mechanism.

Some services have protections against this: for example, GitHub requires a 2FA token when doing certain changes, which the attacker hopefully wouldn't have (although phishing attacks have been getting better at bypassing those protections). Other services will warn you about the password change, which might be useful, except the warning is usually sent... to the hacked email address, which doesn't help at all.

The solution: a separate mailbox

I had been using an extension ( to store registration mail in a separate folder for a while already. This allows me to bypass greylisting on the email address, for one. Greylisting is really annoying when you register on a service or do a password reset... The extension also allows me to sort those annoying emails in a separate folder automatically with a simple Sieve rule.
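
For illustration, such a Sieve rule might look like the following; the +register address pattern and the register folder name are just placeholders for whatever alias scheme is in use:

```
require ["fileinto"];

# File anything addressed to the registration alias into its own folder.
if address :matches ["to", "cc"] "*+register@*" {
    fileinto "register";
    stop;
}
```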

More recently, I have been forced to use a completely different email alias ( on some services that dislike having plus signs (+) in email addresses, even though they are perfectly valid. That got me thinking about the security problem again: if I have a different alias anyway, why not make it a completely separate account and harden that against intrusion? With a separate account, I could enforce things like SSH-only access or 2FA that would be inconvenient for my main email address when I travel, because I sometimes log into webmail, for example. Because I don't frequently need access to registration mail, it seemed like a good tradeoff.

So I created a second account, with a locked password and SSH-only authentication. That way the only way someone can compromise my "registration email" is by hacking my physical machine or the server directly, not by just bruteforcing a password.

Now of course I need to figure out which sites I'm registered on with a "non-registration" email ( before I thought of using the register@ alias, I sometimes used my normal address instead. So I'll have to track those down and reset them. But it seems I already blocked a large attack surface with a very simple change, and that feels quite satisfying.

Implementation details

Using syncmaildir (SMD) to sync my email, the change was fairly simple. First I need to create a second SMD profile:

if [ $(hostname) = "marcos" ]; then
    exit 1
fi

SERVERNAME=smd-server-register
CLIENTNAME=$(hostname)-register
MAILBOX_LOCAL=Maildir/.register/
MAILBOX_REMOTE=Maildir
TRANSLATOR_LR="smd-translate -m move -d LR register"
TRANSLATOR_RL="smd-translate -m move -d RL register"
EXCLUDE="Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/*"

Very similar to the normal profile, except mails get stored in the already existing Maildir/.register/, and a different SSH profile and different translation rules are used. The new SSH profile is basically identical to the previous one:

# wrapper for smd
Host smd-server-register
    Hostname
    BatchMode yes
    Compression yes
    User register
    IdentitiesOnly yes
    IdentityFile ~/.ssh/id_ed25519_smd

Then we need to ignore the register folder in the normal configuration:

diff --git a/.smd/config.default b/.smd/config.default
index c42e3d0..74a8b54 100644
--- a/.smd/config.default
+++ b/.smd/config.default
@@ -59,7 +59,7 @@ TRANSLATOR_RL="smd-translate -m move -d RL default"
 # EXCLUDE_LOCAL="Mail/spam Mail/trash"
 # EXCLUDE_REMOTE="OtherMail/with%20spaces"
 #EXCLUDE="Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/*"
-EXCLUDE="Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/*"
+EXCLUDE="Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/* Maildir/.register/*"
 #EXCLUDE_LOCAL="$MAILBOX_LOCAL/.notmuch/hooks/* $MAILBOX_LOCAL/.notmuch/xapian/*"
 #EXCLUDE_REMOTE="$MAILBOX_REMOTE/.notmuch/hooks/* $MAILBOX_REMOTE/.notmuch/xapian/*"
 #EXCLUDE_REMOTE="Maildir/Koumbit Maildir/Koumbit* Maildir/Koumbit/* Maildir/Koumbit.INBOX.Archives/ Maildir/Koumbit.INBOX.Archives.2012/ Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/*"

And finally we add the new profile to the systemd services:

diff --git a/.config/systemd/user/smd-pull.service b/.config/systemd/user/smd-pull.service
index a841306..498391d 100644
--- a/.config/systemd/user/smd-pull.service
+++ b/.config/systemd/user/smd-pull.service
@@ -8,6 +8,7 @@ ConditionHost=!marcos
 Type=oneshot
 # --show-tags gives email counts
 ExecStart=/usr/bin/smd-pull --show-tags
+ExecStart=/usr/bin/smd-pull --show-tags register
 
 [Install]
diff --git a/.config/systemd/user/smd-push.service b/.config/systemd/user/smd-push.service
index 10d53c7..caa588e 100644
--- a/.config/systemd/user/smd-push.service
+++ b/.config/systemd/user/smd-push.service
@@ -8,6 +8,7 @@ ConditionHost=!marcos
 Type=oneshot
 # --show-tags gives email counts
 ExecStart=/usr/bin/smd-push --show-tags
+ExecStart=/usr/bin/smd-push --show-tags register
 
 [Install]

That's about it on the client side. On the server, the user is created with a locked password, and the mailbox is moved over:

adduser --disabled-password register
mv ~anarcat/Maildir/.register/ ~register/Maildir/
chown -R register:register Maildir/

The SSH authentication key is added to .ssh/authorized_keys, and the alias is reversed:

--- a/aliases
+++ b/aliases
@@ -24,7 +24,7 @@ spamtrap: anarcat
 spampd: anarcat
 junk: anarcat
 devnull: /dev/null
-register: anarcat+register
+anarcat+register: register
 # various sandboxes
 anarcat-irc: anarcat

... and the email is also added to /etc/postgrey/whitelist_recipients.

That's it: I now have a hardened email service! Of course there are other ways to harden an email address. On-disk encryption comes to mind, but that only works with password-based authentication from what I understand, which I want to avoid in order to rule out bruteforce attacks.

Your advice and comments are of course very welcome, as usual.

Michal Čihař: translation-finder 1.1

Wed, 20/03/2019 - 2:45pm

The translation-finder module has been released in version 1.1. It is used by Weblate to detect translatable files in the repository, making the setup of translation components in Weblate much easier. This release brings a lot of improvements based on feedback from our users, making the detection more reliable and accurate.

Full list of changes:

  • Improved detection of translation with full language code.
  • Improved detection of language code in directory and file name.
  • Improved detection of language code separated by full stop.
  • Added detection for app store metadata files.
  • Added detection for JSON files.
  • Ignore symlinks during discovery.
  • Improved detection of matching pot files in several corner cases.
  • Improved detection of monolingual Gettext.

Reproducible builds folks: Reproducible Builds: Weekly report #203

Wed, 20/03/2019 - 1:51pm

Here’s what happened in the Reproducible Builds effort between Sunday March 10 and Saturday March 16 2019:

Don’t forget that Reproducible Builds is part of the May/August 2019 round of Outreachy, which offers paid internships to work on free software. Internships are open to applicants around the world; interns are paid a stipend for the three-month internship, with an additional travel stipend to attend conferences. So far, we have received more than ten initial requests from candidates, and the closing date for applications is April 2nd. More information is available on the application page.

Packages reviewed and fixed, and bugs filed

strip-nondeterminism

strip-nondeterminism is our tool that post-processes files to remove known non-deterministic output. This week, Chris Lamb:

Test framework development

We operate a comprehensive Jenkins-based testing framework that powers This week, the following changes were made:

  • Alexander Couzens (OpenWrt support):
    • Correct the arguments for the reproducible_openwrt_package_parser script. []
    • Copy over Package-* files when building. []
    • Fix the Packages.manifest parser. [] []
  • Mattia Rizzolo:

This week’s edition was written by Arnout Engelen, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Jonathan Dowland: First successful Amiga disk-dumping session

Wed, 20/03/2019 - 12:04pm

This is the seventh part in a series of blog posts. The previous post was Learning new things about my old Amiga A500.

[Gallery images: X-COPY user interface; Totoro soot sprites(?); "Cyberpunk" party invitation; my childhood home; HeroQuest board game guide]

I've finally dumped some of my Amiga floppies, and started to recover some old files! The approach I'm taking is to use the real Amiga to read the floppies (in the external floppy disk drive) and then copy them onto a virtual floppy disk image on the Gotek Floppy Emulator. I use X-COPY to perform the copy (much as I would have done back in 1992).

FlashFloppy's default mode of operation is to scan over the filesystem on the attached USB stick and assign a number to every disk image that it discovers (including those in sub-folders). If your Gotek device has the OLED display, then it reports the path to the disk image to you; but I have the simpler model that simply displays the currently selected disk slot number.

For the way I'm using it, its more basic "indexed" mode fits better: you name files in the root of the USB's filesystem using a sequential scheme starting at DSKA0000.ADF (which corresponds to slot 0) and it's then clear which image is active at any given time. I set up the banks with Workbench, X-COPY and a series of blank floppy disk images to receive the real contents, which I was able to generate using FS-UAE (they aren't just full of zeroes).
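
The sequential naming scheme is easy to script. As a small sketch (here a zero-byte placeholder stands in for a real blank ADF image, which in practice would be generated with FS-UAE), this populates the first ten slots:

```shell
#!/bin/sh
# Placeholder for a real 880kB blank ADF image generated with FS-UAE.
: > blank.adf

# Copy the blank image into slots 0-9 using the DSKA0000.ADF naming
# scheme that FlashFloppy's indexed mode expects.
i=0
while [ "$i" -le 9 ]; do
    cp blank.adf "$(printf 'DSKA%04d.ADF' "$i")"
    i=$((i + 1))
done
```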

A few weeks ago I had a day off work and spent an hour in the morning dumping floppies. I managed to dump around 20 floppies successfully, with only a couple of unreadable disks (from my collection of 200). I've prioritised home-made disks, in particular ones that are likely to contain user-made content rather than just copies of commercial disks. But in some cases it's hard to know for sure what's on a disk, and sometimes I've made copies of e.g. Deluxe Paint and subsequently added home-made drawings on top.

Back on my laptop, FS-UAE can quite happily read the resulting disk images, and Deluxe Paint IV via FS-UAE can happily open the drawings that I've found (and it was a lot of fun to fire up DPaint for the first time in over 20 years. This was a really nice piece of software. I must have spent days of my youth exploring it).

I tried a handful of user-mode tools for reading the disk images (OFS format) but they all had problems. In the end I just used the Linux kernel's AFFS driver and loop-back mounts. (I could have looked at libguestfs instead).

To read Deluxe Paint image files on a modern Linux system one can use ImageMagick (via the netpbm back-end) or ffmpeg. ffmpeg can also handle Deluxe Paint animation files, but more care is needed with these: it does not appear to correctly convert frame durations, setting the output animations to a constant 60fps. Given the input image format's colour depth, it's tempting to output to animated GIF rather than a lossy video compression format, but from limited experimentation it seems some nuances of the way that palettes are used in the source files are not handled optimally in the output either. More investigation here is required.

Enjoy a selection of my childhood drawings…

Jonathan Dowland: WadC 3.0

Wed, 20/03/2019 - 11:55am

blockmap.wl being reloaded (click for animation)

A couple of weeks ago I released version 3.0 of Wad Compiler, a lazy functional programming language and IDE for the construction of Doom maps.

3.0 introduces more flexible randomness with rand; two new test maps (blockmap and bsp) that demonstrate approaches to random dungeon generation; some useful data structures in the library; better Hexen support and a bunch of other improvements.

Check the release notes for the full details, and check out the gallery of examples to see the kind of things you can do.

Version 3.0 of WadC is dedicated to Lu (1972-2019). RIP.

Neil McGovern: GNOME ED Update – February

Tue, 19/03/2019 - 11:43pm

Another update is now due from what we’ve been doing at the Foundation, and we’ve been busy!

As you may have seen, we’ve hired three excellent people over the past couple of months. Kristi Progri has joined us as Program Coordinator, Bartłomiej Piorski as a devops sysadmin, and Emmanuele Bassi as our GTK Core developer. I hope to announce another new hire soon, so watch this space…

There’s been quite a lot of discussion around the Google API access, and GNOME Online Accounts. The latest update is that I submitted the application to Google to get GOA verified, and we’ve got a couple of things we’re working through to get this sorted.

Events all round!

Although the new year’s conference season is just kicking off, it’s been a busy one for GNOME already. We were at FOSDEM in Brussels where we had a large booth, selling t-shirts, hoodies and of course, the famous GNOME socks. I held a meeting of the Advisory Board, and we had a great GNOME Beers event – kindly sponsored by Codethink.

We also had a very successful GTK Hackfest – moving us one step closer to GTK 4.0.

Coming up, we’ll have a GNOME booth at:

  • SCALEx17 – Pasadena, California (7th – 10th March)
  • LibrePlanet – Boston Massachusetts (23rd – 24th March)
  • FOSS North – Gothenburg, Sweden (8th – 9th April)
  • Linux Fest North West – Bellingham, Washington (26th – 28th April)

If you’re at any of these, please come along and say hi! We’re also planning out events for the rest of the year. If anyone has any particularly exciting conferences we may not have heard of, please let us know.


It hasn’t yet been announced, but we’re trialling an instance of Discourse for the GTK and Engagement teams. We hope that this may replace mailman, but we’re being quite careful to make sure that email integration continues to work. Expect more information about this in the coming month. If you want to go have a look, the instance is available at

Keith Packard: metro-snek

Mar, 19/03/2019 - 9:08md
MetroSnek — snek on Metro M0 Express

When I first mentioned Snek a few months ago, Phillip Torrone from Adafruit pointed me at their Metro M0 board, which uses an Arduino-compatible layout but replaces the ATMega 328P with a SAMD21G18A. This chip is an ARM Cortex M0 part with 256kB of flash and 32kB of RAM. Such space!

Even though there is already a usable MicroPython port for this board, called CircuitPython, I figured it would be fun to get Snek running as well. The CircuitPython build nearly fills the chip, so the CircuitPython boards all include an off-chip flash part for storing applications. With Snek, there will be plenty of space inside the chip itself for source code, so one could build a cheaper/smaller version without the extra part.

UF2 Boot loader

I decided to leave the existing boot loader in place instead of replacing it with the AltOS version. This makes it easy to swap back to CircuitPython without needing any custom AltOS tools.

The Metro M0 Express boot loader is reached by pressing the reset button twice; it's pretty sweet, exposing a virtual storage device with a magic file, CURRENT.UF2. You write a UF2-formatted ROM image to that name and the firmware extracts the data on the fly and updates the flash in the device. Very slick.

To make this work with AltOS, I had to adjust the start location of the operating system to 0x2000 and leave a bit of space at the end of ROM and RAM clear for the boot loader to use.

Porting AltOS

I already have an embedded operating system that works on Cortex M0 parts, AltOS, which I've been developing for nearly 10 years for use in rocketry and satellite applications. It's also what powers ChaosKey.

Getting AltOS running on another Cortex M0 part is a simple matter of getting clocks running and writing drivers.

What I haven't really settled on is whether to leave this code as a part of AltOS, or to pull the necessary bits into the Snek repository and do a bare-metal implementation.

I've set up the Snek distribution to make integrating it into another operating system simple; that's how the NuttX port works, for instance. It does make the build process more complicated as you have to build and install Snek, then build AltOS for the target device.

SAMD21 Clocks

Every SoC has a different way of configuring and wiring clocks within the system. Most that I've used have a complex clock-tree that you plug various configuration values into to generate clocks for the processor and peripherals.

The SAMD21 is simpler in offering a set of general-purpose clock controllers that can source a variety of clock signals and divide them by an integer. The processor uses clock controller 0; all of the other peripherals can be configured to use any clock controller you like.

The Metro M0 express and Feather M0 express have only a 32.768kHz crystal; they don't have a nice even-MHz crystal connected to the high-speed oscillator. As a result, to generate a '48MHz' clock for the processor and USB controller, I ended up multiplying the 32.768kHz frequency by 1464 using a PLL to generate a 47.972352MHz signal, which is about 0.06% low. Close enough for USB to work.
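Those numbers are easy to sanity-check with shell arithmetic (values taken from the paragraph above):

```shell
# PLL output: 32.768 kHz crystal multiplied by 1464
pll_hz=$((32768 * 1464))
echo "$pll_hz"    # 47972352, i.e. 47.972352 MHz

# deviation from a nominal 48 MHz, in parts per million
ppm=$(( (48000000 - pll_hz) * 1000000 / 48000000 ))
echo "$ppm"       # 576 ppm low, which is the quoted ~0.06%
```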

At first, I typo'd a register value leaving the PLL un-locked. The processor still ran fine, but when I looked at the clock with my oscilloscope, it was very ragged with a mean frequency around 30MHz. It took a few hours to track down the incorrect value, at which point the clock stabilized at about 48MHz.


USART

Next on the agenda was getting a USART to work; nothing terribly complicated there, aside from the clock problem mentioned above which generated a baud rate of around 6000 instead of 9600.

I like getting a USART working because it's usually (always?) easier than USB, plus demonstrates that clocking is working as expected. I can debug serial data with a simple logic analyzer. This time, the logic analyzer is how I discovered the clocking issue -- a bit time of 166µs does not equal 9600 baud.
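A quick check of that bit-time observation, using the numbers quoted above:

```shell
# a 166 µs bit time corresponds to this baud rate (1 bit per 166 µs)
baud=$((1000000 / 166))
echo "$baud"    # 6024 — close to the observed ~6000, not 9600
```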


USB

While I like having USB on-chip in the abstract, the concrete adventure of implementing USB for a new chip is always fraught with peril. In this case, the chip documentation was missing a couple of key details that I had to discover experimentally.

I'm still trying to come up with an abstraction for writing USB drivers for small systems; every one is different enough that I keep using copy&paste instead of building a driver core on top of hardware-specific primitives. In this case, the USB driver is 883 lines of code; the second shortest in AltOS with the ATMega32u4 driver being slightly smaller.


The only hardware that works today is one USART and USB. I also got Snek compiled and running. Left to do:

  • Digital GPIO controls. I've got basic GPIO functionality available in the underlying operating system, but it isn't exposed through Snek yet.

  • Analog outputs. This will involve hooking timers to outputs so that we can PWM them.

  • Analog inputs. That requires getting an ADC driver written and then hooking that into Snek.

  • On-board source storage. I think the ATMega model of storing just one set of source code on the device and reading that at boot time is clean and simple, so I want to do the same here. I think it will be simpler to use the on-chip flash instead of the external flash part. That means reserving a specific chunk of that for source code.

  • Figure out whether this code is part of AltOS, or part of Snek.


Steinar H. Gunderson: When your profiler fools you

Mar, 19/03/2019 - 8:42pd

If you've been following my blog, you'll know about Nageru, my live video mixer, and Futatabi, my instant replay program with slow motion. Nageru and Futatabi both work on the principle that the GPU should be responsible for all the pixel pushing—it's just so much better suited than the CPU—but to do that, the GPU first needs to get at the data.

Thus, in Nageru, pushing the data from the video card to the GPU is one of the main CPU drivers. (The CPU also runs the UI, does audio processing, runs an embedded copy of Chromium if needed—we don't have full GPU acceleration there yet—and not the least encodes the finished video with x264 if you don't want to use Quick Sync for that.) It's a simple task; take two pre-generated OpenGL textures (luma and chroma) with an associated PBO, take the frame that the video capture card has DMAed into system RAM, and copy it while splitting luma from chroma. It goes about as fast as memory bandwidth will allow.

However, when you also want to run Futatabi, it runs on a different machine and also needs to get at the video somehow. Nageru solves this by asking Quick Sync (through VA-API) to encode it to JPEG, and then sending it over the network. Assuming your main GPU is an NVIDIA one (which gives oodles of more headroom for complicated things than the embedded Intel GPU), this means you now need an extra copy, and that's when things started to get hairy for me.

To simplify a long and confusing search, what I found was that with five inputs (three 1080p, two 720p), Nageru would use around two cores (200% CPU in top), and with six (an additional 1080p), it would go to 300%. (This is on a six-core machine.) This made no sense, so I pulled up “perf record”, which said… nothing. The same routines and instructions were hot (mostly the memcpy into the NVIDIA GPU's buffers); there was no indication of anything new taking time. How could it be?

Eventually I looked at top instead of perf, and saw that some of the existing threads went from 10% to 30% CPU. In other words, everything just became uniformly slower. I had a hunch, and tried “perf stat -ddd” instead. One value stood out:

     3 587 503      LLC-load-misses           #   94,72% of all LL-cache hits     (71,91%)

Aha! It was somehow falling off a cliff in L3 cache usage. It's a bit like running out of RAM and starting to swap, just more subtle.

So, what was using so much extra cache? I'm sure someone extremely well-versed in perf or Intel VTune would figure out a way to look at all the memory traffic with PEBS or something, but I'm not a perf expert, so I went for the simpler choice: Remove things. E.g., if you change a memcpy to a memset(dst, 0, len), the GPU will still be encoding a frame, and you'll still be writing things to memory, but you won't be reading anymore. I noticed a really confusing thing: Occasionally, when I did less work, CPU usage would go up. E.g., have a system at 2.4 cores used, remove some stuff, go up to 3.1 cores! Even worse, occasionally after start, it would be at e.g. 2.0 cores, then ten seconds later, it would be at 3.0. I suspected overheating, but even stopping and waiting ten minutes would not be enough to get it reliably down to 2.0 again.

It eventually turned out (again through perf stat) that power saving was getting the better of me. I hadn't expected it to kick in, but seemingly it did, and top doesn't compensate in any way. Change from the default powersave governor to performance (I have electric heating anyway, and it's still winter up here), and tada... My original case was now down from 3.0 to 2.6 cores.
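For reference, a sketch of how to inspect and switch the governor through the standard Linux cpufreq sysfs interface (writing requires root; the cpupower alternative assumes the linux-cpupower tools are installed):

```shell
# show the active governor for each core
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# switch every core to the "performance" governor (needs root)
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# equivalent, using the cpupower tool
sudo cpupower frequency-set -g performance
```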

However, this still wasn't good enough, so I had to do some real (if simple) optimization. Some rearchitecting saved a copy; instead of the capture card receiver thread memcpy-ing into a buffer and the MJPEG encoder thread memcpy-ing it from there to the GPU's memory-mapped buffers, I let the capture card receiver memcpy it directly over instead. (This wasn't done originally due to some complications around locking and such; it took a while to find a reasonable architecture. Of course, when you look at the resulting patch, it doesn't look hard at all.)

This took me down from 3.0 to 2.1 cores for the six-camera case. Success! But still too much; the cliff was still there. It was pretty obvious that it was stalling on something in the six-camera case:

 8 552 429 486      cycle_activity.stalls_mem_any      # 5 inputs
17 082 914 261      cycle_activity.stalls_mem_any      # 6 inputs
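For the record, per-event counts like these can be sampled from a running process for a fixed window; a sketch (the process name is an assumption, and the event is only available on CPUs that expose it):

```shell
# count memory stalls in the running mixer for ten seconds
perf stat -e cycle_activity.stalls_mem_any -p "$(pidof nageru)" sleep 10
```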

I didn't get a whole lot of more L3 misses, though; they seemed to just get much slower (occasionally as much as 1200 cycles on average, which is about four times as much as normal). Unfortunately, you have zero insight in what the Intel GPU is doing to your memory bus, so it's not easy to know if it's messing up your memory or what. Turning off the actual encoding didn't help much, though; it was really the memcpy into them that hurt. And since it was into uncached (write-combined) memory, doing things like non-temporal writes didn't help at all. Neither did combining the two copies into one loop (read once, write twice). Or really anything else I could think of.

Eventually someone on ##intel-media came up with a solution; instead of copying directly into the memory-mapped buffers (vaDeriveImage), copy into a VAImage in system RAM, and then let the GPU do the copy. For whatever reason, this helped a lot; down from 2.1 to 1.4 cores! And 0.6 of those cores were non-VA-API related things. So more patch and happiness all around.

At this point, I wanted to see how far I could go, so I borrowed more capture inputs and set up my not-so-trusty SDI matrix to create an eight-headed capture monster:

Seemingly after a little more tuning of freelist sizes and such, it could sustain eight 1080p59.94 MJPEG inputs, or 480 frames per second if you wanted to—at around three cores again. Now the profile was starting to look pretty different, too, so there were more optimization opportunities, resulting in this pull request (helping ~15% of a core). Also, setting up the command buffers for the GPU copy seemingly takes ~10% of a core now, but I couldn't find a good way of improving it. Most of the time now is spent in the original memcpy to NVIDIA buffers, and I don't think I can do much better than that without getting the capture card to peer-to-peer DMA directly into the GPU buffers (which is a premium feature you'll need to buy Quadro cards for, it seems). In any case, my original six-camera case now is a walk in the park (leaving CPU for a high-quality x264 encode), which was the goal of the exercise to begin with.

So, lesson learned: Sometimes, you need to look at the absolutes, because the relative times (which is what you usually want) can fool you.

Jonathan Carter: Running for DPL

Mar, 19/03/2019 - 4:30pd

I am running for Debian Project Leader, my official platform is published on the Debian website (currently looks a bit weird, but a fix is pending publication), with a more readable version available on my website as well as a plain-text version.

Shortly after I finished writing the first version of my platform page, I discovered an old talk from Ian Murdock at Microsoft Research where he said something that resonated well with me, and I think also my platform. Paraphrasing him:

You don’t want design by committee, but you want to tap in to the wisdom of the crowd. Lay out a vision for solving the problem, but be flexible enough to understand that you don’t have all the answers yourself, that other people are equally if not more intelligent than you are, and collectively, the crowd is the most intelligent of all.

– paraphrasing of Ian Murdock during a talk at Microsoft Research.

I’m feeling oddly calm about all of this and I like it. It has been a good exercise taking the time to read what everyone has said across all the mediums and trying to piece together all the pertinent parts and form a workable set of ideas.

I’m also glad that we have multiple candidates, I hope that it will inspire some thought, ideas and actions around the topics that occupy our collective head space.

The campaign period remains open until 2019-04-06. During this period, you can ask me or any of the other candidates how they envision their term as DPL.

There are 5 candidates in total, here’s the full list of candidates (in order of self-nomination with links to platforms as available):

Laura Arjona Reina: A weekend for the Debian website and friends

Hën, 18/03/2019 - 9:05md

Last weekend (15-17 March 2019) some members of the Debian web team have met at my place in Madrid to advance work together in what we call a Debian Sprint. A report will be published in the following days, but I want to say thank you to everybody who made this meeting happen.

We have shared 2-3 very nice days, we have agreed on many topics and started to design a new homepage focused on newcomers (since a Debianite usually just goes to the subpage they need) and showing that Debian the operating system is made by a community of people. We are committed to simplifying the content and the structure of the website; we have fixed some bugs already, and talked about many other aspects. We shared some time getting to know each other, and I think all of us became motivated to continue working on the website (because there is still a lot of work to do!) and to make it easy for new contributors to get involved too.

For me, a person who rarely finds the way to attend Debian events, it was amazing to meet in person people with whom I have shared work and mails and IRC chats, in some cases for years, and to offer my place for the meeting. I very much enjoyed preparing all the logistics and I’m very happy that it went so well. Now I walk through my neighbourhood for my daily commute and every place is full of small memories of these days. Thanks, friends!

Dirk Eddelbuettel: RQuantLib 0.4.8: Small updates

Dje, 17/03/2019 - 8:12md

A new version 0.4.8 of RQuantLib reached CRAN and Debian. This release was triggered by a CRAN request for an update to the script which was easy enough (and which, as it happens, did not result in changes in the configure script produced). I also belatedly updated the internals of RQuantLib to follow an upstream change in QuantLib. We now seamlessly switch between shared_ptr<> from Boost and from C++11 – Luigi wrote about the how and why in an excellent blog post that is part of a larger (and also excellent) series of posts on QuantLib internals.

QuantLib is a very comprehensive free/open-source library for quantitative finance, and RQuantLib connects it to the R environment and language.

In other news, we finally have a macOS binary package on CRAN. After several rather frustrating months of inaction on the pull request put together to enable this, it finally happened last week. Yay. So CRAN currently has an 0.4.7 macOS binary and should get one based on this release shortly. With Windows restored with the 0.4.7 release, we are in the best shape we have been in years. Yay and three cheers for Open Source and open collaboration models!

The complete set of changes is listed below:

Changes in RQuantLib version 0.4.8 (2019-03-17)
  • Changes in RQuantLib code:

    • Source code supports Boost shared_ptr and C++11 shared_ptr via QuantLib::ext namespace like upstream.
  • Changes in RQuantLib build system:

    • The file no longer upsets R CMD check; the change does not actually change configure.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Emmanuel Kasper: Splitting a large mp3 / flac / ogg by detecting silence gaps

Dje, 17/03/2019 - 7:50md
If you have a large audio file coming for instance from a whole music album, the excellent mp3splt can do this for you:

mp3splt -o @n-@f -s my_long_file.mp3

will autodetect the silences, and create a list of tracks based on the large file.
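If the default detection splits too eagerly or misses quiet passages, the silence parameters can be tuned; a sketch (parameter names per the mp3splt manual; the exact values here are assumptions to adjust by ear):

```shell
# require at least 1 s below a -48 dB threshold before splitting
mp3splt -o @n-@f -s -p th=-48,min=1 my_long_file.mp3
```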

mp3splt is available in the Debian / Ubuntu archive.

Dirk Eddelbuettel: Rcpp 1.0.1: Updates

Dje, 17/03/2019 - 2:49md

Following up on the 10th anniversary and the 1.0.0 release, we are excited to share the news of the first update release, 1.0.1, of Rcpp. It arrived at CRAN overnight; Windows binaries have already been built and I will follow up shortly with the Debian binary.

We had four years of regular bi-monthly releases leading up to 1.0.0, and have now taken four months since the big 1.0.0 one. Maybe three (or even just two) releases a year will establish itself as a natural cadence. Time will tell.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1598 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 152 in BioConductor release 3.8. Per the (partial) logs of CRAN downloads, we currently average 921,000 downloads a month.

This release features a number of different pull requests, detailed below.

Changes in Rcpp version 1.0.1 (2019-03-17)
  • Changes in Rcpp API:

    • Subsetting is no longer limited by an integer range (William Nolan in #920 fixing #919).

    • Error messages from subsetting are now more informative (Qiang and Dirk).

    • Shelter increases count only on non-null objects (Dirk in #940 as suggested by Stepan Sindelar in #935).

    • AttributeProxy::set() and a few related setters get Shield<> to ensure rchk is happy (Romain in #947 fixing #946).

  • Changes in Rcpp Attributes:

    • A new plugin was added for C++20 (Dirk in #927).

    • Fixed an issue where 'stale' symbols could become registered in RcppExports.cpp, leading to linker errors and other related issues (Kevin in #939 fixing #733 and #934).

    • The wrapper macro gets an UNPROTECT to ensure rchk is happy (Romain in #949 fixing #948).

  • Changes in Rcpp Documentation:

    • Three small corrections were added in the 'Rcpp Quickref' vignette (Zhuoer Dong in #933 fixing #932).

    • The Rcpp-modules vignette now has documentation for .factory (Ralf Stubner in #938 fixing #937).

  • Changes in Rcpp Deployment:

    • Travis CI again reports to (Dirk and Ralf Stubner in #942 fixing #941).

Thanks to CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Jo Shields: Too many cores

Mër, 13/03/2019 - 3:48md
Arming yourself

ARM is important for us. It’s important for IOT scenarios, and it provides a reasonable proxy for phone platforms when it comes to developing runtime features.

We have big beefy ARM systems on-site at Microsoft labs, for building and testing Mono – previously 16 Softiron Overdrive 3000 systems with 8-core AMD Opteron A1170 CPUs, and our newest system in provisional production, 4 Huawei Taishan XR320 blades with 2×32-core HiSilicon Hi1616 CPUs.

The HiSilicon chips are, in our testing, a fair bit faster per-core than the AMD chips – a good 25-50%. Which raised the question: “why are our Raspbian builds so much slower?”

Blowing a raspberry

Raspbian is the de-facto main OS for Raspberry Pi. It’s basically Debian hard-float ARM, rebuilt with compiler flags better suited to the ARM1176JZF-S (more precisely, the ARMv6 architecture, whereas Debian targets ARMv7). The Raspberry Pi is hugely popular, and it is important for us to be able to offer packages optimized for use on Raspberry Pi.

But the Pi hardware is also slow and horrible to use for continuous integration (especially the SD-card storage, which can be burned through very quickly, causing maintenance headaches), so we do our Raspbian builds on our big beefy ARM64 rack-mount servers, in chroots. You can easily do this yourself – just grab the raspbian-archive-keyring package from the Raspbian archive, and pass the Raspbian mirror to debootstrap/pbuilder/cowbuilder instead of the Debian mirror.
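A minimal sketch of that setup (the suite name, chroot path and keyring path here are assumptions; the keyring file comes from the raspbian-archive-keyring package):

```shell
# bootstrap a Raspbian stretch chroot on an arm64 host (needs root)
sudo debootstrap --arch=armhf \
    --keyring=/usr/share/keyrings/raspbian-archive-keyring.gpg \
    stretch /srv/chroot/raspbian-stretch http://archive.raspbian.org/raspbian
```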

These builds have always been much slower than all our Debian/Ubuntu ARM builds (v5 soft float, v7 hard float, aarch64), but on the new Huawei machines, the difference became much more stark – the same commit, on the same server, took 1h17 to build .debs for Ubuntu 16.04 armhf, and 9h24 for Raspbian 9. On the old Softiron hardware, Raspbian builds would rarely exceed 6h (which is still outrageously slow, but less so). Why would the new servers be worse, but only for Raspbian? Something to do with handwavey optimizations in Raspbian? No, actually.

When is a superset not a superset

Common wisdom says ARM architecture versions add new instructions, but can still run code for older versions. This is, broadly, true. However, there are a few cases where deprecated instructions become missing instructions, and continuity demands those instructions be caught by the kernel, and emulated. Specifically, three things are missing in ARMv8 hardware – SWP (swap data between registers and memory), SETEND (set the endianness bit in the CPSR), and CP15 memory barriers (a feature of a long-gone control co-processor). You can turn these features on via abi.cp15_barrier, abi.setend, and abi.swp sysctl flags, whereupon the kernel fakes those instructions as required (rather than throwing SIGILL).
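Turning the emulation on looks like this (a sketch; the flags exist on arm64 kernels built with the relevant compat options, writing them needs root, and per the kernel documentation 1 enables emulation while 0 makes the instruction fault):

```shell
# let the kernel emulate the three removed/deprecated instruction groups
sudo sysctl abi.cp15_barrier=1
sudo sysctl abi.setend=1
sudo sysctl abi.swp=1
```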

CP15 memory barrier emulation is slow. My friend Vince Sanders, who helped with some of this analysis, suggested a cost of order 1000 cycles per emulated call. How many was I looking at? According to dmesg, about a million per second.

But it’s worse than that – CP15 memory barriers affect the whole system. Vince’s proposal was that the HiSilicon chips were performing so much worse than the AMD ones, because I had 64 cores not 8 – and that I could improve performance by running a VM, with only one core in it (so CP15 calls inside that environment would only affect the entire VM, not the rest of the computer).

Escape from the Pie Folk

I already had libvirtd running on all my ARM machines, from a previous fit of “hey one day this might be useful” – and as it happened, it was. I had to grab a qemu-efi-aarch64 package, containing a firmware, but otherwise I was easily able to connect to the system via virt-manager on my desktop, and get to work setting up a VM. virt-manager has vastly improved its support for non-x86 since I last used it (once upon a time it just wouldn’t boot systems without a graphics card), but I was easily able to boot an Ubuntu 18.04 arm64 install CD and interact with it over serial just as easily as via emulated GPU.

Because I’m an idiot, I then wasted my time making a Raspbian stock image bootable in this environment (Debian kernel, grub-efi-arm64, battling file-size constraints with the tiny /boot, etc) – stuff I would not repeat. Since in the end I just wanted to be as near to our “real” environment as possible, meaning using pbuilder, this simply wasn’t a needed step. The VM’s host OS didn’t need to be Raspbian.

Point is, though, I got my 1-core VM going, and fed a Mono source package to it.

Time taken? 3h40 – whereas the same commit on the 64-core host took over 9 hours. The “use a single core” hypothesis was more than proven.

Next steps

The gains here are obvious enough that I need to look at deploying the solution non-experimentally as soon as possible. The best approach to doing so is the bit I haven’t worked out yet. Raspbian workloads are probably at the pivot point between “I should find some amazing way to automate this” and “automation is a waste of time, it’s quicker to set it up by hand”.

Many thanks to the #debian-uk community for their curiosity and suggestions with this experiment!