Planet Debian

Weblate 3.1

Fri, 27/07/2018 - 3:30 PM

Weblate 3.1 has been released today. It contains mostly bug fixes, but there are some new features as well, for example support for Amazon Translate.

Full list of changes:

  • Upgrades from versions older than 3.0.1 are not supported.
  • Allow overriding default commit messages from settings.
  • Improved webhooks compatibility with self-hosted environments.
  • Added support for Amazon Translate.
  • Compatibility with Django 2.1.
  • Django system checks are now used to diagnose problems with the installation.
  • Removed support for the soon-to-be-shut-down Libravatar service.
  • New addon to mark unchanged translations as needing edit.
  • Added support for jumping to a specific location while translating.
  • Downloaded translations can now be customized.
  • Improved calculation of string similarity in translation memory matches.
  • Added support for signing Git commits with GnuPG.


Weblate 3.1.1 was released as well, fixing a test suite failure on some setups:

  • Fix test suite failure on some setups.

If you are upgrading from an older version, please follow our upgrading instructions.

You can find more information about Weblate on its website; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Michal Čihař Michal Čihař's Weblog, posts tagged by Debian

Debcamp activities 2018

Fri, 27/07/2018 - 3:19 PM
Emacs 2018-07-23
  • NMUed cdargs
  • NMUed silversearcher-ag-el
  • Uploaded the partially unbundled emacs-goodies-el to Debian unstable
  • packaged and uploaded graphviz-dot-mode
  • packaged and uploaded boxquote-el
  • uploaded apache-mode-el
  • Closed bugs in graphviz-dot-mode that were fixed by the new version.
  • filed lintian bug about empty source package fields
  • packaged and uploaded emacs-session
  • worked on sponsoring tabbar-el
  • uploaded dh-make-elpa
Notmuch 2018-07-2[23]

Wrote a patch series to fix a bug noticed by seanw while he was working on a package inspired by the policy workflow.

  • Finished reviewing a patch series from dkg about protected headers.
  • Helped seanw find the right config option for his bug report

  • Reviewed change proposal from aminb, suggested some issues to watch out for.

  • Add test for threading issue.
Nullmailer 2018-07-25
  • uploaded nullmailer backport
  • add "envelopefilter" feature to remotes in nullmailer-ssh
Perl 2018-07-23 2018-07-24
  • Forwarded #704527 to
  • Uploaded libemail-abstract-perl to fix Vcs-* urls
  • Updated debhelper compat and Standards-Version for libemail-thread-perl
  • Uploaded libemail-thread-perl
  • fixed RC bug #904727 (blocking for perl transition)
Policy and procedures 2018-07-22
  • seconded #459427
  • seconded #813471
  • seconded #628515
  • read and discussed draft of salvaging policy with Tobi
  • Discussed policy bug about short form License and License-Grant
  • worked with Tobi on salvaging proposal
David Bremner blog/tags/planet

My PhD topic

Fri, 27/07/2018 - 12:16 PM

I'm long overdue writing about what I'm doing for my PhD, so here goes. To stop this getting too long I haven't defined a lot of concepts so it might not make sense to folks without a Computer Science background. I'm happy to answer any questions in the comments.

I'm investigating whether there are advantages to building a distributed stream processing system using pure functional programming; specifically, whether the reasoning abilities one has about purely functional systems allow us to build efficient stream processing systems.

We have a proof-of-concept of a stream processing system built using Haskell called STRIoT (Stream Processing for IoT). Via STRIoT, a user can define a graph of stream processing operations from a set of 8 purely functional operators. The chosen operators have well-understood semantics, so we can apply strong reasoning to the user-defined stream graph. STRIoT supports partitioning a stream graph into separate sub-graphs which are distributed to separate nodes, interconnected via the Internet. The examples provided with STRIoT use Docker and Docker Compose for the distribution.

The area I am currently focussing on is whether and how STRIoT could rewrite the stream processing graph, preserving its functional behaviour but improving its performance against one or more non-functional requirements: for example making it perform faster, take up less memory, or satisfy a more complex requirement such as maximising battery life for a battery-operated component.

Pure FP gives us the ability to safely rewrite chunks of programs by applying equational reasoning. For example, we can always replace the left-hand side of this equation by the right-hand side, which is functionally equivalent, but more efficient in both time and space terms:

map f . map g = map (f . g)
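The rewrite can be checked on concrete data; here is a quick sketch in Python (STRIoT itself is Haskell, so this is only an illustration of the law, not the project's code):

```python
# Checking the fusion law `map f . map g = map (f . g)` on a sample
# stream, with Python lists standing in for streams.
f = lambda x: x + 1
g = lambda x: x * 2
xs = list(range(10))

two_passes = list(map(f, map(g, xs)))        # map f . map g
one_pass = list(map(lambda x: f(g(x)), xs))  # map (f . g)

assert two_passes == one_pass
print(one_pass[:3])  # [1, 3, 5]
```

The fused form walks the stream once instead of twice, which is exactly the time and space win described here.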

However, we need to reason about potentially conflicting requirements. We might sometimes increase network latency or overall processing time in order to reduce the power usage of nodes, such as smart watches or battery-operated sensors deployed in difficult-to-reach locations. This has implications on the design of the Optimizer, which I am exploring.

jmtd Jonathan Dowland's Weblog

Report from DebCamp18

Fri, 27/07/2018 - 12:14 PM

This was a nice DebCamp! Here is what I've been up to.

AppArmor
  • Tried to give Thunderbird a custom reportbug script that includes the status of the AppArmor profile in bug reports, in order to ease the Thunderbird maintainers' task when triaging newly reported bugs. Sadly, computing this status requires root credentials so this won't work. Instead, explained in README.apparmor how to get this information, so that the Thunderbird maintainers can point users there when they have a doubt.
Perl team
  • Triaged and investigated a few packages that don't build reproducibly.
  • Identified a few new candidates for removal from sid.
  • Removing packages that depend on obsolete libraries from the GNOME 2 area:
    • updated status of this process that I've started at DebCamp17 last year ⇒ filed a bunch of removal bugs;
    • filed RC bugs to prevent a number of other packages from being shipped in Buster.
intrigeri intrigeri's blog

DebConf18 invites you to Debian Open Day at National Chiao Tung University, Microelectronics and Information Research Center (NCTU MIRC), in Hsinchu

Fri, 27/07/2018 - 12:00 PM

DebConf, the annual conference for Debian contributors and users interested in improving the Debian operating system, will be held at National Chiao Tung University's Microelectronics and Information Research Center (NCTU MIRC) in Hsinchu, Taiwan, from July 29th to August 5th, 2018. The conference is preceded by DebCamp, July 21st to July 27th, and the DebConf18 Open Day on July 28th.

Debian is an operating system consisting entirely of free and open source software, and is known for its adherence to the Unix and Free Software philosophies and for its extensiveness. Thousands of volunteers from all over the world work together to create and maintain Debian software, and more than 400 are expected to attend DebConf18 to meet in person and work together more closely.

The conference features presentations and workshops, and video streams are made available in real-time and archived.

The DebConf18 Open Day, Saturday, July 28, is open to the public with events of interest to a wide audience.

The detailed schedule of the Open Day's events includes, among others:

  • Questions and Answers Session with Minister Audrey Tang,
  • Debian Meets Smart City Applications with SZ Lin
  • a Debian Packaging Workshop,
  • panel discussion: Story of Debian contributors around the world,
  • sessions in English or Chinese about different aspects of the Debian project and community, and other free software projects like LibreOffice, Clonezilla and DRBL, LXDE/LXQt desktops, EzGo...

Everyone is welcome to attend, attendance is free, and it is a great opportunity for interested users to meet the Debian community.

The full schedule for the Open Day's events and the rest of the conference, as well as the video streaming, is available on the DebConf18 website.

DebConf is committed to a safe and welcoming environment for all participants. See the DebConf Code of Conduct and the Debian Code of Conduct for more details.

Debian thanks the numerous sponsors for their commitment to DebConf18, particularly its Platinum Sponsor Hewlett Packard Enterprise, the Bureau of Foreign Trade, Ministry of Economic Affairs via the MEET TAIWAN program, and its venue sponsors, the National Chiao Tung University 國立交通大學 and the National Center for High-performance Computing 國家高速網路與計算中心.

For media contacts, please contact DebConf organization: 林上智 (SZ Lin), Cell: 0911-162297

Laura Arjona Reina, Héctor Orón Martínez Bits from Debian

Project cleanup

Fri, 27/07/2018 - 11:45 AM

For the past couple of days I've gone back over my golang projects, and updated each of them to have zero golint/govet warnings.

Nothing significant has changed, but it's nice to be cleaner.

I did publish a new project, which is a webmail client implemented in golang. Using it you can view the contents of a remote IMAP server in your browser:

  • View folders.
  • View messages.
  • View attachments
  • etc.

The (huge) omission is the ability to reply to messages, compose new mails, or forward/delete messages. Still as a "read only webmail" it does the job.

Not a bad hack, but I do have the problem that my own mailserver presents ~/Maildir over IMAP and I have ~1000 folders. Retrieving that list of folders is near-instant - but retrieving that list of folders and the unread-mail count of each folder takes over a minute.

For the moment I've just not handled folders-with-new-mail specially, but it is a glaring usability hole. There are solutions; the two most obvious:

  • Use an AJAX call to get/update the unread-counts in the background.
    • Causes regressions as soon as you navigate to a new page though.
  • Have some kind of open proxy-process to maintain state and avoid accessing IMAP directly.
    • That complicates the design, but would allow "instant" fetches of "stuff".
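The second option, a stateful middle layer, can be sketched in a few lines of Python (the actual project is written in golang, and every name here is hypothetical; this only illustrates the idea):

```python
import threading
import time

# Sketch of the "proxy process that maintains state" idea: a cache of
# per-folder unread counts refreshed by a background worker, so page
# renders never block on slow per-folder IMAP calls.
class UnreadCache:
    def __init__(self, folders, fetch_unread):
        self.counts = {f: None for f in folders}  # None = not yet known
        self._fetch = fetch_unread
        self._lock = threading.Lock()

    def refresh(self):
        for folder in list(self.counts):
            n = self._fetch(folder)               # slow call, off the render path
            with self._lock:
                self.counts[folder] = n

    def snapshot(self):
        with self._lock:
            return dict(self.counts)              # instant, for page renders

def fake_fetch(folder):
    time.sleep(0.01)                              # stand-in for IMAP latency
    return len(folder)

cache = UnreadCache(["INBOX", "debian-devel"], fake_fetch)
worker = threading.Thread(target=cache.refresh)
worker.start()
worker.join()
print(cache.snapshot())
```

Page renders read `snapshot()` and show "unknown" for folders the worker has not reached yet, instead of waiting a minute for all ~1000 folders.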

Anyway check it out if you like. Bug reports welcome.

Steve Kemp Steve Kemp's Blog

Sixth GSoC Report

Fri, 27/07/2018 - 7:28 AM

After finishing the evaluations of the SSO solutions, formorer asked me to look into integrating one of them into the existing Debian SSO infrastructure. Nacho is a Django application that basically provides a way of creating and managing client certificates. It does not do authentication itself, but uses the REMOTE_USER authentication source of Django. I tested integration with lemonldap-ng, and after some trouble setting up the clone on my infrastructure (thanks to Enrico for pointing me in the right direction) the authentication using Apache's authnz modules worked. To integrate lemonldap-ng I only had to add a ProxyPass and a ProxyPassReverse directive in the Apache config. I tested the setup using GitLab and it worked.
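The proxy part of that setup amounts to just the two directives mentioned above; a minimal sketch, with a purely illustrative path and backend address (the real configuration is not shown in the post):

```apache
# Forward requests for the application to the backend and rewrite
# response headers on the way back. Path, host and port are hypothetical.
ProxyPass        /nacho/ http://localhost:8000/
ProxyPassReverse /nacho/ http://localhost:8000/
```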

I’ve also added some additional features to nacho: on the one hand, I’ve added a management command that removes stale temporary accounts that have never been activated. The idea is to run that command at regular intervals via cron (or systemd timers). To implement that feature, I basically followed the howto for writing custom django-admin commands from the Django manual. Based on that knowledge I then implemented two other commands that provide backup and restore functionality. The backup command prints the contents of the LDAP database on stdout in LDIF format. The restore command expects LDIF on stdin and writes those values to the LDAP database. I also did some cleanup in the codebase and documented the test cases.

The third big project I looked into was implementing OAuth2 authentication for one of the existing websites that use Debian SSO. I chose the new maintainer interface, because it is based on Django. I spent a lot of time looking for existing modules for Django that implement OAuth2 authentication and tested some of them. There is for example django-allauth, which provides authentication against a lot of authentication providers. I did manage to create an additional authentication provider for Keycloak, but it seemed a bit overengineered to use such a big application for only one provider. So I sat down and wrote a small Django app that does OAuth2 authentication. As soon as that worked with a clean Django installation, it took just some small adjustments to use it for the new maintainer interface. You can find the branch on Salsa.

bisco Gsoc18 on

Local qemu/kvm virtual machines, 2018

Fri, 27/07/2018 - 7:00 AM

For work I run a personal and a work VM on my laptop. When I was at VMware I dogfooded internal builds of Workstation, which worked well, but it was always a challenge to have its additions consistently building against the latest kernels. About five and a half years ago, the only practical alternative option was VirtualBox. IIRC SPICE maybe didn't even exist or was very early, and while VNC is OK to fiddle with something, it is completely impractical for primary daily use.

VirtualBox is fine, but there is the promised land of all the great features of qemu/kvm and many recent improvements in 3D integration always calling. I'm trying all this on my Fedora 28 host, with a Fedora 28 guest (which has been in-place upgraded since Fedora 19), so everything is pretty recent. Periodically I try this conversion again, but, spoiler alert, have not yet managed to get things quite right.

As I happened to close an IRC window, somehow my client seemed to crash X11. How odd ... so I thought, everything has just disappeared anyway; I might as well try switching again.

Image conversion has become much easier. My primary VM has a number of snapshots, so I used the VirtualBox GUI to clone the VM and followed the prompts to create the clone with squashed snapshots. Then simply convert the VDI to a RAW image with

$ qemu-img convert -p -f vdi -O raw image.vdi image.raw

Note if you forget the progress meter, send the pid a SIGUSR1 to get it to spit out progress.

virt-manager has come a long way too. Creating a new VM was trivial. I wanted to make sure I was using all the latest SPICE, GL, etc. stuff. Here I hit some problems with what seemed to be permission denials on DRM devices before even getting the machine started. Something suggested using libvirt in session mode, with the qemu:///session URL, which seemed more like what I want anyway (a VM for only my user). I tried that, put the converted raw image in my home directory, and the VM would boot. Yay!

It was a bit much to expect it to work straight away; while GRUB did start, it couldn't find the root disks. In hindsight, you should probably generate a non-host specific initramfs before converting the disk, so that it has a larger selection of drivers to find the boot devices (especially the modern virtio drivers). On Fedora that would be something like

sudo dracut --no-hostonly --regenerate-all -f

As it turned out, I "simply" attached a live-cd and booted into that, then chrooted into my old VM and regenerated the initramfs for the latest kernel manually. After this the system could find the LVM volumes in the image and would boot.

After a fiddly start, I was hopeful. The guest kernel dmesg DRM sections showed everything was looking good for 3D support, and glxinfo showed all the virtio-gpu stuff looking correct. However, I could not get what I hoped was trivial automatic window resizing happening no matter what. After a bunch of searching, ensuring my agents were running correctly, etc., it turns out this has to be implemented by the window manager now, and it is not supported by my preferred XFCE. Note you can do this manually with xrandr --output Virtual-1 --auto to get it to resize, but that's rather annoying.

I thought that it is 2018 and I could live with GNOME, so I installed that. Then I tried to ping something, and got another SELinux denial (on the host) from qemu-system-x86 creating an icmp_socket. I am guessing this has to do with the interaction between libvirt session mode and the usermode networking device (I filed a bug). I figured I'd limp along without ICMP and look into details later...

Finally when I moved the window to my portrait-mode external monitor, the SPICE window expanded but the internal VM resolution would not expand to the full height. It looked like it was taking the height from the portrait-orientation width.

Unfortunately, forced swapping of environments and still having two/three non-trivial bugs to investigate exceeded my practical time to fiddle around with all this. I'll stick with VirtualBox for a little longer; 2020 might be the year!

Ian Wienand Technovelty

Starting your first Python project

Thu, 26/07/2018 - 6:25 PM

There's a gap between learning the syntax of the Python programming language and being able to build a project from scratch. When you finish reading your first tutorial or book about Python, you're good to go for writing a Fibonacci sequence calculator, but that does not help you start your actual project.

There are a few questions that pop up in your mind, and that's normal. Let's take a stab at those!

Which Python version should I use?

It's not a secret that Python has several versions that are supported at the same time. Each minor version of the interpreter gets bugfix support for 18 months and security support for 5 years. For example, Python 3.7, released on 27th June 2018, will be supported until Python 3.8 is released, around October 2019 (15 months later). Around December 2019, the last bugfix release of Python 3.7 will occur, and everyone is expected to switch to Python 3.8.

Current Python 3.7/3.8 release schedule

That's important to be aware of as the version of the interpreter will be entirely part of your software lifecycle.

On top of that, we should take into consideration the Python 2 versus Python 3 question. That still might be an open question for people working with (very) old platforms.

In the end, the question of which version of Python one should use is well worth asking.

Here are some short answers:

  • Versions 2.6 and older are really obsolete by now, so you don't have to worry about supporting them at all. If you intend to support these older versions anyway, be warned that you'll have an even harder time ensuring that your program supports Python 3.x as well. Though you might still run into Python 2.6 on some older systems; if that's the case, sorry for you!
  • Version 2.7 is and will remain the last version of Python 2.x. I don't think there is a system where Python 3 is not available one way or the other nowadays. So unless you're doing archaeology once again, forget it. Python 2.7 will not be supported after the year 2020, so the last thing you want to do is build new software based on it.
  • Version 3.7 is the most recent version of the Python 3 branch as of this writing, and that's the one you should target. Most recent operating systems ship at least 3.6, so in the case where you'd target those, you can make sure your application also works with 3.7.
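One common way to encode that decision in your program is a fail-fast interpreter check at startup; a minimal sketch (the message and cutoff are of course yours to choose):

```python
import sys

# Refuse to run on interpreters older than the ones we claim to support.
if sys.version_info < (3, 6):
    raise SystemExit("this application requires Python 3.6 or newer")

print("running on Python %d.%d" % sys.version_info[:2])
```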
Project Layout

Starting a new project is always a puzzle. You never know how to organize your files. However, once you have a proper understanding of the best practice out there, it's pretty simple.

First, your project structure should be fairly basic. Use packages and hierarchy wisely: a deep hierarchy can be a nightmare to navigate, while a flat hierarchy tends to become bloated.

Then, avoid making a few common mistakes. Don't leave unit tests outside the package directory. These tests should be included in a sub-package of your software so that:

  • They don't get automatically installed as a tests top-level module by setuptools (or some other packaging library) by accident.
  • They can be installed and eventually used by other packages to build their unit tests.

The following diagram illustrates what a standard file hierarchy should look like:

[Figure: a Python project's files and directories hierarchy]

setup.py is the standard name for the Python installation script, along with its companion setup.cfg, which should contain the installation script configuration. When run, setup.py installs your package using the Python distribution utilities.

You can also provide valuable information to users in README.rst (or README.txt, or whatever filename suits your fancy). Finally, the docs directory should contain the package's documentation in reStructuredText format, that will be consumed by Sphinx.

Packages often have to provide extra data, such as images, shell scripts, and so forth. Unfortunately, there's no universally accepted standard for where these files should be stored. Just put them wherever makes the most sense for your project: depending on their functions, for example, Web application templates could go in a templates directory in your package root directory.

The following top-level directories also frequently appear:

  • etc for sample configuration files.
  • tools for shell scripts or related tools.
  • bin for binary scripts you've written that will be installed by setup.py.

There's another design issue that I often encounter. When creating files or modules, some developers create them based on the type of code they will store. For example, they would create functions.py or exceptions.py files. This is a terrible approach. It doesn't help any developer when navigating the code. The code organization doesn't benefit from this, and it forces readers to jump between files for no good reason. There are a few exceptions for libraries in some instances, because they do expose a complete API for consumers. However, other than that, think twice before doing it in your application.

Organize your code based on features, not based on types.

Creating a module directory with just an __init__.py file in it is also a bad idea. For example, don't create a directory named hooks with a single file named hooks/__init__.py in it, where hooks.py would have been enough instead. If you create a directory, it should contain several other Python files that belong to the category the directory represents.

Also be very careful about the code that you put in __init__.py files: it is going to be called and executed the first time that any of the modules contained in the directory is loaded. This can have unwanted side effects. __init__.py files should be empty most of the time, unless you know what you're doing.
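That load-time behaviour is easy to demonstrate: importing any module from a package runs the package's __init__.py first. A self-contained sketch, building a throwaway package in a temporary directory (all names here are made up for the demonstration):

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Build a throwaway package whose __init__.py has a visible side effect.
pkg_root = Path(tempfile.mkdtemp())
pkg = pkg_root / "demo_pkg"
pkg.mkdir()
(pkg / "__init__.py").write_text("print('__init__.py ran')\nLOADED = True\n")
(pkg / "util.py").write_text("def answer():\n    return 42\n")

sys.path.insert(0, str(pkg_root))
util = importlib.import_module("demo_pkg.util")  # runs demo_pkg/__init__.py first

print(util.answer())  # 42
```

The print from __init__.py fires before util is even available, which is exactly why side effects there are dangerous.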

Version Numbering

Software versions need to be stamped so that users know which one is more recent than another. As every piece of code evolves, it's a requirement for every project to be able to organize its timeline.

There is an infinite number of ways to organize your version numbers, but PEP 440 introduces a version format that every Python package, and ideally every application, should follow. This way, programs and packages will be able to quickly and reliably identify which versions of your package they require.

PEP 440 defines the following regular expression format for version numbering:

N[.N]+[{a|b|c|rc}N][.postN][.devN]
This allows for standard numbering like 1.2 or 1.2.3.

However, please do note that:

  • 1.2 is equivalent to 1.2.0, 1.3.4 is equivalent to, and so forth.
  • Versions matching N[.N]+ are considered final releases.
  • Date-based versions such as 2013.06.22 are considered invalid. Automated tools designed to detect PEP 440-format version numbers will (or should) raise an error if they detect a version number greater than or equal to 1980.

Final components can also use the following format:

  • N[.N]+aN (e.g. 1.2a1) denotes an alpha release, a version that might be unstable and missing features.
  • N[.N]+bN (e.g. 2.3.1b2) denotes a beta release, a version that might be feature-complete but still buggy.
  • N[.N]+cN or N[.N]+rcN (e.g. 0.4rc1) denotes a (release) candidate, a version that might be released as the final product unless significant bugs emerge. While the rc and c suffixes have the same meaning, if both are used, rc releases are considered to be newer than c releases.

These suffixes can also be used:

  • .postN (e.g. 1.4.post2) indicates a post-release. These are typically used to address minor errors in the publication process (e.g. mistakes in release notes). You shouldn't use .postN when releasing a bugfix version; instead, you should increment the minor version number.
  • .devN (e.g. 2.3.4.dev3) indicates a developmental release. This suffix is discouraged because it is harder for humans to parse. It indicates a prerelease of the version that it qualifies: e.g. 2.3.4.dev3 indicates the third developmental version of the 2.3.4 release, before any alpha, beta, candidate or final release.

This scheme should be sufficient for most common use cases.
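To make the shapes above concrete, here is a small validator covering only the forms discussed in this section; it is a sketch, not the full PEP 440 grammar (it ignores epochs, local version labels, and normalization rules):

```python
import re

# Release segment N[.N]+, optional pre-release (aN/bN/cN/rcN),
# optional .postN, optional .devN. Deliberately simplified.
VERSION_RE = re.compile(
    r"^\d+(\.\d+)*"       # N[.N]+
    r"((a|b|c|rc)\d+)?"   # alpha/beta/candidate
    r"(\.post\d+)?"       # post-release
    r"(\.dev\d+)?$"       # developmental release
)

def is_valid(version: str) -> bool:
    return VERSION_RE.match(version) is not None

for v in ("1.2", "1.2.3", "1.2a1", "0.4rc1", "1.4.post2", "2.3.4.dev3"):
    assert is_valid(v), v
assert not is_valid("1.0.0-alpha+001")  # SemVer prerelease, not PEP 440
```

For production code you would instead use a full PEP 440 implementation rather than a hand-rolled pattern like this one.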

You might have heard of Semantic Versioning, which provides its own guidelines for version numbering. This specification partially overlaps with PEP 440, but unfortunately, they're not entirely compatible. For example, Semantic Versioning's recommendation for prerelease versioning uses a scheme such as 1.0.0-alpha+001 that is not compliant with PEP 440.

Many DVCS platforms, such as Git and Mercurial, can generate version numbers using an identifying hash (for Git, refer to git describe). Unfortunately, this system isn't compatible with the scheme defined by PEP 440: for one thing, identifying hashes aren't orderable.

Those are only some of the first questions you might have. If you have any other that you would like me to answer, feel free to write a comment below. Same goes if you have any other pieces of advice you'd like to share!

Julien Danjou Julien Danjou


Thu, 26/07/2018 - 2:36 PM

This work has been brought to you by the wonderful DebCamp.

I needed to reproduce a build issue on an i386 architecture, so I started going through the instructions for finding a porterbox and setting up a chroot.

And then I thought: this is long and boring. A program could do that.

So I created a program to do that:

$ debug-on-porterbox --help
usage: debug-on-porterbox [-h] [--verbose] [--debug] [--cleanup] [--git]
                          [--reuse] [--dist DIST] [--host HOST]
                          arch [package]

set up a build environment to debug a package on a porterbox

positional arguments:
  arch           architecture name
  package        package name

optional arguments:
  -h, --help     show this help message and exit
  --verbose, -v  verbose output
  --debug        debug output
  --cleanup      cleanup a previous build, removing porterbox data and git remotes
  --git          setup a git clone of the current branch
  --reuse        reuse an existing session
  --dist DIST    distribution (default: sid)
  --host HOST    hostname to use (autodetected by default)

On a source directory, you can run debug-on-porterbox i386 and it will:

  • find out the package name from debian/control (but if you provide it explicitly, you do not need to be in the source directory)
  • look up Debian's LDAP to find a porterbox for that architecture
  • log into the machine via ssh
  • create a work directory
  • create the chroot, update it, install build dependencies
  • get the source with apt-get source
  • alternatively, if using --git and running inside a git repo, create a git repo on the porterbox, push the local git branch to it, and add a remote to push/pull to/from it

The only thing left for you to do is to log into the machine debug-on-porterbox tells you, run the command it prints to enter the chroot, and debug away.

At the end you can clean everything up, including the remote chroot and the git remote in the local repo, with: debug-on-porterbox [--git] --cleanup i386

The code is on Salsa: have fun!

Enrico Zini Enrico Zini: pdo

at daemon 3.1.23, with some fixes and now a signature

Thu, 26/07/2018 - 2:28 PM

This is the public announcement of release 3.1.23.

I have made some non-public releases of the at daemon for internal development of the Debian package since 3.1.20. This release fixes some reported bugs. You can download the tar from here and the signature from here.

The Changelog:

at 3.1.21 (2018-07-23), Jose M Calhariz:
  • 832368-Using_of_the_meaningless_fcntl: fix call of fcntl by replacing (long) 1 with FD_CLOEXEC
  • 892819-at__improvements_to_atd.service: improve atd.service, see bug report 892819
  • 885891-at__stale_batchjobs_after_reboot: remove stale at jobs after a boot
  • 897669-897670-Some_fixes_in_the_manuals: fix some warnings in manpages at.1 and atd.8
  • 883730-Remove_invalid_email_from_man_page: remove invalid email from man pages

at 3.1.22 (2018-07-24), Jose M Calhariz:
  • Draft of a release script

at 3.1.23 (2018-07-24), Jose M Calhariz:
  • Finalised script to release software

Jose M. Calhariz One Suggestion by ... Calhariz

A non-official backport of amanda 3.5.1 for Debian stretch

Thu, 26/07/2018 - 11:45 AM

I have checked the conditions for an official backport of amanda in Debian, but I think there is not enough demand. I have made a non-official backport of amanda 3.5.1 for Debian stretch amd64 because of its new features and bug fixes, and to support my users. You can download the tar file with the debs from here.

Jose M. Calhariz One Suggestion by ... Calhariz

Add a PGP subkey to Yubikey 4

Thu, 26/07/2018 - 10:03 AM

I have a Yubikey from work and wanted to start signing git commits without copying my Debian PGP key to the work computer. No, I did not want to create a second-class PGP key just for work. Here are the instructions, so someone else can do the same.

On the master computer

  • Create a second home dir for gpg

Because of bug #904596 I recommend moving your GPG home directory out of the way and working on a copy of it:

mv ~/.gnupg ~/.gnupg.ref
cp -r ~/.gnupg.ref ~/.gnupg
  • Create a subkey just for signing.

Create a subkey and take note of its id.

gpg --edit-key <KEY ID>
addkey
list
save
  • Move into the Yubikey.

Select the new subkey and move it into the Yubikey.

gpg --edit-key <KEY ID>
key <SUB KEY ID>
keytocard
save
  • Publish the updated PGP Key
gpg --keyserver http://... --send-keys <KEY ID>
  • Store the public URL of the key on Yubikey
gpg --card-edit
url http://...
quit
  • Backup both GPG home dir

On your master computer you need to keep using the old GPG home dir, but you should store both for the future.

mv ~/.gnupg ~/.gnupg.yubikey4
mv ~/.gnupg.ref ~/.gnupg
cd ~
tar cf gnupg-homedir.backup.tar .gnupg .gnupg.yubikey4
  • Test
gpg --armor --sign

Should work without asking for the Yubikey.

  • Wait for the Key server to update your public key with the new subkey.

On a new computer

  • Plug the Yubikey
  • Through Yubikey fetch the public PGP Key
gpg --card-edit
fetch
quit
  • Test
gpg --armor --sign

Should ask for the Yubikey.

Jose M. Calhariz One Suggestion by ... Calhariz

Inception: VM inside Docker inside KVM – Testing Debian VM installation builds on Travis CI

Mër, 25/07/2018 - 5:31md

Back in 2006 I started to write a tool called grml-debootstrap. grml-debootstrap is a wrapper around debootstrap for installing Debian systems. Using grml-debootstrap, it’s possible to install Debian systems from the command line, without having to boot a Debian installer ISO. This is very handy when you’re running a live system (like Grml or Tails) and want to install Debian. It’s as easy as running:

% sudo grml-debootstrap --target /dev/sda1 --grub /dev/sda

I’m aware that grml-debootstrap is used in Continuous Integration/Delivery environments, installing Debian systems several hundred or even thousands of times each month. Over time grml-debootstrap gained many new features. For example, since 2011 grml-debootstrap supports installation into VM images:

% sudo grml-debootstrap --vmfile --vmsize 3G --target debian.img

In 2016 we also added (U)EFI support (the target device in this example is a logical device on LVM):

% sudo grml-debootstrap --grub /dev/sdb --target /dev/mapper/debian--server-rootfs --efi /dev/sdb1

As you might imagine, every new feature we add also increases the risk of breaking something™ existing. Back in 2014, I contributed a setup using Packer to build automated machine images, using grml-debootstrap. That allowed me to generate Vagrant boxes with VirtualBox automation via Packer, serving as a base for reproducing customer environments, but also ensuring that some base features of grml-debootstrap work as intended (including backwards compatibility until Debian 5.0 AKA lenny).

The problem with this Packer setup, though, is that contributors don’t necessarily have Packer and VirtualBox (readily) available. They also might not have the proper network speed/bandwidth to run extensive tests. To get rid of those (local) dependencies and make contributing towards grml-debootstrap more accessible (we’re currently working on e.g. systemd-networkd integration), I invested some time at DebCamp at DebConf18.

I decided to give Travis CI a spin. Travis CI is a well known Continuous Integration service in the open source community. Among other things, it provides Ubuntu Linux environments, either container-based or as full virtual machines, which gives us what we need. Working on the Travis CI integration, I started with enabling ShellCheck (which is also available as a Debian package, BTW!), serving as a lint tool for shell scripts. All of that takes place in an isolated docker container.

To be able to execute grml-debootstrap, we need to install the latest version of grml-debootstrap from Git. That’s where a hosted service for projects that keep their Debian packaging on GitHub helps us: it uses the Travis CI continuous integration platform to test builds on every update. The result is a Debian package (grml-debootstrap_*.deb) which we can use for installation, ensuring that we run exactly what we will ship to users (including scripts, configuration + dependencies). This also takes place in an isolated docker instance.

Then it’s time to start a Debian/stretch docker container and install the resulting grml-debootstrap*.deb file there. Inside it, we execute grml-debootstrap with its VM installation feature, to install Debian into a qemu.img file. Via qemu-system-x86_64 we can boot this VM file. Finally, goss takes care of testing and validation of the resulting system.
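To give a rough idea of that last validation step, a goss file for such a freshly installed VM could look like the following. This is only an illustrative sketch; the checks shown here are hypothetical examples, not the actual grml-debootstrap test suite:

```yaml
# goss.yaml — hypothetical validation of the booted Debian VM
file:
  /etc/debian_version:
    exists: true
service:
  ssh:
    enabled: true
    running: true
user:
  root:
    exists: true
    uid: 0
```

Running `goss validate` against the booted system then reports each check as passed or failed.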

The overall architecture looks like:

So Travis CI is booting a KVM instance on GCE (Google Compute Engine) for us, inside of which we start three docker instances:

  1. shellcheck (koalaman/shellcheck:stable)
  2. (debian:stretch + debian:unstable, controlled via TRAVIS_DEBIAN_DISTRIBUTION)
  3. VM image installation + validation (debian:stretch)

Inside the debian/stretch docker environment, we install and execute grml-debootstrap. Finally we’re booting it via Qemu/KVM and running tests against it.

An example of such a Travis CI run is available at

Travis CI builds heavily depend on a bunch of external resources, which might result in false negatives in builds; this is something we might improve by further integrating with and using our own infrastructure (Jenkins, GitLab, etc.). Anyway, it serves as a great base to make contributions to and refactoring of grml-debootstrap easier.

Thanks to Christian Hofstaedtler + Darshaka Pathirana for proof-reading this.

mika Debian – mikas blog

Debian/TeX Live 2018.20180724-1

Mar, 24/07/2018 - 12:13md

After more than two months finally an update to TeX Live in Debian again. I was a bit distracted by work, private life, travels, and above all the update to texdoc which required a few changes. Anyway, here is the new shipload, should be arriving at your computer in due time.

Having skipped more than two months there is a huge bunch of updates, and it is hard to pick some interesting ones. As usual, the work by Michael Sharpe, this time the extension of the Stix2 fonts in the stickstoo package, is greatly admired by me. I never understood where he finds all the time.

On the update side I am happy to see that Takuto Asakura has taken over responsibility for texdoc, has already added fuzzy search and better command line parsing, and I am sure we will see great improvements over time in this very important puzzle piece for finding relevant documentation.

With this I am diving into the preparations for DebConf18 in Taiwan, where I will report, among other things, on the status of typesetting CJK languages with TeX in Debian. Looking forward to meeting a lot of interesting people in Taiwan.

Please enjoy.

New packages

axessibility, beamertheme-focus, biblatex-socialscienceshuberlin, cellprops, cqubeamer, ecothesis, endnotesj, erw-l3, etsvthor, gatherenum, guitartabs, hyperbar, jnuexam, kanaparser, lualatex-truncate, luavlna, modulus, onedown, padcount, pdfoverlay, pdfpc-movie, penrose, postage, powerdot-tuliplab, pst-contourplot, statistics, stickstoo, tagpdf, texdate, tikz-nef, tikzmarmots, tlc-article, topletter, xbmks.

Updated packages

academicons, achemso, acmart, alegreya, animate, apxproof, arabluatex, arara, babel, babel-french, babel-ukrainian, beebe, bezierplot, bib2gls, biblatex-archaeology, biblatex-caspervector, biblatex-ext, biblatex-gb7714-2015, biblatex-sbl, bibleref, bidi, bundledoc, bxjscls, cabin, caption, carlisle, cascade, catechis, classicthesis, clipboard, cochineal, colophon, colortbl, contracard, cooking-units, crossrefware, ctex, dashundergaps, datepicker-pro, datetime2, datetime2-galician, datetime2-irish, datetime2-latin, datetime2-lsorbian, dccpaper, doclicense, docsurvey, dozenal, dynkin-diagrams, elsarticle, esami, eso-pic, etoc, europecv, exercisebank, factura, fduthesis, fetchbibpes, filecontents, fira, fontawesome, fontawesome5, gbt7714, gentombow, geometry, getmap, glossaries, glossaries-extra, handin, ipaex-type1, isodoc, japanese-otf-uptex, japanese-otf-uptex-nonfree, jlreq, jsclasses, ketcindy, knowledge, komacv-rg, l3build, l3experimental, l3kernel, l3packages, latex, latex-make, latex-via-exemplos, latex2e-help-texinfo, latex2e-help-texinfo-spanish, latex2man, latexindent, latexmk, libertinus-otf, libertinust1math, lm, lni, lstbayes, luatexja, luaxml, lwarp, ly1, lyluatex, make4ht, marginnote, mcf2graph, media9, mhchem, minitoc, musicography, musixtex, na-position, ncctools, newtx, newtxsf, nicematrix, ocgx2, optidef, paracol, pgfornament-han, pkuthss, plantuml, platex, pst-ode, pstricks, ptex, ptex2pdf, pxjahyper, regexpatch, register, reledmac, roboto, scientific-thesis-cover, scsnowman, semantic-markup, serbian-lig, siunitx, stix, structmech, struktex, synctex, t2, tex-gyre, tex4ebook, tex4ht, texdoc, texdoctk, texlive-de, texlive-en, thesis-gwu, thucoursework, thuthesis, tikz-relay, tikzducks, tikzsymbols, todonotes, tools, toptesi, tracklang, turabian-formatting, uantwerpendocs, unicode-data, updmap-map, uptex, venndiagram, visualtikz, witharrows, xassoccnt, xcharter, xepersian, xint, xltabular, xsavebox, xurl, yathesis, zxjafont, zxjatype.

Norbert Preining There and back again

libhandy 0.0.2

Mar, 24/07/2018 - 11:32pd

Last month we tagged the first release of libhandy, a GTK+ library to ease the development of GNOME applications for mobile devices and small screens. Two of the contained widgets, HdyLeaflet and HdyColumn, are containers to address the specific size constraints of phones (video by Adrien). The rest are special purpose widgets, needed more than once on mobile devices, e.g. a Keypad (video).

This time around for the v0.0.2 release we mostly have bugfixes. From the Debian package's changelog:

[ Adrien Plazas ]
  * dialer: Make the grid visible and forbid show all.
  * example: Drop usage of show_all()
  * dialer: Add column-spacing and row-spacing props.
  * example: Change the grid's spacing and minimum size request.
  * flatpak: Allow access to the dconf config dir.
  * Replace phone-dial-symbolic by call-start-symbolic.
  * column: Fix height for width request.

[ Guido Günther ]
  * Use instead of
  * Add AUTHORS file
  * gitlab-ci: Build on Debian buster using provided build-deps.
  * arrows: test object construction
  * Multiple gtk-doc fixes
  * docs: Abort on warnings.
  * DialerButton: free letters

The Debian package was uploaded to Debian's NEW queue.

Guido Günther Colors of Noise - Entries tagged planetdebian

Rcpp 0.12.18: Another batch of updates

Mar, 24/07/2018 - 2:30pd

Another bi-monthly update in the 0.12.* series of Rcpp landed on CRAN early this morning, following less than two weekends in the incoming/ directory of CRAN. As always, thanks to CRAN for all the work they do so well.

So once more, this release follows the 0.12.0 release from July 2015, the 0.12.1 release in September 2015, the 0.12.2 release in November 2015, the 0.12.3 release in January 2016, the 0.12.4 release in March 2016, the 0.12.5 release in May 2016, the 0.12.6 release in July 2016, the 0.12.7 release in September 2016, the 0.12.8 release in November 2016, the 0.12.9 release in January 2017, the 0.12.10 release in March 2017, the 0.12.11 release in May 2017, the 0.12.12 release in July 2017, the 0.12.13 release in late September 2017, the 0.12.14 release in November 2017, the 0.12.15 release in January 2018, the 0.12.16 release in March 2018, and the 0.12.17 release in May 2018, making it the twenty-second release at the steady and predictable bi-monthly release frequency (which started with the 0.11.* series).

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1403 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with another 138 in the current BioConductor release 3.7.

A pretty decent number of changes, contributed by a number of Rcpp core team members as well as Rcpp users, went into this release. Full details are below.

Changes in Rcpp version 0.12.18 (2018-07-21)
  • Changes in Rcpp API:

    • The StringProxy::operator== is now const correct (Romain in #855 fixing #854).

    • The Environment::new_child() is now const (Romain in #858 fixing #854).

    • Next eval codes now properly unwind (Lionel in the large and careful #859 fixing #807).

    • In debugging mode, more type information is shown on abort() (Jack Wasey in #860 and #882 fixing #857).

    • A new class was added which allows suspension of the RNG synchronisation to address an issue seen in RcppDE (Kevin in #862).

    • Evaluation calls now happen in the base environment (which may fix an issue seen between conflicted and some BioConductor packages) (Kevin in #863 fixing #861).

    • Call stack display on error can now be controlled more finely (Romain in #868).

    • The new Rcpp_fast_eval is used instead of Rcpp_eval though this still requires setting RCPP_USE_UNWIND_PROTECT before including Rcpp.h (Qiang Kou in #867 closing #866).

    • The Rcpp::unwindProtect() function extracts the unwinding from the Rcpp_fast_eval() function and makes it more generally available. (Lionel in #873 and #877).

    • The tm_gmtoff part is skipped on AIX too (#876).

  • Changes in Rcpp Attributes:

    • The sourceCpp() function now evaluates R code in the correct local environment in which a function was compiled (Filip Schouwenaars in #852 and #869 fixing #851).

    • Filenames are now sorted in a case-insensitive way so that the RcppExports files are more stable across locales (Jack Wasey in #878).

  • Changes in Rcpp Sugar:

    • The sugar functions min and max now recognise empty vectors (Dirk in #884 fixing #883).

Thanks to CRANberries, you can also look at a diff to the previous release. As always, details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel Thinking inside the box

Extremely hot and humid - over 40℃ in Tokyo

Hën, 23/07/2018 - 4:21md
I can't do anything, it's too hot and humid... hope it'd be better in Hsinchu, Taiwan.

Yes, I'll go to DebConf18, see you there. Hideki Yamane Henrich plays with Debian

Reproducible Builds: Weekly report #169

Hën, 23/07/2018 - 3:35md

Here’s what happened in the Reproducible Builds effort between Sunday July 15 and Saturday July 21 2018:

Packages reviewed and fixed, and bugs filed

Testing framework development

There were a number of updates to our Jenkins-based testing framework that powers


This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks

Passwords Used by Daemons

Hën, 23/07/2018 - 9:11pd

There’s a lot of advice about how to create and manage user passwords, and some of it is even good. But there doesn’t seem to be much advice about passwords for daemons, scripts, and other system processes.

I’m writing this post with some rough ideas about the topic, please let me know if you have any better ideas. Also I’m considering passwords and keys in a fairly broad sense, a private key for a HTTPS certificate has more in common with a password to access another server than most other data that a server might use. This also applies to SSH host secret keys, keys that are in ssh authorized_keys files, and other services too.

Passwords in Memory

When SSL support for Apache was first released, the standard practice was to have the SSL private key encrypted and to require the sysadmin to enter a password to start the daemon. This practice has mostly gone away; I would hope that is due to people realising that it offers little value, but it’s more likely just because it’s really annoying and doesn’t scale for cloud deployments.

If there were a benefit to having the password only in RAM (i.e. no readable file on disk), then there are options such as granting read access to the private key file only during startup. I have seen a web page recommending running “chmod 0” on the private key file after the daemon starts up.
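As a rough sketch of that approach (the file here is a made-up stand-in; a real daemon would load its key between the two chmod calls):

```shell
# Hypothetical demonstration of the "chmod 0 after startup" idea.
keyfile=$(mktemp)                     # stand-in for the real private key file
echo "fake-private-key" > "$keyfile"
chmod 400 "$keyfile"                  # readable only while the daemon starts
cat "$keyfile" > /dev/null            # the daemon loads the key into RAM here
chmod 0 "$keyfile"                    # afterwards even the daemon user cannot re-open it
stat -c '%a' "$keyfile"               # prints: 0
rm -f "$keyfile"
```

Note that, as argued below, this protects only the on-disk copy; the copy already in the process’s address space remains exposed.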

I don’t believe that there is a real benefit to having a password only exist in RAM. Many exploits target the address space of the server process; Heartbleed is one well known bug, still shipping in new products today, that reads server memory for encryption keys. If you run a program that is vulnerable to Heartbleed then its SSL private key (and probably a lot of other application data) is vulnerable to attackers regardless of whether you needed to enter a password at daemon startup.

If you have an application or daemon that might need a password at any time then there’s usually no way of securely storing that password such that a compromise of that application or daemon can’t get the password. In theory you could have a proxy for the service in question which runs as a different user and manages the passwords.

Password Lifecycle

Ideally you would be able to replace passwords at any time. Any time a password is suspected to have been leaked then it should be replaced. That requires that you know where the password is used (both which applications and which configuration files used by those applications) and that you are able to change all programs that use it in a reasonable amount of time.
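A minimal sketch of what “replace a password at any time” looks like for a single service, assuming the credential lives on one known line of one known config file (both the file layout and the key name here are hypothetical):

```shell
# Rotate one service's password in its single configuration location.
conf=$(mktemp)                                     # stand-in for the real config file
echo "db_password = old-secret" > "$conf"
newpw=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24)   # generate a fresh credential
sed -i "s|^db_password = .*|db_password = $newpw|" "$conf"
grep -c '^db_password = ' "$conf"                  # prints: 1 (still exactly one credential line)
rm -f "$conf"
```

In real life the rotation would be followed by updating the account on the server side and restarting every consumer of the credential, which is exactly why knowing all the locations matters.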

The first thing to do to achieve this is to have one password per application, not one per use. For example, if you have a database storing accounts used for a mail server then you might be tempted to have an outbound mail server such as Postfix and an IMAP server such as Dovecot both use the same password to access the database. The correct thing to do is to have one database account for Dovecot and another for Postfix, so if you need to change the password for one of them you don’t need to change passwords in two locations and restart two daemons at the same time. Another good option is to have Postfix talk to Dovecot for authenticating outbound mail; that means you only have a single configuration location for storing the password, and it also means that a security flaw in Postfix (or, more likely, a misconfiguration) couldn’t give access to the database server.

Passwords Used By Web Services

It’s very common to run web sites on Apache backed by database servers, so common that the acronym LAMP is widely used for Linux, Apache, MySQL, and PHP. In a typical LAMP installation you have multiple web sites running as the same user, which by default can read each other’s configuration files. There are some solutions to this.

There is an Apache module mod_apparmor to use the Apparmor security system [1]. This allows changing to a specified Apparmor “hat” based on the URI or a specified hat for the virtual server. Each Apparmor hat is granted access to different files and therefore files that contain passwords for MySQL (or any other service) can be restricted on a per vhost basis. This only works with the prefork MPM.

There is also an Apache module mpm-itk which runs each vhost under a specified UID and GID [2]. This also allows protecting sites on the same server from each other. The ITK MPM is also based on the prefork MPM.

I’ve been thinking of writing an SE Linux MPM for Apache to do similar things. It would have to be based on prefork too. Maybe a change to mpm-itk to support SE Linux contexts as well as UID and GID would suffice.

Managing It All

Once the passwords are separated such that each service runs with minimum privileges, you need to track and manage it all. At the simplest that needs a document listing where all of the passwords are used and how to change them. If you use a configuration management tool then that could manage the passwords. Here’s a list of tools to manage service passwords in configuration management systems like Ansible [3].

Related posts:

  1. Email Passwords I was doing some routine sysadmin work for a client...
  2. SE Linux Play Machine and Passwords My SE Linux Play Machine has been online again since...
  3. Case Sensitivity and Published Passwords When I first started running a SE Linux Play Machine...
etbe etbe – Russell Coker