
Elana Hashman: SREcon19 Americas Talk Resources

Planet Debian - Fri, 22/03/2019 - 5:00am

At SREcon19 Americas, I gave a talk called "Operating within Normal Parameters: Monitoring Kubernetes". Here are some links and resources related to my talk, for your reference.

  • Operating within Normal Parameters: Monitoring Kubernetes
  • Additional Prometheus metrics sources
  • Related readings

I'm including these documents for reference to add some context around what's currently happening (as of 2019Q1) in the Kubernetes instrumentation SIG and wider ecosystem.

Note that GitHub links are pinned to their most recent commit to ensure they will not break; if you want the latest version, make sure to switch the branch to "master".

Simon Josefsson: Offline Ed25519 OpenPGP key with subkeys on FST-01G running Gnuk

Planet Debian - Thu, 21/03/2019 - 9:45pm

Below I describe how to generate an OpenPGP key and import it to an FST-01G device running Gnuk. See my earlier post on planning for my new OpenPGP key and the post on preparing the FST-01G to run Gnuk. For comparison with an RSA/YubiKey-based approach, you can read about my setup from 2014.

Most of the steps below are covered by the Gnuk manual. The primary complication for me is the use of an offline machine and storing the GnuPG home directory on a USB memory device.

Offline machine

I use a laptop that is not connected to the Internet and boot it from a read-only USB memory stick. Finding a live CD that contains the necessary tools for using GnuPG with smartcards (gpg-agent, scdaemon, pcscd) is significantly harder than it should be. Using a rarely audited image begs the question of whether you can trust it. A patched kernel or gpg generating poor randomness would be an easy and hard-to-notice hack. I’m using the PGP/PKI Clean Room Live CD. Recommendations on more widely used and audited alternatives would be appreciated. Select “Advanced Options” and “Run Shell” to escape the menus. Insert a new USB memory device, and prepare it as follows:

pgp@pgplive:/home/pgp$ sudo wipefs -a /dev/sdX
pgp@pgplive:/home/pgp$ sudo fdisk /dev/sdX
# create a primary partition of Linux type
pgp@pgplive:/home/pgp$ sudo mkfs.ext4 /dev/sdX1
pgp@pgplive:/home/pgp$ sudo mount /dev/sdX1 /mnt
pgp@pgplive:/home/pgp$ sudo mkdir /mnt/gnupghome
pgp@pgplive:/home/pgp$ sudo chown pgp.pgp /mnt/gnupghome
pgp@pgplive:/home/pgp$ sudo chmod go-rwx /mnt/gnupghome

GnuPG configuration

Set your GnuPG home directory to point to the gnupghome directory on the USB memory device. You will need to do this in every terminal window you open that you want to use GnuPG in.

pgp@pgplive:/home/pgp$ export GNUPGHOME=/mnt/gnupghome
pgp@pgplive:/home/pgp$

At this point, you should be able to run gpg --card-status and get output from the smartcard.

Create master key

Create a master key and make a backup copy of the GnuPG home directory holding it, together with an ASCII-armored export.

pgp@pgplive:/home/pgp$ gpg --quick-gen-key "Simon Josefsson <>" ed25519 sign 216d
gpg: keybox '/mnt/gnupghome/pubring.kbx' created
gpg: /mnt/gnupghome/trustdb.gpg: trustdb created
gpg: key D73CF638C53C06BE marked as ultimately trusted
gpg: directory '/mnt/gnupghome/openpgp-revocs.d' created
gpg: revocation certificate stored as '/mnt/gnupghome/openpgp-revocs.d/B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE.rev'
pub   ed25519 2019-03-20 [SC] [expires: 2019-10-22]
      B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
uid                      Simon Josefsson <>
pgp@pgplive:/home/pgp$ gpg -a --export-secret-keys B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE > $GNUPGHOME/masterkey.txt
pgp@pgplive:/home/pgp$ sudo cp -a $GNUPGHOME $GNUPGHOME-backup-masterkey
pgp@pgplive:/home/pgp$

Create subkeys

Create subkeys and make a backup of them too, as follows.

pgp@pgplive:/home/pgp$ gpg --quick-add-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE cv25519 encr 216d
pgp@pgplive:/home/pgp$ gpg --quick-add-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE ed25519 auth 216d
pgp@pgplive:/home/pgp$ gpg --quick-add-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE ed25519 sign 216d
pgp@pgplive:/home/pgp$ gpg -a --export-secret-keys B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE > $GNUPGHOME/mastersubkeys.txt
pgp@pgplive:/home/pgp$ gpg -a --export-secret-subkeys B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE > $GNUPGHOME/subkeys.txt
pgp@pgplive:/home/pgp$ sudo cp -a $GNUPGHOME $GNUPGHOME-backup-mastersubkeys
pgp@pgplive:/home/pgp$

Move keys to card

Prepare the card by setting Admin PIN, PIN, your full name, sex, login account, and key URL as you prefer, following the Gnuk manual on card personalization.

Move the subkeys from your GnuPG keyring to the FST-01G using the keytocard command.
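The keytocard step is interactive. A sketch of the session follows; the exact prompts and slot menus vary by GnuPG version, so treat this as an approximation rather than a verbatim transcript:

```
pgp@pgplive:/home/pgp$ gpg --edit-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
gpg> key 1          # select the encryption subkey
gpg> keytocard      # choose the encryption key slot
gpg> key 1          # deselect it again
gpg> key 2          # select the authentication subkey
gpg> keytocard      # choose the authentication key slot
gpg> key 2
gpg> key 3          # select the signing subkey
gpg> keytocard      # choose the signature key slot
gpg> save
```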

Take a final backup — because moving the subkeys to the card modifies the local GnuPG keyring — and create an ASCII-armored version of the public key, to be transferred to your daily machine.

pgp@pgplive:/home/pgp$ gpg --list-secret-keys
/mnt/gnupghome/pubring.kbx
--------------------------
sec   ed25519 2019-03-20 [SC] [expires: 2019-10-22]
      B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
uid           [ultimate] Simon Josefsson <>
ssb>  cv25519 2019-03-20 [E] [expires: 2019-10-22]
ssb>  ed25519 2019-03-20 [A] [expires: 2019-10-22]
ssb>  ed25519 2019-03-20 [S] [expires: 2019-10-22]
pgp@pgplive:/home/pgp$ gpg -a --export-secret-keys B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE > $GNUPGHOME/masterstubs.txt
pgp@pgplive:/home/pgp$ gpg -a --export-secret-subkeys B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE > $GNUPGHOME/subkeysstubs.txt
pgp@pgplive:/home/pgp$ gpg -a --export B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE > $GNUPGHOME/publickey.txt
pgp@pgplive:/home/pgp$ cp -a $GNUPGHOME $GNUPGHOME-backup-masterstubs
pgp@pgplive:/home/pgp$

Transfer to daily machine

Copy publickey.txt to your day-to-day laptop, import it, and create smartcard stubs using --card-status.

jas@latte:~$ gpg --import < publickey.txt
gpg: key D73CF638C53C06BE: public key "Simon Josefsson <>" imported
gpg: Total number processed: 1
gpg:               imported: 1
jas@latte:~$ gpg --card-status
Reader ...........: Free Software Initiative of Japan Gnuk (FSIJ-1.2.14-67252015) 00 00
Application ID ...: D276000124010200FFFE672520150000
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 67252015
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Sex ..............: male
URL of public key :
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: ed25519 cv25519 ed25519
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 0
Signature key ....: A3CC 9C87 0B9D 310A BAD4  CF2F 5172 2B08 FE47 45A2
      created ....: 2019-03-20 23:40:49
Encryption key....: A9EC 8F4D 7F1E 50ED 3DEF  49A9 0292 3D7E E76E BD60
      created ....: 2019-03-20 23:40:26
Authentication key: CA7E 3716 4342 DF31 33DF  3497 8026 0EE8 A9B9 2B2B
      created ....: 2019-03-20 23:40:37
General key info..: sub  ed25519/51722B08FE4745A2 2019-03-20 Simon Josefsson <>
sec   ed25519/D73CF638C53C06BE  created: 2019-03-20  expires: 2019-10-22
ssb>  cv25519/02923D7EE76EBD60  created: 2019-03-20  expires: 2019-10-22
                                card-no: FFFE 67252015
ssb>  ed25519/80260EE8A9B92B2B  created: 2019-03-20  expires: 2019-10-22
                                card-no: FFFE 67252015
ssb>  ed25519/51722B08FE4745A2  created: 2019-03-20  expires: 2019-10-22
                                card-no: FFFE 67252015
jas@latte:~$

Before the key can be used after the import, you must update the trust database for the secret key.
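The trust-update command itself is not shown in the post. One non-interactive way (a sketch, under the assumption that ultimate ownertrust is what is wanted) is to import an ownertrust record for the fingerprint:

```shell
# Hypothetical sketch: mark the new key as ultimately trusted.
# The fingerprint is the one from the post; the ":6:" suffix means
# ultimate trust in GnuPG's ownertrust format.
FPR=B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
ownertrust="${FPR}:6:"
echo "$ownertrust"
# In real use this record would be fed to GnuPG:
#   echo "$ownertrust" | gpg --import-ownertrust
# (or interactively: gpg --edit-key "$FPR", then "trust")
```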

Now you should have an offline master key with subkey stubs. Note in the output below that the master key is not available (sec#) and that the subkeys are stubs for smartcard keys (ssb>).

jas@latte:~$ gpg --list-secret-keys
sec#  ed25519 2019-03-20 [SC] [expires: 2019-10-22]
      B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
uid           [ultimate] Simon Josefsson <>
ssb>  cv25519 2019-03-20 [E] [expires: 2019-10-22]
ssb>  ed25519 2019-03-20 [A] [expires: 2019-10-22]
ssb>  ed25519 2019-03-20 [S] [expires: 2019-10-22]
jas@latte:~$

If your environment variables are set up correctly, SSH should find the authentication key automatically.

jas@latte:~$ ssh-add -L
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILzCFcHHrKzVSPDDarZPYqn89H5TPaxwcORgRg+4DagE cardno:FFFE67252015
jas@latte:~$
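The environment setup itself is not shown in the post. A commonly used configuration (an assumption on my part, not taken from the post) enables gpg-agent's ssh-agent emulation and points SSH_AUTH_SOCK at its socket:

```shell
# Enable gpg-agent's built-in ssh-agent support (idempotent append).
mkdir -p ~/.gnupg
grep -qx enable-ssh-support ~/.gnupg/gpg-agent.conf 2>/dev/null ||
    echo enable-ssh-support >> ~/.gnupg/gpg-agent.conf
# In ~/.profile or similar: gpgconf prints the per-user socket path.
if command -v gpgconf >/dev/null 2>&1; then
    export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
fi
```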

GnuPG and SSH are now ready to be used with the new key. Thanks for reading!

Simon Josefsson: Installing Gnuk on FST-01G running NeuG

Planet Debian - Thu, 21/03/2019 - 9:39pm

The FST-01G device that you order from the FSF shop runs NeuG. To be able to use the device as an OpenPGP smartcard, you need to install Gnuk. While Niibe covers this in his tutorial, I found the steps a bit complicated to follow. The following guides you from buying the device to having an FST-01G running Gnuk, ready for use with GnuPG.

Once you have received the device and inserted it into a USB port, your kernel log (sudo dmesg) will show something like the following:

[628772.874658] usb 1-1.5.1: New USB device found, idVendor=234b, idProduct=0004
[628772.874663] usb 1-1.5.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[628772.874666] usb 1-1.5.1: Product: Fraucheky
[628772.874669] usb 1-1.5.1: Manufacturer: Free Software Initiative of Japan
[628772.874671] usb 1-1.5.1: SerialNumber: FSIJ-0.0
[628772.875204] usb-storage 1-1.5.1:1.0: USB Mass Storage device detected
[628772.875452] scsi host6: usb-storage 1-1.5.1:1.0
[628773.886539] scsi 6:0:0:0: Direct-Access FSIJ Fraucheky 1.0 PQ: 0 ANSI: 0
[628773.887522] sd 6:0:0:0: Attached scsi generic sg2 type 0
[628773.888931] sd 6:0:0:0: [sdb] 128 512-byte logical blocks: (65.5 kB/64.0 KiB)
[628773.889558] sd 6:0:0:0: [sdb] Write Protect is off
[628773.889564] sd 6:0:0:0: [sdb] Mode Sense: 03 00 00 00
[628773.890305] sd 6:0:0:0: [sdb] No Caching mode page found
[628773.890314] sd 6:0:0:0: [sdb] Assuming drive cache: write through
[628773.902617] sdb:
[628773.906066] sd 6:0:0:0: [sdb] Attached SCSI removable disk

The device comes up as a USB mass storage device. Conveniently, it contains documentation describing what it is, and you can identify the version of NeuG it runs as follows.

jas@latte:~/src/gnuk$ head /media/jas/Fraucheky/README
NeuG - a true random number generator implementation (for STM32F103)
Version 1.0.7
2018-01-19
Niibe Yutaka
Free Software Initiative of Japan

To switch the device into the serial mode required for the software upgrade, use the eject command on the device (above it came up as /dev/sdb): sudo eject /dev/sdb. The kernel log will now contain something like this:

[628966.847387] usb 1-1.5.1: reset full-speed USB device number 27 using ehci-pci
[628966.955723] usb 1-1.5.1: device firmware changed
[628966.956184] usb 1-1.5.1: USB disconnect, device number 27
[628967.115322] usb 1-1.5.1: new full-speed USB device number 28 using ehci-pci
[628967.233272] usb 1-1.5.1: New USB device found, idVendor=234b, idProduct=0001
[628967.233277] usb 1-1.5.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[628967.233280] usb 1-1.5.1: Product: NeuG True RNG
[628967.233283] usb 1-1.5.1: Manufacturer: Free Software Initiative of Japan
[628967.233286] usb 1-1.5.1: SerialNumber: FSIJ-1.0.7-67252015
[628967.234034] cdc_acm 1-1.5.1:1.0: ttyACM0: USB ACM device

The strings NeuG True RNG and FSIJ-1.0.7 suggest it is running NeuG version 1.0.7.

Now both Gnuk itself and reGNUal need to be built, as follows. If you get any error messages, you likely don’t have the necessary dependencies installed.

jas@latte:~/src$ git clone
jas@latte:~/src$ git clone
jas@latte:~/src$ cd gnuk/src/
jas@latte:~/src/gnuk/src$ git submodule update --init
jas@latte:~/src/gnuk/src$ ./configure --vidpid=234b:0000
...
jas@latte:~/src/gnuk/src$ make
...
jas@latte:~/src/gnuk/src$ cd ../regnual/
jas@latte:~/src/gnuk/regnual$ make
jas@latte:~/src/gnuk/regnual$ cd ../../
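The necessary dependencies are not listed in the post; on Debian, a plausible set (the package names here are my assumption, not an authoritative list) is gcc-arm-none-eabi and binutils-arm-none-eabi for the bare-metal ARM toolchain, plus python3 for the tools. A quick pre-flight check before running ./configure:

```shell
# Check for the cross-compiler before building.
# Assumed install command (not from the post):
#   sudo apt install gcc-arm-none-eabi binutils-arm-none-eabi python3
if command -v arm-none-eabi-gcc >/dev/null 2>&1; then
    echo "toolchain found"
else
    echo "toolchain missing"
fi
```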

You are now ready to flash the device, as follows.

jas@latte:~/src$ sudo neug/tool/ -f gnuk/regnual/regnual.bin gnuk/src/build/gnuk.bin
gnuk/regnual/regnual.bin: 4544
gnuk/src/build/gnuk.bin: 113664
CRC32: 931cab51
Device:
Configuration: 1
Interface: 1
20000e00:20005000
Downloading flash upgrade program...
start 20000e00
end   20001f00
# 20001f00: 31 : 196
Run flash upgrade program...
Wait 3 seconds...
Device:
08001000:08020000
Downloading the program
start 08001000
end   0801bc00
jas@latte:~/src$

Remove and insert the device and the kernel log should contain something like this:

[629120.399875] usb 1-1.5.1: new full-speed USB device number 32 using ehci-pci
[629120.511003] usb 1-1.5.1: New USB device found, idVendor=234b, idProduct=0000
[629120.511008] usb 1-1.5.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[629120.511011] usb 1-1.5.1: Product: Gnuk Token
[629120.511014] usb 1-1.5.1: Manufacturer: Free Software Initiative of Japan
[629120.511017] usb 1-1.5.1: SerialNumber: FSIJ-1.2.14-67252015

The device can now be used with GnuPG as a smartcard device.

jas@latte:~/src/gnuk$ gpg --card-status
Reader ...........: 234B:0000:FSIJ-1.2.14-67252015:0
Application ID ...: D276000124010200FFFE672520150000
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 67252015
Name of cardholder: [not set]
Language prefs ...: [not set]
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 0
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]
jas@latte:~/src/gnuk$


Simon Josefsson: OpenPGP 2019 Key Transition Statement

Planet Debian - Thu, 21/03/2019 - 9:30pm

I have created a new OpenPGP key and will be transitioning away from my old key. If you have signed my old key, I would appreciate signatures on my new key as well. I have created a transition statement that can be downloaded from

Below is the signed statement.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

OpenPGP Key Transition Statement for Simon Josefsson <>

I have created a new OpenPGP key and will be transitioning away from my old key. The old key has not been compromised and will continue to be valid for some time, but I prefer all future correspondence to be encrypted to the new key, and will be making signatures with the new key going forward.

I would like this new key to be re-integrated into the web of trust. This message is signed by both keys to certify the transition. My new and old keys are signed by each other. If you have signed my old key, I would appreciate signatures on my new key as well, provided that your signing policy permits that without re-authenticating me.

The old key, which I am transitioning away from, is:

pub   rsa3744 2014-06-22 [SC]
      9AA9 BDB1 1BB1 B99A 2128  5A33 0664 A769 5426 5E8C

The new key, to which I am transitioning, is:

pub   ed25519 2019-03-20 [SC]
      B1D2 BD13 75BE CB78 4CF4  F8C4 D73C F638 C53C 06BE

The key may be downloaded from:

To fetch the full new key from a public key server using GnuPG, run:

  gpg --keyserver \
      --recv-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE

If you already know my old key, you can now verify that the new key is signed by the old one:

  gpg --check-sigs B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE

If you are satisfied that you've got the right key, and the User IDs match what you expect, I would appreciate it if you would sign my key:

  gpg --sign-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE

You can upload your signatures to a public keyserver directly:

  gpg --keyserver \
      --send-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE

Or email (possibly encrypted) the output from:

  gpg --armor --export B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE

If you'd like any further verification or have any questions about the transition please contact me directly.

To verify the integrity of this statement:

  wget -q -O- | gpg --verify

/Simon
-----BEGIN PGP SIGNATURE-----

iQIHBAEBCgAdFiEEmqm9sRuxuZohKFozBmSnaVQmXowFAlyT8SQACgkQBmSnaVQm
XoxASQ6fUqFbueRikTu5Mp8V/J6BUoU94coqii3Pd15A2Kss9yzXpt+6ls5gpwzE
oxOubhxtFZ2WqNxVXwV/8e/48XDbDyy7WWh6Ao+8wQl+zl5CU8KUhM5zhUVR0BS4
IfTTs/JudrJASCocEPvRyuJ9cdhn66KCqleWIC+SEzPoxo+E941FxYUhHpL1jSul
ln1TR/0SGhSx19Cy6emej26p1Hs+kwHaiTo8eWgdQAg/yjY7z0RQJ1itVwfZaPJn
Ob2Bbs082U1Tho8RpjMS1mC9+cjsYadbMBgYTJ6HLkQ4xjuTFS021eWwdd0a39Pd
f4terKu+QT6y3FoQgQE8fZ+eaqEf5VLqVR/SxSR36LcrCX3GhBlEUo5RvYEWdRtd
uyBKR60G8zS0yGfDrsGjRT2Rag3B5rBbjml4Tn9nijG1LACeTci828y5+JykD7+l
l3kzrES90IOUwvrNQg9QyJxOJJ/SsZw2dcHEtltfg0o9nXxQqQQCA4STUSTLlf6p
G6T2+vd6LVYD5Zs6e4iutcvEpUzWYCvOC4RI+YMHrMU/nP44sgfjm4izx5CaKPH8
/UwQNhiS/ccsxMwEgnYTXi8shAUwA9gd6/92WVKCIMd5BpBi7JZ7QSoRiHUEARYK
AB0WIQSx0r0Tdb7LeEz0+MTXPPY4xTwGvgUCXJPxJAAKCRDXPPY4xTwGvuxpAQDn
Ws6Hn0RBqKyN5LJ4cXt55FDhaFpeJh7ZG4sHEdn3bAD/ags7v19305cAkvpbSEdX
MJoESOiUD1BwNTihVH9XBwc=
=r0qK
-----END PGP SIGNATURE-----

Birger Schacht: Installing Debian with encrypted boot using GRML

Planet Debian - Thu, 21/03/2019 - 7:28pm

A couple of days ago an interesting step-by-step guide on how to install Debian with full disk encryption, including /boot, using debian-installer was posted on the debian-boot mailing list. This reminded me of the steps I used and wrote down a couple of months ago to create a similar setup. These steps describe a full-disk (including /boot) encrypted setup on a non-coreboot system using the great grml live distro. (And just to be sure, I redid the same setup on a test device with the newest grml release, Gnackwatschn):

The first step was to set up the network using grml-network, after which I started preparing the disk. I wiped the disk's old partition table using sgdisk(8) and then created a 512MB EFI System partition, using the rest of the disk for a Linux partition:

sgdisk --zap-all /dev/sda
sgdisk -n1:1M:+512M -t1:EF00 /dev/sda
sgdisk -n2:0:0 -t2:8300 /dev/sda

Then I initialized the LUKS partition, set a passphrase and opened the LUKS device:

cryptsetup luksFormat /dev/sda2
cryptsetup luksOpen /dev/sda2 sda2_crypt

The LUKS device is then used to create an LVM volume group, which in this example is called vg-2560p. In that volume group I created a logical volume for the root filesystem:

pvcreate /dev/mapper/sda2_crypt
vgcreate vg-2560p /dev/mapper/sda2_crypt
lvcreate -L 120G vg-2560p -n root

The next step was to create an ext4 filesystem on the root volume and an msdos filesystem with a 32-bit file allocation table and the label EFI on the EFI System partition:

mkfs.ext4 /dev/vg-2560p/root
mkdosfs -F 32 -n EFI /dev/sda1

I then mounted the root partition, debootstrapped buster onto the partition, mounted the EFI partition and remounted /dev, /proc, /sys and /run into the new system:

mount /dev/vg-2560p/root /mnt
debootstrap buster /mnt
mkdir /mnt/boot/efi
mount /dev/sda1 /mnt/boot/efi
mount --rbind /dev /mnt/dev/
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
mount --rbind /run /mnt/run

After that I used chroot(8) to change into the buster installation and do some initial configuration. I first told apt(8) not to install recommended packages and then installed a kernel, grub, cryptsetup, lvm2 and sudo:

chroot /mnt /bin/bash
echo 'APT::Install-Recommends "0";' >> /etc/apt/apt.conf.d/local-recommends
apt install linux-image-amd64 cryptsetup lvm2 grub-efi-amd64 sudo

On the new system the /etc/fstab file is empty, so I added entries for the filesystems, and I also added information about the encrypted disk to the /etc/crypttab file:

echo PARTUUID=$(blkid -s PARTUUID -o value /dev/sda1) /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab
echo UUID=$(blkid -s UUID -o value /dev/mapper/vg--2560p-root) / ext4 defaults 0 1 >> /etc/fstab
echo sda2_crypt PARTUUID=$(blkid -s PARTUUID -o value /dev/sda2) none luks,discard,initramfs >> /etc/crypttab

I also had to tell grub to enable device decryption:

echo "GRUB_ENABLE_CRYPTODISK=y" >> /etc/default/grub
update-initramfs -c -k all
update-grub
grub-install --target=x86_64-efi

The final step, which I forget nearly every time I install a system using debootstrap(8), was to add a user account:

adduser bisco
adduser bisco sudo

PS: On the laptop I installed a couple of months ago, I had to set the path to the EFI GRUB file (\EFI\debian\grubx64.efi) in the BIOS. On the laptop I used to reproduce the above steps, I didn't find that setting in the BIOS (it's from 2011; maybe a BIOS update would have helped), but I was able to choose the file during boot.

Arturo Borrero González: The martian packet case in our Neutron floating IP setup

Planet Debian - Thu, 21/03/2019 - 9:00am

A community member opened a bug the other day related to a weird networking behavior in the Cloud VPS service, offered by the Cloud Services team at Wikimedia Foundation. This VPS hosting service is based on Openstack, and we implement the networking bits by means of Neutron.

Our current setup is based on Openstack Mitaka (old, I know) and the networking architecture we use is extensively described in our docs. What is interesting today is our floating IP setup, which Neutron uses by means of the Netfilter NAT engine.

Neutron creates a couple of NAT rules for each floating IP, to implement both SNAT and DNAT. In our setup, if a VM uses a floating IP, then all its traffic to and from The Internet will use this floating IP. In our case, the floating IP range is made of public IPv4 addresses.
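As an illustration of what such a rule pair looks like (placeholder addresses; the chain names follow the neutron-l3-agent convention and should be treated as assumptions here, not as output from this setup):

```
# DNAT incoming traffic for the floating IP to the VM's fixed IP,
# and SNAT the VM's outgoing traffic to the floating IP:
iptables -t nat -A neutron-l3-agent-PREROUTING \
    -d <floating-ip> -j DNAT --to-destination <fixed-ip>
iptables -t nat -A neutron-l3-agent-float-snat \
    -s <fixed-ip> -j SNAT --to-source <floating-ip>
```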

The bug/weird behavior consisted of the VM being unable to contact itself using the floating IP. A packet is generated in the VM with the floating IP as its destination address, a packet like this: > ICMP echo request

This packet reaches the neutron virtual router, and I could see it in tcpdump:

root@neutron-router:~# tcpdump -n -i qr-defc9d1d-40 icmp and host
11:51:48.652815 IP > ICMP echo request, id 32318, seq 1, length 64

Then the PREROUTING NAT rule applies, translating the destination address. The corresponding conntrack NAT engine event:

root@neutron-router:~# conntrack -E -p icmp --src
[NEW] icmp 1 30 src= dst= type=8 code=0 id=32395 [UNREPLIED] src= dst= type=0 code=0 id=32395

When this happens, the packet is put back on the wire, and I could see it again in a tcpdump running on the Neutron server box. You can see the two packets, the first without NAT, the second with the NAT applied:

root@neutron-router:~# tcpdump -n -i qr-defc9d1d-40 icmp and host
11:51:48.652815 IP > ICMP echo request, id 32318, seq 1, length 64
11:51:48.652842 IP > ICMP echo request, id 32318, seq 1, length 64

The Neutron virtual router routes this packet back to the original VM, and you can see the NATed packet reaching the interface. Note how I selected only incoming packets in tcpdump using -Q in:

root@vm-instance:~# tcpdump -n -i eth0 -Q in icmp
11:51:48.650504 IP > ICMP echo request, id 32318, seq 1, length 64

And here is the thing. That packet can’t be routed by the VM:

root@vm-instance:~# ip route get from iif eth0
RTNETLINK answers: Invalid argument

This is known as a martian packet and you can actually see the kernel complaining if you turn on martian packet logging:

root@vm-instance:~# sysctl net.ipv4.conf.all.log_martians=1
root@vm-instance:~# dmesg -T | tail -2
[Tue Mar 19 12:16:26 2019] IPv4: martian source from, on dev eth0
[Tue Mar 19 12:16:26 2019] ll header: 00000000: fa 16 3e d9 29 75 fa 16 3e ae f5 88 08 00 ..>.)u..>.....

The problem is that for a local IP address, we receive a packet with the same src/dst IPv4 addresses but different src/dst MAC addresses. That's nonsense to the network stack unless configured otherwise. If one wants to instruct the network stack to allow this, the fix is pretty easy:

root@vm-instance:~# sysctl net.ipv4.conf.all.accept_local=1
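A sysctl set this way does not survive a reboot. A persistent variant (my assumption about how one would deploy it, not part of the original debugging session) drops the setting into /etc/sysctl.d; the sketch below writes to a temporary file purely for illustration:

```shell
# Demonstrated against a temporary file; on the real VM the target
# would be /etc/sysctl.d/99-accept-local.conf, loaded with
# "sysctl --system" or at boot.
conf=$(mktemp)
echo 'net.ipv4.conf.all.accept_local = 1' > "$conf"
cat "$conf"
```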

Now, ping from the VM to the floating IP works:

root@vm-instance:~# ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.202 ms
64 bytes from icmp_seq=2 ttl=64 time=0.228 ms
^C
--- ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1011ms
rtt min/avg/max/mdev = 0.202/0.215/0.228/0.013 ms

And ip route reports it correctly:

root@vm-instance:~# ip route get from iif eth0
local from dev lo
    cache <local> iif eth0

You can read more about all the network-related sysctl configs in the Linux kernel docs. Specifically this one:

accept_local - BOOLEAN
    Accept packets with local source addresses. In combination with
    suitable routing, this can be used to direct packets between two
    local interfaces over the wire and have them accepted properly.
    default FALSE

The Cloud VPS service offered by the Wikimedia Foundation is an open project, open to use by anyone connected with the Wikimedia movement, and we encourage the community to work with us to improve it. It is open to technical and engineering contributions as well, and you are welcome to contribute to this or any of the many other collaborative efforts in this global movement.

Ian Jackson: Pandemic Rising Tide - a new board design

Planet Debian - Thu, 21/03/2019 - 2:02am
As I wrote previously (link added here): ceb gave me the board game Pandemic Rising Tide for Christmas. I like it a lot. However, the board layout, while very pretty and historically accurate, is awkward for play. I decided to produce a replacement board design, with a schematic layout.

This project is now complete at last! Not only do I have PDFs ready for printing on a suitable printer, but I have also made a pretty good, properly folding actual board.

Why a new board design

The supplied board is truly a work of art. Every wrinkle in the coastline and lots of details of boundaries of various parts of the Netherlands are faithfully reproduced.

To play the game, though, it is necessary to see quickly which "squares" (faces of the boundary graph; the rules call them regions) are connected to which others, what the fastest walking route is, and so on. Also, one places dyke tokens - small brown sticks - along some of the edges; it is often necessary to quickly see whether a face has any dykes on any of its edges, or whether there is a dyke between two adjacent faces.

This is hard to do on the original board. There has been at least one forum thread about it, and one player shared their modifications involving pipe cleaners and glue!

Results - software, and PDFs

Much of the work in this project was producing the image to go on the board - in particular, laying out the graph was quite hard and involved shaving a number of yaks. (I'll be posting properly about my planar graph layout tool too.)

In case you like my layout, I have published a complete set of PDFs suitable for printing out yourself. There's a variety depending on what printer you are going to use. See the README.txt in that directory for details.

Of course the source code is available too. (Building it is not so easy - see the same README for details.)

Results - physical board

I consulted with ceb who had very useful bookbinding expertise and gave copious and useful advice, and also very kindly let me use some of their supplies. I had a local print shop print out a suitable PDF on their excellent A1 colour laserprinter, with very good results. (The photos below don't do justice to the colour rendering.)

The whole board is backed with bookcloth (the cloth which is used for the spines of hardback books), and that backing forms one of the two hinges. The other hinge is a separate piece of bookcloth on the top face. Then on top of that is the actual board image sheet, all put on in one go (so it all aligns correctly) and then cut along the "convex" hinge after the glue was dry.

I did some experiments to get the hang of the techniques and materials, and to try out a couple of approaches. Then I wrote myself a set of detailed instruction notes, recalculated the exact sizes, and did a complete practice run at 1/sqrt(8) scale. That served me well.

The actual construction took most of a Saturday afternoon and evening, and then the completed board had to be pressed for about 48h while it dried, to stop it warping.

There was one part that it wasn't really practical to practice: actually pasting a 624 x 205mm sheet of 120gsm paper, covered in a mixture of PVA and paste, onto a slightly larger arrangement of boards, is really quite tricky to do perfectly - even if you have a bookbinder on hand to help with another pair of hands. So if you look closely at my finished article you can see some blemishes. But, overall, I am pleased.

Pictures

If you just want to admire my board design, you can look at this conveniently sized PDF. I also took some photographs. But, for here, a taster:


Steinar H. Gunderson: RC-bugginess

Planet Debian - Thu, 21/03/2019 - 12:34am

The RMs very helpfully unblocked my Nageru upload so that a bunch of Futatabi bugfixes could go to buster. I figured this was a good time to find a long-standing RC bug to debug and fix in return.

(Granted, I didn't upload yet, so the bug isn't closed. But a patch should go a long way.)

Only 345 to go…

Simon Josefsson: Planning for a new OpenPGP key

Planet Debian - Thu, 21/03/2019 - 12:05am

I’m in the process of migrating to a new OpenPGP key. I have been using GnuPG with keys stored on external hardware (smartcards) for a long time, and I’m firmly committed to that choice. Algorithm-wise, RSA was the best choice for me back when I created my key in 2002, and I used it successfully with a non-standard key size for many years. In 2014 it was time for me to move to a new stronger key, and I still settled on RSA and a non-standard key size. My master key was 3744 bits instead of 1280 bits, and the smartcard subkeys were 2048 bits instead of 1024 bits. At that time, I had already moved from the OpenPGP smartcard to the NXP-based YubiKey NEO (version 3) that runs JavaCard applets. The primary relevant difference for me was the availability of source code for the OpenPGP implementation running on the device, in the ykneo-openpgp project. The device was still a proprietary hardware and firmware design though.

Five years later, it is time for a new key again, and I allow myself to revisit some decisions that I made last time.

GnuPG has supported Curve25519/Ed25519 for some time, and today I prefer it over RSA. Infrastructure has been gradually introducing support for it as well, to the point that I now believe I can cut the ropes to the old world with RSA. Having an offline master key is still a strong preference, so I will stick to that decision. You shouldn’t run around with your primary master key if it is possible to get by with subkeys for daily use, and that has worked well for me over the years.

Hardware smartcard support for Curve25519/Ed25519 has lagged behind software support. NIIBE Yutaka developed the FST-01 hardware device in 2011, and the more modern FST-01G device in 2016. He also wrote the Gnuk software implementation of the OpenPGP card specification that runs on the FST-01 hardware (and other devices). The FST-01 hardware design is open, and it only runs the Gnuk free software. You can buy the FST-01G device from the FSF. The device has not received the FSF Respects Your Freedom stamp, even though it is sold by the FSF, which seems a bit hypocritical. Hardware running Gnuk is, to my knowledge, the only free-software OpenPGP smartcard that supports Curve25519/Ed25519 right now. The physical form factor is not as slick as the YubiKey (especially the nano versions of the YubiKey, which sit almost entirely inside the USB slot), but it is a trade-off I can live with. Niibe introduced the FST-01SZ at FOSDEM’19, but to me it does not appear to offer any feature over the FST-01G and is not available for online purchase right now.

I have always generated keys in software using GnuPG. My arguments have traditionally been that I 1) don’t trust closed-source RSA key generation implementations, and 2) want to be able to reproduce my setup with a brand new device. With Gnuk, the first argument no longer holds. However, I still prefer to generate keys with GnuPG on a Linux-based Debian machine, because that software stack is likely to receive more auditing than Gnuk. It is a delicate decision though, since GnuPG on Debian is many orders of magnitude more complex than the Gnuk software. My second argument is now the primary driver for this decision.

I prefer the SHA-2 family of hashes over SHA-1, and earlier had to configure GnuPG for this. Today I believe the defaults have been improved and this is no longer an issue.

Back in 2014, I had a goal of having a JPEG image embedded in my OpenPGP key. I never finished that process, and I have not been sorry for missing out on anything as a result. On the contrary, the size of a key with an embedded image would have been even more problematic than the already large key holding 4 embedded RSA public keys.

To summarize, my requirements for my OpenPGP key setup in 2019 are:

  • Curve25519/Ed25519 algorithms.
  • Master key on USB stick.
  • USB stick only used on an offline computer.
  • Subkeys for daily use (signature, encryption and authentication).
  • Keys are generated in GnuPG software and imported to the smartcard.
  • Smartcard is open hardware and running free software.
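To make the GnuPG side of these requirements concrete, here is a rough sketch of the key generation (assuming GnuPG 2.1 or later; the user ID, the scratch GNUPGHOME and the one-year subkey expiry are placeholders for illustration, not my actual choices):

```shell
# Sketch: Ed25519 certify-only master key plus sign/encrypt/auth subkeys,
# generated in a throwaway GNUPGHOME so nothing touches the real keyring.
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"

gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Example User <user@example.org>' ed25519 cert never

# grab the fingerprint of the new master key
FPR=$(gpg --list-keys --with-colons | awk -F: '/^fpr/ { print $10; exit }')

gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-add-key "$FPR" ed25519 sign 1y
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-add-key "$FPR" cv25519 encr 1y
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-add-key "$FPR" ed25519 auth 1y

gpg --list-secret-keys
```

Moving the subkeys onto the smartcard (keytocard inside gpg --edit-key) is a separate step, which the setup posts cover.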

Getting this setup up and running sadly requires quite some detailed work, which will be the topic of other posts… stay tuned!

Lucas Nussbaum: Call for help: graphing Debian trends

Planet Debian - Mër, 20/03/2019 - 9:29md

It has been raised in various discussions how much it’s difficult to make large-scale changes in Debian.

I think that one part of the problem is that we are not very good at tracking those large-scale changes, and I’d like to change that. A long time ago, I did some graphs about Debian (first in 2011, then in 2013, then again in 2015). An example from 2015 is given below, showing the market share of packaging helpers.

Those were generated using a custom script. Since then, classification tags were added to lintian, and I'd like to institutionalize that a bit, to make it easier to track more trends in Debian, and maybe motivate people to switch to new packaging standards. This could include stuff like VCS used, salsa migration, debhelper compat levels, patch systems and source formats, but also stuff like systemd unit files vs traditional init scripts, hardening features, etc. The process would look like:

  1. Add classification tags to lintian for relevant stuff (maybe starting with being able to regenerate the graphs from 2015).
  2. Use lintian to scan all packages on, which stores all packages ever uploaded to Debian (well, since 2005), and generate a dataset
  3. Generate nice graphs
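As an illustration of what steps 2 and 3 could boil down to, here is a hypothetical post-processing sketch; the log lines imitate lintian's "X: package: tag extra" output format, but the sample packages and tag names are made up:

```shell
# Sample stand-in for a real lintian run (classification tags use "C:").
cat > lintian.log <<'EOF'
C: alpha source: sample-tag-one 12
C: beta source: sample-tag-one 12
C: gamma source: sample-tag-two
EOF

# Tally how many packages carry each tag, most common first --
# a minimal dataset one could feed into a plotting tool.
awk -F': ' '/^[A-Z]: / { split($3, f, " "); count[f[1]]++ }
            END { for (t in count) print count[t], t }' lintian.log | sort -rn
```

This prints one count per tag, which is essentially the raw material the 2015 graphs were built from.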

Given my limited time available for Debian, I would totally welcome some help. I can probably take care of the second step (I actually did it recently on a subset of packages to check feasibility), but I would need:

  • The help of someone with Perl knowledge, willing to modify lintian to add additional classification tags. There's no need to be a Debian Developer, and lintian has an extensive test suite that should make it quite fun to hack on. The code could either be integrated in lintian, or live in a lintian fork that would only be used to generate this data.
  • Ideally (but that’s less important at this stage), the help of someone with web skills to generate a nice website.

Let me know if you are interested.

Bits from Debian: DebConf19 registration is open!

Planet Debian - Mër, 20/03/2019 - 8:30md

Registration for DebConf19 is now open. The event will take place from July 21st to 28th, 2019 at the Central campus of Universidade Tecnológica Federal do Paraná - UTFPR, in Curitiba, Brazil, and will be preceded by DebCamp, from July 14th to 19th, and an Open Day on the 20th.

DebConf is an event open to everyone, no matter how you identify yourself or how others perceive you. We want to increase the visibility of our diversity and work towards inclusion in the Debian Project, drawing our attendees from people just starting their Debian journey to seasoned Debian Developers and active contributors in different areas like packaging, translation, documentation, artwork, testing, specialized derivatives, user support and many others. In other words, all are welcome.

To register for the event, log into the registration system and fill out the form. You will be able to edit and update your registration at any point. However, in order to help the organisers have a better estimate of how many people will attend the event, we would appreciate it if you could access the system and confirm (or cancel) your participation in the Conference as soon as you know whether you will be able to come. The last day to confirm or cancel is June 14th, 2019 23:59:59 UTC. If you don't confirm or you register after this date, you can come to DebConf19 but we cannot guarantee availability of accommodation, food and swag (t-shirt, bag…).

For more information about registration, please visit Registration Information

Bursary for travel, accommodation and meals

In an effort to widen the diversity of DebConf attendees, the Debian Project allocates a part of the financial resources obtained through sponsorships to pay for bursaries (travel, accommodation, and/or meals) for participants who request this support when they register.

As resources are limited, we will examine the requests and decide who will receive the bursaries. They will be allocated:

  • To active Debian contributors.
  • To promote diversity: newcomers to Debian and/or DebConf, especially from under-represented communities.

Giving a talk, organizing an event or helping during DebConf19 is taken into account when deciding upon your bursary, so please mention them in your bursary application. DebCamp plans can be entered in the usual Sprints page at the Debian wiki.

For more information about bursaries, please visit Applying for a Bursary to DebConf

Attention: registration for DebConf19 will remain open until the Conference, but the deadline to apply for bursaries using the registration form is April 15th, 2019 23:59:59 UTC. This deadline is necessary to give the organisers time to analyze the requests, and to give successful applicants time to prepare for the conference.

To register for the Conference, either with or without a bursary request, please visit:

DebConf would not be possible without the generous support of all our sponsors, especially our Platinum Sponsors Infomaniak and Google. DebConf19 is still accepting sponsors; if you are interested, or think you know of others who would be willing to help, please get in touch!

Jonathan Carter: GitLab and Debian

Planet Debian - Mër, 20/03/2019 - 7:43md

As part of my DPL campaign, I thought that I’d break out a few items out in blog posts that don’t quite fit into my platform. This is the first post in that series.

When Debian was hunting for a new VCS-based collaboration suite in 2017, the administrators of the then current platform, called Alioth (which was a FusionForge instance) strongly considered adopting Pagure, a git hosting framework from Fedora. I was a bit saddened that GitLab appeared to be losing the race, since I’ve been a big fan of the project for years already. At least Pagure would be a huge improvement over the status quo and it’s written in Python, which I considered a plus over GitLab, so at least it wasn’t going to be all horrible.

The whole discussion around GitLab vs Pagure turned out to be really fruitful though. GitLab did some introspection around its big non-technical problems, especially concerning their contributor licence agreement, and made some major improvements which made GitLab a lot more suitable for large free software projects, which shortly led to its adoption by both the Debian project and the GNOME project. I think it's a great example of how open communication and engagement can help reduce friction and make things better for everyone. GitLab has since become even more popular and is now the de facto self-hosted git platform across all types of organisations.

Fun fact: I run a few GitLab instances myself, and often get annoyed with all my tab favicons looking the same, so the first thing I do is create a favicon for my GitLab instances. I’m also the creator of the favicon for the, it’s basically the GitLab logo re-arranged and mangled to be a crude representation of the debian swirl:

The move to GitLab had some consequences that I’m not sure was completely intended. For example, across the project, we used to use a whole bunch of different version control systems (git, bzr, mercurial, etc), but since GitLab only supports git, it has made git the gold standard in Debian too. For better or worse, I do think that it makes it easier for new contributors to get involved since they can contribute to different teams without having to learn a different VCS for each one.

I don’t think it’s a problem that some teams don’t use salsa (or even git, for that matter), but within salsa we have quite a number of team-specific workflows that I think could be documented a lot better, and in doing so we may be able to merge some of them so that things are more standardised.

When I started working on my DPL platform, I pondered whether I should host my platform in a git repository. I decided to go ahead and do so because it would make me more accountable since any changes I make can be tracked and tagged.

I also decided to run for DPL rather late, and prepared my platform under some pressure, making quite a few mistakes. In another twist of unintended consequences of using git, I woke up this morning to a pleasant surprise: 2 merge requests that fixed those mistakes.

I think GitLab is the best thing that has happened to Debian in a long time, and I think whoever becomes DPL should consider making both git and the a regular piece of the puzzle for new processes that are put in place. Git is becoming so ubiquitous that over time, it’s not even going to be something that an average person would need to learn anymore when getting involved in Debian and it makes sense to embrace it.

Jan Wagner: HAProxy - a journey into multithreading (and SSL)

Planet Debian - Mër, 20/03/2019 - 7:39md

I'm running some load balancers which are using HAProxy to distribute HTTP traffic to multiple systems.

While using SSL with HAProxy has been possible for some time, it wasn't in the early days. So for some customers who needed encryption, we decided to offload it to Apache.
When HAProxy later gained SSL support, keeping this setup still had benefits for larger sites, because HAProxy had a single-process model and doing encryption is indeed far more resource-consuming.
Still, using Apache for SSL offloading was a good choice because it comes with the threading-capable Multi-Processing Modules worker and event. We chose the event MPM because it should deal better with the 'keep alive problem' in HTTP. So far so good.

Last year some large setups started having trouble accepting new connections out of the blue. Unfortunately I found nothing in the logs and also couldn't reproduce the behaviour. After some time I decided to try another Apache MPM and switched over to the worker model. And guess what ... the connection issues vanished.
Some days later I was surprised to learn about the Apache bug in the Debian BTS, "Event MPM listener thread may get blocked by SSL shutdowns", which was an exact description of my problem.

While being back in safe waters, I thought it would be good to have a look at HAProxy again, and learned that threading support was added in version 1.8 and improved further in 1.9.
So we started to look into it on a system with a couple of real CPUs:

# grep processor /proc/cpuinfo | tail -1
processor : 19

At first we needed to install a newer version of HAProxy, since 1.8.x is available via backports but 1.9.x is not. I thought I should start with a simple configuration and keep 2 spare CPUs for other tasks:

global
    # one process
    nbproc 1
    # 18 threads
    nbthread 18
    # mapped to the first 18 CPU cores
    cpu-map auto:1/1-18 0-17

Now let's start:

# haproxy -c -V -f /etc/haproxy/haproxy.cfg
# service haproxy reload
# pstree haproxy
No processes found.
# grep "worker #1" /var/log/haproxy.log | tail -2
Mar 20 13:06:51 lb13 haproxy[22156]: [NOTICE] 078/130651 (22156) : New worker #1 (22157) forked
Mar 20 13:06:51 lb13 haproxy[22156]: [ALERT] 078/130651 (22156) : Current worker #1 (22157) exited with code 139 (Segmentation fault)

Okay .. cool! ;) So I started lowering the number of used CPUs, since without threading I did not experience segfaults. With 17 threads it seemed to be better:

# service haproxy restart
# pstree haproxy
haproxy---16*[{haproxy}]
# grep "worker #1" /var/log/haproxy.log | tail -2
Mar 20 13:06:51 lb13 haproxy[22156]: [ALERT] 078/130651 (22156) : Current worker #1 (22157) exited with code 139 (Segmentation fault)
Mar 20 13:14:33 lb13 haproxy[27001]: [NOTICE] 078/131433 (27001) : New worker #1 (27002) forked

Now I started to move traffic from Apache to HAProxy slowly, watching the logs carefully. As more and more traffic shifted over, the number of SSL handshake failure entries went up. While it was possible these were just some clients not supporting our ciphers and/or TLS versions, I had my doubts, but our own monitoring was unsuspicious. So I had a look at external monitoring, and after some time I caught some interesting errors:

error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error
error:140943FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac

The last time I had issues I lowered the thread count, so I did that again. And you might have guessed it already: this worked out. With 12 threads I had no issues anymore:

global
    # one process
    nbproc 1
    # 12 threads
    nbthread 12
    # mapped to the first 12 CPU cores (with more than 17 CPUs haproxy
    # segfaults, with 16 CPUs we have a high rate of SSL errors)
    cpu-map auto:1/1-12 0-11

So we got rid of SSL offloading and the proxy on localhost, with the downside that HAProxy fails 1 of the 146 h2spec tests (h2spec is a conformance-testing tool for HTTP/2 implementations), where Apache failed not a single one.

Antoine Beaupré: Securing registration email

Planet Debian - Mër, 20/03/2019 - 4:28md

I've been running my own email server basically forever. Recently, I've been thinking about possible attack vectors against my personal email. There's of course a lot of private information in that email address, and if someone manages to compromise my email account, they will see a lot of personal information. That's somewhat worrisome, but there are possibly more serious problems to worry about.

TL;DR: if you can, create a second email address to register on websites, and use stronger protections on that account than on your regular mail.

Hacking accounts through email

Strangely, what keeps me up at night is more the kind of damage an attacker could do to other accounts I hold with that email address. Because basically every online service is backed by an email address, someone who controls my email address can do a password reset on every account I have online. In fact, some authentication systems have given up on passwords altogether and use the email system itself for authentication, essentially using the "password reset" feature as the authentication mechanism.

Some services have protections against this: for example, GitHub requires a 2FA token when doing certain changes, which the attacker hopefully wouldn't have (although phishing attacks have been getting better at bypassing those protections). Other services will warn you about the password change, which might be useful, except the warning is usually sent... to the hacked email address, which doesn't help at all.

The solution: a separate mailbox

I had been using an extension ( to store registration mail in a separate folder for a while already. This allows me to bypass greylisting on the email address, for one. Greylisting is really annoying when you register on a service or do a password reset... The extension also allows me to sort those annoying emails in a separate folder automatically with a simple Sieve rule.
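Such a Sieve rule could look roughly like this (the folder name and the matched address are assumptions for illustration, not the actual rule):

```sieve
require ["fileinto"];

# file mail addressed to the registration alias into its own folder
if address :contains ["to", "cc"] "register" {
    fileinto "register";
    stop;
}
```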

More recently, I have been forced to use a completely different email alias ( on some services that dislike having plus signs (+) in email addresses, even though they are perfectly valid. That got me thinking about the security problem again: if I have a different alias, why not make it a completely separate account and harden that against intrusion? With a separate account, I could enforce things like SSH-only access or 2FA that would be inconvenient for my main email address when I travel, because I sometimes log into webmail, for example. Because I don't frequently need access to registration mail, it seemed like a good tradeoff.

So I created a second account, with a locked password and SSH-only authentication. That way the only way someone can compromise my "registration email" is by hacking my physical machine or the server directly, not by just bruteforcing a password.

Now of course I need to figure out which sites I'm registered on with a "non-registration" email ( before I thought of using the register@ alias, I sometimes used my normal address instead. So I'll have to track those down and reset them. But it seems I have already blocked a large attack surface with a very simple change, and that feels quite satisfying.

Implementation details

Using syncmaildir (SMD) to sync my email, the change was fairly simple. First I need to create a second SMD profile:

if [ $(hostname) = "marcos" ]; then
    exit 1
fi
SERVERNAME=smd-server-register
CLIENTNAME=$(hostname)-register
MAILBOX_LOCAL=Maildir/.register/
MAILBOX_REMOTE=Maildir
TRANSLATOR_LR="smd-translate -m move -d LR register"
TRANSLATOR_RL="smd-translate -m move -d RL register"
EXCLUDE="Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/*"

Very similar to the normal profile, except mails get stored in the already existing Maildir/.register/ and different SSH profile and translation rules are used. The new SSH profile is basically identical to the previous one:

# wrapper for smd
Host smd-server-register
    Hostname
    BatchMode yes
    Compression yes
    User register
    IdentitiesOnly yes
    IdentityFile ~/.ssh/id_ed25519_smd

Then we need to ignore the register folder in the normal configuration:

diff --git a/.smd/config.default b/.smd/config.default
index c42e3d0..74a8b54 100644
--- a/.smd/config.default
+++ b/.smd/config.default
@@ -59,7 +59,7 @@ TRANSLATOR_RL="smd-translate -m move -d RL default"
 # EXCLUDE_LOCAL="Mail/spam Mail/trash"
 # EXCLUDE_REMOTE="OtherMail/with%20spaces"
 #EXCLUDE="Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/*"
-EXCLUDE="Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/*"
+EXCLUDE="Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/* Maildir/.register/*"
 #EXCLUDE_LOCAL="$MAILBOX_LOCAL/.notmuch/hooks/* $MAILBOX_LOCAL/.notmuch/xapian/*"
 #EXCLUDE_REMOTE="$MAILBOX_REMOTE/.notmuch/hooks/* $MAILBOX_REMOTE/.notmuch/xapian/*"
 #EXCLUDE_REMOTE="Maildir/Koumbit Maildir/Koumbit* Maildir/Koumbit/* Maildir/Koumbit.INBOX.Archives/ Maildir/Koumbit.INBOX.Archives.2012/ Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/*"

And finally we add the new profile to the systemd services:

diff --git a/.config/systemd/user/smd-pull.service b/.config/systemd/user/smd-pull.service
index a841306..498391d 100644
--- a/.config/systemd/user/smd-pull.service
+++ b/.config/systemd/user/smd-pull.service
@@ -8,6 +8,7 @@ ConditionHost=!marcos
 Type=oneshot
 # --show-tags gives email counts
 ExecStart=/usr/bin/smd-pull --show-tags
+ExecStart=/usr/bin/smd-pull --show-tags register
 
 [Install]
diff --git a/.config/systemd/user/smd-push.service b/.config/systemd/user/smd-push.service
index 10d53c7..caa588e 100644
--- a/.config/systemd/user/smd-push.service
+++ b/.config/systemd/user/smd-push.service
@@ -8,6 +8,7 @@ ConditionHost=!marcos
 Type=oneshot
 # --show-tags gives email counts
 ExecStart=/usr/bin/smd-push --show-tags
+ExecStart=/usr/bin/smd-push --show-tags register
 
 [Install]

That's about it on the client side. On the server, the user is created with a locked password and the mailbox is moved over:

adduser --disabled-password register
mv ~anarcat/Maildir/.register/ ~register/Maildir/
chown -R register:register Maildir/

The SSH authentication key is added to .ssh/authorized_keys, and the alias is reversed:

--- a/aliases
+++ b/aliases
@@ -24,7 +24,7 @@ spamtrap: anarcat
 spampd: anarcat
 junk: anarcat
 devnull: /dev/null
-register: anarcat+register
+anarcat+register: register
 # various sandboxes
 anarcat-irc: anarcat

... and the email is also added to /etc/postgrey/whitelist_recipients.

That's it: I now have a hardened email service! Of course there are other ways to harden an email address. On-disk encryption comes to mind, but that only works with password-based authentication from what I understand, which is something I want to avoid in order to rule out bruteforce attacks.

Your advice and comments are of course very welcome, as usual.

Michal Čihař: translation-finder 1.1

Planet Debian - Mër, 20/03/2019 - 2:45md

The translation-finder module has been released in version 1.1. It is used by Weblate to detect translatable files in a repository, making the setup of translation components in Weblate much easier. This release brings a lot of improvements based on feedback from our users, making the detection more reliable and accurate.

Full list of changes:

  • Improved detection of translation with full language code.
  • Improved detection of language code in directory and file name.
  • Improved detection of language code separated by full stop.
  • Added detection for app store metadata files.
  • Added detection for JSON files.
  • Ignore symlinks during discovery.
  • Improved detection of matching pot files in several corner cases.
  • Improved detection of monolingual Gettext.

Filed under: Debian English SUSE Weblate

Reproducible builds folks: Reproducible Builds: Weekly report #203

Planet Debian - Mër, 20/03/2019 - 1:51md

Here’s what happened in the Reproducible Builds effort between Sunday March 10 and Saturday March 16 2019:

Don’t forget that Reproducible Builds is part of the May/August 2019 round of Outreachy, which offers paid internships to work on free software. Internships are open to applicants around the world and are paid a stipend for the three-month internship, with an additional travel stipend to attend conferences. So far, we have received more than ten initial requests from candidates, and the closing date for applications is April 2nd. More information is available on the application page.

Packages reviewed and fixed, and bugs filed

strip-nondeterminism

strip-nondeterminism is our tool that post-processes files to remove known non-deterministic output. This week, Chris Lamb:

Test framework development

We operate a comprehensive Jenkins-based testing framework that powers This week, the following changes were made:

  • Alexander Couzens (OpenWrt support):
    • Correct the arguments for the reproducible_openwrt_package_parser script. []
    • Copy over Package-* files when building. []
    • Fix the Packages.manifest parser. [] []
  • Mattia Rizzolo:

This week’s edition was written by Arnout Engelen, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Jonathan Dowland: First successful Amiga disk-dumping session

Planet Debian - Mër, 20/03/2019 - 12:04md

This is the seventh part in a series of blog posts. The previous post was Learning new things about my old Amiga A500.

X-COPY User Interface

Totoro Soot Sprites?

"Cyberpunk" party invitation

My childhood home

HeroQuest board game guide

I've finally dumped some of my Amiga floppies, and started to recover some old files! The approach I'm taking is to use the real Amiga to read the floppies (in the external floppy disk drive) and then copy them onto a virtual floppy disk image on the Gotek Floppy Emulator. I use X-COPY to perform the copy (much as I would have done back in 1992).

FlashFloppy's default mode of operation is to scan over the filesystem on the attached USB and assign a number to every disk image that it discovers (including those in sub-folders). If your Gotek device has the OLED display, then it reports the path to the disk image to you; but I have the simpler model that simply displays the currently selected disk slot number.

For the way I'm using it, its more basic "indexed" mode fits better: you name files in the root of the USB's filesystem using a sequential scheme starting at DSKA0000.ADF (which corresponds to slot 0) and it's then clear which image is active at any given time. I set up the banks with Workbench, X-COPY and a series of blank floppy disk images to receive the real contents, which I was able to generate using FS-UAE (they aren't just full of zeroes).
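The indexed naming scheme is easy to script; this sketch pre-populates the first 20 slots from a template image (here a zero-filled placeholder stands in for a real OFS-formatted blank made with FS-UAE, since, as noted, a valid blank isn't just zeroes):

```shell
# placeholder template: a real one would be an OFS-formatted image from FS-UAE
dd if=/dev/zero of=blank.adf bs=1024 count=880 status=none

# FlashFloppy indexed mode: slot N maps to DSKAnnnn.ADF (zero-padded)
for slot in $(seq 0 19); do
    cp blank.adf "$(printf 'DSKA%04d.ADF' "$slot")"
done
```

The 880 KiB size matches a standard double-density Amiga floppy.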

A few weeks ago I had a day off work and spent an hour in the morning dumping floppies. I managed to dump around 20 floppies successfully, with only a couple of unreadable disks (from my collection of 200). I've prioritised home-made disks, in particular ones that are likely to contain user-made content rather than just copies of commercial disks. But in some cases it's hard to know for sure what's on a disk, and sometimes I've made copies of e.g. Deluxe Paint and subsequently added home-made drawings on top.

Back on my laptop, FS-UAE can quite happily read the resulting disk images, and Deluxe Paint IV via FS-UAE can happily open the drawings that I've found (and it was a lot of fun to fire up DPaint for the first time in over 20 years. This was a really nice piece of software. I must have spent days of my youth exploring it).

I tried a handful of user-mode tools for reading the disk images (OFS format) but they all had problems. In the end I just used the Linux kernel's AFFS driver and loop-back mounts. (I could have looked at libguestfs instead).
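For reference, the kernel route is just a loop-back mount; a sketch (it needs root, a kernel with AFFS support, and the image path and mount point are only illustrative):

```shell
# mount a dumped ADF read-only via the kernel's AFFS filesystem driver
mkdir -p /mnt/adf
mount -t affs -o loop,ro DSKA0000.ADF /mnt/adf
ls -l /mnt/adf
umount /mnt/adf
```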

To read Deluxe Paint image files on a modern Linux system one can use ImageMagick (via netpbm back-end) or ffmpeg. ffmpeg can also handle Deluxe Paint animation files, but more care is needed with these: It does not appear to correctly convert frame durations, setting the output animations to a constant 60fps. Given the input image format colour depth, it's tempting to output to animated GIF, rather than a lossy video compression format, but from limited experimentation it seems some nuances of the way that palettes are used in the source files are not handled optimally in the output either. More investigation here is required.

Enjoy a selection of my childhood drawings…

Jonathan Dowland: WadC 3.0

Planet Debian - Mër, 20/03/2019 - 11:55pd

blockmap.wl being reloaded (click for animation)

A couple of weeks ago I released version 3.0 of Wad Compiler, a lazy functional programming language and IDE for the construction of Doom maps.

3.0 introduces more flexible randomness with rand; two new test maps (blockmap and bsp) that demonstrate approaches to random dungeon generation; some useful data structures in the library; better Hexen support and a bunch of other improvements.

Check the release notes for the full details, and check out the gallery of examples to see the kind of things you can do.

Version 3.0 of WadC is dedicated to Lu (1972-2019). RIP.

Neil McGovern: GNOME ED Update – February

Planet Debian - Mar, 19/03/2019 - 11:43md

Another update is now due from what we’ve been doing at the Foundation, and we’ve been busy!

As you may have seen, we’ve hired three excellent people over the past couple of months. Kristi Progri has joined us as Program Coordinator, Bartłomiej Piorski as a devops sysadmin, and Emmanuele Bassi as our GTK Core developer. I hope to announce another new hire soon, so watch this space…

There’s been quite a lot of discussion around the Google API access, and GNOME Online Accounts. The latest update is that I submitted the application to Google to get GOA verified, and we’ve got a couple of things we’re working through to get this sorted.

Events all round!

Although the new year’s conference season is just kicking off, it’s been a busy one for GNOME already. We were at FOSDEM in Brussels where we had a large booth, selling t-shirts, hoodies and of course, the famous GNOME socks. I held a meeting of the Advisory Board, and we had a great GNOME Beers event – kindly sponsored by Codethink.

We also had a very successful GTK Hackfest – moving us one step closer to GTK 4.0.

Coming up, we’ll have a GNOME booth at:

  • SCALEx17 – Pasadena, California (7th – 10th March)
  • LibrePlanet – Boston Massachusetts (23rd – 24th March)
  • FOSS North – Gothenburg, Sweden (8th – 9th April)
  • Linux Fest North West – Bellingham, Washington (26th – 28th April)

If you’re at any of these, please come along and say hi! We’re also planning out events for the rest of the year. If anyone has any particularly exciting conferences we may not have heard of, please let us know.


It hasn’t yet been announced, but we’re trialling an instance of Discourse for the GTK and Engagement teams. It is hoped that this may replace mailman, but we’re being quite careful to make sure that email integration continues to work. Expect more information about this in the coming month. If you want to go have a look, the instance is available at

Keith Packard: metro-snek

Planet Debian - Mar, 19/03/2019 - 9:08md
MetroSnek — snek on Metro M0 Express

When I first mentioned Snek a few months ago, Phillip Torrone from Adafruit pointed me at their Metro M0 board, which uses an Arduino-compatible layout but replaces the ATMega 328P with a SAMD21G18A. This chip is an ARM Cortex M0 part with 256kB of flash and 32kB of RAM. Such space!

Even though there is already a usable MicroPython port for this board, called CircuitPython, I figured it would be fun to get Snek running as well. The CircuitPython build nearly fills the chip, so the CircuitPython boards all include an off-chip flash part for storing applications. With Snek, there will be plenty of space inside the chip itself for source code, so one could build a cheaper/smaller version without the extra part.

UF2 Boot loader

I decided to leave the existing boot loader in place instead of replacing it with the AltOS version. This makes it easy to swap back to CircuitPython without needing any custom AltOS tools.

The Metro M0 Express boot loader is reached by pressing the reset button twice; it's pretty sweet in exposing a virtual storage device with a magic file, CURRENT.UF2, into which you write the ROM image. You write a UF2 formatted file to this name and the firmware extracts the data on the fly and updates the flash in the device. Very slick.

To make this work with AltOS, I had to adjust the start location of the operating system to 0x2000 and leave a bit of space at the end of ROM and RAM clear for the boot loader to use.

Porting AltOS

I already have an embedded operating system that works on Cortex M0 parts, AltOS, which I've been developing for nearly 10 years for use in rocketry and satellite applications. It's also what powers ChaosKey.

Getting AltOS running on another Cortex M0 part is a simple matter of getting clocks running and writing drivers.

What I haven't really settled on is whether to leave this code as a part of AltOS, or to pull the necessary bits into the Snek repository and doing a bare-metal implementation.

I've set up the Snek distribution to make integrating it into another operating system simple; that's how the NuttX port works, for instance. It does make the build process more complicated as you have to build and install Snek, then build AltOS for the target device.

SAMD21 Clocks

Every SoC has a different way of configuring and wiring clocks within the system. Most that I've used have a complex clock-tree that you plug various configuration values into to generate clocks for the processor and peripherals.

The SAMD21 is simpler in offering a set of general-purpose clock controllers that can source a variety of clock signals and divide them by an integer. The processor uses clock controller 0; all of the other peripherals can be configured to use any clock controller you like.

The Metro M0 express and Feather M0 express have only a 32.768kHz crystal; they don't have a nice even-MHz crystal connected to the high-speed oscillator. As a result, to generate a '48MHz' clock for the processor and USB controller, I ended up multiplying the 32.768kHz frequency by 1464 using a PLL to generate a 47.972352MHz signal, which is about 0.06% low. Close enough for USB to work.
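The arithmetic above is easy to sanity-check; nothing here is board-specific, just the multiplication and the deviation from a nominal 48MHz:

```shell
# 32.768 kHz crystal times a PLL multiplier of 1464, and how far below 48 MHz that lands
awk 'BEGIN { f = 32768 * 1464
             printf "%d Hz, %.2f%% low\n", f, (48000000 - f) / 48000000 * 100 }'
# prints: 47972352 Hz, 0.06% low
```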

At first, I typo'd a register value, leaving the PLL unlocked. The processor still ran fine, but when I looked at the clock with my oscilloscope, it was very ragged, with a mean frequency around 30MHz. It took a few hours to track down the incorrect value, at which point the clock stabilized at about 48MHz.


USART

Next on the agenda was getting a USART to work; nothing terribly complicated there, aside from the clock problem mentioned above, which generated a baud rate of around 6000 instead of 9600.

I like getting a USART working because it's usually (always?) easier than USB, plus demonstrates that clocking is working as expected. I can debug serial data with a simple logic analyzer. This time, the logic analyzer is how I discovered the clocking issue -- a bit time of 166µs does not equal 9600 baud.


USB

While I like having USB on-chip in the abstract, the concrete adventure of implementing USB for a new chip is always fraught with peril. In this case, the chip documentation was missing a couple of key details that I had to discover experimentally.

I'm still trying to come up with an abstraction for writing USB drivers for small systems; every one is different enough that I keep using copy&paste instead of building a driver core on top of hardware-specific primitives. In this case, the USB driver is 883 lines of code; the second shortest in AltOS with the ATMega32u4 driver being slightly smaller.


The only hardware that works today is one USART and USB. I also got Snek compiled and running. Left to do:

  • Digital GPIO controls. I've got basic GPIO functionality available in the underlying operating system, but it isn't exposed through Snek yet.

  • Analog outputs. This will involve hooking timers to outputs so that we can PWM them.

  • Analog inputs. That requires getting an ADC driver written and then hooking that into Snek.

  • On-board source storage. I think the ATMega model of storing just one set of source code on the device and reading that at boot time is clean and simple, so I want to do the same here. I think it will be simpler to use the on-chip flash instead of the external flash part. That means reserving a specific chunk of that for source code.

  • Figure out whether this code is part of AltOS, or part of Snek.


