Planet Debian

https://planet.debian.org/

Sylvain Beucler: RenPyWeb - Ren'Py in your HTML5 web browser

Tue, 19/02/2019 - 7:38pm

I like the Ren'Py project, a popular game engine aimed at Visual Novels - that can also be used as a portable Python environment.

One limitation was that it required downloading games, while nowadays people are used to Flash- or HTML5- based games that play in-browser without having to (de)install.

Can this be fixed? While maintaining compatibility with Ren'Py's several DSLs? And without rewriting everything in JavaScript?
Can Emscripten help? While this is a Python/Cython project?
After lots of experimenting, and full-stack patching/contributing, it turns out the answer is yes!

Live demo:
https://renpy.beuc.net/

At last I finished organizing and cleaning-up, published under a permissive free software / open source license, like Python and Ren'Py themselves.
Python port:
https://www.beuc.net/python-emscripten/python/dir?ci=tip
Build system:
https://github.com/renpy/renpyweb

Development is ongoing; consider supporting the project!
https://www.patreon.com/Beuc

Reproducible builds folks: Reproducible Builds: Weekly report #199

Tue, 19/02/2019 - 1:03pm

Here’s what happened in the Reproducible Builds effort between Sunday February 10th and Saturday February 16th 2019:

  • strip-nondeterminism is our tool that post-processes files to remove known non-deterministic output. This week, Chris Lamb adjusted its behaviour to deduplicate hardlinks via stat(2) before processing to avoid issues when handling files in parallel; as the per-filetype handlers are not currently guaranteed to be atomic, one process could temporarily truncate a file, which can cause errors in other processes operating on the “same” file under a different pathname. This was causing package build failures in packages that de-duplicate hardlinks in their build process, such as the Debian Administrator’s Handbook (#922168).

  • There was a brief update from the Debian Ruby maintainers on whether the language might need to strip -fdebug-prefix-map from the tools used to build extensions.

  • On our mailing list, Holger Levsen re-raised a question regarding uploading the “official” .buildinfo files to buildinfo.debian.net.

  • On Tuesday 26th February Chris Lamb will speak at Speck&Tech 31 “Open Security” on Reproducible Builds in Trento, Italy.

  • Jelle van der Waa fixed some spelling mistakes on the reproducible-builds.org project website. []

  • 6 Debian package reviews were added, 4 were updated and 16 were removed this week, adding to our knowledge about identified issues.

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages. This week:

  • Chris Lamb:
    • Add support for comparing .crx Chrome browser extensions. (#41)
    • Add support for comparing MP3 and files with similar metadata. (#43)
    • Replace the literal xxd(1) output (!)(!) in tests/data/hello.wasm with its binary equivalent (#47) and ensure both WebAssembly test data files are actually unique. (#42)
    • Catch tracebacks when mounting invalid filesystem images under guestfs. []
    • Fix tests when using Ghostscript 9.20 vs 9.26 for Debian stable and for stable with the security repositories enabled. [][]
    • Temporarily drop ubuntu-devel from internal test matrix due to a linux-firmware package installation issue. []
  • Ed Maste:
    • Include relocations in objdump disassembly. (#48)
  • Graham Christensen:
    • Clarify “no file-specific differences” message when we fallback to a binary diff. (!19)
  • Mattia Rizzolo:
    • Make test_ps.test_text_diff pass with Ghostscript version 9.26. []

In addition, Vagrant Cascadian updated diffoscope in GNU Guix [] and went on to upload disorderfs [] and trydiffoscope [] too.

Packages reviewed and fixed, and bugs filed

Test framework development

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org.

  • Hans-Christoph Steiner:
    • Set the LANG and LC_ALL environment variables for F-Droid builds to work around an unsolved issue in Java/Gradle. [][]
    • Modernise some dependencies. []
    • Node maintenance. []
  • Holger Levsen:
    • Increased the diskspace for the two OSU Open Source Lab Arch Linux build nodes from 50GB to 350GB.
    • Upgraded all 47 nodes running Debian to the newly-released Debian 9.8.
    • Fix the version checking for diffoscope in Arch Linux. []
    • Install kernels as a separate step to ignore failures when installing/upgrading Debian backports’ kernels. []
    • Fix a number of issues with our Munin diskspace plugin. [][]
    • Correct grammar of Arch Linux IRC message. []
  • Mattia Rizzolo:

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Russ Allbery: INN 2.6.3

Mon, 18/02/2019 - 11:30pm

INN 2.6.3 has been released. This is a bug fix and minor feature release over INN 2.6.2, and the upgrade should be painless. The main ISC downloads page will be updated shortly; in the meantime, you can download the new release from ftp.isc.org or my personal INN pages. The latter also has links to the full changelog and the other INN documentation.

The big change in this release is support for Python 3. Embedded Python filtering and authentication hooks for innd and nnrpd can now use version 3.3.0 or later of the Python interpreter. Python 2.x is still supported (2.3.0 or later).

Also fixed in this release are several issues with TLS: elliptic curve selection has been fixed, a new configuration parameter allows fine-tuning the cipher suites used with TLS 1.3, and failed TLS session negotiations are now logged better. This release also returns the error message from Python and Perl filter hook rejects to CHECK and TAKETHIS commands and fixes various other, more minor bugs.

As always, thanks to Julien ÉLIE for preparing this release and doing most of the maintenance work on INN!

Chris Lamb: Book Review: Painting the Sand

Mon, 18/02/2019 - 7:43pm

Painting the Sand (2017)

Kim Hughes

Staff Sergeant Kim Hughes is a bomb disposal operator in the British Army undergoing a gruelling six-month tour of duty in Afghanistan during which he defuses over 100 improvised explosive devices, better known as IEDs.

Opening cold in the heat of the desert, it begins extremely strongly with a set piece of writing that his editor should be proud of. The book contains colourful detail throughout and readers will quickly feel like "one of the lads" with the shibboleth military lingo and acronyms. Despite that, this brisk and no-nonsense account — written in that almost-stereotypical squaddie's tone of voice — is highly accessible, and furthermore is refreshing culturally speaking given the paucity of stories in the zeitgeist recounting British derring-do in "Afghan", to adopt Hughes' typical truncation of the country.

However, apart from a few moments (such as when the Taliban adopt devices undetectable with a metal detector) the tension deflates slowly rather than mounting like a lit fuse. For example, I would have wished for a bit more suspense and drama when they were being played, trapped or otherwise manipulated by the Taliban for once, and only so much of that meekness can be attributed to Hughes' self-effacing and humble manner. Indeed, the book was somewhat reminiscent of The Secret Barrister in that the ending is a little too quick for my taste, rushing through "current" events and becoming a little too self-referential in parts, this comparison between the two works being aided by both writers having somewhat of a cynical affect about them.

One is left with the distinct impression that Hughes went to Afghan to do a job. He gets it done with minimal fuss and no unnecessary nonsense … and that's exactly what he does as a memoirist.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, January 2019

Mon, 18/02/2019 - 3:05pm

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In January, about 204.5 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 12 hours (out of 12 hours allocated).
  • Antoine Beaupré did 9 hours (out of 20.5 hours allocated, thus keeping 11.5h extra hours for February).
  • Ben Hutchings did 24 hours (out of 20 hours allocated plus 5 extra hours from December, thus keeping one extra hour for February).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 42.5 hours (out of 20.5 hours allocated + 25.25 extra hours, thus keeping 3.25 extra hours for February).
  • Hugo Lefeuvre did 20 hours (out of 20 hours allocated).
  • Lucas Kanashiro did 5 hours (out of 4 hours allocated plus one extra hour from December).
  • Markus Koschany did 20.5 hours (out of 20.5 hours allocated).
  • Mike Gabriel did 10 hours (out of 10 hours allocated).
  • Ola Lundqvist did 4.5 hours (out of 8 hours allocated + 6.5 extra hours, thus keeping 8 extra hours for February, as he also gave 2h back to the pool).
  • Roberto C. Sanchez did 10.75 hours (out of 20.5 hours allocated, thus keeping 9.75 extra hours for February).
  • Thorsten Alteholz did 20.5 hours (out of 20.5 hours allocated).
Evolution of the situation

In January we again managed to dispatch all available hours (well, except one) to contributors. We also still had one new contributor in training, though starting in February Adrian Bunk has become a regular contributor. But we will lose another contributor in March, so we are still very much looking for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The security tracker currently lists 40 packages with a known CVE and the dla-needed.txt file has 42 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Thomas Lange: Netplan support in FAI

Mon, 18/02/2019 - 2:34pm

The new version FAI 5.8.1 now generates the configuration file for Ubuntu's netplan tool. It's a YAML description for setting up the network devices, replacing the /etc/network/interfaces file. The FAI CD/USB installation image for Ubuntu now offers two different variants to be installed, Ubuntu desktop and Ubuntu server without a desktop environment. Both are using Ubuntu 18.04 aka Bionic Beaver.
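For illustration, a minimal netplan file enabling DHCP on one interface might look like this (a generic sketch, not necessarily what FAI generates; the interface name is hypothetical):

# /etc/netplan/01-netcfg.yaml (hypothetical example)
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: true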

FAI 5.8.1 also improves UEFI support for network installations. UEFI boot is still missing for the ISO images.

The FAI ISO images are available from [1]. The FAIme build service [2] for customized cloud and installation images also uses the newest FAI version.

[1] https://fai-project.org/fai-cd/

[2] https://fai-project.org/FAIme


Niels Thykier: Making debug symbols discoverable and fetchable

Sun, 17/02/2019 - 10:03pm

Michael wrote a few days ago about the experience of debugging programs on Debian. And he is certainly not the only one who has found it difficult to find debug symbols on Linux systems in general.

But fortunately, it is a fixable problem. Basically, we just need a service to map a build-id to a downloadable file containing that build-id. You can find the source code of my prototype of such a dbgsym service on salsa.debian.org.

It exposes one API endpoint, “/api/v1/find-dbgsym”, which accepts a build-id and returns some data about that build-id (or HTTP 404 if we do not know the build-id).  An example:

$ curl --silent http://127.0.0.1:8000/api/v1/find-dbgsym/5e625512829cfa9b98a8d475092658cb561ad0c8/ | python -m json.tool
{
    "package": {
        "architecture": "amd64",
        "build_ids": [
            "5e625512829cfa9b98a8d475092658cb561ad0c8"
        ],
        "checksums": {
            "sha1": "a7a38b49689031dc506790548cd09789769cfad3",
            "sha256": "3706bbdecd0975e8c55b4ba14a481d4326746f1f18adcd1bd8abc7b5a075679b"
        },
        "download_size": 18032,
        "download_urls": [
            "https://snapshot.debian.org/archive/debian-debug/20161028T101957Z/pool/main/6/6tunnel/6tunnel-dbgsym_0.12-1_amd64.deb"
        ],
        "name": "6tunnel-dbgsym",
        "version": "1:0.12-1"
    }
}

Notice how it includes a download URL and a SHA256 checksum, so you can download the package containing the build-id directly and verify the download. The sample_client.py included in the repo does that and might be a useful basis for others interested in developing a client for this service.
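For illustration, a rough sketch of such a client in Python, based on the example response above (error handling is simplified, and the base URL and helper name are just the ones from the example):

import hashlib
import requests

def fetch_dbgsym(build_id, base="http://127.0.0.1:8000"):
    # Query the service; HTTP 404 means the build-id is unknown.
    resp = requests.get("%s/api/v1/find-dbgsym/%s/" % (base, build_id))
    resp.raise_for_status()
    pkg = resp.json()["package"]
    # Download the dbgsym package and verify the advertised checksum.
    deb = requests.get(pkg["download_urls"][0]).content
    if hashlib.sha256(deb).hexdigest() != pkg["checksums"]["sha256"]:
        raise ValueError("checksum mismatch for %s" % pkg["name"])
    return pkg["name"], deb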


To seed the database, so it can actually answer these queries, there is a bulk importer that parses Packages files from the Debian archive (for people testing: the ones from debian-debug archive are usually more interesting as they have more build-ids).

Possible improvements
  • Have this service deployed somewhere on the internet rather than the loopback interface on my machine.
  • The concept is basically distribution agnostic (Michael’s post in fact links to a similar service) and this could be a standard service/tool for all Linux distributions (or even just organizations).  I am happy to work with people outside Debian to make the code useful for their distribution (including non-Debian based distros).
    • The prototype was primarily based around Debian because it was my point of reference (plus what I had data for).
  • The bulk importer could (hopefully) be faster on new entries.
  • The bulk importer could import the download urls as well, so we do not have to fetch the relevant data online the first time when people are waiting.
  • Most of the django code / setup could probably have been done better as this has been an excuse to learn django as well.

Patches and help welcome.

Kudos

This prototype would not have been possible without python3, django, django’s restframework, python’s APT module, Postgresql, PoWA and, of course, snapshot.debian.org (including its API).

Birger Schacht: Sway in experimental

Sun, 17/02/2019 - 8:52pm

A couple of days ago the 1.0-RC2 version of Sway, a Wayland compositor, landed in Debian experimental. Sway is a drop-in replacement for the i3 tiling window manager, but for Wayland. Drop-in replacement means that, apart from minor adaptions, you can reuse your existing i3 configuration file for Sway. On the Sway website you can find a short introduction video that shows the most basic concepts of using Sway, though if you have worked with i3 you will feel at home soon.

In the video the utility swaygrab is mentioned, but this tool is not part of Sway anymore. There is another screenshot tool now, though, called grim, which you can combine with the tool slurp if you want to select regions for screenshots. The video also mentions swaylock, which is a screen locking utility similar to i3lock. It was split out of the main Sway release a couple of weeks ago, but there also exists a Debian package by now. And there is a package for swayidle, an idle management daemon, which comes in handy for locking the screen or for turning off your display after a timeout. If you need a clipboard manager, you can use wl-clipboard. There is also a notification daemon called mako (the Debian package is called mako-notifier and is in NEW) and if you don't like the default swaybar, you can have a look at waybar (not yet in Debian, see this RFS). If you want to get in touch with other Sway users there is a #sway IRC channel on freenode. For some tricks for setting up Sway you can browse the wiki.
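For example, a region screenshot with grim and slurp can be taken like this (the output path is arbitrary):

# slurp prints the interactively selected region; grim captures it
grim -g "$(slurp)" ~/screenshot.png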

If you want to try Sway, beware that it is a release candidate and there are still bugs. I've been using Sway for a couple of months, and though I had crashes when it was still 1.0-beta.1, I haven't had any since beta.2. But I'm using a pretty conservative setup.

Sway was started by Drew DeVault, who is also the upstream maintainer of wlroots, the Wayland compositor library Sway is using, and whom some might know from his sourcehut project (LWN Article). He also just published an article about Wayland misconceptions. The upstream of grim, slurp and mako is Simon Ser, who also contributes to Sway. A lot of thanks for the Debian packaging is due to nicoo, who did most of the heavy lifting, and to Sean for having patience when reviewing my contributions. Also thanks to Guido for maintaining wlroots!

Andrew Cater: Debian 9.8 released : Desktop environments and power settings

Sun, 17/02/2019 - 4:12pm
Debian 9.8 - the latest update to Debian Stretch - was released yesterday. Updated installation media can be found at the Debian CD netinst page, for example. As part of the testing, I was using a very old i686 laptop which powers down at random intervals because the battery is old. It tends to suspend almost immediately. I found that the power management settings for the Cinnamon desktop were hard to find; using a MATE desktop allowed me to find the appropriate settings in the Debian menu layout much more easily. Kudos to Steve McIntyre and Andy Simpkins (amongst others) for such a good job producing and testing the Debian CDs.

Jonathan Dowland: embedding Haskell in AsciiDoc

Sat, 16/02/2019 - 11:50pm

I'm a fan of the concept of Literate Programming (I explored it a little in my Undergraduate Dissertation a long time ago), which can be briefly (if inadequately) summarised as follows: the normal convention for computer code is that, by default, the text within a source file is considered to be code; comments (or, human-oriented documentation) are exceptional and must be demarked in some way (such as via a special symbol). Literate Programming (amongst other things) inverts this: by default, the text in a source file is treated as comments and ignored by the compiler; code must be specially delimited.

Haskell has built-in support for this scheme: by naming your source code files .lhs, you can make use of one of two conventions for demarking source code: either prefix each source code line with a chevron (called Bird-style, after Richard Bird), or wrap code sections in a pair of delimiters \begin{code} and \end{code} (TeX-style, because it facilitates embedding Haskell into a TeX-formatted document).
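For example, a Bird-style literate Haskell file mixes prose and chevron-prefixed code like this (a minimal illustration using the same function as the snippets below):

This line is prose and is ignored by the compiler.

> next a = if a == maxBound then minBound else succ a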

For various convoluted reasons I wanted to embed Haskell into an AsciiDoc-formatted document and I couldn't use Bird-style literate Haskell, which would be my preference. The AsciiDoc delimiter for a section of code is a line of dash symbols, which can be interleaved with the TeX-style delimiters:

------------
\begin{code}
next a = if a == maxBound then minBound else succ a
\end{code}
------------

Unfortunately the TeX-style delimiters show up in the output once the AsciiDoc is processed. Luckily, we can swap the order of the AsciiDoc and Literate-Haskell delimiters, because the AsciiDoc ones are treated as a source-code comment by Haskell and ignored. This moves the visible TeX-style delimiters out of the code block, which is a minor improvement:

\begin{code}
------------
next a = if a == maxBound then minBound else succ a
------------
\end{code}

We can disguise the delimiters outside of the code block further by defining an empty AsciiDoc macro called "code". Macros are marked up with surrounding braces, leaving just stray \begin and \end tokens in the text. Towards the top of the AsciiDoc file, in the pre-amble:

= Document title
Document author
:code:

This could probably be further improved by some AsciiDoc markup to change the style of the text outside of the code block immediately prior to the \begin token (perhaps make the font 0pt or the text colour the same as the background colour) but this is legible enough for me, for now.

The resulting file can be fed to an AsciiDoc processor (like asciidoctor, or interpreted by GitHub's built-in AsciiDoc formatter) and to a Haskell compiler. Unfortunately GitHub insists on a .adoc extension to interpret the file as AsciiDoc; GHC insists on a .lhs extension to interpret it as Literate Haskell (who said extensions were semantically meaningless these days…). So I commit the file as .adoc for GitHub's benefit and maintain a local symlink with a .lhs extension for my own use.

Finally, some Haskell code needs to be present in the file for it to work as Haskell source, but is not interesting to include in the document. This can be achieved by switching from the code delimiter to AsciiDoc comment delimiters on the outside:

////////////
\begin{code}
utilityFunction = "necessary but not interesting for the document"
\end{code}
////////////

You can see an example of a combined AsciiDoc-Haskell file here (which is otherwise a work in progress):

https://github.com/jmtd/striot/blob/0f40d110f366ccfe8c4f07b76338ce215984113b/writeup.adoc

Vasudev Kamath: Note to Self: Growing the Root File System on First Boot

Sat, 16/02/2019 - 5:43pm

These are the two use cases I came across for expanding the root file system.

  1. RaspberryPi images, which come in a small image size like 1GB; you write them to bigger SD cards like 32GB, but you want to use the full 32GB of space when the system comes up.
  2. You have a VM image which contains a basic server operating system, and you want to provision it as a VM with a much larger root file system.

My current use case was the second, but I learnt the trick from the first, namely the RaspberryPi3 image spec by the Debian project.

The idea behind expanding the root file system is to first expand the root partition to the full available size and then run resize2fs on the expanded partition to grow the file system. resize2fs is a tool specific to the ext2/3/4 file systems. But this needs to be done before the file system is mounted.

Here is my modified script from the raspi3-image-spec repo. The only difference is that I've changed the logic for extracting the root partition device to suit my needs, and of course added comments based on my understanding.

#!/bin/sh

# Just extracts root partition and removes partition number to get the device
# name eg. /dev/sda1 becomes /dev/sda
roottmp=$(lsblk -l -o NAME,MOUNTPOINT | grep '/$')
rootpart=/dev/${roottmp%% */}
rootdev=${rootpart%1}

# Use sfdisk to extend partition to all available free space on device.
flock $rootdev sfdisk -f $rootdev -N 2 <<EOF
,+
EOF

sleep 5

# Wait for all pending udev events to be handled
udevadm settle

sleep 5

# detect the changes to partition (we extended it).
flock $rootdev partprobe $rootdev

# remount the root partition in read write mode
mount -o remount,rw $rootpart

# Finally grow the file system on root partition
resize2fs $rootpart

exit 0

raspi3-image-spec uses a systemd service file to execute this script just before any file system is mounted. This is done by making the service execute before the local-fs.pre target. From the man page for systemd.special:

local-fs.target
systemd-fstab-generator(3) automatically adds dependencies of type Before= to all mount units that refer to local mount points for this target unit. In addition, it adds dependencies of type Wants= to this target unit for those mounts listed in /etc/fstab that have the auto mount option set.

The service also disables itself after executing, to avoid re-runs on every boot. I've used the service file from raspi3-image-spec as is.
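For reference, the shape of such a unit is roughly the following; this is a hypothetical sketch, not the exact file from raspi3-image-spec:

[Unit]
# Hypothetical sketch: run before any local file system is mounted
DefaultDependencies=no
Before=local-fs.pre.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/rpi3-resizerootfs
# Disable the service afterwards so it only runs on first boot
ExecStart=/bin/systemctl disable rpi3-resizerootfs.service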

Testing with VM

raspi3-image-spec is well tested, but I wanted to make sure this works with my use case for a VM. Since I didn't have any spare physical disks to experiment with, I used kpartx with raw file images. Here is what I did:

  1. Created a stretch image using vmdb2 with grub installed. Image size is 1.5G
  2. I created another raw disk using fallocate of 4G size.
  3. I created a partition on 4G disk.
  4. Loop mounted the disk and wrote the 1.5G image onto it using dd
  5. Finally created a VM using virt-install with this loop mounted device as root disk.

Below is my vmdb configuration yml, again derived from the raspi3-image-spec one with some modifications to suit my needs.

# See https://wiki.debian.org/RaspberryPi3 for known issues and more details.

steps:
  - mkimg: "{{ output }}"
    size: 1.5G

  - mklabel: msdos
    device: "{{ output }}"

  - mkpart: primary
    device: "{{ output }}"
    start: 0%
    end: 100%
    tag: /

  - kpartx: "{{ output }}"

  - mkfs: ext4
    partition: /
    label: RASPIROOT

  - mount: /

  - unpack-rootfs: /

  - debootstrap: stretch
    mirror: http://localhost:3142/deb.debian.org/debian
    target: /
    variant: minbase
    components:
      - main
      - contrib
      - non-free
    unless: rootfs_unpacked

  # TODO(https://bugs.debian.org/877855): remove this workaround once
  # debootstrap is fixed
  - chroot: /
    shell: |
      echo 'deb http://deb.debian.org/debian buster main contrib non-free' > /etc/apt/sources.list
      apt-get update
    unless: rootfs_unpacked

  - apt: install
    packages:
      - ssh
      - parted
      - dosfstools
      - linux-image-amd64
    tag: /
    unless: rootfs_unpacked

  - grub: bios
    tag: /

  - cache-rootfs: /
    unless: rootfs_unpacked

  - shell: |
      echo "experimental" > "${ROOT?}/etc/hostname"

      # '..VyaTFxP8kT6' is crypt.crypt('raspberry', '..')
      sed -i 's,root:[^:]*,root:..VyaTFxP8kT6,' "${ROOT?}/etc/shadow"

      sed -i 's,#PermitRootLogin prohibit-password,PermitRootLogin yes,g' "${ROOT?}/etc/ssh/sshd_config"

      install -m 644 -o root -g root fstab "${ROOT?}/etc/fstab"

      install -m 644 -o root -g root eth0 "${ROOT?}/etc/network/interfaces.d/eth0"

      install -m 755 -o root -g root rpi3-resizerootfs "${ROOT?}/usr/sbin/rpi3-resizerootfs"
      install -m 644 -o root -g root rpi3-resizerootfs.service "${ROOT?}/etc/systemd/system"
      mkdir -p "${ROOT?}/etc/systemd/system/systemd-remount-fs.service.requires/"
      ln -s /etc/systemd/system/rpi3-resizerootfs.service "${ROOT?}/etc/systemd/system/systemd-remount-fs.service.requires/rpi3-resizerootfs.service"

      install -m 644 -o root -g root rpi3-generate-ssh-host-keys.service "${ROOT?}/etc/systemd/system"
      mkdir -p "${ROOT?}/etc/systemd/system/multi-user.target.requires/"
      ln -s /etc/systemd/system/rpi3-generate-ssh-host-keys.service "${ROOT?}/etc/systemd/system/multi-user.target.requires/rpi3-generate-ssh-host-keys.service"

      rm -f ${ROOT?}/etc/ssh/ssh_host_*_key*
    root-fs: /

  # Clean up archive cache (likely not useful) and lists (likely outdated) to
  # reduce image size by several hundred megabytes.
  - chroot: /
    shell: |
      apt-get clean
      rm -rf /var/lib/apt/lists

  # TODO(https://github.com/larswirzenius/vmdb2/issues/24): remove once vmdb
  # clears /etc/resolv.conf on its own.
  - shell: |
      rm "${ROOT?}/etc/resolv.conf"
    root-fs: /

I could not run it with vmdb2 installed from the Debian archive, so I cloned raspi3-image-spec and used the vmdb2 submodule from it. And here are the rest of the commands used for testing the script.

fallocate -l 4G rootdisk.img

# Create one partition with full disk
sfdisk -f rootdisk.img <<EOF
,+
EOF

kpartx -av rootdisk.img
# mounts on /dev/loop0 for me
dd if=vmdb.img of=/dev/loop0

sudo virt-install --name experimental --memory 1024 \
    --disk path=/dev/loop0 \
    --controller type=scsi,model=virtio-scsi \
    --boot hd --network bridge=lxcbr0

Once the VM booted I could see the root file system was 4G in size, instead of the 1.5G it was after using dd to write the image onto it. So, success!

Steve Kemp: Updated my compiler, and bought a watch.

Sat, 16/02/2019 - 5:26pm

The simple math-compiler I introduced in my previous post has had a bit of an overhaul, so that now it is fully RPN-based.

Originally the input was RPN-like; now it is RPN for real. It handles error-detection at run-time, and generates cleaner assembly-language output.

In other news I bought a new watch, which was a fun way to spend some time.

I love mechanical watches, clocks, and devices such as steam-engines. While watches are full of tiny and intricate parts I like the pretence that you can see how they work, and understand them. Steam engines are seductive because their operation is similar; you can almost look at them and understand how they work.

I've got a small collection of watches at the moment, ranging from €100-€2000 in price, these are universally skeleton-watches, or open-heart watches.

My recent purchase is something different. I was looking at used Rolexes, and found some from the 1970s. That made me suddenly wonder what had been made the same year as I was born. So I started searching for vintage watches manufactured in 1976. In the end I found a nice Soviet Union piece, made by Raketa. I can't prove that this specific model was actually manufactured that year, but I'll keep up the pretence. If it is +/- 10 years, that's probably close enough.

My personal dream-watch is the Rolex Oyster (I like to avoid complications). The Oyster is beautiful, and I can afford it. But even with insurance I'd feel too paranoid leaving the house with that much money on my wrist. No doubt I'll find a used one, for half that price, sometime. I'm not in a hurry.

(In a horological-sense a "complication" is something above/beyond the regular display of time. So showing the day, the date, or phase of the moon would each be complications.)

Ben Hutchings: Debian LTS work, January 2019

Sat, 16/02/2019 - 5:01pm

I was assigned 20 hours of work by Freexian's Debian LTS initiative and carried over 5 hours from December. I worked 24 hours and so will carry over 1 hour.

I prepared another stable update for Linux 3.16 (3.16.63), but did not upload a new release yet.

I also raised the issue that the installer images for Debian 8 "jessie" would need to be updated to include a fix for CVE-2019-3462.

Molly de Blanc: Free software activities (January, 2019)

Tue, 05/02/2019 - 6:03pm

January was another quiet month for free software. This isn't to say I wasn't busy, but merely that there were fewer things going on, with those things being more demanding of my attention. I'm including some more banal activities both to pad out the list and to draw some attention to the labor that goes into free software participation.

January activities (personal)
  • Debian Anti-harassment covered several incidents. These have not yet been detailed in an email to the Debian Project mailing list. I won’t get into details here, due to the sensitive nature of some of the conversations.
  • We began planning for Debian involvement in Google Summer of Code and Outreachy.
  • I put together a slide deck and prepared for FOSDEM. More details about FOSDEM next month! In the mean time, check out my talk description.
January activities (professional)
  • We wrapped up the end of the year fundraiser.
  • We’re planning LibrePlanet 2019! I hope to see you there.
  • I put together a slide deck for CopyLeft Conf, which I'll detail more in February. I drew and scanned in my slides, which is a time-consuming process.

Reproducible builds folks: Reproducible Builds: Weekly report #197

Tue, 05/02/2019 - 4:15pm

Here’s what happened in the Reproducible Builds effort between Sunday January 27th and Saturday February 2nd 2019:

  • There was yet more progress towards making the Debian Installer images reproducible. Following-on from last week, Chris Lamb performed some further testing of the generated images resulting in two patches to ensure that builds were reproducible regardless of both the user’s umask(2) (filed as #920631) and even the underlying ordering of files on disk (#920676). It is hoped these can be merged for the next Debian Installer alpha/beta after the recent “Alpha 5” release.

  • Tails, the privacy-oriented “live” operating system released its first USB image, which is reproducible.

  • Chris Lamb implemented a check in Lintian, the static analysis tool that performs automated checks against Debian packages, for .sass-cache directories; as they contain non-deterministic subdirectories they immediately contribute towards an unreproducible build (#920593).
  • disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into filesystems for easy and reliable testing. Chris Lamb fixed an issue this week in the handling of the fsyncdir system call to ensure dpkg(1) can “flush” /var/lib/dpkg correctly [].

  • Hervé Boutemy made more updates to the reproducible-builds.org project website, including documenting mvn.build-root []. In addition, Chris Smith fixed a typo on the tools page [] and Holger Levsen added a link to Lukas’s report to the recent Paris Summit page [].

  • strip-nondeterminism is our tool that post-processes files to remove known non-deterministic output. This week, Chris Lamb investigated an issue regarding the tool not normalising file ownerships in .epub files that was originally identified by Holger Levsen, as well as clarified the negative message in test failures [] and performed some code cleanups (eg. []).

  • Chris Lamb updated the SSL certificate for try.diffoscope.org to ensure validation after the deprecation of TLS-SNI-01 validation in LetsEncrypt.

  • Reproducible Builds were present at FOSDEM 2019 handing out t-shirts to contributors. Thank you!

  • On Tuesday February 26th Chris Lamb will speak at Speck&Tech 31 “Open Security” on Reproducible Builds in Trento, Italy.

  • 6 Debian package reviews were added, 3 were updated and 5 were removed this week, adding to our knowledge about identified issues. Chris Lamb unearthed a new toolchain issue, randomness_in_documentation_underscore_downloads_generated_by_sphinx.
Packages reviewed and fixed, and bugs filed

Test framework development

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. This week, Holger Levsen made a large number of improvements including:

  • Arch Linux-specific changes:
    • The scheduler is now run every 4h so present stats for this time period. []
    • Fix detection of bad builds. []
  • LEDE/OpenWrt-specific changes:
    • Make OpenSSH usable with a TCP port other than 22. This is needed for our OSUOSL nodes. []
    • Perform a minor refactoring of the build script. []
  • NetBSD-specific changes:
    • Add a ~jenkins/.ssh/config to fix jobs regarding OpenSSH running on non-standard ports. []
    • Add a note that osuosl171 is constantly online. []
  • Misc/generic changes:
    • Use same configuration for df_inode as for df to reduce noise. []
    • Remove a now-bogus warning; we have its parallel in Git now. []
    • Define ControlMaster and ControlPath in our OpenSSH configurations. []

In addition, Mattia Rizzolo and Vagrant Cascadian performed maintenance of the build nodes. ([], [], [], etc.)

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, intrigeri & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Michael Stapelberg: Looking for a new Raspberry Pi image maintainer

Tue, 05/02/2019 - 9:42am

This is taken care of: Gunnar Wolf has taken on maintenance of the Raspberry Pi image. Thank you!

(Cross-posting this message I sent to pkg-raspi-maintainers for broader visibility.)

I started building Raspberry Pi images because I thought there should be an easy, official way to install Debian on the Raspberry Pi.

I still believe that, but I’m not actually using Debian on any of my Raspberry Pis anymore¹, so my personal motivation to do any work on the images is gone.

On top of that, I realize that my commitments exceed my spare time capacity, so I need to get rid of responsibilities.

Therefore, I’m looking for someone to take up maintainership of the Raspberry Pi images. Numerous people have reached out to me with thank you notes and questions, so I think the user interest is there. Also, I’ll be happy to answer any questions that you might have and that I can easily answer. Please reply here (or in private) if you’re interested.

If I can’t find someone within the next 7 days, I’ll put up an announcement message in the raspi3-image-spec README, wiki page, and my blog posts, stating that the image is unmaintained and looking for a new maintainer.

Thanks for your understanding,

① just in case you’re curious, I’m now running cross-compiled Go programs directly under a Linux kernel and minimal userland, see https://gokrazy.org/

Michael Stapelberg: TurboPFor: an analysis

Tue, 05/02/2019 - 9:18am
Motivation

I have recently been looking into speeding up Debian Code Search. As a quick reminder, search engines answer queries by consulting an inverted index: a map from term to documents containing that term (called a “posting list”). See the Debian Code Search Bachelor Thesis (PDF) for a lot more details.

Currently, Debian Code Search does not store positional information in its index, i.e. the index can only reveal that a certain trigram is present in a document, not where or how often.

From analyzing Debian Code Search queries, I knew that identifier queries (70%) massively outnumber regular expression queries (30%). When processing identifier queries, storing positional information in the index enables a significant optimization: instead of identifying the possibly-matching documents and having to read them all, we can determine matches from querying the index alone, no document reads required.

This moves the bottleneck: having to read all possibly-matching documents requires a lot of expensive random I/O, whereas having to decode long posting lists requires a lot of cheap sequential I/O.

Of course, storing positions comes with a downside: the index is larger, and a larger index takes more time to decode when querying.

Hence, I have been looking at various posting list compression/decoding techniques, to figure out whether we could switch to a technique which would retain (or improve upon!) current performance despite much longer posting lists and produce a small enough index to fit on our current hardware.

Literature

I started looking into this space because of Daniel Lemire’s Stream VByte post. As usual, Daniel’s work is well presented, easily digestible and accompanied by not just one, but multiple implementations.

I also looked for scientific papers to learn about the state of the art and classes of different approaches in general. The best I could find is Compression, SIMD, and Postings Lists. If you don’t have access to the paper, I hear that Sci-Hub is helpful.

The paper is from 2014, and doesn’t include all algorithms. If you know of a better paper, please let me know and I’ll include it here.

Eventually, I stumbled upon an algorithm/implementation called TurboPFor, which the rest of the article tries to shine some light on.

TurboPFor

If you’re wondering: PFor stands for Patched Frame Of Reference and describes a family of algorithms. The principle is explained e.g. in SIMD Compression and the Intersection of Sorted Integers (PDF).

The TurboPFor project’s README file claims that TurboPFor256 compresses with a rate of 5.04 bits per integer, and can decode with 9400 MB/s on a single thread of an Intel i7-6700 CPU.

For Debian Code Search, we use unsigned integers of 32 bit (uint32), which TurboPFor will compress into as few bits as required.

Dividing Debian Code Search’s file sizes by the total number of integers, I get similar values, at least for the docid index section:

  • 5.49 bits per integer for the docid index section
  • 11.09 bits per integer for the positions index section

I can confirm the order of magnitude of the decoding speed, too. My benchmark calls TurboPFor from Go via cgo, which introduces some overhead. To exclude disk speed as a factor, data comes from the page cache. The benchmark sequentially decodes all posting lists in the specified index, using as many threads as the machine has cores¹:

  • ≈1400 MB/s on a 1.1 GiB docid index section
  • ≈4126 MB/s on a 15.0 GiB position index section

I think the numbers differ because the position index section contains larger integers (requiring more bits). I repeated both benchmarks, capped to 1 GiB, and decoding speeds still differed, so it is not just the size of the index.

Compared to Streaming VByte, a TurboPFor256 index comes in at just over half the size, while still reaching 83% of Streaming VByte’s decoding speed. This seems like a good trade-off for my use-case, so I decided to have a closer look at how TurboPFor works.

① See cmd/gp4-verify/verify.go run on an Intel i9-9900K.

Methodology

To confirm my understanding of the details of the format, I implemented a pure-Go TurboPFor256 decoder. Note that it is intentionally not optimized as its main goal is to use simple code to teach the TurboPFor256 on-disk format.

If you’re looking to use TurboPFor from Go, I recommend using cgo. cgo’s function call overhead is about 51ns as of Go 1.8, which will easily be offset by TurboPFor’s carefully optimized, vectorized (SSE/AVX) code.

With that caveat out of the way, you can find my teaching implementation at https://github.com/stapelberg/goturbopfor

I verified that it produces the same results as TurboPFor’s p4ndec256v32 function for all posting lists in the Debian Code Search index.

On-disk format

Note that TurboPFor does not fully define an on-disk format on its own. When encoding, it turns a list of integers into a byte stream:

size_t p4nenc256v32(uint32_t *in, size_t n, unsigned char *out);

When decoding, it decodes the byte stream into an array of integers, but needs to know the number of integers in advance:

size_t p4ndec256v32(unsigned char *in, size_t n, uint32_t *out);

Hence, you’ll need to keep track of the number of integers and length of the generated byte streams separately. When I talk about on-disk format, I’m referring to the byte stream which TurboPFor returns.

The TurboPFor256 format uses blocks of 256 integers each, followed by a trailing block — if required — which can contain fewer than 256 integers.

SIMD bitpacking is used for all blocks but the trailing block (which uses regular bitpacking). This is not merely an implementation detail for decoding: the on-disk structure is different for blocks which can be SIMD-decoded.

Each block starts with a 2-bit header specifying the type of the block.

Each block type is explained in more detail in the following sections.

Note that none of the block types store the number of elements: you will always need to know how many integers you need to decode. Also, you need to know in advance how many bytes you need to feed to TurboPFor, so you will need some sort of container format.

Further, TurboPFor automatically chooses the best block type for each block.

Constant block

A constant block (all integers of the block have the same value) consists of a single value of a specified bit width ≤ 32. This value will be stored in each output element for the block. E.g., after calling decode(input, 3, output) with input being the constant block depicted below, output is {0xB8912636, 0xB8912636, 0xB8912636}.

The example shows the maximum number of bytes (5). Smaller integers will use fewer bytes: e.g. an integer which can be represented in 3 bits will only use 2 bytes.

Bitpacking block

A bitpacking block specifies a bit width ≤ 32, followed by a stream of bits. Each value starts at the Least Significant Bit (LSB), i.e. the 3-bit values 0 (000b) and 5 (101b) are encoded as 101000b.
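To make the layout concrete, here is a small sketch (not the TurboPFor implementation, just an illustration of the LSB-first packing described above):

def unpack_bits(data, width, count):
    # Sketch of regular LSB-first bitpacking: treat the byte stream as one
    # little-endian integer so bit 0 of byte 0 is bit 0 of the stream.
    stream = int.from_bytes(data, "little")
    mask = (1 << width) - 1
    return [(stream >> (i * width)) & mask for i in range(count)]

# The example from the text: 3-bit values 0 (000b) and 5 (101b) packed as 101000b
assert unpack_bits(bytes([0b101000]), 3, 2) == [0, 5]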

Bitpacking with exceptions (bitmap) block

The constant and bitpacking block types work well for integers which don’t exceed a certain width, e.g. for a series of integers of width ≤ 5 bits.

For a series of integers where only a few values exceed an otherwise common width (say, two values require 7 bits, the rest requires 5 bits), it makes sense to cut the integers into two parts: value and exception.

In the example below, decoding the third integer out2 (000b) requires combination with exception ex0 (10110b), resulting in 10110000b.

The number of exceptions can be determined by summing the 1 bits in the bitmap using the popcount instruction.
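As a rough sketch of the decoding step (an illustration of the scheme as described, not TurboPFor's actual code), patching the exceptions back in could look like this:

def apply_exceptions(values, bitmap, exceptions, width):
    # Each 1 bit in the bitmap marks a value whose extra high bits are
    # stored in the exception stream; shift them above the common width.
    out, e = [], 0
    for i, v in enumerate(values):
        if bitmap & (1 << i):
            v |= exceptions[e] << width
            e += 1
        out.append(v)
    return out

# The example from the text: out2 = 000b combined with exception
# ex0 = 10110b at a common width of 3 bits yields 10110000b.
assert apply_exceptions([0b000], 0b1, [0b10110], 3) == [0b10110000]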

Bitpacking with exceptions (variable byte)

When the exceptions are not uniform enough, it makes sense to switch from bitpacking to a variable byte encoding:

Decoding: variable byte

The variable byte encoding used by the TurboPFor format is similar to the one used by SQLite, which is described, alongside other common variable byte encodings, at github.com/stoklund/varint.

Instead of using individual bits for dispatching, this format classifies the first byte (b[0]) into ranges:

  • [0—176]: the value is b[0]
  • [177—240]: a 14 bit value is in b[0] (6 high bits) and b[1] (8 low bits)
  • [241—248]: a 19 bit value is in b[0] (3 high bits), b[1] and b[2] (16 low bits)
  • [249—255]: a 32 bit value is in b[1], b[2], b[3] and possibly b[4]

Here is the space usage of different values:

  • [0—176] are stored in 1 byte (as-is)
  • [177—16560] are stored in 2 bytes, with the highest 6 bits added to 177
  • [16561—540848] are stored in 3 bytes, with the highest 3 bits added to 241
  • [540849—16777215] are stored in 4 bytes, with 0 added to 249
  • [16777216—4294967295] are stored in 5 bytes, with 1 added to 249

An overflow marker will be used to signal that encoding the values would be less space-efficient than simply copying them (e.g. if all values require 5 bytes).

This format is very space-efficient: it packs 0-176 into a single byte, as opposed to 0-128 (most others). At the same time, it can be decoded very quickly, as only the first byte needs to be compared to decode a value (similar to PrefixVarint).
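Putting the ranges together, decoding a single value might look roughly like this; a minimal sketch assuming the offsets from the list above and little-endian tail bytes (verify the details against the goturbopfor source), with the remaining first-byte values presumably left for things like the overflow marker:

def decode_varint(b, i=0):
    # Sketch of the range dispatch described above; the byte order of
    # the tail bytes is an assumption to verify against goturbopfor.
    b0 = b[i]
    if b0 <= 176:    # 1 byte: value stored as-is
        return b0, i + 1
    if b0 <= 240:    # 2 bytes: 6 high bits in b0, 8 low bits in b[1]
        return (((b0 - 177) << 8) | b[i + 1]) + 177, i + 2
    if b0 <= 248:    # 3 bytes: 3 high bits in b0, 16 low bits in b[1..2]
        return (((b0 - 241) << 16) | (b[i + 1] << 8) | b[i + 2]) + 16561, i + 3
    if b0 == 249:    # 4 bytes: 24-bit value stored as-is
        return b[i + 1] | (b[i + 2] << 8) | (b[i + 3] << 16), i + 4
    # 5 bytes: full 32-bit value stored as-is
    return b[i + 1] | (b[i + 2] << 8) | (b[i + 3] << 16) | (b[i + 4] << 24), i + 5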

Decoding: bitpacking

Regular bitpacking

In regular (non-SIMD) bitpacking, integers are stored on disk one after the other, padded to a full byte, as a byte is the smallest addressable unit when reading data from disk. For example, if you bitpack only one 3 bit int, you will end up with 5 bits of padding.

SIMD bitpacking (256v32)

SIMD bitpacking works like regular bitpacking, but processes 8 uint32 little-endian values at the same time, leveraging the AVX instruction set, with 3-bit integers decoded from disk in an interleaved order.

In Practice

For a Debian Code Search index, 85% of posting lists are short enough to only consist of a trailing block, i.e. no SIMD instructions can be used for decoding.

The distribution of block types looks as follows:

  • 72% bitpacking with exceptions (bitmap)
  • 19% bitpacking with exceptions (variable byte)
  • 5% constant
  • 4% bitpacking

Constant blocks are mostly used for posting lists with just one entry.

Conclusion

The TurboPFor on-disk format is very flexible: with its 4 different kinds of blocks, chances are high that a very efficient encoding will be used for most integer series.

Of course, the flip side of covering so many cases is complexity: the format and implementation take quite a bit of time to understand — hopefully this article helps a little! For environments where the C TurboPFor implementation cannot be used, smaller algorithms might be simpler to implement.

That said, if you can use the TurboPFor implementation, you will benefit from a highly optimized SIMD code base, which will most likely be an improvement over what you’re currently using.

Michael Stapelberg: sbuild-debian-developer-setup(1)

Mon, 04/02/2019 - 7:08pm

I have heard a number of times that sbuild is too hard to get started with, and hence people don’t use it.

To reduce the hurdles to using and contributing to Debian, I wanted to make sbuild easier to set up.

sbuild ≥ 0.74.0 provides a Debian package called sbuild-debian-developer-setup. Once installed, run the sbuild-debian-developer-setup(1) command to create a chroot suitable for building packages for Debian unstable.

On a system without any sbuild/schroot bits installed, a transcript of the full setup looks like this:

% sudo apt install -t unstable sbuild-debian-developer-setup
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  libsbuild-perl sbuild schroot
Suggested packages:
  deborphan btrfs-tools aufs-tools | unionfs-fuse qemu-user-static
Recommended packages:
  exim4 | mail-transport-agent autopkgtest
The following NEW packages will be installed:
  libsbuild-perl sbuild sbuild-debian-developer-setup schroot
0 upgraded, 4 newly installed, 0 to remove and 1454 not upgraded.
Need to get 1.106 kB of archives.
After this operation, 3.556 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://localhost:3142/deb.debian.org/debian unstable/main amd64 libsbuild-perl all 0.74.0-1 [129 kB]
Get:2 http://localhost:3142/deb.debian.org/debian unstable/main amd64 sbuild all 0.74.0-1 [142 kB]
Get:3 http://localhost:3142/deb.debian.org/debian testing/main amd64 schroot amd64 1.6.10-4 [772 kB]
Get:4 http://localhost:3142/deb.debian.org/debian unstable/main amd64 sbuild-debian-developer-setup all 0.74.0-1 [62,6 kB]
Fetched 1.106 kB in 0s (5.036 kB/s)
Selecting previously unselected package libsbuild-perl.
(Reading database ... 276684 files and directories currently installed.)
Preparing to unpack .../libsbuild-perl_0.74.0-1_all.deb ...
Unpacking libsbuild-perl (0.74.0-1) ...
Selecting previously unselected package sbuild.
Preparing to unpack .../sbuild_0.74.0-1_all.deb ...
Unpacking sbuild (0.74.0-1) ...
Selecting previously unselected package schroot.
Preparing to unpack .../schroot_1.6.10-4_amd64.deb ...
Unpacking schroot (1.6.10-4) ...
Selecting previously unselected package sbuild-debian-developer-setup.
Preparing to unpack .../sbuild-debian-developer-setup_0.74.0-1_all.deb ...
Unpacking sbuild-debian-developer-setup (0.74.0-1) ...
Processing triggers for systemd (236-1) ...
Setting up schroot (1.6.10-4) ...
Created symlink /etc/systemd/system/multi-user.target.wants/schroot.service → /lib/systemd/system/schroot.service.
Setting up libsbuild-perl (0.74.0-1) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up sbuild (0.74.0-1) ...
Setting up sbuild-debian-developer-setup (0.74.0-1) ...
Processing triggers for systemd (236-1) ...

% sudo sbuild-debian-developer-setup
The user `michael' is already a member of `sbuild'.
I: SUITE: unstable
I: TARGET: /srv/chroot/unstable-amd64-sbuild
I: MIRROR: http://localhost:3142/deb.debian.org/debian
I: Running debootstrap --arch=amd64 --variant=buildd --verbose --include=fakeroot,build-essential,eatmydata --components=main --resolve-deps unstable /srv/chroot/unstable-amd64-sbuild http://localhost:3142/deb.debian.org/debian
I: Retrieving InRelease
I: Checking Release signature
I: Valid Release signature (key id 126C0D24BD8A2942CC7DF8AC7638D0442B90D010)
I: Retrieving Packages
I: Validating Packages
I: Found packages in base already in required: apt
I: Resolving dependencies of required packages...
[…]
I: Successfully set up unstable chroot.
I: Run "sbuild-adduser" to add new sbuild users.
ln -s /usr/share/doc/sbuild/examples/sbuild-update-all /etc/cron.daily/sbuild-debian-developer-setup-update-all
Now run `newgrp sbuild', or log out and log in again.
% newgrp sbuild
% sbuild -d unstable hello
sbuild (Debian sbuild) 0.74.0 (14 Mar 2018) on x1
+==============================================================================+
| hello (amd64)                                Mon, 19 Mar 2018 07:46:14 +0000 |
+==============================================================================+

Package: hello
Distribution: unstable
Machine Architecture: amd64
Host Architecture: amd64
Build Architecture: amd64
Build Type: binary
[…]

I hope you’ll find this useful.

Michael Stapelberg: dput usability changes

Mon, 04/02/2019 - 7:08pm

dput-ng ≥ 1.16 contains two usability changes which make uploading easier:

  1. When no arguments are specified, dput-ng auto-selects the most recent .changes file (with confirmation).
  2. Instead of erroring out when detecting an unsigned .changes file, debsign(1) is invoked to sign the .changes file before proceeding.

With these changes, after building a package, you just need to type dput (in the correct directory of course) to sign and upload it.

Julien Danjou: How to Log Properly in Python

Mon, 04/02/2019 - 11:15am

Logging is one of the most underrated features. Often ignored by software engineers, it can save you time when your application is running in production.

Most teams don't think about it until it's too late in their development process. It's when things start to go wrong in deployments that somebody realizes logging is missing.

Guidelines

The Twelve-Factor App defines logs as a stream of aggregated, time-ordered events collected from the output streams of all running processes. It also describes how applications should handle their logging. We can summarize those guidelines as:

  • Logs have no fixed beginning or end.
  • Print logs to stdout.
  • Print logs unbuffered.
  • The environment is responsible for capturing the stream.

From my experience, this set of rules is a good trade-off. Logs have to be kept pretty simple to be efficient and reliable. Building complex logging systems might make it harder to get insight into a running application.

There's also no point in duplicating effort in log management (e.g., log file rotation, archival policy, etc.) across your different applications. Having an external workflow that can be shared across different programs seems more efficient.

In Python

Python provides a logging subsystem with its logging module. This module provides a Logger object that allows you to emit messages with different levels of criticality. Those messages can then be filtered and sent to different handlers.

Let's have an example:

import logging

logger = logging.getLogger("myapp")
logger.error("something wrong")

Depending on the version of Python you're running you'll either see:

No handlers could be found for logger "myapp"

or:

something wrong

Python 2 used to have no logging setup by default, so it would print an error message about no handler being found. Since Python 3, a default handler outputting to stdout is now installed — matching the requirements from the 12factor App.

However, this default setup is far from being perfect.

Shortcomings

The default format that Python uses does not embed any contextual information. There is no way to know the name of the logger — myapp in the previous example — nor the date and time of the logged message.

You must configure Python logging subsystem to enhance its output format.

To do that, I advise using the daiquiri module. It provides an excellent default configuration and a simple API to configure logging, plus some exciting features.

Logging Setup

When using daiquiri, the first thing to do is to set up your logging correctly. This can be done with the daiquiri.setup function as this:

import daiquiri

daiquiri.setup()

As simple as that. You can tweak the setup further by asking it to log to file, to change the default string formats, etc, but just calling daiquiri.setup is enough to get a proper logging default.

See:

import daiquiri

daiquiri.setup()
daiquiri.getLogger("myapp").error("something wrong")

outputs:

2018-12-13 10:24:04,373 [38550] ERROR myapp: something wrong

If your terminal supports writing text in colors, the line will be printed in red since it's an error. The format provided by daiquiri is better than Python's default: this one includes a timestamp, the process ID,  the criticality level and the logger's name. Needless to say that this format can also be customized.
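For instance, tweaking the setup to also log to a file could look like this; a minimal sketch assuming daiquiri's output classes (check the daiquiri documentation for the exact names and options):

import logging

import daiquiri

# Sketch: log to the console and to a file at the same time; the output
# classes used here are assumptions to verify against the daiquiri docs.
daiquiri.setup(
    level=logging.INFO,
    outputs=(
        daiquiri.output.Stream(),           # console
        daiquiri.output.File("myapp.log"),  # log file
    ),
)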

Passing Contextual Information

Logging strings are boring. Most of the time, engineers end up writing code such as:

logger.error("Something wrong happened with %s when writing data at %d",
             myobject.myfield, myobject.mynumber)

The issue with this approach is that you have to think about each field that you want to log about your object, and to make sure that they are inserted correctly in your sentence. If you forget an essential field to describe your object and the problem, you're screwed.

A reliable alternative to this manual crafting of log strings is to pass interesting objects as keyword arguments. Daiquiri supports it, and it works that way:

import attr
import daiquiri
import requests

daiquiri.setup()
logger = daiquiri.getLogger("myapp")


@attr.s
class Request:
    url = attr.ib()
    status_code = attr.ib(init=False, default=None)

    def get(self):
        r = requests.get(self.url)
        self.status_code = r.status_code
        r.raise_for_status()
        return r


user = "jd"
req = Request("https://google.com/not-this-page")
try:
    req.get()
except Exception:
    logger.error("Something wrong happened during the request",
                 request=req, user=user)

If anything goes wrong with the request, it will be logged with the stack trace, like this:

2018-12-14 10:37:24,586 [43644] ERROR myapp [request: Request(url='https://google.com/not-this-page', status_code=404)] [user: jd]: Something wrong happened during the request

As you can see, the call to logger.error is pretty straight-forward: a line that explains what's wrong, and then the different interesting objects are passed as keyword arguments.

Daiquiri logs those keyword arguments with a default format of [key: value] that is included as a prefix to the log string. The value is printed using its __format__ method — that's why I'm using the attr module here: it automatically generates this method for me and includes all fields by default. You can also customize daiquiri to use any other format.

Following those guidelines should be a perfect start for logging correctly with Python!
