Planet Debian

My Open-Source Activities from September to October 2018

Tue, 06/11/2018 - 12:51pm

Welcome readers, this is an infrequently updated post series that logs my activities within open-source communities. I want my work to be as transparent as possible in order to promote open governance, a policy feared even by some “mighty” nations.

I do not work on open-source full-time, although I sincerely would love to. Therefore the posts may cover a ridiculously long period (even a whole year).

Unfortunately this blog site does not support commenting. So if anyone has anything to discuss regarding the posts, feel free to reach me via the social links at the footer of the page.

Debian

Debian is a general-purpose Linux distribution that is widely used on the planet. I am currently a Debian Maintainer who works on packages related to Android SDK and the Java ecosystem.

Some Android SDK Packages Now Available on Every Architecture

For a long time our packages were only available on the x86, ARM and MIPS architectures. This is because AOSP is designed to support only those 3 major instruction sets in the market. Another physical limitation is that libunwind, a common dependency of most Android SDK components, can only be built on said architectures after being patched by AOSP. ADB and fastboot have now even dropped MIPS support because we build them against BoringSSL, which does not support MIPS at all. In light of the removal of MIPS support from the NDK as well, I assume the entire AOSP will say goodbye to MIPS at some point.

But not all components rely on libunwind. With some minor effort and investigation, we can now enable some of them to build on every architecture that Debian supports. For now they include:

There will surely be more on the road, stay tuned.

DD Application Approved

For those who aren’t familiar with the term, DD means Debian Developer: an official member of the Debian Project who usually has access and permissions to most parts of Debian’s infrastructure.

As a Debian Maintainer (DM), I can ask for upload permission on any package and upload it without needing a sponsor. But when introducing new binary packages or hijacking ones from another source package, I still need a sponsor. I believe that with a DD account, working in Debian will become smoother and easier.

So I applied for DD about… 6 months ago! After a marathon of Q&A sessions I finally got approved by my AM, Joerg Jaspert. Now I still have to wait for further review by the system; perhaps I can get the account in November.

Big thanks to Hans-Christoph Steiner, Markus Koschany and Emmanuel Bourg who advocated me, and my AM Joerg Jaspert.

Voidbuilder Release 0.2.0

Voidbuilder is a simple program that mimics pbuilder but uses Docker as the isolation engine. I have been using it privately and am quite satisfied.

Last month I released a 0.2.0 version with the following changes:

  • The login sub-command no longer builds the source-only bundle; this task must now be done by the user (see the sketch below).
  • One failed hook no longer fails the entire job; instead, a message will pop up.
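As a sketch of that manual step, assuming the source-only bundle is an ordinary Debian source package built from an unpacked tree (the package name here is hypothetical):

# Build the source-only bundle (.dsc + tarball) from an unpacked source tree
dpkg-source --build mypackage/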
殷啟聰 | Kai-Chung Yan https://seamlik.github.io/ Blog by seamlik

How to run CEWE photo creator on Debian

Mon, 05/11/2018 - 8:28pm

Hi

This post describes how I debugged an issue with a piece of proprietary software. I hope this will give you some hints on how to proceed should you face a similar issue. If you’re in a hurry, you can read the TL;DR version at the end.

After the summer vacations, I decided to offer a photo book to my mother. I searched for an open-source solution, but the printed results were lackluster.

Unfortunately, the only viable option was to use a professional service. Some of these services offer a web application to create photo books, but this is painful to use on a slow DSL line. Other services provide a program named CEWE. This proprietary program can be downloaded for Windows, Mac and, lo and behold: Linux!

The download goes quite fast as the downloaded program is a Perl script that does the actual download. I would have preferred a proper Debian package, but at least Linux amd64 is supported.

Once installed, the CEWE program is available as an executable and a bunch of shared libraries.

This program works quite well to create a photo album. I won’t go into the details there.

I ran into trouble when trying to connect the application to the service site to order the photo-book: the connection fails with a cryptic message “error code 10000”.

Commercial support was not much help, as they insisted that I check my proxy settings. I downloaded CEWE again from another photo service. The new CEWE installation gave me the same error, which showed that the issue was on my side and not on the server’s side.

Given that the error occurred quite fast when trying to connect, I guessed that the connection setup was going south. Since the URL shown in the installation script began with https, I had to check for SSL issues.

I checked certificate issues: curl had no problem connecting to the server mentioned in the Perl script. Wireshark showed that the connection to the server was reset by the server quite fast. I wondered which version of SSL was used by CEWE and ran ldd. To my surprise, ldd did not list libssl. Something weird was going on: SSL was required but CEWE was not linked to libssl…

I used another trick: explore all the menus of the application. This was a good move, as I found a checkbox to enable debug reports in CEWE in the “Options -> paramètres -> Service” menu (that may be “Options -> Parameters -> Support” in the English CEWE). When set, debug traces are also shown on CEWE’s standard output.

And, somewhere in the debug traces, I found:

W (2018-10-30T18:36:37.143) [ 0] ==> QSslSocket: cannot resolve SSLv3_client_method <==

So CEWE was looking for SSL symbols even though ldd showed no dependency on libssl…

I guessed that CEWE was using dlopen to load the SSL library. But which file was opened by dlopen?

Most likely, the people who wrote the call to dlopen did not want to handle file names with a so version (i.e. like libssl.so.1.0.2), and added code to open libssl.so directly. This file is provided by the libssl-dev package, which was already installed on my system.
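One way to confirm such a guess, assuming the CEWE binary is on the PATH as cewe (adjust the name otherwise), is to trace which files the process actually tries to open:

# Watch file-open syscalls for libssl lookups
strace -f -e trace=open,openat cewe 2>&1 | grep libssl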

But wait, CEWE was probably written for Debian stable with an older libssl. I tried libssl1.0-dev… which conflicts with libssl-dev. Oh well, I can live with that for a while…

And that was it! With libssl1.0-dev installed, CEWE was able to connect to the photo service web site without problems.

So here’s the TL;DR version. To run CEWE on Debian, run:

sudo apt install libssl1.0-dev

Last but not least, here are some suggestions for CEWE:

  • use libssl1.1, as libssl1.0 is deprecated and will be removed from Debian
  • place the debug checkbox in the “System” widget. This widget was the first I opened when I began troubleshooting; “Service” does not mean much to me. Having this checkbox in both the “Service” and “System” widgets would not harm

All the best

[ Edit: I first blamed CEWE for loading libssl in a non-standard way. libssl is actually loaded by QtNetwork. Depending on the way Qt is built, SSL is either disabled (-no-openssl option), loaded by dlopen (the default) or loaded with dynamic linking (-openssl-linked). The way Qt is built is CEWE’s choice. Thanks to Uli Schlachter for the heads-up ]

 

dod https://ddumont.wordpress.com Dominique Dumont's Blog

The Best flake8 Extensions for your Python Project

Mon, 05/11/2018 - 11:00am

In the last blog post about coding style, we dissected what the state of the art was regarding coding style check in Python.

As we've seen, Flake8 is a wrapper around several tools and is extensible via plugins, meaning that you can add your own checks. I'm a heavy user of Flake8 and rely on a few plugins to extend its coverage of common programming mistakes in Python. Here's the list of the ones I can't work without. As a bonus, you'll find at the end of this post a sample of my go-to tox.ini file.

flake8-import-order

The name is quite explicit: this extension checks the order of your import statements at the beginning of your files. By default, it uses a style that I enjoy, which looks like:

import os
import sys

import requests
import yaml

import myproject
from myproject.utils import somemodule

The builtin modules are grouped first. Then comes a group for each third-party module that is imported. Finally, the last group contains the modules of the current project. I find this way of organizing module imports quite clear and easy to read.

To make sure flake8-import-order knows about the name of your project module name, you need to specify it in tox.ini with the application-import-names option.

If you beg to differ, you can use any of the other styles that flake8-import-order offers by setting the import-order-style option. You can obviously also provide your own style.

flake8-blind-except

The flake8-blind-except extension checks that no except statement is used without specifying an exception type. The following excerpt is, therefore, considered invalid:

try:
    do_something()
except:
    pass

Using except without any exception type specified is considered bad practice, as it might catch unwanted exceptions. It forces the developer to think about what kinds of errors might happen and which should really be caught.

In the rare case where any exception should be caught, it's still possible to use except Exception explicitly.

flake8-builtins

The flake8-builtins plugin checks that there is no name collision between your code and the Python builtin variables.

For example, this code would trigger an error:

def first(list):
    return list[0]

As list is a builtin in Python (used to create a list!), shadowing its definition by using list as the name of a function parameter triggers a warning from flake8-builtins.

While the code is valid, it's a bad habit to override Python's builtin functions. It might lead to tricky errors: in the above example, if you ever need to call list() inside first, you won't be able to.

flake8-logging-format

This module is handy, as it still slaps my fingers once in a while. When using the logging module, it prevents you from writing this kind of code:

mylogger.info("Hello %s" % mystring)

While this works, it's suboptimal as it forces the string interpolation to happen unconditionally. If the logger is configured to print only messages with a logging level of warning or above, doing the string interpolation here is pointless.

Therefore, one should instead write:

mylogger.info("Hello %s", mystring)

The same goes if you use format to do any formatting.

Be aware that contrary to other flake8 modules, this one does not enable the check by default. You'll need to add enable-extensions=G in your tox.ini file.

flake8-docstrings

The flake8-docstrings module checks that the content of your Python docstrings respects PEP 257. This PEP is full of small details about formatting your docstrings the right way, details that are hard to get right without such a tool. A simple example would be:

class Foobar:
    """A foobar"""

While this seems valid, the docstring is missing a period at the end.

Trust me, especially if you are writing a library that is consumed by other developers, this is a must-have.

flake8-rst-docstrings

This extension is a good complement to flake8-docstrings: it checks that the content of your docstrings is valid RST. It's a no-brainer, so I'd install it without question. Again, if your project exports a documented API that is built with Sphinx, this is a must-have.

My standard tox.ini

Here's the standard tox.ini excerpt that I use in most of my projects. You can copy-paste it and use it.

[testenv:pep8]
deps = flake8
       flake8-import-order
       flake8-blind-except
       flake8-builtins
       flake8-docstrings
       flake8-rst-docstrings
       flake8-logging-format
commands = flake8

[flake8]
exclude = .tox
# If you need to ignore some error codes in the whole source code
# you can write them here
# ignore = D100,D101
show-source = true
enable-extensions=G
application-import-names = <myprojectname>
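With that excerpt in place, the whole suite of checks can be run through tox (assuming it lives in your project's tox.ini):

tox -e pep8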

Before disabling an error code for your entire project, remember that you can force flake8 to ignore a particular instance of the error by adding the # noqa tag at the end of the offending line.

If you have any flake8 extension that you think is useful, please let me know in the comment section!

Julien Danjou https://julien.danjou.info/ Julien Danjou

Ahh, the joy of Cloudflare SNI certificates

Sat, 03/11/2018 - 8:23pm

Nice neighbourhood, https://www.amsterdam.nl...

For your copy and paste pleasure:

openssl s_client -connect www.amsterdam.nl:443 < /dev/null | openssl x509 -noout -text | grep DNS:

Update

03.11.18: Cloudflare fixed this mess somewhat. They now look at SNI server names and use customer-specific certs. See:

openssl s_client -servername www.amsterdam.nl -connect www.amsterdam.nl:443 < /dev/null | openssl x509 -noout -text | grep DNS:

(notice the -servername in the above vs. the original command, which will now fail with something like 140246838507160:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:769:)

Daniel Lange https://daniel-lange.com/ Daniel Lange's blog

Firefox asking to be made the default browser again and again

Sat, 03/11/2018 - 8:03pm

Firefox on Linux can develop the habit of (rather randomly) asking again and again to be made the default browser. E.g. when started from Thunderbird by clicking a link it asks, but when started from a shell all is fine.

The reason for this is often two (or more) .desktop entries competing with each other.

So, walkthrough: (GOTO 10 in case you are sure you have all the basics right)

update-alternatives --display x-www-browser
update-alternatives --display gnome-www-browser

should both show firefox for you. If not, run

update-alternatives --config <entry>

on the entry in question and set the preference to /usr/bin/firefox.

Check (where available)

exo-preferred-applications

that the "Internet Browser" is "Firefox".

Check (where available)

xfce4-mime-settings

that anything containing "html" points to Firefox (or is left at a non-user set default).

Check (where available)

xdg-settings get default-web-browser

that you get firefox.desktop. If not, run

xdg-settings set default-web-browser firefox.desktop

If you are running Gnome, check

xdg-settings get default-url-scheme-handler http

and the same for https.
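For completeness, here is a minimal sketch for checking and fixing both schemes, assuming firefox.desktop is the entry you want:

# verify https too, then point both URL schemes at Firefox
xdg-settings get default-url-scheme-handler https
xdg-settings set default-url-scheme-handler http firefox.desktop
xdg-settings set default-url-scheme-handler https firefox.desktop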

LABEL 10:

Run

sensible-editor ~/.config/mimeapps.list

and remove all entries that contain something like userapp-Firefox-<random>.desktop.

Run

find ~/.local/share/applications -iname "userapp-firefox*.desktop"

and delete these files or move them away.

Done.

Once you have it working again, consider disabling the option for Firefox to check whether it is the default browser, because it will otherwise create those pesky userapp-Firefox-<random>.desktop files again.

Configuring Linux is easy, innit?

Daniel Lange https://daniel-lange.com/ Daniel Lange's blog

My Debian Activities in October 2018

Sat, 03/11/2018 - 5:15pm

FTP master

This month I accepted 211 packages, which is almost the same amount as last month. On the other hand I was a bit reluctant and rejected only 36 uploads. The overall number of packages that got accepted this month was 370.

Debian LTS

This was my fifty-second month doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 30h. During that time I did LTS uploads or prepared security uploads of:

  • [DLA 1555-1] libmspack security update for two CVEs
  • [DLA 1556-1] paramiko security update for two CVEs
  • [DLA 1557-1] tiff security update for three CVEs
  • [DLA 1558-1] ruby2.1 security update for two CVEs
  • [DSA 4325-1] mosquitto security update for four CVEs
  • #912159 for libmspack and two CVEs in Stretch

I could also mark all emerging CVEs of wireshark as not affecting Jessie. I prepared a debdiff for ten CVEs affecting tiff in Stretch and sent it to the security team and the maintainer. Unfortunately it has not resulted in an upload yet.

I also worked on imagemagick and expect an upload soon.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the fifth ELTS month.

During my allocated time I uploaded:

  • ELA-52-1 for net-snmp

There was also one CVE for the Python package requests that could be marked as not-affected: the version in Wheezy contained the correct code, whereas later versions contained the issue.

As in LTS, I worked on wireshark (marking all CVEs as not-affected for Wheezy) and tiff3, but did not do an upload yet.

Moreover, this was a strange month in terms of the packages I selected for work. So please, everybody, check twice whether to add an entry to ela-needed.txt.

As in LTS, I also did some days of frontdesk duties.

Other stuff

I uploaded new upstream versions of …

Furthermore, I continued to sponsor some glewlwyd packages for Nicolas Mora. From my point of view he should become a DM now, so he started his NM process.

alteholz http://blog.alteholz.eu blog.alteholz.eu » planetdebian

Cross compiling CMake-based projects using Ubuntu/Debian's multi arch

Fri, 02/11/2018 - 6:20pm
As you probably already know, Ubuntu (and then Debian) added Multi-Arch support quite some time ago. This means that you can install library packages from multiple architectures on the same machine.

Thanks to the work of many people, among whom I would like to specially mention Helmut Grohne, we are now able to cross compile Debian packages using standard sbuild chroots. He was even kind enough to provide me with some numbers:

We have 28790 source packages in Debian unstable.
Of those, 13358 (46.3%) build architecture-dependent binary packages.
Of those, 7301 (54.6%) have satisfiable cross Build-Depends.
Of those, 3696 (50.6% of buildable, 27.6% of sources) were attempted.
Of those, 2695 (72.9% of built, 36.9% of buildable, 20.1% of sources) were successful.
633 bugs affecting 772 packages (7.23% of 10663 unsuccessful) are reported.
Now I asked myself if I could use this to cross compile the code I'm working on without the need for a full Debian package build.

My projects uses CMake, so we can cross compile by providing a suitable CMAKE_TOOLCHAIN_FILE.

And so the first question is:

How do we create the necessary file using what Multi-Arch brings to our table?
I asked Helmut, and he not only provided me with lots of tips, he also provided me with the following script, which I modified a little:


#!/bin/sh

#set -x

ARCH=$1

DEB_HOST_GNU_TYPE=$(dpkg-architecture -f "-a$1" -qDEB_HOST_GNU_TYPE)
DEB_HOST_GNU_CPU=$(dpkg-architecture -f "-a$1" -qDEB_HOST_GNU_CPU)
case "$(dpkg-architecture -f "-a$1" -qDEB_HOST_ARCH_OS)" in
        linux) system_name=Linux; ;;
        kfreebsd) system_name=kFreeBSD; ;;
        hurd) system_name=GNU; ;;
        *) exit 1; ;;
esac

cat << EOF > cmake_toolchain_$1.cmake
# Use it while calling CMake:
#   mkdir build; cd build
#   cmake -DCMAKE_TOOLCHAIN_FILE="../cmake_toolchain_<arch>.cmake" ../
set(CMAKE_SYSTEM_NAME "$system_name")
set(CMAKE_SYSTEM_PROCESSOR "$DEB_HOST_GNU_CPU")
set(CMAKE_C_COMPILER "$DEB_HOST_GNU_TYPE-gcc")
set(CMAKE_CXX_COMPILER "$DEB_HOST_GNU_TYPE-g++")
set(PKG_CONFIG_EXECUTABLE "$DEB_HOST_GNU_TYPE-pkg-config")
set(PKGCONFIG_EXECUTABLE "$DEB_HOST_GNU_TYPE-pkg-config")
EOF
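Now we can run the script, providing it with the desired host arch, and voilà: we have our toolchain file. As an illustration, assuming the script is saved as cmake_toolchain.sh and the cross toolchain for the target is installed (e.g. via the crossbuild-essential-armhf package):

# Generate the toolchain file for armhf and do an out-of-tree build with it
./cmake_toolchain.sh armhf
mkdir build; cd build
cmake -DCMAKE_TOOLCHAIN_FILE="../cmake_toolchain_armhf.cmake" ../
make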

Can we improve this?

Helmut mentioned that meson provides debcrossgen, a script that automates this step. Meson is written in Python, so it only needs to know the host architecture to create the necessary definitions.

CMake is not interpreted, but maybe it has a way to know the host arch in advance. If so, maybe a helper could be added to assist in the process. Ideas (or even better, patches/code!) are welcome.

Lisandro Damián Nicanor Pérez Meyer noreply@blogger.com Solo sé que sé querer, que tengo Dios y tengo fe.

Montreal's Debian &amp; Stuff - November 2018

Fri, 02/11/2018 - 5:00am

November's wet, My socks are too, Away from keyboard; still on the net, Let's fix /usr/bin/$foo.

November can be a hard month in the Northern Hemisphere. It tends to be dark, rainy and cold. Montreal sure has been dark, rainy and cold lately.

That's why you should join us at our next Debian & Stuff later this month. Come by and work on Debian-related stuff - or not! Hanging out and chatting with folks is also perfectly fine. As always, everyone's welcome.

The date hasn't been decided yet, so be sure to fill out this poll before November 10th. This time we'll be hanging out at Koumbit.

What else can I say? If not for the good company, the bad poutine from the potato shack next door or the nice craft beer from the very hipster beer shop a little bit further down the street, you should drop by to keep November from creeping in too far.

Louis-Philippe Véronneau https://veronneau.org/ Louis-Philippe Véronneau

Having written too much JavaScript, Python feels less liberal.

Fri, 02/11/2018 - 1:16am
Having written too much JavaScript, Python feels less liberal. A Dict is not an Object.

Junichi Uekawa http://www.netfort.gr.jp/~dancer/diary/201811.html.en Dancer's daily hackings

October 2018 report: LTS, Monkeysphere, Flatpak, Kubernetes, CD archival and calendar project

Thu, 01/11/2018 - 9:12pm
Debian Long Term Support (LTS)

This is my monthly Debian LTS report.

GnuTLS

As discussed last month, one of the options to resolve the pending GnuTLS security issues was to backport the latest 3.3.x series (3.3.30), an update proposed and then uploaded as DLA-1560-1. After a suggestion, I included an explicit NEWS.Debian item warning people about the upgrade, a warning also included in the advisory itself.

The most important change is probably dropping SSLv3, RC4, HMAC-SHA384 and HMAC-SHA256 from the list of algorithms, which could impact interoperability. Considering how old RC4 and SSLv3 are, however, this should be a welcome change. As for the HMAC changes, those are mandatory to fix the targeted vulnerabilities (CVE-2018-10844, CVE-2018-10845, CVE-2018-10846).

Xen

Xen updates had been idle for a while in LTS, so I bit the bullet and made a first survey of the pending vulnerabilities. I sent the result to the folks over at Credativ who maintain the 4.4 branch, and they came back with a set of proposed updates which I briefly reviewed. Unfortunately, the patches were too deep for me: all I was able to do was confirm consistency with the upstream patches.

I also brought up a discussion regarding the viability of Xen in LTS, especially regarding the "speculative execution" vulnerabilities (XSA-254 and related). My understanding was that upstream Xen fixes are not (yet?) complete, but apparently that is incorrect, as Peter Dreuw is "confident in the Xen project to provide a solution for these issues". I nevertheless consider, like RedHat, that the simpler KVM implementation might provide more adequate protection against those kinds of attacks, and that LTS users should seriously consider switching to KVM for hosting untrusted virtual machines, even if only because that code is actually mainline in the kernel while Xen is unlikely to ever be. It might be, as Dreuw said, simpler to upgrade to stretch than to switch virtualization systems...

When all is said and done, however, Linux and KVM are patched in Jessie at the time of writing, while Xen is not (yet).

Enigmail

I spent a significant amount of time working on Enigmail this month again, this time specifically working on reviewing the stretch proposed update to gnupg from Daniel Kahn Gillmor (dkg). I did not publicly share the code review as we were concerned it would block the stable update, which seemed to be in jeopardy when I started working on the issue. Thankfully, the update went through but it means it might impose extra work on leaf packages. Monkeysphere, in particular, might fail to build from source (FTBFS) after the gnupg update lands.

In my tests, however, it seems that packages using GPG can deal with the update correctly. I tested Monkeysphere, Password Store, git-remote-gcrypt and Enigmail, all of which passed a summary smoke test. I have tried to summarize my findings on the mailing list. Basically our options for the LTS update are:

  1. pretend Enigmail works without changing GnuPG, possibly introducing security issues

  2. ship a backport of GnuPG and Enigmail through jessie-sloppy-backports

  3. package OpenPGP.js and backport all the way down to jessie

  4. remove Enigmail from jessie

  5. backport the required GnuPG patchset from stretch to jessie

So far, that last option is my favorite approach...

Firefox / Thunderbird and finding work

... which brings us to the Firefox and Thunderbird updates. I was assuming those were going ahead, but the status of those updates currently seems unclear. This is a symptom of a larger problem in the LTS work organization: some packages can stay "claimed" for a long time without an obvious status update.

We discussed ways of improving on this process and, basically, I will try to be more proactive in taking over packages from others and reaching out to others to see if they need help.

A note on GnuPG

As an aside to the Enigmail / GnuPG review, I was struck by the... peculiarities in the GnuPG code during my review. I discovered that GnuPG, instead of using the standard resolver, implements its own internal full-stack DNS server, complete with UDP packet parsing. That's 12 000 lines of code right there. There are also abstraction leaks, like using "1" and "0" as boolean values inside functions (as opposed to passing an integer and converting it to a string on output).

A major change in the proposed patchset are changes to the --with-colons batch output, which GnuPG consumers (like GPGME) are supposed to use to interoperate with GnuPG. Having written such a parser myself, I can witness to how difficult parsing those data structures is. Normally, you should always be using GPGME instead of parsing those directly, but unfortunately GPGME does not do everything GPG does: signing operations and keyring management, for example, have long been considered out of scope, so users are forced to parse that output.

Long story short, GPG consumers still use --with-colons directly (and that includes Enigmail) because they have to. In this case, critical components were missing from that output (e.g. knowing which key signed which UID), so they were added in the patch. That's what breaks the Monkeysphere test suite, which doesn't expect a specific field to be present. Later versions of the protocol specification have been updated (by dkg) to clarify that this might happen, but obviously some have missed the notice, as it came a bit late.

In any case, the review did not make me confident in the software architecture or implementation of the GnuPG program.

autopkgtest testing

As part of our LTS work, we often run tests to make sure everything is in order. Starting with Jessie, we are now seeing packages with autopkgtest enabled, so I started meddling with that program. One of the ideas I was hoping to implement was to unify my virtualization systems. Right now I'm using:

Because sbuild can talk to autopkgtest, and autopkgtest can talk to qemu (which can use KVM images), I figured I could get rid of schroot. Unfortunately, I hit a few snags:

  • #911977: how do we correctly guess the VM name in autopkgtest?
  • #911963: qemu build fails with proxy_cmd: parameter not set (fixed and provided a patch)
  • #911979: fails on chown in autopkgtest-qemu backend
  • #911981: qemu server warns about missing CPU features

So I gave up on that approach. But I did get autopkgtest working and documented the process in my quick Debian development guide.
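For reference, a typical invocation against a QEMU image looks something like this (the package and image names are placeholders):

# Build the source package and run its tests inside a QEMU virtual machine
autopkgtest mypackage_1.0-1.dsc -- qemu ~/vms/autopkgtest-sid.img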

Oh, and I also got sucked into the wiki stylesheet (#864925) after battling with the SystemBuildTools page.

Spamassassin followup

Last month I agreed we could backport the latest upstream version of SpamAssassin (a recurring pattern). After getting the go-ahead from the maintainer, I got a test package uploaded, but the actual upload will need to wait for the stretch update (#912198) to land to avoid a versioning conflict.

Salt Stack

My first impression of Salt was not exactly impressive. The CVE-2017-7893 issue was rather unclear: upstream first fixed the issue, but then reverted the default flag which would enable signature forging after it was discovered this would break compatibility with older clients.

But even worse, the 2014 version of Salt shipped in Jessie did not have master signing in the first place, which means there was simply no way to protect against master impersonation, a worrisome concept. But I assumed this was expected behavior, triaged this away from jessie, and tried to forget about the horrors I had seen.

phpLDAPadmin with sunweaver

I next looked at the phpLDAPadmin (or PHPLDAPadmin?) vulnerabilities, but could not reproduce the issue using the provided proof of concept. I also audited the code, and it seems pretty clear the code is protected against such an attack, as was explained by another DD in #902186. So I asked Mitre for a rejection, and uploaded DLA-1561-1 to fix the other issue (CVE-2017-11107). Meanwhile, the original security researcher acknowledged the security issue was a "false positive", although only in a private email.

I almost did an NMU for the package, but the security team requested that I wait, and marked the bug as grave so the package gets kicked out of buster instead. I at least submitted the patch, originally provided by Ubuntu folks, upstream.

Smarty3

Finally, I worked on the smarty3 package. I confirmed the package in jessie is not vulnerable, because Smarty hadn't yet had the brilliant idea of "optimizing" realpath by rewriting it with new security vulnerabilities. Indeed, the CVE-2018-13982 proof of concept and the CVE-2018-16831 proof of concept both fail in jessie.

I tried to audit the patch shipped with stretch to make sure it fixed the security issue in question (without introducing new ones, of course), but abandoned parsing the stretch patch because this regex gave me a headache:

'%^(?<root>(?:[[:alpha:]]:[\\\\]|/|[\\\\]{2}[[:alpha:]]+|[[:print:]]{2,}:[/]{2}|[\\\\])?)(?<path>(?:[[:print:]]*))$%u'

Who is supporting our users?

I finally participated in a discussion regarding concerns about support of cloud images for LTS releases. I proposed that, like for other parts of Debian, responsibility for those images shift to the LTS team when official support ends. Cloud images fall into that weird space (i.e. "Installing Debian") which is not traditionally covered by the LTS team.

Hopefully that will become the policy, but only time will tell how this will play out.

Other free software work

irssi sandbox

I had been uncomfortable running irssi as my main user on my server for a while: it's a constantly running network server, sometimes connecting to shady servers too. So it made sense to run it as a separate user and, while I was at it, start it automatically on boot.

I created the following file in /etc/systemd/system/irssi@.service, based on this gist:

[Unit]
Description=IRC screen session
After=network.target

[Service]
Type=forking
User=%i
ExecStart=/usr/bin/screen -dmS irssi irssi
ExecStop=/usr/bin/screen -S irssi -X stuff '/quit\n'
NoNewPrivileges=true

[Install]
WantedBy=multi-user.target

A whole apparmor/selinux/systemd profile could be written for irssi, of course, but I figured I would start with NoNewPrivileges. Unfortunately, that line breaks screen, which is sgid utmp, which counts as some sort of "new privilege". So I'm running this as a vanilla service. To enable it, simply enable and start the service with the right username, previously created with adduser:

systemctl enable irssi@foo.service
systemctl start irssi@foo.service

Then I join the session by logging in as the foo user, which can be configured in .ssh/config as a convenience host:

Host irc.anarc.at
    Hostname shell.anarc.at
    User foo
    IdentityFile ~/.ssh/id_ed25519_irc
    # using command= in authorized_keys until we're all on buster
    #RemoteCommand screen -x
    RequestTTY force

Then the ssh irc.anarc.at command rejoins the screen session.

Monkeysphere revival

Monkeysphere was in bad shape in Debian buster. The bit-rotten test suite was failing and the package was about to be removed from the next Debian release. I filed and worked on many critical bugs (Debian bug #909700, Debian bug #908228, Debian bug #902367, Debian bug #902320, Debian bug #902318, Debian bug #899060, Debian bug #883015), but the final fix came from another user. I was also welcomed onto the Debian packaging team, which should allow me to make a new release next time we have similar issues, something that was a blocker this time round.

Unfortunately, I had to abandon the Monkeysphere FreeBSD port. I had simply forgotten about that commitment and, since I do not run FreeBSD anywhere anymore, it made little sense to keep on doing so, especially since most of the recent updates were done by others anyways.

Calendar project

I've been working on a photography project since the beginning of the year. Each month, I pick the best picture out of my various shoots and will collect those in a 2019 calendar. I documented my work on the photo page, but most of my work in October went into finding a proper tool to lay out the calendar itself. I settled on wallcalendar, a beautiful LaTeX template, because the author was very responsive to my feature request.

I also figured out which events to include in the calendar and a way to generate moon phases (now part of the undertime package) for the local timezone. I still have to figure out which other astronomical events to include. I had no response from the local planetarium, but (as always) good feedback from NASA folks who pointed me at useful resources to top up the calendar.

Kubernetes

I got deeper into Kubernetes work by helping friends setup a cluster and share knowledge on how to setup and manage the platforms. This led me to fix a bug in Kubespray, the install / upgrade tool we're using to manage Kubernetes. To get the pull request accepted, I had to go through the insanely byzantine CLA process of the CNCF, which was incredibly frustrating, especially since it was basically a one-line change. I also provided a code review of the Nextcloud helm chart and reviewed the python-hvac ITP, one of the dependencies of Kubespray.

As I get more familiar with Kubernetes, it does seem like it can solve real problems especially for shared hosting providers. I do still feel it's overly complex and over-engineered. It's difficult to learn and moving too fast, but Docker and containers are such a convenient way to standardize shipping applications that it's hard to deny this new trend does solve a problem that we have to fix right now.

CD archival

As part of my work on archiving my CD collection, I contributed three pull requests to fix issues I was having with the project, mostly regarding corner cases but also improvements on the Dockerfile. At my suggestion, upstream also enabled automatic builds for the Docker image which should make it easier to install and deploy.

I still wish to write an article on this, to continue my series on archives, which could happen in November if I can find the time...

Flatpak conversion

After reading a convincing benchmark I decided to give Flatpak another try and ended up converting all my Snap packages to Flatpak.

Flatpak has many advantages:

  • it's decentralized: like APT or F-Droid repositories, anyone can host their own (there is only one Snap repository, managed by Canonical)

  • it's faster: the above benchmarks hinted at this, but I could also confirm Signal starts and runs faster under Flatpak than Snap

  • it's standardizing: much of the work Flatpak is doing to make sense of how to containerize desktop applications is being standardized (and even adopted by Snap)

Much of this was spurred by the breakage of Zotero in Debian (Debian bug #864827) due to the Firefox upgrade. I made a wiki page to tell our users how to install Zotero in Debian considering Zotero might take a while to be packaged back in Debian (Debian bug #871502).

Debian work

Without my LTS hat, I worked on the following packages:

Other work

Usual miscellaneous:

Antoine Beaupré https://anarc.at/tag/debian-planet/ pages tagged debian-planet

Linux Audio Miniconf 2018 report

Thu, 01/11/2018 - 6:12pm

The audio miniconference was held on the 21st in the offices of Cirrus Logic in Edinburgh, with 15 attendees from across the industry including userspace and kernel developers, and people from several OS vendors and a range of silicon companies.

Community

We started off with a discussion of community governance led by Takashi Iwai. We decided that for the ALSA-hosted projects we’ll follow the kernel and adopt the modified version of the Contributor Covenant that it has adopted; Sound Open Firmware already has a code of conduct. We also talked a bit about moving to some modern hosted git with web-based reviews. While this is not feasible for the kernel components, we decided to look at doing this for the userspace components; Takashi will start a discussion on alsa-devel. Speaking of the lists, Vinod Koul also volunteered to work with the Linux Foundation admin team to get them integrated with lore.kernel.org.

Liam Girdwood presenting virtualization (photo: Arun Raghavan)

Kernel

Liam Girdwood then kicked off the first technical discussion of the day, covering virtualization. Intel have a new hypervisor called ACRN which they are using as part of a solution to expose individual PCMs from their DSPs to virtual clients; they have a virtio specification for control. There were a number of concerns about the current solution being rather specific to both the hardware and the use case they are looking at: we need to verify that this can work on architectures that aren’t cache coherent, or on systems where, rather than exposing a DSP, the host system is using a sound server.

We then moved on to AVB. Several vendors have hardware implementations already, but it seems clear that these have been built by teams who are not familiar with audio hardware; hopefully this will improve in future, but for now there are some regrettable real-time requirements. Sakamoto-san suggested looking at FireWire, which has some similar things going on with timestamps being associated with the audio stream.

For SoundWire, basic functionality for x86 systems is now 90% there: we still need support for multiple CPU DAIs in the ASoC core (which is in review on the lists), and the Intel DSP drivers need to plumb in the code to instantiate the drivers.

We also covered testing. There may be some progress here this year, as Intel have a new hypervisor called ACRN and some out-of-tree QEMU models for some of their newer systems, both of which will help with the perennial problem that we need hardware for a lot of the testing we want to do. We also reviewed the status of some other recurring issues, including PCM granularity and timestamping: for PCM granularity Takashi Iwai will make some proposals on the list, and for timestamping Intel will make sure that the rest of their driver changes for this are upstreamed. For dimen we agreed that Sakamoto-san’s work is pretty much done and we just need some comments in the header, and that his control refactoring was a good idea. There was discussion of user-defined card elements; there were no concerns with raising the number of user-defined elements that can be created, but some fixes are needed for cleanup of user-defined card elements when applications close. The compressed audio userspace is also getting some development, with the focus on making things easier to test, integrating with ffmpeg to give something that’s easier for users to work with.

Charles Keepax covered his work on rate domains (which we decided should really be much more generic than just covering sample rates), he’d posted some patches on the list earlier in the week and gave a short presentation about his plans which sparked quite a bit of discussion. His ideas are very much in line with what we’ve discussed before in this area but there’s still some debate as to how we configure the domains – the userspace interface is of course still there but how we determine which settings to use once we pass through something that can do conversions is less clear. The two main options are that the converters can expose configuration to userspace or that we can set constraints on other widgets in the card graph and then configure converters automatically when joining domains. No firm conclusion was reached, and since substantial implementation will be required it is not yet clear what will prove most sensible in practical systems.

Userspace

Sakamoto-san also introduced some discussion of new language bindings. He has been working on a new library designed for use with GObject introspection which people were very interested in, especially with the discussion of testing – having something like this would simplify a lot of the boilerplate that is involved in using the C API and allow people to work in a wider variety of languages without needing to define specific bindings or use the individual language’s C adaptations. People also mentioned the Rust bindings that David Henningsson had been working on, they were particularly interesting for the ChromeOS team as they have been adopting Rust in their userspace.

We talked a bit about higher-level userspace software too. PulseAudio development has been relatively quiet recently; Arun talked briefly about his work on native compressed audio support, and we discussed whether PulseAudio would be able to take advantage of the new timestamping features added by Pierre-Louis Bossart. There’s also the new PipeWire sound server stack, originally written for video but now with some audio support as well. The goal is to address architectural limitations in the existing JACK and PulseAudio stacks, offering the ability to achieve low latencies in a stack which is more usable for general-purpose applications than JACK is.

DSPs

Discussions of DSP related issues were dominated by Sound Open Firmware which is continuing to progress well and now has some adoption outside Intel. Liam gave an overview of the status there and polled interest from the DSP vendors who were present. We talked about how to manage additions to the topology ABI for new Sound Open Firmware features including support for loading and unloading pieces of the DSP topology separately when dynamically adding to the DSP graph at runtime, making things far more flexible. The issues around downloading coefficient data were also covered, the discussion converged on the idea of adding something to hwdep and extending alsa-lib and tinyalsa to make this appear integrated with the standard control API. This isn’t ideal but it seems unlikely that anything will be. Techniques for handling long sequences of RPC calls to DSPs efficiently were also discussed, the conclusion was that the simplest thing was just to send commands asynchronously and then roll everything back if there are any errors.

Summary

Thanks again to all the attendees for their time and contributions and to Cirrus Logic for their generosity in hosting this in their Edinburgh office. It was really exciting to see all the active development that’s going on these days, it’ll be great to see some of that bear fruit over the next year!

Group photo

broonie https://blog.sirena.org.uk Technicalities

My Work on Debian LTS (October 2018)

Thu, 01/11/2018 - 4:13pm

After some nice family vacation in Tuscany, I did four hours of work on the Debian LTS project as a paid contributor at the end of this month. Thanks to all LTS sponsors for making this possible.

I moved a backlog of 4h over from October to November (so I will work 12h on Debian LTS in November 2018).

Furthermore, I have signed up for Debian ELTS work with another 4h (as a start, more availability planned for upcoming months).

This is my list of work done in October 2018:

  • Upload of poppler (DLA 1562-1 [1]), fixing 4 CVEs
  • Discuss my research on CVE-2018-12689 in phpldapadmin from August 2018 with Antoine Beaupré, who identified the published exploit as a 'false positive' (for details, see his monthly LTS report for October 2018).

light+love
Mike

References

[1] https://lists.debian.org/debian-lts-announce/2018/10/msg00024.html

sunweaver http://sunweavers.net/blog/blog/1 sunweaver's blog

FLOSS Activities October 2018

Thu, 01/11/2018 - 11:45am
Changes

Issues

Review

Administration
  • Debian: investigate MSA issue, investigate Fastly outage, investigate buildd issue, forwarded mail bounce info
  • Debian wiki: unblacklist IP range, clean up stray tmp files, whitelist email addresses, ask on IRC about accounts with bouncing email, ping accounts with bouncing email, disable accounts with bouncing email
  • Debian derivatives census: merge patches, deploy changes, clean up cruft, delete giant source packages
Communication

Sponsors

All work was done on a volunteer basis.

Paul Wise http://bonedaddy.net/pabs3/log/ Log

Time for an official MIME type for patches?

Thu, 01/11/2018 - 8:15am

As part of my involvement in the Nikita archive API project, I've been importing a fairly large lump of emails into a test instance of the archive to see how well this would go. I picked a subset of my notmuch email database, all public emails sent to me via @lists.debian.org, giving me a set of around 216 000 emails to import. In the process, I had a look at the various attachments included in these emails to figure out what to do with them, and noticed that one of the most common attachment formats does not have an official MIME type registered with IANA/IETF. The output from diff, i.e. the input for patch, is on the top 10 list of formats included in these emails. At the moment people seem to use either text/x-patch or text/x-diff, but neither is officially registered. It would be better if one official MIME type were registered and used everywhere.
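For illustration, such an attachment typically shows up in the raw email with headers along these lines (the filename here is invented):

Content-Type: text/x-diff; name="0001-fix-typo.patch"
Content-Disposition: attachment; filename="0001-fix-typo.patch"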

To try to get one official MIME type for these files, I've brought up the topic on the media-types mailing list. If you are interested in discussing which MIME type to use as the official one for patch files, or are involved in making software that uses a MIME type for patches, perhaps you would like to join the discussion?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Petter Reinholdtsen http://people.skolelinux.org/pere/blog/ Petter Reinholdtsen - Entries tagged english

Montreal's Debian &amp; Stuff - November 2018

Enj, 01/11/2018 - 5:00pd

November's wet, My socks are too, Away from keyboard; still on the net, Let's fix /usr/bin/$foo.

November can be a hard month in the Northen Hemisphere. It tends to be dark, rainy and cold. Montreal sure has been dark, rainy and cold lately.

That's why you should join us at our next Debian & Stuff later this month. Come by and work on Debian-related stuff - or not! Hanging out and chatting with folks is also perfectly fine. As always, everyone's welcome.

The date hasn't been decided yet, so be sure to fill out this poll before November 10th. This time we'll be hanging out at Koumbit.

What else can I say; if not for the good company, the bad poutine from the potato shack next door or the nice craft beer from the very hipster beer shop a little bit further down the street, you should drop by to keep November from creeping in too far.

Louis-Philippe Véronneau https://veronneau.org/ Louis-Philippe Véronneau

Review: In Pursuit of the Traveling Salesman

Thu, 01/11/2018 - 4:25am

Review: In Pursuit of the Traveling Salesman, by William J. Cook

Publisher: Princeton University
Copyright: 2012
ISBN: 0-691-15270-5
Format: Kindle
Pages: 272

In Pursuit of the Traveling Salesman is a book-length examination of the traveling salesman problem (TSP) in computer science, written by one of the foremost mathematicians working on solutions to the TSP. Cook is Professor of Applied Mathematics and Statistics at Johns Hopkins University and is one of the authors of the Concorde TSP Solver.

First, a brief summary of the TSP for readers without a CS background. While there are numerous variations, the traditional problem is this: given as input a list of coordinates on a two-dimensional map representing cities, construct a minimum-length path that visits each city exactly once and then returns to the starting city. It's famous in computer science in part because it's easy to explain and visualize but still NP-hard, which means that not only do we not know of a way to exactly solve this problem in a reasonable amount of time for large numbers of cities, but also that a polynomial-time solution to the TSP would provide a solution to a vast number of other problems. (For those familiar with computability theory, the classic TSP is not NP-complete because it's not a decision problem and because of some issues with Euclidean distances, but when stated as a graph problem and converted into a decision problem by, for example, instead asking if there is a solution with length less than n, it is NP-complete.)

This is one of those books where the quality of the book may not matter as much as its simple existence. If you're curious about the details of the traveling salesman problem specifically, but don't want to read a lot of mathematics and computer science papers, algorithm textbooks, or books on graph theory, this book is one of your few options. Thankfully, it's also fairly well-written. Cook provides a history of the problem, a set of motivating problems (the TSP doesn't come up as much and isn't as critical as some NP-complete problems, but optimal tours are still more common than one might think), and even a one-chapter tour of the TSP in art. The bulk of the book, though, is devoted to approximation methods, presented in roughly chronological order of development.

Given that the TSP is NP-hard, we obviously don't know a good exact solution, but I admit I was a bit disappointed that Cook spent only one chapter exploring the exact solutions and explaining to the reader what makes the problem difficult. Late in the book, he does describe the Held-Karp dynamic programming algorithm that gets the work required for an exact solution down to exponential in n, provides a basic introduction to complexity theory, and explains that the TSP is NP-complete by reduction from the Hamiltonian path problem, but doesn't show the reduction of 3SAT to Hamiltonian paths. Since my personal interest ran a bit more towards theory and less towards practical approximations, I would have appreciated a bit more discussion of the underlying structure of the problem and why it's algorithmically hard. (I did appreciate the explanation of why it's not clear whether the general Euclidean TSP is even in NP due to problems with square roots, though.)

That said, I suppose there isn't as much to talk about in exact solutions (the best one we know dates to 1962) and much more to talk about in approximations, which is where Cook has personally spent his time. That's the topic of most of this book, and includes a solid introduction to the basic concept of linear programming (a better one than I ever got in school) and some of its other applications, as well as other techniques (cutting planes, branch-and-bound, and others). The math gets a bit thick here, and Cook skips over a lot of the details to try to keep the book suitable for a general audience, so I can't say I followed all of it, but it certainly satisfied my curiosity about practical approaches to the TSP. (It also made me want to read more about linear programming.)

If you're looking for a book like this, you probably know that already, and I can reassure you that it delivers what it promises and is well-written and approachable. If you aren't already curious about a brief history of practical algorithms for one specific problem, I don't think this book is sufficiently compelling to be worth seeking out. This is not a general popularization of interesting algorithms (see Algorithms to Live By if you're looking for that), nor (despite Cook's efforts) is it particularly approachable if this is your first deep look at computer algorithms. It's a niche book that delivers on its promise, but probably won't convince you the topic is interesting if you don't see the appeal.

Rating: 7 out of 10

Russ Allbery https://www.eyrie.org/~eagle/ Eagle's Path

Debian LTS work, October 2018

Wed, 31/10/2018 - 11:26pm

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 4 hours from September. I worked all 19 hours.

I released security updates for the linux (DLA 1529-1) and linux-4.9 (DLA 1531-1) packages. I prepared and released another stable update for Linux 3.16 (3.16.60), but have not yet included this in a Debian upload. I also released a security update for libssh (DLA 1548-1).

Ben Hutchings https://www.decadent.org.uk/ben/blog Better living through software

RHL'19 St-Cergue, Switzerland, 25-27 January 2019

Wed, 31/10/2018 - 10:06pm

(translated from original French version)

The Rencontres Hivernales du Libre (RHL) (Winter Meeting of Freedom) takes place 25-27 January 2019 at St-Cergue.

Swisslinux.org invites the free software community to come and share workshops, great meals and good times.

This year, we celebrate the 5th edition with the theme «Exploit».

Please think creatively and submit proposals exploring this theme: lectures, workshops, performances and other activities are all welcome.

RHL'19 is situated directly at the base of some family-friendly ski pistes suitable for beginners and more adventurous skiers. It is also a great location for alpine walking trails.

Why, who?

RHL'19 brings together the forces of freedom in the Leman basin, Romandy, neighbouring France and further afield (there is an excellent train connection from Geneva airport). Hackers and activists come together to share a relaxing weekend and discover new things with free technology and software.

If you have a project to present (in 5 minutes, an hour or another format) or activities to share with other geeks, please send an email to rhl-team@lists.swisslinux.org or submit it through the form.

If you have any specific venue requirements please contact the team.

You can find detailed information on the event web site.

Please ask if you need help finding accommodation or any other advice planning your trip to the region.

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Free software activities in October 2018

Wed, 31/10/2018 - 4:47pm

Here is my monthly update covering what I have been doing in the free software world during October 2018 (previous month):


  • My activities as the current Debian Project Leader are covered in my monthly "Bits from the DPL" email to the debian-devel-announce mailing list.

  • I created GitHub-esque ribbons to display on Salsa-hosted websites. (Salsa is the collaborative development server for Debian and the replacement for the now-deprecated Alioth service.)

  • Started an early, work-in-progress "Debbugs Enhancement Suite" Chrome browser extension to enhance various parts of the bugs.debian.org web interface.

  • Even more hacking on the Lintian static analysis tool for Debian packages:

    • New features:

      • Warn about packages that use PIUPARTS_* in maintainer scripts. (#912040)
      • Check for packages that parse /etc/passwd in maintainer scripts. (#911157)
      • Emit a warning for packages that do not specify Build-Depends-Package in symbols files. (#911451)
      • Check for non-Python files in top-level Python module directories. [...]
      • Check packages missing versioned dependencies on init-system-helpers. (#910594)
      • Detect calls to update-inetd(1) that use --group without --add, etc. (#909511)
      • Check for packages that encode a Python version number in their source package name. [...]
    • Bug fixes:

    • Misc:

      • Also show the maintainer name on the tag-specific reporting HTML. [...]
      • Tidy a number of references regarding the debhelper-compat virtual package. [...]
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month:

  • I attended the Tandon School of Engineering (part of New York University) to speak and work with students from the Application Security course on the topic of reproducible builds.

  • Wrote and forwarded a patch for Fontconfig to ensure the cache filenames are deterministic. [...]

  • I sent two previously-authored patches for GNU mtools to ensure the Debian Installer images could become reproducible. (1 & 2)

  • Submitted 11 Debian patches to fix reproducibility issues in fast5, libhandy, lmfit-py, mp3fs, opari2, pjproject, radon, sword, syndie, wit & zsh-antigen. I also submitted an upstream pull request for python-changelog.

  • Made a large number of changes to our website, including adding step-by-step instructions and screenshots on how to sign up to our project on Salsa and migrating the TimestampsProposal page on the Debian Wiki to our website.

  • Fixed an issue in disorderfs — our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues — where touch -m and touch -a were not working as expected (#911281). In addition, ensured that a failing XFail test is itself treated as a failure [...].

  • Made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues:

    • Add support for comparing OCaml files via ocamlobjinfo. (#910542)

    • Add support for comparing PDF metadata using PyPDF2. (#911446)

    • Support gnumeric 1.12.43. [...]

    • Use str.startswith(...) over str.index(...) == 0 in the Macho comparator to prevent tracebacks if text cannot be found on the line. (#910540).

    • Add a note on how to regenerate debian/tests/control from debian/tests/control.in, and regenerate debian/tests/control with no material changes beyond the regeneration comment itself. (1, 2)

    • Prevent test failures when running under stretch-backports by checking the OCaml version number. (#911846)

    • I also added a Salsa ribbon to the diffoscope.org website. [...]

  • Categorised a huge number of packages and issues in the Reproducible Builds "notes" repository and kept isdebianreproducibleyet.com up to date [...].

  • Worked on publishing our weekly reports. (#180, #181, #182 & #183)

  • Lastly, I fixed an issue in our Jenkins-based testing framework that powers tests.reproducible-builds.org to suppress some warnings from the cryptsetup initramfs hook which were causing some builds to be marked as "unstable". [...]


Debian


Debian bugs & patches filed
  • debbugs: Correct "favicon" location in <link/> HTML header. (#912186)

  • ikiwiki: "po" plugin can insert raw file contents with [[!inline]] directives. (#911356)

  • kitty: Please update homepage. (#911848)

  • pipenv: Bundles a large number of third-party libraries. (#910107)

  • mailman: Please include List-Id header on confirmation mails. (#910378)

  • fswatch: Clarify Files-Excluded entries. (#910330)

  • fuse3: Please obey nocheck build profile. (#910029)

  • gau2grid: Please add a non-boilerplate long description. (#911532)

  • hiredis: Please backport to stretch-backports. (#911732)

  • Please remove unnecessary overrides in fuse3 (#910030), puppet-module-barbican (#910374), python-oslo.vmware (#910011) & python3-antlr3 (#910012).

  • python3-pypdf2: Python 3.x package ships non-functional Python 2.x examples. (#911649)

  • mtools: New upstream release. (#912285)

I also filed requests with the stable release managers to update lastpass-cli (#911767) and python-django (#910821).


Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

  • Multiple "frontdesk" shifts, triaging upstream CVEs, liaising with the Security Team, etc.

  • Issued DLA 1528-1 to prevent a denial-of-service (DoS) vulnerability in strongswan, a virtual private network (VPN) client and server, where verification of an RSA signature with a very short public key caused an integer underflow in a length check that resulted in a heap buffer overflow.

  • Issued DLA 1547-1 for the Apache PDFBox library to fix a potential DoS issue where a malicious file could have triggered an extremely long running computation when parsing the PDF page tree.

  • Issued DLA 1550-1 for src:drupal7 to close remote code execution and external URL injection exploits in the Drupal web-based content management framework as part of Drupal's SA-CORE-2018-006 security release.

  • Issued ELA-49-1 for the Adplug sound library to fix a potential DoS attack due to a double-free vulnerability.


Uploads
  • redis:

    • 5.0~rc5-2 — Use the Debian hiredis library now that #907259 has landed. (#907258)
    • 5.0.0-1 — New upstream release.
    • 5.0.0-2 — Update patch to sentinel.conf to ensure the correct runtime PID file location (#911407), listen on ::1 interfaces too for redis-sentinel to match redis-server, & run the new LOLWUT command in the autopkgtests.
  • python-django:

    • 1.11.16-1 — New upstream bugfix release.
    • 1.11.16-2 — Fix some broken README.txt symlinks. (#910120)
    • 1.11.16-3 — Default to supporting Spatialite 4.2. (#910240)
    • 2.1.2-1 — New upstream security release.
    • 2.1.2-2 — Default to supporting Spatialite 4.2. (#910240)
  • libfiu:

  • 0.96-5 — Apply patch from upstream to write fiu_ctrl.py atomically to avoid a parallel build failure. (#909843)

  • 0.97-1 — New upstream release.
  • 0.97-2 — Mangle return offset sizes for 64-bit variants to prevent build failures on 32-bit architectures. (#911733)

  • adminer (4.6.3-2) — Use continue 2 to avoid a switch/continue warning in PHP 7.3, thus preventing an autopkgtest regression. (#911825)

  • bfs (1.2.4-1) — New upstream release.

  • django-auto-one-to-one (3.1.1-1) — New upstream release.

  • lastpass-cli (1.3.1-5) — Add ca-certificates to Depends.

  • python-redis (2.10.6-5) — Fix debian/watch file.

  • python-daiquiri (1.5.0-1) — New upstream release.


I also sponsored uploads of elpy (1.25.0-1) and hiredis (0.14.0-1).

FTP Team


As a Debian FTP assistant I ACCEPTed 95 packages: barrier, cct, check-pgactivity, cloudkitty-dashboard, cmark-gfm, eclipse-emf, eclipse-jdt-core, eclipse-platform-team, eclipse-platform-ua, eclipse-platform-ui, eos-sdk, equinox-p2, fontcustom, fonts-fork-awesome, fswatch, fuse3, gau2grid, gitlab, glom, grapefruit, grub-cloud, gsequencer, haskell-base-compat-batteries, haskell-invariant, haskell-parsec-numbers, haskell-reinterpret-cast, haskell-resolv, haskell-shelly, haskell-skylighting-core, haskell-wcwidth, hollywood, intelhex, javapoet, libgpg-error, libjsoncpp, libnbcompat, lintian-brush, llvm-toolchain-snapshot, mando, mat2, mini-httpd-run, modsecurity, mtree-netbsd, neutron-tempest-plugin, ngspice, openstack-cluster-installer, pg-checksums, pg-cron, pg-dirtyread, pg-qualstats, pg-repack, pg-similarity, pg-stat-kcache, pgaudit, pgextwlist, pgfincore, pgl-ddl-deploy, pgmemcache, pgpool2, pgrouting, pgsql-ogr-fdw, pgstat, pipenv, postgresql-hll, postgresql-plproxy, postgresql-plsh, puppet-module-barbican, puppet-module-icann-quagga, puppet-module-icann-tea, puppet-module-rodjek-logrotate, pykwalify, pyocd, python-backports.csv, python-fastfunc, python-httptools, python-redmine, python-tld, python-yaswfp, python3-simpletal, r-cran-eaf, r-cran-emoa, r-cran-ggally, r-cran-irace, r-cran-parallelmap, r-cran-popepi, r-cran-pracma, r-cran-spp, radon, rust-semver-parser-0.7, syndie, unicycler, vitetris, volume-key, weston & zram-tools.

I additionally filed 14 RC bugs against packages that had potentially-incomplete debian/copyright files against fontcustom, fuse3, intelhex, libnbcompat, mat2, modsecurity, mtree-netbsd, puppet-module-barbican, python-redmine, r-cran-eaf, r-cran-emoa, r-cran-pracma, radon & syndie.

Chris Lamb https://chris-lamb.co.uk/blog/category/planet-debian lamby: Items or syndication on Planet Debian.

SAT solvers for fun and fairness

Tue, 30/10/2018 - 10:55pm

Trøndisk 2018, the first round of the Norwegian ultimate series (the frisbee sport, not the fighting style) is coming up this weekend! Normally that would mean that I would blog something about all the new and exciting things we are doing with Nageru for the stream, but for now, I will just point out that the stream is on plastkast.no and will be live from 0945–1830 CET on Saturday (group stage) and 1020–1450 (playoffs) on Sunday.

Instead, I wanted to talk about a completely different but interesting subproblem we had to solve; how do you set up a good schedule for the group stages? There are twelve teams, pre-seeded and split into two groups (call them A0–A5 and B0–B5) that are to play round-robin, but there are only two fields—and only one of them is streamed. You want a setup that maximizes fairness in the sense that people get adequate rest between matches, and also more or less equal number of streamed games. Throw in that one normally wants the more exciting games last, and it starts to get really tricky to make something good by hand. Could we do it programmatically?

My first thought was that since this is all about the ordering, it sounded like a variant of the infamous travelling salesman problem. It's well-known that TSP is NP-hard (or NP-complete, but I won't bother with the details), but there are excellent heuristic implementations in practice. In particular, I had already used OR-Tools, Google's optimization toolkit, to solve TSP problems in the past; it contains a TSP solver that can deal with all sorts of extra details, like multiple agents to travel (in our case, multiple fields), subconstraints on ordering and so on. (OR-Tools probably doesn't contain the best TSP solver in the world—there are specialized packages that do even better—but it's much better than anything I could throw together myself.)

However, as I tried figuring out something, and couldn't quite get it to fit (there are so many extra nonlocal constraints), I saw that the OR-Tools documentation had a subsection on scheduling problems. It turns out this kind of scheduling can be represented as a so-called SAT (satisfiability) problem, and OR-Tools also has a SAT solver. (SAT, in its general forms, is also NP-hard, but again, there are great heuristics.) I chose the Python frontend, which probably wasn't the best idea in the world (it's poorly documented, and I do wonder when Python will take the step into the 90s and make spelling errors in variables into compile-time errors instead of throwing a runtime exception four hours into a calculation), but that's what the documentation used, and the backend is in C++ anyway, so speed doesn't matter.

The SAT solver works by declaring variables and various constraints between them, and then asking the machine to either come up with a solution that fits, or to prove that it's not possible. Let's have a look at some excerpts to get a feel for how it all works.
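All the excerpts below assume a model object and a few constants; here is a minimal setup sketch, where the exact constant names are my assumptions based on the prose rather than the real script:

from ortools.sat.python import cp_model

num_teams = 12     # A0-A5 and B0-B5
num_matches = 30   # 15 round-robin matches in each group
num_rounds = 15    # two fields, so two match slots per round
model = cp_model.CpModel()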

We know we have 15 rounds with two fields in each, and every field should host a match. So let's generate 30 such variables, each containing a match number (we use the convention that slots 0, 2, 4, 6, etc. are on the stream field and slots 1, 3, 5, 7, etc. are played in parallel on the other field):

matchnums = []
for match_idx in range(num_matches):
    matchnums.append(model.NewIntVar(0, num_matches - 1, "matchnum%d" % (match_idx)))

So this is 30 variables, each going from 0 to 29, inclusive. We start with a fairly obvious constraint; we can only play each match once:

model.AddAllDifferent(matchnums)

The SAT solver might make this into a bunch of special constraints underneath, or it might not. We don't care; it's abstracted away for us.

Now, it's not enough to just find any ordering—after all, we want to find an ordering with some constraints. However, the constraints are rarely about the match numbers, but more about the teams that play in those matches. So we'll need some helper variables. For instance, it would be interesting to know which teams play in each match:

home_teams = []
away_teams = []
for match_idx in range(num_matches):
    home_teams.append(model.NewIntVar(0, num_teams - 1, "home_team_match%i" % (match_idx)))
    away_teams.append(model.NewIntVar(0, num_teams - 1, "away_team_match%i" % (match_idx)))
    model.AddElement(matchnums[match_idx], home_teams_for_match_num, home_teams[match_idx])
    model.AddElement(matchnums[match_idx], away_teams_for_match_num, away_teams[match_idx])

AddElement() here simply is an indexing operation; since there's no difference between home and away teams for us, we've just pregenerated all the matches as A0 vs. A1, A0 vs. A2, etc. up until A3 vs. A5 and A4 vs. A5, and then similarly for the other group. The “element” constraint makes sure that e.g. home_team_match0 = home_teams_for_match_num[matchnum0]. Note that even though I think of this as an assignment where the home team for match 0 follows logically from which match is being played as match 0, it is a constraint that goes both ways; the solver is free to do inference that way, or instead first pick the home team and then deal with the consequences for the match number. (E.g., if it picks A4 as the home team, the match number most certainly needs to be 14, which corresponds to A4–A5.)
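To make the indexing concrete, here is a hypothetical way those fixture lists could be pregenerated (the post doesn't show the real construction, so the details are an assumption):

# Pregenerate all matches with the lower-numbered team as "home";
# teams 0-5 are A0-A5 and teams 6-11 are B0-B5, so match 0 is A0-A1
# and match 15 is B0-B1.
home_teams_for_match_num = []
away_teams_for_match_num = []
for group_base in (0, 6):
    for i in range(6):
        for j in range(i + 1, 6):
            home_teams_for_match_num.append(group_base + i)
            away_teams_for_match_num.append(group_base + j)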

We're not quite done with the helpers yet; we want to explode these variables into booleans:

home_team_in_match_x_is_y = [
    [model.NewBoolVar('home_team_in_match_%d_is_%d' % (match_idx, team_idx))
     for team_idx in range(num_teams)]
    for match_idx in range(num_matches)
]
for match_idx in range(num_matches):
    model.AddMapDomain(home_teams[match_idx], home_team_in_match_x_is_y[match_idx])

and similarly for away team and match number.

So now we have a bunch of variables of the type “is the home team in match 6 A4 or not?”. Finally we can make some interesting constraints! For instance, we've decided already that the group finals (A0–A1 and B0–B1) should be the last two matches of the day, and on the stream field:

model.AddBoolOr([match_x_has_num_y[28][0], match_x_has_num_y[28][15]])
model.AddBoolOr([match_x_has_num_y[26][0], match_x_has_num_y[26][15]])

This is a hard constraint; we don't have a solution unless matches 0 and 15 fill the last two stream slots (and we earlier said that all the slots must hold different matches).

We're going to need even more helper variables now. It's useful to know whether a team is playing at all in a given round; that's the case if they are the home or away team on either field:

plays_in_round = {}
for team_idx in range(num_teams):
    plays_in_round[team_idx] = {}
    for round_idx in range(num_rounds):
        plays_in_round[team_idx][round_idx] = model.NewBoolVar('plays_in_round_t%d_r%d' % (team_idx, round_idx))
        model.AddMaxEquality(plays_in_round[team_idx][round_idx], [
            home_team_in_match_x_is_y[round_idx * 2 + 0][team_idx],
            home_team_in_match_x_is_y[round_idx * 2 + 1][team_idx],
            away_team_in_match_x_is_y[round_idx * 2 + 0][team_idx],
            away_team_in_match_x_is_y[round_idx * 2 + 1][team_idx]])

Now we can establish a few other very desirable properties; in particular, each team should never need to play two matches back-to-back:

for round_idx in range(num_rounds - 1):
    for team_idx in range(num_teams):
        model.AddBoolOr([plays_in_round[team_idx][round_idx].Not(),
                         plays_in_round[team_idx][round_idx + 1].Not()])

Note that there's nothing here that says the same team can't be assigned to play on both fields at the same time! However, this is taken care of by some constraints on the scheduling that I'm not showing for brevity (in particular, we established that each round must have exactly one game from group A and one from group B).
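For the curious, one way to express that one-game-per-group rule is with reified linear constraints; a sketch, assuming matches 0–14 belong to group A and 15–29 to group B (not necessarily how the real script does it):

# In each round, the stream-field slot holds a group A match if and
# only if the parallel slot holds a group B match, and vice versa.
for round_idx in range(num_rounds):
    stream_is_a = model.NewBoolVar('round_%d_stream_is_group_a' % round_idx)
    model.Add(matchnums[round_idx * 2] <= 14).OnlyEnforceIf(stream_is_a)
    model.Add(matchnums[round_idx * 2] >= 15).OnlyEnforceIf(stream_is_a.Not())
    model.Add(matchnums[round_idx * 2 + 1] >= 15).OnlyEnforceIf(stream_is_a)
    model.Add(matchnums[round_idx * 2 + 1] <= 14).OnlyEnforceIf(stream_is_a.Not())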

Now we're starting to get out of the “hard constraint” territory and more into things that would be nice. For this, we need objectives. One such objective is what I call “tiredness”; playing matches nearly back-to-back (i.e., game - rest - game) should carry a penalty, and the solution should try to avoid it.

tired_matches = []
for round_idx in range(num_rounds - 2):
    for team_idx in range(num_teams):
        tired = model.NewBoolVar('team_%d_is_tired_in_round_%d' % (team_idx, round_idx))
        model.AddMinEquality(tired, [plays_in_round[team_idx][round_idx],
                                     plays_in_round[team_idx][round_idx + 2]])
        tired_matches.append(tired)
sum_tiredness = sum(tired_matches)

So here we have helper variables that are being set to the minimum (effectively a logical AND) of “do I play in round N” and “do I play in round N + 2”. Tiredness is simply a sum of those 0–1 variables, which we can seek to minimize:

model.Minimize(sum_tiredness)

You may wonder how we went from a satisfiability problem to an optimization problem. Conceptually, however, this isn't so hard. Just ask the solver to find any solution, e.g. something with sum_tiredness 20. Then simply add a new constraint saying sum_tiredness <= 19 and ask for a re-solve (or continue). Eventually, the solver will either come back with a better solution (in which case you can tighten the constraint further), or the message that you've asked for something impossible, in which case you know you have the optimal solution. (I have no idea whether modern SAT solvers actually work this way internally, but again, conceptually it's simple.)
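In practice the OR-Tools frontend hides all of that bookkeeping; a minimal solve sketch, assuming the model built in the excerpts above:

solver = cp_model.CpSolver()
solver.parameters.max_time_in_seconds = 600.0  # settle for the best solution found in time
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print('total tiredness: %d' % solver.ObjectiveValue())
    for slot_idx in range(num_matches):
        print('slot %d: match %d' % (slot_idx, solver.Value(matchnums[slot_idx])))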

As an extra bonus, you get incrementally better solutions as you go. These problems are theoretically very hard (in fact, I have let it run for a week now, just for fun, and it still hasn't found an optimal solution), and in practice you just take some intermediate solution that is “good enough”. There are always constraints that you don't bother adding to the program anyway, so some eyeballing is involved, but it still feels like a fairer process than trying to nudge the schedule by hand.

We had many more objectives, some of them contradictory (e.g., games between more closely seeded opponents are more “exciting”, and should be put last—but they should also be put on the stream, so do you put them early on the stream field or late on the non-stream field?). It's hard to weigh all the factors against each other, but in the end, I think we ended up with something pretty nice. Every team gets to play two or three times (out of five) on the stream, only one team needs to be “tired” twice (and I checked; if you ask for a hard maximum of once for every team, it comes back pretty fast as infeasible), many of the tight matches are scheduled near the end… and most importantly, we don't have to play the first matches while I'm still debugging the stream. :-)
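That “tired at most once” experiment is easy to express, by the way; a sketch, assuming tired_by_team groups the tiredness booleans from earlier by team (a name I made up; for our instance this comes back infeasible):

# Cap each team's tiredness at one across the whole day.
for team_idx in range(num_teams):
    model.Add(sum(tired_by_team[team_idx]) <= 1)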

You can see the final schedule here. Good luck to everyone, and consider using a SAT solver next time you have a thorny scheduling problem!

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson
