Planet Debian
Montreal's Debian & Stuff - November 2018

Thu, 01/11/2018 - 5:00am

November's wet, My socks are too, Away from keyboard; still on the net, Let's fix /usr/bin/$foo.

November can be a hard month in the Northern Hemisphere. It tends to be dark, rainy and cold. Montreal sure has been dark, rainy and cold lately.

That's why you should join us at our next Debian & Stuff later this month. Come by and work on Debian-related stuff - or not! Hanging out and chatting with folks is also perfectly fine. As always, everyone's welcome.

The date hasn't been decided yet, so be sure to fill out this poll before November 10th. This time we'll be hanging out at Koumbit.

What else can I say? If not for the good company, then for the bad poutine from the potato shack next door or the nice craft beer from the very hipster beer shop a little bit further down the street, you should drop by to keep November from creeping in too far.

Louis-Philippe Véronneau Louis-Philippe Véronneau

Review: In Pursuit of the Traveling Salesman

Thu, 01/11/2018 - 4:25am

Review: In Pursuit of the Traveling Salesman, by William J. Cook

Publisher: Princeton University Press
Copyright: 2012
ISBN: 0-691-15270-5
Format: Kindle
Pages: 272

In Pursuit of the Traveling Salesman is a book-length examination of the traveling salesman problem (TSP) in computer science, written by one of the foremost mathematicians working on solutions to the TSP. Cook is Professor of Applied Mathematics and Statistics at Johns Hopkins University and is one of the authors of the Concorde TSP Solver.

First, a brief summary of the TSP for readers without a CS background. While there are numerous variations, the traditional problem is this: given as input a list of coordinates on a two-dimensional map representing cities, construct a minimum-length path that visits each city exactly once and then returns to the starting city. It's famous in computer science in part because it's easy to explain and visualize but still NP-hard, which means that not only do we not know of a way to exactly solve this problem in a reasonable amount of time for large numbers of cities, but also that a polynomial-time solution to the TSP would provide a solution to a vast number of other problems. (For those familiar with computability theory, the classic TSP is not NP-complete because it's not a decision problem and because of some issues with Euclidean distances, but when stated as a graph problem and converted into a decision problem by, for example, instead asking if there is a solution with length less than n, it is NP-complete.)
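To see concretely why exact solutions stall, here is a minimal brute-force solver (my own illustrative sketch, not from the book): it tries every possible tour from a fixed starting city, which is only feasible for a handful of cities because the number of tours grows factorially.

```python
from itertools import permutations
from math import dist, inf  # math.dist requires Python 3.8+

def shortest_tour(cities):
    """Exact TSP by exhaustive search: try all (n-1)! tours from a fixed
    starting city and keep the shortest. Feasible only for tiny n."""
    start, rest = cities[0], cities[1:]
    best_len, best_tour = inf, None
    for perm in permutations(rest):
        tour = [start, *perm, start]  # close the loop back to the start
        length = sum(dist(a, b) for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour
```

On the four corners of a unit square, the optimal tour has length 4; each added city multiplies the number of candidate tours by roughly n, which is the combinatorial explosion the book's approximation methods exist to avoid.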

This is one of those books where the quality of the book may not matter as much as its simple existence. If you're curious about the details of the traveling salesman problem specifically, but don't want to read a lot of mathematics and computer science papers, algorithm textbooks, or books on graph theory, this book is one of your few options. Thankfully, it's also fairly well-written. Cook provides a history of the problem, a set of motivating problems (the TSP doesn't come up as much and isn't as critical as some NP-complete problems, but optimal tours are still more common than one might think), and even a one-chapter tour of the TSP in art. The bulk of the book, though, is devoted to approximation methods, presented in roughly chronological order of development.

Given that the TSP is NP-hard, we obviously don't know a good exact solution, but I admit I was a bit disappointed that Cook spent only one chapter exploring the exact solutions and explaining to the reader what makes the problem difficult. Late in the book, he does describe the Held-Karp dynamic programming algorithm that gets the work required for an exact solution down to exponential in n, provides a basic introduction to complexity theory, and explains that the TSP is NP-complete by reduction from the Hamiltonian path problem, but doesn't show the reduction of 3SAT to Hamiltonian paths. Since my personal interest ran a bit more towards theory and less towards practical approximations, I would have appreciated a bit more discussion of the underlying structure of the problem and why it's algorithmically hard. (I did appreciate the explanation of why it's not clear whether the general Euclidean TSP is even in NP due to problems with square roots, though.)
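For the curious, the Held-Karp idea can be sketched in a few lines (my own illustration; the variable names are mine, not Cook's). It trades factorial search for "merely" exponential work, O(2^n · n^2), by memoizing the best path to each city over each subset of cities:

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP via Held-Karp dynamic programming.

    dist is an n x n distance matrix; returns the length of the shortest
    tour that starts and ends at city 0 and visits every city once.
    """
    n = len(dist)
    # C[(S, j)]: shortest path starting at city 0, visiting every city in
    # frozenset S exactly once, and ending at city j (j must be in S).
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j]
                                for k in subset if k != j)
    everything = frozenset(range(1, n))
    # Close the tour by returning to city 0.
    return min(C[(everything, j)] + dist[j][0] for j in range(1, n))
```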

That said, I suppose there isn't as much to talk about in exact solutions (the best one we know dates to 1962) and much more to talk about in approximations, which is where Cook has personally spent his time. That's the topic of most of this book, and includes a solid introduction to the basic concept of linear programming (a better one than I ever got in school) and some of its other applications, as well as other techniques (cutting planes, branch-and-bound, and others). The math gets a bit thick here, and Cook skips over a lot of the details to try to keep the book suitable for a general audience, so I can't say I followed all of it, but it certainly satisfied my curiosity about practical approaches to the TSP. (It also made me want to read more about linear programming.)

If you're looking for a book like this, you probably know that already, and I can reassure you that it delivers what it promises and is well-written and approachable. If you aren't already curious about a brief history of practical algorithms for one specific problem, I don't think this book is sufficiently compelling to be worth seeking out anyway. This is not a general popularization of interesting algorithms (see Algorithms to Live By if you're looking for that), nor (despite Cook's efforts) is it particularly approachable if this is your first deep look at computer algorithms. It's a niche book that delivers on its promise, but probably won't convince you the topic is interesting if you don't see the appeal.

Rating: 7 out of 10

Russ Allbery Eagle's Path

Debian LTS work, October 2018

Wed, 31/10/2018 - 11:26pm

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 4 hours from September. I worked all 19 hours.

I released security updates for the linux (DLA 1529-1) and linux-4.9 (DLA 1531-1) packages. I prepared and released another stable update for Linux 3.16 (3.16.60), but have not yet included this in a Debian upload. I also released a security update for libssh (DLA 1548-1).

Ben Hutchings Better living through software

RHL'19 St-Cergue, Switzerland, 25-27 January 2019

Wed, 31/10/2018 - 10:06pm

(translated from original French version)

The Rencontres Hivernales du Libre (RHL) (Winter Meeting of Freedom) takes place 25-27 January 2019 at St-Cergue, inviting the free software community to come and share workshops, great meals and good times.

This year, we celebrate the 5th edition with the theme «Exploit».

Please think creatively and submit proposals exploring this theme: lectures, workshops, performances and other activities are all welcome.

RHL'19 is situated directly at the base of some family-friendly ski pistes suitable for beginners and more adventurous skiers. It is also a great location for alpine walking trails.

Why, who?

RHL'19 brings together the forces of freedom in the Leman basin, Romandy, neighbouring France and further afield (there is an excellent train connection from Geneva airport). Hackers and activists come together to share a relaxing weekend and discover new things with free technology and software.

If you have a project to present (in 5 minutes, an hour or another format) or activities to share with other geeks, please send an email or submit it through the form.

If you have any specific venue requirements please contact the team.

You can find detailed information on the event web site.

Please ask if you need help finding accommodation or any other advice planning your trip to the region.

Daniel.Pocock - debian

Free software activities in October 2018

Wed, 31/10/2018 - 4:47pm

Here is my monthly update covering what I have been doing in the free software world during October 2018 (previous month):


  • My activities as the current Debian Project Leader are covered in my monthly "Bits from the DPL" email to the debian-devel-announce mailing list.

  • I created GitHub-esque ribbons to display on Salsa-hosted websites. (Salsa is the collaborative development server for Debian and the replacement for the now-deprecated Alioth service.)

  • Started a highly work-in-progress "Debbugs Enhancement Suite" Chrome browser extension to enhance various parts of the web interface.

  • Even more hacking on the Lintian static analysis tool for Debian packages:

    • New features:

      • Warn about packages that use PIUPARTS_* in maintainer scripts. (#912040)
      • Check for packages that parse /etc/passwd in maintainer scripts. (#911157)
      • Emit a warning for packages that do not specify Build-Depends-Package in symbol files. (#911451)
      • Check for non-Python files in top-level Python module directories. [...]
      • Check packages missing versioned dependencies on init-system-helpers. (#910594)
      • Detect calls to update-inetd(1) that use --group without --add, etc. (#909511)
      • Check for packages that encode a Python version number in their source package name. [...]
    • Bug fixes:

    • Misc:

      • Also show the maintainer name on the tag-specific reporting HTML. [...]
      • Tidy a number of references regarding the debhelper-compat virtual package. [...]
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month:

  • I attended the Tandon School of Engineering (part of New York University) to speak and work with students from the Application Security course on the topic of reproducible builds.

  • Wrote and forwarded a patch for Fontconfig to ensure the cache filenames are deterministic. [...]

  • I sent two previously-authored patches for GNU mtools to ensure the Debian Installer images could become reproducible. (1 & 2)

  • Submitted 11 Debian patches to fix reproducibility issues in fast5, libhandy, lmfit-py, mp3fs, opari2, pjproject, radon, sword, syndie, wit & zsh-antigen. I also submitted an upstream pull request for python-changelog.

  • Made a large number of changes to our website, including adding step-by-step instructions and screenshots on how to sign up to our project on Salsa and migrating the TimestampsProposal page on the Debian Wiki to our website.

  • Fixed an issue in disorderfs — our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues — where touch -m and touch -a were not working as expected (#911281). In addition, ensured that a failing XFail test is itself treated as a failure [...].

  • Made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues:

    • Add support for comparing OCaml files via ocamlobjinfo. (#910542)

    • Add support for comparing PDF metadata using PyPDF2. (#911446)

    • Support gnumeric 1.12.43. [...]

    • Use str.startswith(...) over str.index(...) == 0 in the Macho comparator to prevent tracebacks if text cannot be found on the line. (#910540).

    • Add a note on how to regenerate debian/tests/control, and regenerate it with no material changes other than the regeneration comment itself. (1, 2)

    • Prevent test failures when running under stretch-backports by checking the OCaml version number. (#911846)

    • I also added a Salsa ribbon to the website. [...]

  • Categorised a huge number of packages and issues in the Reproducible Builds "notes" repository and kept up to date [...].

  • Worked on publishing our weekly reports. (#180, #181, #182 & #183)

  • Lastly, I fixed an issue in the Jenkins-based testing framework that powers our continuous testing setup, suppressing some warnings from the cryptsetup initramfs hook which were causing some builds to be marked as "unstable". [...]


Debian bugs & patches filed
  • debbugs: Correct "favicon" location in <link/> HTML header. (#912186)

  • ikiwiki: "po" plugin can insert raw file contents with [[!inline]] directives. (#911356)

  • kitty: Please update homepage. (#911848)

  • pipenv: Bundles a large number of third-party libraries. (#910107)

  • mailman: Please include List-Id header on confirmation mails. (#910378)

  • fswatch: Clarify Files-Excluded entries. (#910330)

  • fuse3: Please obey nocheck build profile. (#910029)

  • gau2grid: Please add a non-boilerplate long description. (#911532)

  • hiredis: Please backport to stretch-backports. (#911732)

  • Please remove unnecessary overrides in fuse3 (#910030), puppet-module-barbican (#910374), python-oslo.vmware (#910011) & python3-antlr3 (#910012).

  • python3-pypdf2: Python 3.x package ships non-functional Python 2.x examples. (#911649)

  • mtools: New upstream release. (#912285)

I also filed requests with the stable release managers to update lastpass-cli (#911767) and python-django (#910821).

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

  • Multiple "frontdesk" shifts, triaging upstream CVEs, liaising with the Security Team, etc.

  • Issued DLA 1528-1 to prevent a denial-of-service (DoS) vulnerability in strongswan, a virtual private network (VPN) client and server where verification of an RSA signature with a very short public key caused an integer underflow in a length check that resulted in a heap buffer overflow.

  • Issued DLA 1547-1 for the Apache PDFBox library to fix a potential DoS issue where a malicious file could have triggered an extremely long running computation when parsing the PDF page tree.

  • Issued DLA 1550-1 for src:drupal7 to close a remote code execution vulnerability and an external URL injection exploit in the Drupal web-based content management framework as part of Drupal's SA-CORE-2018-006 security release.

  • Issued ELA-49-1 for the Adplug sound library to fix a potential DoS attack due to a double-free vulnerability.

  • redis:

    • 5.0~rc5-2 — Use the Debian hiredis library now that #907259 has landed. (#907258)
    • 5.0.0-1 — New upstream release.
    • 5.0.0-2 — Update patch to sentinel.conf to ensure the correct runtime PID file location (#911407), listen on ::1 interfaces too for redis-sentinel to match redis-server, & run the new LOLWUT command in the autopkgtests.
  • python-django:

    • 1.11.16-1 — New upstream bugfix release.
    • 1.11.16-2 — Fix some broken README.txt symlinks. (#910120)
    • 1.11.16-3 — Default to supporting Spatialite 4.2. (#910240)
    • 2.1.2-1 — New upstream security release.
    • 2.1.2-2 — Default to supporting Spatialite 4.2. (#910240)
  • libfiu:

  • 0.96-5 — Apply patch from upstream to write atomically to avoid a parallel build failure. (#909843)

  • 0.97-1 — New upstream release.
  • 0.97-2 — Mangle return offset sizes for 64-bit variants to prevent build failures on 32-bit architectures. (#911733)

  • adminer (4.6.3-2) — Use continue 2 to avoid a switch/continue warning in PHP 7.3, thus preventing an autopkgtest regression. (#911825)

  • bfs (1.2.4-1) — New upstream release.

  • django-auto-one-to-one (3.1.1-1) — New upstream release.

  • lastpass-cli (1.3.1-5) — Add ca-certificates to Depends.

  • python-redis (2.10.6-5) — Fix debian/watch file.

  • python-daiquiri (1.5.0-1) — New upstream release.

I also sponsored uploads of elpy (1.25.0-1) and hiredis (0.14.0-1).

FTP Team

As a Debian FTP assistant I ACCEPTed 95 packages: barrier, cct, check-pgactivity, cloudkitty-dashboard, cmark-gfm, eclipse-emf, eclipse-jdt-core, eclipse-platform-team, eclipse-platform-ua, eclipse-platform-ui, eos-sdk, equinox-p2, fontcustom, fonts-fork-awesome, fswatch, fuse3, gau2grid, gitlab, glom, grapefruit, grub-cloud, gsequencer, haskell-base-compat-batteries, haskell-invariant, haskell-parsec-numbers, haskell-reinterpret-cast, haskell-resolv, haskell-shelly, haskell-skylighting-core, haskell-wcwidth, hollywood, intelhex, javapoet, libgpg-error, libjsoncpp, libnbcompat, lintian-brush, llvm-toolchain-snapshot, mando, mat2, mini-httpd-run, modsecurity, mtree-netbsd, neutron-tempest-plugin, ngspice, openstack-cluster-installer, pg-checksums, pg-cron, pg-dirtyread, pg-qualstats, pg-repack, pg-similarity, pg-stat-kcache, pgaudit, pgextwlist, pgfincore, pgl-ddl-deploy, pgmemcache, pgpool2, pgrouting, pgsql-ogr-fdw, pgstat, pipenv, postgresql-hll, postgresql-plproxy, postgresql-plsh, puppet-module-barbican, puppet-module-icann-quagga, puppet-module-icann-tea, puppet-module-rodjek-logrotate, pykwalify, pyocd, python-backports.csv, python-fastfunc, python-httptools, python-redmine, python-tld, python-yaswfp, python3-simpletal, r-cran-eaf, r-cran-emoa, r-cran-ggally, r-cran-irace, r-cran-parallelmap, r-cran-popepi, r-cran-pracma, r-cran-spp, radon, rust-semver-parser-0.7, syndie, unicycler, vitetris, volume-key, weston & zram-tools.

I additionally filed 14 RC bugs against packages that had potentially-incomplete debian/copyright files against fontcustom, fuse3, intelhex, libnbcompat, mat2, modsecurity, mtree-netbsd, puppet-module-barbican, python-redmine, r-cran-eaf, r-cran-emoa, r-cran-pracma, radon & syndie.

Chris Lamb lamby

SAT solvers for fun and fairness

Tue, 30/10/2018 - 10:55pm

Trøndisk 2018, the first round of the Norwegian ultimate series (the frisbee sport, not the fighting style) is coming up this weekend! Normally that would mean that I would blog something about all the new and exciting things we are doing with Nageru for the stream, but for now, I will just point out that the stream is on and will be live from 0945–1830 CET on Saturday (group stage) and 1020–1450 (playoffs) on Sunday.

Instead, I wanted to talk about a completely different but interesting subproblem we had to solve; how do you set up a good schedule for the group stages? There are twelve teams, pre-seeded and split into two groups (call them A0–A5 and B0–B5) that are to play round-robin, but there are only two fields—and only one of them is streamed. You want a setup that maximizes fairness in the sense that people get adequate rest between matches, and also more or less equal number of streamed games. Throw in that one normally wants the more exciting games last, and it starts to get really tricky to make something good by hand. Could we do it programmatically?

My first thought was that since this is all about the ordering, it sounded like a variant of the infamous travelling salesman problem. It's well-known that TSP is NP-hard (or NP-complete, but I won't bother with the details), but there are excellent heuristic implementations in practice. In particular, I had already used OR-Tools, Google's optimization toolkit, to solve TSP problems in the past; it contains a TSP solver that can deal with all sorts of extra details, like multiple agents to travel (in our case, multiple fields), subconstraints on ordering and so on. (OR-Tools probably doesn't contain the best TSP solver in the world—there are specialized packages that do even better—but it's much better than anything I could throw together myself.)

However, as I tried figuring out something, and couldn't quite get it to fit (there are so many extra nonlocal constraints), I saw that the OR-Tools documentation had a subsection on scheduling problems. It turns out this kind of scheduling can be represented as a so-called SAT (satisfiability) problem, and OR-Tools also has a SAT solver. (SAT, in its general forms, is also NP-hard, but again, there are great heuristics.) I chose the Python frontend, which probably wasn't the best idea in the world (it's poorly documented, and I do wonder when Python will take the step into the 90s and make spelling errors in variables into compile-time errors instead of throwing a runtime exception four hours into a calculation), but that's what the documentation used, and the backend is in C++ anyway, so speed doesn't matter.

The SAT solver works by declaring variables and various constraints between them, and then asking the machine to either come up with a solution that fits, or to prove that it's not possible. Let's have a look at some excerpts to get a feel for how it all works:

We know we have 15 rounds with two fields in play in each, and every field should host a match. So let's generate 30 such variables, each containing a match number (we use the convention that match 0, 2, 4, 6, etc. are on the stream field and 1, 3, 5, 7, etc. are played in parallel on the other field):

matchnums = []
for match_idx in range(num_matches):
    matchnums.append(model.NewIntVar(0, num_matches - 1, "matchnum%d" % (match_idx)))

So this is 30 variables, and each go from 0 to 29, inclusive. We start with a fairly obvious constraint; we can only play each match once, which means all of the match-number variables must take distinct values (an “all different” constraint).


The SAT solver might make this into a bunch of special constraints underneath, or it might not. We don't care; it's abstracted away for us.

Now, it's not enough to just find any ordering—after all, we want to find an ordering with some constraints. However, the constraints are rarely about the match numbers, but more about the teams that play in those matches. So we'll need some helper variables. For instance, it would be interesting to know which teams play in each match:

home_teams = []
away_teams = []
for match_idx in range(num_matches):
    home_teams.append(model.NewIntVar(0, num_teams - 1, "home_team_match%i" % (match_idx)))
    away_teams.append(model.NewIntVar(0, num_teams - 1, "away_team_match%i" % (match_idx)))
    model.AddElement(matchnums[match_idx], home_teams_for_match_num, home_teams[match_idx])
    model.AddElement(matchnums[match_idx], away_teams_for_match_num, away_teams[match_idx])

AddElement() here is simply an indexing operation; since there's no difference between home and away teams for us, we've just pregenerated all the matches as A0 vs. A1, A0 vs. A2, etc. up until A3 vs. A5 and A4 vs. A5, and then similarly for the other group. The “element” constraint makes sure that e.g. home_team_match0 = home_teams_for_match_num[matchnum0]. Note that even though I think of this as an assignment where the home team for match 0 follows logically from which match is being played as match 0, it is a constraint that goes both ways; the solver is free to do inference that way, or instead first pick the home team and then deal with the consequences for the match number. (E.g., if it picks A4 as the home team, the match number most certainly needs to be 14, which corresponds to A4–A5.)

We're not quite done with the helpers yet; we want to explode these variables into booleans:

home_team_in_match_x_is_y = [
    [model.NewBoolVar('home_team_in_match_%d_is_%d' % (match_idx, team_idx))
     for team_idx in range(num_teams)]
    for match_idx in range(num_matches)]
for match_idx in range(num_matches):
    model.AddMapDomain(home_teams[match_idx], home_team_in_match_x_is_y[match_idx])

and similarly for away team and match number.

So now we have a bunch of variables of the type “is the home team in match 6 A4 or not?”. Finally we can make some interesting constraints! For instance, we've decided already that the group finals (A0–A1 and B0–B1) should be the last two matches of the day, and on the stream field:

model.AddBoolOr([match_x_has_num_y[28][0], match_x_has_num_y[28][15]])
model.AddBoolOr([match_x_has_num_y[26][0], match_x_has_num_y[26][15]])

This is a hard constraint; we don't have a solution unless match 0 and match 15 are the last two (and we earlier said that they must be different).

We're going to need even more helper variables now. It's useful to know whether a team is playing at all in a given round; that's the case if they are the home or away team on either field:

plays_in_round = {}
for team_idx in range(num_teams):
    plays_in_round[team_idx] = {}
    for round_idx in range(num_rounds):
        plays_in_round[team_idx][round_idx] = model.NewBoolVar(
            'plays_in_round_t%d_r%d' % (team_idx, round_idx))
        model.AddMaxEquality(plays_in_round[team_idx][round_idx], [
            home_team_in_match_x_is_y[round_idx * 2 + 0][team_idx],
            home_team_in_match_x_is_y[round_idx * 2 + 1][team_idx],
            away_team_in_match_x_is_y[round_idx * 2 + 0][team_idx],
            away_team_in_match_x_is_y[round_idx * 2 + 1][team_idx]])

Now we can establish a few other very desirable properties; in particular, each team should never need to play two matches back-to-back:

for round_idx in range(num_rounds - 1):
    for team_idx in range(num_teams):
        model.AddBoolOr([plays_in_round[team_idx][round_idx].Not(),
                         plays_in_round[team_idx][round_idx + 1].Not()])

Note that there's nothing here that says the same team can't be assigned to play on both fields at the same time! However, this is taken care of by some constraints on the scheduling that I'm not showing for brevity (in particular, we established that each round must have exactly one game from group A and one from group B).

Now we're starting to get out of “hard constraint” territory and more into things that would be nice. For this, we need objectives. One such objective is what I call “tiredness”; playing matches nearly back-to-back (i.e., game - rest - game) should carry a penalty, and the solution should try to avoid it.

tired_matches = []
for round_idx in range(num_rounds - 2):
    for team_idx in range(num_teams):
        tired = model.NewBoolVar('team_%d_is_tired_in_round_%d' % (team_idx, round_idx))
        model.AddMinEquality(tired, [plays_in_round[team_idx][round_idx],
                                     plays_in_round[team_idx][round_idx + 2]])
        tired_matches.append(tired)
sum_tiredness = sum(tired_matches)

So here we have helper variables that are being set to the minimum (effectively a logical AND) of “do I play in round N” and “do I play in round N + 2”. Tiredness is simply the sum of those 0–1 variables, which we then ask the solver to minimize.


You may wonder how we went from a satisfiability problem to an optimization problem. Conceptually, however, this isn't so hard. Just ask the solver to find any solution, e.g. something with sum_tiredness 20. Then simply add a new constraint saying sum_tiredness <= 19 and ask for a re-solve (or continue). Eventually, the solver will either come back with a better solution (in which case you can tighten the constraint further), or the message that you've asked for something impossible, in which case you know you have the optimal solution. (I have no idea whether modern SAT solvers actually work this way internally, but again, conceptually it's simple.)
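That tighten-and-resolve loop can be sketched in plain Python, with a brute-force search standing in for the SAT backend (a toy illustration of the idea only, not the OR-Tools API):

```python
from itertools import product

def solve(constraints, n_vars):
    """Toy stand-in for a SAT solver: return the first 0/1 assignment
    satisfying every constraint, or None if none exists."""
    for assignment in product([0, 1], repeat=n_vars):
        if all(c(assignment) for c in constraints):
            return assignment
    return None

def minimize(objective, constraints, n_vars):
    """Tighten-and-resolve: find any solution, then repeatedly demand a
    strictly better objective value until the problem turns infeasible."""
    best = solve(constraints, n_vars)
    if best is None:
        return None
    while True:
        bound = objective(best)
        # Add the tightening constraint objective(a) < bound and re-solve.
        better = solve(constraints + [lambda a, b=bound: objective(a) < b],
                       n_vars)
        if better is None:
            return best  # no strictly better solution exists: optimal
        best = better

# Toy problem: at least two of three flags set; minimize total flags set.
best = minimize(sum, [lambda a: sum(a) >= 2], 3)
```

Each pass through the loop either improves the incumbent solution or proves it optimal, which is exactly the "incrementally better solutions as you go" behaviour described below.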

As an extra bonus, you do get incrementally better solutions as you go. These problems are theoretically very hard—in fact, I've let it run for fun for a week now, and it still hasn't found an optimal solution—and in practice, you just take some intermediate solution that is “good enough”. There are always constraints that you don't bother adding to the program anyway, so there's some eyeballing involved, but it still feels like a fairer process than trying to nudge it by hand.

We had many more objectives, some of them contradictory (e.g., games between more closely seeded opponents are more “exciting”, and should be put last—but they should also be put on the stream, so do you put them early on the stream field or late on the non-stream field?). It's hard to weigh all the factors against each other, but in the end, I think we ended up with something pretty nice. Every team gets to play two or three times (out of five) on the stream, only one team needs to be “tired” twice (and I checked; if you ask for a hard maximum of once for every team, it comes back pretty fast as infeasible), many of the tight matches are scheduled near the end… and most importantly, we don't have to play the first matches while I'm still debugging the stream. :-)

You can see the final schedule here. Good luck to everyone, and consider using a SAT solver next time you have a thorny scheduling problem!

Steinar H. Gunderson Steinar H. Gunderson

Enabling Wake-on-Lan with the N34 Mini PC

Tue, 30/10/2018 - 8:58pm

There is a room at the top of my house which was originally earmarked for storage (the loft is full of insulation rather than being a useful option). Then I remembered I still had my pico projector and it ended up as a cinema room as well. The pico projector needs really low light conditions with a long throw, so the fact the room only has a single small window is a plus.

I bought an “N34” mini PC to act as a media player - I already had a spare DVB-T2 stick to Freeview-enable things, and the Kodi box downstairs has all my DVDs stored on it for easy streaming. It’s a Celeron N3450 based box with 4GB of RAM and a 32GB internal eMMC (though I’m currently running off an SD card because that’s what I initially used to set it up and I haven’t bothered to copy it onto the internal device yet). My device came from Amazon and is branded “Kodlix” (whose website no longer works) but it appears to be the same thing as the Beelink AP34.

Getting Linux onto it turned out to be a hassle. GRUB does not want to play with the EFI BIOS; it can be operated sometimes if manually called from the EFI Shell, but it does not work as the default EFI image to load. Various forum posts recommended the use of rEFInd, which mostly works fine.

Other than that Debian Stretch worked without problems. I had to pull in a backports kernel in order to make the DVB-T2 stick work properly, but the hardware on the N34 itself was all supported out of the box.

The other issue was trying to get Wake-on-Lan to work. The room isn’t used day to day so I want to be able to tie various pieces together with home automation such that I can have everything off by default and a scene configured to set things up ready for use. The BIOS has an entry for Wake-on-Lan, ethtool reported Supports Wake-on: g which should mean MagicPacket wakeup was enabled, but no joy. Looking at /proc/acpi/wakeup gave:

/proc/acpi/wakeup contents:

Device  S-state    Status     Sysfs node
HDAS      S3     *disabled    pci:0000:00:0e.0
XHC       S3     *enabled     pci:0000:00:15.0
XDCI      S4     *disabled
BRCM      S0     *disabled
RP01      S4     *disabled
PXSX      S4     *disabled
RP02      S4     *disabled
PXSX      S4     *disabled
RP03      S4     *disabled    pci:0000:00:13.0
PXSX      S4     *disabled    pci:0000:01:00.0
RP04      S4     *disabled
PXSX      S4     *disabled
RP05      S4     *disabled
PXSX      S4     *disabled
RP06      S4     *disabled    pci:0000:00:13.3
PXSX      S4     *disabled    pci:0000:02:00.0
PWRK      S4     *enabled     platform:PNP0C0C:00

pci:0000:01:00.0 is the network card:

01:00.0 Ethernet controller [0200]: Realtek […] Ethernet Controller [10ec:8168] (rev 0c)

I need this configured to allow wakeups which apparently is done via sysfs these days:

echo enabled > /sys/bus/pci/devices/0000\:01\:00.0/power/wakeup

This has to be done every boot so I just tied it into /etc/network/interfaces.
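For example, a stanza along these lines would do it (a sketch only: the interface name enp1s0 and the use of ifupdown's post-up hook are assumptions; the PCI address is the one found above):

```
auto enp1s0
iface enp1s0 inet dhcp
    # Re-enable PCI wakeup for the NIC (0000:01:00.0) on every boot
    post-up echo enabled > /sys/bus/pci/devices/0000:01:00.0/power/wakeup
```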

All of this then enables Home Assistant to control the Kodi box:

Home Assistant Kodi WoL configuration:

wake_on_lan:

media_player:
  - platform: kodi
    name: Kodi (Cinema)
    host:
    port: 8000
    username: kodi
    password: !secret kodi_cinema_pass
    enable_websocket: false
    turn_on_action:
      service: wake_on_lan.send_magic_packet
      data:
        mac: 84:39:be:11:22:33
        broadcast_address:
    turn_off_action:
      service: media_player.kodi_call_method
      data:
        entity_id: media_player.kodi_cinema
        method: System.Shutdown

My Home Assistant container sits on a different subnet to the media box, and I found that the N34 wouldn’t respond to a Wake-on-Lan packet to the broadcast MAC address. So I’ve configured the broadcast_address for Home Assistant to be the actual IP of the media box, allowed UDP port 9 (discard) through on the firewall and statically nailed the ARP address of the media box on the router, so it transmits the packet with the correct destination MAC:

ip neigh change lladdr 84:39:be:11:22:33 nud permanent dev eth0

I’ve still got some other bits to glue together (like putting the pico projector on a Sonoff), but this gets me started on that process.

(And yes, the room is a bit cosier these days than when that photograph was taken.)

Jonathan McDowell Noodles' Emptiness

I was a podcast guest on The REPL

Pre, 26/10/2018 - 10:00md

Daniel Compton hosted me on his Clojure podcast, The REPL, where I talked about Debian, packaging Leiningen, and the Clojure ecosystem in Debian. It's got everything: spooky abandoned packages, anarchist collectives, software security policies, and Debian release cycles. Absolutely no shade was thrown at other distros.

Give it a listen:


Download: MP3

More Q&A

After the podcast was published, Ivan Sagalaev wrote me with a great question about how the different versions of Clojure in Ubuntu 18.04 work:

First of all, THANK YOU for making sudo apt install leiningen work! It's so much better and more consistent than sourcing bash scripts :-)

I have a quick question for you. After installing leiningen and clojure on Ubuntu 18.04 I see that lein repl starts with clojure 1.8.0, while the clojure package itself seems to be independent and is version 1.9.0. How is it possible? I frankly haven't even seen lein downloading its own clojure.jar...

I replied:

Leiningen is "ahead-of-time (AOT) compiled", which is a fancy way of saying that the Leiningen you download from Ubuntu is pre-built. This means it is already compiled to Java bytecode, which can be run directly by Java. I ship the binary Leiningen package as an "uberjar", which means all its dependencies are also included inside the Leiningen jar.

Leiningen depends on and is built with Clojure 1.8, so the Leiningen uberjar in Debian also depends on Clojure 1.8. The "clojure" package in 18.04 defaults to installing Clojure 1.9, but that can be installed simultaneously with the "clojure1.8" package that Leiningen depends on in order to build. You can change your default Clojure to 1.8 using alternatives.

When you launch lein repl, by default the Clojure 1.8 runtime that's compiled in is used. If you run lein repl in the root of a Clojure 1.9 project, Leiningen will download Clojure 1.9 from Clojars and launch a 1.9 repl. If you want to use the Clojure 1.9 shipped with Debian, you can change :local-repo to point at /usr/share/maven-repo, but be careful to also set :offline? to true so you don't try to install things into the system maven repo by accident.
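For instance, something along these lines in ~/.lein/profiles.clj (a sketch; the :user profile shape is standard Leiningen, but check the values against your setup):

```
;; ~/.lein/profiles.clj (sketch)
{:user {:local-repo "/usr/share/maven-repo"
        :offline? true}}
```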

Elana Hashman


Pre, 26/10/2018 - 11:46pd

I don't do much Debian stuff these days (too busy), but I have adopted some packages over the last year. This has happened when a package I rely on was lacking person-power and at risk of being removed from Debian. I thought I should write about some of them. First up, smartmontools.

smartmontools let you query the "Self-Monitoring, Analysis and Reporting Technology" (S.M.A.R.T.) information in your computer's storage devices (hard discs and solid-state equivalents), as well as issue S.M.A.R.T. commands to them, such as instructing them to execute self-tests.

I rescued smartmontools for the Debian release in 2015, but I thought that was a one-off. Since I've just done it again, I'm now considering it something I (co-)maintain¹.

S.M.A.R.T. can, in theory, give you advance warning about a disc that is "not well" and could stop working. In practice, it isn't very good at predicting disc failures² — which might explain why the package hasn't received more attention — but it can still be useful: last year it helped me to detect an issue with excessive drive-head parking I was experiencing on one of my drives.

  1. Personally I think the notion of single-maintainers for packages is old and destructive, and I think it should be the exception rather than the norm. Unfortunately it's still baked into a lot of our processes, policies and tools. ↩

jmtd Jonathan Dowland's Weblog

Migrated website from ikiwiki to Hugo

Enj, 25/10/2018 - 8:42md

So, I’ve been using ikiwiki for my website since 2011. At the time, I was hosting the website on a tiny hosting package included in a DSL contract - nothing dynamic possible, so a static site generator seemed like a good idea. ikiwiki was a good social fit at the time, as it was packaged in Debian and developed by a Debian Developer.

Today, I finished converting it to Hugo.


I did not really have a huge problem with ikiwiki, but I recently converted my blog from WordPress to Hugo, and it seemed to make sense to use one technology for both, especially since I don’t update the website very often and forget ikiwiki’s special things.

One thing that was somewhat annoying is that I built a custom ikiwiki plugin for the menu in my template, so I had to clone its repository into ~/.ikiwiki every time, rather than having a self-contained website. Well, it was a submodule of my dotfiles repo.

Another thing was that ikiwiki had a lot of git integration, and when you build your site it tries to push things to git repositories and all sorts of weird stuff – Hugo just does one thing: It builds your page.

One thing that Hugo does a lot better than ikiwiki is the built-in server, which allows you to run `hugo server` and get a local http URL you can open in the browser, with live-reload as you save files. Super convenient to check changes (and of course, for writing this blog post)!

Also, in general, Hugo feels a lot more modern. ikiwiki is from 2006, Hugo is from 2013. Especially recent Hugo versions added quite a few features for asset management.

  • Fingerprinting of assets like css (inserting hash into filename) - ikiwiki just contains its style in style.css (and your templates in other statically named files), so if you switch theming details, you could break things because the CSS the browser has cached does not match the CSS the page expects.
  • Asset minification - Hugo can minify CSS and JavaScript for you. This means browsers have to fetch less data.
  • Asset concatenation - Hugo can concatenate CSS and JavaScript. This allows you to serve only one file per type, reducing the number of round trips a client has to make.

There’s also proper theming support, so you can easily clone a theme into the themes/ directory, or add it as a submodule like I do for my blog. But I don’t use it for the website yet.

Oh, and Hugo automatically generates sitemap.xml files for your website, teaching search engines which pages exist and when they have been modified.

I also like that it’s written in Go rather than Perl, but I think that’s just another aspect of it being more modern. Gotta keep up with the world!

Basic conversion

The first part of the conversion was to split the repository of the website: ikiwiki puts templates into a templates/ subdirectory of the repository and mixes all other content. Hugo on the other hand splits things into content/ (where pages go), layouts/ (page templates), and static/ (other files).

The second part was to inject the frontmatter into the markdown files. See, ikiwiki uses shortcuts like this to set up the title, and gets its dates from git:

[[!meta title="My page title"]]

on the other hand, Hugo uses frontmatter - some YAML at the beginning of the markdown, and specifies the creation date in there:

---
title: "My page title"
date: Thu, 18 Oct 2018 21:36:18 +0200
---

You can also have lastmod in there when modifying it, but I set enableGitInfo = true in config.toml so Hugo picks up the mtime from the git repo.

I wrote a small script to automate those steps, but it was obviously not perfect (also, it inserted lastmod, which it should not have).
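The core of such a conversion can be sketched like this (my illustration, not the actual script; in practice the date would come from `git log` rather than being passed in):

```python
import re

def to_frontmatter(text, date, fallback_title="untitled"):
    # Pull the title out of an ikiwiki [[!meta title="..."]] directive
    m = re.search(r'\[\[!meta title="([^"]+)"\]\]\s*', text)
    title = m.group(1) if m else fallback_title
    if m:
        # Drop the directive; Hugo gets the title via frontmatter instead
        text = text[:m.start()] + text[m.end():]
    return f'---\ntitle: "{title}"\ndate: {date}\n---\n{text}'

print(to_frontmatter('[[!meta title="My page title"]]\nSome body text.',
                     "Thu, 18 Oct 2018 21:36:18 +0200"))
```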

One thing it took me some time to figure out was that index.mdown needs to become _index.mdown in the content/ directory of Hugo, otherwise no pages below it are rendered - not entirely obvious.

The theme

Converting the template was surprisingly easy, it was just a matter of replacing <TMPL_VAR BASEURL> and friends with {{ .Site.BaseURL }} and friends - the names are basically the same, just sometimes there’s .Site at the front of it.

Then I had to take care of the menu generation loop. I had my bootmenu plugin for ikiwiki which allowed me to generate menus from the configuration file. The template for it looked like this:


I converted this to:

{{ $currentPage := . }}
{{ range .Site.Menus.main }}
  <li class="{{ if $currentPage.IsMenuCurrent "main" . }}active{{ end }}">
    <a href="{{ .URL }}">
      {{ .Pre | safeHTML }}
      <span>{{ .Name }}</span>
    </a>
    {{ .Post }}
  </li>
{{ end }}

this allowed me to configure my menu in config.toml like this:

[menu]

[[menu.main]]
name = "dh-autoreconf"
url = "/projects/dh-autoreconf"
weight = -110

I can also specify pre and post parts and a right menu, and I use pre and post in the right menu to render a few icons before and after items, for example:

[[menu.right]]
pre = "<i class='fab fa-mastodon'></i>"
post = "<i class='fas fa-external-link-alt'></i>"
url = ""
name = "Mastodon"
weight = -70

Setting class="active" on the menu item does not seem to work yet, though; I think I need to find out the right code for that…

Fixing up the details

Once I was done with those steps, the next stage was to convert ikiwiki shortcodes to something Hugo understands. This took four parts:

The first part was converting tables. In ikiwiki, tables look like this:

[[!table format=dsv data="""
Status|License|Language|Reference
Active|GPL-3+|Java|[github](
"""]]

The generated HTML table had the class="table" set, which the bootstrap framework needs to render a nice table. Converting that to a straightforward markdown hugo table did not work: Hugo did not add the class, so I had to convert pages with tables in them to the mmark variant of markdown, which allows classes to be set like this {.table}, so the end result then looked like this:

{.table}
Status|License|Language|Reference
------|-------|--------|---------
Active|GPL-3+|Java|[github](

I’ll be able to get rid of this in the future by using the bootstrap sources and then having table inherit .table properties, but this requires Sass or Less, and I only have the CSS at the moment, so using mmark was slightly easier.

The second part was converting ikiwiki links like [[MyPage]] and [[my title|MyPage]] to Markdown links. This was quite easy: the first one became [MyPage](MyPage) and the second one [my title](MyPage).
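The simple cases of both link forms can be handled with a pair of regular expressions. A sketch in Python (the helper name is mine, not part of the actual conversion script):

```python
import re

def ikiwiki_links_to_markdown(text):
    # [[my title|MyPage]] -> [my title](MyPage)
    text = re.sub(r'\[\[([^\]|]+)\|([^\]]+)\]\]', r'[\1](\2)', text)
    # [[MyPage]] -> [MyPage](MyPage)
    text = re.sub(r'\[\[([^\]|]+)\]\]', r'[\1](\1)', text)
    return text

print(ikiwiki_links_to_markdown("see [[MyPage]] and [[my title|MyPage]]"))
# -> see [MyPage](MyPage) and [my title](MyPage)
```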

The third part was converting custom shortcuts: I had [[!lp <number>]] to generate a link LP: #<number> to the corresponding launchpad bug, and [[!Closes <number>]] to generate Closes: #<number> links to the Debian bug tracker. I converted those to normal markdown links, but I could have converted them to Hugo shortcodes. But meh.

The fourth part was about converting some directory indexes I had. For example, [[!map pages="projects/dir2ogg/0.12/* and ! projects/dir2ogg/0.12/*/*"]] generated a list of all files in projects/dir2ogg/0.12. There was a very useful shortcode for that posted on the Hugo documentation, I used a variant of it and then converted pages like this to {{< directoryindex path="/static/projects/dir2ogg/0.12" pathURL="/projects/dir2ogg/0.12" >}}. As a bonus, the new directory index also generates SHA256 hashes for all files!

Further work

The website is using an old version of Bootstrap, and the theme is not split out yet. I’m not sure if I want to keep a Bootstrap theme for the website, seeing as the blog theme is Bulma-based - it would be easier to have both use Bulma.

I also might want to update both the website and the blog by pushing to GitHub and then using CI to build and push them. That would allow me to write blog posts when I don’t have my laptop with me. But I’m not sure; I might lose control if there’s a breach at Travis.

Julian Andres Klode Posts on Blog of Julian Andres Klode

MQTT enabling my doorbell

Enj, 25/10/2018 - 8:05md

One of the things about my home automation journey is that I don’t always start out with a firm justification for tying something into my setup. There’s not really any additional gain at present from my living room lights being remotely controllable. When it came to tying the doorbell into my setup I had a clear purpose in mind: I often can’t hear it from my study.

The existing device was a Byron BY101. This consists of a 433MHz bell-push and a corresponding receiver that plugs into a normal mains socket for power. I tried moving the receiver to a more central location, but then had issues with it not reliably activating when the button was pushed. I could have attempted the inverse of Colin’s approach and tried to tie in a wired setup to the wireless receiver, but that would have been too simple.

I first attempted to watch for the doorbell via a basic 433MHz receiver. It seems to use a simple 16 bit identifier followed by 3 bits indicating which tone to use (only 4 are supported by mine; I don’t know if other models support more). The on/off timings are roughly 1040ms/540ms vs 450ms/950ms. I found I could reliably trigger the doorbell using these details, but I’ve not had a lot of luck with reliable 433MHz reception on microcontrollers; generally I use PulseView in conjunction with a basic Cypress FX2 logic analyser to capture from a 433MHz receiver and work out timings. Plus I needed a receiver that could be placed close enough to the bell-push to reliably pick it up.
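To illustrate the scheme as described (a 16-bit identifier plus 3 tone bits, with two distinct on/off timing pairs), it could be modelled like so; the bit ordering and the mapping of timings to bit values here are my assumptions, not a verified protocol description:

```python
# Assumed mapping of bit values to (on, off) durations, using the rough
# figures from the text; the real encoding may differ.
ONE = (1040, 540)
ZERO = (450, 950)

def frame(ident, tone):
    """16-bit identifier followed by 3 bits selecting the tone."""
    bits = [(ident >> i) & 1 for i in range(15, -1, -1)]
    bits += [(tone >> i) & 1 for i in range(2, -1, -1)]
    return [ONE if b else ZERO for b in bits]

timings = frame(0xBEEF, 2)
print(len(timings))  # 19 on/off pairs
```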

Of course I already had a receiver that could decode the appropriate codes - the doorbell! Taking it apart revealed a PSU board and separate receiver/bell board. The receiver uses a PT4318-S with a potted chip I assume is the microcontroller. There was an HT24LC02 I2C EEPROM on the bottom of the receiver board; monitoring it with my BusPirate indicated that the 16 bit ID code was stored in address 0x20. Sadly it looked like the EEPROM was only used for data storage; only a handful of values were read on power on.

Additionally there were various test points on the board; probing while pressing the bell-push led to the discovery of a test pad that went to 1.8V when a signal was detected. Perfect. I employed an ESP8266¹ in the form of an ESP-07, sending out an MQTT message containing “ON” or “OFF” as appropriate when the state changed. I had a DS18B20 lying around so I added that for some temperature monitoring too; it reads a little higher due to being inside the case, but not significantly so.

All of this ended up placed in the bedroom, which conveniently had a socket almost directly above the bell-push. Tying it into Home Assistant was easy:

binary_sensor:
  - platform: mqtt
    name: Doorbell
    state_topic: "doorbell/master-bedroom/button"

I then needed something to alert me when the doorbell was pushed. Long term perhaps I’ll add some sounders around the house hooked in via MQTT, and there’s a Kodi notifier available, but that’s only helpful when the TV is on. I ended up employing my Alexa via Notify Me:

notify:
  - name: alexa
    platform: rest
    message_param_name: notification
    resource:
    data:
      accessCode: !secret notifyme_key

and then an automation in automations.yaml:

- id: alexa_doorbell
  alias: Notify Alexa when the doorbell is pushed
  trigger:
    - platform: state
      entity_id: binary_sensor.doorbell
      to: 'on'
  action:
    - service: notify.alexa
      data_template:
        message: "Doorbell rang at {{ states('sensor.time') }}"

How well does this work? Better than expected! A couple of days after installing everything we were having lunch when Alexa chimed; the door had been closed and music was playing, so we hadn’t heard the doorbell. It turned out to be an unexpected delivery which we’d otherwise have missed. It also allows us to see when someone has rung the doorbell while we were in - useful for spotting missed deliveries etc.

(Full disclosure: When initially probing out the mains doorbell for active signals I did so while it was plugged into the mains. My ‘scope is not fully isolated it seems and at one point I managed to trip the breaker on the mains circuit and blow the ringer part of the doorbell. Ooops. I ended up ordering an identical replacement (avoiding the need to replace the bell-push) and subsequently was able to re-use the ‘broken’ device as the ESP8266 receiver - the receiving part was still working, just not making a noise. The new receiver ended up in the living room, so the doorbell still sounds normally.)

  1. I have a basic ESP8266 MQTT framework I’ve been using for a bunch of devices based off Tuan PM’s work. I’ll put it up at some point. 

Jonathan McDowell Noodles' Emptiness

Review: Move Fast and Break Things

Enj, 25/10/2018 - 6:52pd

Review: Move Fast and Break Things, by Jonathan Taplin

Publisher: Little, Brown and Company
Copyright: April 2017
Printing: 2018
ISBN: 0-316-27574-3
Format: Kindle
Pages: 288

Disclaimer: I currently work for Dropbox, a Silicon Valley tech company. While it's not one of the companies that Taplin singles out in this book, I'm sure he'd consider it part of the problem. I think my reactions to this book are driven more by a long association with the free software movement and its take on copyright issues, and from reading a lot of persuasive work both good and bad, but I'm not a disinterested party.

Taplin is very angry about a lot of things that I'm also very angry about: the redefinition of monopoly to conveniently exclude the largest and most powerful modern companies, the ability of those companies to run roughshod over competitors in ways that simultaneously bring innovation and abusive market power, a toxic mix of libertarian and authoritarian politics deeply ingrained in the foundations of Silicon Valley companies, and a blithe disregard for the social effects of technology and for how to police the new communities that social media has created. This is a book-length rant about the dangers of monopoly domination of industries, politics, on-line communities, and the arts. And the central example of those dangers is the horrific and destructive power of pirating music on the Internet.

If you just felt a mental record-scratch and went "wait, what?", you're probably from a community closer to mine than Taplin's.

I'm going to be clear up-front: this is a bad book. I'm not going to recommend that you read it; quite the contrary, I recommend actively avoiding it. It's poorly written, poorly argued, facile, and unfair, and I say that with a great deal of frustration because I agree with about 80% of its core message. This is the sort of book from an erstwhile ally that makes me cringe: it's a significant supply of straw men, weak arguments, bad-faith arguments, and motivated reasoning that make the case for economic reform so much harder. There are good arguments against capitalism in the form in which we're practicing it. Taplin makes only some of them, and makes them badly.

Despite that, I read the entire book, and I'm still somewhat glad that I did, because it provides a fascinating look at the way unexamined premises lead people to far different conclusions. It also provides a more visceral feel for how people, like Taplin, who are deeply and personally invested in older ways of doing business, reach for a sort of reflexive conservatism when pushing back against the obvious abuses of new forms of inequality and market abuse. I found a reminder here to take a look at my own knee-jerk reactions and think about places where I may be reaching for backward-looking rather than forward-looking solutions.

This is a review, though, so before I get lost in introspection, I should explain why I think so poorly of this book as an argument.

I suspect most people who read enough partisan opinion essays on-line will notice the primary flaw in Move Fast and Break Things as early as I did: this is the kind of book that's full of carefully-chosen quotes designed to make the person being quoted look bad. You'll get a tour of the most famous ill-chosen phrases, expressions of greed, and cherry-picked bits of naked capitalism from the typical suspects: Google, Facebook, and Amazon founders, other Silicon Valley venture capitalists and CEOs, and of course Peter Thiel. Now, Thiel is an odious reactionary and aspiring fascist who yearns for the days when he could live as an unchallenged medieval lord. There's almost no quote you could cherry-pick from him that would make him look worse than he actually is, so I'll give Taplin a free pass on that one. But for the rest, Taplin is not even attempting to understand or engage with the arguments that his opponents are making. He's just finding the most damning statements, the ones that look the ugliest out of context, and parading them before the reader in an attempt to provoke an emotional reaction.

There is a long-standing principle of argument that you should engage with your opponents' position in its strongest form. If you cannot understand the merits and strengths of the opposing position and restate them well enough that an advocate of the opposing view would accept your summary as fair, you aren't prepared to argue the point. Taplin does not even come close to doing that. In the debate over the new Internet monopolies and monopsonies, one central conflict is between the distorting and dangerous concentration of power and the vast and very real improvements they've brought for consumers. I don't like Amazon as a company, and yet I read this book on a Kindle because their products are excellent and the consumer experience of their store is first-rate. I don't like Google as a company, but their search engine is by far the best available. One can quite legitimately take a wide range of political, economic, and ethical positions on that conflict, but one has to acknowledge there is a real conflict. Taplin is not particularly interested in doing that.

Similarly, and returning to the double-take moment with which I began this review, Taplin is startlingly unwilling to examine the flaws of the previous economic systems that he's defending. He writes a paean to the wonderful world of mutual benefit, artistic support, and economic fairness of record labels! Admittedly, I was not deeply enmeshed in that industry the way that he was, and he restrains his praise primarily to the 1960s and 1970s, so it's possible this isn't as mind-boggling as it sounds on first presentation. But, even apart from the numerous stories of artists cheated out of the profits of their work by the music industry long before Silicon Valley entered the picture, Taplin only grudgingly recognizes that the merits he sees in that industry were born of a specific moment in time, a specific pattern of demand, supply, sales method, and cultural moment, and that this world would not have lasted regardless of Napster or YouTube.

In other words, Taplin does the equivalent of arguing against Uber by claiming the taxi industry was a model of efficiency, economic fairness, and free competition. There are many persuasive arguments against new exploitative business practices. This is not one of them.

More tellingly to me, there is zero acknowledgment in this book that I can recall of one of the defining experiences of my generation and younger: the decision by the music and motion picture industries to fight on-line copying of their product by launching a vicious campaign of legal terrorism against teenagers and college students. Taplin's emotional appeals and quote cherry-picking falls on rather deaf ears when I vividly remember the RIAA and MPAA setting out to deliberately destroy people's lives in order to make an example of them, a level of social coercion that Google and Facebook have not yet stooped to, at least at that scale. Taplin is quite correct that his ideological opponents are scarily oblivious to some of the destruction they're wreaking on social and artistic communities, but he needs to come to terms with the fact that some of his allies are thugs.

This is where my community departs from Taplin's. I've been part of the free software community for decades, which includes a view of copyright that is neither the constrained economic model that Taplin advocates as a way to hopefully support artists, nor the corporate libertarian free-for-all from which Google draws its YouTube advertising profits. The free software community stands mostly opposed to both of those economic models, while pursuing the software equivalent of artist collectives. We have our own issues with creeping corporate control of our communities, and with the balance to strike between expanding the commons and empowering amoral companies like Google, Facebook, and Amazon to profit off of our work. Those fights play out in software licensing discussions routinely. But returning to a 1950s model of commercial music (which looks a lot like the 1980s model of commercial software) is clearly not possible, or even desirable if it were.

And that, apart from the poor argumentative technique and the tendency to engage with the weakest of his opponents' arguments, is the largest flaw I see in Taplin's book: he's invested in a binary fight between the economic world of his youth, which worked in ways that he considers fair, and a new economic world that is breaking the guarantees that he considers ethically important. He's not wrong about the problem, and I completely agree with him on the social benefit of putting artists in a more central position of influence in society. But he's not looking deeply at examples of artistic communities that have navigated this better than his own beloved music industry (book publishing, for example, which certainly has its problems with Amazon's monopsony power but is also in some ways stronger than it has ever been). And he's not looking at communities that are approaching the same problem from a different angle, such as free software. He's so caught up on what he sees as the fundamental unfairness of artists not being paid directly by each person consuming their work that he isn't stepping back to look at larger social goals and alternative ways they could be met.

I'm sure I'm making some of these same mistakes, in other places and in other ways. These problems are hard and some of the players truly are malevolent, so you cannot assume good will and good faith on all fronts. But there are good opposing arguments and simple binary analysis will fail.

Taplin, to give him credit, does try to provide some concrete solutions in the last chapter. He realizes that you cannot put the genie of easy digital copies back in the bottle, and tries to talk about alternate approaches that aren't awful (although they're things like micropayments and subscription services that are familiar ground for anyone familiar with this problem). I agree wholeheartedly with his arguments for returning to a pre-Reagan definition of monopoly power and stricter regulation of Internet advertising business. He might even be able to convince me that take-down-and-stay-down (the doctrine that material removed due to copyright complaints has to be kept off the same platform in the future) is a workable compromise... if he would also agree to fines, paid to the victim, of at least $50,000 per instance for every false complaint from a media company claiming copyright on material to which they have no rights. (Taplin seems entirely unaware of the malevolent abuses of copyright complaint systems by his beloved media industry.) As I said, I agree with about 80% of his positions.

But, sadly, this is not the book to use to convince anyone of those positions, or even the book to read for material in one's own debates. It would need more thoughtful engagement of the strongest of the arguments from new media and technology companies, a broader eye to allied fights, a deep look at the flaws in the capitalist system that made these monopoly abuses possible, and a willingness to look at the related abuses of Taplin's closest friends. Without those elements, I'm afraid this book isn't worth your time.

Rating: 3 out of 10

Russ Allbery Eagle's Path

Enj, 25/10/2018 - 2:00pd

The “properly quote eMail messages and on Usenet” documentation is hosted on a server that appears to not get too much care at the moment. I’ve dug out workable versions:

The original link, with its redirection, which contained the links to the translations into Dutch and English, unfortunately no longer works.

I’m asking everyone to please honour these guidelines when posting in Usenet and responding to eMail messages, as not doing so is an insult to all the (multiple, in the case of Usenet and mailing lists) readers / recipients of your messages. Even if you have to spend a little time trimming the quote, it’s much less than the time spent by all readers trying to figure out a TOFU (reply over fullquote) message.

I ask everyone to please stick to these guidelines when posting on Usenet and writing e-mails; not doing so is an affront to all the (in the case of Usenet and mailing lists, many) readers and recipients of your messages. Even if you have to spend a little time trimming the quote, it is still far less than the effort every single reader has to spend working out what an e-mail written as TOFU (text on top, full quote below) actually means.

May I ask everyone to write Usenet postings and e-mails according to these rules? Not doing so is unkind to all recipients and makes messages hard to read. Even if you need a little time to trim the quoted part, it is still less than the effort everyone spends trying to understand a TOFU (reply on top, full quote below) message.

Thorsten Glaser debian tag cloud

Salsa ribbons

Mër, 24/10/2018 - 4:55md

Salsa is the name of the collaborative development server for Debian and is the replacement for the now-deprecated Alioth service.

To make it easier to show the world that you use Salsa, I've created a number of GitHub-esque ribbons that you can overlay on your projects' sites by copying & pasting the appropriate snippet into your HTML.

For example:

You can find them, with instructions, here:

If you're not satisfied with one of the colours, the original source is available.

Chris Lamb lamby: Items or syndication on Planet Debian.

Freexian’s report about Debian Long Term Support, September 2018

Mër, 24/10/2018 - 12:13md

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September, about 227 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 11 hours (out of 10 hours allocated + 5 extra hours, thus keeping 4 extra hours for October).
  • Antoine Beaupré did 24 hours.
  • Ben Hutchings did 29 hours (out of 15 hours allocated + 18 extra hours, thus keeping 4 extra hours for October).
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort did not publish his report yet (he had 29.25 hours allocated).
  • Holger Levsen did 2.5 hours (out of 8 hours allocated + 14 extra hours, thus keeping 19.5 extra hours for October).
  • Hugo Lefeuvre did 10 hours.
  • Markus Koschany did 29.25 hours.
  • Mike Gabriel did 10 hours (out of 8 hours allocated + 2 extra hours).
  • Ola Lundqvist did 7 hours (out of 8 hours allocated + 11.5 remaining hours, but gave back 4.5 hours, thus keeping 8 extra hours for October).
  • Roberto C. Sanchez did 15 hours (out of 18 hours allocated + 12 extra hours, and gave back the 15 remaining hours).
  • Santiago Ruano Rincón did 4 hours (out of 20 hours allocated + 12 extra hours, thus keeping 28 extra hours for October).
  • Thorsten Alteholz did 29.25 hours.
Evolution of the situation

The number of sponsored hours decreased to 205 hours per month, we lost another small sponsor. Hopefully this trend will not continue. Time to subscribe your company if it’s not yet done!

The security tracker currently lists 30 packages with a known CVE and the dla-needed.txt file has 24 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.

No comment | Liked this article? Click here. | My blog is Flattr-enabled.

Raphaël Hertzog apt-get install debian-wizard

Idea for a Debian QA service: monitoring install size with dependencies

Mër, 24/10/2018 - 9:55pd

This is an idea. I don't have the time to work on it myself, but I thought I'd throw it out in case someone else finds it interesting.

When you install a Debian package, it pulls in its dependencies and recommended packages, and those pull in theirs. For simple cases, this is all fine, but sometimes there's surprises. Installing mutt to a base system pulls in libgpgme, which pulls in gnupg, which pulls in a pinentry package, which can pull in all of GNOME. Or at least people claim that.

It strikes me that it'd be cool for someone to implement a QA service for Debian that measures, for each package, how much installing it adds to the system. It should probably do this in various scenarios:

  • A base system, i.e., the output of debootstrap.
  • A build system, with build-essentian installed.
  • A base GNOME system, with gnome-core installed.
  • A full GNOME system, with gnome installed.
  • Similarly for KDE and each other desktop environment in Debian.

The service would do the installs regularly (daily?), and produce reports. It would also do alerts, such as notify the maintainers when installed size grows too large compared to installing it in stable, or a previous run in unstable. For example, if installing mutt suddenly installs 100 gigabytes more than yesterday, it's probably a good idea to alert interested parties.

Implementing this should be fairly easy, since the actual test is just running debootstrap, and possibly apt-get install. Some experimentation with configuration, caching, and eatmydata may be useful to gain speed. Possibly actual package installation can be skipped, and the whole thing could be implemented just by analysing package metadata.

Maybe it even exists, and I just don't know about it. That'd be cool, too.

Lars Wirzenius' blog englishfeed

Reproducible Builds: Weekly report #182

Mar, 23/10/2018 - 3:15md

Here’s what happened in the Reproducible Builds effort between Sunday October 14 and Saturday October 20 2018:

Another reminder that the Reproducible Builds summit will be taking place between 11th—13th December 2018 Paris at Mozilla’s offices. If you are interested in attending, please send an email to More details can be found on the corresponding event page of our website.

Packages reviewed and fixed, and bugs filed Test framework development

There were a large number of updates to our Jenkins-based testing framework that powers by Holger Levsen this month, including:


This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks

A visual basic server

Mar, 23/10/2018 - 11:01pd

So my previous post described a BASIC interpreter I'd written.

Before the previous release I decided to ensure that it was easy to embed, and that it was possible to extend the BASIC environment such that it could call functions implemented in golang.

One of the first things that came to mind was to allow a BASIC script to plot pixels in a PNG. So I made that possible by adding "PLOT x,y" and "SAVE" primitives.

Taking that step further I then wrote a HTTP-server which would allow you to enter a BASIC program and view the image it created. It's a little cute at least.

Install it from source, or fetch a binary if you prefer, via:

$ go get -u

Then launch it and point your browser at http://localhost:8080, and you'll be presented with something like this:

Fun times.

Steve Kemp Steve Kemp's Blog

Review: The Stone Sky

Mar, 23/10/2018 - 6:53pd

Review: The Stone Sky, by N.K. Jemisin

Series: The Broken Earth #3 Publisher: Orbit Copyright: August 2017 ISBN: 0-316-22925-3 Format: Kindle Pages: 464

So, this is it: the epic conclusion of the series that began with The Fifth Season. And it is a true conclusion. Jemisin's world is too large and her characters too deep (and too real) to wrap up into a simple package, but there's a finality to this conclusion that makes me think it unlikely Jemisin will write a direct sequel any time soon. (And oh my do you not want to start with this book. This series must be read in order.)

I'm writing this several months after finishing the novel in part because I still find it challenging to put my feelings about this book into words. There are parts of this story I found frustrating and others I found unsatisfying, but each time I dig into those disagreements, I find new layers of story and meaning and I can't see how the book could have gone any other way. The Stone Sky is in many ways profoundly uncomfortable and unsettling, but that's also what makes it so good. Jemisin is tackling problems, emotions, and consequences that are unsettling, that should be unsettling. Triumphant conclusions would be a lie. This story hurt all the way through; it's fitting that the ending did as well. But it's also strangely hopeful, in a way that doesn't take away the pain.

World-building first. This is, thankfully, not the sort of series that leaves one with a host of unanswered questions or a maddeningly opaque background. Jemisin puts all of her cards on the table. We find out exactly how Essun's world was created, what the obelisks are, who the stone eaters are, who the Guardians are, and something even of the origin of orogeny. This is daring after so much intense build-up, and Jemisin deserves considerable credit for an explanation that (at least for me) held together and made sense of much of what had happened without undermining it.

I do have some lingering reservations about the inhuman villain of this series, which I still think is too magically malevolent (and ethically simplistic) for the interwoven complexity of the rest of the world-building. They're just reservations, not full objections, but buried in the structure of the world is an environmental position that's a touch too comfortable, familiar, and absolute, particularly by the standards of the rest of the series.

For the human villains, though, I have neither objections nor reservations. They are all too believable and straightforward, both in the backstory of the deep past and in its reverberations and implications up to Essun's time. There is a moment when the book's narrator is filling in details in the far past, an off-hand comment about how life was sacred to their civilization. And, for me, a moment of sucked-in breath and realization that of course it was. Of course they said life was sacred. It explained so very much, about so very many things: a momentary flash of white-hot rage, piercing the narrative like a needle, knitting it together.

Against that backdrop, the story shifts in this final volume from its primary focus on Essun to a balanced split between Essun and her daughter, continuing a transition that began in The Obelisk Gate. Essun by now is a familiar figure to the reader: exhausted, angry, bitter, suspicious, and nearly numb, but driving herself forward with unrelenting force. Her character development in The Stone Sky comes less from inside herself and more from unexpected connections and empathy she taught herself not to look for. Her part of this story is the more traditional one, the epic fantasy band of crusaders out to save the world, or Essun's daughter, or both.

Essun's daughter's story is... not that, and is where I found both the frustrations and the joy of this conclusion. She doesn't have Essun's hard experience, her perspective on the world, or Essun's battered, broken, reforged, and hardened sense of duty. But she has in many ways a clearer view, for all its limitations. She realizes some things faster than Essun does, and the solutions she reaches for are a critique of the epic fantasy solutions that's all the more vicious for its gentle emotional tone.

This book offers something very rare in fiction: a knife-edge conclusion resting on a binary choice, where as a reader I was, and still am, deeply conflicted about which choice would have been better. Even though by normal epic fantasy standards the correct choice is obvious.

The Stone Sky is, like a lot of epic fantasy, a story about understanding and then saving the world, but that story is told in counterpoint with a biting examination of the nature of the world that's being saved. It's also a story about a mother and a daughter, about raising a child who's strong enough to survive in a deeply unfair and vicious world, and about what it means to succeed in that goal. It's a story about community, and empathy, and love, and about facing the hard edge of loss inside all of those things and asking whether it was worth it, without easy answers.

The previous books in this series were angry in a way that I rarely see in literature. The anger is still there in The Stone Sky, but this book is also sad, in a way that's profound and complicated and focused on celebrating the relationships that matter enough to make us sad. There are other stories that I have enjoyed reading more, but there are very few that I thought were as profound or as unflinching.

Every book in this series won a Hugo award. Every book in this series deserved it. This is a modern masterpiece of epic fantasy that I am quite certain we will still be talking about fifty years from now. It's challenging, powerful, emotional, and painful in a way that you may have to brace yourself to read, but it is entirely worth the effort.

Rating: 9 out of 10

Russ Allbery Eagle's Path

security things in Linux v4.19

Mar, 23/10/2018 - 1:17pd

Previously: v4.18.

Linux kernel v4.19 was released today. Here are some security-related things I found interesting:

L1 Terminal Fault (L1TF)

While it seems like ages ago, the fixes for L1TF actually landed at the start of the v4.19 merge window. As with the other speculation flaw fixes, lots of people were involved, and the scope was pretty wide: bare metal machines, virtualized machines, etc. LWN has a great write-up on the L1TF flaw and the kernel’s documentation on L1TF defenses is equally detailed. I like how clean the solution is for bare-metal machines: when a page table entry should be marked invalid, instead of only changing the “Present” flag, it also inverts the address portion so even a speculative lookup ignoring the “Present” flag will land in an unmapped area.

protected regular and fifo files

Salvatore Mesoraca implemented an O_CREAT restriction in /tmp directories for FIFOs and regular files. This is similar to the existing symlink restrictions, which take effect in sticky world-writable directories (e.g. /tmp) when the opening user does not match the owner of the existing file (or directory). When a program opens a FIFO or regular file with O_CREAT and this kind of user mismatch, it is treated like it was also opened with O_EXCL: it gets rejected because there is already a file there, and the kernel wants to protect the program from writing possibly sensitive contents to a file owned by a different user. This has become a more common attack vector now that symlink and hardlink races have been eliminated.

syscall register clearing, arm64

One of the ways attackers can influence potential speculative execution flaws in the kernel is to leak information into the kernel via “unused” register contents. Most syscalls take only a few arguments, so all the other calling-convention-defined registers can be cleared instead of just left with whatever contents they had in userspace. As it turns out, clearing registers is very fast. Similar to what was done on x86, Mark Rutland implemented a full register-clearing syscall wrapper on arm64.

Variable Length Array removals, part 3

As mentioned in part 1 and part 2, VLAs continue to be removed from the kernel. While CONFIG_THREAD_INFO_IN_TASK and CONFIG_VMAP_STACK cover most issues with stack exhaustion attacks, not all architectures have those features, so getting rid of VLAs makes sure we keep a few classes of flaws out of all kernel architectures and configurations. It’s been a long road, and it’s shaping up to be a 4-part saga with the remaining VLA removals landing in the next kernel. For v4.19, several folks continued to help grind away at the problem: Arnd Bergmann, Kyle Spiers, Laura Abbott, Martin Schwidefsky, Salvatore Mesoraca, and myself.

shift overflow helper
Jason Gunthorpe noticed that while the kernel recently gained add/sub/mul/div helpers to check for arithmetic overflow, we didn’t have anything for shift-left. He added check_shl_overflow() to round out the toolbox and Leon Romanovsky immediately put it to use to solve an overflow in RDMA.

Edit: I forgot to mention this next feature when I first posted:

trusted architecture-supported RNG initialization

The Random Number Generator in the kernel seeds its pools from many entropy sources, including any architecture-specific sources (e.g. x86’s RDRAND). Due to many people not wanting to trust the architecture-specific source due to the inability to audit its operation, entropy from those sources was not credited to RNG initialization, which wants to gather “enough” entropy before claiming to be initialized. However, because some systems don’t generate enough entropy at boot time, it was taking a while to gather enough system entropy (e.g. from interrupts) before the RNG became usable, which might block userspace from starting (e.g. systemd wants to get early entropy). To help these cases, Ted T’so introduced a toggle to trust the architecture-specific entropy completely (i.e. RNG is considered fully initialized as soon as it gets the architecture-specific entropy). To use this, the kernel can be built with CONFIG_RANDOM_TRUST_CPU=y (or booted with “random.trust_cpu=on“).

That’s it for now; thanks for reading. The merge window is open for v4.20! Wish us luck. :)

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

kees Debian – codeblog