
Planet Debian

Updated: 5 months 3 weeks ago

Reproducible builds folks: Reproducible Builds: Weekly report #194

Wed, 16/01/2019 - 5:39pm

Here’s what happened in the Reproducible Builds effort between Sunday January 6 and Saturday January 12 2019:

Packages reviewed and fixed, and bugs filed

Website development

There were a number of updates to the project website this week, including:

Test framework development

There were a number of updates this week to our Jenkins-based testing framework, including:

  • Holger Levsen:
    • Arch Linux-specific changes:
      • Use Debian’s sed, untar and others with sudo as they are not available in the bootstrap.tar.gz file ([], [], [], [], etc.).
      • Fix incorrect sudoers(5) regex. []
      • Only move old schroot away if it exists. []
      • Add and drop debug code, cleanup cruft, exit on cleanup(), etc. ([], [])
      • cleanup() is only called on errors, thus exit 1. []
    • Debian-specific changes:
      • Revert “Support arbitrary package filters when generating deb822 output” ([]) and re-open the corresponding merge request.
      • Show the total number of packages in a package set. []
    • Misc/generic changes:
      • Node maintenance. ([], [], [], etc.)
  • Mattia Rizzolo:
    • Fix the NODE_NAME value in case it’s not a fully-qualified domain name. []
    • Node maintenance. ([], etc.)
  • Vagrant Cascadian:

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Iain R. Learmonth: A Solution for Authoritative DNS

Wed, 16/01/2019 - 5:30pm

I’ve been thinking about improving my DNS setup. So many things will use e-mail verification as a backup authentication measure that it is starting to show as a real weak point. An Ars Technica article earlier this year talked about how “[f]ederal authorities and private researchers are alerting companies to a wave of domain hijacking attacks that use relatively novel techniques to compromise targets at an almost unprecedented scale.”

The two attacks mentioned in that article, changing the nameserver and changing records, are something that DNSSEC could protect against. Records wouldn’t even have to be changed on my chosen nameservers; a BGP hijack could simply direct the queries for records on my domain to another server, which could then reply with whatever it chooses.

After thinking for a while, my requirements come down to:

  • Offline DNSSEC signing
  • Support for storing signing keys on an HSM (YubiKey)
  • Version control
  • No requirement to run any Internet-facing infrastructure myself

After some searching I discovered GooDNS, a “good” DNS hosting provider. They have an interesting setup that looks to fit all of my requirements. If you’re coming from a more traditional arrangement with either a self-hosted name server or a web panel then this might seem weird, but if you’ve done a little “infrastructure as code” then maybe it is not so weird.

The initial setup must be completed via the web interface. You’ll need a hardware security module (HSM) that provides a time-based one-time password (TOTP), an SSH key, and optionally a GPG key as part of the registration. You will need the TOTP to make any changes via the web interface, the SSH key will be used to interact with the git service, and the GPG key will be used for any email correspondence, including recovery in the case that you lose your TOTP HSM or password.

You must validate your domain before it will be served from the GooDNS servers. There are two options for this: one for new domains, and one “zero-downtime” option that is more complex but may be desirable if your domain is already live. For new domains you can simply update your nameservers at the registrar to validate your domain; for existing domains you can add a TXT record to the current DNS setup, which GooDNS will validate, allowing the domain to be configured fully before switching the nameservers. Once the domain is validated, you will not need to use the web interface again unless updating contact, security or billing details.

All the DNS configuration is managed in a single git repository. There are three branches in the repository: “master”, “staging” and “production”. These are just the default branches; you can create other branches if you like. The only two that GooDNS will use are the “staging” and “production” branches.

GooDNS provides a script that you can install at /usr/local/bin/git-dns (or elsewhere in your path) which provides some simple helper commands for working with the git repository. The script is extremely readable and so it’s easy enough to understand and write your own scripts if you find yourself needing something a little different.

When you clone your git repository you’ll find one text file on the master branch for each of your configured zones:

irl@computer$ git clone
Cloning into 'irl1'...
remote: Enumerating objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 3
Receiving objects: 100% (3/3), 22.55 KiB | 11.28 MiB/s, done.
Resolving deltas: 100% (1/1), done.
irl@computer$ ls
irl@computer$ cat
@ IN SOA ( _SERIAL_ 28800 7200 864000 86400 )
@ IN NS
@ IN NS
@ IN NS

In the backend GooDNS is using OpenBSD 6.4 servers with nsd(8). This means that the zone files use the same syntax. If you don’t know what this means then that is fine as the documentation has loads of examples in it that should help you to configure all the record types you might need. If a record type is not yet supported by nsd(8), you can always specify the record manually and it will work just fine.

One thing you might note here is that the string _SERIAL_ appears instead of a serial number. The git-dns script will replace this with a serial number when you are ready to publish the zone file.
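git-dns (like GooDNS itself) isn’t a real tool, but the substitution it describes is easy to picture. Here’s a minimal Python sketch, assuming the conventional YYYYMMDDnn date-based scheme for SOA serials; the function name and scheme are my own, not anything GooDNS specifies:

```python
import datetime

def substitute_serial(zone_text, date=None, revision=0):
    """Replace the _SERIAL_ placeholder with a YYYYMMDDnn serial,
    a common date-based convention for DNS SOA serial numbers."""
    if date is None:
        date = datetime.date.today()
    serial = "%04d%02d%02d%02d" % (date.year, date.month, date.day, revision)
    return zone_text.replace("_SERIAL_", serial)

zone = "@ IN SOA ( _SERIAL_ 28800 7200 864000 86400 )"
print(substitute_serial(zone, datetime.date(2019, 2, 1)))
# @ IN SOA ( 2019020100 28800 7200 864000 86400 )
```

A date-based serial has the nice property that re-signing the zone again the same day only requires bumping the two-digit revision counter.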

I’ll assume that you already have your GPG key and SSH key set up; now let’s set up the DNSSEC signing key. For this, we will use one of the four slots of the YubiKey. You could use either 9a or 9e, but here I’ll use 9e, as 9a already holds my SSH key.

To set up the token, we will need the yubico-piv-tool. Be extremely careful when following these steps especially if you are using a production device. Try to understand the commands before pasting them into the terminal.

First, make sure the slot is empty. You should get an output similar to the following one:

irl@computer$ yubico-piv-tool -s 9e -a status
CHUID: ...
CCC: No data available
PIN tries left: 10

Now we will use git-dns to create our key signing key (KSK):

irl@computer$ git dns kskinit --yubikey-neo
Successfully generated a new private key.
Successfully generated a new self signed certificate.
Found YubiKey NEO.
Slots available:
  (1) 9a - Not empty
  (2) 9e - Empty
Which slot to use for DNSSEC signing key? 2
Successfully imported a new certificate.
CHUID: ...
CCC: No data available
Slot 9e:
  Algorithm: ECCP256
  Subject DN:
  Issuer DN:
  Fingerprint: 97dda8a441a401102328ab6ed4483f08bc3b4e4c91abee8a6e144a6bb07a674c
  Not Before: Feb 01 13:10:10 2019 GMT
  Not After: Feb 01 13:10:10 2021 GMT
PIN tries left: 10

We can see the public key for this new KSK:

irl@computer$ git dns pubkeys
DNSKEY 256 3 13 UgGYfiNse1qT4GIojG0VGcHByLWqByiafQ8Yt7/Eit2hCPYYcyiE+TX8HP8al/SzCnaA8nOpAkqFgPCI26ydqw==

Next we will create a zone signing key (ZSK). These are stored in the keys/ folder of your git repository but are not version controlled. You can optionally encrypt these with GnuPG (thus requiring the YubiKey to sign zones), but I’ve not done that here. Operations using slot 9e do not require the PIN, so leaving the YubiKey connected to the computer is pretty much the same as leaving the KSK on the disk. Maybe a future YubiKey will not have this restriction, or will add more slots.

irl@computer$ git dns zskinit
Created ./keys/
Successfully generated a new private key.
irl@computer$ git dns pubkeys
DNSKEY 256 3 13 UgGYfiNse1qT4GIojG0VGcHByLWqByiafQ8Yt7/Eit2hCPYYcyiE+TX8HP8al/SzCnaA8nOpAkqFgPCI26ydqw=
DNSKEY 257 3 13 kS7DoH7fxDsuH8o1vkvNkRcMRfTbhLqAZdaT2SRdxjRwZSCThxxpZ3S750anoPHV048FFpDrS8Jof08D2Gqj9w==

Now we can go to our domain registrar and add DS records to the registry for our domain using the public keys. First though, we should actually sign the zone. To create a signed zone:

irl@computer$ git dns signall
Signing
Signing
[production 51da0f0] Signed all zone files at 2019-02-01 13:28:02
 2 files changed, 6 insertions(+), 0 deletions(-)

You’ll notice that all the zones were signed although we only created one set of keys. Setups where you have one shared KSK and an individual ZSK per zone are possible, but they provide questionable additional security. Reducing the number of keys required for DNSSEC helps to keep them all under control.

To make these changes live, all that is needed is to push the production branch. To keep things tidy, and to keep a backup of your sources, you can push the master branch too. git-dns provides a helper function for this:

irl@computer$ git dns push
Pushing master...done
Pushing production...done
Pushing staging...done

If I now edit a zone file on the master branch and want to try out the zone before making it live, all I need to do is:

irl@computer$ git dns signall --staging
Signing
Signing
[staging 72ea1fc] Signed all zone files at 2019-02-01 13:30:12
 2 files changed, 8 insertions(+), 0 deletions(-)
irl@computer$ git dns push
Pushing master...done
Pushing production...done
Pushing staging...done

If I now use the staging resolver to look up records, I’ll see the zone live. The staging resolver is a really cool idea for development and testing. They give you a couple of unique IPv6 addresses, just for you, that will serve your staging zone files and act as a resolver for everything else. You just have to plug these into your staging environment and everything is ready to go. In the future they are planning to allow you to have more than one staging environment too.

All that is left to do is ensure that your zone signatures stay fresh. This is easy to achieve with a cron job:

0 3 * * * /usr/local/bin/git-dns cron --repository=/srv/dns/ --quiet
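Since GooDNS is fictional, the exact behaviour of its cron command is unspecified, but the freshness check it would need is straightforward: re-sign whenever any RRSIG is close to expiring. A small Python sketch of that idea; the function name and the seven-day margin are assumptions of mine:

```python
import datetime

def needs_resign(rrsig_expirations, now=None, margin_days=7):
    """Return True when any RRSIG expiry falls within margin_days,
    i.e. the zone should be re-signed before signatures go stale."""
    if now is None:
        now = datetime.datetime.now()
    margin = datetime.timedelta(days=margin_days)
    return any(exp - now <= margin for exp in rrsig_expirations)

now = datetime.datetime(2019, 2, 1)
print(needs_resign([datetime.datetime(2019, 3, 1)], now))  # False
print(needs_resign([datetime.datetime(2019, 2, 5)], now))  # True
```

Running this nightly, as the crontab entry above does, keeps the window between expiry and re-signing comfortably small.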

I monitor the records independently and disable the mail output from this command, but you might want to drop the --quiet if you’d like to get mails from cron on errors/warnings.

On the GooDNS blog they talk about adding an Onion service for the git server in the future so that they do not have logs that could show the location of your DNSSEC signing keys, which allows you to have even greater protection. They already support performing the git push via Tor but the addition of the Onion service would make it faster and more reliable.

Unfortunately, GooDNS is entirely fictional and you can’t actually manage your DNS in this way, but wouldn’t it be nice? This post has drawn inspiration from the following:

Daniel Silverstone: Plans for 2019

Wed, 16/01/2019 - 12:29pm

At the end of last year I made eight statements about what I wanted to do throughout 2019. I tried to split them semi-evenly between being a better adult human and being a better software community contributor. I have had a few weeks now to settle my thoughts around what they mean and I'd like to take some time to go through the eight and discuss them a little more.

I've been told that doing this reduces the chance of me sticking to the points because simply announcing the points and receiving any kind of positive feedback may stunt my desire to actually achieve the goals. I'm not sure about that though, and I really want my wider friends community to help keep me honest about them all. I've set a reminder for April 7th to review the situation and hopefully be able to report back positively on my progress.

My list of goals was stated in a pair of tweets:

  1. Continue to lose weight and get fit. I'd like to reach 80kg during the year if I can
  2. Begin a couch to 5k and give it my very best
  3. Focus my software work on finishing projects I have already started
  4. Where I join in other projects be a net benefit
  5. Give back to the @rustlang community because I've gained so much from them already
  6. Be better at tidying up
  7. Save up lots of money for renovations
  8. Go on a proper holiday
Weight and fitness

Some of you may be aware already, others may not, that I have been making an effort to shed some of my excess weight over the past six or seven months. I "started" in May of 2018 weighing approximately 141kg and I am, as of this morning, weighing approximately 101kg. Essentially that's a semi-steady rate of 5kg per month, though it has, obviously, been slowing down of late.

In theory, given my height of roughly 178cm I should aim for a weight of around 70kg. I am trying to improve my fitness and to build some muscle and as such I'm aiming long-term for roughly 75kg. My goal for this year is to continue my improvement and to reach and maintain 80kg or better. I think this will make a significant difference to my health and my general wellbeing. I'm already sleeping better on average, and I feel like I have more energy over all. I bought a Garmin Vivoactive 3 and have been using that to track my general health and activity. My resting heart rate has gone down a few BPM over the past six months, and I can see my general improvement in sleep etc over that time too. I bought a Garmin Index Scale to track my weight and body composition, and that is also showing me good values as well as encouraging me to weigh myself every day and to learn how to interpret the results.

I've been managing my weight loss partly by means of a 16:8 intermittent fasting protocol, combined with a steady calorie deficit of around 1000kcal/day. While this sounds pretty drastic, I was horrendously overweight and this was critical to getting my weight to shift quickly. I expect I'll reduce that deficit over the course of the year, hence I'm only aiming for a 20kg drop over a year rather than trying to maintain what could in theory be a drop of 30kg or more.

In addition to the IF/deficit, I have been more active. I bought an e-bike and slowly got going on that over the summer, along with learning to enjoy walks around my local parks and scrubland. Once the weather got bad enough that I didn't want to be out of doors, I joined a gym, where I have been going regularly since September. Since the end of October I have been doing a very basic strength training routine and my shoulders do seem to be improving for it. I can still barely do a pushup but it's less embarrassingly awful than it was.

Given my efforts toward my fitness, my intention this year is to extend that to include a Couch to 5k type effort. Amusingly, Garmin offer a self-adjusting "coach" called Garmin Coach, which I will likely use to guide me through the process. Maybe I'll get involved in some parkruns this year too, though I'm not committing to any. I'm not committing to reach an ability to run 5k because, quite simply, my bad leg may not let me, but I am committing to give it my best. My promise to myself was to start some level of jogging once I hit 100kg, so that's looking likely by the end of this month. Maybe February is when I'll start the c25k stuff in earnest.


I have put three items down in this category to get better at this year. One is a big thing for our house. I am, quite simply put, awful at tidying up. I leave all sorts of things lying around and I am messy and lazy. I need to fix this. My short-term goal in this respect is to pick one room of the house where the mess is mostly mine, and learn to keep it tidy before my checkpoint in April. I think I'm likely to choose the Study because it's where others of my activities for this year will centre and it's definitely almost entirely my mess in there. I'm not yet certain how I'll learn to do this, but it has been a long time coming and I really do need to. It's not fair to my husband for me to be this awful all the time.

The second of these points is to explicitly save money for renovations. Last year we had a new bathroom installed and I've been seriously happy about that. We will need to pay that off this year (we have the money, we're just waiting as long as we can to earn the best interest on it first) and then I'll want to be saving up for another spot of renovations. I'd like to have the kitchen and dining room done - new floor, new units and sink in the kitchen, fix up the messy wall in the dining room, have them decorated, etc. I imagine this will take quite a bit of 2019 to save for, but hopefully this time next year I'll be saying that we managed that and it's time for the next part of the house.

Finally I want to take a proper holiday this year. It has been a couple of years since Rob and I went to Seoul for a month, and while that was excellent, it was partly "work from home" and so I'd like to take a holiday which isn't also a conference, or working from home, or anything other than relaxation and seeing of interesting things. This will also require saving for, so I imagine we won't get to do it until mid to late 2019, but I feel like this is part of a general effort I've been making to take care of myself more. The fitness stuff above being physical, but a proper holiday being part of taking better care of my mental health.

Software, Hardware, and all the squishy humans in between

2018 was not a great year for me in terms of getting projects done. I have failed to do almost anything with Gitano and I did not do well with Debian or other projects I am part of. As such, I'm committing to do better by my projects in 2019.

First, and foremost, I'm pledging to focus my efforts on finishing projects which I've already started. I am very good at thinking "Oh, that sounds fun" and starting something new, leaving old projects by the wayside and not getting them to any state of completion. While software is never entirely "done", I do feel like I should get in-progress projects to a point that others can use them and maybe contribute too.

As such, I'll be making an effort to sort out issues which others have raised in Gitano (though I doubt I'll do much more feature development for it) so that it can be used by NetSurf and so that it doesn't drop out of Debian. Since the next release of Debian is due soon, I will have to pull my finger out and get this done pretty soon.

I have been working, on and off, with Rob on a new point-of-sale for our local pub Ye Olde Vic and I am committing to get it done to a point that we can experiment with using it in the pub by the summer. Also I was working on a way to measure fluid flow through a pipe so that we can correlate the pulled beer with the sales and determine wastage etc. I expect I'll get back to the "beer'o'meter" once the point-of-sale work is in place and usable. I am not going to commit to getting it done this year, but I'd like to make a dent in the remaining work for it.

I have an on-again off-again relationship with some code I wrote quite a while ago when learning Rust. I am speaking of my Yarn implementation called (imaginatively) rsyarn. I'd like to have that project reworked into something which can be used with Cargo and associated tooling nicely so that running cargo test in a Rust project can result in running yarns as well.

There may be other projects which jump into this category over the year, but those listed above are the ones I'm committing to make a difference to my previous lackadaisical approach.

On a more community-minded note, one of my goals is to ensure that I'm always a net benefit to any project I join or work on in 2019. I am very aware that in a lot of cases, I provide short drive-by contributions to projects which can end up costing that project more than I gave them in benefit. I want to stop that behaviour and instead invest more effort into fewer projects so that I always end up a net benefit to the project in question. This may mean spending longer to ensure that an issue I file has enough in it that I may not need to interact with it again until verification of a correct fix is required. It may mean spending time fixing someone else's issues so that there is the engineering bandwidth for someone else to fix mine. I can't say for sure how this will manifest, beyond being up-front with any community I decide to take part in and requesting that they tell me if I end up costing more than I'm bringing in benefit.

Rust and the Rust community

I've mentioned Rust above, and this is perhaps the most overlappy of my promises for 2019. I want to give back to the Rust community because over the past few years, as I've learned Rust and learned more and more about the community, I've seen how much of a positive effect they've had on my life. Not just because they made learning a new programming language so enjoyable, but because of the community's focus on programmers as human beings. The fantastic documentation ethic, and the wonderfully inclusive atmosphere in the community, meant that I managed to get going with Rust so much more effectively than with almost any other language I've ever tried to learn since Lua.

I have, since Christmas, been slowly involving myself in the Rust community more and more. I joined one of the various Discord servers and have been learning about how is managed and I have been contributing to which is the initial software interface most Rust users encounter and forms such an integral part of the experience of the ecosystem that I feel it's somewhere I can make a useful impact.

While I can't say a significant amount more right now, I hope I'll be able to blog more in the future on what I'm up to in the Rust community and how I hope that will benefit others already in, and interested in joining, the fun that is programming in Rust.

In summary, I hope at least some of you will help to keep me honest about my intentions for 2019, and if, in return, I can help you too, please feel free to let me know.

Russ Allbery: Review: Aerial Magic Season 1

Wed, 16/01/2019 - 5:02am

Review: Aerial Magic Season 1, by walkingnorth

Series: Aerial Magic #1
Publisher: LINE WEBTOON
Copyright: 2018
Format: Online graphic novel
Pages: 156

Aerial Magic is a graphic novel published on the LINE WEBTOON platform by the same author as the wonderful Always Human, originally in weekly episodes. It is readable for free, starting with the prologue. I was going to wait until all seasons were complete and then review the entire work, like I did with Always Human, but apparently there are going to be five seasons and I don't want to wait that long. This is a review of the first season, which is now complete in 25 episodes plus a prologue.

As with Always Human, the pages metadata in the sidebar is a bit of a lie: a very rough guess on how many pages this would be if it were published as a traditional graphic novel (six times the number of episodes, since each episode seems a bit longer than in Always Human). A lot of the artwork is large panels, so it may be an underestimate. Consider it only a rough guide to how long it might take to read.

Wisteria Kemp is an apprentice witch. This is an unusual thing to be — not the witch part, which is very common in a society that appears to use magic in much the way that we use technology, but the apprentice part. Most people training for a career in magic go to university, but school doesn't agree with Wisteria. There are several reasons for that, but one is that she's textblind and relies on a familiar (a crow-like bird named Puppy) to read for her. Her dream is to be accredited to do aerial magic, but her high-school work was... not good, and she's very afraid she'll be sent home after her ten-day trial period.

Magister Cecily Moon owns a magical item repair shop in the large city of Vectum and agreed to take Wisteria on as an apprentice, something that most magisters no longer do. She's an outgoing woman with a rather suspicious seven-year-old, two other employees, and a warm heart. She doesn't seem to have the same pessimism Wisteria has about her future; she instead is more concerned with whether Wisteria will want to stay after her trial period. This doesn't reassure Wisteria, nor do her initial test exercises, all of which go poorly.

I found the beginning of this story a bit more painful than Always Human. Wisteria has such a deep crisis of self-confidence, and I found Cecily's lack of awareness of it quite frustrating. This is not unrealistic — Cecily is clearly as new to having an apprentice as Wisteria is to being one, and is struggling to calibrate her style — but it's somewhat hard reading since at least some of Wisteria's unhappiness is avoidable. I wish Cecily had shown a bit more awareness of how much harder she made things for Wisteria by not explaining more of what she was seeing. But it does set up a highly effective pivot in tone, and the last few episodes were truly lovely. Now I'm nearly as excited for more Aerial Magic as I would be for more Always Human.

walkingnorth's art style is much the same as that in Always Human, but with more large background panels showing the city of Vectum and the sky above it. Her faces are still exceptional: expressive, unique, and so very good at showing character emotion. She occasionally uses an exaggerated chibi style for some emotions, but I feel like she's leaning more on subtlety of expression in this series and doing a wonderful job with it. Wisteria's happy expressions are a delight to look at. The backgrounds are not generally that detailed, but I think they're better than Always Human. They feature a lot of beautiful sky, clouds, and sunrise and sunset moments, which are perfect for walkingnorth's pastel palette.

The magical system underlying this story doesn't appear in much detail, at least yet, but what is shown has an interesting animist feel and seems focused on the emotions and memories of objects. Spells appear to be standardized symbolism that is known to be effective, which makes magic something like cooking: most people use recipes that are known to work, but a recipe is not strictly required. I like the feel of it and the way that magic is woven into everyday life (personal broom transport is common), and am looking forward to learning more in future seasons.

As with Always Human, this is a world full of fundamentally good people. The conflict comes primarily from typical interpersonal conflicts and inner struggles rather than any true villain. Also as with Always Human, the world features a wide variety of unremarked family arrangements, although since it's not a romance the relationships aren't quite as central. It makes for relaxing and welcoming reading.

Also as in Always Human, each episode features its own soundtrack, composed by the author. I am again not reviewing those because I'm a poor music reviewer and because I tend to read online comics in places and at times where I don't want the audio, but if you like that sort of thing, the tracks I listened to were enjoyable, fit the emotions of the scene, and were unobtrusive to listen to while reading.

This is an online comic on a for-profit publishing platform, so you'll have to deal with some amount of JavaScript and modern web gunk. I at least (using up-to-date Chrome on Linux with UMatrix) had fewer technical problems with delayed and partly-loaded panels than I had with Always Human.

I didn't like this first season quite as well as Always Human, but that's a high bar, and it took some time for Always Human to build up to its emotional impact as well. What there is so far is a charming, gentle, and empathetic story, full of likable characters (even the ones who don't seem that likable at first) and a fascinating world background. This is an excellent start, and I will certainly be reading (and reviewing) later seasons as they're published.

walkingnorth has a Patreon, which, in addition to letting you support the artist directly, has various supporting material such as larger artwork and downloadable versions of the music.

Rating: 7 out of 10

Keith Packard: newt-lola

Tue, 15/01/2019 - 8:13pm
Newt: Replacing Bison and Flex

Bison and Flex (or any of the yacc/lex family members) are a quick way to generate reliable parsers and lexers for language development. It's easy to write a token recognizer in Flex and a grammar in Bison, and you can readily hook code up to the resulting parsing operation. However, neither Bison nor Flex is really designed for embedded systems, where memory is limited and malloc is to be avoided.

When starting Newt, I didn't hesitate to use them though; it was nice to use well-tested and debugged tools so that I could focus on other parts of the implementation.

With the rest of Newt working well, I decided to go take another look at the cost of lexing and parsing to see if I could reduce their impact on the system.

A Custom Lexer

Most mature languages end up with a custom lexer. I'm not really sure why this has to be the case, but it's pretty rare to run across anyone still using Flex for lexical analysis. The lexical structure of Python is simple enough that this isn't a huge burden; the hardest part of lexing Python code is in dealing with indentation, and Flex wasn't really helping with that part much anyways.

So I decided to just follow the same path and write a custom lexer. The result generates only about 1400 bytes of Thumb code, a significant savings from the Flex version which was about 6700 bytes.

To help make the resulting language LL, I added lexical recognition of the 'is not' and 'not in' operators; instead of attempting to sort these out in the parser, the lexer does a bit of look-ahead and returns a single token for both of these.
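Newt's actual lexer is written in C, but the look-ahead trick is easy to sketch in Python: collapse the two-word operators into single tokens so the parser never has to disambiguate them. The token names here are invented for illustration:

```python
def lex_words(words):
    """Collapse the two-word operators 'is not' and 'not in' into single
    tokens, so the parser never needs look-ahead to handle them."""
    tokens = []
    i = 0
    while i < len(words):
        pair = tuple(words[i:i + 2])  # one token of look-ahead
        if pair == ("is", "not"):
            tokens.append("IS_NOT")
            i += 2
        elif pair == ("not", "in"):
            tokens.append("NOT_IN")
            i += 2
        else:
            tokens.append(words[i])
            i += 1
    return tokens

print(lex_words(["a", "is", "not", "b"]))   # ['a', 'IS_NOT', 'b']
print(lex_words(["x", "not", "in", "xs"]))  # ['x', 'NOT_IN', 'xs']
```

A lone 'not' or 'is' falls through unchanged, so ordinary uses of those keywords are unaffected.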

Parsing on the Cheap

Many common languages are "almost" LL; this may come from using recursive-descent parsers. In 'pure' form, recursive-descent parsers can only recognize LL languages, but it's easy to hack them up to add a bit of look-ahead here and there to make them handle non-LL cases.
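A tiny Python sketch of that kind of hack, using an invented two-statement grammar: the parser is plain LL(1) except for one extra token of peeking to tell an assignment from a bare expression:

```python
def parse_stmt(tokens, i=0):
    """LL(1) recursive descent would have to commit after one token;
    peeking one token further (tokens[i + 1]) resolves the
    assignment-vs-expression ambiguity without restructuring the grammar."""
    if tokens[i].isidentifier() and i + 1 < len(tokens) and tokens[i + 1] == "=":
        return ("assign", tokens[i], tokens[i + 2])  # IDENT '=' IDENT
    return ("expr", tokens[i])                       # bare expression

print(parse_stmt(["x", "=", "y"]))  # ('assign', 'x', 'y')
print(parse_stmt(["x"]))            # ('expr', 'x')
```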

Which is one reason we end up using parser generators designed to handle LALR languages instead; that class of grammars does cover most modern languages fairly well, only requiring a small number of kludges to paper over the remaining gaps. I think the other reason I like using Bison is that the way an LALR parser works makes it really easy to handle synthetic attributes during parsing. Synthetic attributes are those built from collections of tokens that match an implicit parse tree of the input.

The '$[0-9]+' notation within Bison actions represents the values of lower-level parse tree nodes, while '$$' is the attribute value passed to higher levels of the tree.

However, LALR parser generators are pretty complicated, and the resulting parse tables are not tiny. I wondered how much space I could save by using a simpler parser structure, and (equally important), one designed for embedded use. Because Python is supposed to be an actual LL language, I decided to pull out an ancient tool I had and give it a try.

Lola: Reviving Ancient Lisp Code

Back in the 80s, I wrote a little lisp called Kalypso. One of the sub-projects resulted in an LL parser generator called Lola. LL parsers are a lot easier to understand than LALR parsers, so that's what I wrote.

A program written in a long-disused dialect of lisp offers two choices:

1) Get the lisp interpreter running again

2) Re-write the program in an available language.

I started trying to get Kalypso working again, and decided that it was just too hard. Kalypso was not very portably written and depended on a lot of historical architecture, including the structure of a.out files and the mapping of memory.

So, as I was writing a Python-like language anyways, I decided to transliterate Lola into Python. It's now likely the least "Pythonic" program around as it reflects a lot of common Lisp-like programming ideas. I've removed the worst of the recursive execution, but it is still full of list operations. The Python version uses tuples for lists, and provides a 'head' and 'rest' operation to manipulate them. I probably should have just called these 'car' and 'cdr'...
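One plausible reading of those operations on Python tuples (the post doesn't show the actual code, so this is purely illustrative):

```python
def head(t):
    """First element of a tuple-as-list (Lisp car)."""
    return t[0]

def rest(t):
    """The remaining elements (Lisp cdr), still as a tuple."""
    return t[1:]

symbols = ("start", "non-term", "rules")
print(head(symbols))  # start
print(rest(symbols))  # ('non-term', 'rules')
```

Using immutable tuples rather than Python lists keeps the Lisp-ish functional flavour: rest() shares no mutable state with the original.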

One benefit of using Lisp was that I could write the grammar as s-expressions and avoid needing a parser for the Lola input language. Of course, Lola is a parser generator, so it actually bootstraps itself by having the grammar for the Lola language written as Python data structures, generating a parser for that and then parsing the user's input. Here's what the Lola grammar looks like, in Lola grammar syntax:

start    : non-term start
         |
         ;
non-term : SYMBOL @NONTERM@ COLON rules @RULES@ SEMI
         ;
rules    : rule rules-p
         ;
rules-p  : VBAR rule rules-p
         |
         ;
rule     : symbols @RULE@
         ;
symbols  : SYMBOL @SYMBOL@ symbols
         |
         ;

Lola now has a fairly clean input syntax, including the ability to code actions in C (or whatever language). It has two output modules: a Python module that generates just the Python parse table structure, and a C module that generates a complete parser, ready to incorporate into your application much as Bison does.

Lola is available in my git repository.

Actions in Bison vs Lola

Remember how I said that Bison makes processing synthetic attributes really easy? Well, the same is not true of the simple LL parser generated by Lola.

Actions in Lola are chunks of C code executed when they appear at the top of the parse stack. However, there isn't any context for them in the parsing process itself; the parsing process discards any knowledge of production boundaries. As a result, the actions have to manually track state on a separate attribute stack. There are pushes to this stack in one action that are expected to be matched by pops in another.
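To illustrate the pattern (a hypothetical sketch in Python rather than Lola's actual C actions; the action names echo the @NONTERM@ and @RULES@ markers in the grammar above): one action pushes state that a later action pops, with nothing in the parser enforcing that the two stay balanced.

```python
# Hypothetical sketch of Lola-style actions sharing a manual attribute stack.
# The parser knows nothing about this stack; the push in action_nonterm must
# be matched by the pop in action_rules purely by convention.
attr_stack = []

def action_nonterm(name):
    """Runs when a non-terminal's name is seen: remember it for later."""
    attr_stack.append(name)

def action_rules(rules):
    """Runs when the rule bodies are done: pop the matching name."""
    name = attr_stack.pop()
    return (name, rules)

action_nonterm('expr')
production = action_rules(('term', 'PLUS', 'expr'))
print(production)  # ('expr', ('term', 'PLUS', 'expr'))
```

If the push and pop ever get out of step, the stack silently corrupts, which is exactly the fragility the next paragraph complains about.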

The resulting actions are not very pretty, and writing them is somewhat error-prone. I'd love to come up with a cleaner mechanism, and I've got some ideas, but those will have to wait for another time.

Bison vs Lola in Newt

Bison generated 4kB of parse tables and a 1470 byte parser. Lola generates 2kB of parse tables and a 1530 byte parser. So, switching has saved about 2kB of memory. Most of the parser code in both cases is probably the actions, which is my guess as to why the parser sizes are similar. I think the 2kB savings is worth it, but it's a close thing for sure.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, December 2018

Mar, 15/01/2019 - 11:26pd

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, about 224 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 8 hours (out of 8 hours allocated).
  • Antoine Beaupré did 24 hours (out of 24 hours allocated).
  • Ben Hutchings did 15 hours (out of 20 hours allocated, thus keeping 5 extra hours for January).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 44 hours (out of 30 hours allocated + 39.25 extra hours, thus keeping 25.25 extra hours for January).
  • Hugo Lefeuvre did 20 hours (out of 20 hours allocated).
  • Lucas Kanashiro did 3 hours (out of 4 hours allocated, thus keeping one extra hour for January).
  • Markus Koschany did 30 hours (out of 30 hours allocated).
  • Mike Gabriel did 21 hours (out of 10 hours allocated and 1 extra hour from November and 10 hours additionally allocated during the month).
  • Ola Lundqvist did 8 hours (out of 8 hours allocated + 7 extra hours, thus keeping 7 extra hours for January).
  • Roberto C. Sanchez did 12.75 hours (out of 12 hours allocated + 0.75 extra hours from November).
  • Thorsten Alteholz did 30 hours (out of 30 hours allocated).
Evolution of the situation

In December we managed to dispatch all the hours available to contributors again, and we had one new contributor in training. Still, we are looking for more contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The security tracker currently lists 37 packages with a known CVE and the dla-needed.txt file has 31 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Petter Reinholdtsen: CasparCG Server for TV broadcast playout in Debian

Mar, 15/01/2019 - 12:10pd

The layered video playout server created by Sveriges Television, CasparCG Server, entered Debian today. This completes many months of work to get the source ready to go into Debian. The first upload to the Debian NEW queue happened a month ago, but the work upstream to prepare it for Debian started more than two and a half months ago. So far the casparcg-server package is only available for amd64, but I hope this can be improved. The package is in contrib because it depends on the non-free fdk-aac library. The Debian package lacks support for streaming web pages because Debian is missing CEF, the Chromium Embedded Framework. CEF is wanted by several packages in Debian, but because the Chromium source is not available as a build dependency, it is not yet possible to upload CEF to Debian. I hope this will change in the future.

The reason I got involved is that the Norwegian open channel Frikanalen is starting to use CasparCG for our HD playout, and I would like to have all the free software tools we use to run the TV channel available as packages from the Debian project. The last remaining piece in the puzzle is Open Broadcast Encoder, but it depends on quite a lot of patched libraries which would have to be included in Debian first.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Jonathan Dowland: Amiga/Gotek boot test

Hën, 14/01/2019 - 8:42md

This is the fourth part in a series of blog posts. The previous post was part 3: preliminaries.

A500 mainboard

In 2015, Game 2.0, a retro gaming exhibition, visited the Centre for Life in Newcastle. On display were a range of vintage home computers and consoles, rigged up so you could play them. There was a huge range of machines, including some arcade units and various (relatively) modern VR systems that were drawing big crowds, but something else caught my attention: a modest little Amiga A500, running the classic puzzle game, "Lemmings".

A couple of weeks ago I managed to disassemble my Amiga and remove the broken floppy disk drive. The machine was pretty clean under the hood, considering its age. I fed new, longer power and data ribbon cables out of the floppy slot in the case (in order to attach the Gotek Floppy Emulator externally) and re-assembled it.

Success! Lemmings!

I then iterated a bit with setting up disk images and configuring the firmware on the Gotek. It was supplied with FlashFloppy, a versatile and open source firmware that can operate in a number of different modes and read disk images in a variety of formats. I had some Amiga images in "IPF" format, others in "ADF" and also some packages with "RP9" suffixes. After a bit of reading around, I realised the "IPF" ones weren't going to work, the "RP9" ones were basically ZIP archives of other disk images and metadata, and the "ADF" format was supported.

Amiga & peripherals on my desk

For my first boot test of the Gotek adaptor, the disk image really had to be Lemmings. Success! Now that I knew the hardware worked, I spent some time re-arranging my desk at home, to try and squeeze the Amiga, its peripherals and the small LCD monitor alongside the equipment I use for my daily work. It was a struggle but they just about fit.

The next step was to be planning out and testing a workflow for writing to virtual floppies via the Gotek. Unfortunately, before I could get as far as performing the first write test, my hardware developed a problem…

Jonathan Dowland: Amiga floppy recovery project, part 3: preliminaries

Hën, 14/01/2019 - 8:42md

This is the third part in a series of blog posts, following Amiga floppy recovery project, part 2. The next part is Amiga/Gotek boot test.

The first step for my Amiga project was to recover the hardware from my loft and check it all worked.

When we originally bought the A500 (in, I think, 1991) we bought a RAM expansion at the same time. The base model had a whole 512KiB of RAM, but it was common for people to buy a RAM expander that doubled the amount of memory to a whopping 1 MiB. The official RAM expander was the Amiga 501, which fit into a slot on the underside of the Amiga, behind a trapdoor.

The 501 also featured a real-time clock (RTC), which was powered by a backup NiCad battery soldered onto the circuit board. These batteries are notorious for leaking over a long enough time-frame, and our Amiga had been in a loft for at least 20 years. I had heard about this problem when I first dug the machine back out in 2015, and had a vague memory that I checked the board at the time and could find no sign of leakage, but reading around the subject more recently made me nervous, so I double-checked.

AMRAM-NC-issue 2 RAM expansion

Lo and behold, we don't have an official Commodore RAM expander: we were sold a third-party one, an "AMRAM-NC-issue 2". It contains the 512KiB RAM and a DIP switch, but no RTC or corresponding battery, so no leakage. The DIP switch was used to enable and disable the RAM expansion. Curiously it is currently flicked to "disable". I wonder if we ever actually had it switched on?

The follow-on Amiga models A500+ and A600 featured the RTC and battery directly on the machine's mainboard. I wonder if that has resulted in more of those units being irrevocably damaged from leaking batteries, compared to the A500. My neighbours had an A600, but they got rid of it at some point in the intervening decades. If I were looking to buy an Amiga model today, I'd be quite tempted by the A600, due to its low profile, lacking the numpad, and integrated composite video output and RF modulator.

Kickstart 1.3 (firmware) prompt

I wasn't sure whether I was going to have to rescue my old Philips CRT monitor from the loft. It would have been a struggle to find somewhere to house the Amiga and the monitor combined, as my desk at home is already a bit cramped. Our A500 was supplied with a Commodore A520 RF adapter which we never used in the machine's heyday. Over the Christmas break I tested it and it works, meaning I can use the A500 with my trusty 15" TFT TV (which has proven very useful for testing old equipment, far outlasting many monitors I've had since I bought it).

A520 RF modulator and external FDD

Finally I recovered my old Amiga external floppy disk drive. From what I recall this had very light usage in the past, so hopefully it still works, although I haven't yet verified it. I had partially disassembled this back in 2015, intending to re-purpose the drive into the Amiga. Now I have the Gotek, my plans have changed, so I carefully re-assembled it. Compared to enclosures I've used for PC equipment, it's built like a tank!

The next step is to remove the faulty internal floppy disk drive from the A500 and wire up the Gotek. I was thwarted from attempting this over the Christmas break. The Amiga enclosure's top part is screwed on with T9 Torx screws, and I lacked the right screwdriver part to remove it. I've now obtained the right screwdriver bit and can proceed.

Bits from Debian: "futurePrototype" will be the default theme for Debian 10

Hën, 14/01/2019 - 1:15md

The theme "futurePrototype" by Alex Makas has been selected as default theme for Debian 10 'buster'.

After the Debian Desktop Team made the call for proposing themes, a total of eleven choices were submitted, and every Debian contributor had the opportunity to vote on them in a survey. We received 3,646 responses ranking the different choices, and futurePrototype was the winner among them.

We'd like to thank all the designers who participated, providing nice wallpapers and artwork for Debian 10, and encourage everybody interested in this area of Debian to join the Design Team.

Congratulations, Alex, and thank you very much for your contribution to Debian!

Russ Allbery: Review: The Wonder Engine

Hën, 14/01/2019 - 6:32pd

Review: The Wonder Engine, by T. Kingfisher

Series: The Clocktaur War #2 Publisher: Red Wombat Tea Company Copyright: 2018 ASIN: B079KX1XFD Format: Kindle Pages: 318

The Wonder Engine is the second half of The Clocktaur War duology, following Clockwork Boys. Although there is a substantial transition between the books, I think it's best to think of this as one novel published in two parts. T. Kingfisher is a pen name for Ursula Vernon when she's writing books for adults.

The prologue has an honest-to-God recap of the previous book, and I cannot express how happy that makes me. This time, I read both books within a month of each other and didn't need it, but I've needed that sort of recap so many times in the past and am mystified by the usual resistance to including one.

Slate and company have arrived in Anuket City and obtained temporary housing in an inn. No one is trying to kill them at the moment; indeed, the city seems oblivious to the fact that it's in the middle of a war. On the plus side, this means that they can do some unharried investigation into the source of the Clocktaurs, the war machines that are coming ever closer to smashing their city. On the minus side, it's quite disconcerting, and ominous, that the Clocktaurs involve so little apparent expenditure of effort.

The next steps are fairly obvious: pull on the thread of research of the missing member of Learned Edmund's order, follow the Clocktaurs and scout the part of the city they're coming from, and make contact with the underworld and try to buy some information. The last part poses some serious problems for Slate, though. She knows the underworld of Anuket City well because she used to be part of it, before making a rather spectacular exit. If anyone figures out who she is, death by slow torture is the best she can hope for. But the underworld may be their best hope for the information they need.

If this sounds a lot like a D&D campaign, I'm giving the right impression. The thief, ranger, paladin, and priest added a gnole to their company in the previous book, but otherwise closely match a typical D&D party in a game that's de-emphasizing combat. It's a very good D&D campaign, though, with some excellent banter, the intermittent amusement of Vernon's dry sense of humor, and some fascinating tidbits of gnole politics and gnole views on humanity, which were my favorite part of the book.

Somewhat unfortunately for me, it's also a romance. Slate and Caliban, the paladin, had a few exchanges in passing in the first book, but much of The Wonder Engine involves them dancing around each other, getting exasperated with each other, and trying to decide if they're both mutually interested and if a relationship could possibly work. I don't object to the relationship, which is quite fun in places and only rarely drifts into infuriating "why won't you people talk to each other" territory. I do object to Caliban, who Slate sees as charmingly pig-headed, a bit simple, and physically gorgeous, and who I saw as a morose, self-righteous jerk.

As mentioned in my review of the previous book, this series is in part Vernon's paladin rant, and much more of that comes into play here as the story centers more around Caliban and digs into his relationship with his god and with gods in general. Based on Vernon's comments elsewhere, one of the points is to show a paladin in a different (and more religiously realistic) light than the cliche of being one crisis of faith away from some sort of fall. Caliban makes it clear that when you've had a god in your head, a crisis of faith is not the sort of thing that actually happens, since not much faith is required to believe in something you've directly experienced. (Also, as is rather directly hinted, religions tend not to recruit as paladins the people who are prone to thinking about such things deeply enough to tie themselves up in metaphysical knots.) Guilt, on the other hand... religions are very good at guilt.

Caliban is therefore interesting on that level. What sort of person is recruited as a paladin? How does that person react when they fall victim to what they fight in other people? What's the relationship between a paladin and a god, and what is the mental framework they use to make sense of that relationship? The answers here are good ones that fit a long-lasting structure of organized religious warfare in a fantasy world of directly-perceivable gods, rather than fragile, crusading, faith-driven paladins who seem obsessed with the real world's uncertainty and lack of evidence.

None of those combine into characteristics that made me like Caliban, though. While I can admire him as a bit of world-building, Slate wants to have a relationship with him. My primary reaction to that was to want to take Slate aside and explain how she deserves quite a bit better than this rather dim piece of siege equipment, no matter how good he might look without his clothes on. I really liked Slate in the first book; I liked her even better in the second (particularly given how the rescue scene in this book plays out). Personally, I think she should have dropped Caliban down a convenient well and explored the possibilities of a platonic partnership with Grimehug, the gnole, who was easily my second-favorite character in this book.

I will give Caliban credit for sincerely trying, at least in between the times when he decided to act like an insufferable martyr. And the rest of the story, while rather straightforward, enjoyably delivered on the setup in the first book and did so with a lot of good banter. Learned Edmund was a lot more fun as a character by the end of this book than he was when introduced in the first book, and that journey was fun to see. And the ultimate source of the Clocktaurs, and particularly how they fit into the politics of Anuket City, was more interesting than I had been expecting.

This book is a bit darker than Clockwork Boys, including some rather gruesome scenes, a bit of on-screen gore, and quite a lot of anticipation of torture (although thankfully no actual torture scenes). It was more tense and a bit more uncomfortable to read; the ending is not a light romp, so you'll want to be in the right mood for that.

Overall, I do recommend this duology, despite the romance. I suspect some (maybe a lot) of my reservations are peculiar to me, and the romance will work better for other people. If you like Vernon's banter (and if you don't, we have very different taste) and want to see it applied at long novel length in a D&D-style fantasy world with some truly excellent protagonists, give this series a try.

The Clocktaur War is complete with this book, but the later Swordheart is set in the same universe.

Rating: 8 out of 10

Russell Coker: Are Men the Victims?

Dje, 13/01/2019 - 2:08md

A very famous blog post is Straight White Male: The Lowest Difficulty Setting There Is by John Scalzi [1]. In that post he clearly describes that life isn’t great for straight white men, but that there are many more opportunities for them.

Causes of Death

When this post is mentioned there are often objections; one common objection is that men have a lower life expectancy. The CIA World Factbook (which I consider a very reliable source about such matters) says that the US life expectancy is 77.8 for males and 82.3 for females [2]. The country with the highest life expectancy is Monaco with 85.5 for males and 93.4 years for females [3]. The CDC in the US has a page with links to many summaries about causes of death [4]. The causes where men have higher rates in 2015 are heart disease (by 2.1%), cancer (by 1.7%), unintentional injuries (by 2.8%), and diabetes (by 0.4%). The difference in the death toll for heart disease, cancer, unintentional injuries, and diabetes accounts for 7% of total male deaths. The male top 10 list of causes of death also includes suicide (2.5%) and chronic liver disease (1.9%) which aren’t even in the top 10 list for females (which means that they would each comprise less than 1.6% of the female death toll).

So the difference in life expectancy would be partly due to heart problems (which are related to stress and choices about healthy eating etc), unintentional injuries (risk seeking behaviour and work safety), cancer (the CDC reports that smoking is more popular among men than women [5] by 17.5% vs 13.5%), diabetes (linked to unhealthy food), chronic liver disease (alcohol), and suicide. Largely the difference seems to be due to psychological and sociological issues.

The American Psychological Association has for the first time published guidelines for treating men and boys [6]. It’s noteworthy that the APA states that in the past “psychology focused on men (particularly white men), to the exclusion of all others” and goes on to describe how men dominate the powerful and well paid jobs. But then states that “men commit 90 percent of homicides in the United States and represent 77 percent of homicide victims”. They then go on to say “thirteen years in the making, they draw on more than 40 years of research showing that traditional masculinity is psychologically harmful and that socializing boys to suppress their emotions causes damage that echoes both inwardly and outwardly”. The article then goes on to mention use of alcohol, tobacco, and unhealthy eating as correlated with “traditional” ideas about masculinity. One significant statement is “mental health professionals must also understand how power, privilege and sexism work both by conferring benefits to men and by trapping them in narrow roles”.

The news about the new APA guidelines focuses on the conservative reaction, the NYT has an article about this [7].

I think that there is clear evidence that more flexible ideas about gender etc are good for men’s health and directly connect to some of the major factors that affect male life expectancy. Such ideas are opposed by conservatives.

Risky Jobs

Another point that is raised is the higher rate of work accidents for men than women. In Australia it was illegal for women to work in underground mines (one of the more dangerous work environments) until the late 80’s (here’s an article about this and other issues related to women in the mining industry [8]).

I believe that people should be allowed to work at any job they are qualified for. I also believe that we need more occupational health and safety legislation to reduce the injuries and deaths at work. I don’t think that the fact that a group of (mostly male) politicians created laws to exclude women from jobs that are dangerous and well-paid while also not creating laws to mitigate the danger is my fault. I’ll vote against such politicians at every opportunity.

Military Service

Another point that is often raised is that men die in wars.

In WW1 women were only allowed to serve on the battlefield as nurses. Many women died doing that. Deaths in war have never been an exclusively male thing. Women in many countries are campaigning to be allowed to serve equally in the military (including in combat roles).

As far as I am aware the last war where developed countries had conscription was the Vietnam war. Since then military technology has developed to increasingly complex and powerful weapons systems with an increasing number of civilians and non-combat military personnel supporting each soldier who is directly involved in combat. So it doesn’t seem likely that conscription will be required for any developed country in the near future.

But not being directly involved in combat doesn’t make people safe. NPR has an interesting article about the psychological problems (potentially leading up to suicide) that drone operators and intelligence staff experience [9]. As an aside, the article references two women doing that work.

Who Is Ignoring These Things?

I’ve been accused of ignoring these problems, it’s a general pattern on the right to accuse people of ignoring these straight white male problems whenever there’s a discussion of problems that are related to not being a straight white man. I don’t think that I’m ignoring anything by failing to mention death rates due to unsafe workplaces in a discussion about the treatment of trans people. I try to stay on topic.

The New York Times article I cited shows that conservatives are the ones trying to ignore these problems. When the American Psychological Association gives guidelines on how to help men who suffer psychological problems (which presumably would reduce the suicide rate and bring male life expectancy closer to female life expectancy) they are attacked by Fox etc.

My electronic communication (blog posts, mailing list messages, etc) is mostly connected to the free software community, which is mostly male. The majority of people who read what I write are male. But it seems that the majority of positive feedback when I write about such issues is from women. I don’t think there is a problem of women or left wing commentators failing men. I think there is a problem of men and conservatives failing men.

What Can We Do?

I’m sure that there are many straight white men who see these things as problems but just don’t say anything about it. If you don’t want to go to the effort of writing a blog post then please consider signing your name to someone else’s. If you are known for your work (EG by being a well known programmer in the Linux community) then you could just comment “I agree” on a post like this and that makes a difference while also being really easy to do.

Another thing that would be good is if we could change the hard drinking culture that seems connected to computer conferences etc. Kara has an insightful article on Model View Culture about drinking and the IT industry [10]. I decided that drinking at Linux conferences had got out of hand when about 1/3 of the guys at my table at a conference dinner vomited.

Linux Conf Au (the most prestigious Linux conference) often has a Depression BoF which is really good. I hope they have one this year. As an aside I have problems with depression, anyone who needs someone to talk to about such things and would rather speak to me than attend a BoF is welcome to contact me by email (please take a failure to reply immediately as a sign that I’m behind on checking my email not anything else) or social media.

If you have any other ideas on how to improve things please make a comment here, or even better write a blog post and link to it in a comment.

Related posts:

  1. Links August 2012 Google are providing some really good employee benefits including benefits...
  2. How to Support Straight Marriage There is currently a lot of discussion about how to...
  3. death threats against Kathy Sierra The prominent blogger and author Kathy Sierra has recently cancelled...

Russ Allbery: DocKnot 2.00

Dje, 13/01/2019 - 6:12pd

This is a new major release of the utility I use to generate package documentation. It's the start of a restructure that will eventually let me merge more of my package maintenance tools (and possibly my web site building tools) into this package.

The functions previously provided by the docknot command-line tool have been moved to docknot generate, and the arguments have been changed around a bit. There's also a new docknot generate-all, and more default values so that one doesn't have to pass in as many arguments. The Perl module has been similarly restructured, with documentation generation moved into a new App::DocKnot::Generate module.

On the documentation template front, this release also adds a separate TESTING section for Perl modules and changes some of the templating for standard documentation of how to run the test suite.

You can get the latest release from the DocKnot distribution page or from CPAN.

Ben Hutchings: Debian LTS work, December 2018

Sht, 12/01/2019 - 12:50pd

I was assigned 20 hours of work by Freexian's Debian LTS initiative and worked 15 hours. I carried the remaining hours over to January.

I prepared and released another stable update for Linux 3.16 (3.16.62) and rebased jessie's linux package on this version, but have not yet uploaded a new release.

I also discussed the outstanding speculation-related vulnerabilities affecting Xen in jessie.

Joachim Breitner: Teaching to read Haskell

Pre, 11/01/2019 - 10:17md

TL;DR: New Haskell tutorial at

Half a year ago, I left the normal academic career path and joined the DFINITY Foundation, a non-profit start-up that builds a blockchain-based “Internet Computer” which will, if everything goes well, provide a general purpose, publicly owned, trustworthy service hosting platform.

DFINITY heavily bets on Haskell as a programming language to quickly develop robust and correct programs (and it was my Haskell experience that opened this door for me). DFINITY also builds heavily on innovative cryptography and cryptographic protocols to make the Internet Computer work, and has assembled an impressive group of crypto researchers.

Crypto is hard, and so is implementing crypto. How do we know that the Haskell code correctly implements what the cryptography researchers designed? Clearly, our researchers will want to review the code and make sure that everything is as intended.

But surprisingly, not everybody is Haskell-literate. This is where I come in, given that I have taught Haskell classes before, and introduce Haskell to those who do not know it well enough yet.

At first I thought I’d just re-use the material I created for the CIS 194 Haskell course at the University of Pennsylvania. But I noticed that I am facing quite a different audience. Instead of young students with fairly little computer science background who can spend quite a bit of time learning to write Haskell, I am now addressing senior and very smart computer scientists with many other important things to do, who want to learn to read Haskell.

Certainly, a number of basics are needed in either case; basic functional programming for example. But soon, the needs diverge:

  • In order to write Haskell, I have to learn how to structure a program, how to read error messages and deal with Haskell’s layout rule, but I don’t need to know all idioms and syntax features yet.
  • If I want to read Haskell, I need to navigate possibly big files, recognize existing structure, and deal with a plenitude of syntax, but I don’t need to worry about setting up a compiler or picking the right library.

So I set out to create a new Haskell course, “Haskell for Readers”, that is specifically tailored to this audience. It leaves out a few things that are not necessary for reading Haskell, is relatively concise and densely packed, but it starts with the basics and does not assume prior exposure to functional programming.

As befits a non-profit foundation, DFINITY is happy for me to develop the lecture material in the open, and release it to the public under a permissive Creative Commons license, so I invite you to read the in-progress document, and maybe learn something. Of course, my hope is to also get constructive feedback in return, and hence profit from this public release. Sources are on GitHub.

Mike Gabriel: Upcoming FreeRDP v1.1 updates for Debian jessie (LTS) and Debian stretch (please test!)

Pre, 11/01/2019 - 4:06md

Recently, Bernhard Miklautz, Martin Fleisz and I have been working on old FreeRDP code. Our goal was to get FreeRDP in Debian jessie LTS and Debian stretch working again against recent Microsoft RDP servers.

It has been done now.


In Debian LTS, we were discussing a complex update of the freerdp (v1.1) package. That was before X-mas.

The status of FreeRDP v1.1 (jessie/stretch) then was and still is:

  • Since March 2018, freerdp in stretch (and jessie) (a Git snapshot of the never-released v1.1) has been unusable against the latest Microsoft Windows servers. All MS Windows OS versions switched to RDP protocol version 6 plus CredSSP version 3, and the freerdp versions in Debian jessie/stretch do not support that yet.
  • For people using Debian stretch, the only viable work-around is using freerdp2 from stretch-backports.
  • People using Debian jessie LTS don't have any options (except upgrading to stretch and using freerdp2 from stretch-bpo).
  • Currently, we know of four unfixed no-DSA CVE issues in freerdp (v1.1) (that are fixed in buster's freerdp2).

With my Debian LTS contributor hat on, I have started working on the open freerdp CVE issues (whose backported fixes luckily appeared in an Ubuntu security update, so not much work was needed on this side) and ...

... I have started backporting the required patches (at least these: [0a,0b,0c]) to get RDP proto version 6 working in Debian jessie's and Debian stretch's freerdp v1.1 version. It turned out later that the third referenced patch [0c] is not required.

With the LTS team it was agreed that this complete endeavour for LTS only makes sense if the stable release team is open to accepting such a complex change to Debian stretch, too.

While working on these patches, I regularly got feedback from FreeRDP upstream developer Bernhard Miklautz. That was before X-mas. Over the X-mas holidays (when I took time off with the family), Bernhard Miklautz and also Martin Fleisz from FreeRDP upstream took over and a couple of days ago I was presented with a working solution. Well done, my friends. Very cool and very awesome!

As already said, more and more people have recently installed FreeRDP v2 from stretch-backports (if on stretch), but we know of many people / sysadmins who are not allowed to use packages from Debian's backports repository. Using FreeRDP v2 from stretch-backports is still a good (actually the best) option for people without strict software policies. But for those who are not permitted to use software from Debian backports, we can now provide a solution.

Please test FreeRDP v1.1 upload candidates

We would love to get some feedback from brave test users. Actually, if the new update works for you, there is no need to give feedback. However, let us know if things fail for you.

Packages have been uploaded to my personal staging repository:

APT URL (stretch):

deb stretch main

APT URL (jessie):

deb jessie main

Obtain the archive key:

$ wget -qO - | sudo apt-key add -

Install the FreeRDP-X11 package:

$ sudo apt update
$ sudo apt install freerdp-x11

As the staging repo contains various other packages, please disable that repo immediately after having installed the new FreeRDP package versions. Thanks!
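One way to do that, sketched here with a hypothetical file name and a placeholder URL (the real entry depends on how you added the repo to your APT configuration), is to comment out the repo's deb line rather than delete it, then refresh the package lists:

```shell
# Hypothetical sources file and placeholder URL; substitute your real
# /etc/apt/sources.list.d/ entry for the staging repository.
REPO_FILE=sunweaver-staging.list
printf 'deb http://example.org/debian stretch main\n' > "$REPO_FILE"

# Comment the entry out to disable the repo while keeping it for later:
sed -i 's/^deb /# deb /' "$REPO_FILE"
cat "$REPO_FILE"
```

On a real system the file lives under /etc/apt/sources.list.d/, editing it needs sudo, and a final `sudo apt update` makes APT forget the repository's package lists.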

Next steps

The changeset (.debdiff) has already been sent for pre-approval to the Debian stable (aka stretch) release team [2].

I will postpone the upload by at least a few more days (let's say 5 days) to give people a chance to give feedback. When these days are over, and once (and if) I have got the release team's ACK to proceed, I will upload the updated package.

Once FreeRDP has been updated in Debian stretch, I will do an immediate upload of nearly the same package (with some formal changes) to Debian jessie LTS (installable via

For Debian stretch, the updated FreeRDP version will be available to all Debian stretch users with the next Debian stable point release at the latest (if nothing of the above gets delayed). The release team may give this update some priority and make it available via stable-updates prior to the next point release.

For Debian jessie, the updated FreeRDP version will be available once the update has been acknowledged by the Debian stable release team.


Dirk Eddelbuettel: pinp 0.0.7: More small YAML options

Fri, 11/01/2019 - 12:24pm

A good six months after the previous release, another small feature release of our pinp package for snazzier one- or two-column Markdown-based PDF vignettes got onto CRAN minutes ago as another [CRAN-pretest-publish] release, indicating a fully automated process (as can be done for packages free of NOTEs, WARNINGs and ERRORs, and without ‘changes to worse’ in their reverse-dependency checks).

One new option was suggested (and implemented) by Ilya Kashnitsky: the bold and small subtitle carrying a default of ‘this version built on …’ with the date is now customisable; the motivation was, for example, stating a post-publication DOI, which is indeed useful. In working with DOIs I also finally realized that I was blocking the display of DOIs in the references: the PNAS style uses \doi{} for a footer display (which we use e.g. for vignette info), shadowing the use via the JSS.cls borrowed from the Journal of Statistical Software setup. So we renamed the YAML header option to doi_footer for clarity, still support the old entries for backwards compatibility (yes, we care), renamed the macro for this use, and with an assist from LaTeX wizard Achim Zeileis added a new \doi{} that now displays DOIs in the references as they should! We also improved some internals such as the Travis CI checks, but I should blog about that another time, and documented yet more YAML header options in the vignette.
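To make the renamed options concrete, a vignette header using them might look like the following sketch (the title, journal and DOI values are hypothetical placeholders, not taken from the post):

```yaml
---
title: "An Example pinp Vignette"
# Customisable subtitle replacing the default 'this version built on ...'
date_subtitle: "Published in Some Journal, January 2019"
# Renamed from 'doi' so that \doi{} is free to render DOIs in the references
doi_footer: "https://doi.org/10.1000/xyz123"
output: pinp::pinp
---
```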

A screenshot of the package vignette can be seen below. Additional screenshots are at the pinp page.

The NEWS entry for this release follows.

Changes in pinp version 0.0.7 (2019-01-11)
  • Added some more documentation for different YAML header fields.

  • A new option has been added for a 'date_subtitle' (Ilya Kashnitsky in #64 fixing #63).

  • 'doi' YAML option renamed to 'doi_footer' to permit DOIs in refs, 'doi' header still usable (Dirk in #66 fixing #65).

  • The 'doi' macro was redefined to create a hyperlink.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the pinp page. For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Hideki Yamane: Debian Bug Squash Party Tokyo 2019-01

Fri, 11/01/2019 - 2:55am
Hi, we'll hold an event, "Debian Bug Squash Party Tokyo 2019-01", on 19th January.
Happy bug squashing, see you there! :)

Bits from Debian: DebConf19 is looking for sponsors!

Thu, 10/01/2019 - 6:30pm

DebConf19 will be held in Curitiba, Brazil, from July 21st to 28th, 2019. It will be preceded by DebCamp, July 14th to 19th, and Open Day on the 20th.

DebConf, Debian's annual developers conference, is an amazing event where Debian contributors from all around the world gather to present, discuss and work in teams around the Debian operating system. It is a great opportunity to get to know people responsible for the success of the project and to witness a respectful and functional distributed community in action.

The DebConf team aims to organize the Debian Conference as a self-sustaining event, despite its size and complexity. The financial contributions and support by individuals, companies and organizations are pivotal to our success.

There are many different possibilities to support DebConf and we are in the process of contacting potential sponsors from all around the globe. If you know any organization that could be interested, or that would like to give back resources to FOSS, please consider handing them the sponsorship brochure or contacting the fundraising team with any leads. If you are a company and want to sponsor, please contact us at

Let’s work together, as every year, on making the best DebConf ever. We are waiting for you at Curitiba!

Jonathan Dowland: Amiga floppy recovery project, part 2

Thu, 10/01/2019 - 5:41pm

This is the second part in a series of blog posts. The first part is Amiga floppy recovery project. The next part is Amiga floppy recovery project, part 3: preliminaries.

Happy New Year!

Nearly four years ago I wrote about my ambition to read the data from my old collection of Amiga floppy disks. I'd wanted to do this for a long time prior to writing that. I knew we still had the Amiga but my family were fairly sure we didn't have the floppies. It was the floppies turning up in 2015 that made the project viable.

I haven't done much for this project in the last four years. I began to catalogue the floppies, and the Amiga itself, the floppies and a 2½-foot pile of old Amiga magazines moved from my parents' loft to mine (as they moved house). During the move, we did briefly power on the Amiga to make sure it still worked. It booted fine, but we couldn't read any disks. I tried about five before I felt it was more likely that the drive was dead than that all the floppies we had tried were. If that were the case, then the broken drive might damage any disks we tried in it.

The next step is to replace the internal disk drive in the Amiga so that I can boot software. After that, there are a number of different approaches I could take to move data onto a modern machine, including attempting to connect the Amiga to a network for the first time in its life. I have an external floppy disk drive that I considered swapping into the Amiga, but didn't attempt it.

This Christmas my brother bought me a Gotek floppy disk emulator, modified for use with Amiga computers. This is a lightweight device that can be connected to the Amiga in place of the real floppy disk drive, re-using the existing power and data ribbon cables. It features a USB port, two buttons and a small LCD numeric display. The intention is that you insert a USB drive containing Amiga disk images, and the Gotek presents them to the Amiga as if they were real disks.

Thanks to this device, I can now finally embark upon my disk-reading project!