Planet Debian
https://planet.debian.org/

Ian Jackson: Nailing Cargo (the Rust build tool)

Sat, 09/03/2019 - 3:21pm
Introduction I quite like the programming language Rust, although it's not without flaws and annoyances.

One of those annoyances is Cargo, the language-specific package manager. Like all modern programming languages, Rust has a combined build tool and curlbashware package manager. Apparently people today expect their computers to download and run code all the time and get annoyed when that's not sufficiently automatic.

I don't want anything on my computer that automatically downloads and executes code from minimally-curated repositories like crates.io. So this is a bit of a problem.

Dependencies available in Debian

Luckily I can get nearly all of what I have needed so far from Debian, at least if I'm prepared to use Debian testing (buster, now in freeze). Debian's approach to curation is not perfect, but it's mostly good enough for me.

But I still need to arrange to use the packages from Debian instead of downloading things.

Of course anything in Debian written in Rust faces the same problem: Debian source packages are quite rightly Not Allowed to download random stuff from the Internet during the package build. So when I tripped across this I reasoned that the Debian Rust team must already have fixed this problem somehow. And, indeed they have. The result is not too awful:

$ egrep '^[^#]' ~/.cargo/config
[source]
[source.debian-packages]
directory = "/usr/share/cargo/registry"
[source.crates-io]
replace-with = "debian-packages"
$

I cloned and hacked this from the cargo docs after reading the Debian Rust Team packaging policy.

The effect is to cause my copy of cargo to think that crates.io is actually located (only) on my local machine in /usr/share/cargo. If I mention a dependency, cargo will look in /usr/share/cargo for it. If it's not there I get an error, which I fix by installing the appropriate Debian rust package using apt.
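(For reference: the Debian packages of Rust crates follow a librust-<crate>-dev naming convention, so the fix is usually just apt install librust-<crate>-dev, with <crate> standing in for whatever cargo complained about.)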

So far so good.

Edited 2019-03-07, to add: To publish things on crates.io I then needed another workaround too (scroll down to "Further annoyance from Cargo").

Dependencies not in Debian

A recent project of mine involved some dependencies which were not in Debian, notably Rust bindings to the GNU Scientific Library and to the NLopt nonlinear optimisation suite. A quick web search found me the libraries rust-GSL and rust-nlopt. They are on crates.io, of course. I thought I would check them out so that I could check them out. Ie, decide if they looked reasonable, and do a bit of a check on the author, and so on.

Digression - a rant about Javascript

The first problem is of course that crates.io itself does not work at all without enabling Javascript in my browser. I have always been reluctant about JS. Its security properties have always been suboptimal and much JS is used for purposes which are against the interests of the user.

But I really don't like the way that everyone is still pretending that it is safe to run untrusted code on one's computer, despite Spectre. Most people seem to think that if you are up to date with your patches, Spectre isn't a problem. This is not true at all. Spectre is basically unfixable on affected CPUs, and it is not even possible to make a comparably fast CPU without this bug (at least, no-one knows how to do so yet). The patches are bodges which make the attacks more complicated and less effective.

And of course Javascript is a way of automatically downloading megabytes of stuff from all over the internet and running it on your computer. Urgh. So I run my browser with JS off by default.

There is absolutely no good reason why crates.io won't let me even look at the details of some library without downloading a pile of code from their website and running it on my computer. But, I guess, it is probably OK to allow it? So on I go, granting JS permission. Then I can click through to the actual git repository. JS just to click a link! At least we can get back to the main plot now...

The "unpublished local crate" problem So I git clone rust-GSL and have a look about. It seems to contain the kind of things I expect. The author seems to actually exist. The git history seems OK on cursory examination. I decide I am going to actually to use it. So I write [dependencies] GSL = "1" in my own crate's metadata file.

I realise that I am going to have to tell cargo where it is. Experimentally, I run cargo build and indeed it complains that it's never heard of GSL. Fair enough. So I read the cargo docs for the local config file, to see how to tell it to look at ../rust-GSL.

It turns out that there is no sensible way to do this!

There is a paths thing you can put in your config, but it does not work for an unpublished crate. (And of course, from the point of view of my cargo, GSL is an unpublished crate - because only crates that I have installed from .debs are "published".)

Also paths actually uses some of the metadata from the repository, which is of course not what you wanted. In my reading I found someone having trouble because crates.io had a corrupted metadata file for some crate, which their cargo rejected. They could just get the source themselves and fix it, but they had serious difficulty hitting cargo on the head hard enough to stop it trying to read the broken online metadata.

The same overall problem would arise if I had simply written the other crate myself and not published it yet. (And indeed I do have a crate split out from my graph layout project, which I have yet to publish.)

You can edit your own crate's Cargo.toml metadata file to say something like this:

GSL = { path = "../rust-GSL", optional = true }

but of course that's completely wrong. That's a fact about the setup on my laptop and I don't want to commit it to my git tree. And this approach gets quickly ridiculous if I have indirect dependencies: I would have to make a little local branch in each one just to edit each one's Cargo.toml to refer to the others'. Awful.

Well, I have filed an issue. But that won't get me unblocked.

So, I also wrote a short but horrific shell script. It's a wrapper for cargo, which edits all your Cargo.toml's to refer to each other. Then, when cargo is done, it puts them all back, leaving your git trees clean.
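The core idea is simple enough to sketch. What follows is not nailing-cargo itself (the real thing is a shell script and copes with rather more); it is a minimal Python illustration of the nail / build / restore pattern, where the crate-to-path mapping and the naive regex rewrite are purely illustrative:

#!/usr/bin/python3
# Minimal sketch of the nail/build/restore idea (not the real nailing-cargo).
# LOCAL_CRATES, MANIFESTS and the regex rewrite are illustrative only.
import re
import subprocess
import sys

LOCAL_CRATES = {"GSL": "../rust-GSL"}                  # crate name -> local checkout
MANIFESTS = ["Cargo.toml", "../rust-GSL/Cargo.toml"]   # every tree involved

def nail(text):
    # Point each listed dependency at its local checkout instead of crates.io.
    for name, path in LOCAL_CRATES.items():
        pattern = r'(?m)^%s\s*=.*$' % re.escape(name)
        replacement = '%s = { path = "%s" }' % (name, path)
        text = re.sub(pattern, replacement, text)
    return text

def main():
    saved = {}
    for manifest in MANIFESTS:
        with open(manifest) as f:
            saved[manifest] = f.read()
        with open(manifest, "w") as f:
            f.write(nail(saved[manifest]))
    try:
        subprocess.run(["cargo"] + sys.argv[1:])
    finally:
        # Always put the originals back, leaving the git trees clean.
        for manifest, text in saved.items():
            with open(manifest, "w") as f:
                f.write(text)

if __name__ == "__main__":
    main()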

Writing this wrapper script would have been a lot easier if Cargo had been less prescriptive about what things are called and where they must live. For example, if I could have specified an alternative name for Cargo.toml, my script wouldn't have had to save the existing file and write a replacement; it could just have idempotently written a new file.

Even so, nailing-cargo works surprisingly well. I run it around make. I sometimes notice some of the "nailed" Cargo.toml files if I update my magit while a build is running, but that always passes. Even in a nightmare horror of a script, it was worth paying attention to the error handling.

I hope the cargo folks fix this before I have to soup up nailing-cargo to use a proper TOML parser, teach it a greater variety of Cargo.toml structures, give it a proper config file reader and a manpage, and generally turn it into a proper, but still hideous, product.

edited 2019-03-09 to fix tag spelling




Ian Jackson: Rust doubly-linked list

Sat, 09/03/2019 - 3:19pm
I have now released (and published on crates.io) my doubly-linked list library for Rust.

Of course in Rust you don't usually want a doubly-linked list. The VecDeque array-based double-ended queue is usually much better. I discuss this in detail in my module's documentation.

Why a new library
Weirdly, there is a doubly linked list in the Rust standard library but it is good for literally nothing at all. Its API is so limited that you can always do better with a VecDeque. There's a discussion (sorry, requires JS) about maybe deprecating it.

There's also another doubly-linked list available but despite being an 'intrusive' list (in C terminology), it only supports one link per node, and insists on owning the items you put into it. I needed several links per node for my planar graph work, and I needed Rc-based ownership.

Indeed given my analysis of when a doubly-linked list is needed, rather than a VecDeque, I think it will nearly always involve something like Rc too.

My module
You can read the documentation online.

It provides the facilities I needed, including lists where each node can be on multiple lists with runtime selection of the list link within each node. It's not threadsafe (so Rust will stop you using it across multiple threads) and would be hard to make threadsafe, I think.

Notable wishlist items: entrypoints for splitting and joining lists, and good examples in the documentation. Both of these would be quite easy to add.

Further annoyance from Cargo
As I wrote earlier, because I am some kind of paranoid from the last century, I have hit cargo on the head so that it doesn't randomly download and run code from the internet.

This is done with stuff in my ~/.cargo/config. Of course this stops me actually accessing the real public repository (cargo automatically looks for .cargo/config in all parent directories, not just in $PWD and $HOME). No problem - I was expecting to have to override it.

However there is no way to sensibly override a config file!

So I have had to override it in a silly way: I made a separate area on my laptop which belongs to me but which is not underneath my home directory. Whenever I want to run cargo publish, I copy the crate to be published to that other area, which is not a direct or indirect subdirectory of anything containing my usual .cargo/config.
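Schematically (the paths here are purely illustrative):

~/.cargo/config           nails crates.io to /usr/share/cargo/registry
~/src/my-crate/           normal, offline hacking happens here
/elsewhere/my-crate/      a copy outside $HOME, so no parent directory contains
                          the nailed config and cargo publish can reach crates.io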

Cargo really is quite annoying: it has opinions about how everything is and how everything ought to be done. I wouldn't mind that, but unfortunately when it happens to be wrong it is often lacking a good way to tell it what should be done instead. This kind of thing is a serious problem in a build system tool.


Chris Lamb: Book Review: Jeeves and the King of Clubs

Sat, 09/03/2019 - 2:39pm

Jeeves and the King of Clubs (2018)

Ben Schott

For the P.G. Wodehouse fan the idea of bringing back such a beloved duo as Jeeves and Wooster will either bring out delight or dread. Indeed, the words you find others using often reveal their framing of such endeavours; is this a tribute, homage, pastiche, an imitation…?

Whilst neither parody nor insult, let us start with the "most disagreeable, sir." Rather jarring were the voluminous and Miscellany-like footnotes that let you know that the many allusions and references are all checked, correct and contemporaneous. All too clever by half, and it would ironically be a negative trait if personified by a character within the novel itself. Bertie's uncharacteristic knowledge of literature was also eyebrow-raising: whilst he should always have the mot juste within easy reach — especially for that perfect parliamentary insult — Schott's Wooster was just a bit too learned and bookish, ultimately lacking that blithe An Idiot Abroad element that makes him so affably charming.

Furthermore, Wodehouse's far-right Black Shorts group (who "seek to promote the British way of life, the British sense of fair play and the British love of Britishness") was foregrounded a little too much for my taste. One surely reaches for Wodehouse to escape contemporary political noise and nonsense, to be transported to that almost-timeless antebellum world which, of course, never really existed in the first place?

Saying that, the all-important vernacular is full of "snap and vim", the eponymous valet himself is superbly captured, and the plot has enough derring-do and high jinks to possibly assuage even the most ardent fan. The fantastic set pieces in both a Savile Row tailor and a ladies underwear store might be worth the price of admission alone.

To be sure, this is certainly ersatz Wodehouse, but should one acquire it? «Indeed, sir,» intoned Jeeves.

Andrew Cater: Debian BSP Cambridge March 9th 2019 - post 1

Sat, 09/03/2019 - 1:32pm
At Steve's. Breakfast and coffee have happened. There's a table full of developers, release managers, cables snaking all over the floor and a coffee machine. This is heaven for Debian developers (probably powered by ThinkPads). On IRC we're on #debian-uk (as ever) and #debian-bugs. Catch up with us there.

Dirk Eddelbuettel: RcppArmadillo 0.9.200.7.1

Sat, 09/03/2019 - 3:48am

A minor RcppArmadillo bugfix release arrived on CRAN today. This version 0.9.200.7.1 has two local changes. R 3.6.0 will bring a change in sample() (to correct a subtle bug for large samples) meaning many tests will fail, so in one unit test file we reset the generator to the old behaviour to ensure we match the (old) test expectation. We also backported a prompt upstream fix for an issue with drawing Wishart-distributed random numbers via Armadillo which was uncovered this week. I also just uploaded the Debian version.

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 559 other packages on CRAN.

Changes are listed below:

Changes in RcppArmadillo version 0.9.200.7.1 (2019-03-08)
  • Explicit setting of RNGversion("3.5.0") in one unit test to accommodate the change in sample() in R 3.6.0

  • Back-ported a fix to the Wishart RNG from upstream (Dirk in #248 fixing #247)

Courtesy of CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Muammar El Khatib: Spotify And Local Files

Sat, 09/03/2019 - 12:38am

My favorite music player is undoubtedly Spotify. It is not a secret that its Linux support might not be the best and that some artists have just decided not to upload their music to the service. One of them is Tool, one of my favorite bands, too. I recently decided to play my Tool mp3 files with Spotify as local files and they were not playing. In order to fix that, one has to:

Jo Shields: Bootstrapping RHEL 8 support on mono-project.com

Fri, 08/03/2019 - 5:15pm
Preamble

On mono-project.com, we ship packages for Debian 8, Debian 9, Raspbian 8, Raspbian 9, Ubuntu 14.04, Ubuntu 16.04, Ubuntu 18.04, RHEL/CentOS 6, and RHEL/CentOS 7. Because this is Linux packaging we’re talking about, making one or two repositories to serve every need just isn’t feasible – incompatible versions of libgif, libjpeg, libtiff, OpenSSL, GNUTLS, etc, mean we really do need to build once per target distribution.

For the most part, this level of “LTS-only” coverage has served us reasonably well – the Ubuntu 18.04 packages work in 18.10, the RHEL 7 packages work in Fedora 28, and so on.

However, when Fedora 29 shipped, users found themselves running into installation problems.

I was not at all keen on adding non-LTS Fedora 29 to our build matrix, due to the time and effort required to bootstrap a new distribution into our package release system. And, as if in answer to my pain, the beta release of Red Hat Enterprise 8 landed.

Cramming a square RPM into a round Ubuntu

Our packaging infrastructure relies upon a homogeneous pool of Ubuntu 16.04 machines (x64 on Azure, ARM64 and PPC64el on-site at Microsoft), using pbuilder to target Debian-like distributions (building i386 on the x64 VMs, and various ARM flavours on the ARM64 servers); and mock to target RPM-like distributions. So in theory, all I needed to do was drop a new RHEL 8 beta mock config file into place, and get on with building packages.

Just one problem – between RHEL 7 (based on Fedora 19) and RHEL 8 (based on Fedora 28), the Red Hat folks had changed package manager, dropping Yum in favour of DNF. And mock works by using the host distribution’s package manager to perform operations inside the build root – i.e. yum.deb from Ubuntu.

It’s not possible to install RHEL 8 beta with Yum. It just doesn’t work. It’s also not possible to update mock to $latest and use a bootstrap chroot, because reasons. The only options: either set up Fedora VMs to do our RHEL 8 builds (since they have DNF), or package DNF for Ubuntu 16.04.

For my sins, I opted for the latter. It turns out DNF has a lot of dependencies, only some of which are backportable from post-16.04 Ubuntu. The dependency tree looked something like:

  •  Update mock and put it in a PPA
    •  Backport RPM 4.14+ and put it in a PPA
    •  Backport python3-distro and put it in a PPA
    •  Package dnf and put it in a PPA
      •  Package libdnf and put it in a PPA
        •  Backport util-linux 2.29+ and put it in a PPA
        •  Update libsolv and put it in a PPA
        •  Package librepo and put it in a PPA
          •  Backport python3-xattr and put it in a PPA
          •  Backport gpgme1.0 and put it in a PPA
            •  Backport libgpg-error and put it in a PPA
        •  Package modulemd and put it in a PPA
          •  Backport gobject-introspection 1.54+ and put it in a PPA
          •  Backport meson 0.47.0+ and put it in a PPA
            •  Backport googletest and put it in a PPA
        •  Package libcomps and put it in a PPA
    •  Package dnf-plugins-core and put it in a PPA
  •  Hit all the above with sticks until it actually works
  •  Communicate to community stakeholders about all this, in case they want it

This ended up in two PPAs – the end-user usable one here, and the “you need these to build the other PPA, but probably don’t want them overwriting your system packages” one here. Once I convinced everything to build, it didn’t actually work – a problem I eventually tracked down and proposed a fix for here.

All told it took a bit less than two weeks to do all the above. The end result is, on our Ubuntu 16.04 infrastructure, we now install a version of mock capable of bootstrapping DNF-requiring RPM distributions, like RHEL 8.

RHEL isn’t CentOS

We make various assumptions about package availability, which are true for CentOS, but not RHEL (8). The (lack of) availability of the EPEL repository for RHEL 8 was a major hurdle – in the end I just grabbed the relevant packages from EPEL 7, shoved them in a web server, and got away with it. The second is structural – for a bunch of the libraries we build against, the packages are available in the public RHEL 8 repo, but the corresponding -devel packages are in a (paid, subscription required) repository called “CodeReady Linux Builder” – and using this repo isn’t mock-friendly. In the end, I just grabbed the three packages I needed via curl, and transferred them to the same place as the EPEL 7 packages I grabbed.

Finally, I was able to begin the bootstrapping process.

RHEL isn’t Fedora

After re-bootstrapping all the packages from the CentOS 7 repo into our """CentOS 8""" repo (we make lots of naming assumptions in our control flow, so the world would break if we didn't call it CentOS), I tried installing on Fedora 29, and… Nope. Dependency errors. Turns out there are important differences between the two distributions. The main one is that any package with a Python dependency is incompatible, as the two handle Python paths very differently. Thankfully, the diff here was pretty small.

The final, final end result: we now do every RPM build on CentOS 6, CentOS 7, and RHEL 8. And the RHEL 8 repo works on Fedora 29:

MonoDevelop 7.7 on Fedora 29.

The only errata: MonoDevelop’s version control addin is built without support for ssh+git:// repositories, because RHEL 8 does not offer a libssh2-devel. Other than that, hooray!

Enrico Zini: Starting tornado on a random free port

Fri, 08/03/2019 - 12:00am

One of the pieces of software I maintain for work is a GUI data browser that uses Tornado as a backend and a web browser as a front-end.

It is quite convenient to start the command and have the browser open automatically on the right URL. It's quite annoying to start the command and be told that the default port is already in use.

I've needed this trick quite often, also when writing unit tests, and it's time I note it down somewhere, so it's easier to find than going through Tornado's unittest code where I found it the first time.

This is how to start Tornado on a free random port:

from tornado.options import define, options
import tornado.netutil
import tornado.httpserver

define("web_port", type=int, default=None, help="listening port for web interface")

application = Application(self.db_url)
if options.web_port is None:
    # Port 0 asks the kernel to pick a free ephemeral port;
    # getsockname() then tells us which one we actually got.
    sockets = tornado.netutil.bind_sockets(0, '127.0.0.1')
    self.web_port = sockets[0].getsockname()[:2][1]
    server = tornado.httpserver.HTTPServer(application)
    server.add_sockets(sockets)
else:
    server = tornado.httpserver.HTTPServer(application)
    server.listen(options.web_port)

Chris Lamb: Book Review: The Sellout

Thu, 07/03/2019 - 7:03pm

The Sellout (2016)

Paul Beatty

"I couldn't put it down…" is the go-to cliché for literature, so I found it deeply ironic to catch myself in quite literally this state at times. Winner of the 2016 Man Booker Prize, the first third of this was perhaps the most engrossing and compulsive reading experience I've had since I started seriously reading.

This book opens in medias res within the Supreme Court of the United States where the narrator lights a spliff under the table. As the book unfolds, it is revealed that this very presence was humbly requested by the Court due to his attempt to reinstate black slavery and segregation in his local Los Angeles neighbourhood. Saying that, outlining the plot would be misleading here as it is far more the ad-hoc references, allusions and social commentary that hang from this that make this such an engrossing work.

The trenchant, deep and unreserved satire might perhaps be merely enough for an interesting book, but where it got really fascinating to me (in a rather inside baseball manner) is how the latter pages of the book somehow don't live up to the first 100. That appears like a straight-up criticism but this flaw is actually part of this book's appeal to me — what actually changed in these latter parts? It's not overuse of the idiom or style and neither is it that it strays too far from the original tone or direction, but I cannot put my finger on why, which has meant the book sticks to this day in my mind. I can almost, just almost, imagine a devilish author such as Paul deliberately crippling one's output for such an effect…

Now, one cannot unreservedly recommend this book. The subject matter itself, compounded by being dealt with in such a flippant manner, will be impenetrable to many and deeply offensive to others, but if you can see your way past that then you'll be sure to get something—whatever that may be—from this work.

Shirish Agarwal: How to deal with insanity ?

Thu, 07/03/2019 - 3:20pm

There is a friend of mine, from college days. As it happens, we drifted apart from each other; he chose some other vocation and I chose another. At the time we were going to college, mobile phones were expensive. E-mail was also expensive, but I chose to spend my money on e-mail and other methods of communication. Then about 6-8 months back, out of the blue, my friend called me back. It took me some time as I wasn't able to place him (I just cannot remember people, and also features change a lot) but I do remember experiences. We promised to meet, but at the time we were supposed to meet, it rained like it never rained before. I waited for an hour but wasn't able to see him. I tried SMS, called him, but no answer. I did try a few times but never got him. He used to send me a message once in a while and I used to send a reply back. I was able to talk with his mum some days after that. Yesterday, I was trying to call some people, and his name popped up. On a hunch, I dialed his number and his sister came on the line. She was able to place me (I guess we might have met 6-8 years back or more) and she told me he has gone insane. While I'm supposed to meet the family on the week-end to know what happened, I am still not able to process how it happened. I had known he had fallen into some bad company (his mum had shared this titbit) but can't figure out for the life of me what could have driven him insane. I told her I would be coming on Sunday as I have work, but more importantly I am trying to create some sense of space or control so I can digest what's happened. While I know it happens to people, not to people I know, not to people I do care about. I also came to know that the reason my phone was not able to get through all this time is that he has a shitty Jio connection, or the place where they live Jio doesn't have a good presence.

Now one part of me has a sort of morbid curiosity as to what chain of events led to it, while at the same time I dunno if I would be able to help them or what I should say or do? I feel totally helpless. If anybody has any ideas, please yell, comment.

Daniel Leidert: Exclude files from being exported into the zip/tar source archives on github.com

Thu, 07/03/2019 - 1:52pm

GitHub.com (and probably GitLab too) provides various ways to export the Git branch contents or tags and releases as Zip- or Tar-archives. When creating a release, these tar-/zipballs are automatically created and added to the release. I often find archives which contain a lot of files not useful to the end user, like .github directories, Git (.gitignore, .gitattributes) or CI related files (.travis.yml, .appveyor.yml). Sometimes they also contain directories (e.g. for test files) which upstream keeps in Git but which are not needed in the source distribution. But there is an easy way to keep these files out of the automatically created source archives and keep the latter clean, by using the export-ignore attribute in the .gitattributes file:


# don't export the github-pages source
/docs export-ignore
# export some other irrelevant directories
/foo export-ignore
# don't export the files necessary for CI
Gemfile export-ignore
.appveyor.yml export-ignore
.travis.yml export-ignore
# ignore Git related files
.gitattributes export-ignore
.gitignore export-ignore
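The generated archives honour the same export-ignore attribute as git archive, so you can check the effect locally before pushing with git archive HEAD | tar -tf -, which lists exactly the files that would end up in the exported archive.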

Jonathan Dowland: Learning new things about my old Amiga A500

Thu, 07/03/2019 - 12:55pm

This is the sixth part in a series of blog posts. The previous post was glitched Amiga video.

Sysinfo output for my A500

I saw a tweet from Sophie Haskins who is exploring her own A500 and discovered that it had an upgraded Agnus chip. The original A500 shipped with a set of chips which are referred to as the Original Chip Set (OCS). The second generation of the chips were labelled Enhanced Chip Set (ECS). A500s towards the end of their production lifetime were manufactured with some ECS chips instead. I had no idea which chipset was in my A500, but Sophie's tweet gave me a useful tip, she was using some software called sysinfo to enumerate what was going on. I found an ADF disk image that included Sysinfo ("LSD tools") and gave it a try. To my surprise, my Amiga has an ECS "AGNUS" chip too!

I originally discovered Sophie due to her Pizzabox Computer project: An effort to acquire, renovate and activate a pantheon of vintage "pizzabox" form-factor workstation computers. I once had one of these, the Sun SPARCStation 10, but it's long since gone. I'm mildly fascinated to learn more about some of these other machines. After proofreading Fabien Sanglard's DOOM book, I was interested to know more about NeXTstations, and Sophie is resurrecting a NeXTstation mono, but there are plenty of other interesting esoteric things on that site, such as Apple A/UX UNIX on a Quadra 610 (the first I'd heard of both Apple's non-macOS UNIX, and their pizzabox form-factor machines).

Daniel Silverstone: Releasing Rustup 1.17.0

Thu, 07/03/2019 - 10:12am

Today marks the release of rustup version 1.17.0 which is both the first version of rustup which I have contributed code to, and also the first version which I was responsible for preparing the release of. I thought I ought to detail the experience, but first, a little background…

At the end of last year, leading into this year, I made some plans which included an explicit statement to "give back" to the Rust community as I'd received a lot of help with, and enjoyment in, Rust from the community over the previous couple of years. I looked for ways I could contribute, including making a tiny wording PR against the compiler which I won't even bother linking here, but eventually I decided to try and help with the rust-lang/rustup.rs repository and tried to tackle some of the issues therein.

Nick Cameron was, at the time, about to step down as a lead of the tools team and he ended up talking to me about maybe joining a working group to look after Rustup. I agreed and a little earlier this year, I became part of the Rustup working group, which is a sub-group of the Cargo team, part of the Rust developer tools teams.

Over the past few weeks we've been preparing a new release of Rustup to include some useful bug fixes and a few little feature tweaks. Rustup is not as glamorous a part of the ecosystem as perhaps Cargo or Rustc itself, but it's just as important I think, since it's the primary gateway through which people acquire Rust, and interact with the Rust toolchain ecosystem.

On Tuesday evening, as part of our weekly meeting, we discussed the 1.17.0 release plans and process, and since I'm very bad at stepping back at the right moment, I ended up volunteering to run the release checklist through and push 1.17.0 out of the door. Thankfully, between Nick and Alex Crichton we had a good set of instructions, and so I set about making the release. I prepared a nice series of commits updating the version numbers, ensuring the lock file was up to date, making the shell script installer frontend include the right version numbers, and pushed them off to be built by the CI. Unfortunately a break in a library we depend on, which only showed its face on our mingw builders (not normally part of the CI since there are so few available to the org), meant that I had to reissue the build and go to bed.

Note that I said I had to go to bed - this was nearing midnight and I was due up before 7am the following day. This might give you some impression of the state of mind I was in trying to do this release and thus perhaps a hint of where I'm going to be going with this post…

In the morning, I checked and the CI pipelines had gone green, so I waited until Alex showed up (since he was on UTC-6) and as soon as I spotted him online, around 14:45 UTC, I pinged him and he pushed the button to prep the release after we did a final check that things looked okay together. The release went live at 14:47 UTC.

And by 15:00 UTC we'd found a previously unnoticed bug - in the shell installer frontend - that I had definitely tested the night before. A "that can't possibly have ever worked" kind of bug which was breaking any CI job which downloaded rustup from scratch. Alex deployed a hotfix straight to the dist server at 15:06 UTC to ensure that as few people as possible encountered the issue, though we did get one bug report (filed a smidge later at 15:15 UTC) about it.

By this point I was frantic - I KNEW that I'd tested this code, so how on earth was it broken? I went rummaging back through the shell history on the system where I'd done the testing, reconstructing the previous night's fevered testing process and eventually discovered what had gone wrong. I'd been diffing the 1.16.0 and 1.17.0 releases and had somehow managed to test the OLD shell frontend rather than the new one. So the change to it which broke the world hadn't been noticed by me at that point.

I sorted a fix PR out and we now have some issues open regarding ensuring that this never happens again. But what can we do to ensure that the next release goes more smoothly? For one, we need as a team to work out how to run mingw tests more regularly, and ideally on the PRs. For two, we need to work out how we can better test the shell frontend under CI: it is currently only verified manually, and its sole purpose is to download rustup from the Internet, making it a bit of a pain to verify in a CI environment.

But… we will learn, we will grow, and we won't make these mistakes again. I had been as careful as I thought I could be in preparing 1.17.0, and I still had two painful spikes, one from uncommonly run CI, and one from untested code. No matter how careful one is, one can still be bitten by things.

On a lighter note, for those who use rustup and wonder what's in 1.17.0 over the previous (1.16.0) release, here's a simplified view onto a mere subset of the changes...

  • Better formatting of long download times. Manish Goregaokar
  • Various improvements to rustup-init.sh. Lzu Tao
  • A variety of error message improvements. Hirokazu Hata
  • Prevent panic on missing components. Nick Cameron
  • Support non-utf8 arguments in proxies. Andy Russell
  • More support for homebrew. Markus Reiter
  • Support for more documents in rustup doc. Wang Kong
  • Display progress during component unpack. Daniel Silverstone
  • Don't panic on bad default-host. Daniel Silverstone
  • A variety of code cleanups and fixes. So many of them. Dale Wijnand
  • Better error reporting for missing binaries. Alik Aslanyan
  • Documentation of, and testing for, powershell completions. Matt Gucci
  • Various improvements to display of information in things like rustup default or rustup status. Trevor Miranda
  • Ignoring of EPIPE in certain circumstances to improve scripting use of rustup. Niklas Claesson
  • Deprecating cURL in rustup's download internal crate. Trevor Miranda
  • Error message improvements wrt. unavailable components. Daniel Silverstone
  • Improvements in component listing API for better automation. Naftuli Kay

If I missed your commits out, it doesn't mean I thought they weren't important, it merely means I am lazy.

As you can see, we had a nice selection of contributors, from Rustup WG members, to drive-by typo fixes (unlisted for the most part) to some excellent new contributors who are being more and more involved as time passes.

We have plenty of plans for 1.18.0, mostly centered around tidying up the codebase more, getting rid of legacies in the code where we can, and making it easier to see the wood for the trees as we bring rustup up-to-snuff as a modern part of the Rust ecosystem.

If you'd like to participate in Rustup development, why not join us on our discord server? You can visit https://discord.gg/rust-lang and once you've jumped through some of the anti-spam hoops (check your DMs on joining) you can come along to #wg-rustup and we'll be pleased to have you help. Failing that, you can always just open issues or PRs on https://github.com/rust-lang/rustup.rs if you have something useful to contribute.

Russ Allbery: Net::Duo 1.02

Thu, 07/03/2019 - 6:52am

This is an alternative Perl interface to the Duo Security second-factor authentication service. This release supports the new required pagination for returning lists of users and integrations, which will take effect on March 15, 2019. It does this in the simplest way possible: just making repeated calls until it retrieves the full list.
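The pattern is generic enough to sketch. This is not Net::Duo's actual Perl code, just an illustration of the retrieve-until-exhausted approach; the offset/limit parameters and the call callable are assumptions for the sake of the example:

# Sketch of "just keep calling until the full list is back".
# `call` stands in for one paginated API request; offset/limit are illustrative.
def fetch_all(call, limit=100):
    items = []
    offset = 0
    while True:
        page = call(offset, limit)
        items.extend(page)
        if len(page) < limit:
            # A short (or empty) page means there is nothing left to fetch.
            return items
        offset += limit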

With this release, I'm also orphaning the package. I wrote this package originally for Stanford, and had thought I'd continue to find a reason to maintain it after I left. But I'm not currently maintaining any Duo integrations that would benefit from it, and my current employer doesn't use Perl. Given that, I want to make it obvious that it's not keeping up with the current Duo API and you would be better off using the Perl code that Duo themselves provide.

That said, I think the object-oriented model this package exposes is nicer and makes for cleaner Perl code when interacting with Duo. If you agree, please feel welcome to pick up maintenance, and let me know if you want the web site redirected to its new home.

This release also updates test code, supporting files, and documentation to my current standards, since I was making a release anyway.

You can get the current release from the Net::Duo distribution page.

Dirk Eddelbuettel: RInside 0.2.15

Thu, 07/03/2019 - 1:40am

A new release 0.2.15 of RInside arrived on CRAN and in Debian today. This marks the first release in almost two years, and it brings some build enhancements. RInside provides a set of convenience classes which facilitate embedding of R inside of C++ applications and programs, using the classes and functions provided by Rcpp.

RInside is stressing the CRAN system a little in that it triggers a number of NOTE and WARNING messages. Some of these are par for the course as we get close to R internals not all of which are “officially” in the API. My continued thanks to the CRAN team for supporting the package.

It has (once again!) been nearly two years since the last release, and a number of nice extensions, build robustifications (mostly for Windows) and fixes had been submitted over this period—see below for the three key pull requests. There are no new user-facing changes.

The most recent change, and the one triggering this release, was based on an rchk report: the one time we call Rf_eval() could conceivably have a memory allocation race, so two additional PROTECT calls make it more watertight. The joys of programming with the C API …

But thanks so much to Tomas for patient help, and to Gábor for maintaining the ubuntu-rchk Docker container. While made for rhub, it is also available pre-made here at Docker Cloud which allowed me to run the rchk.sh script in a local instance.

Changes since the last release were:

Changes in RInside version 0.2.15 (2019-03-06)
  • Improved Windows build support by copying getenv("R_HOME") result and improving backslash handling in environment variable setting (Jonathon Love in #27 and #28)

  • Improved Windows build support by quote-protecting Rscript path in Makevars.win (François-David Collin in #33)

  • A URL was corrected in README.md (Zé Vinícius in #34).

  • Temporary SEXP objects are handled more carefully at initialization to satisfy rchk (Dirk in #36)

CRANberries also provides a short report with changes from the previous release. More information is on the RInside page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page, or to issues tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Enrico Zini: Getting rusage of child processes on python asyncio

Thu, 07/03/2019 - 12:00am

I am writing a little application server for microservices written as compiled binaries, and I would like to log execution statistics from getrusage(2).

The application server is written using asyncio, and processes are managed using asyncio subprocesses.

Unfortunately, asyncio uses os.waitpid instead of os.wait4 to reap child processes, and to get rusage information one has to delve into the asyncio innards, and provide a custom ChildWatcher implementation. Here's how I did it:

import asyncio
from asyncio.log import logger
from contextlib import contextmanager
import os


class ExtendedResults:
    def __init__(self):
        self.rusage = None
        self.returncode = None


class SafeChildWatcherWithRusage(asyncio.SafeChildWatcher):
    """
    SafeChildWatcher that uses os.wait4 to also get rusage information.
    """
    rusage_results = {}

    @classmethod
    @contextmanager
    def monitor(cls, proc):
        """
        Return an ExtendedResults that gets filled when the process exits
        """
        assert proc.pid > 0
        pid = proc.pid
        extended_results = ExtendedResults()
        cls.rusage_results[pid] = extended_results
        try:
            yield extended_results
        finally:
            cls.rusage_results.pop(pid, None)

    def _do_waitpid(self, expected_pid):
        # The original is in asyncio/unix_events.py; on new python versions, it
        # makes sense to check changes to it and port them here
        assert expected_pid > 0

        try:
            pid, status, rusage = os.wait4(expected_pid, os.WNOHANG)
        except ChildProcessError:
            # The child process is already reaped
            # (may happen if waitpid() is called elsewhere).
            pid = expected_pid
            returncode = 255
            logger.warning(
                "Unknown child process pid %d, will report returncode 255",
                pid)
        else:
            if pid == 0:
                # The child process is still alive.
                return

            returncode = self._compute_returncode(status)
            if self._loop.get_debug():
                logger.debug('process %s exited with returncode %s',
                             expected_pid, returncode)

            extended_results = self.rusage_results.get(pid)
            if extended_results is not None:
                extended_results.rusage = rusage
                extended_results.returncode = returncode

        try:
            callback, args = self._callbacks.pop(pid)
        except KeyError:  # pragma: no cover
            # May happen if .remove_child_handler() is called
            # after os.waitpid() returns.
            if self._loop.get_debug():
                logger.warning("Child watcher got an unexpected pid: %r",
                               pid, exc_info=True)
        else:
            callback(pid, returncode, *args)

    @classmethod
    def install(cls):
        loop = asyncio.get_event_loop()
        child_watcher = cls()
        child_watcher.attach_loop(loop)
        asyncio.set_child_watcher(child_watcher)

To use it:

from .hacks import SafeChildWatcherWithRusage
SafeChildWatcherWithRusage.install()

...

    @coroutine
    def run(self, *args, **kw):
        kw["stdin"] = asyncio.subprocess.PIPE
        kw["stdout"] = asyncio.subprocess.PIPE
        kw["stderr"] = asyncio.subprocess.PIPE
        self.started = time.time()
        self.proc = yield from asyncio.create_subprocess_exec(*args, **kw)

        from .hacks import SafeChildWatcherWithRusage
        with SafeChildWatcherWithRusage.monitor(self.proc) as results:
            yield from asyncio.tasks.gather(
                self.write_stdin(self.proc.stdin),
                self.read_stdout(self.proc.stdout),
                self.read_stderr(self.proc.stderr)
            )
        self.returncode = yield from self.proc.wait()
        self.rusage = results.rusage
        self.ended = time.time()

Molly de Blanc: Cyberbullying

Wed, 06/03/2019 - 3:19pm

For about a year now I’ve had the occasional run-ins with “light” internet abuse and cyberbullying. There are a lot of resources around youth (and sometimes even college students) who are being cyberbullied, but not a lot for adults.

I wanted to write a bit about my experiences. As I write this, I have had eight instances of being the recipient of abuse from threads on popular forum sites, emails, and blog posts. I’ve tried to be blithe by calling it cute things like “people being mean to me on the Internet,” but it’s cyberbullying. I’ve never been threatened, per se, but I do find the experiences traumatic and stressful.

Here’s my advice on how to deal with being the recipient (I hesitate to use the word “victim”) of cyberbullying. I spoke with a few people — people I know who have dealt with internet abuse and some professionals — and this is what I came up with.

Take care of yourself.

First and foremost, take care of yourself. Stop reading the comments or the blog post. Close your email or laptop. Remove yourself from the direct interaction with the bullying. I know this is hard, sometimes it’s really hard to look away, but it’s important to do, at least for a bit.

I like to get myself a mocha (if you like to use food/treats as a source of comfort — this may not be your style). I joke that this is me celebrating being successful enough to make people publicly upset with me.

I joke a lot about it. Some of it I find genuinely funny — someone on Slashdot said of me: Molly has trust issues, which is why she’s single. I think this is -hilarious-. Humor helps me deal with difficult situations, but that’s just me.

I also have a file of nice things people have said about me. I don’t feel the need to reference it, but I have it there just in case.

Reach out to your support network.

Tell your friends, family, or whomever. Even if you’re not interested in talking about your feelings — tell them that — just let other people know what you’re going through. In my experience, I enjoy a little solidarity.

Don’t engage.

Really. This is the hardest part. Part of me wants to talk with people who are obviously hurting and suffering a lot, part of me wants to correct factual errors, or even share with others the things I find funny. Engaging is about the worst thing you can do, according to everything I’ve heard.

Talk to a lawyer or reach out to local law enforcement.

This is for more extreme cases — especially when people are threatening you harm. This particular episode of Reply All, “The Snapchat Thief,” covers a bit about when talking to law enforcement might be the right thing to do.

This part is easier to figure out when you are in the same general area or country, or know the identities of those harassing you. Several people I know (myself included) have dealt with international harassment.

On not being a man.

A number of the men in my life are upset about this recent round of abuse — they’re generally more upset than the women. The men come off as shocked or surprised, angry and upset, and some of them are desperately searching for something to do.

The women and enbies in my life are a lot more blasé about the whole thing. They respond with commiseration, but, like me, accept this as a part of life.

Women and enbies I have spoken with about this just assume that people are going to be trashing them on the web. When I decided to become more visible within free software, I understood that I was going to be abused by strangers on the internet (enbies, men, and women — all of which have said harmful things about me).

Abuse is an assumption, rather than a possibility.

I was discussing this with a friend and we considered the problem of trying to not be a target. Bullies will find targets. If you try to hold back and be unobjectionable, other people are being abused in your place. Abusers gonna abuse. If you're strong (or self-sacrificing) you may decide to make yourself a target, or at least accept the risk of being a target, by being visible in your work.

A few final thoughts

Being bullied, in any form, is terrible. I was badly bullied when I was younger, and facing that again as an adult is equally traumatic.

I’m sorry if you’re going through this experience. Solidarity, empathy, and sympathy.

Antoine Beaupré: February 2019 report: LTS, HTML mail, new phone and new job

Wed, 06/03/2019 - 3:04am
Debian Long Term Support (LTS)

This is my monthly Debian LTS report.

This is my final LTS report. I have found other work and will unfortunately not be able to continue working on the LTS project in the foreseeable future. I will continue my volunteer work on Debian and might even contribute to LTS in my normal job, but not directly as part of the LTS team.

It is too bad because that team is doing essential work, and needs more help. Security is, at best, lacking everywhere and I do not believe the current approach of "minimal viable product, move fast, then break things" is sustainable. The people working on Linux distributions and also the LTS people are doing hard, dirty work of maintaining free software in the long term. It's thankless but I believe it's one of the most important jobs out there right now. And I suspect there will be only more of it as time goes by.

Legacy systems are not going anywhere: this is the next generation's "y2k bug": old, forgotten software no one understands or cares to work with that suddenly breaks or has a critical vulnerability that needs patching. Moving faster will not help us fix this problem: it only piles up more crap to deal with for real systems running in production.

The survival of humans and other species on planet Earth in my view can only be guaranteed via a timely transition towards a stationary state, a world economy without growth.

-- Peter Custers

Website work

I again worked on the website this month, doing one more mass import (MR 53) which was finally merged by Holger Levsen, after I fixed an issue with PGP signatures showing up on the website.

I also polished the misnamed "audit" script that checks for missing announcements on the website and published it as MR 1 on the "cron" project of the webmaster team. It's still a "work in progress" because it is still too noisy: there are a few DLAs missing already and we haven't published the latest DLAs on the website.

The remaining work here is to automate the import of new announcements on the website (bug #859123). I've done what is hopefully the last mass import and updated the workflow in the wiki.

Finally, I have also done a bit of cleanup on the website that was necessary after the mass import which also required rewrite rules at the server level. Hopefully, I will have this fairly well wrapped up for whoever picks this up next.

Python GPG concerns

Following a new vulnerability (CVE-2019-6690) disclosed in the python-gnupg library, I have expressed concerns at the security reliability of the project in future updates, referring to wider issues identified by isis lovecroft in this post.

I suggested we should simply drop security support for the project, citing that it didn't have many reverse dependencies. But it seems that wasn't practical: the response was that it was actually possible to keep on maintaining it, and such an update was issued for jessie.

Golang concerns

Similarly, I have expressed more concerns about the maintenance of Golang packages following the disclosure of a vulnerability (CVE-2019-6486) regarding elliptic curve implementations in the core Golang libraries. An update (DLA-1664-1) was issued for the core, but because Golang is statically compiled, I was worried the update wasn't sufficient: we also needed to upload updates for any build dependency using the affected code as well.

Holger asked the golang team for help and I also asked on IRC. Apparently, all the non-dev packages (with some exceptions) were binNMU'd in stretch but the process needs to be clarified.

I also wondered if this maintenance problem could be resolved in the long term by switching to dynamic linking. Ubuntu tried to switch to dynamic linking but abandoned the effort, so it seems Golang will be quite difficult to maintain for security updates in the foreseeable future.

Libarchive updates

I have reproduced the problem described in CVE-2019-1000020 and CVE-2019-1000019 in jessie. I published a fix as DLA-1668-1. I had to build the update without sbuild's overlay system (in a tar chroot) otherwise the cpio tests fail.

Netmask updates

This one was minimal: a patch was sent by the maintainer so I only wrote and sent DLA 1665-1. Interestingly, I didn't have access to the .changes file which made writing the DLA a little harder, as my workflow normally involves calling gen-DLA --save with the .changes file which autopopulates a template. I learned that .changes files are normally archived on coccia.debian.org (specifically in /srv/ftp-master.debian.org/queue/done/), but not in the case of security uploads.

Libreoffice

I once again tried to tackle an issue (CVE-2018-16858) with Libreoffice. The last time I tried to work on LibreOffice, the test suite was failing and the linker was crashing after hours of compilation and I never got anywhere. But that was wheezy, so I figured jessie might be in better shape.

I quickly got into trouble with sbuild: I ran out of space on both / and /home, so I moved all my photos to an external drive (!). The patch ended up being trivial. I could reproduce with a simple proof of concept, but could not quite get code execution going. It might just be that I haven't found the right Python module to load, so I assumed the code was vulnerable and, given the patch was simple, it was worth doing an update.

The build ended up taking close to nine hours and 35GiB of disk space. I published DLA-1669-1 as a result.

I also opened a bug report against dput-ng because it still doesn't warn users about uploads to security-master the same way dput does.

Enigmail

Finally, Enigmail was taken off the official support list in jessie when the debian-security-support proposed update was approved.

Other free software work

Since I was going to start that new job in March, I figured I would try to take some time off before work starts. I therefore mostly tried to wrap things up and didn't do as much volunteer work as I usually do. I'm unsure I'll be able to do as much volunteer work now that I start a full time job either, so this might possibly be my last report for a while.

Debian work before the freeze

I uploaded new versions of bitlbee-mastodon (1.4.1-1), sopel (6.6.3-1 and 6.6.3-2) and dateparser (0.7.1-1). I've also sponsored new uploads of smokeping and tuptime.

I also uploaded convertdate to NEW as it was a (missing but optional) dependency of dateparser. Unfortunately, it didn't make it through NEW in time for the freeze so dateparser won't be totally fixed in buster.

I also made two new releases of feed2exec, my programmable feed reader, to fix date parsing on broken feeds, add a JSON output plugin, and fix an issue with the ikiwiki_recentchanges plugin.

New phone

I got tired and bought a new phone. Even though I have almost a dozen old phones in a plastic box here, most of them are basically unusable:

  • two are just "feature phones" - I need OSMand
  • two are Nokia n900 phones that can't read a SIM card
  • at least two have broken screens
  • one is "declared stolen or lost" (same, right?) which means it can't be used as a phone at all, which is totally stupid if you ask me

I managed to salvage the old htc-one-s I had. It's still a little buggy (it crashes randomly) and a little slow, but generally works and I really like how small it is. It's going to be hard to go back to a bigger format.

I bought a Fairphone 2 (FP2). It was pricey, and it's crazy because they might come up with the FP3 this year, but I was sick of trying to cross-reference specification tables and LineageOS download pages. The FP2 just works with an "open" Android version (and LOS) out of the box. But more importantly, the FP project tries to avoid major human rights issues in the sourcing of components and the production of the device, something that's way too often overlooked. Many minerals involved in the fabrication of modern electronics come from conflict zones or involve horrible (child) labour conditions. Fixing those issues should be our priority, maybe even before hardware or software freedom.

Even without completely addressing those issues, the fact that it scored a perfect 10 in iFixit's repairability score is amazing. It seems parts are difficult to find, even in Europe. The phone doesn't ship to the Americas from the original website, which makes it difficult to buy, but some shops do ship to Canada, like Ecosto.

So we'll see how that goes. I will, as usual, document my experiences in the wiki, in fairphone2.

Mailing list experiments

As part of my calendar project, I figured I would keep my "readers" informed of my progress this year and send them an update every month or so. I was inspired by this post as I said last week: I can't stop thinking about it.

So I kept working on Mailman 3. Unfortunately, only one of my proposed patches was merged. Many of them are "work in progress" (WIP) of course, but I was hoping to get more feedback on the proposals, especially the no-notification workflow. Such a workflow delegates the sending of confirmation mails to the caller, which enables them to send more complex email than the straitjacket the templating system forces you into: you could then control every part of the email, not just the body and subject, but also content type, attachments and so on. That didn't seem to get traction: some informal comments I received said this wasn't the right fix for the invite problem, but then no one is working on fixing the invite problem either, so I wonder where that is going to go.

Unabashed, I tried to provide a French translation, which allowed me to send an actual invite fully translated. This was a lot of work for not much benefit, so that was frustrating as well.

In the end, I ended up just with a Bcc list that I keep as an alias in my ~/.mutt/aliases, which notmuch reads thanks to my notmuch-address hack. In the email, I proposed my readers an "opt-out": if they don't write back, they're on the mailing list. It's spammy, but the readers are not just the general public: they are people I know well, who are close to me, and to whom I have given a friggin' calendar (at least most of them).

If I find the energy, I'll finish setting up Mailman 3 just the way I like and use it to do the next mailing. But I can't help but think the mailing list is overkill for this now: the mailing with a Bcc list worked without a flaw, as far as I could tell, and it means minimal maintenance. So I'm not sure I'll battle Mailman 3 much longer, which is a shame because I happen to believe it's probably our best bet to keep mailing lists (and therefore probably email itself) alive in the future.

Emailing HTML in Notmuch

I actually had to write content for that email too - just messing around with the mailing list server is one thing, but the whole point is to actually say something. Or, in my case, show something, which is difficult using plain text. So I went crazy and tried to send HTML mail with notmuch. The thread is interesting: I encourage you to read it in full, but I'll quote the first post here for posterity:

I know, I know, HTML email is "evil"[1]. I mostly never ever use it, in fact, I don't remember the last time I consciously sent HTML. Maybe I did so back when I was using Netscape Communicator[2][3], but whatever.

The reason I thought about this again is I have been doing more photography these days and, well, being allergic to social media, I have very few ways of sharing those photographs with families and friends. I have tried creating a gallery website with an RSS feed but I'm sure no one here will be surprised that the uptake is minimal, if non-existent. People expect to have stuff pushed to them, like Instagram, Facebook, Twitter or Spam does.

So I thought[4] of Email again: the original social network! I figured I would just make a mailing list, and write to my people once in a while to let them know about my new pictures. And while writing the first email, I realized it was pretty silly to not include images, or at least links to images, in the email.

I'm sure you can see where this is going. A link in the email: who's going to click that. Who clicks now anyways, with all the tapping[5] going on. So the answer comes naturally: just write frigging HTML email. Don't be a rms^Wreligious zealot and do the right thing, what works basically everywhere[6] (even notmuch!).

So I started Thunderbird and thought "what the heck am I doing! there must be a better way!" After searching for "message mode emacs html email ktxbye", I found some people already thought about this problem and came up with somewhat elegant solutions[7]. I built on that by trying to come up with a pure elisp solution, which goes a little like this:

(defun anarcat/notmuch-html-convert ()
  """create an HTML part from a Markdown body

This will not work if there are *any* attachments of any form,
those should be added after."""
  (interactive)
  (save-excursion
    ;; fetch subject, it will be the HTML version title
    (message "building HTML attachment...")
    (message-goto-subject)
    (beginning-of-line)
    (search-forward ":")
    (forward-char)
    (let ((beg (point)))
      (end-of-line)
      (setq subject (buffer-substring beg (point))))
    (message "determined title is %s..." subject)
    ;; wrap signature in a <pre>
    (message-goto-signature)
    (forward-line -1)
    ;; save and delete signature which requires special formatting
    (setq signature (buffer-substring (point) (point-max)))
    (delete-region (point) (point-max))
    ;; set region to top of body then end of buffer
    (end-of-buffer)
    (message-goto-body)
    (narrow-to-region (point) (mark))
    ;; run markdown on region
    (setq output-buffer-name "*notmuch-markdown-output*")
    (message "running markdown...")
    (markdown output-buffer-name)
    (widen)
    (save-excursion
      (set-buffer output-buffer-name)
      (end-of-buffer)
      ;; add signature formatted as <pre>
      (insert "\n<pre>")
      (insert signature)
      (insert "</pre>\n")
      (markdown-add-xhtml-header-and-footer subject))
    (message "done the dirty work, re-inserting everything...")
    ;; restore signature
    (message-goto-signature)
    (insert signature)
    (message-goto-body)
    (insert "<#multipart type=alternative>\n")
    (end-of-buffer)
    (insert "<#part type=text/html>\n")
    (insert-buffer output-buffer-name)
    (end-of-buffer)
    (insert "<#/multipart>\n")
    (let ((f (buffer-size (get-buffer output-buffer-name))))
      (message "appended HTML part (%s bytes)" f))))

For those who can't read elisp for breakfast, this does the following:

  1. parse the current email body as markdown, in a separate buffer
  2. make the current email multipart/alternative
  3. add an HTML part
  4. inject the HTML version in the HTML part

There's some nasty business going on there to format the signature correctly by wrapping it in a <pre> - I took that trick from Thunderbird as well.

(For those who do read elisp for breakfast, improvements and comments on the coding style are very welcome.)

The idea is that you write your email normally, but in markdown. When you're done writing that email, you launch the above function (carefully bound to "M-x anarcat/notmuch-html-convert" here) which takes that email and adds an equivalent HTML part to it. You can then even tweak that part to screw around with the raw HTML if you feel depressed or nostalgic.

What do people think? Am I insane? Could this work? Does this belong in notmuch? Or maybe in the tips section? Should I seek therapy? Do you hate markdown? Expand on the relationship between your parents and text editors.

Thanks for any feedback,

A.

PS: the above, naturally, could be adapted to parse the body as RST, asciidoc, texinfo, latex or whatever insanity you think would be more appropriate, I don't care. The idea is the same.

PPS: I remember reading about someone wanting to declare a text/markdown mimetype for email, and remembering it was all backwards and weird and I can't find the reference anymore. If some lazyweb magic person could forward the link to me I would be grateful.

[1]: one of so many: https://www.georgedillon.com/web/html_email_is_evil_still.shtml
[2]: https://en.wikipedia.org/wiki/Netscape_Communicator
[3]: yes my age is showing
[4]: to be fair, this article encouraged me quite a bit: https://blog.chaddickerson.com/2019/01/09/replacing-facebook/
[5]: not the bass guitar one, unfortunately
[6]: https://en.wikipedia.org/wiki/HTML_email#Adoption
[7]: https://trey-jackson.blogspot.com/2008/01/emacs-tip-8-markdown.html

I edited the original message to include the latest version of the script, which (unfortunately) lives in my private dotfiles git repository.

In the end, all that effort didn't quite do it: the image links would break in webmail when seen from Chromium. This is apparently intended behaviour: the problem was that I was embedding the gallery's username and password in the HTTP URLs, and such in-URL credentials are apparently "deprecated" even though no standard actually says so. So I ended up generating a full HTML version of the frigging email, complete with a link at the top saying "if this email doesn't display properly, click the following".

Now I remember why I dislike HTML email. Yet my readers were quite happy to see the images directly, and I suspect most of them wouldn't have clicked through to see each photo individually, so I think it's worth the trouble.

And now that I think about it, it feels silly not to post those updates on this blog as well. But the gallery is private for now, and I think I'd like to keep it that way: it gives me more freedom to share more intimate pictures with people.

Using dtach instead of screen for my IRC bouncer

I have been using irssi in a screen session for a long time now. Recently I started thinking about simplifying that setup by setting up password-less authentication to the session and running it as a separate user. This was especially important to keep possible compromises of the IRC client limited to a sandboxed account instead of my more powerful main user.

To further limit the impact of a possible compromise, I also started using dtach instead of GNU screen to handle my irssi session: irssi can still run arbitrary code, but at least an attacker can't just open a new window in screen and has to think a little harder about how to do it.
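
For reference, the dtach invocations involved are simple. This is only an illustration - the socket path and the sandbox user name here are made up:

# create the irssi session, or re-attach if the socket already exists
sudo -u irc dtach -A /home/irc/.irssi.dtach irssi
# re-attach later; detach again with the default ^\ key
sudo -u irc dtach -a /home/irc/.irssi.dtach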

Eventually, I could write a systemd profile to keep it from forking at all, although I'm not sure irssi could still work in such an environment. The change did break the "auto-away script", which relies on screen's peculiar handling of the socket to signal whether the session is attached, so I filed that as a feature request.
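
To give an idea of what that could look like, here is a rough, untested sketch of a systemd user unit. Everything in it is an assumption - the socket path, dtach's -N (no-daemonize) mode, and TasksMax=1 as the way to forbid forking - not something I have actually deployed:

[Unit]
Description=irssi inside dtach (hypothetical sketch)

[Service]
# -N keeps dtach in the foreground so systemd supervises irssi directly
ExecStart=/usr/bin/dtach -N %t/irssi.socket irssi
# allow only a single task in the unit: any fork would fail, which is
# exactly the part irssi might not tolerate
TasksMax=1
NoNewPrivileges=true

[Install]
WantedBy=default.target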

Other work

Enrico Zini: Serving debian-distributed javascript libraries in Tornado

Wed, 06/03/2019 - 12:00am

Debian conveniently distributes JavaScript libraries, and expects packaged software to use them rather than embedding its own copies.
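
To see where those files live, look at what the various libjs-* packages ship; libjs-jquery is just one example here, and the other libraries used below follow the same layout under /usr/share/javascript:

apt install libjs-jquery
dpkg -L libjs-jquery | grep /usr/share/javascript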

Here is a convenient custom StaticFileHandler for Tornado that looks for the Debian-distributed versions of JavaScript libraries, and falls back to the vendored versions if they are not found:

from tornado import web
import pathlib


class StaticFileHandler(web.StaticFileHandler):
    """
    StaticFileHandler that allows overriding paths in the static directory
    with system provided versions
    """
    SYSTEM_ASSET_PATH = pathlib.Path("/usr/share/javascript")

    @classmethod
    def get_absolute_path(self, root, path):
        path = pathlib.PurePath(path)
        if not path.parts:
            return super().get_absolute_path(root, path)

        system_dir = self.SYSTEM_ASSET_PATH.joinpath(path.parts[0])
        if system_dir.is_dir():
            # If that asset directory exists in the system, look for things in
            # there
            return self.SYSTEM_ASSET_PATH.joinpath(path)
        else:
            # Else go ahead with the default static dir
            return super().get_absolute_path(root, path)

    def validate_absolute_path(self, root, absolute_path):
        """
        Rewrite of tornado's validate_absolute_path not to raise an error for
        paths in /usr/share/javascript/
        """
        root = pathlib.Path(root)
        absolute_path = pathlib.Path(absolute_path)

        is_system_root = absolute_path.parts[:len(self.SYSTEM_ASSET_PATH.parts)] == self.SYSTEM_ASSET_PATH.parts
        is_static_root = absolute_path.parts[:len(root.parts)] == root.parts
        if not is_system_root and not is_static_root:
            raise web.HTTPError(403, "%s is not in root static directory or system assets path", self.path)

        if absolute_path.is_dir() and self.default_filename is not None:
            # need to look at the request.path here for when path is empty
            # but there is some prefix to the path that was already
            # trimmed by the routing
            if not self.request.path.endswith("/"):
                self.redirect(self.request.path + "/", permanent=True)
                return
            absolute_path = absolute_path.joinpath(self.default_filename)
        if not absolute_path.exists():
            raise web.HTTPError(404)
        if not absolute_path.is_file():
            raise web.HTTPError(403, "%s is not a file", self.path)
        return str(absolute_path)

This is how to use it:

import tornado.web


class DebianApplication(tornado.web.Application):
    def __init__(self, *args, **settings):
        from .static import StaticFileHandler
        settings.setdefault("static_handler_class", StaticFileHandler)
        super().__init__(*args, **settings)

And from HTML it's simply a matter of matching the first path component to what is used by Debian's packages under /usr/share/javascript:

<link rel="stylesheet" href="{{static_url('bootstrap4/css/bootstrap.min.css')}}">
<script src="{{static_url('jquery/jquery.min.js')}}"></script>
<script src="{{static_url('popper.js/umd/popper.min.js')}}"></script>
<script src="{{static_url('bootstrap4/js/bootstrap.min.js')}}"></script>

I find it quite convenient: this way I can start writing prototype code without worrying about fetching javascript libraries to bundle.

I only need to start worrying about it if I need to deploy outside of Debian, or to old stable versions of Debian that don't contain the required JavaScript dependencies. In that case, I just cp -r from a working /usr/share/javascript into Tornado's static directory, and I'm done.
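
For example, assuming the application's static files live in ./static, copying the directories referenced in the template above would be enough - this is just a sketch, not a command from an actual deployment:

cp -r /usr/share/javascript/jquery /usr/share/javascript/popper.js /usr/share/javascript/bootstrap4 static/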
