
Feed aggregator

Noah Meyerhans: Further Discussion for DPL!

Planet Debian - Sun, 10/03/2019 - 11:18pm

Further Discussion builds consensus within Debian!

Further Discussion gets things done!

Further Discussion welcomes diverse perspectives in Debian!

We'll grow the community with Further Discussion!

Further Discussion has been with Debian from the very beginning! Don't you think it's time we gave Further Discussion its due, after all the things Further Discussion has accomplished for the project?

Somewhat more seriously, have we really exhausted the community of people interested in serving as Debian Project Leader? That seems unfortunate. I'm not worried about it from a technical point of view, as Debian has established ways of operating without a DPL. But the lack of interest suggests some kind of stagnation within the community. Or maybe this is just the cabal trying to wrest power from the community by stifling the vote. Is there still a cabal?

Markus Koschany: My Free Software Activities in February 2019

Planet Debian - Sun, 10/03/2019 - 9:48pm

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games
  • February was the last month to package new upstream releases before the full freeze, if the changes were not too invasive of course :-). Atomix, gamine, simutrans, simutrans-pak64, simutrans-pak128.britain and hitori qualified.
  • I sponsored a new version of mgba, a Game Boy Advance emulator, for Reiner Herrmann and worked together with Bret Curtis on wildmidi and openmw. The latest upstream version resolved a long-standing bug and made it possible for the game engine, a reimplementation of The Elder Scrolls III: Morrowind, to be part of a Debian stable release for the first time.
  • Johann Suhter reported a bug in one of brainparty’s minigames and also provided the patch. All I had to do was upload it. Thanks. (#922485)
  • I corrected a minor cross-build FTBFS in openssn. Patch by Helmut Grohne. (#914724)
  • I released a new version of debian-games and updated the dependency list of our games metapackages. This is almost the final version but expect another release in one or two months.
Debian Java

Misc

Debian LTS

This was my thirty-sixth month as a paid contributor and I have been paid to work 19.5 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 25.02.2019 until 03.03.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in sox, collabtive, libkohana2-php, ldb, libpodofo, libvirt, openssl, wordpress, twitter-bootstrap, ceph, ikiwiki, edk2, advancecomp, glibc, spice-xpi and zabbix.
  • DLA-1675-1. Issued a security update for python-gnupg fixing 1 CVE.
  • DLA-1676-1. Issued a security update for unbound fixing 1 CVE.
  • DLA-1696-1. Issued a security update for ceph fixing 2 CVEs.
  • DLA-1701-1. Issued a security update for openssl fixing 1 CVE.
  • DLA-1702-1. Issued a security update for advancecomp fixing 2 CVEs.
  • DLA-1703-1. Issued a security update for jackson-databind fixing 10 CVEs.
  • DLA-1706-1. Issued a security update for poppler fixing 5 CVEs.
ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my ninth month and I have been paid to work 15 hours on ELTS.

  • I was in charge of our ELTS frontdesk from 25.02.2019 until 03.03.2019 and I triaged CVEs in file, gnutls26, nettle, libvirt, busybox and eglibc.
  • ELA-84-1. Issued a security update for gnutls26 fixing 4 CVEs. I also investigated CVE-2018-16869 in nettle and CVE-2018-16868 in gnutls26. After some consideration I decided to mark these issues as ignored because the changes were invasive and would have required intensive testing. The benefits appeared small in comparison.
  • ELA-88-1. Issued a security update for openssl fixing 1 CVE.
  • ELA-90-1. Issued a security update for libsdl1.2 fixing 11 CVEs.
  • I started to work on sqlalchemy which requires a complex backport to fix a possible SQL injection vulnerability.

Thanks for reading and see you next time.

Michael Stapelberg: Winding down my Debian involvement

Planet Debian - Sun, 10/03/2019 - 9:43pm

This post is hard to write, both in the emotional sense but also in the “I would have written a shorter letter, but I didn’t have the time” sense. Hence, please assume the best of intentions when reading it—it is not my intention to make anyone feel bad about their contributions, but rather to provide some insight into why my frustration level ultimately exceeded the threshold.

Debian has been in my life for well over 10 years at this point.

A few weeks ago, I visited some old friends at the Zürich Debian meetup after a multi-year period of absence. On my bike ride home, it occurred to me that the topics of our discussions had remarkable overlap with my last visit. We had a discussion about the merits of systemd, which took a detour to respect in open source communities, returned to processes in Debian and eventually culminated in democracies and their theoretical/practical failings. Admittedly, that last one might be a Swiss thing.

I say this not to knock on the Debian meetup, but because it prompted me to reflect on the feelings Debian has been evoking in me lately and whether it’s still a good fit for me.

So I’m finally making a decision that I should have made a long time ago: I am winding down my involvement in Debian to a minimum.

What does this mean?

Over the coming weeks, I will:

  • transition packages to be team-maintained where it makes sense
  • remove myself from the Uploaders field on packages with other maintainers
  • orphan packages where I am the sole maintainer

I will try to keep up best-effort maintenance of the manpages.debian.org service and the codesearch.debian.net service, but any help would be much appreciated.

For all intents and purposes, please treat me as permanently on vacation. I will try to be around for administrative issues (e.g. permission transfers) and questions addressed directly to me, provided they are easy enough to answer.

Why?

When I joined Debian, I was still studying, i.e. I had luxurious amounts of spare time. Now, over 5 years of full-time work later, my day job has taught me a lot, both about what works in large software engineering projects and how I personally like my computer systems. I am very conscious of how I spend the little spare time that I have these days.

The following sections each deal with what I consider a major pain point, in no particular order. Some of them influence each other—for example, if changes worked better, we could have a chance at transitioning packages to be more easily machine readable.

Change process in Debian

Over the last few years, my team at work conducted various smaller and larger refactorings across the entire code base (touching thousands of projects), so we have learnt a lot of valuable lessons about how to do these changes effectively. It irks me that Debian works almost the opposite way in every regard. I appreciate that every organization is different, but I think a lot of my points do actually apply to Debian.

In Debian, packages are nudged in the right direction by a document called the Debian Policy, or its programmatic embodiment, lintian.

While it is great to have a lint tool (for quick, local/offline feedback), it is even better to not require a lint tool at all. The team conducting the change (e.g. the C++ team introducing a new hardening flag for all packages) should be able to do their work transparently to me.

Instead, currently, all packages become lint-unclean, all maintainers need to read up on what the new thing is, how it might break, whether/how it affects them, manually run some tests, and finally decide to opt in. This causes a lot of overhead and manually executed mechanical changes across packages.

Notably, the cost of each change is distributed onto the package maintainers in the Debian model. At work, we have found that the opposite works better: if the team behind the change is put in power to do the change for as many users as possible, they can be significantly more efficient at it, which reduces the total cost and time a lot. Of course, exceptions (e.g. a large project abusing a language feature) should still be taken care of by the respective owners, but the important bit is that the default should be the other way around.

Debian is lacking tooling for large changes: it is hard to programmatically deal with packages and repositories (see the section below). The closest to “sending out a change for review” is to open a bug report with an attached patch. I thought the workflow for accepting a change from a bug report was too complicated and started mergebot, but only Guido ever signaled interest in the project.

Culturally, reviews and reactions are slow. There are no deadlines. I literally sometimes get emails notifying me that a patch I sent out a few years ago (!!) is now merged. This turns projects from a small number of weeks into many years, which is a huge demotivator for me.

Interestingly enough, you can see artifacts of the slow online activity manifest itself in the offline culture as well: I don’t want to be discussing systemd’s merits 10 years after I first heard about it.

Lastly, changes can easily be slowed down significantly by holdouts who refuse to collaborate. My canonical example for this is rsync, whose maintainer refused my patches to make the package use debhelper purely out of personal preference.

Granting so much personal freedom to individual maintainers prevents us as a project from raising the abstraction level for building Debian packages, which in turn makes tooling harder.

What would things look like in a better world?

  1. As a project, we should strive towards more unification. Uniformity still does not rule out experimentation, it just changes the trade-off from easier experimentation and harder automation to harder experimentation and easier automation.
  2. Our culture needs to shift from “this package is my domain, how dare you touch it” to a shared sense of ownership, where anyone in the project can easily contribute (reviewed) changes without necessarily even involving individual maintainers.

To learn more about what successful large changes can look like, I recommend my colleague Hyrum Wright’s talk “Large-Scale Changes at Google: Lessons Learned From 5 Yrs of Mass Migrations”.

Fragmented workflow and infrastructure

Debian generally seems to prefer decentralized approaches over centralized ones. For example, individual packages are maintained in separate repositories (as opposed to in one repository), each repository can use any SCM (git and svn are common ones) or no SCM at all, and each repository can be hosted on a different site. Of course, what you do in such a repository also varies subtly from team to team, and even within teams.

In practice, non-standard hosting options are used rarely enough to not justify their cost, but frequently enough to be a huge pain when trying to automate changes to packages. Instead of using GitLab’s API to create a merge request, you have to design an entirely different, more complex system, which deals with intermittently (or permanently!) unreachable repositories and abstracts away differences in patch delivery (bug reports, merge requests, pull requests, email, …).
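For comparison, the simple case (a package that does live on a GitLab instance such as salsa.debian.org) can be scripted in a few lines. This is only a sketch using the python-gitlab library; the project path, branch names and token are placeholders:

import gitlab

# Connect to a GitLab instance; salsa.debian.org hosts many Debian packaging repos.
gl = gitlab.Gitlab("https://salsa.debian.org", private_token="REDACTED")

# Look up the packaging repository ("debian/somepackage" is a placeholder path).
project = gl.projects.get("debian/somepackage")

# Open a merge request from an already-pushed branch against the default branch.
mr = project.mergerequests.create({
    "source_branch": "mechanical-change",
    "target_branch": "master",
    "title": "Apply project-wide mechanical change",
    "remove_source_branch": True,
})
print(mr.web_url)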

Wildly diverging workflows are not just a temporary problem either. I participated in long discussions about different git workflows during DebConf 13, and I gather that there have been similar discussions since.

Personally, I cannot keep enough details of the different workflows in my head. Every time I touch a package that works differently from mine, it frustrates me immensely to re-learn aspects of my day-to-day.

After noticing workflow fragmentation in the Go packaging team (which I started), I tried fixing this with the workflow changes proposal, but did not succeed in implementing it. The lack of effective automation and slow pace of changes in the surrounding tooling despite my willingness to contribute time and energy killed any motivation I had.

Old infrastructure: package uploads

When you want to make a package available in Debian, you upload GPG-signed files via anonymous FTP. There are several batch jobs (the queue daemon, unchecked, dinstall, possibly others) which run on fixed schedules (e.g. dinstall runs at 01:52 UTC, 07:52 UTC, 13:52 UTC and 19:52 UTC).

Depending on timing, I estimated that you might wait for over 7 hours (!!) before your package is actually installable.
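As a rough back-of-the-envelope sketch (my own illustration, based only on the published dinstall times; the queue daemon, buildds and mirror pushes add further delay on top):

from datetime import datetime, timedelta, timezone

# dinstall runs at 01:52, 07:52, 13:52 and 19:52 UTC, so an upload that just
# misses a run waits almost six hours before dinstall even starts.
RUN_HOURS = (1, 7, 13, 19)

def wait_until_next_dinstall(now):
    runs = [now.replace(hour=h, minute=52, second=0, microsecond=0) + timedelta(days=d)
            for d in (0, 1) for h in RUN_HOURS]
    return min(r for r in runs if r > now) - now

# Example: an upload that finishes at 14:00 UTC waits 5h52m for the next dinstall.
print(wait_until_next_dinstall(datetime(2019, 3, 10, 14, 0, tzinfo=timezone.utc)))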

What’s worse for me is that feedback to your upload is asynchronous. I like to do one thing, be done with it, move to the next thing. The current setup requires a many-minute wait and costly task switch for no good technical reason. You might think a few minutes aren’t a big deal, but when all the time I can spend on Debian per day is measured in minutes, this makes a huge difference in perceived productivity and fun.

The last communication I can find about speeding up this process is ganneff’s post from 2008.

What would things look like in a better world?

  1. Anonymous FTP would be replaced by a web service which ingests my package and returns an authoritative accept or reject decision in its response.
  2. For accepted packages, there would be a status page displaying the build status and when the package will be available via the mirror network.
  3. Packages should be available within a few minutes after the build completed.
Old infrastructure: bug tracker

I dread interacting with the Debian bug tracker. debbugs is a piece of software (from 1994) which is only used by Debian and the GNU project these days.

Debbugs processes emails, which is to say it is asynchronous and cumbersome to deal with. Despite running on the fastest machines we have available in Debian (or so I was told when the subject last came up), its web interface loads very slowly.

Notably, the web interface at bugs.debian.org is read-only. Setting up a working email setup for reportbug(1) or manually dealing with attachments is a rather big hurdle.

For reasons I don’t understand, every interaction with debbugs results in many different email threads.

Aside from the technical implementation, I also can never remember the different ways that Debian uses pseudo-packages for bugs and processes. I need them rarely enough that I never establish a mental model of how they are set up, or working memory of how they are used, but frequently enough to be annoyed by them.

What would things look like in a better world?

  1. Debian would switch from a custom bug tracker to a (any) well-established one.
  2. Debian would offer automation around processes. It is great to have a paper-trail and artifacts of the process in the form of a bug report, but the primary interface should be more convenient (e.g. a web form).
Old infrastructure: mailing list archives

It baffles me that in 2019, we still don’t have a conveniently browsable threaded archive of mailing list discussions. Email and threading are more widely used in Debian than anywhere else, so this is somewhat ironic. Gmane used to paper over this issue, but Gmane’s availability over the last few years has been spotty, to say the least (it is down as I write this).

I tried to contribute a threaded list archive, but our listmasters didn’t seem to care or want to support the project.

Debian is hard to machine-read

While it is obviously possible to deal with Debian packages programmatically, the experience is far from pleasant. Everything seems slow and cumbersome. I have picked just 3 quick examples to illustrate my point.

debiman needs help from piuparts in analyzing the alternatives mechanism of each package to display the manpages of e.g. psql(1). This is because maintainer scripts modify the alternatives database by calling shell scripts. Without actually installing a package, you cannot know which changes it makes to the alternatives database.

pk4 needs to maintain its own cache to look up package metadata based on the package name. Other tools parse the apt database from scratch on every invocation. A proper database format, or at least a binary interchange format, would go a long way.
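As an illustration of what that from-scratch parsing looks like (a sketch using python-apt; pk4 itself works differently and keeps its own cache):

import apt

# apt.Cache() re-reads the package lists on every invocation; there is no
# compact, indexed database to query a single package name against.
cache = apt.Cache()
pkg = cache["bash"]              # metadata lookup by package name
print(pkg.candidate.version)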

Debian Code Search wants to ingest new packages as quickly as possible. There used to be a fedmsg instance for Debian, but it no longer seems to exist. It is unclear where to get notifications from for new packages, and where best to fetch those packages.

Complicated build stack

See my “Debian package build tools” post. It really bugs me that the sprawl of tools is not seen as a problem by others.

Developer experience pretty painful

Most of the points discussed so far deal with the experience in developing Debian, but as I recently described in my post “Debugging experience in Debian”, the experience when developing using Debian leaves a lot to be desired, too.

I have more ideas

At this point, the article is getting pretty long, and hopefully you got a rough idea of my motivation.

While I described a number of specific shortcomings above, the final nail in the coffin is actually the lack of a positive outlook. I have more ideas that seem really compelling to me, but, based on how my previous projects have been going, I don’t think I can make any of these ideas happen within the Debian project.

I intend to publish a few more posts about specific ideas for improving operating systems here. Stay tuned.

Lastly, I hope this post inspires someone, ideally a group of people, to improve the developer experience within Debian.

Andy Simpkins: Debian BSP: Cambridge continued

Planet Debian - Sun, 10/03/2019 - 4:06pm

I am slowly making progress.  I am quite pleased with myself for slowly moving beyond triage, test, verify to now beginning to understand what is going on with some bugs and being able to suggest fixes :-)  That said, my C++ foo is poor and, with Qt added into the mix, #917711 is beyond me.

Not only does quite a lot of work get done at a BSP; it is also very good to catch up with people, especially those who traveled to Cambridge from out of the area.  Thank you for taking your weekend to contribute to making Buster.

I must also take the opportunity to thank Sledge and Randombird for opening up their home to host the BSP and provide overnight accommodation as well.

More hacking is still going on…  Some different people from yesterday.

Differing people: ++smcv --andrewsh ++cjwatson --lamby

Michal Čihař: Weblate 3.5.1

Planet Debian - Sun, 10/03/2019 - 4:00pm

Weblate 3.5.1 has been released today. Compared to the 3.5 release it brings several bug fixes and performance improvements.

Full list of changes:

  • Fixed Celery systemd unit example.
  • Fixed notifications from http repositories with login.
  • Fixed race condition in editing source string for monolingual translations.
  • Include output of failed addon execution in the logs.
  • Improved validation of choices for adding new language.
  • Allow to edit file format in component settings.
  • Update installation instructions to prefer Python 3.
  • Performance and consistency improvements for loading translations.
  • Make Microsoft Terminology service compatible with current zeep releases.
  • Localization updates.

If you are upgrading from an older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used on https://hosted.weblate.org/ as the official translation service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared, you can influence this by expressing support for individual issues either by comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Andrew Cater: Debian BSP Cambridge March 10th 2019 - post 2

Planet Debian - Sun, 10/03/2019 - 3:49pm
Lots of very busy people chasing down bugs. A couple of folk have left. It's a good day and very productive: thanks to Steve and Jo, as ever, for food, coffee, coffee, beds and coffee.

Andrew Cater: Debian BSP Cambridge 10th March 2019

Planet Debian - Sun, 10/03/2019 - 12:06pm
Folks are starting to turn up this morning. Kitchen full of people talking and cooking sausages and talking. A quiet room except for Pepper the dog chasing squeaky toys and people chasing into the kitchen for food. Folk are now gradually settling down to code and bug fix. All is good.

Andrew Cater: Debian BSP Cambridge March 9th 2019 - post 3

Planet Debian - Sat, 09/03/2019 - 10:40pm
Pub meal with lots of people in a crowded pub and lots of chatting. Various folk have come back to Steve's to carry on to be met with a very, very bouncy friendly dog. And on we go :)

Andy Simpkins: Debian BSP: Cambridge

Planet Debian - Sat, 09/03/2019 - 7:38pm

I didn’t get a huge amount done today – a bit of email config for Andy followed by many installation tests to reproduce #911036, then to test and confirm the patch has fixed it.  I can’t read a word of Japanese but thankfully Hideki provided the appropriate runes I needed to ‘match’, so I was able to follow the steps to reproduce, coupled with running a separate qemu instance in English so I could shadow my actions to keep track of menu state…

 

Andy Cater – Multi tasking (and trying to look like Andy Hamilton)

main table space hard at work…

Overflow hack space…  stress relief service being provided by Pepper

Andrew Cater: Debian BSP Cambridge March 9th 2019 - post 2

Planet Debian - Sat, 09/03/2019 - 6:14pm
Large amounts of conversation: on chat and simultaneously in real life. Eight people sat round a dining table, two on a sofa, me on a chair. The ThinkPad quotient has increased. Lots of bugs being closed: Buster is steadily becoming less RC-buggy. The coffee machine is getting a hammering and Debian is improving steadily. All is good.

Ian Jackson: Nailing Cargo (the Rust build tool)

Planet Debian - Sat, 09/03/2019 - 3:21pm
Introduction

I quite like the programming language Rust, although it's not without flaws and annoyances.

One of those annoyances is Cargo, the language-specific package manager. Like all modern programming languages, Rust has a combined build tool and curlbashware package manager. Apparently people today expect their computers to download and run code all the time and get annoyed when that's not sufficiently automatic.

I don't want anything on my computer that automatically downloads and executes code from minimally-curated repositories like crates.io. So this is a bit of a problem.

Dependencies available in Debian

Luckily I can get nearly all of what I have needed so far from Debian, at least if I'm prepared to use Debian testing (buster, now in freeze). Debian's approach to curation is not perfect, but it's mostly good enough for me.

But I still need to arrange to use the packages from Debian instead of downloading things.

Of course anything in Debian written in Rust faces the same problem: Debian source packages are quite rightly Not Allowed to download random stuff from the Internet during the package build. So when I tripped across this I reasoned that the Debian Rust team must already have fixed this problem somehow. And, indeed they have. The result is not too awful:

$ egrep '^[^#]' ~/.cargo/config
[source]
[source.debian-packages]
directory = "/usr/share/cargo/registry"
[source.crates-io]
replace-with = "debian-packages"
$

I cloned and hacked this from the cargo docs after reading the Debian Rust Team packaging policy.

The effect is to cause my copy of cargo to think that crates.io is actually located (only) on my local machine in /usr/share/cargo. If I mention a dependency, cargo will look in /usr/share/cargo for it. If it's not there I get an error, which I fix by installing the appropriate Debian rust package using apt.

So far so good.

Edited 2019-03-07, to add: To publish things on crates.io I then needed another workaround too (scroll down to "Further annoyance from Cargo").

Dependencies not in Debian

A recent project of mine involved some dependencies which were not in Debian, notably Rust bindings to the GNU Scientific Library and to the NLopt nonlinear optimisation suite. A quick web search found me the libraries rust-GSL and rust-nlopt. They are on crates.io, of course. I thought I would check them out so that I could check them out. Ie, decide if they looked reasonable, and do a bit of a check on the author, and so on.

Digression - a rant about Javascript

The first problem is of course that crates.io itself does not work at all without enabling Javascript in my browser. I have always been reluctant about JS. Its security properties have always been suboptimal and much JS is used for purposes which are against the interests of the user.

But I really don't like the way that everyone is still pretending that it is safe to run untrusted code on one's computer, despite Spectre. Most people seem to think that if you are up to date with your patches, Spectre isn't a problem. This is not true at all. Spectre is basically unfixable on affected CPUs and it is not even possible to make a comparably fast CPU without this bug. The patches are bodges which make the attacks more complicated and less effective. (At least, no-one knows how to do so yet.)

And of course Javascript is a way of automatically downloading megabytes of stuff from all over the internet and running it on your computer. Urgh. So I run my browser with JS off by default.

There is absolutely no good reason why crates.io won't let me even look at the details of some library without downloading a pile of code from their website and running it on my computer. But, I guess, it is probably OK to allow it? So on I go, granting JS permission. Then I can click through to the actual git repository. JS just to click a link! At least we can get back to the main plot now...

The "unpublished local crate" problem So I git clone rust-GSL and have a look about. It seems to contain the kind of things I expect. The author seems to actually exist. The git history seems OK on cursory examination. I decide I am going to actually to use it. So I write [dependencies] GSL = "1" in my own crate's metadata file.

I realise that I am going to have to tell cargo where it is. Experimentally, I run cargo build and indeed it complains that it's never heard of GSL. Fair enough. So I read the cargo docs for the local config file, to see how to tell it to look at ../rust-GSL.

It turns out that there is no sensible way to do this!

There is a paths thing you can put in your config, but it does not work for an unpublished crate. (And of course, from the point of view of my cargo, GSL is an unpublished crate - because only crates that I have installed from .debs are "published".)

Also paths actually uses some of the metadata from the repository, which is of course not what you wanted. In my reading I found someone having trouble because crates.io had a corrupted metadata file for some crate, which their cargo rejected. They could just get the source themselves and fix it, but they had serious difficulty hitting cargo on the head hard enough to stop it trying to read the broken online metadata.

The same overall problem would arise if I had simply written the other crate myself and not published it yet. (And indeed I do have a crate split out from my graph layout project, which I have yet to publish.)

You can edit your own crate's Cargo.toml metadata file to say something like this:

GSL = { path = "../rust-GSL", optional = true }

but of course that's completely wrong. That's a fact about the setup on my laptop and I don't want to commit it to my git tree. And this approach gets quickly ridiculous if I have indirect dependencies: I would have to make a little local branch in each one just to edit each one's Cargo.toml to refer to the others'. Awful.

Well, I have filed an issue. But that won't get me unblocked.

So, I also wrote a short but horrific shell script. It's a wrapper for cargo, which edits all your Cargo.toml's to refer to each other. Then, when cargo is done, it puts them all back, leaving your git trees clean.
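The real nailing-cargo is a shell script; purely to illustrate the edit-run-restore idea, a rough sketch in Python could look like this (the side-by-side checkout layout and the assumption that directory names equal crate names are mine, not the script's):

#!/usr/bin/python3
# Illustrative sketch only: temporarily rewrite the sibling crates' Cargo.toml
# files to use path dependencies, run cargo, then restore the originals.
import re
import subprocess
import sys
from pathlib import Path

# Assume all co-developed crates are checked out side by side and that each
# directory name equals the crate name it provides.
crates = {d.name: d for d in Path("..").iterdir() if (d / "Cargo.toml").is_file()}

saved = {}
for d in crates.values():
    toml = d / "Cargo.toml"
    saved[toml] = original = toml.read_text()
    nailed = original
    for name, path in crates.items():
        # Crude textual rewrite; a real tool would use a proper TOML parser.
        nailed = re.sub(rf"^{re.escape(name)}\s*=.*$",
                        f'{name} = {{ path = "{path.resolve()}" }}',
                        nailed, flags=re.MULTILINE)
    toml.write_text(nailed)

try:
    subprocess.run(["cargo"] + sys.argv[1:], check=True)
finally:
    for toml, original in saved.items():
        toml.write_text(original)   # leave the git trees clean again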

Writing this wrapper script would have been a lot easier if Cargo had been less prescriptive about what things are called and where they must live. For example, if I could have specified an alternative name for Cargo.toml, my script wouldn't have had to save the existing file and write a replacement; it could just have idempotently written a new file.

Even so, nailing-cargo works surprisingly well. I run it around make. I sometimes notice some of the "nailed" Cargo.toml files if I update my magit while a build is running, but that always passes. Even in a nightmare horror of a script, it was worth paying attention to the error handling.

I hope the cargo folks fix this before I have to soup up nailing-cargo to use a proper TOML parser, teach it a greater variety of Cargo.toml structures, give it a proper config file reader and a manpage, and generally turn it into a proper, but still hideous, product.

edited 2019-03-09 to fix tag spelling




Ian Jackson: Rust doubly-linked list

Planet Debian - Sat, 09/03/2019 - 3:19pm
I have now released (and published on crates.io) my doubly-linked list library for Rust.

Of course in Rust you don't usually want a doubly-linked list. The VecDeque array-based double-ended queue is usually much better. I discuss this in detail in my module's documentation.

Why a new library
Weirdly, there is a doubly linked list in the Rust standard library but it is good for literally nothing at all. Its API is so limited that you can always do better with a VecDeque. There's a discussion (sorry, requires JS) about maybe deprecating it.

There's also another doubly-linked list available but despite being an 'intrusive' list (in C terminology) it only supports one link per node, and insists on owning the items you put into it. I needed several links per node for my planar graph work, and I needed Rc-based ownership.

Indeed given my analysis of when a doubly-linked list is needed, rather than a VecDeque, I think it will nearly always involve something like Rc too.

My module
You can read the documentation online.

It provides the facilities I needed, including lists where each node can be on multiple lists with runtime selection of the list link within each node. It's not threadsafe (so Rust will stop you using it across multiple threads) and would be hard to make threadsafe, I think.

Notable wishlist items: entrypoints for splitting and joining lists, and good examples in the documentation. Both of these would be quite easy to add.

Further annoyance from Cargo
As I wrote earlier, because I am some kind of paranoid from the last century, I have hit cargo on the head so that it doesn't randomly download and run code from the internet.

This is done with stuff in my ~/.cargo/config. Of course this stops me actually accessing the real public repository (cargo automatically looks for .cargo/config in all parent directories, not just in $PWD and $HOME). No problem - I was expecting to have to override it.

However there is no way to sensibly override a config file!

So I have had to override it in a silly way: I made a separate area on my laptop which belongs to me but which is not underneath my home directory. Whenever I want to run cargo publish, I copy the crate to be published to that other area, which is not a direct or indirect subdirectory of anything containing my usual .cargo/config.

Cargo really is quite annoying: it has opinions about how everything is and how everything ought to be done. I wouldn't mind that, but unfortunately when it happens to be wrong it often lacks a good way to tell it what should be done instead. This kind of thing is a serious problem in a build system tool.


Chris Lamb: Book Review: Jeeves and the King of Clubs

Planet Debian - Sat, 09/03/2019 - 2:39pm

Jeeves and the King of Clubs (2018)

Ben Schott

For the P.G. Wodehouse fan the idea of bringing back such a beloved duo such as Jeeves and Wooster will either bring out delight or dread. Indeed, the words you find others using often reveals their framing of such endeavours; is this a tribute, homage, pastiche, an imitation…?

Whilst neither parody nor insult, let us start with the "most disagreeable, sir." Rather jarring were the voluminous and Miscellany-like footnotes that let you know that the many allusions and references are all checked, correct and contemporaneous. All too clever by half, and it would ironically be a negative trait if it were personified by a character within the novel itself. Bertie's uncharacteristic knowledge of literature was also eyebrow-raising: whilst he should always have the mot juste within easy reach — especially for that perfect parliamentary insult — Schott's Wooster was just a bit too learned and bookish, ultimately lacking that blithe An Idiot Abroad element that makes him so affably charming.

Furthermore, Wodehouse's far-right Black Shorts group (who "seek to promote the British way of life, the British sense of fair play and the British love of Britishness") was foregrounded a little too much for my taste. One surely reaches for Wodehouse to escape contemporary political noise and nonsense, to be transported to that almost-timeless antebellum world which, of course, never really existed in the first place?

Saying that, the all-important vernacular is full of "snap and vim", the eponymous valet himself is superbly captured, and the plot has enough derring-do and high jinks to possibly assuage even the most ardent fan. The fantastic set pieces in both a Savile Row tailor and a ladies underwear store might be worth the price of admission alone.

To be sure, this is certainly ersatz Wodehouse, but should one acquire it? «Indeed, sir,» intoned Jeeves.

Andrew Cater: Debian BSP Cambridge March 9th 2019 - post 1

Planet Debian - Sat, 09/03/2019 - 1:32pm
At Steve's. Breakfast and coffee have happened. There's a table full of developers, release managers, cables snaking all over the floor and a coffee machine. This is heaven for Debian developers (probably powered by ThinkPads). On IRC on #debian-uk (as ever) and #debian-bugs. Catch up with us there.

Dirk Eddelbuettel: RcppArmadillo 0.9.200.7.1

Planet Debian - Sat, 09/03/2019 - 3:48am

A minor RcppArmadillo bugfix release arrived on CRAN today. This version 0.9.200.7.1 has two local changes. R 3.6.0 will bring a change in sample() (to correct a subtle bug for large samples) meaning many tests will fail, so in one unit test file we reset the generator to the old behaviour to ensure we match the (old) test expectation. We also backported a prompt upstream fix for an issue with drawing Wishart-distributed random numbers via Armadillo which was uncovered this week. I also just uploaded the Debian version.

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 559 other packages on CRAN.

Changes are listed below:

Changes in RcppArmadillo version 0.9.200.7.1 (2019-03-08)
  • Explicit setting of RNGversion("3.5.0") in one unit test to accommodate the change in sample() in R 3.6.0

  • Back-ported a fix to the Wishart RNG from upstream (Dirk in #248 fixing #247)

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Muammar El Khatib: Spotify And Local Files

Planet Debian - Sat, 09/03/2019 - 12:38am

My favorite music player is undoubtedly Spotify. It is not a secret that its Linux support might not be the best and that some artists have just decided not to upload their music to the service. One of them is Tool, which also happens to be one of my favorite bands. I recently decided to play my Tool mp3 files with Spotify as local files and they were not playing. In order to fix that one has to:

Jo Shields: Bootstrapping RHEL 8 support on mono-project.com

Planet Debian - Fri, 08/03/2019 - 5:15pm
Preamble

On mono-project.com, we ship packages for Debian 8, Debian 9, Raspbian 8, Raspbian 9, Ubuntu 14.04, Ubuntu 16.04, Ubuntu 18.04, RHEL/CentOS 6, and RHEL/CentOS 7. Because this is Linux packaging we’re talking about, making one or two repositories to serve every need just isn’t feasible – incompatible versions of libgif, libjpeg, libtiff, OpenSSL, GNUTLS, etc, mean we really do need to build once per target distribution.

For the most part, this level of “LTS-only” coverage has served us reasonably well – the Ubuntu 18.04 packages work in 18.10, the RHEL 7 packages work in Fedora 28, and so on.

However, when Fedora 29 shipped, users found themselves running into installation problems.

I was not at all keen on adding non-LTS Fedora 29 to our build matrix, due to the time and effort required to bootstrap a new distribution into our package release system. And, as if in answer to my pain, the beta release of Red Hat Enterprise 8 landed.

Cramming a square RPM into a round Ubuntu

Our packaging infrastructure relies upon a homogeneous pool of Ubuntu 16.04 machines (x64 on Azure, ARM64 and PPC64el on-site at Microsoft), using pbuilder to target Debian-like distributions (building i386 on the x64 VMs, and various ARM flavours on the ARM64 servers); and mock to target RPM-like distributions. So in theory, all I needed to do was drop a new RHEL 8 beta mock config file into place, and get on with building packages.
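For context, a mock target is described by a config file that is just a Python fragment filling in a config_opts dictionary. A heavily abbreviated, hypothetical sketch of a RHEL 8 beta target (the repository URL and package group are placeholders, not the real values used on our builders):

# /etc/mock/rhel-8-beta-x86_64.cfg -- hypothetical sketch, not the real config
config_opts['root'] = 'rhel-8-beta-x86_64'
config_opts['target_arch'] = 'x86_64'
config_opts['dist'] = 'el8'
config_opts['releasever'] = '8'
config_opts['chroot_setup_cmd'] = 'install @buildsys-build'
# RHEL 8 dropped Yum for DNF, so the host-side tooling must speak DNF as well.
config_opts['package_manager'] = 'dnf'
config_opts['dnf.conf'] = """
[main]
keepcache=1
best=1

[rhel-8-beta-baseos]
name=RHEL 8 beta BaseOS
baseurl=https://example.com/rhel-8-beta/BaseOS/x86_64/os/
gpgcheck=0
enabled=1
"""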

Just one problem – between RHEL 7 (based on Fedora 19) and RHEL 8 (based on Fedora 28), the Red Hat folks had changed package manager, dropping Yum in favour of DNF. And mock works by using the host distribution’s package manager to perform operations inside the build root – i.e. yum.deb from Ubuntu.

It’s not possible to install RHEL 8 beta with Yum. It just doesn’t work. It’s also not possible to update mock to $latest and use a bootstrap chroot, because reasons. The only options: either set up Fedora VMs to do our RHEL 8 builds (since they have DNF), or package DNF for Ubuntu 16.04.

For my sins, I opted for the latter. It turns out DNF has a lot of dependencies, only some of which are backportable from post-16.04 Ubuntu. The dependency tree looked something like:

  •  Update mock and put it in a PPA
    •  Backport RPM 4.14+ and put it in a PPA
    •  Backport python3-distro and put it in a PPA
    •  Package dnf and put it in a PPA
      •  Package libdnf and put it in a PPA
        •  Backport util-linux 2.29+ and put it in a PPA
        •  Update libsolv and put it in a PPA
        •  Package librepo and put it in a PPA
          •  Backport python3-xattr and put it in a PPA
          •  Backport gpgme1.0 and put it in a PPA
            •  Backport libgpg-error and put it in a PPA
        •  Package modulemd and put it in a PPA
          •  Backport gobject-introspection 1.54+ and put it in a PPA
          •  Backport meson 0.47.0+ and put it in a PPA
            •  Backport googletest and put it in a PPA
        •  Package libcomps and put it in a PPA
    •  Package dnf-plugins-core and put it in a PPA
  •  Hit all the above with sticks until it actually works
  •  Communicate to community stakeholders about all this, in case they want it

This ended up in two PPAs – the end-user usable one here, and the “you need these to build the other PPA, but probably don’t want them overwriting your system packages” one here. Once I convinced everything to build, it didn’t actually work – a problem I eventually tracked down and proposed a fix for here.

All told it took a bit less than two weeks to do all the above. The end result is, on our Ubuntu 16.04 infrastructure, we now install a version of mock capable of bootstrapping DNF-requiring RPM distributions, like RHEL 8.

RHEL isn’t CentOS

We make various assumptions about package availability, which are true for CentOS, but not RHEL (8). The (lack of) availability of the EPEL repository for RHEL 8 was a major hurdle – in the end I just grabbed the relevant packages from EPEL 7, shoved them in a web server, and got away with it. The second is structural – for a bunch of the libraries we build against, the packages are available in the public RHEL 8 repo, but the corresponding -devel packages are in a (paid, subscription required) repository called “CodeReady Linux Builder” – and using this repo isn’t mock-friendly. In the end, I just grabbed the three packages I needed via curl, and transferred them to the same place as the EPEL 7 packages I grabbed.

Finally, I was able to begin the bootstrapping process.

RHEL isn’t Fedora

After re-bootstrapping all the packages from the CentOS 7 repo into our “””CentOS 8″”” repo (we make lots of naming assumptions in our control flow, so the world would break if we didn’t call it CentOS), I tried installing on Fedora 29, and… Nope. Dependency errors. Turns out there are important differences between the two distributions. The main one is that any package with a Python dependency is incompatible, as the two handle Python paths very differently. Thankfully, the diff here was pretty small.

The final, final end result: we now do every RPM build on CentOS 6, CentOS 7, and RHEL 8. And the RHEL 8 repo works on Fedora 29.

MonoDevelop 7.7 on Fedora 29.

The only errata: MonoDevelop’s version control addin is built without support for ssh+git:// repositories, because RHEL 8 does not offer a libssh2-devel. Other than that, hooray!

Enrico Zini: Starting tornado on a random free port

Planet Debian - Fri, 08/03/2019 - 12:00am

One of the software I maintain for work is a GUI data browser that uses Tornado as a backend and a web browser as a front-end.

It is quite convenient to start the command and have the browser open automatically on the right URL. It's quite annoying to start the command and be told that the default port is already in use.

I've needed this trick quite often, also when writing unit tests, and it's time I note it down somewhere, so it's easier to find than going through Tornado's unittest code where I found it the first time.

This is how to start Tornado on a free random port:

from tornado.options import define, options
import tornado.netutil
import tornado.httpserver

define("web_port", type=int, default=None, help="listening port for web interface")

application = Application(self.db_url)
if options.web_port is None:
    # Bind to port 0 so the kernel picks a free port, then remember which one we got
    sockets = tornado.netutil.bind_sockets(0, '127.0.0.1')
    self.web_port = sockets[0].getsockname()[:2][1]
    server = tornado.httpserver.HTTPServer(application)
    server.add_sockets(sockets)
else:
    server = tornado.httpserver.HTTPServer(application)
    server.listen(options.web_port)
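To get the convenience mentioned at the top (the browser opening on the right URL by itself), the discovered port can then be handed to the standard-library webbrowser module; this bit is not part of the snippet above, just one way to do it:

import webbrowser

# Open the front-end on whichever port was actually bound:
# the random one stored in self.web_port, or the one given on the command line.
port = self.web_port if options.web_port is None else options.web_port
webbrowser.open("http://127.0.0.1:{}/".format(port))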

Chris Lamb: Book Review: The Sellout

Planet Debian - Thu, 07/03/2019 - 7:03pm

The Sellout (2016)

Paul Beatty

I couldn't put it down… is the go-to cliché for literature, so I found it deeply ironic to catch myself in quite literally this state at times. Winner of the 2016 Man Booker Prize, the first third of this was perhaps the most engrossing and compulsive reading experience I've had since I started seriously reading.

This book opens in medias res within the Supreme Court of the United States where the narrator lights a spliff under the table. As the book unfolds, it is revealed that this very presence was humbly requested by the Court due to his attempt to reinstate black slavery and segregation in his local Los Angeles neighbourhood. Saying that, outlining the plot would be misleading here as it is far more the ad-hoc references, allusions and social commentary that hang from this that make this such an engrossing work.

The trenchant, deep and unreserved satire might perhaps be merely enough for an interesting book, but where it got really fascinating to me (in a rather inside-baseball manner) is how the latter pages of the book somehow don't live up to the first 100. That appears like a straight-up criticism but this flaw is actually part of this book's appeal to me — what actually changed in these latter parts? It's not overuse of the idiom or style and neither is it that it strays too far from the original tone or direction, but I cannot put my finger on why, which has meant the book sticks to this day in my mind. I can almost, just almost, imagine a devilish author such as Paul deliberately crippling his own output for such an effect…

Now, one cannot unreservedly recommend this book. The subject matter itself, compounded by being dealt with in such a flippant manner, will be impenetrable to many and deeply offensive to others, but if you can see your way past that then you'll be sure to get something—whatever that may be—from this work.

Shirish Agarwal: How to deal with insanity ?

Planet Debian - Thu, 07/03/2019 - 3:20pm

There is a friend of mine from college days. As it happens, we drifted apart from each other; he chose one vocation and I chose another. At the time we were going to college, mobile phones were expensive. E-mail was also expensive, but I chose to spend my money on e-mail and other methods of communication. Then about 6-8 months back, out of the blue, my friend called me. It took me some time to place him (I just cannot remember people, and features change a lot), but I do remember experiences. We promised to meet, but at the time we were supposed to meet, it rained like it never rained before. I waited for an hour but wasn’t able to see him. I tried SMS and called him, but no answer. I did try a few times after that but never got him. He used to send me a message once in a while and I used to send a reply back. I was able to talk with his mum some days after that. Yesterday, I was trying to call some people, and his name popped up. On a hunch, I dialed his number and his sister came on the line. She was able to place me (I guess I might have met her 6-8 years back or more), and she told me he has gone insane. While I’m supposed to meet the family on the weekend to learn what happened, I am still not able to process how it happened. I had known he had fallen into some bad company (his mum had shared this titbit) but can’t figure out for the life of me what could have driven him insane. I said I would be coming on Sunday as I have work, but more importantly I am trying to create some sense of space or control so I can digest what’s happened. I know such things happen to people, but not to people I know, not to people I care about. I also came to know that the reason my phone was not able to get through all this time is that he has a shitty Jio connection, or that Jio doesn’t have a good presence where they live.

Now one part of me has a sort of morbid curiosity as to what chain of events led to it, while at the same time I don't know whether I would be able to help them, or what I should say or do. I feel totally helpless. If anybody has any ideas, please yell, comment.
