
Planet Debian

Planet Debian - https://planet.debian.org/

Reproducible builds folks: Reproducible Builds: Weekly report #202

Wed, 13/03/2019 - 3:24pm

Here’s what happened in the Reproducible Builds effort between Sunday March 3 and Saturday March 9 2019:

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages. This week:

Chris Lamb uploaded version 113 to Debian unstable fixing a long list of issues. It included contributions already covered in previous weeks as well as new ones by Chris, including:

  • Provide explicit help when the libarchive system package is missing or “incomplete”. (#50)
  • Explicitly mention when the guestfs module is missing at runtime and we are falling back to a binary diff. (#45)

Vagrant Cascadian made the corresponding update to GNU Guix. []

Packages reviewed and fixed, and bugs filed

Test framework development

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. This week, Holger Levsen made the following improvements:

  • Analyse node maintenance job runs to determine whether to mark nodes offline. []
  • Detect hanging health check runs, not just failed ones. []
  • Allow members of the jenkins UNIX group to sudo(8) to the jenkins user [] and simplify adding users to said group [].
  • Improve the “SHA1 checker” script to deal with packages with more than one version [] and to re-download buildinfo.debian.net’s files if they are older than two weeks. []
  • Node maintenance. [][][][]
  • In the version checker, correctly deal with a rare situation when several, say, diffoscope versions are available in one Debian suite at the same time. []

In addition, Alexander “lynxis” Couzens made a number of changes to our OpenWrt support, including:

  • Add OpenWrt support to our database. []
  • Add a reproducible_openwrt_package_parser.py script. []
  • Strip unreproducible certificates from images. []

Outreachy

Don’t forget that Reproducible Builds is part of the May/August 2019 round of Outreachy. Outreachy provides internships to work on free software. Internships are open to applicants around the world; interns work remotely and are not required to relocate. Interns are paid a stipend of $5,500 for the three-month internship and have an additional $500 travel stipend to attend conferences/events.

So far, we have received more than ten initial requests from candidates. The closing date for applications is April 2nd. More information is available on the application page.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Kees Cook: security things in Linux v5.0

Wed, 13/03/2019 - 12:04am

Previously: v4.20.

Linux kernel v5.0 was released last week! Looking through the changes, here are some security-related things I found interesting:

read-only linear mapping, arm64
While x86 has had a read-only linear mapping (or “Low Kernel Mapping” as shown in /sys/kernel/debug/page_tables/kernel under CONFIG_X86_PTDUMP=y) for a while, Ard Biesheuvel has added them to arm64 now. This means that ranges in the linear mapping that contain executable code (e.g. modules, JIT, etc), are not directly writable any more by attackers. On arm64, this is visible as “Linear mapping” in /sys/kernel/debug/kernel_page_tables under CONFIG_ARM64_PTDUMP=y, where you can now see the page-level granularity:

---[ Linear mapping ]---
...
0xffffb07cfc402000-0xffffb07cfc403000     4K PTE ro NX SHD AF NG UXN MEM/NORMAL
0xffffb07cfc403000-0xffffb07cfc4d0000   820K PTE RW NX SHD AF NG UXN MEM/NORMAL
0xffffb07cfc4d0000-0xffffb07cfc4d1000     4K PTE ro NX SHD AF NG UXN MEM/NORMAL
0xffffb07cfc4d1000-0xffffb07cfc79d000  2864K PTE RW NX SHD AF NG UXN MEM/NORMAL

per-task stack canary, arm
ARM has supported stack buffer overflow protection for a long time (currently via the compiler’s -fstack-protector-strong option). However, on ARM, the compiler uses a global variable for comparing the canary value, __stack_chk_guard. This meant that everywhere in the kernel needed to use the same canary value. If an attacker could expose a canary value in one task, it could be spoofed during a buffer overflow in another task. On x86, the canary is in Thread Local Storage (TLS, defined as %gs:20 on 32-bit and %gs:40 on 64-bit), which means it’s possible to have a different canary for every task since the %gs segment points to per-task structures. To solve this for ARM, Ard Biesheuvel built a GCC plugin to replace the global canary checking code with a per-task relative reference to a new canary in struct thread_info. As he describes in his blog post, the plugin results in replacing:

8010fad8: e30c4488    movw  r4, #50312          ; 0xc488
8010fadc: e34840d0    movt  r4, #32976          ; 0x80d0
...
8010fb1c: e51b2030    ldr   r2, [fp, #-48]      ; 0xffffffd0
8010fb20: e5943000    ldr   r3, [r4]
8010fb24: e1520003    cmp   r2, r3
8010fb28: 1a000020    bne   8010fbb0
...
8010fbb0: eb006738    bl    80129898 <__stack_chk_fail>

with:

8010fc18: e1a0300d    mov   r3, sp
8010fc1c: e3c34d7f    bic   r4, r3, #8128       ; 0x1fc0
...
8010fc60: e51b2030    ldr   r2, [fp, #-48]      ; 0xffffffd0
8010fc64: e5943018    ldr   r3, [r4, #24]
8010fc68: e1520003    cmp   r2, r3
8010fc6c: 1a000020    bne   8010fcf4
...
8010fcf4: eb006757    bl    80129a58 <__stack_chk_fail>

r2 holds the canary saved on the stack and r3 the known-good canary to check against. In the former, r3 is loaded through r4 at a fixed address (0x80d0c488, which “readelf -s vmlinux” confirms is the global __stack_chk_guard). In the latter, it’s coming from offset 24 in struct thread_info (which “pahole -C thread_info vmlinux” confirms is the “stack_canary” field).

per-task stack canary, arm64
The lack of per-task canary existed on arm64 too. Ard Biesheuvel solved this differently by coordinating with GCC developer Ramana Radhakrishnan to add support for a register-based offset option (specifically “-mstack-protector-guard=sysreg -mstack-protector-guard-reg=sp_el0 -mstack-protector-guard-offset=...“). With this feature, the canary can be found relative to sp_el0, since that register holds the pointer to the struct task_struct, which contains the canary. I’m hoping there will be a workable Clang solution soon too (for this and 32-bit ARM). (And it’s also worth noting that, unfortunately, this support isn’t yet in a released version of GCC. It’s expected for 9.0, likely this coming May.)

top-byte-ignore, arm64
Andrey Konovalov has been laying the groundwork with his Top Byte Ignore (TBI) series, which will also help support ARMv8.3’s Pointer Authentication (PAC) and ARMv8.5’s Memory Tagging (MTE). While TBI technically conflicts with PAC, both rely on using “non-VA-space” (Virtual Address) bits in memory addresses, and on getting the kernel ready to deal with ignoring non-VA bits. PAC stores signatures for checking things like return addresses on the stack or stored function pointers on the heap, both to stop overwrites of control flow information. MTE stores a “tag” (or, depending on your dialect, a “color” or “version”) to mark separate memory allocation regions to stop use-after-free and linear overflows. For either of these to work, the CPU has to be put into some form of the TBI addressing mode (though for MTE, it’ll be a “check the tag” mode), otherwise the addresses would resolve into totally the wrong place in memory. Even without PAC and MTE, this byte can be used to store bits that can be checked by software (which is what the rest of Andrey’s series does: adding this logic to speed up KASan).

ongoing: implicit fall-through removal
An area of active work in the kernel is the removal of all implicit fall-through in switch statements. While the C language has a statement to indicate the end of a switch case (“break“), it doesn’t have a statement to indicate that execution should fall through to the next case statement (just the lack of a “break” is used to indicate it should fall through — but this is not always the case), and such “implicit fall-through” may lead to bugs. Gustavo Silva has been the driving force behind fixing these since at least v4.14, with well over 300 patches on the topic alone (and over 20 missing break statements found and fixed as a result of the work). The goal is to be able to add -Wimplicit-fallthrough to the build so that the kernel will stay entirely free of this class of bug going forward. From roughly 2300 warnings, the kernel is now down to about 200. It’s also worth noting that with Stephen Rothwell’s help, this bug has been kept out of linux-next by him sending warning emails to any tree maintainers where a new instance is introduced (for example, here’s a bug introduced on Feb 20th and fixed on Feb 21st).

ongoing: refcount_t conversions
There also continues to be work converting reference counters from atomic_t to refcount_t so they can gain overflow protections. There have been 18 more conversions since v4.15 from Elena Reshetova, Trond Myklebust, Kirill Tkhai, Eric Biggers, and Björn Töpel. While there are more complex cases, the minimum goal is to reduce the Coccinelle warnings from scripts/coccinelle/api/atomic_as_refcounter.cocci to zero. As of v5.0, there are 131 warnings, with the bulk of the remaining areas in fs/ (49), drivers/ (41), and kernel/ (21).

userspace PAC, arm64
Mark Rutland and Kristina Martsenko enabled kernel support for ARMv8.3 PAC in userspace. As mentioned earlier about PAC, this will give userspace the ability to block a wide variety of function pointer overwrites by “signing” function pointers before storing them to memory. The kernel manages the keys (i.e. selects random keys and sets them up), but it’s up to userspace to detect and use the new CPU instructions. The “paca” and “pacg” flags will be visible in /proc/cpuinfo for CPUs that support it.

platform keyring
Nayna Jain introduced the trusted platform keyring, which cannot be updated by userspace. This can be used to verify platform or boot-time things like firmware, initramfs, or kexec kernel signatures, etc.

Edit: added userspace PAC and platform keyring, suggested by Alexander Popov
Edit: tried to clarify TBI vs PAC vs MTE

That’s it for now; please let me know if I missed anything. The v5.1 merge window is open, so off we go! :)

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Daniel Lange: Wiping harddisks in 2019

Tue, 12/03/2019 - 7:53pm

Wiping hard disks is part of my company's policy when returning servers. No exceptions.

Good providers will wipe what they have received back from a customer, but we don't trust that as the hosting / cloud business is under constant budget-pressure and cutting corners (wipefs) is a likely consequence.

With modern SSDs there is "security erase" (man hdparm or see the - as always well maintained - Arch wiki), which is useful if the device is encrypt-by-default. These devices basically "forget" the encryption key, but this also means trusting the device's implementation security, which doesn't seem warranted. Still, after wiping and trimming, a secure erase can't be a bad idea.

Still, there are four things to be aware of when wiping modern hard disks:

  1. Don't forget to add bs=4096 (block size) to dd, as it will still default to 512 bytes, and that makes writing even zeros run at less than half the maximum possible speed. SSDs may benefit from larger block sizes matched to their flash page structure. These are usually 128kB, 256kB, 512kB, 1MB, 2MB and 4MB these days.[1]
  2. All disks can usually be written to in parallel. screen is your friend.
  3. The write speed varies greatly by disk region, so use 2 hours per TB and wipe pass as a conservative estimate. This is better than extrapolating what you see initially in the fastest region of a spinning disk.
  4. The disks have become huge (we run 12TB disks in production now) but the write speed is still somewhere between 100 MB/s and 300 MB/s. So wiping servers on the last day before returning them is no longer possible with disks larger than 4 TB each (and three passes), or 12 TB and one pass (where e.g. fully encrypted content allows just a final zero-wipe).

hard disk size  one pass  three passes
 1 TB            2 h        6 h
 2 TB            4 h       12 h
 3 TB            6 h       18 h
 4 TB            8 h       24 h (one day)
 5 TB           10 h       30 h
 6 TB           12 h       36 h
 8 TB           16 h       48 h (two days)
10 TB           20 h       60 h
12 TB           24 h       72 h (three days)
14 TB           28 h       84 h
16 TB           32 h       96 h (four days)
18 TB           36 h      108 h
20 TB           40 h      120 h (five days)

  [1] As Douglas pointed out correctly in the comment below, these are IT Kilobytes and Megabytes, so 2^10 Bytes and 2^20 Bytes. So Kibibytes and Mebibytes for those firmly in SI territory.

Bits from Debian: New Debian Developers and Maintainers (January and February 2019)

Tue, 12/03/2019 - 1:00pm

The following contributors got their Debian Developer accounts in the last two months:

  • Paulo Henrique de Lima Santana (phls)
  • Unit 193 (unit193)
  • Marcio de Souza Oliveira (marciosouza)
  • Ross Vandegrift (rvandegrift)

The following contributors were added as Debian Maintainers in the last two months:

  • Romain Perier
  • Felix Yan

Congratulations!

John Goerzen: Goodbye to a 15-year-old Debian server

Tue, 12/03/2019 - 10:18am

It was October of 2003 that the server I’ve called “glockenspiel” was born. It was the early days of Linux-based VM hosting, using a VPS provider called memset, running under, of all things, User Mode Linux. Over the years, it has been migrated around, sometimes running on the metal and sometimes in a VM. The operating system has been upgraded in-place using standard Debian upgrades over the years, and is now happily current on stretch (albeit with a 32-bit userland). But it has never been reinstalled. When I’d migrate hosting providers, I’d use tar or rsync to stream glockenspiel across the Internet to its new home.

A lot of people reinstall an OS when a new version comes out. I’ve been doing Debian upgrades with apt for ages, and this one is a case in point. It lingers.

Root’s .profile was last modified in November 2004, and its .bashrc was last modified in December 2004. My own home directory still has a .pinerc, .gopherrc, and .arch-params file. I last edited my .vimrc in 2003 and my .emacs dates back to 2002 (having been copied over from a pre-glockenspiel FreeBSD server).

drwxr-xr-x 3 jgoerzen jgoerzen  4096 Dec  3  2003 irclogs
-rw-r--r-- 1 jgoerzen jgoerzen   373 Dec  3  2003 .vimrc
-rw-r--r-- 1 jgoerzen jgoerzen   651 Nov 27  2003 .reportbugrc
drwx------ 3 jgoerzen jgoerzen  4096 Sep  2  2003 .arch-params
-rw-r--r-- 1 jgoerzen jgoerzen  1115 Aug 23  2003 .gopherrc
drwxr-xr-x 3 jgoerzen jgoerzen  4096 Jul 18  2003 .subversion
-rw-r--r-- 1 jgoerzen jgoerzen 15317 Jun 21  2003 .pinerc

Poking around /etc on glockenspiel is like a trip back in time. Various apache sites still have configuration files around, but have long since been disabled. Over the years, glockenspiel has hosted source code repositories using Subversion, arch, tla, darcs, mercurial and git. It’s hosted websites using Drupal, WordPress, Serendipity, and so forth. It’s hosted gopher sites, websites or mailing lists for various Free Software projects (such as Freeciv), and any number of local charitable organizations. Remnants of an FTP configuration still exist, when people used web design software to build websites for those organizations on their PCs and then upload them to glockenspiel.

-rw-r--r-- 1 root root  268 Dec 25  2005 libnet.cfg
-rw-r----- 1 root root 1305 Nov 11  2004 mrtg.cfg
-rw-r--r-- 1 root root  552 Jul 31  2004 pam.conf

All this has been replaced by a set of Docker containers running my docker-debian-base software. They’re all in git, I can rebuild one of the containers in a few seconds or a few minutes by typing “make”, and there is no cruft from 2002. There are a lot of benefits to this.

And yet, there is a part of me that feels it’s all so… cold. Servers having “personalities” was always a distinctly dubious thing, but these days as we work through more and more layers of virtualization and indirection and become more distant from the hardware, we lose an appreciation for what we have and the many shoulders of giants upon which we stand.

And, so with that, the final farewell to this server that’s been running since 2003:

glockenspiel:/etc# shutdown -P now
Shared connection to glockenspiel.complete.org closed.

Lucas Nussbaum: On Debian frustrations

Tue, 12/03/2019 - 6:51am

Michael Stapelberg writes about his frustrations with Debian, resulting in him reducing his involvement in the project. That’s sad: over the years, Michael has made a lot of great contributions to Debian, addressing hard problems in interesting, disruptive ways.

He makes a lot of good points about Debian, with which I’m generally in agreement. An interesting exercise would be to rank those issues: what are, today, the biggest issues to solve in Debian? I’m nowadays not following Debian closely enough to be able to do that exercise, but I would love to read others’ thoughts (bonus points if it’s in a DPL platform, given that it seems that we have a pretty quiet DPL election this year!)

Most of Michael’s points are about the need for modernization of Debian’s infrastructure and workflows, and I agree that it’s sad that we have made little progress in that area over the last decade. And I think that it’s important to realize that providing alternatives to developers has a cost, and that when a large proportion of developers or packages have switched to doing something (using git, using dh, not using 1.0-based patch systems such as dpatch, …), there are huge advantages in standardizing and pushing this on everybody.

There are a few reasons why this is harder than it sounds, though.

First, there’s Debian culture of stability and technical excellence. “Above all, do not harm” could also apply to the mindset of many Debian Developers. On one hand, that’s great, because this focus on not breaking things probably contributes a lot to our ability to produce something that works as well as Debian. But on the other hand, it means that we often seek solutions that limit short-term damage or disruption, but are far from optimal on the long term.
An example is our packaging software stack. I wrote most of the introduction to Debian packaging found in the packaging-tutorial package (which is now translated into six languages), but am still amazed by all the unjustified complexity. We tend to fix problems by adding additional layers of software on top of existing layers, rather than by fixing/refactoring the existing layers. For example, the standard way to package software today is using dh. However, dh stands on dh_* commands (even if it does not call them directly, contrary to what CDBS did), and all the documentation on dh is still structured around those commands: if you want to install an additional file in a package, probably the simplest way to do that is to add it to debian/packagename.install, but this is documented in the manpage for dh_install, which you are not actually going to call because dh abstracts that away for you! I realize that this could be better explained in packaging-tutorial… (patches welcome)
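To illustrate the layering (package and file names here are hypothetical): a modern debian/rules delegates everything to the dh sequencer, and is often just this:

```make
#!/usr/bin/make -f
# debian/rules: the dh sequencer runs the dh_* commands in order;
# you rarely invoke any of them yourself.
%:
	dh $@
```

An extra file is then shipped by listing it in a file like debian/foo.install (one `source-path destination-dir/` pair per line), which dh_install picks up automatically during the sequence, even though you never call dh_install directly.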

There’s also the fact that Debian is very large, very diverse, and hard to test. It’s very easy to break things silently in Debian, because many of our packages are niche packages, or don’t have proper test suites (because not everything can be easily tested automatically). I don’t see how the workflows for large-scale changes that Michael describes could work in Debian without first getting much better at detecting regressions.

Still, there’s a lot of innovation going on inside packaging teams, with the development of language-specific packaging helpers (listed on the AutomaticPackagingTools wiki page). However, this siloed organization tends to fragment the expertise of the project about what works and what doesn’t: because packaging teams don’t talk much together, they often solve the same problems in slightly different ways. We probably need more ways to discuss interesting stuff going on in teams, and to consolidate what can be shared between teams. The fact that many people have stopped following debian-devel@ nowadays is probably not helping…

The addition of salsa.debian.org is probably the best thing that happened to Debian recently. How much this ends up being used for improving our workflows remains to be seen:

  • We could use GitLab merge requests to track patches, rather than attachments in the BTS. Some tooling to provide an overview of open MRs in various dashboards is probably needed (and unfortunately GitLab’s API is very slow when dealing with large numbers of projects).
  • We could probably have a way to move the package upload to a gitlab-ci job (for example, by committing the signed changes file to a specific branch, similar to what pristine-tar does, but there might be a better way).
  • I would love to see a team experiment with a monorepo approach (instead of the “one git repo per package + mr to track them all” approach). For teams with lots of small packages there are probably a lot of things to win with such an organization.

 

Shirish Agarwal: Processing Insanity

Tue, 12/03/2019 - 6:08am

This blog post starts from where the previous one ended a few days ago. I am fortunate, and to an extent even blessed, that I have usually had honest advice, but sometimes even advice falls short when you see some harsh realities. Three people replied; you can read Mark's and Frode's replies as shared on that blog post.

I even shared it with a newly-found acquaintance in the hope that there might be some ways, some techniques, or something that would make more sense, as this is something I had never heard of within the social circles I have been part of, so I was feeling more than a bit ill-prepared. When I shared it with Paramitra (a gentleman I engaged as part of another socio-techno intervention meetup I am hoping to attend soon), he also shared some sound advice which helped me mentally prepare –

So, if you’re serious about what you can do with/for this friend of yours and his family, I do have several suggestions. 

1. To the best of my knowledge, and I have some exposure, no one goes ‘insane’ just like that. There has to be a diagnosis. Please find out from his family if he’s been taken to a psychiatrist. If not, that’s the first thing you can convince his family to do. Be with them, help them with that task.

2. If he’s been diagnosed, find out what that is. Most psychiatric disorders can be brought to a manageable level with proper medications and care. But any suggestions I can offer on that depends on the diagnosis.

3. However, definitely inform his family that tying him up, keeping him locked etc will only worsen the situation. He needs medical and family care – not incarceration, unless the doctor prescribes institutionalized treatment.

Hope this helps. Please be a friend to him and his family at this hour of crisis. As a nation, our understanding of mental health per se is poor, to say the least.

Paramita Banerjee

So armed with what I thought was sufficient knowledge I went to my friend’s home. The person whom I met could not be the same person whom I knew as a friend in college. During college, he had a reputation of a toughie and he looked and acted the part. So, in many ways it was the unlikeliest of friendships. I shared with him tips of Accountancy, Economics etc. and he was my back. He was also very quick on the re-partees so we used to have quite a fun time exchanging those. For the remainder of the exchange I will call my friend ‘Amar’ and his sister ‘Kritika’ as they have been the names I like.

The person whom I met was a mere shadow of the person I knew. Amar had no memory of who I was. He had trouble comprehending written words and mostly mumbled. Amar did say something interesting and fresh once in a while, but it was like talking mostly to a statue. He stank and was mostly asleep even when he was awake. Amar couldn't look straight at me, and he seemed to fear that if I touched him or he touched me he would infect me. He had long nails as well. Kritika told me that he does have a bath once every few days but takes 3-4 hours, sleeping in there as well. The same happens when he goes to the toilet. The saving grace is that they have their own toilet and bathroom within the house. I have no comprehension of how they might be adjusting, all in that small space.

I learned from Kritika what I hadn't known about him and the family over the last ten-odd years. His mum died, just a few weeks back, in the same room where he was, and he had no comprehension that she had died. He was the middle of three children. The elder sister is now a widow, and her three daughters are living with them; that leaves Amar, his father, and the youngest sister, who is trying desperately to keep it all together, though I don't know what she will be able to do. That is 7 mouths to feed, all with their own needs and wants beyond basic existence, and they are from a low-income group. The elder sister has a lot of body pains, although I was not able to ask from what. I do know nursing is a demanding profession, and from my hospital stay I know that at times nurses have to work around the clock, 24×7, doing things no normal person can do.

Two of the nieces are nearing their teenage years, and I was told of sexually suggestive remarks made to them by one of the neighbors. The father is a drunk, the brother-in-law who died was a drunk, and the brother, Amar, had consumed lots of cannabis seeds. Apparently, during the final-year exams, where we were given different centers, he went to Bombay/Mumbai to try his hand at movies, then went to Delhi and was into selling some sort of chemicals from one company to another.

Maybe it was ‘bad company’, as her mother had told me on the phone, or maybe it was work he was not happy with that led him to cannabis addiction. I have no way of knowing anything of his past. I did ask Kritika if she could dig out any visiting cards or something. I do have enough friends in Delhi, so it's possible I can find out how things came to be this bad.

There was a small incident which also left me a bit shaken. The place where they live is called Pavana Nagar. It sits at the back of the Pimpri-Chinchwad industrial township, so most of the water that the town/village people consume carries a lot of chemical effluents. The local councillor (called a nagar sevak) knows this but either can't or won't do anything about it. There are a lot of diseases due to the pollutants in the water. Kritika suspects, or knows, that the grains they purchase are grown with the same water, but she is helpless to do anything about it.

The incident is a small one, but I wanted to share a fuller picture before telling it. I had left my bag, a sort of sling bag, where I was sitting in the room. Kritika took me to another building to show me the surrounding areas (as I was new there and had evinced interest in knowing the area), and when we came back, my bag was not to be found. After searching for a while, I got the bag back, but there was no money in it (I usually keep INR 100-200 in case money gets stolen from me), and the goodies (sweet and sour both) that I keep in case I feel hungry and there's nothing around were missing too. The father pretended he had put the bag away by mistake. I didn't say anything, because it would have been a loss of face for the younger sister, although it's possible that she knows or had some suspicions. With the young kids around, it would have been awkward to bring it up, and I didn't really want to make a scene. It wasn't much, just something I didn't expect.

Also, later I came to know that whenever the father drinks, he creates a lot of drama and says whatever comes to his mind. It is usually dirty, nasty and hurtful, from what I could gather.

Due to my extended stay in hospital for epilepsy, I had come to know of a couple of medical schemes meant for weaker sections of society, and I shared what I knew of them. While I hope to talk with Kritika more, I don't see a way out of the current mess they are in. The sense I got from her is that she is fighting too many battles, and I don't know how she can win them all. I also told her about NMRI. I have no clue where to go from here. I don't want to generalize, but there may be many Amars and Kritikas in our midst or around us whose stories we don't know. If they could just have some decent water and no mosquitoes, it would probably enhance their lives quite a bit and give them a bit more agency. There is one other thing Kritika shared which was interesting: she has experience working back-office for an IT company, but now, looking after the family, she just couldn't do the same thing.

Note and disclaimer – The names ‘Amar’ and ‘Kritika’ are just names I chose. They have been given to –

a. Give privacy to the people involved.
b. Embody substance in the people and the experience, so they are not nameless.


Steve McIntyre: Debian BSP in Cambridge, 08 - 10 March 2019

Tue, 12/03/2019 - 3:08am

Lots of snacks, lots of discussion, lots of bugs fixed! Yet another BSP at my place.

John Goerzen: A (Partial) Defense of Debian

Mon, 11/03/2019 - 12:44pm

I was sad to read on his blog that Michael Stapelberg is winding down his Debian involvement. In his post, he outlined some critiques of Debian. In this post, I want to acknowledge that he is on point with some of them, but also push back on others. Some of this is also a response to some of the comments on Hacker News.

I’d first like to discuss some of the assumptions I believe his post rests on: namely that “we’ve always done it this way” isn’t a good reason to keep doing something. I completely agree. However, I would also say that “this thing is newer, so it’s better and we should use it” is also poor reasoning. Newer is not always better. Sometimes it is, sometimes it’s not, but deeper thought is generally required.

Also, when thinking about why things are a certain way or why people prefer certain approaches, we must often ask “why does that make sense to them?” So let’s dive in.

Debian’s Perspective: Stability

Stability, of course, can mean software that tends not to crash. That’s important, but there’s another aspect of it that is also important: software continuing to act the same over time. For instance, if you wrote a C program in 1985, will that program still compile and run today? Granted, that’s a bit of an extreme example, but the point is: to what extent can you count on software you need continuing to operate without forced change?

People that have been sysadmins for a long period of time will instantly recognize the value of this kind of stability. Change is expensive and difficult, and often causes outages and incidents as bugs are discovered when software is adapted to a new environment. Being able to keep up-to-date with security patches while also expecting little or no breaking changes is a huge win. Maintaining backwards compatibility for old software is also important.

Even from a developer’s perspective, lack of this kind of stability is why I have handed over maintainership of most of my Haskell software to others. Some of my Haskell projects were basically “done”, and every so often I’d get bug reports that they no longer compile due to some change in the base library. Occasionally I’d have patches with those bug reports, but they would inevitably break compatibility with older versions (even though the language has plenty of good support for something akin to a better version of #ifdefs to easily deal with this.) The culture of stability was not there.

This is not to say that this kind of stability is always good or always bad. In the Haskell case, there is value to be had in fixing designs that are realized to be poor and removing cruft. Some would say that strcpy() should be removed from libc for this reason. People that want the latest versions of gimp or whatever are probably not going to be running Debian stable. People that want to install a machine and not really be burdened by it for a couple of years are.

Debian has, for pretty much its entire life, had a large proportion of veteran sysadmins and programmers as part of the organization. Many of us have learned the value of this kind of stability from the school of hard knocks – over and over again. We recognize the value of something that just works, that is so stable that things like unattended-upgrades are safe and reliable. With many other distros, something like this isn’t even possible; when your answer to a security bug is to “just upgrade to the latest version”, just trusting a cron job to do it isn’t going to work because of the higher risk.
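That reliability is also why unattended upgrades take so little configuration on Debian. The sketch below shows the two APT settings the unattended-upgrades package turns on; the real file is /etc/apt/apt.conf.d/20auto-upgrades, but the example writes a temporary copy so it is self-contained, and the exact keys should be checked against the package's own documentation.

```shell
# Sketch: the APT configuration that enables unattended upgrades on Debian.
# The real file lives at /etc/apt/apt.conf.d/20auto-upgrades; a temporary
# copy is written here so the example is self-contained.
conf=$(mktemp)
cat > "$conf" <<'EOF'
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF
grep -c '^APT::Periodic' "$conf"   # prints 2
```

With those two lines in place, a daily cron/systemd timer fetches the package lists and installs pending security updates without any operator action.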

Recognizing Personal Preference

Writing about Debian’s bug-tracking tool, Michael says “It is great to have a paper-trail and artifacts of the process in the form of a bug report, but the primary interface should be more convenient (e.g. a web form).” This is representative of a personal preference. A web form might be more convenient for Michael — I have no reason to doubt this — but is it more convenient for everyone? I’d say no.

In his linked post, Michael also writes: “Recently, I was wondering why I was pushing off accepting contributions in Debian for longer than in other projects. It occurred to me that the effort to accept a contribution in Debian is way higher than in other FOSS projects. My remaining FOSS projects are on GitHub, where I can just click the “Merge” button after deciding a contribution looks good. In Debian, merging is actually a lot of work: I need to clone the repository, configure it, merge the patch, update the changelog, build and upload.”

I think that’s fair for someone that wants a web-based workflow. Allow me to present the opposite: for me, I tend to push off contributions that only come through Github, and the reason is that, for me, they’re less convenient. It’s also harder for me to contribute to Github projects than Debian ones. Let’s look at this – say I want to send in a small patch for something. If it’s Github, it’s going to look like this:

  1. Go to the website for the thing, click fork
  2. Now clone that fork or add it to my .git/config, hack, and commit
  3. Push the commit, go back to the website, and submit a PR
  4. Github’s email integration is so poor that I basically have to go back to the website for most parts of the conversation. I can do little from the comfort of mu4e.
  5. Remember to clean up my fork after the patch is accepted or rejected.

Compare that to how I’d contribute with Debian:

  1. Hack (and commit if I feel like it)
  2. Type “reportbug foo”, attach my patch
  3. Followup conversation happens directly in email where it’s convenient to reply

How about as the developer? Github constantly forces me to their website. I can’t very well work on bug reports, etc. without a strong Internet connection. And it’s designed to push people into using their tools and their interface, which is inferior in a lot of ways to a local interface – but then the process to pull down someone else’s set of patches involves a lot of typing and clicking, much more than would be involved with a simple git format-patch. In short, I don’t have my shortcut keys, my environment, etc. for reviewing things – the roadblocks are there to make me use theirs.

If I get a contribution from someone in debbugs, it’s so much easier. It’s usually just git apply or patch -p1 and boom, I can see exactly what’s changed and review it. A review comment is just a reply to an email. I don’t have to ever fire up a web browser. So much more convenient.
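The whole mail-based review flow can be sketched end to end. The following is a self-contained illustration, not a real package: a throwaway repository stands in for the package, and the exported patch file stands in for a debbugs attachment.

```shell
# Self-contained sketch of the patch-by-mail review flow: build a throwaway
# repo, produce a patch the way a contributor would, then review and apply it
# the way a maintainer would with a debbugs attachment.
set -e
dir=$(mktemp -d); cd "$dir"
git init -q upstream; cd upstream
git config user.email you@example.com; git config user.name You
echo 'hello' > README; git add README; git commit -qm 'initial'

# Contributor side: make a change and export it as a mail-ready patch.
echo 'a fix' >> README; git commit -qam 'fix: extend README'
git format-patch -1 -o ../patches >/dev/null
git reset -q --hard HEAD^          # rewind: the maintainer hasn't seen it yet

# Maintainer side: inspect, then apply.
git apply --stat ../patches/0001-*.patch   # quick review of what changed
git am -q ../patches/0001-*.patch          # apply with authorship preserved
```

The review step is `git apply --stat` (or just reading the mail), and a review comment is a reply; no fork, no web UI.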

I don’t write this to say Michael is wrong about what’s more convenient for him. I write it to say he’s wrong about what’s more convenient for me (or others). It may well be the case that debbugs is so inconvenient that it pushes him to leave while github is so inconvenient for others that it pushes them to avoid it.

I will note before leaving this conversation that there are some command-line tools available for Github and a web interface to debbugs, but it is still clear that debbugs is a lot easier to work with from within my own mail reader and tooling, and Github is a lot easier to work with from within a web browser.

The case for reportbug

I remember the days before we had reportbug. Over and over and over again, I would get bug reports from users that wouldn’t have the basic information needed to investigate. reportbug gathers information from the system: package versions, configurations, versions of dependencies, etc. A simple web form can’t do this because it doesn’t have a local agent. From a developer’s perspective, trying to educate users on how to do this over and over is an unending, frustrating, and counter-productive task. Even if it’s clearly documented, the battle will be fought over and over. From a user’s perspective, having your bug report ignored or being told you’re doing it wrong is frustrating too.

So I think reportbug is much nicer than having some github-esque web-based submission form. Could it be better? Sure. I think a mode to submit the reportbug report via HTTPS instead of email would make sense, since a lot of machines no longer have local email configured.
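To make the contrast concrete, here is a toy sketch of the kind of report only a local agent can assemble. The fields and the gather_report helper are illustrative stand-ins, not reportbug's actual implementation.

```shell
# Toy sketch of local information gathering, in the spirit of reportbug.
# "somepkg" and the fields below are illustrative, not reportbug's real output.
gather_report() {
  pkg=$1
  printf 'Package: %s\n' "$pkg"
  printf 'Kernel: %s\n' "$(uname -sr)"       # local facts a web form can't see
  printf 'Locale: %s\n' "${LANG:-unset}"
  printf 'Architecture: %s\n' "$(uname -m)"
}
gather_report somepkg
```

Every field here is something the reporter would otherwise have to be taught to look up and paste in by hand.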

Where Debian Should Improve

I agree that there are areas where Debian should improve.

Michael rightly identifies the “strong maintainer” concept as a source of trouble. I agree. Though we’ve been making slow progress over time with things like low-threshold NMU and maintainer teams, the core assumption that a maintainer has a lot of power over particular packages is one that needs to be thrown out.

Michael, and commentators on HN, both identify things that boil down to documentation problems. I have heard so many times that it’s terribly hard to package something up for Debian. That’s really not the case for most things; dh_make and similar tools will do the right thing for many packages, and all you have to do is add some package descriptions and such. I wrote a “concise guide” to packaging for my workplace that ran to only about 2 pages. But it is true that the documentation on debian.org doesn’t clearly offer this simple path, so people are put off and not aware of it.

Then there were the comments about how hard it is to become a Debian developer, and how easy it is to submit a patch to NixOS or some such. The fact is, these are different things; one does not need to be a Debian Developer to contribute to Debian. A DD is effectively the same as a patch approver elsewhere; these are the people who can ultimately approve software for insertion into the OS, and you DO want an element of trust there.

Debian could do more to offer concise guides for drive-by contributions and for building packages that follow standard language community patterns, both of which can be done without much knowledge of packaging tools and the inner workings of the project.

Finally, I have distanced myself from conversations in Debian for some time, due to lack of time to participate in what I would call excessive bikeshedding. This is hardly unique to Debian, but I am glad to see the project putting more effort into expecting good behavior from conversations of late.

Noah Meyerhans: Further Discussion for DPL!

Sun, 10/03/2019 - 11:18pm

Further Discussion builds consensus within Debian!

Further Discussion gets things done!

Further Discussion welcomes diverse perspectives in Debian!

We'll grow the community with Further Discussion!

Further Discussion has been with Debian from the very beginning! Don't you think it's time we gave Further Discussion its due, after all the things Further Discussion has accomplished for the project?

Somewhat more seriously, have we really exhausted the community of people interested in serving as Debian Project Leader? That seems unfortunate. I'm not worried about it from a technical point of view, as Debian has established ways of operating without a DPL. But the lack of interest suggests some kind of stagnation within the community. Or maybe this is just the cabal trying to wrest power from the community by stifling the vote. Is there still a cabal?

Markus Koschany: My Free Software Activities in February 2019

Sun, 10/03/2019 - 9:48pm

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games
  • February was the last month to package new upstream releases before the full freeze, if the changes were not too invasive of course :-). Atomix, gamine, simutrans, simutrans-pak64, simutrans-pak128.britain and hitori qualified.
  • I sponsored a new version of mgba, a Game Boy Advance emulator, for Reiner Herrmann and worked together with Bret Curtis on wildmidi and openmw. The latest upstream version resolved a long-standing bug and made it possible that the game engine, a reimplementation of The Elder Scrolls III: Morrowind, will be part of a Debian stable release for the first time.
  • Johann Suhter reported a bug in one of brainparty’s minigames and also provided the patch. All I had to do was upload it. Thanks. (#922485)
  • I corrected a minor cross-build FTBFS in openssn. Patch by Helmut Grohne. (#914724)
  • I released a new version of debian-games and updated the dependency list of our games metapackages. This is almost the final version but expect another release in one or two months.
Debian Java

Misc

Debian LTS

This was my thirty-sixth month as a paid contributor and I have been paid to work 19.5 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 25.02.2019 until 03.03.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVE in sox, collabtive, libkohana2-php, ldb, libpodofo, libvirt, openssl, wordpress, twitter-bootstrap, ceph, ikiwiki, edk2, advancecomp, glibc, spice-xpi and zabbix.
  • DLA-1675-1. Issued a security update for python-gnupg fixing 1 CVE.
  • DLA-1676-1. Issued a security update for unbound fixing 1 CVE.
  • DLA-1696-1. Issued a security update for ceph fixing 2 CVE.
  • DLA-1701-1. Issued a security update for openssl fixing 1 CVE.
  • DLA-1702-1. Issued a security update for advancecomp fixing 2 CVE.
  • DLA-1703-1. Issued a security update for jackson-databind fixing 10 CVE.
  • DLA-1706-1. Issued a security update for poppler fixing 5 CVE.
ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my ninth month and I have been paid to work 15 hours on ELTS.

  • I was in charge of our ELTS frontdesk from 25.02.2019 until 03.03.2019 and I triaged CVE in file, gnutls26, nettle, libvirt, busybox and eglibc.
  • ELA-84-1. Issued a security update for gnutls26 fixing 4 CVE. I also investigated CVE-2018-16869 in nettle and also CVE-2018-16868 in gnutls26. After some consideration I decided to mark these issues as ignored because the changes were invasive and would have required intensive testing. The benefits appeared small in comparison.
  • ELA-88-1. Issued a security update for openssl fixing 1 CVE.
  • ELA-90-1. Issued a security update for libsdl1.2 fixing 11 CVE.
  • I started to work on sqlalchemy which requires a complex backport to fix a possible SQL injection vulnerability.

Thanks for reading and see you next time.

Michael Stapelberg: Winding down my Debian involvement

Sun, 10/03/2019 - 9:43pm

This post is hard to write, both in the emotional sense but also in the “I would have written a shorter letter, but I didn’t have the time” sense. Hence, please assume the best of intentions when reading it—it is not my intention to make anyone feel bad about their contributions, but rather to provide some insight into why my frustration level ultimately exceeded the threshold.

Debian has been in my life for well over 10 years at this point.

A few weeks ago, I visited some old friends at the Zürich Debian meetup after a multi-year period of absence. On my bike ride home, it occurred to me that the topics of our discussions had remarkable overlap with my last visit. We had a discussion about the merits of systemd, which took a detour to respect in open source communities, returned to processes in Debian and eventually culminated in democracies and their theoretical/practical failings. Admittedly, that last one might be a Swiss thing.

I say this not to knock on the Debian meetup, but because it prompted me to reflect on what feelings Debian is evoking lately and whether it’s still a good fit for me.

So I’m finally making a decision that I should have made a long time ago: I am winding down my involvement in Debian to a minimum.

What does this mean?

Over the coming weeks, I will:

  • transition packages to be team-maintained where it makes sense
  • remove myself from the Uploaders field on packages with other maintainers
  • orphan packages where I am the sole maintainer

I will try to keep up best-effort maintenance of the manpages.debian.org service and the codesearch.debian.net service, but any help would be much appreciated.

For all intents and purposes, please treat me as permanently on vacation. I will try to be around for administrative issues (e.g. permission transfers) and questions addressed directly to me, provided they are easy enough to answer.

Why?

When I joined Debian, I was still studying, i.e. I had luxurious amounts of spare time. Now, over 5 years of full time work later, my day job taught me a lot, both about what works in large software engineering projects and how I personally like my computer systems. I am very conscious of how I spend the little spare time that I have these days.

The following sections each deal with what I consider a major pain point, in no particular order. Some of them influence each other—for example, if changes worked better, we could have a chance at transitioning packages to be more easily machine readable.

Change process in Debian

The last few years, my current team at work conducted various smaller and larger refactorings across the entire code base (touching thousands of projects), so we have learnt a lot of valuable lessons about how to effectively do these changes. It irks me that Debian works almost the opposite way in every regard. I appreciate that every organization is different, but I think a lot of my points do actually apply to Debian.

In Debian, packages are nudged in the right direction by a document called the Debian Policy, or its programmatic embodiment, lintian.

While it is great to have a lint tool (for quick, local/offline feedback), it is even better to not require a lint tool at all. The team conducting the change (e.g. the C++ team introduces a new hardening flag for all packages) should be able to do their work transparent to me.

Instead, currently, all packages become lint-unclean, all maintainers need to read up on what the new thing is, how it might break, whether/how it affects them, manually run some tests, and finally decide to opt in. This causes a lot of overhead and manually executed mechanical changes across packages.

Notably, the cost of each change is distributed onto the package maintainers in the Debian model. At work, we have found that the opposite works better: if the team behind the change is put in power to do the change for as many users as possible, they can be significantly more efficient at it, which reduces the total cost and time a lot. Of course, exceptions (e.g. a large project abusing a language feature) should still be taken care of by the respective owners, but the important bit is that the default should be the other way around.

Debian is lacking tooling for large changes: it is hard to programmatically deal with packages and repositories (see the section below). The closest to “sending out a change for review” is to open a bug report with an attached patch. I thought the workflow for accepting a change from a bug report was too complicated and started mergebot, but only Guido ever signaled interest in the project.

Culturally, reviews and reactions are slow. There are no deadlines. I literally sometimes get emails notifying me that a patch I sent out a few years ago (!!) is now merged. This turns projects from a small number of weeks into many years, which is a huge demotivator for me.

Interestingly enough, you can see artifacts of the slow online activity manifest itself in the offline culture as well: I don’t want to be discussing systemd’s merits 10 years after I first heard about it.

Lastly, changes can easily be slowed down significantly by holdouts who refuse to collaborate. My canonical example for this is rsync, whose maintainer refused my patches to make the package use debhelper purely out of personal preference.

Granting so much personal freedom to individual maintainers prevents us as a project from raising the abstraction level for building Debian packages, which in turn makes tooling harder.

How would things look in a better world?

  1. As a project, we should strive towards more unification. Uniformity still does not rule out experimentation; it just changes the trade-off from easier experimentation and harder automation to harder experimentation and easier automation.
  2. Our culture needs to shift from “this package is my domain, how dare you touch it” to a shared sense of ownership, where anyone in the project can easily contribute (reviewed) changes without necessarily even involving individual maintainers.

To learn more about what successful large changes can look like, I recommend my colleague Hyrum Wright’s talk “Large-Scale Changes at Google: Lessons Learned From 5 Yrs of Mass Migrations”.

Fragmented workflow and infrastructure

Debian generally seems to prefer decentralized approaches over centralized ones. For example, individual packages are maintained in separate repositories (as opposed to in one repository), each repository can use any SCM (git and svn are common ones) or no SCM at all, and each repository can be hosted on a different site. Of course, what you do in such a repository also varies subtly from team to team, and even within teams.

In practice, non-standard hosting options are used rarely enough to not justify their cost, but frequently enough to be a huge pain when trying to automate changes to packages. Instead of using GitLab’s API to create a merge request, you have to design an entirely different, more complex system, which deals with intermittently (or permanently!) unreachable repositories and abstracts away differences in patch delivery (bug reports, merge requests, pull requests, email, …).

Wildly diverging workflows are not just a temporary problem either. I participated in long discussions about different git workflows during DebConf 13, and gather that there were similar discussions in the meantime.

Personally, I cannot keep enough details of the different workflows in my head. Every time I touch a package that works differently than mine, it frustrates me immensely to re-learn aspects of my day-to-day.

After noticing workflow fragmentation in the Go packaging team (which I started), I tried fixing this with the workflow changes proposal, but did not succeed in implementing it. The lack of effective automation and slow pace of changes in the surrounding tooling despite my willingness to contribute time and energy killed any motivation I had.

Old infrastructure: package uploads

When you want to make a package available in Debian, you upload GPG-signed files via anonymous FTP. There are several batch jobs (the queue daemon, unchecked, dinstall, possibly others) which run on fixed schedules (e.g. dinstall runs at 01:52 UTC, 07:52 UTC, 13:52 UTC and 19:52 UTC).

Depending on timing, I estimated that you might wait for over 7 hours (!!) before your package is actually installable.

What’s worse for me is that feedback to your upload is asynchronous. I like to do one thing, be done with it, move to the next thing. The current setup requires a many-minute wait and costly task switch for no good technical reason. You might think a few minutes aren’t a big deal, but when all the time I can spend on Debian per day is measured in minutes, this makes a huge difference in perceived productivity and fun.

The last communication I can find about speeding up this process is ganneff’s post from 2008.
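That estimate falls out of the fixed schedule: dinstall runs every six hours, so just missing a run already costs close to six, before the other queue stages and the mirror push are counted. A small sketch of the arithmetic (schedule as given above; everything else is simplification):

```shell
# Minutes until the next dinstall run (01:52, 07:52, 13:52, 19:52 UTC),
# given a time as HH:MM. Schedule from the post; purely arithmetic.
mins_to_next_dinstall() {
  h=${1%:*}; m=${1#*:}
  now=$((10#$h * 60 + 10#$m))
  for run in 112 472 832 1192 1552; do   # 01:52, 07:52, 13:52, 19:52, +1 day
    if [ "$run" -gt "$now" ]; then echo $((run - now)); return; fi
  done
}
mins_to_next_dinstall 02:00   # just missed the 01:52 run: 352 min (~6 h)
mins_to_next_dinstall 19:50   # just before the 19:52 run: 2 min
```

Add queue processing before dinstall and mirror propagation after it, and an unlucky upload can easily sit for the seven-plus hours described.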

How would things look in a better world?

  1. Anonymous FTP would be replaced by a web service which ingests my package and returns an authoritative accept or reject decision in its response.
  2. For accepted packages, there would be a status page displaying the build status and when the package will be available via the mirror network.
  3. Packages should be available within a few minutes after the build completed.
Old infrastructure: bug tracker

I dread interacting with the Debian bug tracker. debbugs is a piece of software (from 1994) which is only used by Debian and the GNU project these days.

Debbugs processes emails, which is to say it is asynchronous and cumbersome to deal with. Despite running on the fastest machines we have available in Debian (or so I was told when the subject last came up), its web interface loads very slowly.

Notably, the web interface at bugs.debian.org is read-only. Setting up a working email setup for reportbug(1) or manually dealing with attachments is a rather big hurdle.

For reasons I don’t understand, every interaction with debbugs results in many different email threads.

Aside from the technical implementation, I also can never remember the different ways that Debian uses pseudo-packages for bugs and processes. I need them too rarely to establish a mental model of how they are set up, or working memory of how they are used, but frequently enough to be annoyed by this.

How would things look in a better world?

  1. Debian would switch from a custom bug tracker to a (any) well-established one.
  2. Debian would offer automation around processes. It is great to have a paper-trail and artifacts of the process in the form of a bug report, but the primary interface should be more convenient (e.g. a web form).
Old infrastructure: mailing list archives

It baffles me that in 2019, we still don’t have a conveniently browsable threaded archive of mailing list discussions. Email and threading is more widely used in Debian than anywhere else, so this is somewhat ironic. Gmane used to paper over this issue, but Gmane’s availability over the last few years has been spotty, to say the least (it is down as I write this).

I tried to contribute a threaded list archive, but our listmasters didn’t seem to care or want to support the project.

Debian is hard to machine-read

While it is obviously possible to deal with Debian packages programmatically, the experience is far from pleasant. Everything seems slow and cumbersome. I have picked just 3 quick examples to illustrate my point.

debiman needs help from piuparts in analyzing the alternatives mechanism of each package to display the manpages of e.g. psql(1). This is because maintainer scripts modify the alternatives database by calling shell scripts. Without actually installing a package, you cannot know which changes it does to the alternatives database.

pk4 needs to maintain its own cache to look up package metadata based on the package name. Other tools parse the apt database from scratch on every invocation. A proper database format, or at least a binary interchange format, would go a long way.

Debian Code Search wants to ingest new packages as quickly as possible. There used to be a fedmsg instance for Debian, but it no longer seems to exist. It is unclear where to get notifications from for new packages, and where best to fetch those packages.
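For illustration, "parsing the apt database from scratch" means re-reading RFC-822-style stanzas like the one below with ad-hoc code on every invocation. The stanza is invented sample data in the real file format, and the awk one-liner is the kind of throwaway parser each tool ends up carrying:

```shell
# Sketch: extracting a field from an apt Packages stanza the ad-hoc way.
# The stanza below is invented sample data in the real file format.
stanza=$(mktemp)
cat > "$stanza" <<'EOF'
Package: pk4
Version: 6
Architecture: amd64
Description: example description for an invented package entry
EOF
awk -F': ' '$1 == "Version" { print $2 }' "$stanza"   # prints 6
```

Multiply that by tens of thousands of stanzas and every tool's private cache, and a proper database or binary interchange format starts to look very attractive.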

Complicated build stack

See my “Debian package build tools” post. It really bugs me that the sprawl of tools is not seen as a problem by others.

Developer experience pretty painful

Most of the points discussed so far deal with the experience in developing Debian, but as I recently described in my post “Debugging experience in Debian”, the experience when developing using Debian leaves a lot to be desired, too.

I have more ideas

At this point, the article is getting pretty long, and hopefully you got a rough idea of my motivation.

While I described a number of specific shortcomings above, the final nail in the coffin is actually the lack of a positive outlook. I have more ideas that seem really compelling to me, but, based on how my previous projects have been going, I don’t think I can make any of these ideas happen within the Debian project.

I intend to publish a few more posts about specific ideas for improving operating systems here. Stay tuned.

Lastly, I hope this post inspires someone, ideally a group of people, to improve the developer experience within Debian.

Andy Simpkins: Debian BSP: Cambridge continued

Sun, 10/03/2019 - 4:06pm

I am slowly making progress.  I am quite pleased with myself for slowly moving beyond triage, test, verify to now beginning to understand what is going on with some bugs and being able to suggest fixes :-)  That said, my C++ foo is poor; add in QT as well and #917711 is beyond me.

Not only does quite a lot of work get done at a BSP; it is also very good to catch up with people, especially those who traveled to Cambridge from out of the area.  Thank you for taking your weekend to contribute to making Buster.

I must also take the opportunity to thank Sledge and Randombird for opening up their home to host the BSP and provide overnight accommodation as well.

More hacking is still going on…  Some different people from yesterday.

Differing people: ++smcv --andrewsh ++cjwatson --lamby

Michal Čihař: Weblate 3.5.1

Sun, 10/03/2019 - 4:00pm

Weblate 3.5.1 has been released today. Compared to the 3.5 release it brings several bug fixes and performance improvements.

Full list of changes:

  • Fixed Celery systemd unit example.
  • Fixed notifications from http repositories with login.
  • Fixed race condition in editing source string for monolingual translations.
  • Include output of failed addon execution in the logs.
  • Improved validation of choices for adding new language.
  • Allow to edit file format in component settings.
  • Update installation instructions to prefer Python 3.
  • Performance and consistency improvements for loading translations.
  • Make Microsoft Terminology service compatible with current zeep releases.
  • Localization updates.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, the code is hosted on Github. If you are curious how it looks, you can try it out on demo server. Weblate is also being used on https://hosted.weblate.org/ as official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either by comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Andrew Cater: Debian BSP Cambridge March 10th 2019 - post 2

Sun, 10/03/2019 - 3:49pm
Lots of very busy people chasing down bugs. A couple of folk have left. It's a good day and very productive: thanks to Steve and Jo, as ever, for food, coffee, coffee, beds and coffee.

Andrew Cater: Debian BSP Cambridge 10th March 2019

Sun, 10/03/2019 - 12:06pm
Folks are starting to turn up this morning. Kitchen full of people talking and cooking sausages. A quiet room except for Pepper the dog chasing squeaky toys and people dashing into the kitchen for food. Folk are now gradually settling down to code and bug fix. All is good.

Andrew Cater: Debian BSP Cambridge March 9th 2019 - post 3

Sat, 09/03/2019 - 10:40pm
Pub meal with lots of people in a crowded pub and lots of chatting. Various folk have come back to Steve's to carry on to be met with a very, very bouncy friendly dog. And on we go :)

Andy Simpkins: Debian BSP: Cambridge

Sat, 09/03/2019 - 7:38pm

I didn’t get a huge amount done today – a bit of email config for Andy followed by many installation tests to reproduce #911036, then to test and confirm the patch has fixed it.  I can’t read a word of Japanese, but thankfully Hideki provided the appropriate runes I needed to ‘match’, so I was able to follow the steps to reproduce; that, coupled with running a separate qemu instance in English, let me shadow my actions and keep track of menu state…

 

Andy Cater – Multi tasking (and trying to look like Andy Hamilton)

main table space hard at work…

Overflow hack space…  stress relief service being provided by Pepper

Andrew Cater: Debian BSP Cambridge March 9th 2019 - post 2

Sat, 09/03/2019 - 6:14pm
Large amounts of conversation: on chat and simultaneously in real life. Eight people sat round a dining table, two on a sofa, me on a chair. ThinkPad quotient has increased. Lots of bugs being closed: Buster steadily becoming less RC-buggy. The coffee machine is getting a hammering and Debian is improving steadily. All is good.

Ian Jackson: Nailing Cargo (the Rust build tool)

Sat, 09/03/2019 - 3:21pm
Introduction

I quite like the programming language Rust, although it's not without flaws and annoyances.

One of those annoyances is Cargo, the language-specific package manager. Like all modern programming languages, Rust has a combined build tool and curlbashware package manager. Apparently people today expect their computers to download and run code all the time and get annoyed when that's not sufficiently automatic.

I don't want anything on my computer that automatically downloads and executes code from minimally-curated repositories like crates.io. So this is a bit of a problem.

Dependencies available in Debian

Luckily I can get nearly all of what I have needed so far from Debian, at least if I'm prepared to use Debian testing (buster, now in freeze). Debian's approach to curation is not perfect, but it's mostly good enough for me.

But I still need to arrange to use the packages from Debian instead of downloading things.

Of course anything in Debian written in Rust faces the same problem: Debian source packages are quite rightly Not Allowed to download random stuff from the Internet during the package build. So when I tripped across this I reasoned that the Debian Rust team must already have fixed this problem somehow. And, indeed they have. The result is not too awful:

```
$ egrep '^[^#]' ~/.cargo/config
[source]
[source.debian-packages]
directory = "/usr/share/cargo/registry"
[source.crates-io]
replace-with = "debian-packages"
$
```

I cloned and hacked this from the cargo docs after reading the Debian Rust Team packaging policy.

The effect is to cause my copy of cargo to think that crates.io is actually located (only) on my local machine in /usr/share/cargo. If I mention a dependency, cargo will look in /usr/share/cargo for it. If it's not there I get an error, which I fix by installing the appropriate Debian rust package using apt.

So far so good.

Edited 2019-03-07, to add: To publish things on crates.io I then needed another workaround too (scroll down to "Further annoyance from Cargo").

Dependencies not in Debian

A recent project of mine involved some dependencies which were not in Debian, notably Rust bindings to the GNU Scientific Library and to the NLopt nonlinear optimisation suite. A quick web search found me the libraries rust-GSL and rust-nlopt. They are on crates.io, of course. I thought I would check them out so that I could check them out. Ie, decide if they looked reasonable, and do a bit of a check on the author, and so on.

Digression - a rant about Javascript

The first problem is of course that crates.io itself does not work at all without enabling Javascript in my browser. I have always been reluctant about JS. Its security properties have always been suboptimal and much JS is used for purposes which are against the interests of the user.

But I really don't like the way that everyone is still pretending that it is safe to run untrusted code on one's computer, despite Spectre. Most people seem to think that if you are up to date with your patches, Spectre isn't a problem. This is not true at all. Spectre is basically unfixable on affected CPUs, and no-one yet knows how to make a comparably fast CPU without this bug. The patches are bodges which make the attacks more complicated and less effective.

And of course Javascript is a way of automatically downloading megabytes of stuff from all over the internet and running it on your computer. Urgh. So I run my browser with JS off by default.

There is absolutely no good reason why crates.io won't let me even look at the details of some library without downloading a pile of code from their website and running it on my computer. But, I guess, it is probably OK to allow it? So on I go, granting JS permission. Then I can click through to the actual git repository. JS just to click a link! At least we can get back to the main plot now...

The "unpublished local crate" problem

So I git clone rust-GSL and have a look about. It seems to contain the kind of things I expect. The author seems to actually exist. The git history seems OK on cursory examination. I decide I am actually going to use it. So I write

    [dependencies]
    GSL = "1"

in my own crate's metadata file.

I realise that I am going to have to tell cargo where it is. Experimentally, I run cargo build and indeed it complains that it's never heard of GSL. Fair enough. So I read the cargo docs for the local config file, to see how to tell it to look at ../rust-GSL.

It turns out that there is no sensible way to do this!

There is a paths thing you can put in your config, but it does not work for an unpublished crate. (And of course, from the point of view of my cargo, GSL is an unpublished crate - because only crates that I have installed from .debs are "published".)

Also paths actually uses some of the metadata from the repository, which is of course not what you wanted. In my reading I found someone having trouble because crates.io had a corrupted metadata file for some crate, which their cargo rejected. They could just get the source themselves and fix it, but they had serious difficulty hitting cargo on the head hard enough to stop it trying to read the broken online metadata.
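For reference, the paths override looks roughly like this (a sketch based on the cargo documentation; as noted above, it only applies to crates cargo already knows about from a registry, which is why it is no help for an unpublished one):

```toml
# ~/.cargo/config - build-time override (sketch); this only takes effect
# for a crate that also exists in a registry, so it cannot introduce a
# dependency that cargo has never heard of
paths = ["../rust-GSL"]
```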

The same overall problem would arise if I had simply written the other crate myself and not published it yet. (And indeed I do have a crate split out from my graph layout project, which I have yet to publish.)

You can edit your own crate's Cargo.toml metadata file to say something like this:

    GSL = { path = "../rust-GSL", optional = true }

but of course that's completely wrong. That's a fact about the setup on my laptop and I don't want to commit it to my git tree. And this approach gets quickly ridiculous if I have indirect dependencies: I would have to make a little local branch in each one just to edit each one's Cargo.toml to refer to the others'. Awful.

Well, I have filed an issue. But that won't get me unblocked.

So, I also wrote a short but horrific shell script. It's a wrapper for cargo, which edits all your Cargo.toml's to refer to each other. Then, when cargo is done, it puts them all back, leaving your git trees clean.
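A minimal sketch of the idea (hypothetical, and in Python rather than the actual shell script): textually rewrite bare version dependencies into path dependencies before the build, keeping the original text so the caller can put it back afterwards. The function name and the crate-to-path mapping here are illustrative, not taken from nailing-cargo itself.

```python
import re

def nail(toml_text, local_paths):
    """Rewrite bare 'crate = "version"' dependency lines to path deps.

    local_paths maps crate names to sibling checkout directories.
    Returns the rewritten text; the caller keeps the original text
    so Cargo.toml can be restored once cargo has finished.
    """
    out = []
    for line in toml_text.splitlines():
        # Match lines of the form:  name = "1.2"
        m = re.match(r'^(\w+)\s*=\s*"[^"]*"\s*$', line)
        if m and m.group(1) in local_paths:
            crate = m.group(1)
            line = f'{crate} = {{ path = "{local_paths[crate]}" }}'
        out.append(line)
    return "\n".join(out) + "\n"

original = '[dependencies]\nGSL = "1"\n'
nailed = nail(original, {"GSL": "../rust-GSL"})
print(nailed)
```

In the wrapper, this runs over every Cargo.toml in the tree before invoking cargo, and the saved originals are written back when cargo exits, leaving the git trees clean.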

Writing this wrapper script would have been a lot easier if Cargo had been less prescriptive about what things are called and where they must live. For example, if I could have specified an alternative name for Cargo.toml, my script wouldn't have had to save the existing file and write a replacement; it could just have idempotently written a new file.

Even so, nailing-cargo works surprisingly well. I run it around make. I sometimes notice some of the "nailed" Cargo.toml files if I update my magit while a build is running, but that state is transient and soon passes. Even in a nightmare horror of a script, it was worth paying attention to the error handling.

I hope the cargo folks fix this before I have to soup up nailing-cargo to use a proper TOML parser, teach it a greater variety of Cargo.toml structures, give it a proper config file reader and a manpage, and generally turn it into a proper, but still hideous, product.

edited 2019-03-09 to fix tag spelling


