Planet Debian

Dirk Eddelbuettel: anytime 0.3.4

8 hours 19 min ago

A new minor release of the anytime package is arriving on CRAN. This is the fifteenth release, and first since the 0.3.3 release in November.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … format to either POSIXct or Date objects – and to do so without requiring a format string. See the anytime page, or the GitHub for a few examples.

This release is mostly internal: it switches to the excellent new tinytest package, tweaks the iso8601() format helper, which now uses a T between date and time (a breaking change, with the usual addition of an option to get the old behaviour back), and a little more. The full list of changes follows.

Changes in anytime version 0.3.4 (2019-06-18)
  • Documentation was updated about a 'Europe/London' conversion issue (#84, inter alia).

  • The package is now compiled under the C++11 standard.

  • The package now uses tinytest for unit tests.

  • The iso8601() function now places a ‘T’ between date and time; an option switches to prior format using a space.

  • The vignette is now pre-made and included as-is in a Sweave document, reducing the number of suggested packages.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page. The issue tracker off the GitHub repo can be used for questions and comments.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Steinar H. Gunderson: 0 bytes left

21 hours 2 min ago

Around 2003–2004, a friend and I wrote a softsynth that was used in a 64 kB intro. Now, 14 years later, cTrix and Pselodux picked it up and made a really cool 32 kB tune with it! Who would have thought.

(For the record, the synth plus the original Nemesis tune fit under 16 kB given the right packer and some squeezing, even with some LPC samples. But there's a heck of a lot more notes in this one :-) )

Emmanuel Kasper: Normalize a bunch of audio files to the same loudness

Mon, 17/06/2019 - 4:30pm
I had a bunch of audio files in a directory, each recorded live with different devices, and it proved very painful on the ears to listen to them in a playlist because of the differences in loudness.
To normalize audio files, you can find a number of tools working with ID3 tags, but after testing with vlc, mplayer, and the pogo mp3 player, none of them produced a measurable change. So I converted everything to wav, normalized the wav files, then converted back to mp3.

delete funny chars and spaces in file names
detox music_dir
converting files to wav is just a matter of
# this uses zsh recursive globbing
for file in **/*.mp3 ; do ffmpeg -i "$file" "$(basename "$file" .mp3).wav"; done
normalizing files with the normalize-audio program, from the debian package of the same name.
# this uses zsh recursive globbing
normalize-audio **/*.wav
converting back to mp3
for file in **/*.wav ; do ffmpeg -i "$file" -b:a 192k -acodec libmp3lame "$(basename "$file" .wav).mp3"; done
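The three steps above can also be wrapped into a single script. Here is a rough Python sketch of the same pipeline (it assumes the ffmpeg and normalize-audio programs are installed; unlike the loops above, it keeps each wav next to its source file and overwrites the original mp3s with the normalized versions -- a variation of mine, not the author's exact commands):

```python
"""Normalize the loudness of all mp3 files under a directory.

Rough sketch of the mp3 -> wav -> normalize -> mp3 pipeline described
above; assumes the ffmpeg and normalize-audio programs are installed.
"""
import pathlib
import subprocess


def wav_path(mp3: pathlib.Path) -> pathlib.Path:
    # Keep the wav next to the original file instead of dropping it in
    # the current directory (the pitfall of basename in a shell loop).
    return mp3.with_suffix(".wav")


def normalize_tree(music_dir: str) -> None:
    mp3s = sorted(pathlib.Path(music_dir).rglob("*.mp3"))
    if not mp3s:
        return
    wavs = [wav_path(m) for m in mp3s]
    # 1. decode every mp3 to wav
    for mp3, wav in zip(mp3s, wavs):
        subprocess.run(["ffmpeg", "-i", str(mp3), str(wav)], check=True)
    # 2. normalize all wavs to the same loudness in one pass
    subprocess.run(["normalize-audio", *map(str, wavs)], check=True)
    # 3. re-encode over the original mp3s (-y answers the overwrite prompt)
    for mp3, wav in zip(mp3s, wavs):
        subprocess.run(["ffmpeg", "-y", "-i", str(wav),
                        "-b:a", "192k", "-acodec", "libmp3lame",
                        str(mp3)], check=True)


# Usage: normalize_tree("music_dir")
```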

Manuel A. Fernandez Montecelo: Debian GNU/Linux riscv64 port in mid 2019

Mon, 17/06/2019 - 4:00am

It's been a while since the last post (Talk about the Debian GNU/Linux riscv64 port at the RISC-V workshop), and sometimes things look very quiet from the outside even if the people backstage never stop working. So this is an update on the status of this port before the release of buster, which should happen in a few weeks and which will open the way for more changes that will benefit the port.

The Big Picture

First, the big picture(s):

Debian-Ports All-time Graph, 2019-06-17

What can be seen in the first graph, perhaps with some difficulty, is that the percentage of arch-dependent packages built for riscv64 (grey line) has been at or above 80% since mid-2018, just a few months after the port was added to the infrastructure.

Given that arch-dependent packages are about half of the Debian [main, unstable] archive and that (in simple terms) arch-independent packages can be used by all ports (provided that the software they rely on is present, e.g. a programming language interpreter), this means that around 90% of the packages of the whole archive have been available for this architecture from early on.
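The arithmetic behind that 90% figure can be sketched in a few lines (a back-of-the-envelope estimate; the roughly 50/50 split and the 80% build rate are taken from the text above, not exact archive counts):

```python
# Back-of-the-envelope availability estimate for riscv64.
# Assumptions (from the text, not exact archive numbers):
arch_dep_share = 0.5    # about half the archive is arch-dependent
arch_dep_built = 0.8    # ~80% of arch-dependent packages built for riscv64
arch_indep_built = 1.0  # arch-independent packages are usable on all ports

overall = arch_dep_share * arch_dep_built + (1 - arch_dep_share) * arch_indep_built
print(f"{overall:.0%}")  # prints 90%
```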

Debian-Ports Quarter Graph, 2019-06-17

The second graph shows that the percentages are quite stable (for all architectures, really: the peaks and dips in the graph only represent <5% of the total). This is in part due to the freeze for buster, but it usually happens at other times as well (except in the initial bring-up or in the face of severe problems), and it really shows that even the second-class ports are in quite good health in broad terms.

Note: These graphs are for architectures in the debian-ports infrastructure (which hosts architectures not as well supported as the main ones, the ones present in stable releases). The graphs are taken from the buildd stats page, which also includes the main supported architectures.

A little big Thank You

Together, both graphs are also testament that there are people working on ports at all times, keeping things working behind the scenes, and that's why from a high level view it seems that things “just work”.

More in general, aside from the work of the porters themselves, there are also people working on bootstrapping issues, making it easier to bring up ports than in the past, or to cope better when toolchain support or other issues deal an important blow to some ports. And, of course, all other Debian contributors help by keeping good tools and build rules that work across architectures and by patching upstream software for the needs of several architectures at the same time (endianness, width of basic types); many upstream projects are generic enough that they don't need specific porting, etc.

Thanks to all of you!

Next Steps

Installation on hardware, VMs, etc.

Due to several reasons, among them the limited availability of hardware able to run this Debian port and the limited options to use bootloaders during all this time, the instructions to get Debian running on RISC-V are not the best, easiest, most elegant, or most up to date. This is an area to improve in the next months.

Meanwhile, there's a Debian RISC-V wiki page with instructions to get a chroot working on a HiFive Unleashed board as shipped, without destroying the initial factory set-up.

Especially Vagrant Cascadian and Karsten Merker have been working on the area of booting the system, and there are instructions to set up a riscv64 QEMU VM and boot it with u-boot and opensbi. Karsten is also working to get support into debian-installer, the main/canonical way to install Debian systems (perhaps less canonical nowadays with the use of OS images, but still hugely important).

Additionally, it would be nice to have images publicly available and ready to use, for both QEMU and available hardware like the HiFive Unleashed (or others that might show up in time), but although there's been some progress on that, it's still not ready and available for end users.

The last 10%+ of the archive

So, what's the remaining work left to have almost all of the archive built for this architecture? What's left to port, as such?

The main blockers to get closer to 100% of packages built are basically LLVM and Rust (which, in turn, depends on LLVM).

Currently there are more than 500 packages from the Rust ecosystem in the archive (about 4% of the total), and they cannot be built and used until Rust has support for the architecture. And Rust needs LLVM; there's no Rust compiler based on GCC or other toolchains (as is the case for Go, for example, which has a gcc-go compiler in addition to its own golang-go), so this is the only alternative.

Firefox is the main high-level package that depends on Rust, but many packages also depend on librsvg2 to render SVG images, and this library has been converted to Rust. We're still using the C version for that, but it cannot be sustained in the long term.

Aside from Rust, other packages directly depend on or use LLVM to some extent, and this is not fully working for riscv64 at the moment, but it is expected that during 2019 the support of LLVM for riscv64 will be completed.

There are other programming language ecosystems that need attention, but they represent a really low percentage (only dozens of packages, out of more than 12 thousand; and with no dependencies outside that set). And then, of course, there is the long tail of packages that cannot be built due to a missing dependency, lack of support for the architecture, or random failures -- together they make up a substantial fraction of the total, but they need to be looked at and solved almost on a case-by-case basis.

Finally, when the gates of the unstable suite open again after the freeze for the stable release of buster, we will see tools with better support, and patches to support riscv64 can be accepted again, so we can hope that things will improve at a faster rate soon :-)

Russ Allbery: Review: Abaddon's Gate

Sun, 16/06/2019 - 7:17am

Review: Abaddon's Gate, by James S.A. Corey

Series: The Expanse #3
Publisher: Orbit
Copyright: 2013
ISBN: 0-316-23542-3
Format: Kindle
Pages: 540

Abaddon's Gate is the third book in the Expanse series, following Caliban's War. This series tells a single long story, so it's hard to discuss without spoilers for earlier books although I'll try. It's a bad series to read out of order.

Once again, solar system politics are riled by an alien artifact set up at the end of the previous book. Once again, we see the fallout through the eyes of multiple viewpoint characters. And, once again, one of them is James Holden, who starts the book trying to get out of the blast radius of the plot but is pulled back into the center of events. But more on that in a moment.

The other three viewpoint characters are, unfortunately, not as strong as the rest of the cast in Caliban's War. Bull is the competent hard-ass whose good advice is repeatedly ignored. Anna is a more interesting character, a Methodist reverend who reluctantly leaves her wife and small child to join an interfaith delegation (part of a larger delegation of artists and philosophers, done mostly as a political stunt) to the alien artifact at the center of this book. Anna doesn't change that much over the course of the book, but her determined, thoughtful kindness and intentional hopefulness was appealing to read about. She also has surprisingly excellent taste in rich socialite friends.

The most interesting character in the book is the woman originally introduced as Melba. Her obsessive quest for revenge drives much of the plot, mostly via her doing awful things but for reasons that come from such a profound internal brokenness, and with so much resulting guilt, that it's hard not to eventually feel some sympathy. She's also the subject of the most effective and well-written scene in the book: a quiet moment of her alone in a weightless cell, trying to position herself in its exact center. (Why this is so effective is a significant spoiler, but it works incredibly well in context.)

Melba's goal in life is to destroy James Holden and everything he holds dear. This is for entirely the wrong reasons, but I had a hard time not feeling a little bit sympathetic to that too.

I had two major problems with Abaddon's Gate. The first of them is that this book (and, I'm increasingly starting to feel, this series) is about humans doing stupid, greedy, and self-serving things in the face of alien mystery, with predictably dire consequences. This is, to be clear, not in the slightest bit unrealistic. Messy humans being messy in the face of scientific wonder (and terror), making tons of mistakes, but then somehow muddling through is very in character for our species. But realistic doesn't necessarily mean entertaining.

A lot of people die or get seriously injured in this book, and most of that is the unpredictable but unsurprising result of humans being petty assholes in the face of unknown dangers instead of taking their time and being thoughtful and careful. The somewhat grim reputation of this series comes from being relatively unflinching about showing the results of that stupidity. Bad decisions plus forces that do not care in the slightest about human life equal mass casualties. The problem, at least for me personally, is that this is not fun to read about. If I wanted to see more of incompetent people deciding not to listen to advice or take the time to understand a problem, making impetuous decisions that make them feel good, and then turning everything to shit, I could just read the news. Bull as a viewpoint character doesn't help, since he's smart enough to see the consequences coming but can't stop them. Anna is the one character who manages to reverse some of the consequences by being a better person than everyone else, and that partly salvages the story, but there wasn't enough of that.

The other problem is James Holden. I was already starting to get annoyed with his self-centered whininess in Caliban's War, but in Abaddon's Gate it turns into eye-roll-inducing egomania. Holden seems convinced that everything that happens is somehow about him personally, and my tolerance for self-centered narcissists is, shall we say, at a historically low ebb. There's a point late in this book when Holden decides to be a sexist ass to Naomi (I will never understand what that woman sees in him), and I realized I was just done. Done with people pointing out to Holden that he's just a wee bit self-centered, done with him going "huh, yeah, I guess I am" and then making zero effort to change his behavior, done with him being the center of the world-building for no good reason, done with plot armor and the clear favor of the authors protecting him from consequences and surrounding him with loyalty he totally doesn't deserve, done with his supposed charisma which is all tell and no show. Just done. At this point, I actively loathe the man.

The world-building here is legitimately interesting, if a bit cliched. I do want to know where the authors are going with their progression of alien artifacts, what else humanity might make contact with, and what the rest of the universe looks like. I also would love to read more about Avasarala, who sadly didn't appear in this book but is the best character in this series so far. I liked Anna, I ended up surprising myself and liking Melba (or at least the character she becomes), and I like most of Holden's crew. But I may be done with the series here because I'm not sure I can take any more of Holden. I haven't felt this intense of dislike for a main series character since I finally gave up on The Wheel of Time.

Abaddon's Gate has a lot of combat, a lot of dead people, and a lot of gruesome injury, all of which is belabored enough that it feels a bit padded, but it does deliver on what it promises: old-school interplanetary spaceship fiction with political factions, alien artifacts, some mildly interesting world-building, and, in Melba, some worthwhile questions about what happens after you've done something unforgivable. It doesn't have Avasarala, and therefore is inherently far inferior to Caliban's War, but if you liked the previous books in the series, it's more of that sort of thing. If Holden has been bothering you, though, that gets much worse.

Followed by Cibola Burn.

Rating: 6 out of 10

Erich Schubert: Chinese Citation Factory

Sun, 16/06/2019 - 12:02am

RetractionWatch published in February 2018 an article titled “A journal waited 13 months to reject a submission. Days later, it published a plagiarized version by different authors”, indicating that the editorial process of the journal Multimedia Tools and Applications (MTAP) may have been manipulated.

Now, more than a year later, Springer apparently has retracted additional articles from the journal, as mentioned in the blog For Better Science. On the downside, Elsevier has been publishing many of these in another journal now instead…

I am currently aware of 22 retractions associated with this incident. One would have expected to see a clear pattern in the author names, but they seem to have little in common except Chinese names and affiliations, and suspicious email addresses (also, usually only one author has an email at all). It almost appears as if the names may be made up. And these retracted papers clearly contained citation spam: they cite a particular author very often, usually in a single paragraph.

The retraction notices typically include the explanation “there is evidence suggesting authorship manipulation and an attempt to subvert the peer review process”, confirming the earlier claims by Retraction Watch.

So I used the CrossRef API to get the citations from all the articles (I tried SemanticScholar first, but for some of the retracted papers it only had the self-cite of the retraction notice), and counted the citations in these papers.

Essentially, I am counting how many citations authors lost by the retractions.
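The tally itself is straightforward. Here is a simplified Python sketch of that counting step (the real analysis used reference data fetched from the CrossRef API; the helper function, the toy data, and the single-letter author names here are illustrative assumptions of mine, and the name matching is deliberately naive):

```python
from collections import Counter


def count_lost_citations(retracted_papers):
    """Tally how many citations each author loses with the retractions.

    retracted_papers: one reference list per retracted paper; each
    reference is the list of its authors' names.
    Returns (citations lost per author,
             number of retracted papers citing each author).
    """
    lost = Counter()
    papers_citing = Counter()
    for references in retracted_papers:
        cited_here = set()
        for authors in references:
            for author in authors:
                lost[author] += 1       # one citation lost per reference
                cited_here.add(author)
        papers_citing.update(cited_here)  # count each paper once per author
    return lost, papers_citing


# Toy data (hypothetical names): two retracted papers citing Z heavily.
papers = [
    [["Z", "S"], ["Z", "S"], ["Z", "S"], ["C"]],
    [["Z"], ["Z"]],
]
lost, papers_citing = count_lost_citations(papers)
print(lost["Z"], papers_citing["Z"])  # prints 5 2
```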

Here is the “high score” with the top 5 citation losers:

Author         Citations lost   Cited in papers   Citation share   Retractions
L. Zhang       385              20                60.6%            1
M. Song        68               20                10.9%            0
C. Chen        65               19                11.1%            0
X. Liu         65               19                11.0%            0
R. Zimmermann  60               18                10.8%            0

Now this is a surprisingly clear pattern. In 20 of the retracted papers, L. Zhang was cited on average 19.25 times. In these papers, also 60% of the references were co-authored by him. In one of the remaining two papers, he was an author. The next authors seem to be mostly in this list because of co-authoring with L. Zhang earlier. In fact, if we ignore all citations to papers co-authored by L. Zhang, no author receives more than 5 citations anymore.

So this very clearly suggests that L. Zhang manipulated the MTAP journal to boost his citation index. And it is quite disappointing how long it took until Springer retracted those articles! Judging by the For Better Science article, there may be even more affected papers.

Joey Hess: hacking water

Sat, 15/06/2019 - 6:02pm

From water insecurity to offgrid, solar pumped, gravity flow 1000 gallons of running water.

I enjoy hauling water by hand, which is why doing it for 8 years was not really a problem. But water insecurity is; the spring has been drying up for longer periods in the fall, and the cisterns have barely been large enough to get through.

And if I'm going to add storage, it ought to be above the house, so it can gravity flow. And I have these old 100 watts of solar panels sitting unused after my solar upgrade. And a couple of pumps for a pressure tank system that was not working when I moved in. And I stumbled across an odd little flat spot halfway up the hillside. And there's an exposed copper pipe next to the house's retaining wall; email to Africa establishes that it goes down and through the wall and connects into the plumbing.

So I have an old system that doesn't do what I want. Let's hack the system..

(This took a year to research and put together, including learning a lot about plumbing.)

Run a cable from the old solar panels 75 feet over to the spring. Repurpose an old cooler as a pumphouse, to keep the rain off the Shurflow pump, and with the opening facing so it directs noise away from living areas. Add a Shurflow 902-200 linear current booster to control the pump.

Run a temporary pipe up to the logging road, and verify that the pump can just manage to push the water up there.

Sidetrack into a week spent cleaning out and re-sealing the spring's settling tank. This was yak shaving, but it was going to fail. Build a custom ladder because regular ladders are too wide to fit into it. Flashback to my tightest squeezes from caving. Yuurgh.

Install water level sensors in the settling tank, cut a hole for pipe, connect to pumphouse.

Now how to bury 250 feet of PEX pipe a foot deep up a steep hillside covered in rock piles and trees that you don't want to cut down to make way for equipment? Research every possibility, and pick the one that involves a repurposed lineman's tool resembling a medieval axe.

Dig 100 feet of 1 inch wide trench in a single afternoon by hand. Zeno in on the rest of the 300 foot run. Gain ability to bury underground cables without raising a sweat as an accidental superpower. Arms ache for a full month afterwards.

Connect it all up with a temporary water barrel, and it works! Gravity flow yields 30 PSI!

Pressure-test the copper pipe going into the house to make sure it's not leaking behind the retaining wall. Fix all the old leaky plumbing and fixtures in the house.

Clear a 6 foot wide path through the woods up the hill and roll up two 550 gallon Norwesco water tanks. Haul 650 pounds of sand up the hill, by hand, one 5 gallon bucket at a time. Level and prepare two 6 foot diameter pads.

Build a buried manifold with valves turned by water meter key. Include a fire hose outlet just in case.

Begin filling the tanks, unsure how long it will take as the pump balances available sunlight and spring flow.

François Marier: OpenSUSE 15 LXC setup on Ubuntu Bionic 18.04

Sat, 15/06/2019 - 5:15am

Similarly to what I wrote for Fedora, here is how I was able to create an OpenSUSE 15 LXC container on an Ubuntu 18.04 (bionic) laptop.

Setting up LXC on Ubuntu

First of all, install lxc:

apt install lxc
echo "veth" >> /etc/modules
modprobe veth

Turn on bridged networking by putting the following in /etc/sysctl.d/local.conf:

net.ipv4.ip_forward=1
and applying it using:

sysctl -p /etc/sysctl.d/local.conf

Then allow the right traffic in your firewall (/etc/network/iptables.up.rules in my case):

# LXC containers
-A FORWARD -d -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s -j ACCEPT
-A INPUT -d -s -j ACCEPT
-A INPUT -d -s -j ACCEPT
-A INPUT -d -s -j ACCEPT
-A INPUT -d -s -j ACCEPT

and apply these changes:


before restarting the lxc networking:

systemctl restart lxc-net.service

Creating the container

Once that's in place, you can finally create the OpenSUSE 15 container:

lxc-create -n opensuse15 -t download -- -d opensuse -r 15 -a amd64

To see a list of all distros available with the download template:

lxc-create -n foo --template=download -- --list

Logging in as root

Start up the container and get a login console:

lxc-start -n opensuse15 -F

In another terminal, set a password for the root user:

lxc-attach -n opensuse15 passwd

You can now use this password to log into the console you started earlier.

Logging in as an unprivileged user via ssh

As root, install a few packages:

zypper install vim openssh sudo man
systemctl start sshd
systemctl enable sshd

and then create an unprivileged user:

useradd francois
passwd francois
cd /home
mkdir francois
chown francois:100 francois/

and give that user sudo access:

visudo  # uncomment "wheel" line
groupadd wheel
usermod -aG wheel francois

Now login as that user from the console and add an ssh public key:

mkdir .ssh
chmod 700 .ssh
echo "<your public key>" > .ssh/authorized_keys
chmod 644 .ssh/authorized_keys

You can now login via ssh. The IP address to use can be seen in the output of:

lxc-ls --fancy

Eddy Petrișor: How to generate a usable map file for Rust code - and related (f)rustrations

Sat, 15/06/2019 - 2:24am
Cargo does not produce a .map file by default, and when it does, mangling makes it very hard to use. If you're searching for the TL;DR, read from "How to generate a map file" at the bottom of the article.
Motivation

As a person with experience in embedded programming I find it very useful to be able to look into the map file.

Scenarios where looking at the map file is important:
  • evaluate if the code changes you made had the desired size impact or no undesired impact - recently I saw a compiler optimize an array's zero-initialization for speed by putting long blocks of u8 arrays in the .rodata section
  • check if a particular symbol has landed in the appropriate memory section or region
  • make an initial evaluation of which functions/code could be changed to optimize either for code size or for more readability (if the size cost is acceptable)
  • check particular symbols have expected sizes and/or alignments
Rustrations

Because these kinds of scenarios are quite frequent in my work and I am used to looking at the .map file, some "rustrations" I currently face are:
  1. No map file is generated by default via cargo and information on how to do it is sparse
  2. If generated, the symbols are mangled and it seems each symbol is in a section of its own, making per-section (e.g. .rodata, .text, .bss, .data) or per-file analysis more difficult than it should be
  3. I haven't found a way to disable mangling globally without editing the Rust sources. I remember there is some tool to un-mangle the output map file, but I forgot its name, and I find the need to post-process suboptimal
  4. no default map file name or location - ideally it should be named after the crate or app, as specified in the .toml file.
How to generate a map file

Generating a map file for Linux (and possibly other OSes)

Unfortunately, not all architectures/targets use the same linker, and on some the preferred linker could change for various reasons.

Here is how I managed to generate a map file for an AMD64/X86_64 linux target where it seems the linker is GLD:

Create a .cargo/config file with the following content:

.cargo/config:
[build]
    rustflags = ["-Clink-args=-Wl,-Map=output.map"]
This should apply to all targets which use GLD as the linker, so I suspect this is not portable to Windows with the MSVC compiler.

Generating a map file for thumbv7m with rust-lld
On bare-metal targets such as the Cortex-M7 (thumbv7m), where you might want to use the LLVM-based rust-lld, more linker options might be necessary to prevent linking with compiler-provided startup code or libraries, so the config would look something like this:
.cargo/config:
[build]
target = "thumbv7m-none-eabi"
rustflags = [""]

The thing I dislike about this is that the target is forced to thumbv7m-none-eabi, so some unit tests or generic code which might run on the build computer would be harder to test.

Note: if using rustc directly, just pass the extra options
Map file generation with some readable symbols

After the changes above are done, you'll get a map file (even if the crate is a lib) with a predefined name. If anyone knows how to keep the crate name, or at least distinct names for libs and for apps when the original project name can't be used, please comment.

The problems with the generated map file are that:
  1. all symbol names are mangled, so you can't easily connect back to the code; the alternative is to force the compiler not to mangle, by adding #[no_mangle] before the interesting symbols.
  2. each symbol seems to be put in its own subsection (e.g. an initialized array in .data gets a .data.* subsection of its own).
Dealing with mangling

For problem 1, the fix is to add #[no_mangle] in the source to symbols or functions, like this:

#[no_mangle]
pub fn sing(start: i32, end: i32) -> String {
    // code body follows
}

Dealing with mangling globally

I wasn't able to find a way to convince cargo to apply no_mangle to the entire project, so if you know how to, please comment. I was thinking using #![no_mangle] to apply the attribute globally in a file would work, but it doesn't seem to work as expected: the subsection still contains the mangled name, while the symbol seems to be "namespaced":

Here is a section from the #![no_mangle] (global) version:
                0x000000000004fa00      0x61e /home/eddy/usr/src/rust/learn-rust/exercism/rust/beer-song/target/release/deps/libbeer_song-d80e2fdea1de9ada.rlib(beer_song-d80e2fdea1de9ada.beer_song.5vo42nek-cgu.3.rcgu.o)
                0x000000000004fa00                beer_song::verse
 When the #[no_mangle] attribute is attached directly to the function, the subsection is not mangled and the symbol seems to be global:

.text.verse    0x000000000004f9c0      0x61e /home/eddy/usr/src/rust/learn-rust/exercism/rust/beer-song/target/release/deps/libbeer_song-d80e2fdea1de9ada.rlib(beer_song-d80e2fdea1de9ada.beer_song.5vo42nek-cgu.3.rcgu.o)
                0x000000000004f9c0                verse

I would prefer to have a global cargo option to switch this for the entire project, so that code changes would not be needed; comments welcome.
Each symbol in its section

The second issue is quite annoying, even if the fact that each symbol is in its own section can be useful to control every symbol's placement via the linker script. I guess that to fix this I need a custom linker script that redirects, say, all constant "subsections" into the ".rodata" section.

I haven't tried this, but it should work.

Utkarsh Gupta: GSoC Bi-Weekly Report - Week 1 and 2

Sat, 15/06/2019 - 2:04am

Hello there.
The last two weeks have been adventurous. Here’s what happened.
My GSoC project is to package a software called Loomio. A little about Loomio:
Loomio is a decision-making software, designed to assist groups with the collaborative decision-making process.
It is a free software web-application, where users can initiate discussions and put up proposals.

Loomio is mostly written in Ruby, but also includes some CoffeeScript, Vue, and JavaScript, with a little HTML and CSS.
The idea is to package all the dependencies of Loomio and make Loomio easily installable on Debian machines.

Phase 1, that is, the first 4 weeks, was planned for packaging the Ruby and Node dependencies. When I started off, I hit an obstacle: little did we know about how to go about packaging complex applications like this.
I have been helping out with packages like gitlab, diaspora, et al. And towards the end of last week, we learned that loomio needs to be done like diaspora.
First goes the loomio-installer, then would come the main package, loomio.

Now, the steps that are to be followed for loomio-installer are as follows:
» Get the app source.
» Install gem dependencies.
» Create database.
» Create tables/run migrations.
» Precompile assets (scss -> css, et al).
» Configure nginx.
» Start service with systemd.
» In case of diaspora, JS front end is pulled via wrapper gems and in case of gitlab, it is pulled via npm/yarn.
» Loomio would be done with the same way we’re doing gitlab.

Thus, in the last two weeks, the following work has been done:
» Ruby gems’ test failures patched.
» 18 gems uploaded.
» Looked into loomio-installer’s setup.
» Basic scripts like nginx configuration, et al written.

My other activities in Debian last month:
» Updated and uploaded gitlab 11.10.4 to experimental (thanks to praveen).
» Uploaded gitaly, gitlab-workhorse.
» Sponsored a couple of packages (DM access).
» Learned Perl packaging and packaged 4 modules (thanks to gregoa and yadd).
» Learned basic Python packaging.
» Helping DC19 Bursary team (thanks to highvoltage).
» Helping DC19 Content team (thanks to terceiro).

Plans for the next 2 weeks:
» Get the app source via wget (script).
» Install gem and node dependencies via gem install and npm/yarn install (script).
» Create database for installer.
» Precompile assets (scss -> css, et al).

I hope the next time I write a report, I’ll have no twists and adventures to share.

Until next time.
:wq for today.

Olivier Berger: Virtual Labs presentation at the HubLinked meeting in Dublin

Fri, 14/06/2019 - 1:31pm

We participated in the HubLinked workshop in Dublin this week, where I delivered a presentation on some of our efforts on Virtual Labs, in the hope that this could be useful to the partners designing the “Global Labs” where students will experiment together on Software Engineering projects.

In this presentation (PDF) I introduced our partners to the Labtainers and Antidote Open Source projects, which are quite promising for designing “virtual labs” using VMs and/or containers.

Thomas and I recorded the talk, and I used obs and kdenlive to edit the recording.

Here’s the result (unfortunately, the sound is of low quality):

Feel free to comment, ask, etc.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, May 2019

Fri, 14/06/2019 - 9:20am

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, 214 work hours have been dispatched among 14 paid contributors. Their reports are available:

  • Abhijith PA did 17 hours (out of 14 hours allocated plus 10 extra hours from April, thus carrying over 7h to June).
  • Adrian Bunk did 0 hours (out of 8 hours allocated, thus carrying over 8h to June).
  • Ben Hutchings did 18 hours (out of 18 hours allocated).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated plus 0.25 extra hours from April, thus carrying over 0.25h to June).
  • Emilio Pozuelo Monfort did 33 hours (out of 18 hours allocated + 15.25 extra hours from April, thus carrying over 0.25h to June).
  • Hugo Lefeuvre did 18 hours (out of 18 hours allocated).
  • Jonas Meurer did 15.25 hours (out of 17 hours allocated, thus carrying over 1.75h to June).
  • Markus Koschany did 18 hours (out of 18 hours allocated).
  • Mike Gabriel did 23.75 hours (out of 18 hours allocated + 5.75 extra hours from April).
  • Ola Lundqvist did 6 hours (out of 8 hours allocated + 4 extra hours from April, thus carrying over 6h to June).
  • Roberto C. Sanchez did 22.25 hours (out of 12 hours allocated + 10.25 extra hours from April).
  • Sylvain Beucler did 18 hours (out of 18 hours allocated).
  • Thorsten Alteholz did 18 hours (out of 18 hours allocated).
Evolution of the situation

May was a calm month; nothing really changed compared to April, and we are still at 214 hours funded per month. We are still looking for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The security tracker currently lists 34 packages with a known CVE and the dla-needed.txt file has 34 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Candy Tsai: Outreachy Week 4: Weekly Report

Fri, 14/06/2019 - 9:03am

Just a normal weekly report this week. Can’t believe I’ve been in the Outreachy program for a month!

Progress for this week

Week 5 tasks:
  • Fix the self service section merge request
  • Enhance the concept UI for the history section
  • Outreachy blog post

Julian Andres Klode: Encrypted Email Storage, or DIY ProtonMail

Thu, 13/06/2019 - 10:47pm

In the previous post about setting up a email server, I explained how I setup a forwarder using Postfix. This post will look at setting up Dovecot to store emails (and provide IMAP and authentication) on the server using GPG encryption to make sure intruders can’t read our precious data!


The basic architecture chosen for encrypted storage is that every incoming email is delivered to dovecot via LMTP, and dovecot then runs a sieve script that invokes a filter that encrypts the email with PGP/MIME using a user-specific key, before processing it further. Or, in short:

postfix --lmtp--> dovecot --sieve--> filter --> gpg --> inbox

Security analysis: This means that the message will be on the system unencrypted for as long as it is in a Postfix queue. This further means that the message plain text should be recoverable for quite some time after Postfix has deleted it, by investigating the file system. However, given enough time, the probability of being able to recover the messages should reduce substantially. I am not sure how to improve this much.

And yes, if the email is already encrypted we’re going to encrypt it a second time, because we can nest encryption and signatures as much as we want! It makes the code easier.

Encrypting an email with PGP/MIME

PGP/MIME is a trivial way to encrypt an email. Basically, we take the entire email message, armor-encrypt it with GPG, and stuff it into a multipart MIME message, with the same headers, as the second attachment; the first attachment is control information.

Technically, this means that we keep the headers twice, once encrypted and once unencrypted. But the advantage compared to doing it the way most normal clients do is clear: the code is a lot easier, and we can reverse the encryption and get back the original!

And when I say easy, I mean easy - the function to encrypt the email is just a few lines long:

def encrypt(message: email.message.Message, recipients: typing.List[str]) -> str:
    """Encrypt given message"""
    encrypted_content = gnupg.GPG().encrypt(message.as_string(), recipients)
    if not encrypted_content:
        raise ValueError(encrypted_content.status)

    # Build the parts
    enc = email.mime.application.MIMEApplication(
        _data=str(encrypted_content).encode(),
        _subtype='octet-stream',
        _encoder=email.encoders.encode_7or8bit)
    control = email.mime.application.MIMEApplication(
        _data=b'Version: 1\n',
        _subtype='pgp-encrypted; name="msg.asc"',
        _encoder=email.encoders.encode_7or8bit)
    control['Content-Disposition'] = 'inline; filename="msg.asc"'

    # Put the parts together
    encmsg = email.mime.multipart.MIMEMultipart(
        'encrypted', protocol='application/pgp-encrypted')
    encmsg.attach(control)
    encmsg.attach(enc)

    # Copy headers from the original message, without overriding
    # the MIME headers of the new multipart container
    headers_not_to_override = {key.lower() for key in encmsg.keys()}
    for key, value in message.items():
        if key.lower() not in headers_not_to_override:
            encmsg[key] = value

    return encmsg.as_string()

Decrypting the email is even easier: Just pass the entire thing to GPG, it will decrypt the encrypted part, which, as mentioned, contains the entire original email with all headers :)

def decrypt(message: email.message.Message) -> str:
    """Decrypt the given message"""
    return str(gnupg.GPG().decrypt(message.as_string()))

(Now, I am not sure whether it’s a feature that GPG.decrypt ignores any unencrypted data in the input, but well, that’s GPG for you.)

Of course, if you don’t actually need IMAP access, you could drop PGP/MIME and just pipe emails through gpg --encrypt --armor before dropping them somewhere on the filesystem, and then sync them via ssh somehow (e.g. patching maildirsync to encrypt emails it uploads to the server, and decrypting emails it downloads).

Pretty Easy privacy (p≡p)

Now, we almost have a file conforming to draft-marques-pep-email-02, the Pretty Easy privacy (p≡p) format, version 2. That format allows us to encrypt headers, thus preventing people from snooping on our metadata!

Basically it relies on the fact that we have all the headers in the inner (encrypted) message. To mark an email as conforming to that format, we just have to set the subject to p≡p and add a header describing the format version:

Subject: =?utf-8?Q?p=E2=89=A1p?=
X-Pep-Version: 2.0

A client conforming to p≡p will, when seeing this email, read any headers from the inner (encrypted) message.
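For illustration, the encoded Subject above is just an RFC 2047 encoded-word for the string "p≡p". A minimal sketch of the encoding step using Python's standard library (this is not part of the author's script):

```python
from email.charset import Charset, QP
from email.header import decode_header

# Encode the subject "p≡p" as an RFC 2047 encoded-word, using
# quoted-printable ("?q?") so the result matches the header shown above
# (upper- and lower-case "q" are equivalent per RFC 2047).
cs = Charset('utf-8')
cs.header_encoding = QP
encoded = cs.header_encode('p≡p')
print(encoded)  # =?utf-8?q?p=E2=89=A1p?=

# Decoding round-trips back to the original string.
raw, charset = decode_header(encoded)[0]
print(raw.decode(charset))  # p≡p
```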

We also might want to change the code to only copy a limited amount of headers, instead of basically every header, but I’m going to leave that as an exercise for the reader.

Putting it together

Assume we have Postfix and Dovecot configured, and a script gpgmymail written using the functions above, like this:

def main() -> None:
    """Program entry"""
    parser = argparse.ArgumentParser(
        description="Encrypt/Decrypt mail using GPG/MIME")
    parser.add_argument('-d', '--decrypt', action="store_true",
                        help="Decrypt rather than encrypt")
    parser.add_argument('recipient', nargs='*',
                        help="key id or email of keys to encrypt for")
    args = parser.parse_args()
    msg = email.message_from_file(sys.stdin)
    if args.decrypt:
        sys.stdout.write(decrypt(msg))
    else:
        sys.stdout.write(encrypt(msg, args.recipient))

if __name__ == '__main__':
    main()

(don’t forget to add missing imports, or see the end of the blog post for links to full source code)

Then, all we have to do is edit our .dovecot.sieve to add

filter "gpgmymail" "myemail@myserver.example";

and all incoming emails are automatically encrypted.

Outgoing emails

To handle outgoing emails, do not store them via IMAP, but instead configure your client to add a Bcc to yourself, and then filter that somehow in sieve. You probably want to set Bcc to something like myemail+sent@myserver.example, and then filter on the detail (the sent).
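A hypothetical sieve rule for filtering on the detail part, assuming the + delimiter and a mailbox named "Sent" (both assumptions about your setup), might look like:

```sieve
require ["envelope", "subaddress", "fileinto"];

# File the self-Bcc'd copies (myemail+sent@myserver.example) into Sent.
if envelope :detail "to" "sent" {
    fileinto "Sent";
    stop;
}
```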

Encrypt or not Encrypt?

Now do you actually want to encrypt? The disadvantages are clear:

  • Server-side search becomes useless, especially if you use p≡p with an encrypted Subject.

    Such a shame, you could have built your own GMail by writing a notmuch FTS plugin for dovecot!

  • You can’t train your spam filter via IMAP, because the spam trainer won’t be able to decrypt the email it is supposed to learn from.

There are probably other things I have not thought about, so let me know on mastodon, email, or IRC!

More source code

You can find the source code of the script, and the setup for dovecot in my git repository.

Bits from Debian: 100 Paper cuts kick-off

Thu, 13/06/2019 - 8:30pm

Is there a thorny bug in Debian that ruins your user experience? Something just annoying enough to bother you but not serious enough to constitute an RC bug? Are grey panels and slightly broken icon themes making you depressed?

Then join the 100 papercuts project! A project to identify and fix the 100 most annoying bugs in Debian over the next stable release cycle. That also includes figuring out how to identify and categorize those bugs and make sure that they are actually fixable in Debian (or ideally upstream).

The idea of a papercuts project isn't new; Ubuntu did this some years ago, which added a good amount of polish to the system.

Kick-off Meeting and DebConf BoF

On the 17th of June at 19:00 UTC we're kicking off a brainstorming session on IRC to gather some initial ideas.

We'll use that to seed discussion at DebConf19 in Brazil during a BoF session where we'll solidify those plans into something actionable.

Meeting details

When: 2019-06-17, 19:00 UTC
Where: #debian-meeting channel on the OFTC IRC network

Your IRC nick needs to be registered in order to join the channel. Refer to the Register your account section on the OFTC website for more information on how to register your nick.

You can always refer to the debian-meeting wiki page for the latest information and up to date schedule.

Hope to see you there!

Steinar H. Gunderson: Nageru email list

Wed, 12/06/2019 - 2:45pm

The Nageru/Futatabi community is now large enough that I thought it would be a good idea to make a proper gathering place. So now, thanks to Tollef Fog Heen's hosting, there is a nageru-discuss list. It's expected to be low-volume, but if you're interested, feel free to join!

As for Nageru itself, there keeps being interesting development(s), but that's for another post. :-)

Dirk Eddelbuettel: RcppArmadillo 0.9.500.2.0

Wed, 12/06/2019 - 1:58pm

A new RcppArmadillo release based on a new Armadillo upstream release has arrived on CRAN, and will get to Debian shortly. It brings a few upstream changes, including extended interfaces to LAPACK following the recent gcc/gfortran issue. See below for more details.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 610 other packages on CRAN.

Changes in RcppArmadillo version 0.9.500.2.0 (2019-06-11)
  • Upgraded to Armadillo release 9.500.2 (Riot Compact)

    • Expanded solve() with solve_opts::likely_sympd to indicate that the given matrix is likely positive definite

    • more robust automatic detection of positive definite matrices by solve() and inv()

    • faster handling of sparse submatrices

    • expanded eigs_sym() to print a warning if the given matrix is not symmetric

    • extended LAPACK function prototypes to follow Fortran passing conventions for so-called "hidden arguments", in order to address GCC Bug 90329; to use previous LAPACK function prototypes without the "hidden arguments", #define ARMA_DONT_USE_FORTRAN_HIDDEN_ARGS before #include <armadillo>

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Martin Michlmayr: ledger2beancount 1.8 released

Wed, 12/06/2019 - 11:32am

I released version 1.8 of ledger2beancount, a ledger to beancount converter.

I ran ledger2beancount over the ledger test suite and made it much more robust. If ledger2beancount 1.8 can't parse your ledger file properly, I'd like to know about it.

Here are the changes in 1.8:

  • Add support for apply year
  • Fix incorrect account mapping of certain accounts
  • Handle fixated commodity and postings without amount
  • Improve behaviour for invalid end without apply
  • Improve error message when date can't be parsed
  • Deal with account names consisting of a single letter
  • Ensure account names don't end with a colon
  • Skip ledger directives eval, python, and value
  • Don't assume all filenames for include end in .ledger
  • Support price directives with commodity symbols
  • Support decimal commas in price directives
  • Don't misparse balance assignment as commodity
  • Ensure all beancount commodities have at least 2 characters
  • Ensure all beancount metadata keys have at least 2 characters
  • Don't misparse certain metadata as implicit conversion
  • Avoid duplicate commodity directives for commodities with name collisions
  • Recognise deferred postings
  • Recognise def directive

Thanks to Alen Siljak for reporting a bug.

You can get ledger2beancount from GitHub.

Markus Koschany: My Free Software Activities in May 2019

Tue, 11/06/2019 - 10:27pm

Here is my monthly report covering what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games
  • Like in previous release cycles, I published a new version of debian-games at the end to incorporate the latest archive changes. Unfortunately, Netbeans, the Java IDE, cuyo and holdingnuts didn’t make it and I demoted them to Suggests.
  • A longstanding graphical issue (#871223) was resolved in Neverball, where stars in goal points were displayed as squares. As usual something (OpenGL-related?) must have changed somewhere, but in the end the installation of some missing png files made the difference. How it worked without them before remains a mystery.
  • I sponsored two uploads which were later unblocked for Buster. Bernat reported a crash in etw, a football simulation game ported from the AMIGA. Fortunately Steinar H. Gunderson could provide a patch quickly. (#928240)
  • A rebuild of marsshooter, a great looking space shooter with an awesome soundtrack, may have been the trigger for a segmentation fault. Jacob Nevins stumbled over it and Bernhard Übelacker provided a patch to fix missing return statements. (#929513)
Debian Java
  • I provided a security update for jackson-databind to fix CVE-2019-12086 (#929177) in Buster and prepared DSA-4452-1 to fix the remaining 11 CVE in Stretch.
  • Unfortunately Netbeans will not be in Buster. There were at least two issues why I could not recommend our Debian version, clear regressions in comparison to the version in Stretch. I found it odd that the severest one was fixed in Ubuntu shortly after the removal from testing; I surely would have appreciated the patch for Debian too. At the moment I don’t believe I will continue to work on Netbeans: it is very time consuming to get it in shape for Debian, there are too many dependencies where the slightest changes in r-deps may cause bugs in Netbeans, nobody else in the Java team is really interested, and most Java developers probably install the upstream version. A really bad combination.
Misc Debian LTS

This was my thirty-ninth month as a paid contributor and I have been paid to work 18 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • I investigated CVE-2019-0227 in axis and suggested marking it as unimportant. I triaged CVE-2019-0227 in ampache as no-dsa for Jessie.
  • DLA-1798-1. Issued a security update for jackson-databind fixing 1 CVE.
  • DLA-1804-1. Issued a security update for curl fixing 1 CVE.
  • DLA-1816-1. Issued a security update for otrs2 fixing 2 CVE.
  • DLA-1753-3. Issued a regression update for proftpd-dfsg. When the creation of a directory failed during sftp transfer, the sftp session would be terminated instead of failing gracefully due to a non-existing debug logging function.
  • DLA-xxxx-1. I’m currently testing the next security update of phpmyadmin. I triaged or fixed 19 CVE.

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my twelfth month and I have been paid to work 8 hours on ELTS (15 hours were allocated). I intend to use the remaining hours in June.

  • I investigated three CVE in pacemaker, CVE-2018-16877, CVE-2018-16878, CVE-2019-3885 and found that none of them affected Wheezy.
  • ELA-127-1. Issued a security update for linux and linux-latest fixing 15 CVE.

Thanks for reading and see you next time.