Feed aggregator

Daniel Pocock: Merry Christmas from the Balkans

Planet Ubuntu - Sun, 23/12/2018 - 11:27pm

This Christmas I'm visiting the Balkans again. It is the seventh time in the last two years that I have been fortunate enough to visit this largely undiscovered but very exciting region of Europe.

A change of name

On Saturday I visited Skopje, the capital of Macedonia. Next month their country will finalize their name change to the Republic of North Macedonia.

Prishtina

From Skopje, I travelled north to Prishtina, the capital of Kosovo.

I had dinner with four young women who have become outstanding leaders in the free software movement in the region, Albiona, Elena, Amire and Enkelena.

The population of Kosovo is over ninety percent Muslim and not everybody observes Christmas as a religious festival, but the city of Prishtina is nonetheless beautifully decorated, with several large trees in the pedestrianised city centre.

Hideki Yamane: debootstrap: speed up

Planet Debian - Sun, 23/12/2018 - 1:58pm
I've put new debootstrap version 1.0.112 into unstable today; it is faster than the previous one. Kudos to Thomas Lange for the hack.
If you find any trouble with it, please let me know.
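
For anyone who wants to try the new version, a minimal usage sketch (not from the announcement; the target directory and mirror are just examples):

sudo debootstrap unstable /srv/chroot/sid http://deb.debian.org/debian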

Dougie Richardson: Passwordless SSH access on a Pi

Planet Ubuntu - Sun, 23/12/2018 - 12:25pm

Passwordless SSH access is convenient, especially as everything is on my local network. I only really access the Pi remotely and you can configure it to use RSA keys. I’m on Ubuntu Linux so open a terminal and create an RSA key (if you don’t have one): You’ll need to upload it to the Pi: […]
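
The excerpt above is truncated; a minimal sketch of the commands it alludes to (assuming default key paths and a Pi reachable as pi@raspberrypi.local):

ssh-keygen -t rsa -b 4096          # create an RSA key pair if you don't already have one
ssh-copy-id pi@raspberrypi.local   # copy the public key to the Pi's authorized_keys
ssh pi@raspberrypi.local           # should now log in without prompting for a password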

The post Passwordless SSH access on a Pi appeared first on The Midlife Geek.

Joey Hess: effective bug tracking illustrated with AT&T

Planet Debian - Sun, 23/12/2018 - 1:11am

I'm pleased to have teamed up with AT&T to bring you this illustrated guide to effective bug tracking.

The original issue description was "noise / static on line", and as we can see, AT&T have very effectively closed the ticket: There is no longer any noise, of any kind, on the phone line.

No electrons == no noise, so this is the absolute simplest and most effective fix possible. Always start with the simplest such fix, and be sure to close the problem ticket immediately on fixing. Do not followup with the issue reporter, or contact them in any way to explain how the issue was resolved.

While in the guts of the system fixing such a bug report, you'll probably see something that could be improved by some light refactoring. It's always a good idea to do that right away, because refactoring can often just solve an issue on its own somehow. (Never use your own issue tracking system to report issues to yourself to deal with later, because that would just be bonkers.)

But don't go overboard with refactoring. As we see here, when AT&T decided to run a new line between two poles involved in my bug report, they simply ran it along the ground next to my neighbor's barn. A few festive loops and bows prevent any possible damage by tractor. Can always refactor more later.

The only other information included in my bug report was "house at end of loong driveway". AT&T helpfully limited the size of the field to something smaller than 1 (old-style) tweet, to prevent some long brain dump being put in there.

You don't want to hear that I've lived here for 7 years and the buried line has never been clean but it's been getting a bit more noisy lately, or that I noticed signs of water ingress at two of the junction boxes, or that it got much much worse after a recent snow storm, to the point that I was answering the phone by yelling "my phone line is broken" down the line consumed with static.

Design your bug tracking system to not let the user really communicate with you. You know what's wrong better than them.

And certainly don't try to reproduce the circumstances of the bug report. No need to visit my house and check the outside line when you've already identified and clearly fixed the problem at the pole.

My second bug report is "no dial tone" with access information "on porch end of long driveway". With that, I seem to be trying to solicit some kind of contact outside the bug tracking system. That is never a good idea though, and AT&T should instruct their linemen to avoid any possible contact with the user, or any attempts to convey information outside the issue tracking system.

AT&T's issue tracking system reports "Service Restore Date: 12/25/2018 at 12:00 AM" but perhaps they'll provide more effective issue tracking tips for me to share with you. Watch this space.

Sune Vuorela: Kolorfill 0.1.0 released

Planet Debian - Sat, 22/12/2018 - 7:58pm

Continuing in Aurelien Gateau‘s release month, where I recently joined in with Kookbook, I’m now also following up with Kolorfill, an app I also described in the past.

It is a simple flood filling game written using the amazing Kirigami framework.

Have fun with it.

Sean Whitton: Persistent=true when the timer never triggers when the system is powered up

Planet Debian - Sat, 22/12/2018 - 6:03pm

I have this systemd timer unit

[Unit]
Description=Run i3-rotate-wallpaper daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

which says to start the i3-rotate-wallpaper.service unit each day at midnight.

Persistent=true is meant to ensure that the unit is triggered immediately when the system resumes from suspend or is powered on, if at the most recent midnight it was suspended or powered off. The idea is that when I first use my computer each day, the wallpaper gets changed – I delight in seeing the wallpapers I’ve downloaded.

The problem is that Persistent=true only works if the timer has been triggered at least once while the system was powered on and not suspended. But my computer is almost never on at midnight. I don’t want to have to leave it turned on just for the first wallpaper change, or keep track of that when reinstalling the machine’s operating system.

The fix:

% mkdir -p "$HOME/.local/share/systemd/timers"
% touch "$HOME/.local/share/systemd/timers/stamp-i3-rotate-wallpaper.timer"
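
Not from the post, but for completeness: assuming the timer is installed as a user unit (which the stamp file path suggests), it still needs to be enabled and started once:

% systemctl --user enable --now i3-rotate-wallpaper.timer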

Sean Whitton: Debian Policy call for participation -- December 2018, redux

Planet Debian - Sat, 22/12/2018 - 6:03pm

I would like to push a release of Debian Policy but I want to include the patches in the following two bugs. If you are a DD, please consider reviewing and seconding the patches in these bugs so that I can do that.

EDIT 2018/xii/23: these are both now seconded – thanks gregor!

Molly de Blanc: User freedom (n.)

Planet Debian - Sat, 22/12/2018 - 5:01pm

I talk a lot about user freedom, but have never explained what that actually means. The more I think about user freedom as a term, the less certain I am about what it is. This makes it hard to define. My thoughts on user freedom are the synthesis of about ten years, first thinking about the Good behind developmental models enabled by open source through to today, where I think about the philosophical implications of traffic lights.

I think I picked up the term from Christopher Lemmer Webber and it’s become integral to how I think and talk about free software and its value to society.

User freedom is based in the idea that we have fundamental rights (I’ll use the UN’s Universal Declaration of Human Rights as my metric*) and that these extend to the digital spaces we inhabit. In order to protect these in a world ruled by software, in order to see them in practice, we need the opportunity (and freedom) to use, examine, modify, and share this software. Software freedom is what happens when our software affords us these freedoms. Free and open source software is the software embodying the spirit of software freedom.

Software freedom is also necessary to ensure our rights in the physical world. Let’s take Article 10 as an example.

Everyone is entitled in full equality to a fair and public hearing by an independent and impartial tribunal, in the determination of his rights and obligations and of any criminal charge against him.

There is so much thoroughly opaque proprietary software in and around legal matters. This includes software people are choosing to use, like Case Management Software; software is used to gather and manage data and evidence being used against someone and sometimes this evidence isn’t even accessible to those being charged unless they pay licensing and access fees; breathalyzers are little more than small computers that have been subject to tampering since 1988; in Patent 10049419 “Motorola patents a robocop autonomous car that breathalyzes, mirandizes you, calls your lawyer and collects your bail”; and facial recognition technology is available and being used and tested by governments.

The right to a fair and public hearing also extends to digital spaces, your actions there, and your digital life. Your digital activities are monitored, cataloged, and treated with equal judgment as those in physical spaces.

User freedom is important to different people for different reasons. For me, the most important reason ties into the freedom to study software. I think about user consent — consent to interacting with technology. Unless software is free, unless we can study it, we cannot understand it, and when we cannot understand something we don’t fully have the autonomy to consent.**

I said a lot of words, but failed to provide a concise definition to user freedom largely because I lack a concise definition. User freedom is the freedom we need to protect, for which we use software freedom and free software, though it extends far beyond those two critical components. User freedom is itself a tool used to uphold and defend human rights when applied to computing technologies. User freedom creates the possibility for knowledge, which gives us autonomy and consent.

* This idea is shared with Chris Webber.
** I’d like to attribute my ideas around autonomy and consent to Dr. Holly Andersen.

Louis-Philippe Véronneau: A Tale of HTTP/2

Planet Debian - Sat, 22/12/2018 - 6:00am

Around a month ago, someone mentioned the existence of HTTP/2 in an IRC channel I lurk in. For some reason, I had never heard of it and some of the features of this new protocol (like multiplexing requests without having to open multiple TCP connections) seemed cool.

To be honest, I had just finished re-writing the Puppet code that manages our backup procedures and enabling HTTP/2 seemed like a productive way to procrastinate before moving on to another large project. How hard could this be?

Turns out it took me around 25 hours of work... Sit back and put on comfortable slippers, for this is a tale of HTTP/2!

Cursed Be the HTTP/1.1

When I first looked up how to enable HTTP/2 on Apache it seemed a pretty simple task. The documentation mentioned loading the http2 module and making sure to prioritise the new protocol via a configuration file like this one:

Protocols h2 h2c http/1.1
H2Push on
H2PushPriority * after
H2PushPriority text/css before
H2PushPriority image/jpeg after 32
H2PushPriority image/png after 32
H2PushPriority application/javascript interleaved

This would of course have been too easy. Even if everything in Apache was set up properly, websites kept being served as HTTP/1.1. I was obviously doing something right though, since my websites were now sending a new HTTP header: Upgrade: h2, h2c.
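
A quick way to verify which protocol a site actually negotiates (a sketch, not from the original post; assumes curl 7.50 or newer built with HTTP/2 support):

curl -sI -o /dev/null -w '%{http_version}\n' --http2 https://example.org/
# prints "2" once HTTP/2 is really being served, "1.1" otherwise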

After wasting a good deal of time debugging TLS ciphers (HTTP/2 is incompatible with TLS 1.1), I finally found out the problem was that we weren't using the right multi-processing module for Apache.

Turns out Apache won't let you serve HTTP/2 while using mpm_prefork (the default MPM), as it is not supported by mod_http2. Even though there are two other MPMs you can use with Apache, only mpm_prefork supports mod_php. Suddenly, adding support for HTTP/2 meant switching all our webapps built in PHP to PHP-FPM...

Down the Rabbit Hole

For the longest time, a close friend has been trying to convince me of the virtues of PHP-FPM. As great as it looked on paper, I never really did anything about it. It seemed so ... complicated. Regular ol' mod_php did the trick just fine and other things required my attention.

This whole HTTP/2 thing turned out to be the perfect excuse for me to dive into it after all. Once I understood how FPM pools worked, it was actually pretty easy to set up. Since I had to rewrite the Puppet profiles we're using to deploy websites, I also took that opportunity to harden a bunch of things left and right.

PHP-FPM lets you run websites under different Unix users for added separation. On top of that, I decided it was time for PHP code on our servers to be run in read-only mode and had to tweak a bunch of things for our Wordpress, Nextcloud, KanBoard and Drupal instances to stop complaining about it.

After spending too much time automating tasks in Puppet, I was finally able to turn off mod_php and mpm_prefork everywhere and to enable mpm_event and mod_http2. The speed bonus offered by PHP-FPM and HTTP/2 is nice, but more than anything I'm happy this whole ordeal forced me to harden the way our Apache servers deal with PHP.
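
For reference, a minimal sketch of that switch on a stock Debian Apache setup (not the author's Puppet code; package and module names assume PHP 7.3, so substitute whatever PHP version you run):

apt install php7.3-fpm
a2dismod php7.3 mpm_prefork            # drop mod_php and the prefork MPM
a2enmod mpm_event proxy_fcgi setenvif http2
a2enconf php7.3-fpm                    # route .php requests to the FPM socket
systemctl restart apache2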

Charles Plessy: On how useful are R packages in Debian

Planet Debian - Sat, 22/12/2018 - 1:02am

Debian distributes the R language for statistical analysis, data mining or bioinformatics (among others). Satellite to R are hundreds of packages (kind of function libraries), mostly distributed by CRAN and Bioconductor, which contribute a lot to the richness and versatility of the R ecosystem. Debian redistributes some of these packages in the Debian format. Like in all similar cases of "redistribution of a distribution", there is a tension between Debian's goals for its stable version, and the expectations of novelty for the users (in part because the development cycle of R is 6 months), and one sometimes wonders if there is a point in using the packages through Debian and not through the upstream repositories.

Today, after installing a minimal system in a "schroot" container, I installed an R package and all its dependencies natively, that is by downloading their sources through the R command line, which made me wait for 90 minutes until everything compiled, while the R packages redistributed in Debian are already compiled. 90 minutes of compilation for 10 minutes of work; 90 minutes of waiting that I could have avoided with a couple of well-chosen "apt install" commands. Thus, many thanks to all who maintain R packages in Debian!

Daniel Lange: Openssh taking minutes to become available, booting takes half an hour ... because your server waits for a few bytes of randomness

Planet Debian - Fri, 21/12/2018 - 11:24pm

So, your machine now needs minutes to boot before you can ssh in where it used to be seconds before the Debian Buster update?

Problem

Linux 3.17 (2014-10-05) learnt a new syscall getrandom() that, well, gets bytes from the entropy pool. Glibc learnt about this with 2.25 (2017-02-05) and two tries and four years after the kernel, OpenSSL used that functionality from release 1.1.1 (2018-09-11). OpenSSH implemented this natively for the 7.8 release (2018-08-24) as well.

Now the getrandom() syscall will block [1] if the kernel can't provide enough entropy. And that's frequently the case during boot. Esp. with VMs that have no input devices or IO jitter to source the pseudo random number generator from.

First seen in the wild January 2017

I vividly remember not seeing my Alpine Linux VMs back on the net after the Alpine 3.5 upgrade. That was basically the same issue.

Systemd. Yeah.

Systemd makes this behaviour worse, see issue #4271, #4513 and #10621.
Basically as of now the entropy file saved as /var/lib/systemd/random-seed will not - drumroll - add entropy to the random pool when played back during boot. Actually it will. It will just not be accounted for. So Linux doesn't know. And continues blocking getrandom(). This is obviously different from SysVinit times [2] when /var/lib/urandom/random-seed (that you still have lying around on updated systems) made sure the system carried enough entropy over reboot to continue working right after enough of the system was booted.

#4167 is a re-opened discussion about systemd eating randomness early at boot (hashmaps in PID 0...). Some Debian folks participate in the recent discussion and it is worth reading if you want to learn about the mess that booting a Linux system has become.

While we're talking systemd ... #10676 also means systems will use RDRAND in the future despite Ted Ts'o's warning on RDRAND [Archive.org mirror and mirrored locally as 130905_Ted_Tso_on_RDRAND.pdf, 205kB as Google+ will be discontinued in April 2019].

Debian

Debian is seeing the same issue working up towards the Buster release, e.g. Bug #912087.

The typical issue is:

[    4.428797] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: data=ordered
[ 130.970863] random: crng init done

with delays up to tens of minutes on systems with very little external random sources.

This is what it should look like:

[    1.616819] random: fast init done
[    2.299314] random: crng init done

Check dmesg | grep -E "(rng|random)" to see how your systems are doing.

If this is not fully solved before the Buster release, I hope some of the below can end up in the release notes [3].

Solutions

You need to get entropy into the random pool earlier at boot. There are many ways to achieve this and - currently - all require action by the system administrator.
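
Not from the original article, but a quick way to see how starved the pool currently is:

cat /proc/sys/kernel/random/entropy_avail   # bits of entropy currently available in the input pool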

Kernel boot parameter

From kernel 4.19 (Debian Buster currently runs 4.18) you can set RANDOM_TRUST_CPU at compile time or random.trust_cpu=on on the kernel command line. This will make Intel / AMD systems trust RDRAND and fill the entropy pool with it. See the warning from Ted Ts'o linked above.
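
A minimal sketch of setting that persistently on a GRUB-based system (assuming the stock /etc/default/grub layout):

GRUB_CMDLINE_LINUX_DEFAULT="quiet random.trust_cpu=on"   # edit this line in /etc/default/grub
update-grub                                              # then regenerate the GRUB configuration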

Using a TPM

The Trusted Platform Module has an embedded random number generator that can be used. Of course you need to have one on your board for this to be useful. It's a hardware device.

Load the tpm-rng module (ideally from initrd) or compile it into the kernel (config HW_RANDOM_TPM). Now, the kernel does not "trust" the TPM RNG by default, so you need to add

rng_core.default_quality=1000

to the kernel command line. 1000 means "trust", 0 means "don't use". So you can choose any value in between that works for you depending on how much you consider your TPM to be unbugged.

VirtIO

For Virtual Machines (VMs) you can forward entropy from the host (that should be running longer than the VMs and have enough entropy) via virtio_rng.

So on the host, you do:

kvm ... -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,bus=pci.0,addr=0x7

and within the VM newer kernels should automatically load virtio_rng and use that.

You can confirm with dmesg as per above.

Or check:

# cat /sys/devices/virtual/misc/hw_random/rng_available
virtio_rng.0
# cat /sys/devices/virtual/misc/hw_random/rng_current
virtio_rng.0

Patching systemd

The Fedora bugtracker has a bash / python script that replaces the systemd rnd seeding with a (better) working one. The script can also serve as a good starting point if you need to script your own solution, e.g. for reading from an entropy provider available within your (secure) network.

Chaoskey

The wonderful Keith Packard and Bdale Garbee have developed a USB dongle, ChaosKey, that supplies entropy to the kernel. Hard- and software are open source.

Jitterentropy_RNG

Kernel 4.2 introduced jitterentropy_rng which will use the jitter in CPU timings to generate randomness.

modprobe jitterentropy_rng

This apparently needs a userspace daemon though (read: design mistake) so

apt install jitterentropy-rngd (available from Buster/testing).

The current version 1.0.8-3 installs nicely on Stretch. dpkg -i is your friend.

But - drumroll - that daemon doesn't seem to use the kernel module at all.

That's where I stopped looking at that solution. At least for now. There are extensive docs if you want to dig into this yourself.

Haveged

apt install haveged

Haveged is a user-space daemon that gathers entropy through the timing jitter any CPU has. It will only run "late" in boot but may still get your openssh back online within seconds and not minutes.

It is also - to the best of my knowledge - not verified at all regarding the quality of randomness it generates. The haveged design and history page provides an interesting read and I wouldn't recommend haveged if you have alternatives. If you have none, haveged is a wonderful solution though as it works reliably. And unverified entropy is better than no entropy. Just forget this is 2018.

  1. it will return with EAGAIN in the GRND_NONBLOCK use case. The blocking behaviour when lacking entropy is a security measure as per Bug #1559 of Google's Project Zero

  2. Update 18.12.2018: "SysVinit times" ::= "The times when most Linux distros used SysVinit over other init systems." So Wheezy and previous for Debian. Some people objected to the statement, so I added this footnote as a clarification. See the discussion in the comments below. 

  3. there is no Buster branch in the release notes repository yet (2018-12-17) 

Antoine Beaupré: December 2018 report: archiving Brazil, calendar and LTS

Planet Debian - Fri, 21/12/2018 - 8:40pm
Last two months free software work

Keen readers probably noticed that I didn't produce a report in November. I am not sure why, but I couldn't find the time to do so. When looking back at those past two months, I didn't find that many individual projects I worked on, but there were massive ones, of the scale of archiving the entire government of Brazil or learning the intricacies of print media, both of which were slightly or largely beyond my existing skill set.

Calendar project

I've been meaning to write about this project more publicly for a while, but never found the way to do so productively. But now that the project is almost over -- I'm getting the final prints today and mailing others hopefully soon -- I think this deserves at least a few words.

As some of you might know, I bought a new camera last January. Wanting to get familiar with how it works and refresh my photography skills, I decided to embark on the project of creating a photo calendar for 2019. The basic idea was simple: take pictures regularly, then each month pick the best picture of that month, collect all those twelve pictures and send that to the corner store to print a few calendars.

Simple, right?

Well, twelve pictures turned into a whopping 8000 pictures since January, not all of which were that good of course. And of course, a calendar has twelve months -- so twelve pictures -- but also a cover and a back, which means thirteen pictures and some explaining. Being critical of my own work, it turned out that finding those pictures was sometimes difficult, especially considering the medium imposed some rules I didn't think about.

For example, the US Letter paper size imposes a different ratio (1.29) than the photographic ratio (~1.5) which means I had to reframe every photograph. Sometimes this meant discarding entire ideas. Other photos were discarded because too depressing even if I found them artistically or journalistically important: you don't want to be staring at a poor kid distressed at going into school every morning for an entire month. Another piece of advice I got was to forget about sunsets and dark pictures, as they are difficult to render correctly in print. We're used to bright screens displaying those pictures; paper is a completely different feeling. Having a good vibe for night and star photography, this was a fairly dramatic setback, even though I still did feature two excellent pictures.

Then I got a little carried away. At the suggestion of a friend, I figured I could get rid of the traditional holiday dates and replace them with truly secular holidays, which got me involved in a deep search for layout tools, which in turn naturally brought me to this LaTeX template. Those who have worked with LaTeX (or probably any professional layout tool) know what's next: I spent a significant amount of time perfecting the rendering and crafting the final document.

Slightly upset by the prices proposed by the corner store (15$CAD/calendar!), I figured I could do better by printing on my own, especially encouraged by a friend who had access to a good color laser printer. I then spent multiple days (if not weeks) looking for the right paper, which got me in the rabbit hole of paper weights, brightness, texture, and more. I'll just say this: if you ever thought lengths were ridiculous in the imperial system, wait until you find out how paper weights work. I finally managed to find some 270gsm gloss paper at the corner store -- after looking all over town, it was right there -- and did a first print of 15 calendars, which turned into 14 because of trouble with jammed paper. Because the printer couldn't do recto-verso copies, I had to spend basically 4 hours tending to that stupid device, bringing my loathing of printers (the machines) and my respect for printers (the people) to an entirely new level.

The time spent on the print was clearly not worth it in the end, and I ended up scheduling another print with a professional printer. The first proofs are clearly superior to the ones I have done myself and, in retrospect, completely worth the 15$ per copy.

I still haven't paid for my time in any significant way on that project, something I seem to excel at doing consistently. The prints themselves are not paid for, but my time in producing those photographs is not paid either, which clearly outlines that my future as a professional photographer, if any, lies far away from producing those silly calendars, at least for now.

More documentation on the project is available, in French, in calendrier-2019. I am also hoping to eventually publish a graphical review of the calendar, but for now I'll leave that for the friends and family who will receive the calendar as a gift...

Archival of Brasil

Another modest project I embarked on was a mission to archive the government of Brazil following the election of the infamous Jair Bolsonaro, dictatorship supporter, homophobe, racist, nationalist and christian freak who somehow managed to get elected president of Brazil. Since he threatened to rip apart basically the entire fabric of Brazilian society, comrades were worried that he might attack and destroy precious archives and data from government archives when he comes into power in January 2019. Like many countries in Latin America that lived under dictatorships in the 20th century, Brazil made an effort to investigate and keep memory of the atrocities that were committed during those troubled times.

Since I had written about archiving websites, those comrades naturally thought I could be of use, so we embarked on a crazy quest to archive Brazil, basically. We tried to create a movement similar to the Internet Archive (IA) response to the 2016 Trump election but were not really successful at getting IA involved. I was, fortunately, able to get the good folks at Archive Team (AT) involved and we have successfully archived a significant number of websites, adding terabytes of data to the IA through the backdoor that is AT. We also ran a bunch of archival on a special server, leveraging tools like youtube-dl, git-annex, wpull and, eventually, grab-site to archive websites, social network sites and video feeds.
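
Purely for illustration (the URL is a placeholder, not one of the sites that were archived), this is the kind of invocation those tools take:

youtube-dl -i "https://example.gov.br/videos"   # mirror video feeds, skipping over failures
grab-site "https://example.gov.br/"             # crawl a site into WARC files for later upload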

I kind of burned out on the job. Following Brazilian politics was scary and traumatizing - I have been very close to Brazil folks and they are colorful, friendly people. The idea that such a horrible person could come into power there is absolutely terrifying and I kept on thinking how disgusted I would be if I would have to archive stuff from the government of Canada, which I do not particularly like either... This goes against a lot of my personal ethics, but then it beats the obscurity of pure destruction of important scientific, cultural and historical data.

Miscellaneous

Considering the workload involved in the above craziness, the fact that I worked on fewer projects than my usual madness shouldn't come as a surprise.

  • As part of the calendar work, I wrote a new tool called moonphases which shows a list of moon phase events in the given time period, and shipped that as part of undertime 1.5 for lack of a better place.

  • AlternC revival: friends at Koumbit asked me for source code of AlternC projects I was working on. I was disappointed (but not surprised) that upstream simply took those repositories down without publishing an archive. Thankfully, I still had SVN checkouts but unfortunately, those do not have the full history, so I reconstructed repositories based on the last checkout that I had for alternc-mergelog, alternc-stats, and alternc-slavedns.

  • I packaged two new projects into Debian, bitlbee-mastodon (to connect to the new Mastodon network over IRC) and python-internetarchive (a command line interface to the IA upload forms)

  • my work on archival tools led to a moderately important patch in pywb: allow symlinking and hardlinking files instead of just copying was important to manage multiple large WARC files along with git-annex.

  • I also noticed the IA people were using a tool called slurm to diagnose bandwidth problems on their networks and implemented iface speed detection on Linux while I was there. slurm is interesting, but I also found out about bmon through the hilarious hollywood project. Each has their advantages: bmon has packets per second graphs, while slurm only has bandwidth graphs, but also notices maximal burst speeds which is very useful.

Debian Long Term Support (LTS)

This is my monthly Debian LTS report. Note that my previous report wasn't published on this blog but on the mailing list.

Enigmail / GnuPG 2.1 backport

I've spent a significant amount of time working on the Enigmail backport for a third consecutive month. I first published a straightforward backport of GnuPG 2.1 depending on the libraries available in jessie-backports last month, but then I actually rebuilt the dependencies as well and sent a "HEADS UP" to the mailing list, which finally got people's attention.

There are many changes bundled in that possible update: GnuPG actually depends on about half a dozen other libraries, mostly specific to GnuPG, but in some cases used by third party software as well. The most problematic one is libgcrypt20, which Emilio Pozuelo Monfort said included tens of thousands of lines of change. So even though I tested the change against cryptsetup, gpgme, libotr, mutt and Enigmail itself, there are concerns that other dependencies merit more testing as well.

This caused many to raise the idea of aborting the work and simply marking Enigmail as unsupported in jessie. But Daniel Kahn Gillmor suggested this should also imply removing Thunderbird itself from jessie, as simply removing Enigmail will force people to use the binaries from Mozilla's add-ons service. Gillmor explained those builds include an OpenPGP.js implementation of dubious origin, which is especially problematic considering it deals with sensitive private key material.

It's unclear which way this will go next. I'm taking a break from this issue and hope others will be able to test the packages. If we keep on working on Enigmail, the next step will be to re-enable the dbg packages that were removed in the stretch updates, use dh-autoreconf correctly, remove some mingw packages I forgot and test gcrypt like crazy (especially the 1.7 update). We'd also update to the latest Enigmail, as it fixes issues that forced the Tails project to disable autocrypt because of weird interactions that make it send cleartext (instead of encrypted) mail in some cases.

Automatic unclaimer

My previous report yielded an interesting discussion around my work on the security tracker, specifically the "automatic unclaimer" designed to unassign issues that are idle for too long. Holger Levsen, with his new coordinator hat, tested the program and found many bugs and missing features, which I was happy to implement. After many patches and back and forth, it seems the program is working well, although it's run by hand by the coordinator.

DLA website publication

I took a look at various issues surrounding the publication of LTS advisories on the main debian.org website. While normal security advisories are regularly published on debian.org/security, over 500 DLAs are missing from the website, mainly because DLAs are not automatically imported.

As it turns out, there is a script called parse-dla.pl that is designed to handle those entries but for some reason, they are not imported anymore. So I got to work to import the backlog and make sure new entries are properly imported.

Various fixes for parse-dla.pl were necessary to properly parse messages both from the templates generated by gen-DLA and from the existing archives. Then I tested the result with two existing advisories, which resulted in two MRs on the webwml repo: add data for DLA-1561 and add dla-1580 advisory. I requested and was granted access to the repo, and eventually merged my own MRs after a review from Levsen.

I eventually used the following procedure to test importing the entire archive:

rsync -vPa master.debian.org:/home/debian/lists/debian-lts-announce .
cd debian-lts-announce
xz -d *.xz
cat * > ../giant.mbox
mbox2maildir ../giant.mbox debian-lts-announce.d
for mail in debian-lts-announce.d/cur/*; do ~/src/security-tracker/./parse-dla.pl $mail; done

This led to 82 errors on an empty directory, which is not bad at all considering the amount of data processed. Of course, there were many more errors in the live directory as many advisories were already present. In the live directory, this resulted in 2431 new advisories added to the website.

There were a few corner cases:

  • The first month or so didn't use DLA identifiers and many of those were not correctly imported even back then.

  • DLA-574-1 was a duplicate, covered by the DLA-574-2 regression update. But I only found the Imagemagick advisory - it looks like the qemu one was never published.

  • Similarly, the graphite2 regression was never assigned a real identifier.

  • Other cases include for example DLA-787-1 which was sent twice and the DLA-1263-1 duplicate, which was irrecoverable as it was never added to data/DLA/list

Those special cases will all need to be handled by an eventual automation of this process, which I still haven't quite figured out. Maybe a process similar to the unclaimer will be followed: the coordinator or I could add missing DLAs until we streamline the process, as it seems unlikely we will want to add more friction to the DLA release by forcing workers to send merge requests to the web team, as that will only put more pressure on the web team...

There are also nine advisories missing from the mailing list archive because of a problem with the mailing list server at that time. We'll need to extract those from people's email archives, which I am not sure how to coordinate at this point.

PHP CVE identifier confusion

I have investigated CVE-2018-19518, mistakenly identified as CVE-2018-19158 in various places, including upstream's bugtracker. I requested the latter erroneous CVE-2018-19158 to be retired to avoid any future confusion. Unfortunately, Mitre indicated the CVE was already in "active use for pre-disclosure vulnerability coordination", which made it impossible to correct the error at that level.

I've instead asked upstream to correct the metadata in their tracker but it seems nothing has changed there yet.

Steinar H. Gunderson: FOSDEM talk about Futatabi

Planet Debian - Fri, 21/12/2018 - 1:22pm

Yesterday, I got word that my FOSDEM 2019 talk was accepted! Here's the preliminary abstract:

Futatabi is a free software solution for doing instant replay, e.g. for sports production. It supports multiple cameras, high-quality realtime slow motion on the GPU through optical flow, and seamless integration with Nageru, my live video mixer. We'll talk a bit about how interpolation through optical flow works, challenges in transporting the streams back and forth, and demonstrate a real-world sports production done earlier this year using Nageru and Futatabi.

Futatabi has been in the making for a long time, but most of that was various forms of planning and research—the first git commit in the repository was only in June this year. (However, it's grown rapidly; at the moment, it's about 9000 lines of C++, a 1000 lines more of GLSL, and then some more that's shared with Nageru.) There are still rough edges, of course, but it's definitely usable in practice; we used an early version for Trøndisk, and have been incorporating the experience from that to fix various warts and bugs. This was especially centered around I/O issues, but also to add a couple of new features.

As far as I know, this is the first time you can do anything like this using free software. Actually, even doing it using non-free software is hard; the only thing I know of that's comparable in terms of feature set/workflow is vMix Pro, which will set you back $1200 and as far as I know doesn't have interpolation. (Of course, vMix has a very large feature set in general, and does a lot of other stuff in addition to replay, so it's not fair to say that Futatabi is plain-out better.) The next step up is dedicated replay devices, which start at around $10k and go steeply up from there; even the controllers are typically around $1k.

The talk is Saturday February 2nd, 15:00 in room H.1309. As far as I know, it will be streamed and recorded, too, although FOSDEM's 25 fps streams will probably have issues reproducing the interpolated 59.94 fps streams faithfully without lots of stuttering :-) So see you there!

Laura Arjona Reina: Debian is back in the Mastodon/GNU Social fediverse, follow fosstodon.org/@debian

Planet Debian - Fri, 21/12/2018 - 12:42pm

The GNU Social instance where the @debian account was hosted (quitter.se) shut down last May. Thanks to the Quitter.se admins for all this time!

Long overdue, I’ve set up the @debian account with the feed of micronews.debian.org in another place (I still cannot selfhost properly, due to time constraints mostly). This time I chose a Mastodon instance, fosstodon.org. Thanks to the Fosstodon admins for hosting, and Carl Chenet for feed2toot.

If you are in the GNU Social/Mastodon fediverse (or other network compatible with ActivityPub I guess), you can follow https://fosstodon.org/@debian to get the news (the official source will always be https://micronews.debian.org, though).

I will try to follow back and answer mentions/replies as time allows. Ping (my contact info is in https://wiki.debian.org/LauraArjona) if something goes wrong (I’m learning this new platform) and I’ll do my best to get things back to normal.

Happy Solstice!

Shirish Agarwal: Agencies able to monitor conversations without judicial oversight.

Planet Debian - Fri, 21/12/2018 - 8:52am

It seems that the BJP has finally lost what little moral compass it held. Today, in a stunning order, we lost all the ground that our civil activist friends had hard-fought for in the last few years making the Right to Privacy a fundamental right. At least a decade's worth of effort has been put down the drain. While the reasons are not hard to fathom, they just lost 5 state elections and instead of introspecting on the reasons why they lost, they have chosen to act in this brazen manner.

The most worrying and interesting part at the same time is that the powers that have been given to the Central Agencies come without any judicial oversight, so it's pretty much a given that they will be used more for personal gain and enmity than against any real or perceived threat to Indian sovereignty. They might be perceiving that losing elections is tantamount to a threat to Indian sovereignty. They forget that BJP != India, which means the BJP is not equal to India; no political party is. In passing this order they are also setting precedents for fascist and dictatorial orders in a democratic, peace-loving country like India. The picture that emerges is a simple one: if they are going to lose, they might attempt these sorts of orders in the hope that they can eavesdrop on the opposition and business leaders, threaten them etc., and by hook or crook win the national elections which are supposed to be held in the middle of next year, and nobody would be the wiser. They have made many changes in the highest court expecting that the judges that have been appointed would rule in their favor. And anyway, any filing of a suit today would have a first hearing at least 3-4 months down the line, or even later depending on the dates of judges, court-room access etc. If nothing else, the Attorney General of the Government of India can always ask for more time.

The most worrying aspect of this is that they have given even the Police Commissioner the said powers. While we hope that higher bodies like the ED and the Intelligence Bureau would use the powers responsibly, the police sadly have been known to overreach their authority against civilians even without the additional powers given to them. This also sets a precedent for the State Police of different States to ask for the same powers citing law and order. It is a bit ironic, if you ask me, that if you are a law enforcement official serving, say, in the United States, you would have to get judicial consent before tapping a suspect's phone even though people there have the right to bear arms (it is one of their fundamental rights), while here in India possessing arms is more or less illegal but the police will still get such powers citing law and order.

I wanted to share the diversity and values that Indian people have, and this seems to be the most opportune time to do so as I had a bunch of experiences in Kerala. There were a couple of interesting experiences and observations that I made on my short visit to Kochi, Kerala. After being there for the whole of Debutsav I decided to take a mini-holiday of 2-3 days to just look around Kochi. While I had been to Kerala before, each time it had been either a beach or a temple and not really seeing Kerala. This time around, I was able to see how Kochi being a sea port brought a lot of influences to the Christian and Muslim minorities and how they are able to have a sort of jovial relationship. I did see Christian schools on roads named after Muslim saints and vice-versa, although due to the Dutch East India Company, most of the places and even roads had English names. Being alone, I was able to talk to some of the fishermen and they shared both their helplessness with the State's response to the floods and the uncertainty of the catch. Because Fort Kochi is still a functioning port, I did see a couple of huge cargo or freight ships, and I was also able to use the ferry which the local people use for point to point travel. I will probably upload pictures of both here in a couple of days. While I was fascinated by these huge container ships, the fishermen shared with me how they have polluted the inland waters and how fishing is not as good as it once was. The only way is to go to the open sea, and with the recent floods that is not inspiring.

One of the more interesting experiences was meeting a Mr. Singh. I will not share the gentleman's details as all the sharing was done off the record. I am sharing here just as an example of a reality which is not exposed often. Mr. Singh is the eldest of three brothers. While he was born in Punjab, due to reasons unknown (I didn't ask him) his father settled in Kerala. Just like the rest of the Keralites of his time, there wasn't much for young men like him to do. So instead of sitting idle like his 2 brothers he decided to take the risk and go to the Middle East. This was in the 70's. He worked in the UAE, Dubai, Qatar and other places. He found many Indians, Punjabis included, and none gave a fig about what religion they were. They made many watering holes where Indians used to meet every Friday, socialize and share news of whatever was currently happening there or whatever news they could get of home, India.

When he came back in the 90's many people, including him, started either small shops or restaurant businesses to cater to people. Unlike Pune though, Kochi doesn't seem to have many restaurants, but anyway that's a different story.

Around 1991/92, unrelated to the 1992 Bombay riots, there were communal riots there as well, and his shop was burned down. But due to the relationships he had built up in the Middle East, many Muslims and even some Parsis came to help him get back on his feet. So while his children are well-placed, he did say he hoped that the community relationships which have helped and guided him don't get trampled by the BJP, which is trying to upset the easy relationships most communities have built over decades. I did hear some similar stories from other people as well. There are even a lot of Assamese people who have emigrated to Kerala, and they speak Malayalam better than the natives. In fact, Balasankar confided that the son of the domestic helper who comes to their place to help out his mother and family got 100/100 in Malayalam. This says something about the spirit of the place and the people therein.

Lubuntu Blog: Sunsetting i386

Planet Ubuntu - Fri, 21/12/2018 - 1:43am
Lubuntu has been and continues to be the go-to Ubuntu flavor for people who want the most from their computers, especially older hardware that cannot handle today’s workloads. However, the project and computing as a whole has drastically changed in many ways since its origin ten years ago. Computers have become faster, more secure, and […]

Eric Hammond: Using AWS SSM Parameter Store With Git SSH Keys

Planet Ubuntu - Fri, 21/12/2018 - 1:00am

and employing them securely

At Archer, we have been moving credentials into AWS Systems Manager (SSM) Parameter Store and AWS Secrets Manager. One of the more interesting credentials is an SSH key that is used to clone a GitHub repository into an environment that has IAM roles available (E.g., AWS Lambda, Fargate, EC2).

We’d like to treat this SSH private key as a secret that is stored securely in SSM Parameter Store, with access controlled by AWS IAM, and only retrieve it briefly when it is needed to be used. We don’t even want to store it on disk when it is used, no matter how temporarily.

After a number of design and test iterations with Buddy, here is one of the approaches we ended up with. This is one I like for how clean it is, but may not be what ends up going into the final code.

This solution assumes that you are using bash to run your Git commands, but could be converted to other languages if needed.

Using The Solution

Here is the bash function that retrieves the SSH private key from SSM Parameter Store, adds it to a temporary(!) ssh-agent process, and runs the desired git subcommand using the same temporary ssh-agent process:

git-with-ssm-key() {
  ssm_key="$1"; shift
  ssh-agent bash -o pipefail -c '
    if aws ssm get-parameter \
         --with-decryption \
         --name "'$ssm_key'" \
         --output text \
         --query Parameter.Value |
       ssh-add -q -
    then
      git "$@"
    else
      echo >&2 "ERROR: Failed to get or add key: '$ssm_key'"
      exit 1
    fi
  ' bash "$@"
}

Here is a sample of how the above bash function might be used to clone a repository using a Git SSH private key stored in SSM Parameter Store under the key “/githubsshkeys/gitreader”:

git-with-ssm-key /githubsshkeys/gitreader clone git@github.com:alestic/myprivaterepo.git

Other git subcommands can be run the same way. The SSH private key is only kept in memory and only during the execution of the git command.

How It Works

The main trick here is that ssh-agent can be run specifying a single command as an argument. That command in this case is a bash process that turns around and runs multiple commands.

It first gets the SSH private key from SSM Parameter Store, and adds the key to the ssh-agent process by passing it on stdin. Then it runs the requested git command, with the ssh-agent verifying identity to GitHub using the SSH private key.

When the git command has completed, the parent ssh-agent also disappears, cleaning up after itself.
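
Here is the same pattern in isolation, with a hypothetical on-disk key, just to illustrate the lifetime of the agent:

ssh-agent bash -c 'ssh-add -q ~/.ssh/throwaway_key && ssh -T git@github.com'
# the agent process (and the key it holds) exits as soon as the inner command returns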

Note: The current syntax doesn’t work with arguments that include spaces and other strange characters that might need quoting or escaping. I’d love to fix this, but note that this is only needed for commands that interact with the remote GitHub service.

Setting Up SSM Parameter Store

Now let’s go back and talk about how we might set up the AWS SSM Parameter Store and GitHub so that the above can access a repository.

Create a new SSH key with no passphrase (as it will be used by automated processes). This does go to disk, so do it somewhere safe.

keyname="gitreader" # Or something meaningful to you ssh-keygen -t rsa -N "" -b 4096 -C "$keyname" -f "$keyname.pem"

Upload the SSH private key to SSM Parameter Store:

ssm_key="/githubsshkeys/$keyname" # Your choice description="SSH private key for reading Git" # Your choice aws ssm put-parameter \ --name "$ssm_key" \ --type SecureString \ --description "$description" \ --value "$(cat $keyname.pem)"

Note: The above uses the default AWS SSM key in your account, but you can specify another with the --key-id option.

Once the SSH private key is safely in SSM Parameter Store, shred/wipe the copy on the local disk using something like (effectiveness may vary depending on file system type and underlying hardware):

shred -u "$keyname.pem" # or wipe, or your favorite data destroyer

Setting Up GitHub User

The SSH public key can be used to provide access with different Git repository hosting providers, but GitHub is currently the most popular.

Create a new GitHub user for automated use:

https://github.com/

Copy the SSH public key that we just created

cat "$keyname.pem.pub"

Add the new SSH key to the GitHub user, pasting in the SSH public key value:

https://github.com/settings/ssh/new

Do not upload the SSH private key to GitHub. Besides, you’ve already shredded it.

Setting Up GitHub Repo Access

How you perform this step depends on how you have set up GitHub.

If you want the new user to have read-only access (and not push access), then you probably want to use a GitHub organization to own the repository, and add the new user to a team that has read-only access to the repository.

Here’s more information about giving teams different levels of access in a GitHub organization:

https://help.github.com/articles/about-teams/

Alternatively, you can add the new GitHub user as a collaborator on a repository, but that will allow anybody with access to the SSH private key (which is now located in SSM Parameter Store) to push changes to that repository, instead of enforcing read-only.

Once GitHub is set up, you can go back and use the git-with-ssm-key command that was shown at the start of this article. For example:

git-with-ssm-key "$ssm_key" clone git@github.com:MYORG/MYREPO.git

If you have given your GitHub user write access to a repo, you can also use the push and related git subcommands.

Cleanup

Once you are done with testing this setup, you can clean up after yourself.

Remove the SSM Parameter Store key/value.

aws ssm delete-parameter \
  --name "$ssm_key"

If you created a GitHub user and no longer need it, you may delete it carefully. WARNING! Make sure you sign back in to the temporary GitHub user first! Do not delete your main GitHub user!

https://github.com/settings/admin

When the GitHub user is deleted, GitHub will take care of removing that user from team membership and repository collaborator lists.

GitHub vs. AWS CodeCommit

For now, we are using GitHub at our company, which is why we need to go through all of the above rigamarole.

If we were using AWS CodeCommit, this entire process would be easier, because we could just give the code permission to read the Git repository in CodeCommit using the IAM role in Lambda/Fargate/EC2.
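
A sketch of what that would look like (the role name is hypothetical; the policy is the AWS-managed read-only CodeCommit policy):

aws iam attach-role-policy \
  --role-name my-app-role \
  --policy-arn arn:aws:iam::aws:policy/AWSCodeCommitReadOnly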

Original article and comments: https://alestic.com/2018/12/aws-ssm-parameter-store-git-key/

Sune Vuorela: Kookbook 0.1 – write and manage your kitchen recipes

Planet Debian - Thu, 20/12/2018 - 7:47pm

Release Time

A little while ago, I blogged about an application I was writing for my cooking recipes.

I have now gotten to a point where I will declare it version 0.1. The release can be found on a KDE Download server near you: https://download.kde.org/unstable/kookbook/kookbook-0.1.0.tar.xz.mirrorlist.

Desktop app

As written back then, Kookbook basically displays markdown, parses semi-structured markdown for ingredients and tags, and allows accessing the recipes that way. Kookbook also offers to open a system editor for editing the content.

This is the “normal” view:

Support for images in recipes has also been added since the previous blog post.

Mobile app
Since the May blog post, I have also written a more basic, touch-friendly user interface. It does not offer the full set of desktop features: it doesn't offer to launch an editor for you, and it only allows accessing the recipes thru their names, not thru all the other ways of finding recipes. Though the last part is up for discussion for further releases.

The main page looks like

And lets you search thru the titles to find the one you are after.

The recipe view is more or less the same.

The code

The code itself is mit/x11 licensed, and contains a couple of interesting bits that others might want to take advantage of:

  • Kirigami file dialog. Could be polished and upstreamed.
  • Qt Markdown capability (with libdiscount). Could be librarified

Have fun
Go forth, do cooking. And feel free to share recipes. Or create patches.

Ubuntu Podcast from the UK LoCo: S11E41 – Forty-One Jane Doe’s

Planet Ubuntu - Thu, 20/12/2018 - 4:00pm

This week we have been playing Super Smash Bros Ultimate and upgrading home servers from Ubuntu 16.04 to 18.04. We discuss Discord Store confirming Linux support, MIPS going open source, Microsoft Edge switching to Chromium and the release of Collabora Online Developer Edition 4.0 RC1. We also round up community news and events.

It’s Season 11 Episode 41 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.
