Planet Debian

Reproducible Builds: Weekly report #178

Tue, 25/09/2018 - 6:53pm

Here’s what happened in the Reproducible Builds effort between Sunday September 16 and Saturday September 22 2018:

Patches filed diffoscope development

diffoscope version 102 was uploaded to Debian unstable by Mattia Rizzolo. It included contributions already covered in previous weeks as well as new ones from:

Test framework development

There were a number of updates to our Jenkins-based testing framework that powers tests.reproducible-builds.org this month, including:

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Daniel Shahaf, Holger Levsen, Jelle van der Waa, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible-builds.org/blog/ reproducible-builds.org

Crossing the Great St Bernard Pass

Tue, 25/09/2018 - 4:26pm

It's a great day for the scenic route to Italy, home of Beethoven's Swiss cousins.

What goes up, must come down...

Descent into the Aosta valley

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Smallish haul

Tue, 25/09/2018 - 6:34am

It's been a little while since I've made one of these posts, and of course I'm still picking up this and that. Books won't buy themselves!

Elizabeth Bear & Katherine Addison — The Cobbler's Boy (sff)
P. Djèlí Clark — The Black God's Drums (sff)
Sabine Hossenfelder — Lost in Math (nonfiction)
N.K. Jemisin — The Dreamblood Duology (sff)
Mary Robinette Kowal — The Calculating Stars (sff)
Yoon Ha Lee — Extracurricular Activities (sff)
Seanan McGuire — Night and Silence (sff)
Bruce Schneier — Click Here to Kill Everyone (nonfiction)

I have several more pre-orders that will be coming out in the next couple of months. Still doing lots of reading, but behind on writing up reviews, since work has been busy and therefore weekends have been low-energy. That should hopefully change shortly.

Russ Allbery https://www.eyrie.org/~eagle/ Eagle's Path

Archiving web sites

Tue, 25/09/2018 - 2:00am

I recently took a deep dive into web site archival for friends who were worried about losing control over the hosting of their work online in the face of poor system administration or hostile removal. This makes web site archival an essential instrument in the toolbox of any system administrator. As it turns out, some sites are much harder to archive than others. This article goes through the process of archiving traditional web sites and shows how it falls short when confronted with the latest fashions in the single-page applications that are bloating the modern web.

Converting simple sites

The days of handcrafted HTML web sites are long gone. Now web sites are dynamic and built on the fly using the latest JavaScript, PHP, or Python framework. As a result, the sites are more fragile: a database crash, spurious upgrade, or unpatched vulnerability might lose data. In my previous life as a web developer, I had to come to terms with the idea that customers expect web sites to basically work forever. This expectation matches poorly with the "move fast and break things" attitude of web development. Working with the Drupal content-management system (CMS) was particularly challenging in that regard as major upgrades deliberately break compatibility with third-party modules, which implies a costly upgrade process that clients could seldom afford. The solution was to archive those sites: take a living, dynamic web site and turn it into plain HTML files that any web server can serve forever. This process is useful for your own dynamic sites but also for third-party sites that are outside of your control and that you might want to safeguard.

For simple or static sites, the venerable Wget program works well. The incantation to mirror a full web site, however, is byzantine:

$ nice wget --mirror --execute robots=off --no-verbose --convert-links \
      --backup-converted --page-requisites --adjust-extension \
      --base=./ --directory-prefix=./ --span-hosts \
      --domains=www.example.com,example.com http://www.example.com/

The above downloads the content of the web page, but also crawls everything within the specified domains. Before you run this against your favorite site, consider the impact such a crawl might have on the site. The above command line deliberately ignores robots.txt rules, as is now common practice for archivists, and hammers the website as fast as it can. Most crawlers have options to pause between hits and limit bandwidth usage to avoid overwhelming the target site.
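For instance, a politer variant of the incantation might look like the following sketch; the pause and rate values here are arbitrary, not recommendations from this article:

$ wget --mirror --page-requisites --adjust-extension --convert-links \
      --wait=1 --random-wait --limit-rate=200k http://www.example.com/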

The above command will also fetch "page requisites" like style sheets (CSS), images, and scripts. The downloaded page contents are modified so that links point to the local copy as well. Any web server can host the resulting file set, which results in a static copy of the original web site.

That is, when things go well. Anyone who has ever worked with a computer knows that things seldom go according to plan; all sorts of things can make the procedure derail in interesting ways. For example, it was trendy for a while to have calendar blocks in web sites. A CMS would generate those on the fly and make crawlers go into an infinite loop trying to retrieve all of the pages. Crafty archivers can resort to regular expressions (e.g. Wget has a --reject-regex option) to ignore problematic resources. Another option, if the administration interface for the web site is accessible, is to disable calendars, login forms, comment forms, and other dynamic areas. Once the site becomes static, those will stop working anyway, so it makes sense to remove such clutter from the original site as well.
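As a sketch of the --reject-regex approach, something like the following would skip hypothetical calendar, login, and comment URLs; the regular expression is matched against the full URL and is, of course, entirely site-specific:

$ wget --mirror --reject-regex '(calendar|login|comment)' \
      http://www.example.com/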

JavaScript doom

Unfortunately, some web sites are built with much more than pure HTML. In single-page sites, for example, the web browser builds the content itself by executing a small JavaScript program. A simple user agent like Wget will struggle to reconstruct a meaningful static copy of those sites as it does not support JavaScript at all. In theory, web sites should be using progressive enhancement to have content and functionality available without JavaScript but those directives are rarely followed, as anyone using plugins like NoScript or uMatrix will confirm.

Traditional archival methods sometimes fail in the dumbest way. When trying to build an offsite backup of a local newspaper (pamplemousse.ca), I found that WordPress adds query strings (e.g. ?ver=1.12.4) at the end of JavaScript includes. This confuses content-type detection in the web servers that serve the archive, which rely on the file extension to send the right Content-Type header. When such an archive is loaded in a web browser, it fails to load scripts, which breaks dynamic web sites.
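One crude workaround, a sketch rather than anything from the original crawl, is to strip the query strings from the downloaded file names so that extension-based detection works again; note that references inside the archived HTML may need the same rewrite:

# assumes the crawl saved assets under ./mirror with literal names
# like "script.js?ver=1.12.4"
$ find ./mirror -name '*\?*' -print0 | while IFS= read -r -d '' f; do
      mv "$f" "${f%%\?*}"
  done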

As the web moves toward using the browser as a virtual machine to run arbitrary code, archival methods relying on pure HTML parsing need to adapt. The solution for such problems is to record (and replay) the HTTP headers delivered by the server during the crawl and indeed professional archivists use just such an approach.

Creating and displaying WARC files

At the Internet Archive, Brewster Kahle and Mike Burner designed the ARC (for "ARChive") file format in 1996 to provide a way to aggregate the millions of small files produced by their archival efforts. The format was eventually standardized as the WARC ("Web ARChive") specification that was released as an ISO standard in 2009 and revised in 2017. The standardization effort was led by the International Internet Preservation Consortium (IIPC), which is an "international organization of libraries and other organizations established to coordinate efforts to preserve internet content for the future", according to Wikipedia; it includes members such as the US Library of Congress and the Internet Archive. The latter uses the WARC format internally in its Java-based Heritrix crawler.

A WARC file aggregates multiple resources like HTTP headers, file contents, and other metadata in a single compressed archive. Conveniently, Wget actually supports the file format with the --warc parameter. Unfortunately, web browsers cannot render WARC files directly, so a viewer or some conversion is necessary to access the archive. The simplest such viewer I have found is pywb, a Python package that runs a simple webserver to offer a Wayback-Machine-like interface to browse the contents of WARC files. The following set of commands will render a WARC file on http://localhost:8080/:

$ pip install pywb
$ wb-manager init example
$ wb-manager add example crawl.warc.gz
$ wayback

This tool was, incidentally, built by the folks behind the Webrecorder service, which can use a web browser to save dynamic page contents.
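Producing a WARC file to feed pywb in the first place is, as mentioned, a matter of adding the --warc option to the Wget crawl; a minimal sketch, which writes a compressed crawl.warc.gz alongside the regular mirror:

$ wget --mirror --page-requisites --warc-file=crawl \
      http://www.example.com/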

Unfortunately, pywb has trouble loading WARC files generated by Wget, because Wget followed an inconsistency in the 1.0 specification that was fixed in the 1.1 specification. Until Wget or pywb fixes the problem, WARC files produced by Wget are not reliable enough for my uses, so I have looked at other alternatives. A crawler that got my attention is simply called crawl. Here is how it is invoked:

$ crawl https://example.com/

(It does say "very simple" in the README.) The program does support some command-line options, but most of its defaults are sane: it will fetch page requirements from other domains (unless the -exclude-related flag is used), but does not recurse out of the domain. By default, it fires up ten parallel connections to the remote site, a setting that can be changed with the -c flag. But, best of all, the resulting WARC files load perfectly in pywb.
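So, for instance, a politer run that also skips third-party page requisites could look like the following sketch; both flags are described above, and crawl's README remains the authoritative reference:

$ crawl -c 2 -exclude-related https://example.com/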

Future work and alternatives

There are plenty more resources for using WARC files. In particular, there's a Wget drop-in replacement called Wpull that is specifically designed for archiving web sites. It has experimental support for PhantomJS and youtube-dl integration that should allow downloading more complex JavaScript sites and streaming multimedia, respectively. The software is the basis for an elaborate archival tool called ArchiveBot, which is used by the "loose collective of rogue archivists, programmers, writers and loudmouths" at ArchiveTeam in its struggle to "save the history before it's lost forever". It seems that PhantomJS integration does not work as well as the team wants, so ArchiveTeam also uses a rag-tag bunch of other tools to mirror more complex sites. For example, snscrape will crawl a social media profile to generate a list of pages to send into ArchiveBot. Another tool the team employs is crocoite, which uses the Chrome browser in headless mode to archive JavaScript-heavy sites.

This article would also not be complete without a nod to the HTTrack project, the "website copier". Working similarly to Wget, HTTrack creates local copies of remote web sites but unfortunately does not support WARC output. Its interactive aspects might be of more interest to novice users unfamiliar with the command line.

In the same vein, during my research I found a full rewrite of Wget called Wget2 that has support for multi-threaded operation, which might make it faster than its predecessor. It is, however, missing some features of Wget, most notably reject patterns, WARC output, and FTP support, but adds RSS, DNS caching, and improved TLS support.

Finally, my personal dream for these kinds of tools would be to have them integrated with my existing bookmark system. I currently keep interesting links in Wallabag, a self-hosted "read it later" service designed as a free-software alternative to Pocket (now owned by Mozilla). But Wallabag, by design, creates only a "readable" version of the article instead of a full copy. In some cases, the "readable version" is actually unreadable and Wallabag sometimes fails to parse the article. Instead, other tools like bookmark-archiver or reminiscence save a screenshot of the page along with full HTML but, unfortunately, no WARC file that would allow an even more faithful replay.

The sad truth of my experiences with mirrors and archival is that data dies. Fortunately, amateur archivists have tools at their disposal to keep interesting content alive online. For those who do not want to go through that trouble, the Internet Archive seems to be here to stay and Archive Team is obviously working on a backup of the Internet Archive itself.

This article first appeared in the Linux Weekly News.

As usual, here's the list of issues and patches generated while researching this article:

I also want to personally thank the folks in the #archivebot channel for their assistance and letting me play with their toys.

The Pamplemousse crawl is now available on the Internet Archive; it might end up in the Wayback Machine at some point if the Archive curators think it is worth it.

Another example of a crawl is this archive of two Bloomberg articles which the "save page now" feature of the Internet Archive wasn't able to save correctly. But webrecorder.io could! Those pages can be seen in the Webrecorder player to get a better feel for how faithful a WARC file really is.

Finally, this article was originally written as a set of notes and documentation in the archive page, which may also be of interest to my readers.

Antoine Beaupré https://anarc.at/tag/debian-planet/ pages tagged debian-planet

VLC in Debian now can do bittorrent streaming

Mon, 24/09/2018 - 9:20pm

Back in February, I got curious to see if VLC now supported Bittorrent streaming. It did not, despite the fact that the idea and code to handle such streaming had been floating around for years. I did however find a standalone plugin for VLC to do it, and half a year later I decided to wrap up the plugin and get it into Debian. I uploaded it to NEW a few days ago, and am very happy to report that it entered Debian a few hours ago, and should be available in Debian/Unstable tomorrow, and Debian/Testing in a few days.

With the vlc-plugin-bittorrent package installed you should be able to stream videos using a simple call to

vlc https://archive.org/download/TheGoat/TheGoat_archive.torrent

It can handle magnet links too. Now if only native vlc had bittorrent support; then a lot more people would be helping each other to share public domain and creative commons movies. The plugin needs some stability work with seeking and picking the right file in a torrent with many files, but is already usable. Please note that the plugin does not remove downloaded files when vlc is stopped, so it can fill up your disk if you are not careful. Have fun. :)
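A magnet invocation would look like the following sketch; the info hash here is a placeholder, so substitute one from a real torrent:

vlc 'magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567'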

I would love to get help maintaining this package. Get in touch if you are interested.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Petter Reinholdtsen http://people.skolelinux.org/pere/blog/ Petter Reinholdtsen - Entries tagged english

Handling an old Digital Photo Frame (AX203) with Debian (and gphoto2)

Sat, 22/09/2018 - 8:20pm

Some days ago I found a key chain at home that was a small digital photo frame; it seems it had not been used since 2009 (old times, when I was not yet using Debian at home). The photo frame was still working (I connected it with a USB cable and after some seconds it turned on), and indeed showed 37 photos from 2009.

When I connected it with a USB cable to the computer, it asked “Connect USB? Yes/No”. I pressed the button to say “yes” and nothing happened on the computer (I was expecting a USB drive to show up in Dolphin, but no).

I looked at the “dmesg” output, where it showed up as a CD-ROM:

[ 1620.497536] usb 3-2: new full-speed USB device number 4 using xhci_hcd
[ 1620.639507] usb 3-2: New USB device found, idVendor=1908, idProduct=1320
[ 1620.639513] usb 3-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 1620.639515] usb 3-2: Product: Photo Frame
[ 1620.639518] usb 3-2: Manufacturer: BUILDWIN
[ 1620.640549] usb-storage 3-2:1.0: USB Mass Storage device detected
[ 1620.640770] usb-storage 3-2:1.0: Quirks match for vid 1908 pid 1320: 20000
[ 1620.640807] scsi host7: usb-storage 3-2:1.0
[ 1621.713594] scsi 7:0:0:0: CD-ROM buildwin Photo Frame 1.01 PQ: 0 ANSI: 2
[ 1621.715400] sr 7:0:0:0: [sr1] scsi3-mmc drive: 40x/40x writer cd/rw xa/form2 cdda tray
[ 1621.715745] sr 7:0:0:0: Attached scsi CD-ROM sr1
[ 1621.715932] sr 7:0:0:0: Attached scsi generic sg1 type 5

It was not automounted, though. I mounted it and looked at the files, but I couldn’t find photos there, only these files:

Autorun.inf      FEnCodeUnicode.dll   LanguageUnicode.ini
DPFMate.exe      flashlib.dat         StartInfoUnicode.ini

The Autorun.inf file was pointing to the DPFMate.exe file.

I connected the device to a Windows computer and then I could run the DPFMate.exe program, and it was a program to manage the photos in the device.

I was wondering if I could manage the device from Debian and then searched for «dpf “digital photo frame” linux dpfmate» and found this page:

http://www.penguin.cz/~utx/hardware/Abeyerr_Digital_Photo_Frame/

Yes, that one was my key chain!

I looked for gphoto in Debian, going to https://packages.debian.org/gphoto, and learned that the program I needed to install was gphoto2.
I installed it and then went to its Quick Start Guide to learn how to access the device, get the photos etc. In particular, I used these commands:

gphoto2 --auto-detect
Model                                          Port
----------------------------------------------------------
AX203 USB picture frame firmware ver 3.4.x     usbscsi:/dev/sg1

gphoto2 --get-all-files

(this copied all the pictures in the photo frame to the current folder on my computer)

gphoto2 --upload-file=name_of_file

(to put some file in the photo frame)

gphoto2 --delete-file=1-38

(to delete files 1 to 38 from the photo frame).

larjona https://larjona.wordpress.com English – The bright side

You may follow me on Mastodon

Fri, 21/09/2018 - 12:15pm

I never fancied having accounts with the big players that much, so I never touched e.g. Twitter.

But Mastodon is the kind of service that works for me. You can find me on https://fosstodon.org.

My nick over there is sunweaver. I'll be posting interesting stuff from my work there, probably more regularly than on this blog.

sunweaver http://sunweavers.net/blog/blog/1 sunweaver's blog

Resigning as the FSFE Fellowship's representative

Thu, 20/09/2018 - 7:15pm

I've recently sent the following email to fellows; I'm posting it here for the benefit of the wider community and also for any fellows who don't receive the email.

Dear fellows,

Given the decline of the Fellowship and FSFE's migration of fellows into a supporter program, I no longer feel that there is any further benefit that a representative can offer to fellows.

With recent blogs, I've made a final effort to fulfill my obligations to keep you informed. I hope fellows have a better understanding of who we are and can engage directly with FSFE without a representative. Fellows who want to remain engaged with FSFE are encouraged to work through your local groups and coordinators as active participation is the best way to keep an organization on track.

This resignation is not a response to any other recent events. From a logical perspective, if the Fellowship is going to evolve out of a situation like this, it is in the hands of local leaders and fellowship groups; it is no longer a task for a single representative.

There are many positive experiences I've had working with people in the FSFE community and I am also very grateful to FSFE for those instances where I have been supported in activities for free software.

Going forward, leaving this role will also free up time and resources for other free software projects that I am engaged in.

I'd like to thank all those of you who trusted me to represent you and supported me in this role during such a challenging time for the Fellowship.

Sincerely,

Daniel Pocock

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Words Have Meanings

Thu, 20/09/2018 - 3:10pm

As a follow-up to my post with Suggestions for Trump Supporters [1] I notice that many people seem to have private definitions of words that they like to use.

There are some situations where the use of a word is contentious and different groups of people have different meanings. One example that is known to most people involved with computers is “hacker”. That means “criminal” according to mainstream media and often “someone who experiments with computers” to those of us who like experimenting with computers. There is ongoing discussion about whether we should try and reclaim the word for its original use or whether we should just accept that’s a lost cause. But generally, based on context, it’s clear which meaning is intended. There is also some overlap between the definitions: some people who like to experiment with computers conduct experiments with computers they aren’t permitted to use. Some people who are career computer criminals started out experimenting with computers for fun.

But sometimes words are misused in ways that fail to convey any useful ideas and just obscure the real issues. One example is the people who claim to be left-wing Libertarians. Murray Rothbard (AKA “Mr Libertarian”) boasted about “stealing” the word Libertarian from the left [2]. Murray won that battle; they should get over it and move on. When anyone talks about “Libertarianism” nowadays they are talking about the extreme right. Claiming to be a left-wing Libertarian doesn’t add any value to any discussion apart from demonstrating the fact that the person who makes such a claim is one who gives hipsters a bad name. The first time penny-farthings were fashionable the word “libertarian” was associated with left-wing politics. Trying to have a sensible discussion about politics while using a word in the opposite way to almost everyone else is about as productive as trying to actually travel somewhere by penny-farthing.

Another example is the word “communist” which according to many Americans seems to mean “any person or country I don’t like”. It’s often invoked as a magical incantation that’s supposed to automatically win an argument. One recent example I saw was someone claiming that “Russia has always been communist” and rejecting any evidence to the contrary. If someone was to say “Russia has always been a shit country” then there’s plenty of evidence to support that claim (Tsarist, communist, and fascist Russia have all been shit in various ways). But no definition of “communism” seems to have any correlation with modern Russia. I never discovered what that person meant by claiming that Russia is communist, they refused to make any comment about Russian politics and just kept repeating that it’s communist. If they said “Russia has always been shit” then it would be a clear statement, people can agree or disagree with that but everyone knows what is meant.

The standard response to pointing out that someone is using a definition of a word that is either significantly different to most of the world (or simply inexplicable) is to say “that’s just semantics”. If someone’s “contribution” to a political discussion is restricted to criticising people who confuse “their” and “there” then it might be reasonable to say “that’s just semantics”. But pointing out that someone’s writing has no meaning because they choose not to use words in the way others will understand them is not just semantics. When someone claims that Russia is communist and Americans should reject the Republican party because of their Russian connection it’s not even wrong. The same applies when someone claims that Nazis are “leftist”.

Generally the aim of a political debate is to convince people that your cause is better than other causes. To achieve that aim you have to state your cause in language that can be understood by everyone in the discussion. Would the person who called Russia “communist” be more or less happy if Russia had common ownership of the means of production and an absence of social classes? I guess I’ll never know, and that’s their failure at debating politics.

etbe https://etbe.coker.com.au etbe – Russell Coker

vmdb2 roadmap

Thu, 20/09/2018 - 10:06am

I now have a rudimentary roadmap for reaching 1.0 of vmdb2, my Debian image building tool.

The visual roadmap is generated from the following YAML file:

vmdb2_1_0:
  label: |
    vmdb2 is production ready
  depends:
  - ci_builds_images
  - docs
  - x220_install

docs:
  label: |
    vmdb2 has a user manual of acceptable quality

x220_install:
  label: |
    x220 can install Debian onto a Thinkpad x220 laptop

ci_builds_images:
  label: |
    CI builds and publishes images using vmdb2
  depends:
  - amd64_images
  - arm_images

amd64_images:
  label: |
    CI: amd64 images

arm_images:
  label: |
    CI: arm images of various kinds

Lars Wirzenius' blog http://blog.liw.fi/englishfeed/ englishfeed

binb 0.0.1: binb is not Beamer

Thu, 20/09/2018 - 3:55am

Following a teaser tweet two days ago, we are thrilled to announce that binb version 0.0.1 arrived on CRAN earlier this evening.

binb extends a little running joke^Htradition I created a while back and joins three other CRAN packages offering RMarkdown integration:

  • tint for tint is not Tufte : pdf or html papers with a fresher variant of the famed Tufte style;
  • pinp for pinp is not PNAS : two-column pdf vignettes in the PNAS style (which we use for several of our packages);
  • linl for linl is not Letter : pdf letters

All four offer easy RMarkdown integration, leaning heavily on the awesome super-power of pandoc as well as general R glue.

This package (finally) wraps something I had offered for Metropolis via a simpler GitHub repo – a repo I put together more-or-less spur-of-the-moment-style when asked for it during the useR! 2016 conference. It also adds the lovely IQSS Beamer theme by Ista Zahn which offers a rather sophisticated spin on the original Metropolis theme by Matthias Vogelgesang.

We put two simple teasers on the GitHub repo.

Metropolis

Consider the following minimal example, adapted from the original minimal example at the bottom of the Metropolis page:

---
title: A minimal example
author: Matthias Vogelgesang
date: \today
institute: Centre for Modern Beamer Themes
output: binb::metropolis
---

# First Section

## First Frame

Hello, world!
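To try this yourself, save the example as, say, example.Rmd and render it from the command line; a minimal sketch, assuming the rmarkdown and binb packages are installed (the file name is, of course, an assumption):

$ Rscript -e 'rmarkdown::render("example.Rmd")'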

It creates a three-page pdf file which we converted into this animated gif (which loses font crispness, sadly):

IQSS

Similarly, for IQSS we use the following input adapting the example above but showing sections and subsections for the nice headings it generates:

---
title: A minimal example
author: Ista Zahn
date: \today
institute: IQSS
output: binb::iqss
---

# First Section

## First Sub-Section

### First Frame

Hello, world!

# Second Section

## Second Subsection

### Second Frame

Another planet!

This creates this pdf file which we converted into this animated gif (also losing font crispness):

The initial (short) NEWS entry follows:

Changes in binb version 0.0.1 (2018-09-19)
  • Initial CRAN release supporting Metropolis and IQSS

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Privacy and Anonymity Colloquium • Activity program announced!

Thu, 20/09/2018 - 12:07am

It's only two weeks to the beginning of the privacy and anonymity colloquium we will be holding at the Engineering Faculty of my University. Of course, it's not by mere chance that this colloquium starts just after the Tor Meeting, which will happen for the first time in Latin America (and in our city!).

So, even though changes are still prone to happen, I am happy to announce the activity program for the colloquium!

I know some people will ask, so: we don't have the infrastructure to commit to having a video feed from it. We will, though, record the presentations on video, and I have made the commitment to the university to produce a book from it within a year's time. So, at some point in the future, I will be able to give you a full copy of the topics we will discuss!

But, if you are in Mexico City, no excuses: You shall come to the colloquium!

Attachments: poster.pdf (881.35 KB), poster_small.jpg (81.22 KB)

gwolf http://gwolf.org Gunnar Wolf

mmdebstrap: unprivileged reproducible multi-mirror Debian chroot in 11 s

Wed, 19/09/2018 - 8:46pm

I wrote an alternative to debootstrap. I call it mmdebstrap which is short for multi-mirror debootstrap. Its interface is very similar to debootstrap, so you can just do:

$ sudo mmdebstrap unstable ./unstable-chroot

And you'll get a Debian unstable chroot just as debootstrap would create it. It also supports the --variant option with minbase and buildd values, which install the same package sets as debootstrap would.
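For example, a minbase variant chroot (a sketch following the same interface):

$ sudo mmdebstrap --variant=minbase unstable ./unstable-minbase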

A list of advantages in contrast to debootstrap:

  • more than one mirror possible (or really anything that is a legal apt sources.list entry; see the sketch after this list)
  • security and updates mirror included for Debian stable chroots (a wontfix for debootstrap)
  • 2-3 times faster (for debootstrap variants)
  • chroot with apt in 11 seconds (when installing only Essential: yes packages and apt)
  • the gzipped tarball with apt is only 27M
  • bit-by-bit reproducible output (if $SOURCE_DATE_EPOCH is set)
  • unprivileged operation using Linux user namespaces, fakechroot or proot (mode is chosen automatically)
  • can operate on filesystems mounted with nodev
  • foreign architecture chroots with qemu-user (without manually invoking --second-stage)
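Combining a few of the points above, a hypothetical multi-mirror, reproducible invocation might look like the following sketch; the mirror entries and epoch value are placeholders, not from the original post:

$ SOURCE_DATE_EPOCH=1537300000 mmdebstrap unstable ./unstable-chroot \
      "deb http://deb.debian.org/debian unstable main" \
      "deb http://deb.debian.org/debian experimental main"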

You can find the code here:

https://gitlab.mister-muffin.de/josch/mmdebstrap

Johannes Schauer https://blog.mister-muffin.de/feed/None Mister Muffin Blog

2018 Linux Audio Miniconference

Wed, 19/09/2018 - 8:40pm

As in previous years we’re trying to organize an audio miniconference so we can get together and talk through issues, especially design decisions, face to face. This year’s event will be held on Sunday October 21st in Edinburgh, the day before ELC Europe starts there. Cirrus Logic have generously offered to host this in their Edinburgh office:

7B Nightingale Way
Quartermile
Edinburgh
EH3 9EG

As with previous years let’s pull together an agenda through a mailing list discussion on alsa-devel – if you’ve got any topics you’d like to discuss please join the discussion there.

There’s no cost for the miniconference but if you’re planning to attend please sign up using the document here.

broonie https://blog.sirena.org.uk Technicalities

Freexian’s report about Debian Long Term Support, August 2018

Wed, 19/09/2018 - 10:41am

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, about 220 work hours were dispatched among 14 paid contributors. Their reports are available:

  • Abhijith PA did 5 hours (out of 10 hours allocated, thus keeping 5 extra hours for September).
  • Antoine Beaupré did 23.75 hours.
  • Ben Hutchings did 5 hours (out of 15 hours allocated + 8 extra hours, thus keeping 8 extra hours for September).
  • Brian May did 10 hours.
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort did not manage to work but returned all his hours to the pool (out of 23.75 hours allocated + 19.5 extra hours).
  • Holger Levsen did 10 hours (out of 8 hours allocated + 16 extra hours, thus keeping 14 extra hours for September).
  • Hugo Lefeuvre did nothing (out of 10 hours allocated, but he gave back those hours).
  • Markus Koschany did 23.75 hours.
  • Mike Gabriel did 6 hours (out of 8 hours allocated, thus keeping 2 extra hours for September).
  • Ola Lundqvist did 4.5 hours (out of 8 hours allocated + 8 remaining hours, thus keeping 11.5 extra hours for September).
  • Roberto C. Sanchez did 6 hours (out of 18h allocated, thus keeping 12 extra hours for September).
  • Santiago Ruano Rincón did 8 hours (out of 20 hours allocated, thus keeping 12 extra hours for September).
  • Thorsten Alteholz did 23.75 hours.
Evolution of the situation

The number of sponsored hours decreased to 206 hours per month: we lost two sponsors and gained only one.

The security tracker currently lists 38 packages with a known CVE and the dla-needed.txt file has 24 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Raphaël Hertzog https://raphaelhertzog.com apt-get install debian-wizard

What is the relationship between FSF and FSFE?

Wed, 19/09/2018 - 1:21am

Ever since I started blogging about my role in FSFE as Fellowship representative, I've been receiving communications and queries from various people, both in public and in private, about the relationship between FSF and FSFE. I've written this post to try and document my own experiences of the issue; maybe some people will find it helpful. These comments have also been shared on the LibrePlanet mailing list for discussion (subscribe here).

Being the elected Fellowship representative means I am both a member of FSFE e.V. and also possess a mandate to look out for the interests of the community of volunteers and donors (they are not members of FSFE e.V). In both capacities, I feel uncomfortable about the current situation due to the confusion it creates in the community and the risk that volunteers or donors may be confused.

The FSF has a well known name associated with a distinctive philosophy. Whether people agree with that philosophy or not, they usually know what FSF believes in. That is the power of a brand.

When people see the name FSFE, they often believe it is a subsidiary or group working within the FSF. The way that brands work, people associate the philosophy with the name, just as somebody buying a Ferrari in Berlin expects it to do the same things that a Ferrari does in Boston.

To give an example, when I refer to "our president" in any conversation, people not knowledgeable about the politics believe I am referring to RMS. More specifically, if I say to somebody "would you like me to see if our president can speak at your event?", some people think it is a reference to RMS. In fact, FSFE was set up as a completely independent organization with distinct membership and management and therefore a different president. When I try to explain this to people, they sometimes lose interest and the conversation can go cold very quickly.

FSFE leadership have sometimes diverged from FSF philosophy, for example, it is not hard to find some quotes about "open source" and one fellow recently expressed concern that some people behave like "FSF Light". But given that FSF's crown jewels are the philosophy, how can an "FSF Light" mean anything? What would "Ferrari Light" look like, a red lawnmower? Would it be a fair use of the name Ferrari?

Some concerned fellows have recently gone as far as accusing the FSFE staff of effectively domain squatting or trolling the FSF (I can't link to that because of FSFE's censorship regime). When questions appear about the relationship in public, there is sometimes a violent response with no firm details. (I can't link to that either because of FSFE's censorship regime)

The FSFE constitution calls on FSFE to "join forces" with the FSF and sometimes this appears to happen but I feel this could be taken further.

FSF people have also produced vast amounts of code (the GNU Project) and some donors appear to be contributing funds to FSFE in gratitude for that or in the belief they are supporting that. However, it is not clear to me that funds given to FSFE support that work. As Fellowship representative, a big part of my role is to think about the best interests of those donors and so the possibility that they are being confused concerns me.

Given the vast amounts of money and goodwill contributed by the community to FSFE e.V., including a recent bequest of EUR 150,000 and the direct questions about this issue I feel it is becoming more important for both organizations to clarify the issue.

FSFE has a transparency page on the web site and this would be a good place to publish all documents about their relationship with FSF. For example, FSFE could publish the documents explaining their authorization to use a name derived from FSF and the extent to which they are committed to adhere to FSF's core philosophy and remain true to that in the long term. FSF could also publish some guidelines about the characteristics of a sister organization, especially when that organization is authorized to share the FSF's name.

In the specific case of sister organizations who benefit from the tremendous privilege of using the FSF's name, could it also remove ambiguity if FSF mandated the titles used by officers of sister organizations? For example, the "FSFE President" would be referred to as "FSFE European President", or maybe the word president could be avoided in all sister organizations.

People also raise the question of whether FSFE can speak for all Europeans given that it only has a large presence in Germany and other organizations are bigger in other European countries. Would it be fair for some of those other groups to aspire to sister organization status and name-sharing rights too? Could dozens of smaller FSF sister organizations dilute the impact of one or two who go off-script?

Even if FSFE was to distance itself from FSF or even start using a new name and philosophy, as a member, representative and also volunteer I would feel uncomfortable with that as there is a legacy of donations and volunteering that have brought FSFE to the position the organization is in today.

That said, I would like to emphasize that I regard RMS and the FSF, as the original FSF, as having the final authority over the use of the name and I fully respect FSF's right to act unilaterally, negotiate with sister organizations or simply leave things as they are.

If you have questions or concerns about this topic, I would invite you to raise them on the LibrePlanet-discuss mailing list or feel free to email me directly.

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Using ARP via netlink to detect presence

Tue, 18/09/2018 - 9:18pm

If you remember my first post about home automation I mentioned a desire to use some sort of presence detection as part of deciding when to turn the heat on. Home Assistant has a wide selection of presence detection modules available, but the easy ones didn’t seem like the right solutions. I don’t want something that has to run on my phone to say where I am, but using the phone as the proxy for presence seemed reasonable. It connects to the wifi when at home, so watching for that involves no overhead on the phone and should be reliable (as long as I haven’t let my phone run down). I run OpenWRT on my main house router and there are a number of solutions which work by scraping the web interface. openwrt_hass_devicetracker is a bit better but it watches the hostapd logs and my wifi is actually handled by some UniFis.

So how to do it more efficiently? Learn how to watch for ARP requests via Netlink! That way I could have something sitting idle and only doing any work when it sees a new event, that could be small enough to run directly on the router. I could then tie it together with the Mosquitto client libraries and announce presence via MQTT, tying it into Home Assistant with the MQTT Device Tracker.

I’m going to go into a bit more detail about the Netlink side of things, because I found it hard to find simple documentation and ended up reading kernel source code to figure out what I wanted. If you’re not interested in that, you can find my mqtt-arp (I suck at naming simple things) tool locally or on GitHub. It ends up as an 8k binary for my MIPS based OpenWRT box and just needs to be fed a list of MAC addresses to watch for and details of the MQTT server. When it sees a device it cares about make an ARP request, it reports the presence for that device as “home” (configurable), rate limiting it to at most once every 2 minutes. Once it hasn’t seen anything from the device for 10 minutes it declares the location to be unknown. I have found Samsung phones are a little prone to disconnecting from the wifi when not in use, so you might need to lengthen the timeout if all you have are Samsung devices.

Home Assistant configuration is easy:

device_tracker:
  - platform: mqtt
    devices:
      noodles: 'location/by-mac/0C:11:22:33:44:55'
      helen: 'location/by-mac/4C:11:22:33:44:55'
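Independently of Home Assistant, the announcements can be watched with any MQTT client; a sketch using mosquitto_sub, where the broker name and MAC address are placeholders:

$ mosquitto_sub -h mqtt.example.org -v -t 'location/by-mac/#'
location/by-mac/0C:11:22:33:44:55 home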

On to the Netlink stuff…

Firstly, you can watch the netlink messages we’re interested in using iproute2; just run ip monitor. This works as an unprivileged user, which is nice. The messages arrive via an AF_NETLINK routing socket (rtnetlink(7)):

int sock;

sock = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

We then want to indicate we’re listening for neighbour events:

struct sockaddr_nl group_addr;

bzero(&group_addr, sizeof(group_addr));
group_addr.nl_family = AF_NETLINK;
group_addr.nl_pid = getpid();
group_addr.nl_groups = RTMGRP_NEIGH;

bind(sock, (struct sockaddr *) &group_addr, sizeof(group_addr));

At this point we’re good to go and can wait for an event message:

received = recv(sock, buf, sizeof(buf), 0);

This will be a struct nlmsghdr message and the nlmsg_type field will provide details of what type. In particular I look for RTM_NEWNEIGH, indicating a new neighbour has been seen. This is of type struct ndmsg and immediately follows the struct nlmsghdr in the received message. That has details of the address family type (IPv6 vs IPv4), the state and various flags (such as whether it’s NUD_REACHABLE indicating presence). The only slightly tricky bit comes in working out the MAC address, which is one of potentially several struct nlattr attributes which follow the struct ndmsg. In particular I’m interested in an nla_type of NDA_LLADDR, in which case the attribute data is the MAC address. The main_loop function in mqtt-arp.c shows this - it’s fairly simple stuff, and works nicely. It was just figuring out the relationship between it all and the exact messages I cared about that took me a little time to track down.
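Incidentally, before (or instead of) writing any C, the same events can be sanity-checked with the unprivileged ip monitor command mentioned earlier; hypothetical output (addresses invented) when a phone associates and answers ARP:

$ ip monitor neigh
192.168.1.23 dev br-lan lladdr 0c:11:22:33:44:55 REACHABLE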

Jonathan McDowell https://www.earth.li/~noodles/blog/ Noodles' Emptiness

censored Amazon review of Sandisk Ultra 32GB Micro SDHC Card

Tue, 18/09/2018 - 8:03pm

★ counterfeits in amazon pipeline

The 32 gb card I bought here at Amazon turned out to be fake. Within days I was getting read errors, even though the card was still mostly empty.

The logo is noticeably blurry compared with a 32 gb card purchased elsewhere. Also, the color of the grey half of the card is subtly wrong, and the lettering is subtly wrong.

Amazon apparently has counterfeit stock in their pipeline; google "amazon counterfeit" for more.

You will not find this review on Sandisk Ultra 32GB Micro SDHC UHS-I Card with Adapter - 98MB/s U1 A1 - SDSQUAR-032G-GN6MA because it was rejected. As far as I can tell my review violates none of Amazon's posted guidelines. But it's specific about how to tell this card is counterfeit, and it mentions a real and ongoing issue that Amazon clearly wants to cover up.

Joey Hess http://joeyh.name/blog/ see shy jo

Reproducible Builds: Weekly report #177

Tue, 18/09/2018 - 7:35pm

Here’s what happened in the Reproducible Builds effort between Sunday September 9 and Saturday September 15 2018:

Patches filed diffoscope development

Chris Lamb made a large number of changes to diffoscope, our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages:

These changes were then uploaded as diffoscope version 101.

Test framework development

There were a number of updates to our Jenkins-based testing framework that powers tests.reproducible-builds.org by Holger Levsen this month, including:

Misc.

This week’s edition was written by Arnout Engelen, Bernhard M. Wiedemann, Chris Lamb, heinrich5991, Holger Levsen and Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible-builds.org/blog/ reproducible-builds.org
