
Planet Debian


Enabling Wake-on-Lan with the N34 Mini PC

Tue, 30/10/2018 - 8:58pm

There is a room at the top of my house which was originally earmarked for storage (the loft is full of insulation rather than being a useful option). Then I remembered I still had my pico projector and it ended up as a cinema room as well. The pico projector needs really low light conditions with a long throw, so the fact the room only has a single small window is a plus.

I bought an “N34” mini PC to act as a media player - I already had a spare DVB-T2 stick to Freeview-enable things, and the Kodi box downstairs has all my DVDs stored on it for easy streaming. It’s a Celeron N3450 based box with 4GB RAM and a 32GB internal eMMC (though I’m currently running off an SD card because that’s what I initially used to set it up and I haven’t bothered to copy it onto the internal device yet). My device came from Amazon and is branded “Kodlix” (whose website no longer works) but it appears to be the same thing as the Beelink AP34.

Getting Linux onto it turned out to be a hassle. GRUB does not want to play nicely with the EFI BIOS; it can sometimes be started manually from the EFI Shell, but it does not work as the default EFI image to load. Various forum posts recommended the use of rEFInd, which mostly works fine.

Other than that Debian Stretch worked without problems. I had to pull in a backports kernel in order to make the DVB-T2 stick work properly, but the hardware on the N34 itself was all supported out of the box.

The other issue was trying to get Wake-on-Lan to work. The room isn’t used day to day so I want to be able to tie various pieces together with home automation such that I can have everything off by default and a scene configured to set things up ready for use. The BIOS has an entry for Wake-on-Lan, and ethtool reported "Supports Wake-on: g", which should mean MagicPacket wakeup was enabled, but no joy. Looking at /proc/acpi/wakeup gave:

/proc/acpi/wakeup contents:

Device  S-state   Status     Sysfs node
HDAS      S3    *disabled    pci:0000:00:0e.0
XHC       S3    *enabled     pci:0000:00:15.0
XDCI      S4    *disabled
BRCM      S0    *disabled
RP01      S4    *disabled
PXSX      S4    *disabled
RP02      S4    *disabled
PXSX      S4    *disabled
RP03      S4    *disabled    pci:0000:00:13.0
PXSX      S4    *disabled    pci:0000:01:00.0
RP04      S4    *disabled
PXSX      S4    *disabled
RP05      S4    *disabled
PXSX      S4    *disabled
RP06      S4    *disabled    pci:0000:00:13.3
PXSX      S4    *disabled    pci:0000:02:00.0
PWRK      S4    *enabled     platform:PNP0C0C:00

pci:0000:01:00.0 is the network card:

01:00.0 Ethernet controller [0200]: Realtek […] Ethernet Controller [10ec:8168] (rev 0c)

I needed this configured to allow wakeups, which apparently is done via sysfs these days:

echo enabled > /sys/bus/pci/devices/0000\:01\:00.0/power/wakeup

This has to be done on every boot, so I just tied it into /etc/network/interfaces.
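A minimal sketch of what that looks like as an interfaces(5) hook (the interface name enp1s0 is a placeholder rather than the actual name on my box; the PCI address is the one shown above):

auto enp1s0
iface enp1s0 inet dhcp
    # Re-enable PCI wakeup for the NIC after the interface comes up
    post-up sh -c 'echo enabled > /sys/bus/pci/devices/0000:01:00.0/power/wakeup'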

All of this then enables Home Assistant to control the Kodi box:

Home Assistant Kodi WoL configuration:

wake_on_lan:

media_player:
  - platform: kodi
    name: Kodi (Cinema)
    host: kodi-cinema.here
    port: 8000
    username: kodi
    password: !secret kodi_cinema_pass
    enable_websocket: false
    turn_on_action:
      service: wake_on_lan.send_magic_packet
      data:
        mac: 84:39:be:11:22:33
        broadcast_address: 192.168.0.2
    turn_off_action:
      service: media_player.kodi_call_method
      data:
        entity_id: media_player.kodi_cinema
        method: System.Shutdown

My Home Assistant container sits on a different subnet to the media box, and I found that the N34 wouldn’t respond to a Wake-on-Lan packet sent to the broadcast MAC address. So I’ve configured the broadcast_address for Home Assistant to be the actual IP of the media box, allowed UDP port 9 (discard) through on the firewall, and added a static ARP entry for the media box on the router, so it transmits the packet with the correct destination MAC:

ip neigh change 192.168.0.2 lladdr 84:39:be:11:22:33 nud permanent dev eth0
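To check the path end to end, a magic packet can also be fired manually from the Home Assistant side of the network; a rough example using the wakeonlan tool (assuming it is installed there, and using the addressing above):

wakeonlan -i 192.168.0.2 -p 9 84:39:be:11:22:33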

I’ve still got some other bits to glue together (like putting the pico projector on a SonOff), but this gets me started on that process.

(And yes, the room is a bit cosier these days than when that photograph was taken.)

Jonathan McDowell https://www.earth.li/~noodles/blog/ Noodles' Emptiness

I was a podcast guest on The REPL

Fri, 26/10/2018 - 10:00pm

Daniel Compton hosted me on his Clojure podcast, The REPL, where I talked about Debian, packaging Leiningen, and the Clojure ecosystem in Debian. It's got everything: spooky abandoned packages, anarchist collectives, software security policies, and Debian release cycles. Absolutely no shade was thrown at other distros.

Give it a listen:


Download: MP3

More Q&A

After the podcast was published, Ivan Sagalaev wrote me with a great question about how the different versions of Clojure in Ubuntu 18.04 work:

First of all, THANK YOU for making sudo apt install leiningen work! It's so much better and more consistent than sourcing bash scripts :-)

I have a quick question for you. After installing leiningen and clojure on Ubuntu 18.04 I see that lein repl starts with clojure 1.8.0, while the clojure package itself seems to be independent and is version 1.9.0. How is it possible? I frankly haven't even seen lein downloading its own clojure.jar...

I replied:

Leiningen is "ahead-of-time (AOT) compiled", which is a fancy way of saying that the Leiningen you download from Ubuntu is pre-built. This means it is already compiled to Java bytecode, which can be run directly by Java. I ship the binary Leiningen package as an "uberjar", which means all its dependencies are also included inside the Leiningen jar.

Leiningen depends on and is built with Clojure 1.8, so the Leiningen uberjar in Debian also depends on Clojure 1.8. The "clojure" package in 18.04 defaults to installing Clojure 1.9, but that can be installed simultaneously with the "clojure1.8" package that Leiningen depends on in order to build. You can change your default Clojure to 1.8 using alternatives.

When you launch lein repl, by default the Clojure 1.8 runtime that's compiled in is used. If you run lein repl in the root of a Clojure 1.9 project, Leiningen will download Clojure 1.9 from Clojars and launch a 1.9 repl. If you want to use the Clojure 1.9 shipped with Debian, you can change :local-repo to point at /usr/share/maven-repo, but be careful to also set :offline? to true so you don't try to install things into the system maven repo by accident.
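As a rough illustration of those last suggestions (a sketch, not an exact recipe — the alternative name and profile layout are assumptions): the default Clojure can be switched with

sudo update-alternatives --config clojure

and the Leiningen settings could live in ~/.lein/profiles.clj:

;; Sketch: point Leiningen at the Debian-shipped maven repo and stay offline,
;; so nothing gets installed into the system repo by accident.
{:user {:local-repo "/usr/share/maven-repo"
        :offline? true}}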

Elana Hashman https://hashman.ca/ hashman.ca

smartmontools

Fri, 26/10/2018 - 11:46am

I don't do much Debian stuff these days (too busy) but I have adopted some packages over the last year. This has happened when a package that I rely on was lacking person-power and at risk of being removed from Debian. I thought I should write about some of them. First up, smartmontools.

smartmontools let you query the "Self-Monitoring, Analysis and Reporting Technology" (S.M.A.R.T.) information in your computer's storage devices (hard discs and solid-state equivalents), as well as issue S.M.A.R.T. commands to them, such as instructing them to execute self-tests.
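For example, using the smartctl tool from the package (device names will vary):

# Print SMART health, attributes and the self-test log for a drive
smartctl -a /dev/sda

# Kick off a short self-test in the background
smartctl -t short /dev/sda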

I rescued smartmontools for the Debian release in 2015, but I thought that was a one-off. Since I've just done it again, I'm now considering it something I (co-)maintain[1].

S.M.A.R.T. can, in theory, give you advance warning about a disc that is "not well" and could stop working. In practice, it isn't very good at predicting disc failures[2] — which might explain why the package hasn't received more attention — but it can still be useful: last year it helped me to detect an issue with excessive drive-head parking I was experiencing on one of my drives.

  1. Personally I think the notion of single-maintainers for packages is old and destructive, and I think it should be the exception rather than the norm. Unfortunately it's still baked into a lot of our processes, policies and tools. ↩

jmtd https://jmtd.net/log/ Jonathan Dowland's Weblog

Migrated website from ikiwiki to Hugo

Thu, 25/10/2018 - 8:42pm

So, I’ve been using ikiwiki for my website since 2011. At the time, I was hosting the website on a tiny hosting package included in a DSL contract - nothing dynamic possible, so a static site generator seemed like a good idea. ikiwiki was a good social fit at the time, as it was packaged in Debian and developed by a Debian Developer.

Today, I finished converting it to Hugo.

Why?

I did not really have a huge problem with ikiwiki, but I recently converted my blog from WordPress to Hugo and it seemed to make sense to use one technology for both, especially since I don’t update the website very often and tend to forget ikiwiki’s quirks.

One thing that was somewhat annoying is that I had built a custom ikiwiki plugin for the menu in my template, so I had to clone its repository into ~/.ikiwiki every time rather than having a self-contained website (well, it was a submodule of my dotfiles repo).

Another thing was that ikiwiki had a lot of git integration, and when you build your site it tries to push things to git repositories and all sorts of weird stuff – Hugo just does one thing: It builds your page.

One thing that Hugo does a lot better than ikiwiki is the built-in server which allows you to run `hugo server` and get a local http URL you can open in the browser with live-reload as you save files. Super convenient to check changes (and of course, for writing this blog post)!

Also, in general, Hugo feels a lot more modern. ikiwiki is from 2006, Hugo is from 2013. Especially recent Hugo versions added quite a few features for asset management.

  • Fingerprinting of assets like css (inserting hash into filename) - ikiwiki just contains its style in style.css (and your templates in other statically named files), so if you switch theming details, you could break things because the CSS the browser has cached does not match the CSS the page expects.
  • Asset minification - Hugo can minify CSS and JavaScript for you. This means browsers have to fetch less data.
  • Asset concatenation - Hugo can concatenate CSS and JavaScript. This allows you to serve only one file per type, reducing the number of round trips a client has to make.

There’s also proper theming support, so you can easily clone a theme into the themes/ directory, or add it as a submodule like I do for my blog. But I don’t use it for the website yet.

Oh, and Hugo automatically generates sitemap.xml files for your website, teaching search engines which pages exist and when they have been modified.

I also like that it’s written in Go vs in Perl, but I think that’s just another more modern type of thing. Gotta keep up with the world!

Basic conversion

The first part of the conversion was to split the repository of the website: ikiwiki puts templates into a templates/ subdirectory of the repository and mixes all other content. Hugo on the other hand splits things into content/ (where pages go), layouts/ (page templates), and static/ (other files).

The second part was to inject the frontmatter into the markdown files. See, ikiwiki uses shortcuts like this to set up the title, and gets its dates from git:

[[!meta title="My page title"]]

Hugo, on the other hand, uses frontmatter - some YAML at the beginning of the markdown file - and specifies the creation date in there:

---
title: "My page title"
date: Thu, 18 Oct 2018 21:36:18 +0200
---

You can also have lastmod in there when modifying it, but I set enableGitInfo = true in config.toml so Hugo picks up the mtime from the git repo.

I wrote a small script to automate those steps, but it was obviously not perfect (also, it inserted lastmod, which it should not have).
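For illustration, a rough sketch of what such a conversion script can look like (this is not the actual script used; the file layout, the !meta pattern and the date handling are assumptions):

#!/bin/sh
# Sketch: prepend Hugo frontmatter to ikiwiki markdown files, taking the
# title from the [[!meta title=...]] directive and the creation date from
# the first git commit touching the file. Assumes no spaces in filenames.
for f in $(find content -name '*.mdown'); do
    title=$(sed -n 's/.*\[\[!meta title="\(.*\)"\]\].*/\1/p' "$f" | head -n1)
    date=$(git log --follow --format=%aD -- "$f" | tail -n1)
    {
        printf '%s\ntitle: "%s"\ndate: %s\n%s\n' '---' "$title" "$date" '---'
        grep -v '\[\[!meta title=' "$f"
    } > "$f.new" && mv "$f.new" "$f"
done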

One thing it took me some time to figure out was that index.mdown needs to become _index.md in the content/ directory of Hugo, otherwise no pages below it are rendered - not entirely obvious.

The theme

Converting the template was surprisingly easy: it was just a matter of replacing <TMPL_VAR BASEURL> and friends with {{ .Site.BaseURL }} and friends - the names are basically the same, just sometimes there’s .Site at the front of it.

Then I had to take care of the menu generation loop. I had my bootmenu plugin for ikiwiki which allowed me to generate menus from the configuration file. The template for it looked like this:

<TMPL_LOOP BOOTMENU>
  <TMPL_IF FIRSTNAV>
    <li <TMPL_IF ACTIVE>class="active"</TMPL_IF>><a href="<TMPL_VAR URL>"><TMPL_VAR PAGE></a></li>
  </TMPL_IF>
</TMPL_LOOP>

I converted this to:

{{ $currentPage := . }}
{{ range .Site.Menus.main }}
  <li class="{{ if $currentPage.IsMenuCurrent "main" . }}active{{ end }}">
    <a href="{{ .URL }}">
      {{ .Pre | safeHTML }}
      <span>{{ .Name }}</span>
    </a>
    {{ .Post }}
  </li>
{{ end }}

this allowed me to configure my menu in config.toml like this:

[menu]

[[menu.main]]
name = "dh-autoreconf"
url = "/projects/dh-autoreconf"
weight = -110

I can also specify pre and post parts and a right menu, and I use pre and post in the right menu to render a few icons before and after items, for example:

[[menu.right]]
pre = "<i class='fab fa-mastodon'></i>"
post = "<i class='fas fa-external-link-alt'></i>"
url = "https://mastodon.social/@juliank"
name = "Mastodon"
weight = -70

Setting class="active" on the menu item does not seem to work yet, though; I think I need to find out the right code for that…

Fixing up the details

Once I was done with those steps, the next stage was to convert ikiwiki shortcodes to something Hugo understands. This took four parts:

The first part was converting tables. In ikiwiki, tables look like this:

[[!table format=dsv data="""
Status|License|Language|Reference
Active|GPL-3+|Java|[github](https://github.com/julian-klode/dns66)
"""]]

The generated HTML table had the class="table" set, which the bootstrap framework needs to render a nice table. Converting that to a straightforward markdown hugo table did not work: Hugo did not add the class, so I had to convert pages with tables in them to the mmark variant of markdown, which allows classes to be set like this {.table}, so the end result then looked like this:

{.table}
Status|License|Language|Reference
------|-------|--------|---------
Active|GPL-3+|Java|[github](https://github.com/julian-klode/dns66)

I’ll be able to get rid of this in the future by using the bootstrap sources and then having table inherit the .table properties, but this requires Sass or Less, and I only have the CSS at the moment, so using mmark was slightly easier.

The second part was converting ikiwiki links like [[MyPage]] and [[my title|MyPage]] to Markdown links. This was quite easy: the first one became [MyPage](MyPage) and the second one [my title](MyPage).

The third part was converting custom shortcuts: I had [[!lp <number>]] to generate a link LP: #<number> to the corresponding launchpad bug, and [[!Closes <number>]] to generate Closes: #<number> links to the Debian bug tracker. I converted those to normal markdown links, but I could have converted them to Hugo shortcodes. But meh.

The fourth part was about converting some directory indexes I had. For example, [[!map pages="projects/dir2ogg/0.12/* and ! projects/dir2ogg/0.12/*/*"]] generated a list of all files in projects/dir2ogg/0.12. There was a very useful shortcode for that posted on the Hugo documentation, I used a variant of it and then converted pages like this to {{< directoryindex path="/static/projects/dir2ogg/0.12" pathURL="/projects/dir2ogg/0.12" >}}. As a bonus, the new directory index also generates SHA256 hashes for all files!

Further work

The website is using an old version of bootstrap, and the theme is not split out yet. I’m not sure if I want to keep a bootstrap theme for the website, seeing as the blog theme is Bulma-based - it would be easier to have both use Bulma.

I also might want to update both the website and the blog by pushing to GitHub and then using CI to build and push them. That would allow me to write blog posts when I don’t have my laptop with me. But I’m not sure; I might lose control if there’s a breach at Travis.

Julian Andres Klode https://blog.jak-linux.org/post/ Posts on Blog of Julian Andres Klode

MQTT enabling my doorbell

Thu, 25/10/2018 - 8:05pm

One of the things about my home automation journey is that I don’t always start out with a firm justification for tying something into my setup. There’s not really any additional gain at present from my living room lights being remotely controllable. When it came to tying the doorbell into my setup I had a clear purpose in mind: I often can’t hear it from my study.

The existing device was a Byron BY101. This consists of a 433MHz bell-push and a corresponding receiver that plugs into a normal mains socket for power. I tried moving the receiver to a more central location, but then had issues with it not reliably activating when the button was pushed. I could have attempted the inverse of Colin’s approach and tried to tie in a wired setup to the wireless receiver, but that would have been too simple.

I first attempted to watch for the doorbell via a basic 433MHz receiver. It seems to use a simple 16 bit identifier followed by 3 bits indicating which tone to use (only 4 are supported by mine; I don’t know if other models support more). The on/off timings are roughly 1040ms/540ms vs 450ms/950ms. I found I could reliably trigger the doorbell using these details, but I’ve not had a lot of luck with reliable 433MHz reception on microcontrollers; generally I use PulseView in conjunction with a basic Cypress FX2 logic analyser to capture from a 433MHz receiver and work out timings. Plus I needed a receiver that could be placed close enough to the bell-push to reliably pick it up.

Of course I already had a receiver that could decode the appropriate codes - the doorbell! Taking it apart revealed a PSU board and separate receiver/bell board. The receiver uses a PT4318-S with a potted chip I assume is the microcontroller. There was an HT24LC02 I2C EEPROM on the bottom of the receiver board; monitoring it with my BusPirate indicated that the 16 bit ID code was stored in address 0x20. Sadly it looked like the EEPROM was only used for data storage; only a handful of values were read on power on.

Additionally there were various test points on the board; probing while pressing the bell-push led to the discovery of a test pad that went to 1.8v when a signal was detected. Perfect. I employed an ESP8266[1] in the form of an ESP-07, sending out an MQTT message containing “ON” or “OFF” as appropriate when the state changed. I had a DS18B20 lying around so I added that for some temperature monitoring too; it reads a little higher due to being inside the case, but not significantly so.
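To sanity-check the messages it is enough to watch the topic from another machine; a rough example (the broker hostname here is a placeholder):

mosquitto_sub -h mqtt.here -t 'doorbell/master-bedroom/button' -v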

All of this ended up placed in the bedroom, which conveniently had a socket almost directly above the bell-push. Tying it into Home Assistant was easy:

binary_sensor:
  - platform: mqtt
    name: Doorbell
    state_topic: "doorbell/master-bedroom/button"

I then needed something to alert me when the doorbell was pushed. Long term perhaps I’ll add some sounders around the house hooked in via MQTT, and there’s a Kodi notifier available, but that’s only helpful when the TV is on. I ended up employing my Alexa via Notify Me:

notify:
  - name: alexa
    platform: rest
    message_param_name: notification
    resource: https://api.notifymyecho.com/v1/NotifyMe
    data:
      accessCode: !secret notifyme_key

and then an automation in automations.yaml:

- id: alexa_doorbell
  alias: Notify Alexa when the doorbell is pushed
  trigger:
    - platform: state
      entity_id: binary_sensor.doorbell
      to: 'on'
  action:
    - service: notify.alexa
      data_template:
        message: "Doorbell rang at {{ states('sensor.time') }}"

How well does this work? Better than expected! A couple of days after installing everything we were having lunch when Alexa chimed; the door had been closed and music playing, so we hadn’t heard the doorbell. Turned out to be an unexpected delivery which we’d otherwise have missed. It also allows us to see when someone has rung the doorbell while we were in - useful for seeing missed deliveries etc.

(Full disclosure: When initially probing out the mains doorbell for active signals I did so while it was plugged into the mains. My ‘scope is not fully isolated it seems and at one point I managed to trip the breaker on the mains circuit and blow the ringer part of the doorbell. Ooops. I ended up ordering an identical replacement (avoiding the need to replace the bell-push) and subsequently was able to re-use the ‘broken’ device as the ESP8266 receiver - the receiving part was still working, just not making a noise. The new receiver ended up in the living room, so the doorbell still sounds normally.)

  1. I have a basic ESP8266 MQTT framework I’ve been using for a bunch of devices based off Tuan PM’s work. I’ll put it up at some point. 

Jonathan McDowell https://www.earth.li/~noodles/blog/ Noodles' Emptiness

Review: Move Fast and Break Things

Thu, 25/10/2018 - 6:52am

Review: Move Fast and Break Things, by Jonathan Taplin

Publisher: Little, Brown and Company
Copyright: April 2017
Printing: 2018
ISBN: 0-316-27574-3
Format: Kindle
Pages: 288

Disclaimer: I currently work for Dropbox, a Silicon Valley tech company. While it's not one of the companies that Taplin singles out in this book, I'm sure he'd consider it part of the problem. I think my reactions to this book are driven more by a long association with the free software movement and its take on copyright issues, and from reading a lot of persuasive work both good and bad, but I'm not a disinterested party.

Taplin is very angry about a lot of things that I'm also very angry about: the redefinition of monopoly to conveniently exclude the largest and most powerful modern companies, the ability of those companies to run roughshod over competitors in ways that simultaneously bring innovation and abusive market power, a toxic mix of libertarian and authoritarian politics deeply ingrained in the foundations of Silicon Valley companies, and a blithe disregard for the social effects of technology and for how to police the new communities that social media has created. This is a book-length rant about the dangers of monopoly domination of industries, politics, on-line communities, and the arts. And the central example of those dangers is the horrific and destructive power of pirating music on the Internet.

If you just felt a mental record-scratch and went "wait, what?", you're probably from a community closer to mine than Taplin's.

I'm going to be clear up-front: this is a bad book. I'm not going to recommend that you read it; quite the contrary, I recommend actively avoiding it. It's poorly written, poorly argued, facile, and unfair, and I say that with a great deal of frustration because I agree with about 80% of its core message. This is the sort of book from an erstwhile ally that makes me cringe: it's a significant supply of straw men, weak arguments, bad-faith arguments, and motivated reasoning that make the case for economic reform so much harder. There are good arguments against capitalism in the form in which we're practicing it. Taplin makes only some of them, and makes them badly.

Despite that, I read the entire book, and I'm still somewhat glad that I did, because it provides a fascinating look at the way unexamined premises lead people to far different conclusions. It also provides a more visceral feel for how people, like Taplin, who are deeply and personally invested in older ways of doing business, reach for a sort of reflexive conservatism when pushing back against the obvious abuses of new forms of inequality and market abuse. I found a reminder here to take a look at my own knee-jerk reactions and think about places where I may be reaching for backward-looking rather than forward-looking solutions.

This is a review, though, so before I get lost in introspection, I should explain why I think so poorly of this book as an argument.

I suspect most people who read enough partisan opinion essays on-line will notice the primary flaw in Move Fast and Break Things as early as I did: this is the kind of book that's full of carefully-chosen quotes designed to make the person being quoted look bad. You'll get a tour of the most famous ill-chosen phrases, expressions of greed, and cherry-picked bits of naked capitalism from the typical suspects: Google, Facebook, and Amazon founders, other Silicon Valley venture capitalists and CEOs, and of course Peter Thiel. Now, Thiel is an odious reactionary and aspiring fascist who yearns for the days when he could live as an unchallenged medieval lord. There's almost no quote you could cherry-pick from him that would make him look worse than he actually is, so I'll give Taplin a free pass on that one. But for the rest, Taplin is not even attempting to understand or engage with the arguments that his opponents are making. He's just finding the most damning statements, the ones that look the ugliest out of context, and parading them before the reader in an attempt to provoke an emotional reaction.

There is a long-standing principle of argument that you should engage with your opponents' position in its strongest form. If you cannot understand the merits and strengths of the opposing position and restate them well enough that an advocate of the opposing view would accept your summary as fair, you aren't prepared to argue the point. Taplin does not even come close to doing that. In the debate over the new Internet monopolies and monopsonies, one central conflict is between the distorting and dangerous concentration of power and the vast and very real improvements they've brought for consumers. I don't like Amazon as a company, and yet I read this book on a Kindle because their products are excellent and the consumer experience of their store is first-rate. I don't like Google as a company, but their search engine is by far the best available. One can quite legitimately take a wide range of political, economic, and ethical positions on that conflict, but one has to acknowledge there is a real conflict. Taplin is not particularly interested in doing that.

Similarly, and returning to the double-take moment with which I began this review, Taplin is startlingly unwilling to examine the flaws of the previous economic systems that he's defending. He writes a paean to the wonderful world of mutual benefit, artistic support, and economic fairness of record labels! Admittedly, I was not deeply enmeshed in that industry the way that he was, and he restrains his praise primarily to the 1960s and 1970s, so it's possible this isn't as mind-boggling as it sounds on first presentation. But, even apart from the numerous stories of artists cheated out of the profits of their work by the music industry long before Silicon Valley entered the picture, Taplin only grudgingly recognizes that the merits he sees in that industry were born of a specific moment in time, a specific pattern of demand, supply, sales method, and cultural moment, and that this world would not have lasted regardless of Napster or YouTube.

In other words, Taplin does the equivalent of arguing against Uber by claiming the taxi industry was a model of efficiency, economic fairness, and free competition. There are many persuasive arguments against new exploitative business practices. This is not one of them.

More tellingly to me, there is zero acknowledgment in this book that I can recall of one of the defining experiences of my generation and younger: the decision by the music and motion picture industries to fight on-line copying of their product by launching a vicious campaign of legal terrorism against teenagers and college students. Taplin's emotional appeals and quote cherry-picking falls on rather deaf ears when I vividly remember the RIAA and MPAA setting out to deliberately destroy people's lives in order to make an example of them, a level of social coercion that Google and Facebook have not yet stooped to, at least at that scale. Taplin is quite correct that his ideological opponents are scarily oblivious to some of the destruction they're wreaking on social and artistic communities, but he needs to come to terms with the fact that some of his allies are thugs.

This is where my community departs from Taplin's. I've been part of the free software community for decades, which includes a view of copyright that is neither the constrained economic model that Taplin advocates as a way to hopefully support artists, nor the corporate libertarian free-for-all from which Google draws its YouTube advertising profits. The free software community stands mostly opposed to both of those economic models, while pursuing the software equivalent of artist collectives. We have our own issues with creeping corporate control of our communities, and with the balance to strike between expanding the commons and empowering amoral companies like Google, Facebook, and Amazon to profit off of our work. Those fights play out in software licensing discussions routinely. But returning to a 1950s model of commercial music (which looks a lot like the 1980s model of commercial software) is clearly not possible, or even desirable if it were.

And that, apart from the poor argumentative technique and the tendency to engage with the weakest of his opponents' arguments, is the largest flaw I see in Taplin's book: he's invested in a binary fight between the economic world of his youth, which worked in ways that he considers fair, and a new economic world that is breaking the guarantees that he considers ethically important. He's not wrong about the problem, and I completely agree with him on the social benefit of putting artists in a more central position of influence in society. But he's not looking deeply at examples of artistic communities that have navigated this better than his own beloved music industry (book publishing, for example, which certainly has its problems with Amazon's monopsony power but is also in some ways stronger than it has ever been). And he's not looking at communities that are approaching the same problem from a different angle, such as free software. He's so caught up on what he sees as the fundamental unfairness of artists not being paid directly by each person consuming their work that he isn't stepping back to look at larger social goals and alternative ways they could be met.

I'm sure I'm making some of these same mistakes, in other places and in other ways. These problems are hard and some of the players truly are malevolent, so you cannot assume good will and good faith on all fronts. But there are good opposing arguments and simple binary analysis will fail.

Taplin, to give him credit, does try to provide some concrete solutions in the last chapter. He realizes that you cannot put the genie of easy digital copies back in the bottle, and tries to talk about alternate approaches that aren't awful (although they're things like micropayments and subscription services that are familiar ground for anyone familiar with this problem). I agree wholeheartedly with his arguments for returning to a pre-Reagan definition of monopoly power and stricter regulation of Internet advertising business. He might even be able to convince me that take-down-and-stay-down (the doctrine that material removed due to copyright complaints has to be kept off the same platform in the future) is a workable compromise... if he would also agree to fines, paid to the victim, of at least $50,000 per instance for every false complaint from a media company claiming copyright on material to which they have no rights. (Taplin seems entirely unaware of the malevolent abuses of copyright complaint systems by his beloved media industry.) As I said, I agree with about 80% of his positions.

But, sadly, this is not the book to use to convince anyone of those positions, or even the book to read for material in one's own debates. It would need more thoughtful engagement of the strongest of the arguments from new media and technology companies, a broader eye to allied fights, a deep look at the flaws in the capitalist system that made these monopoly abuses possible, and a willingness to look at the related abuses of Taplin's closest friends. Without those elements, I'm afraid this book isn't worth your time.

Rating: 3 out of 10

Russ Allbery https://www.eyrie.org/~eagle/ Eagle's Path

learn.to/quote

Thu, 25/10/2018 - 2:00am

The documentation on how to properly quote eMail messages and Usenet postings is hosted on a server that appears not to be getting much care at the moment. I’ve dug out workable versions:

The original link, with its http://learn.to/quote/ redirection, which contained the links to the translations into Dutch and English, unfortunately no longer works.

I’m asking everyone to please honour these guidelines when posting in Usenet and responding to eMail messages, as not doing so is an insult to all the (multiple, in the case of Usenet and mailing lists) readers / recipients of your messages. Even if you have to spend a little time trimming the quote, it’s much less than the time spent by all readers trying to figure out a TOFU (reply over fullquote) message.

I ask everyone to please stick to these guidelines when posting on Usenet and writing eMails; not doing so is an affront to all the (in the case of Usenet and mailing lists, many) readers and recipients of your messages. Even if you have to spend a little time trimming the quote, it is still far less than the effort every single reader has to spend working out what an eMail written as TOFU (text on top, full quote below) is actually saying.

May I ask everyone to write Usenet postings and eMails according to these rules? Not doing so is rude towards all recipients and makes messages hard to read. Even if you need a little time to trim the quoted part, it is still less than the effort everyone else has to spend trying to understand a TOFU (reply on top, full quote below) message.

Thorsten Glaser http://www.mirbsd.org/ debian tag cloud

Salsa ribbons

Wed, 24/10/2018 - 4:55pm

Salsa is the name of the collaborative development server for Debian and is the replacement for the now-deprecated Alioth service.

To make it easier to show the world that you use Salsa, I've created a number of GitHub-esque ribbons that you can overlay on your projects' sites by copying & pasting the appropriate snippet into your HTML.

For example:

You can find them, with instructions, here:

lamby.pages.debian.net/salsa-ribbons

If you're not satisfied with one of the colours, the original source is available.

Chris Lamb https://chris-lamb.co.uk/blog/category/planet-debian lamby: Items or syndication on Planet Debian.

Freexian’s report about Debian Long Term Support, September 2018

Wed, 24/10/2018 - 12:13pm

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September, about 227 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 11 hours (out of 10 hours allocated + 5 extra hours, thus keeping 4 extra hours for October).
  • Antoine Beaupré did 24 hours.
  • Ben Hutchings did 29 hours (out of 15 hours allocated + 18 extra hours, thus keeping 4 extra hours for October).
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort did not publish his report yet (he had 29.25 hours allocated).
  • Holger Levsen did 2.5 hours (out of 8 hours allocated + 14 extra hours, thus keeping 19.5 extra hours for October).
  • Hugo Lefeuvre did 10 hours.
  • Markus Koschany did 29.25 hours.
  • Mike Gabriel did 10 hours (out of 8 hours allocated + 2 extra hours).
  • Ola Lundqvist did 7 hours (out of 8 hours allocated + 11.5 remaining hours, but gave back 4.5 hours, thus keeping 8 extra hours for October).
  • Roberto C. Sanchez did 15 hours (out of 18 hours allocated + 12 extra hours, and gave back the 15 remaining hours).
  • Santiago Ruano Rincón did 4 hours (out of 20 hours allocated + 12 extra hours, thus keeping 28 extra hours for October).
  • Thorsten Alteholz did 29.25 hours.

Evolution of the situation

The number of sponsored hours decreased to 205 hours per month, as we lost another small sponsor. Hopefully this trend will not continue. Time to subscribe your company if it’s not yet done!

The security tracker currently lists 30 packages with a known CVE and the dla-needed.txt file has 24 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Raphaël Hertzog https://raphaelhertzog.com apt-get install debian-wizard

Idea for a Debian QA service: monitoring install size with dependencies

Wed, 24/10/2018 - 9:55am

This is an idea. I don't have the time to work on it myself, but I thought I'd throw it out in case someone else finds it interesting.

When you install a Debian package, it pulls in its dependencies and recommended packages, and those pull in theirs. For simple cases, this is all fine, but sometimes there are surprises. Installing mutt on a base system pulls in libgpgme, which pulls in gnupg, which pulls in a pinentry package, which can pull in all of GNOME. Or at least people claim that.

It strikes me that it'd be cool for someone to implement a QA service for Debian that measures, for each package, how much installing it adds to the system. It should probably do this in various scenarios:

  • A base system, i.e., the output of debootstrap.
  • A build system, with build-essential installed.
  • A base GNOME system, with gnome-core installed.
  • A full GNOME system, with gnome installed.
  • Similarly for KDE and each other desktop environment in Debian.

The service would do the installs regularly (daily?), and produce reports. It would also do alerts, such as notifying the maintainers when the installed size grows too much compared to installing it in stable, or compared to a previous run in unstable. For example, if installing mutt suddenly installs 100 gigabytes more than yesterday, it's probably a good idea to alert interested parties.

Implementing this should be fairly easy, since the actual test is just running debootstrap, and possibly apt-get install. Some experimentation with configuration, caching, and eatmydata may be useful to gain speed. Possibly actual package installation can be skipped, and the whole thing could be implemented just by analysing package metadata.
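A minimal sketch of the base-system measurement, assuming a root shell (the mirror, the minbase variant, and the use of du are illustrative choices, not a worked-out design):

#!/bin/sh
# Sketch: measure how much "apt-get install <package>" adds to a base system.
set -e
pkg="$1"
chroot_dir=$(mktemp -d)
debootstrap --variant=minbase unstable "$chroot_dir" http://deb.debian.org/debian
before=$(du -sk "$chroot_dir" | cut -f1)
chroot "$chroot_dir" apt-get update
DEBIAN_FRONTEND=noninteractive chroot "$chroot_dir" apt-get install -y "$pkg"
after=$(du -sk "$chroot_dir" | cut -f1)
echo "$pkg adds $((after - before)) KiB on top of a base system"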

Maybe it even exists, and I just don't know about it. That'd be cool, too.

Lars Wirzenius' blog http://blog.liw.fi/englishfeed/ englishfeed

Reproducible Builds: Weekly report #182

Tue, 23/10/2018 - 3:15pm

Here’s what happened in the Reproducible Builds effort between Sunday October 14 and Saturday October 20 2018:

Another reminder that the Reproducible Builds summit will be taking place from the 11th to the 13th of December 2018 in Paris at Mozilla’s offices. If you are interested in attending, please send an email to holger@layer-acht.org. More details can be found on the corresponding event page of our website.

Packages reviewed and fixed, and bugs filed

Test framework development

There were a large number of updates by Holger Levsen this month to our Jenkins-based testing framework that powers tests.reproducible-builds.org, including:

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible-builds.org/blog/ reproducible-builds.org

A visual basic server

Tue, 23/10/2018 - 11:01am

So my previous post described a BASIC interpreter I'd written.

Before the previous release I decided to ensure that it was easy to embed, and that it was possible to extend the BASIC environment such that it could call functions implemented in golang.

One of the first things that came to mind was to allow a BASIC script to plot pixels in a PNG. So I made that possible by adding "PLOT x,y" and "SAVE" primitives.
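A toy example of the kind of program this enables (a sketch; the exact dialect the interpreter accepts may differ slightly):

10 REM Draw a diagonal line and write it out as a PNG
20 FOR I = 0 TO 99
30 PLOT I, I
40 NEXT I
50 SAVE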

Taking that step further I then wrote an HTTP server which would allow you to enter a BASIC program and view the image it created. It's a little cute at least.

Install it from source, or fetch a binary if you prefer, via:

$ go get -u github.com/skx/gobasic/goserver

Then launch it and point your browser at http://localhost:8080, and you'll be presented with something like this:

Fun times.

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

Review: The Stone Sky

Tue, 23/10/2018 - 6:53am

Review: The Stone Sky, by N.K. Jemisin

Series: The Broken Earth #3
Publisher: Orbit
Copyright: August 2017
ISBN: 0-316-22925-3
Format: Kindle
Pages: 464

So, this is it: the epic conclusion of the series that began with The Fifth Season. And it is a true conclusion. Jemisin's world is too large and her characters too deep (and too real) to wrap up into a simple package, but there's a finality to this conclusion that makes me think it unlikely Jemisin will write a direct sequel any time soon. (And oh my do you not want to start with this book. This series must be read in order.)

I'm writing this several months after finishing the novel in part because I still find it challenging to put my feelings about this book into words. There are parts of this story I found frustrating and others I found unsatisfying, but each time I dig into those disagreements, I find new layers of story and meaning and I can't see how the book could have gone any other way. The Stone Sky is in many ways profoundly uncomfortable and unsettling, but that's also what makes it so good. Jemisin is tackling problems, emotions, and consequences that are unsettling, that should be unsettling. Triumphant conclusions would be a lie. This story hurt all the way through; it's fitting that the ending did as well. But it's also strangely hopeful, in a way that doesn't take away the pain.

World-building first. This is, thankfully, not the sort of series that leaves one with a host of unanswered questions or a maddeningly opaque background. Jemisin puts all of her cards on the table. We find out exactly how Essun's world was created, what the obelisks are, who the stone eaters are, who the Guardians are, and something even of the origin of orogeny. This is daring after so much intense build-up, and Jemisin deserves considerable credit for an explanation that (at least for me) held together and made sense of much of what had happened without undermining it.

I do have some lingering reservations about the inhuman villain of this series, which I still think is too magically malevolent (and ethically simplistic) for the interwoven complexity of the rest of the world-building. They're just reservations, not full objections, but buried in the structure of the world is an environmental position that's a touch too comfortable, familiar, and absolute, particularly by the standards of the rest of the series.

For the human villains, though, I have neither objections nor reservations. They are all too believable and straightforward, both in the backstory of the deep past and in its reverberations and implications up to Essun's time. There is a moment when the book's narrator is filling in details in the far past, an off-hand comment about how life was sacred to their civilization. And, for me, a moment of sucked-in breath and realization that of course it was. Of course they said life was sacred. It explained so very much, about so very many things: a momentary flash of white-hot rage, piercing the narrative like a needle, knitting it together.

Against that backdrop, the story shifts in this final volume from its primary focus on Essun to a balanced split between Essun and her daughter, continuing a transition that began in The Obelisk Gate. Essun by now is a familiar figure to the reader: exhausted, angry, bitter, suspicious, and nearly numb, but driving herself forward with unrelenting force. Her character development in The Stone Sky comes less from inside herself and more from unexpected connections and empathy she taught herself not to look for. Her part of this story is the more traditional one, the epic fantasy band of crusaders out to save the world, or Essun's daughter, or both.

Essun's daughter's story is... not that, and is where I found both the frustrations and the joy of this conclusion. She doesn't have Essun's hard experience, her perspective on the world, or Essun's battered, broken, reforged, and hardened sense of duty. But she has in many ways a clearer view, for all its limitations. She realizes some things faster than Essun does, and the solutions she reaches for are a critique of the epic fantasy solutions that's all the more vicious for its gentle emotional tone.

This book offers something very rare in fiction: a knife-edge conclusion resting on a binary choice, where as a reader I was, and still am, deeply conflicted about which choice would have been better. Even though by normal epic fantasy standards the correct choice is obvious.

The Stone Sky is, like a lot of epic fantasy, a story about understanding and then saving the world, but that story is told in counterpoint with a biting examination of the nature of the world that's being saved. It's also a story about a mother and a daughter, about raising a child who's strong enough to survive in a deeply unfair and vicious world, and about what it means to succeed in that goal. It's a story about community, and empathy, and love, and about facing the hard edge of loss inside all of those things and asking whether it was worth it, without easy answers.

The previous books in this series were angry in a way that I rarely see in literature. The anger is still there in The Stone Sky, but this book is also sad, in a way that's profound and complicated and focused on celebrating the relationships that matter enough to make us sad. There are other stories that I have enjoyed reading more, but there are very few that I thought were as profound or as unflinching.

Every book in this series won a Hugo award. Every book in this series deserved it. This is a modern masterpiece of epic fantasy that I am quite certain we will still be talking about fifty years from now. It's challenging, powerful, emotional, and painful in a way that you may have to brace yourself to read, but it is entirely worth the effort.

Rating: 9 out of 10

Russ Allbery https://www.eyrie.org/~eagle/ Eagle's Path

security things in Linux v4.19

Tue, 23/10/2018 - 1:17am

Previously: v4.18.

Linux kernel v4.19 was released today. Here are some security-related things I found interesting:

L1 Terminal Fault (L1TF)

While it seems like ages ago, the fixes for L1TF actually landed at the start of the v4.19 merge window. As with the other speculation flaw fixes, lots of people were involved, and the scope was pretty wide: bare metal machines, virtualized machines, etc. LWN has a great write-up on the L1TF flaw and the kernel’s documentation on L1TF defenses is equally detailed. I like how clean the solution is for bare-metal machines: when a page table entry should be marked invalid, instead of only changing the “Present” flag, it also inverts the address portion so even a speculative lookup ignoring the “Present” flag will land in an unmapped area.

protected regular and fifo files

Salvatore Mesoraca implemented an O_CREAT restriction in /tmp directories for FIFOs and regular files. This is similar to the existing symlink restrictions, which take effect in sticky world-writable directories (e.g. /tmp) when the opening user does not match the owner of the existing file (or directory). When a program opens a FIFO or regular file with O_CREAT and this kind of user mismatch, it is treated like it was also opened with O_EXCL: it gets rejected because there is already a file there, and the kernel wants to protect the program from writing possibly sensitive contents to a file owned by a different user. This has become a more common attack vector now that symlink and hardlink races have been eliminated.
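These protections are exposed as sysctls; a quick sketch of checking and enabling them (the exact value semantics are described in the kernel's sysctl documentation):

# Check the current settings
sysctl fs.protected_fifos fs.protected_regular

# Enable the restriction for world-writable sticky directories
sysctl -w fs.protected_fifos=1
sysctl -w fs.protected_regular=1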

syscall register clearing, arm64

One of the ways attackers can influence potential speculative execution flaws in the kernel is to leak information into the kernel via “unused” register contents. Most syscalls take only a few arguments, so all the other calling-convention-defined registers can be cleared instead of just left with whatever contents they had in userspace. As it turns out, clearing registers is very fast. Similar to what was done on x86, Mark Rutland implemented a full register-clearing syscall wrapper on arm64.

Variable Length Array removals, part 3

As mentioned in part 1 and part 2, VLAs continue to be removed from the kernel. While CONFIG_THREAD_INFO_IN_TASK and CONFIG_VMAP_STACK cover most issues with stack exhaustion attacks, not all architectures have those features, so getting rid of VLAs makes sure we keep a few classes of flaws out of all kernel architectures and configurations. It’s been a long road, and it’s shaping up to be a 4-part saga with the remaining VLA removals landing in the next kernel. For v4.19, several folks continued to help grind away at the problem: Arnd Bergmann, Kyle Spiers, Laura Abbott, Martin Schwidefsky, Salvatore Mesoraca, and myself.

shift overflow helper

Jason Gunthorpe noticed that while the kernel recently gained add/sub/mul/div helpers to check for arithmetic overflow, we didn’t have anything for shift-left. He added check_shl_overflow() to round out the toolbox and Leon Romanovsky immediately put it to use to solve an overflow in RDMA.

Edit: I forgot to mention this next feature when I first posted:

trusted architecture-supported RNG initialization

The Random Number Generator in the kernel seeds its pools from many entropy sources, including any architecture-specific sources (e.g. x86’s RDRAND). Because many people do not want to trust the architecture-specific source, given the inability to audit its operation, entropy from those sources was not credited to RNG initialization, which wants to gather “enough” entropy before claiming to be initialized. However, because some systems don’t generate enough entropy at boot time, it was taking a while to gather enough system entropy (e.g. from interrupts) before the RNG became usable, which might block userspace from starting (e.g. systemd wants to get early entropy). To help these cases, Ted Ts’o introduced a toggle to trust the architecture-specific entropy completely (i.e. RNG is considered fully initialized as soon as it gets the architecture-specific entropy). To use this, the kernel can be built with CONFIG_RANDOM_TRUST_CPU=y (or booted with “random.trust_cpu=on”).

That’s it for now; thanks for reading. The merge window is open for v4.20! Wish us luck. :)

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

kees https://outflux.net/blog Debian – codeblog

Why organizational culture matters for online groups

Tue, 23/10/2018 - 12:55am

Leaders and scholars of online communities tend to think of community growth as the aggregate effect of inexperienced individuals arriving one-by-one. However, there is increasing evidence that growth in many online communities today involves newcomers arriving in groups with previous experience together in other communities. This difference has deep implications for how we think about the process of integrating newcomers. Instead of focusing only on individual socialization into the group culture, we must also understand how to manage mergers of existing groups with distinct cultures. Unfortunately, online community mergers have, to our knowledge, never been studied systematically.

To better understand mergers, my student Charlie Kiene spent six months in 2017 conducting ethnographic participant observation in two World of Warcraft raid guilds planning and undergoing mergers. The results, visible in the attendance plot below, show that the top merger led to a thriving and sustainable community while the bottom merger led to failure and the eventual dissolution of the group. Why did one merger succeed while the other failed? What can managers of other communities learn from these examples?

In a new paper that will be published in the Proceedings of the ACM Conference on Computer-supported Cooperative Work and Social Computing (CSCW), and that Charlie will present in New Jersey next month, I teamed up with Charlie and Aaron Shaw to try to answer these questions.

Raid team attendance before and after merging. Guilds were given pseudonyms to protect the identity of the research subjects.

In our research setting, World of Warcraft (WoW), players form organized groups called “guilds” to take on the game’s toughest bosses in virtual dungeons that are called “raids.” Raids can be extremely challenging, and they require a large number of players to be successful. Below is a video demonstrating the kind of communication and coordination needed to be successful as a raid team in WoW.

Because participation in a raid guild requires time, discipline, and emotional investment, raid guilds are constantly losing members and recruiting new ones to resupply their ranks. One common strategy for doing so is arranging formal mergers. Our study involved following two such groups as they completed mergers. To collect data for our study, Charlie joined both groups, attended and recorded all activities, took copious field notes, and spent hours interviewing leaders.

Although our team did not anticipate the divergent outcomes shown in the figure above when we began, we analyzed our data with an eye toward identifying themes that might point to reasons for the success of one merger and the failure of the other. The answers that emerged from our analysis suggest that the key differences that led one merger to be successful and the other to fail revolved around differences in the ways that the two mergers managed organizational culture. This basic insight is supported by a body of research about organizational culture in firms but seems not to have made it onto the radar of most members or scholars of online communities. My coauthors and I think more attention to the role that organizational culture plays in online communities is essential.

We found evidence of cultural incompatibility in both mergers, and it seems likely that some degree of cultural clash is inevitable in any merger. The most important results of our analysis are three observations we drew about specific things that the successful merger did to effectively manage organizational culture. Drawn from our analysis, these themes point to concrete things that other communities facing mergers—either formal or informal—can do.

A recent, random example of a guild merger recruitment post found on the WoW forums.

First, when planning mergers, groups can strategically select other groups with similar organizational culture. The successful merger in our study involved a carefully planned process of advertising for a potential merger on forums, testing out group compatibility by participating in “trial” raid activities with potential guilds, and selecting the guild that most closely matched their own group’s culture. In our settings, this process helped prevent conflict from emerging and ensured that there was enough common ground to resolve it when it did.

Second, leaders can plan intentional opportunities to socialize members of the merged or acquired group. The leaders of the successful merger held community-wide social events in the game to help new members learn their community’s norms. They spelled out these norms in a visible list of rules. They even included the new members in both the brainstorming and voting process of changing the guild’s name to reflect that they were a single, new, cohesive unit. The leaders of the failed merger lacked any explicitly stated community rules, and opportunities for socializing the members of the new group were virtually absent. Newcomers from the merged group would only learn community norms when they broke one of the unstated social codes.

The guild leaders in the successful merger documented every successful high-end raid boss achievement in a community-wide “Hall of Fame” journal. A screenshot including every guild member who contributed to the achievement is taken and uploaded to a “Hall of Fame” page.

Third and finally, our study suggested that social activities can be used to cultivate solidarity between the two merged groups, leading to increased retention of new members. We found that the successful guild merger organized an additional night of activity that was socially oriented. In doing so, they provided a setting where solidarity between new and existing members could grow, motivating members to stick around and keep playing with each other, even when the game gets frustrating.

Our results suggest that by preparing in advance, ensuring some degree of cultural compatibility, and providing opportunities to socialize newcomers and cultivate solidarity, the potential for conflict resulting from mergers can be mitigated. While mergers between firms often occur to make more money or consolidate resources, the experience of the failed merger in our study shows that mergers between online communities put their entire communities at stake. We hope our work can be used by leaders in online communities to successfully manage potential conflict resulting from merging or acquiring members of other groups in a wide range of settings.

Much more detail is available in our paper, which will be published open access and is currently available as a preprint.

Both this blog post and the paper it is based on are collaborative work by Charles Kiene from the University of Washington, Aaron Shaw from Northwestern University, and Benjamin Mako Hill from the University of Washington. We are also thrilled to mention that the paper received a Best Paper Honorable Mention award at CSCW 2018!

Benjamin Mako Hill https://mako.cc/copyrighteous copyrighteous

Measuring the speaker frequency response using the AUDMES free software GUI - nice free software

Mon, 22/10/2018 - 8:40am

My current home stereo is a patchwork of various pieces I got at flea markets over the years. It is amazing what kind of equipment shows up there. I've been wondering for a while if it was possible to measure how well this equipment is working together, and decided to see how far I could get using free software. After trawling the web I came across an article from DIY Audio and Video on Speaker Testing and Analysis describing how to test speakers and listing several software options, among them AUDio MEasurement System (AUDMES). It is the only free software system I could find focusing on measuring speakers and audio frequency response. In the process I also found an interesting article from NOVO on Understanding Speaker Specifications and Frequency Response and an article from ecoustics on Understanding Speaker Frequency Response, with a lot of information on what to look for and how to interpret the graphs. Armed with this knowledge, I set out to measure the state of my speakers.

The first hurdle was that AUDMES hadn't seen a commit for 10 years and did not build with current compilers and libraries. I got in touch with its author, who was no longer spending time on the program but gave me write access to the Subversion repository on SourceForge. The end result is that the code now builds on Linux and is capable of saving and loading the collected frequency response data in CSV format. The application is quite nice and flexible, and I was able to select the input and output audio interfaces independently. This made it possible to use a USB mixer as the input source, while sending output via my laptop headphone connection. I lacked the hardware and cabling to set up independent connections to the speakers and the microphone in any other way.
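Once the measurements are saved as CSV, the data can also be plotted outside the GUI. Here is a minimal sketch using gnuplot, assuming a comma separated file with frequency (Hz) in the first column and level (dB) in the second; the file name is only an example:

$ gnuplot -p -e 'set datafile separator ","; set logscale x; set xlabel "Frequency (Hz)"; set ylabel "Level (dB)"; plot "speaker-left.csv" using 1:2 with lines title "left speaker"'

Keeping the plotting step outside the application makes it easy to overlay measurements from several speakers in one graph later.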

Using this setup I could see how a large range of high frequencies apparently were not making it out of my speakers. The picture shows the frequency response measurement of one of the speakers. Note the frequency lines seem to be slightly misaligned compared to the CSV output from the program. I can not hear several of these high frequencies, according to measurements from Free Hearing Test Software, a freeware system to measure your hearing (I am still looking for a free software alternative), so I do not know from listening whether they are coming out of the speakers. I thus do not quite know how to figure out if the missing frequencies are a problem with the microphone, the amplifier or the speakers, but I managed to rule out the audio card in my PC by measuring my Bose noise canceling headset using its own microphone. This setup was able to see the high frequency tones, so the problem with my stereo had to be in the amplifier or speakers.

Anyway, to rule out one factor I ended up picking up a new set of speakers at a flea market, and these work a lot better than the old speakers, so I guess the microphone and amplifier are OK. If you need to measure your own speakers, check out AUDMES. If more people get involved, perhaps the project could become good enough to include in Debian? And if you know of some other free software to measure speaker and amplifier performance, please let me know. I am aware of the freeware option REW, but I want something that can be developed even after the vendor loses interest.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Petter Reinholdtsen http://people.skolelinux.org/pere/blog/ Petter Reinholdtsen - Entries tagged english

How about specifying the Debian version in the apt line, not its codename

Mon, 22/10/2018 - 4:33am
Some Debian users don't know about its codename; they just know they run a stable version or Debian version X (a major version) like Debian9, and are sometimes confused by the apt line ("hey, what does 'stretch' mean?").

We could probably improve this by providing a symlink in the repository, so that the apt line could be written as below.
$ cat /etc/apt/sources.list
deb http://ftp.jp.debian.org/debian/ Debian8 main contrib non-free
It's so simple, isn't it? If you have a comment, please post it to the BTS.
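For illustration, here is roughly what such a symlink could look like on the archive side; the mirror path and the mapping are assumptions about how this proposal might be implemented, not an existing feature (Debian 8 is jessie):

$ cd /srv/mirror/debian/dists
$ ln -s jessie Debian8    # "Debian8" in sources.list would then resolve to the jessie suite

The version-based apt line above would keep working, while codename-based lines continue to work as before.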

Hideki Yamane noreply@blogger.com Henrich plays with Debian

More than enough is too much.

Mon, 22/10/2018 - 4:25am

*sigh*. CoC should NOT be a beating stick, of course...

子貢問、「師与商也孰賢乎。」子曰、「師也過。商也不及。」曰、「然則師愈与。」子曰、「過猶不及也。」

(Zigong asked, "Who is worthier, Shi or Shang?" The Master said, "Shi goes too far; Shang falls short." "Then is Shi the better?" The Master said, "Going too far is as bad as falling short." From the Analects of Confucius.)

Hideki Yamane noreply@blogger.com Henrich plays with Debian

BGP LLGR: robust and reactive BGP sessions

Sun, 21/10/2018 - 9:44pm

On a BGP-routed network with multiple redundant paths, we seek to achieve two goals concerning reliability:

  1. A failure on a path should quickly bring down the related BGP sessions. A common expectation is to recover in less than a second by diverting the traffic to the remaining paths.

  2. As long as a path is operational, the related BGP sessions should stay up, even under duress.

Detecting failures fast: BFD

To quickly detect a failure, BGP can be associated with BFD, a protocol to detect faults in bidirectional paths,1 defined in RFC 5880 and RFC 5882. BFD can use very low timers, like 100 ms.
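As an illustration, a minimal sketch of such an aggressive BFD configuration in BIRD follows; the interface pattern and exact timers are assumptions for the example, not values taken from a specific setup:

protocol bfd {
    interface "eth*" {
        min rx interval 100 ms;
        min tx interval 100 ms;
        multiplier 3;
    };
}

With these values, a neighbor is declared down after three missed control packets, roughly 300 ms.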

However, when BFD runs in a process on top of a generic kernel,2 notably when running BGP on the host, it is not unexpected to lose a few BFD packets under adverse conditions: the daemon handling the BFD sessions may not get enough CPU to answer in a timely manner. In this scenario, it is not unlikely for all the BGP sessions to go down at the same time, creating an outage, as depicted in the last case in the diagram below.

Examples of failures on a network using BGP as the underlying routing protocol. A link failure is detected by BFD and the failed path is removed from the ECMP route. However, when high CPU usage on the bottom router prevents BFD packets from being processed in time, all paths are removed.

So far, we have two conflicting requirements:

  • lower the BFD timers to quickly detect a failure along the path, or
  • raise the BFD timers to ensure BGP sessions remain operational.
Fix false positives: BGP LLGR

Long-lived BGP Graceful Restart is a new BGP capability to retain stale routes for a longer period after a session failure, while treating them as least-preferred. It also defines a well-known community to share this information with other routers. It is defined in the Internet-Draft draft-uttaro-idr-bgp-persistence-04 and several implementations already exist:

  • Juniper JunOS (since 15.1, see the documentation),
  • Cisco IOS XR (unfortunately only for VPN and FlowSpec families),
  • BIRD (since 1.6.5 and 2.0.3, both yet to be released, sponsored by Exoscale), and
  • GoBGP (since 1.33).

The following illustration shows what happens during two failure scenarios. Like without LLGR, in ❷, a link failure is detected by BFD and the failed path is removed from the route as two other paths remain with a higher preference. A couple of minutes later, the faulty path has its stale timer expired and will not be used anymore. Shortly after, in ❸, the bottom router experiences high CPU usage, preventing BFD packets from being processed in time. The BGP sessions are closed and the remaining paths become stale, but as there is no better path left, they are still used until the LLGR timer expires. In the meantime, we expect the BGP sessions to resume.

Examples of failures on a network using BGP as the underlying routing protocol with LLGR enabled.

From the point of view of the top router, the first failed path was considered stale because the BGP session with R1 was down. However, during the second failure, the two remaining paths were considered stale because they were tagged with the well-known community LLGR_STALE (65535:6) by R2 and R3.

Another interesting point of BGP LLGR is the ability to restart the BGP daemon without any impact—as long as all paths keep a steady state shortly before and during restart. This is quite interesting when running BGP on the host.3

BIRD

Let’s see how to configure BIRD 1.6. As BGP LLGR is built on top of the regular BGP graceful restart (BGP GR) capability, we need to enable both. The timer for BGP LLGR starts after the timer for BGP GR. During a regular graceful restart, routes are kept with the same preference. Therefore it is important to set this timer to 0.

template bgp BGP_LLGR {
    bfd graceful;
    graceful restart yes;
    graceful restart time 0;
    long lived graceful restart yes;
    long lived stale time 120;
}

When a problem appears on the path, the BGP session goes down and the LLGR timer starts:

$ birdc show protocol R1_1 all
name     proto    table    state  since       info
R1_1     BGP      master   start  11:20:17    Connect
  Preference:     100
  Input filter:   ACCEPT
  Output filter:  ACCEPT
  Routes:         1 imported, 0 exported, 0 preferred
  Route change stats:     received   rejected   filtered    ignored   accepted
    Import updates:              2          0          0          0          4
    Import withdraws:            0          0        ---          0          0
    Export updates:             12         10          0        ---          2
    Export withdraws:            1        ---        ---        ---          0
  BGP state:          Connect
    Neighbor address: 2001:db8:104::1
    Neighbor AS:      65000
    Neighbor graceful restart active
    LL stale timer:   112/-

The related paths are marked as stale (as reported by the s in 100s) and tagged with the well-known community LLGR_STALE:

$ birdc show route 2001:db8:10::1/128 all
2001:db8:10::1/128 via 2001:db8:204::1 on eth0.204 [R1_2 10:35:01] * (100) [i]
        Type: BGP unicast univ
        BGP.origin: IGP
        BGP.as_path:
        BGP.next_hop: 2001:db8:204::1 fe80::5254:3300:cc00:5
        BGP.local_pref: 100
                   via 2001:db8:104::1 on eth0.104 [R1_1 11:22:51] (100s) [i]
        Type: BGP unicast univ
        BGP.origin: IGP
        BGP.as_path:
        BGP.next_hop: 2001:db8:104::1 fe80::5254:3300:6800:5
        BGP.local_pref: 100
        BGP.community: (65535,6)

We are left with only one path for the route in the kernel:

$ ip route show 2001:db8:10::1
2001:db8:10::1 via 2001:db8:204::1 dev eth0.204 proto bird metric 1024 pref medium

To upgrade BIRD without impact, it needs to run with the -R flag and the graceful restart yes directive should be present in the kernel protocols. Then, before upgrade, stop it using SIGKILL instead of SIGTERM to avoid a clean close of the BGP sessions.
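As a rough sketch, a manual restart along those lines could look like the following; the configuration file path is an assumption and the kernel protocol excerpt is abbreviated:

protocol kernel {
    graceful restart yes;   # keep kernel routes in place while BIRD restarts
    # other kernel protocol options omitted
}

$ sudo kill -KILL $(pidof bird)         # SIGKILL skips the clean close of the BGP sessions
$ sudo bird -R -c /etc/bird/bird.conf   # -R tells BIRD to start in graceful restart recovery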

Juniper JunOS

With JunOS, we only have to enable BGP LLGR for each family—assuming BFD is already configured:

# Enable BGP LLGR
edit protocols bgp group peers family inet6 unicast
set graceful-restart long-lived restarter stale-time 2m

Once a path is failing, the associated BGP session goes down and the BGP LLGR timer starts:

> show bgp neighbor 2001:db8:104::4
Peer: 2001:db8:104::4+179 AS 65000 Local: 2001:db8:104::1+57667 AS 65000
  Group: peers                 Routing-Instance: master
  Forwarding routing-instance: master
  Type: Internal    State: Connect        Flags: <>
  Last State: Active        Last Event: ConnectRetry
  Last Error: None
  Export: [ LOOPBACK NOTHING ]
  Options: <Preference HoldTime Ttl AddressFamily Multipath Refresh>
  Options: <BfdEnabled LLGR>
  Address families configured: inet6-unicast
  Holdtime: 6 Preference: 170
  NLRI inet6-unicast:
  Number of flaps: 2
  Last flap event: Restart
  Time until long-lived stale routes deleted: inet6-unicast 00:01:05
  Table inet6.0 Bit: 20000
    RIB State: BGP restart is complete
    Send state: not advertising
    Active prefixes:              0
    Received prefixes:            1
    Accepted prefixes:            1
    Suppressed due to damping:    0
    LLGR-stale prefixes:          1

The associated path is marked as stale and is therefore inactive as there are better paths available:

> show route 2001:db8:10::4 extensive
[…]
        BGP    Preference: 170/-101
                Source: 2001:db8:104::4
                Next hop type: Router, Next hop index: 778
                Next hop: 2001:db8:104::4 via em1.104, selected
                Protocol next hop: 2001:db8:104::4
                Indirect next hop: 0xb1d27c0 1048578 INH Session ID: 0x15c
                State: <Int Ext>
                Inactive reason: LLGR stale
                Local AS: 65000 Peer AS: 65000
                Age: 4          Metric2: 0
                Communities: llgr-stale
                Accepted LongLivedStale
                Localpref: 100
                Router ID: 1.0.0.4
[…]

Have a look at the GitHub repository for the complete configurations as well as the expected outputs during normal operations. There is also a variant with the configurations of BIRD and JunOS when acting as a BGP route reflector. Now that FRR got BFD support, I hope it will get LLGR support as well.

  1. With point-to-point links, BGP can immediately detect a failure without BFD. However, with a pair of fibers, the failure may be unidirectional, leaving it undetected by the other end until the expiration of the hold timer. ↩︎

  2. On a Juniper MX, BFD is usually handled directly by the real-time microkernel running on the packet forwarding engine. The BFD control packet contains a bit indicating if BFD is implemented by the forwarding plane or by the control plane. Therefore, you can check with tcpdump how a router implements BFD. Here is an example where 10.71.7.1, a Linux host running BIRD, implements BFD in the control plane, while 10.71.0.3, a Juniper MX, does not:

    $ sudo tcpdump -pni vlan181 port 3784
    IP 10.71.7.1 > 10.71.0.3: BFDv1, Control, State Up, Flags: [none]
    IP 10.71.0.3 > 10.71.7.1: BFDv1, Control, State Up, Flags: [Control Plane Independent]

    ↩︎

  3. Such a feature is the selling point of BGP graceful restart. However, without LLGR, non-functional paths are kept with the same preference and are not removed from ECMP routes. ↩︎

Vincent Bernat https://vincent.bernat.ch/en Vincent Bernat

Web browser integration of VLC with Bittorrent support

Sun, 21/10/2018 - 9:50am

Bittorrent is, as far as I know, currently the most efficient way to distribute content on the Internet. It is used by all sorts of content providers, from national TV stations like NRK and Linux distributors like Debian and Ubuntu, to the Internet Archive.

Almost a month ago a new package adding Bittorrent support to VLC became available in Debian testing and unstable. To test it, simply install it like this:

apt install vlc-plugin-bittorrent

Since the plugin was made available for the first time in Debian, several improvements have been made to it. In version 2.2-4, now available in both testing and unstable, a desktop file is provided to teach browsers to start VLC when the user clicks on torrent files or magnet links. The last part is thanks to me finally understanding what the strange x-scheme-handler style MIME types in desktop files are used for. By adding x-scheme-handler/magnet to the MimeType entry in the desktop file, at least the browsers Firefox and Chromium will offer to start VLC when a magnet URI is selected on a web page. The end result is that now, with the plugin installed in Buster and Sid, one can visit any Internet Archive page with movies using a web browser and click on the torrent link to start streaming the movie.
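To illustrate the mechanism, here is a minimal sketch of such a desktop file; the Name and Exec values are illustrative assumptions, not the exact file shipped by the package:

[Desktop Entry]
Type=Application
Name=VLC media player (Bittorrent)
Exec=vlc %U
Terminal=false
MimeType=application/x-bittorrent;x-scheme-handler/magnet;

The x-scheme-handler/magnet entry is what lets browsers map magnet: links to the application, while application/x-bittorrent covers downloaded .torrent files.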

Note, there are still some misfeatures in the plugin. One is the fact that it will hang and block VLC from exiting until the torrent streaming starts. Another is the fact that it will pick and play a random file in a multi-file torrent, which is not always the video file you want. Combined with the first, this can make it a bit hard to get the video streaming going. But when it works, it seems to do a good job.

For the Debian packaging, I would love to find a good way to test if the plugin works with VLC using autopkgtest. I tried, but I do not know enough about the inner workings of VLC to get it working. For now the autopkgtest script only checks that the .so file was successfully loaded by VLC. If you have any suggestions, please submit a patch to the Debian bug tracking system.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Petter Reinholdtsen http://people.skolelinux.org/pere/blog/ Petter Reinholdtsen - Entries tagged english
