
Jonathan Dowland: iPod refresh

Planet Debian - Tue, 04/12/2018 - 8:46pm

Recently I filled up the storage in my iPod and so planned to upgrade it. This is a process I've been through several times in the past. My routine used to be to buy the largest capacity SD card that existed at the time (usually twice the capacity of the current one) and spend around £90. Luckily, SD capacity has been growing faster than my music collection. You can buy 400G SD cards today, but I only bought a 200G one, and I only spent around £38.

As I wrote last time, I don't use iTunes: I can move music on and off the iPod from any computer, and I choose music to listen to using a simple file manager. One drawback of this approach is that I tend to listen to the same artists over and over, and large swathes of my collection lie forgotten. The impression I get is that music managers like iTunes have various schemes to help you keep in touch with the rest of your collection, via playlists: "recently added", "stuff you listened to this time last year", or whatever.

As a first step in this direction, I decided it would be useful to build up playlists of recently modified (or added) files. I thought it would be easiest to hook this into my backup solution, and in case it's of interest to anyone else, I thought I'd share it. The backup scheme runs a shell script to perform the syncing, which now looks (mostly) like this:

date="$(/bin/date +%Y-%m-%d)" plsd=/home/jon/pls make_playlists() { grep -v deleting \ | grep -v '/\._' \ | grep -E '(m4a|mp3|ogg|wav|flac)$' \ | tee -a "$plsd/$date.m3u8" } # set the attached blinkstick LED to a colour indicating "work in progress" # systemd sets it to either red or green once the job is complete blinkstick --index 1 --limit 10 --set-color 33c280 # sync changes from my iPod onto my NAS; feed the output of files changed # into "make_playlists" rsync -va --delete --no-owner --no-group --no-perms \ --exclude=/.Spotlight-V100 --exclude=/.Trash-1000 \ --exclude=/.Trashes --exclude=/lost+found /media/ipod/ /music/ \ | make_playlists # sync all generated playlists back onto the iPod rsync -va --no-owner --no-group --no-perms \ /home/jon/pls/ /media/ipod/playlists/

Time will tell whether this will help.

Daniel Lange: Google GMail continues to own the email market, Microsoft is catching up

Planet Debian - Tue, 04/12/2018 - 7:41pm

Back in 2009 I wrote about Google's GMail emerging as the dominant platform for email. It had 46% of all accounts I sampled from American bloggers for the Ph.D. thesis of a friend. Blogging was big back then.

Now I wondered how things have changed over the last decade while I was working on another email-related job. Having access to a list of 2.3 million email addresses from a rather similar (US-centric) demographic, let's do some math:

Google's GMail has 39% in that (much larger, but still non-scientific and skewed) sample. This is down from 46% in 2009. Microsoft, with its various email domains from Hotmail to Live.com, has massively caught up from 10% to 35%. This is definitely also due to Microsoft now focusing more on the strong Microsoft Office brands, e.g. for Office 365 and Outlook.com. Yahoo, the #2 player back in 2009, is at 18%, still up from the 12% back then.

So Google plus Microsoft command nearly ¾ of all email addresses in that US-centric sample. Adding Yahoo into the equation takes the coverage to >92%. Wow.
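
If you want to repeat this kind of tally yourself, a minimal sketch in Python follows. It assumes a plain text file with one address per line; the domain-to-owner mapping shown is an illustrative subset, not the full mapping behind the numbers above.

from collections import Counter

# Illustrative subset of a domain-to-owner mapping; a real list is longer.
OWNERS = {
    "gmail.com": "Google", "googlemail.com": "Google",
    "hotmail.com": "Microsoft", "live.com": "Microsoft", "outlook.com": "Microsoft",
    "yahoo.com": "Yahoo",
    "aol.com": "AOL",
    "mac.com": "Apple", "me.com": "Apple",
}

counts = Counter()
total = 0
with open("addresses.txt") as f:  # hypothetical input: one address per line
    for line in f:
        domain = line.strip().rsplit("@", 1)[-1].lower()
        counts[OWNERS.get(domain, "other")] += 1
        total += 1

for owner, n in counts.most_common():
    print("%-10s %5.1f%%" % (owner, 100.0 * n / total))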

Email has essentially centralized onto three infrastructure providers and with this the neutrality advantage of open standards will probably erode. Interoperability is something two or three players can make or break for 90% of the user base within a single meeting in Sunnyvale.

Google is already trying their luck with "confidential email" which carries expiry dates and revocable reading rights for the recipient. So ... not really email anymore. More like Snapchat. Microsoft has been famous for their winmail.dat attachments and other negligence of email best practices. Yahoo is probably busy trying to develop a sustainable business model and trying to find cash that Marissa didn't spend, so hopefully there's less risk of misguided "innovations" in the email space from them.

All other players are less than 1% of the email domains in the sample. AOL used to have 3.1% and now they are at 0.6%, which is in the same (tiny) ballpark as the combined Apple offerings (mac.com, me.com) at 0.4%.

There is virtually no use of the new TLDs for (real, user)[1] email. Just a few hundred .info and .name addresses. And very few people consider themselves .sexy or .guru and want to say so via their email TLD.

Domain owner   2009    2018
GMail          46.1%   38.6%
Yahoo          11.6%   18.3%
Microsoft       9.9%   35.4%
AOL             3.1%    0.6%
Apple           1.0%    0.4%
Comcast         2.3%    0.2%
SBCGlobal       0.9%    0.09%
  1. There is extensive use of cheap TLDs for "throw-away" spam operations

Russ Allbery: Review: The Winter Long

Planet Debian - Tue, 04/12/2018 - 4:24am

Review: The Winter Long, by Seanan McGuire

Series: October Daye #8
Publisher: DAW
Copyright: 2014
ISBN: 1-101-60175-2
Format: Kindle
Pages: 368

This is the eighth book in the October Daye series and leans heavily on the alliances, friendship, world-building, and series backstory. This is not the sort of series that can be meaningfully started in the middle. And, for the same reason, it's also rather hard to review without spoilers, although I'll give it a shot.

Toby has had reason to fear Simon Torquill for the entire series. Everything that's happened to her was set off by him turning her into a fish and destroying her life. She's already had to deal with his partner (in Late Eclipses), so it's not a total surprise that he would show up again. But Toby certainly didn't expect him to show up at her house, or to sound weirdly unlike an enemy, or to reference a geas and an employer. She had never understood his motives, but there may be more to them than simple evil.

I have essentially struck out trying to recommend this series to other people. I think everyone else who's started it has bounced off of it for various reasons: unimpressed by Toby's ability to figure things out, feeling the bits borrowed from the mystery genre are badly done, not liking Irish folklore transplanted to the San Francisco Bay Area, or just finding it too dark. I certainly can't argue with people's personal preferences, but I want to, since this remains my favorite urban fantasy series and I want to talk about it with more people. Thankfully, the friends who started reading it independent of my recommendation all love it too. (Perhaps I'm cursing it somehow?)

Regardless, this is more of exactly what I like about this series, which was never the private detective bits (that have now been discarded entirely) and was always the maneuverings and dominance games of faerie politics, the comfort and solid foundation of Toby's chosen family, Toby's full-throttle-forward approach to forcing her way through problems, and the lovely layered world-building. There is so much going on in McGuire's faerie realm, so many hidden secrets, old grudges, lost history, and complex family relationships. I can see some of the shape of problems that the series will eventually resolve, but I still have no guesses as to how McGuire will resolve them.

The Winter Long takes another deep look at some of Toby's oldest relationships, including revisiting some events from Rosemary and Rue (the first book of the series) in a new light. It also keeps, and further deepens, my favorite relationships in this series: Tybalt, Mags and the Library (introduced in the previous book), and of course the Luidaeg, who is my favorite character in the entire series and the one I root for the most.

I've been trying to pinpoint what I like so much about this series, particularly given the number of people who disagree, and I think it's that Toby gets along with, and respects, a wide variety of difficult people, and brings to every interaction a consistent set of internal ethics and priorities. McGuire sets this against a backdrop of court politics, ancient rivalries and agreements, and hidden races with contempt for humans; Toby's role in that world is to stubbornly do the right thing based mostly on gut feeling and personal loyalty. It's not particularly complex ethics; most of the challenges she faces are eventually resolved by finding the right person to kick (or, more frequently now, use her slowly-growing power against) and the right place to kick them.

That simplicity is what I like. This is my comfort reading. Toby looks at tricky court intrigues, bull-headedly does the right thing, and manages to make that work out, which for me (particularly in this political climate) is escapism in the best sense. She has generally good judgment in her friends, those friends stand by her, and the good guys win. Sometimes that's just what I want in a series, particularly when it comes with an impressive range of mythological creations, an interesting and slowly-developing power set, enjoyable character banter, and a ton of world-building mysteries that I want to know more about.

Long story short, this is more of Toby and friends in much the same vein as the last few books in the series. It adds new depth to some past events, moves Toby higher into the upper echelons of faerie politics, and contains many of my favorite characters. Oh, and, for once, Toby isn't sick or injured or drugged for most of the story, which I found a welcome relief.

If you've read this far into the series, I think you'll love it. I certainly did.

Followed by A Red-Rose Chain.

Rating: 8 out of 10

Colin Watson: Deploying Swift

Planet Debian - Tue, 04/12/2018 - 2:37am

Sometimes I want to deploy Swift, the OpenStack object storage system.

Well, no, that’s not true. I basically never actually want to deploy Swift as such. What I generally want to do is to debug some bit of production service deployment machinery that relies on Swift for getting build artifacts into the right place, or maybe the parts of the Launchpad librarian (our blob storage service) that use Swift. I could find an existing private or public cloud that offers the right API and test with that, but sometimes I need to test with particular versions, and in any case I have a terribly slow internet connection and shuffling large build artifacts back and forward over the relevant bit of wet string makes it painfully slow to test things.

For a while I’ve had an Ubuntu 12.04 VM lying around with an Icehouse-based Swift deployment that I put together by hand. It works, but I didn’t keep good notes and have no real idea how to reproduce it, not that I really want to keep limping along with manually-constructed VMs for this kind of thing anyway; and I don’t want to be dependent on obsolete releases forever. For the sorts of things I’m doing I need to make sure that authentication works broadly the same way as it does in a real production deployment, so I want to have Keystone too. At the same time, I definitely don’t want to do anything close to a full OpenStack deployment of my own: it’s much too big a sledgehammer for this particular nut, and I don’t really have the hardware for it.

Here’s my solution to this, which is compact enough that I can run it on my laptop, and while it isn’t completely automatic it’s close enough that I can spin it up for a test and discard it when I’m finished (so I haven’t worried very much about producing something that runs efficiently). It relies on Juju and LXD. I’ve only tested it on Ubuntu 18.04, using Queens; for anything else you’re on your own. In general, I probably can’t help you if you run into trouble with the directions here: this is provided “as is”, without warranty of any kind, and all that kind of thing.

First, install Juju and LXD if necessary, following the instructions provided by those projects, and also install the python-openstackclient package as you’ll need it later. You’ll want to set Juju up to use LXD, and you should probably make sure that the shells you’re working in don’t have http_proxy set as it’s quite likely to confuse things unless you’ve arranged for your proxy to be able to cope with your local LXD containers. Then add a model:

juju add-model swift

At this point there’s a bit of complexity that you normally don’t have to worry about with Juju. The swift-storage charm wants to mount something to use for storage, which with the LXD provider in practice ends up being some kind of loopback mount. Unfortunately, being able to perform loopback mounts exposes too much kernel attack surface, so LXD doesn’t allow unprivileged containers to do it. (Ideally the swift-storage charm would just let you use directory storage instead.) To make the containers we’re about to create privileged enough for this to work, run:

lxc profile set juju-swift security.privileged true
lxc profile device add juju-swift loop-control unix-char \
    major=10 minor=237 path=/dev/loop-control
for i in $(seq 0 255); do
    lxc profile device add juju-swift loop$i unix-block \
        major=7 minor=$i path=/dev/loop$i
done

Now we can start deploying things! Save this to a file, e.g. swift.bundle:

series: bionic
description: "Swift in a box"
applications:
  mysql:
    charm: "cs:mysql-62"
    channel: candidate
    num_units: 1
    options:
      dataset-size: 512M
  keystone:
    charm: "cs:keystone"
    num_units: 1
  swift-storage:
    charm: "cs:swift-storage"
    num_units: 1
    options:
      block-device: "/etc/swift/storage.img|5G"
  swift-proxy:
    charm: "cs:swift-proxy"
    num_units: 1
    options:
      zone-assignment: auto
      replicas: 1
relations:
  - ["keystone:shared-db", "mysql:shared-db"]
  - ["swift-proxy:swift-storage", "swift-storage:swift-storage"]
  - ["swift-proxy:identity-service", "keystone:identity-service"]

And run:

juju deploy swift.bundle

This will take a while. You can run juju status to see how it’s going in general terms, or juju debug-log for detailed logs from the individual containers as they’re putting themselves together. When it’s all done, it should look something like this:

Model  Controller  Cloud/Region  Version  SLA
swift  lxd         localhost     2.3.1    unsupported

App            Version  Status  Scale  Charm          Store       Rev  OS      Notes
keystone       13.0.1   active      1  keystone       jujucharms  290  ubuntu
mysql          5.7.24   active      1  mysql          jujucharms   62  ubuntu
swift-proxy    2.17.0   active      1  swift-proxy    jujucharms   75  ubuntu
swift-storage  2.17.0   active      1  swift-storage  jujucharms  250  ubuntu

Unit              Workload  Agent  Machine  Public address  Ports     Message
keystone/0*       active    idle   0        10.36.63.133    5000/tcp  Unit is ready
mysql/0*          active    idle   1        10.36.63.44     3306/tcp  Ready
swift-proxy/0*    active    idle   2        10.36.63.75     8080/tcp  Unit is ready
swift-storage/0*  active    idle   3        10.36.63.115              Unit is ready

Machine  State    DNS           Inst id        Series  AZ  Message
0        started  10.36.63.133  juju-d3e703-0  bionic      Running
1        started  10.36.63.44   juju-d3e703-1  bionic      Running
2        started  10.36.63.75   juju-d3e703-2  bionic      Running
3        started  10.36.63.115  juju-d3e703-3  bionic      Running

At this point you have what should be a working installation, but with only administrative privileges set up. Normally you want to create at least one normal user. To do this, start by creating a configuration file granting administrator privileges (this one comes verbatim from the openstack-base bundle):

_OS_PARAMS=$(env | awk 'BEGIN {FS="="} /^OS_/ {print $1;}' | paste -sd ' ')
for param in $_OS_PARAMS; do
    if [ "$param" = "OS_AUTH_PROTOCOL" ]; then continue; fi
    if [ "$param" = "OS_CACERT" ]; then continue; fi
    unset $param
done
unset _OS_PARAMS

_keystone_unit=$(juju status keystone --format yaml | \
    awk '/units:$/ {getline; gsub(/:$/, ""); print $1}')
_keystone_ip=$(juju run --unit ${_keystone_unit} 'unit-get private-address')
_password=$(juju run --unit ${_keystone_unit} 'leader-get admin_passwd')

export OS_AUTH_URL=${OS_AUTH_PROTOCOL:-http}://${_keystone_ip}:5000/v3
export OS_USERNAME=admin
export OS_PASSWORD=${_password}
export OS_USER_DOMAIN_NAME=admin_domain
export OS_PROJECT_DOMAIN_NAME=admin_domain
export OS_PROJECT_NAME=admin
export OS_REGION_NAME=RegionOne
export OS_IDENTITY_API_VERSION=3
# Swift needs this:
export OS_AUTH_VERSION=3
# Gnocchi needs this
export OS_AUTH_TYPE=password

Source this into a shell: for instance, if you saved this to ~/.swiftrc.juju-admin, then run:

. ~/.swiftrc.juju-admin

You should now be able to run openstack endpoint list and see a table for the various services exposed by your deployment. Then you can create a dummy project and a user with enough privileges to use Swift:

USERNAME=your-username
PASSWORD=your-password
openstack domain create SwiftDomain
openstack project create --domain SwiftDomain --description Swift \
    SwiftProject
openstack user create --domain SwiftDomain --project-domain SwiftDomain \
    --project SwiftProject --password "$PASSWORD" "$USERNAME"
openstack role add --project SwiftProject --user-domain SwiftDomain \
    --user "$USERNAME" Member

(This is intended for testing rather than for doing anything particularly sensitive. If you cared about keeping the password secret then you’d use the --password-prompt option to openstack user create instead of supplying the password on the command line.)

Now create a configuration file granting privileges for the user you just created. I felt like automating this to at least some degree:

touch ~/.swiftrc.juju
chmod 600 ~/.swiftrc.juju
sed '/^_password=/d;
     s/\( OS_PROJECT_DOMAIN_NAME=\).*/\1SwiftDomain/;
     s/\( OS_PROJECT_NAME=\).*/\1SwiftProject/;
     s/\( OS_USER_DOMAIN_NAME=\).*/\1SwiftDomain/;
     s/\( OS_USERNAME=\).*/\1'"$USERNAME"'/;
     s/\( OS_PASSWORD=\).*/\1'"$PASSWORD"'/' \
    <~/.swiftrc.juju-admin >~/.swiftrc.juju

Source this into a shell. For example:

. ~/.swiftrc.juju

You should now find that swift list works. Success! Now you can swift upload files, or just start testing whatever it was that you were actually trying to test in the first place.
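
If you would rather drive the deployment from Python than via the swift CLI, a rough sketch using the python-swiftclient library could look like this (assuming the OS_* variables from ~/.swiftrc.juju are in the environment; the container and object names are made up):

import os
from swiftclient.client import Connection

# Build a Keystone v3 connection from the OS_* variables sourced above.
conn = Connection(
    authurl=os.environ["OS_AUTH_URL"],
    user=os.environ["OS_USERNAME"],
    key=os.environ["OS_PASSWORD"],
    os_options={
        "project_name": os.environ["OS_PROJECT_NAME"],
        "project_domain_name": os.environ["OS_PROJECT_DOMAIN_NAME"],
        "user_domain_name": os.environ["OS_USER_DOMAIN_NAME"],
    },
    auth_version="3",
)

conn.put_container("test-container")
conn.put_object("test-container", "hello.txt", contents=b"hello swift")
headers, containers = conn.get_account()
print([c["name"] for c in containers])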

This is not a setup I expect to leave running for a long time, so to tear it down again:

juju destroy-model swift

This will probably get stuck trying to remove the swift-storage unit, since nothing deals with detaching the loop device. If that happens, find the relevant device in losetup -a from another window and use losetup -d to detach it; juju destroy-model should then be able to proceed.

Credit to the Juju and LXD teams and to the maintainers of the various charms used here, as well as of course to the OpenStack folks: their work made it very much easier to put this together.

Sean Whitton: Debian Policy call for participation -- December 2018

Planet Debian - Mon, 03/12/2018 - 8:20pm

Here are some of the bugs against the Debian Policy Manual. Please consider getting involved.

Consensus has been reached and help is needed to write a patch

#853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai…

#874019 Note that the ’-e’ argument to x-terminal-emulator works like ’–’

#874206 allow a trailing comma in package relationship fields

#902612 Packages should not touch users’ home directories

#905453 Policy does not include a section on NEWS.Debian files

#906286 repository-format sub-policy

#907051 Say much more about vendoring of libraries

Wording proposed, awaiting review from anyone and/or seconds by DDs

#786470 [copyright-format] Add an optional “License-Grant” field

#845255 Include best practices for packaging database applications

#850156 Please firmly deprecate vendor-specific series files

#897217 Vcs-Hg should support -b too

Merged for the next release (no action needed)

#188731 Also strip .comment and .note sections

#845715 Please document that packages are not allowed to write outside thei…

#912581 Slightly relax the requirement to include verbatim copyright inform…

Gunnar Wolf: Chairing «Topics on Internet Censorship and Surveillance»

Planet Debian - Mon, 03/12/2018 - 7:07pm

I have been honored to be invited as a co-chair (together with Vasilis Ververis and Mario Isaakidis) for a Special Track called «Topics on Internet Censorship and Surveillance» (TICS), at the The Eighteenth International Conference on Networks, which will be held in Valencia, Spain, 2019.03.24–2019.03.28, and organized under IARIA's name and umbrella.

I am reproducing here the Call for Papers. Please do note that if you are interested in participating, the relevant dates are those publicized for the Special Track (submission by 2019.01.29; notification by 2019.02.18; registration and camera-ready by 2019.02.27), not those on ICN's site.

Over the past years there has been a greater demand for online censorship and surveillance, as an understandable reaction against hate speech, copyright violations, and other cases related to citizen compliance with civil laws and regulations by national authorities. Unfortunately, this is often accompanied by a tendency to extensively censor online content and massively spy on citizens' actions. Numerous whistleblower revelations, leaks from classified documents, and a vast amount of information released by activists, researchers and journalists reveal evidence of government-sponsored infrastructure that either goes beyond the requirements and scope of the law, or operates without any effective regulations in place. In addition, this infrastructure often supports the interests of big private corporations, such as the companies that enforce online copyright control.

TICS is a special track in the area of Internet censorship, surveillance and other adversarial burdens on technology that endanger the safety (physical security and privacy) of its users.

Proposals for TICS 2019 should be situated within the field of Internet censorship, network measurements, information controls, surveillance and content moderation. Ideally, topics should connect to the following areas, but are not limited to them:

  • Technical, social, political, and economical implications of Internet censorship and surveillance
  • Detection and analysis of network blocking and surveillance infrastructure (hardware or software)
  • Research on legal frameworks, regulations and policies that imply blocking or limitation of the availability of network services and online content
  • Online censorship circumvention and anti-surveillance practices
  • Network measurements methodologies to detect and categorize network interference
  • Research on the implications of automated or centralized user content regulation (such as for hate speech, copyright, or disinformation)

Please help me share this invitation with possible interested people!
Oh — And to make this more interesting and enticing for you, ICN will take place at the same city and just one week before the Internet Freedom Festival, the Global Unconference of the Internet Freedom Communities ☺

Julien Danjou: A multi-value syntax tree filtering in Python

Planet Debian - Mon, 03/12/2018 - 2:29pm

A while ago, we saw how to write a simple filtering syntax tree with Python. The idea was to provide a small abstract syntax tree with an easy-to-write data structure that would be able to filter a value. Filtering means that once evaluated, our AST returns either True or False based on the passed value.

With that, we were able to write small rules like Filter({"eq": 3})(4) that would return False since, well, 4 is not equal to 3.

In this new post, I propose we enhance our filtering ability to support multiple values. The idea is to be able to write something like this:

>>> f = Filter(
...     {"and": [
...         {"eq": ("foo", 3)},
...         {"gt": ("bar", 4)},
...     ]},
... )
>>> f(foo=3, bar=5)
True
>>> f(foo=4, bar=5)
False

The biggest change here is that the binary operators (eq, gt, le, etc.) now support getting two values, and not only one, and that we can pass multiple values to our filter by using keyword arguments.

How should we implement that? Well, we can keep the same data structure we built previously. However, this time we're going to make the following changes:

  • The left value of the binary operator will be a string that will be used as the key to access the keyword arguments passed to our Filter.__call__ values.
  • The right value of the binary operator will be kept as it is (like before).

We therefore need to change our Filter.build_evaluator to accommodate this, as follows:

def build_evaluator(self, tree):
    try:
        operator, nodes = list(tree.items())[0]
    except Exception:
        raise InvalidQuery("Unable to parse tree %s" % tree)
    try:
        op = self.multiple_operators[operator]
    except KeyError:
        try:
            op = self.binary_operators[operator]
        except KeyError:
            raise InvalidQuery("Unknown operator %s" % operator)
        assert len(nodes) == 2  # binary operators take 2 values
        def _op(values):
            return op(values[nodes[0]], nodes[1])
        return _op
    # Iterate over every item in the list of the value linked
    # to the logical operator, and compile it down to its own
    # evaluator.
    elements = [self.build_evaluator(node) for node in nodes]
    return lambda values: op((e(values) for e in elements))

The algorithm is pretty much the same, the tree being browsed recursively.

First, the operator and its arguments (nodes) are extracted.

Then, if the operator takes multiple arguments (such as and and or operators), each node is recursively evaluated and a function is returned evaluating those nodes.
If the operator is a binary operator (such as eq, lt, etc.), it checks that the passed argument list length is 2. Then, it returns a function that will apply the operator (e.g., operator.eq) to values[nodes[0]] and nodes[1]: the former accesses the arguments (values) passed to the filter's __call__ function, while the latter is directly the passed argument.

The full class looks like this:

import operator


class InvalidQuery(Exception):
    pass


class Filter(object):
    binary_operators = {
        u"=": operator.eq,
        u"==": operator.eq,
        u"eq": operator.eq,
        u"<": operator.lt,
        u"lt": operator.lt,
        u">": operator.gt,
        u"gt": operator.gt,
        u"<=": operator.le,
        u"≤": operator.le,
        u"le": operator.le,
        u">=": operator.ge,
        u"≥": operator.ge,
        u"ge": operator.ge,
        u"!=": operator.ne,
        u"≠": operator.ne,
        u"ne": operator.ne,
    }

    multiple_operators = {
        u"or": any,
        u"∨": any,
        u"and": all,
        u"∧": all,
    }

    def __init__(self, tree):
        self._eval = self.build_evaluator(tree)

    def __call__(self, **kwargs):
        return self._eval(kwargs)

    def build_evaluator(self, tree):
        try:
            operator, nodes = list(tree.items())[0]
        except Exception:
            raise InvalidQuery("Unable to parse tree %s" % tree)
        try:
            op = self.multiple_operators[operator]
        except KeyError:
            try:
                op = self.binary_operators[operator]
            except KeyError:
                raise InvalidQuery("Unknown operator %s" % operator)
            assert len(nodes) == 2  # binary operators take 2 values
            def _op(values):
                return op(values[nodes[0]], nodes[1])
            return _op
        # Iterate over every item in the list of the value linked
        # to the logical operator, and compile it down to its own
        # evaluator.
        elements = [self.build_evaluator(node) for node in nodes]
        return lambda values: op((e(values) for e in elements))

We can check that it works by building some filters:

x = Filter({"eq": ("foo", 1)}) assert not x(foo=1, bar=1) x = Filter({"eq": ("foo", "bar")}) assert not x(foo=1, bar=1) x = Filter({"or": ( {"eq": ("foo", "bar")}, {"eq": ("bar", 1)}, )}) assert x(foo=1, bar=1)

Supporting multiple values is handy as it allows passing complete dictionaries to the filter, rather than just one value. That enables users to filter more complex objects.

Sub-dictionary support

It's also possible to support deeper data structures, like a dictionary of dictionaries. By replacing values[nodes[0]] with self._resolve_name(values, nodes[0]), using a _resolve_name method like this one, the filter is able to traverse dictionaries:

ATTR_SEPARATOR = "." def _resolve_name(self, values, name): try: for subname in name.split(self.ATTR_SEPARATOR): values = values[subname] return values except KeyError: raise InvalidQuery("Unknown attribute %s" % name)

It then works like this:

x = Filter({"eq": ("baz.sub", 23)}) assert x(foo=1, bar=1, baz={"sub": 23}) x = Filter({"eq": ("baz.sub", 23)}) assert not x(foo=1, bar=1, baz={"sub": 3})

By using the syntax key.subkey.subsubkey, the filter is able to access items inside dictionaries in more complex data structures.
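
The two extensions compose nicely; for example, assuming build_evaluator now uses _resolve_name as described above:

x = Filter({"and": [
    {"eq": ("baz.sub", 23)},
    {"gt": ("bar", 4)},
]})
assert x(bar=5, baz={"sub": 23})
assert not x(bar=3, baz={"sub": 23})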

That basic filter engine can evolve quite easily into something powerful, as you can add new operators or new ways to access and manipulate the passed data structure.

If you have other ideas on nifty features that could be added, feel free to add a comment below!

Joachim Breitner: Sliding Right into Information Theory

Planet Debian - Mon, 03/12/2018 - 10:56am

It's hardly news any more, but it seems I have not blogged about my involvement last year with an interesting cryptanalysis project, which resulted in the publication Sliding right into disaster: Left-to-right sliding windows leak by Daniel J. Bernstein, me, Daniel Genkin, Leon Groot Bruinderink, Nadia Heninger, Tanja Lange, Christine van Vredendaal and Yuval Yarom, which was published at CHES 2017 and on ePrint (ePrint is the cryptographer’s version of arXiv).

This project nicely touched upon many fields of computer science: First we need systems expertise to mount a side-channel attack that uses cache timing differences to observe which line of a square-and-multiply algorithm the target process is executing. Then we need algorithm analysis to learn from these observations partial information about the bits of the private key. This part includes nice PL-y concepts like rewrite rules (see Section 3.2). Once we know enough about the secret keys, we can use fancy cryptography to recover the whole secret key (Section 3.4). And finally, some theoretical questions arise, such as: “How much information do we need for the attack to succeed?” and “Do we obtain this much information?”, and we need some nice math and information theory to answer these.

Initially, I focused on the PL-related concepts. We programming language people are yak-shavers, and in particular “rewrite rules” just demands the creation of a DSL to express them, and an interpreter to execute them, doesn’t it? But it turned out that these rules are actually not necessary, as the key recovery can use the side-channel observation directly, as we found out later (see Section 4 of the paper). But now I was already hooked, and turned towards the theoretical questions mentioned above.

Shannon vs. Rényi

It felt good to shake the dust of some of the probability theory that I learned for my maths degree, and I also learned some new stuff. For example, it was intuitively clear that whether the attack succeeds depends on the amount of information obtained by the side channel attack, and based on prior work, the expectation was that if we know more than half the bits, then the attack would succeed. Note that for this purpose, two known “half bits” are as good as knowing one full bit; for example knowing that the secret key is either 01 or 11 (one bit known for sure) is just as good as knowing that the key is either 00 or 11.

Clearly, this is related to entropy somehow -- but how? Trying to prove that the attack works if the entropy rate of the leak is >0.5 just did not work, against all intuition. But when we started with a formula that describes when the attack succeeds, and then simplified it, we found a condition that looked suspiciously like what we wanted, namely H > 0.5, only that H was not the conventional entropy (also known as the Shannon entropy, H = −∑p ⋅ log p), but rather something else: H = −log ∑p², which turned out to be called the collision entropy or Rényi entropy.
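
To make the difference tangible, here is a quick sketch comparing the two quantities (in bits) for a uniform and a skewed distribution:

import math

def shannon(p):
    # H = -sum p_i * log2(p_i)
    return -sum(x * math.log2(x) for x in p if x > 0)

def collision(p):
    # H2 = -log2(sum p_i^2), the Renyi entropy of order 2
    return -math.log2(sum(x * x for x in p))

uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]
print(shannon(uniform), collision(uniform))  # 2.0 and 2.0: equal when uniform
print(shannon(skewed), collision(skewed))    # ~1.36 vs ~0.94: H2 never exceeds H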

This resulted in Theorem 3 in the paper, and it neatly answers the question of when the Heninger and Shacham key recovery algorithm, extended to partial information, can be expected to succeed, in a much more general setting than just this particular side-channel attack.

Markov chains and an information theoretical spin-off

The other theoretical question is: Why does this particular side-channel attack succeed, i.e. why is the entropy rate H > 0.5? As so often, Markov chains are an immensely powerful tool to answer that question. After some transformations, I managed to model the state of the square-and-multiply algorithm, together with the side-channel leak, as a Markov chain with a hidden state. Now I just had to calculate its Rényi entropy rate, right? I wrote some Haskell code to do this transformation, and also came up with an ad-hoc, intuitive way of calculating the rate. So when it was time to write up the paper, I was searching for a reference that describes the algorithm that I was using…

Only I could find none! I contacted researchers who have published on Markov chains and entropies, but they just referred me in circles, until one of them, Maciej Skórski, responded. Our conversation, highly condensed, went like this: “Nice idea, but it can’t be right, it would solve problem X” – “Hmm, but it feels so right. Here is a proof sketch.” – “Oh, indeed, cool. I can even generalize this! Let’s write a paper”. Which we did! Analytic Formulas for Renyi Entropy of Hidden Markov Models (preprint only, it is still under submission).

More details

Because I joined the sliding-right project late, not all my contributions made it into the actual paper, and therefore I published an “inofficial appendix” separately on ePrint. It contains

  1. an alternative way to find the definitively knowable bits of the secret exponent, which is complete and can (in rare corner cases) find more bits than the rewrite rules in Section 3.1
  2. an algorithm to calculate the collision entropy H, including how to model a side-channel attack like this one as a Markov chain, and how to calculate the entropy of such a Markov chain, and
  3. the proof of Theorem 3.

I also published the Haskell code that I wrote for this project, including the Markov chain collision entropy stuff. It is not written with public consumption in mind, but feel free to ask if you have questions about it.

Note that all errors, typos and irrelevancies in that document and the code are purely mine and not of any of the other authors of the sliding-right paper. I’d like to thank my coauthors for the opportunity to join this project.

Daniel Pocock: Smart home: where to start?

Planet Debian - Mon, 03/12/2018 - 9:44am

My home automation plans have been progressing and I'd like to share some observations I've made about planning a project like this, especially for those with larger houses.

With so many products and technologies, it can be hard to know where to start. Some things have become straightforward, for example, Domoticz can soon be installed from a package on some distributions. Yet this simply leaves people contemplating what to do next.

The quickstart

For a small home, like an apartment, you can simply buy something like the Zigate, a single motion and temperature sensor, a couple of smart bulbs and expand from there.

For a large home, you can also get your feet wet with exactly the same approach in a single room. Once you are familiar with the products, use a more structured approach to plan a complete solution for every other space.

The Debian wiki has started gathering some notes on things that work easily on GNU/Linux systems like Debian as well as Fedora and others.

Prioritize

What is your first goal? For example, are you excited about having smart lights or are you more concerned with improving your heating system efficiency with zoned logic?

Trying to do everything at once may be overwhelming. Make each of these things into a separate sub-project or milestone.

Technology choices

There are many technology choices:

  • Zigbee, Z-Wave or another protocol? I'm starting out with a preference for Zigbee but may try some Z-Wave devices along the way.
  • E27 or B22 (Bayonet) light bulbs? People in the UK and former colonies may have B22 light sockets and lamps. For new deployments, you may want to standardize on E27. Amongst other things, E27 is used by all the Ikea lamp stands and if you want to be able to move your expensive new smart bulbs between different holders in your house at will, you may want to standardize on E27 for all of them and avoid buying any Bayonet / B22 products in future.
  • Wired or wireless? Whenever you take up floorboards, it is a good idea to add some new wiring. For example, CAT6 can carry both power and data for a diverse range of devices.
  • Battery or mains power? In an apartment with two rooms and less than five devices, batteries may be fine but in a house, you may end up with more than a hundred sensors, radiator valves, buttons, and switches and you may find yourself changing a battery in one of them every week. If you have lodgers or tenants and you are not there to change the batteries then this may cause further complications. Some of the sensors have a socket for an optional power supply, battery eliminators may also be an option.
Making an inventory

Creating a spreadsheet table is extremely useful.

This helps estimate the correct quantity of sensors, bulbs, radiator valves and switches and it also helps to budget. Simply print it out, leave it under the Christmas tree and hope Santa will do the rest for you.

Looking at my own house, I made a first pass counting the sensors, bulbs, radiator valves and switches needed in each space.

Don't forget to include all those unusual spaces like walk-in pantries, a large cupboard under the stairs, cellar, en-suite or enclosed porch. Each deserves a row in the table.
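
For illustration only, a first pass at such a table might look like this (the rooms and counts here are hypothetical, not the actual contents of my house):

Space            Motion  Temperature  Bulbs  Radiator valves  Switches
Kitchen          1       1            4      1                2
Sitting room     2       1            6      2                2
Hallway          1       1            2      1                1
Walk-in pantry   1       0            1      0                1
Cellar           1       1            2      0                1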

Sensors help make good decisions

Whatever the aim of the project, sensors are likely to help obtain useful data about the space and this can help to choose and use other products more effectively.

Therefore, it is often a good idea to choose and deploy sensors through the home before choosing other products like radiator valves and smart bulbs.

The smartest place to put those smart sensors

When placing motion sensors, it is important to avoid putting them too close to doorways where they might detect motion in adjacent rooms or hallways. It is also a good idea to avoid putting the sensor too close to any light bulb: if the bulb attracts an insect, it will trigger the motion sensor repeatedly. Temperature sensors shouldn't be too close to heaters or potential draughts around doorways and windows.

There are a range of all-in-one sensors available, some have up to six features in one device smaller than an apple. In some rooms this is a convenient solution but in other rooms, it may be desirable to have separate motion and temperature sensors in different locations.

Consider the dining and sitting rooms in my own house, illustrated in the floorplan below. The sitting room is also a potential 6th bedroom or guest room with sofa bed, the downstairs shower room conveniently located across the hall. The dining room is joined to the sitting room by a sliding double door. When the sliding door is open, a 360 degree motion sensor in the ceiling of the sitting room may detect motion in the dining room and vice-versa. It appears that 180 degree motion sensors located at the points "1" and "2" in the floorplan may be a better solution.

These rooms have wall mounted radiators and fireplaces. To avoid any of these potential heat sources the temperature sensors should probably be in the middle of the room.

This photo shows the proposed location for the 180 degree motion sensor "2" on the wall above the double door:

Summary

To summarize, buy a Zigate and a small number of products to start experimenting with. Make an inventory of all the products potentially needed for your home. Try to mark sensor locations on a floorplan, thinking about the type of sensor (or multiple sensors) you need for each space.

Russ Allbery: Review: Linked

Planet Debian - Mon, 03/12/2018 - 5:22am

Review: Linked, by Albert-László Barabási

Publisher: Plume
Copyright: 2002, 2003
Printing: May 2003
ISBN: 0-452-28439-2
Format: Trade paperback
Pages: 241

Barabási at the time of this writing was a professor of physics at Notre Dame University (he's now the director of Northeastern University's Center of Complex Networks). Linked is a popularization of his research into scale-free networks, their relationship to power-law distributions (such as the distribution of wealth), and a proposed model explaining why so many interconnected systems in nature and human society appear to form scale-free networks. Based on some quick Wikipedia research, it's worth mentioning that the ubiquity of scale-free networks has been questioned and may not be as strong as Barabási claims here, not that you would know about that controversy from this book.

I've had this book sitting in my to-read pile for (checks records) ten years, so I only vaguely remember why I bought it originally, but I think it was recommended as a more scientific look at the phenomena popularized by Malcolm Gladwell in The Tipping Point. It isn't that, exactly; Barabási is much less interested in how ideas spread than he is in network structure and its implications for robustness and propagation through the network. (Contagion, as in virus outbreaks, is the obvious example of the latter.)

There are basically two parts to this book: a history of Barabási's research into scale-free networks and the development of the Barabási-Albert model for scale-free network generation, and then Barabási's attempt to find scale-free networks in everything under the sun and make grandiose claims about the implications of that structure for human understanding. One of these parts is better than the other.

The basic definition of a scale-free network is a network where the degree of the nodes (the number of edges coming into or out of the node) follows a power-law distribution. It's a bit hard to describe a power-law distribution without the math, but the intuitive idea is that the distribution will contain a few "winners" who will have orders of magnitude more connections than the average node, to the point that their connections may dominate the graph. This is very unlike a normal distribution (the familiar bell-shaped curve), where most nodes will cluster around a typical number of connections and the number of nodes with a given count of connections will drop off rapidly in either direction from that peak. A typical example of a power-law distribution outside of networks is personal wealth: rather than clustering around some typical values the way natural measurements like physical height do, a few people (Bill Gates, Warren Buffett) have orders of magnitude more wealth than the average person and a noticeable fraction of all wealth in society.

I am moderately dubious of Barabási's assertion here that most prior analysis of networks before his scale-free work focused on random networks (ones where new nodes are connected at an existing node chosen at random), since this is manifestly not the case in computer science (my personal field). However, scale-free networks are a real phenomenon that have some very interesting properties, and Barabási and Albert's proposal of how they might form (add nodes one at a time, and prefer to attach a new node to the existing node with the most connections) is a simple and compelling model of how they can form. Barabási also discusses a later variation, which Wikipedia names the Bianconi-Barabási model, which adds a fitness function for more complex preferential attachment.
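
The mechanism is simple enough to sketch in a few lines of Python; this is the textbook preferential-attachment process, not code from the book:

import random

def barabasi_albert(n, m, seed=None):
    """Edge list of a BA graph: each new node attaches to m existing nodes,
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    edges = []
    pool = []  # each node appears once per incident edge: degree-weighted picks
    # start from a small fully connected core of m+1 nodes
    core = range(m + 1)
    for i in core:
        for j in core:
            if i < j:
                edges.append((i, j))
                pool += [i, j]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(pool))  # preferential attachment
        for t in targets:
            edges.append((new, t))
            pool += [new, t]
    return edges

# the degree distribution of a large instance follows a power law
edges = barabasi_albert(10000, 2, seed=42)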

Linked covers the history of the idea from Barabási's perspective, as well as a few of its fascinating properties. One is that scale-free networks may not have a tipping point in the Gladwell sense. Depending on the details, there may not be a lower limit of nodes that have to adopt some new property for it to spread through the network. Another is robustness: scale-free networks are startlingly robust against removal of random nodes from the network, requiring removal of large percentages of the nodes before the network fragments, but are quite vulnerable to a more targeted attack that focuses on removing the hubs (the nodes with substantially more connections than average). Scale-free networks also naturally give rise to "six degrees of separation" effects between any two nodes, since the concentration of connections at hubs leads to short paths.

These parts of Linked were fairly interesting, if sometimes clunky. Unfortunately, Barabási doesn't have enough material to talk about mathematical properties and concrete implications at book length, and instead wanders off into an exercise in finding scale-free networks everywhere (cell metabolism, social networks, epidemics, terrorism), and leaping from that assertion (which Wikipedia, at least, labels as not necessarily backed up by later analysis) to some rather overblown claims. I think my favorite was the confident assertion that by 2020 we will be receiving custom-tailored medicine designed specifically for the biological networks of our unique cells, which, one, clearly isn't going to happen, and two, has a strained and dubious connection to scale-free network theory to say the least. There's more in that vein. (That said, the unexpected mathematical connection between the state transition of a Bose-Einstein condensate and scale-free network collapse given sufficiently strong attachment preference and permission to move connections was at least entertaining.)

The general introduction to scale-free networks was interesting and worth reading, but I think the core ideas of this book could have been compressed into a more concise article (and probably have, somewhere on the Internet). The rest of it was mostly boring, punctuated by the occasional eye-roll. I appreciate Barabási's enthusiasm for his topic — it reminds me of professors I worked with at Stanford and their enthusiasm for their pet theoretical concept — but this may be one reason to have the popularization written by someone else. Not really recommended as a book, but if you really want a (somewhat dated) introduction to scale-free networks, you could do worse.

Rating: 6 out of 10

Mike Gabriel: My Work on Debian LTS/ELTS (November 2018)

Planet Debian - Sun, 02/12/2018 - 10:59pm

In November 2018, I worked on the Debian LTS project for nine hours as a paid contributor. Of the originally planned twelve hours (four of them carried over from October), I gave two hours back to the pool of available work hours and carried one hour over to December.

For November, I also signed up for four hours of ELTS work, but at the end of the month I had to realize that I hadn't even set up a test environment for Debian wheezy ELTS, so I gave these four hours back to the "pool". I have started getting an overview of the ELTS workflow now and will start fixing packages in December.

So, here is my list of work accomplished for Debian LTS in November 2018:

  • Regression upload of poppler (DLA 1562-2 [1]), updating the fix for CVE-2018-16646
  • Research on Saltstack salt regarding CVE-2018-15750 and CVE-2018-15751. Unfortunately, there was no reference in the upstream Git repository to the commit(s) that actually fixed those issues. Finally, it turned out that the REST netapi code that is affected by the named CVEs was added between upstream release 2014.1.13 and 2014.7(.0). As Debian jessie ships salt's upstream release 2014.1.13, I concluded that salt in jessie is not affected by the named CVEs.
  • Last week I joined Markus Koschany in triaging a plenitude of libav issues that have/had status "undetermined" for Debian jessie. I was able to triage 21 issues, of which 15 have applicable patches. Three issues have patches that don't apply cleanly and need manual work. One issue applies only to ffmpeg, not to libav. For another issue, there seems to be no patch available (yet). And yet another issue seems to be already somehow fixed in libav (although with error code AVERROR_PATCHWELCOME).

Thanks to all LTS/ELTS sponsors for making these projects possible.

light+love
Mike

References

Thorsten Alteholz: My Debian Activities in November 2018

Planet Debian - Sun, 02/12/2018 - 8:07pm

FTP master

This month I accepted 486 packages, which is twice as much as last month. On the other hand, I was a bit reluctant and rejected only 38 uploads. The overall number of packages that got accepted this month was 556.

Debian LTS

This was my fifty-third month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 30h. During that time I did LTS uploads or prepared security uploads of:

  • [DLA 1574-1] imagemagick security update for one CVE
  • [DLA 1586-1] openssl security update for two CVEs
  • [DLA 1587-1] pixman security update for one CVE
  • [DLA 1594-1] xml-security-c security update for one (temporary) CVE
  • [DLA 1595-1] gnuplot5 security update for three CVEs
  • [DLA 1597-1] gnuplot security update for three CVEs
  • [DLA 1602-1] nsis security update for two CVEs

Thanks to Markus Koschany for testing my openssl package. It is really having a calming effect when a different pair of eyes has a quick look and does not start to scream.

I also started to work on the new CVEs of wireshark.

My debdiff of tiff was used by Moritz to double-check his and Lazlo's work, and it finally resulted in DSA 4349-1. Though not every debdiff will result in its own DSA, they are still useful for the security team. So always think of Stretch when you do a DLA.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the sixth ELTS month.

During my allocated time I uploaded:

  • ELA-58-1 for tiff3
  • ELA-59-1 for openssl
  • ELA-60-1 for pixman

I also started to work on the new CVEs of wireshark.

As like in LTS, I also did some days of frontdesk duties.

Other stuff

I improved packaging of …

  • libctl by finally moving to guile-2.2. Though guile-2.0 might not disappear completely in Buster, this is my first step to make it happen
  • mdns-scan
  • libjwt

I uploaded new upstream versions of …

Again I sponsored some packages for Nicolas Mora. This time it was some dependencies for his new project taliesin, a lightweight audio media server with a REST API interface and a React JS client application. I am already anxious to give it a try :-).

As it is again this time of the year, I would also like to draw some attention to the Debian Med Advent Calendar. Like in past years, the Debian Med team starts a bug squashing event from December 1st to 24th. Every bug that is closed will be registered in the calendar. So instead of taking something from the calendar, this special one will be filled, and at Christmas hopefully every Debian Med related bug is closed. Don’t hesitate, start to squash :-).

Sylvain Beucler: New Android SDK/NDK Rebuilds

Planet Debian - Sun, 02/12/2018 - 3:59pm

As described in a previous post, Google is still click-wrapping all Android developer binaries with a non-free EULA.

I recompiled SDK 9.0.0, NDK r18b and SDK Tools 26.1.1 from the free sources to get rid of it:

https://android-rebuilds.beuc.net/

with one-command, Docker-based builds:

https://gitlab.com/android-rebuilds/auto

This triggered an interesting thread about the current state of free dev tools to target the Android platform.

Hans-Christoph Steiner also called for joining efforts towards a repository hosted using the F-Droid architecture:

https://forum.f-droid.org/t/call-for-help-making-free-software-builds-of-the-android-sdk/4685

What do you think?

Sven Hoexter: nginx and lua to evaluate CDN behaviour

Planet Debian - Sun, 02/12/2018 - 2:40pm

I guess in the past everyone used CGIs to achieve something similar; it just seemed like a nice detour to use the nginx Lua module instead. Don't expect to read something magic. I'm currently looking into different CDN providers and how they behave regarding cache-control headers, what additional headers they send by default, and what happens when you activate certain features. So I set up two locations inside the nginx configuration using a content_by_lua_block {} for testing purposes.

location /header {
    default_type 'text/plain';
    content_by_lua_block {
        local myheads=ngx.req.get_headers()
        for key in pairs(myheads) do
            local outp="Header '" .. key .. "': " .. myheads[key]
            ngx.say(outp)
        end
    }
}

location /cc {
    default_type 'text/plain';
    content_by_lua_block {
        local cc=ngx.req.get_headers()["cc"]
        if cc ~= nil then
            ngx.header["cache-control"]=cc
            ngx.say(cc)
        else
            ngx.say("moep - no cc header found")
        end
    }
}

The first one is rather boring; it just returns the request headers my origin server received, like this:

$ curl -is https://nocigar.shx0.cf/header
HTTP/2 200
date: Sun, 02 Dec 2018 13:20:14 GMT
content-type: text/plain
set-cookie: __cfduid=d503ed2d3148923514e3fe86b4e26f5bf1543756814; expires=Mon, 02-Dec-19 13:20:14 GMT; path=/; domain=.shx0.cf; HttpOnly; Secure
strict-transport-security: max-age=2592000
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
server: cloudflare
cf-ray: 482e16f7ae1bc2f1-FRA

Header 'x-forwarded-for': 93.131.190.59
Header 'cf-ipcountry': DE
Header 'connection': Keep-Alive
Header 'accept': */*
Header 'accept-encoding': gzip
Header 'host': nocigar.shx0.cf
Header 'x-forwarded-proto': https
Header 'cf-visitor': {"scheme":"https"}
Header 'cf-ray': 482e16f7ae1bc2f1-FRA
Header 'cf-connecting-ip': 93.131.190.59
Header 'user-agent': curl/7.62.0

The second one is more interesting: it copies the content of the "cc" HTTP request header into the "cache-control" response header, to allow convenient evaluation of how different cache-control header settings are handled.

$ curl -H'cc: no-store,no-cache' -is https://nocigar.shx0.cf/cc/foobar42.jpg
HTTP/2 200
date: Sun, 02 Dec 2018 13:27:46 GMT
content-type: image/jpeg
set-cookie: __cfduid=d971badd257b7c2be831a31d13ccec77f1543757265; expires=Mon, 02-Dec-19 13:27:45 GMT; path=/; domain=.shx0.cf; HttpOnly; Secure
cache-control: no-store,no-cache
cf-cache-status: MISS
strict-transport-security: max-age=2592000
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
server: cloudflare
cf-ray: 482e22001f35c26f-FRA

no-store,no-cache

$ curl -H'cc: public' -is https://nocigar.shx0.cf/cc/foobar42.jpg
HTTP/2 200
date: Sun, 02 Dec 2018 13:28:18 GMT
content-type: image/jpeg
set-cookie: __cfduid=d48a4b571af6374c759c430c91c3223d71543757298; expires=Mon, 02-Dec-19 13:28:18 GMT; path=/; domain=.shx0.cf; HttpOnly; Secure
cache-control: public, max-age=14400
cf-cache-status: MISS
expires: Sun, 02 Dec 2018 17:28:18 GMT
strict-transport-security: max-age=2592000
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
server: cloudflare
cf-ray: 482e22c8886627aa-FRA

public

$ curl -H'cc: no-cache,no-store' -is https://nocigar.shx0.cf/cc/foobar42.jpg
HTTP/2 200
date: Sun, 02 Dec 2018 13:30:33 GMT
content-type: image/jpeg
set-cookie: __cfduid=dbc4758b7bb98d556173a89aa2a8c2d3a1543757433; expires=Mon, 02-Dec-19 13:30:33 GMT; path=/; domain=.shx0.cf; HttpOnly; Secure
cache-control: public, max-age=14400
cf-cache-status: HIT
expires: Sun, 02 Dec 2018 17:30:33 GMT
strict-transport-security: max-age=2592000
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
server: cloudflare
cf-ray: 482e26185d36c29c-FRA

public

As you can see, this endpoint is currently fronted by Cloudflare using a default configuration. If you burned one request path below "/cc/" and it's now cached for a long time, you can just use a random different one to continue your test, without any requirement to flush the CDN caches.
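
To probe a batch of cache-control values programmatically rather than with curl, something like this sketch works (Python with the requests library; the endpoint is the one from above, and a random path per request avoids already-cached objects):

import random
import requests

BASE = "https://nocigar.shx0.cf/cc"  # the test endpoint from above

for cc in ("no-store,no-cache", "public", "private,max-age=60"):
    # a fresh random path per value so earlier cached responses don't interfere
    url = "%s/test%d.jpg" % (BASE, random.randrange(10**6))
    r = requests.get(url, headers={"cc": cc})
    print("%-22s cache-control: %-24s cf-cache-status: %s" % (
        cc, r.headers.get("cache-control"), r.headers.get("cf-cache-status")))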

Junichi Uekawa: Playing with FUSE after several years and realizing things haven't changed that much.

Planet Debian - Sun, 02/12/2018 - 12:37am

Julian Andres Klode: Migrating web servers

Planet Debian - Sat, 01/12/2018 - 11:40pm

As of today, I migrated various services from shared hosting on uberspace.de to a VPS hosted by hetzner. This includes my weechat client, this blog, and the following other websites:

  • jak-linux.org
  • dep.debian.net redirector
  • mirror.fail
Rationale

Uberspace runs CentOS 6. This was causing more and more issues for me, as I was trying to run up-to-date weechat binaries. In the final stages, I ran weechat and tmux inside a debian proot. It certainly beat compiling half a system with linuxbrew.

The web performance was suboptimal. Webpages were served with Pound and Apache; TLS connection overhead was just huge, there was only HTTP/1.1, and no keep-alive.

Security-wise, things were interesting: Everything ran as my user, obviously, whether that was scripts, weechat, or mail delivery helpers. Ugh. There was also only a single certificate, meaning that all domains shared it, even if they were completely distinct, like jak-linux.org and dep.debian.net.

Enter Hetzner VPS

I launched a VPS at hetzner and configured it with Ubuntu 18.04, the latest Ubuntu LTS. It is a CX21, so it has 2 vcores, 4 GB RAM, 40 GB SSD storage, and 20 TB of traffic. For 5.83€/mo, you can’t complain.

I went on to build a repository of ansible roles (see repo on github.com), that configured the system with a few key characteristics:

  • http is served by nginx
  • certificates are per logical domain: each domain has a canonical name and a set of aliases, and the certificate is generated for all of them
  • HTTPS is configured according to Mozilla’s modern profile, meaning TLSv1.2-only and a very restricted list of ciphers (see the sketch after this list). I can revisit that if it’s causing problems, but I’ve not seen huge issues.
  • Log files are anonymized to 24 bits for IPv4 addresses and 32 bits for IPv6 addresses, which should allow me to identify an ISP, but not an individual user.
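
A minimal nginx sketch of that TLS policy (the cipher string is an assumption in the style of Mozilla's configuration generator, not copied from the actual ansible roles):

# TLSv1.2 only, per Mozilla's "modern" profile at the time
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
# restricted, AEAD-only cipher list
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256';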

I don’t think the roles are particularly reusable for others, but it’s nice to have a central repository containing all the configuration for the server.

Go server to serve comments

When I started self-hosting the blog and added commenting via Mastodon, it was via a third-party PHP script. This has been replaced by a Go program (GitHub repo). The new Go program scales a lot better than a PHP script, and provides better security properties due to AppArmor and systemd-based sandboxing; it even uses systemd’s DynamicUser.

Special care has been taken to set timeouts for talking to upstream servers, so the program cannot hang with open connections and will respond eventually.

The Go binary is connected to nginx via a UNIX domain socket that serves FastCGI. The service is activated via systemd socket activation, which allows the socket to be owned by www-data while the binary itself runs as a dynamic user. Nginx’s native FastCGI caching mechanism is enabled, so the Go process is contacted at most once every 10 minutes for a given post. Nice!
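
For illustration, a minimal sketch of such a socket/service pair (unit names, socket path, and binary name are hypothetical, not taken from the actual setup):

# comments.socket: nginx (www-data) connects to this socket
[Socket]
ListenStream=/run/comments.sock
SocketUser=www-data
SocketGroup=www-data
SocketMode=0660

[Install]
WantedBy=sockets.target

# comments.service: the binary itself runs under a throwaway UID
[Service]
# systemd hands over the listening socket at activation time
ExecStart=/usr/local/bin/comment-server
DynamicUser=yes

On the nginx side, a fastcgi_pass unix:/run/comments.sock location combined with a fastcgi_cache zone and fastcgi_cache_valid 200 10m; would match the described ten-minute caching behaviour.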

Performance

Performance is a lot better than on the old shared server. Pages load in as little as half the time they did before. Scalability also seems better: I tried various benchmarks and achieved consistently higher concurrency ratings. A simple curl via HTTPS now takes 100ms instead of 200ms.
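
Such numbers are easy to reproduce with curl's -w timing variables, for example:

$ curl -s -o /dev/null \
    -w 'TLS done after %{time_appconnect}s, total %{time_total}s\n' \
    https://jak-linux.org/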

Performance is still suboptimal from the west coast of the US or other places far away from Germany, but got a lot better than before: Measuring from Oregon using webpagetest, it took 1.5s for a page to fully render vs ~3.4s before. A CDN would surely be faster, but would lose the end-to-end encryption.

Upcoming mail server

The next step is to enable email. Setting up postfix with dovecot turns out to be quite easy: install them, tweak a few settings, set up SPF, DKIM, DMARC, and a PTR record, and off you go.
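
For illustration, the SPF and DMARC parts are plain TXT records; a minimal sketch for a hypothetical zone (domain, policy, and DKIM selector are made-up examples, not the actual records):

; allow only the domain's MX hosts to send, hard-fail everything else
example.org.                  IN TXT "v=spf1 mx -all"
; ask receivers to reject failing mail and send aggregate reports
_dmarc.example.org.           IN TXT "v=DMARC1; p=reject; rua=mailto:postmaster@example.org"
; DKIM public key, published under the selector "mail"
mail._domainkey.example.org.  IN TXT "v=DKIM1; k=rsa; p=<base64 public key>"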

I mostly expect to read my email by tagging it on the server using notmuch somehow, and then syncing it to my laptop using muchsync. The IMAP access should allow some notifications or reading on the phone.

Spam filtering will be handled with rspamd. It seems to be the hot new thing on the market, it integrates with postfix as a milter (a minimal sketch follows the list below), and it handles a lot of stuff, such as:

  • greylisting
  • IP scoring
  • DKIM verification and signing
  • ARC verification
  • SPF verification
  • DNS lists
  • Rate limiting
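
A minimal sketch of the postfix side of that milter hookup (rspamd's proxy worker listens on TCP port 11332 in its default configuration; treat the exact values as assumptions rather than the actual setup):

# /etc/postfix/main.cf: pass incoming and locally submitted mail to rspamd
smtpd_milters = inet:localhost:11332
non_smtpd_milters = inet:localhost:11332
# if rspamd is unreachable, accept mail instead of bouncing it
milter_default_action = accept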

It also has fancy stuff like neural networks. Woohoo!

As another bonus point: It’s trivial to confine with AppArmor, which I really love. Postfix and Dovecot are a mess to confine with their hundreds of different binaries.

I found it via uberspace, which plans on using it for their next uberspace7 generation. It is also used by some large installations like rambler.ru and locaweb.com.br.

I plan to migrate mail from uberspace in the upcoming weeks, and will post more details about it.

Paul Wise: FLOSS Activities November 2018

Planet Debian - Fri, 30/11/2018 - 10:03pm
Changes

Issues

Review

Administration
  • myrepos: respond to some tickets
  • Debian: respond to porterbox schroot query, remove obsolete role accounts, restart misbehaving webserver, redirect openmainframe mail to debian-s390, respond to query about consequences of closing accounts
  • Debian wiki: unblacklist networks, redirect/answer user support query, answer question about page names, whitelist email addresses
  • Debian packages site: update mirror config
  • Debian derivatives census: merge and deploy changes from Outreachy applicants and others
Sponsors

The purple-discord upload was sponsored by my employer. All other work was done on a volunteer basis.

Chris Lamb: Free software activities in November 2018

Planet Debian - Fri, 30/11/2018 - 9:23pm

Here is my monthly update covering what I have been doing in the free software world during November 2018 (previous month):

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month I:


Debian

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

  • Investigated and triaged golang-go.net-dev, libsdl2-image, lighttpd, nginx, pdns, poppler, rustc & xml-security-c amongst many others.

  • "Frontdesk" duties, responding to user queries, etc.

  • Issued DLA 1572-1 for nginx to fix a denial of service (DoS) vulnerability — as there was no validation of the size of a 64-bit atom in an .mp4 file, this led to CPU exhaustion when the size was zero.

  • Issued DLA 1576-1 correcting an SSH passphrase disclosure in ansible's User module leaking data in the global process list.

  • Issued DLA 1584-1 for ruby-i18n to fix a remote denial-of-service vulnerability.

  • Issued DLA 1585-1 to prevent an XSS vulnerability in ruby-rack where a malicious request could forge the HTTP scheme being returned to the underlying application.

  • Issued DLA 1591-1 to fix two vulnerabilities in libphp-phpmailer where arbitrary local files could be disclosed via relative-path HTML transformations, as well as an object injection attack.

  • Uploaded libsdl2-image (2.0.3+dfsg1-3) and sdl-image1.2 (1.2.12-10) to the unstable distribution to fix buffer overflows on corrupt or maliciously-crafted XCF files. (#912617 & #912618)

  • Uploaded ruby-i18n (0.7.0-3) to unstable [...] and prepared a stable proposed update for a potential 0.7.0-2+deb9u1 in stretch (#914187).

  • Uploaded ruby-rack (1.6.4-6) to unstable [...] and (2.0.5-2) to experimental [...]. I also prepared a proposed update for a 1.6.4-4+deb9u1 in the stable distribution (#914184).


Uploads
  • python-django (2:2.1.3-1) — New upstream bugfix release.

  • redis:

    • 5.0.1-1 — New upstream release; ensure that Debian-supplied Lua libraries are available during scripting (#913185), refer to /run directly in .service files, etc.
    • 5.0.1-2 — Ensure that lack of IPv6 support does not prevent startup on Debian, where we bind to the ::1 interface by default. (#900284 & #914354)
    • 5.0.2-1 — New upstream release.
  • redisearch (1.2.1-1) — Upload the last AGPLv3 (i.e. non-Commons-Clause) package from my GoodFORM project.

  • hiredis (0.14.0-3) — Adopt and tidy package (#911732).

  • python-redis (3.0.1-1) — New upstream release.

  • adminer (4.7.0-1) — New upstream release & ensure all documentation is under /usr/share/doc.


I also sponsored uploads of elpy (1.26.0-1) & muttrc-mode-el (1.2+git20180915.aa1601a-1).


Debian bugs filed
  • molly-guard: Breaks conversion with usrmerge. (#914716)

  • git-buildpackage: Please add gbp-dch --stable flag. (#914186)

  • git-buildpackage: gbp pq -Pq suffixes are not actually optional. (#914281)

  • python-redis: Autopkgtests fail. (#914800)

  • git-buildpackage: Correct "saving" typo. (#914280)

  • python-astropy: Please drop unnecessary dh_strip_nondeterminism override. (#914612)

  • shared-mime-info: Don't assume every *.key file is an Apple Keynote file. (#913550, with patch)

FTP Team


As a Debian FTP assistant this month I ACCEPTed 37 packages: android-platform-system-core, arm-trusted-firmware, boost-defaults, dtl, elogind, fonts-ibm-plex, gnome-remote-desktop, gnome-shell-extension-desktop-icons, google-i18n-address, haskell-haskell-gi-base, haskell-rio, lepton-eda, libatteanx-serializer-rdfa-perl, librdf-trine-serializer-rdfa-perl, librdf-trinex-compatibility-attean-perl, libre-engine-re2-perl, libtest-regexp-pattern-perl, linux, lua-lxc, lxc-templates, ndctl, openssh, osmo-bsc, osmo-sgsn, othman, pg-rational, qtdatavis3d-everywhere-src, ruby-grape-path-helpers, ruby-grape-route-helpers, ruby-graphiql-rails, ruby-js-regex, ruby-regexp-parser, shellia, simple-revision-control, theme-d, ulfius & vim-julia.

Gregor Herrmann: RC bugs 2018/01-48

Planet Debian - Fri, 30/11/2018 - 8:12pm

I just arrived at the Bug Squashing Party in Bern – a good opportunity to report the RC bugs I've touched so far this year (not that many …):

  • #750732 – src:libanyevent-perl: "libanyevent-perl: Intermittent build failures on various architectures"
    disable a test (pkg-perl)
  • #862678 – src:pidgin: "Switch from network-manager-dev to libnm-dev"
    propose patch, later uploaded by maintainer
  • #878550 – src:liblog-dispatch-filerotate-perl: "liblog-dispatch-filerotate-perl: missing (build) dependency on libparams-validate-perl"
    add missing (build) dependency, upload to DELAYED/5
  • #882618 – libdbix-class-schema-loader-perl: "libdbix-class-schema-loader-perl: Test failures"
    apply patch from ntyni (pkg-perl)
  • #884626 – src:liblinux-dvb-perl: "liblinux-dvb-perl FTBFS with linux-libc-dev 4.14.2-1"
    upload with fix from knowledgejunkie (pkg-perl)
  • #886044 – src:syncmaildir: "syncmaildir: Depends on gconf"
    propose a patch
  • #886355 – src:libpar-packer-perl: "libpar-packer-perl: frequent parallel FTBFS"
    disable parallel building (pkg-perl)
  • #890905 – src:jabref: "jabref: doesn't build/run with default-jdk/-jre"
    try to come up with a patch (pkg-java)
  • #892275 – redshift: "redshift: Unable to connect to GeoClue."
    investigate and downgrade
  • #892392 – src:aqemu: "aqemu: build-depends on GCC 6"
    propose a patch
  • #893251 – jabref: "jabref: doesn't start with liblog4j2-java 2.10.0-1"
    use versioned (build) dependency (pkg-java)
  • #894626 – libsnmp-perl: "libsnmp-perl: undefined symbol: netsnmp_ds_toggle_boolean"
    propose a patch
  • #894727 – libgit-repository-perl: "libgit-repository-perl: FTBFS: t/10-new_fail.t broke with new git"
    add patch from upstream pull request (pkg-perl)
  • #895697 – src:libconfig-model-tester-perl: "libconfig-model-tester-perl FTBFS: Can't locate Module/Build.pm in @INC"
    add missing build dependency (pkg-perl)
  • #896502 – libxml-structured-perl: "libxml-structured-perl: missing dependency on libxml-parser-perl"
    add missing (build) dependency (pkg-perl)
  • #896534 – libnetapp-perl: "libnetapp-perl: missing dependency on libnet-telnet-perl"
    add missing dependency (pkg-perl)
  • #896537 – libmoosex-mungehas-perl: "libmoosex-mungehas-perl: missing dependency on libtype-tiny-perl | libeval-closure-perl"
    add missing dependency (pkg-perl)
  • #896538 – libmonitoring-livestatus-class-perl: "libmonitoring-livestatus-class-perl: missing dependency on libmodule-find-perl"
    add missing dependency, upload to DELAYED/5
  • #896539 – libmodule-install-trustmetayml-perl: "libmodule-install-trustmetayml-perl: missing dependency on libmodule-install-perl"
    add missing (build) dependency (pkg-perl)
  • #896540 – libmodule-install-extratests-perl: "libmodule-install-extratests-perl: missing dependency on libmodule-install-perl"
    add missing (build) dependency (pkg-perl)
  • #896541 – libmodule-install-automanifest-perl: "libmodule-install-automanifest-perl: missing dependency on libmodule-install-perl"
    add missing (build) dependency (pkg-perl)
  • #896543 – liblwp-authen-negotiate-perl: "liblwp-authen-negotiate-perl: missing dependency on libwww-perl"
    add missing dependency, upload to DELAYED/5
  • #896549 – libhtml-popuptreeselect-perl: "libhtml-popuptreeselect-perl: missing dependency on libhtml-template-perl"
    add missing dependency, upload to DELAYED/5
  • #896551 – libgstreamer1-perl: "libgstreamer1-perl: Typelib file for namespace 'Gst', version '1.0' not found"
    add missing (build) dependencies (pkg-perl)
  • #897724 – src:collectd: "collectd: ftbfs with GCC-8"
    pass a compiler flag, upload to DELAYED/5
  • #898198 – src:libnet-oauth-perl: "FTBFS (test failures, also seen in autopkgtests) with libcrypt-openssl-rsa-perl >= 0.30-1"
    add patch (pkg-perl)
  • #898561 – src:libmarc-transform-perl: "libmarc-transform-perl: FTBFS with libyaml-perl >= 1.25-1 (test failures)"
    apply patch provided by YAML upstream (pkg-perl)
  • #898977 – libnet-dns-zonefile-fast-perl: "libnet-dns-zonefile-fast-perl: FTBFS: You are missing required modules for NSEC3 support"
    add missing (build) dependency (pkg-perl)
  • #900232 – src:collectd: "collectd: FTBFS: sed: can't read /usr/lib/pkgconfig/OpenIPMIpthread.pc: No such file or directory"
    propose a patch, later upload to DELAYED/2
  • #901087 – src:libcatalyst-plugin-session-store-dbi-perl: "libcatalyst-plugin-session-store-dbi-perl: FTBFS: Base class package "Class::Data::Inheritable" is empty."
    add missing (build) dependency (pkg-perl)
  • #901807 – src:libmath-gsl-perl: "libmath-gsl-perl: incompatible with GSL >= 2.5"
    apply patches from ntyni and tweak build (pkg-perl)
  • #902192 – src:libpdl-ccs-perl: "libpdl-ccs-perl FTBFS on architectures where char is unsigned"
    new upstream release (pkg-perl)
  • #902625 – libmath-gsl-perl: "libmath-gsl-perl: needs a versioned dependency on libgsl23 (>= 2.5) or so"
    make build dependency versioned (pkg-perl)
  • #903173 – src:get-flash-videos: "get-flash-videos: FTBFS in buster/sid (dh_installdocs: Cannot find "README")"
    fix name in .docs (pkg-perl)
  • #903178 – src:libclass-insideout-perl: "libclass-insideout-perl: FTBFS in buster/sid (dh_installdocs: Cannot find "CONTRIBUTING")"
    fix name in .docs (pkg-perl)
  • #903456 – libbio-tools-phylo-paml-perl: "libbio-tools-phylo-paml-perl: fails to upgrade from 'stable' to 'sid' - trying to overwrite /usr/share/man/man3/Bio::Tools::Phylo::PAML.3pm.gz"
    upload package fixed by carandraug (pkg-perl)
  • #904737 – src:uwsgi: "uwsgi: FTBFS: unable to build gccgo plugin"
    update build dependencies, upload to DELAYED/5
  • #904740 – src:libtext-bidi-perl: "libtext-bidi-perl: FTBFS: 'fribidi_uint32' undeclared"
    apply patch from CPAN RT (pkg-perl)
  • #904858 – src:libtickit-widget-tabbed-perl: "libtickit-widget-tabbed-perl: Incomplete debian/copyright?"
    fix d/copyright (pkg-perl)
  • #905614 – src:license-reconcile: "FTBFS: Failed test 'no warnings' with libsoftware-license-perl 0.103013-2"
    apply patch from Felix Lechner (pkg-perl)
  • #906482 – src:libgit-raw-perl: "libgit-raw-perl: FTBFS in buster/sid (failing tests)"
    patch test (pkg-perl)
  • #908323 – src:libgtk3-perl: "libgtk3-perl: FTBFS: t/overrides.t failure"
    add patch and versioned (build) dependency (pkg-perl)
  • #909343 – src:libcatalyst-perl: "libcatalyst-perl: fails to build with libmoosex-getopt-perl 0.73-1"
    upload new upstream release (pkg-perl)
  • #910943 – libhtml-tidy-perl: "libhtml-tidy-perl: FTBFS (test failures) with tidy-html5 5.7"
    add patch (pkg-perl)
  • #912039 – src:libpetal-utils-perl: "libpetal-utils-perl: FTBFS: Test failures"
    add missing build dependency (pkg-perl)
  • #912045 – src:mb2md: "mb2md: FTBFS: Test failures"
    add missing build dependency (pkg-perl)
  • #914288 – src:libpgplot-perl: "libpgplot-perl: FTBFS and autopkgtest fail with new giza-dev: test waits for input"
    disable interactive tests (pkg-perl)
  • #915096 – src:libperl-apireference-perl: "libperl-apireference-perl: Missing support for perl 5.28.1"
    add support for perl 5.28.1 (pkg-perl)

Let's see how the weekend goes.

Michal Čihař: Weblate 3.3

Planet Debian - Fri, 30/11/2018 - 3:00pm

Weblate 3.3 has been released today. The most visible new feature is component alerts, but there are several other improvements as well.

Full list of changes:

  • Added support for component and project removal.
  • Improved performance for some monolingual translations.
  • Added translation component alerts to highlight problems with a translation.
  • Expose XLIFF unit resname as context when available.
  • Added support for XLIFF states.
  • Added check for non-writable files in DATA_DIR.
  • Improved CSV export for changes.

If you are upgrading from an older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either by comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate
