
Julien Danjou: A multi-value syntax tree filtering in Python

Planet Debian - Mon, 03/12/2018 - 2:29pm

A while ago, we saw how to write a simple filtering syntax tree with Python. The idea was to provide a small abstract syntax tree with an easy-to-write data structure that would be able to filter a value. Filtering means that once evaluated, our AST returns either True or False based on the passed value.

With that, we were able to write small rules like Filter({"eq": 3})(4) that would return False since, well, 4 is not equal to 3.
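
In case you don't have the previous post at hand, here is a minimal sketch of that single-value filter (reconstructed for illustration; the exact original code differs):

import operator

class Filter(object):
    # A minimal single-value filter: the tree maps an operator name
    # to the value the filtered input is compared against.
    binary_operators = {
        "eq": operator.eq,
        "lt": operator.lt,
        "gt": operator.gt,
    }

    def __init__(self, tree):
        self._eval = self.build_evaluator(tree)

    def __call__(self, value):
        return self._eval(value)

    def build_evaluator(self, tree):
        operator_name, argument = list(tree.items())[0]
        op = self.binary_operators[operator_name]
        return lambda value: op(value, argument)

assert Filter({"eq": 3})(4) is False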

In this new post, I propose we enhance our filtering ability to support multiple values. The idea is to be able to write something like this:

>>> f = Filter(
...     {"and": [
...         {"eq": ("foo", 3)},
...         {"gt": ("bar", 4)},
...     ]},
... )
>>> f(foo=3, bar=5)
True
>>> f(foo=4, bar=5)
False

The biggest change here is that the binary operators (eq, gt, le, etc.) now take two values rather than one, and that we can pass multiple values to our filter by using keyword arguments.

How should we implement that? Well, we can keep the same data structure we built previously. However, this time we're going to make the following changes:

  • The left value of the binary operator will be a string used as the key to look up the actual value among the keyword arguments passed to Filter.__call__.
  • The right value of the binary operator will be kept as it is (like before).

We therefore need to change our Filter.build_evaluator to accommodate this, as follows:

def build_evaluator(self, tree):
    try:
        operator, nodes = list(tree.items())[0]
    except Exception:
        raise InvalidQuery("Unable to parse tree %s" % tree)
    try:
        op = self.multiple_operators[operator]
    except KeyError:
        try:
            op = self.binary_operators[operator]
        except KeyError:
            raise InvalidQuery("Unknown operator %s" % operator)
        assert len(nodes) == 2  # binary operators take 2 values
        def _op(values):
            return op(values[nodes[0]], nodes[1])
        return _op
    # Iterate over every item in the list of the value linked
    # to the logical operator, and compile it down to its own
    # evaluator.
    elements = [self.build_evaluator(node) for node in nodes]
    return lambda values: op((e(values) for e in elements))

The algorithm is pretty much the same, the tree being browsed recursively.

First, the operator and its arguments (nodes) are extracted.

Then, if the operator takes multiple arguments (such as the and and or operators), each node is recursively compiled and a function is returned that evaluates those nodes.
If the operator is a binary operator (such as eq, lt, etc.), it checks that the passed argument list length is 2. Then, it returns a function that will apply the operator (e.g., operator.eq) to values[nodes[0]] and nodes[1]: the former accesses the arguments (values) passed to the filter's __call__ function, while the latter is used directly as the passed argument.
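
Concretely, a single binary rule now behaves like this (a short illustrative session; the full class follows below):

>>> f = Filter({"gt": ("bar", 4)})
>>> f(bar=5)  # evaluates operator.gt(kwargs["bar"], 4)
True
>>> f(bar=3)
False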

The full class looks like this:

import operator


class InvalidQuery(Exception):
    pass


class Filter(object):
    binary_operators = {
        u"=": operator.eq,
        u"==": operator.eq,
        u"eq": operator.eq,
        u"<": operator.lt,
        u"lt": operator.lt,
        u">": operator.gt,
        u"gt": operator.gt,
        u"<=": operator.le,
        u"≤": operator.le,
        u"le": operator.le,
        u">=": operator.ge,
        u"≥": operator.ge,
        u"ge": operator.ge,
        u"!=": operator.ne,
        u"≠": operator.ne,
        u"ne": operator.ne,
    }

    multiple_operators = {
        u"or": any,
        u"∨": any,
        u"and": all,
        u"∧": all,
    }

    def __init__(self, tree):
        self._eval = self.build_evaluator(tree)

    def __call__(self, **kwargs):
        return self._eval(kwargs)

    def build_evaluator(self, tree):
        try:
            operator, nodes = list(tree.items())[0]
        except Exception:
            raise InvalidQuery("Unable to parse tree %s" % tree)
        try:
            op = self.multiple_operators[operator]
        except KeyError:
            try:
                op = self.binary_operators[operator]
            except KeyError:
                raise InvalidQuery("Unknown operator %s" % operator)
            assert len(nodes) == 2  # binary operators take 2 values
            def _op(values):
                return op(values[nodes[0]], nodes[1])
            return _op
        # Iterate over every item in the list of the value linked
        # to the logical operator, and compile it down to its own
        # evaluator.
        elements = [self.build_evaluator(node) for node in nodes]
        return lambda values: op((e(values) for e in elements))

We can check that it works by building some filters:

x = Filter({"eq": ("foo", 1)}) assert not x(foo=1, bar=1) x = Filter({"eq": ("foo", "bar")}) assert not x(foo=1, bar=1) x = Filter({"or": ( {"eq": ("foo", "bar")}, {"eq": ("bar", 1)}, )}) assert x(foo=1, bar=1)

Supporting multiple values is handy as it allows passing complete dictionaries to the filter, rather than just one value. That enables users to filter more complex objects.

Sub-dictionary support

It's also possible to support deeper data structures, like a dictionary of dictionaries. By replacing values[nodes[0]] with self._resolve_name(values, nodes[0]) and a _resolve_name method like this one, the filter is able to traverse dictionaries:

ATTR_SEPARATOR = "." def _resolve_name(self, values, name): try: for subname in name.split(self.ATTR_SEPARATOR): values = values[subname] return values except KeyError: raise InvalidQuery("Unknown attribute %s" % name)

It then works like that:

x = Filter({"eq": ("baz.sub", 23)}) assert x(foo=1, bar=1, baz={"sub": 23}) x = Filter({"eq": ("baz.sub", 23)}) assert not x(foo=1, bar=1, baz={"sub": 3})

By using the syntax key.subkey.subsubkey, the filter is able to access items inside dictionaries in more complex data structures.

That basic filter engine can evolve quite easily into something powerful, as you can add new operators or new ways to access and manipulate the passed data structure.
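
For example, here is a sketch of adding a containment operator; the "in" key and the ExtendedFilter name are mine, purely for illustration:

class ExtendedFilter(Filter):
    # Inherit all existing operators and add one that checks whether
    # the left value is contained in the right one.
    binary_operators = dict(Filter.binary_operators)
    binary_operators["in"] = lambda value, expected: value in expected

f = ExtendedFilter({"in": ("status", ("queued", "running"))})
assert f(status="queued")
assert not f(status="done")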

If you have other ideas on nifty features that could be added, feel free to add a comment below!

Joachim Breitner: Sliding Right into Information Theory

Planet Debian - Mon, 03/12/2018 - 10:56am

It's hardly news any more, but it seems I have not blogged about my involvement last year with an interesting cryptanalysis project, which resulted in the publication Sliding right into disaster: Left-to-right sliding windows leak by Daniel J. Bernstein, me, Daniel Genkin, Leon Groot Bruinderink, Nadia Heninger, Tanja Lange, Christine van Vredendaal and Yuval Yarom, which was published at CHES 2017 and on ePrint (ePrint is the cryptographer’s version of arXiv).

This project nicely touched upon many fields of computer science: First we need systems expertise to mount a side-channel attack that uses cache timing differences to observe which line of a square-and-multiply algorithm the target process is executing. Then we need algorithm analysis to learn from these observations partial information about the bits of the private key. This part includes nice PL-y concepts like rewrite rules (see Section 3.2). Once we know enough about the secret keys, we can use fancy cryptography to recover the whole secret key (Section 3.4). And finally, some theoretical questions arise, such as: “How much information do we need for the attack to succeed?” and “Do we obtain this much information?”, and we need some nice math and information theory to answer these.

Initially, I focused on the PL-related concepts. We programming language people are yak-shavers, and in particular “rewrite rules” just demands the creation of a DSL to express them, and an interpreter to execute them, doesn’t it? But it turned out that these rules are actually not necessary, as the key recovery can use the side-channel observation directly, as we found out later (see Section 4 of the paper). But now I was already hooked, and turned towards the theoretical questions mentioned above.

Shannon vs. Rényi

It felt good to shake the dust off some of the probability theory that I learned for my maths degree, and I also learned some new stuff. For example, it was intuitively clear that whether the attack succeeds depends on the amount of information obtained by the side channel attack, and based on prior work, the expectation was that if we know more than half the bits, then the attack would succeed. Note that for this purpose, two known “half bits” are as good as knowing one full bit; for example knowing that the secret key is either 01 or 11 (one bit known for sure) is just as good as knowing that the key is either 00 or 11.

Clearly, this is related to entropy somehow -- but how? Trying to prove that the attack works if the entropy rate of the leak is >0.5 just did not work, against all intuition. But when we started with a formula that describes when the attack succeeds, and then simplified it, we found a condition that looked suspiciously like what we wanted, namely H > 0.5, only that H was not the conventional entropy (also known as the Shannon entropy, H = −∑p ⋅ log p), but rather something else: H = −log ∑p², which turned out to be called the collision entropy or Rényi entropy.
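
To make the difference tangible, here is a quick illustrative computation of both quantities (my own sketch, not code from the paper):

import math

def shannon_entropy(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

def collision_entropy(ps):  # Rényi entropy of order 2
    return -math.log2(sum(p * p for p in ps))

uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]
print(shannon_entropy(uniform), collision_entropy(uniform))  # 2.0 2.0
print(shannon_entropy(skewed), collision_entropy(skewed))    # ≈1.36 ≈0.94

The two agree on uniform distributions, and the collision entropy is never larger than the Shannon entropy, so a success condition stated in terms of it is the more demanding one.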

This resulted in Theorem 3 in the paper, and neatly answers the question of when the Heninger and Shacham key recovery algorithm, extended to partial information, can be expected to succeed, in a much more general setting than just this particular side-channel attack.

Markov chains and an information theoretical spin-off

The other theoretical question is now: Why does this particular side channel attack succeed, i.e. why is the entropy rate H > 0.5? As so often, Markov chains are an immensely powerful tool to answer that question. After some transformations, I managed to model the state of the square-and-multiply algorithm, together with the side-channel leak, as a Markov chain with a hidden state. Now I just had to calculate its Rényi entropy rate, right? I wrote some Haskell code to do this transformation, and also came up with an ad-hoc, intuitive way of calculating the rate. So when it was time to write up the paper, I was searching for a reference that describes the algorithm that I was using…
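
For illustration only (this is neither the algorithm from the paper nor my Haskell code, and the toy chain below is invented), the collision entropy rate of such a hidden Markov model can be estimated naively by measuring how often two independent runs emit the same observation sequence:

import math
import random

# Toy two-state hidden Markov model with a per-state observation
# (the "leak"); transition and emission tables are made up.
TRANSITIONS = {0: [(0, 0.9), (1, 0.1)], 1: [(0, 0.5), (1, 0.5)]}
EMISSIONS = {0: [("a", 1.0)], 1: [("a", 0.5), ("b", 0.5)]}

def pick(dist):
    r, acc = random.random(), 0.0
    for value, p in dist:
        acc += p
        if r <= acc:
            return value
    return dist[-1][0]

def observe(n, state=0):
    out = []
    for _ in range(n):
        out.append(pick(EMISSIONS[state]))
        state = pick(TRANSITIONS[state])
    return tuple(out)

def collision_entropy_rate(n=12, samples=100000):
    # Estimate P(two independent runs collide), in bits per step.
    collisions = sum(observe(n) == observe(n) for _ in range(samples))
    return -math.log2(collisions / samples) / n

print(collision_entropy_rate())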

Only I could find none! I contacted researchers who have published related to Markov chains and entropies, but they just referred me in circles, until one of them, Maciej Skórski responded. Our conversation, highly condensed, went like this: “Nice idea, but it can’t be right, it would solve problem X” – “Hmm, but it feels so right. Here is a proof sketch.” – “Oh, indeed, cool. I can even generalize this! Let’s write a paper”. Which we did! Analytic Formulas for Renyi Entropy of Hidden Markov Models (preprint only, it is still under submission).

More details

Because I joined the sliding-right project late, not all my contributions made it into the actual paper, and therefore I published an “unofficial appendix” separately on ePrint. It contains

  1. an alternative way to find the definitively knowable bits of the secret exponent, which is complete and can (in rare corner cases) find more bits than the rewrite rules in Section 3.1
  2. an algorithm to calculate the collision entropy H, including how to model a side-channel attack like this one as a Markov chain, and how to calculate the entropy of such a Markov chain, and
  3. the proof of Theorem 3.

I also published the Haskell code that I wrote for this project, including the Markov chain collision entropy stuff. It is not written with public consumption in mind, but feel free to ask if you have questions about this.

Note that all errors, typos and irrelevancies in that document and the code are purely mine and not of any of the other authors of the sliding-right paper. I’d like to thank my coauthors for the opportunity to join this project.

Daniel Pocock: Smart home: where to start?

Planet Debian - Mon, 03/12/2018 - 9:44am

My home automation plans have been progressing and I'd like to share some observations I've made about planning a project like this, especially for those with larger houses.

With so many products and technologies, it can be hard to know where to start. Some things have become straightforward, for example, Domoticz can soon be installed from a package on some distributions. Yet this simply leaves people contemplating what to do next.

The quickstart

For a small home, like an apartment, you can simply buy something like the Zigate, a single motion and temperature sensor, a couple of smart bulbs and expand from there.

For a large home, you can also get your feet wet with exactly the same approach in a single room. Once you are familiar with the products, use a more structured approach to plan a complete solution for every other space.

The Debian wiki has started gathering some notes on things that work easily on GNU/Linux systems like Debian as well as Fedora and others.

Prioritize

What is your first goal? For example, are you excited about having smart lights or are you more concerned with improving your heating system efficiency with zoned logic?

Trying to do everything at once may be overwhelming. Make each of these things into a separate sub-project or milestone.

Technology choices

There are many technology choices:

  • Zigbee, Z-Wave or another protocol? I'm starting out with a preference for Zigbee but may try some Z-Wave devices along the way.
  • E27 or B22 (Bayonet) light bulbs? People in the UK and former colonies may have B22 light sockets and lamps. For new deployments, you may want to standardize on E27. Amongst other things, E27 is used by all the Ikea lamp stands, and if you want to be able to move your expensive new smart bulbs between different holders in your house at will, you may want to standardize on E27 for all of them and avoid buying any Bayonet / B22 products in future.
  • Wired or wireless? Whenever you take up floorboards, it is a good idea to add some new wiring. For example, CAT6 can carry both power and data for a diverse range of devices.
  • Battery or mains power? In an apartment with two rooms and less than five devices, batteries may be fine, but in a house, you may end up with more than a hundred sensors, radiator valves, buttons, and switches and you may find yourself changing a battery in one of them every week. If you have lodgers or tenants and you are not there to change the batteries then this may cause further complications. Some of the sensors have a socket for an optional power supply; battery eliminators may also be an option.
Making an inventory

Creating a spreadsheet table is extremely useful.

This helps estimate the correct quantity of sensors, bulbs, radiator valves and switches and it also helps to budget. Simply print it out, leave it under the Christmas tree and hope Santa will do the rest for you.
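
A purely hypothetical first pass for a few rooms (the counts below are invented, just to show the shape of the table) might look like this:

Room           Motion  Temperature  Smart bulbs  Radiator valves
Kitchen        1       1            2            1
Sitting room   2       1            3            2
Hallway        1       0            1            0
Main bedroom   1       1            2            1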

Looking at my own house, these are the things I counted in a first pass:

Don't forget to include all those unusual spaces like walk-in pantries, a large cupboard under the stairs, cellar, en-suite or enclosed porch. Each deserves a row in the table.

Sensors help make good decisions

Whatever the aim of the project, sensors are likely to help obtain useful data about the space and this can help to choose and use other products more effectively.

Therefore, it is often a good idea to choose and deploy sensors through the home before choosing other products like radiator valves and smart bulbs.

The smartest place to put those smart sensors

When placing motion sensors, it is important to avoid putting them too close to doorways where they might detect motion in adjacent rooms or hallways. It is also a good idea to avoid putting the sensor too close to any light bulb: if the bulb attracts an insect, it will trigger the motion sensor repeatedly. Temperature sensors shouldn't be too close to heaters or potential draughts around doorways and windows.

There are a range of all-in-one sensors available, some have up to six features in one device smaller than an apple. In some rooms this is a convenient solution but in other rooms, it may be desirable to have separate motion and temperature sensors in different locations.

Consider the dining and sitting rooms in my own house, illustrated in the floorplan below. The sitting room is also a potential 6th bedroom or guest room with sofa bed, the downstairs shower room conveniently located across the hall. The dining room is joined to the sitting room by a sliding double door. When the sliding door is open, a 360 degree motion sensor in the ceiling of the sitting room may detect motion in the dining room and vice-versa. It appears that 180 degree motion sensors located at the points "1" and "2" in the floorplan may be a better solution.

These rooms have wall-mounted radiators and fireplaces. To avoid any of these potential heat sources, the temperature sensors should probably be in the middle of the room.

This photo shows the proposed location for the 180 degree motion sensor "2" on the wall above the double door:

Summary

To summarize, buy a Zigate and a small number of products to start experimenting with. Make an inventory of all the products potentially needed for your home. Try to mark sensor locations on a floorplan, thinking about the type of sensor (or multiple sensors) you need for each space.

Russ Allbery: Review: Linked

Planet Debian - Mon, 03/12/2018 - 5:22am

Review: Linked, by Albert-László Barabási

Publisher: Plume
Copyright: 2002, 2003
Printing: May 2003
ISBN: 0-452-28439-2
Format: Trade paperback
Pages: 241

Barabási at the time of this writing was a professor of physics at Notre Dame University (he's now the director of Northeastern University's Center of Complex Networks). Linked is a popularization of his research into scale-free networks, their relationship to power-law distributions (such as the distribution of wealth), and a proposed model explaining why so many interconnected systems in nature and human society appear to form scale-free networks. Based on some quick Wikipedia research, it's worth mentioning that the ubiquity of scale-free networks has been questioned and may not be as strong as Barabási claims here, not that you would know about that controversy from this book.

I've had this book sitting in my to-read pile for (checks records) ten years, so I only vaguely remember why I bought it originally, but I think it was recommended as a more scientific look at the phenomenon popularized by Malcolm Gladwell in The Tipping Point. It isn't that, exactly; Barabási is much less interested in how ideas spread than he is in network structure and its implications for robustness and propagation through the network. (Contagion, as in virus outbreaks, is the obvious example of the latter.)

There are basically two parts to this book: a history of Barabási's research into scale-free networks and the development of the Barabási-Albert model for scale-free network generation, and then Barabási's attempt to find scale-free networks in everything under the sun and make grandiose claims about the implications of that structure for human understanding. One of these parts is better than the other.

The basic definition of a scale-free network is a network where the degree of the nodes (the number of edges coming into or out of the node) follows a power-law distribution. It's a bit hard to describe a power-law distribution without the math, but the intuitive idea is that the distribution will contain a few "winners" who will have orders of magnitude more connections than the average node, to the point that their connections may dominate the graph. This is very unlike a normal distribution (the familiar bell-shaped curve), where most nodes will cluster around a typical number of connections and the number of nodes with a given count of connections will drop off rapidly in either direction from that peak. A typical example of a power-law distribution outside of networks is personal wealth: rather than clustering around some typical values the way natural measurements like physical height do, a few people (Bill Gates, Warren Buffett) have orders of magnitude more wealth than the average person and a noticeable fraction of all wealth in society.
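
In symbols (my notation, not the book's): if P(k) is the fraction of nodes with degree k, a power law means

P(k) \propto k^{-\gamma}

with the exponent γ typically quoted as between 2 and 3 for real-world networks (the preferential-attachment model discussed below yields γ = 3).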

I am moderately dubious of Barabási's assertion here that most prior analysis of networks before his scale-free work focused on random networks (ones where new nodes are connected at an existing node chosen at random), since this is manifestly not the case in computer science (my personal field). However, scale-free networks are a real phenomenon that have some very interesting properties, and Barabási and Albert's proposal of how they might form (add nodes one at a time, and prefer to attach a new node to the existing node with the most connections) is a simple and compelling model of how they can form. Barabási also discusses a later variation, which Wikipedia names the Bianconi-Barabási model, which adds a fitness function for more complex preferential attachment.
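
As a sketch of that growth rule (my simplified code; production implementations live in libraries such as networkx), preferential attachment can be simulated in a few lines:

import random
from collections import Counter

def preferential_attachment(n, m=2, seed=42):
    """Grow a graph: each new node attaches to m existing nodes chosen
    proportionally to their current degree (a simplified
    Barabási-Albert sketch)."""
    rng = random.Random(seed)
    edges = []
    # One entry per edge endpoint, so a uniform draw from this list is
    # a degree-proportional draw over nodes; seed nodes get one entry
    # each so the first attachments have somewhere to go.
    endpoints = list(range(m))
    for new in range(m, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints += [new, t]
    return edges

degrees = Counter(v for e in preferential_attachment(10000) for v in e)
print(degrees.most_common(3))  # a handful of hubs dominate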

Linked covers the history of the idea from Barabási's perspective, as well as a few of its fascinating properties. One is that scale-free networks may not have a tipping point in the Gladwell sense. Depending on the details, there may not be a lower limit of nodes that have to adopt some new property for it to spread through the network. Another is robustness: scale-free networks are startlingly robust against removal of random nodes from the network, requiring removal of large percentages of the nodes before the network fragments, but are quite vulnerable to a more targeted attack that focuses on removing the hubs (the nodes with substantially more connections than average). Scale-free networks also naturally give rise to "six degrees of separation" effects between any two nodes, since the concentration of connections at hubs leads to short paths.

These parts of Linked were fairly interesting, if sometimes clunky. Unfortunately, Barabási doesn't have enough material to talk about mathematical properties and concrete implications at book length, and instead wanders off into an exercise in finding scale-free networks everywhere (cell metabolism, social networks, epidemics, terrorism), and leaping from that assertion (which Wikipedia, at least, labels as not necessarily backed up by later analysis) to some rather overblown claims. I think my favorite was the confident assertion that by 2020 we will be receiving custom-tailored medicine designed specifically for the biological networks of our unique cells, which, one, clearly isn't going to happen, and two, has a strained and dubious connection to scale-free network theory to say the least. There's more in that vein. (That said, the unexpected mathematical connection between the state transition of a Bose-Einstein condensate and scale-free network collapse given sufficiently strong attachment preference and permission to move connections was at least entertaining.)

The general introduction to scale-free networks was interesting and worth reading, but I think the core ideas of this book could have been compressed into a more concise article (and probably have, somewhere on the Internet). The rest of it was mostly boring, punctuated by the occasional eye-roll. I appreciate Barabási's enthusiasm for his topic — it reminds me of professors I worked with at Stanford and their enthusiasm for their pet theoretical concept — but this may be one reason to have the popularization written by someone else. Not really recommended as a book, but if you really want a (somewhat dated) introduction to scale-free networks, you could do worse.

Rating: 6 out of 10

Eric Hammond: Guest Post: Notable AWS re:Invent Sessions, by Jennine Townsend

Planet Ubuntu - Mon, 03/12/2018 - 1:00am

A guest post authored by Jennine Townsend, expert sysadmin and AWS aficionado

There were so many sessions at re:Invent! Now that it’s over, I want to watch some sessions on video, but which ones?

Of course I’ll pick out those that are specific to my interests, but I also want to know the sessions that had good buzz, so I made a list that’s kind of mashed together from sessions that I heard good things about on Twitter, with those that had lots of repeats and overflow sessions, figuring those must have been popular.

But I confess I left out some whole categories! There aren’t sessions for Alexa or DeepRacer (not that I’m not interested, they’re just not part of my re:Invent followup), and I don’t administer any Windows systems so I leave out most of those sessions.

Some sessions have YouTube links, some don’t (yet) have and may never have YouTube videos, since lots of (types of) sessions aren’t recorded. (But even there, if I search the topic and speakers, I bet I can often find an earlier talk.)

There’s not much of a ranking: keynotes at the top, sessions I heard good things about in the middle, then sessions that had lots of repeats. It’s only mildly specific to my interests, so I thought other people might find it helpful. It’s also not really finished, but I wanted to get started watching sessions this weekend!

Keynotes

Peter DeSantis Monday Night Live

Terry Wise Global Partner Keynote

Andy Jassy keynote

Werner Vogels keynote

Popular: Buzz during AWS re:Invent

DEV322 What’s New with the AWS CLI (Kyle Knapp, James Saryerwinnie)

SRV409 A Serverless Journey: AWS Lambda Under the Hood

CON362 Container Power Hour with Jess, Clare, and Abby

SRV325 Using DevOps, Microservices, and Serverless to Accelerate Innovation (David Richardson, Ken Exner, Deepak Singh)

SRV375 Lambda Layers and Runtime API (Danilo Poccia) - Chalk Talk

SRV338 Configuration Management and Service Discovery (mentions CloudMap) (Alex Casalboni, Ben Kehoe) - Chalk Talk

CON367 Introducing App Mesh (Kiran Meduri, Shubha Rao, James Straub)

SRV355 Best Practices for CI/CD with AWS Lambda and Amazon API Gateway (Chris Munns) (focuses on SAM, CodeStar, I believe) - Chalk Talk

DEV327 Advanced Infrastructure as Code Programming on AWS

SRV322 From Monolith to Modern Apps: Best Practices

Popular: Repeats During AWS re:Invent

CON301 Mastering Kubernetes on AWS

ARC202 Running Lean Architectures: How to Optimize for Cost Efficiency

DEV319 Continuous Integration Best Practices

AIM404 Build, Train, and Deploy ML Models Quickly and Easily with Amazon SageMaker

STG209 Amazon S3 Storage Management (Scott Hewitt) - Chalk Talk

ENT205 Executing a Large-Scale Migration to AWS (Joe Chung, Jonathan Allen, Mike Wittig)

DEV317 Advanced Continuous Delivery Best Practices

CON308 Building Microservices with Containers

ANT323 Build Your Own Log Analytics Solutions on AWS

ANT201 Big Data Analytics Architectural Patterns and Best Practices

DEV403 Automate Common Maintenance & Deployment Tasks Using AWS Systems Manager - Builders Session

DAT356 Which Database Should I Use? - Builders Session

DEV309 CI/CD for Serverless and Containerized Applications

ARC209 Architecture Patterns for Multi-Region Active-Active Applications

AIM401 Deep Learning Applications Using TensorFlow

SRV305 Inside AWS: Technology Choices for Modern Applications

SEC401 Mastering Identity at Every Layer of the Cake

SEC371 Incident Response in AWS - Builders Session

SEC322 Using AWS Lambda as a Security Team

NET404 Elastic Load Balancing: Deep Dive and Best Practices

DEV321 What’s New with AWS CloudFormation

DAT205 Databases on AWS: The Right Tool for the Right Job

Original article and comments: https://alestic.com/2018/12/aws-reinvent-jennine/

Mike Gabriel: My Work on Debian LTS/ELTS (November 2018)

Planet Debian - Sun, 02/12/2018 - 10:59pm

In November 2018, I have worked on the Debian LTS project for nine hours as a paid contributor. Of the originally planned twelve hours (four of them carried over from October) I gave two hours back to the pool of available work hours and carry one hour over to December.

For November, I also signed up for four hours of ELTS work, but had to realize that at the end of the month, I hadn't even set up a test environment for Debian wheezy ELTS, so I gave these four hours back to the "pool". I have started getting an overview of the ELTS workflow now and will start fixing packages in December.

So, here is my list of work accomplished for Debian LTS in November 2018:

  • Regression upload of poppler (DLA 1562-2 [1]), updating the fix for CVE-2018-16646
  • Research on Saltstack salt regarding CVE-2018-15750 and CVE-2018-15751. Unfortunately, there was no reference in the upstream Git repository to the commit(s) that actually fixed those issues. Finally, it turned out that the REST netapi code that is affected by the named CVEs was added between upstream release 2014.1.13 and 2014.7(.0). As Debian jessie ships salt's upstream release 2014.1.13, I concluded that salt in jessie is not affected by the named CVEs.
  • Last week I joined Markus Koschany in triaging a plenitude of libav issues that have/had status "undetermined" for Debian jessie. I was able to triage 21 issues, of which 15 have applicable patches. Three issues have patches that don't apply cleanly and need manual work. One issue is only valid for ffmpeg, but not for libav. For another issue, there seems to be no patch available (yet). And yet another issue seemed already somehow fixed in libav (although with error code AVERROR_PATCHWELCOME).

Thanks to all LTS/ELTS sponsors for making these projects possible.

light+love
Mike


Thorsten Alteholz: My Debian Activities in November 2018

Planet Debian - Dje, 02/12/2018 - 8:07md

FTP master

This month I accepted 486 packages, which is twice as much as last month. On the other hand, I was a bit reluctant and rejected only 38 uploads. The overall number of packages that got accepted this month was 556.

Debian LTS

This was my fifty-third month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 30h. During that time I did LTS uploads or prepared security uploads of:

  • [DLA 1574-1] imagemagick security update for one CVE
  • [DLA 1586-1] openssl security update for two CVEs
  • [DLA 1587-1] pixman security update for one CVE
  • [DLA 1594-1] xml-security-c security update for one (temporary) CVE
  • [DLA 1595-1] gnuplot5 security update for three CVEs
  • [DLA 1597-1] gnuplot security update for three CVEs
  • [DLA 1602-1] nsis security update for two CVEs

Thanks to Markus Koschany for testing my openssl package. It is really having a calming effect when a different pair of eyes has a quick look and does not start to scream.

I also started to work on the new CVEs of wireshark.

My debdiff of tiff was used by Moritz to double-check his and Laszlo's work, and finally resulted in DSA 4349-1. Though not every debdiff will result in its own DSA, they are still useful for the security team. So always think of Stretch when you do a DLA.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the sixth ELTS month.

During my allocated time I uploaded:

  • ELA-58-1 for tiff3
  • ELA-59-1 for openssl
  • ELA-60-1 for pixman

I also started to work on the new CVEs of wireshark.

As like in LTS, I also did some days of frontdesk duties.

Other stuff

I improved packaging of …

  • libctl by finally moving to guile-2.2. Though guile-2.0 might not disappear completely in Buster, this is my first step to make it happen
  • mdns-scan
  • libjwt

I uploaded new upstream versions of …

Again I sponsored some packages for Nicolas Mora. This time it was some dependencies for his new project taliesin, a lightweight audio media server with a REST API interface and a React JS client application. I am already anxious to give it a try :-).

As it is again this time of the year, I would also like to draw some attention to the Debian Med Advent Calendar. Like the past years, the Debian Med team starts a bug squashing event from December 1st to 24th. Every bug that is closed will be registered in the calendar. So instead of taking something from the calendar, this special one will be filled and at Christmas hopefully every Debian Med related bug is closed. Don't hesitate, start to squash :-).

Sylvain Beucler: New Android SDK/NDK Rebuilds

Planet Debian - Sun, 02/12/2018 - 3:59pm

As described in a previous post, Google is still click-wrapping all Android developer binaries with a non-free EULA.

I recompiled SDK 9.0.0, NDK r18b and SDK Tools 26.1.1 from the free sources to get rid of it:

https://android-rebuilds.beuc.net/

with one-command, Docker-based builds:

https://gitlab.com/android-rebuilds/auto

This triggered an interesting thread about the current state of free dev tools to target the Android platform.

Hans-Christoph Steiner also called for joining efforts towards a repository hosted using the F-Droid architecture:

https://forum.f-droid.org/t/call-for-help-making-free-software-builds-of-the-android-sdk/4685

What do you think?

Sven Hoexter: nginx and lua to evaluate CDN behaviour

Planet Debian - Sun, 02/12/2018 - 2:40pm

I guess in the past everyone used CGIs to achieve something similar, it just seemed like a nice detour to use the nginx Lua module instead. Don't expect to read something magic. I'm currently looking into different CDN providers and how they behave regarding the cache-control header, and what additional headers they send by default and when you activate certain features. So I set up two locations inside the nginx configuration using a content_by_lua_block {} for testing purposes.

location /header {
    default_type 'text/plain';
    content_by_lua_block {
        local myheads=ngx.req.get_headers()
        for key in pairs(myheads) do
            local outp="Header '" .. key .. "': " .. myheads[key]
            ngx.say(outp)
        end
    }
}

location /cc {
    default_type 'text/plain';
    content_by_lua_block {
        local cc=ngx.req.get_headers()["cc"]
        if cc ~= nil then
            ngx.header["cache-control"]=cc
            ngx.say(cc)
        else
            ngx.say("moep - no cc header found")
        end
    }
}

The first one is rather boring, it just returns the request headers my origin server received, like this:

$ curl -is https://nocigar.shx0.cf/header
HTTP/2 200
date: Sun, 02 Dec 2018 13:20:14 GMT
content-type: text/plain
set-cookie: __cfduid=d503ed2d3148923514e3fe86b4e26f5bf1543756814; expires=Mon, 02-Dec-19 13:20:14 GMT; path=/; domain=.shx0.cf; HttpOnly; Secure
strict-transport-security: max-age=2592000
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
server: cloudflare
cf-ray: 482e16f7ae1bc2f1-FRA

Header 'x-forwarded-for': 93.131.190.59
Header 'cf-ipcountry': DE
Header 'connection': Keep-Alive
Header 'accept': */*
Header 'accept-encoding': gzip
Header 'host': nocigar.shx0.cf
Header 'x-forwarded-proto': https
Header 'cf-visitor': {"scheme":"https"}
Header 'cf-ray': 482e16f7ae1bc2f1-FRA
Header 'cf-connecting-ip': 93.131.190.59
Header 'user-agent': curl/7.62.0

The second one is more interesting, it copies the content of the "cc" HTTP request header to the "cache-control" response header to allow you convenient evaluation of the handling of different cache-control header settings.

$ curl -H'cc: no-store,no-cache' -is https://nocigar.shx0.cf/cc/foobar42.jpg
HTTP/2 200
date: Sun, 02 Dec 2018 13:27:46 GMT
content-type: image/jpeg
set-cookie: __cfduid=d971badd257b7c2be831a31d13ccec77f1543757265; expires=Mon, 02-Dec-19 13:27:45 GMT; path=/; domain=.shx0.cf; HttpOnly; Secure
cache-control: no-store,no-cache
cf-cache-status: MISS
strict-transport-security: max-age=2592000
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
server: cloudflare
cf-ray: 482e22001f35c26f-FRA

no-store,no-cache

$ curl -H'cc: public' -is https://nocigar.shx0.cf/cc/foobar42.jpg
HTTP/2 200
date: Sun, 02 Dec 2018 13:28:18 GMT
content-type: image/jpeg
set-cookie: __cfduid=d48a4b571af6374c759c430c91c3223d71543757298; expires=Mon, 02-Dec-19 13:28:18 GMT; path=/; domain=.shx0.cf; HttpOnly; Secure
cache-control: public, max-age=14400
cf-cache-status: MISS
expires: Sun, 02 Dec 2018 17:28:18 GMT
strict-transport-security: max-age=2592000
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
server: cloudflare
cf-ray: 482e22c8886627aa-FRA

public

$ curl -H'cc: no-cache,no-store' -is https://nocigar.shx0.cf/cc/foobar42.jpg
HTTP/2 200
date: Sun, 02 Dec 2018 13:30:33 GMT
content-type: image/jpeg
set-cookie: __cfduid=dbc4758b7bb98d556173a89aa2a8c2d3a1543757433; expires=Mon, 02-Dec-19 13:30:33 GMT; path=/; domain=.shx0.cf; HttpOnly; Secure
cache-control: public, max-age=14400
cf-cache-status: HIT
expires: Sun, 02 Dec 2018 17:30:33 GMT
strict-transport-security: max-age=2592000
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
server: cloudflare
cf-ray: 482e26185d36c29c-FRA

public

As you can see, this endpoint is currently fronted by Cloudflare using a default configuration. If you burned one request path below "/cc/" and it's now cached for a long time, you can just use a random different one to continue your test, without any requirement to flush the CDN caches.

Junichi Uekawa: Playing with FUSE after several years and realizing things haven't changed that much.

Planet Debian - Sun, 02/12/2018 - 12:37am
Playing with FUSE after several years and realizing things haven't changed that much.

Julian Andres Klode: Migrating web servers

Planet Debian - Sat, 01/12/2018 - 11:40pm

As of today, I migrated various services from shared hosting on uberspace.de to a VPS hosted by hetzner. This includes my weechat client, this blog, and the following other websites:

  • jak-linux.org
  • dep.debian.net redirector
  • mirror.fail
Rationale

Uberspace runs CentOS 6. This was causing more and more issues for me, as I was trying to run up-to-date weechat binaries. In the final stages, I ran weechat and tmux inside a debian proot. It certainly beat compiling half a system with linuxbrew.

The web performance was suboptimal. Webpages were served with Pound and Apache, TLS connection overhead was just huge, there was only HTTP/1.1, and no keep-alive.

Security-wise things were interesting: Everything ran as my user, obviously, whether that’s scripts, weechat, or mail delivery helpers. Ugh. There was also only a single certificate, meaning that all domains shared it, even if they were completely distinct like jak-linux.org and dep.debian.net

Enter Hetzner VPS

I launched a VPS at hetzner and configured it with Ubuntu 18.04, the latest Ubuntu LTS. It is a CX21, so it has 2 vcores, 4 GB RAM, 40 GB SSD storage, and 20 TB of traffic. For 5.83€/mo, you can’t complain.

I went on to build a repository of ansible roles (see repo on github.com), that configured the system with a few key characteristics:

  • http is served by nginx
  • certificates are per logical domain - each domain has a canonical name and a set of aliases; and the certificate is generated for them all
  • HTTPS is configured according to Mozilla’s modern profile, meaning TLSv1.2-only, and a very restricted list of ciphers. I can revisit that if it’s causing problems, but I’ve not seen huge issues.
  • Log files are anonymized to 24 bits for IPv4 addresses, and 32 bits for IPv6 addresses, which should allow me to identify an ISP, but not an individual user (a sketch of the masking rule follows below).
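
As a sketch of that last masking rule (illustrative Python; the real setup presumably does this at the web-server level):

import ipaddress

def anonymize(addr):
    # Keep 24 bits of an IPv4 address and 32 bits of an IPv6 address,
    # zeroing the rest.
    ip = ipaddress.ip_address(addr)
    prefix = 24 if ip.version == 4 else 32
    return str(ipaddress.ip_network((addr, prefix), strict=False).network_address)

print(anonymize("203.0.113.42"))    # 203.0.113.0
print(anonymize("2001:db8::1234"))  # 2001:db8::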

I don’t think the roles are particularly reusable for others, but it’s nice to have a central repository containing all the configuration for the server.

Go server to serve comments

When I started self-hosting the blog and added commenting via mastodon, it was via a third-party PHP script. This has been replaced by a Go program (GitHub repo). The new Go program scales a lot better than a PHP script, and provides better security properties due to AppArmor and systemd-based sandboxing; it even uses systemd’s DynamicUser.

Special care has been taken to have time outs for talking to upstream servers, so the program cannot hang with open connections and will respond eventually.

The Go binary is connected to nginx via a UNIX domain socket that serves FastCGI. The service is activated via systemd socket activation, allowing it to be owned by www-data, while the binary runs as a dynamic user. Nginx’s native fastcgi caching mechanism is enabled so the Go process is only contacted every 10 minutes at the most (for a given post). Nice!

Performance

Performance is a lot better than the old shared server. Pages load in up to half the time of the old one. Scalability also seems better: I tried various benchmarks, and achieved consistently higher concurrency ratings. A simple curl via https now takes 100ms instead of 200ms.

Performance is still suboptimal from the west coast of the US or other places far away from Germany, but got a lot better than before: Measuring from Oregon using webpagetest, it took 1.5s for a page to fully render vs ~3.4s before. A CDN would surely be faster, but would lose the end-to-end encryption.

Upcoming mail server

The next step is to enable email. Setting up postfix with dovecot is quite easy, it turns out. Install them, tweak a few settings, set up SPF, DKIM, DMARC, and a PTR record, and off you go.

I mostly expect to read my email by tagging it on the server using notmuch somehow, and then syncing it to my laptop using muchsync. The IMAP access should allow some notifications or reading on the phone.

Spam filtering will be handled with rspamd. It seems to be the hot new thing on the market, is integrated with postfix as a milter, and handles a lot of stuff, such as:

  • greylisting
  • IP scoring
  • DKIM verification and signing
  • ARC verification
  • SPF verification
  • DNS lists
  • Rate limiting

It also has fancy stuff like neural networks. Woohoo!

As another bonus point: It’s trivial to confine with AppArmor, which I really love. Postfix and Dovecot are a mess to confine with their hundreds of different binaries.

I found it via uberspace, which plan on using it for their next uberspace7 generation. It is also used by some large installations like rambler.ru and locaweb.com.br.

I plan to migrate mail from uberspace in the upcoming weeks, and will post more details about it.

Paul Wise: FLOSS Activities November 2018

Planet Debian - Fri, 30/11/2018 - 10:03pm
Changes

Issues

Review

Administration
  • myrepos: respond to some tickets
  • Debian: respond to porterbox schroot query, remove obsolete role accounts, restart misbehaving webserver, redirect openmainframe mail to debian-s390, respond to query about consequences of closing accounts
  • Debian wiki: unblacklist networks, redirect/answer user support query, answer question about page names, whitelist email addresses
  • Debian packages site: update mirror config
  • Debian derivatives census: merge and deploy changes from Outreachy applicants and others
Sponsors

The purple-discord upload was sponsored by my employer. All other work was done on a volunteer basis.

Chris Lamb: Free software activities in November 2018

Planet Debian - Fri, 30/11/2018 - 9:23pm

Here is my monthly update covering what I have been doing in the free software world during November 2018 (previous month):

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month I:


Debian

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

  • Investigated and triaged golang-go.net-dev, libsdl2-image, lighttpd, nginx, pdns, poppler, rustc & xml-security-c amongst many others.

  • "Frontdesk" duties, responding to user queries, etc.

  • Issued DLA 1572-1 for nginx to fix a denial of service (DoS) vulnerability — as there was no validation for the size of a 64-bit atom in an .mp4 file, this led to CPU exhaustion when the size was zero.

  • Issued DLA 1576-1 correcting a SSH passphrase disclosure in ansible's User module leaking data in the global process list.

  • Issued DLA 1584-1 for ruby-i18n to fix a remote denial-of-service vulnerability.

  • Issued DLA 1585-1 to prevent an XSS vulnerability in ruby-rack where a malicious request could forge the HTTP scheme being returned to the underlying application.

  • Issued DLA 1591-1 to fix two vulnerabilities in libphp-phpmailer where arbitrary local files could be disclosed via relative path HTML transformations, as well as an object injection attack.

  • Uploaded libsdl2-image (2.0.3+dfsg1-3) and sdl-image1.2 (1.2.12-10) to the unstable distribution to fix buffer overflows on corrupt or maliciously-crafted XCF files. (#912617 & #912618)

  • Uploaded ruby-i18n (0.7.0-3) to unstable [...] and prepared a stable proposed update for a potential 0.7.0-2+deb9u1 in stretch (#914187).

  • Uploaded ruby-rack (1.6.4-6) to unstable [...] and (2.0.5-2) to experimental [...]. I also prepared a proposed update for a 1.6.4-4+deb9u1 in the stable distribution (#914184).


Uploads
  • python-django (2:2.1.3-1) — New upstream bugfix release.

  • redis:

    • 5.0.1-1 — New upstream release; ensure that Debian-supplied Lua libraries are available during scripting (#913185); refer to /run directly in .service files, etc.
    • 5.0.1-2 — Ensure that lack of IPv6 support does not prevent startup on Debian, where we bind to the ::1 interface by default. (#900284 & #914354)
    • 5.0.2-1 — New upstream release.
  • redisearch (1.2.1-1) — Upload the last AGPLv3 (i.e. non-Commons Clause) package from my GoodFORM project.

  • hiredis (0.14.0-3) — Adopt and tidy package (#911732).

  • python-redis (3.0.1-1) — New upstream release.

  • adminer (4.7.0-1) — New upstream release & ensure all documentation is under /usr/share/doc.


I also sponsored uploads of elpy (1.26.0-1) & muttrc-mode-el (1.2+git20180915.aa1601a-1).


Debian bugs filed
  • molly-guard: Breaks conversion with usrmerge. (#914716)

  • git-buildpackage: Please add gbp-dch --stable flag. (#914186)

  • git-buildpackage: gbp pq -Pq suffixes are not actually optional. (#914281)

  • python-redis: Autopkgtests fail. (#914800)

  • git-buildpackage: Correct "saving" typo. (#914280)

  • python-astropy: Please drop unnecessary dh_strip_nondeterminism override. (#914612)

  • shared-mime-info: Don't assume every *.key file is an Apple Keynote file. (#913550, with patch)

FTP Team


As a Debian FTP assistant this month I ACCEPTed 37 packages: android-platform-system-core, arm-trusted-firmware, boost-defaults, dtl, elogind, fonts-ibm-plex, gnome-remote-desktop, gnome-shell-extension-desktop-icons, google-i18n-address, haskell-haskell-gi-base, haskell-rio, lepton-eda, libatteanx-serializer-rdfa-perl, librdf-trine-serializer-rdfa-perl, librdf-trinex-compatibility-attean-perl, libre-engine-re2-perl, libtest-regexp-pattern-perl, linux, lua-lxc, lxc-templates, ndctl, openssh, osmo-bsc, osmo-sgsn, othman, pg-rational, qtdatavis3d-everywhere-src, ruby-grape-path-helpers, ruby-grape-route-helpers, ruby-graphiql-rails, ruby-js-regex, ruby-regexp-parser, shellia, simple-revision-control, theme-d, ulfius & vim-julia.

Gregor Herrmann: RC bugs 2018/01-48

Planet Debian - Fri, 30/11/2018 - 8:12pm

I just arrived at the Bug Squashing Party in Bern – a good opportunity to report the RC bugs I've touched so far this year (not that many …):

  • #750732 – src:libanyevent-perl: "libanyevent-perl: Intermittent build failures on various architectures"
    disable a test (pkg-perl)
  • #862678 – src:pidgin: "Switch from network-manager-dev to libnm-dev"
    propose patch, later uploaded by maintainer
  • #878550 – src:liblog-dispatch-filerotate-perl: "liblog-dispatch-filerotate-perl: missing (build) dependency on libparams-validate-perl"
    add missing (build) dependency, upload to DELAYED/5
  • #882618 – libdbix-class-schema-loader-perl: "libdbix-class-schema-loader-perl: Test failures"
    apply patch from ntyni (pkg-perl)
  • #884626 – src:liblinux-dvb-perl: "liblinux-dvb-perl FTBFS with linux-libc-dev 4.14.2-1"
    upload with fix from knowledgejunkie (pkg-perl)
  • #886044 – src:syncmaildir: "syncmaildir: Depends on gconf"
    propose a patch
  • #886355 – src:libpar-packer-perl: "libpar-packer-perl: frequent parallel FTBFS"
    disable parallel building (pkg-perl)
  • #890905 – src:jabref: "jabref: doesn't build/run with default-jdk/-jre"
    try to come up with a patch (pkg-java)
  • #892275 – redshift: "redshift: Unable to connect to GeoClue."
    investigate and downgrade
  • #892392 – src:aqemu: "aqemu: build-depends on GCC 6"
    propose a patch
  • #893251 – jabref: "jabref: doesn't start with liblog4j2-java 2.10.0-1"
    use versioned (build) dependency (pkg-java)
  • #894626 – libsnmp-perl: "libsnmp-perl: undefined symbol: netsnmp_ds_toggle_boolean"
    propose a patch
  • #894727 – libgit-repository-perl: "libgit-repository-perl: FTBFS: t/10-new_fail.t broke with new git"
    add patch from upstream pull request (pkg-perl)
  • #895697 – src:libconfig-model-tester-perl: "libconfig-model-tester-perl FTBFS: Can't locate Module/Build.pm in @INC"
    add missing build dependency (pkg-perl)
  • #896502 – libxml-structured-perl: "libxml-structured-perl: missing dependency on libxml-parser-perl"
    add missing (build) dependency (pkg-perl)
  • #896534 – libnetapp-perl: "libnetapp-perl: missing dependency on libnet-telnet-perl"
    add missing dependency (pkg-perl)
  • #896537 – libmoosex-mungehas-perl: "libmoosex-mungehas-perl: missing dependency on libtype-tiny-perl | libeval-closure-perl"
    add missing dependency (pkg-perl)
  • #896538 – libmonitoring-livestatus-class-perl: "libmonitoring-livestatus-class-perl: missing dependency on libmodule-find-perl"
    add missing dependency, upload to DELAYED/5
  • #896539 – libmodule-install-trustmetayml-perl: "libmodule-install-trustmetayml-perl: missing dependency on libmodule-install-perl"
    add missing (build) dependency (pkg-perl)
  • #896540 – libmodule-install-extratests-perl: "libmodule-install-extratests-perl: missing dependency on libmodule-install-perl"
    add missing (build) dependency (pkg-perl)
  • #896541 – libmodule-install-automanifest-perl: "libmodule-install-automanifest-perl: missing dependency on libmodule-install-perl"
    add missing (build) dependency (pkg-perl)
  • #896543 – liblwp-authen-negotiate-perl: "liblwp-authen-negotiate-perl: missing dependency on libwww-perl"
    add missing dependency, upload to DELAYED/5
  • #896549 – libhtml-popuptreeselect-perl: "libhtml-popuptreeselect-perl: missing dependency on libhtml-template-perl"
    add missing dependency, upload to DELAYED/5
  • #896551 – libgstreamer1-perl: "libgstreamer1-perl: Typelib file for namespace 'Gst', version '1.0' not found"
    add missing (build) dependencies (pkg-perl)
  • #897724 – src:collectd: "collectd: ftbfs with GCC-8"
    pass a compiler flag, upload to DELAYED/5
  • #898198 – src:libnet-oauth-perl: "FTBFS (test failures, also seen in autopkgtests) with libcrypt-openssl-rsa-perl >= 0.30-1"
    add patch (pkg-perl)
  • #898561 – src:libmarc-transform-perl: "libmarc-transform-perl: FTBFS with libyaml-perl >= 1.25-1 (test failures)"
    apply patch provided by YAML upstream (pkg-perl)
  • #898977 – libnet-dns-zonefile-fast-perl: "libnet-dns-zonefile-fast-perl: FTBFS: You are missing required modules for NSEC3 support"
    add missing (build) dependency (pkg-perl)
  • #900232 – src:collectd: "collectd: FTBFS: sed: can't read /usr/lib/pkgconfig/OpenIPMIpthread.pc: No such file or directory"
    propose a patch, later upload to DELAYED/2
  • #901087 – src:libcatalyst-plugin-session-store-dbi-perl: "libcatalyst-plugin-session-store-dbi-perl: FTBFS: Base class package "Class::Data::Inheritable" is empty."
    add missing (build) dependency (pkg-perl)
  • #901807 – src:libmath-gsl-perl: "libmath-gsl-perl: incompatible with GSL >= 2.5"
    apply patches from ntyni and tweak build (pkg-perl)
  • #902192 – src:libpdl-ccs-perl: "libpdl-ccs-perl FTBFS on architectures where char is unsigned"
    new upstream release (pkg-perl)
  • #902625 – libmath-gsl-perl: "libmath-gsl-perl: needs a versioned dependency on libgsl23 (>= 2.5) or so"
    make build dependency versioned (pkg-perl)
  • #903173 – src:get-flash-videos: "get-flash-videos: FTBFS in buster/sid (dh_installdocs: Cannot find "README")"
    fix name in .docs (pkg-perl)
  • #903178 – src:libclass-insideout-perl: "libclass-insideout-perl: FTBFS in buster/sid (dh_installdocs: Cannot find "CONTRIBUTING")"
    fix name in .docs (pkg-perl)
  • #903456 – libbio-tools-phylo-paml-perl: "libbio-tools-phylo-paml-perl: fails to upgrade from 'stable' to 'sid' - trying to overwrite /usr/share/man/man3/Bio::Tools::Phylo::PAML.3pm.gz"
    upload package fixed by carandraug (pkg-perl)
  • #904737 – src:uwsgi: "uwsgi: FTBFS: unable to build gccgo plugin"
    update build dependencies, upload to DELAYED/5
  • #904740 – src:libtext-bidi-perl: "libtext-bidi-perl: FTBFS: 'fribidi_uint32' undeclared"
    apply patch from CPAN RT (pkg-perl)
  • #904858 – src:libtickit-widget-tabbed-perl: "libtickit-widget-tabbed-perl: Incomplete debian/copyright?"
    fix d/copyright (pkg-perl)
  • #905614 – src:license-reconcile: "FTBFS: Failed test 'no warnings' with libsoftware-license-perl 0.103013-2"
    apply patch from Felix Lechner (pkg-perl)
  • #906482 – src:libgit-raw-perl: "libgit-raw-perl: FTBFS in buster/sid (failing tests)"
    patch test (pkg-perl)
  • #908323 – src:libgtk3-perl: "libgtk3-perl: FTBFS: t/overrides.t failure"
    add patch and versioned (build) dependency (pkg-perl)
  • #909343 – src:libcatalyst-perl: "libcatalyst-perl: fails to build with libmoosex-getopt-perl 0.73-1"
    upload new upstream release (pkg-perl)
  • #910943 – libhtml-tidy-perl: "libhtml-tidy-perl: FTBFS (test failures) with tidy-html5 5.7"
    add patch (pkg-perl)
  • #912039 – src:libpetal-utils-perl: "libpetal-utils-perl: FTBFS: Test failures"
    add missing build dependency (pkg-perl)
  • #912045 – src:mb2md: "mb2md: FTBFS: Test failures"
    add missing build dependency (pkg-perl)
  • #914288 – src:libpgplot-perl: "libpgplot-perl: FTBFS and autopkgtest fail with new giza-dev: test waits for input"
    disable interactive tests (pkg-perl)
  • #915096 – src:libperl-apireference-perl: "libperl-apireference-perl: Missing support for perl 5.28.1"
    add support for perl 5.28.1 (pkg-perl)

let's see how the weekend goes.

Michal Čihař: Weblate 3.3

Planet Debian - Fri, 30/11/2018 - 3:00pm

Weblate 3.3 has been released today. The most visible new feature is component alerts, but there are several other improvements as well.

Full list of changes:

  • Added support for component and project removal.
  • Improved performance for some monolingual translations.
  • Added translation component alerts to highlight problems with a translation.
  • Expose XLIFF unit resname as context when available.
  • Added support for XLIFF states.
  • Added check for non-writable files in DATA_DIR.
  • Improved CSV export for changes.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used on https://hosted.weblate.org/ as official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared, you can influence this by expressing support for individual issues either by comments or by providing bounty for them.


Jonathan Dowland: glBSP

Planet Debian - Fri, 30/11/2018 - 11:59am

Continuing a series of blog posts about Debian packages I have adopted (Previously: smartmontools; duc), in January this year I also adopted glBSP.

I was surprised to see glBSP come up for adoption; I found out when I was installing something entirely unrelated, thanks to the how-can-i-help package. (This package is a great idea: it tells you about packages you have installed which are in danger of being removed from Debian, or have other interesting bugs filed against them. Give it a go!) glBSP is a dependency of another of my packages, WadC, so I adopted it fairly urgently.

glBSP is a node-building tool for Doom maps. A Map in Doom is defined in a handful of different lumps of data. The top-level, canonical data structures are relatively simple: THINGS is a list of things (type, coordinates, angle facing); VERTEXES is a list of points for geometry (X/Y coordinates); SECTORS define regions (light level, floor height and texture,…), etc. Map authoring tools can build these lumps of data relatively easily. (I've done it myself: I generate them all in liquorice, that I should write more about one day.)
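
For a flavour of how simple these canonical lumps are, here is an illustrative sketch (mine, not code from liquorice) that packs a VERTEXES lump; each Doom vertex is just a pair of little-endian signed 16-bit X/Y coordinates:

import struct

def vertexes_lump(points):
    # Pack each (x, y) vertex as two little-endian signed 16-bit ints.
    return b"".join(struct.pack("<hh", x, y) for x, y in points)

# The corners of a 128x128 square room: four vertices, four bytes each.
lump = vertexes_lump([(0, 0), (128, 0), (128, 128), (0, 128)])
assert len(lump) == 16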

During gameplay, Doom needs to answer questions such as: the player is at location (X,Y) and has made a noise. Can Monster Z hear that noise? Or: the player is at location (X,Y) facing Z°: what walls need to be drawn? These decisions needed to be made very quickly on the target hardware of 1993 (a 486 CPU) in order to maintain the desired frame-rate (35fps). To facilitate this, various additional data structures are derived from the canonical lumps. glBSP is one of a class of tools called node builders that calculate these extra lumps. The name "node-builder" comes from one of the lumps (NODES), which encodes a binary-space partition of the map geometry (and that's where "BSP" comes from).
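
To illustrate what the NODES lump buys the engine at runtime, here is a minimal Python sketch of a BSP point query. This is not glBSP's code: the Node class and the sign convention are simplified assumptions, but the core idea, a cross product deciding which side of a partition line a point falls on, is the real one.

# A BSP node: a partition line starting at (x, y) and running along
# (dx, dy), with one child per side (another Node, or a leaf region).
class Node:
    def __init__(self, x, y, dx, dy, front, back):
        self.x, self.y, self.dx, self.dy = x, y, dx, dy
        self.front, self.back = front, back

def locate(node, px, py):
    # Descend the tree until we reach a leaf containing the point.
    while isinstance(node, Node):
        # Cross product: which side of the partition line is (px, py) on?
        side = (px - node.x) * node.dy - (py - node.y) * node.dx
        node = node.front if side <= 0 else node.back
    return node

# A tree of depth d answers "where is the player?" in d comparisons
# rather than a scan over the whole map:
tree = Node(128, 0, 0, 256, "west room", "east room")
print(locate(tree, 64, 100))  # -> west room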

If you would like to know more about these algorithms (and they are fascinating, honest!), I recommend picking up Fabien Sanglard's forthcoming book "Game Engine Black Book: DOOM". You can pre-order an ebook from Google Play here. It will be available as a physical book (and ebook) via Amazon on its publication date, December 10, marking Doom's 25th anniversary.

The glBSP package could do with some work to bring it up to the modern standards and conventions of Debian packages. I haven't bothered to do that, because I'm planning to replace it with another node-builder. glBSP is effectively abandoned upstream. There are loads of other node builders that could be included: glBSP and Eureka author Andrew Apted started a new one called AJBSP, and my long-time friend Kim Roar Foldøy Hauge has one called zokumbsp. The best candidate as an all-round useful node-builder is probably ZDBSP, which was originally developed as an internal node-builder for the ZDoom engine, and was designed for speed. It also copes well with some torture-test maps, such as WadC's "choz.wl", which brought glBSP to its knees. I've submitted a package of ZDBSP to Debian and I'm waiting to see if it is accepted by the FTP masters. After that, we could consider removing glBSP.

Molly de Blanc: Free software activities (November, 2018)

Planet Debian - Enj, 29/11/2018 - 11:54md

Welcome to what is the first, and may or may not be the last, monthly summary of my free software activities.

November was a good month for me, heavily laden with travel. Conferences and meetings took me to Seattle, WA (USA) and Milano and Bolzano in Italy. I think of my activities as generally focusing on “my” projects — that is to say, representing my own thoughts and ideas, rather than those of my employer or associated projects.

In addition to using my free time to work on free and open source software and related issues, I have a day job at the Free Software Foundation. I've included highlights from my past month at the FSF. This feels a little bit like cheating.

November Activities (personal)
  • I keynoted the Seattle GNU/Linux festival (SeaGL), delivering a talk entitled “Insecure connections: Love and mental health in our digital lives.” Slides are available on GitLab.
  • Attended an Open Source Initiative board meeting in Milan, Italy.
  • Spoke at SFScon in Bolzano, Italy, giving a talk entitled “User Freedom: A Love Story.” Slides forthcoming. For this talk, I created a few original slides, but largely repurposed images from “Insecure connections.”
  • I made my first quantitative Debian contribution, adding the Open Source Initiative to the list of organizations of which Debian is a member.
  • Submitted sessions to the Community and the Legal and Policy devrooms at FOSDEM. #speakerlife
  • Reviewed session proposals for CopyLeft Conf, for which I am on the papers committee.
  • I helped organize a $15,000 match donation for the Software Freedom Conservancy.
Some highlights from my day job

Daniel Pocock: Connecting software freedom and human rights

Planet Debian - Enj, 29/11/2018 - 11:04md

2018 is the 70th anniversary of the Universal Declaration of Human Rights.

Over the last few days, while attending the UN Forum on Business and Human Rights, I've had various discussions with people about the relationship between software freedom, business and human rights.

In the information age, control of the software, source code and data translates into power and may contribute to inequality. Free software principles are not simply about the cost of the software; they lead to transparency and give people infinitely more choices.

Many people in the free software community have taken a particular interest in privacy, which is Article 12 in the declaration. The modern Internet challenges this right, while projects like TAILS and Tor Browser help to protect it. The UN's 70th anniversary slogan Stand up 4 human rights is a call to help those around us understand these problems and make effective use of the solutions.

We live in a time when human rights face serious challenges. Consider censorship: Saudi Arabia is accused of complicity in the disappearance of columnist Jamal Khashoggi, and the White House is accused of using fake allegations to try to banish CNN journalist Jim Acosta. Arjen Kamphuis, co-author of Information Security for Journalists, vanished in mysterious circumstances. The last time I saw Arjen was at OSCAL'18 in Tirana.

For many of us, events like these may leave us feeling powerless. Nothing could be further from the truth. Standing up for human rights starts with looking at our own failures, both as individuals and organizations. For example, have we ever taken offense at something, judged somebody or rushed to make accusations without taking time to check facts and consider all sides of the story? Have we seen somebody we know treated unfairly and remained silent? Sometimes it may be desirable to speak out publicly, sometimes a difficult situation can be resolved by speaking to the person directly or having a meeting with them.

Being at the United Nations provided an acute reminder of these principles. In parallel with the event, the UN was hosting a conference on the mine ban treaty and the conference on Afghanistan; the Afghan president arrived as I walked up the corridor. These events reflect a legacy of hostilities and sincere efforts to come back from the brink.

A wide range of discussions and meetings

There were many opportunities to have discussions with people from all the groups present. Several sessions raised issues that made me reflect on the relationship between corporations and the free software community and the risks for volunteers. At the end of the forum I had a brief discussion with Dante Pesce, Chair of the UN's Business and Human Rights working group.

Best free software resources for human rights?

Many people at the forum asked me how to get started with free software and I promised to keep adding to my blog. What would you regard as the best online resources, including videos and guides, for people with an interest in human rights to get started with free software, solving problems with privacy and equality? Please share them on the Libre Planet mailing list.

Let's not forget animal rights too

Are dogs entitled to danger pay when protecting heads of state?

Bits from Debian: Debian welcomes its new Outreachy intern

Planet Debian - Enj, 29/11/2018 - 8:15md

Debian continues participating in Outreachy, and we'd like to welcome our new Outreachy intern for this round, lasting from December 2018 to March 2019.

Anastasia Tsikoza will work on Improving the integration of Debian derivatives with the Debian infrastructure and the community, mentored by Paul Wise and Raju Devidas.

Congratulations, Anastasia, and welcome!

From the official website: Outreachy provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.

The Outreachy programme is possible in Debian thanks to the efforts of Debian developers and contributors who dedicate their free time to mentoring interns and other outreach tasks, to the Software Freedom Conservancy's administrative support, and to the continued support of Debian's donors, who provide funding for the internships.

Join us and help extend Debian! You can follow the work of the Outreachy interns by reading their blogs (they are syndicated in Planet Debian) and chat with us in the #debian-outreach IRC channel and on the mailing list.

Russ Allbery: Review: The Blind Side

Planet Debian - Enj, 29/11/2018 - 6:25pd

Review: The Blind Side, by Michael Lewis

Publisher: W.W. Norton & Company
Copyright: 2006, 2007
Printing: 2007
ISBN: 0-393-33047-8
Format: Trade paperback
Pages: 339

One of the foundations of Michael Lewis's mastery of long-form journalism is that he is an incredible storyteller. Given even dry topics of interest (baseball statistics, bond trading, football offensive lines), he has an uncanny knack for finding memorable characters around which to tell a story, and uses their involvement as the backbone of a clear explanation of complex processes or situations. That's why one of the surprises of The Blind Side is that Lewis loses control of his material.

The story that Lewis wants to tell is the development of the left tackle position in professional football. The left tackle is the player on the outside of the offensive line on the blind side of a right-handed quarterback. The advent of the west-coast offense with its emphasis on passing plays, and the corresponding development of aggressive pass rushers in the era of Lawrence Taylor, transformed that position from just another member of the most anonymous group of people in football into one of the most highly-paid positions on the field. The left tackle is the person most responsible for stopping a pass rush.

Lewis does tell that story in The Blind Side, but every time he diverts into it, the reader is left tapping their foot in frustration and wishing he'd hurry up. That's because the other topic of this book, the biographical through line, is Michael Oher, and Michael Oher the person is so much more interesting than anything Lewis has to say about football that the football parts seem wasted.

I'm not sure how many people will manage to read this book without having the details of Oher's story spoiled for them first, particularly given there's also a movie based on this book, but I managed it and loved the unfolding of the story. I'm therefore going to leave out most of the specifics to avoid spoilers. But the short version is that Oher was a sometimes-homeless, neglected black kid with incredible physical skills but almost no interaction with the public school system who ended up being adopted as a teenager by a wealthy white family. They helped him clear the hurdles required to play NCAA football.

That's just the bare outline. It's an amazing story, and Lewis tells it very well. I had a hard time putting this book down, and rushed through the background chapters on the evolution of football to get back to more details about Oher. But, as much as Lewis tries to make this book a biography of Oher himself, it's really not. As Lewis discloses at the end of this edition, he's a personal friend of Sean Tuohy, Oher's adoptive father. Oher was largely unwilling to talk to Lewis about his life before he met the Tuohys. Therefore, this is, more accurately, the story of Oher as seen from the Tuohys' perspective, which is not quite the same thing.

There are so many pitfalls here that it's amazing Lewis navigates them as well as he does, and even he stumbles. There are stereotypes and pieces of American mythology lurking everywhere beneath this story, trying to make the story snap to them like a guiding grid: the wealthy white family welcoming in the poor black kid, the kid with amazing physical talent who is very bad at school, the black kid with an addict mother, the white Christian school who takes him in, the colleges who try to recruit him... you cannot live in this country without strong feelings about all of these things. Nestled next to this story like landmines are numerous lies that white Americans tell themselves to convince themselves that they're not racist. I could feel the mythological drag on this story trying to make Oher something he's not, trying to make him fit into a particular social frame. It's one of the reasons why I doubt I'll ever see the movie: it's difficult to imagine a movie managing to avoid that undertow.

To give Lewis full credit, he fights to keep this story free of its mythology every step of the way, and you can see the struggle in the book. He succeeds best at showing that Oher is not at all dumb, but instead is an extremely intelligent teenager who was essentially never given an opportunity to learn. He also provides a lot of grounding and nuance to Oher's relationship with the Tuohys. They're still in something of a savior role, but it seems partly deserved. And, most importantly, he's very open about the fact that Oher largely didn't talk to anyone about his past, including Lewis, so except for a chapter near the end laying out the information Lewis was able to gather, it's mostly conjecture on the part of the Tuohys and others.

But there is so much buried here, so many fault lines of US society, so many sharp corners of racism and religion and class, that Oher's story just does not fit into Lewis's evolution-of-football narrative. It spills out of the book, surfaces deep social questions that Lewis barely touches on, and leaves so many open questions (including Oher's own voice). One major example: Briarcrest Christian School, the high school Oher played for and the place where he was discovered as a potential NCAA and later professional football player, is a private high school academy formed in 1973 after the desegregation of Memphis schools as a refuge for the children of white supremacists. Lewis describes Oher's treatment as one of only three black children at the school as positive; I can believe that because three kids out of a thousand plays into one kind of narrative. Later, Lewis mentions in passing that the school balked at the applications of other black kids once Oher became famous, and one has to wonder how that might change the narrative for the school's administration and parents. There's a story there that's left untold, and might not be as positive as Oher's reception.

Don't get me wrong: these aren't truly flaws in Lewis's book, because he's not even trying to tell that story. He's telling the story of one exceptional young man who reached college football through a truly unusual set of circumstances, and he tells that story well. I just can't help but look for systems in individual stories, to look for institutions that should have been there for Oher and weren't. Once I started looking, the signs of systemic failures sit largely unremarked beneath nearly every chapter. Maybe this is a Rorschach test of political analysis: do you see an exceptional person rising out of adversity through human charity, or a failure of society that has to be patched around by uncertain chance that, for most people, will fail without ever leaving a trace?

The other somewhat idiosyncratic reaction I had to this book, and the reason why I've put off reading it for so long, is that I now find it hard to read about football. While I've always been happy to watch nearly any sport, football used to be my primary sport as a fan, the one I watched every Sunday and most Saturdays. As a kid, I even kept my own game statistics from time to time, and hand-maintained team regular season standings. But somewhere along the way, the violence, the head injuries, and the basic incompatibility between the game as currently played and any concept of safety for the players got to me. I was never someone who loved the mud and the blood and the aggression; I grew up on the west coast offense and the passing game and watched football for the tactics. But football is an incredibly violent sport, and the story of quarterback sacks, rushing linebackers, and the offensive line is one of the centers of that violence. Lewis's story opens with Joe Theismann's leg injury in 1985, which is one of the most horrific injuries in the history of sport. I guess I don't have it in me to get excited about a sport that does things like that to its players any more.

I think The Blind Side is a bit of a mess as a book, but I'm still very glad that I read it. Oher's story, particularly through Lewis's story-telling lens, is incredibly compelling. I'm just also wary of it, because it sits slightly askew on some of the deepest fault lines in American society, and it's so easy for everyone involved to read things into the story that are coming from that underlying mythology rather than from Oher himself. I think Lewis fought through this whole book to not do that; I think he mostly, but not entirely, succeeded.

The Tuohys have their own related book (In a Heartbeat), written with Sally Jenkins, that's about their philosophy of giving and charity and looks very, very Christian in a way that makes me doubtful that it will shine a meaningful light on any of the social fault lines that Lewis left unaddressed. But Oher, with Don Yaeger, has written his own autobiography, I Beat the Odds, and that I will read. Given how invested I got in his story through Lewis, I feel an obligation to hear it on his own terms, rather than filtered through well-meaning white people.

I will cautiously recommend this book because it's an amazing story and Lewis tries very hard to do it justice. But I think this is a book worth reading carefully, thinking about who we're hearing from and who we aren't, and looking critically at the things Lewis leaves unsaid.

Rating: 7 out of 10
