
Xavier Claessens: Speed up your GitLab CI

Planet GNOME - Tue, 06/11/2018 - 10:21pm

GNOME GitLab has AWS runners, but they are used only when pushing code into a GNOME upstream repository, not when you push into your personal fork. For personal forks there is only one (AFAIK) shared runner and you could be waiting for hours before it picks your job.

But did you know you can register your own PC, or a spare laptop collecting dust in a drawer, to get instant continuous integration (CI) going? It’s really easy to setup!

1. Install docker

   apt install

2. Install gitlab-runner

Follow the instructions here:

(Note: The Ubuntu 18.04 package doesn’t seem to work.)

3. Install & start the GitLab runner service

   sudo gitlab-runner install
   sudo gitlab-runner start

4. Find the registration token

Go to your GitLab project page, then Settings -> CI/CD -> expand “Runners”.

5. Register your runner

   sudo gitlab-runner register --non-interactive --url --executor docker --docker-image fedora:27 --registration-token **

You can repeat step 5 with the registration token of all your personal forks in the same GitLab instance. To make this easier, here’s a snippet I wrote in my ~/.bashrc to register my “builder.local” machine on a new project. Use it as gitlab-register .

function gitlab-register {
  host=$1
  token=$2

  case "$host" in
    gnome) host= ;;
    fdo) host= ;;
    collabora) host= ;;
    *) host= token=$1 ;;
  esac

  cmd="sudo gitlab-runner register --non-interactive --url $host --executor docker --docker-image fedora:27 --registration-token $token"

  #$cmd
  ssh builder.local -t "$cmd"
}

Not only will you now get faster CI, but you’ll also reduce the queue on the shared runner for others!

Reproducible Builds: Weekly report #184

Planet Debian - Tue, 06/11/2018 - 7:52pm

If you’re interested in attending the Reproducible Builds summit in Paris between 11th—13th December please see our event page.

In the meantime, here’s what happened in the Reproducible Builds effort between Sunday October 28 and Saturday November 3 2018:

Packages reviewed and fixed, and bugs filed

Chris Lamb also sent two previously-authored patches for GNU mtools to ensure the Debian Installer images could become reproducible. (1 & 2)

This week’s edition was written by Alexander Bedrossian, Amit Biswas, Anoop Nadig, Bernhard M. Wiedemann, Chris Lamb, David A. Wheeler, Holger Levsen, Snahil Singh, Nick Gregory & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks

Allan Day: Birds in flight

Planet GNOME - Tue, 06/11/2018 - 7:37pm

If you follow Planet GNOME, you’ll know about Jim Hall’s fantastic usability testing work. For years Jim has spearheaded usability testing on GNOME, both by running tests himself and mentoring usability testing internships offered through Outreachy.

This Autumn, Jim will once again be mentoring usability testing internships. However, this time round, we’re planning on running the internships a bit differently.

In previous rounds of usability testing, the tests have typically been performed on released software: that is, apps and features that are already in the hands of users. This is great and has flagged up issues that we’ve gone on to fix, but it has some drawbacks.

Most obviously, it means that users are exposed to the software before usability testing takes place. It also means that issues found in testing can take a long time to be corrected: active development of the software in question might have been paused by the time the tests are conducted, and it can sometimes take a while until a developer is able to correct any usability issues that have been identified.

Therefore, for this round of the Outreachy internships, we are only going to test UX changes that are actively being worked on. Instead of testing finished features, the tests will be on two things:

  1. Mockups or prototypes of changes that we hope to implement soon (this can include static mockups and paper or software prototypes)
  2. Features or UI changes that are being actively worked on, but haven’t been released to users yet

One goal is to increase the number of cycles of data-driven iteration that our UX work goes through. Ideally there should be multiple rounds of testing and design changes before coding even takes place! This will reduce the number of UI changes that have to be made, and in turn reduce the amount of work for our developers.

Organising the tests in this way, we’re drawing on ideas from agile and lean. The plan is to have a predefined schedule of tests. When test day rolls around, we’ll figure out what we want to test. This will force a routine to our practice and ensure that we keep the exercise light and iterative.

There’s lots of in-progress UX work in GNOME right now, all of which would benefit from testing. This includes the new menu arrangements that are replacing app menus, new sound settings, new design patterns for lists, new application permission settings, the new lock screen design, and more.

One thing I’d actually love to see is design initiatives rejected outright because of testing feedback.

The region and language settings are being updated right now. We can test this!

This approach to testing is an experiment and we’ll have to see how well it works in practice. However, if it does go well, I’m hopeful that we can incorporate it into our design and development practice more generally.

Jim and the rest of the design team will be looking for help from the rest of the GNOME community as we approach the test days. If anyone wants to help make prototypes or make sure that development branches can be easily run by our interns, your help would be extremely welcome. Likewise, we’d love to hear from anyone who has development work that they would like to have tested.

Jono Bacon: Video: 10 Avoidable Career Mistakes (and How to Conquer Them)

Planet Ubuntu - Tue, 06/11/2018 - 5:30pm

I don’t claim to be a career expert, but I have noticed some important career mistakes many people make (some I’ve made myself!). These mistakes span how we approach our career growth, how we balance our careers with the rest of our lives, and the choices we make on a day-to-day basis.

In the latest episode of my Open Organization video series, I delve into 10 of the most important career mistakes people tend to make. Check it out below:

So, now let me turn it to you. What are other career mistakes that are avoidable? What have you learned in your career? Share them in the comments below!

The post Video: 10 Avoidable Career Mistakes (and How to Conquer Them) appeared first on Jono Bacon.

Daisy and George got passports

Planet Debian - Tue, 06/11/2018 - 2:56pm

For readers of Planet Debian who met Daisy and George in July: they now have their own passports (and more adventure books in the works).

Jon Debian –

My Open-Source Activities from September to October 2018

Planet Debian - Tue, 06/11/2018 - 12:51pm

Welcome readers, this is an infrequently updated post series that logs my activities within open-source communities. I want my work to be as transparent as possible in order to promote open governance, a policy feared even by some “mighty” nations.

I do not work on open-source full-time, although I sincerely would love to. Therefore the posts may cover a ridiculously long period (even a whole year).

Unfortunately this blog site does not support commenting. So if anyone has anything to discuss regarding the posts, feel free to reach me via the social links at the footer of the page.


Debian is a general-purpose Linux distribution that is widely used on the planet. I am currently a Debian Maintainer who works on packages related to Android SDK and the Java ecosystem.

Some Android SDK Packages Now Available on Every Architecture

For a long time our packages were only available on the x86, ARM and MIPS architectures. This is because AOSP is designed to support only those 3 major instruction sets in the market. Another limitation is that libunwind, a common dependency of most Android SDK components, can only be built on said architectures after being patched by AOSP. ADB and fastboot have now even dropped MIPS support because we build them against BoringSSL, which does not support MIPS at all. In light of the removal of MIPS support in the NDK as well, I assume that the entire AOSP will say goodbye to MIPS at some point.

But not all components rely on libunwind. With some minor efforts and investigations, we can now enable some of them to build on every architecture that Debian supports. For now they include:

There will surely be more on the road, stay tuned.

DD Application Approved

For those who aren’t familiar with the term, DDs are Debian Developers: official members of the Debian Project who usually have access and permissions to most parts of Debian’s infrastructure.

As a Debian Maintainer (DM), I can ask for upload permission on any package and upload it without needing a sponsor. But to introduce new binary packages, or to hijack ones from another source package, I still need a sponsor. I believe that with a DD account, working in Debian will become smoother and easier.

So I applied for DD about… 6 months ago! After a marathon of Q&A sessions I finally got approved by my AM Joerg Jaspert. Now I still have to wait for further review by the system, perhaps I can get the account in November.

Big thanks to Hans-Christoph Steiner, Markus Koschany and Emmanuel Bourg who advocated me, and my AM Joerg Jaspert.

Voidbuilder Release 0.2.0

Voidbuilder is a simple program that mimics pbuilder but uses Docker as the isolation engine. I have been using it privately and am quite satisfied.

Last month I released a 0.2.0 version with the following changes:

  • The login sub-command no longer builds the source-only bundle and this task must be done by the user.
  • One failed hook no longer fails the entire job; instead a message will pop up.

殷啟聰 | Kai-Chung Yan — Blog by seamlik

Philip Chimento: Taking Out the Garbage

Planet GNOME - Tue, 06/11/2018 - 3:07am

From the title, you might think this post is about household chores. Instead, I’m happy to announce that we may have a path to solving GJS’s “Tardy Sweep Problem”.

For more information about the problem, read The Infamous GNOME Shell Memory Leak by Georges Stavracas. This is going to be a more technical post than my previous post on the topic, which was more about the social effects of writing blog posts about memory leaks. So first I’ll recap what the problem is.

Garbage, garbage, everywhere

At the root of the GNOME desktop is an object-oriented technology called GObject. GObjects are reference counted, but not garbage collected. As long as their reference count is nonzero, they are “alive”, and when their reference count drops to zero, they are deleted from memory.

GObject reference counting

Graphical user interfaces (such as a large part of GNOME Shell) typically involve lots of GObjects which all increase each other’s reference count. A diagram for a simple GUI window made with GTK might look like this:

A typical GUI would involve many more objects than this, but this is just for illustrating the problem.

Here, each box is an object in the C program.

Note that these references are all non-directional, meaning that they aren’t really implemented as arrows. In reality it looks more like a list of numbers: Window (1), Box (1), etc. Each object “knows” that it has one reference, but it knows nothing about which other objects own those references. This will become important later.

When the app closes, it drops its reference to the window. The window’s reference count becomes zero, so it is erased. As part of that, it drops all the references it owns, so the reference count of the upper box becomes zero as well, and so on down the tree. Everything is erased and all the memory is reclaimed. This all happens immediately. So far, so good.

Javascript objects

To write the same GUI in a Javascript program, we want each GObject in the underlying C code to have a corresponding Javascript object so that we can interact with the GUI from our Javascript code.

Javascript objects are garbage collected, and the garbage collector in the SpiderMonkey JS engine is a “tracing” garbage collector, meaning that on every garbage collection pass it starts out with objects in a “root set” that it knows are not garbage. It “traces” each of those objects, asking it which other objects it refers to, and keeps tracing each new object until it hits a dead end. Any objects that weren’t traced are considered garbage, and are deleted. (For more information, the Wikipedia article on tracing garbage collection is informative.)

We need to integrate the JS objects and their garbage collection scheme with the GObjects and their reference counting scheme. That looks like this:

The associations between the Javascript objects (“JS”) and the GObjects are bidirectional. That means, the JS object owns a reference to the GObject, meaning the reference count of every GObject in this diagram is 2. The GObject also “roots” the JS object (marks it as unable to be garbage collected) because the JS object may have some state set on it (for example, by writing button._alreadyClicked = false; in JS) that should not be lost while the object is still alive.

The JS objects can also refer to each other. For example, see the rightmost arrow from the window’s JS object to the button’s JS object. The JS code that created this GUI probably contained something like win._button = button;. These references are directional, because the JS engine needs to know which objects refer to which other objects, in order to implement the garbage collector.

Speaking of the garbage collector! The JS objects, unlike the GObjects, are cleaned up by garbage collection. So as long as a JS object is not “rooted” and no other JS object refers to it, the garbage collector will clean it up. None of the JS objects in the above graph can be garbage collected, because they are all rooted by the GObjects.

Toggle references and tardy sweeps

Two objects (G and JS) keeping each other alive equals a reference cycle, you might think. That’s right; as I described it above, neither object could ever get deleted, so that’s a memory leak right there. We prevent this with a feature called toggle references: when a GObject’s reference count drops to 1 we assume that the owner of the one remaining reference is the JS object, and so the GObject drops its reference to the JS object (“toggles down“). The JS object is then eligible for garbage collection if no other JS object refers to it.

(If this doesn’t make much sense, don’t worry. Toggle references are among the most difficult to comprehend code in the GJS codebase. It took me about two years after I became the maintainer of GJS to fully understand them. I hope that writing about them will demystify them for others a bit.)

When we close the window of this GUI, here is approximately what happens. The app drops its references to the GObjects and JS objects that comprise the window. The window’s reference count drops to 1, so it toggles down, dropping one direction of the association between GObject and JS object.

Unlike the GObject-only case where everything was destroyed immediately, that’s all that can happen for now! Everything remains in place until the next garbage collection, because at the top of the object tree is the window’s JS object. It is eligible to be collected because it’s not rooted and no other JS object refers to it.

Normally the JS garbage collector can collect a whole tree of objects at once. That’s why the JS engine needs to have all the information about the directionality of the references.

However, it won’t do that for this tree. The JS garbage collector doesn’t know about the GObjects. So unfortunately, it takes several passes of the garbage collector to get everything. After one garbage collection only the window is gone, and the situation looks like this:

Now, the outermost box’s JS object has nothing referring to it, so it will be collected on the next pass of the garbage collector:

And then it takes one more pass for the last objects to be collected:

The objects were not leaked, as such, but it took four garbage collection passes to get all of them. The problem we previously had, that Georges blogged about, was that the garbage collector didn’t realize that this was happening. In normal use of a Javascript engine, there are no GObjects that behave differently, so trees of objects don’t deconstruct layer by layer like this. So, there might be hours or days in between garbage collector passes, making it seem like that memory was leaked. (And often, other trees would build up in the intervening time between passes.)

Avoiding toggle references

To mitigate the problem Georges implemented two optimizations. First, the “avoid toggle references” patch, which was actually written by Giovanni Campagna several years ago but never finished, made it so that objects don’t start out using the toggle reference system. Instead, only the JS objects hold references to the GObjects. The JS object can get garbage collected whenever nothing else refers to it, and it will drop its reference to the GObject.

A problem then occurs when that wasn’t the last reference to the GObject, i.e. it’s being kept alive by some C code somewhere, and the GObject resurfaces again in JS, for example by being returned by a C function. In this case we recreate the JS object, assuming that it will be identical to the one that was already garbage collected. The only case where that assumption doesn’t hold, is when the JS code sets some state on one of the JS objects. For example, you execute something like myButton._tag = 'foo';. If myButton gets deleted and recreated, it won’t have a _tag property. So in the case where any custom state is set on a JS object, we switch it over to the toggle reference system once again.

In theory this should help, because toggle references cause the tardy sweep problem, so if fewer objects use toggle references, there should be fewer objects collected tardily. However, this didn’t solve the problem, because especially in GNOME Shell, most JS objects have some state on them. And, sadly, it made the toggle reference code even more complicated than it already was.

The Big Hammer

The second optimization Georges implemented was the affectionately nicknamed “Big Hammer”. It checks whether any GObjects toggled down during a garbage collector pass, and if so, restarts the garbage collector a few seconds later. This made CPU performance worse, but would at least make sure that all unused objects were deleted from memory within a reasonable time frame (under a minute, rather than a day.)

Combined with some other memory optimizations, this made GNOME 3.30 quite a lot less memory hungry than its predecessors.

An afternoon at Mozilla

Earlier this year, I had been talking on IRC to Ted Campbell and Steve Fink on the SpiderMonkey team at Mozilla for a while, about various ins and outs of being an external (i.e. not Firefox) user of SpiderMonkey’s JS engine API. Early September I found myself in Toronto, where Ted Campbell is based out of, and I paid a visit to the Mozilla office one afternoon.

I had lunch with Ted and Kannan Vijayan of the SpiderMonkey team where we discussed the current status of external SpiderMonkey API users. Afterwards, we made the plans which eventually became this GitHub repository of examples and best practices for using the SpiderMonkey JS engine outside of Firefox. We have both documentation and code examples there, and more on the way. This is still in progress, but it should be the beginning of a good resource for embedding the JS engine, and the end of all those out-of-date pages on MDN!

I also learned some good practices that I can put to use in GJS. For example, we should avoid using JS::PersistentRooted except as a last resort, because it roots objects by putting them in a giant linked list, which is then traced during garbage collection. It’s often possible to store the objects more efficiently than that, and trace them from some other object, or the context.

Ending the tardy sweeps

In the second half of the afternoon we talked about some of the problems that I had with SpiderMonkey that were specific to GJS. Of course, the tardy sweep problem was one of them.

For advice on that, Ted introduced me to Nika Layzell, an engineer on the Gecko team. We looked at the XPCOM cycle collector and I was surprised to learn that Gecko uses a scheme similar to toggle references for some things. However, rather than GJS sticking with toggle references, she suggested a solution that had the advantage of being much simpler.

In “Avoiding toggle references” above, I mentioned that the only thing standing in the way of removing toggle references, is custom state on the JS objects. If there is custom state, the objects can’t be destroyed and recreated as needed. In Gecko, custom state properties on DOM objects are called “expandos” or “expando properties” and are troublesome in a similar way that they are in GJS’s toggle references.

Nika’s solution is to separate the JS object from the expandos, putting the expandos on a separate JS object which has a different lifetime from the JS object that represents the GObject in the JS code. We can then make the outer JS objects into JS Proxies so that when you get or set an expando property on the JS object, it delegates transparently to the expando object.

Kind of like this:

In the “before” diagram, there is a reference cycle which we have to solve with toggle references, and in the “after” diagram, there is no reference cycle, so everything can simply be taken care of by the garbage collector.

In cases where an object doesn’t have any expando properties set on it, we don’t even need to have an expando object at all. It can be created on demand, just like the JS object. It’s also important to note that the expando objects can never be accessed directly from JS code; the GObject is the sole conduit by which they can be accessed.

Recasting our GUI from the beginning of the post with a tree of GUI elements where the top-level window has an expando property pointing to the bottom-level button, and where the window was just closed, gives us this:

Most of these GObjects don’t even need to have expando objects, or JS objects!

At first glance this might seem to be garbage-collectable all at once, but we have to remember that GObjects aren’t integrated with the garbage collector, because they can’t be traced, they can only have their reference counts decremented. And the JS engine doesn’t allow you to make new garbage in the middle of a garbage collector sweep. So a naive implementation would have to collect this in two passes, leaving the window’s expando object and the button for the second pass:

This would require an extra garbage collector pass for every expando property that referred to another GObject via its JS object. Still a lot better than the previous situation, but it would be nice if we could collect the whole thing at once.

We can’t walk the whole tree of GObjects in the garbage collector’s marking phase; remember, GObject references are nondirectional, so there’s no generic way to ask a GObject which other GObjects it references. What we can do is partially integrate with the marking phase so that when a GObject has only one reference left, we make it so that the JS object traces the expando object directly, instead of the GObject rooting the expando object. Think of it as a “toggle reference lite”. This would solve the above case, but there are still some more corner cases that would require more than one garbage collection pass. I’m still thinking about how best to solve this.

What’s next

All together, this should make the horrible toggle reference code in GJS a lot simpler, and improve performance as well.

I started writing the code for this last weekend. If you would like to help, please get in touch with me. You can help by writing code, figuring out the corner cases, or testing the code by running GNOME Shell with the branch of GJS where this is being implemented. Follow along at issue #217.

Additionally, since I am in Toronto again, I’ll be visiting the Mozilla office again this week, and hopefully more good things will come out of that!


Thanks to Ted Campbell, Nika Layzell, and Kannan Vijayan of Mozilla for making me feel welcome at Mozilla Toronto, and taking some time out of their workday to talk to me; and thanks to my employer Endless for letting me take some time out of my workday to go there.

Thank you to Ted Campbell and Georges Stavracas for reading and commenting on a draft version of this post.

The diagrams in this post were made with svgbob, a nifty tool; hat tip to Federico Mena Quintero.

How to run CEWE photo creator on Debian

Planet Debian - Mon, 05/11/2018 - 8:28pm


This post describes how I debugged an issue with a proprietary program. I hope it will give you some hints on how to proceed should you face a similar issue. If you’re in a hurry, you can read the TL;DR version at the end.

After the summer vacations, I decided to offer a photo-book to my mother. I searched for an open-source solution, but the printed results were lackluster.

Unfortunately, the only viable option was to use a professional service. Some of these services offer a web application to create photo books, but this is painful to use on a slow DSL line. Other services provide a program named CEWE. This proprietary program can be downloaded for Windows, Mac and, lo and behold: Linux!

The download goes quite fast as the downloaded program is a Perl script that does the actual download. I would have preferred a proper Debian package, but at least Linux amd64 is supported.

Once installed, the CEWE program is available as an executable and a bunch of shared libraries.

This program works quite well to create a photo album. I won’t go into the details there.

I ran into trouble when trying to connect the application to the service site to order the photo-book: the connection fails with a cryptic message “error code 10000”.

Commercial support was not much help, as they insisted that I check my proxy settings. I downloaded CEWE again from another photo service. The new CEWE installation gave me the same error. This showed that the issue was on my side and not on the server’s side.

Given that the error occurred quite fast when trying to connect, I guessed that the connection setup was going south. Since the URL shown in the installation script began with https, I had to check for SSL issues.

I checked certificate issues: curl had no problem connecting to the server mentioned in the Perl script. Wireshark showed that the connection to the server was reset by the server quite fast. I wondered which version of SSL was used by CEWE and ran ldd. To my surprise, I found that ldd did not list libssl. Something weird was going on: SSL was required but CEWE was not linked to libssl…

I used another trick: explore all the menus of the application. This was a good move, as I found a checkbox to enable debug reports in CEWE in the “Options -> paramètres -> Service” menu (that may be “Options -> Parameters -> Support” in English CEWE). When set, debug traces are also shown on CEWE’s standard output.

And, somewhere in the debug traces, I found:

W (2018-10-30T18:36:37.143) [ 0] ==> QSslSocket: cannot resolve SSLv3_client_method <==

So CEWE was looking for SSL symbols even though ldd did not require libssl…

I guessed that CEWE was using dlopen to open the SSL library. But which file was opened by dlopen?

Most likely, the developers who wrote the call to dlopen did not want to handle file names with an so version (i.e. like, and added code to open directly. This file is provided by the libssl-dev package, which was already installed on my system.

But wait, CEWE was probably written for Debian stable, with an older libssl. I tried libssl1.0-dev… which conflicts with libssl-dev. Oh well, I can live with that for a while…

And that was it ! With libssl1.0-dev installed, CEWE was able to connect to the photo service web site without problems.

So here’s the TL;DR; version. To run CEWE on Debian, run:

sudo apt install libssl1.0-dev

Last but not least, here are some suggestions for CEWE:

  • use libssl1.1, as libssl1.0 is deprecated and will be removed from Debian
  • place the debug checkbox in the “System” widget. This widget was the first I opened when I began troubleshooting; “Service” does not mean much to me. Having this checkbox in both the “Service” and “System” widgets would do no harm

All the best

[ Edit: I first blamed CEWE for loading libssl in a non-standard way. libssl is actually loaded by QtNetwork. Depending on the way Qt is built, SSL is either disabled (-no-openssl option), loaded by dlopen (default) or loaded with dynamic linking (-openssl-linked). The way Qt is built is CEWE choice. Thanks Uli Schlachter for the heads-up]


dod — Dominique Dumont's Blog

The Best flake8 Extensions for your Python Project

Planet Debian - Mon, 05/11/2018 - 11:00am

In the last blog post about coding style, we dissected what the state of the art was regarding coding style check in Python.

As we've seen, Flake8 is a wrapper around several tools and is extensible via plugins, meaning that you can add your own checks. I'm a heavy user of Flake8 and rely on a few plugins to extend its coverage of common programming mistakes in Python. Here's the list of the ones I can't work without. As a bonus, you'll find at the end of this post a sample of my go-to tox.ini file.


The name is quite explicit: this extension checks the order of your import statements at the beginning of your files. By default, it uses a style that I enjoy, which looks like:

import os
import sys

import requests
import yaml

import myproject
from myproject.utils import somemodule

The builtin modules are grouped first. Then comes a group for each third-party module that is imported. Finally, the last group contains the modules of the current project. I find this way of organizing imports quite clear and easy to read.

To make sure flake8-import-order knows the name of your project's module, you need to specify it in tox.ini with the application-import-names option.

If you beg to differ, you can use any of the other styles that flake8-import-order offers by default by setting the import-order-style option. You can obviously provide your own style.


The flake8-blind-except extension checks that no except statement is used without specifying an exception type. The following excerpt is, therefore, considered invalid:

    do_something()

Using except without any exception type specified is considered bad practice as it might catch unwanted exceptions. It forces the developer to think about what kind of errors might happen and should really be caught.

In the rare case any exception should be caught, it's still possible to use except Exception anyway.


The flake8-builtins plugin checks that there is no name collision between your code and the Python builtin variables.

For example, this code would trigger an error:

def first(list):
    return list[0]

As list is a builtin in Python (to create a list!), shadowing its definition by using list as the name of a parameter in a function signature would trigger a warning from flake8-builtins.

While the code is valid, it's a bad habit to override Python builtin functions. It might lead to tricky errors: in the above example, if you ever need to call list(), you won't be able to.


This module is handy, as it still slaps my fingers once in a while. When using the logging module, it prevents writing this kind of code:

"Hello %s" % mystring)

While this works, it's suboptimal as it forces the string interpolation. If the logger is configured to print only messages with a logging level of warning or above, doing a string interpolation here is pointless.

Therefore, one should instead write:

"Hello %s", mystring)

Same goes if you use format to do any formatting.

Be aware that, contrary to other flake8 modules, this one does not enable its checks by default. You'll need to add enable-extensions=G to your tox.ini file.


The flake8-docstrings module checks the content of your Python docstrings for compliance with PEP 257. This PEP is full of small details about formatting your docstrings the right way, details that would be hard to enforce without such a tool. A simple example would be:

class Foobar:
    """A foobar"""

While this seems valid, the docstring is missing a period at the end.
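With the terminating period added, the class passes the PEP 257 check:

```python
class Foobar:
    """A foobar."""


print(Foobar.__doc__)  # prints A foobar.
```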

Trust me, especially if you are writing a library that is consumed by other developers, this is a must-have.


This extension is a good complement to flake8-docstrings: it checks that the content of your docstrings is valid RST. It's a no-brainer, so I'd install it without question. Again, if your project exports a documented API that is built with Sphinx, this is a must-have.

My standard tox.ini

Here's the standard tox.ini excerpt that I use in most of my projects. You can copy-paste it and adapt it to your needs.

[testenv:pep8]
deps = flake8
       flake8-import-order
       flake8-blind-except
       flake8-builtins
       flake8-docstrings
       flake8-rst-docstrings
       flake8-logging-format
commands = flake8

[flake8]
exclude = .tox
# If you need to ignore some error codes in the whole source code
# you can write them here
# ignore = D100,D101
show-source = true
enable-extensions = G
application-import-names = <myprojectname>

Before disabling an error code for your entire project, remember that you can force flake8 to ignore a particular instance of the error by adding the # noqa tag at the end of the line.
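A sketch of how the tag looks in practice (the error codes shown here are merely illustrative):

```python
import re  # noqa: F401  -- unused import, deliberately kept and silenced

total = 1+1  # noqa: E226 -- style complaint acknowledged for this line only
print(total)  # prints 2
```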

If you have any flake8 extension that you think is useful, please let me know in the comment section!

Julien Danjou

Jono Bacon: My Clients Are Hiring Community Roles: Corelight, Scality, and Solace

Planet Ubuntu - Hën, 05/11/2018 - 7:29pd

One of the things I love about working with such a diverse range of clients is helping them to shape, source, and mentor high-quality staff to build and grow their communities.

Well, three of my clients, Corelight, Scality, and Solace, are all hiring community staff for their teams. I know many of you work in community management, so I always want to share new positions here in case you want to apply. If these look interesting, you should apply via the role description; don't send me your resume. If we know each other (as in, we are friends/associates), feel free to reach out to me if you have questions.

(These are listed alphabetically based on the company name)

Corelight Director of Community

See the role here

Corelight are doing some really interesting work. They provide security solutions based around the Bro security monitor, and they invest heavily in that community (hiring staff, sponsoring events, producing code and more). Corelight are very focused on open source and being good participants in the Bro community. This role will not just serve Corelight but also support and grow the Bro community.

Scality Technical Community Manager

See the role here

I started working with Scality a while back with the focus of growing their open source Zenko community. As I started shaping the community strategy with them, we hired for the Director of Community role there, and my friend Stefano Maffulli, who had done great work at DreamHost and OpenStack, got it.

Well, Stef needs to hire someone for his team, and this is a role with a huge amount of potential. It will be focused on building, fostering, and growing the Zenko community, producing technical materials, working with developers, speaking, and more. Stef is a great guy and will be a great manager to work for.

Solace Director Of Community and Developer Community Evangelist

Solace have built a lightning-fast infrastructure messaging platform and they are building a community focused on supporting developers who use their platform. They are a great team, and are really passionate about not just building a community, but doing it the right way.

They are hiring for two roles. One will be leading the overall community strategy and delivery and the other will be an evangelist role focused on building awareness and developer engagement.

All three of these companies are doing great work, and really focused on building community the right way. Check out the roles and best of luck!

The post My Clients Are Hiring Community Roles: Corelight, Scality, and Solace appeared first on Jono Bacon.

Stephen Michael Kellat: Writing Up Plan B

Planet Ubuntu - Hën, 05/11/2018 - 12:21pd

With the prominence of things like Liberapay and Patreon as well as, I have had to look at the tax implications of them all.  There is no single tax regime on this planet.  Developers and other freelancers who might make use of one of these services within the F/LOSS frame of reference are frequently not within the USA frame of reference.  That makes a difference.


I also have to state at the outset that this does not constitute legal advice.  I am not a lawyer.  I am most certainly not your lawyer.  If anything these recitals are my setting out my review of all this as being “Plan B” due to continuing high tensions surrounding being a federal civil servant in the United States.  With an election coming up Tuesday where one side treats it as a normal routine event while the other is regarding it as Ragnarok and is acting like humanity is about to face an imminent extinction event, changing things up in life may be worthwhile.


An instructive item to consider is Internal Revenue Service Publication 334 Tax Guide for Small Business (For Individuals Who Use Schedule C or C-EZ).  The current version can be found online at  Just because you receive money from people over the Internet does not necessarily mean it is free from taxation.  Generally the income a developer, freelance documentation writer, or a freelancer in general might receive from a Liberapay or appears to fall under “gross receipts”.  


A recent opinion of the United States Tax Court (Felton v. Commissioner, T.C. Memo 2018-168) discusses the issue of “gift” for tax purposes rather nicely in comparison to what Liberapay mentions in its FAQ.  You can find the FAQ at  The opinion can be found at  After reading the discussion in Felton, I remain assured that in the United States context anything received via Liberapay would have to be treated as gross receipts in the United States.  The rules are different in the European Union where Liberapay is based and that’s perfectly fine.  In the end I have to answer to the tax authorities in the United States.


The good part about reporting matters on Schedule C is that it preserves participation in Social Security and allows a variety of business expenses and deductions to be taken.  Regular wage-based employees pay into Social Security via the FICA tax.  Self-employed persons pay into Social Security via SECA tax.


Now, there are various works I would definitely ask for support if I left government.  Such includes:


  • Freelance documentation writing

  • Emergency Management/Homeland Security work under the aegis of my church

  • Podcast production

  • Printing & Publishing


For podcast production, general news reviews would be possible.  Going into actual entertainment programming would be nice.  There are ideas I’m still working out.


Printing & Publishing would involve getting small works into print on a more rapid cycle in light of an increasingly censored Internet.  As the case of shows, you can have one of your users do something horrible but not actually do anything as a site but still have all your hosting partners withdraw service so as to knock you offline.  Outside the context of the USA, total shutdowns of access to the Internet still occur from time to time in other countries.


Emergency Management comes under the helping works of the church.


As to documentation writing, I used to write documentation for Xubuntu.  I want to do that again.


As to the proliferation of codes of conduct that are appearing everywhere, I can only offer the following statement:


“I am generally required to obey the United States Constitution and laws of the United States of America, the Constitution of the State of Ohio and Ohio’s laws, and the orders of any commanding officers appointed to me as a member of the unorganized militia (Ohio Revised Code 5923.01(D), Title 10 United States Code Section 246(b)(2)).  Codes of Conduct adopted by projects and organizations that conflict with those legal responsibilities must either be disregarded or accommodations must otherwise be sought.”


So, that’s “Plan B”.  The dollar amounts remain flexible at the moment as I’m still waiting for matters to pan out at work.  If things turn sour at my job, I at least have plans to hit the ground running seeking contracts and otherwise staying afloat.



Santiago Zarate: gentoo eix-update failure

Planet Ubuntu - Dje, 04/11/2018 - 1:00pd

If you are having the following error on your Gentoo system:

Can't open the database file '/var/cache/eix/portage.eix' for writing (mode = 'wb')

Don’t waste your time, simply the /var/cache/eix directory is not present and/or writeable by the eix/portage use

mkdir -p /var/cache/eix
chmod +w /var/cache/eix

The basic story is that eix will drop privileges to the portage user when run as root.

Ahh, the joy of Cloudflare SNI certificates

Planet Debian - Sht, 03/11/2018 - 8:23md

Nice neighbourhood,

For your copy and paste pleasure:

openssl s_client -connect < /dev/null | openssl x509 -noout -text | grep DNS:


03.11.18: Cloudflare fixed this mess somewhat. They now look for SNI servernames and use customer-specific certs. See:

openssl s_client -servername -connect < /dev/null | openssl x509 -noout -text | grep DNS:

(notice the -servername in the above vs. the original command, which will now fail with something like 140246838507160:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:769:)

Daniel Lange's blog

Firefox asking to be made the default browser again and again

Planet Debian - Sht, 03/11/2018 - 8:03md

Firefox on Linux can develop the habit of (rather randomly) asking again and again to be made the default browser. E.g. when started from Thunderbird by clicking a link it asks, but when started from a shell all is fine.

The reason for this is often two (or more) .desktop entries competing with each other.

So, walkthrough: (GOTO 10 in case you are sure to have all the basics right)

update-alternatives --display x-www-browser
update-alternatives --display gnome-www-browser

should both show firefox for you. If not, run

update-alternatives --config <entry>

and choose /usr/bin/firefox to fix the preference.

Check (where available)


that the "Internet Browser" is "Firefox".

Check (where available)


that anything containing "html" points to Firefox (or is left at a non-user set default).

Check (where available)

xdg-settings get default-web-browser

that you get firefox.desktop. If not, run

xdg-settings set default-web-browser firefox.desktop

If you are running Gnome, check

xdg-settings get default-url-scheme-handler http

and the same for https.



sensible-editor ~/.config/mimeapps.list

and remove all entries that contain something like userapp-Firefox-<random>.desktop.


find ~/.local/share/applications -iname "userapp-firefox*.desktop"

and delete these files or move them away.


Once you have it working again, consider disabling the option for Firefox to check whether it is the default browser, because it will otherwise create those pesky userapp-Firefox-<random>.desktop files again.

Configuring Linux is easy, innit?

Daniel Lange's blog

My Debian Activities in October 2018

Planet Debian - Sht, 03/11/2018 - 5:15md

FTP master

This month I accepted 211 packages, which is almost the same amount as last month. On the other hand, I was a bit reluctant and rejected only 36 uploads. The overall number of packages that got accepted this month was 370.

Debian LTS

This was my fifty-second month doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 30h. During that time I did LTS uploads or prepared security uploads of:

  • [DLA 1555-1] libmspack security update for two CVEs
  • [DLA 1556-1] paramiko security update for two CVEs
  • [DLA 1557-1] tiff security update for three CVEs
  • [DLA 1558-1] ruby2.1 security update for two CVEs
  • [DSA 4325-1] mosquitto security update for four CVEs
  • #912159 for libmspack and two CVEs in Stretch

I could also mark all emerging CVEs of wireshark as not affected for Jessie. I prepared a debdiff for ten CVEs affecting tiff in Stretch and sent it to the security team and the maintainer. Unfortunately it did not result in an upload yet.

I also worked on imagemagick and expect an upload soon.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the fifth ELTS month.

During my allocated time I uploaded:

  • ELA-52-1 for net-snmp

There was also one CVE for the python package requests, that could be marked as not-affected. The version in Wheezy did contain the correct code, whereas later versions contained the issue.

As in LTS, I worked on wireshark (marking all CVEs as not affected for Wheezy) and tiff3, but did not do an upload yet.

Moreover, this was a strange month with regard to the packages I selected to work on. So please, everybody, check twice whether to add an entry to ela-needed.txt.

As in LTS, I also did some days of frontdesk duties.

Other stuff

I uploaded new upstream versions of …

Further I continued to sponsor some glewlwyd packages for Nicolas Mora. From my point of view he should become a DM now, so he started his NM process.

alteholz » planetdebian

Daniel García Moreno: GNOME Translation Editor 3.30.0

Planet GNOME - Sht, 03/11/2018 - 12:53md

I'm pleased to announce the new GNOME Translation Editor release. This is the new release of the well-known Gtranslator. I talked about the Gtranslator resurrection some time ago and this is the result:

This new release isn't in Flathub yet, but I'm working on it, so we'll have a flatpak version really soon. In the meantime you can test it using the GNOME nightly flatpak repo.

New release 3.30.0

This release doesn't add new functionality. The main change is in the code, and in the interface.

We've removed the toolbar and moved the main useful buttons to the headerbar. We've also removed the statusbar and replaced it with a new widget that shows the document's translation progress.

The plugin system and the dockable window system have been removed to simplify the code and make the project more maintainable. The only plugin that is maintained for now is the translation memory, which is now integrated. I'm planning to migrate other useful plugins, but that's for the future.

Other minor changes we've made are in the message table: we've removed some columns and now show only two, the original message and the translated one, and we use colors and text styles to show fuzzy and untranslated status.

The main work is a full code modernization: we now use meson to build, we have flatpak integration, and this simplifies development because gtranslator now works by default in GNOME Builder without the need to install development dependencies.

There are other minor changes, like the new look when you open the app without any file:

Or the new language selector that autofill all the profile fields using the language:

And for sure we've tried to fix most important bugs:

New name and new Icon

Following modern GNOME app naming, we've renamed the app from Gtranslator to GNOME Translation Editor. Internally we'll continue with the gtranslator name, so the app binary is gtranslator, but for the final user the name will be Translation Editor.

And following the App Icon Redesign Initiative, we have a new icon that follows the new HIG.


I'm not doing this alone. I became the gtranslator maintainer because of Daniel Mustieles' push to have a modern tool for GNOME translators, done with GNOME technology and fully functional.

The GNOME Translation Editor is a project done by the GNOME community; there are other people helping with code, documentation, testing, design ideas and much more, and any help is always welcome. If you're interested, don't hesitate: come to the GNOME GitLab and collaborate with this great project.

And maybe it's a bit late, but I've published a project to the, so maybe someone can work on this as an intern for three months. I'll try to get more people involved here following Outreachy and maybe GSoC, so if you're a student, now is the right time to start contributing to be able to be selected for next year's internship programs.

Ismael Olea: Running EPF Composer in Fedora Linux, v3

Planet GNOME - Pre, 02/11/2018 - 11:30md

Well, I finally succeeded with a native installation of the EPF (Eclipse Process Framework) Composer on my Linux system, thanks to help from Bruce MacIsaac and the development team. I'm happy. This is not trivial, since EPFC is a 32-bit application running on a modern 64-bit Linux system.

My working configuration:

In my system obviously I can install all rpm packages using DNF. For different distros look for the equivalent packages.

Maybe I’m missing some minor dependency, I didn’t checked in a clean instalation.

Download EPFC and xulrunner and extract each one in the path of your choice. I'm using xulrunner-10.0.2.en-US.linux-i686/ as the directory name to be more meaningful.

The contents of epf.ini file:

-data @user.home/EPF/workspace.152
-vmargs
-Xms64m
-Xmx512m
-Dorg.eclipse.swt.browser.XULRunnerPath=/PATHTOXULRUNNER/xulrunner-10.0.2.en-US.linux-i686/

I had to write the full system path for the -Dorg.eclipse.swt.browser.XULRunnerPath property to get Eclipse recognize it.

And to run EPF Composer:

cd $EPF_APP_DIR
epf -vm /usr/lib/jvm/java-1.8.0-oracle-

If you want to do some non-trivial work with Composer on Linux, you'll need xulrunner, since it's used extensively for editing contents.

I had success running the Windows EPF version using Wine and I can do some work with it, but at some point the program gets unstable and needs a restart. Another very interesting advantage of running natively is that I can use the GTK+ filechooser, which is really a lot better than the simpler native Java one.

I plan to practice a lot of modeling with EPF Composer in the coming weeks. Hopefully I'll share some new artifacts authored by me.

Cross compiling CMake-based projects using Ubuntu/Debian's multi arch

Planet Debian - Pre, 02/11/2018 - 6:20md
As you probably already know, Ubuntu (and then Debian) added Multi-Arch support quite some time ago. This means that you can install library packages from multiple architectures on the same machine.

Thanks to the work of many people, among whom I would like to specially mention Helmut Grohne, we are now able to cross compile Debian packages using standard sbuild chroots. He was even kind enough to provide me with some numbers:

We have 28790 source packages in Debian unstable.
Of those, 13358 (46.3%) build architecture-dependent binary packages.
Of those, 7301 (54.6%) have satisfiable cross Build-Depends.
Of those, 3696 (50.6% of buildable, 27.6% of sources) were attempted.
Of those, 2695 (72.9% of built, 36.9% of buildable, 20.1% of sources) were successful.
633 bugs affecting 772 packages (7.23% of 10663 unsuccessful) are reported.
Now I asked myself if I could use this to cross compile the code I'm working on without the need of doing a full Debian package build.

My projects uses CMake, so we can cross compile by providing a suitable CMAKE_TOOLCHAIN_FILE.

And so the first question is:

How do we create the necessary file using what Multi-Arch brings to our table?
I asked Helmut and he not only provided me with lots of tips, he also provided me with the following script, which I modified a little:

Now we can run the script, providing it with the desired host arch, and voilà: we have our toolchain file.


#!/bin/sh
#set -x

DEB_HOST_GNU_TYPE=$(dpkg-architecture -f "-a$1" -qDEB_HOST_GNU_TYPE)
DEB_HOST_GNU_CPU=$(dpkg-architecture -f "-a$1" -qDEB_HOST_GNU_CPU)
case "$(dpkg-architecture -f "-a$1" -qDEB_HOST_ARCH_OS)" in
        linux) system_name=Linux; ;;
        kfreebsd) system_name=kFreeBSD; ;;
        hurd) system_name=GNU; ;;
        *) exit 1; ;;
esac

cat > "cmake_toolchain_$1.cmake" <<EOF
# Use it while calling CMake:
#   mkdir build; cd build
#   cmake -DCMAKE_TOOLCHAIN_FILE="../cmake_toolchain_$1.cmake" ../
set(CMAKE_SYSTEM_NAME "$system_name")
set(CMAKE_SYSTEM_PROCESSOR "$DEB_HOST_GNU_CPU")
EOF

Can we improve this?

Helmut mentioned that meson provides debcrossgen, a script that automates this step. Meson is written in Python, so it only needs to know the host architecture to create the necessary definitions.

CMake is not interpreted, but maybe it has a way to know the host arch in advance. If this is true, maybe a helper could be added to help in the process. Ideas (or even better, patches/code!) welcome.

Lisandro Damián Nicanor Pérez Meyer

Jonathan Riddell: Red Hat and KDE

Planet Ubuntu - Pre, 02/11/2018 - 5:36md

By a strange coincidence the news broke this morning that RHEL is deprecating KDE. The real surprise here is that RHEL supported KDE at all.  Back in the 90s they were entirely against KDE and put lots of effort into our friendly rival Gnome.  It made some sense, since at the time Qt was under a not-quite-free licence and there's no reason why a company would want to support another company's lock-in as well as ship incompatible licences.  By the time Qt became fully free they were firmly behind Gnome.  Meanwhile Rex and a team of hard-working volunteers packaged it anyway and gained many users.  When Red Hat was split into the all-open Fedora and the closed RHEL, Fedora was able to embrace KDE as it should, but at some point the Fedora Next initiative again put KDE software in second place. Meanwhile RHEL did use Plasma 4 and hired a number of developers to help us in our time of need, which was fabulous, but all except one left some time ago and nobody expected it to continue for long.

So the deprecation is not really new or news, and being picked up by the press is poor timing for Red Hat; it's unclear if they want some distraction from the IBM news or it's just The Register playing around.  The community has always been much better at supporting our software for their users; maybe now the community-run EPEL archive can include modern Plasma 5 instead of being stuck on the much poorer previous release.

Plasma 5 is now lightweight and feature-full.  We get new users and people rediscovering us every day who report it as the most usable and pleasant way to run their day.  From my recent trip to Barcelona I can see how a range of different users, from universities to schools to government, consider Plasma 5 the best way to support a large user base.  We now ship on high-end devices such as the KDE Slimbook down to the low-spec value device that is the Pinebook.  Our software leads the field in many areas, such as the video editor Kdenlive, the painting app Krita or the educational suite GCompris.  Our range of projects is wider than ever before, with the textbook project WikiToLearn allowing new ways to learn, and we ship our own software through KDE Windows, Flatpak builds and KDE neon with debs, snaps and Docker images.

It is a pity that RHEL users won't be there to enjoy it by default. But then again, they never really were. KDE is collaborative, open, privacy-aware and has a vast scope of interesting projects; after 22 years we continue to push the boundaries of what is possible and fun.


Diego Turcios: Getting Docker Syntax In Gedit

Planet Ubuntu - Pre, 02/11/2018 - 5:18md
I have been working with Docker over the last few days and encountered the syntax issue with gedit: just pure plain text. So I did a quick search and found an easy way to fix this: Jasper J.F. van den Bosch's repository on GitHub has the solution to this simple problem.
We need to download the docker.lang file, available here:

After that, go to the folder where you saved the file and run the following command.
sudo mv docker.lang /usr/share/gtksourceview-3.0/language-specs/

If this doesn't work you can try the following:

mv docker.lang ~/.local/share/gtksourceview-3.0/language-specs/

And that's all!

Screenshot of gedit with no docker lang

Screenshot of gedit with docker lang

