
Peter Hutterer: meson fails with "ERROR: Native dependency 'foo' not found" - and how to fix it

Planet GNOME - Mon, 09/07/2018 - 1:46am

A common error when building from source is something like the error below: ERROR: Native dependency 'foo' not found
or a similar warning ERROR: Invalid version of dependency, need 'foo' ['>= 1.1.0'] found '1.0.0'.
Seeing that can be quite discouraging, but luckily, in many cases it's not too difficult to fix. As usual, there are many ways to get to a successful result; I'll describe what I consider the simplest.

What does it mean? Dependencies are simply libraries or tools that meson needs to build the project. Usually these are declared like this in the project's meson.build file:

dep_foo = dependency('foo', version: '>= 1.1.0')
In human words: "we need the development headers for library foo (or 'libfoo') of version 1.1.0 or later". meson uses the pkg-config tool in the background to resolve that request. If we require package foo, pkg-config searches for a file foo.pc in the following directories:
  • /usr/lib/pkgconfig,
  • /usr/lib64/pkgconfig,
  • /usr/share/pkgconfig,
  • /usr/local/lib/pkgconfig,
  • /usr/local/share/pkgconfig
The error message simply means pkg-config couldn't find the file and you need to install the matching package from your distribution or from source.
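If you want to check by hand where pkg-config is looking and whether a dependency can be resolved, the tool itself can tell you. A small sketch (here "foo" is a placeholder; substitute the real package name):

```shell
# Skip quietly if pkg-config isn't installed at all.
command -v pkg-config >/dev/null || exit 0
# Print pkg-config's built-in search path:
pkg-config --variable pc_path pkg-config
# Ask whether a dependency can be resolved, with a verbose error if not:
pkg-config --exists --print-errors foo || echo "foo.pc not found"
```

Note that PKG_CONFIG_PATH directories (covered later in this post) are searched in addition to this built-in path.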

An important note here: in most cases we need the development headers of said library; installing just the library itself is not sufficient. After all, we're trying to build against it, not merely run against it.

What package provides the foo.pc file?

In many cases the package is the development version of the package name. Try foo-devel (Fedora, RHEL, SuSE, ...) or foo-dev (Debian, Ubuntu, ...). yum and dnf provide a great shortcut to install any pkg-config dependency:

$> dnf install "pkgconfig(foo)"

$> yum install "pkgconfig(foo)"
will automatically search and install the right package, including its dependencies.
apt-get requires a bit more effort:
$> apt-get install apt-file
$> apt-file update
$> apt-file search --package-only foo.pc
$> apt-get install foo-dev
For those running Arch and pacman, the sequence is:
$> pacman -S pkgfile
$> pkgfile -u
$> pkgfile foo.pc
$> pacman -S extra/foo
Once that's done you can re-run meson and see if all dependencies have been met. If more packages are missing, follow the same process for the next file.

Any users of other distributions - let me know how to do this on yours and I'll update the post.

My version is wrong!

It's not uncommon to see the following error after installing the right package: ERROR: Invalid version of dependency, need 'foo' ['>= 1.1.0'] found '1.0.0'.
Now you're stuck and you have a problem. What this means is that the package version your distribution provides is not new enough to build your software. This is where the simple solutions end and it all gets a bit more complicated - with more potential errors. Unless you are willing to go into the deep end, I recommend moving on and accepting that you can't have the newest bits on an older distribution. Because now you have to build the dependencies from source, and that may in turn require building their dependencies from source, and before you know it you've built 30 packages. If you're willing, read on; otherwise - sorry, you won't be able to run your software today.

Manually installing dependencies

Now you're in the deep end, so be aware that you may see more complicated errors in the process. First of all you need to figure out where to get the source from. I'll now use cairo as an example instead of foo so you can see actual data. On rpm-based distributions like Fedora, run dnf or yum:

$> dnf info cairo-devel # or yum info cairo-devel
Loaded plugins: auto-update-debuginfo, langpacks
Installed Packages
Name : cairo-devel
Arch : x86_64
Version : 1.13.1
Release : 0.1.git337ab1f.fc20
Size : 2.4 M
Repo : installed
From repo : fedora
Summary : Development files for cairo
License : LGPLv2 or MPLv1.1
Description : Cairo is a 2D graphics library designed to provide high-quality
: display and print output.
: This package contains libraries, header files and developer
: documentation needed for developing software which uses the cairo
: graphics library.
The important field here is the URL line - go to that URL and you'll find the source tarballs. That should be true for most projects, but you may need to google for the package name and hope. Search for the tarball with the right version number and download it. On Debian and related distributions, cairo is provided by the libcairo2-dev package. Run apt-cache show on that package:
$> apt-cache show libcairo2-dev
Package: libcairo2-dev
Source: cairo
Version: 1.12.2-3
Installed-Size: 2766
Maintainer: Dave Beckett
Architecture: amd64
Provides: libcairo-dev
Depends: libcairo2 (= 1.12.2-3), libcairo-gobject2 (= 1.12.2-3),[...]
Suggests: libcairo2-doc
Description-en: Development files for the Cairo 2D graphics library
Cairo is a multi-platform library providing anti-aliased
vector-based rendering for multiple target backends.
This package contains the development libraries, header files needed by
programs that want to compile with Cairo.
Description-md5: 07fe86d11452aa2efc887db335b46f58
Tag: devel::library, role::devel-lib, uitoolkit::gtk
Section: libdevel
Priority: optional
Filename: pool/main/c/cairo/libcairo2-dev_1.12.2-3_amd64.deb
Size: 1160286
MD5sum: e29852ae8e8e5510b00b13dbc201ce66
SHA1: 2ed3534d02c01b8d10b13748c3a02820d10962cf
SHA256: a6099cfbcc6bd891e347dd9abc57b7f137e0fd619deaff39606fd58f0cc60d27
In this case it's the Homepage line that matters, but the process of downloading tarballs is the same as above. For Arch users, the interesting line is URL as well:
$> pacman -Si cairo | grep URL
Repository : extra
Name : cairo
Version : 1.12.16-1
Description : Cairo vector graphics library
Architecture : x86_64
Licenses : LGPL MPL

Now to the complicated bit: In most cases, you shouldn't install the new version over the system version because you may break other things. You're better off installing the dependency into a custom folder ("prefix") and point pkg-config to it. So let's say you downloaded the cairo tarball, now you need to run:

$> mkdir $HOME/dependencies/
$> tar xf cairo-someversion.tar.xz
$> cd cairo-someversion
$> autoreconf -ivf
$> ./configure --prefix=$HOME/dependencies
$> make && make install
$> export PKG_CONFIG_PATH=$HOME/dependencies/lib/pkgconfig:$HOME/dependencies/share/pkgconfig
# now go back to original project and run meson again
So you create a directory called dependencies and install cairo there. This will install cairo.pc as $HOME/dependencies/lib/pkgconfig/cairo.pc. Now all you need to do is tell pkg-config that you want it to look there as well - so you set PKG_CONFIG_PATH. If you re-run meson in the original project, pkg-config will find the new version and meson should succeed. If you have multiple packages that all require a newer version, install them into the same path and you only need to set PKG_CONFIG_PATH once. Remember you need to set PKG_CONFIG_PATH in the same shell as you are running configure from.

In the case of dependencies that use meson, you replace autotools and make with meson and ninja:

$> mkdir $HOME/dependencies/
$> tar xf foo-someversion.tar.xz
$> cd foo-someversion
$> meson builddir -Dprefix=$HOME/dependencies
$> ninja -C builddir install
$> export PKG_CONFIG_PATH=$HOME/dependencies/lib/pkgconfig:$HOME/dependencies/share/pkgconfig
# now go back to original project and run meson again

If you keep seeing the version error, the most common problem is that PKG_CONFIG_PATH isn't set in your shell, or doesn't point to the new cairo.pc file. A simple way to check is:

$> pkg-config --modversion cairo
Is the version number the one you installed or the system one? If it is the system one, you have a typo in PKG_CONFIG_PATH; just re-set it. If it still doesn't work, do this:
$> cat $HOME/dependencies/lib/pkgconfig/cairo.pc

Name: cairo
Description: Multi-platform 2D graphics library
Version: 1.13.1

Requires.private: gobject-2.0 glib-2.0 >= 2.14 [...]
Libs: -L${libdir} -lcairo
Libs.private: -lz -lz -lGL
Cflags: -I${includedir}/cairo
If the Version field matches what pkg-config returns, then you're set. If not, keep adjusting PKG_CONFIG_PATH until it works. There is a rare case where the Version field in the installed library doesn't match what the tarball said. That's a defective tarball and you should report this to the project, but don't worry, this hardly ever happens. In almost all cases, the cause is simply PKG_CONFIG_PATH not being set correctly. Keep trying :)

Let's assume you've managed to build the dependencies and want to run the newly built project. The only problem is: because you built against a newer library than the one on your system, you need to point it to use the new libraries.

$> export LD_LIBRARY_PATH=$HOME/dependencies/lib
and now you can, in the same shell, run your project.
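To double-check that the dynamic linker will actually pick up the freshly built libraries, ldd can be run on the binary. A small sketch (./yourapp is a placeholder for your actual project binary):

```shell
# Point the runtime linker at the custom prefix, as described above.
export LD_LIBRARY_PATH=$HOME/dependencies/lib
# Placeholder guards: only run ldd if it and the binary actually exist here.
command -v ldd >/dev/null || exit 0
[ -x ./yourapp ] || exit 0
# Lines for the upgraded dependencies should now resolve into $HOME/dependencies/lib:
ldd ./yourapp | grep dependencies
```

If the grep shows the system path instead, LD_LIBRARY_PATH was not set in the shell you ran the binary from.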

Good luck!

Fixing a broken ESP8266

Planet Debian - Sun, 08/07/2018 - 4:21pm

One of the IoT platforms I’ve been playing with is the ESP8266, which is a pretty incredible little chip with dev boards available for under £4. Arduino and Micropython are both great development platforms for them, but the first board I bought (back in 2016) only had a 4Mbit flash chip. As a result I spent some time writing against the Espressif C SDK and trying to fit everything into less than 256KB so that the flash could hold 2 images and allow over the air updates. Annoyingly, just as I was getting to the point of success with Richard Burton’s rBoot, my device started misbehaving, even when I went back to the default boot loader:

ets Jan 8 2013,rst cause:1, boot mode:(3,6)

load 0x40100000, len 816, room 16
tail 0
chksum 0x8d
load 0x3ffe8000, len 788, room 8
tail 12
chksum 0xcf
ho 0 tail 12 room 4
load 0x3ffe8314, len 288, room 12
tail 4
chksum 0xcf
csum 0xcf

2nd boot version : 1.2
SPI Speed : 40MHz
SPI Mode : DIO
SPI Flash Size : 4Mbit
jump to run user1

Fatal exception (0): epc1=0x402015a4, epc2=0x00000000, epc3=0x00000000, excvaddr=0x00000000, depc=0x00000000
Fatal exception (0): epc1=0x402015a4, epc2=0x00000000, epc3=0x00000000, excvaddr=0x00000000, depc=0x00000000
Fatal exception (0):

(repeats indefinitely)

Various things suggested this was a bad flash. I tried a clean Micropython install, a restore of the original AT firmware backup I’d taken, and lots of different combinations of my own code/the blinkenlights demo and rBoot/Espressif’s bootloader. I made sure my 3.3v supply had enough oomph (I’d previously been cheating and using the built-in FT232RL regulator, which doesn’t have quite enough when the device is fully operational, rather than in UART boot mode, such as doing an OTA flash). No joy. I gave up and moved on to one of the other ESP8266 modules I had, with a greater amount of flash. However I was curious about whether this was simply a case of the flash chip wearing out (various sites claim the cheap ones on some dev boards will die after a fairly small number of programming cycles). So I ordered some 16Mb devices - cheap enough to make it worth trying out, but also giving a useful bump in space.

They arrived this week and I set about removing the old chip and soldering on the new one (Andreas Spiess has a useful video of this, or there’s Pete Scargill’s write up). Powered it all up, ran flash_id to see that it was correctly detected as a 16Mb/2MB device and set about flashing my app onto it. Only to get:

ets Jan 8 2013,rst cause:2, boot mode:(3,3)

load 0x40100000, len 612, room 16
tail 4
chksum 0xfd
load 0x88380000, len 565951362, room 4
flash read err, ets_unpack_flash_code
ets_main.c

Ooops. I had better luck with a complete flash erase (erase_flash) and then a full program of Micropython using --baud 460800 write_flash --flash_size=detect -fm dio 0 esp8266-20180511-v1.9.4.bin, which at least convinced me I’d managed to solder the new chip on correctly. Further experimentation revealed I needed to pass all of the flash parameters to get rBoot entirely happy, and include esp_init_data_default.bin (FWIW I updated everything to v2.2.1 as part of the process):

write_flash --flash_size=16m -fm dio 0x0 rboot.bin 0x2000 rom0.bin \
    0x120000 rom1.bin 0x1fc000 esp_init_data_default_v08.bin

Which gives (at the bootloader's default baud rate of 74880):

ets Jan 8 2013,rst cause:1, boot mode:(3,7)

load 0x40100000, len 1328, room 16
tail 0
chksum 0x12
load 0x3ffe8000, len 604, room 8
tail 4
chksum 0x34
csum 0x34

rBoot v1.4.2 - Flash Size: 16 Mbit
Flash Mode: DIO
Flash Speed: 40 MHz

Booting rom 0.
rf cal sector: 507
freq trace enable 0
rf[112]

Given the cost of the modules it wasn’t really worth my time and energy to actually fix the broken one rather than buying a new one, but it was rewarding to be sure of the root cause. Hopefully this post at least serves to help anyone seeing the same exception messages determine that there’s a good chance their flash has died, and that a replacement may sort the problem.

Jonathan McDowell Noodles' Emptiness

Debian APT upgrade without enough free space on the disk...

Planet Debian - Sun, 08/07/2018 - 12:10pm

Quite regularly, I let my Debian Sid/Unstable chroot stay untouched for a while, and when I need to update it there is not enough free space on the disk for apt to do a normal 'apt upgrade'. I normally would resolve the issue by doing 'apt install <somepackages>' to upgrade only some of the packages in one batch, until the amount of packages to download falls below the amount of free space available. Today, I had about 500 packages to upgrade, and after a while I got tired of trying to install chunks of packages manually. I concluded that I did not have the spare hours required to complete the task, and decided to see if I could automate it. I came up with this small script which I call 'apt-in-chunks':

#!/bin/sh
#
# Upgrade packages when the disk is too full to upgrade every
# upgradable package in one lump. Fetching packages to upgrade using
# apt, and then installing using dpkg, to avoid changing the package
# flag for manual/automatic.

set -e

ignore() {
    if [ "$1" ]; then
        grep -v "$1"
    else
        cat
    fi
}

for p in $(apt list --upgradable | ignore "$@" | cut -d/ -f1 | grep -v '^Listing...'); do
    echo "Upgrading $p"
    apt clean
    apt install --download-only -y $p
    for f in /var/cache/apt/archives/*.deb; do
        if [ -e "$f" ]; then
            dpkg -i /var/cache/apt/archives/*.deb
            break
        fi
    done
done

The script will extract the list of packages to upgrade, try to download the packages needed to upgrade one package, and install the downloaded packages using dpkg. The idea is to upgrade packages without changing the APT mark for the package (i.e. the one recording whether the package was manually requested or pulled in as a dependency). To use it, simply run it as root from the command line. If it fails, try 'apt install -f' to clean up the mess and run the script again. This might happen if the new packages conflict with one of the old packages, which dpkg is unable to remove, while apt can do this.

It takes one option, a package to ignore in the list of packages to upgrade. The option to ignore a package is there to be able to skip the packages that are simply too large to unpack. Today this was 'ghc', but I have run into other large packages causing similar problems earlier (like TeX).
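To illustrate how the script's ignore option works, its ignore() helper simply filters the package list with grep -v. A standalone sketch of its behaviour, safe to run anywhere:

```shell
# Copy of the ignore() helper from the script above.
ignore() {
    if [ "$1" ]; then
        grep -v "$1"
    else
        cat
    fi
}

# With a pattern, matching lines are dropped:
printf 'ghc\nvim\n' | ignore ghc    # prints only "vim"
# Without a pattern, everything passes through unchanged:
printf 'ghc\nvim\n' | ignore
```

Since the argument is passed straight to grep, a regular expression such as 'ghc\|texlive' would skip several packages at once.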

Update 2018-07-08: Thanks to Paul Wise, I am aware of two alternative ways to handle this. The "unattended-upgrades --minimal-upgrade-steps" option will try to calculate upgrade sets for each package to upgrade, and then upgrade them in order, smallest set first. It might be a better option than my above mentioned script. Also, "aptitude upgrade" can upgrade single packages, thus avoiding the need for using "dpkg -i" in the script above.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Petter Reinholdtsen Petter Reinholdtsen - Entries tagged english

Getting Started with Debian Packaging

Planet Debian - Sun, 08/07/2018 - 6:00am

One of my tasks in GSoC involved setting up Thunderbird extensions for the user. Some of the more popular add-ons, like ‘Lightning’ (calendar organiser), already have a Debian package.

Another important add-on is ‘Cardbook’, which is used to manage contacts for the user based on the CardDAV and vCard standards. But it doesn’t have a package yet.

My mentor, Daniel, motivated me to create a package for it and upload it. That would ease the installation process, as it could then be installed through apt-get. This blog post describes how I learned and created a Debian package for CardBook from scratch.

Since I was new to packaging, I did extensive research on the basics of building a package from source code and checked if the license was DFSG compatible.

I learned from various Debian wiki guides like ‘Packaging Intro’, ‘Building a Package’ and blogs.

I also studied the amd64 files included in Lightning extension package.

The package I created can be found here.

Debian Package

Creating an empty package

I started by creating a debian directory by using the dh_make command:

# Empty project folder
$ mkdir -p Debian/cardbook

# create files
$ dh_make \
> --native \
> --single \
> --packagename cardbook_1.0.0 \
> --email

Some important files like control, rules, changelog, copyright are initialized with it.

The list of all the files created:

$ find /debian
debian/
debian/rules
debian/preinst.ex
debian/
debian/manpage.1.ex
debian/install
debian/source
debian/source/format
debian/cardbook.debhelper.lo
debian/manpage.xml.ex
debian/README.Debian
debian/postrm.ex
debian/prerm.ex
debian/copyright
debian/changelog
debian/manpage.sgml.ex
debian/cardbook.default.ex
debian/README
debian/cardbook.doc-base.EX
debian/README.source
debian/compat
debian/control
debian/debhelper-build-stamp
debian/menu.ex
debian/postinst.ex
debian/cardbook.substvars
debian/files

I gained an understanding of the dpkg package management program in Debian and its use to install, remove and manage packages.

I built an empty package with dpkg commands. This created an empty package with four files, namely .changes, .deb, .dsc and .tar.gz:
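One common way to run this build step (an assumption on my part; the post doesn't name the exact command) is dpkg-buildpackage, which produces exactly those four artifacts:

```shell
# Sketch only: run from inside the project directory, and only if a
# debian/ directory and the tool are actually present.
[ -d debian ] || exit 0
command -v dpkg-buildpackage >/dev/null || exit 0
# -us/-uc: do not sign the .dsc and .changes files
dpkg-buildpackage -us -uc
```

The resulting .changes, .deb, .dsc and .tar.gz files land in the parent directory.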

  • .dsc: contains the changes made and the signature
  • .deb: the main package file, which can be installed
  • .tar.gz: the tarball containing the source package

The process also created the README and changelog files under /usr/share/doc. They contain the essential notes about the package like description, author and version.

I installed the package and checked the installed package contents. My new package mentions the version, architecture and description!

$ dpkg -L cardbook /usr /usr/share /usr/share/doc /usr/share/doc/cardbook /usr/share/doc/cardbook/README.Debian /usr/share/doc/cardbook/changelog.gz /usr/share/doc/cardbook/copyright

Including CardBook source files

After successfully creating an empty package, I added the actual CardBook add-on files inside the package. CardBook’s codebase is hosted here on GitLab. I included all the source files inside another directory and told the build command which files to include in the package.

I did this by creating a file debian/install using the vi editor and listing the directories that should be installed. In this process I spent some time learning to use Linux terminal-based text editors like vi. It helped me become familiar with editing, creating new files and shortcuts in vi.
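As an illustration only (the paths below are my assumption, not CardBook's actual layout), a debian/install file simply lists a source path and its destination directory under /, one pair per line:

```
# hypothetical debian/install: <source path> <destination under />
chrome usr/share/cardbook
chrome.manifest usr/share/cardbook
```

dh_install reads this file during the build and copies each listed path into the package staging area.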

Once this was done, I updated the package version in the changelog file to document the changes that I had made:

$ dpkg -l | grep cardbook
ii  cardbook  1.1.0  amd64  Thunderbird add-on for address book

Changelog file after updating Package

After rebuilding it, dependencies and detailed description can be added if necessary. The Debian control file can be edited to add the additional package requirements and dependencies.

Local Debian Repository

Without creating a local repository, CardBook could be installed with:

$ sudo dpkg -i cardbook_1.1.0.deb

To actually test the installation of the package, I decided to build a local Debian repository. Without it, the apt-get command would not locate the package, as it has not been uploaded to the Debian archives on the net.

For configuring a local Debian repository, I copied my package (.deb) and a Packages.gz index file into a /tmp location.

Local Debian Repo

To make it work, I learned about the apt configuration and where it looks for files.

I researched a way to add my file location to the apt configuration. Finally I could accomplish the task by adding a *.list file with the package’s path in APT and updating the ‘apt-cache’ afterwards.

Hence, the latest CardBook version could be successfully installed by apt-get install cardbook.
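For reference, a minimal version of such a local repository can be sketched like this (the paths and the [trusted=yes] option are my assumptions; dpkg-scanpackages is shipped in the dpkg-dev package):

```shell
# Skip quietly if dpkg-dev isn't installed.
command -v dpkg-scanpackages >/dev/null || exit 0
repo=$(mktemp -d)
cp cardbook_1.1.0.deb "$repo" 2>/dev/null || true   # skip if the .deb isn't here
cd "$repo"
# Generate the Packages.gz index APT needs:
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
# Then, as root, point APT at the directory and update:
#   echo "deb [trusted=yes] file:$repo ./" > /etc/apt/sources.list.d/local.list
#   apt-get update && apt-get install cardbook
```

The *.list line is what makes apt-get able to locate the package, as described above.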

CardBook Installation through apt-get

Fixing Packaging errors and bugs

My mentor, Daniel, helped me a lot during this process and guided me on how to proceed further with the package. He told me to use Lintian to fix common packaging errors and then dput to finally upload the CardBook package.

Lintian is a Debian package checker which finds policy violations and bugs. It is one of the most widely used tools by Debian maintainers to automate checks against Debian policies before uploading a package.

I have uploaded the second updated version of the package in a separate branch of the repository on Salsa here inside Debian directory.

I installed Lintian from backports and learned to use it on a package to fix errors. I researched the abbreviations used in its errors and how to get a detailed response from lintian commands:

$ lintian -i -I --show-overrides cardbook_1.2.0.changes

Initially, on running the command on the .changes file, I was surprised to see a large number of errors, warnings and notes displayed!

Brief errors after running Lintian on Package

Detailed Lintian errors (1)

Detailed Lintian errors (2) and many more…

I spent some days fixing errors related to Debian package policy violations. I had to dig into every policy and Debian rule carefully to eradicate even a simple error. For this I referred to various sections of the Debian Policy Manual and the Debian Developer’s Reference.

I am still working on making it flawless and hope to upload it soon!

I would be grateful if people from the Debian community who use Thunderbird could help fix these errors.

Minkush Jain Minkush Jain

New Software::LicenseMoreUtils Perl module

Planet Debian - Sat, 07/07/2018 - 7:25pm


The Debian project has rather strict requirements regarding package licenses. One of these requirements is to provide a copyright file mentioning the license of the files included in a Debian package.

Debian also recommends providing this copyright information in a machine-readable format that contains the whole text of the license(s) or a summary pointing to a pre-defined location on the file system (see this example).

cme and Config::Model::Dpkg::Copyright help with this task using the Software::License module. But this module lacks the following features to properly support the requirements of Debian packaging:

  • license summary
  • support for clauses like “GPL version 2 or (at your option) any later version”

Long story short, I’ve written Software::LicenseMoreUtils to provide these missing features. This module is a wrapper around Software::License and has the same API.

Adding license summaries for Debian requires only updating this YAML file.

This module was written for Debian while keeping other distros in mind. Debian derivatives like Ubuntu or Mint are supported. Adding license summaries for other Linux distributions is straightforward. Please submit a bug or a PR to add support for other distributions.

For more details, please see:


All the best

dod Dominique Dumont's Blog

wordpress 4.9.7

Planet Debian - Sat, 07/07/2018 - 2:50pm

No sooner had I patched WordPress 4.9.5 to fix the arbitrary unlink bug than I realised there is a WordPress 4.9.7 out there. This release (just out for Debian, if my Internet behaves) fixes the unlink bug found by RIPS Technologies. However, the WordPress developers used a different method to fix it.

There will be Debian backports for WordPress that use one of these methods. It will come down to whether those older versions use hooks and how different the code is in post.php.

You should update, and if you don’t like WordPress deleting or editing its own files, perhaps consider using AppArmor.

Craig Small Dropbear

Solve for q

Planet Debian - Sat, 07/07/2018 - 12:13am
f 0 = 1.5875
f 0.5 = 1.5875
f 1 = 3.175
f 2 = 6.35
f 3 = 9.525
f 4 = 12.7
f 5 = 15.875
f 6 = 19.05
f 7 = 22.225
f 8 = 25.4
f 9 = 28.575
f 10 = 31.75

Posted on 2018-07-06 Tags: barks C Yammering

Eisha Chen-yen-su: Improving Fractal’s media viewer

Planet GNOME - Fri, 06/07/2018 - 7:08pm

Fractal is a Matrix client for GNOME and is written in Rust. Matrix is an open network for secure, decentralized communication.

This week, I have made improvements for the media viewer. I will talk about the most important of them. You can have a look at this issue to get the full details of what has been done.

A header bar in the full screen mode

I’ve added the possibility to access the header bar while in full screen mode by moving the cursor up to the top of the screen, like in Builder or Videos.

At first, I didn’t have an idea of how I could implement it but I’ve figured out a way to simply do that.

First I’ve asked how it was done in the Builder IRC channel, someone told me to look at this page for implementing a custom header bar with full screen toggle button in Python. It could help me to figure out how to make the first step toward the implementation of this feature.

The media viewer UI has two main parts:

  • `media_viewer_headerbar_box` which is the part of the media viewer interface that goes in the `GtkStack` of the main window’s header bar
  • `media_viewer_box` which is the part of the media viewer interface that goes in the `GtkStack` of the main window’s content

When in full screen mode, we only see media_viewer_box on the entire screen, so in order to view the media viewer’s header bar, it needs to be moved into this GtkBox when entering full screen mode and placed back in the header bar of the application when leaving this mode. The page I previously cited helped me figure out how to do that.

Doing this was far from enough to get the expected result, as I wanted the header bar to drop down when the cursor reaches the top of the screen. So I added a GtkRevealer in the GtkOverlay containing the navigation buttons, and the header bar would be placed into it instead of being directly added to media_viewer_box. The header bar is revealed when the pointer is moved and its vertical coordinate is less than or equal to 6 pixels (i.e. when the cursor is close to the top of the screen). After that, a timer is set and when the time is up and the cursor isn’t hovering over the header bar, it is hidden back by the revealer.

I’ve finally hidden some buttons that could cause unexpected behaviors (the back and close buttons) when in full screen mode, the full screen button is also replaced by another one to leave the full screen mode.

Here is the MR for this and a picture of the result:

Add the possibility to go back in the media history without restrictions

Before this, the media viewer was navigating through a list of media which was built when we were entering the media viewer. Hence the navigation was limited to the media already present in the loaded messages.

To be able to go back beyond the available media, I needed to add a method in the backend that would fetch messages containing media from a certain message in the history (usually, the earliest media we currently have in the viewer). It is done by making a GET request to a homeserver at this address “/_matrix/client/r0/rooms/{roomId}/messages” with the query parameters “from” set with the previous batch ID, “dir” set to “b” (for backwards), “limit” set to 40 (that is the default page limit in Fractal), the user’s “access_token” and “filter” set to “{\”filter_json\”: { \”contains_url\”: true, \”not_types\”: [\”m.sticker\”] } }”. It says that we are requesting 40 messages having a “url” field and that are not of the type “m.sticker” (i.e. messages that contains media) starting from the previous batch (we will see later how to get it) and that we are going backward in the message history. The problem was to determine how to get the previous batch ID.

In Matrix, when a client is requesting a list of events, it does this by making a first request that asks for a certain number of them. When the server responds to this query, it sends a previous batch ID with it. It can be used by the client during the next request to ask for the next batch of events, and so on. The problem I had was that I couldn’t use a message ID to ask for media before this message; I needed a previous batch ID if I wanted to start loading media from it, and I only had the message ID before doing the first request. So my mentor found a way to get a previous batch ID from a message ID. This is done by making a GET request to a homeserver at the address “/_matrix/client/r0/rooms/{roomId}/context/{eventId}” with the query parameter “limit” set to “0”. The “start” field in the server’s answer is then the previous batch ID associated with this event.
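The two requests described above can be sketched with curl. HS, ROOM, EVENT and TOKEN are placeholders for a real homeserver URL, room ID, event ID and access token, so the block only does anything when they are set:

```shell
# Placeholder guard: skip unless real credentials are provided.
[ -n "${TOKEN:-}" ] || exit 0
# 1) Obtain the previous-batch ID for a known event (limit=0);
#    it is the "start" field of the JSON reply.
curl -s "$HS/_matrix/client/r0/rooms/$ROOM/context/$EVENT?limit=0&access_token=$TOKEN"
# 2) Page backwards (dir=b) through media-bearing messages from that batch.
curl -s -G "$HS/_matrix/client/r0/rooms/$ROOM/messages" \
    --data-urlencode "from=$PREV_BATCH" \
    --data-urlencode "dir=b" \
    --data-urlencode "limit=40" \
    --data-urlencode 'filter={"filter_json": {"contains_url": true, "not_types": ["m.sticker"]}}' \
    --data-urlencode "access_token=$TOKEN"
```

The filter parameter is the same one the post describes: only messages with a "url" field, excluding stickers.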

After the backend’s method was implemented (you can view the implementation in the commit here), it was then pretty much easy to integrate it within the media viewer (see this commit here). However, I needed to spend more time for the implementation of the loading spinner (it is needed when you press on the previous media button and that Fractal is fetching the images from a homeserver as this can take a while) because the UI was frozen while the backend was requesting more media from a server. So I’ve made the operation asynchronous (see this commit) and then I could add the spinner (see this commit).

You can see the full details on this MR.


Newcastle University Historic Computing

Planet Debian - Fri, 06/07/2018 - 3:54pm

some of our micro computers

Since first writing about my archiving activities in 2012 I've been meaning to write an update on what I've been up to, but I haven't got around to it. This, however, is notable enough to be worth writing about!

In the last few months I became chair of the Historic Computing Committee at Newcastle University. We are responsible for a huge collection of historic computing artefacts from the University's past, going back to the 1950s, which has been almost single-handedly assembled and curated over the course of decades by the late Roger Broughton, who did much of the work in his retirement.

Segment of IBM/360 mainframe

Sadly, Roger died in 2016.

Recently there has been an upsurge of interest and support for our project, partly as a result of other volunteers stepping in and partly due to the School of Computing moving to a purpose-built building and celebrating its 60th birthday.

We've managed to secure some funding from various sources to purchase proper, museum-grade storage and display cabinets. Although portions of the collection have been exhibited for one-off events, including School open days, this will be the first time that a substantial portion of the collection will be on (semi-)permanent public display.

Amstrad PPC640 portable PC

Things have been moving very quickly recently. I am very happy to announce that the initial public displays will be unveiled as part of the Great Exhibition of the North! Most of the details are still TBC, but if you are interested you can keep an eye on this A History Of Computing events page.

For more about the Historic Computing Committee, cs-history Special Interest Group and related stuff, you can follow the CS History SIG blog, which we will hopefully be updating more often going forward. For the Historic Computing Collection specifically, please see The Roger Broughton Museum of Computing Artefacts.

jmtd Jonathan Dowland's Weblog


Planet Debian - Pre, 06/07/2018 - 2:12md
Rise of the machines

Last week I was in a crowd of 256 people watching and cheering Compressorhead, some were stage-diving. Truly awesome.

Holger Levsen Any sufficiently advanced thinking is indistinguishable from madness

Jim Hall: First impressions of PureOS

Planet GNOME - Enj, 05/07/2018 - 10:30md
My new Librem 13 arrived yesterday, and it was my first opportunity to play around with PureOS. I thought I'd share a few thoughts here.

First, PureOS uses GNOME for the desktop. And not that it matters much to usability, but they picked a beautiful default desktop wallpaper:

Because it's GNOME, the desktop was immediately familiar to me. I've been a GNOME user for a long time, and I work with GNOME in testing usability of new features. So the GNOME desktop was a definite plus for me.

It's not a stock GNOME, however. PureOS uses a custom theme that doesn't use the same colors as a stock GNOME. GNOME doesn't use color very often, but I noticed this right away in the file manager. Clicking on a navigation item highlights it in a sort of rust color, instead of the usual blue.

Overall, I thought PureOS was okay. It doesn't really stand out in any particular way, and I didn't like a few choices they made. So in the end, it's just okay to me.

However, I did run into a few things that would seem odd to a new user.

What's that file icon?
When I clicked on Activities to bring up the overview, I was confused about what looked like a "file" icon in the dock.

I understood the other icons. The speaker icon is Rhythmbox, my favorite music application. The camera icon is clearly a photo application (GNOME Photos). The blue file cabinet is the GNOME file manager. And the life ring is GNOME's Help system (but I would argue the "ring buoy" icon is not a great association for "help" in 2018; better to use an international circle-"?" help symbol, but I digress).

Two icons seemed confusing. The "globe" icon was a little weird to me, but I quickly realized it probably meant the web browser. (It is.)

But the one that really confused me was the "file" icon, between the camera and the file manager icons. What was a "file" icon doing here? Was it a broken image, representing an icon that should exist but wasn't on the system? I didn't need to click on it right away, so I didn't discover until later that the "file" icon is LibreOffice. I hadn't seen that icon before, even though that's actually the LibreOffice icon. I guess I'm used to the LibreOffice Writer or LibreOffice Calc icons, which is what I launch most of the time anyway.

No updates?
I wanted to install some extra applications, so I launched GNOME Software. And from there, I realized that PureOS didn't have any updates.

Really? Linux gets updates all the time. Even if Purism updated the OS right before shipping my laptop to me, there should have been a few updates in the time FedEx took to deliver the laptop. But maybe Purism is slow to release updates, so this could be expected. It seemed odd to me, though.

Where's the extra software?
Once I was in GNOME Software, I realized the "store" was quite empty. There's not much to choose from.

If this were my first experiment with Linux, I'd probably think Linux didn't have very many applications. They don't even have the Chromium or Firefox web browsers available to install.

But really, there are a ton of applications out there for Linux. It's just that the set of packages PureOS makes available through GNOME Software seems pretty limited.

The terminal is broken?
Finally, I'll mention the terminal emulator. PureOS doesn't use the standard GNOME Terminal package, but rather the Tilix terminal emulator. It's a fine terminal, except for the error message you see immediately upon launching it:

I wondered why a pre-configured operating system, aimed at the Linux community, would ship with a broken configuration. I clicked on the link shown, and basically the fix is to set Tilix as a login shell, or to do some other setup steps.

Presenting an error message the first time the user runs a program is very poor usability. This was the first time I'd run it, so the program should have been using its defaults. Everything should be okay the first time I run a program; I assume things will "just work." Instead, I get an error message. If I were a novice user, this would probably turn me off PureOS.

In the end, PureOS is a GNOME desktop that runs well. But with a few confusing issues or problems here and there, it doesn't exactly stand out. To me, PureOS is just "okay." It's not bad, but it's not great.

I think my biggest concern as a longtime Linux user is that the distribution doesn't seem to have updates. I'm curious to hear from any PureOS users how often updates are pushed out. I run Fedora Linux, and I get updates pretty regularly. What should I have expected on PureOS?

My Debian Activities in June 2018

Planet Debian - Enj, 05/07/2018 - 9:05md

FTP master

This month I accepted 166 packages and rejected only 7 uploads. The overall number of packages that got accepted this month was 216.

Debian LTS

This was my forty-eighth month doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 23.75h. During that time I did LTS uploads of:

  • [DLA 1404-1] lava-server security update for one CVE
  • [DLA 1403-1] zendframework security update for one CVE
  • [DLA 1409-1] mosquitto security update for two CVEs
  • [DLA 1408-1] simplesamlphp security update for two CVEs

I also prepared a test package for slurm-llnl but got no feedback yet *hint* *hint*.

This month has been the end of Wheezy LTS and the beginning of Jessie LTS. After asking Ansgar, I did the reconfiguration of the upload queues on seger to remove the embargoed queue for Jessie and reduce the number of supported architectures.

Further I started to work on opencv.

Unfortunately the normal locking mechanism for work on packages (claiming the package in dla-needed.txt) did not really work during the transition. As a result I worked on libidn and mercurial in parallel with others. There seems to be room for improvement for the next transition.

Last but not least I did one week of frontdesk duties.

Debian ELTS

This month was the first ELTS month.

During my allocated time I made the first CVE triage in my week of frontdesk duties, extended the check-syntax part in the ELTS security tracker and uploaded:

  • ELA-3-1 for file
  • ELA-4-1 for openssl

Other stuff

During June I continued the libosmocore transition but could not finish it. I hope I can upload all missing packages in July.

Further I continued to sponsor some glewlwyd packages for Nicolas Mora.

The DOPOM package for this month was dvbstream.

I also uploaded a new upstream version of …

alteholz » planetdebian

My Free Software Activities in June 2018

Planet Debian - Mër, 04/07/2018 - 3:35md

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Distro Tracker

I merged a branch adding appstream-related data (thanks to Matthias Klumpp). I merged multiple small contributions from a new contributor: Lev Lazinskiy submitted a fix to have the correct version string in the documentation and ensured that we can no longer use the login page if we are already identified (see MR 36).

Arthur Del Esposte continued his summer of code project and submitted multiple merge requests that I reviewed multiple times before they were ready to be merged. He implemented a team search feature and created a small framework to display an overview of all of a team's packages.

On a more administrative level, I had to deal with many subscriptions that became invalid all at once after a shutdown. I tried to replace all email subscriptions using * with alternate emails linked to the same account. When no fallback was possible, I simply deleted the subscription.

pkg-security work

I sponsored cewl 5.4.3-1 (new upstream release), wfuzz_2.2.11-1.dsc (new upstream release), masscan 1.0.5+ds1-1 (taken over by the team, new upstream release) and wafw00f 0.9.5-1 (new upstream release). I sponsored wifite2, made the unit tests run during the build and added some autopkgtests. I submitted a pull request to skip tests when some tools are unavailable.

I filed #901595 on reaver to get a fixed watch file.

Misc Debian work

I reviewed multiple merge requests on live-build (about its handling of archive keys and the associated documentation). I uploaded a new version of live-boot (20180603) with the pending changes.

I sponsored pylint-django 0.11-1 for Joseph Herlant, xlwt 1.3.0-2 (bug fix) and python-num2words_0.5.6-1~bpo9+1 (backport requested by a user).

I uploaded a new version of ftplib fixing a release critical bug (#901224: ftplib FTCBFS: uses the build architecture compiler).

I submitted two patches to git (fixing the French l10n in git bisect and marking two strings for translation).

I reviewed multiple merge requests on debootstrap: make --unpack-tarball no longer downloads anything, --components not carried over with --foreign/--second-stage and enabling --merged-usr by default.


See you next month for a new summary of my activities.


Raphaël Hertzog apt-get install debian-wizard

anytime 0.3.1

Planet Debian - Mar, 03/07/2018 - 10:36md

A new minor release of the anytime package is now on CRAN. This is the twelfth release, and the first in a little over a year as the package has stabilized.

anytime is a very focused package aiming to do just one thing really well: convert anything in integer, numeric, character, factor, ordered, … format to either POSIXct or Date objects, and to do so without requiring a format string. See the anytime page, or the GitHub repo, for a few examples.

This release adds a few minor tweaks. For numeric input, the function no longer mutates its argument: inputs that would previously be cast in place (a common use case) are now cloned, so the input is never changed. We also added two assertion helpers for dates and datetimes, a new formatting helper for the (arguably awful, but common) ‘yyyymmdd’ format, and expanded some unit tests.

Changes in anytime version 0.3.1 (2018-06-05)
  • Numeric input is now preserved rather than silently cast to the return object type (#69 fixing #68).

  • New assertion function assertDate() and assertTime().

  • Unit tests were expanded for the new functions, for conversion from integer as well as for yyyymmdd().

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel Thinking inside the box

Introducing debos, a versatile images generator

Planet Debian - Mar, 03/07/2018 - 7:00md

In Debian and derivative systems, there are many ways to build images. The simplest tool of choice is often debootstrap. It works by downloading the .deb files from a mirror and unpacking them into a directory which can eventually be chrooted into.

More often than not, we want to make some customizations to this image: install some extra packages, run a script, add some files, etc.

debos is a tool to make these kinds of trivial tasks easier. debos works from YAML recipe files that list, in order, the actions you want to perform on your image, finishing with a choice of output formats.

As opposed to debootstrap and other tools, debos doesn't need to be run as root to perform actions that require root privileges in the images. debos uses fakemachine, a library that sets up qemu-system, allowing you to work on the image with root privileges and to create images for all the architectures supported by qemu-user. However, for this to work, make sure your user has permission to use /dev/kvm.

Let's see how debos works with a simple example. If we wanted to create a customized arm64 image for Debian Stretch, we would follow these steps:

  • debootstrap the image
  • install the packages we need
  • set our preferred hostname
  • run a script that creates a user
  • copy a file adding the user to sudoers
  • create a tarball with the final image

This would translate into a debos recipe like this one:

{{- $architecture := or .architecture "arm64" -}}
{{- $suite := or .suite "stretch" -}}
{{ $image := or .image (printf "debian-%s-%s.tgz" $suite $architecture) }}

architecture: {{ $architecture }}

actions:
  - action: debootstrap
    suite: {{ $suite }}
    components:
      - main
    mirror:
    variant: minbase

  - action: apt
    recommends: false
    packages:
      - adduser
      - sudo

  - action: run
    description: Set hostname
    chroot: true
    command: echo debian-{{ $suite }}-{{ $architecture }} > /etc/hostname

  - action: run
    chroot: true
    script: scripts/

  - action: overlay
    description: Add sudo configuration
    source: overlays/sudo

  - action: pack
    file: {{ $image }}
    compression: gz

(The files used in this example are available from this git repository)

We run debos on the recipe file:

$ debos simple.yaml

The result will be a tarball named debian-stretch-arm64.tar.gz. If you check the top two lines of the recipe, you can see that the recipe defaults to architecture arm64 and Debian stretch. We can override these defaults when running debos:

$ debos -t suite:"buster" -t architecture:"amd64" simple.yaml

This time the result will be a tarball debian-buster-amd64.tar.gz.

The recipe allows some customization depending on the parameters. We could install packages depending on the target architecture, for example installing python-libsoc on armhf and arm64:

- action: apt
  recommends: false
  packages:
    - adduser
    - sudo
{{- if eq $architecture "armhf" "arm64" }}
    - python-libsoc
{{- end }}

What happens if, in addition to a tarball, we would like to create a filesystem image? This can be done by adding two more actions to our example: a first action creating the image partition with the selected filesystem, and a second one deploying the filesystem onto the image:

- action: image-partition
  imagename: {{ $ext4 }}
  imagesize: 1GB
  partitiontype: msdos
  mountpoints:
    - mountpoint: /
      partition: root
  partitions:
    - name: root
      fs: ext4
      start: 0%
      end: 100%
      flags: [ boot ]

- action: filesystem-deploy
  description: Deploying filesystem onto image

{{ $ext4 }} should be defined in the top of the file as follows:

{{ $ext4 := or .image (printf "debian-%s-%s.ext4" $suite $architecture) }}

We could even make this step optional: by default the recipe would only create the tarball, and the filesystem image would be added only when passing an option to debos:

$ debos -t type:"full" full.yaml

The final debos recipe will look like this:

{{- $architecture := or .architecture "arm64" -}}
{{- $suite := or .suite "stretch" -}}
{{ $type := or .type "min" }}
{{ $image := or .image (printf "debian-%s-%s.tgz" $suite $architecture) }}
{{ $ext4 := or .image (printf "debian-%s-%s.ext4" $suite $architecture) }}

architecture: {{ $architecture }}

actions:
  - action: debootstrap
    suite: {{ $suite }}
    components:
      - main
    mirror:
    variant: minbase

  - action: apt
    recommends: false
    packages:
      - adduser
      - sudo
{{- if eq $architecture "armhf" "arm64" }}
      - python-libsoc
{{- end }}

  - action: run
    description: Set hostname
    chroot: true
    command: echo debian-{{ $suite }}-{{ $architecture }} > /etc/hostname

  - action: run
    chroot: true
    script: scripts/

  - action: overlay
    description: Add sudo configuration
    source: overlays/sudo

  - action: pack
    file: {{ $image }}
    compression: gz

{{ if eq $type "full" }}
  - action: image-partition
    imagename: {{ $ext4 }}
    imagesize: 1GB
    partitiontype: msdos
    mountpoints:
      - mountpoint: /
        partition: root
    partitions:
      - name: root
        fs: ext4
        start: 0%
        end: 100%
        flags: [ boot ]

  - action: filesystem-deploy
    description: Deploying filesystem onto image
{{end}}

debos also provides some other actions that haven't been covered in the example above:

  • download downloads a single file from the internet
  • raw directly writes a file to the output image at a given offset
  • unpack unpacks files from an archive into the filesystem
  • ostree-commit creates an OSTree commit from a rootfs
  • ostree-deploy deploys an OSTree branch to the image

The example in this blog post is simple and short on purpose. Combining the actions presented above, you could also include a kernel and install a bootloader to make a bootable image. Upstream is planning to add more examples soon to the debos recipes repository.

debos is a project from Sjoerd Simons at Collabora, it's still missing some features but it's actively being developed and there are big plans for the future!

Ana Ana Guerrero Lopez - planet-debian

Docker Compose

Planet Debian - Mar, 03/07/2018 - 5:13md

I glanced back over my shoulder to see the Director approaching. Zhe stood next to me, watched me intently for a few moments, before turning and looking out at the scape. The water was preternaturally calm, above it only clear blue. A number of dark, almost formless, shapes were slowly moving back and forth beneath the surface.

"Is everything in readiness?" zhe queried, sounding both impatient and resigned at the same time. "And will it work?" zhe added. My predecessor, and zir predecessor before zem, had attempted to reach the same goal now set for myself.

"I believe so" I responded, sounding perhaps slightly more confident than I felt. "All the preparations have been made, everything is in accordance with what has been written". The director nodded, zir face pinched, with worry writ across it.

I closed my eyes, took a deep breath, opened them, raised my hand and focussed on the scape, until it seemed to me that my hand was almost floating on the water. With all of my strength of will I formed the incantation, repeating it over and over in my mind until I was sure that I was ready. I released it into the scape and dropped my arm.

The water began to churn, the blue above darkening rapidly, becoming streaked with grey. The shapes beneath the water picked up speed and started to grow, before resolving to what appeared to be stylised Earth whales. Huge arcs of electricity speared the water, a screaming, crashing, wall of sound rolled over us as we watched, a foundation rose up from the depths on the backs of the whale-like shapes wherever the lightning struck.

Chunks of goodness-knows-what rained down from the grey streaked morass, thumping into place seamlessly onto the foundations, slowly building what I had envisioned. I started to allow myself to feel hope, things were going well, each tower of the final solution was taking form, becoming the slick and clean visions of function which I had painstakingly selected from among the masses of clamoring options.

Now and then, the whale-like shapes would surface momentarily near one of the towers, stringing connections like bunting across the water, until the final design was achieved. My shoulders tightened and I raised my hand once more. As I did so, the waters settled, the grey bled out from the blue, and the scape became calm and the towers shone, each in its place, each looking exactly as it should.

Chanting the second incantation under my breath, over and over, until it seemed seared into my very bones, I released it into the scape and watched it flow over the towers, each one ringing out as the command reached it, until all the towers sang, producing a resonant and consonant chord which rose of its own accord, seeming to summon creatures from the very waters in which the towers stood.

The creatures approached the towers, reached up as one, touched the doors, and screamed in horror as their arms caught aflame. In moments each and every creature was reduced to ashes, somehow fundamentally unable to make use of the incredible forms I had wrought. The Director sighed heavily, turned, and made to leave. The towers I had sweated over the design of for months stood proud, beautiful, worthless.

I also turned, made my way out of the realisation suite, and with a sigh hit the scape-purge button on the outer wall. It was over. The grand design was flawed. Nothing I created in this manner would be likely to work in the scape and so the most important moment of my life was lost to ruin, just as my predecessor, and zir predecessor before zem.

Returning to my chambers, I snatched up the book from my workbench. The whale-like creature winked at me from the cover, grinning, as though it knew what I had tried to do and relished my failure. I cast it into the waste chute and went back to my drafting table to design new towers, towers which might be compatible with the creatures which were needed to inhabit them and breathe life into their very structure, towers which would involve no grinning whales.

Daniel Silverstone Digital-Scurf Ramblings

How I stopped merging broken code

Planet Debian - Mar, 03/07/2018 - 9:22pd

It's been a while since I moved all my projects to GitHub. It's convenient to host Git projects, and the collaboration workflow is smooth.

I love pull requests to merge code. I review them, I send them, I merge them. The fact that you can plug them into a continuous integration system is great and makes sure that you don't merge code that will break your software. I usually have Travis-CI setup and running my unit tests and code style check.

The problem with the GitHub workflow is that it allows merging untested code.


Yes, it does. If you think that your pull requests, all green decorated, are ready to be merged, you're wrong.

This might not be as good as you think

You see, pull requests on GitHub are marked as valid as soon as the continuous integration system passes and indicates that everything is valid. However, if the target branch (let's say master) is updated while the pull request is open, nothing forces that pull request to be retested against the new master branch. You think that the code in this pull request works, while that might have become false.

Master moved, the pull request is not up to date though it's still marked as passing integration.

So it might be that what went into your master branch now breaks this not-yet-merged pull request. You've no clue. You'll trust GitHub and press that green merge button, and you'll break your software. For whatever reason, it's possible that the test will break.

If the pull request has not been updated with the latest version of its target branch, it might break your integration.

The good news is that this is solvable with the strict workflow that Mergify provides. There's a nice explanation and example in Mergify's blog post You are merging untested code that I advise you to read. What Mergify provides here is a way to serialize the merging of pull requests while making sure that they are always updated with the latest version of their target branch. It makes sure that there's no way to merge broken code.
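To make the idea concrete, here is a sketch of the kind of configuration involved. This is my own illustration, not taken from the post; the rule names and exact field layout are assumptions, so check Mergify's documentation for the current syntax. The essential part is the strict flag, which requires a pull request to be up to date with its target branch before it may merge:

```yaml
# .mergify.yml -- hypothetical sketch; field names may differ from the
# current Mergify syntax. 'strict: true' is the key idea: it forces each
# pull request to be updated against master (and thus retested) before
# it can be merged.
rules:
  default:
    protection:
      required_status_checks:
        strict: true
        contexts:
          - continuous-integration/travis-ci
      required_pull_request_reviews:
        required_approving_review_count: 1
```

With a rule like this, a green check mark can no longer go stale: any push to master invalidates the pull request's status until it has been rebased or merged up and retested.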

That's a workflow I've now adopted and automated on all my repositories, and we've been using such a workflow for Gnocchi for more than a year, with great success. Once you start using it, it becomes impossible to go back!

Julien Danjou Julien Danjou

Towards Debian Unstable builds on Debian Stable OBS

Planet Debian - Mar, 03/07/2018 - 2:32pd

This is the sixth post of my Google Summer of Code 2018 series. Links for the previous posts can be found below:

My GSoC contributions can be seen at the following links

Debian Unstable builds on OBS

Lately, I have been working towards triggering Debian Unstable builds with the Debian OBS packages. As reported before, we can already build packages for both Debian 8 and 9, based on the example project configurations shipped with the package in Debian Stable and on the project configuration files publicly available on the SUSE OBS instance.

While trying to build packages against Debian Unstable, I have been hitting the following issue: the OBS scheduler reads the project configuration and starts downloading dependencies. The dependencies get downloaded, but the build is never dispatched (the package stays in a “blocked” state). The downloaded dependencies then get cleaned up and the scheduler starts the downloads again. OBS enters an infinite loop there.

This only happens for builds on sid (unstable) and buster (testing).

We realized that the OBS version packaged in Debian 9 (the one we are currently using) does not support Debian source packages built with dpkg >= 1.19. At first I started applying this patch to the OBS Debian package, but after reporting the issue to the Debian OBS maintainers, they pointed me to the obs-build package in the Debian stable backports repositories, which already included the mentioned patch.

While the backports package included the patch needed to support source packages built with newer versions of dpkg, we still got the same issue with unstable and testing builds: the scheduler downloads the dependencies, hangs for a while, but the build is never dispatched (the package stays in a “blocked” state). After a while, the dependencies get cleaned up and the scheduler starts the downloads again.

The bug has been quite hard to debug, since the OBS logs do not provide feedback on the problem we have been facing. To debug it, we tried to trigger local builds with osc. First, I (successfully) triggered a few local builds against Debian 8 and 9 to make sure the command would work. Then we proceeded to trigger builds against Debian Unstable.

The first issue we faced was that the osc package in Debian stable cannot handle builds of source packages built with new dpkg versions. We fixed that by patching osc/util/ (we just substituted the file with the latest version from osc's development branch). After applying the patch, we got the same results we'd get when trying to build the package remotely, but with debug flags on we could get a better understanding of the problem:

BuildService API error: CPIO archive is incomplete (see .errors file)

The .errors file would just contain a list of dependencies which were missing in the CPIO archive.

If we kept retrying, OBS would keep caching more and more dependencies, until the build succeeded at some point.

We now know that the issue lies with the Download on Demand feature.

We then tried a local build in a fresh OBS instance (no cached packages) using the --disable-cpio-bulk-download osc build option, which makes osc download each dependency individually instead of in bulk. To our surprise, the build succeeded on our first attempt.

Finally, we traced the issue all the way down to the OBS API call which is triggered when OBS needs to download missing dependencies. For some reason, the number of parameters (the number of dependencies to be downloaded) affects the final response of the API call. When trying to download too many packages, the CPIO archive is not built correctly and OBS builds fail.

At the moment, we are still investigating why such calls fail with too many parameters and why they only fail for the Debian Testing and Unstable repositories.

Next steps (A TODO list to keep on the radar)
  • Fix OBS builds on Debian Testing and Unstable
  • Write a patch for Debian's osc so it can build Debian packages with control.tar.xz
  • Write patches for the OBS worker issue described in post 3
  • Change the default builder to perform builds with clang
  • Trigger new builds by using the dak/mailing lists messages
  • Verify the script idempotency and propose patch to opencollab repository
  • Separate salt recipes for workers and server (locally)
  • Properly set hostnames (locally)
Athos Ribeiro Debian on /home/athos

Reproducible Builds: Weekly report #166

Planet Debian - Hën, 02/07/2018 - 11:16md

Here’s what happened in the Reproducible Builds effort between Sunday June 24 and Saturday June 30 2018:

Packages reviewed and fixed, and bugs filed

diffoscope development

diffoscope versions 97 and 98 were uploaded to Debian unstable by Chris Lamb. They included contributions already covered in previous weeks as well as new ones from:

Chris Lamb also updated the SSL certificate for


This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks

80bit x87 FPU

Planet Debian - Hën, 02/07/2018 - 10:24md

Once again, I got surprised by the 80 bit x87 FPU stuff.

First time was around a decade ago. Back then, it was something along the lines of a sort function like:

struct ValueSorter
{
    bool operator() (const Value& first, const Value& second) const
    {
        double valueFirst = first.amount() * first.value();
        double valueSecond = second.amount() * second.value();
        return valueFirst < valueSecond;
    }
};

With some values, first would compare smaller than second, and second smaller than first, all depending on which value got truncated to 64 bits and which one came directly from the 80-bit FPU.

This time, the 80-bit version, when cast to an integer, was 1 smaller than the 64-bit version.

Oh, the joys of x86 CPUs.

Sune Vuorela english – Blog :: Sune Vuorela

