
Is it an upgrade, or a sidegrade?

Planet Debian - Tue, 13/02/2018 - 8:43pm

I first bought a netbook shortly after the term was coined, in 2008. I got one of the original 8.9" Acer Aspire One. Around 2010, my Dell laptop was stolen, so the AAO ended up being my main computer at home — And my favorite computer for convenience, not just for when I needed to travel light. Back then, Regina used to work in a national park and had to cross her province (~6hr by a combination of buses) twice a week, so she had one as well. When she came to Mexico, she surely brought it along. Over the years, we bought new batteries and chargers, as they died over time...

Five years later, it started feeling too slow, and I remember starting to have keyboard issues. Time to change.

Sadly, 9" computers were no longer to be found. Even though I am a touch typist, and a big person, I miss several things about the Acer's tiny keyboard (such as being able to cover the diagonal with a single hand, something useful when you are typing while standing). But, anyway, I got the closest I could to it — In July 2013, I bought the successor to the Acer Aspire One: An 10.5" Acer Aspire One Nowadays, the name that used to identify just the smallest of the Acer Family brethen covers at least up to 15.6" (which is not exactly helpful IMO).

Anyway, for close to five years I was also very happy with it. A light laptop that wasn't a burden to me. Also, very important: A computer I could take with me without ever thinking twice. I often tell people I use a computer I got at a supermarket, and that, bought new, cost me under US$300. That way, were I to lose it (say, if it falls from my bike, if somebody steals it, if it gets in any way damaged, whatever), it's not a big blow. Quite a difference from my two former laptops, both over US$1000.

I enjoyed this computer a lot. So much, I ended up buying four of them (mine, Regina's, and two for her family members).

Over the last few months, I have started being nagged by unresponsiveness, mainly in the browser (blame me, as I typically keep ~40 tabs open), and some keyboard issues... I had started thinking about changing my trusty laptop. Would I want a newfangled laptop-and-tablet-in-one? Just thinking about fiddling with the OS to recognize stuff was a sort-of-turnoff...

This weekend we had an incident with spilled water. After opening the computer up and carefully ensuring it was dry, it would not turn on. I waited an hour or two, and no change. A clear sign: a new computer is needed ☹

I went to a nearby store and looked at the offers... And, in part due to the attitude of the sales guy, I decided not to buy (installing Linux will void any warranty, WTF‽ In 2018‽). I came back home, and... my Acer works again!

But, I know five years are enough. I decided to keep looking for a replacement. After some hesitation, I decided to join what seems to be the elite group in Debian, and go for a refurbished Thinkpad X230.

And that's why I feel this is some sort of "sidegrade" — I am replacing a five-year-old computer with another five-year-old computer. Of course, a much sturdier one, built to last, originally sold as an "Ultrabook" (that is, meant for a higher user segment), and much more expandable... I'm paying ~US$250, which I'm comfortable with. Looking at several online forums, it is a model quite popular with "knowledgeable" people even now, AFAICT. I was hoping, just for the sake of it, to find an X230t (foldable and usable as a tablet)... But I won't put too much time into looking for one.

The Thinkpad is 12", which I expect will still fit in my smallish satchel I take to my classes. The machine looks as tweakable as I can expect. Spare parts for replacement are readily available. I have 4GB I bought for the Acer I will probably be able to carry on to this machine, so I'm ready with 8GB. I'm eager to feel the keyboard, as it's often repeated it's the best in the laptop world (although it's not the classic one anymore) I'm just considering to pop ~US$100 more and buy an SSD drive, and... Well, lets see how much does this new sidegrade make me smile!

gwolf http://gwolf.org Gunnar Wolf

Our future relationship with FSFE

Planet Debian - Thu, 01/02/2018 - 2:19pm

Below is an email that has been distributed to the FSFE community today. FSFE aims to be an open organization and people are welcome to discuss it through the main discussion group (join, thread and reply) whether or not they are members.

For more information about joining FSFE, local groups, campaigns and other activities please visit the FSFE web site. The "No Cloud" stickers and the Public Money Public Code campaign are examples of initiatives started by FSFE - you can request free stickers and posters by filling in this form.

Dear FSFE Community,

I'm writing to you today as one of your elected fellowship representatives rather than to convey my own views, which you may have already encountered in my blog or mailing list discussions.

The recent meeting of the General Assembly (GA) decided that the annual elections will be abolished but this change has not yet been ratified in the constitution.

Personally, I support an overhaul of FSFE's democratic processes and the bulk of the reasons for this change are quite valid. One of the reasons proposed for the change, the suggestion that the election was a popularity contest, is an argument I don't agree with: the same argument could be used to abolish elections anywhere.

One point that came up in discussions about the elections is that people don't need to wait for the elections to be considered for GA membership. Matthias Kirschner, our president, has emphasized this to me personally as well: he looks at each new request with an open mind and forwards it to all of the GA for discussion. According to our constitution, anybody can write to the president at any time and request to join the GA. In practice, the president and the existing GA members will probably need to have seen some of your activities in one of the FSFE teams or local groups before accepting you as a member. I want to encourage people to become familiar with the GA membership process, discuss it within their teams and local groups, and think about whether they or anybody they know may be a good candidate.

According to the minutes of the last GA meeting, several new members were already accepted this way in the last year. It is particularly important for the organization to increase diversity in the GA at this time.

The response rate for the last fellowship election was lower than in previous years, and there is also concern that emails don't reach everybody due to spam filters or the Google Promotions tab (if you use Gmail). If you had problems receiving emails about the last election, please consider sharing that feedback on the discussion list.

Understanding where the organization will go beyond the extinction of the fellowship representative is critical. The Identity review process, championed by Jonas Oberg and Kristi Progri, is actively looking at these questions. Please contact Kristi if you wish to participate and look out for updates about this process in emails and Planet FSFE. Kristi will be at FOSDEM this weekend if you want to speak to her personally.

I'll be at FOSDEM this weekend and would welcome the opportunity to meet with you personally. I will be visiting many different parts of FOSDEM at different times, including the FSFE booth, the Debian booth, the real-time lounge (K-building) and the Real-Time Communications (RTC) dev-room on Sunday, where I'm giving a talk. Many other members of the FSFE community will also be present; if you don't know where to start, simply come to the FSFE booth. The next European event I visit after FOSDEM will potentially be OSCAL in Tirana in May; I would highly recommend this event for anybody who doesn't regularly travel to events outside their own region.

Changing the world begins with the change we make ourselves. If you only do one thing for free software this year and you are not sure what it is going to be, then I would recommend this: visit an event that you never visited before, in a city or country you never visited before. It doesn't necessarily have to be a free software or IT event. In 2017 I attended OSCAL in Tirana and the Digital-Born Media Carnival in Kotor for the first time. You can ask FSFE to send you some free stickers and posters (online request with optional donation) to give to the new friends you meet on your travels. Change starts with each of us doing something new or different and I hope our paths may cross in one of these places.

For more information about joining FSFE, local groups, campaigns and other activities please visit the FSFE web site.

Please feel free to discuss this through the FSFE discussion group (join, thread and reply)

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

FLOSS Activities January 2018

Planet Debian - Thu, 01/02/2018 - 1:12am
Changes

Issues

Review

Administration
  • Debian: try to regain OOB access to a host, try to connect with a hoster, restart bacula after db restart, provide some details to a hoster, add debsnap to snapshot host, debug external email issue, redirect users to support channels
  • Debian mentors: redirect to sponsors, teach someone about dput .upload files, check why a package disappeared
  • Debian wiki: unblacklist IP address, whitelist email addresses, whitelist email domain, investigate DocBook output crash
Communication
  • Initiate discussion about ingestion of more security issue feeds
  • Invite LinuxCNC to the Debian derivatives census
Sponsors

I renewed my support of Software Freedom Conservancy.

The Discord related uploads (harmony, librecaptcha, purple-discord) and the Debian fakeupstream change were sponsored by my employer. All other work was done on a volunteer basis.

Paul Wise http://bonedaddy.net/pabs3/log/ Log

Free software activities in January 2018

Planet Debian - Wed, 31/01/2018 - 11:20pm

Here is my monthly update covering what I have been doing in the free software world in January 2018 (previous month):

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
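
As a rough illustration of the idea (hello is just an example package name here): build the same source twice in the same environment and compare checksums; with a reproducible build, both runs must produce the identical hash.

apt-get source hello && cd hello-*
dpkg-buildpackage -b -us -uc
sha256sum ../hello_*.deb          # first build
dpkg-buildpackage -b -us -uc      # rebuild in the same environment
sha256sum ../hello_*.deb          # should print the very same hash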

I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:



I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.
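
For example (file names are illustrative), comparing two builds of a package and writing an HTML report:

diffoscope --html report.html hello_1.0_amd64.deb hello_1.0.rebuild_amd64.deb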

  • New features:
    • Compare JSON files using the jsondiff module. (#888112)
    • Report differences in extended file attributes when comparing files. (#888401)
    • Show extended filesystem metadata when directly comparing two files not just when we specify two directories. (#888402)
    • Do some fuzzy parsing to detect JSON files not named .json. [...]
  • Bug fixes:
    • Return unknown if we can't parse the readelf version number for (e.g.) FreeBSD. (#886963)
    • If the LLVM disassembler does not work, try the internal one. (#886736)
  • Misc:
    • Explicitly depend on e2fsprogs. (#887180)
    • Clarify the "Unidentified file" log message, as we did try to look it up via the comparators first. [...]

I also fixed an issue in the "trydiffoscope" command-line client that was preventing installation on non-Debian systems (#888882).


disorderfs

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.
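
A typical invocation (paths are illustrative) mounts a source tree through disorderfs and builds from the mount point, so that any directory-ordering assumptions surface immediately:

mkdir /tmp/disorder
disorderfs --shuffle-dirents=yes ./src/ /tmp/disorder/
# now build from /tmp/disorder/ instead of ./src/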

  • Correct "explicitly" typo in disorderfs.1.txt. [...]
  • Bump Standards-Version to 4.1.3. [...]
  • Drop trailing whitespace in debian/control. [...]


Debian

My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list.

In addition to this, I:

  • Published whydoesaptnotusehttps.com, an overview of why APT does not rely solely on SSL for validation of downloaded packages, as I noticed this was being asked a lot on support forums.
  • Reported a number of issues for the mentors.debian.net review service.
Patches contributed
  • dput: Suggest --force if package has already been uploaded. (#886829)
  • linux: Add link to the Firmware page on the wiki to failed to load log messages. (#888405)
  • markdown: Make markdown exit with a non-zero exit code if it cannot open the input file. (#886032)
  • spectre-meltdown-checker: Return a sensible exit code. (#887077)
Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:

  • Initial draft of a script to automatically detect when CVEs should be assigned to multiple source packages in the case of legacy renames, duplicates or embedded code copies.
  • Issued DLA 1228-1 for the poppler PDF library to fix an overflow vulnerability.
  • Issued DLA 1229-1 for imagemagick correcting two potential denial-of-service attacks.
  • Issued DLA 1233-1 for gifsicle — a command-line tool for manipulating GIF images — to fix a use-after-free vulnerability.
  • Issued DLA 1234-1 to fix multiple integer overflows in the GTK gdk-pixbuf graphics library.
  • Issued DLA 1247-1 for rsync, fixing a command-injection vulnerability.
  • Issued DLA 1248-1 for libgd2 to prevent a potential infinite loop caused by signedness confusion.
  • Issued DLA 1249-1 for smarty3 fixing an arbitrary code execution vulnerability.
  • "Frontdesk" duties, triaging CVEs, etc.
Uploads
  • adminer (4.5.0-1) — New upstream release.
  • bfs (1.2-1) — New upstream release.
  • dbus-cpp (5.0.0+18.04.20171031-1) — Initial upload to Debian.
  • installation-birthday (7) — Add e2fsprogs to Depends so it can drop Essential: yes. (#887275)
  • process-cpp:
    • 3.0.1-1 — Initial upload to Debian.
    • 3.0.1-2 — Fix FTBFS due to symbol versioning.
  • python-django (1:1.11.9-1 & 2:2.0.1-1) — New upstream releases.
  • python-gflags (1.5.1-4) — Always use SOURCE_DATE_EPOCH from the environment.
  • redis:
    • 5:4.0.6-3 — Use --clients argument to runtest to force single-threaded operation over using taskset.
    • 5:4.0.6-4 — Re-add procps to Build-Depends. (#887075)
    • 5:4.0.6-5 — Fix a dangling symlink (and thus a broken package). (#884321)
    • 5:4.0.7-1 — New upstream release.
  • redisearch (1.0.3-1, 1.0.4-1 & 1.0.5-1) — New upstream releases.
  • trydiffoscope (67.0.0) — New upstream release.

I also sponsored the following uploads:

Debian bugs filed
  • gdebi: Invalid gnome-mime-application-x-deb icon in AppStream metadata. (#887056)
  • git-buildpackage: Please make gbp clone not quieten the output by default. (#886992)
  • git-buildpackage: Please word-wrap generated changelog lines. (#887055)
  • isort: Don't install test_isort.py to global Python namespace. (#887816)
  • restrictedpython: Please add Homepage. (#888759)
  • xcal: Missing patches due to 00List != 00list. (#888542)

I also filed 4 bugs against packages with missing patches due to incomplete quilt conversions: cernlib, geant321, mclibs & paw.

RC bugs
  • gnome-shell-extension-tilix-shortcut: Invalid date in debian/changelog. (#886950)
  • python-qrencode: Missing PIL dependencies due to use of Python 2 substvars in Python 3 package. (#887811)


I also filed 7 FTBFS bugs against lintian, netsniff-ng, node-coveralls, node-macaddress, node-timed-out, python-pyocr & sleepyhead.

FTP Team

As a Debian FTP assistant I ACCEPTed 173 packages: appmenu-gtk-module, atlas-cpp, canid, check-manifest, cider, citation-style-language-locales, citation-style-language-styles, cloudkitty, coreapi, coreschema, cypari2, dablin, dconf, debian-dad, deepin-icon-theme, dh-dlang, django-js-reverse, flask-security, fpylll, gcc-8, gcc-8-cross, gdbm, gitlint, gnome-tweaks, gnupg-pkcs11-scd, gnustep-back, golang-github-juju-ansiterm, golang-github-juju-httprequest, golang-github-juju-schema, golang-github-juju-testing, golang-github-juju-webbrowser, golang-github-posener-complete, golang-gopkg-juju-environschema.v1, golang-gopkg-macaroon-bakery.v2, golang-gopkg-macaroon.v2, harmony, hellfire, hoel, iem-plugin-suite, ignore-me, itypes, json-tricks, jstimezonedetect.js, libcdio, libfuture-asyncawait-perl, libgig, libjs-cssrelpreload, liblxi, libmail-box-imap4-perl, libmail-box-pop3-perl, libmail-message-perl, libmatekbd, libmoosex-traitfor-meta-class-betteranonclassnames-perl, libmoosex-util-perl, libpath-iter-perl, libplacebo, librecaptcha, libsyntax-keyword-try-perl, libt3highlight, libt3key, libt3widget, libtree-r-perl, liburcu, linux, mali-midgard-driver, mate-panel, memleax, movit, mpfr4, mstch, multitime, mwclient, network-manager-fortisslvpn, node-babel-preset-airbnb, node-babel-preset-env, node-boxen, node-browserslist, node-caniuse-lite, node-cli-boxes, node-clone-deep, node-d3-axis, node-d3-brush, node-d3-dsv, node-d3-force, node-d3-hierarchy, node-d3-request, node-d3-scale, node-d3-transition, node-d3-zoom, node-fbjs, node-fetch, node-grunt-webpack, node-gulp-flatten, node-gulp-rename, node-handlebars, node-ip, node-is-npm, node-isomorphic-fetch, node-js-beautify, node-js-cookie, node-jschardet, node-json-buffer, node-json3, node-latest-version, node-npm-bundled, node-plugin-error, node-postcss, node-postcss-value-parser, node-preact, node-prop-types, node-qw, node-sellside-emitter, node-stream-to-observable, node-strict-uri-encode, node-vue-template-compiler, ntl, olivetti-mode, org-mode-doc, otb, othman, papirus-icon-theme, pgq-node, php7.2, piu-piu, prometheus-sql-exporter, py-radix, pyparted, pytest-salt, pytest-tempdir, python-backports.tempfile, python-backports.weakref, python-certbot, python-certbot-apache, python-certbot-nginx, python-cloudkittyclient, python-josepy, python-jsondiff, python-magic, python-nose-random, python-pygerrit2, python-static3, r-cran-broom, r-cran-cli, r-cran-dbplyr, r-cran-devtools, r-cran-dt, r-cran-ggvis, r-cran-git2r, r-cran-pillar, r-cran-plotly, r-cran-psych, r-cran-rhandsontable, r-cran-rlist, r-cran-shinydashboard, r-cran-utf8, r-cran-whisker, r-cran-wordcloud, recoll, restrictedpython, rkt, rtklib, ruby-handlebars-assets, sasmodels, spectre-meltdown-checker, sphinx-gallery, stepic, tilde, togl, ums2net, vala-panel, vprerex, wafw00f & wireguard.

I additionally filed 4 RC bugs against packages that had incomplete debian/copyright files: fpylll, gnome-tweaks, org-mode-doc & py-radix.

Chris Lamb https://chris-lamb.co.uk/blog/category/planet-debian lamby: Items or syndication on Planet Debian.

Day three of the pre-FOSDEM Debconf Videoteam sprint

Planet Debian - Wed, 31/01/2018 - 7:46pm

This should really have been the "day two" post, but I forgot to do that yesterday, and now it's the end of day three already, so let's just do the two together for now.

Kyle

Has been hacking on the opsis so we can get audio through it, but so far without much success. In addition, he's been working a bit more on documentation, as well as splitting up some data that's currently in our ansible repository into a separate one so that other people can use our ansible configuration more easily, without having to fork too much.

Tzafrir

Did some tests on the ansible setup, did some documentation work, and worked on a Kodi plugin for parsing the metadata that we've generated.

Stefano

Did some work on the DebConf website. This wasn't meant to be much, but yak shaving sucks. Additionally, he's been doing some work on the youtube uploader as well.

Nattie

Did more work reviewing our documentation, and has been working on rewording some of the more awkward bits.

Wouter

Spent much time on improving the SReview installation for FOSDEM. While at it, fixed a number of bugs in some of the newer code that were exposed by full tests of the FOSDEM installation. Additionally, added code to SReview to generate metadata files that can be handed to Stefano's youtube uploader.

Pollo

Although he had less time yesterday than he did on Monday (and apparently no time today) to sprint remotely, Pollo still managed to add a basic CI infrastructure to lint our ansible playbooks.

Wouter Verhelst https://grep.be/blog//pd/ pd

Swatantra17

Planet Debian - Wed, 31/01/2018 - 2:49pm

It's very late, but here it goes...

Last month Thiruvananthapuram witnessed one of the biggest Free and Open Source Software conferences, Swatantra17. Swatantra is a flagship FOSS conference from ICFOSS (it used to be triennial, but the organizers have now decided to conduct it every 2 years). This year there were more than 30 speakers from all around the world. The event was held on 20-21 December at the Mascot Hotel, Thiruvananthapuram. I was one of the community volunteers for the event and was excited from the day it was announced :) .

Current Kerala Chief Minister Pinarayi Vijayan inaugurated Swatantra17. The first day's sessions started with a keynote from Software Freedom Conservancy executive director Karen Sandler. Karen spoke about the safety of medical devices like defibrillators which run proprietary software. After that there were many parallel talks about various free software projects, technologies and tools. This edition of Swatantra focused more on art, and it was good to learn more about artists' free software stacks. The most amazing thing is that throughout the conference I met so many people from FSCI whom I only knew through Matrix/IRC/emails.

The first day's talks ended at 6 PM. After that the Oorali band performed for us. This band is well-known in Kerala because they speak up on many social and political issues, which makes them the best match for a free software conference's cultural program :). Their songs are mainly about birds, forests and freedom, and we danced to many of the songs.

On the last day's evening there was a kind of BoF with FSF person Benjamin Mako Hill. Half way through I came to know he is also a Debian Developer :D. Unfortunately this BoF stopped as he was called for a panel discussion. After the panel discussion all of us Debian people gathered and had a chat.

Abhijith PA http://abhijithpa.me/ Abhijith PA

Migrating the debichem group subversion repository to Git - Part 1: svn-all-fast-export basics

Planet Debian - Mër, 31/01/2018 - 1:24md

With the deprecation of alioth.debian.org the subversion service hosted there will be shut down too. According to lintian the estimated date is May 1st 2018, and there are currently more than 1500 source packages affected. In the debichem group we've used the subversion service since 2006. Our repository contains around 7500 commits made by around 20 different alioth user accounts and the packaging history of around 70 to 80 packages, including packaging attempts. I've spent the last days preparing the Git migration, comparing different tools, checking the created repositories and testing possibilities to automate the process as much as possible. The resulting scripts can currently be found here.

Of course I began as described at the Debian Wiki. But following this guide, using git-svn and converting the tags with the script supplied under the rubric Convert remote tags and branches to local ones gave me really weird results. The tags were pointing to the wrong commit IDs. I thought that git-svn was to blame and reported this as bug #887881. In the following mail exchange Andreas Kaesorg explained to me that the issue is caused by so-called mixed-revision tags in our repository, as shown in the following example:


$ svn log -v -r7405
------------------------------------------------------------------------
r7405 | dleidert | 2018-01-17 18:14:57 +0100 (Wed, 17 Jan 2018) | 1 line
Changed paths:
A /tags/shelxle/1.0.888-1 (from /unstable/shelxle:7396)
R /tags/shelxle/1.0.888-1/debian/changelog (from /unstable/shelxle/debian/changelog:7404)
R /tags/shelxle/1.0.888-1/debian/control (from /unstable/shelxle/debian/control:7403)
D /tags/shelxle/1.0.888-1/debian/patches/qt5.patch
R /tags/shelxle/1.0.888-1/debian/patches/series (from /unstable/shelxle/debian/patches/series:7402)
R /tags/shelxle/1.0.888-1/debian/rules (from /unstable/shelxle/debian/rules:7403)

[svn-buildpackage] Tagging shelxle 1.0.888-1
------------------------------------------------------------------------

Looking into the git log, the tags determined by git-svn are really not in their right place in the history line, even before running the script to convert the branches into real Git tags. So IMHO git-svn is not able to cope with this kind of situation. Because it also cannot handle our branch model, where we use /branch/package/, I began to look for different tools and found svn-all-fast-export, a tool created (by KDE?) to convert even large subversion repositories based on a ruleset. My attempt with this tool was so successful (not to speak of how fast it is) that I want to describe it in more detail. Maybe it will prove to be useful for others as well, and it won't hurt to give some more information about this poorly documented tool :)

Step 1: Setting up a local subversion mirror

First I suggest setting up a local copy of the subversion repository to migrate, kept in sync with the remote repository. This can be achieved using the svnsync command. There are several howtos for this, so I won't describe this step here; please check out this guide. In my case I have such a copy in /srv/svn/debichem.
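
For reference, a minimal sketch of such a mirror setup (the remote URL below is a placeholder, not the actual alioth URL):

svnadmin create /srv/svn/debichem
# svnsync must be allowed to change revision properties on the mirror:
echo '#!/bin/sh' > /srv/svn/debichem/hooks/pre-revprop-change
chmod +x /srv/svn/debichem/hooks/pre-revprop-change
svnsync init file:///srv/svn/debichem svn://svn.example.org/svn/debichem
svnsync sync file:///srv/svn/debichem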

Step 2: Creating the identity map

svn-all-fast-export needs at least two files to work. One is the so called identity map. This file contains the mapping between subversion user IDs (login names) and the (Git) committer info, like real name and mail address. The format is the same as used by git-svn:

loginname = author name <mail address>

e.g.

dleidert = Daniel Leidert <dleidert@debian.org>

The list of subversion user IDs can be obtained the same way as described in the Wiki:

svn log SVN_URL | awk -F'|' '/^r[0-9]+/ { print $2 }' | sort -u

Just replace the placeholder SVN_URL with your subversion URL. Here is the complete file for the debichem group.

Step 3: Creating the rules

The most important thing is the second file, which contains the processing rules. There is really not much documentation out there, so when in doubt, one has to read the source file src/ruleparser.cpp. I'll describe what I've already found out. If you are impatient, here is my result so far.

The basic rules are:


create repository REPOSITORY
...
end repository

and


match PATTERN
...
end match

The first rule creates a bare git repository with the name you've chosen (above represented by REPOSITORY). It can have one child, the repository description, which is put into the repository's description file. There are AFAIK no other elements allowed here. So in the case of e.g. ShelXle the rule might look like this:


create repository shelxle
description packaging of ShelXle, a graphical user interface for SHELXL
end repository

You'll have to create every repository before you can put something into it, else svn-all-fast-export will exit with an error. JFTR: It won't complain if you create a repository but don't put anything into it. You will just end up with an empty Git repository.

Now the second type of rule is the most important one. Based on regular expression match patterns (above represented by PATTERN), one can define actions, including the possibility to limit these actions to repositories, branches and revisions. The patterns are applied in their order of appearance; thus if a matching pattern is found, other patterns that match but appear later in the rules file won't apply! So a special rule should always be put above a general rule. The patterns that can be used seem to be of type QRegExp and support basic Perl regular expression features, including e.g. capturing, backreferences and lookahead. For a multi-package subversion repository with standard layout (that is /PACKAGE/{trunk,tags,branches}/), clean naming and clean subversion history, the rules could be:


match /([^/]+)/trunk/
repository \1
branch master
end match

match /([^/]+)/tags/([^/]+)/
repository \1
branch refs/tags/debian/\2
annotated true
end match

match /([^/]+)/branches/([^/]+)/
repository \1
branch \2
end match

The first rule captures the (source) package name from the path and puts it into the backreference \1. It applies to the trunk directory history and will put everything it finds there into the repository named after the directory - here we simply use the backreference \1 as that name - and there into the master branch. Note that svn-all-fast-export will error out if it tries to access a repository which has not been created, so make sure all repositories are created as shown with the create repository rule. The second rule captures the (source) package name from the path too and puts it into the backreference \1. But in backreference \2 it further captures (and applies to) all the tag directories under the /tags/ directory. Usually these have a Debian package version as their name. With the branch statement as shown in this rule, the tags, which are really just branches in subversion, are automatically converted to annotated Git tags (another advantage of svn-all-fast-export over git-svn). Without the annotated statement, the tags created would be lightweight tags. The tag name (here: debian/VERSION) is determined via backreference \2. The third rule is almost the same, except that everything found in the matching path will be pushed into a Git branch named after the top-level directory captured from the subversion path.

Now in an ideal world, this might be enough and the actual conversion can be done. The command should only be executed in an empty directory. I'll assume that the identity map is called authors.txt, that the rules file is called debichem.rules, and that both are in the parent directory. I'll also assume that the local subversion mirror of the packaging repository is at /srv/svn/mymirror. So ...

svn-all-fast-export --stats --identity-map=../authors.txt --rules=../debichem.rules /srv/svn/mymirror

... will create one or more bare Git repositories (depending on your rules file) in the current directory. After the command succeeded, you can test the results ...


git -C REPOSITORY/ --bare show-ref
git -C REPOSITORY/ --bare log --all --graph

... and you will find your repository's description (if you added one to the rules file) in REPOSITORY/description:

cat REPOSITORY/description

Please note that not all Debian version strings are well-formed Git reference names and therefore need fixing. There might also be gaps shown in the Git history log. Or maybe the command didn't even succeed, or complained (without you noticing it), or you ended up with an empty repository although the matching rules applied. I encountered all of these issues and I'll describe the causes and fixes in the next blog article.

But if everything went well (you have no history gaps, the tags are in their right place within the linearized history and the repository looks fine) and you can and want to proceed, you might want to skip to the next step.

In the debichem group we used a different layout. The packaging directories were under /{unstable,experimental,wheezy,lenny,non-free}/PACKAGE/. This translates to /unstable/PACKAGE/ and /non-free/PACKAGE/ being the trunk directories and the others being the branches. The tags are in /tags/PACKAGE/. And packages that are yet to be uploaded are located in /wnpp/PACKAGE/. With this layout, the basic rules are:


# trunk handling
# e.g. /unstable/espresso/
# e.g. /non-free/molden/
match /(?:unstable|non-free)/([^/]+)/
repository \1
branch master
end match

# handling wnpp
# e.g. /wnpp/osra/
match /(wnpp)/([^/]+)/
repository \2
branch \1
end match

# branch handling
# e.g. /wheezy/espresso/
match /(lenny|wheezy|experimental)/([^/]+)/
repository \2
branch \1
end match

# tags handling
# e.g. /tags/espresso/VERSION/
match /tags/([^/]+)/([^/]+)/
repository \1
annotated true
branch refs/tags/debian/\2
substitute branch s/~/_/
substitute branch s/:/_/
end match

In the first rule, there is a non-capturing expression (?: ... ), which simply means that the rule applies to both /unstable/ and /non-free/. Thus the backreference \1 refers to the second part of the path, the package directory name. The contents found are pushed to the master branch. In the second rule, the contents of the wnpp directory are not pushed to master, but instead to a branch called wnpp. This was necessary because of overlaps between the /unstable/ and /wnpp/ history, and it already shows that the repository's history makes things complicated. In the third rule, the first backreference \1 determines the branch (note the capturing expression, in contrast to the first rule) and the second backreference \2 the package repository to act on. The last rule is similar, but now \1 determines the package repository and \2 the tag name (debian package version) based on the matching path. The example also shows another issue, which I'd like to explain more in the next article: some characters we use in debian package versions, e.g. the tilde sign and the colon, are not allowed within Git tag names and must therefore be substituted, which is done by the substitute branch EXPRESSION instructions.

Step 4: Cleaning the bare repository

The tool documentation suggests running ...

git -C REPOSITORY/ repack -a -d -f

... before you upload this bare repository to another location. But Stuart Prescott told me on the debichem list that this might not be enough and might still leave some garbage behind. I'm not experienced enough to judge here, but his suggestion is to clone the repository, either making a bare clone, or cloning and initializing a new bare repository. I used the first approach:


git clone --bare REPOSITORY/ REPOSITORY.git
git -C REPOSITORY.git/ repack -a -d -f

Please note that this won't copy the repository's description file. You'll have to copy it manually if you want to keep it. The resulting bare repository can be uploaded (e.g. to git.debian.org as a personal repository):


cp REPOSITORY/description REPOSITORY.git/description
touch REPOSITORY.git/git-daemon-export-ok
rsync -avz REPOSITORY.git git.debian.org:~/public_git/

Or you clone the repository, add a remote origin and push everything there. It is even possible to use the GitLab API at salsa.debian.org to create a project and push there. I'll save the latter for another post. If you are hasty, you'll find a script here.
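
Just to sketch the idea (the token, user name and project name below are placeholders; the details will follow in that post):

curl --request POST --header "PRIVATE-TOKEN: YOUR_TOKEN" \
  "https://salsa.debian.org/api/v4/projects?name=REPOSITORY"
git -C REPOSITORY.git/ remote add origin git@salsa.debian.org:YOUR_USER/REPOSITORY.git
git -C REPOSITORY.git/ push origin --all
git -C REPOSITORY.git/ push origin --tags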

Daniel Leidert noreply@blogger.com [erfahrungen, meinungen, halluzinationen]

An old DOS BBS in a Docker container

Planet Debian - Wed, 31/01/2018 - 12:32pm

A while back, I wrote about my Debian Docker base images. I decided to extend this concept a bit further: to running DOS applications in Docker.

But first, a screenshot:

It turns out this is possible, but difficult. I went through all three major DOS emulators available (dosbox, qemu, and dosemu). I got them all running inside the Docker container, but had a number of, er, fun issues to resolve.

The general thing one has to do here is present a fake modem to the DOS environment. This needs to be exposed outside the container as a TCP port. That much is possible in various ways — I wound up using tcpser. dosbox had a TCP modem interface, but it turned out to be too buggy for this purpose.

The challenge comes in where you want to be able to accept more than one incoming telnet (or TCP) connection at a time. DOS was not a multitasking operating system, so there were any number of hackish solutions back then. One might have had multiple physical computers, one for each incoming phone line. Or they might have run multiple pseudo-DOS instances under a multitasking layer like DESQview, OS/2, or even Windows 3.1.

(Side note: I just learned of DESQview/X, which integrated DESQview with X11R5 and replaced the Windows 3 drivers to allow running Windows as an X application).

For various reasons, I didn’t want to try running one of those systems inside Docker. That left me with emulating the original multiple physical node setup. In theory, pretty easy — spin up a bunch of DOS boxes, each using at most 1MB of emulated RAM, and go to town. But here came the challenge.

In a multiple-physical-node setup, you need some sort of file sharing, because your nodes have to access the shared message and file store. There were a myriad of clunky ways to do this in the old DOS days – Netware, LAN manager, even some PC NFS clients. I didn’t have access to Netware. I tried the Microsoft LM client in DOS, talking to a Samba server running inside the Docker container. This I got working, but the LM client used so much RAM that, even with various high memory tricks, BBS software wasn’t going to run. I couldn’t just mount an underlying filesystem in multiple dosbox instances either, because dosbox did caching that wasn’t going to be compatible.

This is why I wound up using dosemu. Besides being a more complete emulator than dosbox, it had a way of sharing the host’s filesystems that was going to work.

So, all of this wound up with this: jgoerzen/docker-bbs-renegade.

I also prepared building blocks for others that want to do something similar: docker-dos-bbs and the lower-level docker-dosemu.

As a side bonus, I also attempted running this under Joyent’s Triton (SmartOS, Solaris-based). I was pleasantly impressed that I got it all almost working there. So yes, a Renegade DOS BBS running under a Linux-based DOS emulator in a container on a Solaris machine.

John Goerzen http://changelog.complete.org The Changelog

Review: My Grandmother Asked Me to Tell You She's Sorry

Planet Debian - Wed, 31/01/2018 - 5:19am

Review: My Grandmother Asked Me to Tell You She's Sorry, by Fredrik Backman

Series: Britt-Marie #1
Translator: Henning Koch
Publisher: Washington Square
Copyright: 2014
Printing: April 2016
ISBN: 1-5011-1507-3
Format: Trade paperback
Pages: 372

Elsa is seven, going on eight. She's not very good at it; she knows she's different and annoying, which is why she gets chased and bullied constantly at school and why her only friend is her grandmother. But Granny is a superhero, who's also very bad at being old. Her superpowers are lifesaving and driving people nuts. She made a career of being a doctor in crisis zones; now she makes a second career of, well, this sort of thing:

Or that time she made a snowman in Britt-Marie and Kent's garden right under their balcony and dressed it up in grown-up clothes so it looked as if a person had fallen from the roof. Or that time those prim men wearing spectacles started ringing all the doorbells and wanted to talk about God and Jesus and heaven, and Granny stood on her balcony with her dressing gown flapping open, shooting at them with her paintball gun

The other thing Granny is good at is telling fairy tales. She's been telling Elsa fairy tales since she was small and her mom and dad had just gotten divorced and Elsa was having trouble sleeping. The fairy tales are all about Miamas and the other kingdoms of the Land-of-Almost-Awake, where the fearsome War-Without-End was fought against the shadows. Miamas is the land from which all fairy tales come, and Granny has endless stories from there, featuring princesses and knights, sorrows and victories, and kingdoms like Miploris where all the sorrows are stored.

Granny and Miamas and the Land-of-Almost-Awake make Elsa's life not too bad, even though she has no other friends and she's chased at school. But then Granny dies, right after giving Elsa one final quest, her greatest quest. It starts with a letter and a key, addressed to the Monster who lives downstairs. (Elsa calls him that because he's a huge man who only seems to come out at night.) And Granny's words:

"Promise you won't hate me when you find out who I've been. And promise me you'll protect the castle. Protect your friends."

My Grandmother Asked Me to Tell You She's Sorry is written in third person, but it's close third person focused on Elsa and her perspective on the world. She's a precocious seven-year-old who I thought was nearly perfect (rare praise for me for children in books), which probably means some folks will think she's a little too precocious. But she has a wonderful voice, a combination of creative imagination, thoughtfulness, and good taste in literature (particularly Harry Potter and Marvel Comics). The book is all about what it's like to be seven, going on eight, with a complicated family situation and an awful time at school, but enough strong emotional support from her family that she's still full of stubbornness, curiosity, and fire.

Her grandmother's quest gets her to meet the other residents of the apartment building she lives in, turning them into more than the backdrop of her life. That, in turn, adds new depth to the fairy tales her Granny told her. Their events turn out to not be pure fabrication. They were about people, the many people in her Granny's life, reshaped by Granny's wild imagination and seen through the lens of a child. They leave Elsa surprisingly well-equipped to navigate and start to untangle the complicated relationships surrounding her.

This is where Backman pulls off the triumph of this book. Elsa's discoveries that her childhood fairy tales are about the people around her, people with a long history with her grandmother, could have been disillusioning. This could be the story of magic fading into reality and thereby losing its luster. And at first Elsa is quite angry that other people have this deep connection to things she thought were hers, shared with her favorite person. But Backman perfectly walks that line, letting Elsa keep her imaginative view of the world while intelligently mapping her new discoveries onto it. The Miamas framework withstands serious weight in this story because Elsa is flexible, thoughtful, and knows how to hold on to the pieces of her story that carry deeper truth. She sees the people around her more clearly than anyone else because she has a deep grasp of her grandmother's highly perceptive, if chaotic, wisdom, baked into all the stories she grew up with.

This book starts out extremely funny, turns heartwarming and touching, and develops real suspense by the end. It starts out as Elsa nearly alone against the world and ends with a complicated matrix of friends and family, some of whom were always supporting each other beneath Elsa's notice and some of whom are re-learning the knack. It's a beautiful story, and for the second half of the book I could barely put it down.

I am, as a side note, once again struck by the subtle difference in stories from cultures with a functional safety net. I caught my American brain puzzling through ways that some of the people in this book could still be alive and living in this apartment building since they don't seem capable of holding down jobs, before realizing this story is not set in a brutal Hobbesian jungle of all against all like the United States. The existence of this safety net plays no significant role in this book apart from putting a floor under how far people can fall, and yet it makes all the difference in the world and in some ways makes Backman's plot possible. Perhaps publishers should market Swedish literary novels as utopian science fiction in the US.

This is great stuff. The back and forth between fairy tales and Elsa's resilient and slightly sarcastic life can take a bit to get used to, but stick with it. All the details of the fairy tales matter, and are tied back together wonderfully by the end of the book. Highly recommended. In its own way, this is fully as good as A Man Called Ove.

There is a subsequent book, Britt-Marie Was Here, that follows one of the supporting characters of this novel, but My Grandmother Asked Me to Tell You She's Sorry stands alone and reaches a very satisfying conclusion (including for that character).

Rating: 10 out of 10

Russ Allbery https://www.eyrie.org/~eagle/ Eagle's Path

logo.png for default avatar for GitLab repos

Planet Debian - Wed, 31/01/2018 - 4:13am

Debian and GNOME have both recently adopted self-hosted GitLab for their git hosting. GNOME’s service is named simply https://gitlab.gnome.org/ ; Debian’s has the more intriguing name https://salsa.debian.org/ . If you ask the Salsa sysadmins, they’ll explain that they were in a Mexican restaurant when they needed to decide on a name!

There’s a useful under-documented feature I found. If you place a logo.png in the root of your repository, it will be automatically used as the default “avatar” for your project (in other words, the logo that shows up on the web page next to your project).

I added a logo.png to GNOME Tweaks at GNOME and it automatically showed up in Salsa when I imported the new version.

Other Notes

I first tried with a symlink to my app icon, but it didn’t work. I had to actually copy the icon.

The logo.png convention doesn’t seem to be supported at GitHub currently.

Jeremy Bicha https://jeremy.bicha.net Debian – Just Jeremy

Fair communication requires mutual consent

Planet Debian - Tue, 30/01/2018 - 9:33pm

I was pleased to read Shirish Agarwal's blog in reply to the blog I posted last week Do the little things matter?

Given the militaristic theme used in my own post, I was also somewhat amused to see news this week of the Strava app leaking locations and layouts of secret US military facilities like Area 51. What a way to mark International Data Privacy Day. Maybe rather than inadvertently misleading people to wonder if I was suggesting that Gmail users don't make their beds, I should have emphasized that Admiral McRaven's boot camp regime for Navy SEALS needs to incorporate some of my suggestions about data privacy?

A highlight of Agarwal's blog is his comment "I usually wait for a day or more when I feel myself getting inflamed/heated", and I wish this had occurred in some of the other places where my ideas were discussed. Even though my ideas are sometimes provocative, I would kindly ask people to keep point 2 of the Debian Code of Conduct in mind: "Assume good faith".

One thing that became clear to me after reading Agarwal's blog is that some people saw my example one-line change to Postfix's configuration as a suggestion that people need to run their own mail server. In fact, I had seen such comments before, but I hadn't realized why people were reaching the conclusion that I expect everybody to run a mail server. The purpose of that line was simply to emphasize the content of the proposed bounce message, to help people understand that the receiver of an email may never have agreed to Google's non-privacy policy, but if you do use Gmail, you impose that surveillance regime on them, and not just yourself, when you send them a message from a Gmail account.

Communication requires mutual agreement about the medium. Think about it another way: if you go to a meeting with your doctor and some stranger in a foreign military uniform is in the room, you might choose to leave and find another doctor rather than communicate under surveillance.

As it turns out, many people are using alternative email services, even if they only want a web interface. There is already a feature request discussion in ProtonMail about letting users choose to opt-out of receiving messages monitored by Google and send back the bounce message suggested in my blog. Would you like to have that choice, even if you didn't use it immediately? You can vote for that issue or leave your own feedback comments in there too.

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Imagine the world's biggest Kanban / Scrumboard

Planet Debian - Tue, 30/01/2018 - 7:52pm

Imagine a Kanban board that could aggregate issues from multiple backends, including your CalDAV task list, Bugzilla systems (Fedora, Mozilla, GNOME communities), Github issue lists and the Debian Bug Tracking System, visualize them together and coordinate your upstream fixes and packaging fixes in a single sprint.

It is not so far-fetched - all of those systems already provide read access using iCalendar URLs, as described in my earlier blog. There are REST APIs to manipulate most of them too. Why not write a front end to poll them and merge the content into a Kanban board view?
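
As a back-of-the-envelope sketch (the feed URLs below are made-up placeholders), even a small shell loop can already aggregate the task titles from several iCalendar feeds; a real front end would use a proper VTODO/VEVENT parser and map each item onto a board column:

for url in \
    'https://caldav.example.org/tasks.ics' \
    'https://bugzilla.example.org/my-bugs.ics'; do
  curl -s "$url" | grep '^SUMMARY' | sed 's/^SUMMARY[^:]*:/ * /'
done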

We've added this as a potential GSoC project using Python and PyQt.

If you'd like to see this or any of the other proposed projects go ahead, you don't need to be a Debian Developer to suggest ideas, refer a student or be a co-mentor. Many of our projects have relevance in multiple communities. Feel free to get in touch with us through the debian-outreach mailing list.

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Reproducible Builds: Weekly report #144

Planet Debian - Tue, 30/01/2018 - 7:05pm

Here's what happened in the Reproducible Builds effort between Sunday January 21 and Saturday January 27 2018:

Media coverage

Development and fixes in key packages
  • Mattia uploaded dpkg (1.19.0.5.0~reproducible1) to our experimental toolchain.

  • cpython-3.7 now has .pyc files without timestamps. Most of the work is happening in PEP 552, but older Python versions will probably still need variants of the mtime patch because the new .pyc format is not compatible.

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

35 package reviews have been added, 37 have been updated and 91 have been removed this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (24)
  • Niels Thykier (8)
diffoscope development

reproducible-website development

jenkins.debian.net development

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible.alioth.debian.org/blog/ Reproducible builds blog

2018 and CD burning still painful

Planet Debian - Tue, 30/01/2018 - 4:27pm

Ok, we are in 2018, and for the first time in ages I wanted to burn an audio CD, and dared to think about CD_TEXT. You know what – it is hard, as in impossible, for a normal user. And that in 2018. Debian, you could try to do better.

Before I start, I guess it is necessary to make clear that this is a newly installed system, less than a few months old, and that I, as a Debian Developer, am not completely new to system administration. Furthermore, the user trying to burn is a member of the cdrom group.

The problem with most GUI frontends is that they rely on wodim, a member of the cdrkit family. And wodim itself simply doesn’t work:

$ wodim -dummy -v speed=16 dev=/dev/sr0 -audio track*
wodim: No write mode specified.
wodim: Assuming -tao mode.
wodim: Future versions of wodim may have different drive dependent defaults.
TOC Type: 0 = CD-DA
wodim: Operation not permitted. Warning: Cannot raise RLIMIT_MEMLOCK limits.
wodim: Resource temporarily unavailable. Cannot get mmap for 12587008 Bytes on /dev/zero.
$

Yes I know, the easy solution is to make wodim setuid root, but this is not what I want. Unfortunately the parent of wodim, cdrecord – which, in contrast to cdrkit/wodim, is still in development – works, but only because it is setuid root after a standard installation.

That is all complicated by the fact that the main front-ends are, well, broken:

  • K3b: feels completely broken: it cannot open its own saved project files, the .inf files generated for an audio CD project are completely broken and void of any content, and it hangs regularly without any response
  • Nautilus DVD/CD burning: incapable of doing audio CDs, offers only data CDs
  • Brasero: terminates with "ejecting disc" and "An unknown error occurred"; looking at the log file shows that again wodim is the culprit. But Brasero could be a bit more helpful! Additional minus point: no CDDB interface.

The only exception I found was Xfburn, which managed to burn the CD without any hitch or problem. Wow – but it doesn't support CD_TEXT, which is also not optimal.

A solution for burning CD_TEXT

So in case you really want to burn CD_TEXT, there is at the moment, as far as I see, only one option, and that is the command line using Cdrdao. Thanks to this excellent article I managed to burn using the following command (as user, not root, nothing special):

cdrdao write --device /dev/sr0 --driver generic-mmc:0x10 -v 2 -n --eject mycd.toc

The format of the .toc file is a bit complicated but documented; see the linked article.
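
To give an idea, here is a minimal (illustrative) example of a CD_TEXT-enabled .toc file with a single track; consult the cdrdao documentation for the authoritative details:

CD_DA
CD_TEXT {
  LANGUAGE_MAP { 0 : EN }
  LANGUAGE 0 {
    TITLE "Album title"
    PERFORMER "Album artist"
  }
}

TRACK AUDIO
CD_TEXT {
  LANGUAGE 0 {
    TITLE "Track title"
    PERFORMER "Track artist"
  }
}
FILE "track01.wav" 0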

All in all a very depressing situation, I have to say, especially in 2018 …

Norbert Preining https://www.preining.info/blog There and back again

Exploring minimax polynomials with Sollya

Planet Debian - Sat, 27/01/2018 - 11:18am

Following Fabian Giesen's advice, I took a look at Sollya—I'm not really that much into numerics (and Sollya, like the other stuff that comes out of the same group, is really written by hardcore numerics nerds), but approximation is often useful.

A simple example: When converting linear light values to sRGB, you need to be able to compute the formula f(x) = ((x + ɑ - 1) / ɑ)^ɣ for a given (non-integer) ɑ and ɣ. (Movit frequently needs this. For the specific case of sRGB, GPUs often have hard-coded lookup tables, but they are not always applicable, for instance if the data comes from Y'CbCr.) However, even after simplifications, the exponentiation is rather expensive to run for every pixel, so we'd like some sort of approximation.

If you've done any calculus, you may have heard of Taylor series, which looks at the derivatives in a certain point and creates a polynomial from that. One of the perhaps most famous is arctan(x) = x - 1/3 x³ + 1/5 x⁵ - 1/7 x⁷ + ..., which gives rise to a simple formula for approximating pi if you set x=1 (since arctan(1) = pi/4). However, for practical approximation, Taylor series are fairly useless; they're accurate near the origin point of the expansion, but don't care at all about what happens far from it. Minimax polynomials are better; they minimize the maximum error over the range of interest.

In the past, I've been using Maple for this (I never liked Mathematica much); it's non-free, but not particularly expensive for a personal license, and it can do pretty much everything I expect from a computer algebra system. However, it would be interesting to see if Sollya could do better. After toying around a bit, it seems there are pros and cons:

  • Sollya appears to be faster. I haven't made any formal benchmarks, but I just feel like I have to wait a lot less for it.
  • I find Sollya's syntax maybe a bit more obscure (e.g., [| to start a list), although this is probably partially personal preference. Its syntax error handling is also a lot less friendly.
  • Sollya appears to be a lot more robust in actually terminating with a working result. E.g., Maple just fails at optimizing sqrt(x) over 0..1 (a surprisingly hard case), whereas I haven't really been able to make Sollya fail yet except on malformed problems (e.g. asking to optimize the relative error of a function which is zero at certain points). Granted, I haven't pushed it that hard.
  • Maple supports a much wider range of functions. This is a killer for me; I frequently need something as simple as piecewise functions, and Sollya simply doesn't appear to support them.
  • Maple supports rational expansions, i.e. two polynomials divided by each other (which can often increase performance dramatically—although the execution cost also balloons, of course). Sollya doesn't. On the other hand, Sollya supports expansion over given base functions, e.g. if you happen to have sin(x) computed for whatever obscure reason, you can get an expansion of the type f(x) = a + b·sin(x) + cx + d·sin(x)² + ex².
  • Maple supports arbitrary weighing of the error (e.g. if you care more about errors at the endpoints)—I find this super-useful, especially if you are dealing with transformed variables or piecewise approximations. Sollya only supports relative and absolute errors, which is more limiting.
  • Sollya can seemingly be embedded as a library. Useful for some, not really relevant for me.
  • And finally, Sollya doesn't optimize coefficients over arbitrary precision; you tell it what accuracy you have to deal with (number of bits in floating or fixed point) and it optimizes the coefficients with that round-off error in mind. (I don't know if it also deals with intermediate roundoff errors when evaluating the polynomial.) Fabian makes a big deal of this, but for fp32, it doesn't really seem to matter much; I did some tests relative to what I had already gotten out of Maple, and the difference in maximum error was microscopic.

So, the verdict? Sollya is certainly good, and I can see myself using it in the future, but for me, it's more of an augmentation than a replacement for Maple for this use.

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

Detecting binary files in the history of a git repository

Planet Debian - Pre, 26/01/2018 - 3:57md
Git, VCSes and binary files

Git is famous and has become popular even in enterprise/commercial environments. But Git is also infamous regarding storage of large and/or binary files that change often, in spite of the fact that they can be stored efficiently. For large files there have been several attempts to fix the issue, with varying degrees of success, the most successful being git-lfs and git-annex.

My personal view is that, contrary to common practice, it is a bad idea to store binaries in any VCS. Still, this practice has been and still is in use in many projects, especially closed source ones. I won't go into the reasons or how legitimate they are; let's just say that we might finally convince people that binaries should be removed from the VCS, Git in particular.

Since the purpose of a VCS is to make sure no version of the stored objects is ever lost, Linus designed Git in such a way that, knowing the exact hash of the tip/head of your branch, you are guaranteed that the whole history of that branch hasn't changed, even if the repository was stored in a non-trusted location (I will ignore hash collisions, for practical reasons).

The consequence of this is that if the history is changed by even one bit, all commit hashes after that change will also change. This is what people refer to when they say they rewrite the (git) history, most often in the context of a rebase.
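You can see this for yourself on any scratch repository; the commands below are a hedged sketch, not part of the original recipe:

# amend a commit three steps back and watch every later hash change
git log --oneline -3   # note the three newest hashes
git rebase -i HEAD~3   # mark the oldest as 'reword' and change its message
git log --oneline -3   # all three hashes are now different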

But did you know that you could use git rebase to traverse the history of a branch and do all sorts of operations such as detecting all binary files that were ever stored in the branch?
Detecting any binary files, only in the current commit

As with everything on *nix, we start with some building blocks and construct our solution on top of them. Let's first find all files, except the ones under .git:

find . -type f -print | grep -v '^\.\/\.git\/'

Then we can use the 'file' utility to list the non-text files:

(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file) | egrep -v '(ASCII|Unicode) text'

And if there is any such file, it means the current git commit is one that needs our attention; otherwise, we're fine.

(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file) | egrep -v '(ASCII|Unicode) text' && (echo 'ERROR:' && git show --oneline -s) || echo OK

Of course, we assume here that the work tree is clean.
Checking all commits in a branch

Since we want this to be an efficient process, since we only care whether the history contains binaries at all, and since branches are cheap in git, we can use a temporary branch that can be thrown away after our processing is finished.

Making a new branch for such experiments is also a good idea to avoid losing history in case we make some stupid mistake during the experiment.

Hence, we first create a new branch which points to the exact same tip as the branch to be checked, and switch to it:

git checkout -b test_bins

Git has many commands that facilitate automation, and in my case I want to run the chain of commands on all commits. For this we can put our chain of commands in a script (the here-document below is just a convenient way to write the file in one go):

cat > ../check_file_text.sh <<'EOF'
#!/bin/sh

(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file) | egrep -v '(ASCII|Unicode) text' && (echo 'ERROR:' && git show --oneline -s) || echo OK
EOF

Then we (ab)use 'git rebase' to execute that for us for all commits:

git rebase --exec="sh ../check_file_text.sh" -i $startcommit

After we execute this, the editor window will pop up; just save and exit. Assuming $startcommit is the hash of the first commit we know to be clean, or beyond which we don't care to search for binaries, this will check all commits since then.

Here is an example output when checking the newest 5 commits:

$ git rebase --exec="sh ../check_file_text.sh" -i HEAD~5
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Successfully rebased and updated refs/heads/test_bins.

Please note this process can change the history on the test_bins branch, but that is why we used a throw-away branch anyway, right? After we're done, we can go back to another branch and delete the test branch.

$ git co master
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.
$ git branch -D test_bins
Deleted branch test_bins (was 6358b91).

Enjoy!

eddyp noreply@blogger.com Rambling around foo

prrd 0.0.2: Many improvements

Planet Debian - Pre, 26/01/2018 - 12:30md

The prrd package was introduced recently, and made it to CRAN shortly thereafter. The idea of prrd is simple, and described in some more detail on its webpage and its GitHub repo. Reverse dependency checks are an important part of package development and are easily done in a (serial) loop. But these checks are also generally embarrassingly parallel, as there is little or no interdependency between them (besides maybe shared build dependencies). See the following screenshot (running six parallel workers, arranged in a split byobu session).
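As a rough illustration of the parallel setup, something like the following could drive two workers from the shell. The enqueue/dequeue function names follow the package documentation, but treat the exact signatures (package, directory) and the package name as assumptions:

# queue up the reverse dependencies of a (hypothetical) package, then
# let two parallel workers drain the queue
Rscript -e 'prrd::enqueueJobs(package="dplyr", directory="/tmp/prrd")'
Rscript -e 'prrd::dequeueJobs(package="dplyr", directory="/tmp/prrd")' &
Rscript -e 'prrd::dequeueJobs(package="dplyr", directory="/tmp/prrd")' &
wait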

This note announces the second, and much improved, release. The package now runs on all operating systems supported by R and no longer has external system requirements. Several functions were improved, two new helper functions were added (in a so-far still preliminary form), and everything is more robust now.

The release is summarised in the NEWS entry:

Changes in prrd version 0.0.2 (2018-01-24)
  • The package no longer requires wget.

  • Enhanced sanity checker function.

  • Expanded and improved dequeue function.

  • No longer use $HOME in xvfb-run-safe (#2).

  • The use of xvfb-run is now conditional on the OS (#3).

  • The set of available packages is no longer constrained to CRAN, but could be via the local setup script (#4).

  • The dequeue() function now uses system2().

  • The enqueue() function checks whether no reverse dependencies are found, and stops in that case (#6).

  • The enqueue() function checks that repository information has been set (#5).

CRANberries provides the usual summary of changes to the previous version. See the aforementioned webpage and its repo for details. For more questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Changes in Prometheus 2.0

Planet Debian - Enj, 25/01/2018 - 1:00pd

This is one part of my coverage of KubeCon Austin 2017.

2017 was a big year for the Prometheus project, as it published its 2.0 release in November. The new release ships numerous bug fixes, new features and, notably, a new storage engine that brings major performance improvements. This comes at the cost of incompatible changes to the storage and configuration-file formats. An overview of Prometheus and its new release was presented to the Kubernetes community in a talk held during KubeCon + CloudNativeCon. This article covers what changed in this new release and what is brewing next in the Prometheus community; it is a companion to this article, which provided a general introduction to monitoring with Prometheus.

What changed

Orchestration systems like Kubernetes regularly replace entire fleets of containers for deployments, which means rapid changes in parameters (or "labels" in Prometheus-talk) like hostnames or IP addresses. This was creating significant performance problems in Prometheus 1.0, which wasn't designed for such changes. To correct this, Prometheus ships a new storage engine that was specifically designed to handle continuously changing labels. This was tested by monitoring a Kubernetes cluster where 50% of the pods would be swapped every 10 minutes; the new design was proven to be much more effective. The new engine boasts a hundred-fold I/O performance improvement, a three-fold improvement in CPU usage, a five-fold improvement in memory usage, and increased space efficiency. This matters most for container deployments, but it means improvements for any configuration. Anecdotally, there was no noticeable extra load on the servers where I deployed Prometheus, at least nothing that the previous monitoring tool (Munin) could detect.

Prometheus 2.0 also brings new features like snapshot backups. The project has a longstanding design wart regarding data volatility: backups are deemed to be unnecessary in Prometheus because metrics data is considered disposable. According to Goutham Veeramanchaneni, one of the presenters at KubeCon, "this approach apparently doesn't work for the enterprise". Backups were possible in 1.x, but they involved using filesystem snapshots and stopping the server to get a consistent view of the on-disk storage. This implied downtime, which was unacceptable for certain production deployments. Thanks again to the new storage engine, Prometheus can now perform fast and consistent backups, triggered through the web API.
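For the record, triggering such a snapshot is a single HTTP call; the endpoint and flag below reflect my reading of the 2.0 admin API, so verify against the documentation before relying on them:

# the server must be started with --web.enable-admin-api
curl -XPOST http://localhost:9090/api/v1/admin/tsdb/snapshot
# the response names a snapshot directory under the storage path,
# which can then be copied away for backup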

Another improvement is a fix to the longstanding staleness handling bug where it would take up to five minutes for Prometheus to notice when a target disappeared. In that case, when polling for new values (or "scraping" as it's called in Prometheus jargon) a failure would make Prometheus reuse the older, stale value, which meant that downtime would go undetected for too long and fail to trigger alerts properly. This would also cause problems with double-counting of some metrics when labels vary in the same measurement.

Another limitation related to staleness is that Prometheus wouldn't work well with scrape intervals above two minutes (instead of the default 15 seconds). Unfortunately, that is still not fixed in Prometheus 2.0 as the problem is more complicated than originally thought, which means there's still a hard limit to how slowly you can fetch metrics from targets. This, in turn, means that Prometheus is not well suited for devices that cannot support sub-minute refresh rates, which, to be fair, is rather uncommon. For slower devices or statistics, a solution might be the node exporter "textfile support", which we mentioned in the previous article, and the pushgateway daemon, which allows pushing results from the targets instead of having the collector pull samples from targets.
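As a concrete illustration of the textfile approach, a cron job could publish a metric like this; the directory is whatever --collector.textfile.directory points at, so the path below is an assumption:

echo "backup_last_success_timestamp_seconds $(date +%s)" > /var/lib/node_exporter/textfile/backup.prom.$$
mv /var/lib/node_exporter/textfile/backup.prom.$$ /var/lib/node_exporter/textfile/backup.prom  # atomic rename avoids partial reads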

The migration path

One downside of this new release is that the upgrade path from the previous version is bumpy: since the storage format changed, Prometheus 2.0 cannot use the previous 1.x data files directly. In his presentation, Veeramanchaneni justified this change by saying this was consistent with the project's API stability promises: the major release was the time to "break everything we wanted to break". For those who can't afford to discard historical data, a possible workaround is to replicate the older 1.8 server to a new 2.0 replica, as the network protocols are still compatible. The older server can then be decommissioned when the retention window (which defaults to fifteen days) closes. While there is some work in progress to provide a way to convert 1.8 data storage to 2.0, new deployments should probably use the 2.0 release directly to avoid this peculiar migration pain.

Another key point in the migration guide is a change in the rules-file format. While 1.x used a custom file format, 2.0 uses YAML, matching the other Prometheus configuration files. Thankfully the promtool command handles this migration automatically. The new format also introduces rule groups, which improve control over the rules execution order. In 1.x, alerting rules were run sequentially but, in 2.0, the groups are executed sequentially and each group can have its own interval. This fixes the longstanding race conditions between dependent rules that create inconsistent results when rules would reuse the same queries. The problem should be fixed between groups, but rule authors still need to be careful of that limitation within a rule group.
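For reference, a minimal rules file in the new 2.0 YAML format might look as follows; the group name and expressions are invented for illustration:

groups:
  - name: example-rules
    interval: 30s            # per-group evaluation interval
    rules:
      - record: job:http_requests:rate5m
        expr: sum by (job) (rate(http_requests_total[5m]))
      - alert: HighErrorRate
        expr: job:http_errors:rate5m / job:http_requests:rate5m > 0.05
        for: 10m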

Remaining limitations and future

As we saw in the introductory article, Prometheus may not be suitable for all workflows because of its limited default dashboards and alerts, but also because of the lack of data-retention policies. There are, however, discussions about variable per-series retention in Prometheus and native down-sampling support in the storage engine, although this is a feature some developers are not really comfortable with. When asked on IRC, Brian Brazil, one of the lead Prometheus developers, stated that "downsampling is a very hard problem, I don't believe it should be handled in Prometheus".

Besides, it is already possible to selectively delete an old series using the new 2.0 API. But Veeramanchaneni warned that this approach "puts extra pressure on Prometheus and unless you know what you are doing, it's likely that you'll end up shooting yourself in the foot". A more common alternative to native archival facilities is to use recording rules to aggregate samples and collect the results in a second server with a slower sampling rate and different retention policy. And of course, the new release features external storage engines that can better support archival features. Those solutions are obviously not suitable for smaller deployments, which therefore need to make hard choices about discarding older samples or getting more disk space.
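The deletion call itself is again a plain HTTP request; as with snapshots, the endpoint below is my reading of the 2.0 admin API, and the matcher is an invented example:

# gated behind --web.enable-admin-api; -g stops curl from interpreting the brackets
curl -XPOST -g 'http://localhost:9090/api/v1/admin/tsdb/delete_series?match[]=old_metric{job="retired"}'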

As part of the staleness improvements, Brazil also started working on "isolation" (the "I" in the ACID acronym) so that queries wouldn't see "partial scrapes". This hasn't made the cut for the 2.0 release, and is still work in progress, with some performance impacts (about 5% CPU and 10% RAM). This work would also be useful when heavy contention occurs in certain scenarios where Prometheus gets stuck on locking. Some of the performance impact could therefore be offset under heavy load.

Another performance improvement mentioned during the talk is an eventual query-engine rewrite. The current query engine can sometimes cause excessive loads for certain expensive queries, according to the Prometheus security guide. The goal would be to optimize the current engine so that those expensive queries wouldn't harm performance.

Finally, another issue I discovered is that 32-bit support is limited in Prometheus 2.0. The Debian package maintainers found that the test suite fails on i386, which led Debian to remove the package from the i386 architecture. It is currently unclear if this is a bug in Prometheus: indeed, it is strange that the Debian tests actually pass on other 32-bit architectures like armel. Brazil, in the bug report, argued that "Prometheus isn't going to be very useful on a 32bit machine". The position of the project is currently that "'if it runs, it runs' but no guarantees or effort beyond that from our side".

I had the privilege to meet the Prometheus team at the conference in Austin and was happy to see different consultants and organizations working together on the project. It reminded me of my golden days in the Drupal community: different companies cooperating on the same project in a harmonious environment. If Prometheus can keep that spirit together, it will be a welcome change from the drama that affected certain monitoring software. This new Prometheus release could light a bright path for the future of monitoring in the free software world.

This article first appeared in the Linux Weekly News.

Antoine Beaupré https://anarc.at/tag/debian-planet/ pages tagged debian-planet

Movit 1.6.0 released

Planet Debian - Enj, 25/01/2018 - 12:49pd

I just released version 1.6.0 of Movit, my GPU-based video filter library.

The full changelog is below, but what's more interesting is maybe what isn't in it, namely the compute shader version of the high-quality resampling filter I blogged about earlier. It turned out that my benchmark setup was wrong in a sort-of subtle way, and unfortunately biased towards the compute shader. Fixing that negated the speed difference—it was actually usually a few percent slower than the fragment shader version, despite a fair amount of earlier tweaks. (It did use less CPU when setting up new parameters, which was nice for things like continuous zooms, but probably not enough to justify the GPU slowdown.)

Which means that after a month or so of testing and performance tuning, I had to scrap it—it's sad to notice so late (I only realized that something was wrong as I started writing up the final documentation, and figured I couldn't actually justify why I would let one of them chain with other effects and the other one not), but it's a sunk cost, and keeping it in based on known-bad benchmarks would have helped nobody. I've left it in a git branch in case the world should change.

I still believe there are useful gains from compute shaders—in particular, the deinterlacer shines—but it's increasingly clear to me that fragment shaders should remain the default go-to tool for graphics on the GPU. (I guess the next natural target would be the convolution/FFT operators, but they're not all that much used.)

The full changelog reads:

Movit 1.6.0, January 24th, 2018

  • Support for effects that work as compute shaders. Compute shaders are generally slower than fragment shaders for the same algorithm, but allow some forms of communication between shader invocations and have more flexible output, which can enable more efficient algorithms. See effect.h for more details. Note that the fastest rendering API on EffectChain is now to a texture if possible, not to an FBO. This will only matter if the last effect is a compute shader.

  • Movit now includes a compute shader implementation of DeinterlaceEffect, which is automatically used instead of the fragment shader implementation if your GPU and OpenGL driver supports it (in practice, this means on all platforms except macOS). The compute shader version is typically 20–80% faster than the fragment shader version, depending on your GPU and other factors. A compute shader implementation of ResampleEffect was written but ultimately failed to be faster, and so is not included.

  • Support for microbenchmarks of effects through the Google microbenchmarking framework (optional). Currently, DeinterlaceEffect and ResampleEffect have benchmarks; enable them by running the unit test with --benchmark (also try --benchmark --help).

  • Effects can now explicitly request _not_ to have mipmaps, which means they can do so without needing to request bounce and fiddling with the sampler state. Note that this is an API change for effects.

  • Movit now requires C++11, both to build and to #include the header files. Support for SDL1 has been dropped; unit tests and the demo program now need SDL2.

  • Various smaller bugfixes and optimizations.

Debian packages are on their way up through the NEW queue (there's a soname bump).

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

The Pune Metro 1st anniversary celebrations

Planet Debian - Enj, 25/01/2018 - 12:13pd

This would be long. First and foremost, a couple of days ago I got the following direct message on my twitter handle –

Hi Shirish,

We are glad to inform you that we are celebrating the 1st anniversary of Pune Metro Rail Project & the incorporation of both Nagpur Metro and Pune Metro into Maharashtra Metro Rail Corporation Limited(MahaMetro) on 23rd January at 13:00 hrs followed by the lunch.

On this occasion we would like to invite you to accept a small token of appreciation for your immense support & continued valuable interaction on our social media channels for the project at the hands of Dr. Brijesh Dixit, Managing Director, MahaMetro.

Venue: Hotel Citrus, Opposite PCMC Building, Pimpri-Chinchwad.
Time: 13:00 Hrs
Lunch: 14:00 hrs

Kindly confirm your attendance. Looking forward to meet you.

Regards & Thanks, Pune Metro Team

I went and had an interaction with Mr. Dixit and was gifted a gift card which can be redeemed.

I shared it on facebook. Some people have asked me privately as to what I did.

First of all, let me be very clear. I did not enter any competition or post my queries in expectation of any sort of monetary benefit. I have been a user of public transport both out of necessity and by choice, and I feel the need for a fast, secure, reasonably priced mode of transport. I am also immensely passionate and curious about public transport as a whole.

Just to share a couple of facts, and I'm sure most of you will agree with me: it takes more than twice the time if you are taking public transport, at least in India. Part of it is due to drivers not keeping the GPS on, and to users not insisting that GPS and its location-based information be used to work out when the next bus will come. For instance, my journey to PCMC was roughly 20 km and took about 45 minutes, but waiting took almost twice that, and this was not rush hour, when it could easily have taken double. Hence people opt for private vehicles even though they know it's harmful to the environment as well as to themselves and their loved ones.

PMC (Pune Municipal Corporation) has had a plan for over a decade to use GPS to tell the traveling public roughly when the next bus would arrive, but it has stalled for a number of reasons (corruption, lack of training, awareness, discipline). All of this hampers citizens' productivity, forces people to get private vehicles, and turns the whole thing into a zero-sum game. There is much more, but I don't want to go into that here.

Now people have asked me what sort of suggestions I gave or am giving –

After seeing MahaMetro's interaction with the press yesterday, it seems the press and media have a very poor understanding of the dynamics and are not really interested in enriching citizens' understanding of either the Pune Metro or the idea of the Integrated Transport Initiative, which has been in the making for some time now. Part of the issue also seems to lie with Pune Metro not sharing knowledge as much as it could, given the opportunities that digital media provide at very low cost.

Suggestions and Queries –

1. One of the first things that Pune Metro could produce is an animation of how a DPR (Detailed Project Report) is made. I don't think any of the people from the press, especially the English-language press, have seen the DPR; otherwise many of their questions would have been answered.

http://www.punemetrorail.org/download/PuneMetro-DPR.pdf

The simplest analogy I can give is this: say you want to build a hospital, but the land on which you have to build it belongs to 2-3 parties; how will you build it? Also, you don't have the money. The DPR differs only in the scale of things, and construction of the routes is done not by a single contractor but by multiple contractors. A route, say A – B, is divided into 5 parts, and companies/contractors are asked to submit tenders for the packets they are interested in.

The way I see it, the DPR has to figure out the right of way where the spans will be constructed, where the stations will be built, where electricity and water will come from, where the maintenance depot will be (usually the depot is at the end of the line), the casting yard for the spans/pillars, and so on.

There is a pre-qualification round so that only eligible bidders with a history of doing work at a similar scale take part, followed by bidding on who can do it at the lowest cost against a set reserve price. If no bidder comes forward, for instance because the reserve price looks too high from the contractors' point of view, the reserve price is lowered. The idea is simply to have price discovery by a method seen as just and fair.

The press seemed more interested in stirring up a tiff between the Pune Metro/MahaMetro chief and Guardian Minister Shri Girish Bapat over something which, to my mind, is a non-issue at this juncture.

Mr. Dixit was absolutely correct in saying that he can't comment on when the extension to Nigdi will happen until the DPR for that extension is made, land is found, and the various cost heads, expenses and funding are approved by the State and Central Governments, with funding arranged from multiple sources.

The other question raised by the press was about the razing of the BRTS in Pune. The press knew it is neither Mr. Dixit's place nor his responsibility to comment on whatever happens to the BRTS; he can't comment even if he wanted to, as that would come under the Pune Urban Transport Ministry.

As far as I understand Mr. Dixit's obligations, they are to build the Pune Metro safely and quickly, using good materials, to provide good signage, and to deliver an efficient public transit service that we Puneites can be proud of.

2. The site http://www.punemetrorail.org/ really needs an update and an upgrade. You should use something like WordPress, where you are able to change themes every now and then. Every 3-6 months the theme should be tweaked so the content remains, or at least looks, fresh.

3. There are no time-stamps on any of the videos. They should at the very least have time-stamps so that some sort of contextual information is available.

4. There is no way to know when there is news. News should be highlighted and more information shared. For instance, there should have been more info about this particular item –

MoU signed between Dr. Brijesh Dixit, MD MahaMetro, Mrs. Prerna Deshbhratar, Addl Municipal Commissioner(Spl), PMC and Mr Kong Wy Mun, CEO, Singapore Cooperation Enterprise for the Urban Management(Multi-Modal Transport) during a program at Yashada, Pune.

from http://www.punemetrorail.org/projectupdate.aspx

It would have been interesting to know what it means and how the Singapore Government would help us achieve a unified multi-modal transport model.

There were, and are, tons of questions that the press could have asked but didn't, as above and below.

5. The DPR was made in November 2015 and it is now 2018. The prices probably need to be adjusted for the changes over those 3 years, and will probably change again by 2021.

6. Similarly, there are time gaps between plans and the execution of plans, and we Puneites don't even know what the plan is.

I would urge Pune Metro to publish a dynamic map which shows, with blinking lights, the areas in which work is actively progressing. They could be a source of inspiration and a trail-blazer on this.

7. Similarly, another idea which could be done, or might even already be done, is to have a single photograph taken every day at, say, 1200 hrs at each site at 640×480 resolution, uploaded to the site and published onto a separate web page, which in turn, over days and weeks, could be turned into a time-lapse video similar to what was achieved for the first viaduct video, shot over a day or two –

If you want to make it more interesting and challenging, you could invite students from Symbiosis to build it on something like a Raspberry Pi 2/3 or some other SBC (Single Board Computer), with a camera lens, a solar cell, and a modem, with instructions to stagger the image uploads to the Pune Metro rail portal in case some web traffic is already there. A specific port (not port 80) could be used.

Later on, making a time-lapse video would be as simple as stitching all those photographs together and adding some nice music as filler, something which has already been done once for the viaduct video mentioned above.
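For what it's worth, the stitching step is nearly a one-liner with a tool like ffmpeg; the glob pattern, frame rate and output name below are invented for illustration:

# turn a directory of daily 640x480 stills into a time-lapse video
ffmpeg -framerate 24 -pattern_type glob -i 'stills/*.jpg' -c:v libx264 -pix_fmt yuv420p timelapse.mp4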

8. Tracking planned versus real-time progress – While Mr. Dixit has time and again assured us that things are progressing well, it would be much easier to trust this if there were a web service showing whether things are on schedule or a bit off. This overlaps a little with my earlier suggestion, but there are many development projects around the world which show tentative versus actual progress.

9. Apart from traffic diversion news in the major newspapers, it would be nice to also have a section, with blinkers or some similar highlight, about the road diversions currently in effect.

10. Another would be to have an RSS feed of all the news found by the various search-engine crawlers, with duplicate links removed, sharing the news and views for people to click through and learn for themselves.

11. Statistics on the jobs (both direct and indirect) created by the Pune Metro works, displayed prominently.

12. Have a glossary of terms, which could easily be compiled by having an average 10th-12th standard student go through, say, the DPR and note which terms give him trouble.

The simplest example is the word 'Reach', which is used in a different sense in Pune Metro than is usually understood.

13. Are there any Indian SMEs, and if so how many, that have been entrusted, whether via joint ventures or otherwise, with knowledge transfer for making and maintaining the rakes, cars/bogies, track, etc.?

14. Have any performance and load guarantees been asked of the various packet holders? If yes, what are the guarantees, and for what duration?

These are all low-hanging fruit. Also, I'm no web developer, although I am a bit of a content producer (as can be seen) and a voracious consumer of the web. I do have a few friends, though, should there be a requirement, who understand the medium in a far better and more intimate way than the crude manner I have shared above.

A student who believes democracy needs work, and needs effort, to make democracy work. If citizens themselves don't ask these questions, who will?

shirishag75 https://flossexperiences.wordpress.com #planet-debian – Experiences in the community
