Planet Debian

What is the best online dating site and the best way to use it?

Wed, 14/02/2018 - 6:25pm

Somebody recently shared this with me: this is what happens when you attempt to access Parship, an online dating site, from the anonymous Tor Browser.

Experian is basically a private spy agency. Their website boasts about how they can:

  • Know who your customers are regardless of channel or device
  • Know where and how to reach your customers with optimal messages
  • Create and deliver exceptional experiences every time

Is that third objective, an "exceptional experience", what you were hoping for with their dating site honey trap? You are out of luck: you are not the customer, you are the product.

When the Berlin wall came down, people were horrified at what they found in the archives of the Stasi. Don't companies like Experian and Facebook gather far more data than this?

So can you succeed with online dating?

There are only three strategies that are worth mentioning:

  • Access sites you can't trust (which includes all dating sites, whether free or paid for) using anonymous services like Tor Browser and anonymous email addresses. Use fake photos and fake all other data. Don't send your real phone number through the messaging or chat facility in any of these sites, because they can use that to match your anonymous account to a real identity: instead, get an extra SIM card that you pay for and top up with cash. One person told me they tried this for a month as an experiment, expediently cutting and pasting a message to each contact to arrange a meeting for coffee. At each date they would give the other person a card that apologized for the completely fake profile photos and offered to start over now that they could communicate beyond the prying eyes of the corporation.
  • Join online communities that are not primarily about dating and if a relationship comes naturally, it is a bonus.
  • If you really care about your future partner and don't want your photo to be a piece of bait used to exploit and oppress them, why not expand your real-world activities?
Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Packaging is hard. Packager-friendly is harder.

Wed, 14/02/2018 - 12:21pm

Releasing software is no small feat, especially in 2018. You could just upload your source code somewhere (a Git, Subversion, or CVS repo – or tarballs on SourceForge, or whatever), but it matters what that source looks like and how easy it is to consume. What does the required build environment look like? Are there any dependencies on other software, and if so, which versions? What if the versions don't match exactly?

Most languages feature solutions to the build-environment dependency problem – Ruby has Gems, Perl has CPAN, Java has Maven. You distribute a manifest with your source, detailing the versions of the dependencies which work, and users who download your source can just use those.
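As a concrete sketch of what such a manifest looks like, here is a minimal Ruby example created from the shell; the gem name and version constraint are hypothetical, chosen only for illustration:

cat > Gemfile <<'EOF'
source "https://rubygems.org"
gem "nokogiri", "~> 1.8"
EOF
bundle install    # resolves and installs the pinned dependency versions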

Then, however, we have distributions. If openSUSE or Debian wants to include your software, then it’s not just a case of calling into CPAN during the packaging process – distribution builds need to be repeatable, and work offline. And it’s not feasible for packagers to look after 30 versions of every library – generally a distribution will contain 1-3 versions of a given library, and all software in the distribution will be altered one way or another to build against their version of things. It’s a long, slow, arduous process.

Life is easier for distribution packagers the more the released software adheres to their ideal model – no non-source files in the distribution, minimal or well-formed dependencies on third parties, swathes of #ifdefs to handle changes in dependency APIs between versions, etc.

Problem is, this can actively work against upstream development.

Developers love npm or NuGet because it’s so easy to consume – asking them to abandon those tools is a significant impediment to developer flow. And it doesn’t scale – maybe a friendly upstream can drop one or two dependencies. But 10? 100? If you’re consuming a LOT of packages via the language package manager, as a developer, being told “stop doing that” isn’t just going to slow you down – it’s going to require a monumental engineering effort. And there’s the other side effect – moving from Yarn or Pip to a series of separate download/build/install steps will slow down CI significantly – and if your project takes hours to build as-is, slowing it down is not going to improve the project.

Therein lies the rub. When a project has limited developer time allocated to it, spending that time on an effort which will literally make development harder and worse, for the benefit of distribution maintainers, is a hard sell.

So, a concrete example: MonoDevelop. MD in Debian is pretty old. Why isn’t it newer? Well, because the build system moved away from a packager ideal so far it’s basically impossible at current community & company staffing levels to claw it back. Build-time dependency downloads went from a half dozen in the 5.x era (somewhat easily patched away in distributions) to over 110 today. The underlying build system changed from XBuild (Mono’s reimplementation of Microsoft MSBuild, a build system for Visual Studio projects) to real MSBuild (now FOSS, but an enormous shipping container of worms of its own when it comes to distribution-shippable releases, for all the same reasons & worse). It’s significant work for the MonoDevelop team to spend time on ensuring all their project files work on XBuild with Mono’s compiler, in addition to MSBuild with Microsoft’s compiler (and any mix thereof). It’s significant work to strip out the use of NuGet and Paket packages – especially when their primary OS, macOS, doesn’t have “distribution packages” to depend on.

And then there’s the integration testing problem. When a distribution starts messing with your dependencies, all your QA goes out the window – users are getting a combination of literally hundreds of pieces of software which might carry your app’s label, but you have no idea what the end result of that combination is. My usual anecdote here is when Ubuntu shipped Banshee built against a new, not-regression-tested version of SQLite, which caused a huge performance regression in random playback. When a distribution ships a broken version of an app with your name on it – broken by their actions, because you invested significant engineering resources in enabling them to do so – users won’t blame the distribution, they’ll blame you.

Releasing software is hard.

directhex https://apebox.org/wordpress debian – APEBOX.ORG

Using VLC to stream bittorrent sources

Wed, 14/02/2018 - 8:00am

A few days ago, a new major version of VLC was announced, and I decided to check out if it now supported streaming over bittorrent and webtorrent. Bittorrent is one of the most efficient ways to distribute large files on the Internet, and Webtorrent is a variant of Bittorrent using WebRTC as its transport channel, allowing web pages to stream and share files using the same technique. The network protocols are similar but not identical, so a client supporting one of them can not talk to a client supporting the other. I was a bit surprised by what I discovered when I started to look. Looking at the release notes did not help answer this question, so I started searching the web. I found several news articles from 2013, most of them tracing the news from Torrentfreak ("Open Source Giant VLC Mulls BitTorrent Streaming Support"), about an initiative to pay someone to create a VLC patch for bittorrent support. To figure out what happened with this initiative, I headed over to the #videolan IRC channel and asked if there were some bug or feature request tickets tracking such a feature. I got an answer from lead developer Jean-Baptiste Kempf, telling me that there was a patch but neither he nor anyone else knew where it was. So I searched a bit more, and came across an independent VLC plugin to add bittorrent support, created by Johan Gunnarsson in 2016/2017. Again according to Jean-Baptiste, this is not the patch he was talking about.

Anyway, to test the plugin, I made a working Debian package from the git repository, with some modifications. After installing this package, I could stream videos from The Internet Archive using VLC commands like this:

vlc https://archive.org/download/LoveNest/LoveNest_archive.torrent
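As an aside, a hedged sketch of how such a local package build might look, assuming the tree already carries debian/ packaging; the repository URL and binary package name are assumptions, not details from the post:

git clone https://github.com/johang/vlc-bittorrent.git
cd vlc-bittorrent
dpkg-buildpackage -us -uc -b    # unsigned, binary-only build
sudo dpkg -i ../vlc-plugin-bittorrent_*.deb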

The plugin is supposed to handle magnet links too, but since The Internet Archive does not have magnet links and I did not want to spend time tracking down another source, I have not tested it. It can take quite a while before the video starts playing, without any indication of what is going on from VLC. It took 10-20 seconds when I measured it. Sometimes the plugin seems unable to find the correct video file to play, and shows the metadata XML file name in the VLC status line. I have no idea why.

I have created a request for a new package in Debian (RFP) and asked if the upstream author is willing to help make this happen. Now we wait to see what comes out of this. I do not want to maintain a package that is not maintained upstream, nor do I really have time to maintain more packages myself, so I might leave it at this. But I really hope someone steps up to do the packaging, and I hope upstream is still maintaining the source. If you want to help, please update the RFP request or the upstream issue.

I have not found any traces of webtorrent support for VLC.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Petter Reinholdtsen http://people.skolelinux.org/pere/blog/ Petter Reinholdtsen - Entries tagged english

BH 1.66.0-1

Wed, 14/02/2018 - 2:37am

A new release of the BH package arrived on CRAN a little earlier: now at release 1.66.0-1. BH provides a sizeable portion of the Boost C++ libraries as a set of template headers for use by R, possibly with Rcpp as well as other packages.

This release upgrades the version of Boost to the recently released Boost 1.66.0, and also adds one exciting new library: Boost Compute, which provides a C++ interface to multi-core CPU and GPGPU computing platforms based on OpenCL.

Besides the usual small patches we need to make (i.e., we cannot call abort() etc. pp. to satisfy CRAN Policy), we made one significant new change in response to a relatively recent CRAN Policy change: compiler diagnostics are no longer suppressed for clang and g++. This may make builds somewhat noisy, so we all may want to keep our ~/.R/Makevars finely tuned to suppress a bunch of warnings...
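For instance, such tuning could look like the following minimal sketch, appended from the shell; the specific -Wno-* flags are illustrative assumptions, to be adapted to the warnings you actually see:

cat >> ~/.R/Makevars <<'EOF'
CFLAGS   += -Wno-unused-variable -Wno-unused-function
CXXFLAGS += -Wno-unused-variable -Wno-unused-function
EOF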

Changes in version 1.66.0-1 (2018-02-12)
  • Upgraded to Boost 1.66.0 (plus the few local tweaks)

  • Added Boost compute (as requested in #16)

Via CRANberries, there is a diffstat report relative to the previous release.

Comments and suggestions are welcome via the mailing list or the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Is it an upgrade, or a sidegrade?

Tue, 13/02/2018 - 8:43pm

I first bought a netbook shortly after the term was coined, in 2008. I got one of the original 8.9" Acer Aspire One. Around 2010, my Dell laptop was stolen, so the AAO ended up being my main computer at home — And my favorite computer for convenience, not just for when I needed to travel light. Back then, Regina used to work in a national park and had to cross her province (~6hr by a combination of buses) twice a week, so she had one as well. When she came to Mexico, she surely brought it along. Over the years, we bought new batteries and chargers, as they died over time...

Five years later, it started feeling too slow, and I remember starting to have keyboard issues. Time to change.

Sadly, 9" computers were no longer to be found. Even though I am a touch typist, and a big person, I miss several things about the Acer's tiny keyboard (such as being able to cover the diagonal with a single hand, something useful when you are typing while standing). But, anyway, I got the closest I could to it — In July 2013, I bought the successor to the Acer Aspire One: a 10.5" Acer Aspire One. Nowadays, the name that used to identify just the smallest of the Acer family brethren covers at least up to 15.6" (which is not exactly helpful IMO).

Anyway, for close to five years I was also very happy with it. A light laptop that didn't mean a burden to me. Also, very important: A computer I could take with me without ever thinking twice. I often tell people I use a computer I got at a supermarket, and that, bought as new, cost me under US$300. That way, were I to lose it (say, if it falls from my bike, if somebody steals it, if it gets in any way damaged, whatever), it's not a big blow. Quite a difference from my two former laptops, both over US$1000.

I enjoyed this computer a lot. So much, I ended up buying four of them (mine, Regina's, and two for her family members).

Over the last few months, I have started being nagged by unresponsiveness, mainly in the browser (blame me, as I typically keep ~40 tabs open). Some keyboard issues... I had started thinking about changing my trusty laptop. Would I want a newfangled laptop-and-tablet-in-one? Just thinking about fiddling with the OS to recognize stuff was sort of a turn-off...

This weekend we had an incident with spilled water. After opening it up and carefully ensuring it was dry, the computer would not turn on. I waited an hour or two, and no changes. A clear sign a new computer was needed ☹

I went to a nearby store, looked at the offers... And, in part due to the attitude of the salesguy, I decided not to (installing Linux will void any warranty, WTF‽ In 2018‽). Came back home, and... My Acer works again!

But, I know five years are enough. I decided to keep looking for a replacement. After some hesitation, I decided to join what seems to be the elite group in Debian, and go for a refurbished Thinkpad X230.

And that's why I feel this is some sort of "sidegrade" — I am replacing a five year old computer with another five year old computer. Of course, a much sturdier one, built to last, originally sold as an "Ultrabook" (that is, meant for a higher market segment), much more expandable... I'm paying ~US$250, which I'm comfortable with. Looking at several online forums, it is a model quite popular with "knowledgeable" people AFAICT, even now. I was hoping, just for the sake of it, to find an X230t (foldable and usable as a tablet)... But I won't put too much time into looking for it.

The Thinkpad is 12", which I expect will still fit in the smallish satchel I take to my classes. The machine looks as tweakable as I can expect. Spare parts for replacement are readily available. I have the 4GB I bought for the Acer, which I will probably be able to carry over to this machine, so I'm ready with 8GB. I'm eager to feel the keyboard, as it's often repeated it's the best in the laptop world (although it's not the classic one anymore). I'm considering popping ~US$100 more and buying an SSD drive, and... Well, let's see how much this new sidegrade makes me smile!

gwolf http://gwolf.org Gunnar Wolf

Our future relationship with FSFE

Thu, 01/02/2018 - 2:19pm

Below is an email that has been distributed to the FSFE community today. FSFE aims to be an open organization and people are welcome to discuss it through the main discussion group (join, thread and reply) whether you are a member or not.

For more information about joining FSFE, local groups, campaigns and other activities please visit the FSFE web site. The "No Cloud" stickers and the Public Money Public Code campaign are examples of initiatives started by FSFE - you can request free stickers and posters by filling in this form.

Dear FSFE Community,

I'm writing to you today as one of your elected fellowship representatives rather than to convey my own views, which you may have already encountered in my blog or mailing list discussions.

The recent meeting of the General Assembly (GA) decided that the annual elections will be abolished but this change has not yet been ratified in the constitution.

Personally, I support an overhaul of FSFE's democratic processes and the bulk of the reasons for this change are quite valid. One of the reasons proposed for the change, the suggestion that the election was a popularity contest, is an argument I don't agree with: the same argument could be used to abolish elections anywhere.

One point that came up in discussions about the elections is that people don't need to wait for the elections to be considered for GA membership. Matthias Kirschner, our president, has emphasized this to me personally as well: he looks at each new request with an open mind and forwards it to all of the GA for discussion. According to our constitution, anybody can write to the president at any time and request to join the GA. In practice, the president and the existing GA members will probably need to have seen some of your activities in one of the FSFE teams or local groups before accepting you as a member. I want to encourage people to become familiar with the GA membership process, discuss it within their teams and local groups, and think about whether you or anybody you know may be a good candidate.

According to the minutes of the last GA meeting, several new members were already accepted this way in the last year. It is particularly important for the organization to increase diversity in the GA at this time.

The response rate for the last fellowship election was lower than in previous years and there is also concern that emails don't reach everybody due to spam filters or the Google Promotions tab (if you use Gmail). If you had problems receiving emails about the last election, please consider sharing that feedback on the discussion list.

Understanding where the organization will go beyond the extinction of the fellowship representative is critical. The Identity review process, championed by Jonas Oberg and Kristi Progri, is actively looking at these questions. Please contact Kristi if you wish to participate and look out for updates about this process in emails and Planet FSFE. Kristi will be at FOSDEM this weekend if you want to speak to her personally.

I'll be at FOSDEM this weekend and would welcome the opportunity to meet with you personally. I will be visiting many different parts of FOSDEM at different times, including the FSFE booth, the Debian booth, the real-time lounge (K-building) and the Real-Time Communications (RTC) dev-room on Sunday, where I'm giving a talk. Many other members of the FSFE community will also be present, if you don't know where to start, simply come to the FSFE booth. The next European event I visit after FOSDEM will potentially be OSCAL in Tirana, it is in May and I would highly recommend this event for anybody who doesn't regularly travel to events outside their own region.

Changing the world begins with the change we make ourselves. If you only do one thing for free software this year and you are not sure what it is going to be, then I would recommend this: visit an event that you never visited before, in a city or country you never visited before. It doesn't necessarily have to be a free software or IT event. In 2017 I attended OSCAL in Tirana and the Digital-Born Media Carnival in Kotor for the first time. You can ask FSFE to send you some free stickers and posters (online request with optional donation) to give to the new friends you meet on your travels. Change starts with each of us doing something new or different and I hope our paths may cross in one of these places.

For more information about joining FSFE, local groups, campaigns and other activities please visit the FSFE web site.

Please feel free to discuss this through the FSFE discussion group (join, thread and reply)

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

FLOSS Activities January 2018

Thu, 01/02/2018 - 1:12am
Administration
  • Debian: try to regain OOB access to a host, try to connect with a hoster, restart bacula after db restart, provide some details to a hoster, add debsnap to snapshot host, debug external email issue, redirect users to support channels
  • Debian mentors: redirect to sponsors, teach someone about dput .upload files, check why a package disappeared
  • Debian wiki: unblacklist IP address, whitelist email addresses, whitelist email domain, investigate DocBook output crash
Communication
  • Initiate discussion about ingestion of more security issue feeds
  • Invite LinuxCNC to the Debian derivatives census
Sponsors

I renewed my support of Software Freedom Conservancy.

The Discord related uploads (harmony, librecaptcha, purple-discord) and the Debian fakeupstream change were sponsored by my employer. All other work was done on a volunteer basis.

Paul Wise http://bonedaddy.net/pabs3/log/ Log

Free software activities in January 2018

Wed, 31/01/2018 - 11:20pm

Here is my monthly update covering what I have been doing in the free software world in January 2018 (previous month):

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

I have been generously awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:



I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • New features:
    • Compare JSON files using the jsondiff module. (#888112)
    • Report differences in extended file attributes when comparing files. (#888401)
    • Show extended filesystem metadata when directly comparing two files not just when we specify two directories. (#888402)
    • Do some fuzzy parsing to detect JSON files not named .json. [...]
  • Bug fixes:
    • Return unknown if we can't parse the readelf version number for (e.g.) FreeBSD. (#886963)
    • If the LLVM disassembler does not work, try the internal one. (#886736)
  • Misc:
    • Explicitly depend on e2fsprogs. (#887180)
    • Clarify the "Unidentified file" log message, as we did try a lookup via the comparators first. [...]

I also fixed an issue in the "trydiffoscope" command-line client that was preventing installation on non-Debian systems (#888882).


disorderfs

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.

  • Correct "explicitly" typo in disorderfs.1.txt. [...]
  • Bump Standards-Version to 4.1.3. [...]
  • Drop trailing whitespace in debian/control. [...]


Debian

My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list.

In addition to this, I:

  • Published whydoesaptnotusehttps.com, an overview of why APT does not rely solely on SSL for validation of downloaded packages as I noticed it was being asked a lot on support forums.
  • Reported a number of issues for the mentors.debian.net review service.
Patches contributed
  • dput: Suggest --force if package has already been uploaded. (#886829)
  • linux: Add link to the Firmware page on the wiki to failed to load log messages. (#888405)
  • markdown: Make markdown exit with a non-zero exit code if cannot open input file. (#886032)
  • spectre-meltdown-checker: Return a sensible exit code. (#887077)
Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:

  • Initial draft of a script to automatically detect when CVEs should be assigned to multiple source packages in the case of legacy renames, duplicates or embedded code copies.
  • Issued DLA 1228-1 for the poppler PDF library to fix an overflow vulnerability.
  • Issued DLA 1229-1 for imagemagick correcting two potential denial-of-service attacks.
  • Issued DLA 1233-1 for gifsicle — a command-line tool for manipulating GIF images — to fix a use-after-free vulnerability.
  • Issued DLA 1234-1 to fix multiple integer overflows in the GTK gdk-pixbuf graphics library.
  • Issued DLA 1247-1 for rsync, fixing a command-injection vulnerability.
  • Issued DLA 1248-1 for libgd2 to prevent a potential infinite loop caused by signedness confusion.
  • Issued DLA 1249-1 for smarty3 fixing an arbitrary code execution vulnerability.
  • "Frontdesk" duties, triaging CVEs, etc.
Uploads
  • adminer (4.5.0-1) — New upstream release.
  • bfs (1.2-1) — New upstream release.
  • dbus-cpp (5.0.0+18.04.20171031-1) — Initial upload to Debian.
  • installation-birthday (7) — Add e2fsprogs to Depends so it can drop Essential: yes. (#887275)
  • process-cpp:
    • 3.0.1-1 — Initial upload to Debian.
    • 3.0.1-2 — Fix FTBFS due to symbol versioning.
  • python-django (1:1.11.9-1 & 2:2.0.1-1) — New upstream releases.
  • python-gflags (1.5.1-4) — Always use SOURCE_DATE_EPOCH from the environment.
  • redis:
    • 5:4.0.6-3 — Use --clients argument to runtest to force single-threaded operation over using taskset.
    • 5:4.0.6-4 — Re-add procps to Build-Depends. (#887075)
    • 5:4.0.6-5 — Fix a dangling symlink (and thus a broken package). (#884321)
    • 5:4.0.7-1 — New upstream release.
  • redisearch (1.0.3-1, 1.0.4-1 & 1.0.5-1) — New upstream releases.
  • trydiffoscope (67.0.0) — New upstream release.

I also sponsored the following uploads:

Debian bugs filed
  • gdebi: Invalid gnome-mime-application-x-deb icon in AppStream metadata. (#887056)
  • git-buildpackage: Please make gbp clone not quieten the output by default. (#886992)
  • git-buildpackage: Please word-wrap generated changelog lines. (#887055)
  • isort: Don't install test_isort.py to global Python namespace. (#887816)
  • restrictedpython: Please add Homepage. (#888759)
  • xcal: Missing patches due to 00List != 00list. (#888542)

I also filed 4 bugs against packages with missing patches due to incomplete quilt conversions: cernlib, geant321, mclibs & paw.

RC bugs
  • gnome-shell-extension-tilix-shortcut: Invalid date in debian/changelog. (#886950)
  • python-qrencode: Missing PIL dependencies due to use of Python 2 substvars in Python 3 package. (#887811)


I also filed 7 FTBFS bugs against lintian, netsniff-ng, node-coveralls, node-macaddress, node-timed-out, python-pyocr & sleepyhead.

FTP Team

As a Debian FTP assistant I ACCEPTed 173 packages: appmenu-gtk-module, atlas-cpp, canid, check-manifest, cider, citation-style-language-locales, citation-style-language-styles, cloudkitty, coreapi, coreschema, cypari2, dablin, dconf, debian-dad, deepin-icon-theme, dh-dlang, django-js-reverse, flask-security, fpylll, gcc-8, gcc-8-cross, gdbm, gitlint, gnome-tweaks, gnupg-pkcs11-scd, gnustep-back, golang-github-juju-ansiterm, golang-github-juju-httprequest, golang-github-juju-schema, golang-github-juju-testing, golang-github-juju-webbrowser, golang-github-posener-complete, golang-gopkg-juju-environschema.v1, golang-gopkg-macaroon-bakery.v2, golang-gopkg-macaroon.v2, harmony, hellfire, hoel, iem-plugin-suite, ignore-me, itypes, json-tricks, jstimezonedetect.js, libcdio, libfuture-asyncawait-perl, libgig, libjs-cssrelpreload, liblxi, libmail-box-imap4-perl, libmail-box-pop3-perl, libmail-message-perl, libmatekbd, libmoosex-traitfor-meta-class-betteranonclassnames-perl, libmoosex-util-perl, libpath-iter-perl, libplacebo, librecaptcha, libsyntax-keyword-try-perl, libt3highlight, libt3key, libt3widget, libtree-r-perl, liburcu, linux, mali-midgard-driver, mate-panel, memleax, movit, mpfr4, mstch, multitime, mwclient, network-manager-fortisslvpn, node-babel-preset-airbnb, node-babel-preset-env, node-boxen, node-browserslist, node-caniuse-lite, node-cli-boxes, node-clone-deep, node-d3-axis, node-d3-brush, node-d3-dsv, node-d3-force, node-d3-hierarchy, node-d3-request, node-d3-scale, node-d3-transition, node-d3-zoom, node-fbjs, node-fetch, node-grunt-webpack, node-gulp-flatten, node-gulp-rename, node-handlebars, node-ip, node-is-npm, node-isomorphic-fetch, node-js-beautify, node-js-cookie, node-jschardet, node-json-buffer, node-json3, node-latest-version, node-npm-bundled, node-plugin-error, node-postcss, node-postcss-value-parser, node-preact, node-prop-types, node-qw, node-sellside-emitter, node-stream-to-observable, node-strict-uri-encode, node-vue-template-compiler, ntl, olivetti-mode, org-mode-doc, otb, othman, papirus-icon-theme, pgq-node, php7.2, piu-piu, prometheus-sql-exporter, py-radix, pyparted, pytest-salt, pytest-tempdir, python-backports.tempfile, python-backports.weakref, python-certbot, python-certbot-apache, python-certbot-nginx, python-cloudkittyclient, python-josepy, python-jsondiff, python-magic, python-nose-random, python-pygerrit2, python-static3, r-cran-broom, r-cran-cli, r-cran-dbplyr, r-cran-devtools, r-cran-dt, r-cran-ggvis, r-cran-git2r, r-cran-pillar, r-cran-plotly, r-cran-psych, r-cran-rhandsontable, r-cran-rlist, r-cran-shinydashboard, r-cran-utf8, r-cran-whisker, r-cran-wordcloud, recoll, restrictedpython, rkt, rtklib, ruby-handlebars-assets, sasmodels, spectre-meltdown-checker, sphinx-gallery, stepic, tilde, togl, ums2net, vala-panel, vprerex, wafw00f & wireguard.

I additionally filed 4 RC bugs against packages that had incomplete debian/copyright files: fpylll, gnome-tweaks, org-mode-doc & py-radix.

Chris Lamb https://chris-lamb.co.uk/blog/category/planet-debian lamby: Items or syndication on Planet Debian.

Day three of the pre-FOSDEM Debconf Videoteam sprint

Wed, 31/01/2018 - 7:46pm

This should really have been the "day two" post, but I forgot to do that yesterday, and now it's the end of day three already, so let's just do the two together for now.

Kyle

Has been hacking on the opsis so we can get audio through it, but so far without much success. In addition, he's been working a bit more on documentation, as well as splitting up some data that's currently in our ansible repository into a separate one so that other people can use our ansible configuration more easily, without having to fork too much.

Tzafrir

Did some tests on the ansible setup, did some documentation work, and worked on a Kodi plugin for parsing the metadata that we've generated.

Stefano

Did some work on the DebConf website. This wasn't meant to be much, but yak shaving sucks. Additionally, he's been doing some work on the YouTube uploader.

Nattie

Did more work reviewing our documentation, and has been working on rewording some of the more awkward bits.

Wouter

Spent much time on improving the SReview installation for FOSDEM. While at it, fixed a number of bugs in some of the newer code that were exposed by full tests of the FOSDEM installation. Additionally, added code to SReview to generate metadata files that can be handed to Stefano's youtube uploader.

Pollo

Although he had less time yesterday than he did on Monday (and apparently no time today) to sprint remotely, Pollo still managed to add a basic CI infrastructure to lint our ansible playbooks.

Wouter Verhelst https://grep.be/blog//pd/ pd

Swatantra17

Wed, 31/01/2018 - 2:49pm

It's very late, but here it goes...

Last month Thiruvananthapuram witnessed one of the biggest Free and Open Source Software conferences, Swatantra17. Swatantra is the flagship FOSS conference from ICFOSS; it used to be triennial, but from now on the organizers have decided to hold it every 2 years. This year there were more than 30 speakers from all around the world. The event was held from 20-21 December at Mascot hotel, Thiruvananthapuram. I was one of the community volunteers for the event and was excited from the day it was announced :) .

Current Kerala Chief Minister Pinarayi Vijayan inaugurated Swatantra17. The first day's session started with a keynote from Software Freedom Conservancy executive director Karen Sandler. Karen talked about the safety of medical devices like defibrillators which run proprietary software. After that there were many parallel talks about various free software projects, technologies and tools. This edition of Swatantra focused more on art. It was good to know more about artists' free software stack. The most amazing thing is that throughout the conference I met so many people from FSCI whom I only knew through Matrix/IRC/emails.

The first day's talks ended at 6PM. After that the Oorali band performed for us. This band is well-known in Kerala because they speak up for many social and political issues, which makes them a great match for a free software conference's cultural program :). Their songs are mainly about birds, forests and freedom, and we danced to many of them.

On the last day's evening there was a kind of BoF from FSF person Benjamin Mako Hill. Halfway through I came to know that he is also a Debian Developer :D. Unfortunately this BoF stopped as he was called for a panel discussion. After the panel discussion all of us Debian people gathered and had a chat.

Abhijith PA http://abhijithpa.me/ Abhijith PA

Migrating the debichem group subversion repository to Git - Part 1: svn-all-fast-export basics

Wed, 31/01/2018 - 1:24pm

With the deprecation of alioth.debian.org, the subversion service hosted there will be shut down too. According to lintian, the estimated date is May 1st 2018, and there are currently more than 1500 source packages affected. In the debichem group we've used the subversion service since 2006. Our repository contains around 7500 commits done by around 20 different alioth user accounts and the packaging history of around 70 to 80 packages, including packaging attempts. I've spent the last few days preparing the Git migration, comparing different tools, checking the created repositories and testing possibilities to automate the process as much as possible. The resulting scripts can currently be found here.

Of course I began as described at the Debian Wiki. But following this guide, using git-svn and converting the tags with the script supplied under the rubric Convert remote tags and branches to local ones gave me really weird results. The tags were pointing to the wrong commit-IDs. I thought that git-svn was to blame and reported this as bug #887881. In the following mail exchange Andreas Kaesorg explained to me that the issue is caused by so-called mixed-revision-tags in our repository, as shown in the following example:


$ svn log -v -r7405
------------------------------------------------------------------------
r7405 | dleidert | 2018-01-17 18:14:57 +0100 (Mi, 17. Jan 2018) | 1 Zeile
Geänderte Pfade:
A /tags/shelxle/1.0.888-1 (von /unstable/shelxle:7396)
R /tags/shelxle/1.0.888-1/debian/changelog (von /unstable/shelxle/debian/changelog:7404)
R /tags/shelxle/1.0.888-1/debian/control (von /unstable/shelxle/debian/control:7403)
D /tags/shelxle/1.0.888-1/debian/patches/qt5.patch
R /tags/shelxle/1.0.888-1/debian/patches/series (von /unstable/shelxle/debian/patches/series:7402)
R /tags/shelxle/1.0.888-1/debian/rules (von /unstable/shelxle/debian/rules:7403)

[svn-buildpackage] Tagging shelxle 1.0.888-1
------------------------------------------------------------------------

Looking into the git log, the tags determined by git-svn are really not in their right place in the history line, even before running the script to convert the branches into real Git tags. So IMHO git-svn is not able to cope with this kind of situation. Because it also cannot handle our branch model, where we use /branch/package/, I began to look for different tools and found svn-all-fast-export, a tool created (by KDE?) to convert even large subversion repositories based on a ruleset. My attempt using this tool was so successful (not to speak of how fast it is) that I want to describe it more. Maybe it will prove to be useful for others as well, and it won't hurt to give some more information about this poorly documented tool :)

Step 1: Setting up a local subversion mirror

First I suggest setting up a local copy of the subversion repository to migrate, kept in sync with the remote repository. This can be achieved using the svnsync command. There are several howtos for this, so I won't describe this step in detail here. Please check out this guide. In my case I have such a copy in /srv/svn/debichem.
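For completeness, a minimal sketch of such a mirror setup, with SVN_URL as a placeholder for the remote repository URL:

svnadmin create /srv/svn/debichem
# svnsync must be allowed to change revision properties on the mirror
echo '#!/bin/sh' > /srv/svn/debichem/hooks/pre-revprop-change
chmod +x /srv/svn/debichem/hooks/pre-revprop-change
svnsync init file:///srv/svn/debichem SVN_URL
svnsync sync file:///srv/svn/debichem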

Step 2: Creating the identity map

svn-all-fast-export needs at least two files to work. One is the so called identity map. This file contains the mapping between subversion user IDs (login names) and the (Git) committer info, like real name and mail address. The format is the same as used by git-svn:

loginname = author name <mail address>

e.g.

dleidert = Daniel Leidert <dleidert@debian.org>

The list of subversion user IDs can be obtained the same way as described in the Wiki:

svn log SVN_URL | awk -F'|' '/^r[0-9]+/ { print $2 }' | sort -u

Just replace the placeholder SVN_URL with your subversion URL. Here is the complete file for the debichem group.

Step 3: Creating the rules

The most important thing is the second file, which contains the processing rules. There is really not much documentation out there, so when in doubt, one has to read the source file src/ruleparser.cpp. I'll describe what I have already found out. If you are impatient, here is my result so far.

The basic rules are:


create repository REPOSITORY
...
end repository

and


match PATTERN
...
end match

The first rule creates a bare git repository with the name you've chosen (above represented by REPOSITORY). It can have one child, which is the repository description to be put into the repository's description file. There are AFAIK no other elements allowed here. So in case of e.g. ShelXle the rule might look like this:


create repository shelxle
description packaging of ShelXle, a graphical user interface for SHELXL
end repository

You'll have to create every repository before you can put something into it, else svn-all-fast-export will exit with an error. JFTR: It won't complain if you create a repository but don't put anything into it. You will just end up with an empty Git repository.

Now the second type of rule is the most important one. Based on regular expression match patterns (above represented by PATTERN), one can define actions, including the possibility to limit these actions to repositories, branches and revisions. The patterns are applied in their order of appearance. Thus, if a matching pattern is found, other patterns that also match but appear later in the rules file won't apply! So a special rule should always be put above a general rule. The patterns that can be used seem to be of type QRegExp and look like basic Perl regular expressions, including e.g. capturing, backreferences and lookahead capabilities. For a multi-package subversion repository with standard layout (that is /PACKAGE/{trunk,tags,branches}/), clean naming and subversion history, the rules could be:


match /([^/]+)/trunk/
repository \1
branch master
end match

match /([^/]+)/tags/([^/]+)/
repository \1
branch refs/tags/debian/\2
annotated true
end match

match /([^/]+)/branches/([^/]+)/
repository \1
branch \2
end match

The first rule captures the (source) package name from the path and puts it into the backreference \1. It applies to the trunk directory history and will put everything it finds there into the repository named after the directory - here we simply use the backreference \1 for that name - and there into the master branch. Note that svn-all-fast-export will error out if it tries to access a repository which has not been created, so make sure all repositories are created as shown with the create repository rule. The second rule captures the (source) package name from the path too and puts it into the backreference \1. But in backreference \2 it further captures (and applies to) all the tag directories under the /tags/ directory. Usually these have a Debian package version as their name. With the branch statement as shown in this rule, the tags, which are really just branches in subversion, are automatically converted to annotated Git tags (another advantage of svn-all-fast-export over git-svn). Without enabling the annotated statement, the created tags will be lightweight tags. So the tag name (here: debian/VERSION) is determined via backreference \2. The third rule is almost the same, except that everything found in the matching path will be pushed into a Git branch named after the top-level directory captured from the subversion path.

Now in an ideal world, this might be enough and the actual conversion can be done. The command should only be executed in an empty directory. I'll assume that the identity map is called authors.txt and the rules file is called debichem.rules, and that both are in the parent directory. I'll also assume that the local subversion mirror of the packaging repository is at /srv/svn/mymirror. So ...

svn-all-fast-export --stats --identity-map=../authors.txt --rules=../debichem.rules /srv/svn/mymirror

... will create one or more bare Git repositories (depending on your rules file) in the current directory. After the command succeeded, you can test the results ...


git -C REPOSITORY/ --bare show-ref
git -C REPOSITORY/ --bare log --all --graph

... and you will find your repository's description (if you added one to the rules file) in REPOSITORY/description:

cat REPOSITORY/description

Please note that not all Debian version strings are well-formed Git reference names and therefore need fixing. There might also be gaps shown in the Git history log. Or maybe the command didn't even succeed or complained (without you noticing it), or you ended up with an empty repository although the matching rules applied. I encountered all of these issues and I'll describe the causes and fixes in the next blog article.

But if everything went well (you have no history gaps, the tags are in their right place within the linearized history and the repository looks fine) and you can and want to proceed, you might want to skip to the next step.

In the debichem group we used a different layout. The packaging directories were under /{unstable,experimental,wheezy,lenny,non-free}/PACKAGE/. This translates to /unstable/PACKAGE/ and /non-free/PACKAGE/ being the trunk directories and the others being the branches. The tags are in /tags/PACKAGE/. And packages that are yet to be uploaded are located in /wnpp/PACKAGE/. With this layout, the basic rules are:


# trunk handling
# e.g. /unstable/espresso/
# e.g. /non-free/molden/
match /(?:unstable|non-free)/([^/]+)/
repository \1
branch master
end match

# handling wnpp
# e.g. /wnpp/osra/
match /(wnpp)/([^/]+)/
repository \2
branch \1
end match

# branch handling
# e.g. /wheezy/espresso/
match /(lenny|wheezy|experimental)/([^/]+)/
repository \2
branch \1
end match

# tags handling
# e.g. /tags/espresso/VERSION/
match /tags/([^/]+)/([^/]+)/
repository \1
annotated true
branch refs/tags/debian/\2
substitute branch s/~/_/
substitute branch s/:/_/
end match

In the first rule, there is a non-capturing expression (?: ... ), which simply means that the rule applies to /unstable/ and /non-free/. Thus the backreference \1 refers to the second part of the path, the package directory name. The contents found are pushed to the master branch. In the second rule, the contents from the wnpp directory are not pushed to master, but instead to a branch called wnpp. This was necessary because of overlaps between the /unstable/ and /wnpp/ history, and it already shows that the repository's history makes things complicated. In the third rule, the first backreference \1 determines the branch (note the capturing expression in contrast to the first rule) and the second backreference \2 the package repository to act on. The last rule is similar, but now \1 determines the package repository and \2 the tag name (debian package version) based on the matching path. The example also shows another issue, which I'd like to explain more in the next article: some characters we use in debian package versions, e.g. the tilde sign and the colon, are not allowed within Git tag names and must therefore be substituted, which is done by the substitute branch EXPRESSION instructions.

Step 4: Cleaning the bare repository

The tool documentation suggests running ...

git -C REPOSITORY/ repack -a -d -f

... before you upload this bare repository to another location. But Stuart Prescott told me on the debichem list that this might not be enough and might still leave some garbage behind. I'm not experienced enough to judge here, but his suggestion is to clone the repository, either as a bare clone, or to clone and init a new bare one. I used the first approach:


git clone --bare REPOSITORY/ REPOSITORY.git
git -C REPOSITORY.git/ repack -a -d -f

Please note that this won't copy the repository's description file. You'll have to copy it manually if you want to keep it. The resulting bare repository can be uploaded (e.g. to git.debian.org as a personal repository):


cp REPOSITORY/description REPOSITORY.git/description
touch REPOSITORY.git/git-daemon-export-ok
rsync -avz REPOSITORY.git git.debian.org:~/public_git/

Or you clone the repository, add a remote origin and push everything there. It is even possible to use the GitLab API at salsa.debian.org to create a project and push there. I'll save the latter for another post. If you are hasty, you'll find a script here.
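A minimal sketch of the push-based route alluded to above; USER, REPOSITORY and TOKEN are placeholders, and the API call assumes a valid private token:

# create the project via the GitLab v4 API, then mirror everything to it
curl --header "PRIVATE-TOKEN: TOKEN" -X POST "https://salsa.debian.org/api/v4/projects?name=REPOSITORY"
git -C REPOSITORY.git/ push --mirror git@salsa.debian.org:USER/REPOSITORY.git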

Daniel Leidert noreply@blogger.com [erfahrungen, meinungen, halluzinationen]

An old DOS BBS in a Docker container

Wed, 31/01/2018 - 12:32pm

A while back, I wrote about my Debian Docker base images. I decided to extend this concept a bit further: to running DOS applications in Docker.

But first, a screenshot:

It turns out this is possible, but difficult. I went through all three major DOS emulators available (dosbox, qemu, and dosemu). I got them all running inside the Docker container, but had a number of, er, fun issues to resolve.

The general thing one has to do here is present a fake modem to the DOS environment. This needs to be exposed outside the container as a TCP port. That much is possible in various ways — I wound up using tcpser. dosbox had a TCP modem interface, but it turned out to be too buggy for this purpose.

The challenge comes in where you want to be able to accept more than one incoming telnet (or TCP) connection at a time. DOS was not a multitasking operating system, so there were any number of hackish solutions back then. One might have had multiple physical computers, one for each incoming phone line. Or they might have run multiple pseudo-DOS instances under a multitasking layer like DESQview, OS/2, or even Windows 3.1.

(Side note: I just learned of DESQview/X, which integrated DESQview with X11R5 and replaced the Windows 3 drivers to allow running Windows as an X application).

For various reasons, I didn’t want to try running one of those systems inside Docker. That left me with emulating the original multiple physical node setup. In theory, pretty easy — spin up a bunch of DOS boxes, each using at most 1MB of emulated RAM, and go to town. But here came the challenge.

In a multiple-physical-node setup, you need some sort of file sharing, because your nodes have to access the shared message and file store. There were a myriad of clunky ways to do this in the old DOS days – Netware, LAN manager, even some PC NFS clients. I didn’t have access to Netware. I tried the Microsoft LM client in DOS, talking to a Samba server running inside the Docker container. This I got working, but the LM client used so much RAM that, even with various high memory tricks, BBS software wasn’t going to run. I couldn’t just mount an underlying filesystem in multiple dosbox instances either, because dosbox did caching that wasn’t going to be compatible.

This is why I wound up using dosemu. Besides being a more complete emulator than dosbox, it had a way of sharing the host’s filesystems that was going to work.

So, all of this wound up with this: jgoerzen/docker-bbs-renegade.
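A hypothetical invocation could look like this; the image's exposed port and the host port mapping are assumptions, so check the repository's documentation:

docker run -d -p 2323:23 jgoerzen/docker-bbs-renegade    # map host port 2323 to the container's telnet port
telnet localhost 2323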

I also prepared building blocks for others that want to do something similar: docker-dos-bbs and the lower-level docker-dosemu.

As a side bonus, I also attempted running this under Joyent’s Triton (SmartOS, Solaris-based). I was pleasantly impressed that I got it all almost working there. So yes, a Renegade DOS BBS running under a Linux-based DOS emulator in a container on a Solaris machine.

John Goerzen http://changelog.complete.org The Changelog

Review: My Grandmother Asked Me to Tell You She's Sorry

Wed, 31/01/2018 - 5:19am

Review: My Grandmother Asked Me to Tell You She's Sorry, by Fredrik Backman

Series: Britt-Marie #1
Translator: Henning Koch
Publisher: Washington Square
Copyright: 2014
Printing: April 2016
ISBN: 1-5011-1507-3
Format: Trade paperback
Pages: 372

Elsa is seven, going on eight. She's not very good at it; she knows she's different and annoying, which is why she gets chased and bullied constantly at school and why her only friend is her grandmother. But Granny is a superhero, who's also very bad at being old. Her superpowers are lifesaving and driving people nuts. She made a career of being a doctor in crisis zones; now she makes a second career of, well, this sort of thing:

Or that time she made a snowman in Britt-Marie and Kent's garden right under their balcony and dressed it up in grown-up clothes so it looked as if a person had fallen from the roof. Or that time those prim men wearing spectacles started ringing all the doorbells and wanted to talk about God and Jesus and heaven, and Granny stood on her balcony with her dressing gown flapping open, shooting at them with her paintball gun

The other thing Granny is good at is telling fairy tales. She's been telling Elsa fairy tales since she was small and her mom and dad had just gotten divorced and Elsa was having trouble sleeping. The fairy tales are all about Miamas and the other kingdoms of the Land-of-Almost-Awake, where the fearsome War-Without-End was fought against the shadows. Miamas is the land from which all fairy tales come, and Granny has endless stories from there, featuring princesses and knights, sorrows and victories, and kingdoms like Miploris where all the sorrows are stored.

Granny and Miamas and the Land-of-Almost-Awake make Elsa's life not too bad, even though she has no other friends and she's chased at school. But then Granny dies, right after giving Elsa one final quest, her greatest quest. It starts with a letter and a key, addressed to the Monster who lives downstairs. (Elsa calls him that because he's a huge man who only seems to come out at night.) And Granny's words:

"Promise you won't hate me when you find out who I've been. And promise me you'll protect the castle. Protect your friends."

My Grandmother Asked Me to Tell You She's Sorry is written in third person, but it's close third person focused on Elsa and her perspective on the world. She's a precocious seven-year-old who I thought was nearly perfect (rare praise for me for children in books), which probably means some folks will think she's a little too precocious. But she has a wonderful voice, a combination of creative imagination, thoughtfulness, and good taste in literature (particularly Harry Potter and Marvel Comics). The book is all about what it's like to be seven, going on eight, with a complicated family situation and an awful time at school, but enough strong emotional support from her family that she's still full of stubbornness, curiosity, and fire.

Her grandmother's quest gets her to meet the other residents of the apartment building she lives in, turning them into more than the backdrop of her life. That, in turn, adds new depth to the fairy tales her Granny told her. Their events turn out to not be pure fabrication. They were about people, the many people in her Granny's life, reshaped by Granny's wild imagination and seen through the lens of a child. They leave Elsa surprisingly well-equipped to navigate and start to untangle the complicated relationships surrounding her.

This is where Backman pulls off the triumph of this book. Elsa's discoveries that her childhood fairy tales are about the people around her, people with a long history with her grandmother, could have been disillusioning. This could be the story of magic fading into reality and thereby losing its luster. And at first Elsa is quite angry that other people have this deep connection to things she thought were hers, shared with her favorite person. But Backman perfectly walks that line, letting Elsa keep her imaginative view of the world while intelligently mapping her new discoveries onto it. The Miamas framework withstands serious weight in this story because Elsa is flexible, thoughtful, and knows how to hold on to the pieces of her story that carry deeper truth. She sees the people around her more clearly than anyone else because she has a deep grasp of her grandmother's highly perceptive, if chaotic, wisdom, baked into all the stories she grew up with.

This book starts out extremely funny, turns heartwarming and touching, and develops real suspense by the end. It starts out as Elsa nearly alone against the world and ends with a complicated matrix of friends and family, some of whom were always supporting each other beneath Elsa's notice and some of whom are re-learning the knack. It's a beautiful story, and for the second half of the book I could barely put it down.

I am, as a side note, once again struck by the subtle difference in stories from cultures with a functional safety net. I caught my American brain puzzling through ways that some of the people in this book could still be alive and living in this apartment building since they don't seem capable of holding down jobs, before realizing this story is not set in a brutal Hobbesian jungle of all against all like the United States. The existence of this safety net plays no significant role in this book apart from putting a floor under how far people can fall, and yet it makes all the difference in the world and in some ways makes Backman's plot possible. Perhaps publishers should market Swedish literary novels as utopian science fiction in the US.

This is great stuff. The back and forth between fairy tales and Elsa's resilient and slightly sarcastic life can take a bit to get used to, but stick with it. All the details of the fairy tales matter, and are tied back together wonderfully by the end of the book. Highly recommended. In its own way, this is fully as good as A Man Called Ove.

There is a subsequent book, Britt-Marie Was Here, that follows one of the supporting characters of this novel, but My Grandmother Asked Me to Tell You She's Sorry stands alone and reaches a very satisfying conclusion (including for that character).

Rating: 10 out of 10

Russ Allbery https://www.eyrie.org/~eagle/ Eagle's Path

logo.png for default avatar for GitLab repos

Mër, 31/01/2018 - 4:13pd

Debian and GNOME have both recently adopted self-hosted GitLab for their git hosting. GNOME’s service is named simply https://gitlab.gnome.org/ ; Debian’s has the more intriguing name https://salsa.debian.org/ . If you ask the Salsa sysadmins, they’ll explain that they were in a Mexican restaurant when they needed to decide on a name!

There’s a useful, under-documented feature I found. If you place a logo.png in the root of your repository, it will automatically be used as the default “avatar” for your project (in other words, the logo that shows up on the web page next to your project).

I added a logo.png to GNOME Tweaks at GNOME and it automatically showed up in Salsa when I imported the new version.
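For example, adding the avatar is just an ordinary commit; a minimal sketch (the icon source path here is hypothetical, use whatever your project actually ships):

cp data/icons/hicolor/256x256/apps/myapp.png logo.png    # hypothetical source path
git add logo.png
git commit -m "Add logo.png to be used as the project avatar"
git push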

Other Notes

I first tried with a symlink to my app icon, but it didn’t work. I had to actually copy the icon.

The logo.png convention doesn’t seem to be supported on GitHub currently.

Jeremy Bicha https://jeremy.bicha.net Debian – Just Jeremy

Fair communication requires mutual consent

Mar, 30/01/2018 - 9:33md

I was pleased to read Shirish Agarwal's blog in reply to the blog I posted last week, Do the little things matter?

Given the militaristic theme of my own post, I was also somewhat amused to see news this week of the Strava app leaking the locations and layouts of secret US military facilities like Area 51. What a way to mark International Data Privacy Day. Maybe, rather than inadvertently leading people to wonder whether I was suggesting that Gmail users don't make their beds, I should have emphasized that Admiral McRaven's boot-camp regime for Navy SEALs needs to incorporate some of my suggestions about data privacy?

A highlight of Agarwal's blog is his comment, "I usually wait for a day or more when I feel myself getting inflamed/heated", and I wish this had occurred in some of the other places where my ideas were discussed. Even though my ideas are sometimes provocative, I would kindly ask people to keep point 2 of the Debian Code of Conduct in mind: "Assume good faith."

One thing that became clear to me after reading Agarwal's blog is that some people saw my example one-line change to Postfix's configuration as a suggestion that people need to run their own mail server. In fact, I had seen such comments before, but I hadn't realized why people were concluding that I expect everybody to run a mail server. The purpose of that line was simply to emphasize the content of the proposed bounce message, and to help people understand that the receiver of an email may never have agreed to Google's non-privacy policy; if you do use Gmail, you impose that surveillance regime not just on yourself but on anyone you send a message to from a Gmail account.
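For the curious, here is a minimal sketch of what such a setup could look like, using Postfix's standard check_sender_access mechanism (the restriction and the lookup table are real Postfix features; the exact bounce text and file paths are illustrative assumptions, not a recommendation):

# /etc/postfix/main.cf (sketch):
smtpd_sender_restrictions = check_sender_access hash:/etc/postfix/sender_access

# /etc/postfix/sender_access (sketch; the bounce text is purely illustrative):
gmail.com REJECT I never agreed to Google's privacy policy; please contact me from another provider

# rebuild the lookup table after editing it:
postmap /etc/postfix/sender_access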

Communication requires mutual agreement about the medium. Think about it another way: if you go to a meeting with your doctor and some stranger in a foreign military uniform is in the room, you might choose to leave and find another doctor rather than communicate under surveillance.

As it turns out, many people are using alternative email services, even if they only want a web interface. There is already a feature request discussion in ProtonMail about letting users choose to opt out of receiving messages monitored by Google and send back the bounce message suggested in my blog. Would you like to have that choice, even if you didn't use it immediately? You can vote for that issue or leave your own feedback comments there too.

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Imagine the world's biggest Kanban / Scrumboard

Mar, 30/01/2018 - 7:52md

Imagine a Kanban board that could aggregate issues from multiple backends, including your CalDAV task list, Bugzilla systems (such as those of the Fedora, Mozilla and GNOME communities), GitHub issue lists and the Debian Bug Tracking System, visualize them together, and let you coordinate your upstream fixes and packaging fixes in a single sprint.

It is not so far-fetched: all of those systems already provide read access using iCalendar URLs, as described in my earlier blog, and there are REST APIs to manipulate most of them too. Why not write a front end to poll them and merge the content into a Kanban board view?
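As a toy illustration of the aggregation idea, the following shell sketch polls two iCalendar feeds and lists their task summaries (the URLs are placeholders; a real front end would parse the feeds properly rather than grepping them):

for url in \
    'https://bugzilla.example.org/my-bugs.ics' \
    'https://caldav.example.org/user/tasks.ics'
do
  curl -s "$url"    # placeholder URLs; any iCalendar export would work
done | grep '^SUMMARY:' | sed 's/^SUMMARY://' | sort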

We've added this as a potential GSoC project using Python and PyQt.

If you'd like to see this or any of the other proposed projects go ahead, you don't need to be a Debian Developer to suggest ideas, refer a student or be a co-mentor. Many of our projects have relevance in multiple communities. Feel free to get in touch with us through the debian-outreach mailing list.

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Reproducible Builds: Weekly report #144

Mar, 30/01/2018 - 7:05md

Here's what happened in the Reproducible Builds effort between Sunday January 21 and Saturday January 27 2018:

Media coverage

Development and fixes in key packages
  • Mattia uploaded dpkg (1.19.0.5.0~reproducible1) to our experimental toolchain.

  • cpython-3.7 now has .pyc files without timestamps. Most of the work is happening in PEP 552, but older Python versions will probably still need variants of the mtime patch because the new .pyc format is not backward-compatible.

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

35 package reviews have been added, 37 have been updated and 91 have been removed this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (24)
  • Niels Thykier (8)
diffoscope development

reproducible-website development

jenkins.debian.net development

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible.alioth.debian.org/blog/ Reproducible builds blog

2018 and CD burning still painful

Mar, 30/01/2018 - 4:27md

Ok, we are in 2018, and for the first time in ages I wanted to burn an audio CD, and dared to think about CD_TEXT. You know what: it is hard, as in impossible, for a normal user. And that in 2018. Debian, you could try to do better.

Before I start, I should make clear that this is a newly installed system, less than a few months old, and that I, as a Debian Developer, am not completely new to system administration. Furthermore, the user trying to burn is a member of the cdrom group.

The problem with most GUI frontends is that they rely on wodim, a member of the cdrkit family. And wodim itself simply doesn’t work:

$ wodim -dummy -v speed=16 dev=/dev/sr0 -audio track*
wodim: No write mode specified.
wodim: Assuming -tao mode.
wodim: Future versions of wodim may have different drive dependent defaults.
TOC Type: 0 = CD-DA
wodim: Operation not permitted. Warning: Cannot raise RLIMIT_MEMLOCK limits.
wodim: Resource temporarily unavailable. Cannot get mmap for 12587008 Bytes on /dev/zero.
$

Yes, I know, the easy solution is to make wodim setuid root, but this is not what I want. Unfortunately cdrecord, the parent of wodim (and, in contrast to cdrkit/wodim, still in active development), works, but only because it is setuid root after a standard installation.
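(For what it's worth, the RLIMIT_MEMLOCK complaint above hints at one possible non-setuid workaround that I have not verified: raising the memlock limit for the cdrom group via pam_limits. None of the GUI front-ends will set this up for you, of course.)

# /etc/security/limits.conf (unverified sketch):
@cdrom  soft  memlock  unlimited
@cdrom  hard  memlock  unlimited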

That is all complicated by the fact that the main front-ends are, well, broken:

  • K3b: feels completely broken: it cannot open its own saved project files, the .inf files generated for an audio CD project are completely broken and void of any content, and it hangs regularly without responding
  • Nautilus DVD/CD burning: incapable of burning audio CDs; it offers only data CDs
  • Brasero: terminates with “ejecting disc” and “An unknown error occurred”; the log file shows that, again, wodim is the culprit, but Brasero could be a bit more helpful! Additional minus point: no CDDB interface.

The only exception I found was Xfburn, which managed to burn the CD without any hitch or problem. Wow! On the other hand, it doesn’t support CD_TEXT, which is also not optimal.

A solution for burning CD_TEXT

So in case you really want to burn CD_TEXT, there is at the moment, as far as I can see, only one option, and that is the command line using cdrdao. Thanks to this excellent article I managed to burn using the following command (as a normal user, not root, nothing special):

cdrdao write --device /dev/sr0 --driver generic-mmc:0x10 -v 2 -n --eject mycd.toc

The format of the .toc file is a bit complicated but documented, see the linked article.
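For a rough idea, a minimal .toc file with CD_TEXT might look like this (a sketch following the documented format; album, track and file names are placeholders):

CD_DA
CD_TEXT {
  LANGUAGE_MAP { 0 : EN }
  LANGUAGE 0 {
    TITLE "Album title"
    PERFORMER "Album artist"
  }
}

TRACK AUDIO
CD_TEXT {
  LANGUAGE 0 {
    TITLE "Track title"
    PERFORMER "Track artist"
  }
}
FILE "track01.wav" 0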

All in all, a very depressing situation, I have to say, especially in 2018 …

Norbert Preining https://www.preining.info/blog There and back again

Exploring minimax polynomials with Sollya

Sht, 27/01/2018 - 11:18pd

Following Fabian Giesen's advice, I took a look at Sollya. I'm not really that much into numerics (and Sollya, like the other stuff that comes out of the same group, is written by hardcore numerics nerds), but approximation is often useful.

A simple example: When converting linear light values to sRGB, you need to be able to compute the formula f(x) = ((x + ɑ - 1) / ɑ)^ɣ for a given (non-integer) ɑ and ɣ. (Movit frequently needs this. For the specific case of sRGB, GPUs often have hard-coded lookup tables, but they are not always applicable, for instance if the data comes from Y'CbCr.) However, even after simplifications, the exponentiation is rather expensive to run for every pixel, so we'd like some sort of approximation.

If you've done any calculus, you may have heard of Taylor series, which look at the derivatives at a certain point and create a polynomial from them. Perhaps the most famous is arctan(x) = x - 1/3 x³ + 1/5 x⁵ - 1/7 x⁷ + ..., which gives rise to a simple formula for approximating pi if you set x=1 (since arctan(1) = pi/4). However, for practical approximation, Taylor series are fairly useless; they're accurate near the origin point of the expansion, but don't care at all about what happens far from it. Minimax polynomials are better; they minimize the maximum error over the range of interest.
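To give a flavour of what this looks like in Sollya, here is a minimal sketch for the sRGB-style curve above (the constants ɑ = 1.055 and ɣ = 2.4, the degree and the interval are assumptions for illustration only):

// degree-4 minimax approximation of an sRGB-style transfer curve
f = ((x + 0.055)/1.055)^2.4;
p = remez(f, 4, [0.05;1]);
print(p);                              // the minimax polynomial
print(dirtyinfnorm(p - f, [0.05;1]));  // maximum absolute error on the interval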

In the past, I've been using Maple for this (I never liked Mathematica much); it's non-free, but not particularly expensive for a personal license, and it can do pretty much everything I expect from a computer algebra system. However, it would be interesting to see if Sollya could do better. After toying around a bit, it seems there are pros and cons:

  • Sollya appears to be faster. I haven't made any formal benchmarks, but I just feel like I have to wait a lot less for it.
  • I find Sollya's syntax maybe a bit more obscure (e.g., [| to start a list), although this is probably partially personal preference. Its syntax error handling is also a lot less friendly.
  • Sollya appears to be a lot more robust in terms of actually terminating with a working result. E.g., Maple just fails at optimizing sqrt(x) over 0..1 (a surprisingly hard case), whereas I haven't really been able to make Sollya fail yet except on malformed problems (e.g., asking to optimize the relative error of a function that is zero at certain points). Granted, I haven't pushed it that hard.
  • Maple supports a much wider range of functions. This is a killer for me; I frequently need something as simple as piecewise functions, and Sollya simply doesn't appear to support them.
  • Maple supports rational expansions, i.e., two polynomials divided by each other (which can often increase accuracy dramatically, although the execution cost also balloons, of course). Sollya doesn't. On the other hand, Sollya supports expansion over given base functions, e.g., if you happen to have sin(x) computed for whatever obscure reason, you can get an expansion of the form f(x) = a + b sin(x) + cx + d sin(x)² + ex².
  • Maple supports arbitrary weighting of the error (e.g., if you care more about errors at the endpoints); I find this super-useful, especially when dealing with transformed variables or piecewise approximations. Sollya only supports relative and absolute errors, which is more limiting.
  • Sollya can seemingly be embedded as a library. Useful for some, not really relevant for me.
  • And finally, Sollya doesn't optimize coefficients over arbitrary precision; you tell it what accuracy you have to deal with (number of bits in floating or fixed point) and it optimizes the coefficients with that round-off error in mind. (I don't know if it also deals with intermediate roundoff errors when evaluating the polynomial.) Fabian makes a big deal of this, but for fp32, it doesn't really seem to matter much; I did some tests relative to what I had already gotten out of Maple, and the difference in maximum error was microscopic.

So, the verdict? Sollya is certainly good, and I can see myself using it in the future, but for me it's more an augmentation of Maple than a replacement for this use.

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

Detecting binary files in the history of a git repository

Pre, 26/01/2018 - 3:57md
Git, VCSes and binary files

Git is famous and has become popular even in enterprise/commercial environments. But Git is also infamous regarding the storage of large and/or binary files that change often, in spite of the fact that they can be stored efficiently. For large files there have been several attempts to fix the issue, with varying degrees of success; the most successful are git-lfs and git-annex.

My personal view is that, contrary to many practices, it is a bad idea to store binaries in any VCS. Still, this practice has been, and still is, in use in many projects, especially closed-source ones. I won't go into the reasons and how legitimate they are; let's just say that we might finally convince people that binaries should be removed from the VCS, git in particular.

Since the purpose of a VCS is to make sure no version of the stored objects is ever lost, Linus designed git in such a way that, if you know the exact hash of the tip/head of your git branch, you are guaranteed that the whole history of that branch is unchanged, even if the repository was stored in a non-trusted location (I will ignore hash collisions, for practical reasons).

The consequence of this is that if the history is changed by even one bit, all commit hashes and history after that change will change as well. This is what people refer to when they say they rewrite the (git) history, most often in the context of a rebase.
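For instance, even a no-op amend of the latest commit produces a new commit object and hence a new hash:

git rev-parse HEAD              # note the current commit hash
git commit --amend --no-edit    # rewrite the commit, keeping the message
git rev-parse HEAD              # a different hash; any descendants would change too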

But did you know that you can use git rebase to traverse the history of a branch and do all sorts of operations, such as detecting all binary files that were ever stored in the branch?
Detecting any binary files, only in the current commit

As with everything on *nix, we start with some building blocks and construct our solution on top of them. Let's first find all files, except the ones in .git:

find . -type f -print | grep -v '^\.\/\.git\/'

Then we can use the 'file' utility to list the non-text files:
(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file) | egrep -v '(ASCII|Unicode) text'

If there are any such files, it means the current git commit is one that needs our attention; otherwise, we're fine.
(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file) | egrep -v '(ASCII|Unicode) text' && (echo 'ERROR:' && git show --oneline -s) || echo OK

Of course, we assume here that the work tree is clean.
Checking all commits in a branch

Since we want to make this an efficient process, since we only care whether the history contains binaries at all, and since branches are cheap in git, we can use a temporary branch that can be thrown away once our processing is finished.
Making a new branch for such experiments is also a good idea to avoid losing history in case we make some stupid mistake during the experiment.

Hence, we first create a new branch that points to the exact same tip as the branch to be checked, and switch to it:
git checkout -b test_bins

Git has many commands that facilitate automation, and in my case I want to run the chain of commands on all commits. For this we can put our chain of commands in a script:

cat > ../check_file_text.sh
#!/bin/sh

(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file )| egrep -v '(ASCII|Unicode) text' && (echo 'ERROR:' && git show --oneline -s) || echo OK
Then we (ab)use 'git rebase' to execute it for us on every commit:

git rebase --exec="sh ../check_file_text.sh" -i $startcommit

After we execute this, the editor window will pop up; just save and exit. Assuming $startcommit is the hash of the first commit we know to be clean, or beyond which we don't care to search for binaries, this will check all commits made since then.

Here is an example output when checking the newest 5 commits:

$ git rebase --exec="sh ../check_file_text.sh" -i HEAD~5
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Successfully rebased and updated refs/heads/test_bins.

Please note that this process can change the history on the test_bins branch, but that is why we used a throw-away branch anyway, right? After we're done, we can switch back to another branch and delete the test branch.

$ git co master
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'
$ git branch -D test_bins
Deleted branch test_bins (was 6358b91).

Enjoy!

eddyp noreply@blogger.com Rambling around foo
