Planet Debian


Not being perfect

Wed, 17/01/2018 - 8:49pm

I know I am very late on this update (and also very late on emailing back my mentors). I am sorry. It took me a long time to figure out how to put into words everything that has been going on for the past few weeks.

Let's begin with this: yes, I am so very aware there is an evaluation coming up (in two days) and that it is important "to have at least one piece of work that is visible in the week of evaluation" to show what I have been doing since the beginning of the internship.

But the truth is: as of now, I don't have any code to show. And to me, that screams failure. I didn't know what to say, either to my mentors or here, to explain that I hadn't met everyone's expectations. That I had not been perfect.

So I had to ask myself: what could I learn from this, and how could I keep going and keep working on this project?

Coincidence or not, when I was wondering that I crossed paths (again) with one of the most amazing TED Talks there is:

Reshma Saujani's "Teach girls bravery, not perfection"

And yes, that was very much me, because even though I had written down pretty much every step I had taken trying to solve the problem I got stuck on, I wasn't ready to share all that, not even with my mentors (yes, I can see now how that isn't very helpful). I would rather let them go thinking I am lazy and didn't do anything all this time than to send all those notes about my failure and have them realize I didn't know what they expected me to know or... well, that they'd picked the wrong candidate.

What was I trying to do?

As I talked about in my previous post, the EventCalendar macro seemed like a good place to start. I wanted to add a piece of code to it that would allow exporting the events data to the iCalendar format. Because this is similar to what I did in my contribution to github-icalendar, and because my mentor Daniel had suggested something like it, I thought it would be a good way of familiarizing myself with how macro development is done for the MoinMoin wiki.

How far did I go?

As I had planned, I started by studying the EventCalendar code to understand how it works, taking notes as I went.

EventMacro fetches events from MoinMoin pages and uses Python's pickle module to serialize and de-serialize the data. This should be okay if you can trust the people editing the wiki (and, therefore, creating the events) enough, but it might not be a good option if we start using external sources (such as third-party websites) for event data - at least, not directly on the gathered data. See the warning below, from the pickle module docs:

Warning: The pickle module is not secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source.
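A format like JSON sidesteps the warning above entirely: its parser only ever produces plain data types, so it is safe to load from untrusted sources. A minimal sketch (the event fields here are invented for illustration, not what EventMacro actually stores):

```python
import json

# A hypothetical event record, as it might be gathered from a wiki page
# or a third-party website; the field names are illustrative only.
event = {"title": "DebConf18", "start": "2018-07-29", "location": "Hsinchu"}

# Round-tripping through JSON never executes code on load, unlike pickle,
# so it is safe even when the serialized data comes from an untrusted source.
raw = json.dumps(event)
restored = json.loads(raw)
assert restored == event
print(raw)
```

The trade-off is that JSON handles only basic types (dicts, lists, strings, numbers), whereas pickle can serialize arbitrary Python objects - which is exactly where its danger comes from.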

From the code and from my mentors' input, I understand that EventMacro is more about displaying the events, putting them on a wiki page. That could be helpful later on, but it is not exactly what we want now, which is a standalone application that gathers data about the events, models that data the way we want it organized, and perhaps makes it accessible through an API and/or exports it as JSON. Then either MoinMoin or any other FOSS community project could choose how to display and make use of the events.
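As for the iCalendar-export idea itself, surprisingly little is needed to emit a valid VCALENDAR from gathered event data. A minimal standalone sketch (the event structure and values are assumptions for illustration):

```python
# Minimal iCalendar (RFC 5545) serializer for a list of event dicts.
# The dict layout (uid/date/summary) is an assumption, not the format
# the EventCalendar macro actually uses.

def to_icalendar(events):
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//example//events//EN"]
    for ev in events:
        lines += [
            "BEGIN:VEVENT",
            "UID:%s" % ev["uid"],
            "DTSTART;VALUE=DATE:%s" % ev["date"],  # e.g. 20180117
            "SUMMARY:%s" % ev["summary"],
            "END:VEVENT",
        ]
    lines.append("END:VCALENDAR")
    # RFC 5545 mandates CRLF line endings
    return "\r\n".join(lines) + "\r\n"

ics = to_icalendar([{"uid": "1@example.org", "date": "20180117",
                     "summary": "Community meetup"}])
print(ics)
```

A real exporter would also need to fold long lines and escape special characters per the RFC, but the skeleton above is enough to be consumed by most calendar clients.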

What did go wrong?

But the thing is... even though I studied the code, I couldn't see it running on my MoinMoin instance. I tried and tried but, generally speaking, I got stuck trying to get macros to work at all. Standard macros that come with MoinMoin work perfectly. But for macros from MacroMarket, I couldn't find a way to make them work.

For the EventCalendar macro, I tried my best to follow the instructions in the Installation Guide, but I simply couldn't get it to be processed.

Things I did:

  • I downloaded the macro file and renamed it to
  • I put it in the local macro directory (yourwiki/data/plugins/macro) and proceeded with the rest of the instructions.
  • When that didn't work, I copied the file to the global macro directory (MoinMoin/macro), but that wasn't enough either.
  • I made sure to add the .css to all styles, both common.css and screen.css; it still didn't work.
  • I thought that maybe it was the arguments on the macro, so I tried to add it to the wiki page in the following ways:
<<EventCalendar>>
<<EventCalendar(category=CategoryEventCalendar)>>
<<EventCalendar(,category=CategoryEventCalendar)>>
<<EventCalendar(,,category=CategoryEventCalendar)>>

Still, the macro wasn't processed and appeared just like that on the page, even though I had already created pages with that category and added event info to them.

To investigate, I tried using other macros:

These all came with the MoinMoin core and they all worked.

I tried other ones:

That, just like EventCalendar, didn't work.

Going through these macros also made me realize how poorly documented most of them are, particularly regarding installation and integration with the whole system, even when the code itself is clear. (And to think that at the beginning of this whole thing I had to search and read up on what DocStrings are, because the MoinMoin Coding Style says: "That does NOT mean that there should be no docstrings.". Now it seems some developers didn't know what DocStrings were either.)

I checked permissions, but it couldn't be that: the downloaded macros have the same permissions as the other macros, and they all belong to the same user.

I thought that maybe it was a problem with Python versions, or even with the way the MoinMoin installation was done. So I tried some alternatives. First, I installed it again on a new CodeAnywhere Ubuntu container, but I still had the same problem.

I tried a local Debian installation... same problem. Even though Ubuntu is based on Debian, the fact that macros didn't work on either told me the problem wasn't necessarily the distribution, and that it didn't matter which packages or libraries each of them ships. The problem seemed to be somewhere else.

Then I proceeded to analyze the Apache error log to see if I could figure it out:

[Thu Jan 11 00:33:28.230387 2018] [wsgi:error] [pid 5845:tid 139862907651840] [remote ::1:43998] 2018-01-11 00:33:28,229 WARNING MoinMoin.log:112 /usr/local/lib/python2.7/dist-packages/MoinMoin/support/werkzeug/ BrokenFilesystemWarning: Detected a misconfigured UNIX filesystem: Will use UTF-8 as filesystem encoding instead of 'ANSI_X3.4-1968'

[Thu Jan 11 00:34:11.089031 2018] [wsgi:error] [pid 5840:tid 139862941255424] [remote ::1:44010] 2018-01-11 00:34:11,088 INFO MoinMoin.config.multiconfig:127 using wiki config: /usr/local/share/moin/wikiconfig.pyc

Alright, so that wasn't actually set to utf-8, my bad. I fixed it and read everything again to make sure I hadn't missed anything this time. I restarted the server and... nope, the macros still didn't work.

So, a misconfigured UNIX filesystem? I wasn't quite sure what that was, but I searched for it and it seemed easily solved by generating an en_US.UTF-8 locale and/or setting it, right?
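For the record, on Debian/Ubuntu that usually comes down to something like the following (run as root; the exact locale name comes from the error message, and your /etc/locale.gen layout may differ):

```shell
# Uncomment the en_US.UTF-8 entry in /etc/locale.gen, then regenerate
# the locale archive and set it as the system default.
sed -i 's/^# *en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
locale-gen
update-locale LANG=en_US.UTF-8
```

(`dpkg-reconfigure locales` does the same interactively.)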

Well, those errors really did go away... but even after restarting the Apache server, the macros still wouldn't work.

So this is how things went up until today. It ends up with me not having a clue where else to look to try and fix the macros and make them work so I could start coding and having some results... or does it?

This was a post about a failure, but...

Whoever wrote that "often times writing a blog post will help you find the solution you're working on" in the e-mail we received when we were accepted for Outreachy... damn, you were right.

I opened the command history to get my MoinMoin instance running again, so I could verify that the names of the macros that worked and which ones didn't were correct for this post, when...

I cannot believe I couldn't figure it out.

What had been happening all this time? Yes, the .py macro file should go to moin/data/plugin/macro, but not in the directories I was putting it. I hadn't realized, all this time, that the wiki wasn't actually installed in the yourwiki/data/plugins/macro directory where the extracted source code is. It is installed under /usr/local/share/, so the files should go in /usr/local/share/moin/data/plugin/macro. Of course I should've realized this sooner (after all, I was the one who installed it), but... it happens.

I copied the files there, set the appropriate owner and... IT WORKED!
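In shell terms, the whole fix amounted to something like this (the installed-tree path is from the post; the macro filename and the web-server user are assumptions for illustration):

```shell
# Install the macro into the *installed* wiki tree, not the extracted
# source tree. EventCalendar.py and www-data are assumed names here.
cp EventCalendar.py /usr/local/share/moin/data/plugin/macro/
chown www-data:www-data /usr/local/share/moin/data/plugin/macro/EventCalendar.py
```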

Renata (Renata's blog)

Announcing "Just TODO It"

Wed, 17/01/2018 - 6:20pm

Recently, I wished to use a trivially-simple TODO-list application whilst working on a project. I had a look through what was available to me in the "GNOME Software" application and was surprised to find nothing suitable. In particular I just wanted to capture a list of actions that I could tick off; I didn't want anything more sophisticated than that (and indeed, more sophistication would mean a learning curve I couldn't afford at the time). I then remembered that I'd written one myself, twelve years ago. So I found the old code, dusted it off, made some small adjustments so it would work on modern systems and published it.

At the time that I wrote it, I found (at least) one other similar piece of software called "Tasks" which used Evolution's TODO-list as the back-end data store. I can no longer find any trace of this software, and the old web host has disappeared.

My tool is called Just TODO It and it does very little. If that's what you want, great! You can reach the source via that prior link or jump straight to GitHub:

jmtd (Jonathan Dowland's Weblog)

Procrastinating by tweaking my desktop with devilspie2

Tue, 16/01/2018 - 3:51pm

Tweaking my desktop seems to be my preferred form of procrastination. So, a blog like this is a sure sign I have too much work on my plate.

I have a laptop. I carry it to work and plug it into a large monitor - where I like to keep all my instant or near-instant communications displayed at all times while I switch between workspaces on my smaller laptop screen as I move from email (workspace one), to shell (workspace two), to web (workspace three), etc.

When I'm not at the office, I only have my laptop screen, which has to accommodate everything.

I soon got tired of dragging things around every time I plugged or unplugged the monitor, and started accumulating a mess of bash scripts running wmctrl and even calling my own python-wnck script. (At first I couldn't get wmctrl to pin a window, but I lived with it. When gajim switched to GTK 3 and my openbox window decorations disappeared, though, I couldn't even pin the window manually.)

Now I have the following simpler setup.

Manage hot plugging of my monitor.

Symlink to my monitor status device:

0 jamie@turkey:~$ ls -l ~/.config/turkey/monitor.status
lrwxrwxrwx 1 jamie jamie 64 Jan 15 15:26 /home/jamie/.config/turkey/monitor.status -> /sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-DP-1/status
0 jamie@turkey:~$
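For completeness, the symlink itself is created once with (device path taken from the ls output above; the drm path will differ on other hardware):

```shell
# Point a stable config path at the kernel's connector-status file.
mkdir -p ~/.config/turkey
ln -s /sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-DP-1/status \
      ~/.config/turkey/monitor.status
```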

Create a udev rule to place my monitor to the right of my LCD every time the monitor is plugged in and every time it is unplugged.

0 jamie@turkey:~$ cat /etc/udev/rules.d/90-vga.rules
# When a monitor is plugged in, adjust my display to take advantage of it
ACTION=="change", SUBSYSTEM=="drm", ENV{HOTPLUG}=="1", RUN+="/etc/udev/scripts/vga-adjust"
0 jamie@turkey:~$

And here is the udev script:

0 jamie@turkey:~$ cat /etc/udev/scripts/vga-adjust
#!/bin/bash

logger -t "jamie-udev" "Monitor event detected, waiting 1 second for system to detect change."

# We don't know whether the VGA monitor is being plugged in or unplugged so we
# have to autodetect first. And, it takes a few seconds to assess whether the
# monitor is there or not, so sleep for 1 second.
sleep 1

monitor_status="/home/jamie/.config/turkey/monitor.status"
status=$(cat "$monitor_status")
XAUTHORITY=/home/jamie/.Xauthority

if [ "$status" = "disconnected" ]; then
  # The monitor is not plugged in
  logger -t "jamie-udev" "Monitor is being unplugged"
  xrandr --output DP-1 --off
else
  logger -t "jamie-udev" "Monitor is being plugged in"
  xrandr --output DP-1 --right-of eDP-1 --auto
fi
0 jamie@turkey:~$

Move windows into place.

So far, this handles ensuring the monitor is activated and placed in the right position. But, nothing has changed in my workspace.

Here's where the devilspie2 configuration comes in:

==> /home/jamie/.config/devilspie2/00-globals.lua <==
-- Collect some global variables to be used throughout.
name = get_window_name();
app = get_application_name();
instance = get_class_instance_name();

-- See if the monitor is plugged in or not. If monitor is true, it is
-- plugged in, if it is false, it is not plugged in.
monitor = false;
device = "/home/jamie/.config/turkey/monitor.status"
f = io.open(device, "rb")
if f then
  -- Read the contents, remove the trailing line break.
  content = string.gsub(f:read "*all", "\n", "");
  if content == "connected" then
    monitor = true;
  end
end

==> /home/jamie/.config/devilspie2/gajim.lua <==
-- Look for my gajim message window. Pin it if we have the monitor.
if string.find(name, "Gajim:") then
  if monitor then
    set_window_geometry(1931,31,590,1025);
    pin_window();
  else
    set_window_workspace(4);
    set_window_geometry(676,31,676,725);
    unpin_window();
  end
end

==> /home/jamie/.config/devilspie2/grunt.lua <==
-- grunt is the window I use to connect via irc. I typically connect to
-- grunt via a terminal called spade, which is opened using a-terminal-yoohoo
-- so that bell actions cause a notification. The window is called spade if I
-- just opened it but usually changes names to grunt after I connect via autossh
-- to grunt.
--
-- If no monitor, put spade in workspace 2, if monitor, then pin it to all
-- workspaces and maximize it vertically.
if instance == "urxvt" then
  -- When we launch, the terminal is called spade, after we connect it
  -- seems to get changed to jamie@grunt or something like that.
  if name == "spade" or string.find(name, "grunt:") then
    if monitor then
      set_window_geometry(1365,10,570,1025);
      set_window_workspace(3);
      -- maximize_vertically();
      pin_window();
    else
      set_window_geometry(677,10,676,375);
      set_window_workspace(2);
      unpin_window();
    end
  end
end

==> /home/jamie/.config/devilspie2/terminals.lua <==
-- Note - these will typically only work after I start the terminals
-- for the first time because their names seem to change.
if instance == "urxvt" then
  if name == "heart" then
    set_window_geometry(0,10,676,375);
  elseif name == "spade" then
    set_window_geometry(677,10,676,375);
  elseif name == "diamond" then
    set_window_geometry(0,376,676,375);
  elseif name == "clover" then
    set_window_geometry(677,376,676,375);
  end
end

==> /home/jamie/.config/devilspie2/zimbra.lua <==
-- Look for my zimbra firefox window. Shows support queue.
if string.find(name, "Zimbra") then
  if monitor then
    unmaximize();
    set_window_geometry(2520,10,760,1022);
    pin_window();
  else
    set_window_workspace(5);
    set_window_geometry(0,10,676,375);
    -- Zimbra can take up the whole window on this workspace.
    maximize();
    unpin_window();
  end
end

And lastly, it is started (and restarted) with:

0 jamie@turkey:~$ cat ~/.config/systemd/user/devilspie2.service
[Unit]
Description=Start devilspie2, program to place windows in the right locations.

[Service]
ExecStart=/usr/bin/devilspie2

[Install]
0 jamie@turkey:~$

This I have configured to run via a key combination that I hit every time I plug in or unplug my monitor.

Jamie McClelland (pages tagged debian)

Reproducible Builds: Weekly report #142

Tue, 16/01/2018 - 1:00pm

Here's what happened in the Reproducible Builds effort between Sunday December 31 and Saturday January 13 2018:

Media coverage

Development and fixes in key packages

Chris Lamb implemented two reproducibility checks in the lintian Debian package quality-assurance tool:

  • Warn about packages that ship Hypothesis example files. (#886101, report)
  • Warn about packages that override dh_fixperms without calling dh_fixperms as this makes the build vary depending on the current umask(2). (#885910, report)
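The umask issue in the second check can be demonstrated directly: the permission bits of files created during a build depend on the process umask, so two otherwise identical builds can differ. A minimal standalone demonstration (POSIX only; unrelated to lintian's own implementation):

```python
import os
import stat
import tempfile

def perms_under_umask(mask):
    """Create a file requesting mode 0o666 and report the resulting bits."""
    old = os.umask(mask)
    try:
        d = tempfile.mkdtemp()
        path = os.path.join(d, "artifact")
        os.close(os.open(path, os.O_CREAT | os.O_WRONLY, 0o666))
        return stat.S_IMODE(os.stat(path).st_mode)
    finally:
        os.umask(old)

# The same "build" yields different permission bits under different umasks,
# which is exactly the variation that running dh_fixperms normalizes away.
print(oct(perms_under_umask(0o022)))
print(oct(perms_under_umask(0o077)))
```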
Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

60 package reviews have been added, 43 have been updated and 76 have been removed this week, adding to our knowledge about identified issues.

4 new issue types have been added:

The notes for one issue type were updated:

  • build_dir_in_documentation_generated_by_doxygen: 1, 2
Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adam Borowski (2)
  • Adrian Bunk (16)
  • Niko Tyni (1)
  • Chris Lamb (6)
  • Jonas Meurer (1)
  • Simon McVittie (1)
diffoscope development

disorderfs development

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb and Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks (Reproducible builds blog)

OpenSym 2017 Program Postmortem

Tue, 16/01/2018 - 4:38am

The International Symposium on Open Collaboration (OpenSym, formerly WikiSym) is the premier academic venue exclusively focused on scholarly research into open collaboration. OpenSym is an ACM conference which means that, like conferences in computer science, it’s really more like a journal that gets published once a year than it is like most social science conferences. The “journal”, in this case, is called the Proceedings of the International Symposium on Open Collaboration and it consists of final copies of papers which are typically also presented at the conference. Like journal articles, papers that are published in the proceedings are not typically published elsewhere.

Along with Claudia Müller-Birn from the Freie Universität Berlin, I served as the Program Chair for OpenSym 2017. For the social scientists reading this, the role of program chair is similar to being an editor for a journal. My job was not to organize keynotes or logistics at the conference—that is the job of the General Chair. Indeed, in the end I didn’t even attend the conference! Along with Claudia, my role as Program Chair was to recruit submissions, recruit reviewers, coordinate and manage the review process, make final decisions on papers, and ensure that everything makes it into the published proceedings in good shape.

In OpenSym 2017, we made several changes to the way the conference has been run:

  • In previous years, OpenSym had tracks on topics like free/open source software, wikis, open innovation, open education, and so on. In 2017, we used a single track model.
  • Because we eliminated tracks, we also eliminated track-level chairs. Instead, we appointed Associate Chairs or ACs.
  • We eliminated page limits and the distinction between full papers and notes.
  • We allowed authors to write rebuttals before reviews were finalized. Reviewers and ACs were allowed to modify their reviews and decisions based on rebuttals.
  • To assist in assigning papers to ACs and reviewers, we made extensive use of bidding. This means we had to recruit the pool of reviewers before papers were submitted.

Although each of these things have been tried in other conferences, or even piloted within individual tracks in OpenSym, all were new to OpenSym in general.

Overview Statistics

  • Papers submitted: 44
  • Papers accepted: 20
  • Acceptance rate: 45%
  • Posters submitted: 2
  • Posters presented: 9
  • Associate Chairs: 8
  • PC Members: 59
  • Authors: 108
  • Author countries: 20

The program was similar in size to the ones in the last 2-3 years in terms of the number of submissions. OpenSym is a small but mature and stable venue for research on open collaboration. This year was also similar, although slightly more competitive, in terms of the conference acceptance rate (45%—it had been slightly above 50% in previous years).

As in recent years, there were more posters presented than submitted because the PC found that some rejected work, although not ready to be published in the proceedings, was promising and advanced enough to be presented as a poster at the conference. Authors of posters submitted 4-page extended abstracts for their projects which were published in a “Companion to the Proceedings.”


Over the years, OpenSym has established a clear set of niches. Although we eliminated tracks, we asked authors to choose from a set of categories when submitting their work. These categories are similar to the tracks at OpenSym 2016. Interestingly, a number of authors selected more than one category. This would have led to difficult decisions in the old track-based system.

The figure above shows a breakdown of papers in terms of these categories as well as indicators of how many papers in each group were accepted. Papers in multiple categories are counted multiple times. Research on FLOSS and Wikimedia/Wikipedia continues to make up a sizable chunk of OpenSym’s submissions and publications. That said, these now make up a minority of total submissions. Although Wikipedia and Wikimedia research made up a smaller proportion of the submission pool, it was accepted at a higher rate. Also notable is the fact that 2017 saw an uptick in the number of papers on open innovation. I suspect this was due, at least in part, to the involvement of General Chair Lorraine Morgan, who specializes in that area. Somewhat surprisingly to me, we had a number of submissions about Bitcoin and blockchains. These are natural areas of growth for OpenSym but have never been a big part of work in our community in the past.

Scores and Reviews

As in previous years, review was single-blind in that reviewers’ identities were hidden but authors’ identities were not. Each paper received between 3 and 4 reviews plus a metareview by the Associate Chair assigned to the paper. All papers received 3 reviews, but ACs were encouraged to call in a 4th reviewer at any point in the process. In addition to the text of the reviews, we used a -3 to +3 scoring system, with borderline papers scored as 0. Reviewers scored papers using full-point increments.

The figure above shows scores for each paper submitted. The vertical grey lines reflect the distribution of scores where the minimum and maximum scores for each paper are the ends of the lines. The colored dots show the arithmetic mean for each score (unweighted by reviewer confidence). Colors show whether the papers were accepted, rejected, or presented as a poster. It’s important to keep in mind that two papers were submitted as posters.

Although Associate Chairs made the final decisions on a case-by-case basis, every paper that had an average score of less than 0 (the horizontal orange line) was rejected or presented as a poster, and most (but not all) papers with positive average scores were accepted. Although a positive average score seemed to be a requirement for publication, negative individual scores weren’t necessarily showstoppers. We accepted 6 papers with at least one negative score. We ultimately accepted 20 papers—45% of those submitted.
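The pattern described above — the unweighted mean of -3..+3 reviewer scores guiding, but not deciding, the outcome — can be sketched as follows (the example scores are invented):

```python
# Toy illustration of the OpenSym 2017 scoring heuristic: each reviewer
# scores a paper -3..+3; a sub-zero mean meant rejection or a poster,
# while a positive mean made acceptance likely but not automatic.

def mean_score(scores):
    return sum(scores) / len(scores)

def triage(scores):
    """Rough triage only; ACs made the actual case-by-case decisions."""
    if mean_score(scores) < 0:
        return "reject-or-poster"
    return "likely-accept"

# One negative individual score is not a showstopper if the mean stays positive.
print(triage([2, 1, -1]))
print(triage([-2, 0, 1]))
```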


This was the first time that OpenSym used a rebuttal or author response and we are thrilled with how it went. Although they were entirely optional, almost every team of authors used it! Authors of 40 of our 46 submissions (87%!) submitted rebuttals.

Lower | Unchanged | Higher
  6   |    24     |   10

The table above shows how average scores changed after authors submitted rebuttals. The table shows that rebuttals’ effect was typically neutral or positive. Most average scores stayed the same but nearly two times as many average scores increased as decreased in the post-rebuttal period. We hope that this made the process feel more fair for authors and I feel, having read them all, that it led to improvements in the quality of final papers.

Page Lengths

In previous years, OpenSym followed most other venues in computer science by allowing submission of two kinds of papers: full papers which could be up to 10 pages long and short papers which could be up to 4. Following some other conferences, we eliminated page limits altogether. This is the text we used in the OpenSym 2017 CFP:

There is no minimum or maximum length for submitted papers. Rather, reviewers will be instructed to weigh the contribution of a paper relative to its length. Papers should report research thoroughly but succinctly: brevity is a virtue. A typical length of a “long research paper” is 10 pages (formerly the maximum length limit and the limit on OpenSym tracks), but may be shorter if the contribution can be described and supported in fewer pages— shorter, more focused papers (called “short research papers” previously) are encouraged and will be reviewed like any other paper. While we will review papers longer than 10 pages, the contribution must warrant the extra length. Reviewers will be instructed to reject papers whose length is incommensurate with the size of their contribution.

The following graph shows the distribution of page lengths across papers in our final program.

In the end, 3 of 20 published papers (15%) were over 10 pages. More surprisingly, 11 of the accepted papers (55%) were below the old 10-page limit. Fears that some have expressed, that page limits are the only thing keeping OpenSym from publishing enormous rambling manuscripts, seem to be unwarranted, at least so far.


Although I won’t post any analysis or graphs, bidding worked well. With only two exceptions, every single assigned review went to someone who had bid “yes” or “maybe” for the paper in question, and the vast majority went to people who had bid “yes.” However, this comes with one major proviso: people who did not bid at all were marked as “maybe” for every single paper.

Given a reviewer pool whose diversity of expertise matches that in your pool of authors, bidding works fantastically. But everybody needs to bid. The only problems with reviewers we had were with people who had failed to bid. It might be that reviewers who don’t bid are less committed to the conference, more overextended, more likely to drop things in general, etc. It might also be that reviewers who fail to bid get poor matches, which cause them to become less interested, willing, or able to do their reviews well and on time.

Having used bidding twice as chair or track-chair, my sense is that bidding is a fantastic thing to incorporate into any conference review process. The major limitations are that you need to build a program committee (PC) before the conference (rather than finding the perfect reviewers for specific papers) and you have to find ways to incentivize or communicate the importance of getting your PC members to bid.


The final results were a fantastic collection of published papers. Of course, it couldn’t have been possible without the huge collection of conference chairs, associate chairs, program committee members, external reviewers, and staff supporters.

Although we tried quite a lot of new things, my sense is that nothing we changed made things worse and many changes made things smoother or better. Although I’m not directly involved in organizing OpenSym 2018, I am on the OpenSym steering committee. My sense is that most of the changes we made are going to be carried over this year.

Finally, it’s also been announced that OpenSym 2018 will be in Paris on August 22-24. The call for papers should be out soon and the OpenSym 2018 paper deadline has already been announced as March 15, 2018. You should consider submitting! I hope to see you in Paris!

This Analysis

OpenSym used the gratis version of EasyChair to manage the conference, which doesn’t allow chairs to export data. As a result, the data used in this postmortem was scraped from EasyChair using two Python scripts. Numbers and graphs were created using a knitr file that combines R visualization and analysis code with markdown to create the HTML directly from the datasets. I’ve made all the code I used to produce this analysis available in this git repository. I hope someone else finds it useful. Because the data contains sensitive information on the review process, I’m not publishing the data.

This blog post was originally posted on the Community Data Science Collective blog.

Benjamin Mako Hill (copyrighteous)

More About the Thinkpad X301

Tue, 16/01/2018 - 4:22am

Last month I blogged about the Thinkpad X301 I got from a rubbish pile [1]. One thing I didn’t realise when writing that post is that the X301 doesn’t have the keyboard light that the T420 has. With the T420 I could press the bottom-left (Fn) and top-right (PgUp, from memory) keys to turn on a light over the keyboard. This is really good for typing at night. While I can touch-type, the small keyboard on a laptop makes it a little difficult, so the light is a feature I found useful. I wrote my review of the X301 before having to use it at night.

Another problem I noticed is that it crashes after running Memtest86+ for between 30 minutes and 4 hours. Memtest86+ doesn’t report any memory errors, the system just entirely locks up. I have 2 DIMMs for it (2G and 4G), I tried installing them in both orders, and I tried with each of them in the first slot (the system won’t boot if only the second slot is filled). Nothing changed. Now it is possible that this is something that might not happen in real use. For example it might only happen due to heat when the system is under sustained load which isn’t something I planned for that laptop. I would discard a desktop system that had such a problem because I get lots of free desktop PCs, but I’m prepared to live with a laptop that has such a problem to avoid paying for another laptop.

Last night the laptop battery suddenly stopped working entirely. I had it unplugged for about 5 minutes when it abruptly went off (no flashing light to warn that the battery was low or anything). Now when I plug it in the battery light flashes orange. A quick Google search indicates that this might mean that a fuse inside the battery pack has blown or that there might be a problem with the system board. Replacing the system board costs much more than the laptop is worth, and even replacing the battery will probably cost more than it’s worth. I previously bought a Thinkpad T420 at auction because it didn’t cost much more than getting a new battery and PSU for a T61 [2], and I expect I can find a similar deal if I poll the auction sites for a while.

Using an X series Thinkpad has been a good experience and I’ll definitely consider an X series for my next laptop. My previous history of laptops involved going from ones with a small screen that were heavy and clunky (what was available with 90’s technology and cost less than a car) to ones that had a large screen and were less clunky but still heavy. I hadn’t tried small and light with technology from the last decade, it’s something I could really get used to!

By today’s standards the X301 is deficient in a number of ways. It has 64G of storage (the same as my most recent phones) which isn’t much for software development, 6G of RAM which isn’t too bad but is small by today’s standards (16G is a common factory option nowadays), a 1440*900 screen which looks bad in any comparison (a lower resolution than the last 3 phones I’ve owned), and a slow CPU. No two of these limits would be enough to make me consider replacing that laptop. Even with the possibility of crashing under load it was still a useful system. But the lack of a usable battery in combination with all the other issues makes the entire system unsuitable for my needs. I would be very happy to use a fast laptop with a high resolution screen even without a battery, but not with this list of issues.

Next week I’m going to a conference and there’s no possibility of buying a new laptop before then. So for a week when I need to use a laptop a lot I will have a sub-standard laptop.

It really sucks to have a laptop develop a problem that makes me want to replace it so soon after I got it.

Related posts:

  1. I Just Bought a new Thinkpad and the Lenovo Web Site Sucks I’ve just bought a Thinkpad T61 at auction for $AU796....
  2. Thinkpad X301 Another Broken Thinkpad A few months ago I wrote a...
  3. thinkpad back from repair On Tuesday my Thinkpad was taken for service to fix...
etbe etbe – Russell Coker

Tex Yoda II Mechanical Keyboard with Trackpoint

Tue, 16/01/2018 - 3:38am
Here’s a short review of the Tex Yoda II Mechanical Keyboard with Trackpoint, a pointer to the next Swiss Mechanical Keyboard Meetup and why I ordered a $300 keyboard with less keys than a normal one.

Short Review of the Tex Yoda II

Pro
  • Trackpoint
  • Cherry MX Switches
  • Compact but heavy aluminium case
  • Backlight (optional)
  • USB C connector and USB A to C cable with angled USB C plug
  • All three types of Thinkpad Trackpoint caps included
  • Configurable layout with nice web-based configurator (might be opensourced in the future)
  • Fn+Trackpoint = scrolling (not further configurable, though)
  • Case not clipped, but screwed
  • Backlight brightness and Trackpoint speed configurable via key bindings (usually Fn and some other key)
  • Default Fn keybindings as side printed and backlit labels
  • Nice packaging
Contra
  • It’s only a 60% Keyboard (I prefer TKL) and the two common top rows are merged into one, switched with the Fn key.
  • Cursor keys by default (and labeled) on the right side (mapped to Fn + WASD) — maybe good for games, but not for me.
  • ~ on Fn-Shift-Esc
  • Occasional backlight flickering (low frequency)
  • Pulsed LED light effect (i.e. high frequency flickering) on all but the lowest brightness level
  • Trackpoint is very sensitive even in the slowest setting — use Fn+Q and Fn+E to adjust the trackpoint speed (“tps”)
  • No manual included or (obviously) downloadable.
  • Only the DIP switches 1-3 and 6 are documented, 4 and 5 are not. (Thanks gismo for the question about them!)
  • No more included USB hub like the Tex Yoda I had or the HHKB Lite 2 (USB 1.1 only) has.
My Modifications So Far

Layout Modifications Via The Web-Based Yoda 2 Configurator
  • Right Control and Menu key are Right and Left cursors keys
  • Fn+Enter and Fn+Shift are Up and Down cursor keys
  • Right Windows key is the Compose key (done in software via xmodmap)
  • Middle mouse button is of course a middle click (not Fn as with the default layout).
Other Modifications
  • Clear dampening o-rings (clear, 50A) under each key cap for a more silent typing experience
  • Braided USB cable
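The xmodmap part of the Compose-key remapping is a one-liner; a sketch, assuming the right Windows key sends the Super_R keysym on your system (check with xev):

```
! ~/.Xmodmap: turn the right Windows key into a Compose key
keysym Super_R = Multi_key
```

Loaded with xmodmap ~/.Xmodmap, e.g. from the X session startup.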
Next Swiss Mechanical Keyboard Meetup

On Sunday, the 18th of February 2018, the 4th Swiss Mechanical Keyboard Meetup will happen, this time at ETH Zurich, building CAB, room H52. I’ll be there with at least my Tex Yoda II and my vintage Cherry G80-2100.

Why I ordered a $300 keyboard

(JFTR: It was actually USD $299 plus shipping from the US to Europe and customs fees in Switzerland. I can’t say exactly how much of the shipping and customs fees was for that one keyboard, because I ordered several items at once. It’s complicated…)

I always was and still am a big fan of Trackpoints, as common on IBM and Lenovo Thinkpads as well as laptops from a few other manufacturers.

For a while I just used Thinkpads as my private everyday computer, first a Thinkpad T61, later a Thinkpad X240. At some point I also wanted a keyboard with Trackpoint on my workstation at work. So I ordered a Lenovo Thinkpad USB Keyboard with Trackpoint. Then I decided that I want a permanent workstation at home again and ordered two more such keyboards: One for the workstation at home, one for my Debian GNU/kFreeBSD running ASUS EeeBox (not affected by Meltdown or Spectre, yay! :-) which I often took with me to staff Debian booths at events. There, a compact keyboard with a built-in pointing device was perfect.

Then I met the guys from the Swiss Mechanical Keyboard Meetup at their 3rd meetup (pictures) and knew: I need a mechanical keyboard with Trackpoint.

IBM built one Model M with Trackpoint, the M13, but they’re hard to get. For example, ClickyKeyboards sells them, but doesn’t publish the price tag. :-/ Additionally, back then only two mouse buttons were usual, and I really need the third mouse button for unix-style pasting.

Then there’s the Unicomp Endura Pro, the legit successor of the IBM Model M13, but it’s only available with an IMHO very ugly color combination: light grey key caps in a black case. And they want approximately 50% of the price as shipping costs (to Europe). Additionally it didn’t have some other nice keyboard features I started to love: Narrow bezels are nice and keyboards with backlight (like the Thinkpad X240 ff. has) have their advantages, too. So … no.

Soon I found what I was looking for: the Tex Yoda, a nice, modern and quite compact mechanical keyboard with Trackpoint. Unfortunately it has been sold out for quite some years, and more than 5000 people on Massdrop were waiting for its reintroduction.

And then the unexpected happened: the Tex Yoda II was announced. I knew I had to get one. From then on the main question was when and where it would be available. To my surprise it was not on Massdrop but at a rather normal dealer, at

At that time a friend heard me talking of mechanical keyboards and of being unsure about which keyboard switches I should order. He offered to lend me his KBTalking ONI TKL (Ten Key Less) keyboard with Cherry MX Brown switches for a while. Which was great, because in theory, MX Brown switches were likely the most fitting ones for me. He also gave me two other non-functional keyboards with other Cherry MX switch colors (variants) for comparison. As another keyboard to compare I had my programmable Cherry G80-2100 from the early ’90s with vintage Cherry MX Black switches. Yet another keyboard to compare with is my Happy Hacking Keyboard (HHKB) Lite 2 (PD-KB200B/U) which I got as a gift a few years ago. While the HHKB once was a status symbol amongst hackers and system administrators, the old models (like this one) only had membrane-type keyboard switches. (They nevertheless still seem to be built, but are only sold in Japan.)

I noticed that I was quickly able to type faster with the Cherry MX Brown switches and the TKL layout than with the classic Thinkpad layout and its rubber dome switches or with the HHKB. So two things became clear:

  • At least for now I want Cherry MX Brown switches.
  • I want a TKL (ten key less) layout, i.e. one without the number block but with the cursor block. As with the Lenovo Thinkpad USB Keyboards and the HHKB, I really like the cursor keys being in the easy to reach lower right corner. The number pad is just in the way to have that.

Unfortunately the Tex Yoda II was without that cursor block. But since it otherwise fitted perfectly into my wishlist (Trackpoint, Cherry MX Brown switches available, Backlight, narrow bezels, heavy weight), I had to buy one once available.

So in early December 2017, I ordered a Tex Yoda II White Backlit Mechanical Keyboard (Brown Cherry MX) at

Because I was nevertheless keen on a TKL-sized keyboard I also ordered a Deck Francium Pro White LED Backlit PBT Mechanical Keyboard (Brown Cherry MX) which has an ugly font on the key caps, but was available for a reduced price at that time, and the controller got quite good reviews. And there was that very nice Tai-Hao 104 Key PBT Double Shot Keycap Set - Orange and Black, so the font issue was quickly solved with keycaps in my favourite colour: orange. :-)

The package arrived in early January. The aluminum case of the Tex Yoda II was even nicer than I thought. Unfortunately they’ve sent me a Deck Hassium full-size keyboard instead of the wanted TKL-sized Deck Francium. But the support of was very helpful and I assume I can get the keyboard exchanged at no cost.

Axel Beckert Blogging is futile

Retpoline-enabled GCC

Mon, 15/01/2018 - 10:28pm

Since I assume there are people out there that want Spectre-hardened kernels as soon as possible, I pieced together a retpoline-enabled build of GCC. It's based on the latest gcc-snapshot package from Debian unstable with H.J.Lu's retpoline patches added, but built for stretch.

Obviously this is really scary prerelease code and will possibly eat babies (and worse, it hasn’t taken into account the last-minute change of retpoline ABI, so it will break with future kernels), but it will allow you to compile 4.15.0-rc8 with CONFIG_RETPOLINE=y, and also allow you to assess the cost of retpolines (-mindirect-branch=thunk) in any performance-sensitive userspace code.

There will be upstream backports at least to GCC 7, but probably pretty far back (I've seen people talk about all the way to 4.3). So you won't have to run my crappy home-grown build for very long—it's a temporary measure. :-)

Oh, and it made Stockfish 3% faster than with GCC 6.3! Hooray.

Steinar H. Gunderson Steinar H. Gunderson

Quick recap of 2017

Mon, 15/01/2018 - 12:00pm

I haven’t been posting anything on my personal blog in a long while, let’s fix that!

Partial reason for this is that I’ve been busy documenting progress on the Debian Installer on my company’s blog. So far, the following posts were published there:

After the Stretch release, it was time to attend DebConf’17 in Montreal, Canada. I’ve presented the latest news on the Debian Installer front there as well. This included a quick demo of my little framework which lets me run automatic installation tests. Many attendees mentioned openQA as the current state of the art technology for OS installation testing, and Philip Hands started looking into it. Right now, my little thing is still useful as it is, helping me reproduce regressions quickly, and testing bug fixes… so I haven’t been trying to port that to another tool yet.

I also gave another presentation in two different contexts: once at a local FLOSS meeting in Nantes, France and once during the mini-DebConf in Toulouse, France. Nothing related to Debian Installer this time, as the topic was how I helped a company upgrade thousands of machines from Debian 6 to Debian 8 (and to Debian 9 since then). It was nice to have Evolix people around, since we shared our respective experience around automation tools like Ansible and Puppet.

After the mini-DebConf in Toulouse, another event: the mini-DebConf in Cambridge, UK. I tried to give a lightning talk about “how helped save the release(s)” but clearly speed was lacking, and/or I had too many things to present, so that didn’t work out as well as I hoped. Fortunately, there were no time constraints when I presented it during a Debian meet-up in Nantes, France. :)

Since Reproducible Tails builds were announced, it seemed like a nice opportunity to document how my company got involved into early work on reproducibility for the Tails project.

On an administrative level, I’m already done with all the paperwork related to the second financial year. \o/

Next things I’ll likely write about: the first two D-I Buster Alpha releases (many blockers kept popping up, it was really hard to release), and a few more recent release critical bug reports.

Cyril Brulebois KiBi’s blog

RHL'18 in Saint-Cergue, Switzerland

Mon, 15/01/2018 - 9:02am

RHL'18 was held at the centre du Vallon à St-Cergue, the building in the very center of this photo, at the bottom of the piste:

People from various free software communities in the region attended for a series of presentations, demonstrations, socializing and skiing. This event is a lot of fun and I would highly recommend that people look out for the next edition. (subscribe to rhl-annonces on for a reminder email)

Ham radio demonstration

I previously wrote about building a simple antenna for shortwave (HF) reception with software defined radio. That article includes links to purchase all the necessary parts from various sources. Everything described in that article, together with some USB sticks running Debian Hams Live (bootable ham radio operating system), some rolls of string and my FT-60 transceiver, fits comfortably into an OSCAL tote bag like this:

It is really easy to take this kit to an event anywhere, set it up in 10 minutes and begin exploring the radio spectrum. Whether it is a technical event or a village fair, radio awakens curiosity in people of all ages and provides a starting point for many other discussions about technological freedom, distributing stickers and inviting people to future events. My previous blog contains photos of what is in the bag and a video demo.

Open Agriculture Food Computer discussion

We had a discussion about progress building an Open Agriculture (OpenAg) food computer in Switzerland. The next meeting in Zurich will be held on 30 January 2018, please subscribe to the forum topic to receive further details.

Preparing for Google Summer of Code 2018

In between eating fondue and skiing, I found time to resurrect some of my previous project ideas for Google Summer of Code. Most of them are not specific to Debian, several of them need co-mentors, please contact me if you are interested.

Daniel.Pocock - debian


Mon, 15/01/2018 - 12:54am

A few comments on Star Wars: The Last Jedi.

Vice Admiral Holdo’s subplot was a huge success. She had to make a very difficult call over which she knew she might face a mutiny from the likes of Poe Dameron. The core of her challenge was that there was no speech or argument she could have given that would have placated Dameron and restored unity to the crew. Instead, Holdo had to press on in the face of that disunity. This reflects the fact that, sometimes, living as one should demands pressing on in the face of deep disagreement with others.

Not making it clear that Dameron was in the wrong until very late in the film was a key component of the successful portrayal of the unpleasantness of what Holdo had to do. If instead it had become clear to the audience early on that Holdo’s plan was obviously the better one, we would not have been able to observe the strength of Holdo’s character in continuing to pursue her plan despite the mutiny.

One thing that I found weak about Holdo was her dress. You cannot be effective on the frontlines of a hot war in an outfit like that! Presumably the point was to show that women don’t have to give up their femininity in order to take tough tactical decisions under pressure, and that’s indeed something worth showing. But this could have been achieved by much more subtle means. What was needed was to have her be the character with the most feminine outfit, and it would have been possible to fulfill that condition by having her wear something much more practical. Thus, having her wear that dress was crude and implausible overkill in the service of something otherwise worth doing.

I was very disappointed by most of the subplot with Rey and Luke: both the content of that subplot, and its disconnection from the rest of film.

Firstly, the content. There was so much that could have been explored that was not explored. Luke mentions that the Jedi failed to stop Darth Sidious “at the height of their powers”. Well, what did the Jedi get wrong? Was it the Jedi code; the celibacy; the bureaucracy? Is their light side philosophy too absolutist? How are Luke’s beliefs about this connected to his recent rejection of the Force? When he lets down his barrier and reconnects with the Force, Yoda should have had much more to say. The Force is, perhaps, one big metaphor for certain human capacities not emphasised by our contemporary culture. It is at the heart of Star Wars, and it was at the heart of Empire and Rogue One. It ought to have been at the heart of The Last Jedi.

Secondly, the lack of integration with the rest of the film. One of the aspects of Empire that enables its importance as a film, I suggest, is the tight integration and interplay between the two main subplots: the training of Luke under Yoda, and attempting to shake the Empire off the trail of the Millennium Falcon. Luke wants to leave the training unfinished, and Yoda begs him to stay, truly believing that the fate of the galaxy depends on him completing the training. What is illustrated by this is the strengths and weaknesses of both Yoda’s traditional Jedi view and Luke’s desire to get on with fighting the good fight, the latter of which is summed up by the binary sunset scene from A New Hope. Tied up with this desire is Luke’s love for his friends; this is an important strength of his, but Yoda has a point when he says that the Jedi training must be completed if Luke is to be ultimately successful. While the Yoda subplot and what happens at Cloud City could be independently interesting, it is only this integration that enables the film to be great. The heart of the integration is perhaps the Dark Side Cave, where two things are brought together: the challenge of developing the relationship with oneself possessed by a Jedi, and the threat posed by Darth Vader.

In the Last Jedi, Rey just keeps saying that the galaxy needs Luke, and eventually Luke relents when Kylo Ren shows up. There was so much more that could have been done with this! What is it about Rey that enables her to persuade Luke? What character strengths of hers are able to respond adequately to Luke’s fear of the power of the Force, and doubt regarding his abilities as a teacher? Exploring these things would have connected together the rebel evacuation, Rey’s character arc and Luke’s character arc, but these three were basically independent.

(Possibly I need to watch the cave scene from The Last Jedi again, and think harder about it.)

Sean Whitton Notes from the Library

I pushed an implementation of myself to GitHub

Sun, 14/01/2018 - 10:22pm

Roughly 4 years ago, I mentioned that there appears to be an esoteric programming language which shares my full name.

I know, it is really late, but two days ago, I discovered Racket. As a Lisp person, I immediately felt at home. And realizing how the language dispatch mechanism works, I couldn’t resist writing a Racket implementation of MarioLANG. A nice play on words and a good toy project to get my feet wet.

Racket programs always start with #lang. How convenient. MarioLANG programs for Racket therefore look something like this:

#lang mario
++++++++++++
===========+:
           ==

So much for abusing coincidences. Phew, this was a fun weekend project! And it has some potential for more challenges. Right now, it is only an interpreter, because it appears to be tricky to compile a 2d instruction "space" to traditional code. MarioLANG does not only allow for nested loops as BrainFuck does, it also includes weird concepts like the reversal of the instruction pointer direction. Coupled with the "skip" ([) instruction, this allows creating loops which have two exit conditions and reverse code execution on every pass. Something like this:

@[ some brainfuck [@
====================

And since this is a 2d programming language, this theoretical loop could be entered by jumping onto any of the instructions in between from above. And the heading could be either leftward or rightward when entering.

Discovering these patterns and translating them to compilable code is quite beyond me right now. Let’s see what time will bring.

Mario Lang The Blind Guru

SSL migration

Sun, 14/01/2018 - 11:05am

This week I managed to finally migrate my personal website to SSL, and on top of that migrate the SMTP/IMAP services to certificates signed by a "proper" CA (instead of my own). This however was more complex than I thought…

Let's encrypt?

I first wanted to do this when Let's Encrypt became available, but the way it works - with short-term certificates and automated renewal - put me off at first. The certbot tool needs to make semi-arbitrary outgoing requests to renew the certificates, and on public machines I have a locked-down outgoing traffic policy. So I gave up, temporarily…

I later found out that at least for now (for the current protocol), certbot only needs to talk to a certain API endpoint, and after some more research, I realized that the http-01 protocol is very straight-forward, only needing to allow some specific plain http URLs.

So then:

Issue 1: allowing outgoing access to a given API endpoint, somewhat restricted. I solved this by using a proxy, forcing certbot to go through it via env vars, learning about systemctl edit on the way, and from the proxy, only allowing that hostname. Quite weak, but at least not "open policy".
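The drop-in created by systemctl edit would look something like this (the proxy host and port are placeholders, and the exact unit name depends on whether certbot runs as a service or a timer on your system):

```
# /etc/systemd/system/certbot.service.d/override.conf
[Service]
Environment="http_proxy=http://proxy.internal.example:3128"
Environment="https_proxy=http://proxy.internal.example:3128"
```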

Issue 2: due to how http-01 works, it requires leaving some specific paths accessible over plain http, which means you can't have (in Apache) a "redirect everything to https" config. While fixing this I learned about mod_macro, which is quite interesting (and doesn't need an external pre-processor).
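One way to express "redirect everything except the challenge path" in plain Apache config (example.org and the challenge directory are placeholders; this is the kind of snippet mod_macro can stamp out per vhost):

```
<VirtualHost *:80>
    ServerName example.org
    # Serve http-01 challenges over plain http...
    Alias /.well-known/acme-challenge/ /var/www/acme/.well-known/acme-challenge/
    # ...and redirect everything else to https (PCRE negative lookahead).
    RedirectMatch permanent ^/(?!\.well-known/acme-challenge/)(.*)$ https://example.org/$1
</VirtualHost>
```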

The only remaining problem is that you can't automatically renew certificates for non-externally accessible systems; the dns-01 protocol also needs changing externally-visible state, so it's more or less the same. So:

Issue 3: For internal websites, still need a solution if own CA (self-signed, needs certificates added to clients) is not acceptable.

How did it go?

It seems that using SSL is more than SSLEngine on. I learned in this exercise about quite a few things.


CAA

DNS Certification Authority Authorization is pretty nice, and although it's not a strong guarantee (against malicious CAs), it gives some more signals that proper clients could check ("For this domain, only this CA is expected to sign certificates"); also, trivial to configure, with the caveat that one would need DNSSEC as well for end-to-end checks.
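In BIND zone-file syntax such a policy is one record per property (example.org and the report address are placeholders):

```
example.org.  IN  CAA  0 issue "letsencrypt.org"
example.org.  IN  CAA  0 iodef "mailto:hostmaster@example.org"
```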

OCSP stapling

I was completely unaware of OCSP Stapling, and yay, seems like a good solution to actually verifying that the certs were not revoked. However… there are many issues with it:

  • there needs to be proper configuration on the webserver to not cause more problems than without; Apache at least, needs increasing the cache lifetime, disable sending error responses (for transient CA issues), etc.
  • but even more, it requires the web server user to be able to make "random" outgoing requests, which IMHO is a big no-no
  • even the command line tools (i.e. openssl ocsp) are somewhat deficient: no proxy support (while s_client can use one)
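For reference, the Apache side of that "proper configuration" could look roughly like this (directive values are illustrative assumptions, not recommendations):

```
# Globally (outside any <VirtualHost>); the stapling cache is required:
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/ssl_stapling(65536)"

# In the SSL vhost:
SSLUseStapling On
# Don't forward transient OCSP responder errors to clients.
SSLStaplingReturnResponderErrors Off
# Cache good responses longer than the one-hour default.
SSLStaplingStandardCacheTimeout 86400
SSLStaplingResponderTimeout 5
```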

So the proper way to do this seems to be a separate piece of software, isolated from the webserver, that does proper/eager refresh of certificates while handling errors well.

Issue 4: No OCSP until I find a good way to do it.

HSTS, server-side and preloading

HTTP Strict Transport Security represents a commitment to encryption: once published with the recommended lifetime, browsers will remember that the website shouldn't be accessed over plain http, so you can't roll back.

Preloading HSTS is even stronger, and so far I haven't done it. Seems worthwhile, but I'll wait another week or so ☺ It's easily doable online.
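Server-side, HSTS is a single response header via mod_headers; the max-age below (about six months) is just an illustrative starting value, given that you can't easily roll it back:

```
# Only on the https vhost; "always" also covers error responses.
Header always set Strict-Transport-Security "max-age=15768000"
```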


HPKP

HTTP Public Key Pinning seems dangerous, at least according to some posts. Properly deployed, it would solve a number of problems with the public key infrastructure, but still, it is complex and a lot of overhead.

Certificate chains

Something I didn't know before is that servers are supposed to serve the entire chain; I thought, naïvely, that just the server certificate is enough, since the browsers will have the root CAs, but the intermediates seem to be problematic.

So, one needs to properly serve the full chain (Let's Encrypt makes this trivial, by the way), and also monitor that it is so.

Ciphers and SSL protocols

OpenSSL disabled SSLv2 in recent builds, but at least Debian stable still has SSLv3+ enabled and Apache does not disable it, so if you put your shiny new website through an SSL checker you get many issues (related strictly to ciphers).

I spent a bit of time researching and getting to the conclusion that:

  • every reasonable client (for my small webserver) supports TLSv1.1+, so disabling SSLv3/TLSv1.0 solved a bunch of issues
  • however, even for TLSv1.1+, a number of ciphers are not recommended by US standards, but explicitly disabling ciphers is a pain because I don't see a way to make it "cheap" (without needing manual maintenance); so there's that, my website is not HIPAA compliant due to the Camellia cipher.

Issue 5: Weak default configs

Issue 6: Getting perfect ciphers is not easy.

However, while not perfect, getting a proper config once you did the research is pretty trivial in terms of configuration.

My Apache config (feedback welcome):

SSLCipherSuite HIGH:!aNULL
SSLHonorCipherOrder on
SSLProtocol all -SSLv3 -TLSv1

And similarly for dovecot:

ssl_cipher_list = HIGH:!aNULL
ssl_protocols = !SSLv3 !TLSv1
ssl_prefer_server_ciphers = yes
ssl_dh_parameters_length = 4096

The last line there - the dh_params - I found via nmap, as my previous config had it at 1024, which is weaker than the key, defeating the purpose of a long key. Which leads to the next point:

DH parameters

It seems that DH parameters can be an issue, in the sense that way too many sites/people reuse the same params. Dovecot (in Debian) generates its own, but Apache (AFAIK) does not, and needs explicit configuration to use your own.

Issue 7: Investigate DH parameters for all software (postfix, dovecot, apache, ssh); see instructions.
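Generating and wiring up your own parameters is sketched below; -dsaparam is used only to keep generation fast for this example, and the SSLOpenSSLConfCmd line is my assumption of the usual Apache (≥ 2.4.8, OpenSSL ≥ 1.0.2) route:

```shell
# Generate site-specific 4096-bit DH parameters (matching the dovecot
# ssl_dh_parameters_length above) instead of reusing shared defaults.
openssl dhparam -dsaparam -out /tmp/dhparams.pem 4096

# Then, in the Apache SSL vhost:
#   SSLOpenSSLConfCmd DHParameters "/etc/apache2/dhparams.pem"
grep -q 'DH PARAMETERS' /tmp/dhparams.pem && echo dhparams-ok
```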


A number of interesting tools:

  • Online resources to analyse https config: e.g. SSL labs, and htbridge; both give very detailed information.
  • CAA checker (but this is trivial).
  • nmap ciphers report: nmap --script ssl-enum-ciphers, and very useful, although I don't think this works for STARTTLS protocols.
  • Cert Spotter from SSLMate. This seems to be useful as a complement to CAA (CAA being the policy, and Cert Spotter the monitoring for said policy), but it goes beyond it (key sizes, etc.); for the expiration part, I think nagios/icinga is easier if you already have it setup (check_http has options for lifetime checks).
  • Certificate chain checker; trivial, but a useful extra check that the configuration is right.

Ah, the good old days of plain http. SSL seems to add a lot of complexity; I'm not sure how much is needed and how much could actually be removed by smarter software. But, not too bad, a few evenings of study is enough to get a start; probably the bigger cost is in the ongoing maintenance and keeping up with the changes.

Still, a number of unresolved issues. I think the next goal will be to find a way to properly do OCSP stapling.

Iustin Pop blog

Make 'bts' (devscripts) accept TLS connection to mail server with self signed certificate

Sun, 14/01/2018 - 3:46am

My mail server runs with a self signed certificate. So bts, configured like this ...

...lately refused to send mails with this error:

bts: failed to open SMTP connection to
(SSL connect attempt failed error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed)

After searching a bit, I found a way to fix this locally without turning off the server certificate verification. The fix belongs in the send_mail() function. When calling the Net::SMTPS->new() constructor, it is possible to add the fingerprint of my self-signed certificate like this:

if (have_smtps) {
    $smtp = Net::SMTPS->new($host, Port => $port,
                            Hello => $smtphelo, doSSL => 'starttls',
                            SSL_fingerprint => 'sha1$hex-fingerprint')
        or die "$progname: failed to open SMTP connection to $smtphost\n($@)\n";
} else {
    $smtp = Net::SMTP->new($host, Port => $port, Hello => $smtphelo)
        or die "$progname: failed to open SMTP connection to $smtphost\n($@)\n";
}
Pretty happy to be able to use the bts command again.

Daniel Leidert [erfahrungen, meinungen, halluzinationen]

Fixing a Nintendo Game Boy Screen

Sun, 14/01/2018 - 1:28am

Over the holidays my old Nintendo Game Boy (the original DMG-01 model) has resurfaced. It works, but the display had a bunch of vertical lines near the left and right border that stay blank. Apparently a common problem with these older Game Boys and the solution is to apply heat to the connector foil upper side to resolder the contacts hidden underneath. There’s lots of tutorials and videos on the subject so I won’t go into much detail here.

Just one thing: The easiest way is to use a soldering iron (the foil is pretty heat resistant, it has to be soldered during production after all) and move it along the top at the affected locations. Which I tried at first and it kind of works but takes ages. Some columns reappear, others disappear, reappeared columns disappear again… In someone’s comment I read that they needed over five minutes until it was fully fixed!

So… simply apply a small drop of solder to the tip. That’s what you do for better heat transfer in normal soldering and of course it also works here (since the foil connector back doesn’t take solder this doesn’t make a mess or anything). That way, the missing columns reappeared practically instantly at the touch of the soldering iron and stayed fixed. Temperature setting was 250°C, more than sufficient for the task.

This particular Game Boy always had issues with the speaker cutting out, but we never had it replaced, I think because the problem was intermittent. After locating the bad solder joint on the connector and reheating it, this problem was also fixed. Basically this almost 28-year-old device is now in better working condition than it ever was.

Andreas Bombe pdo on Active Low

The VR Show

Fri, 29/12/2017 - 5:39pm

One of the things I had been looking forward to, had I got the visa on time for DebConf 15 (Germany), apart from the conference itself, was the attention on VR (Virtual Reality) and AR (Augmented Reality). I had heard the hype for so many years that I wanted to experience it, and I knew that Debianites might be a bit better at crystal-gazing and would perhaps have more of an idea than I had then. The only VR I knew about was from Hollywood movies and some VR videos, but that doesn’t tell you anything. Also, while movies like Chhota Chetan and others clicked, they were far less immersive than true VR has to be.

As it happened, in 2016, while going to the South African DebConf, I experienced VR at Qatar Airport in a Samsung showroom. I was quite surprised by how heavy the headset was, and also by how little content they had. Something which had been hyped for 20-odd years had not much to show for it. I was also able to trick the VR equipment: the eye/motion tracking was not good enough, so if you shook your head fast enough it couldn’t keep up with you.

I shared the above because a couple of months ago a web-programmer/designer friend, Mahendra, invited me to another VR conference here in Pune itself. We attended the conference and were shown quite a few success stories. One of the stories the geek in me liked was Framestore’s 360° Mars VR experience on a bus: the link shows how the Framestore developers mapped Mars (or part of it) onto Washington D.C. streets, and how kids were able to experience how it would feel to be on Mars without any of the risks the astronauts or pioneers would have to face if we ever get the money, the equipment and the technology to send people to Mars. In reality we are still decades away from making such a trip that keeps people safe to Mars and back, or lets them live out the rest of their lives on Mars.

If my understanding is correct, the gravity of Mars is only about 38% of Earth's, and once people settle there, they, or at least a generation born on Mars, would no longer be able to support Earth's gravity without an exoskeleton.
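A quick back-of-the-envelope check of the surface-gravity figure, using standard reference values (these numbers are not from the post itself):

```python
# Standard surface gravity values in m/s^2
G_EARTH = 9.81
G_MARS = 3.72

def weight_on_mars(mass_kg: float) -> float:
    """Weight in newtons of a mass standing on the Martian surface."""
    return mass_kg * G_MARS

ratio = G_MARS / G_EARTH  # fraction of Earth gravity felt on Mars
print(f"Mars surface gravity is {ratio:.0%} of Earth's")  # ~38%
print(f"A 70 kg person weighs {weight_on_mars(70):.0f} N on Mars, "
      f"versus {70 * G_EARTH:.0f} N on Earth")
```

So a Mars-born generation would grow up under a bit more than a third of the load their muscles and bones would face on Earth.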

An interesting take on how things might turn out is shown in 'The Expanse'.

But this is taking away from the topic at hand. While I saw the newer generation of VR headsets, they are still a ways off. It will be interesting once the headset becomes as light as eyeglasses and you no longer have to be tethered to a power unit or lug a heavy backpack full of dangerous lithium-ion batteries. The battery chemistry, or some sort of self-powered unit, would need to be much safer and lighter.

While at the conference, watching the various scenarios being played out between potential developers and marketeers, it crossed my mind that people were not thinking at all about safeguarding users' privacy: everything from what games you play and the choices you make to biometric and other sensitive body data, all of which has a high chance of being misused by companies and individuals.

There were also questions about how Sony and other vendors ask insane amounts for the use of their SDKs to develop content, when these should be free, since games and any other content only enhance the marketability of their own ecosystems. Neither my questions (on privacy and security) nor the SDK-related questions asked by some of the potential developers were really answered.

At the end, they also showed AR, or Augmented Reality, which to my mind has much more potential for reskilling and upskilling young populations such as India's and those of other young, populous countries. It was interesting to note that both China and the U.S. are inching towards older demographics, while India will remain a relatively young country for another 20-30-odd years. Most of the other young countries (by median age) seem to be on the African continent, and my belief (which might be a myth) is that they are young because many of those countries are still tribal and perhaps still fighting civil wars over resources.

I was underwhelmed by what they displayed in Augmented Reality, though I do understand that many people and companies are working on their own IP and hence didn't want to share, or showed only very rough work so their ideas wouldn't get stolen.

I was also hoping somebody would talk about motion sickness, or the motion displacement similar to what people feel when they are train-lagged or jet-lagged. I am surprised that Wikipedia still doesn't have an article on train-lag, as millions of Indians go through it every year. The form most pronounced on Indian Railways is motion being felt but not seen.

There are both challenges and opportunities in VR and AR, but until complexity, support requirements and costs come down (for both the deployer and the user), it will remain a distant dream.

There are scores of ideas that could be pursued. For instance, the whole of North India is one big palace, in the sense that there are palaces built by kings and queens, each with its own myths and lore accumulated over centuries. A storyteller could set a modern story in, say, the Chota Imambara or the Bara Imambara, where there are plenty of tales of people getting lost in the alleyways.

Such lore, myths and mysteries are found all over India. The Ramayana and the Mahabharata are just two of the epics that show how grand these tales can be. The history of the Indus Valley Civilisation to date, and the modern contestations of it, is another that comes to mind.

Even the humble Panchatantra could be reborn and retold to generations who have forgotten it. I can't express the variety of stories and contrasts on offer much better than bolokids does, or than SRK did at the opening of IFFI. Even something like Khakee, which is based on true incidents and a real-life inspector, could be retold in so many ways. Or Mukti Bhavan, which I saw a few months ago, coincidentally before I became ill: it tells complex stories in which each character has a rich background that could be explored much more fully in VR.

Even titles such as the ever-famous Harry Potter or the ever-beguiling RAMA could be shared and retooled for generations to come. The Shiva Trilogy is another that comes to mind and could be retold as well. There is another RAMA trilogy by the same author, and a competing one coming out in 2018 by an author called PJ Annan.

We would need to work out the complexities of hardware, bandwidth and the technologies, but there is plenty of content waiting to be developed.

Once upon a time I had the opportunity to work on, develop and understand make-believe walkthroughs (2-D blueprints animated, brought to life and shown to investors/clients) for potential home owners in a housing society. This was in the heydays of growth, circa Y2K. It was a 2-D or 2.5-D environment, the tools were a lot more complex, and I was the most inept person there, as I had no idea what camera positioning or light sources meant.

Apart from the gimmickry on show, I thought it would have been interesting if people had shared both the creative and the budget constraints of working in immersive technologies while still delivering something good enough for the client. There was some discussion of this, in a ham-handed way, but not enough: there was considerable interest from youngsters in trying this new medium, but many lacked the opportunities, the knowledge, the equipment and the software stack to make it a reality.

Lastly, I have shared only bits and pieces of Indian literature in English. India has 22 officially recognised languages, and all of them have vibrant literary scenes. To take just one example, Bengal has long been a bedrock of new Bengali detective stories. I think I shared the history of Bengali crime fiction some time back, but nevertheless, here it is again.

So apart from games and galleries, 3-D interactive visual novels with alternative endings could make for some interesting immersive experiences, provided we can shed the costs and overcome the technical challenges to make them a reality.

Filed under: Miscellenous Tagged: #Augmented Reality, #Debconf South Africa 2016, #Epics, #framastore, #indian literature, #Mars trip, #median age population inded, #motion sickness, #Palaces, #planet-debian, #Pune VR Conference, #RAMA, #RAMA trilogy, #Samsung VR, #Shiva Trilogy, #The Expanse, #Virtual Reality, #VR Headsets, #walkthroughs, Privacy shirishag75 #planet-debian – Experiences in the community

Compute rescaling progress

Pre, 29/12/2017 - 2:18md

My Lanczos rescaling compute shader for Movit is finally nearing usable performance improvements:

BM_ResampleEffectInt8/Fragment/Int8Downscale/1280/720/640/360     3149 us   69.7767M pixels/s
BM_ResampleEffectInt8/Fragment/Int8Downscale/1280/720/320/180     2720 us   20.1983M pixels/s
BM_ResampleEffectHalf/Fragment/Float16Downscale/1280/720/640/360  3777 us   58.1711M pixels/s
BM_ResampleEffectHalf/Fragment/Float16Downscale/1280/720/320/180  3269 us   16.8054M pixels/s
BM_ResampleEffectInt8/Compute/Int8Downscale/1280/720/640/360      2007 us  109.479M pixels/s  [+ 56.9%]
BM_ResampleEffectInt8/Compute/Int8Downscale/1280/720/320/180      1609 us   34.1384M pixels/s [+ 69.0%]
BM_ResampleEffectHalf/Compute/Float16Downscale/1280/720/640/360   2057 us  106.843M pixels/s  [+ 56.7%]
BM_ResampleEffectHalf/Compute/Float16Downscale/1280/720/320/180   1633 us   33.6394M pixels/s [+100.2%]

Some tuning and bugfixing still needed; this is on my Haswell (the NVIDIA results are somewhat different). Upscaling also on its way. :-)
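The quoted speedups can be recovered directly from the microsecond timings (fragment time over compute time), and the resampling itself uses the standard Lanczos-windowed sinc kernel. A small illustrative sketch in Python — the actual Movit code is a GLSL compute shader, so this is only a reference model of the kernel and the arithmetic, not the shader itself:

```python
import math

def lanczos(x: float, a: int = 3) -> float:
    """Standard Lanczos kernel: sinc(x) * sinc(x/a), zero outside [-a, a]."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    # sinc(x) * sinc(x/a) simplified to a single expression
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def speedup(fragment_us: float, compute_us: float) -> float:
    """Relative speedup of the compute path over the fragment path."""
    return fragment_us / compute_us - 1.0

# First benchmark pair from the table: 3149 us fragment vs 2007 us compute
print(f"{speedup(3149, 2007):+.1%}")  # matches the quoted + 56.9%
```

The kernel is separable, so a downscale is two 1-D convolution passes; the compute-shader win comes from sharing the horizontal pass across work groups rather than recomputing it per fragment.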

Steinar H. Gunderson


Pre, 29/12/2017 - 12:11md
I have no idea whatsoever how I achieved this, but there you go. This citizens' legal draft is moving forward to the Finnish parliament. Martin-Éric Funkyware: ITCetera

Debian Policy call for participation -- December 2017

Enj, 28/12/2017 - 11:47md

Yesterday we released a new version of Debian Policy, containing patches from numerous contributors, some of them first-time contributors. Thank you to everyone who was involved!

Please consider getting involved in preparing the next release of Debian Policy, which is likely to be uploaded sometime around the end of January.

Consensus has been reached and help is needed to write a patch

#780725 PATH used for building is not specified

#793499 The Installed-Size algorithm is out-of-date

#823256 Update maintscript arguments with dpkg >= 1.18.5

#833401 virtual packages: dbus-session-bus, dbus-default-session-bus

#835451 Building as root should be discouraged

#838777 Policy 11.8.4 for x-window-manager needs update for freedesktop menus

#845715 Please document that packages are not allowed to write outside thei…

#853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai…

#874019 Note that the '-e' argument to x-terminal-emulator works like '--'

#874206 allow a trailing comma in package relationship fields

Wording proposed, awaiting review from anyone and/or seconds by DDs

#515856 remove get-orig-source

#582109 document triggers where appropriate

#610083 Remove requirement to document upstream source location in debian/c…

#645696 [copyright-format] clearer definitions and more consistent License:…

#649530 [copyright-format] clearer definitions and more consistent License:…

#662998 stripping static libraries

#682347 mark ‘editor’ virtual package name as obsolete

#737796 copyright-format: support Files: paragraph with both abbreviated na…

#742364 Document debian/missing-sources

#756835 Extension of the syntax of the Packages-List field.

#786470 [copyright-format] Add an optional “License-Grant” field

#835451 Building as root should be discouraged

#845255 Include best practices for packaging database applications

#846970 Proposal for a Build-Indep-Architecture: control file field

#864615 please update version of posix standard for scripts (section 10.4)

Sean Whitton Notes from the Library

Get rid of the backpack

Enj, 28/12/2017 - 11:43md

In 2008 I read a blog post by Mark Pilgrim which made a profound impact on me, although I didn't realise it at the time. It was this list:

  1. Stop buying stuff you don’t need
  2. Pay off all your credit cards
  3. Get rid of all the stuff that doesn’t fit in your house/apartment (storage lockers, etc.)
  4. Get rid of all the stuff that doesn’t fit on the first floor of your house (attic, garage, etc.)
  5. Get rid of all the stuff that doesn’t fit in one room of your house
  6. Get rid of all the stuff that doesn’t fit in a suitcase
  7. Get rid of all the stuff that doesn’t fit in a backpack
  8. Get rid of the backpack

At the time I first read it, I think I could see (and concur with) the logic behind the first few points, but not beyond. Revisiting it now, I can agree much further down the list, and I wonder whether I'm brave enough to get to the last step, or anywhere near it.

Mark was obviously going on a journey, and another stopping-off point for him on that journey was to delete his entire online persona, which is why I've linked to the Wayback Machine copy of the blog post.

jmtd Jonathan Dowland's Weblog