Planet Debian
https://planet.debian.org/

Molly de Blanc: Free software activities (January, 2019)

Tue, 05/02/2019 - 6:03 PM

January was another quiet month for free software. This isn’t to say I wasn’t busy, merely that there were fewer things going on, and that those things were more demanding of my attention. I’m including some more banal activities, both to pad out the list and to draw some attention to the labor that goes into free software participation.

January activities (personal)
  • Debian Anti-harassment covered several incidents. These have not yet been detailed in an email to the Debian Project mailing list. I won’t get into details here, due to the sensitive nature of some of the conversations.
  • We began planning for Debian involvement in Google Summer of Code and Outreachy.
  • I put together a slide deck and prepared for FOSDEM. More details about FOSDEM next month! In the meantime, check out my talk description.
January activities (professional)
  • We wrapped up the end of the year fundraiser.
  • We’re planning LibrePlanet 2019! I hope to see you there.
  • I put together a slide deck for CopyLeft Conf, which I’ll detail more in February. I drew and scanned in my slides, which is a time-consuming process.

Reproducible builds folks: Reproducible Builds: Weekly report #197

Tue, 05/02/2019 - 4:15 PM

Here’s what happened in the Reproducible Builds effort between Sunday January 27th and Saturday February 2nd 2019:

  • There was yet more progress towards making the Debian Installer images reproducible. Following-on from last week, Chris Lamb performed some further testing of the generated images resulting in two patches to ensure that builds were reproducible regardless of both the user’s umask(2) (filed as #920631) and even the underlying ordering of files on disk (#920676). It is hoped these can be merged for the next Debian Installer alpha/beta after the recent “Alpha 5” release.

  • Tails, the privacy-oriented “live” operating system, released its first USB image, which is reproducible.

  • Chris Lamb implemented a check in the Lintian static analysis tool (which performs automated checks against Debian packages) to detect .sass-cache directories. As they contain non-deterministic subdirectories, they immediately contribute towards an unreproducible build (#920593).
  • disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into filesystems for easy and reliable testing. Chris Lamb fixed an issue this week in the handling of the fsyncdir system call to ensure dpkg(1) can “flush” /var/lib/dpkg correctly [].

  • Hervé Boutemy made more updates to the reproducible-builds.org project website, including documenting mvn.build-root []. In addition, Chris Smith fixed a typo on the tools page [] and Holger Levsen added a link to Lukas’s report to the recent Paris Summit page [].

  • strip-nondeterminism is our tool that post-processes files to remove known non-deterministic output. This week, Chris Lamb investigated an issue regarding the tool not normalising file ownerships in .epub files that was originally identified by Holger Levsen, as well as clarified the negative message in test failures [] and performed some code cleanups (eg. []).

  • Chris Lamb updated the SSL certificate for try.diffoscope.org to ensure validation after the deprecation of TLS-SNI-01 validation in LetsEncrypt.

  • Reproducible Builds were present at FOSDEM 2019 handing out t-shirts to contributors. Thank you!

  • On Tuesday February 26th Chris Lamb will speak at Speck&Tech 31 “Open Security” on Reproducible Builds in Trento, Italy.

  • 6 Debian package reviews were added, 3 were updated and 5 were removed this week, adding to our knowledge about identified issues. Chris Lamb unearthed a new toolchain issue, randomness_in_documentation_underscore_downloads_generated_by_sphinx.
Packages reviewed and fixed, and bugs filed

Test framework development

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. This week, Holger Levsen made a large number of improvements including:

  • Arch Linux-specific changes:
    • The scheduler now runs every 4 hours, so present stats for this time period. []
    • Fix detection of bad builds. []
  • LEDE/OpenWrt-specific changes:
    • Make OpenSSH usable with a TCP port other than 22. This is needed for our OSUOSL nodes. []
    • Perform a minor refactoring of the build script. []
  • NetBSD-specific changes:
    • Add a ~jenkins/.ssh/config to fix jobs regarding OpenSSH running on non-standard ports. []
    • Add a note that osuosl171 is constantly online. []
  • Misc/generic changes:
    • Use same configuration for df_inode as for df to reduce noise. []
    • Remove a now-bogus warning; we have its parallel in Git now. []
    • Define ControlMaster and ControlPath in our OpenSSH configurations. []

In addition, Mattia Rizzolo and Vagrant Cascadian performed maintenance of the build nodes. ([], [], [], etc.)

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, intrigeri & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Michael Stapelberg: Looking for a new Raspberry Pi image maintainer

Tue, 05/02/2019 - 9:42 AM

This is taken care of: Gunnar Wolf has taken on maintenance of the Raspberry Pi image. Thank you!

(Cross-posting this message I sent to pkg-raspi-maintainers for broader visibility.)

I started building Raspberry Pi images because I thought there should be an easy, official way to install Debian on the Raspberry Pi.

I still believe that, but I’m not actually using Debian on any of my Raspberry Pis anymore¹, so my personal motivation to do any work on the images is gone.

On top of that, I realize that my commitments exceed my spare time capacity, so I need to get rid of responsibilities.

Therefore, I’m looking for someone to take up maintainership of the Raspberry Pi images. Numerous people have reached out to me with thank you notes and questions, so I think the user interest is there. Also, I’ll be happy to answer any questions that you might have and that I can easily answer. Please reply here (or in private) if you’re interested.

If I can’t find someone within the next 7 days, I’ll put up an announcement message in the raspi3-image-spec README, wiki page, and my blog posts, stating that the image is unmaintained and looking for a new maintainer.

Thanks for your understanding,

① just in case you’re curious, I’m now running cross-compiled Go programs directly under a Linux kernel and minimal userland, see https://gokrazy.org/

Michael Stapelberg: TurboPFor: an analysis

Tue, 05/02/2019 - 9:18 AM
Motivation

I have recently been looking into speeding up Debian Code Search. As a quick reminder, search engines answer queries by consulting an inverted index: a map from term to documents containing that term (called a “posting list”). See the Debian Code Search Bachelor Thesis (PDF) for a lot more details.

Currently, Debian Code Search does not store positional information in its index, i.e. the index can only reveal that a certain trigram is present in a document, not where or how often.

From analyzing Debian Code Search queries, I knew that identifier queries (70%) massively outnumber regular expression queries (30%). When processing identifier queries, storing positional information in the index enables a significant optimization: instead of identifying the possibly-matching documents and having to read them all, we can determine matches from querying the index alone, no document reads required.
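To make the idea concrete, here is a toy sketch of answering an identifier query from a positional trigram index alone (plain Python, nothing like Debian Code Search’s actual Go implementation): a document matches if the query’s trigrams occur at consecutive positions.

from collections import defaultdict

def trigrams(s):
    return [s[i:i+3] for i in range(len(s) - 2)]

# Toy positional index: trigram -> {docid: [positions]}.
index = defaultdict(lambda: defaultdict(list))

def index_document(docid, text):
    for pos, tri in enumerate(trigrams(text)):
        index[tri][docid].append(pos)

def matches(query):
    # Assumes len(query) >= 3. The i-th query trigram must occur at
    # position p+i for some start position p -- checked purely from
    # the index, without reading any document.
    tris = trigrams(query)
    candidates = set(index[tris[0]])
    for t in tris[1:]:
        candidates &= set(index[t])
    result = set()
    for doc in candidates:
        starts = set(index[tris[0]][doc])
        for i, t in enumerate(tris[1:], 1):
            starts &= {p - i for p in index[t][doc]}
        if starts:
            result.add(doc)
    return result

index_document(1, "int foobar = 42;")
index_document(2, "int foo = 42; int bar;")
print(matches("foobar"))  # {1}: document 2 has "foo" and "bar", but not adjacent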

This moves the bottleneck: having to read all possibly-matching documents requires a lot of expensive random I/O, whereas having to decode long posting lists requires a lot of cheap sequential I/O.

Of course, storing positions comes with a downside: the index is larger, and a larger index takes more time to decode when querying.

Hence, I have been looking at various posting list compression/decoding techniques, to figure out whether we could switch to a technique which would retain (or improve upon!) current performance despite much longer posting lists and produce a small enough index to fit on our current hardware.

Literature

I started looking into this space because of Daniel Lemire’s Stream VByte post. As usual, Daniel’s work is well presented, easily digestible and accompanied by not just one, but multiple implementations.

I also looked for scientific papers to learn about the state of the art and classes of different approaches in general. The best I could find is Compression, SIMD, and Postings Lists. If you don’t have access to the paper, I hear that Sci-Hub is helpful.

The paper is from 2014, and doesn’t include all algorithms. If you know of a better paper, please let me know and I’ll include it here.

Eventually, I stumbled upon an algorithm/implementation called TurboPFor, which the rest of the article tries to shine some light on.

TurboPFor

If you’re wondering: PFor stands for Patched Frame Of Reference and describes a family of algorithms. The principle is explained e.g. in SIMD Compression and the Intersection of Sorted Integers (PDF).

The TurboPFor project’s README file claims that TurboPFor256 compresses with a rate of 5.04 bits per integer, and can decode with 9400 MB/s on a single thread of an Intel i7-6700 CPU.

For Debian Code Search, we use 32-bit unsigned integers (uint32), which TurboPFor will compress into as few bits as required.

Dividing Debian Code Search’s file sizes by the total number of integers, I get similar values, at least for the docid index section:

  • 5.49 bits per integer for the docid index section
  • 11.09 bits per integer for the positions index section

I can confirm the order of magnitude of the decoding speed, too. My benchmark calls TurboPFor from Go via cgo, which introduces some overhead. To exclude disk speed as a factor, data comes from the page cache. The benchmark sequentially decodes all posting lists in the specified index, using as many threads as the machine has cores¹:

  • ≈1400 MB/s on a 1.1 GiB docid index section
  • ≈4126 MB/s on a 15.0 GiB position index section

I think the numbers differ because the position index section contains larger integers (requiring more bits). I repeated both benchmarks, capped to 1 GiB, and decoding speeds still differed, so it is not just the size of the index.

Compared to Streaming VByte, a TurboPFor256 index comes in at just over half the size, while still reaching 83% of Streaming VByte’s decoding speed. This seems like a good trade-off for my use-case, so I decided to have a closer look at how TurboPFor works.

① See cmd/gp4-verify/verify.go run on an Intel i9-9900K.

Methodology

To confirm my understanding of the details of the format, I implemented a pure-Go TurboPFor256 decoder. Note that it is intentionally not optimized as its main goal is to use simple code to teach the TurboPFor256 on-disk format.

If you’re looking to use TurboPFor from Go, I recommend using cgo. cgo’s function call overhead is about 51ns as of Go 1.8, which will easily be offset by TurboPFor’s carefully optimized, vectorized (SSE/AVX) code.

With that caveat out of the way, you can find my teaching implementation at https://github.com/stapelberg/goturbopfor

I verified that it produces the same results as TurboPFor’s p4ndec256v32 function for all posting lists in the Debian Code Search index.

On-disk format

Note that TurboPFor does not fully define an on-disk format on its own. When encoding, it turns a list of integers into a byte stream:

size_t p4nenc256v32(uint32_t *in, size_t n, unsigned char *out);

When decoding, it decodes the byte stream into an array of integers, but needs to know the number of integers in advance:

size_t p4ndec256v32(unsigned char *in, size_t n, uint32_t *out);

Hence, you’ll need to keep track of the number of integers and length of the generated byte streams separately. When I talk about on-disk format, I’m referring to the byte stream which TurboPFor returns.
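TurboPFor itself records neither number, so even a trivial length-prefixed container is enough. A minimal sketch (an illustration only, not Debian Code Search’s actual index format):

import struct

def write_posting_list(f, num_ints, compressed):
    # Prefix each byte stream with (integer count, byte length):
    # p4ndec256v32 needs the former, and we need the latter to know
    # where the stream ends.
    f.write(struct.pack("<II", num_ints, len(compressed)))
    f.write(compressed)

def read_posting_list(f):
    num_ints, num_bytes = struct.unpack("<II", f.read(8))
    return num_ints, f.read(num_bytes)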

The TurboPFor256 format uses blocks of 256 integers each, followed by a trailing block — if required — which can contain fewer than 256 integers.

SIMD bitpacking is used for all blocks but the trailing block (which uses regular bitpacking). This is not merely an implementation detail for decoding: the on-disk structure is different for blocks which can be SIMD-decoded.

Each block starts with a 2-bit header specifying the type of the block.

Each block type is explained in more detail in the following sections.

Note that none of the block types store the number of elements: you will always need to know how many integers you need to decode. Also, you need to know in advance how many bytes you need to feed to TurboPFor, so you will need some sort of container format.

Further, TurboPFor automatically chooses the best block type for each block.

Constant block

A constant block (all integers of the block have the same value) consists of a single value of a specified bit width ≤ 32. This value will be stored in each output element for the block. E.g., after calling decode(input, 3, output) with input being a constant block encoding the value 0xB8912636, output is {0xB8912636, 0xB8912636, 0xB8912636}.

A block encoding a full 32-bit value uses the maximum number of bytes (5). Smaller integers will use fewer bytes: e.g. an integer which can be represented in 3 bits will only use 2 bytes.
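In other words, decoding a constant block is just a broadcast, and the block size follows from the bit width. A sketch of both, where the 6-bit width field is my assumption to make the byte counts above work out (the authoritative layout is in the TurboPFor source):

import math

def constant_block_size(width):
    # 2-bit block-type header, a width field (assumed 6 bits here),
    # then the value itself, rounded up to whole bytes.
    return math.ceil((2 + 6 + width) / 8)

def decode_constant_block(value, n):
    # Every output element of a constant block is the same value.
    return [value] * n

print(constant_block_size(32))  # 5 bytes, the maximum
print(constant_block_size(3))   # 2 bytes
print(decode_constant_block(0xB8912636, 3))  # the value, three times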

Bitpacking block

A bitpacking block specifies a bit width ≤ 32, followed by a stream of bits. Each value starts at the Least Significant Bit (LSB), i.e. the 3-bit values 0 (000b) and 5 (101b) are encoded as 101000b.
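A sketch of packing and unpacking in this LSB-first order, reproducing the example above (treating the byte stream as one little-endian integer is an assumption that matches the single-byte example):

def bitpack(values, width):
    # Pack each value LSB-first into one long bit stream, padded to
    # whole bytes.
    acc = 0
    for i, v in enumerate(values):
        acc |= v << (i * width)
    nbytes = (len(values) * width + 7) // 8
    return acc.to_bytes(nbytes, "little")

def bitunpack(data, width, n):
    acc = int.from_bytes(data, "little")
    mask = (1 << width) - 1
    return [(acc >> (i * width)) & mask for i in range(n)]

print(bin(bitpack([0, 5], 3)[0]))           # 0b101000, as in the example above
print(bitunpack(bitpack([0, 5], 3), 3, 2))  # [0, 5]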

Bitpacking with exceptions (bitmap) block

The constant and bitpacking block types work well for integers which don’t exceed a certain width, e.g. for a series of integers of width ≤ 5 bits.

For a series of integers where only a few values exceed an otherwise common width (say, two values require 7 bits, the rest requires 5 bits), it makes sense to cut the integers into two parts: value and exception.

For example, decoding a third integer out2 of 000b requires combining it with its exception ex0 (10110b), resulting in 10110000b.

The number of exceptions can be determined by summing the 1 bits in the bitmap using the popcount instruction.
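A sketch of the reconstruction step, assuming (as in the example) that an exception simply supplies the high bits above the block’s base width:

def apply_exceptions(values, bitmap, exceptions, base_width):
    # One bitmap bit per value; a set bit means the value's high bits
    # live in the exception stream. popcount(bitmap) == len(exceptions).
    assert bin(bitmap).count("1") == len(exceptions)
    out = []
    ex = iter(exceptions)
    for i, v in enumerate(values):
        if (bitmap >> i) & 1:
            v |= next(ex) << base_width
        out.append(v)
    return out

print(apply_exceptions([0b001, 0b010, 0b000], 0b100, [0b10110], 3))
# [1, 2, 176] -- the third value decodes to 10110000b = 176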

Bitpacking with exceptions (variable byte)

When the exceptions are not uniform enough, it makes sense to switch from bitpacking to a variable byte encoding.

Decoding: variable byte

The variable byte encoding used by the TurboPFor format is similar to the one used by SQLite, which is described, alongside other common variable byte encodings, at github.com/stoklund/varint.

Instead of using individual bits for dispatching, this format classifies the first byte (b[0]) into ranges:

  • [0—176]: the value is b[0]
  • [177—240]: a 14 bit value is in b[0] (6 high bits) and b[1] (8 low bits)
  • [241—248]: a 19 bit value is in b[0] (3 high bits), b[1] and b[2] (16 low bits)
  • [249—255]: a 32 bit value is in b[1], b[2], b[3] and possibly b[4]

Here is the space usage of different values:

  • [0—176] are stored in 1 byte (as-is)
  • [177—16560] are stored in 2 bytes, with the highest 6 bits added to 177
  • [16561—540848] are stored in 3 bytes, with the highest 3 bits added to 241
  • [540849—16777215] are stored in 4 bytes, with 0 added to 249
  • [16777216—4294967295] are stored in 5 bytes, with 1 added to 249
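Putting the two lists together gives a small decoder. This is a sketch: the ranges follow the tables above, while the byte order inside each payload is my assumption (goturbopfor is the authoritative reference):

def decode_varbyte(b):
    # Returns (value, number of bytes consumed), dispatching on b[0].
    if b[0] <= 176:
        return b[0], 1
    if b[0] <= 240:   # 14-bit value, offset by 177
        return 177 + ((b[0] - 177) << 8 | b[1]), 2
    if b[0] <= 248:   # 19-bit value, offset by 16561
        return 16561 + ((b[0] - 241) << 16 | b[1] << 8 | b[2]), 3
    if b[0] == 249:   # 24-bit value, stored as-is
        return b[1] | b[2] << 8 | b[3] << 16, 4
    return b[1] | b[2] << 8 | b[3] << 16 | b[4] << 24, 5  # 32-bit value

print(decode_varbyte(bytes([176])))       # (176, 1)
print(decode_varbyte(bytes([177, 0])))    # (177, 2)
print(decode_varbyte(bytes([240, 255])))  # (16560, 2)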

An overflow marker will be used to signal that encoding the values would be less space-efficient than simply copying them (e.g. if all values require 5 bytes).

This format is very space-efficient: it packs 0-176 into a single byte, as opposed to 0-128 (most others). At the same time, it can be decoded very quickly, as only the first byte needs to be compared to decode a value (similar to PrefixVarint).

Decoding: bitpacking Regular bitpacking

In regular (non-SIMD) bitpacking, integers are stored on disk one after the other, padded to a full byte, as a byte is the smallest addressable unit when reading data from disk. For example, if you bitpack only one 3-bit int, you will end up with 5 bits of padding.

SIMD bitpacking (256v32)

SIMD bitpacking works like regular bitpacking, but processes 8 little-endian uint32 values at the same time, leveraging the AVX instruction set. (An illustration showing the order in which 3-bit integers are decoded from disk is omitted here.)

In Practice

For a Debian Code Search index, 85% of posting lists are short enough to only consist of a trailing block, i.e. no SIMD instructions can be used for decoding.

The distribution of block types looks as follows:

  • 72% bitpacking with exceptions (bitmap)
  • 19% bitpacking with exceptions (variable byte)
  • 5% constant
  • 4% bitpacking

Constant blocks are mostly used for posting lists with just one entry.

Conclusion

The TurboPFor on-disk format is very flexible: with its 4 different kinds of blocks, chances are high that a very efficient encoding will be used for most integer series.

Of course, the flip side of covering so many cases is complexity: the format and implementation take quite a bit of time to understand — hopefully this article helps a little! For environments where the C TurboPFor implementation cannot be used, smaller algorithms might be simpler to implement.

That said, if you can use the TurboPFor implementation, you will benefit from a highly optimized SIMD code base, which will most likely be an improvement over what you’re currently using.

Michael Stapelberg: sbuild-debian-developer-setup(1)

Mon, 04/02/2019 - 7:08 PM

I have heard a number of times that sbuild is too hard to get started with, and hence people don’t use it.

To reduce the hurdles to using and contributing to Debian, I wanted to make sbuild easier to set up.

sbuild ≥ 0.74.0 provides a Debian package called sbuild-debian-developer-setup. Once installed, run the sbuild-debian-developer-setup(1) command to create a chroot suitable for building packages for Debian unstable.

On a system without any sbuild/schroot bits installed, a transcript of the full setup looks like this:

% sudo apt install -t unstable sbuild-debian-developer-setup
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  libsbuild-perl sbuild schroot
Suggested packages:
  deborphan btrfs-tools aufs-tools | unionfs-fuse qemu-user-static
Recommended packages:
  exim4 | mail-transport-agent autopkgtest
The following NEW packages will be installed:
  libsbuild-perl sbuild sbuild-debian-developer-setup schroot
0 upgraded, 4 newly installed, 0 to remove and 1454 not upgraded.
Need to get 1.106 kB of archives.
After this operation, 3.556 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://localhost:3142/deb.debian.org/debian unstable/main amd64 libsbuild-perl all 0.74.0-1 [129 kB]
Get:2 http://localhost:3142/deb.debian.org/debian unstable/main amd64 sbuild all 0.74.0-1 [142 kB]
Get:3 http://localhost:3142/deb.debian.org/debian testing/main amd64 schroot amd64 1.6.10-4 [772 kB]
Get:4 http://localhost:3142/deb.debian.org/debian unstable/main amd64 sbuild-debian-developer-setup all 0.74.0-1 [62,6 kB]
Fetched 1.106 kB in 0s (5.036 kB/s)
Selecting previously unselected package libsbuild-perl.
(Reading database ... 276684 files and directories currently installed.)
Preparing to unpack .../libsbuild-perl_0.74.0-1_all.deb ...
Unpacking libsbuild-perl (0.74.0-1) ...
Selecting previously unselected package sbuild.
Preparing to unpack .../sbuild_0.74.0-1_all.deb ...
Unpacking sbuild (0.74.0-1) ...
Selecting previously unselected package schroot.
Preparing to unpack .../schroot_1.6.10-4_amd64.deb ...
Unpacking schroot (1.6.10-4) ...
Selecting previously unselected package sbuild-debian-developer-setup.
Preparing to unpack .../sbuild-debian-developer-setup_0.74.0-1_all.deb ...
Unpacking sbuild-debian-developer-setup (0.74.0-1) ...
Processing triggers for systemd (236-1) ...
Setting up schroot (1.6.10-4) ...
Created symlink /etc/systemd/system/multi-user.target.wants/schroot.service → /lib/systemd/system/schroot.service.
Setting up libsbuild-perl (0.74.0-1) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up sbuild (0.74.0-1) ...
Setting up sbuild-debian-developer-setup (0.74.0-1) ...
Processing triggers for systemd (236-1) ...

% sudo sbuild-debian-developer-setup
The user `michael' is already a member of `sbuild'.
I: SUITE: unstable
I: TARGET: /srv/chroot/unstable-amd64-sbuild
I: MIRROR: http://localhost:3142/deb.debian.org/debian
I: Running debootstrap --arch=amd64 --variant=buildd --verbose --include=fakeroot,build-essential,eatmydata --components=main --resolve-deps unstable /srv/chroot/unstable-amd64-sbuild http://localhost:3142/deb.debian.org/debian
I: Retrieving InRelease
I: Checking Release signature
I: Valid Release signature (key id 126C0D24BD8A2942CC7DF8AC7638D0442B90D010)
I: Retrieving Packages
I: Validating Packages
I: Found packages in base already in required: apt
I: Resolving dependencies of required packages...
[…]
I: Successfully set up unstable chroot.
I: Run "sbuild-adduser" to add new sbuild users.
ln -s /usr/share/doc/sbuild/examples/sbuild-update-all /etc/cron.daily/sbuild-debian-developer-setup-update-all
Now run `newgrp sbuild', or log out and log in again.
% newgrp sbuild
% sbuild -d unstable hello
sbuild (Debian sbuild) 0.74.0 (14 Mar 2018) on x1
+==============================================================================+
| hello (amd64)                                Mon, 19 Mar 2018 07:46:14 +0000 |
+==============================================================================+
Package: hello
Distribution: unstable
Machine Architecture: amd64
Host Architecture: amd64
Build Architecture: amd64
Build Type: binary
[…]

I hope you’ll find this useful.

Michael Stapelberg: dput usability changes

Mon, 04/02/2019 - 7:08 PM

dput-ng ≥ 1.16 contains two usability changes which make uploading easier:

  1. When no arguments are specified, dput-ng auto-selects the most recent .changes file (with confirmation).
  2. Instead of erroring out when detecting an unsigned .changes file, debsign(1) is invoked to sign the .changes file before proceeding.

With these changes, after building a package, you just need to type dput (in the correct directory of course) to sign and upload it.

Julien Danjou: How to Log Properly in Python

Mon, 04/02/2019 - 11:15 AM

Logging is one of the most underrated features. Often ignored by software engineers, it can save you time when your application is running in production.

Most teams don't think about it until it's too late in their development process. It's when things start to go wrong in deployments that somebody realizes too late that logging is missing.

Guidelines

The Twelve-Factor App defines logs as a stream of aggregated, time-ordered events collected from the output streams of all running processes. It also describes how applications should handle their logging. We can summarize those guidelines as:

  • Logs have no fixed beginning or end.
  • Print logs to stdout.
  • Print logs unbuffered.
  • The environment is responsible for capturing the stream.
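In Python, honoring these rules before reaching for any framework can be as small as a StreamHandler on stdout. A minimal sketch (the format string is only an example):

import logging
import sys

# Write log records straight to stdout; the environment (systemd,
# Docker, a pipe to a log shipper) captures and manages the stream.
# StreamHandler flushes after every record, so output is unbuffered
# for practical purposes.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(process)d %(levelname)s %(name)s: %(message)s"))
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)

logging.getLogger("myapp").info("ready to serve")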

From my experience, this set of rules is a good trade-off. Logs have to be kept pretty simple to be efficient and reliable. Building complex logging systems might make it harder to get insight into a running application.

There's also no point in duplicating log-management effort (e.g., log file rotation, archival policy, etc.) across your different applications. Having an external workflow that can be shared across different programs seems more efficient.

In Python

Python provides a logging subsystem with its logging module. This module provides a Logger object that allows you to emit messages with different levels of criticality. Those messages can then be filtered and sent to different handlers.

Let's have an example:

import logging

logger = logging.getLogger("myapp")
logger.error("something wrong")

Depending on the version of Python you're running you'll either see:

No handlers could be found for logger "myapp"

or:

something wrong

Python 2 used to have no logging setup by default, so it would print an error message about no handler being found. Since Python 3.2, a handler of last resort is installed, which prints messages of level WARNING and above to the standard error stream when no other handler is configured.

However, this default setup is far from being perfect.

Shortcomings

The default format that Python uses does not embed any contextual information. There is no way to know the name of the logger — myapp in the previous example — nor the date and time of the logged message.

You must configure the Python logging subsystem to enhance its output format.

To do that, I advise using the daiquiri module. It provides an excellent default configuration and a simple API to configure logging, plus some exciting features.

Logging Setup

When using daiquiri, the first thing to do is to set up your logging correctly. This can be done with the daiquiri.setup function, like this:

import daiquiri

daiquiri.setup()

As simple as that. You can tweak the setup further by asking it to log to a file, changing the default string formats, etc., but just calling daiquiri.setup is enough to get a proper logging default.

See:

import daiquiri

daiquiri.setup()
daiquiri.getLogger("myapp").error("something wrong")

outputs:

2018-12-13 10:24:04,373 [38550] ERROR myapp: something wrong

If your terminal supports writing text in colors, the line will be printed in red since it's an error. The format provided by daiquiri is better than Python's default: it includes a timestamp, the process ID, the criticality level and the logger's name. Needless to say, this format can also be customized.
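Output destinations can be tweaked in the same call. For instance, keeping the console output while also logging to a file looks roughly like this (a sketch based on daiquiri's outputs option; check the daiquiri documentation for the exact spelling in your version):

import logging
import daiquiri
import daiquiri.output

daiquiri.setup(
    level=logging.INFO,
    outputs=(
        daiquiri.output.STDERR,             # keep the colored console output
        daiquiri.output.File("myapp.log"),  # and also write to a file
    ),
)
daiquiri.getLogger("myapp").info("logged to both stderr and myapp.log")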

Passing Contextual Information

Logging strings are boring. Most of the time, engineers end up writing code such as:

logger.error("Something wrong happened with %s when writing data at %d",
             myobject.myfield, myobject.mynumber)

The issue with this approach is that you have to think about each field of your object that you want to log, and make sure each one is inserted correctly into your sentence. If you forget an essential field describing your object or the problem, you're screwed.

A reliable alternative to this manual crafting of log strings is to pass the interesting objects as keyword arguments. Daiquiri supports this, and it works as follows:

import attr
import daiquiri
import requests

daiquiri.setup()
logger = daiquiri.getLogger("myapp")

@attr.s
class Request:
    url = attr.ib()
    status_code = attr.ib(init=False, default=None)

    def get(self):
        r = requests.get(self.url)
        self.status_code = r.status_code
        r.raise_for_status()
        return r

user = "jd"
req = Request("https://google.com/not-this-page")
try:
    req.get()
except Exception:
    logger.error("Something wrong happened during the request",
                 request=req, user=user)

If anything goes wrong with the request, it will be logged with the stack trace, like this:

2018-12-14 10:37:24,586 [43644] ERROR myapp [request: Request(url='https://google.com/not-this-page', status_code=404)] [user: jd]: Something wrong happened during the request

As you can see, the call to logger.error is pretty straightforward: a line that explains what's wrong, and then the different interesting objects passed as keyword arguments.

Daiquiri logs those keyword arguments with a default format of [key: value] that is included as a prefix to the log string. The value is printed using its __format__ method — that's why I'm using the attr module here: it automatically generates this method for me and includes all fields by default. You can also customize daiquiri to use any other format.
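The attr module is only a convenience here: any object can control how it appears in the [key: value] prefix by defining __format__ itself. A minimal sketch with a hypothetical Job class:

import daiquiri

daiquiri.setup()
logger = daiquiri.getLogger("myapp")

class Job:
    def __init__(self, job_id, state):
        self.job_id = job_id
        self.state = state

    def __format__(self, format_spec):
        # Called when daiquiri renders the [key: value] prefix.
        return "Job(id=%d, state=%s)" % (self.job_id, self.state)

logger.error("job failed", job=Job(42, "broken"))
# ... ERROR myapp [job: Job(id=42, state=broken)]: job failed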

Following those guidelines should be a perfect start for logging correctly with Python!

Iustin Pop: HGST HUH721212ALN600

Sun, 03/02/2019 - 9:32 PM

Due to life, I didn’t manage to do any more investigation on backup solutions, but after a long saga I finally have 4 working new HDDs - model HUH721212ALN600.

These are 12TB, 4K-sector-size, “ISE” (instant secure erase) SATA hard-drives. This combination seemed the best, especially the ISE part: apparently the HDD controller always encrypts data to/from the platters, and the “instant erase” part is about wiping the encryption key. Between this and cryptsetup, dealing with either a failed HDD or one that is to be removed from service should be a breeze.

Hardware is hardware

The long saga of getting the hard-drives is that this particular model, and in general any 4K hard-drives, seem to be very hard to acquire in Switzerland. I ordered and cancelled many times due to “available in 3 days”, then “in 5 days”, then “unknown delivery”, before finally being able to order and receive 4 of them. And one was dead on arrival, failing a short SMART test and failing any writes. Sent back, and due to low availability, wait and wait… I’m glad I finally got the replacement and can proceed with the project. A spare wouldn’t be bad though :)

Performance stats

Fun aside, the drives went through the burn-in procedure successfully, and now the RAID array is being built. It’s the first time I see mechanical HDDs being faster than the default sync_speed_max. After bumping up that value for this particular RAID array:

md4 : active raid5 sdd2[4] sdc2[2] sdb2[1] sda2[0]
      35149965312 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [=>...................]  recovery =  5.2% (619963868/11716655104) finish=769.4min speed=240361K/sec
      bitmap: 0/88 pages [0KB], 65536KB chunk

Still, it’ll take longer than that estimate suggests, as the speed decreases significantly towards the end. Not really looking forward to 24-hour rebuild times, but that’s life:

[Sat Feb 2 09:03:21 2019] md4: detected capacity change from 0 to 35993564479488
[Sat Feb 2 09:03:21 2019] md: recovery of RAID array md4
[Sun Feb 3 03:05:51 2019] md: md4: recovery done.

Eighteen hours to the minute almost.

IOPS graph:

[image: IOPS graph]

I was not aware that 7’200 RPM hard-drives can now peak at close to 200 IO/s (196, to be precise). For the same RPM, 8 years ago the peak was about 175, so something smarter (controller, cache, actuator, whatever) results in about 20 extra IOPS. And yes, SATA’s limitation of 31 in-flight I/Os is clearly visible…

While trying to get a bandwidth graph, I was surprised that fio never ended. Apparently there’s a bug in fio since 2014, reported back in 2015, but still not fixed. I filed issue 738 on GitHub; hopefully it gets confirmed and can be fixed (I’m not sure what the intent of the looping was at all). The numbers I see if I let it run for long are a bit weird as well: while running, I see numbers between 260MB/s (max) and around 125MB/s (min). Some quick numbers for max performance:

# dd if=/dev/sda of=/dev/null iflag=direct bs=8192k count=1000
1000+0 records in
1000+0 records out
8388608000 bytes (8.4 GB, 7.8 GiB) copied, 31.9695 s, 262 MB/s

# dd if=/dev/md4 of=/dev/null iflag=direct bs=8192k count=2000
2000+0 records in
2000+0 records out
16777216000 bytes (17 GB, 16 GiB) copied, 23.1841 s, 724 MB/s

This and the very large storage space are about the only things mechanical HDDs are still good at.

And now, to migrate from the old array…

Raw info

As other people might be interested (I was and couldn’t find the information beforehand), here is smartctl and hdparm info from the broken hard-drive (before I knew it was so):

SMART:

smartctl 6.6 2017-11-05 r4594 [x86_64-linux] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     HGST HUH721212ALN600
Serial Number:    8HHUE36H
LU WWN Device Id: 5 000cca 270d9a5f8
Firmware Version: LEGNT3D0
User Capacity:    12,000,138,625,024 bytes [12.0 TB]
Sector Size:      4096 bytes logical/physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-2, ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Thu Jan 10 13:58:03 2019 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x80) Offline data collection activity was never started.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed without error
                                        or no self-test has ever been run.
Total time to complete Offline data collection: (  87) seconds.
Offline data collection capabilities:    (0x5b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine recommended polling time:    (   2) minutes.
Extended self-test routine recommended polling time: (1257) minutes.
SCT capabilities:              (0x003d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   016    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   100   100   054    Pre-fail  Offline      -       0
  3 Spin_Up_Time            0x0007   100   100   024    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       1
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   100   100   020    Pre-fail  Offline      -       0
  9 Power_On_Hours          0x0012   100   100   000    Old_age   Always       -       0
 10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       1
 22 Unknown_Attribute       0x0023   100   100   025    Pre-fail  Always       -       100
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       1
193 Load_Cycle_Count        0x0012   100   100   000    Old_age   Always       -       1
194 Temperature_Celsius     0x0002   200   200   000    Old_age   Always       -       30 (Min/Max 25/30)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

hdparm:

/dev/sda:

ATA device, with non-removable media
    Model Number:       HGST HUH721212ALN600
    Serial Number:      8HHUE36H
    Firmware Revision:  LEGNT3D0
    Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0; Revision: ATA8-AST T13 Project D1697 Revision 0b
Standards:
    Used: unknown (minor revision code 0x0029)
    Supported: 9 8 7 6 5
    Likely used: 9
Configuration:
    Logical         max     current
    cylinders       16383   16383
    heads           16      16
    sectors/track   63      63
    --
    CHS current addressable sectors:    16514064
    LBA    user addressable sectors:   268435455
    LBA48  user addressable sectors:  2929721344
    Logical  Sector size:                  4096 bytes
    Physical Sector size:                  4096 bytes
    device size with M = 1024*1024:    11444224 MBytes
    device size with M = 1000*1000:    12000138 MBytes (12000 GB)
    cache/buffer size  = unknown
    Form Factor: 3.5 inch
    Nominal Media Rotation Rate: 7200
Capabilities:
    LBA, IORDY(can be disabled)
    Queue depth: 32
    Standby timer values: spec'd by Standard, no device specific minimum
    R/W multiple sector transfer: Max = 2   Current = 0
    Advanced power management level: 254
    DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
         Cycle time: min=120ns recommended=120ns
    PIO: pio0 pio1 pio2 pio3 pio4
         Cycle time: no flow control=120ns  IORDY flow control=120ns
Commands/features:
    Enabled Supported:
       *    SMART feature set
            Security Mode feature set
       *    Power Management feature set
       *    Write cache
       *    Look-ahead
       *    Host Protected Area feature set
       *    WRITE_BUFFER command
       *    READ_BUFFER command
       *    NOP cmd
       *    DOWNLOAD_MICROCODE
       *    Advanced Power Management feature set
            Power-Up In Standby feature set
       *    SET_FEATURES required to spinup after power up
            SET_MAX security extension
       *    48-bit Address feature set
       *    Device Configuration Overlay feature set
       *    Mandatory FLUSH_CACHE
       *    FLUSH_CACHE_EXT
       *    SMART error logging
       *    SMART self-test
       *    Media Card Pass-Through
       *    General Purpose Logging feature set
       *    WRITE_{DMA|MULTIPLE}_FUA_EXT
       *    64-bit World wide name
       *    URG for READ_STREAM[_DMA]_EXT
       *    URG for WRITE_STREAM[_DMA]_EXT
       *    WRITE_UNCORRECTABLE_EXT command
       *    {READ,WRITE}_DMA_EXT_GPL commands
       *    Segmented DOWNLOAD_MICROCODE
            unknown 119[6]
            unknown 119[7]
       *    Gen1 signaling speed (1.5Gb/s)
       *    Gen2 signaling speed (3.0Gb/s)
       *    Gen3 signaling speed (6.0Gb/s)
       *    Native Command Queueing (NCQ)
       *    Host-initiated interface power management
       *    Phy event counters
       *    NCQ priority information
       *    READ_LOG_DMA_EXT equivalent to READ_LOG_EXT
            Non-Zero buffer offsets in DMA Setup FIS
       *    DMA Setup Auto-Activate optimization
            Device-initiated interface power management
            In-order data delivery
       *    Software settings preservation
            unknown 78[7]
            unknown 78[10]
            unknown 78[11]
       *    SMART Command Transport (SCT) feature set
       *    SCT Write Same (AC2)
       *    SCT Error Recovery Control (AC3)
       *    SCT Features Control (AC4)
       *    SCT Data Tables (AC5)
       *    SANITIZE feature set
       *    CRYPTO_SCRAMBLE_EXT command
       *    OVERWRITE_EXT command
       *    reserved 69[3]
       *    reserved 69[4]
       *    WRITE BUFFER DMA command
       *    READ BUFFER DMA command
Security:
    Master password revision code = 65534
        supported
    not enabled
    not locked
    not frozen
    not expired: security count
        not supported: enhanced erase
    1114min for SECURITY ERASE UNIT.
Logical Unit WWN Device Identifier: 5000cca270d9a5f8
    NAA             : 5
    IEEE OUI        : 000cca
    Unique ID       : 270d9a5f8
Checksum: correct

Jamie McClelland: I didn't know what ibus was one day ago, now I love it

Sun, 03/02/2019 - 8:07 PM

[See update below.]

After over a decade using mutt as my email client, I finally gave up pretending I didn't want to see pretty pictures in my email and switched to Thunderbird.

Since I don't write email in Spanish often, I didn't immediately notice that my old dead-key trick for typing Spanish accented characters didn't work in Thunderbird like it does in vim or any terminal program.

I learned many years ago that I could set a special key via my openbox autostart script with xmodmap -e 'keysym Alt_R = Multi_key'. Then, if I wanted to type é, I would press and hold my right alt key while I press the apostrophe key, let go, and press the e key. I could get an ñ using the same trick but press the tilde key instead of the apostrophe key. Pretty easy.

When I tried that trick in Thunderbird I got an upside down e. WTF.

I spent about 30 minutes clawing my way through search results on several occasions over the course of many months before I finally found someone say: "I installed the ibus package, rebooted and it all worked." (Sorry Internet, I can't find that page now!)

ibus? apt-cache show ibus states:

IBus is an Intelligent Input Bus. It is a new input framework for the Linux OS. It provides full featured and user friendly input method user interface. It also may help developers to develop input method easily.

Well, that's succinct. I still had no idea what ibus was, but it sounded like it might work. I followed those directions and suddenly there was a new icon in my system tray area. If I clicked on it, it listed my English keyboard by default.

I could right click, hit preferences and add a new keyboard:

English - English (US, International with dead keys)

Now, when I select that new option, I simply press my right alt key and e (no need for the apostrophe) and I get my é. Same with ñ. Hooray!

My only complaint is that while using this keyboard, I can't type regular apostrophes or ~'s. Not sure why, but it's not that hard to switch.

As far as I can tell, ibus tries to abstract some of the difficulties around input methods so it's easier on GUI developers.

Update 2019-02-11

Thanks, Internet, particularly for the comment from Alex about how I was choosing the wrong International keyboard. Of course, my keyboard does not have dead keys, so I need to choose the one called "English (intl., with AltGr dead keys)."

Now everything works perfectly. No need for ibus at all. I can get é with my right alt key followed by e. It works in my unicode terminal, thunderbird, and everywhere else that I've tried.

David Moreno: FOSDEM 2019

Sun, 03/02/2019 - 2:28 PM

Spent Saturday at FOSDEM with my friend jackdoe.

It was great to both see some and avoid other familiar faces. It was great to meet some unfamiliar faces as well.

Until next year! Maybe.

Junichi Uekawa: Found myself charged for cloud build last month.

Sun, 03/02/2019 - 11:28 AM
Found myself charged for Cloud Build last month. Noticed that I have some leftovers in Cloud Storage and GCR.

Bits from Debian: Projects and mentors for Debian's Google Summer of Code 2019 and Outreachy

Sun, 03/02/2019 - 10:40 AM

Debian is applying as a mentoring organization for the Google Summer of Code 2019, an internship program open to university students aged 18 and up, and will apply soon for the next round of Outreachy, an internship program for people from groups traditionally underrepresented in tech.

Please join us and help expand Debian and mentor new free software contributors!

If you have a project idea related to Debian and can mentor (or can coordinate the mentorship with some other Debian Developer or contributor, or within a Debian team), please add the details to the Debian GSoC2019 Projects wiki page by Tuesday, February 5 2019.

Participating in these programs has many benefits for Debian and the wider free software community. If you have questions, please come and ask us on IRC #debian-outreach or the debian-outreach mailing list.

Russ Allbery: Another new year haul

Sun, 03/02/2019 - 3:38 AM

The last haul I gave this name was technically not a new year haul, since it was posted in December, so I'll use the same title again. This is a relatively small collection of random stuff, mostly picking up recommendations and award nominees that I plan on reading soon.

Kate Elliott — Cold Fire (sff)
Kate Elliott — Cold Steel (sff)
Mik Everett — Self-Published Kindling (non-fiction)
Michael D. Gordin — The Pseudoscience Wars (non-fiction)
Yoon Ha Lee — Dragon Pearl (sff)
Ruth Ozeki — A Tale for the Time Being (sff)
Marge Piercy — Woman on the Edge of Time (sff)
Kim Stanley Robinson — New York 2140 (sff)

I've already reviewed New York 2140. I have several more pre-orders that will be delivered this month, so still safely acquiring books faster than I'm reading them. It's all to support authors!

Steinar H. Gunderson: FOSDEM 2019, Saturday

Sat, 02/02/2019 - 11:22 PM

Got lost in the wet slush of Brussels. Got to ULB. Watched seven talks in whole or partially, some good and some not so good. (Six more that I wanted to see, but couldn't due to overfilled rooms, scheduling conflicts or cancellations.) Marvelled at the Wi-Fi as usual (although n-m is slow to connect to v6-only networks, it seems). Had my own talk. Met people in the hallway track. Charged the laptop a bit in the cafeteria; should get a new internal battery for next year so that it lasts all day. Insane amount of people in the hallways as always. Tired. Going back tomorrow.

FOSDEM continues to be among the top free software conferences. But I would love some better way of finding talks than “read through this list of 750+ talks linearly, except ignore the blockchain devroom”.

Steinar H. Gunderson: Futatabi: Multi-camera instant replay with slow motion

Sat, 02/02/2019 - 11:14 PM

I've launched Futatabi, my slow motion software! Actually, the source code has been out as part of Nageru for a few weeks, so it's in Debian buster and all, but there's been a dash the last few days to get all the documentation and such in place.

The FOSDEM talk went well—the turnout wasn't huge (about fifty people in person; I don't know the stream numbers), but I guess it's a fairly narrow topic. Feedback was overall good, and the questions were well thought-out. Thanks to everyone who came, and especially those who asked questions! I had planned for 20 minutes (with demo, but without questions) and ended up in 18, so that's fairly good. I forgot only minor details, and reached my goal of zero bullet points. The recording will be out as soon as I can get my hands on it, although I do suspect it's been downconverted from 60 to 50 and then further to 25 fps, which will naturally kill the smoothness of the interpolated video.


Jonathan Dowland: glitched Amiga video

Sat, 02/02/2019 - 8:30 PM

This is the fifth part in a series of blog posts. The previous post was Amiga/Gotek boot test.

Glitchy component-video out

As I was planning out my next Gotek-floppy-adaptor experiment, disaster struck: the video out from my Amiga had become terribly distorted, in a delightfully Rob Sheridan fashion, sufficiently so that it was impossible to operate the machine.

Reading around, the most likely explanation seemed to be a blown capacitor. These devices are nearly 30 years old, and blown capacitors are a common problem. If it were in the Amiga, the advice is to replace all the capacitors on the mainboard. This is something that can be done by an amateur enthusiast with some soldering skills, but I'm too much of a beginner with soldering to attempt it. I was recommended a company in Aberystwyth called Mutant Caterpillar who do a full recap and repair service for £60, which seems very reasonable.

Philips CRT

Luckily, the blown capacitor (if that's what it was) wasn't in the Amiga, but in the A520 video adaptor. I dug my old Philips CRT monitor out of the loft and connected it directly to the Amiga and the picture was perfect. I had been hoping to avoid fetching it down, as I don't have enough space on my desk to leave it in situ, and instead must lug it over whenever I've found a spare minute to play with the Amiga. But it's probably not worth repairing the A520 (or sourcing a replacement) and the upshot is the picture via the RGB out is much clearer.

As I write this, I'm in a hotel room recovering after my first day at FOSDEM 2019, my first FOSDEM conference. There was a Retrocomputing devroom this year that looked really interesting, but I was fully booked into the Java room all day today. (And I don't see mention of Amigas in any of the abstracts.)

Bits from Debian: Help test initial support for Secure Boot

Sat, 02/02/2019 - 11:00 AM

The Debian Installer team is happy to report that the Buster Alpha 5 release of the installer includes some initial support for UEFI Secure Boot (SB) in Debian's installation media.

This support is not yet complete, and we would like to request some help! Please read on for more context and instructions to help us get better coverage and support.

On amd64 machines, by default the Debian installer will now boot (and install) a signed version of the shim package as the first stage boot loader. Shim is the core package in a signed Linux boot chain on Intel-compatible PCs. It is responsible for validating signatures on further pieces of the boot process (Grub and the Linux kernel), allowing for verification of those pieces. Each of those pieces will be signed by a Debian production signing key that is baked into the shim binary itself.

However, for safety during the development phase of Debian's SB support, we have only been using a temporary test key to sign our Grub and Linux packages. If we made a mistake with key management or trust path verification during this development, this would save us from having to revoke the production key. We plan on switching to the production key soon.

Due to the use of the test key so far, out of the box Debian will not yet install or run with SB enabled; Shim will not validate signatures with the test key and will stop, reporting the problem. This is correct and useful behaviour!

Thus far, Debian users have needed to disable SB before installation to make things work. From now on, with SB disabled, installation and use should work just the same as previously. Shim simply chain-loads grub and continues through the boot chain without checking signatures.

It is possible to enrol more keys on a SB system so that shim will recognise and allow other signatures, and this is how we have been able to test the rest of the boot chain. We now invite more users to give us valuable test coverage on a wider variety of hardware by enrolling our Debian test key and running with SB enabled.

If you want to help us test our Secure Boot support, please follow the instructions in the Debian wiki and provide feedback.

With help from users, we expect to be able to ship fully-working and tested UEFI Secure Boot in an upcoming Debian Installer release and in the main Buster release itself.

Dirk Eddelbuettel: The Incomplete Book of Running: A Short Review

Sat, 02/02/2019 - 6:48 AM

Peter Sagal’s The Incomplete Book of Running has been my enigma for several weeks now. As a connection, Peter and I have at most one degree of separation: a common fellow runner, friend, and neighbor who, sadly, long ago departed for Colorado (hi Russ!). So we’re quasi-neighbors. He is famous, I am not, but I follow him on social media.

So as “just another runner”, I had been treated to a constant trickle of content about the book. And I had (in vain) hoped my family would get me the book for Xmas, but no such luck. Hence I ordered a copy. And then Amazon, mankind’s paragon of inventory management and shipment, was seemingly out of it for weeks – so that my copy finally came today all the way from England (!!), even though Sagal and I live a few miles apart, run similar neighborhood routes, and run (or ran) the same track for Tuesday morning speedwork – and, as I noticed while devouring the book, share the same obsession with FIRST that I tried to instill in my running pals a decade ago. We also ran the same initial Boston Marathon in 2007, and ran many similar marathons (Boston, NY, Philly), sometimes even at the same time. But, bastard that he is, not only does he own both my PRs at the half (by about two minutes) and the full (by about four minutes) marathon – he also knows how to write!

This is a great book about running, life, and living around Oak Park. As its focus, the reflections about running are good, sometimes even profound, often funny, and show a writer’s genuine talent in putting words around something that is otherwise hard to describe. Particularly for caustic people such as long-distance runners.

The book was a great pleasure to read—possibly only the second book in a decade or longer that I “inhaled” cover to cover in one sitting, as it was just the right content on a Friday night after a long work week. This was a fun and entertaining yet profound read. I really enjoyed his meditation on the process and journey that got him to his PR. When it was time for mine, by now over ten years ago, it came after a (now surreal-seeming) sequence of running Boston, Chicago, and New York in one year, and London and Berlin the next. And somehow by the time I got to Berlin I was both well trained and in a good and relaxed mental shape, so that things came together for me that day. (I also got lucky, as circumstances were favourable: that was one of the many recent years in which a marathon record was broken in Berlin.) And as Sagal describes really well throughout the book, running is a process, a practical philosophy, an outlet, and an occasional meditation. But there is much more in the book, so go and read it.

One minor correction: it is Pfeiffer, with a P before the f, for Michelle’s family name, as every viewer of the Baker Boys should know.

Great book. Recommended to runners and non-runners alike.

Petter Reinholdtsen: Websocket from Kraken in Valutakrambod

Fri, 01/02/2019 - 10:25 PM

Yesterday, the Kraken virtual currency exchange announced its Websocket service, providing a stream of exchange updates to its clients. Getting updated rates quickly is a good idea, so I used their API documentation and added Websocket support to the Kraken service in Valutakrambod today. The Python library can now get updates from Kraken several times per second, instead of only each time the information is polled from the REST API.
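For the curious, talking to such a feed directly takes only a few lines of Python. This stand-alone sketch is not valutakrambod's actual code: it assumes the third-party websockets library, and the endpoint and subscription message are taken from Kraken's public API documentation:

import asyncio
import json
import websockets  # third-party: pip install websockets

async def watch():
    async with websockets.connect("wss://ws.kraken.com") as ws:
        # Ask for ticker updates for one pair; every later message on
        # the socket is a JSON-encoded update.
        await ws.send(json.dumps({
            "event": "subscribe",
            "pair": ["XBT/EUR"],
            "subscription": {"name": "ticker"},
        }))
        async for message in ws:
            print(json.loads(message))

asyncio.run(watch())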

If this sounds interesting to you, the code for valutakrambod is available from GitHub. Here is example output from the example client displaying rates in a curses view:

Name            Pair    Bid         Ask         Spr    Ftcd  Age
BitcoinsNorway  BTCEUR  2959.2800   3021.0500   2.0%   36    nan    nan
Bitfinex        BTCEUR  3087.9000   3088.0000   0.0%   36    37     nan
Bitmynt         BTCEUR  3001.8700   3135.4600   4.3%   36    52     nan
Bitpay          BTCEUR  3003.8659   nan         nan%   35    nan    nan
Bitstamp        BTCEUR  3008.0000   3010.2300   0.1%   0     1      1
Bl3p            BTCEUR  3000.6700   3010.9300   0.3%   1     nan    nan
Coinbase        BTCEUR  2992.1800   3023.2500   1.0%   34    nan    nan
Kraken          BTCEUR  3005.7000   3006.6000   0.0%   0     1      0
Paymium         BTCEUR  2940.0100   2993.4400   1.8%   0     2688   nan
BitcoinsNorway  BTCNOK  29000.0000  29360.7400  1.2%   36    nan    nan
Bitmynt         BTCNOK  29115.6400  29720.7500  2.0%   36    52     nan
Bitpay          BTCNOK  29029.2512  nan         nan%   36    nan    nan
Coinbase        BTCNOK  28927.6000  29218.5900  1.0%   35    nan    nan
MiraiEx         BTCNOK  29097.7000  29741.4200  2.2%   36    nan    nan
BitcoinsNorway  BTCUSD  3385.4200   3456.0900   2.0%   36    nan    nan
Bitfinex        BTCUSD  3538.5000   3538.6000   0.0%   36    45     nan
Bitpay          BTCUSD  3443.4600   nan         nan%   34    nan    nan
Bitstamp        BTCUSD  3443.0100   3445.0500   0.1%   0     2      1
Coinbase        BTCUSD  3428.1600   3462.6300   1.0%   33    nan    nan
Gemini          BTCUSD  3445.8800   3445.8900   0.0%   36    326    nan
Hitbtc          BTCUSD  3473.4700   3473.0700   -0.0%  0     0      0
Kraken          BTCUSD  3444.4000   3445.6000   0.0%   0     1      0
Exchangerates   EURNOK  9.6685      9.6685      0.0%   36    22226  nan
Norgesbank      EURNOK  9.6685      9.6685      0.0%   36    22226  nan
Bitstamp        EURUSD  1.1440      1.1462      0.2%   0     1      2
Exchangerates   EURUSD  1.1471      1.1471      0.0%   36    22226  nan
BitcoinsNorway  LTCEUR  1.0009      22.6538     95.6%  35    nan    nan
BitcoinsNorway  LTCNOK  259.0900    264.9300    2.2%   35    nan    nan
BitcoinsNorway  LTCUSD  0.0000      29.0000     100.0% 35    nan    nan
Norgesbank      USDNOK  8.4286      8.4286      0.0%   36    22226  nan

Yes, I noticed the strange negative spread on Hitbtc. I've seen the same on Kraken. Another strange observation is that Kraken sometimes announces trade orders a fraction of a second in the future. I really wonder what is going on there.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Wouter Verhelst: Visum!

Fri, 01/02/2019 - 10:22 PM

A year and a half ago, we made a decision: I was going to move.

About a year ago, I decided that getting this done without professional help was not a very good idea and would take forever, so I got set up with a lawyer and had her guide me through the process.

After lots of juggling with bureaucracies, some unfortunate delays, and some repeats of things I had already done before, I dropped off a 1 cm thick file of paperwork at the consulate a few weeks ago.

Today, I went back there to fetch my passport, containing the visa.

Tomorrow, FOSDEM starts. After that, I will be moving to a different continent!
