Planet Debian - https://planet.debian.org/

Michael Stapelberg: Linux package managers are slow

Sat, 16/05/2020 - 12:46pm

I measured how long the package managers of the most popular Linux distributions take to install small and large packages (the ack(1p) source code search Perl script and qemu, respectively).

Where required, my measurements include metadata updates such as transferring an up-to-date package list. For me, requiring a metadata update is the more common case, particularly on live systems or within Docker containers.

All measurements were taken on an Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz running Docker 1.13.1 on Linux 4.19, backed by a Samsung 970 Pro NVMe drive boasting many hundreds of MB/s write performance. The machine is located in Zürich and connected to the Internet with a 1 Gigabit fiber connection, so the expected top download speed is ≈115 MB/s.

See Appendix B for details on the measurement method and command outputs.
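For orientation, the measurement pattern (full interactive transcripts are in Appendix B) boils down to timing one installation in a fresh container. A minimal, non-interactive sketch of that pattern, using two of the images from the appendix:

time docker run alpine sh -c 'apk add ack'
time docker run debian:sid sh -c 'apt update && apt install -y ack-grep'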

Measurements

Keep in mind that these are one-time measurements. They should be indicative of actual performance, but your experience may vary.

ack (small Perl program)

distribution  package manager  data    wall-clock time  rate
Fedora        dnf              107 MB  29s              3.7 MB/s
NixOS         Nix              15 MB   14s              1.1 MB/s
Debian        apt              15 MB   4s               3.7 MB/s
Arch Linux    pacman           6.5 MB  3s               2.1 MB/s
Alpine        apk              10 MB   1s               10.0 MB/s

qemu (large C program)

distribution  package manager  data    wall-clock time  rate
Fedora        dnf              266 MB  1m8s             3.9 MB/s
Arch Linux    pacman           124 MB  1m2s             2.0 MB/s
Debian        apt              159 MB  51s              3.1 MB/s
NixOS         Nix              262 MB  38s              6.8 MB/s
Alpine        apk              26 MB   2.4s             10.8 MB/s


The difference between the slowest and fastest package managers is 30x!

How can Alpine’s apk and Arch Linux’s pacman be an order of magnitude faster than the rest? They are doing a lot less than the others, and more efficiently, too.

Pain point: too much metadata

For example, Fedora transfers a lot more data than others because its main package list is 60 MB (compressed!) alone. Compare that with Alpine’s 734 KB APKINDEX.tar.gz.

Of course the extra metadata which Fedora provides helps some use cases; otherwise they would hopefully have removed it altogether. Still, the amount of metadata seems excessive for installing a single package, which I consider the main use case of an interactive package manager.

I expect any modern Linux distribution to only transfer absolutely required data to complete my task.

Pain point: no concurrency

Because they need to serialize the execution of arbitrary maintainer-provided code (hooks and triggers), all tested package managers install packages sequentially (one after the other) instead of concurrently (all at the same time).

In my blog post “Can we do without hooks and triggers?”, I outline that hooks and triggers are not strictly necessary to build a working Linux distribution.
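To illustrate the kind of parallelism that becomes possible once no hooks or triggers have to run between packages, here is a minimal, hypothetical sketch (the archive directory and target directory are placeholders) that unpacks independent package archives concurrently:

# unpack all downloaded package archives in parallel, one tar process per CPU core
ls /var/cache/pkgs/*.tar.gz | xargs -P "$(nproc)" -I{} tar -xzf {} -C /target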

Thought experiment: further speed-ups

Strictly speaking, the only required feature of a package manager is to make available the package contents so that the package can be used: a program can be started, a kernel module can be loaded, etc.

By only implementing what’s needed for this feature, and nothing more, a package manager could likely beat apk’s performance. It could, for example:

  • skip archive extraction by mounting file system images (like AppImage or snappy); see the sketch after this list
  • use compression which is light on CPU, as networks are fast (like apk)
  • skip fsync when it is safe to do so, i.e.:
    • package installations don’t modify system state
    • atomic package installation (e.g. an append-only package store)
    • automatically clean up the package store after crashes
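For the first bullet, mounting instead of extracting could look roughly like the following minimal sketch; the image path, mount point, and the choice of SquashFS are assumptions for illustration only:

# make the package contents available without extracting a single file
mkdir -p /pkgs/qemu
mount -t squashfs -o loop,ro /var/cache/pkgs/qemu.squashfs /pkgs/qemu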
Current landscape

Here’s a table outlining how the various package managers listed on Wikipedia’s list of software package management systems fare:

name        scope   package file format                hooks/triggers
AppImage    apps    image: ISO9660, SquashFS           no
snappy      apps    image: SquashFS                    yes: hooks
FlatPak     apps    archive: OSTree                    no
0install    apps    archive: tar.bz2                   no
nix, guix   distro  archive: nar.{bz2,xz}              activation script
dpkg        distro  archive: tar.{gz,xz,bz2} in ar(1)  yes
rpm         distro  archive: cpio.{bz2,lz,xz}          scriptlets
pacman      distro  archive: tar.xz                    install
slackware   distro  archive: tar.{gz,xz}               yes: doinst.sh
apk         distro  archive: tar.gz                    yes: .post-install
Entropy     distro  archive: tar.bz2                   yes
ipkg, opkg  distro  archive: tar{,.gz}                 yes

Conclusion

As per the current landscape, there is no distribution-scoped package manager which uses images and leaves out hooks and triggers, not even in smaller Linux distributions.

I think that space is really interesting, as it uses a minimal design to achieve significant real-world speed-ups.

I have explored this idea in much more detail, and am happy to talk more about it in my post “Introducing the distri research linux distribution”.

Appendix A: related work

There are a couple of recent developments going in the same direction:

Appendix B: measurement details

ack

The full transcripts for each package manager follow:

Fedora’s dnf takes almost 30 seconds to fetch and unpack 107 MB.

% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y ack
Fedora Modular 30 - x86_64              4.4 MB/s | 2.7 MB  00:00
Fedora Modular 30 - x86_64 - Updates    3.7 MB/s | 2.4 MB  00:00
Fedora 30 - x86_64 - Updates             17 MB/s |  19 MB  00:01
Fedora 30 - x86_64                       31 MB/s |  70 MB  00:02
[…]
Install  44 Packages
Total download size: 13 M
Installed size: 42 M
[…]
real    0m29.498s
user    0m22.954s
sys     0m1.085s

NixOS’s Nix takes 14s to fetch and unpack 15 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i perl5.28.2-ack-2.28'
unpacking channels...
created 2 symlinks in user environment
installing 'perl5.28.2-ack-2.28'
these paths will be fetched (14.91 MiB download, 80.83 MiB unpacked):
  /nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2
  /nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48
  /nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man
  /nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27
  /nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31
  /nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53
  /nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16
  /nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28
copying path '/nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man' from 'https://cache.nixos.org'...
copying path '/nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27' from 'https://cache.nixos.org'...
copying path '/nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16' from 'https://cache.nixos.org'...
copying path '/nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48' from 'https://cache.nixos.org'...
copying path '/nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53' from 'https://cache.nixos.org'...
copying path '/nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31' from 'https://cache.nixos.org'...
copying path '/nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2' from 'https://cache.nixos.org'...
copying path '/nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28' from 'https://cache.nixos.org'...
building '/nix/store/q3243sjg91x1m8ipl0sj5gjzpnbgxrqw-user-environment.drv'...
created 56 symlinks in user environment
real    0m 14.02s
user    0m 8.83s
sys     0m 2.69s

Debian’s apt takes almost 10 seconds to fetch and unpack 16 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y ack-grep)
Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [233 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8270 kB]
Fetched 8502 kB in 2s (4764 kB/s)
[…]
The following NEW packages will be installed:
  ack ack-grep libfile-next-perl libgdbm-compat4 libgdbm5 libperl5.26 netbase
  perl perl-modules-5.26
The following packages will be upgraded:
  perl-base
1 upgraded, 9 newly installed, 0 to remove and 60 not upgraded.
Need to get 8238 kB of archives.
After this operation, 42.3 MB of additional disk space will be used.
[…]
real    0m9.096s
user    0m2.616s
sys     0m0.441s

Arch Linux’s pacman takes a little over 3s to fetch and unpack 6.5 MB.

% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm ack)
:: Synchronizing package databases...
 core            132.2 KiB  1033K/s 00:00
 extra          1629.6 KiB  2.95M/s 00:01
 community         4.9 MiB  5.75M/s 00:01
[…]
Total Download Size:   0.07 MiB
Total Installed Size:  0.19 MiB
[…]
real    0m3.354s
user    0m0.224s
sys     0m0.049s

Alpine’s apk takes only about 1 second to fetch and unpack 10 MB.

% docker run -t -i alpine
/ # time apk add ack
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
(1/4) Installing perl-file-next (1.16-r0)
(2/4) Installing libbz2 (1.0.6-r7)
(3/4) Installing perl (5.28.2-r1)
(4/4) Installing ack (3.0.0-r0)
Executing busybox-1.30.1-r2.trigger
OK: 44 MiB in 18 packages
real    0m 0.96s
user    0m 0.25s
sys     0m 0.07s

qemu

The full transcripts for each package manager follow:

Fedora’s dnf takes over a minute to fetch and unpack 266 MB.

% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y qemu
Fedora Modular 30 - x86_64              3.1 MB/s | 2.7 MB  00:00
Fedora Modular 30 - x86_64 - Updates    2.7 MB/s | 2.4 MB  00:00
Fedora 30 - x86_64 - Updates             20 MB/s |  19 MB  00:00
Fedora 30 - x86_64                       31 MB/s |  70 MB  00:02
[…]
Install  262 Packages
Upgrade    4 Packages
Total download size: 172 M
[…]
real    1m7.877s
user    0m44.237s
sys     0m3.258s

NixOS’s Nix takes 38s to fetch and unpack 262 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i qemu-4.0.0'
unpacking channels...
created 2 symlinks in user environment
installing 'qemu-4.0.0'
these paths will be fetched (262.18 MiB download, 1364.54 MiB unpacked):
[…]
real    0m 38.49s
user    0m 26.52s
sys     0m 4.43s

Debian’s apt takes 51 seconds to fetch and unpack 159 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y qemu-system-x86)
Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [149 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8426 kB]
Fetched 8574 kB in 1s (6716 kB/s)
[…]
Fetched 151 MB in 2s (64.6 MB/s)
[…]
real    0m51.583s
user    0m15.671s
sys     0m3.732s

Arch Linux’s pacman takes 1m2s to fetch and unpack 124 MB.

% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm qemu)
:: Synchronizing package databases...
 core            132.2 KiB   751K/s 00:00
 extra          1629.6 KiB  3.04M/s 00:01
 community         4.9 MiB  6.16M/s 00:01
[…]
Total Download Size:   123.20 MiB
Total Installed Size:  587.84 MiB
[…]
real    1m2.475s
user    0m9.272s
sys     0m2.458s

Alpine’s apk takes only about 2.4 seconds to fetch and unpack 26 MB.

% docker run -t -i alpine
/ # time apk add qemu-system-x86_64
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
[…]
OK: 78 MiB in 95 packages
real    0m 2.43s
user    0m 0.46s
sys     0m 0.09s

Sven Hoexter: Quick and Dirty: masquerading / NAT with nftables

Sat, 16/05/2020 - 12:06pm

Since nftables is now the new default, here is a short note to myself on how to set up masquerading, i.e. the usual NAT setup you use on a gateway.

nft add table nat
nft add chain nat postrouting { type nat hook postrouting priority 100 \; }
nft add rule nat postrouting ip saddr 192.168.1.0/24 oif wlan0 masquerade

In this case wlan0 is basically the "WAN" interface, because I use an old netbook as a wired-to-wireless network adapter.
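Two follow-up steps that such a gateway typically also needs, sketched here as an addition and not part of the original note: enabling IPv4 forwarding, and persisting the ruleset across reboots via the nftables service that Debian ships:

# let the kernel forward packets between the LAN and wlan0
sysctl -w net.ipv4.ip_forward=1
# dump the live ruleset into the file read by nftables.service at boot
nft list ruleset > /etc/nftables.conf
systemctl enable nftables.service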

Michael Stapelberg: a new distri linux (fast package management) release

Sat, 16/05/2020 - 9:14am

I just released a new version of distri.

The focus of this release lies on:

  • a better developer experience, allowing users to debug any installed package without extra setup steps

  • performance improvements in all areas (starting programs, building distri packages, generating distri images)

  • better tooling for keeping track of upstream versions

See the release notes for more details.

The distri research linux distribution project was started in 2019 to research whether a few architectural changes could enable drastically faster package management.

While the package managers in common Linux distributions (e.g. apt, dnf, …) top out at data rates of only a few MB/s, distri effortlessly saturates 1 Gbit, 10 Gbit and even 40 Gbit connections, resulting in fast installation and update speeds.

Lucas Kanashiro: Quarantine times

Sat, 16/05/2020 - 5:20am

After quite some time without publishing anything here, I decided to share the latest events. It is a hard time for most of us but with all this time at home, one can also do great things.

I would like to start with the wonderful idea the Debian Brasil community had! Why not create an online Debian related conference to keep people’s minds busy and also share knowledge? After brainstorming, we came up with our online conference called #FiqueEmCasaUseDebian (in English it would be #StayHomeUseDebian). It started on May 3rd and will last until May 30th (yes, one month)! Every weekday, we have one online talk at night, and on every Saturday, a Debian packaging workshop. The feedback so far has been awesome, and the Brazilian Debian community is reaching more people than we usually do at our regular conferences (as you might have imagined, Brazil is huge and it is hard to bring people together in the same place). At the end of the month, we will have the first MiniDebConf online and I hope it will be as successful as our experience here in Brazil.

Another thing that deserves a highlight is the fact that I became an Ubuntu Core Developer this month; yay! After 9 months of working almost daily on the Ubuntu Server, I was able to get my upload rights to the Ubuntu archive. I was tired of asking for sponsorship, and probably my peers were tired of me too.

I could spend more time here complaining about the Brazilian government but I think it isn’t worth it. Let’s try to do something useful instead!

Ingo Juergensmann: XMPP: ejabberd Project on the-federation.info

Fri, 15/05/2020 - 5:30pm

For those interested in alternative social networks, there is a website called the-federation.info, which collects some statistics about "The Fediverse". The biggest part of the fediverse is Mastodon, but there are other projects (or parts) like Friendica or Matrix that do "federation". One of the oldest projects doing federation is XMPP. You have been able to find some Prosody servers there for some time now, because there is a Prosody module "mod_nodeinfo2" that can be used. But for ejabberd there is no such module (yet?), which makes it a little bit difficult to get listed on the-federation.info.

Some days ago I wrote a small script to export the needed values to the x-nodeinfo2 endpoint that is queried by the-federation.info. It's surely not the best script or solution for the job, and it is currently limited to ejabberd servers that use a PostgreSQL database as backend, although it should be fairly easy to adapt the script for use with MySQL. Well, at least it does its job, and it will do until there is a native ejabberd module for this task.
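At its core such an export just counts rows in the ejabberd database. A hypothetical sketch of the kind of query the script runs, assuming the stock ejabberd PostgreSQL schema with its users table (database name and user are placeholders):

# total registered accounts, one of the values x-nodeinfo2 reports
psql -U ejabberd -d ejabberd -t -A -c 'SELECT count(*) FROM users;'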

You can find the script on Github: https://github.com/ingoj/ejabberd-nodeinfo2

Enjoy it and register your ejabberd server on the-federation.info! :-)


Junichi Uekawa: Bought Canon TS3300 Printer.

Fri, 15/05/2020 - 2:00pm
Bought Canon TS3300 Printer. I haven't bought a printer in 20 years. 5 years ago I think I would have used Google Cloud Print, which took forever uploading and downloading huge print data to the cloud. 20 years ago I would have used LPR and LPD, admiring the PostScript backtraces. Configuration was done using an Android app: the printer could be connected via WiFi, and configuration through the app then allowed the printer to join the home WiFi network. Feels modern. It was recognized as an IPPS device by Chrome OS (using CUPS?). No print server necessary. Amazing.

Iustin Pop: New internet provider ☺

Thu, 14/05/2020 - 11:45pm

Note: all this is my personal experience, on my personal machines, and I don’t claim it to be the absolute truth. Also, I don’t directly call out names, although if you live in Switzerland it’s pretty easy to guess who the old provider is from some hints.

For a long time, I wanted to move away from my current (well, now past) provider, for a multitude of reasons. The main one being that the company is a very classic company, with a classic support system that doesn't work very well - at one point their billing system left me out cold without internet for 15 days. But for the last few years they were mostly OK, and changing to a different provider would have meant routing a really long Ethernet cable around the apartment, so I kept postponing it. Yes, self-inflicted pain, I know.

Until the entire work-from-home thing, when the usually stable connection started degrading in a very linear fashion day-by-day (this is a graph that basically reflects download bandwidth):

1+ month download bandwidth test

At first, I didn’t realise this, as even 100Mbps is still fast enough. But once the connection went below (I believe) 50Mbps, it became visible in day to day work. And since I work daily from home… yeah. Not fun.

So I started doing speedtest.net - and oh my, ~20Mbps was a good result, usually 12-14Mbps. On a wired connection. On a connection that officially is supposed to be 600Mbps down. The upload speed was spot on, so I didn’t think it was my router, but:

  • rebooted everything; yes, including the cable modem.
  • turned off firewall. and on. and off. of course, no change.
  • changed the Ethernet port used on my firewall.
  • changed firewall rules.
  • rebooted again.

Nothing helped. Once in a blue moon, speedtest would give me 100Mbps, but like once every two days, then it would be back to showing 8Mbps. Eight! It got to the point where even apt update was tediously slow, and kernel downloads took ages…

The official instructions for dealing with bad internet are a joke:

  • turn off everything.
  • change cable modem from bridged mode to router mode - which is not how I want the modem to work, so the test is meaningless; also, how can router mode be faster⁈
  • disconnect everything else than the test computer.
  • download a program from somewhere (Windows/Mac/iOS, with a browser version too), and run it. yay for open platform!

And the best part: “If you are not satisfied with the results, read our internet optimisation guide. If you are still not happy, use our community forums or our social platforms.”

Given that it was a decline over 3 weeks, that I don’t know of any computer component that would degrade this steadily without throwing any other errors, and that my upload speed was all good, I assumed it was the provider. Maybe I was wrong, but I had wanted to do this for a long while anyway, so I went through the “find how to route the cable, check if the other provider’s socket is good, order service, etc.” dance, and less than a week later, I had the other connection.

Now, of course, bandwidth works as expected:

1+ month updated bandwidth test

Both download and upload are fine (the graph above is just download). Latency is also much better, towards many parts of the internet that matter. But what is shocking is the difference in jitter to some external hosts I care about. On the previous provider, a funny thing was that both outgoing and incoming pings had more jitter and packet loss when done directly (IPv4 to IPv4) than when done over a VPN. This doesn’t make sense, since the VPN is just overhead on top of IPv4, but the graphs show it; what I think happens is that a VPN flow is “cached” in the provider’s routers, whereas a simple ping packet is not. Still, the fact that there’s enough jitter for a ping to a not-very-far host doesn’t make me happy.

Examples, outgoing:

Outgoing smokeping to public IPv4 Outgoing smokeping over VPN

And incoming:

Incoming smokeping to public IPv4 Incoming smokeping over VPN

Both incoming and outgoing show this weirdness: more packet loss and more jitter on the direct connection than over the VPN. Again, this is not much of a problem in practice, but it makes me wonder what other shenanigans happen behind the scenes. You can also see clearly when the “work from home” traffic entered the picture and started significantly degrading my connection, even over the magically “better” VPN connection.

Switching to this week’s view shows the (in my opinion) dramatic improvement in consistency of the connection:

Outgoing current week smokeping to public IPv4 Outgoing current week smokeping over VPN

No more packet loss, no more jitter. You can also see my VPN being temporarily down during provider switchover because my firewall was not quite correct for a moment.

And the last drill down, at high resolution, one day before and one day after switchover. Red is VPN, blue is plain IPv4, yellow is the missing IPv6 connection :)

Incoming old:

Incoming 1-day smokeping, old provider

and new:

Incoming 1-day smokeping, new provider

Outgoing old:

Outgoing 1-day smokeping, old provider

and new:

Outgoing 1-day smokeping, new provider

This is what I expect, ping-over-VPN should of course be slower than plain ping. Note that incoming and outgoing have slightly different consistency, but that is fine for me :) The endpoints doing the two tests are different, so this is expected. Reading the legend on the graphs for the incoming connection (similar story for outgoing):

  • before/after plain latency: 24.1ms→18.8ms, ~5.3ms gained, ~22% lower.
  • before/after packet loss: ~7.0% → 0.0%, infinitely better :)
  • before/after latency standard deviation: ~2.1ms → 0.0ms.
  • before/after direct-vs-VPN difference: inconsistent → consistent 0.4ms faster for direct ping.

So… to my previous provider: it can be done better. Or at least, allow people easier ways to submit performance issue problems.

For me, the moral of the story is that I should have switched a couple of years ago, instead of being lazy. And that I’m curious to see how IPv6 traffic will differ, if at all :)

Take care, everyone! And thanks for looking at these many graphs :)

Norbert Preining: Switching from NVIDIA to AMD (including tensorflow)

Thu, 14/05/2020 - 5:55am

I have been using my GeForce 1060 extensively for deep learning, both with Python and R. But the ever-painful dance with the closed source drivers and kernel updates, paired with the collapse of my computer’s PSU and/or GPU, made me finally switch to an AMD graphics card and an open source stack. And you know what: within half a day I had everything running, including Tensorflow. Yay for Open Source!

Preliminaries

So what is the starting point: I am running Debian/unstable with an AMD Radeon 5700. First of all I purged all NVIDIA related packages, and there are a lot of them, I have to say. Be sure to search for nv and nvidia and get rid of all such packages. For safety I rebooted and checked again that no kernel modules related to NVIDIA were loaded.

Firmware

Debian ships the package firmware-amd-graphics, but this is not enough for the current kernel and current hardware. It is better to clone git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git and copy everything from the amdgpu directory to /lib/firmware/amdgpu.

I didn’t do that at first, and booting the kernel then hung during the switch to the AMD framebuffer. If you see this behaviour, your firmware files are too old.
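A minimal sketch of the firmware update described above, assuming standard Debian paths; the update-initramfs call makes sure the new files also end up in the initrd (see the kernel section below):

git clone git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
cp linux-firmware/amdgpu/* /lib/firmware/amdgpu/
update-initramfs -u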

Kernel

The advantage of having an open source driver that lives in the kernel is that you don’t have to worry about incompatibilities (unlike the NVIDIA driver, which needs patching every time a new kernel comes out). For recent AMD GPUs you need a rather new kernel; I have 5.6.0 and 5.7.0-rc5 running. Make sure that you have all the necessary kernel config options turned on if you compile your own kernels. In my case this is:

CONFIG_DRM_AMDGPU=m
CONFIG_DRM_AMDGPU_USERPTR=y
CONFIG_DRM_AMD_ACP=y
CONFIG_DRM_AMD_DC=y
CONFIG_DRM_AMD_DC_DCN=y
CONFIG_HSA_AMD=y

When installing the kernel, be sure that the firmware is already updated so that the correct firmware is copied into the initrd.

Support programs and libraries

All the following is more or less an excerpt from the ROCm Installation Guide!

AMD provides a Debian/Ubuntu APT repository for software as well as kernel sources. Put the following into /etc/apt/sources.list.d/rocm.list:

deb [arch=amd64] http://repo.radeon.com/rocm/apt/debian/ xenial main

and also put the public key of the rocm repository into /etc/apt/trusted.gpg.d/rocm.asc.

After that apt-get update should work.

I did install rocm-dev-3.3.0, rocm-libs-3.3.0, hipcub-3.3.0, miopen-hip-3.3.0 (and of course the dependencies), but not rocm-dkms which is the kernel module. If you have a sufficiently recent kernel (see above), the source in the kernel itself is newer.

The libraries and programs are installed under /opt/rocm-3.3.0, and to make the libraries available to Tensorflow (see below) and other programs, I added /etc/ld.so.conf.d/rocm.conf with the following content:

/opt/rocm-3.3.0/lib/

and run ldconfig as root.

Last but not least, add a udev rule that is normally installed by rocm-dkms, put the following into /etc/udev/rules.d/70-kfd.rules:

SUBSYSTEM=="kfd", KERNEL=="kfd", TAG+="uaccess", GROUP="video"

This allows users from the video group to access the GPU.
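To apply the rule without rebooting and to actually end up in the video group, something like the following sketch should do; the user name is a placeholder, and the group change only takes effect after logging in again:

udevadm control --reload-rules && udevadm trigger
usermod -aG video youruser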

Up to here you should be able to boot into the system and have X running on top of the AMD GPU, including OpenGL acceleration and direct rendering:

$ glxinfo
name of display: :0
display: :0  screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.4
...
client glx vendor string: Mesa Project and SGI
client glx version string: 1.4
...

Tensorflow

Thinking about how hard it was to get the correct libraries to get Tensorflow running on GPUs (see here and here), it is a pleasure to see that with open source all this pain is relieved.

There is already work done to make Tensorflow run on ROCm: the tensorflow-rocm project. They provide up-to-date PyPI packages, so a simple

pip3 install tensorflow-rocm

is enough to get Tensorflow running with Python:

>>> import tensorflow as tf
>>> tf.add(1, 2).numpy()
2020-05-14 12:07:19.590169: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libhip_hcc.so
...
2020-05-14 12:07:19.711478: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7444 MB memory) -> physical GPU (device: 0, name: Navi 10 [Radeon RX 5600 OEM/5600 XT / 5700/5700 XT], pci bus id: 0000:03:00.0)
3
>>>

Tensorflow for R

Installation is trivial again since there is a tensorflow package for R; just run (as a user that is in the group staff, which normally owns /usr/local/lib/R):

$ R
...
> install.packages("tensorflow")
...

Do not call the R function install_tensorflow() since Tensorflow is already installed and functional!

With that done, R can use the AMD GPU for computations:

$ R
...
> library(tensorflow)
> tf$constant("Hellow Tensorflow")
2020-05-14 12:14:24.185609: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libhip_hcc.so
...
2020-05-14 12:14:24.277736: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7444 MB memory) -> physical GPU (device: 0, name: Navi 10 [Radeon RX 5600 OEM/5600 XT / 5700/5700 XT], pci bus id: 0000:03:00.0)
tf.Tensor(b'Hellow Tensorflow', shape=(), dtype=string)
>

AMD Vulkan

From the Vulkan home page:

Vulkan is a new generation graphics and compute API that provides high-efficiency, cross-platform access to modern GPUs used in a wide variety of devices from PCs and consoles to mobile phones and embedded platforms.

Several games are using the Vulkan API if available and it is said to be more efficient.

There are Vulkan libraries for Radeon shipped with Mesa, in the Debian package mesa-vulkan-drivers, but my guess is that they are a bit outdated.

The AMDVLK project provides the latest version, and to my surprise was rather easy to install, again by following the advice in their README. The steps are basically (always follow what is written for Ubuntu):

  • Install the necessary dependencies
  • Install the Repo tool
  • Get the source code
  • Make 64-bit and 32-bit builds
  • Copy driver and JSON files (see below for what I did differently!)

All as described in the linked README. Just to make sure, I removed the JSON files /usr/share/vulkan/icd.d/radeon* shipped by Debian's mesa-vulkan-drivers package.

Finally I deviated a bit by not editing the file /usr/share/X11/xorg.conf.d/10-amdgpu.conf, but instead copying to /etc/X11/xorg.conf.d/10-amdgpu.conf and adding there the section:

Section "Device" Identifier "AMDgpu" Option "DRI" "3" EndSection


To be honest, I did not follow the Copy driver and JSON files literally, since I don’t want to copy self-made files into system directories under /usr/lib. So what I did is:

  • copy the driver files to /opt/amdvlk/lib, so that I now have /opt/amdvlk/lib/i386-linux-gnu/amdvlk32.so and /opt/amdvlk/lib/x86_64-linux-gnu/amdvlk64.so there
  • Adjust the location of the driver file in the two JSON files /etc/vulkan/icd.d/amd_icd32.json and /etc/vulkan/icd.d/amd_icd64.json (which were installed above under Copy driver and JSON files)
  • added a file /etc/ld.so.conf.d/amdvlk.conf containing the two lines /opt/amdvlk/lib/i386-linux-gnu and /opt/amdvlk/lib/x86_64-linux-gnu, each on its own line

With this in place, I don’t pollute the system directories, and still the new Vulkan driver is available.

But honestly, I don’t really know whether it is used and is working, because I don’t know how to check.
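One hedged suggestion: the vulkaninfo tool (packaged in Debian as vulkan-tools) prints the properties of the active Vulkan driver, so if the installed version is recent enough to show VK_KHR_driver_properties, something like this should reveal whether AMDVLK or Mesa's RADV is being picked up:

vulkaninfo | grep -i -E 'driverName|driverInfo'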

With all that in place, I can run my usual set of Steam games (The Long Dark, Shadow of the Tomb Raider, The Talos Principle, Supraland, …) and I haven’t seen any visual problems so far. As a bonus, KDE/Plasma is now running much better, since NVIDIA and KDE have traditionally had some incompatibilities.

The above might sound like a lot of stuff to do, but considering that most of the parts are not really packaged within Debian, and that all of this is a rather new open source stack, I was surprised that I got everything working smoothly in half a day.

Thanks to all the developers who have worked hard to make this all possible.

Mike Gabriel: Q: Remote Support Framework for the GNU/Linux Desktop?

Wed, 13/05/2020 - 9:14am

TL;DR: For those (admins) of you who run GNU/Linux on staff computers: how do you organize graphical remote support in your company? Get in touch, share your expertise and experiences.

Researching on FLOSS based Linux Desktops

When bringing GNU/Linux desktops to a generic folk of productive office users on a large scale, graphical remote support is a key feature when organizing helpdesk support teams' workflows.

In a research project that I am currently involved in, we investigate the different available remote support technologies (VNC screen mirroring, ScreenCasts, etc.) and the available frameworks that allow one to provide a remote support infrastructure 100% on-premise.

In this research project we intend to find FLOSS solutions for everything required for providing a large scale GNU/Linux desktop to end users, but we likely will have to recommend non-free solutions, if a FLOSS approach is not available for certain demands. Depending on the resulting costs, bringing forth a new software solution instead of dumping big money in subscription contracts for non-free software is seen as a possible alternative.

As a member of the X2Go upstream team and maintainer of several remote desktop related tools and frameworks in Debian, I'd consider myself as sort of in-the-topic. The available (as FLOSS) underlying technologies for plumbing a remote support framework are pretty much clear (x11vnc, recent pipewire-related approaches in Wayland compositors, browser-based screencasting). However, I still lack a good spontaneous answer to the question: "How to efficiently software-side organize a helpdesk scenario for 10.000+ users regarding graphical remote support?".

Framework for Remote Desktop in Webbrowsers

In fact, in the context of my X2Go activities, I am currently planning to put together a Django-based framework for running X2Go sessions in a web browser. The framework that we will come up with (two developers have already been hired for an initial sprint in July 2020) will be designed to be highly pluggable and it will probably be easy to add remote support / screen sharing features further on.

And still, I walk around with the question in mind: Do I miss anything? Is there anything already out there that provides a remote support solution as 100% FLOSS, that has enterprise grade, that up-scales well, that has a modern UI design, etc. Something that I simply haven't come across, yet?

Looking forward to Your Feedback

Please get in touch (OFTC/Freenode IRC, Telegram, Email), if you can fill the gap and feel like sharing your ideas and experiences.

light+love
Mike

Kunal Mehta: MediaWiki packages for Ubuntu 20.04 Focal available

Tue, 12/05/2020 - 10:21pm

Packages for the MediaWiki 1.31 LTS release are now available for the new Ubuntu 20.04 LTS "Focal Fossa" release in my PPA. Please let me know if you run into any errors or issues.

In the future these packages will be upgraded to the MediaWiki 1.35 LTS release, whenever that's ready. It's currently delayed because of the pandemic, but I expect that it'll be ready for the next Debian release.

Evgeni Golov: Building a Shelly 2.5 USB to TTL adapter cable

Tue, 12/05/2020 - 12:44pm

When you want to flash your Shelly 2.5 with anything but the original firmware for the first time, you'll need to attach it to your computer. Later flashes can happen over the air (at least with ESPHome or Tasmota), but the first one cannot.

In theory, this is not a problem as the Shelly has a quite exposed and well documented interface:

However, on closer inspection you'll notice that your normal jumper wires don't fit as the Shelly has a connector with 1.27mm (0.05in) pitch and 1mm diameter holes.

Now, there are various tutorials on the Internet on how to build a compatible connector using Ethernet cables and hot glue, or with female header socket legs, and you can even buy cables on Amazon for 18€! But 18€ sounded like a lot, and the female header socket thing, while working, was pretty finicky to use, so I decided to build something different.

We'll need 6 female-to-female jumper wires and a 1.27mm pitch male header. Jumper wires I had at home, the header I got is a SL 1X20G 1,27 from reichelt.de for 0.61€. It's a 20 pin one, so we can make 3 adapters out of it if needed. Oh and we'll need some isolation tape.

The first step is to cut the header into 6 pin chunks. Make sure not to cut too close to the 6th pin as the whole thing is rather fragile and you might lose it.

It now fits very well into the Shelly with the longer side of the pins.

Second step is to strip the plastic part of one side of the jumper wires. Those are designed to fit 2.54mm pitch headers and won't work for our use case otherwise.

As the connectors are still too big, even after removing the plastic, the next step is to take some pliers and gently press the connectors until they fit the smaller pins of our header.

Now is the time to put everything together. To avoid short circuiting the pins/connectors, apply some isolation tape while assembling, but not too much as the space is really limited.

And we're done, a wonderful (lol) and working (yay) Shelly 2.5 cable that can be attached to any USB-TTL adapter, like the pictured FTDI clone you get almost everywhere.

Yes, in an ideal world we would have soldered the header to the cable, but I didn't feel like soldering on that limited space. And yes, shrink-wrap might be a good thing too, but again, limited space and with isolation tape you only need one layer between two pins, not two.

Daniel Silverstone: The Lars, Mark, and Daniel Club

Tue, 12/05/2020 - 9:00am

Last night, Lars, Mark, and I discussed Jeremy Kun's The communicative value of using Git well post. While a lot of our discussion was spawned by the article, we did go off-piste a little, and I hope that my notes below will enlighten you all as to a bit of how we see revision control these days. It was remarkably pleasant to read an article where the comments section wasn't a cesspool of horror, so if this posting encourages you to go and read the article, don't stop when you reach the bottom -- the comments are good and useful too.

This was a fairly non-contentious article for us, though each of us had points we wished to bring up and chat about, and it turned into a very convivial chat. We saw the main thrust of the article as being about using the metadata of revision control to communicate intent, process, and decision making. We agreed that it must be possible to do so effectively with Mercurial (thus deciding that the mention of it was simply a bit of clickbait / red herring) and indeed Mark figured that he was doing at least some of this kind of thing way back with CVS.

We all discussed how knowing the fundamentals of Git's data model improved our ability to work with the tool. Lars and I mentioned how jarring it had originally been to come to Git from revision control systems such as Bazaar (bzr), but how over time we came to appreciate Git for what it is. For Mark this was less painful because he came to Git early enough that there was little more than the fundamental data model, without much of the porcelain which now exists.

One point which we all, though Mark in particular, felt was worth considering was that of distinguishing between published and unpublished changes. The article touches on it a little, but one of the benefits of the workflow which Jeremy espouses is that of using the revision control system as an integral part of the review pipeline. This is perhaps done very well with Git based workflows, but can be done with other VCSs.

With respect to the points Jeremy makes regarding making commits which are good for reviewing, we had a general agreement that things were good and sensible, to a point, but that some things were missed out on. For example, I raised that commit messages often need to be more thorough than one-liners, but Jeremy's examples (perhaps through expedience for the article?) were all pretty trite one-liners which perhaps would have been entirely obvious from the commit content. Jeremy makes the point that large changes are hard to review, and Lars pointed out that Greg Wilson did research in this area, and at least one article mentions 200 lines as a maximum size of a reviewable segment.

I had a brief warble at this point about how reviewing needs to be able to consider the whole of the intended change (i.e. a diff from base to tip) not just individual commits, which is also missing from Jeremy's article, but that such a review does not need to necessarily be thorough and detailed since the commit-by-commit review remains necessary. I use that high level diff as a way to get a feel for the full shape of the intended change, a look at the end-game if you will, before I read the story of how someone got to it. As an aside at this point, I talked about how Jeremy included a 'style fixes' commit in his example, but I loathe seeing such things and would much rather it was either not in the series because it's unrelated to it; or else the style fixes were folded into the commits they were related to.

We discussed how style rules, commit-bisectability, and other rules which may exist for a codebase (adherence to which forms part of the checks that a code reviewer may perform) are there to be held to when they help the project, and to be broken when they are in the way of good communication between humans.

In this, Lars talked about how revision control histories provide high level valuable communication between developers. Communication between humans is fraught with error and the rules are not always clear on what will work and what won't, since this depends on the communicators, the context, etc. However, whatever communication rules are in place should be followed. We often say that it takes two people to communicate information, but when you're writing commit messages or arranging your commit history, the second party is often a nebulous "other", and so the code reviewer fulfils that role to concretise it for the purpose of communication.

At this point, I wondered a little about what value there might be (if any) in maintaining the metachanges (pull request info, mailing list discussions, etc) for historical context of decision making. Mark suggested that this is useful for design decisions etc but not for the style/correctness discussions which often form a large section of review feedback. Maybe some of the metachange tracking is done automatically by the review causing the augmentation of the changes (e.g. by comments, or inclusion of design documentation changes) to explain why changes are made.

We discussed how the "rebase always vs. rebase never" feeling flip-flopped in us for many years until, as an example, what finally won Lars over was that he wants the history of the project to tell the story, in the git commits, of how the software has changed and evolved in an intentional manner. Lars said that he doesn't care about the meanderings, but rather about a clear story which can be followed and understood.

I described this as the switch from "the revision history is about what I did to achieve the goal" to being more "the revision history is how I would hope someone else would have done this". Mark further refined that to "The revision history of a project tells the story of how the project, as a whole, chose to perform its sequence of evolution."

We discussed how project history must necessarily then contain issue tracking, mailing list discussions, wikis, etc. There exist free software projects where part of their history is forever lost because, for example, the project moved from Sourceforge to Github, but made no effort (or was unable) to migrate issues or somesuch. Linkages between changes and the issues they relate to can easily be broken, though at least with mailing lists you can often rewrite URLs if you have something consistent like a Message-Id.

We talked about how cover notes, pull request messages, etc. can thus also be lost to some extent. Is this an argument to always use merges whose message bodies contain those details, rather than always fast-forwarding? Or is it a reason to encapsulate all those discussions into git objects which can be forever associated with the changes in the DAG?

We then diverted into discussion of CI, testing every commit, and the benefits and limitations of automated testing vs. manual testing; though I think that's a little too off-piste for even this summary. We also talked about how commit message audiences include software perhaps, with the recent movement toward conventional commits and how, with respect to commit messages for machine readability, it can become very complex/tricky to craft good commit messages once there are multiple disparate audiences. For projects the size of the Linux kernel this kind of thing would be nearly impossible, but for smaller projects, perhaps there's value.

Finally, we all agreed that we liked the quote at the end of the article, and so I'd like to close out by repeating it for you all...

Hal Abelson famously said:

Programs must be written for people to read, and only incidentally for machines to execute.

Jeremy agrees, as do we, and extends that to the metacommit information as well.

Petter Reinholdtsen: Debian Edu interview: Yvan Masson

Tue, 12/05/2020 - 6:30am

It has been way too long since my last interview, but as the Debian Edu / Skolelinux community is still active, and new people keep showing up on the IRC channel #debian-edu and the debian-edu mailing list, I decided to give it another go. I was hoping someone else might pick up the idea and run with it, but this has not happened as far as I can tell, so here we are… This time the announcement of a new free software tool to create a school year book triggered my interest, and I decided to learn more about its author.

Who are you, and how do you spend your days?

My name is Yvan MASSON, and I live in France. I have my own one-person business in computer services. The work consists of visiting my customers (people's homes, local authorities, small businesses) to give advice, install computers and software, fix issues, and provide computing usage training. I spend the rest of my time enjoying my family and promoting free software.

What is your approach for promoting free software?

When I think that free software could be suitable for someone, I explain what it is, with simple words, give a few known examples, and explain that while there is no fee it is a viable alternative in many situations. Most people are receptive when you explain how it is better (I simplify arguments here, I know that it is not so simple): Linux works on older hardware, there are no viruses, and the software can be audited to ensure the user is not spied upon. I think the most important thing is to keep a clear but moderate speech: when you push too hard to convince, people feel attacked and stop listening.

How did you get in contact with the Skolelinux / Debian Edu project?

I can not remember how I first heard of Skolelinux / Debian Edu, but probably on planet.debian.org. As I have been working for a school, I have interest in this type of project.

The school I am involved in is a school for "children" between 14 and 18 years old. The French government has recommended free software since 2012, but they do not always use free software themselves. The school computers are still using the Windows operating system, but all of them have the classic set of free software: Firefox ESR, LibreOffice (with the excellent extension Grammalecte that indicates French grammatical errors), SumatraPDF, Audacity, 7zip, KeePass2, VLC, GIMP, Inkscape…

What do you see as the advantages of Skolelinux / Debian Edu?

It is free software! Built on Debian, I am sure that users are not spied upon, and that it can run on low end hardware. This last point is very important, because we really need to improve "green IT". I do not know enough about Skolelinux / Debian Edu to tell how it is better than another free software solution, but what I like is the "all in one" solution: everything has been thought of and prepared to ease installation and usage.

I like Free Software because I hate using something that I can not understand. I do not say that I can understand everything nor that I want to understand everything, but knowing that someone / some company intentionally prevents me from understanding how things work is really unacceptable to me.

Secondly, and more importantly, free software is a requirement to prevent abuses regarding human rights and environmental care. Humanity cannot rely on tools that are in the hands of a small group of people.

What do you see as the disadvantages of Skolelinux / Debian Edu?

Again, I don't know this project enough. Maybe a dedicated website? Debian wiki works well for documentation, but is not very appealing to someone discovering the project. Also, as Skolelinux / Debian Edu uses OpenLDAP, it probably means that Windows workstations cannot use centralized authentication. Maybe the project could use Samba as an Active Directory domain controller instead, allowing Windows desktop usage when necessary.

(Editor's note: in fact Windows workstations can use the centralized authentication in a Debian Edu setup, at least for some versions of Windows, but the fact that this is not well known can be seen as an indication of the need for better documentation and marketing. :)

Which free software do you use daily?

Nothing original: Debian testing/sid with Gnome desktop, Firefox, Thunderbird, LibreOffice…

Which strategy do you believe is the right one to use to get schools to use free software?

Every effort to spread free software into schools is important, whatever it is. But I think, at least where I live, that IT professionals maintaining schools networks are still very "Microsoft centric". Schools will use any working solution, but they need people to install and maintain it. How to make these professionals sensitive about free software and train them with solutions like Debian Edu / Skolelinux is a really good question :-)

Jacob Adams: Roman Finger Counting

Tue, 12/05/2020 - 2:00am

I recently wrote a final paper on the history of written numerals. In the process, I discovered this fascinating tidbit that didn’t really fit in my paper, but I wanted to put it somewhere. So I’m writing about it here.

If I were to ask you to count as high as you could on your fingers you’d probably get up to 10 before running out of fingers. You can’t count any higher than the number of fingers you have, right? The Romans could! They used a place-value system, combined with various gestures to count all the way up to 9,999 on two hands.

The System

(Note that in this diagram 60 is, in fact, wrong, and this picture swaps the hundreds and the thousands.)

We’ll start with the units. The last three fingers of the left hand, middle, ring, and pinkie, are used to form them.

Zero is formed with an open hand, the opposite of the finger counting we’re used to.

One is formed by bending the middle joint of the pinkie, two by including the ring finger and three by including the middle finger, all at the middle joint. You’ll want to keep all these bends fairly loose, as otherwise these numbers can get quite uncomfortable. For four, you extend your pinkie again, for five, also raise your ring finger, and for six, you raise your middle finger as well, but then lower your ring finger.

For seven you bend your pinkie at the bottom joint, for eight adding your ring finger, and for nine, including your middle finger. This mirrors what you did for one, two and three, but bending the finger at the bottom joint now instead.

This leaves your thumb and index finger for the tens. For ten, touch the nail of your index finger to the inside of your top thumb joint. For twenty, put your thumb between your index and middle fingers. For thirty, touch the nails of your thumb and index fingers. For forty, bend your index finger slightly towards your palm and place your thumb between the middle and top knuckle of your index finger. For fifty, place your thumb against your palm. For sixty, leave your thumb where it is and wrap your index finger around it (the diagram above is wrong). For seventy, move your thumb so that the nail touches between the middle and top knuckle of your index finger. For eighty, flip your thumb so that the bottom of it now touches the spot between the middle and top knuckle of your index finger. For ninety, touch the nail of your index finger to your bottom thumb joint.

The hundreds and thousands use the same positions on the right hand, with the units being the thousands and the tens being the hundreds. One account, from which the picture above comes, swaps these two, but the first account we have uses this ordering.

Combining all these symbols, you can count all the way to 9,999 yourself on just two hands. Try it!

History The Venerable Bede

The first written record of this system comes from the Venerable Bede, an English Benedictine monk who died in 735.

He wrote De computo vel loquela digitorum, “On Calculating and Speaking with the Fingers,” as the introduction to a larger work on chronology, De temporum ratione. (The primary calculation done by monks at the time was calculating the date of Easter, the first Sunday after the first full moon of spring).

He also includes numbers from 10,000 to 1,000,000, but it's unknown whether these were inventions of the author, and they were likely rarely used regardless. They require moving your hands to various positions on your body, as illustrated below, from Jacob Leupold’s Theatrum Arithmetico-Geometricum, published in 1727:

The Romans

If Bede was the first to write it, how do we know that it came from Roman times? It’s referenced in many Roman writings, including this bit from the Roman satirist Juvenal who died in 130:

Felix nimirum qui tot per saecula mortem distulit atque suos iam dextera computat annos.

Happy is he who so many times over the years has cheated death And now reckons his age on the right hand.

Because of course the right hand is where one counts hundreds!

There’s also this Roman riddle:

Nunc mihi iam credas fieri quod posse negatur: octo tenes manibus, si me monstrante magistro sublatis septem reliqui tibi sex remanebunt.

Now you shall believe what you would deny could be done: In your hands you hold eight, as my teacher once taught; Take away seven, and six still remain.

If you form eight with this system and then remove the symbol for seven, you get the symbol for six!

Sources

My source for this blog post is Paul Broneer’s 1969 English translation of Karl Menninger’s Zahlwort und Ziffer (Number Words and Number Symbols).

Enrico Zini: Fraudsters and pirates

Mon, 11/05/2020 - 1:00am
Adele Spitzeder - Wikipedia (history, people, archive.org) 2020-05-11
Adelheid Luise "Adele" Spitzeder ([ˈaːdl̩haɪt ʔaˈdeːlə ˈʃpɪtˌtseːdɐ]; 9 February 1832 – 27 or 28 October 1895), also known by her stage name Adele Vio, was a German actress, folk singer, and con artist. Initially a promising young actress, Spitzeder became a well-known private banker in 19th-century Munich when her theatrical success dwindled. Running what was possibly the first recorded Ponzi scheme, she offered large returns on investments by continually using the money of new investors to pay back the previous ones. At the height of her success, contemporary sources considered her the wealthiest woman in Bavaria.

Anne Bonny - Wikipedia (history, people, archive.org) 2020-05-11
Anne Bonny (possibly 1697 – possibly April 1782)[1][2] was an Irish pirate operating in the Caribbean, and one of the most famous female pirates of all time.[3] The little that is known of her life comes largely from Captain Charles Johnson's A General History of the Pyrates.

Mary Read - Wikipedia (history, people, archive.org) 2020-05-11
Mary Read (1685 – 28 April 1721), also known as Mark Read, was an English pirate. She and Anne Bonny are two of the most famed female pirates of all time, and among the few women known to have been convicted of piracy during the early 18th century, at the height of the "Golden Age of Piracy".

Women in piracy - Wikipedia (history, people, archive.org) 2020-05-11
While piracy was predominantly a male occupation, a minority of pirates were women.[1] On many ships, women (as well as young boys) were prohibited by the ship's contract, which all crew members were required to sign.[2]:303

Ben Hutchings: Debian LTS work, April 2020

Sun, 10/05/2020 - 7:27pm

I was assigned 20 hours of work by Freexian's Debian LTS initiative, and carried over 8.5 hours from March. I worked 26 hours this month, so I will carry over 2.5 hours to May.

I sent a (belated) request for testing an update of the linux package to 3.16.82. I then prepared and, after review, released Linux 3.16.83, including a large number of security fixes. I rebased the linux package onto that and will soon send out a request for testing. I also spent some time working on a still-embargoed security issue.

I did not spend significant time on any other LTS activities this month, and unfortunately missed the contributor meeting.

Russell Coker: IT Asset Management

Sun, 10/05/2020 - 3:42pm

In my last full-time position I managed the asset tracking database for my employer. It was one of those things that “someone” needed to do, and it seemed that the only way that “someone” wouldn’t equate to “no-one” was for me to do it – which was ok. We used Snipe IT [1] to track the assets. I don’t have enough experience with asset tracking to say that Snipe is better or worse than average, but it basically did the job. Asset serial numbers are stored, you can have asset types that allow you to just add one more of the particular item, purchase dates are stored which makes warranty tracking easier, and every asset is associated with a person or listed as available. While I can’t say that Snipe IT is better than other products I can say that it will do the job reasonably well.

One problem that I didn’t discover until way too late was the fact that the finance people weren’t tracking serial numbers and that some assets in the database had the same asset IDs as the finance department and some had different ones. The best advice I can give to anyone who gets involved with asset tracking is to immediately chat to finance about how they track things, you need to know if the same asset IDs are used and if serial numbers are tracked by finance. I was pleased to discover that my colleagues were all honourable people as there was no apparent evaporation of valuable assets even though there was little ability to discover who might have been the last person to use some of the assets.

One problem that I’ve seen at many places is treating small items like keyboards and mice as “assets”. I think that anything that is worth less than 1 hour’s pay at the minimum wage (the price of a typical PC keyboard or mouse) isn’t worth tracking, treat it as a disposable item. If you hire a programmer who requests an unusually expensive keyboard or mouse (as some do) it still won’t be a lot of money when compared to their salary. Some of the older keyboards and mice that companies have are nasty, months of people eating lunch over them leaves them greasy and sticky. I think that the best thing to do with the keyboards and mice is to give them away when people leave and when new people join the company buy new hardware for them. If a company can’t spend $25 on a new keyboard and mouse for each new employee then they either have a massive problem of staff turnover or a lack of priority on morale.


Norbert Preining: Updating Dovecot for Debian

Sun, 10/05/2020 - 2:54pm

A tweet from a friend pointed me at the removal of dovecot from Debian/testing, which surprised me a bit. Investigating the situation, it seems that Dovecot in Debian is lagging a bit behind in releases and hasn’t seen responses to some RC bugs. This sounds critical to me, as dovecot is a core part of many mail setups, so I prepared updated packages.

Based on the latest released version of Dovecot, 2.3.10, I have made a package starting from the current Debian packaging and adjusted to the newer upstream. The package builds on Debian Buster (10), Testing, and Unstable on i386 and x64 archs. The packages are available on OBS, as usual:

For Unstable:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-dovecot/Debian_Unstable/ ./

For Testing:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-dovecot/Debian_Testing/ ./

For Debian 10 Buster:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-dovecot/Debian_10/ ./

To make these repositories work, don’t forget that you need to import my OBS gpg key: obs-npreining.asc, best to download it and put the file into /etc/apt/trusted.gpg.d/obs-npreining.asc.
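Putting the pieces together for, e.g., Unstable looks roughly like this sketch; the key URL is left as a placeholder on purpose, use the obs-npreining.asc file linked above, and the dovecot-imapd package is just an example of what you might install on top of dovecot-core:

wget -O /etc/apt/trusted.gpg.d/obs-npreining.asc '<URL of obs-npreining.asc>'
echo 'deb https://download.opensuse.org/repositories/home:/npreining:/debian-dovecot/Debian_Unstable/ ./' > /etc/apt/sources.list.d/dovecot-npreining.list
apt-get update && apt-get install dovecot-core dovecot-imapd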

These packages are provided without any warranty. Enjoy.

NOKUBI Takatsugu: Virtual Background using webcam

Sun, 10/05/2020 - 12:50pm

I made a webpage to produce virtual background with webcam.

https://knok.github.io/virtbg/

Source code:
https://github.com/knok/knok.github.io/tree/master/virtbg

Some online meeting software (Zoom, Microsoft Teams) supports virtual backgrounds, but I want to use other software like Jitsi (or Google Meet), so I made my own.

To make this, I referred to the article “Open Source Virtual Background”. 
The following figure is the diagram.

That approach depends on Docker, a GPU, and v4l2loopback (which only works on Linux), so I wanted to make a more generic solution. By making it a webpage, and by using OBS Studio with plugins (obs-v4l2sink, OBS-VirtualCam or OBS (macOS) Virtual Camera), you can use the solution on more platforms.
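For reference, the Linux-only piece that the diagrammed setup (and the obs-v4l2sink plugin) relies on is a v4l2loopback virtual camera, created roughly like this sketch; the single-webpage approach above avoids this step entirely:

# create one virtual /dev/video* device that OBS can write to and meeting apps can read
modprobe v4l2loopback devices=1 card_label="Virtual Background" exclusive_caps=1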

By making it a single webpage, I can reduce the overhead of inter-process communication over HTTP via Docker.

This is an example animation:

A snapshot using Jitsi:

Unfortunately, BodyPix releases only pretrained models, no training data.

I need more improvements:

  • Accept any background images
  • Support choosing the camera device
  • Useful UI

Andrew Cater: CD / DVD testing for Buster release 4 - 202005092130 - Slowing down a bit - but still going.

Sat, 09/05/2020 - 11:39pm
The last few architectures are being built in the background. Schweer has just confirmed successful testing of all the Debian Edu images - thanks to him, as ever, and to all involved. We're slowing down a bit - it's been a long, hot day and it's not quite over yet. The images release looks to be well on course. As ever, the point release incorporates security fixes, and some packages have been removed. The release announcement at https://www.debian.org/News/2020/20200509 gives the details.