Planet GNOME
https://planet.gnome.org/

Jordan Petridis: Navigating Docker for Windows versions

Thu, 23/01/2020 - 5:54pm

Recently, gitlab-runner got support for running jobs inside Windows containers. This greatly simplifies the setup needed to get a Windows CI job up and running, and it makes the process similar to how we do Linux builds.

Windows, though, has a couple of gotchas: the behavior of Docker on Windows can vary vastly depending on which binary and/or configuration you use.

Containers on Windows are dependent on the server version of the host. For example, Server 2016 (1607) containers can only be executed on a Server 2016 host. Currently there are two popular base versions that Docker supports, Server 2016 and Server 2019. Gitlab-runner only supports Server 2019, so we will go with that.

Microsoft recently introduced a set of process isolation APIs; up until now, Docker had been using a Hyper-V backend to launch containers. Process isolation results in lighter-weight containers, as opposed to the light VMs you get with Hyper-V, but it also means Docker for Windows now has two isolation backends, which can affect the behavior of your containers differently! Process isolation seems to be the default on Server 2019.
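A quick way to see which backend you are on, and to pick one explicitly, is Docker’s --isolation flag (a sketch; the image tag below is just an illustration):

docker info
docker run --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver
docker run --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver

docker info reports the default isolation mode on Windows hosts, and the flag overrides it per container.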

Next you want to go and install Docker, so you look up the Microsoft documentation. There’s one option for the servers, but there’s also an option for Docker Desktop for Windows 10, and to add to this, Docker Desktop itself has two versions, depending on whether you want the Community or Enterprise Edition! Also, you can’t install Docker on Windows 10 the same way you would on the Server, so you are left having to mix different binaries and hope everything works out. Spoiler: it didn’t work for me! There were a couple of build/networking failures when using Docker Desktop, where the same Dockerfile built fine on the server host.

Bonus round:

Docker Desktop has two modes, one for running Linux containers and one for Windows containers! We talked about the two different isolation backends for Windows above. Now it looks like the Linux containers mode got a backend based on WSL2, in addition to the existing Hyper-V backend it has been using.

Hope you made it to the end without losing count of all the different “containers” in Windows land.

Tim Janik: Intersecting Intel & AMD Instruction Set Extensions

Thu, 23/01/2020 - 12:31pm
In some of my projects, I’ve recently had the need to utilize FMA (fused-multiply-add) or AVX instructions. Compiling C/C++ on x86_64 will by default only activate MMX and a few of the early SSE extensions. The utilized instruction set basically predates the Core 2, which was introduced in 2006. Math…
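For reference, these extensions are opt-in at compile time; with GCC or Clang they are typically enabled with target flags:

gcc -O2 -mavx -mfma code.c
gcc -O2 -march=native code.c

The second form enables everything the build machine supports, at the cost of portability to other CPUs.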

Jussi Pakkanen: The Meson Manual is now available for purchase

Tue, 21/01/2020 - 12:26am
Some of you might remember that last year I ran a crowdfunding campaign to create a full written user manual for Meson. That failed fairly spectacularly, mostly due to the difficulty of getting any sort of visibility for these kinds of projects (i.e. on the Internet, everything drowns).

Not taking the hint, I chose to write and publish it on my own anyway. It is now available on this web page for the price of 29.95€ plus a tax that depends on the country of purchase. Some countries which have unreasonable requirements for foreign online sellers, such as India, Russia and South Korea, have been geoblocked. Sorry about that. However, you can still buy the book if you are traveling outside the country in question, but in that case all tax responsibilities for importing fall upon you.
What if you don't care about books?

I don't have a Patreon or any other crowdfunding thing ongoing, because of the considerable legal uncertainties of running a donation-based service for the public good in Finland. Selling digital goods for money is fine, so this is a convenient way for people to support my work on Meson financially.

Will the book be made available under a free license?

No. We already have one set of free documentation on the project web site. Everyone is free to use and contribute to that documentation. This book contains no text from the existing documentation; it is all new and written from scratch.

Is it available as a hard copy?

No, the only available format is PDF. This is both to save trees and because international shipping of physical items is both time consuming and expensive.

Getting review copies

If you are a journalist and wish to write a review of the book for a publication, send me an email and I'll provide you with a free review copy.

When was the book first made public?

It was announced at the very beginning of my LCA2020 talk. See it for yourself in the embedded video below.

Can you post about this on your favourite social media site / news aggregator / etc?

Yes, by all means. It is hard to get visibility without it, so I appreciate all the help I can get.

What was that site's URL again?

https://meson-manual.com

Matthew Garrett: Verifying your system state in a secure and private way

Mon, 20/01/2020 - 1:53pm
Most modern PCs have a Trusted Platform Module (TPM) and firmware that, together, support something called Trusted Boot. In Trusted Boot, each component in the boot chain generates a series of measurements of the next component of the boot process and relevant configuration. These measurements are pushed to the TPM where they're combined with the existing values stored in a series of Platform Configuration Registers (PCRs) in such a way that the final PCR value depends on both the value and the order of the measurements it's given. If any measurements change, the final PCR value changes.

Windows takes advantage of this with its Bitlocker disk encryption technology. The disk encryption key is stored in the TPM along with a policy that tells it to release it only if a specific set of PCR values is correct. By default, the TPM will release the encryption key automatically if the PCR values match and the system will just transparently boot. If someone tampers with the boot process or configuration, the PCR values will no longer match and boot will halt to allow the user to provide the disk key in some other way.

Unfortunately the TPM keeps no record of how it got to a specific state. If the PCR values don't match, that's all we know - the TPM is unable to tell us what changed to result in this breakage. Fortunately, the system firmware maintains an event log as we go along. Each measurement that's pushed to the TPM is accompanied by a new entry in the event log, containing not only the hash that was pushed to the TPM but also metadata that tells us what was measured and why. Since the algorithm the TPM uses to calculate the hash values is known, we can replay the same values from the event log and verify that we end up with the same final value that's in the TPM. We can then examine the event log to see what changed.
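As a rough sketch of that replay step (my illustration, not code from the post), simulating a SHA-256 PCR is just a loop over the logged digests:

import hashlib

def replay_pcr(event_digests):
    # PCRs start as all zeroes; each measurement is folded in as
    # PCR = SHA256(PCR || digest), so the final value depends on both
    # the digests and the order in which they were extended.
    pcr = b"\x00" * 32
    for digest in event_digests:
        pcr = hashlib.sha256(pcr + digest).digest()
    return pcr

If the replayed value matches what the TPM reports, the log is consistent with the PCR; if not, one of the two has been tampered with.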

Unfortunately, the event log is stored in unprotected system RAM. In order to be able to trust it we need to compare the values in the event log (which can be tampered with) with the values in the TPM (which are much harder to tamper with). Unfortunately if someone has tampered with the event log then they could also have tampered with the bits of the OS that are doing that comparison. Put simply, if the machine is in a potentially untrustworthy state, we can't trust that machine to tell us anything about itself.

This is solved using a procedure called Remote Attestation. The TPM can be asked to provide a digital signature of the PCR values, and this can be passed to a remote system along with the event log. That remote system can then examine the event log, make sure it corresponds to the signed PCR values and make a security decision based on the contents of the event log rather than just on the final PCR values. This makes the system significantly more flexible and aids diagnostics. Unfortunately, it also means you need a remote server and an internet connection, and then some way for that remote server to tell you whether it thinks your system is trustworthy, and also some way to believe that the remote server is trustworthy itself, and all of this is, well, not ideal if you're not an enterprise.

Last week I gave a talk at linux.conf.au on one way around this. Basically, remote attestation places no constraints on the network protocol in use - while the implementations that exist all do this over IP, there's no requirement for them to do so. So I wrote an implementation that runs over Bluetooth, in theory allowing you to use your phone to serve as the remote agent. If you trust your phone, you can use it as a tool for determining if you should trust your laptop.

I've pushed some code that demos this. The current implementation does nothing other than tell you whether UEFI Secure Boot was enabled or not, and it's also not currently running on a phone. The phone bit of this is pretty straightforward to fix, but the rest is somewhat harder.

The big issue we face is that we frequently don't know what event log values we should be seeing. The first few values are produced by the system firmware and there's no standardised way to publish the expected values. The Linux Vendor Firmware Service has support for publishing these values, so for some systems we can get hold of this. But then you get to measurements of your bootloader and kernel, and those change every time you do an update. Ideally we'd have tooling for Linux distributions to publish known good values for each package version and for that to be common across distributions. This would allow tools to download metadata and verify that measurements correspond to legitimate builds from the distribution in question.

This does still leave the problem of the initramfs. Since initramfs files are usually generated locally, and depend on the locally installed versions of tools at the point they're built, we end up with no good way to precalculate those values. I proposed a possible solution to this a while back, but have done absolutely nothing to help make that happen. I suck. The right way to do this may actually just be to turn initramfs images into pre-built artifacts and figure out the config at runtime (dracut actually supports a bunch of this already), so I'm going to spend a while playing with that.

If we can pull these pieces together then we can get to a place where you can boot your laptop and then, before typing any authentication details, have your phone compare each component in the boot process to expected values. Assistance in all of this extremely gratefully received.


Julian Sparber: Digitizing a analog water meter

Sat, 18/01/2020 - 11:22am

For a university project I spent some time working on digitally tracking the water consumption in my shared flat. Since nowadays everything is about data collection, I wanted to give this idea a shot. We have a simple analog water meter in the house.

Sadly, my meter is really dirty under the glass, and I couldn’t manage to clean it. This will cause problems down the road.

The initial idea was easy: add a webcam on top of the meter and read the number on its upper half. But I soon realized that the project wouldn’t be that simple. The number only shows the usage in units of 1 m^3 (1000 liters), which means it would change only every couple of days, which is useless and boring. So I had to read the analog gauges, which show the fractions at 0.0001, 0.001, 0.01 and 0.1 m^3. This discovery blocked me, and I was like “this is way too complicated”.

I have no idea how I found or what reminded me of OpenCV, but that was the solution. OpenCV is an awesome tool for computer vision; it has many features like facial recognition, gesture recognition… and also shape recognition. What’s an analog gauge? It’s just a circle with a triangular arrow indicating the value.

Let’s jump into the project

I’m using a Raspberry Pi 1, a Logitech webcam, a juice bottle and some LEDs out of a bicycle light.

You need to find a juice bottle which fits nicely over the water meter. Cut off the top and bottom of the bottle and replace one side with a piece of cardboard or wood with a hole in the middle. Attach the webcam centered over the hole and place an LED on each side of the webcam to illuminate the water meter (you may need to cover them with paper to reduce reflections on the plastic of the meter).

The first step is to set up the Raspberry Pi 1 (it doesn’t have to be an RPI; any computer running Linux should work fine). You have to install a Linux distro on the device; I used Arch Linux. You can find a guide to install it on a Raspberry Pi 1 here.

After the initial setup you need to install git, python3 and opencv:
sudo pacman -S python3 git opencv
Clone the needed code to a known location:
git clone https://github.com/jsparber/water-meter-code.git
You need to create a new git repository to store the data and clone it to /home/alarm/water-meter-data/. If you want to use a different name or location, you need to modify the name in measure.sh.

On the RPI I have a cronjob which runs a script every minute. The script turns on the LED, takes a picture, and then turns the LED off again to save some energy.
With crontab -e you can modify the cron jobs; add * * * * * /home/alarm/code/take_photo.sh to run take_photo.sh every minute. You may need to adjust the path depending on where you cloned the git repo.
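A minimal sketch of what the capture step might look like, assuming OpenCV’s VideoCapture sees the webcam as device 0 (the output path is illustrative):

import cv2

cam = cv2.VideoCapture(0)    # first video device, i.e. the webcam
ok, frame = cam.read()       # grab a single frame
if ok:
    cv2.imwrite("/home/alarm/latest.jpg", frame)  # picture for the OpenCV script
cam.release()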

After the picture is taken, the script calls a second script which uses OpenCV to read the gauges and appends the found values to a file, which is then pushed to the git repo. I had an issue with the webcam: after some time my script couldn’t access it anymore. I solved this by rebooting the RPI whenever it wasn’t possible to take a picture. (I did a quick search on the internet; most people solved this issue by changing the cam.)

A nice optional feature is the home-made switch connected to the RPI in the picture above. The schematic is really simple: just a 1 kOhm resistor, a transistor and a USB extension cable. The transistor is switched on via GPIO pin 18 of the Raspberry Pi and gives power to the connected USB device. In this case I used it to power the LEDs.

Inside the USB extension cable there should be 4 differently colored wires. We need to cut only the red one and connect it the way the schematic above shows, where red_in goes to the male connector and red_out to the female side of the cable. The GND needs to be connected to the ground pin of the Raspberry Pi. If you need to power something which requires more than 500 mA, you should connect the ground directly to the power source, the same way as you did with the +5V red cable. You need to use the same power source for the switch and the RPI, or it may not work.
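Driving the switch from a script is then a few lines, for example with the RPi.GPIO Python library (the pin numbering mode and the timing are my assumptions):

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)       # Broadcom numbering, so pin 18 is GPIO18
GPIO.setup(18, GPIO.OUT)

GPIO.output(18, GPIO.HIGH)   # transistor conducts, the LEDs get power
time.sleep(2)                # give the webcam a lit scene to expose
GPIO.output(18, GPIO.LOW)    # LEDs off again to save energy
GPIO.cleanup()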

And now the OpenCV part

First my code finds the circles of the right size in the image, and uses the two left-most ones as the gauges for 0.1 m^3 and 0.01 m^3 (sadly, since my meter is so dirty, I can’t reliably read the other two values).

Images: the input image, and the circles of the right size that were found in it.
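The circle detection maps to a single OpenCV call. A hedged sketch of this step (the parameter values are guesses; the real ones need tuning to the image):

import cv2

img = cv2.imread("/home/alarm/latest.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)   # smooth noise so Hough sees cleaner edges

# Find circles in the radius range of the gauges; minDist keeps them apart.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=100, param2=30, minRadius=15, maxRadius=40)

if circles is not None:
    # Sort left to right and keep the two left-most circles:
    # the 0.1 m^3 and 0.01 m^3 gauges.
    gauges = sorted(circles[0], key=lambda c: c[0])[:2]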

As the second step, I create a mask which filters out everything that’s not red (remember, the arrows are red). I take the contour of the mask which encloses the center of the circle I want to read, and find the point on it farthest from the center of the circle, which is the tip of the arrow. The software then creates a virtual line between the center and the tip, which is used to calculate the angle, which is basically the value shown on the gauge. The same thing is repeated for the other gauges.

Images: the mask with only the red areas showing, and the arrows found on the source image; these lines are used to calculate the angle.
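Again as a sketch (the HSV thresholds and the dial direction are assumptions; note that red wraps around hue 0 in HSV, so the mask needs two ranges):

import math
import cv2

img = cv2.imread("/home/alarm/latest.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 70, 50), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 70, 50), (180, 255, 255))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

def gauge_value(contour, cx, cy):
    # The arrow tip is the contour point farthest from the gauge center.
    tip = max(contour[:, 0], key=lambda p: (p[0] - cx)**2 + (p[1] - cy)**2)
    # Image y grows downwards, so atan2 measures clockwise from the x axis;
    # adding 90 degrees puts 0 at the top of the dial.
    angle = (math.degrees(math.atan2(tip[1] - cy, tip[0] - cx)) + 90) % 360
    return angle / 36.0   # assuming a clockwise dial with 10 divisions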

This system sounds extremely simple, but making everything work well together isn’t that easy. OpenCV requires a lot of tuning, e.g. selecting the right red color so that it is detected reliably but detection keeps working even when the light changes.

Conclusions

I learned a lot during this project, especially about OpenCV, which I had never used before. Sadly my water meter was really dirty, so I couldn’t read all the values, and I also got some wrong readings. So far I haven’t decided what I want to use the collected data for, so I didn’t spend much time on finding a solution for read errors and for the problems I have when the gauges make a full turn. An easy solution would be to just keep an internal count of the water, and when we are unsure about a value, fall back to the memorized one.
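That fallback could be as small as this (a sketch of the idea, not code from the project; the plausible step size is a made-up bound):

def filter_reading(new_value, last_good):
    # Water consumption only ever increases, so a reading that goes
    # backwards or jumps absurdly is treated as a misread and the
    # memorized value is kept instead.
    if last_good <= new_value <= last_good + 0.05:   # max plausible step in m^3
        return new_value
    return last_good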

The final plot can be found here. All values are saved directly, without filtering; this causes the plot to have quite some noise, but it allows me to change the filter function later and adapt it to future needs.

My code is published on GitHub: https://github.com/jsparber/water-meter-code. Some sources helped me a lot, many thanks to them:

Tobias Bernard: Doing Things That Scale

Fri, 17/01/2020 - 10:45pm

There was a point in my life when I ran Arch, had an elaborate personalized terminal prompt, and my own custom icon theme. I stopped doing all these things at various points for different reasons, but underlying them all is a general feeling that it’s taken me some time to figure out how to articulate: I no longer want to invest time in things that don’t scale.

What I mean by that in particular is things that

  1. Only fix a problem for myself (and maybe a small group of others)
  2. Have to be maintained in perpetuity (by me)

Not only is it highly wasteful for me to come up with a custom solution to every problem, but in most cases those solutions would be worse than ones developed in collaboration with others. It also means nobody will help maintain these solutions in the long run, so I’ll be stuck with extra work, forever.

Conversely, things that scale

  1. Fix the problem in a way that will just work for most people, most of the time
  2. Are developed, used, and maintained by a wider community

A few examples:

I used to have an Arch GNU/Linux setup with tons of tweaks and customizations. These days I just run vanilla Fedora. It’s not perfect, but for actually getting things done it’s way better than what I had before. I’m also much happier knowing that if something goes seriously wrong I can reinstall and get to a usable system in half an hour, as opposed to several hours of tedious work for setting up Arch. Plus, this is a setup I can actually install for friends and relatives, because it does a decent job at getting people to update when I’m not around.

Until relatively recently I always set a custom monospace font in my editor and terminal when setting up a new machine. At some point I realized that I wouldn’t have to do that if the default was nicer, so I just opened an issue. A discussion ensued, a better default was agreed upon, and voilà — my problem was solved. One less thing to do after every install. And of course, everyone else now gets a nicer default font too!

I also used to use ZSH with a configuration framework and various plugins to get autocompletion, git status, a fancy prompt etc. A few years ago I switched to fish. It gives me most of what I used to get from my custom ZSH thing, but it does so out of the box, no configuration needed. Of course ideally we’d have all of these things in the default shell so everyone gets these features for free, but that’s hard to do unfortunately (if you’re interested in making it happen I’d love to talk!).

Years ago I used to maintain my own extension set to the Faenza icon theme, because Faenza didn’t cover every app I was using. Eventually I realized that trying to draw a consistent icon for every single third party app was impossible. The more icons I added, the more those few apps that didn’t have custom icons stuck out. Nowadays when I see an app with a poor icon I file an issue asking if the developer would like help with a nicer one. This has worked out great in most cases, and now I probably have more consistent app icons on my system than back when I used a custom theme. And of course, everyone gets to enjoy the nicer icons, not only me.

Some other things that don’t scale (in no particular order):

  • Separate home partition
  • Dotfiles
  • Non-trivial downstream patches
  • Manual tracker/cookie/Javascript blocking (I use uMatrix, which is already a lot nicer than NoScript, but still a pretty terrible experience)
  • Multiple Firefox profiles
  • User styles on websites
  • Running your blog on a static site generator
  • Manual backups
  • Hosting your own email (and self-hosting more generally)
  • Google-free Android (I use Lineage on a Pixel 1, it’s a pretty miserable existence)
  • Buying a Windows computer and installing GNU/Linux
  • Auto-starting apps
  • Custom keyboard shortcuts, e.g. for launching apps (I still have a few of these, mostly because of muscle memory)

The free software community tends to celebrate custom, hacky solutions to problems as something positive (“It’s so flexible!”), even when these hacks are only necessary because things are broken by default. It’s nice that people with a lot of time and technical skills can fix their own problems, but the benefits from that don’t automatically trickle down to everybody else.

If we want ethical technology to become accessible to more people, we need to invest our (very limited) time and energy in solutions that scale. This means good defaults instead of endless customization, apps instead of scripts, “it just works” instead of “read the fucking manual”. The extra effort to make proper solutions that work for everyone, rather than hacks just for ourselves, can seem daunting, but it is always worth it in the long run. Just as with accessibility and commenting your code, the person most likely to benefit from it is you, in the future.

Hans de Goede: Plug and play support for (Gaming) keyboards with a builtin LCD panel

Fri, 17/01/2020 - 2:39pm
A while ago, as a spin-off of my project to improve support for Logitech wireless keyboards and mice, I also did some work on improving support for (gaming) keyboards with a builtin LCD panel.

Specifically, if you have a Logitech MX5000, G15, G15 v2 or G510 and you want the LCD panel to show something somewhat useful, then on Fedora 31 you can now install the lcdproc package and it will automatically recognize the keyboard and show "top"-like information on it. No need to manually write an LCDd.conf or anything; this works fully plug and play:

sudo dnf install lcdproc
sudo udevadm trigger


If you have an MX5000 and you do not want the LCD panel to show "top"-like info, you may still want to install the mx5000tools package; this will automatically send the system time to the keyboard, after which it will display the time.

Once the 5.5 kernel becomes available as an update for Fedora, you will also be able to use the keys surrounding the LCD panel to control the lcdproc menus on the LCD panel. The 5.5 kernel will also export key-backlight brightness control through the standardized /sys/class/leds API, so that you can control it from e.g. GNOME control-center's power settings, and you get a nice OSD when toggling the brightness level using the key on the keyboard.
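For the curious, /sys/class/leds is plain sysfs, so once the LED shows up you can poke it by hand (the LED name below is a placeholder; it differs per device):

ls /sys/class/leds/
cat /sys/class/leds/<kbd-backlight>/max_brightness
echo 1 | sudo tee /sys/class/leds/<kbd-backlight>/brightness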

The 5.5 kernel will also make the "G" keys send standard input events (evdev events). Once userspace support for the new keycodes these send has landed, this will allow e.g. binding them to actions in GNOME control-center's keyboard settings. But only under Wayland, as the new keycodes are > 255 and X11 does not support keycodes above that.