
Retpoline-enabled GCC

Planet Debian - Mon, 15/01/2018 - 10:28pm

Since I assume there are people out there that want Spectre-hardened kernels as soon as possible, I pieced together a retpoline-enabled build of GCC. It's based on the latest gcc-snapshot package from Debian unstable with H.J.Lu's retpoline patches added, but built for stretch.

Obviously this is really scary prerelease code and will possibly eat babies (and worse, it hasn't taken into account the last-minute change of retpoline ABI, so it will break with future kernels), but it will allow you to compile 4.15.0-rc8 with CONFIG_RETPOLINE=y, and also allow you to assess the cost of retpolines (-mindirect-branch=thunk) in any particularly performance-sensitive userspace code.
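For concreteness, using such a build might look roughly like this - a sketch only, since it needs the patched compiler; the gcc-snapshot path follows Debian's layout, and hot.c is a stand-in for your own code:

```shell
# Kernel build with the retpoline-enabled compiler (CONFIG_RETPOLINE=y set first)
make CC=/usr/lib/gcc-snapshot/bin/gcc -j"$(nproc)" bzImage
# Measuring the retpoline cost in performance-sensitive userspace code
/usr/lib/gcc-snapshot/bin/gcc -O2 -mindirect-branch=thunk -c hot.c
```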

There will be upstream backports at least to GCC 7, but probably pretty far back (I've seen people talk about all the way to 4.3). So you won't have to run my crappy home-grown build for very long—it's a temporary measure. :-)

Oh, and it made Stockfish 3% faster than with GCC 6.3! Hooray.

Steinar H. Gunderson

Quick recap of 2017

Planet Debian - Mon, 15/01/2018 - 12:00pm

I haven’t been posting anything on my personal blog in a long while, let’s fix that!

Part of the reason for this is that I’ve been busy documenting progress on the Debian Installer on my company’s blog. So far, the following posts were published there:

After the Stretch release, it was time to attend DebConf’17 in Montreal, Canada. I’ve presented the latest news on the Debian Installer front there as well. This included a quick demo of my little framework which lets me run automatic installation tests. Many attendees mentioned openQA as the current state of the art technology for OS installation testing, and Philip Hands started looking into it. Right now, my little thing is still useful as it is, helping me reproduce regressions quickly, and testing bug fixes… so I haven’t been trying to port that to another tool yet.

I also gave another presentation in two different contexts: once at a local FLOSS meeting in Nantes, France and once during the mini-DebConf in Toulouse, France. Nothing related to Debian Installer this time, as the topic was how I helped a company upgrade thousands of machines from Debian 6 to Debian 8 (and to Debian 9 since then). It was nice to have Evolix people around, since we shared our respective experience around automation tools like Ansible and Puppet.

After the mini-DebConf in Toulouse, another event: the mini-DebConf in Cambridge, UK. I tried to give a lightning talk about “how helped save the release(s)” but clearly speed was lacking, and/or I had too many things to present, so that didn’t work out as well as I hoped. Fortunately, no time constraints when I presented that during a Debian meet-up in Nantes, France. :)

Since Reproducible Tails builds were announced, it seemed like a nice opportunity to document how my company got involved in early work on reproducibility for the Tails project.

On an administrative level, I’m already done with all the paperwork related to the second financial year. \o/

Next things I’ll likely write about: the first two D-I Buster Alpha releases (many blockers kept popping up, it was really hard to release), and a few more recent release critical bug reports.

Cyril Brulebois KiBi’s blog

Lubuntu Blog: Lubuntu 17.04 has reached End of Life

Planet Ubuntu - Mon, 15/01/2018 - 9:32am
The Lubuntu Team announces that as a non-LTS release, 17.04 has a 9-month support cycle and, as such, reached end of life on Saturday, January 13, 2018. Lubuntu will no longer provide bug fixes or security updates for 17.04, and we strongly recommend that you update to 17.10, which continues to be actively supported with […]

RHL'18 in Saint-Cergue, Switzerland

Planet Debian - Mon, 15/01/2018 - 9:02am

RHL'18 was held at the Centre du Vallon in St-Cergue, the building in the very center of this photo, at the bottom of the piste:

People from various free software communities in the region attended for a series of presentations, demonstrations, socializing and skiing. This event is a lot of fun and I would highly recommend that people look out for the next edition. (subscribe to rhl-annonces on for a reminder email)

Ham radio demonstration

I previously wrote about building a simple antenna for shortwave (HF) reception with software defined radio. That article includes links to purchase all the necessary parts from various sources. Everything described in that article, together with some USB sticks running Debian Hams Live (bootable ham radio operating system), some rolls of string and my FT-60 transceiver, fits comfortably into an OSCAL tote bag like this:

It is really easy to take this kit to an event anywhere, set it up in 10 minutes and begin exploring the radio spectrum. Whether it is a technical event or a village fair, radio awakens curiosity in people of all ages and provides a starting point for many other discussions about technological freedom, distributing stickers and inviting people to future events. My previous blog contains photos of what is in the bag and a video demo.

Open Agriculture Food Computer discussion

We had a discussion about progress building an Open Agriculture (OpenAg) food computer in Switzerland. The next meeting in Zurich will be held on 30 January 2018, please subscribe to the forum topic to receive further details.

Preparing for Google Summer of Code 2018

In between eating fondue and skiing, I found time to resurrect some of my previous project ideas for Google Summer of Code. Most of them are not specific to Debian, several of them need co-mentors, please contact me if you are interested.

Daniel.Pocock - debian

Nathan Haines: Introducing the Ubuntu Free Culture Showcase for 18.04

Planet Ubuntu - Mon, 15/01/2018 - 9:00am

Ubuntu’s changed a lot in the last year, and everything is leading up to a really exciting event: the release of 18.04 LTS! This next version of Ubuntu will once again offer a stable foundation for countless humans who use computers for work, play, art, relaxation, and creation. Among the various visual refreshes of Ubuntu, it’s also time to go to the community and ask for the best wallpapers. And it’s also time to look for a new video and music file that will be waiting for Ubuntu users on the install media’s Examples folder, to reassure them that their video and sound drivers are quite operational.

Long-term support releases like Ubuntu 18.04 LTS are very important, because they are downloaded and installed ten times more often than every single interim release combined. That means that the wallpapers, video, and music that are shipped will be seen ten times more than in other releases. So artists, select your best works. Ubuntu enthusiasts, spread the word about the contest as far and wide as you can. Everyone can help make this next LTS version of Ubuntu an amazing success.

All content must be released under a Creative Commons Attribution-Sharealike or Creative Commons Attribution license. (The Creative Commons Zero waiver is okay, too!) Each entrant must only submit content they have created themselves, and all submissions must adhere to the Ubuntu Code of Conduct.

The winners will be featured in the Ubuntu 18.04 LTS release this April!

There are a lot of details, so please see the Ubuntu Free Culture Showcase wiki page for details and links to where you can submit your work from now through March 15th. Good luck!


Planet Debian - Mon, 15/01/2018 - 12:54am

A few comments on Star Wars: The Last Jedi.

Vice Admiral Holdo’s subplot was a huge success. She had to make a very difficult call over which she knew she might face a mutiny from the likes of Poe Dameron. The core of her challenge was that there was no speech or argument she could have given that would have placated Dameron and restored unity to the crew. Instead, Holdo had to press on in the face of that disunity. This reflects the fact that, sometimes, living as one should demands pressing on in the face of deep disagreement with others.

Not making it clear that Dameron was in the wrong until very late in the film was a key component of the successful portrayal of the unpleasantness of what Holdo had to do. If instead it had become clear to the audience early on that Holdo’s plan was obviously the better one, we would not have been able to observe the strength of Holdo’s character in continuing to pursue her plan despite the mutiny.

One thing that I found weak about Holdo was her dress. You cannot be effective on the frontlines of a hot war in an outfit like that! Presumably the point was to show that women don’t have to give up their femininity in order to take tough tactical decisions under pressure, and that’s indeed something worth showing. But this could have been achieved by much more subtle means. What was needed was to have her be the character with the most feminine outfit, and it would have been possible to fulfill that condition by having her wear something much more practical. Thus, having her wear that dress was crude and implausible overkill in the service of something otherwise worth doing.

I was very disappointed by most of the subplot with Rey and Luke: both the content of that subplot, and its disconnection from the rest of the film.

Firstly, the content. There was so much that could have been explored that was not explored. Luke mentions that the Jedi failed to stop Darth Sidious “at the height of their powers”. Well, what did the Jedi get wrong? Was it the Jedi code; the celibacy; the bureaucracy? Is their light side philosophy too absolutist? How are Luke’s beliefs about this connected to his recent rejection of the Force? When he lets down his barrier and reconnects with the Force, Yoda should have had much more to say. The Force is, perhaps, one big metaphor for certain human capacities not emphasised by our contemporary culture. It is at the heart of Star Wars, and it was at the heart of Empire and Rogue One. It ought to have been at the heart of The Last Jedi.

Secondly, the lack of integration with the rest of the film. One of the aspects of Empire that enables its importance as a film, I suggest, is the tight integration and interplay between the two main subplots: the training of Luke under Yoda, and attempting to shake the Empire off the trail of the Millennium Falcon. Luke wants to leave the training unfinished, and Yoda begs him to stay, truly believing that the fate of the galaxy depends on him completing the training. What is illustrated by this is the strengths and weaknesses of both Yoda’s traditional Jedi view and Luke’s desire to get on with fighting the good fight, the latter of which is summed up by the binary sunset scene from A New Hope. Tied up with this desire is Luke’s love for his friends; this is an important strength of his, but Yoda has a point when he says that the Jedi training must be completed if Luke is to be ultimately successful. While the Yoda subplot and what happens at Cloud City could be independently interesting, it is only this integration that enables the film to be great. The heart of the integration is perhaps the Dark Side Cave, where two things are brought together: the challenge of developing the relationship with oneself possessed by a Jedi, and the threat posed by Darth Vader.

In the Last Jedi, Rey just keeps saying that the galaxy needs Luke, and eventually Luke relents when Kylo Ren shows up. There was so much more that could have been done with this! What is it about Rey that enables her to persuade Luke? What character strengths of hers are able to respond adequately to Luke’s fear of the power of the Force, and doubt regarding his abilities as a teacher? Exploring these things would have connected together the rebel evacuation, Rey’s character arc and Luke’s character arc, but these three were basically independent.

(Possibly I need to watch the cave scene from The Last Jedi again, and think harder about it.)

Sean Whitton Notes from the Library

I pushed an implementation of myself to GitHub

Planet Debian - Sun, 14/01/2018 - 10:22pm

Roughly 4 years ago, I mentioned that there appears to be an esoteric programming language which shares my full name.

I know, it is really late, but two days ago, I discovered Racket. As a Lisp person, I immediately felt at home. And realizing how the language dispatch mechanism works, I couldn't resist writing a Racket implementation of MarioLANG. A nice play on words and a good toy project to get my feet wet.

Racket programs always start with #lang. How convenient. MarioLANG programs for Racket therefore look something like this:

#lang mario
++++++++++++
===========+:
           ==

So much for abusing coincidences. Phew, this was a fun weekend project! And it has some potential for more challenges. Right now, it is only an interpreter, because it appears to be tricky to compile a 2d instruction "space" to traditional code. MarioLANG does not only allow for nested loops as BrainFuck does, it also includes weird concepts like the reversal of the instruction pointer direction. Coupled with the "skip" ([) instruction, this allows creating loops which have two exit conditions and reverse code execution on every pass. Something like this:

@[ some brainfuck [@
====================

And since this is a 2d programming language, this theoretical loop could be entered by jumping onto any of the instructions in between from above. And the heading could be either leftward or rightward when entering.
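To make the control flow concrete, here is a toy Python model of just this 1-D walk - not a real MarioLANG interpreter (no ground line, elevators or I/O), just the reverse/skip semantics described above, with '-' standing in for "some brainfuck":

```python
def run(program, start, cell):
    """Walk instructions with a reversible instruction pointer.

    '@' reverses the walking direction, '[' skips the next
    instruction when the current cell is zero, '-' decrements
    the cell.  Returns how often '-' was executed.
    """
    pos, direction, decs = start, 1, 0
    while 0 <= pos < len(program):
        op = program[pos]
        if op == '@':
            direction = -direction
        elif op == '[' and cell == 0:
            pos += direction  # skip over the next instruction
        elif op == '-':
            cell -= 1
            decs += 1
        pos += direction
    return decs

# Entering the loop from above at the '-' with the cell set to 3:
# the pointer bounces between the two '@'s until a '[' sees zero
# and skips past the '@' next to it, exiting the loop.
print(run("@[-[@", 2, 3))  # → 3
```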

Discovering these patterns and translating them to compilable code is quite beyond me right now. Let's see what time will bring.

Mario Lang The Blind Guru

Simon Raffeiner: What a GNU C Compiler Bug looks like

Planet Ubuntu - Sun, 14/01/2018 - 6:35pm
Back in December a Linux Mint user sent a strange bug report to the darktable mailing list. Apparently the GNU C Compiler (GCC) on his system exited with an unexpected error message, breaking the build process.

SSL migration

Planet Debian - Sun, 14/01/2018 - 11:05am
SSL migration

This week I managed to finally migrate my personal website to SSL, and on top of that migrate the SMTP/IMAP services to certificates signed by a "proper" CA (instead of my own). This however was more complex than I thought…

Let's encrypt?

I first wanted to do this when Let's Encrypt became available, but the way it works - short-term certificates with automated renewal - put me off at first. The certbot tool needs to make semi-arbitrary outgoing requests to renew the certificates, and on public machines I have a locked-down outgoing traffic policy. So I gave up, temporarily…

I later found out that at least for now (for the current protocol), certbot only needs to talk to a certain API endpoint, and after some more research, I realized that the http-01 protocol is very straight-forward, only needing to allow some specific plain http URLs.

So then:

Issue 1: allowing outgoing access to a given API endpoint, somewhat restricted. I solved this by using a proxy, forcing certbot to go through it via env vars, learning about systemctl edit on the way, and from the proxy, only allowing that hostname. Quite weak, but at least not "open policy".
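A sketch of what such a systemd drop-in override might look like (the service name and proxy address are hypothetical, not from the post):

```ini
# systemctl edit certbot.service creates a drop-in like this:
[Service]
Environment="http_proxy=http://proxy.internal:3128"
Environment="https_proxy=http://proxy.internal:3128"
```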

Issue 2: due to how http-01 works, it requires to leave some specific paths under http, which means you can't have (in Apache) a "redirect everything to https" config. While fixing this I learned about mod_macro, which is quite interesting (and doesn't need an external pre-processor).
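The usual shape of such an exemption, sketched here with mod_rewrite rather than the author's mod_macro setup:

```apache
RewriteEngine on
# leave the http-01 challenge path on plain http, redirect everything else
RewriteCond %{REQUEST_URI} !^/\.well-known/acme-challenge/
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```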

The only remaining problem is that you can't automatically renew certificates for non-externally-accessible systems; the dns-01 protocol also needs changing externally-visible state, so it's more or less the same. So:

Issue 3: For internal websites, still need a solution if own CA (self-signed, needs certificates added to clients) is not acceptable.

How did it go?

It seems that using SSL is more than just SSLEngine on. In this exercise I learned about quite a few things.


CAA

DNS Certification Authority Authorization is pretty nice, and although it's not a strong guarantee (against malicious CAs), it gives some more signals that proper clients could check ("For this domain, only this CA is expected to sign certificates"); also, trivial to configure, with the caveat that one would need DNSSEC as well for end-to-end checks.
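For illustration, a CAA record in zone-file syntax (domain and CA are placeholders):

```
example.com.  IN  CAA  0 issue "letsencrypt.org"
```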

OCSP stapling

I was completely unaware of OCSP Stapling, and yay, seems like a good solution to actually verifying that the certs were not revoked. However… there are many issues with it:

  • there needs to be proper configuration on the webserver to not cause more problems than without; Apache at least needs an increased cache lifetime, disabled sending of error responses (for transient CA issues), etc.
  • but even more, it requires the web server user to be able to make "random" outgoing requests, which IMHO is a big no-no
  • even the command line tools (i.e. openssl ocsp) are somewhat deficient: no proxy support (while s_client can use one)
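The Apache settings alluded to in the first point might look roughly like this (a sketch; the values are illustrative, not a recommendation):

```apache
SSLUseStapling on
# the cache must be defined globally, outside any <VirtualHost>
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/ssl_stapling(128000)"
# longer cache lifetime, and don't pass transient responder errors to clients
SSLStaplingStandardCacheTimeout 86400
SSLStaplingReturnResponderErrors off
```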

So the proper way to do this seems to be a separate piece of software, isolated from the webserver, that does proper/eager refresh of OCSP responses while handling errors well.

Issue 4: No OCSP until I find a good way to do it.

HSTS, server-side and preloading

HTTP Strict Transport Security represents a commitment to encryption: once published with the recommended lifetime, browsers will remember that the website shouldn't be accessed over plain http, so you can't roll back.
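Server-side it is a single response header (needs mod_headers; the one-year max-age is a common choice, not the author's):

```apache
Header always set Strict-Transport-Security "max-age=31536000"
```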

Preloading HSTS is even stronger, and so far I haven't done it. Seems worthwhile, but I'll wait another week or so ☺ It's easily doable online.


HPKP

HTTP Public Key Pinning seems dangerous, at least according to some posts. Properly deployed, it would solve a number of problems with the public key infrastructure, but still, complex and a lot of overhead.

Certificate chains

Something I didn't know before is that the servers are supposed to serve the entire chain; I thought, naïvely, that just the server is enough, since the browsers will have the root-root CA, but the intermediaries seem to be problematic.

So, one needs to properly serve the full chain (Let's Encrypt makes this trivial, by the way), and also monitor that it is so.

Ciphers and SSL protocols

OpenSSL disabled SSLv2 in recent builds, but at least Debian stable still has SSLv3+ enabled and Apache does not disable it, so if you put your shiny new website through an SSL checker you get many issues (related strictly to ciphers).

I spent a bit of time researching and getting to the conclusion that:

  • every reasonable client (for my small webserver) supports TLSv1.1+, so disabling SSLv3/TLSv1.0 solved a bunch of issues
  • however, even for TLSv1.1+, a number of ciphers are not recommended by US standards, but explicitly disabling ciphers is a pain because I don't see a way to make it "cheap" (without needing manual maintenance); so there's that, my website is not HIPAA compliant due to the Camellia cipher.

Issue 5: Weak default configs

Issue 6: Getting perfect ciphers is not easy.

However, while not perfect, getting a proper config once you did the research is pretty trivial in terms of configuration.

My Apache config. Feedback welcome:

SSLCipherSuite HIGH:!aNULL
SSLHonorCipherOrder on
SSLProtocol all -SSLv3 -TLSv1

And similarly for dovecot:

ssl_cipher_list = HIGH:!aNULL
ssl_protocols = !SSLv3 !TLSv1
ssl_prefer_server_ciphers = yes
ssl_dh_parameters_length = 4096

The last line there - the dh_params - I found via nmap, as my previous config had it at 1024, which is weaker than the key, defeating the purpose of a long key. Which leads to the next point:

DH parameters

It seems that DH parameters can be an issue, in the sense that way too many sites/people reuse the same params. Dovecot (in Debian) generates its own, but Apache (AFAIK) not, and needs explicit configuration added to use your own.
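A sketch of doing that for Apache (path hypothetical; SSLOpenSSLConfCmd requires Apache 2.4.8+ built against OpenSSL 1.0.2+):

```shell
# generate site-specific 4096-bit DH parameters (this takes a while)
openssl dhparam -out /etc/ssl/private/dhparams.pem 4096
# then, in the TLS vhost:
#   SSLOpenSSLConfCmd DHParameters /etc/ssl/private/dhparams.pem
```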

Issue 7: Investigate DH parameters for all software (postfix, dovecot, apache, ssh); see instructions.


A number of interesting tools:

  • Online resources to analyse https config: e.g. SSL labs, and htbridge; both give very detailed information.
  • CAA checker (but this is trivial).
  • nmap ciphers report: nmap --script ssl-enum-ciphers, and very useful, although I don't think this works for STARTTLS protocols.
  • Cert Spotter from SSLMate. This seems to be useful as a complement to CAA (CAA being the policy, and Cert Spotter the monitoring for said policy), but it goes beyond it (key sizes, etc.); for the expiration part, I think nagios/icinga is easier if you already have it setup (check_http has options for lifetime checks).
  • Certificate chain checker; trivial, but a useful extra check that the configuration is right.

Ah, the good old days of plain http. SSL seems to add a lot of complexity; I'm not sure how much is needed and how much could actually be removed by smarter software. But, not too bad, a few evenings of study is enough to get a start; probably the bigger cost is in the ongoing maintenance and keeping up with the changes.

Still, a number of unresolved issues. I think the next goal will be to find a way to properly do OCSP stapling.

Iustin Pop blog

Make 'bts' (devscripts) accept TLS connection to mail server with self signed certificate

Planet Debian - Sun, 14/01/2018 - 3:46am

My mail server runs with a self signed certificate. So bts, configured like this ...

...lately refused to send mails with this error:

bts: failed to open SMTP connection to
(SSL connect attempt failed error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed)

After searching a bit, I found a way to fix this locally without turning off the server certificate verification. The fix belongs in the send_mail() function. When calling the Net::SMTPS->new() constructor, it is possible to add the fingerprint of my self signed certificate like this:

if (have_smtps) {
    $smtp = Net::SMTPS->new($host, Port => $port,
                            Hello => $smtphelo, doSSL => 'starttls',
                            SSL_fingerprint => 'sha1$hex-fingerprint')
        or die "$progname: failed to open SMTP connection to $smtphost\n($@)\n";
} else {
    $smtp = Net::SMTP->new($host, Port => $port, Hello => $smtphelo)
        or die "$progname: failed to open SMTP connection to $smtphost\n($@)\n";
}
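For reference, one way to produce the 'sha1$hex-fingerprint' value is openssl; the demo below creates a throwaway self-signed certificate (hostname is a placeholder) - against a live server you would first fetch the certificate with openssl s_client -starttls smtp:

```shell
# throwaway self-signed certificate, then its SHA-1 fingerprint
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=mail.example.org" \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem -days 1 2>/dev/null
openssl x509 -in /tmp/demo-cert.pem -noout -fingerprint -sha1
```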

Pretty happy to be able to use the bts command again.

Daniel Leidert [erfahrungen, meinungen, halluzinationen]

Fixing a Nintendo Game Boy Screen

Planet Debian - Sun, 14/01/2018 - 1:28am

Over the holidays my old Nintendo Game Boy (the original DMG-01 model) has resurfaced. It works, but the display had a bunch of vertical lines near the left and right border that stay blank. Apparently a common problem with these older Game Boys and the solution is to apply heat to the connector foil upper side to resolder the contacts hidden underneath. There’s lots of tutorials and videos on the subject so I won’t go into much detail here.

Just one thing: The easiest way is to use a soldering iron (the foil is pretty heat resistant, it has to be soldered during production after all) and move it along the top at the affected locations. Which I tried at first and it kind of works but takes ages. Some columns reappear, others disappear, reappeared columns disappear again… In someone’s comment I read that they needed over five minutes until it was fully fixed!

So… simply apply a small drop of solder to the tip. That’s what you do for better heat transfer in normal soldering and of course it also works here (since the foil connector back doesn’t take solder this doesn’t make a mess or anything). That way, the missing columns reappeared practically instantly at the touch of the soldering iron and stayed fixed. Temperature setting was 250°C, more than sufficient for the task.

This particular Game Boy always had issues with the speaker cutting out, but we never had it replaced, I think because the problem was intermittent. After locating the bad solder joint on the connector and reheating it this problem was also fixed. Basically this almost 28 year old device is now in better working condition than it ever was.

Andreas Bombe pdo on Active Low

Sebastian Dröge: How to write GStreamer Elements in Rust Part 1: A Video Filter for converting RGB to grayscale

Planet Ubuntu - Sat, 13/01/2018 - 11:23pm

This is part one of a series of blog posts that I’ll write in the next weeks, as previously announced in the GStreamer Rust bindings 0.10.0 release blog post. Since the last series of blog posts about writing GStreamer plugins in Rust ([1] [2] [3] [4]) a lot has changed, and the content of those blog posts has only historical value now, documenting the journey of experimentation that led to what exists now.

In this first part we’re going to write a plugin that contains a video filter element. The video filter can convert from RGB to grayscale, either output as 8-bit per pixel grayscale or 32-bit per pixel RGB. In addition there’s a property to invert all grayscale values, or to shift them by up to 255 values. In the end this will allow you to watch Big Buck Bunny, or anything else really that can somehow go into a GStreamer pipeline, in grayscale. Or encode the output to a new video file, send it over the network via WebRTC or something else, or basically do anything you want with it.
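As a rough preview of the per-pixel math, here is a Python sketch (the weights are the standard BT.709 luma coefficients and the shift-then-invert order is my assumption - the post doesn't spell either out; the real element operates on whole BGRx video frames in Rust):

```python
def bgrx_to_gray(b, g, r, shift=0, invert=False):
    """Convert one BGRx pixel to an 8-bit grayscale value."""
    # BT.709 luma weights (an assumption, not stated in the post)
    gray = int(round(0.0722 * b + 0.7152 * g + 0.2126 * r))
    gray = (gray + shift) % 256   # "shift them by up to 255 values"
    if invert:
        gray = 255 - gray         # "invert all grayscale values"
    return gray

print(bgrx_to_gray(0, 0, 255))   # pure red → 54
```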

Big Buck Bunny – Grayscale

This will show the basics of how to write a GStreamer plugin and element in Rust: the basic setup for registering a type and implementing it in Rust, and how to use the various GStreamer API and APIs from the Rust standard library to do the processing.

The final code for this plugin can be found here, and it is based on the 0.1 version of the gst-plugin crate and the 0.10 version of the gstreamer crate. At least Rust 1.20 is required for all this. I’m also assuming that you have GStreamer (at least version 1.8) installed for your platform, see e.g. the GStreamer bindings installation instructions.

Table of Contents
  1. Project Structure
  2. Plugin Initialization
  3. Type Registration
  4. Type Class & Instance Initialization
  5. Caps & Pad Templates
  6. Caps Handling Part 1
  7. Caps Handling Part 2
  8. Conversion of BGRx Video Frames to Grayscale
  9. Testing the new element
  10. Properties
  11. What next?
Project Structure

We’ll create a new cargo project with cargo init --lib --name gst-plugin-tutorial. This will create a basically empty Cargo.toml and a corresponding src/lib.rs. We will use this structure: lib.rs will contain all the plugin-related code, separate modules will contain any GStreamer plugins that are added.

The empty Cargo.toml has to be updated to list all the dependencies that we need, and to define that the crate should result in a cdylib, i.e. a C library that does not contain any Rust-specific metadata. The final Cargo.toml looks as follows

[package]
name = "gst-plugin-tutorial"
version = "0.1.0"
authors = ["Sebastian Dröge <>"]
repository = ""
license = "MIT/Apache-2.0"

[dependencies]
glib = "0.4"
gstreamer = "0.10"
gstreamer-base = "0.10"
gstreamer-video = "0.10"
gst-plugin = "0.1"

[lib]
name = "gstrstutorial"
crate-type = ["cdylib"]
path = "src/lib.rs"

We’re depending on the gst-plugin crate, which provides all the basic infrastructure for implementing GStreamer plugins and elements. In addition we depend on the gstreamer, gstreamer-base and gstreamer-video crates for various GStreamer API that we’re going to use later, and the glib crate to be able to use some GLib API that we’ll need. GStreamer is building upon GLib, and this leaks through in various places.

With the basic project structure being set up, we should be able to compile the project with cargo build now, which will download and build all dependencies and then create a file called target/debug/libgstrstutorial.so (or .dll on Windows, .dylib on macOS). This is going to be our GStreamer plugin.

To allow GStreamer to find our new plugin and make it available in every GStreamer-based application, we could install it into the system- or user-wide GStreamer plugin path or simply point the GST_PLUGIN_PATH environment variable to the directory containing it:

export GST_PLUGIN_PATH=`pwd`/target/debug

If you now run the gst-inspect-1.0 tool on the libgstrstutorial.so, it will not yet print all information it can extract from the plugin but for now just complain that this is not a valid GStreamer plugin. Which is true, we didn’t write any code for it yet.

Plugin Initialization

Let’s start editing src/lib.rs to make this an actual GStreamer plugin. First of all, we need to add various extern crate directives to be able to use our dependencies and also mark some of them #[macro_use] because we’re going to use macros defined in some of them. This looks like the following

extern crate glib;
#[macro_use]
extern crate gstreamer as gst;
extern crate gstreamer_base as gst_base;
extern crate gstreamer_video as gst_video;
#[macro_use]
extern crate gst_plugin;

Next we make use of the plugin_define! macro from the gst-plugin crate to set up the static metadata of the plugin (and make the shared library recognizable by GStreamer to be a valid plugin), and to define the name of our entry point function (plugin_init) where we will register all the elements that this plugin provides.

plugin_define!(
    b"rstutorial\0",
    b"Rust Tutorial Plugin\0",
    plugin_init,
    b"1.0\0",
    b"MIT/X11\0",
    b"rstutorial\0",
    b"rstutorial\0",
    b"\0",
    b"2017-12-30\0"
);

This is unfortunately not very beautiful yet due to a) GStreamer requiring this information to be statically available in the shared library, not returned by a function (starting with GStreamer 1.14 it can be a function), and b) Rust not allowing raw strings (b"blabla") to be concatenated with a macro like the std::concat macro (so that the b and \0 parts could be hidden away). Expect this to become better in the future.

The static plugin metadata that we provide here is

  1. name of the plugin
  2. short description for the plugin
  3. name of the plugin entry point function
  4. version number of the plugin
  5. license of the plugin (only a fixed set of licenses is allowed here, see)
  6. source package name
  7. binary package name (only really makes sense for e.g. Linux distributions)
  8. origin of the plugin
  9. release date of this version

In addition we’re defining an empty plugin entry point function that just returns true

fn plugin_init(plugin: &gst::Plugin) -> bool {
    true
}

With all that given, gst-inspect-1.0 should print exactly this information when running on the .so file (or .dll on Windows, or .dylib on macOS)

gst-inspect-1.0 target/debug/libgstrstutorial.so

Type Registration

As a next step, we’re going to add another module rgb2gray to our project, and call a function called register from our plugin_init function.

mod rgb2gray;

fn plugin_init(plugin: &gst::Plugin) -> bool {
    rgb2gray::register(plugin);
    true
}

With that our src/lib.rs is complete, and all following code is only in src/rgb2gray.rs. At the top of the new file we first need to add various use-directives to import various types and functions we’re going to use into the current module’s scope

use glib;
use gst;
use gst::prelude::*;
use gst_video;

use gst_plugin::properties::*;
use gst_plugin::object::*;
use gst_plugin::element::*;
use gst_plugin::base_transform::*;

use std::i32;
use std::sync::Mutex;

GStreamer is based on the GLib object system (GObject). C (just like Rust) does not have built-in support for object-oriented programming, inheritance, virtual methods and related concepts, and GObject makes these features available in C as a library. Without language support this is a quite verbose endeavour in C, and the gst-plugin crate tries to expose all this in a (as much as possible) Rust-style API while hiding all the details that do not really matter.

So, as a next step we need to register a new type for our RGB to Grayscale converter GStreamer element with the GObject type system, and then register that type with GStreamer to be able to create new instances of it. We do this with the following code

struct Rgb2GrayStatic;

impl ImplTypeStatic<BaseTransform> for Rgb2GrayStatic {
    fn get_name(&self) -> &str {
        "Rgb2Gray"
    }

    fn new(&self, element: &BaseTransform) -> Box<BaseTransformImpl<BaseTransform>> {
        Rgb2Gray::new(element)
    }

    fn class_init(&self, klass: &mut BaseTransformClass) {
        Rgb2Gray::class_init(klass);
    }
}

pub fn register(plugin: &gst::Plugin) {
    let type_ = register_type(Rgb2GrayStatic);
    gst::Element::register(plugin, "rsrgb2gray", 0, type_);
}

This defines a zero-sized struct Rgb2GrayStatic that is used to implement the ImplTypeStatic<BaseTransform> trait on it for providing static information about the type to the type system. In our case this is a zero-sized struct, but in other cases this struct might contain actual data (for example if the same element code is used for multiple elements, e.g. when wrapping a generic codec API that provides support for multiple decoders and then wanting to register one element per decoder). By implementing ImplTypeStatic<BaseTransform> we also declare that our element is going to be based on the GStreamer BaseTransform base class, which provides a relatively simple API for 1:1 transformation elements like ours is going to be.

ImplTypeStatic provides functions that return a name for the type, and functions for initializing/returning a new instance of our element (new) and for initializing the class metadata (class_init, more on that later). We simply let those functions proxy to associated functions on the Rgb2Gray struct that we’re going to define at a later time.

In addition, we also define a register function (the one that is already called from our plugin_init function) and in there first register the Rgb2GrayStatic type metadata with the GObject type system to retrieve a type ID, and then register this type ID to GStreamer to be able to create new instances of it with the name “rsrgb2gray” (e.g. when using gst::ElementFactory::make).

Type Class & Instance Initialization

As a next step we declare the Rgb2Gray struct and implement the new and class_init functions on it. In the first version, this struct is almost empty but we will later use it to store all state of our element.

struct Rgb2Gray {
    cat: gst::DebugCategory,
}

impl Rgb2Gray {
    fn new(_transform: &BaseTransform) -> Box<BaseTransformImpl<BaseTransform>> {
        Box::new(Self {
            cat: gst::DebugCategory::new(
                "rsrgb2gray",
                gst::DebugColorFlags::empty(),
                "Rust RGB-GRAY converter",
            ),
        })
    }

    fn class_init(klass: &mut BaseTransformClass) {
        klass.set_metadata(
            "RGB-GRAY Converter",
            "Filter/Effect/Converter/Video",
            "Converts RGB to GRAY or grayscale RGB",
            "Sebastian Dröge <>",
        );

        klass.configure(BaseTransformMode::NeverInPlace, false, false);
    }
}

In the new function we return a boxed (i.e. heap-allocated) version of our struct, containing a newly created GStreamer debug category of name “rsrgb2gray”. We’re going to use this debug category later for making use of GStreamer’s debug logging system for logging the state and changes of our element.

In the class_init function we, again, set up some metadata for our new element. In this case these are a description, a classification of our element, a longer description and the author. The metadata can later be retrieved and made use of via the Registry and PluginFeature/ElementFactory API. We also configure the BaseTransform class and define that we will never operate in-place (producing our output in the input buffer), and that we don’t want to work in passthrough mode if the input/output formats are the same.

Additionally we need to implement various traits on the Rgb2Gray struct, which will later be used to override virtual methods of the various parent classes of our element. For now we can keep the trait implementations empty. There is one trait implementation required per parent class.

impl ObjectImpl<BaseTransform> for Rgb2Gray {}
impl ElementImpl<BaseTransform> for Rgb2Gray {}
impl BaseTransformImpl<BaseTransform> for Rgb2Gray {}

With all this defined, gst-inspect-1.0 should be able to show some more information about our element already but will still complain that it’s not complete yet.

Caps & Pad Templates

Data flow of GStreamer elements is happening via pads, which are the input(s) and output(s) (or sinks and sources) of an element. Via the pads, buffers containing actual media data, events or queries are transferred. An element can have any number of sink and source pads, but our new element will only have one of each.

To be able to declare what kinds of pads an element can create (they are not necessarily all static but could be created at runtime by the element or the application), it is necessary to install so-called pad templates during the class initialization. These pad templates contain the name (or rather “name template”, it could be something like src_%u for e.g. pad templates that declare multiple possible pads), the direction of the pad (sink or source), the availability of the pad (is it always there, sometimes added/removed by the element or to be requested by the application) and all the possible media types (called caps) that the pad can consume (sink pads) or produce (src pads).

In our case we only have always pads, one sink pad called “sink”, on which we can only accept RGB (BGRx to be exact) data with any width/height/framerate and one source pad called “src”, on which we will produce either RGB (BGRx) data or GRAY8 (8-bit grayscale) data. We do this by adding the following code to the class_init function.

let caps = gst::Caps::new_simple(
    "video/x-raw",
    &[
        (
            "format",
            &gst::List::new(&[
                &gst_video::VideoFormat::Bgrx.to_string(),
                &gst_video::VideoFormat::Gray8.to_string(),
            ]),
        ),
        ("width", &gst::IntRange::<i32>::new(0, i32::MAX)),
        ("height", &gst::IntRange::<i32>::new(0, i32::MAX)),
        (
            "framerate",
            &gst::FractionRange::new(
                gst::Fraction::new(0, 1),
                gst::Fraction::new(i32::MAX, 1),
            ),
        ),
    ],
);
let src_pad_template = gst::PadTemplate::new(
    "src",
    gst::PadDirection::Src,
    gst::PadPresence::Always,
    &caps,
);
klass.add_pad_template(src_pad_template);

let caps = gst::Caps::new_simple(
    "video/x-raw",
    &[
        ("format", &gst_video::VideoFormat::Bgrx.to_string()),
        ("width", &gst::IntRange::<i32>::new(0, i32::MAX)),
        ("height", &gst::IntRange::<i32>::new(0, i32::MAX)),
        (
            "framerate",
            &gst::FractionRange::new(
                gst::Fraction::new(0, 1),
                gst::Fraction::new(i32::MAX, 1),
            ),
        ),
    ],
);
let sink_pad_template = gst::PadTemplate::new(
    "sink",
    gst::PadDirection::Sink,
    gst::PadPresence::Always,
    &caps,
);
klass.add_pad_template(sink_pad_template);

The names “src” and “sink” are pre-defined by the BaseTransform class and this base-class will also create the actual pads with those names from the templates for us whenever a new element instance is created. Otherwise we would have to do that in our new function but here this is not needed.

If you now run gst-inspect-1.0 on the rsrgb2gray element, these pad templates with their caps should also show up.

Caps Handling Part 1

As a next step we will add caps handling to our new element. This involves overriding 4 virtual methods from the BaseTransformImpl trait, and actually storing the configured input and output caps inside our element struct. Let’s start with the latter

struct State {
    in_info: gst_video::VideoInfo,
    out_info: gst_video::VideoInfo,
}

struct Rgb2Gray {
    cat: gst::DebugCategory,
    state: Mutex<Option<State>>,
}

impl Rgb2Gray {
    fn new(_transform: &BaseTransform) -> Box<BaseTransformImpl<BaseTransform>> {
        Box::new(Self {
            cat: gst::DebugCategory::new(
                "rsrgb2gray",
                gst::DebugColorFlags::empty(),
                "Rust RGB-GRAY converter",
            ),
            state: Mutex::new(None),
        })
    }
}

We define a new struct State that contains the input and output caps, stored in a VideoInfo. VideoInfo is a struct that contains various fields like width/height, framerate and the video format, and allows us to conveniently work with the properties of (raw) video formats. We have to store it inside a Mutex in our Rgb2Gray struct as this can (in theory) be accessed from multiple threads at the same time.

Whenever input/output caps are configured on our element, the set_caps virtual method of BaseTransform is called with both caps (i.e. in the very beginning before the data flow and whenever it changes), and all following video frames that pass through our element should be according to those caps. Once the element is shut down, the stop virtual method is called and it would make sense to release the State as it only contains stream-specific information. We’re doing this by adding the following to the BaseTransformImpl trait implementation

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn set_caps(&self, element: &BaseTransform, incaps: &gst::Caps, outcaps: &gst::Caps) -> bool {
        let in_info = match gst_video::VideoInfo::from_caps(incaps) {
            None => return false,
            Some(info) => info,
        };
        let out_info = match gst_video::VideoInfo::from_caps(outcaps) {
            None => return false,
            Some(info) => info,
        };

        gst_debug!(
            self.cat,
            obj: element,
            "Configured for caps {} to {}",
            incaps,
            outcaps
        );

        *self.state.lock().unwrap() = Some(State {
            in_info: in_info,
            out_info: out_info,
        });

        true
    }

    fn stop(&self, element: &BaseTransform) -> bool {
        // Drop state
        let _ = self.state.lock().unwrap().take();

        gst_info!(self.cat, obj: element, "Stopped");

        true
    }
}

This code should be relatively self-explanatory. In set_caps we’re parsing the two caps into a VideoInfo and then store this in our State, in stop we drop the State and replace it with None. In addition we make use of our debug category here and use the gst_info! and gst_debug! macros to output the current caps configuration to the GStreamer debug logging system. This information can later be useful for debugging any problems once the element is running.
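The locking lifecycle described above can be exercised in isolation. Below is a minimal, GStreamer-free sketch of the same pattern; the State fields and the Element struct are placeholders rather than the real element:

```rust
use std::sync::Mutex;

// Placeholder for the caps-derived VideoInfo pair stored by the element.
struct State {
    in_width: u32,
    out_width: u32,
}

struct Element {
    state: Mutex<Option<State>>,
}

impl Element {
    // Mirrors set_caps: replace whatever state was there before.
    fn set_caps(&self, in_width: u32, out_width: u32) {
        *self.state.lock().unwrap() = Some(State { in_width, out_width });
    }

    // Mirrors stop: take() moves the state out, leaving None behind.
    fn stop(&self) {
        let _ = self.state.lock().unwrap().take();
    }

    fn has_state(&self) -> bool {
        self.state.lock().unwrap().is_some()
    }
}
```

Starting with no state, configuring caps installs one, and stopping drops it again, exactly the invariant the real set_caps/stop pair maintains.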

Next we have to provide information to the BaseTransform base class about the size in bytes of a video frame with specific caps. This is needed so that the base class can allocate an appropriately sized output buffer for us, that we can then fill later. This is done with the get_unit_size virtual method, which is required to return the size of one processing unit in specific caps. In our case, one processing unit is one video frame. In the case of raw audio it would be the size of one sample multiplied by the number of channels.

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn get_unit_size(&self, _element: &BaseTransform, caps: &gst::Caps) -> Option<usize> {
        gst_video::VideoInfo::from_caps(caps).map(|info| info.size())
    }
}

We simply make use of the VideoInfo API here again, which conveniently gives us the size of one video frame already.

Instead of get_unit_size it would also be possible to implement the transform_size virtual method, which is getting passed one size and the corresponding caps, another caps and is supposed to return the size converted to the second caps. Depending on how your element works, one or the other can be easier to implement.
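For this element the relationship between the two sizes is simple, since BGRx uses 4 bytes per pixel and GRAY8 uses 1. A small standalone sketch of the arithmetic, assuming tightly packed rows with no stride padding (the real element simply relies on VideoInfo::size(), so these helper names are illustrative only):

```rust
// Frame size in bytes for a packed format: width * bytes-per-pixel * height.
fn unit_size(bytes_per_pixel: usize, width: usize, height: usize) -> usize {
    width * bytes_per_pixel * height
}

// The transform_size alternative for this element: convert a BGRx frame
// size directly to the corresponding GRAY8 size without knowing the
// width/height, since BGRx has exactly 4x as many bytes per pixel.
fn bgrx_size_to_gray8_size(bgrx_size: usize) -> usize {
    bgrx_size / 4
}
```

For a 320x240 frame this gives 307200 bytes in BGRx and 76800 bytes in GRAY8, and the two helpers agree with each other.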

Caps Handling Part 2

We’re not done yet with caps handling though. As a very last step it is required that we implement a function that is converting caps into the corresponding caps in the other direction. For example, if we receive BGRx caps with some width/height on the sinkpad, we are supposed to convert this into new caps with the same width/height but BGRx or GRAY8. That is, we can convert BGRx to BGRx or GRAY8. Similarly, if the element downstream of ours can accept GRAY8 with a specific width/height from our source pad, we have to convert this to BGRx with that very same width/height.

This has to be implemented in the transform_caps virtual method, and looks as follows

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn transform_caps(
        &self,
        element: &BaseTransform,
        direction: gst::PadDirection,
        caps: gst::Caps,
        filter: Option<&gst::Caps>,
    ) -> gst::Caps {
        let other_caps = if direction == gst::PadDirection::Src {
            let mut caps = caps.clone();

            for s in caps.make_mut().iter_mut() {
                s.set("format", &gst_video::VideoFormat::Bgrx.to_string());
            }

            caps
        } else {
            let mut gray_caps = gst::Caps::new_empty();

            {
                let gray_caps = gray_caps.get_mut().unwrap();

                for s in caps.iter() {
                    let mut s_gray = s.to_owned();
                    s_gray.set("format", &gst_video::VideoFormat::Gray8.to_string());
                    gray_caps.append_structure(s_gray);
                }
                gray_caps.append(caps.clone());
            }

            gray_caps
        };

        gst_debug!(
            self.cat,
            obj: element,
            "Transformed caps from {} to {} in direction {:?}",
            caps,
            other_caps,
            direction
        );

        if let Some(filter) = filter {
            filter.intersect_with_mode(&other_caps, gst::CapsIntersectMode::First)
        } else {
            other_caps
        }
    }
}

This caps conversion happens in 3 steps. First we check if we got caps for the source pad. In that case, the caps on the other pad (the sink pad) are going to be exactly the same caps, but no matter if the caps contained BGRx or GRAY8 they must become BGRx, as that’s the only format that our sink pad can accept. We do this by creating a clone of the input caps, then making sure that those caps are actually writable (i.e. we have the only reference to them, or a copy is going to be created) and then iterating over all the structures inside the caps and setting the “format” field to BGRx. After this, all structures in the new caps will have the format field set to BGRx.
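The make_mut call relies on the same copy-on-write behavior that std’s Rc::make_mut provides: mutation only triggers a copy when the value is shared. A standalone illustration of that pattern, using Rc and a Vec as stand-ins for caps:

```rust
use std::rc::Rc;

// Mutating through Rc::make_mut clones the data first if (and only if)
// there is more than one reference, the same copy-on-write pattern that
// gst::Caps::make_mut follows.
fn demo_make_mut() -> (Vec<&'static str>, Vec<&'static str>) {
    let mut caps = Rc::new(vec!["BGRx", "GRAY8"]);
    let shared = Rc::clone(&caps); // second reference; `caps` is no longer unique

    // make_mut detects the sharing, clones before mutating, and leaves
    // `shared` untouched.
    Rc::make_mut(&mut caps)[0] = "GRAY8";

    ((*shared).clone(), (*caps).clone())
}
```

After the mutation the original reference still sees the old contents while the mutated one sees the new value, which is exactly why the element can safely modify its cloned caps.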

Similarly, if we get caps for the sink pad and are supposed to convert it to caps for the source pad, we create new caps and in there append a copy of each structure of the input caps (which are BGRx) with the format field set to GRAY8. In the end we append the original caps, giving us first all caps as GRAY8 and then the same caps as BGRx. With this ordering we signal to GStreamer that we would prefer to output GRAY8 over BGRx.

In the end the caps we created for the other pad are filtered against optional filter caps to reduce the potential size of the caps. This is done by intersecting the caps with that filter, while keeping the order (and thus preferences) of the filter caps (gst::CapsIntersectMode::First).

Conversion of BGRx Video Frames to Grayscale

Now that all the caps handling is implemented, we can finally get to the implementation of the actual video frame conversion. For this we start with defining a helper function bgrx_to_gray that converts one BGRx pixel to a grayscale value. The BGRx pixel is passed as a &[u8] slice with 4 elements and the function returns another u8 for the grayscale value.

impl Rgb2Gray {
    #[inline]
    fn bgrx_to_gray(in_p: &[u8]) -> u8 {
        // See the Wikipedia page about YUV for these coefficients
        const R_Y: u32 = 19595; // 0.299 * 65536
        const G_Y: u32 = 38470; // 0.587 * 65536
        const B_Y: u32 = 7471; // 0.114 * 65536

        assert_eq!(in_p.len(), 4);

        let b = u32::from(in_p[0]);
        let g = u32::from(in_p[1]);
        let r = u32::from(in_p[2]);

        let gray = ((r * R_Y) + (g * G_Y) + (b * B_Y)) / 65536;
        (gray as u8)
    }
}

This function works by extracting the blue, green and red components from each pixel (remember: we work on BGRx, so the first value will be blue, the second green, the third red and the fourth unused), extending them from 8 to 32 bits for a wider value range and then converting them to the Y component of the YUV colorspace (basically what your grandparents’ black & white TV would’ve displayed). The coefficients come from the Wikipedia page about YUV and are normalized to unsigned 16 bit integers so that we keep some accuracy, don’t have to work with floating point arithmetic and stay inside the range of 32 bit integers for all our calculations. As you can see, the green component is weighted more than the others, which comes from our eyes being more sensitive to green than to other colors.

Note: This is only doing the actual conversion from linear RGB to grayscale (and in BT.601 colorspace). To do this conversion correctly you need to know your colorspaces and use the correct coefficients for conversion, and also do gamma correction. See this about why it is important.
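The fixed-point arithmetic can be sanity-checked in isolation: the three coefficients sum to exactly 65536, so pure white maps to 255 and pure black to 0. Here is a standalone copy of the conversion, stripped of the surrounding element code:

```rust
const R_Y: u32 = 19595; // 0.299 * 65536
const G_Y: u32 = 38470; // 0.587 * 65536
const B_Y: u32 = 7471; // 0.114 * 65536

// Standalone version of Rgb2Gray::bgrx_to_gray: weighted sum of the
// B, G and R bytes of a BGRx pixel, scaled back down by 65536.
fn bgrx_to_gray(in_p: &[u8]) -> u8 {
    assert_eq!(in_p.len(), 4);

    let b = u32::from(in_p[0]);
    let g = u32::from(in_p[1]);
    let r = u32::from(in_p[2]);

    (((r * R_Y) + (g * G_Y) + (b * B_Y)) / 65536) as u8
}
```

A pure green pixel, for example, comes out at 149, i.e. 255 * 0.587 rounded down, which makes the weighting visible directly.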

Afterwards we have to actually call this function on every pixel. For this the transform virtual method is implemented, which gets an input and an output buffer passed and we’re supposed to read the input buffer and fill the output buffer. The implementation looks as follows, and is going to be our biggest function for this element

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn transform(
        &self,
        element: &BaseTransform,
        inbuf: &gst::Buffer,
        outbuf: &mut gst::BufferRef,
    ) -> gst::FlowReturn {
        let mut state_guard = self.state.lock().unwrap();
        let state = match *state_guard {
            None => {
                gst_element_error!(element, gst::CoreError::Negotiation, ["Have no state yet"]);
                return gst::FlowReturn::NotNegotiated;
            }
            Some(ref mut state) => state,
        };

        let in_frame = match gst_video::VideoFrameRef::from_buffer_ref_readable(
            inbuf.as_ref(),
            &state.in_info,
        ) {
            None => {
                gst_element_error!(
                    element,
                    gst::CoreError::Failed,
                    ["Failed to map input buffer readable"]
                );
                return gst::FlowReturn::Error;
            }
            Some(in_frame) => in_frame,
        };

        let mut out_frame =
            match gst_video::VideoFrameRef::from_buffer_ref_writable(outbuf, &state.out_info) {
                None => {
                    gst_element_error!(
                        element,
                        gst::CoreError::Failed,
                        ["Failed to map output buffer writable"]
                    );
                    return gst::FlowReturn::Error;
                }
                Some(out_frame) => out_frame,
            };

        let width = in_frame.width() as usize;
        let in_stride = in_frame.plane_stride()[0] as usize;
        let in_data = in_frame.plane_data(0).unwrap();
        let out_stride = out_frame.plane_stride()[0] as usize;
        let out_format = out_frame.format();
        let out_data = out_frame.plane_data_mut(0).unwrap();

        if out_format == gst_video::VideoFormat::Bgrx {
            assert_eq!(in_data.len() % 4, 0);
            assert_eq!(out_data.len() % 4, 0);
            assert_eq!(out_data.len() / out_stride, in_data.len() / in_stride);

            let in_line_bytes = width * 4;
            let out_line_bytes = width * 4;

            assert!(in_line_bytes <= in_stride);
            assert!(out_line_bytes <= out_stride);

            for (in_line, out_line) in in_data
                .chunks(in_stride)
                .zip(out_data.chunks_mut(out_stride))
            {
                for (in_p, out_p) in in_line[..in_line_bytes]
                    .chunks(4)
                    .zip(out_line[..out_line_bytes].chunks_mut(4))
                {
                    assert_eq!(out_p.len(), 4);

                    let gray = Rgb2Gray::bgrx_to_gray(in_p);
                    out_p[0] = gray;
                    out_p[1] = gray;
                    out_p[2] = gray;
                }
            }
        } else if out_format == gst_video::VideoFormat::Gray8 {
            assert_eq!(in_data.len() % 4, 0);
            assert_eq!(out_data.len() / out_stride, in_data.len() / in_stride);

            let in_line_bytes = width * 4;
            let out_line_bytes = width;

            assert!(in_line_bytes <= in_stride);
            assert!(out_line_bytes <= out_stride);

            for (in_line, out_line) in in_data
                .chunks(in_stride)
                .zip(out_data.chunks_mut(out_stride))
            {
                for (in_p, out_p) in in_line[..in_line_bytes]
                    .chunks(4)
                    .zip(out_line[..out_line_bytes].iter_mut())
                {
                    let gray = Rgb2Gray::bgrx_to_gray(in_p);
                    *out_p = gray;
                }
            }
        } else {
            unimplemented!();
        }

        gst::FlowReturn::Ok
    }
}

What happens here is that we first of all lock our state (the input/output VideoInfo) and error out if we don’t have any yet (which can’t really happen unless other elements have a bug, but better safe than sorry). After that we map the input buffer readable and the output buffer writable with the VideoFrameRef API. By mapping the buffers we get access to the underlying bytes of them, and the mapping operation could for example make GPU memory available or just do nothing and give us access to a normally allocated memory area. We have access to the bytes of the buffer until the VideoFrameRef goes out of scope.

Instead of VideoFrameRef we could’ve also used the gst::Buffer::map_readable() and gst::Buffer::map_writable() API, but unlike those, the VideoFrameRef API also extracts various metadata from the raw video buffers and makes it available. For example we can directly access the different planes as slices without having to calculate the offsets ourselves, and we get direct access to the width and height of the video frame.

After mapping the buffers, we store various information we’re going to need later in local variables to save some typing. This is the width (the same for input and output, as we never changed the width in transform_caps), the input and output (row) stride (the number of bytes per row/line, which possibly includes some padding at the end of each line for alignment reasons), the output format (which can be BGRx or GRAY8 because of how we implemented transform_caps) and the pointers to the first plane of the input and output (which in this case is also the only plane; BGRx and GRAY8 both have only a single plane containing all the RGB/gray components).

Then based on whether the output is BGRx or GRAY8, we iterate over all pixels. The code is basically the same in both cases, so I’m only going to explain the case where BGRx is output.

We start by iterating over each line of the input and output, and do so by using the chunks iterator to give us chunks of as many bytes as the (row-) stride of the video frame is, do the same for the other frame and then zip both iterators together. This means that on each iteration we get exactly one line as a slice from each of the frames and can then start accessing the actual pixels in each line.

To access the individual pixels in each line, we again use the chunks iterator the same way, but this time to always give us chunks of 4 bytes from each line. As BGRx uses 4 bytes for each pixel, this gives us exactly one pixel. Instead of iterating over the whole line, we only take the actual sub-slice that contains the pixels, not the whole line with stride number of bytes containing potential padding at the end. Now for each of these pixels we call our previously defined bgrx_to_gray function and then fill the B, G and R components of the output buffer with that value to get grayscale output. And that’s all.
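This iteration scheme can be tried out without any GStreamer code. The sketch below converts a tiny BGRx frame with padded rows into GRAY8, using just the blue byte as the gray value for brevity (the real element calls bgrx_to_gray there instead):

```rust
// Convert a BGRx frame (with row padding) to GRAY8, using the blue byte
// as a stand-in for the real grayscale computation.
fn convert_lines(in_data: &[u8], in_stride: usize, out_stride: usize, width: usize) -> Vec<u8> {
    let height = in_data.len() / in_stride;
    let mut out_data = vec![0u8; height * out_stride];

    let in_line_bytes = width * 4;
    let out_line_bytes = width;

    // One iteration per line: `chunks` slices each frame into stride-sized rows.
    for (in_line, out_line) in in_data
        .chunks(in_stride)
        .zip(out_data.chunks_mut(out_stride))
    {
        // One iteration per pixel: 4-byte BGRx chunks in, single gray bytes out.
        // Slicing to *_line_bytes skips the padding at the end of each row.
        for (in_p, out_p) in in_line[..in_line_bytes]
            .chunks(4)
            .zip(out_line[..out_line_bytes].iter_mut())
        {
            *out_p = in_p[0];
        }
    }

    out_data
}
```

Running this on a 2x2 frame with a 12-byte input stride and a 4-byte output stride shows the pixel values being copied while the padding bytes stay untouched.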

Using Rust high-level abstractions like the chunks iterators and bounds-checking slice accesses might seem like it’s going to cause quite some performance penalty, but if you look at the generated assembly most of the bounds checks are completely optimized away and the resulting assembly code is close to what one would’ve written manually (especially when using the newly-added exact_chunks iterators). Here you’re getting safe and high-level looking code with low-level performance!

You might’ve also noticed the various assertions in the processing function. These are there to give further hints to the compiler about properties of the code, potentially enabling better optimizations, e.g. moving bounds checks out of the inner loop so that a single assertion outside the loop checks the same condition. In Rust, adding assertions can often improve performance by allowing further optimizations to be applied, but in the end always check the resulting assembly to see if what you did made any difference.

Testing the new element

Now we have implemented almost all the functionality of our new element and can run it on actual video data. This can be done with the gst-launch-1.0 tool, or any application using GStreamer that allows us to insert our new element somewhere in the video part of the pipeline. With gst-launch-1.0 you could run for example the following pipelines

# Run on a test pattern
gst-launch-1.0 videotestsrc ! rsrgb2gray ! videoconvert ! autovideosink

# Run on some video file, also playing the audio
gst-launch-1.0 playbin uri=file:///path/to/some/file video-filter=rsrgb2gray

Note that you will likely want to compile with cargo build --release and add the target/release directory to GST_PLUGIN_PATH instead. The debug build might be too slow, and generally the release builds are multiple orders of magnitude (!) faster.


Properties

The only features missing now are the properties I mentioned in the opening paragraph: one boolean property to invert the grayscale value and one integer property to shift the value by up to 255. Implementing this on top of the previous code is not a lot of work. Let’s start with defining a struct for holding the property values and defining the property metadata.

const DEFAULT_INVERT: bool = false;
const DEFAULT_SHIFT: u32 = 0;

#[derive(Debug, Clone, Copy)]
struct Settings {
    invert: bool,
    shift: u32,
}

impl Default for Settings {
    fn default() -> Self {
        Settings {
            invert: DEFAULT_INVERT,
            shift: DEFAULT_SHIFT,
        }
    }
}

static PROPERTIES: [Property; 2] = [
    Property::Boolean(
        "invert",
        "Invert",
        "Invert grayscale output",
        DEFAULT_INVERT,
        PropertyMutability::ReadWrite,
    ),
    Property::UInt(
        "shift",
        "Shift",
        "Shift grayscale output (wrapping around)",
        (0, 255),
        DEFAULT_SHIFT,
        PropertyMutability::ReadWrite,
    ),
];

struct Rgb2Gray {
    cat: gst::DebugCategory,
    settings: Mutex<Settings>,
    state: Mutex<Option<State>>,
}

impl Rgb2Gray {
    fn new(_transform: &BaseTransform) -> Box<BaseTransformImpl<BaseTransform>> {
        Box::new(Self {
            cat: gst::DebugCategory::new(
                "rsrgb2gray",
                gst::DebugColorFlags::empty(),
                "Rust RGB-GRAY converter",
            ),
            settings: Mutex::new(Default::default()),
            state: Mutex::new(None),
        })
    }
}

This should all be rather straightforward: we define a Settings struct that stores the two values, implement the Default trait for it, then define a two-element array with property metadata (names, description, ranges, default value, writability), and then store the default value of our Settings struct inside another Mutex inside the element struct.

In the next step we have to make use of these: we need to tell the GObject type system about the properties, and we need to implement functions that are called whenever a property value is set or get.

impl Rgb2Gray {
    fn class_init(klass: &mut BaseTransformClass) {
        [...]
        klass.install_properties(&PROPERTIES);
        [...]
    }
}

impl ObjectImpl<BaseTransform> for Rgb2Gray {
    fn set_property(&self, obj: &glib::Object, id: u32, value: &glib::Value) {
        let prop = &PROPERTIES[id as usize];
        let element = obj.clone().downcast::<BaseTransform>().unwrap();

        match *prop {
            Property::Boolean("invert", ..) => {
                let mut settings = self.settings.lock().unwrap();
                let invert = value.get().unwrap();
                gst_info!(
                    self.cat,
                    obj: &element,
                    "Changing invert from {} to {}",
                    settings.invert,
                    invert
                );
                settings.invert = invert;
            }
            Property::UInt("shift", ..) => {
                let mut settings = self.settings.lock().unwrap();
                let shift = value.get().unwrap();
                gst_info!(
                    self.cat,
                    obj: &element,
                    "Changing shift from {} to {}",
                    settings.shift,
                    shift
                );
                settings.shift = shift;
            }
            _ => unimplemented!(),
        }
    }

    fn get_property(&self, _obj: &glib::Object, id: u32) -> Result<glib::Value, ()> {
        let prop = &PROPERTIES[id as usize];

        match *prop {
            Property::Boolean("invert", ..) => {
                let settings = self.settings.lock().unwrap();
                Ok(settings.invert.to_value())
            }
            Property::UInt("shift", ..) => {
                let settings = self.settings.lock().unwrap();
                Ok(settings.shift.to_value())
            }
            _ => unimplemented!(),
        }
    }
}

Property values can be changed from any thread at any time, which is why the Mutex is needed here to protect our struct. And we’re using a separate Mutex so that it is locked only for the shortest possible amount of time: we don’t want to keep it locked for the whole duration of the transform function, otherwise applications trying to set/get values would block for up to one frame.

In the property setter/getter functions we are working with a glib::Value. This is a dynamically typed value type that can contain values of any type, together with the type information of the contained value. Here we’re using it to handle an unsigned integer (u32) and a boolean for our two properties. To know which property is currently being set or retrieved, we get passed an identifier which is the index into our PROPERTIES array. We then simply match on the name of that property to decide which one was meant.

With this implemented, we can already compile everything, see the properties and their metadata in gst-inspect-1.0 and can also set them on gst-launch-1.0 like this

# Set invert to true and shift to 128
gst-launch-1.0 videotestsrc ! rsrgb2gray invert=true shift=128 ! videoconvert ! autovideosink

If we set GST_DEBUG=rsrgb2gray:6 in the environment before running that, we can also see the corresponding debug output when the values are changing. The only thing missing now is to actually make use of the property values for the processing. For this we add the following changes to bgrx_to_gray and the transform function

impl Rgb2Gray {
    #[inline]
    fn bgrx_to_gray(in_p: &[u8], shift: u8, invert: bool) -> u8 {
        [...]

        let gray = ((r * R_Y) + (g * G_Y) + (b * B_Y)) / 65536;
        let gray = (gray as u8).wrapping_add(shift);

        if invert {
            255 - gray
        } else {
            gray
        }
    }
}

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn transform(
        &self,
        element: &BaseTransform,
        inbuf: &gst::Buffer,
        outbuf: &mut gst::BufferRef,
    ) -> gst::FlowReturn {
        let settings = *self.settings.lock().unwrap();
        [...]
        let gray = Rgb2Gray::bgrx_to_gray(in_p, settings.shift as u8, settings.invert);
        [...]
    }
}

And that’s all. If you run the element in gst-launch-1.0 and change the values of the properties you should also see the corresponding changes in the video output.

Note that we always take a copy of the Settings struct at the beginning of the transform function. This ensures that we hold the mutex only for the shortest possible amount of time and then have a local snapshot of the settings for each frame.
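This works because Settings derives Copy: dereferencing the MutexGuard copies the struct, and the guard is dropped at the end of that statement, releasing the lock before any per-frame work starts. A standalone sketch of the pattern:

```rust
use std::sync::Mutex;

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Settings {
    invert: bool,
    shift: u32,
}

// Dereferencing the guard copies the (Copy) struct; the MutexGuard is a
// temporary that is dropped at the end of the statement, so the lock is
// not held while the frame is processed.
fn snapshot_then_change(settings: &Mutex<Settings>) -> Settings {
    let snapshot = *settings.lock().unwrap();

    // A property change (here simulated in the same thread) after this
    // point does not affect the snapshot used for the current frame.
    settings.lock().unwrap().shift = 128;

    snapshot
}
```

The snapshot keeps the values that were current when the frame started, while the shared struct already carries the new value for the next frame.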

Also keep in mind that the usage of the property values in the bgrx_to_gray function is far from optimal. It means the addition of another condition to the calculation of each pixel, thus potentially slowing it down a lot. Ideally this condition would be moved outside the inner loops and the bgrx_to_gray function would be made generic over it. See for example this blog post about “branchless Rust” for ideas on how to do that; the actual implementation is left as an exercise for the reader.
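One possible way to hoist that condition, sketched here with a simplified gray computation: make the function generic over a const boolean (const generics are stable since Rust 1.51, well after this post was written), so the compiler monomorphizes separate invert and non-invert versions and the branch disappears from the per-pixel code:

```rust
// Monomorphizing over INVERT gives the compiler one branch-free body per
// instantiation. The gray computation is a stand-in for the real luma math.
#[inline]
fn bgrx_to_gray<const INVERT: bool>(in_p: &[u8], shift: u8) -> u8 {
    let gray = in_p[2].wrapping_add(shift);
    if INVERT {
        // Resolved at compile time, so no per-pixel branch remains.
        255 - gray
    } else {
        gray
    }
}
```

In transform one would then select the instantiation once per frame, e.g. `if settings.invert { ... bgrx_to_gray::<true>(in_p, shift) ... } else { ... bgrx_to_gray::<false>(in_p, shift) ... }`.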

What next?

I hope the code walkthrough above was useful to understand how to implement GStreamer plugins and elements in Rust. If you have any questions, feel free to ask them here in the comments.

The same approach also works for audio filters or anything that can be handled in some way with the API of the BaseTransform base class. You can find another filter, an audio echo filter, using the same approach here.

In the next blog post in this series I’ll show how to use another base class to implement another kind of element, but for the time being you can also check the Git repository for various other element implementations.

Xubuntu: Xubuntu 17.10.1 Release

Planet Ubuntu - Pre, 12/01/2018 - 6:34md

Following the recent testing of a respin to deal with the BIOS bug on some Lenovo machines, Xubuntu 17.10.1 has been released. Official download sources have been updated to point to this point release, but if you’re using a mirror, be sure you are downloading the 17.10.1 version.

No changes to applications are included, however, this release does include any updates made between the original release date and now.

Note: Even with this fix, you will want to update your system to make sure you get all security fixes since the ISO respin, including the one for Meltdown, addressed in USN-3523, which you can read more about here.

Xubuntu: Xubuntu 17.04 End Of Life

Planet Ubuntu - Pre, 12/01/2018 - 3:40md

On Saturday 13th January 2018, Xubuntu 17.04 goes End of Life (EOL). For more information please see the Ubuntu 17.04 EOL Notice.

We strongly recommend upgrading to the current regular release, Xubuntu 17.10.1, as soon as practical. Alternatively you can download the current Xubuntu release and install fresh.

The 17.10.1 release recently saw testing across all flavors to address the BIOS bug found after its release in October 2017. Updated and bugfree ISO files are now available.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, December 2017

Planet Ubuntu - Pre, 12/01/2018 - 3:15md

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, about 142 work hours have been dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change at 183 hours per month. It would be nice if we could continue to find new sponsors as the amount of work seems to be slowly growing too.

The security tracker currently lists 21 packages with a known CVE and the dla-needed.txt file 16 (we’re a bit behind in CVE triaging apparently). Both numbers show a significant drop compared to last month. Yet the number of DLA released was not larger than usual (30), instead it looks like December brought us fewer new security vulnerabilities to handle and at the same time we used this opportunity to handle lower priorities packages that were kept on the side for multiple months.

Thanks to our sponsors

New sponsors are in bold (none this month).


Valorie Zimmerman: Seeding new ISOs the easy zsync way

Planet Ubuntu - Pre, 12/01/2018 - 9:39pd
Kubuntu recently had to pull our 17.10 ISOs because of the so-called Lenovo bug. Now that this bug is fixed, the ISOs have been respun, so it's time to begin reseeding the torrents.

To speed up the process, I wanted to zsync to the original ISOs before getting the new torrent files. Simon kindly told me the easy way to do this: cd to the directory where the ISOs live, which in my case is

cd /media/valorie/Data/ISOs/

then run

cp kubuntu-17.10{,.1}-desktop-amd64.iso && zsync

Where did I get the link to zsync? From cdimage, where all the ISOs are found, just as all the torrents are found on the Ubuntu torrent server.

The final step is to download those torrent files (pro-tip: use Ctrl+F) and tell KTorrent to seed them all! I seed all the supported Ubuntu releases. The more people who do this, the faster the torrents are for everyone. If you have the bandwidth, go for it!

PS: you don't have to copy all the cdimage URLs. Just up-arrow and then back-arrow through your previous command once the sync has finished, edit it, hit return and you are back in business.
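The cp-then-zsync trick can be sketched end to end. This is a minimal illustration, run in a scratch directory with a stand-in file (the names and the commented-out zsync URL are illustrative, not from the post), showing how brace expansion produces both the old and new filenames:

```shell
# Work in a scratch directory with a stand-in for the old ISO.
workdir=$(mktemp -d)
cd "$workdir"
touch kubuntu-17.10-desktop-amd64.iso

# Brace expansion turns one pattern into both names:
#   kubuntu-17.10{,.1}-desktop-amd64.iso
#   -> kubuntu-17.10-desktop-amd64.iso kubuntu-17.10.1-desktop-amd64.iso
# so cp seeds the new filename with the old ISO's contents.
cp kubuntu-17.10{,.1}-desktop-amd64.iso

ls kubuntu-17.10.1-desktop-amd64.iso

# zsync would then download only the blocks that differ, using the .zsync
# link published next to the ISO on cdimage (URL elided here):
# zsync http://cdimage.ubuntu.com/.../kubuntu-17.10.1-desktop-amd64.iso.zsync
```

Because the seed file already contains almost all of the respun ISO's data, zsync only has to transfer the changed blocks.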

Lubuntu Blog: Lubuntu 17.10.1 (Artful Aardvark) released!

Planet Ubuntu - Pre, 12/01/2018 - 7:29pd
Lubuntu 17.10.1 has been released to fix a major problem affecting many Lenovo laptops that causes the computer to have BIOS problems after installing. You can find more details about this problem here. Please note that the Meltdown and Spectre vulnerabilities have not been fixed in this ISO, so we advise that if you install […]

Valorie Zimmerman: Beginning 2018

Planet Ubuntu - Dje, 07/01/2018 - 11:55md
2017 began with a once-in-a-lifetime trip to India to speak at a conference. That was amazing enough, but the trip to a local village and the visit to Kaziranga National Park were too amazing for words.

The literal highlights of last year were the eclipse and the trip to see it with my son Thomas, and Christian and Hailey's wedding and the trip to participate with my daughter Anne, while also spending some time with son Paul, his wife Tara and my grandson Oscar. This summer I was also able to spend a few days in Brooklyn with Colin and Rory on my way to Akademy. So 2017 was definitely worth living through!

This is reality, and we can only see it during a total eclipse
2018 began wonderfully at the cabin. I'm looking forward to 2018 for a lot of reasons.
First, I'm so happy that Kubuntu will again be distributing 17.10 images next week. Right now we're testing in preparation for that; pop into IRC (#kubuntu-devel) if you'd like to help with the testing.

Lubuntu has a nice write-up of the issues and testing procedures:

The other serious problems, Meltdown and Spectre, are being handled by the Ubuntu kernel team, and those updates will be rolled out as soon as testing is complete. Scary times, when dealing with such a fundamental flaw in the design of our computers!

Second, in KDE we're beginning to ramp up for Google Summer of Code. Mentors are preparing the ideas page on the wiki, and Bhushan has started the organization application process. If you want to mentor or help us administer the program this year, now is the time to get in gear!

At Renton PFLAG we had our first support meeting of the year, and it was small but awesome! Our little group has had some tough times in the past, but I see us growing and thriving in this next year.

Finally, my local genealogy society is doing some great things, and I'm so happy to be involved and helping out again. My own searching is going well too. As I find more supporting evidence about the lives of my ancestors and their families, I feel my own place in the cosmos more deeply and my connection to history more strongly. I wish I could link to our website, but Rootsweb is down, and will be until we get our new website up.

Lastly, today I saw a news article about a school in India far outside the traditional education model. Called the Tamarind Tree School, it uses an open education model to offer collaborative, innovative learning solutions to rural students. They use free and open-source software, and even hardware, so that people can build their own devices. Read more about this:

Eric Hammond: Streaming AWS DeepLens Video Over SSH

Planet Ubuntu - Sht, 30/12/2017 - 6:00pd

instead of connecting to the DeepLens with a micro-HDMI cable, monitor, keyboard, and mouse

Credit for this excellent idea goes to Ernie Kim. Thank you!

Instructions without ssh

The standard AWS DeepLens instructions recommend connecting the device to a monitor, keyboard, and mouse. The instructions provide information on how to view the video streams in this mode:

If you are connected to the DeepLens using a monitor, you can view the unprocessed device stream (raw camera video before being processed by the model) using this command on the DeepLens device:

mplayer --demuxer h264es /opt/awscam/out/ch1_out.h264

If you are connected to the DeepLens using a monitor, you can view the project stream (video after being processed by the model on the DeepLens) using this command on the DeepLens device:

mplayer --demuxer lavf -lavfdopts format=mjpeg:probesize=32 /tmp/ssd_results.mjpeg

Instructions with ssh

You can also view the DeepLens video streams over ssh, without having a monitor connected to the device. To make this possible, you need to enable ssh access on your DeepLens. This is available as a checkbox option in the initial setup of the device. I’m working to get instructions on how to enable ssh access afterwards and will update once this is available.

To view the video streams over ssh, we take the same mplayer command options above and the same source stream files, but send the stream over ssh, and feed the result to the stdin of an mplayer process running on the local system, presumably a laptop.

All of the following commands are run on your local laptop (not on the DeepLens device).

You need to know the IP address of your DeepLens device on your local network:

ip_address=[IP ADDRESS OF DeepLens]

You will need to install the mplayer software on your local laptop. This varies with your OS, but for Ubuntu:

sudo apt-get install mplayer

You can view the unprocessed device stream (raw camera video before being processed by the model) over ssh using the command:

ssh aws_cam@$ip_address cat /opt/awscam/out/ch1_out.h264 | mplayer --demuxer h264es -

You can view the project stream (video after being processed by the model on the DeepLens) over ssh with the command:

ssh aws_cam@$ip_address cat /tmp/ssd_results.mjpeg | mplayer --demuxer lavf -lavfdopts format=mjpeg:probesize=32 -
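The two pipelines can be wrapped in a small helper. This is a hedged sketch (the function name and argument handling are my own, not from the article), reusing the same ssh and mplayer invocations:

```shell
# deeplens_view: hypothetical wrapper around the two streaming pipelines.
# Usage: deeplens_view <device-ip> [raw|project]
deeplens_view() {
    ip=$1
    mode=${2:-raw}
    case "$mode" in
        raw)      # unprocessed H.264 camera stream
            ssh aws_cam@"$ip" cat /opt/awscam/out/ch1_out.h264 |
                mplayer --demuxer h264es - ;;
        project)  # stream after being processed by the model
            ssh aws_cam@"$ip" cat /tmp/ssd_results.mjpeg |
                mplayer --demuxer lavf -lavfdopts format=mjpeg:probesize=32 - ;;
        *)
            echo "usage: deeplens_view <device-ip> [raw|project]" >&2
            return 1 ;;
    esac
}
```

Calling, say, `deeplens_view 192.168.1.50 project` is then equivalent to the second command above.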

Benefits of using ssh to view the video streams include:

  • You don’t need to have an extra monitor, keyboard, mouse, and micro-HDMI adapter cable.

  • You don’t need to locate the DeepLens close to a monitor, keyboard, mouse.

  • You don’t need to be physically close to the DeepLens when you are viewing the video streams.

For those of us sitting on a couch with a laptop, a DeepLens across the room, and no extra micro-HDMI cable, this is great news!


To protect the security of your sensitive DeepLens video feeds:

  • Use a long, randomly generated password for ssh on your DeepLens, even if you are only using it inside a private network.

  • I would recommend setting up .ssh/authorized_keys on the DeepLens so you can ssh in with your personal ssh key, test it, then disable password access for ssh on the DeepLens device. Don’t forget the password, because it is still needed for sudo.

  • Enable automatic updates on your DeepLens so that Ubuntu security patches are applied quickly. This is available as an option in the initial setup, and should be possible to do afterwards using the standard Ubuntu unattended-upgrades package.
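For the after-the-fact route, the stock Ubuntu mechanism is a small APT configuration file that the unattended-upgrades package manages. This is a sketch of the usual contents (common Ubuntu defaults, not taken from the DeepLens documentation):

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";   // refresh package lists daily
APT::Periodic::Unattended-Upgrade "1";     // apply unattended upgrades daily
```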

Unrelated side note: It’s kind of nice having the DeepLens run a standard Ubuntu LTS release. Excellent choice!


The VR Show

Planet Debian - Pre, 29/12/2017 - 5:39md

One of the things I had been looking forward to, had I got the visa in time for Debconf 15 (Germany), apart from the conference itself, was the attention on VR (Virtual Reality) and AR (Augmented Reality). I had heard the hype for so many years that I wanted to experience it, and I knew Debianites would perhaps be a bit better at crystal-gazing and have more of an idea than I had then. The only VR I knew about was from Hollywood movies and some VR videos, but that doesn't tell you anything. Also, while movies like Chota-Chetan and others clicked, they were far less immersive than true VR has to be.

I was glad that it didn't happen after the fact, as in 2016, on the way to the South African Debconf, I experienced VR in a Samsung showroom at the Qatar airport. I was quite surprised at how heavy the headset was, and also by how little content they had. Something which has been hyped for 20-odd years had not much to show for it. I was also able to trick the VR equipment, as the eye/motion tracking was not good enough: if you shook your head fast enough, it couldn't keep up with you.

I shared the above because I was invited to another VR conference a couple of months ago, here in Pune itself, by a web-programmer/designer friend, Mahendra. We attended the conference and were shown quite a few success stories. One that the geek in me liked was framastore's 360° Mars VR Experience on a bus: the framastore developers mapped Mars, or part of it, onto Washington D.C. streets, so that kids could experience how it would feel to be on Mars without facing any of the risks the astronauts or pioneers would face, if we ever get the money, the equipment and the technology to send people there. In reality we are still decades away from making such a trip while keeping people safe to Mars and back, let alone having them live on Mars for the rest of their lives.

If my understanding is correct, the gravity of Mars is about a third of Earth's, and once people settle there, their bodies would no longer be able to support Earth's gravity, at least for a generation born on Mars.

An interesting take on how things might turn out is shown in ‘The Expanse’.

But this is taking away from the topic at hand. While I saw the newer generation of VR headsets, they are still a ways off. It would be interesting once the headset becomes similar to eyeglasses and you no longer have to be tethered to a power unit or lug a heavy backpack full of dangerous lithium-ion batteries. The battery chemistry, or some sort of self-powered unit, would need to be much safer and lighter.

While at the conference, seeing the various scenarios being played out between potential developers and marketeers, it crossed my mind that people were not at all thinking of safeguarding users' privacy: everything from the games or choices you make to your biometric and other body-sensitive information has a high chance of being misused by companies and individuals.

There were also questions about how Sony and other vendors are asking insane amounts for use of their SDKs to develop content, when the SDKs should be free, since games and content enhance the marketability of their own ecosystems. Neither the privacy and security questions asked by me nor the SDK-related questions asked by some of the potential developers were really answered.

At the end, they also showed AR, or Augmented Reality, which to my mind has much more potential for reskilling and upskilling the young populations of India and other young, populous countries. It was interesting to note that both China and the U.S. are inching towards older demographics, while India will remain a relatively young country for another 20-30-odd years. Most of the other young countries (by median age) seem to be in Africa, and I believe (it might be a myth) that they are young because many of them are still tribal-like and perhaps still fight civil wars over resources.

I was underwhelmed by what they displayed in Augmented Reality. I partly understand that many people or companies are working on their own IP and hence didn't want to share, or showed only very rough work, so their ideas wouldn't get stolen.

I was also hoping somebody would talk about motion sickness, or the motion displacement similar to what people feel when they are train-lagged or jet-lagged. I am surprised that Wikipedia still doesn't have an article on train-lag, as millions of Indians go through it every year. The form most pronounced on Indian Railways is motion being felt but not seen.

VR and AR present both challenges and opportunities, but until costs come down, in terms of complexity, support and price (for both the deployer and the user), they will remain a distant dream.

There are scores of ideas that could be explored. For instance, the whole of North India is one big palace, in the sense that there are palaces built by kings and queens which have accumulated their own myths and lore over centuries. A storyteller could take a modern story and use, say, something like the Chota Imambara and/or Bara Imambara, where there have been many stories of people getting lost in the alleyways.

Such lore, myths and mysteries are found all over India. The Ramayana and the Mahabharata are just two of the epics which show how grandly tales can be spun. The history of the Indus Valley Civilization to date, and the modern contestations of it, are others which come to mind.

Even the humble Panchatantra could be reborn and retold to generations who have forgotten it. I can't express the variety of stories and contrasts on offer much better than bolokids does, or than SRK did at the opening of IFFI. Even something like Khakee, which is based on true incidents and a real-life inspector, could be retold in so many ways. Even Mukti Bhavan, which I saw a few months ago, coincidentally before I became ill, tells complex stories in which each person has a rich background that could be explored much more deeply in VR.

Even titles such as the ever-famous Harry Potter or the ever-beguiling RAMA could be shared and retooled for generations to come. The Shiva Trilogy is another that comes to mind and could be retold as well. There was another RAMA trilogy by the same author, and a competing one coming out in 2018 by an author called PJ Annan.

We would need to work out the complexities of hardware, bandwidth and the technologies, but there is plenty of content waiting to be developed.

Once upon a time I had the opportunity to work on, develop and understand make-believe walkthroughs (2-D blueprints animated/brought to life and shown to investors/clients) for potential home owners in a housing society (this was in the heydays of growth, circa Y2K). It was a 2-D or 2.5-D environment, the tools were a lot more complex, and I was the most inept person there, as I had no idea what camera positioning or light sources meant.

Apart from the gimmickry on show, I thought it would have been interesting if people had shared both the creative and the budget constraints of working in immersive technologies while bringing something good enough to the client. There was some discussion, in a ham-handed way, but not enough: there was considerable interest from youngsters in trying this new medium, but many lacked the opportunities, the knowledge, the equipment and the software stack to make it a reality.

Lastly, as far as literature goes, I have shared just bits and pieces of Indian English literature. There are 22 officially recognized Indian languages, and all of them have vibrant literary scenes. To take one example, Bengal has long been a bedrock of Bengali detective stories. I think I shared the history of Bengali crime fiction some time back as well, but nevertheless here it is again.

So apart from games and galleries, 3-D interactive visual novels with alternative endings could make for some interesting immersive experiences, provided we are able to shed the costs and overcome the technical challenges to make them a reality.


