
Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/
Updated: 4 months 2 weeks ago

The Fridge: Ubuntu 17.04 (Zesty Zapus) reached End of Life on January 13, 2018

Wed, 17/01/2018 - 2:32pm
Ubuntu announced its 17.04 (Zesty Zapus) release almost 9 months ago, on April 13, 2017. As a non-LTS release, 17.04 has a 9-month support cycle and, as such, will reach end of life on Saturday, January 13th. At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 17.04.

The supported upgrade path from Ubuntu 17.04 is via Ubuntu 17.10. Instructions and caveats for the upgrade may be found at: https://help.ubuntu.com/community/Upgrades

Ubuntu 17.10 continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce

Development of a complete response to the highly-publicized Meltdown and Spectre vulnerabilities is ongoing, and due to the timing with respect to this End of Life, we will not be providing updated Linux kernel packages for Ubuntu 17.04. We advise users to upgrade to Ubuntu 17.10 and install the updated kernel packages for that release when they become available. For more information about Canonical’s response to the Meltdown and Spectre vulnerabilities, see: https://insights.ubuntu.com/2018/01/04/ubuntu-updates-for-the-meltdown-spectre-vulnerabilities/

Since its launch in October 2004 Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

https://lists.ubuntu.com/archives/ubuntu-announce/2018-January/000227.html

Originally posted to the ubuntu-announce mailing list on Fri Jan 5 22:23:25 UTC 2018 by Steve Langasek, on behalf of the Ubuntu Release Team

Benjamin Mako Hill: OpenSym 2017 Program Postmortem

Tue, 16/01/2018 - 4:38am

The International Symposium on Open Collaboration (OpenSym, formerly WikiSym) is the premier academic venue exclusively focused on scholarly research into open collaboration. OpenSym is an ACM conference which means that, like conferences in computer science, it’s really more like a journal that gets published once a year than it is like most social science conferences. The “journal”, in this case, is called the Proceedings of the International Symposium on Open Collaboration and it consists of final copies of papers which are typically also presented at the conference. Like journal articles, papers that are published in the proceedings are not typically published elsewhere.

Along with Claudia Müller-Birn from the Freie Universität Berlin, I served as the Program Chair for OpenSym 2017. For the social scientists reading this, the role of program chair is similar to being an editor for a journal. My job was not to organize keynotes or logistics at the conference—that is the job of the General Chair. Indeed, in the end I didn’t even attend the conference! Along with Claudia, my role as Program Chair was to recruit submissions, recruit reviewers, coordinate and manage the review process, make final decisions on papers, and ensure that everything made it into the published proceedings in good shape.

In OpenSym 2017, we made several changes to the way the conference has been run:

  • In previous years, OpenSym had tracks on topics like free/open source software, wikis, open innovation, open education, and so on. In 2017, we used a single track model.
  • Because we eliminated tracks, we also eliminated track-level chairs. Instead, we appointed Associate Chairs or ACs.
  • We eliminated page limits and the distinction between full papers and notes.
  • We allowed authors to write rebuttals before reviews were finalized. Reviewers and ACs were allowed to modify their reviews and decisions based on rebuttals.
  • To assist in assigning papers to ACs and reviewers, we made extensive use of bidding. This means we had to recruit the pool of reviewers before papers were submitted.

Although each of these things has been tried in other conferences, or even piloted within individual tracks in OpenSym, all were new to OpenSym in general.

Overview Statistics

Papers submitted: 44
Papers accepted: 20
Acceptance rate: 45%
Posters submitted: 2
Posters presented: 9
Associate Chairs: 8
PC Members: 59
Authors: 108
Author countries: 20

The program was similar in size to the ones in the last 2-3 years in terms of the number of submissions. OpenSym is a small but mature and stable venue for research on open collaboration. This year was also similar, although slightly more competitive, in terms of the conference acceptance rate (45%—it had been slightly above 50% in previous years).

As in recent years, there were more posters presented than submitted because the PC found that some rejected work, although not ready to be published in the proceedings, was promising and advanced enough to be presented as a poster at the conference. Authors of posters submitted 4-page extended abstracts for their projects which were published in a “Companion to the Proceedings.”

Topics

Over the years, OpenSym has established a clear set of niches. Although we eliminated tracks, we asked authors to choose from a set of categories when submitting their work. These categories are similar to the tracks at OpenSym 2016. Interestingly, a number of authors selected more than one category. This would have led to difficult decisions in the old track-based system.

The figure above shows a breakdown of papers in terms of these categories as well as indicators of how many papers in each group were accepted. Papers in multiple categories are counted multiple times. Research on FLOSS and Wikimedia/Wikipedia continues to make up a sizable chunk of OpenSym’s submissions and publications. That said, these now make up a minority of total submissions. Although Wikipedia and Wikimedia research made up a smaller proportion of the submission pool, it was accepted at a higher rate. Also notable is the fact that 2017 saw an uptick in the number of papers on open innovation. I suspect this was due, at least in part, to General Chair Lorraine Morgan’s involvement (she specializes in that area). Somewhat surprisingly to me, we had a number of submissions about Bitcoin and blockchains. These are natural areas of growth for OpenSym but have never been a big part of work in our community in the past.

Scores and Reviews

As in previous years, review was single blind: reviewers’ identities were hidden but authors’ identities were not. Each paper received between 3 and 4 reviews plus a metareview by the Associate Chair assigned to the paper. All papers received 3 reviews, but ACs were encouraged to call in a 4th reviewer at any point in the process. In addition to the text of the reviews, we used a -3 to +3 scoring system where borderline papers were scored as 0. Reviewers scored papers using full-point increments.

The figure above shows scores for each paper submitted. The vertical grey lines reflect the distribution of scores where the minimum and maximum scores for each paper are the ends of the lines. The colored dots show the arithmetic mean for each score (unweighted by reviewer confidence). Colors show whether the papers were accepted, rejected, or presented as a poster. It’s important to keep in mind that two papers were submitted as posters.

Although Associate Chairs made the final decisions on a case-by-case basis, every paper that had an average score of less than 0 (the horizontal orange line) was rejected or presented as a poster, and most (but not all) papers with positive average scores were accepted. Although a positive average score seemed to be a requirement for publication, negative individual scores weren’t necessarily showstoppers. We accepted 6 papers with at least one negative score. We ultimately accepted 20 papers—45% of those submitted.

Rebuttals

This was the first time that OpenSym used a rebuttal or author response and we are thrilled with how it went. Although they were entirely optional, almost every team of authors used it! Authors of 40 of our 46 submissions (87%!) submitted rebuttals.

Lower: 6
Unchanged: 24
Higher: 10

The table above shows how average scores changed after authors submitted rebuttals. The table shows that rebuttals’ effect was typically neutral or positive. Most average scores stayed the same but nearly two times as many average scores increased as decreased in the post-rebuttal period. We hope that this made the process feel more fair for authors and I feel, having read them all, that it led to improvements in the quality of final papers.

Page Lengths

In previous years, OpenSym followed most other venues in computer science by allowing submission of two kinds of papers: full papers which could be up to 10 pages long and short papers which could be up to 4. Following some other conferences, we eliminated page limits altogether. This is the text we used in the OpenSym 2017 CFP:

There is no minimum or maximum length for submitted papers. Rather, reviewers will be instructed to weigh the contribution of a paper relative to its length. Papers should report research thoroughly but succinctly: brevity is a virtue. A typical length of a “long research paper” is 10 pages (formerly the maximum length limit and the limit on OpenSym tracks), but may be shorter if the contribution can be described and supported in fewer pages— shorter, more focused papers (called “short research papers” previously) are encouraged and will be reviewed like any other paper. While we will review papers longer than 10 pages, the contribution must warrant the extra length. Reviewers will be instructed to reject papers whose length is incommensurate with the size of their contribution.

The following graph shows the distribution of page lengths across papers in our final program.

In the end 3 of 20 published papers (15%) were over 10 pages. More surprisingly, 11 of the accepted papers (55%) were below the old 10-page limit. The fear some have expressed, that page limits are the only thing keeping OpenSym from publishing enormous rambling manuscripts, seems to be unwarranted, at least so far.

Bidding

Although I won’t post any analysis or graphs, bidding worked well. With only two exceptions, every single assigned review went to someone who had bid “yes” or “maybe” for the paper in question, and the vast majority went to people who had bid “yes.” However, this comes with one major proviso: people who did not bid at all were marked as “maybe” for every single paper.

Given a reviewer pool whose diversity of expertise matches that in your pool of authors, bidding works fantastically. But everybody needs to bid. The only problems we had with reviewers were with people who had failed to bid. It might be that reviewers who don’t bid are less committed to the conference, more overextended, more likely to drop things in general, etc. It might also be that reviewers who fail to bid get poor matches, which cause them to become less interested, willing, or able to do their reviews well and on time.

Having used bidding twice as chair or track-chair, my sense is that bidding is a fantastic thing to incorporate into any conference review process. The major limitations are that you need to build a program committee (PC) before the conference (rather than finding the perfect reviewers for specific papers) and you have to find ways to incentivize or communicate the importance of getting your PC members to bid.

Conclusions

The final results were a fantastic collection of published papers. Of course, it couldn’t have been possible without the huge collection of conference chairs, associate chairs, program committee members, external reviewers, and staff supporters.

Although we tried quite a lot of new things, my sense is that nothing we changed made things worse and many changes made things smoother or better. Although I’m not directly involved in organizing OpenSym 2018, I am on the OpenSym steering committee. My sense is that most of the changes we made are going to be carried over this year.

Finally, it’s also been announced that OpenSym 2018 will be in Paris on August 22-24. The call for papers should be out soon and the OpenSym 2018 paper deadline has already been announced as March 15, 2018. You should consider submitting! I hope to see you in Paris!

This Analysis

OpenSym used the gratis version of EasyChair to manage the conference, which doesn’t allow chairs to export data. As a result, the data used in this postmortem was scraped from EasyChair using two Python scripts. Numbers and graphs were created using a knitr file that combines R visualization and analysis code with markdown to create the HTML directly from the datasets. I’ve made all the code I used to produce this analysis available in this git repository. I hope someone else finds it useful. Because the data contains sensitive information on the review process, I’m not publishing the data.

This blog post was originally posted on the Community Data Science Collective blog.

Lubuntu Blog: Lubuntu 17.04 has reached End of Life

Mon, 15/01/2018 - 9:32am
The Lubuntu Team announces that as a non-LTS release, 17.04 has a 9-month support cycle and, as such, reached end of life on Saturday, January 13, 2018. Lubuntu will no longer provide bug fixes or security updates for 17.04, and we strongly recommend that you update to 17.10, which continues to be actively supported with […]

Nathan Haines: Introducing the Ubuntu Free Culture Showcase for 18.04

Mon, 15/01/2018 - 9:00am

Ubuntu’s changed a lot in the last year, and everything is leading up to a really exciting event: the release of 18.04 LTS! This next version of Ubuntu will once again offer a stable foundation for countless humans who use computers for work, play, art, relaxation, and creation. Alongside the various visual refreshes of Ubuntu, it’s also time to go to the community and ask for the best wallpapers. And it’s also time to look for a new video and music file that will be waiting for Ubuntu users in the install media’s Examples folder, to reassure them that their video and sound drivers are quite operational.

Long-term support releases like Ubuntu 18.04 LTS are very important, because they are downloaded and installed ten times more often than every single interim release combined. That means that the wallpapers, video, and music that are shipped will be seen ten times more than in other releases. So artists, select your best works. Ubuntu enthusiasts, spread the word about the contest as far and wide as you can. Everyone can help make this next LTS version of Ubuntu an amazing success.

All content must be released under a Creative Commons Attribution-Sharealike or Creative Commons Attribution license. (The Creative Commons Zero waiver is okay, too!) Each entrant must only submit content they have created themselves, and all submissions must adhere to the Ubuntu Code of Conduct.

The winners will be featured in the Ubuntu 18.04 LTS release this April!

There are a lot of details, so please see the Ubuntu Free Culture Showcase wiki page for details and links to where you can submit your work from now through March 15th. Good luck!

Simon Raffeiner: What a GNU C Compiler Bug looks like

Sun, 14/01/2018 - 6:35pm
Back in December a Linux Mint user sent a strange bug report to the darktable mailing list. Apparently the GNU C Compiler (GCC) on his system exited with an unexpected error message, breaking the build process.

Sebastian Dröge: How to write GStreamer Elements in Rust Part 1: A Video Filter for converting RGB to grayscale

Sat, 13/01/2018 - 11:23pm

This is part one of a series of blog posts that I’ll write in the next weeks, as previously announced in the GStreamer Rust bindings 0.10.0 release blog post. Since the last series of blog posts about writing GStreamer plugins in Rust ([1] [2] [3] [4]) a lot has changed, and the content of those blog posts has only historical value now, as the journey of experimentation to what exists now.

In this first part we’re going to write a plugin that contains a video filter element. The video filter can convert from RGB to grayscale, either output as 8-bit per pixel grayscale or 32-bit per pixel RGB. In addition there’s a property to invert all grayscale values, or to shift them by up to 255 values. In the end this will allow you to watch Big Buck Bunny, or anything else really that can somehow go into a GStreamer pipeline, in grayscale. Or encode the output to a new video file, send it over the network via WebRTC or something else, or basically do anything you want with it.

Big Buck Bunny – Grayscale

This will show the basics of how to write a GStreamer plugin and element in Rust: the basic setup for registering a type and implementing it in Rust, and how to use the various GStreamer API and APIs from the Rust standard library to do the processing.

The final code for this plugin can be found here, and it is based on the 0.1 version of the gst-plugin crate and the 0.10 version of the gstreamer crate. At least Rust 1.20 is required for all this. I’m also assuming that you have GStreamer (at least version 1.8) installed for your platform, see e.g. the GStreamer bindings installation instructions.

Table of Contents
  1. Project Structure
  2. Plugin Initialization
  3. Type Registration
  4. Type Class & Instance Initialization
  5. Caps & Pad Templates
  6. Caps Handling Part 1
  7. Caps Handling Part 2
  8. Conversion of BGRx Video Frames to Grayscale
  9. Testing the new element
  10. Properties
  11. What next?
Project Structure

We’ll create a new cargo project with cargo init --lib --name gst-plugin-tutorial. This will create a basically empty Cargo.toml and a corresponding src/lib.rs. We will use this structure: lib.rs will contain all the plugin-related code, while separate modules will contain any GStreamer plugins that are added.

The empty Cargo.toml has to be updated to list all the dependencies that we need, and to define that the crate should result in a cdylib, i.e. a C library that does not contain any Rust-specific metadata. The final Cargo.toml looks as follows

[package]
name = "gst-plugin-tutorial"
version = "0.1.0"
authors = ["Sebastian Dröge <sebastian@centricular.com>"]
repository = "https://github.com/sdroege/gst-plugin-rs"
license = "MIT/Apache-2.0"

[dependencies]
glib = "0.4"
gstreamer = "0.10"
gstreamer-base = "0.10"
gstreamer-video = "0.10"
gst-plugin = "0.1"

[lib]
name = "gstrstutorial"
crate-type = ["cdylib"]
path = "src/lib.rs"

We’re depending on the gst-plugin crate, which provides all the basic infrastructure for implementing GStreamer plugins and elements. In addition we depend on the gstreamer, gstreamer-base and gstreamer-video crates for various GStreamer API that we’re going to use later, and the glib crate to be able to use some GLib API that we’ll need. GStreamer is building upon GLib, and this leaks through in various places.

With the basic project structure set up, we should be able to compile the project with cargo build now, which will download and build all dependencies and then create a file called target/debug/libgstrstutorial.so (or .dll on Windows, .dylib on macOS). This is going to be our GStreamer plugin.

To allow GStreamer to find our new plugin and make it available in every GStreamer-based application, we could install it into the system- or user-wide GStreamer plugin path or simply point the GST_PLUGIN_PATH environment variable to the directory containing it:

export GST_PLUGIN_PATH=`pwd`/target/debug

If you now run the gst-inspect-1.0 tool on libgstrstutorial.so, it will not yet print all the information it can extract from the plugin, but for now just complain that this is not a valid GStreamer plugin. Which is true, as we didn’t write any code for it yet.

Plugin Initialization

Let’s start editing src/lib.rs to make this an actual GStreamer plugin. First of all, we need to add various extern crate directives to be able to use our dependencies and also mark some of them #[macro_use] because we’re going to use macros defined in some of them. This looks like the following

extern crate glib;
#[macro_use]
extern crate gstreamer as gst;
extern crate gstreamer_base as gst_base;
extern crate gstreamer_video as gst_video;
#[macro_use]
extern crate gst_plugin;

Next we make use of the plugin_define! macro from the gst-plugin crate to set up the static metadata of the plugin (and make the shared library recognizable to GStreamer as a valid plugin), and to define the name of our entry point function (plugin_init) where we will register all the elements that this plugin provides.

plugin_define!(
    b"rstutorial\0",
    b"Rust Tutorial Plugin\0",
    plugin_init,
    b"1.0\0",
    b"MIT/X11\0",
    b"rstutorial\0",
    b"rstutorial\0",
    b"https://github.com/sdroege/gst-plugin-rs\0",
    b"2017-12-30\0"
);

This is unfortunately not very beautiful yet due to a) GStreamer requiring this information to be statically available in the shared library, not returned by a function (starting with GStreamer 1.14 it can be a function), and b) Rust not allowing raw strings (b"blabla") to be concatenated with a macro like the std::concat macro (so that the b and \0 parts could be hidden away). Expect this to become better in the future.

The static plugin metadata that we provide here is

  1. name of the plugin
  2. short description for the plugin
  3. name of the plugin entry point function
  4. version number of the plugin
  5. license of the plugin (only a fixed set of licenses is allowed here, see)
  6. source package name
  7. binary package name (only really makes sense for e.g. Linux distributions)
  8. origin of the plugin
  9. release date of this version

In addition we’re defining an empty plugin entry point function that just returns true

fn plugin_init(plugin: &gst::Plugin) -> bool {
    true
}

With all that given, gst-inspect-1.0 should print exactly this information when running on the libgstrstutorial.so file (or .dll on Windows, or .dylib on macOS)

gst-inspect-1.0 target/debug/libgstrstutorial.so

Type Registration

As a next step, we’re going to add another module rgb2gray to our project, and call a function called register from our plugin_init function.

mod rgb2gray;

fn plugin_init(plugin: &gst::Plugin) -> bool {
    rgb2gray::register(plugin);
    true
}

With that our src/lib.rs is complete, and all following code is only in src/rgb2gray.rs. At the top of the new file we first need to add various use-directives to import various types and functions we’re going to use into the current module’s scope

use glib;
use gst;
use gst::prelude::*;
use gst_video;

use gst_plugin::properties::*;
use gst_plugin::object::*;
use gst_plugin::element::*;
use gst_plugin::base_transform::*;

use std::i32;
use std::sync::Mutex;

GStreamer is based on the GLib object system (GObject). C (just like Rust) does not have built-in support for object-oriented programming, inheritance, virtual methods and related concepts, and GObject makes these features available in C as a library. Without language support this is a quite verbose endeavour in C, and the gst-plugin crate tries to expose all this in a (as much as possible) Rust-style API while hiding all the details that do not really matter.

So, as a next step we need to register a new type for our RGB to Grayscale converter GStreamer element with the GObject type system, and then register that type with GStreamer to be able to create new instances of it. We do this with the following code

struct Rgb2GrayStatic;

impl ImplTypeStatic<BaseTransform> for Rgb2GrayStatic {
    fn get_name(&self) -> &str {
        "Rgb2Gray"
    }

    fn new(&self, element: &BaseTransform) -> Box<BaseTransformImpl<BaseTransform>> {
        Rgb2Gray::new(element)
    }

    fn class_init(&self, klass: &mut BaseTransformClass) {
        Rgb2Gray::class_init(klass);
    }
}

pub fn register(plugin: &gst::Plugin) {
    let type_ = register_type(Rgb2GrayStatic);
    gst::Element::register(plugin, "rsrgb2gray", 0, type_);
}

This defines a zero-sized struct Rgb2GrayStatic that is used to implement the ImplTypeStatic<BaseTransform> trait on it for providing static information about the type to the type system. In our case this is a zero-sized struct, but in other cases this struct might contain actual data (for example if the same element code is used for multiple elements, e.g. when wrapping a generic codec API that provides support for multiple decoders and then wanting to register one element per decoder). By implementing ImplTypeStatic<BaseTransform> we also declare that our element is going to be based on the GStreamer BaseTransform base class, which provides a relatively simple API for 1:1 transformation elements like ours is going to be.

ImplTypeStatic provides functions that return a name for the type, and functions for initializing/returning a new instance of our element (new) and for initializing the class metadata (class_init, more on that later). We simply let those functions proxy to associated functions on the Rgb2Gray struct that we’re going to define at a later time.

In addition, we also define a register function (the one that is already called from our plugin_init function) and in there first register the Rgb2GrayStatic type metadata with the GObject type system to retrieve a type ID, and then register this type ID to GStreamer to be able to create new instances of it with the name “rsrgb2gray” (e.g. when using gst::ElementFactory::make).

Type Class & Instance Initialization

As a next step we declare the Rgb2Gray struct and implement the new and class_init functions on it. In the first version, this struct is almost empty but we will later use it to store all state of our element.

struct Rgb2Gray {
    cat: gst::DebugCategory,
}

impl Rgb2Gray {
    fn new(_transform: &BaseTransform) -> Box<BaseTransformImpl<BaseTransform>> {
        Box::new(Self {
            cat: gst::DebugCategory::new(
                "rsrgb2gray",
                gst::DebugColorFlags::empty(),
                "Rust RGB-GRAY converter",
            ),
        })
    }

    fn class_init(klass: &mut BaseTransformClass) {
        klass.set_metadata(
            "RGB-GRAY Converter",
            "Filter/Effect/Converter/Video",
            "Converts RGB to GRAY or grayscale RGB",
            "Sebastian Dröge <sebastian@centricular.com>",
        );

        klass.configure(BaseTransformMode::NeverInPlace, false, false);
    }
}

In the new function we return a boxed (i.e. heap-allocated) version of our struct, containing a newly created GStreamer debug category of name “rsrgb2gray”. We’re going to use this debug category later for making use of GStreamer’s debug logging system for logging the state and changes of our element.

In the class_init function we, again, set up some metadata for our new element. In this case these are a description, a classification of our element, a longer description and the author. The metadata can later be retrieved and made use of via the Registry and PluginFeature/ElementFactory API. We also configure the BaseTransform class and define that we will never operate in-place (producing our output in the input buffer), and that we don’t want to work in passthrough mode if the input/output formats are the same.

Additionally we need to implement various traits on the Rgb2Gray struct, which will later be used to override virtual methods of the various parent classes of our element. For now we can keep the trait implementations empty. There is one trait implementation required per parent class.

impl ObjectImpl<BaseTransform> for Rgb2Gray {}
impl ElementImpl<BaseTransform> for Rgb2Gray {}
impl BaseTransformImpl<BaseTransform> for Rgb2Gray {}

With all this defined, gst-inspect-1.0 should be able to show some more information about our element already but will still complain that it’s not complete yet.

Caps & Pad Templates

Data flow of GStreamer elements is happening via pads, which are the input(s) and output(s) (or sinks and sources) of an element. Via the pads, buffers containing actual media data, events or queries are transferred. An element can have any number of sink and source pads, but our new element will only have one of each.

To be able to declare what kinds of pads an element can create (they are not necessarily all static but could be created at runtime by the element or the application), it is necessary to install so-called pad templates during the class initialization. These pad templates contain the name (or rather “name template”, it could be something like src_%u for e.g. pad templates that declare multiple possible pads), the direction of the pad (sink or source), the availability of the pad (is it always there, sometimes added/removed by the element or to be requested by the application) and all the possible media types (called caps) that the pad can consume (sink pads) or produce (src pads).

In our case we only have always pads, one sink pad called “sink”, on which we can only accept RGB (BGRx to be exact) data with any width/height/framerate and one source pad called “src”, on which we will produce either RGB (BGRx) data or GRAY8 (8-bit grayscale) data. We do this by adding the following code to the class_init function.

let caps = gst::Caps::new_simple(
    "video/x-raw",
    &[
        (
            "format",
            &gst::List::new(&[
                &gst_video::VideoFormat::Bgrx.to_string(),
                &gst_video::VideoFormat::Gray8.to_string(),
            ]),
        ),
        ("width", &gst::IntRange::<i32>::new(0, i32::MAX)),
        ("height", &gst::IntRange::<i32>::new(0, i32::MAX)),
        (
            "framerate",
            &gst::FractionRange::new(
                gst::Fraction::new(0, 1),
                gst::Fraction::new(i32::MAX, 1),
            ),
        ),
    ],
);
let src_pad_template = gst::PadTemplate::new(
    "src",
    gst::PadDirection::Src,
    gst::PadPresence::Always,
    &caps,
);
klass.add_pad_template(src_pad_template);

let caps = gst::Caps::new_simple(
    "video/x-raw",
    &[
        ("format", &gst_video::VideoFormat::Bgrx.to_string()),
        ("width", &gst::IntRange::<i32>::new(0, i32::MAX)),
        ("height", &gst::IntRange::<i32>::new(0, i32::MAX)),
        (
            "framerate",
            &gst::FractionRange::new(
                gst::Fraction::new(0, 1),
                gst::Fraction::new(i32::MAX, 1),
            ),
        ),
    ],
);
let sink_pad_template = gst::PadTemplate::new(
    "sink",
    gst::PadDirection::Sink,
    gst::PadPresence::Always,
    &caps,
);
klass.add_pad_template(sink_pad_template);

The names “src” and “sink” are pre-defined by the BaseTransform class and this base-class will also create the actual pads with those names from the templates for us whenever a new element instance is created. Otherwise we would have to do that in our new function but here this is not needed.

If you now run gst-inspect-1.0 on the rsrgb2gray element, these pad templates with their caps should also show up.

Caps Handling Part 1

As a next step we will add caps handling to our new element. This involves overriding 4 virtual methods from the BaseTransformImpl trait, and actually storing the configured input and output caps inside our element struct. Let’s start with the latter

struct State {
    in_info: gst_video::VideoInfo,
    out_info: gst_video::VideoInfo,
}

struct Rgb2Gray {
    cat: gst::DebugCategory,
    state: Mutex<Option<State>>,
}

impl Rgb2Gray {
    fn new(_transform: &BaseTransform) -> Box<BaseTransformImpl<BaseTransform>> {
        Box::new(Self {
            cat: gst::DebugCategory::new(
                "rsrgb2gray",
                gst::DebugColorFlags::empty(),
                "Rust RGB-GRAY converter",
            ),
            state: Mutex::new(None),
        })
    }
}

We define a new struct State that contains the input and output caps, stored in a VideoInfo. VideoInfo is a struct that contains various fields like width/height, framerate and the video format, and allows us to conveniently work with the properties of (raw) video formats. We have to store it inside a Mutex in our Rgb2Gray struct as this can (in theory) be accessed from multiple threads at the same time.

Whenever input/output caps are configured on our element, the set_caps virtual method of BaseTransform is called with both caps (i.e. in the very beginning before the data flow and whenever it changes), and all following video frames that pass through our element should be according to those caps. Once the element is shut down, the stop virtual method is called and it would make sense to release the State as it only contains stream-specific information. We’re doing this by adding the following to the BaseTransformImpl trait implementation

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn set_caps(&self, element: &BaseTransform, incaps: &gst::Caps, outcaps: &gst::Caps) -> bool {
        let in_info = match gst_video::VideoInfo::from_caps(incaps) {
            None => return false,
            Some(info) => info,
        };
        let out_info = match gst_video::VideoInfo::from_caps(outcaps) {
            None => return false,
            Some(info) => info,
        };

        gst_debug!(
            self.cat,
            obj: element,
            "Configured for caps {} to {}",
            incaps,
            outcaps
        );

        *self.state.lock().unwrap() = Some(State {
            in_info: in_info,
            out_info: out_info,
        });

        true
    }

    fn stop(&self, element: &BaseTransform) -> bool {
        // Drop state
        let _ = self.state.lock().unwrap().take();

        gst_info!(self.cat, obj: element, "Stopped");

        true
    }
}

This code should be relatively self-explanatory. In set_caps we’re parsing the two caps into a VideoInfo and then store this in our State, in stop we drop the State and replace it with None. In addition we make use of our debug category here and use the gst_info! and gst_debug! macros to output the current caps configuration to the GStreamer debug logging system. This information can later be useful for debugging any problems once the element is running.

Next we have to provide information to the BaseTransform base class about the size in bytes of a video frame with specific caps. This is needed so that the base class can allocate an appropriately sized output buffer for us, that we can then fill later. This is done with the get_unit_size virtual method, which is required to return the size of one processing unit in specific caps. In our case, one processing unit is one video frame. In the case of raw audio it would be the size of one sample multiplied by the number of channels.

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn get_unit_size(&self, _element: &BaseTransform, caps: &gst::Caps) -> Option<usize> {
        gst_video::VideoInfo::from_caps(caps).map(|info| info.size())
    }
}

We simply make use of the VideoInfo API here again, which conveniently gives us the size of one video frame already.

Instead of get_unit_size it would also be possible to implement the transform_size virtual method, which gets passed one size together with its corresponding caps and a second caps, and is supposed to return that size converted to the second caps. Depending on how your element works, one or the other can be easier to implement.
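As a rough illustration of that alternative, here is a hedged sketch; the exact transform_size signature in the gst-plugin crate may differ from the one assumed below, so treat this as a sketch of the idea rather than the crate’s actual API:

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    // Assumed signature (parameter order and types are an assumption, not taken
    // from the gst-plugin crate): map a size in `caps` units to a size in
    // `othercaps` units.
    fn transform_size(
        &self,
        _element: &BaseTransform,
        _direction: gst::PadDirection,
        _caps: &gst::Caps,
        _size: usize,
        othercaps: &gst::Caps,
    ) -> Option<usize> {
        // For whole video frames the converted size depends only on the target caps,
        // so we can ignore the incoming size and return the frame size of `othercaps`.
        gst_video::VideoInfo::from_caps(othercaps).map(|info| info.size())
    }
}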

Caps Handling Part 2

We’re not done yet with caps handling though. As a very last step it is required that we implement a function that is converting caps into the corresponding caps in the other direction. For example, if we receive BGRx caps with some width/height on the sinkpad, we are supposed to convert this into new caps with the same width/height but BGRx or GRAY8. That is, we can convert BGRx to BGRx or GRAY8. Similarly, if the element downstream of ours can accept GRAY8 with a specific width/height from our source pad, we have to convert this to BGRx with that very same width/height.

This has to be implemented in the transform_caps virtual method, and looks as follows

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn transform_caps(
        &self,
        element: &BaseTransform,
        direction: gst::PadDirection,
        caps: gst::Caps,
        filter: Option<&gst::Caps>,
    ) -> gst::Caps {
        let other_caps = if direction == gst::PadDirection::Src {
            let mut caps = caps.clone();

            for s in caps.make_mut().iter_mut() {
                s.set("format", &gst_video::VideoFormat::Bgrx.to_string());
            }

            caps
        } else {
            let mut gray_caps = gst::Caps::new_empty();

            {
                let gray_caps = gray_caps.get_mut().unwrap();

                for s in caps.iter() {
                    let mut s_gray = s.to_owned();
                    s_gray.set("format", &gst_video::VideoFormat::Gray8.to_string());
                    gray_caps.append_structure(s_gray);
                }
                gray_caps.append(caps.clone());
            }

            gray_caps
        };

        gst_debug!(
            self.cat,
            obj: element,
            "Transformed caps from {} to {} in direction {:?}",
            caps,
            other_caps,
            direction
        );

        if let Some(filter) = filter {
            filter.intersect_with_mode(&other_caps, gst::CapsIntersectMode::First)
        } else {
            other_caps
        }
    }
}

This caps conversion happens in 3 steps. First we check if we got caps for the source pad. In that case, the caps on the other pad (the sink pad) are going to be exactly the same caps, but no matter whether the caps contained BGRx or GRAY8 they must become BGRx, as that’s the only format that our sink pad can accept. We do this by creating a clone of the input caps, then making sure that those caps are actually writable (i.e. we’re having the only reference to them, or a copy is going to be created), and then iterating over all the structures inside the caps and setting their “format” field to BGRx. After this, all structures in the new caps will have the format field set to BGRx.

Similarly, if we get caps for the sink pad and are supposed to convert it to caps for the source pad, we create new caps and in there append a copy of each structure of the input caps (which are BGRx) with the format field set to GRAY8. In the end we append the original caps, giving us first all caps as GRAY8 and then the same caps as BGRx. With this ordering we signal to GStreamer that we would prefer to output GRAY8 over BGRx.

In the end the caps we created for the other pad are filtered against optional filter caps to reduce the potential size of the caps. This is done by intersecting the caps with that filter, while keeping the order (and thus preferences) of the filter caps (gst::CapsIntersectMode::First).

Conversion of BGRx Video Frames to Grayscale

Now that all the caps handling is implemented, we can finally get to the implementation of the actual video frame conversion. For this we start with defining a helper function bgrx_to_gray that converts one BGRx pixel to a grayscale value. The BGRx pixel is passed as a &[u8] slice with 4 elements and the function returns another u8 for the grayscale value.

impl Rgb2Gray {
    #[inline]
    fn bgrx_to_gray(in_p: &[u8]) -> u8 {
        // See https://en.wikipedia.org/wiki/YUV#SDTV_with_BT.601
        const R_Y: u32 = 19595; // 0.299 * 65536
        const G_Y: u32 = 38470; // 0.587 * 65536
        const B_Y: u32 = 7471; // 0.114 * 65536

        assert_eq!(in_p.len(), 4);

        let b = u32::from(in_p[0]);
        let g = u32::from(in_p[1]);
        let r = u32::from(in_p[2]);

        let gray = ((r * R_Y) + (g * G_Y) + (b * B_Y)) / 65536;
        (gray as u8)
    }
}

This function works by extracting the blue, green and red components from each pixel (remember: we work on BGRx, so the first value will be blue, the second green, the third red and the fourth unused), extending them from 8 to 32 bits for a wider value range and then converting them to the Y component of the YUV colorspace (basically what your grandparents’ black & white TV would’ve displayed). The coefficients come from the Wikipedia page about YUV and are normalized to unsigned 16 bit integers so we can keep some accuracy, don’t have to work with floating point arithmetic and stay inside the range of 32 bit integers for all our calculations. As you can see, the green component is weighted more than the others, which comes from our eyes being more sensitive to green than to other colors.
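Written out as a formula, the fixed-point computation above is (the coefficients are exactly those from the code; they sum to 65536, so a pure white pixel maps back to 255):

\[
Y \;=\; \frac{19595\,R + 38470\,G + 7471\,B}{2^{16}},
\qquad
19595 \approx 0.299 \cdot 2^{16},\quad
38470 \approx 0.587 \cdot 2^{16},\quad
7471 \approx 0.114 \cdot 2^{16}
\]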

Note: This is only doing the actual conversion from linear RGB to grayscale (and in BT.601 colorspace). To do this conversion correctly you need to know your colorspaces and use the correct coefficients for conversion, and also do gamma correction. See this about why it is important.

Afterwards we have to actually call this function on every pixel. For this the transform virtual method is implemented, which gets an input and an output buffer passed and is supposed to read the input buffer and fill the output buffer. The implementation looks as follows, and is going to be our biggest function for this element

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn transform(
        &self,
        element: &BaseTransform,
        inbuf: &gst::Buffer,
        outbuf: &mut gst::BufferRef,
    ) -> gst::FlowReturn {
        let mut state_guard = self.state.lock().unwrap();
        let state = match *state_guard {
            None => {
                gst_element_error!(element, gst::CoreError::Negotiation, ["Have no state yet"]);
                return gst::FlowReturn::NotNegotiated;
            }
            Some(ref mut state) => state,
        };

        let in_frame = match gst_video::VideoFrameRef::from_buffer_ref_readable(
            inbuf.as_ref(),
            &state.in_info,
        ) {
            None => {
                gst_element_error!(
                    element,
                    gst::CoreError::Failed,
                    ["Failed to map input buffer readable"]
                );
                return gst::FlowReturn::Error;
            }
            Some(in_frame) => in_frame,
        };

        let mut out_frame =
            match gst_video::VideoFrameRef::from_buffer_ref_writable(outbuf, &state.out_info) {
                None => {
                    gst_element_error!(
                        element,
                        gst::CoreError::Failed,
                        ["Failed to map output buffer writable"]
                    );
                    return gst::FlowReturn::Error;
                }
                Some(out_frame) => out_frame,
            };

        let width = in_frame.width() as usize;
        let in_stride = in_frame.plane_stride()[0] as usize;
        let in_data = in_frame.plane_data(0).unwrap();
        let out_stride = out_frame.plane_stride()[0] as usize;
        let out_format = out_frame.format();
        let out_data = out_frame.plane_data_mut(0).unwrap();

        if out_format == gst_video::VideoFormat::Bgrx {
            assert_eq!(in_data.len() % 4, 0);
            assert_eq!(out_data.len() % 4, 0);
            assert_eq!(out_data.len() / out_stride, in_data.len() / in_stride);

            let in_line_bytes = width * 4;
            let out_line_bytes = width * 4;

            assert!(in_line_bytes <= in_stride);
            assert!(out_line_bytes <= out_stride);

            for (in_line, out_line) in in_data
                .chunks(in_stride)
                .zip(out_data.chunks_mut(out_stride))
            {
                for (in_p, out_p) in in_line[..in_line_bytes]
                    .chunks(4)
                    .zip(out_line[..out_line_bytes].chunks_mut(4))
                {
                    assert_eq!(out_p.len(), 4);

                    let gray = Rgb2Gray::bgrx_to_gray(in_p);
                    out_p[0] = gray;
                    out_p[1] = gray;
                    out_p[2] = gray;
                }
            }
        } else if out_format == gst_video::VideoFormat::Gray8 {
            assert_eq!(in_data.len() % 4, 0);
            assert_eq!(out_data.len() / out_stride, in_data.len() / in_stride);

            let in_line_bytes = width * 4;
            let out_line_bytes = width;

            assert!(in_line_bytes <= in_stride);
            assert!(out_line_bytes <= out_stride);

            for (in_line, out_line) in in_data
                .chunks(in_stride)
                .zip(out_data.chunks_mut(out_stride))
            {
                for (in_p, out_p) in in_line[..in_line_bytes]
                    .chunks(4)
                    .zip(out_line[..out_line_bytes].iter_mut())
                {
                    let gray = Rgb2Gray::bgrx_to_gray(in_p);
                    *out_p = gray;
                }
            }
        } else {
            unimplemented!();
        }

        gst::FlowReturn::Ok
    }
}

What happens here is that we first of all lock our state (the input/output VideoInfo) and error out if we don’t have any yet (which can’t really happen unless other elements have a bug, but better safe than sorry). After that we map the input buffer readable and the output buffer writable with the VideoFrameRef API. By mapping the buffers we get access to the underlying bytes of them, and the mapping operation could for example make GPU memory available or just do nothing and give us access to a normally allocated memory area. We have access to the bytes of the buffer until the VideoFrameRef goes out of scope.

Instead of VideoFrameRef we could’ve also used the gst::Buffer::map_readable() and gst::Buffer::map_writable() API, but unlike those, the VideoFrameRef API also extracts various metadata from the raw video buffers and makes them available. For example we can directly access the different planes as slices without having to calculate the offsets ourselves, or we get direct access to the width and height of the video frame.
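For comparison, a minimal hedged sketch of the plain buffer-mapping route mentioned above; it assumes that map_readable() in this version of the gstreamer crate returns an Option wrapping a map that can be viewed as a byte slice (the exact return and error types may differ between crate versions):

fn plain_mapping_sketch(inbuf: &gst::Buffer) {
    // Map the buffer readable; this only yields raw bytes, with no video
    // metadata such as strides, plane offsets or dimensions.
    if let Some(map) = inbuf.map_readable() {
        let _data: &[u8] = map.as_slice();
        // Any per-pixel offsets into _data would have to be computed by hand
        // from the negotiated caps, which is exactly what VideoFrameRef spares us.
    }
}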

After mapping the buffers, we store various information we’re going to need in local variables to save some typing later. This is the width (same for input and output as we never changed the width in transform_caps), the input and output (row-) stride (the number of bytes per row/line, which possibly includes some padding at the end of each line for alignment reasons), the output format (which can be BGRx or GRAY8 because of how we implemented transform_caps) and the pointers to the first plane of the input and output (which in this case also is the only plane, BGRx and GRAY8 both have only a single plane containing all the RGB/gray components).
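To make the stride arithmetic concrete, here is a small standalone sketch (illustrative names, not part of the element) of how one BGRx pixel at coordinates (x, y) would be located inside such a mapped plane:

// Illustrative only: locate the 4 BGRx bytes of pixel (x, y) in a plane whose
// rows are `stride` bytes apart (stride >= width * 4 because of possible padding).
fn bgrx_pixel(plane: &[u8], stride: usize, x: usize, y: usize) -> &[u8] {
    let offset = y * stride + x * 4;
    &plane[offset..offset + 4]
}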

Then based on whether the output is BGRx or GRAY8, we iterate over all pixels. The code is basically the same in both cases, so I’m only going to explain the case where BGRx is output.

We start by iterating over each line of the input and output, and do so by using the chunks iterator to give us chunks of as many bytes as the (row-) stride of the video frame is, do the same for the other frame and then zip both iterators together. This means that on each iteration we get exactly one line as a slice from each of the frames and can then start accessing the actual pixels in each line.

To access the individual pixels in each line, we again use the chunks iterator the same way, but this time to always give us chunks of 4 bytes from each line. As BGRx uses 4 bytes for each pixel, this gives us exactly one pixel. Instead of iterating over the whole line, we only take the actual sub-slice that contains the pixels, not the whole line with stride number of bytes containing potential padding at the end. Now for each of these pixels we call our previously defined bgrx_to_gray function and then fill the B, G and R components of the output buffer with that value to get grayscale output. And that’s all.

Using Rust high-level abstractions like the chunks iterators and bounds-checking slice accesses might seem like it’s going to cause quite some performance penalty, but if you look at the generated assembly most of the bounds checks are completely optimized away and the resulting assembly code is close to what one would’ve written manually (especially when using the newly-added exact_chunks iterators). Here you’re getting safe and high-level looking code with low-level performance!

You might’ve also noticed the various assertions in the processing function. These are there to give further hints to the compiler about properties of the code, and thus potentially allow it to optimize the code better, for example by moving bounds checks out of the inner loop and letting the assertion outside the loop check the same condition once. In Rust adding assertions can often improve performance by allowing further optimizations to be applied, but in the end always check the resulting assembly to see if what you did made any difference.

Testing the new element

Now we have implemented almost all functionality of our new element and can run it on actual video data. This can be done with the gst-launch-1.0 tool, or any application using GStreamer and allowing us to insert our new element somewhere in the video part of the pipeline. With gst-launch-1.0 you could run for example the following pipelines

# Run on a test pattern
gst-launch-1.0 videotestsrc ! rsrgb2gray ! videoconvert ! autovideosink

# Run on some video file, also playing the audio
gst-launch-1.0 playbin uri=file:///path/to/some/file video-filter=rsrgb2gray

Note that you will likely want to compile with cargo build --release and add the target/release directory to GST_PLUGIN_PATH instead. The debug build might be too slow, and generally the release builds are multiple orders of magnitude (!) faster.

Properties

The only features missing now are the properties I mentioned in the opening paragraph: one boolean property to invert the grayscale value and one integer property to shift the value by up to 255. Implementing this on top of the previous code is not a lot of work. Let’s start with defining a struct for holding the property values and defining the property metadata.

const DEFAULT_INVERT: bool = false;
const DEFAULT_SHIFT: u32 = 0;

#[derive(Debug, Clone, Copy)]
struct Settings {
    invert: bool,
    shift: u32,
}

impl Default for Settings {
    fn default() -> Self {
        Settings {
            invert: DEFAULT_INVERT,
            shift: DEFAULT_SHIFT,
        }
    }
}

static PROPERTIES: [Property; 2] = [
    Property::Boolean(
        "invert",
        "Invert",
        "Invert grayscale output",
        DEFAULT_INVERT,
        PropertyMutability::ReadWrite,
    ),
    Property::UInt(
        "shift",
        "Shift",
        "Shift grayscale output (wrapping around)",
        (0, 255),
        DEFAULT_SHIFT,
        PropertyMutability::ReadWrite,
    ),
];

struct Rgb2Gray {
    cat: gst::DebugCategory,
    settings: Mutex<Settings>,
    state: Mutex<Option<State>>,
}

impl Rgb2Gray {
    fn new(_transform: &BaseTransform) -> Box<BaseTransformImpl<BaseTransform>> {
        Box::new(Self {
            cat: gst::DebugCategory::new(
                "rsrgb2gray",
                gst::DebugColorFlags::empty(),
                "Rust RGB-GRAY converter",
            ),
            settings: Mutex::new(Default::default()),
            state: Mutex::new(None),
        })
    }
}

This should all be rather straightforward: we define a Settings struct that stores the two values, implement the Default trait for it, then define a two-element array with property metadata (names, description, ranges, default value, writability), and then store the default value of our Settings struct inside another Mutex inside the element struct.

In the next step we have to make use of these: we need to tell the GObject type system about the properties, and we need to implement functions that are called whenever a property value is set or read.

impl Rgb2Gray {
    fn class_init(klass: &mut BaseTransformClass) {
        [...]
        klass.install_properties(&PROPERTIES);
        [...]
    }
}

impl ObjectImpl<BaseTransform> for Rgb2Gray {
    fn set_property(&self, obj: &glib::Object, id: u32, value: &glib::Value) {
        let prop = &PROPERTIES[id as usize];
        let element = obj.clone().downcast::<BaseTransform>().unwrap();

        match *prop {
            Property::Boolean("invert", ..) => {
                let mut settings = self.settings.lock().unwrap();
                let invert = value.get().unwrap();
                gst_info!(
                    self.cat,
                    obj: &element,
                    "Changing invert from {} to {}",
                    settings.invert,
                    invert
                );
                settings.invert = invert;
            }
            Property::UInt("shift", ..) => {
                let mut settings = self.settings.lock().unwrap();
                let shift = value.get().unwrap();
                gst_info!(
                    self.cat,
                    obj: &element,
                    "Changing shift from {} to {}",
                    settings.shift,
                    shift
                );
                settings.shift = shift;
            }
            _ => unimplemented!(),
        }
    }

    fn get_property(&self, _obj: &glib::Object, id: u32) -> Result<glib::Value, ()> {
        let prop = &PROPERTIES[id as usize];

        match *prop {
            Property::Boolean("invert", ..) => {
                let settings = self.settings.lock().unwrap();
                Ok(settings.invert.to_value())
            }
            Property::UInt("shift", ..) => {
                let settings = self.settings.lock().unwrap();
                Ok(settings.shift.to_value())
            }
            _ => unimplemented!(),
        }
    }
}

Property values can be changed from any thread at any time, that’s why the Mutex is needed here to protect our struct. And we’re using a new mutex to be able to have it locked only for the shortest possible amount of time: we don’t want to keep it locked for the whole time of the transform function, otherwise applications trying to set/get values would block for up to one frame.

In the property setter/getter functions we are working with a glib::Value. This is a dynamically typed value type that can contain values of any type, together with the type information of the contained value. Here we’re using it to handle an unsigned integer (u32) and a boolean for our two properties. To know which property is currently being set or read, we get an identifier passed which is the index into our PROPERTIES array. We then simply match on the name of that property to decide which one was meant.
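As a tiny standalone illustration of that dynamic typing (hedged: in the glib crate version used here, get() appears to return an Option, while newer versions return a Result, so the exact types may differ):

fn value_roundtrip_sketch() {
    use glib::ToValue;

    // Wrap a statically typed u32 in a dynamically typed glib::Value...
    let v = 42u32.to_value();

    // ...and extract it again; the Value carries the type information needed
    // to check that we are asking for the right type.
    let got: Option<u32> = v.get();
    assert_eq!(got, Some(42));
}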

With this implemented, we can already compile everything, see the properties and their metadata in gst-inspect-1.0 and can also set them on gst-launch-1.0 like this

# Set invert to true and shift to 128
gst-launch-1.0 videotestsrc ! rsrgb2gray invert=true shift=128 ! videoconvert ! autovideosink

If we set GST_DEBUG=rsrgb2gray:6 in the environment before running that, we can also see the corresponding debug output when the values are changing. The only thing missing now is to actually make use of the property values for the processing. For this we add the following changes to bgrx_to_gray and the transform function

impl Rgb2Gray {
    #[inline]
    fn bgrx_to_gray(in_p: &[u8], shift: u8, invert: bool) -> u8 {
        [...]

        let gray = ((r * R_Y) + (g * G_Y) + (b * B_Y)) / 65536;
        let gray = (gray as u8).wrapping_add(shift);

        if invert {
            255 - gray
        } else {
            gray
        }
    }
}

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn transform(
        &self,
        element: &BaseTransform,
        inbuf: &gst::Buffer,
        outbuf: &mut gst::BufferRef,
    ) -> gst::FlowReturn {
        let settings = *self.settings.lock().unwrap();

        [...]

        let gray = Rgb2Gray::bgrx_to_gray(in_p, settings.shift as u8, settings.invert);

        [...]
    }
}

And that’s all. If you run the element in gst-launch-1.0 and change the values of the properties you should also see the corresponding changes in the video output.

Note that we always take a copy of the Settings struct at the beginning of the transform function. This ensures that we take the mutex for only the shortest possible amount of time and then have a local snapshot of the settings for each frame.

Also keep in mind that the usage of the property values in the bgrx_to_gray function is far from optimal. It means the addition of another condition to the calculation of each pixel, thus potentially slowing it down a lot. Ideally this condition would be moved outside the inner loops and the bgrx_to_gray function would be made generic over it. See for example this blog post about “branchless Rust” for ideas on how to do that; the actual implementation is left as an exercise for the reader.
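One possible direction, as a hedged standalone sketch (illustrative names and a simplified grayscale stand-in, not the element’s actual code): decide on invert once per line or frame and run a specialized inner loop, so the branch is no longer evaluated per pixel. A generic or fully branchless variant would follow the same idea while avoiding the duplicated loop body.

// Stand-in for bgrx_to_gray(): a simple average of B, G and R, illustrative only.
fn simple_gray(in_p: &[u8]) -> u8 {
    ((u32::from(in_p[0]) + u32::from(in_p[1]) + u32::from(in_p[2])) / 3) as u8
}

// Hoist the invert decision out of the per-pixel loop.
fn convert_line(in_line: &[u8], out_line: &mut [u8], invert: bool, shift: u8) {
    debug_assert_eq!(in_line.len() % 4, 0);

    if invert {
        for (in_p, out_p) in in_line.chunks(4).zip(out_line.iter_mut()) {
            *out_p = 255 - simple_gray(in_p).wrapping_add(shift);
        }
    } else {
        for (in_p, out_p) in in_line.chunks(4).zip(out_line.iter_mut()) {
            *out_p = simple_gray(in_p).wrapping_add(shift);
        }
    }
}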

What next?

I hope the code walkthrough above was useful to understand how to implement GStreamer plugins and elements in Rust. If you have any questions, feel free to ask them here in the comments.

The same approach also works for audio filters or anything that can be handled in some way with the API of the BaseTransform base class. You can find another filter, an audio echo filter, using the same approach here.

In the next blog post in this series I’ll show how to use another base class to implement another kind of element, but for the time being you can also check the GIT repository for various other element implementations.

Xubuntu: Xubuntu 17.10.1 Release

Fri, 12/01/2018 - 6:34pm

Following the recent testing of a respin to deal with the BIOS bug on some Lenovo machines, Xubuntu 17.10.1 has been released. Official download sources have been updated to point to this point release, but if you’re using a mirror, be sure you are downloading the 17.10.1 version.

No changes to applications are included; however, this release does include any updates made between the original release date and now.

Note: Even with this fix, you will want to update your system to make sure you get all security fixes since the ISO respin, including the one for Meltdown, addressed in USN-3523, which you can read more about here.

Xubuntu: Xubuntu 17.04 End Of Life

Fri, 12/01/2018 - 3:40pm

On Saturday 13th January 2018, Xubuntu 17.04 goes End of Life (EOL). For more information please see the Ubuntu 17.04 EOL Notice.

We strongly recommend upgrading to the current regular release, Xubuntu 17.10.1, as soon as practical. Alternatively you can download the current Xubuntu release and install fresh.

The 17.10.1 release recently saw testing across all flavors to address the BIOS bug found after its release in October 2017. Updated and bug-free ISO files are now available.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, December 2017

Fri, 12/01/2018 - 3:15pm

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, about 142 work hours have been dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change, staying at 183 hours per month. It would be nice if we could continue to find new sponsors, as the amount of work seems to be slowly growing too.

The security tracker currently lists 21 packages with a known CVE and the dla-needed.txt file 16 (we’re a bit behind in CVE triaging apparently). Both numbers show a significant drop compared to last month. Yet the number of DLAs released was not larger than usual (30); instead it looks like December brought us fewer new security vulnerabilities to handle, and at the same time we used this opportunity to handle lower-priority packages that had been kept on the side for multiple months.

Thanks to our sponsors

New sponsors are in bold (none this month).


Valorie Zimmerman: Seeding new ISOs the easy zsync way

Fri, 12/01/2018 - 9:39am
Kubuntu recently had to pull our 17.10 ISOs because of the so-called Lenovo bug. Now that this bug is fixed, the ISOs have been respun, so it's time to begin reseeding the torrents.

To speed up the process, I wanted to zsync to the original ISOs before getting the new torrent files. Simon kindly told me the easy way to do this: cd to the directory where the ISOs live, which in my case is

cd /media/valorie/Data/ISOs/

Next: 

cp kubuntu-17.10{,.1}-desktop-amd64.iso && zsync http://cdimage.ubuntu.com/kubuntu/releases/17.10.1/release/kubuntu-17.10.1-desktop-amd64.iso.zsync

Where did I get the link to zsync? At http://cdimage.ubuntu.com/kubuntu/releases/17.10.1/release/. All ISOs are found at cdimage, just as all torrents are found at http://torrent.ubuntu.com:6969/.

The final step is to download those torrent files (pro-tip: use control F) and tell Ktorrent to seed them all! I seed all the supported Ubuntu releases. The more people do this, the faster torrents are for everyone. If you have the bandwidth, go for it!

PS: you don't have to copy all the cdimage URLs. Just up-arrow and then back-arrow through your previous command once the sync has finished, edit it, hit return and you are back in business.

Lubuntu Blog: Lubuntu 17.10.1 (Artful Aardvark) released!

Pre, 12/01/2018 - 7:29pd
Lubuntu 17.10.1 has been released to fix a major problem affecting many Lenovo laptops that causes the computer to have BIOS problems after installing. You can find more details about this problem here. Please note that the Meltdown and Spectre vulnerabilities have not been fixed in this ISO, so we advise that if you install […]

Valorie Zimmerman: Beginning 2018

Dje, 07/01/2018 - 11:55md
2017 began with the once-in-a-lifetime trip to India to speak at KDE.Conf.in. That was amazing enough, but the trip to a local village, and visiting the Kaziranga National Park were too amazing for words.

The literal highlights of last year were the eclipse and the trip to see it with my son Thomas, Christian and Hailey's wedding, and the trip to participate with my daughter Anne, while also spending some time with son Paul, his wife Tara and my grandson Oscar. This summer I was also able to spend a few days in Brooklyn with Colin and Rory on my way to Akademy. So 2017 was definitely worth living through!

This is reality, and we can only see it during a total eclipse
2018 began wonderfully at the cabin. I'm looking forward to 2018 for a lot of reasons.
First, I'm so happy that Kubuntu will again be distributing 17.10 images next week. Right now we're in testing in preparation for that; pop into IRC if you'd like to help with the testing (#kubuntu-devel). https://kubuntu.org/getkubuntu/ next week!

Lubuntu has a nice write-up of the issues and testing procedures: http://lubuntu.me/lubuntu-17-04-eol-and-lubuntu-17-10-respins/

The other serious problems with Meltdown and Spectre are being handled by the Ubuntu kernel team, and those updates will be rolled out as soon as testing is complete. Scary times when dealing with such a fundamental flaw in the design of our computers!

Second, in KDE we're beginning to ramp up for Google Summer of Code. Mentors are preparing the ideas page on the wiki, and Bhushan has started the organization application process. If you want to mentor or help us administer the program this year, now is the time to get in gear!

At Renton PFLAG we had our first support meeting of the year, and it was small but awesome! Our little group has had some tough times in the past, but I see us growing and thriving in this next year.

Finally, my local genealogy society is doing some great things, and I'm so happy to be involved and helping out again. My own searching is going well too. As I find more supporting evidence about the lives of my ancestors and their families, I feel my own place in the cosmos more deeply and my connection to history more strongly. I wish I could link to our website, but Rootsweb is down and until we get our new website up…

Finally, today I saw a news article about a school in India far outside the traditional education model. Called the Tamarind Tree School, it uses an open education model to offer collaborative, innovative learning solutions to rural students. They use free and open source software, and even hardware so that people can build their own devices. Read more about this: https://opensource.com/article/18/1/tamarind-tree-school-india.

Eric Hammond: Streaming AWS DeepLens Video Over SSH

Sht, 30/12/2017 - 6:00pd

Instead of connecting to the DeepLens with a micro-HDMI cable, monitor, keyboard, and mouse

Credit for this excellent idea goes to Ernie Kim. Thank you!

Instructions without ssh

The standard AWS DeepLens instructions recommend connecting the device to a monitor, keyboard, and mouse. The instructions provide information on how to view the video streams in this mode:

If you are connected to the DeepLens using a monitor, you can view the unprocessed device stream (raw camera video before being processed by the model) using this command on the DeepLens device:

mplayer --demuxer /opt/awscam/out/ch1_out.h264

If you are connected to the DeepLens using a monitor, you can view the project stream (video after being processed by the model on the DeepLens) using this command on the DeepLens device:

mplayer --demuxer lavf -lavfdopts format=mjpeg:probesize=32 /tmp/ssd_results.mjpeg

Instructions with ssh

You can also view the DeepLens video streams over ssh, without having a monitor connected to the device. To make this possible, you need to enable ssh access on your DeepLens. This is available as a checkbox option in the initial setup of the device. I’m working to get instructions on how to enable ssh access afterwards and will update once this is available.

To view the video streams over ssh, we take the same mplayer command options above and the same source stream files, but send the stream over ssh, and feed the result to the stdin of an mplayer process running on the local system, presumably a laptop.

All of the following commands are run on your local laptop (not on the DeepLens device).

You need to know the IP address of your DeepLens device on your local network:

ip_address=[IP ADDRESS OF DeepLens]

You will need to install the mplayer software on your local laptop. This varies with your OS, but for Ubuntu:

sudo apt-get install mplayer

You can view the unprocessed device stream (raw camera video before being processed by the model) over ssh using the command:

ssh aws_cam@$ip_address cat /opt/awscam/out/ch1_out.h264 | mplayer --demuxer -

You can view the project stream (video after being processed by the model on the DeepLens) over ssh with the command:

ssh aws_cam@$ip_address cat /tmp/ssd_results.mjpeg | mplayer --demuxer lavf -lavfdopts format=mjpeg:probesize=32 -

Benefits of using ssh to view the video streams include:

  • You don’t need to have an extra monitor, keyboard, mouse, and micro-HDMI adapter cable.

  • You don’t need to locate the DeepLens close to a monitor, keyboard, mouse.

  • You don’t need to be physically close to the DeepLens when you are viewing the video streams.

For those of us sitting on a couch with a laptop, a DeepLens across the room, and no extra micro-HDMI cable, this is great news!

Bonus

To protect the security of your sensitive DeepLens video feeds:

  • Use a long, randomly generated password for ssh on your DeepLens, even if you are only using it inside a private network.

  • I would recommend setting up .ssh/authorized_keys on the DeepLens so you can ssh in with your personal ssh key, test it, then disable password access for ssh on the DeepLens device. Don’t forget the password, because it is still needed for sudo.

  • Enable automatic updates on your DeepLens so that Ubuntu security patches are applied quickly. This is available as an option in the initial setup, and should be possible to do afterwards using the standard Ubuntu unattended-upgrades package.

Unrelated side note: It’s kind of nice having the DeepLens run a standard Ubuntu LTS release. Excellent choice!

Original article and comments: https://alestic.com/2017/12/aws-deeplens-video-stream-ssh/

Clive Johnston: Love KDE software? Show your love by donating today

Pre, 15/12/2017 - 11:16md

It is the season of giving, and if you use KDE software, donate to KDE. Software such as Krita, Kdenlive, KDE Connect, Kontact, digiKam, the Plasma desktop and many, many more are all projects under the KDE umbrella.

KDE have launched a fund drive running until the end of 2017.  If you want to help make KDE software better, please consider donating.  For more information on what KDE will do with any money you donate, please go to https://www.kde.org/fundraisers/yearend2017/

Matthew Helmke: Learn Java the Easy Way

Pre, 15/12/2017 - 4:53md

This is an enjoyable introduction to programming in Java by an author I have enjoyed in the past.

Learn Java the Easy Way: A Hands-On Introduction to Programming was written by Dr. Bryson Payne. I previously reviewed his book Teach Your Kids to Code, which is Python-based.

Learn Java the Easy Way covers all the topics one would expect, from development IDEs (it focuses heavily on Eclipse and Android Studio, which are both reasonable, solid choices) to debugging. In between, the reader receives clear explanations of how to perform calculations, manipulate text strings, use conditions and loops, and create functions, along with solid and easy-to-understand definitions of important concepts like classes, objects, and methods.

Java is taught systematically, starting with the simple and moving to the complex. We first create a simple command-line game, then we create a GUI for it, then we make it into an Android app, then we add menus and preference options, and so on. Along the way, new games and enhancement options are explored, some in detail and some in end-of-chapter exercises designed to give more confident or advancing students ideas for pushing themselves further than the book’s content. I like that.

Side note: I was pleasantly amused to discover that the first program in the book is the same as one that I originally wrote in 1986 on a first-generation Casio graphing calculator, so I would have something to kill time when class lectures got boring.

The pace of the book is good. Just as I began to feel done with a topic, the author moved to something new. I never felt like details were skipped and I also never felt like we were bogged down with too much detail, beyond what is needed for the current lesson. The author has taught computer science and programming for nearly 20 years, and it shows.

Bottom line: if you want to learn Java, this is a good introduction that is clearly written and will give you a nice foundation upon which you can build.

Disclosure: I was given my copy of this book by the publisher as a review copy. See also: Are All Book Reviews Positive?

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, November 2017

Pre, 15/12/2017 - 3:15md

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In November, about 144 work hours have been dispatched among 12 paid contributors. Their reports are available:

  • Antoine Beaupré did 8.5h (out of 13h allocated + 3.75h remaining, thus keeping 8.25h for December).
  • Ben Hutchings did 17 hours (out of 13h allocated + 4 extra hours).
  • Brian May did 10 hours.
  • Chris Lamb did 13 hours.
  • Emilio Pozuelo Monfort did 14.5 hours (out of 13 hours allocated + 15.25 hours remaining, thus keeping 13.75 hours for December).
  • Guido Günther did 14 hours (out of 11h allocated + 5.5 extra hours, thus keeping 2.5h for December).
  • Hugo Lefeuvre did 13h.
  • Lucas Kanashiro did not request any work hours, but he had 3 hours left. He has not published any report yet.
  • Markus Koschany did 14.75 hours (out of 13 allocated + 1.75 extra hours).
  • Ola Lundqvist did 7h.
  • Raphaël Hertzog did 10 hours (out of 12h allocated, thus keeping 2 extra hours for December).
  • Roberto C. Sanchez did 32.5 hours (out of 13 hours allocated + 24.50 hours remaining, thus keeping 5 extra hours for November).
  • Thorsten Alteholz did 13 hours.
About external support partners

You might notice that there is sometimes a significant gap between the number of distributed work hours each month and the number of sponsored hours reported in the “Evolution of the situation” section. This is mainly due to some work hours that are “externalized” (but also because some sponsors pay too late). For instance, since we don’t have Xen experts among our Debian contributors, we rely on credativ to do the Xen security work for us. And when we get an invoice, we convert that to a number of hours that we drop from the available hours in the following month. And in the last months, Xen has been a significant drain on our resources: 35 work hours made in September (invoiced in early October and taken off from the November hours detailed above), 6.25 hours in October, 21.5 hours in November. We also have a similar partnership with Diego Biurrun to help us maintain libav, but here the number of hours tends to be very low.

In both cases, the work done by those paid partners is made freely available for others under the original license: credativ maintains a Xen 4.1 branch on GitHub, Diego commits his work on the release/0.8 branch in the official git repository.

Evolution of the situation

The number of sponsored hours remained unchanged at 183 hours per month. It would be nice if we could continue to find new sponsors, as the amount of work seems to be slowly growing too.

The security tracker currently lists 55 packages with a known CVE and the dla-needed.txt file 35 (we’re a bit behind in CVE triaging apparently).

Thanks to our sponsors

New sponsors are in bold.


Dimitri John Ledkov: What does FCC Net Neutrality repeal mean to you?

Pre, 15/12/2017 - 10:09pd
Sorry, the web page you have requested is not available through your internet connection.

We have received an order from the Courts requiring us to prevent access to this site in order to help protect against Lex Julia Majestatis infridgement.

If you are a home broadband customer, for more information on why certain web pages are blocked, please click here.

If you are a business customer, or are trying to view this page through your company's internet connection, please click here.

Sebastian Dröge: A GStreamer Plugin like the Rec Button on your Tape Recorder – A Multi-Threaded Plugin written in Rust

Enj, 14/12/2017 - 11:41md

As Rust is known for “Fearless Concurrency”, that is, being able to write concurrent, multi-threaded code without fear, it seemed like a good fit for a GStreamer element that we had to write at Centricular.

Previous experience with Rust for writing (mostly) single-threaded GStreamer elements and applications (also multi-threaded) was already quite successful and promising. And in the end, this new element was also a pleasure to write and probably faster than doing the equivalent in C. For the impatient, the code, tests and a GTK+ example application (written with the great Rust GTK bindings, but the GStreamer element is also usable from C or any other language) can be found here.

What does it do?

The main idea of the element is that it basically works like the rec button on your tape recorder. There is a single boolean property called “record”, and whenever it is set to true it will pass through data and whenever it is set to false it will drop all data. But unlike the existing valve element, it

  • Outputs a contiguous timeline without gaps, i.e. there are no gaps in the output when not recording. Similar to the recording you get on a tape recorder, you don’t have 10s of silence if you didn’t record for 10s.
  • Handles and synchronizes multiple streams at once. When recording e.g. a video stream and an audio stream, every recorded segment starts and stops with both streams at the same time.
  • Is key-frame aware. If you record a compressed video stream, each recorded segment starts at a keyframe and ends right before the next keyframe to make it most likely that all frames can be successfully decoded.

The multi-threading aspect here comes from the fact that in GStreamer each stream usually has its own thread, so in this case the video stream and the audio stream(s) would come from different threads but would have to be synchronized between each other.

The GTK+ example application for the plugin plays a video with the current playback time and a beep every second, and allows recording this as an MP4 file in the current directory.
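For a rough idea of how an application drives the element, here is a minimal sketch using the gstreamer crate. The element name togglerecord and the exact binding function names are assumptions (they match the repository and recent gstreamer-rs releases, but minor details vary between versions):

use gstreamer as gst;
use gst::prelude::*;

fn main() {
    gst::init().unwrap();

    // A trivial pipeline that runs a live test source through the recording element.
    let pipeline = gst::parse_launch("videotestsrc is-live=true ! togglerecord name=rec ! fakesink")
        .unwrap()
        .downcast::<gst::Pipeline>()
        .unwrap();
    let rec = pipeline.by_name("rec").unwrap();

    pipeline.set_state(gst::State::Playing).unwrap();

    // Start recording, keep it running for a while, then stop again. The element
    // takes care of producing a gapless, synchronized output for all its streams.
    rec.set_property("record", true);
    std::thread::sleep(std::time::Duration::from_secs(5));
    rec.set_property("record", false);

    pipeline.set_state(gst::State::Null).unwrap();
}

In the actual GTK+ example the same property toggle would presumably be wired to a record button handler rather than a sleep.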

How did it go?

This new element was again based on the Rust GStreamer bindings and the infrastructure that I was writing over the last year or two for writing GStreamer plugins in Rust.

As written above, it generally went all fine and was quite a pleasure but there were a few things that seem noteworthy. But first of all, writing this in Rust was much more convenient and fun than writing it in C would’ve been, and I’ve written enough similar code in C before. It would’ve taken quite a bit longer, I would’ve had to debug more problems in the new code during development (there were actually surprisingly few things going wrong during development, I expected more!), and probably would’ve written less exhaustive tests because writing tests in C is just so inconvenient.

Rust does not prevent deadlocks

While this should be clear, and was also clear to myself before, this seems like it might need some reiteration. Safe Rust prevents data races, but not all possible bugs that multi-threaded programs can have. Rust is not magic, only a tool that helps you prevent some classes of potential bugs.

For example, you can’t just stop thinking about lock order if multiple mutexes are involved, or that you can carelessly use condition variables without making sure that your conditions actually make sense and accessed atomically. As a wise man once said, “the safest program is the one that does not run at all”, and a deadlocking program is very close to that.

The part about condition variables might be something that can be improved in Rust. Without this, you can easily end up in situations where you wait forever or your conditions are actually inconsistent. Currently Rust’s condition variables only require a mutex to be passed to the functions for waiting for the condition to be notified, but it would probably also make sense to require passing the same mutex to the constructor and notify functions to make it absolutely clear that you need to ensure that your conditions are always accessed/modified while this specific mutex is locked. Otherwise you might end up in debugging hell.
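For reference, the pattern that currently keeps this safe with std::sync::Condvar looks like the following generic sketch (not code from the element): the condition lives inside the mutex that is handed to wait(), is only ever read or modified with that mutex held, and is re-checked in a loop to cope with spurious wakeups.

use std::sync::{Arc, Condvar, Mutex};
use std::thread;

fn main() {
    // The boolean condition is stored inside the same Mutex that is used with
    // the Condvar, so it can only be read or modified while that mutex is held.
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let pair2 = Arc::clone(&pair);

    let worker = thread::spawn(move || {
        let (lock, cvar) = &*pair2;
        let mut done = lock.lock().unwrap();
        *done = true;
        cvar.notify_one();
    });

    let (lock, cvar) = &*pair;
    let mut done = lock.lock().unwrap();
    // Re-check the condition under the lock after every wakeup.
    while !*done {
        done = cvar.wait(done).unwrap();
    }

    worker.join().unwrap();
}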

Fortunately, during development of the plugin I only ran into a simple deadlock, caused by accidentally keeping a mutex locked for too long and then running into conflict with another one. This is probably an easy trap to fall into, as the most common way of unlocking a mutex is to let the mutex lock guard fall out of scope. That makes it impossible to forget to unlock the mutex, but it also makes it less explicit when the mutex is unlocked, and sometimes explicit unlocking by manually dropping the lock guard is still necessary.
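A tiny illustration of that last point (a generic sketch, not code from the plugin): dropping the guard explicitly releases the first lock before the second one is taken, instead of silently holding it until the end of the scope.

use std::sync::Mutex;

fn update(state: &Mutex<Vec<u8>>, counter: &Mutex<u64>) {
    let mut guard = state.lock().unwrap();
    guard.push(1);

    // Without this explicit drop, `state` would stay locked until the end of
    // the function, which is exactly the "kept locked for too long" situation
    // described above.
    drop(guard);

    let mut count = counter.lock().unwrap();
    *count += 1;
}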

So in summary, while a big group of potential problems with multi-threaded programs are prevented by Rust, you still have to be careful to not run into any of the many others. Especially if you use lower-level constructs like condition variables, and not just e.g. channels. Everything is however far more convenient than doing the same in C, and with more support by the compiler, so I definitely prefer writing such code in Rust over doing the same in C.

Missing API

As usual, for the first dozen projects using a new library or new bindings to an existing library, you’ll notice some missing bits and pieces. That a relatively core part of GStreamer, the GstRegistry API, was missing was surprising nonetheless. True, you usually don’t use it directly and I only need it here for loading the new plugin from a non-standard location, but it was still surprising. Let’s hope this was the biggest oversight. If you look at the issues page on GitHub, you’ll find a few other things that are still missing though. But nobody needed them yet, so it’s probably fine for the time being.

Another part of missing API that I noticed during development was that many manual (i.e. not auto-generated) bindings didn’t have the Debug trait implemented, or not in a very useful way. This is solved now, as otherwise I wouldn’t have been able to properly log what is happening inside the element to allow easier debugging later if something goes wrong.

Apart from that there were also various other smaller things that were missing, or bugs (see below) that I found in the bindings while going through all these. But those seem not very noteworthy – check the commit logs if you’re interested.

Bugs, bugs, bugs

I also found a couple of bugs in the bindings. They fall broadly into two categories:

  • Annotation bugs in GStreamer. The auto-generated parts of the bindings are generated from an XML description of the API, that is generated from the C headers and code and annotations in there. There were a couple of annotations that were wrong (or missing) in GStreamer, which then caused memory leaks in my case. Such mistakes could also easily cause memory-safety issues though. The annotations are fixed now, which will also benefit all the other language bindings for GStreamer (and I’m not sure why nobody noticed the memory leaks there before me).
  • Bugs in the manually written parts of the bindings. Similarly to the above, there was one memory leak and another case where a function could’ve returned NULL but did not have this case covered on the Rust-side by returning an Option<_>.

Generally I was quite happy with the lack of bugs though; the bindings are really ready for production at this point. And especially, all the bugs that I found are things that are unfortunately “normal” and common when writing code in C, while Rust prevents exactly these classes of bugs. As such, they have to be solved only once at the bindings layer, and then you’re free of them: you don’t have to spend any brain capacity on their existence anymore and can use your brain to solve the actual task at hand.

Inconvenient API

Similar to the missing API, whenever using some rather new API you will find things that are inconvenient and could ideally be done better. The biggest case here was the GstSegment API. A segment represents a (potentially open-ended) playback range and contains all the information to convert timestamps to the different time bases used in GStreamer. I’m not going to get into details here, best check the documentation for them.

A segment can be in different formats, e.g. in time or bytes. In the C API this is handled by storing the format inside the segment and requiring you to pass the format together with the value to every function call; internally there are then some checks that let the function fail if there is a format mismatch. In the previous version of the Rust segment API this was done the same way, which caused lots of unwrap() calls in this element.

But in Rust we can do better, and the new API for the segment now encodes the format in the type system (i.e. there is a Segment<Time>) and only values with the correct type (e.g. ClockTime) can be passed to the corresponding functions of the segment. In addition there is a type for a generic segment (which still has all the runtime checks) and functions to “cast” between the two.

Overall this gives more type-safety (the compiler already checks that you don’t mix calculations between seconds and bytes) and makes the API more convenient to use, as various error conditions simply can’t happen and thus don’t have to be handled. Or, as often happens in C, they are simply ignored and not handled, potentially leaving a trap that can cause hard-to-debug bugs at a later time.
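To make the idea more tangible, here is what encoding the format in the type system looks like in a stripped-down sketch with a phantom type parameter; it is not the bindings’ actual API, just the shape of the approach.

use std::marker::PhantomData;

// Marker types for the format; they never exist at runtime.
struct Time;
struct Bytes;

struct Segment<F> {
    start: u64,
    stop: u64,
    _format: PhantomData<F>,
}

impl<F> Segment<F> {
    fn new(start: u64, stop: u64) -> Self {
        Segment { start, stop, _format: PhantomData }
    }
}

impl Segment<Time> {
    // Only a time-formatted segment offers this; a Segment<Bytes> simply
    // doesn't have the method, so a format mismatch cannot compile.
    fn to_running_time(&self, position: u64) -> Option<u64> {
        if position >= self.start && position <= self.stop {
            Some(position - self.start)
        } else {
            None
        }
    }
}

fn main() {
    let time_segment: Segment<Time> = Segment::new(0, 10_000);
    assert_eq!(time_segment.to_running_time(5_000), Some(5_000));

    let byte_segment: Segment<Bytes> = Segment::new(0, 4_096);
    // byte_segment.to_running_time(5_000); // does not compile: wrong format
    let _ = byte_segment;
}

A generic, runtime-checked segment can still be offered alongside this, with explicit conversions between the two, which is roughly the split the new API makes.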

That Rust requires all errors to be handled makes it very obvious how many potential error cases the average C code out there is not handling at all, and also shows that a more expressive language than C can easily prevent many of these error cases at compile-time already.

Simos Xenitellis: multipass, management of virtual machines running Ubuntu

Enj, 14/12/2017 - 9:44md

If you want to run a machine container, you would use LXD. But if you want to run a virtual machine, you would use multipass. multipass is so new that it is still in beta. The name is not yet known to Google, and you get many weird results when you search for it.

You can set up both containers and virtual machines manually without many additional tools. However, if you want to do real work, it helps to have a system that supports you. Let’s see what multipass can do for us.

Installing the multipass snap

multipass is available as a snap package. You need a Linux distribution, and the Linux distribution has to have snap support.

Check the availability of multipass as a snap,

$ snap info multipass
name:      multipass
summary:   Ubuntu at your fingertips
publisher: saviq
description: |
  Multipass gives you Ubuntu VMs in seconds. Just run `multipass.ubuntu create`
  and it'll do all the setup for you.
snap-id: mA11087v6dR3IEcQLgICQVjuvhUUBUKM
channels:
  stable:    –
  candidate: –
  beta:      2017.2.2            (37) 44MB classic
  edge:      2017.2.2-4-g691449f (38) 44MB classic

There is a snap available, and it is currently in the beta channel. It is a classic snap, which means that it has fewer restrictions than your typical snap.

Therefore, install it as follows,

$ sudo snap install multipass --beta --classic
multipass (beta) 2017.2.2 from 'saviq' installed

Trying out multipass

Now what? Let’s run it.

$ multipass
Usage: /snap/multipass/37/bin/multipass [options] <command>
Create, control and connect to Ubuntu instances.

This is a command line utility for multipass, a service that manages Ubuntu instances.

Options:
  -h, --help     Display this help
  -v, --verbose  Increase logging verbosity, repeat up to three times for more detail

Available commands:
  connect   Connect to a running instance
  delete    Delete instances
  exec      Run a command on an instance
  find      Display available images to create instances from
  help      Display help about a command
  info      Display information about instances
  launch    Create and start an Ubuntu instance
  list      List all available instances
  mount     Mount a local directory in the instance
  purge     Purge all deleted instances permanently
  recover   Recover deleted instances
  start     Start instances
  stop      Stop running instances
  umount    Unmount a directory from an instance
  version   Show version details
Exit 1

Just like with LXD, launch should do something. Let’s try it and see what parameters it takes.

$ multipass launch
Launched: talented-pointer

Oh, no. Just like with LXD, if you do not supply a name for the container/virtual machine, they pick one for you AND proceed to create the container/virtual machine. So, here we are with a virtual machine creatively named talented-pointer.

How do we get some more info about this virtual machine? What defaults were selected?

$ multipass info talented-pointer
Name:         talented-pointer
State:        RUNNING
IPv4:         10.122.122.2
Release:      Ubuntu 16.04.3 LTS
Image hash:   a381cee0aae4 (Ubuntu 16.04 LTS)
Load:         0.08 0.12 0.07
Disk usage:   1014M out of 2.1G
Memory usage: 37M out of 992M

The default image is Ubuntu 16.04.3, on a 2GB disk and with 1GB RAM.

How should we have created the virtual machine instead?

$ multipass launch --help
Usage: /snap/multipass/37/bin/multipass launch [options] [<remote:>]<image>
Create and start a new instance.

Options:
  -h, --help           Display this help
  -v, --verbose        Increase logging verbosity, repeat up to three times for more detail
  -c, --cpus <cpus>    Number of CPUs to allocate
  -d, --disk <disk>    Disk space to allocate in bytes, or with K, M, G suffix
  -m, --mem <mem>      Amount of memory to allocate in bytes, or with K, M, G suffix
  -n, --name <name>    Name for the instance
  --cloud-init <file>  Path to a user-data cloud-init configuration

Arguments:
  image                Ubuntu image to start

Therefore, the default command to launch a new instance would have looked like

$ multipass launch --disk 2G --mem 1G -n talented-pointer

We still do not know how to specify the image name, whether it will be Ubuntu 16.04 or something else. saviq replied, and now we know how to get the list of available images for multipass.

$ multipass find
multipass launch …          Starts an instance of    Image version
----------------------------------------------------------
14.04                       Ubuntu 14.04 LTS         20171208
  (or: t, trusty)
16.04                       Ubuntu 16.04 LTS         20171208
  (or: default, lts, x, xenial)
17.04                       Ubuntu 17.04             20171208
  (or: z, zesty)
17.10                       Ubuntu 17.10             20171213
  (or: a, artful)
daily:18.04                 Ubuntu 18.04 LTS         20171213
  (or: b, bionic, devel)

multipass merges the CLI semantics of both the lxc and the snap clients :-).

That is, there are five images currently available and each has several handy aliases. And currently, the default and the lts point to Ubuntu 16.04. In spring 2018, they will point to Ubuntu 18.04 when it gets released.

Here is the list of aliases in an inverted table.

Ubuntu 14.04: 14.04, t, trusty

Ubuntu 16.04: 16.04, default, lts, x, xenial (at the end of April 2018, it will lose the default and lts aliases)

Ubuntu 17.04: 17.04, z, zesty

Ubuntu 17.10: 17.10, a, artful

Ubuntu 18.04: daily:18.04, daily:b, daily:bionic, daily:devel (at the end of April 2018, it will gain the default and lts aliases)

Therefore, if we want to launch an 8G-disk/2GB-RAM virtual machine named myserver with, let’s say, the current LTS Ubuntu, we would explicitly run

$ multipass launch --disk 8G --mem 2G -n myserver lts

Looking into the lifecycle of a virtual machine

When you first launch a virtual machine for a specific version of Ubuntu, it will download from the Internet the image of the virtual machine, and then cache it locally for any future virtual machines. This happened earlier when we launched talented-pointer. Let’s view it.

$ multipass list
Name                    State             IPv4             Release
talented-pointer        RUNNING           10.122.122.2     Ubuntu 16.04 LTS

Now delete it, then purge it.

$ multipass delete talented-pointer
$ multipass list
Name                    State             IPv4             Release
talented-pointer        DELETED           --
$ multipass purge
$ multipass list
No instances found.

That is, we have a second chance when we delete a virtual machine. A deleted virtual machine can be recovered with multipass recover.

Let’s create a new virtual machine and time it.

$ time multipass launch -n myVM default
Launched: myVM
Elapsed time   : 0m16.942s
User mode      : 0m0.008s
System mode    : 0m0.016s
CPU percentage : 0.14

It took about 17 seconds for a virtual machine. In contrast, an LXD container takes significantly less:

$ time lxc launch ubuntu:x mycontainer
Creating mycontainer
Starting mycontainer
Elapsed time   : 0m1.943s
User mode      : 0m0.008s
System mode    : 0m0.024s
CPU percentage : 1.64

We can stop and start a virtual machine with multipass.

$ multipass list
Name                    State             IPv4             Release
myVM                    RUNNING           10.122.122.2     Ubuntu 16.04 LTS
$ multipass stop myVM
$ multipass list
Name                    State             IPv4             Release
myVM                    STOPPED           --               Ubuntu 16.04 LTS
$ multipass start
Name argument or --all is required
Exit 1
$ time multipass start --all
Elapsed time   : 0m11.109s
User mode      : 0m0.008s
System mode    : 0m0.012s
CPU percentage : 0.18

We can start and stop virtual machines, and if we do not want to specify a name, we can use --all (to perform the task on all of them). Here it took 11 seconds to restart the virtual machine. The time it takes to start a virtual machine is somewhat variable, and on my system it is in the tens of seconds. For LXD containers, it is about two seconds or less.

Running commands in a VM with Multipass

From what we saw earlier from multipass --help, there are two actions, connect and exec.

Here is connect to a VM.

$ multipass connect myVM
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-103-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

5 packages can be updated.
3 updates are security updates.

Last login: Thu Dec 14 20:19:45 2017 from 10.122.122.1
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@myVM:~$

Therefore, with connect, we get a shell directly into the virtual machine! Because this is a virtual machine, it booted its own Linux kernel, Linux 4.4.0, in parallel with the one I use on my Ubuntu system. There are 5 packages that can be updated, and 3 of them are security updates. Nowadays in Ubuntu, any pending security updates are installed automatically by default thanks to the unattended-upgrades package and its default configuration; they will be applied sometime within the day, and the default configuration handles the security updates only.

We view the available updates: five in total, of which three are security updates.

ubuntu@myVM:~$ sudo apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Hit:2 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:3 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:4 http://security.ubuntu.com/ubuntu xenial-security/main Sources [104 kB]
Get:5 http://archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB]
Get:6 http://security.ubuntu.com/ubuntu xenial-security/universe Sources [48.9 kB]
Get:7 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [408 kB]
Get:8 http://security.ubuntu.com/ubuntu xenial-security/main Translation-en [179 kB]
Get:9 http://security.ubuntu.com/ubuntu xenial-security/universe Translation-en [98.9 kB]
Fetched 1,145 kB in 0s (1,181 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
5 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@myVM:~$ apt list --upgradeable
Listing... Done
cloud-init/xenial-updates 17.1-46-g7acc9e68-0ubuntu1~16.04.1 all [upgradable from: 17.1-27-geb292c18-0ubuntu1~16.04.1]
grub-legacy-ec2/xenial-updates 17.1-46-g7acc9e68-0ubuntu1~16.04.1 all [upgradable from: 17.1-27-geb292c18-0ubuntu1~16.04.1]
libssl1.0.0/xenial-updates,xenial-security 1.0.2g-1ubuntu4.10 amd64 [upgradable from: 1.0.2g-1ubuntu4.9]
libxml2/xenial-updates,xenial-security 2.9.3+dfsg1-1ubuntu0.5 amd64 [upgradable from: 2.9.3+dfsg1-1ubuntu0.4]
openssl/xenial-updates,xenial-security 1.0.2g-1ubuntu4.10 amd64 [upgradable from: 1.0.2g-1ubuntu4.9]
ubuntu@myVM:~$

Let’s update them all and be done with it.

ubuntu@myVM:~$ sudo apt upgrade
Reading package lists... Done
...
ubuntu@myVM:~$

Can we reboot the virtual machine with the shutdown command?

ubuntu@myVM:~$ sudo shutdown -r now
$ multipass connect myVM
terminate called after throwing an instance of 'std::runtime_error'
  what():  ssh: Connection refused
Aborted (core dumped)
Exit 134
$ multipass connect myVM
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-104-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

Last login: Thu Dec 14 20:40:10 2017 from 10.122.122.1
ubuntu@myVM:~$ exit

Yes, we can. It takes a few seconds for the virtual machine to boot again. When we try to connect too early, we get an error. We try again and get connected.

There is the exec action as well. Let’s see how it works.

$ multipass exec myVM pwd
/home/ubuntu
$ multipass exec myVM id
uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu),4(adm),20(dialout),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),109(netdev),110(lxd)

We specify the VM name, then the command to run. The default user is the ubuntu user (non-root, can sudo without passwords). In contrast, with LXD the default user is root.

Let’s try something else, uname -a.

$ multipass exec myVM uname -a
Unknown option 'a'.
Exit 1

It is a common command-line issue: the -a parameter gets parsed as an option to multipass itself instead of being passed unprocessed to the command running in the virtual machine. The solution is to add -- at the point where we want option processing to stop, like in

$ multipass exec myVM -- uname -a
Linux myVM 4.4.0-104-generic #127-Ubuntu SMP Mon Dec 11 12:16:42 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

If you try to exec commands a few times, you may encounter a case where the command hangs. It does not return, and you cannot Ctrl-C it either. It’s a bug, and the workaround is to open another shell and run multipass stop myVM and then multipass start myVM.

Conclusion

It is cool to have multipass complementing LXD. Both tools make it easy to create virtual machines and machine containers. There are some bugs and usability issues that can be reported at the Issues page. Overall, they make running virtual machines and machine containers very usable and easy.

 

Simos Xenitellis (https://blog.simos.info/)
