
Julian Andres Klode: New software: sicherboot

Planet UBUNTU - Wed, 07/09/2016 - 6:09pm

Today, I wrote sicherboot, a tool to integrate systemd-boot into a Linux distribution in an entirely new way: with secure boot support. To be precise: the use case here is to only run trusted code, which then unlocks an otherwise fully encrypted disk, as in my setup.

If you want, sicherboot automatically creates db, KEK, and PK keys, and puts the public keys on your EFI System Partition (ESP) together with the KeyTool tool, so you can enroll the keys in UEFI. You can of course also use other keys; you just need to drop a db.crt and a db.key file into /etc/sicherboot/keys. It would be nice if sicherboot could enroll the keys directly from Linux, but there seems to be a bug in efitools preventing that at the moment. For some background: the Platform Key (PK) signs the Key Exchange Key (KEK), which signs the database key (db). The db key is the one that signs binaries.

sicherboot also handles installing new kernels to your ESP. For this, it combines the kernel with its initramfs into one executable UEFI image, and then signs that. Combined with a fully encrypted disk setup, this ensures that only you can run UEFI binaries on the system, and attackers cannot boot any other operating system or modify parts of your operating system (except for, well, any block of your encrypted data, as XTS does not authenticate the data; but then you do have to know which blocks are which, which is somewhat hard).

sicherboot integrates with various parts of Debian: It can work together with dracut via an evil hack (diverting dracut’s kernel/postinst.d config file, so we can run sicherboot after running dracut), it should support initramfs-tools (untested), and it also integrates with systemd upgrades via triggers on the /usr/lib/systemd/boot/efi directory.

Currently sicherboot only supports Debian-style setups with /boot/vmlinuz-<version> and /boot/initrd.img-<version> files; it cannot automatically create combined boot images from, or install boot loader entries for, other naming schemes yet. Fixing that should be trivial though, with a configuration setting and some eval magic (or string substitution).

Future planned features include: (1) support for multiple ESP partitions, so you can have a fallback partition on a different drive (think RAID type situation: keep one ESP on each drive, so you can remove a failing one); and (2) a tool to create a self-contained rescue disk image from a directory (which will act as initramfs) and a kernel (falling back to a vmlinuz file).

It might also be interesting to add support for other bootloaders and setups, so you could automatically sign a grub cryptodisk image for example. Not sure how much sense that makes.

I published the source at https://github.com/julian-klode/sicherboot (MIT licensed) and uploaded the package to Debian; it should enter the NEW queue soon (or be in NEW by the time you read this). Give it a try, and let me know what you think.


Filed under: Debian, sicherboot

Sebastian Dröge: Writing GStreamer Elements in Rust (Part 2): Don’t panic, we have better assertions now – and other updates

Planet GNOME - Wed, 07/09/2016 - 2:39pm

It’s been a while since the last article about writing GStreamer plugins in Rust, so here is a short (or not so short?) update on the changes since then. You might also want to attend my talk about this topic at the GStreamer Conference on 10-11 October 2016 in Berlin.

At this point there are still only the same elements as before (HTTP source, file sink and source), but the next step is going to be something more useful (an element processing actual data; whether a parser or a demuxer is not decided yet) now that I’m happy with the general infrastructure. You can still find the code in the same place as before on GitHub, and that’s where all updates are going to land.

The main sections here will be Error Handling, Threading and Asynchronous IO.

Error Handling & Panics

First of all let’s get started with a rather big change that shows some benefits of Rust over C. There are two types of errors we care about here: expected errors and unexpected errors.

Expected Errors

In GLib based libraries we usually report errors with some kind of boolean return value plus an optional GError that allows propagating further information about what exactly went wrong, both to the caller and to the user. Bindings sometimes convert these directly into exceptions of the target language or whatever construct exists there.

Unfortunately, in GStreamer we don’t use GErrors very often. Consider for example GstBaseSrc (in pseudo-C++/Java/… for simplicity):

class BaseSrc {
    ...
    virtual gboolean start();
    virtual gboolean stop();
    virtual GstFlowReturn create(GstBuffer ** buffer);
    ...
}

For start()/stop() there is just a boolean, and for create() there is at least an enum with a few variants. This is far from ideal, so implementors of those virtual methods are additionally required to post error messages with further details if something goes wrong. Those are propagated out of the normal control flow via the GstBus to the surrounding bins and, in the end, to the application. It would be much nicer if we had GErrors there instead and made it mandatory for implementors to return one if something goes wrong. These could still be converted to error messages, but at a central place. Something to think about for the next major version of GStreamer.

This is of course only for expected errors, that is, for things where we know that something can go wrong and want to report that.

Rust

In Rust this problem is solved in a similar way; see the huge chapter about error handling in the documentation. You basically return either the successful result or something very similar to a GError:

trait Src {
    ...
    fn start(&mut self) -> Result<(), ErrorMessage>;
    fn stop(&mut self) -> Result<(), ErrorMessage>;
    fn create(&mut self, buffer: &mut [u8]) -> Result<(), FlowError>;
    ...
}

Result is the type behind that, and it comes with convenient macros for propagating errors upwards (try!()), methods for chaining multiple failing calls and/or converting errors (map(), and_then(), map_err(), or_else(), etc.), and libraries that make defining errors easier, including all the glue code required for combining different error types from different parts of the code.
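
To illustrate (a minimal sketch, not code from the repository; read_header and its String error type are hypothetical), this is how try!() propagates an error upwards:

fn read_header(data: &[u8]) -> Result<u32, String> {
    if data.len() < 4 {
        // The error case: the caller has to deal with this
        return Err("buffer too small".to_owned());
    }
    Ok(((data[0] as u32) << 24) | ((data[1] as u32) << 16)
        | ((data[2] as u32) << 8) | (data[3] as u32))
}

fn parse(data: &[u8]) -> Result<u32, String> {
    // try!() returns early with the error if read_header() failed
    let size = try!(read_header(data));
    Ok(size)
}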

Similar to Result, there is also Option, which can be Some(x) or None, to signal the possible absence of a value. It works similarly and has a similar API, but is generally not meant for error handling. It’s now used instead of GST_CLOCK_TIME_NONE (aka u64::MAX) to signal the absence of e.g. a stop position of a seek, or the absence of a known size of the stream. It’s more explicit than giving a single integer value a completely different meaning in each of those cases.
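
As a small illustration (a hypothetical function, not from the repository) of why this is nicer than a magic constant:

// Instead of a sentinel value like u64::MAX, the absence of a
// stop position is an explicit, separate case:
fn describe_stop(stop: Option<u64>) -> String {
    match stop {
        Some(pos) => format!("stop at {}", pos),
        None => "no stop position".to_owned(),
    }
}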

How is this different?

The most important difference from my point of view is that you must handle errors in one way or another; otherwise the compiler won’t accept your code. If something can fail, you must explicitly handle this and can’t just silently ignore the possibility of failure, while in C people tend to just ignore error return values and assume that things went fine.

What are ErrorMessage and FlowError?

As you probably expect, ErrorMessage maps to the GStreamer concept of error messages and contains exactly the same kind of information. In Rust this is implemented slightly differently, but in the end it amounts to the same thing. The main difference here is that whenever e.g. start() fails, you must provide an error message and can’t just fail silently. That error message can then be used by the caller, e.g. posted on the bus (and that’s exactly what happens).

FlowError is basically the negative part (the errors or otherwise non-successful results) of GstFlowReturn:

pub enum FlowError {
    NotLinked,
    Flushing,
    Eos,
    NotNegotiated(ErrorMessage),
    Error(ErrorMessage),
}

Similarly, for the actual errors (NotNegotiated and Error) an actual error message must be provided, and that then gets used by the caller (and is posted on the bus).

And in the same way, if setting a URI fails we now return a Result<(), UriError>, which then reports the error properly to GStreamer.

In summary: if something goes wrong, we know about it, have to handle/report it, and have an error message to post on the bus.

Macros are awesome

As a side note, creating error messages for GStreamer is not too convenient, as they want information like the current source file, line number, function, etc. As in C, I’ve created a macro to build such an error message. Unlike in C, macros in Rust are actually awesome and not just arbitrary text substitution. Instead they work via pattern matching and allow you to distinguish all kinds of different cases, can be recursive, and are somewhat typed (expression vs. statement vs. block of code vs. type name, …).
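
For illustration only, a much-simplified sketch of such a macro (the real one in the repository is more elaborate, and error_msg! is a hypothetical name): it captures the source location via the built-in file!(), line!() and module_path!() macros:

macro_rules! error_msg {
    ($msg:expr) => {
        // file!(), line!() and module_path!() expand to the
        // location of the macro invocation, like __FILE__ etc. in C
        format!("{} ({}:{}, in {})", $msg, file!(), line!(), module_path!())
    };
}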

Unexpected Errors

So far this was about expected errors, which have to be handled explicitly in Rust but not in C, and for which we have some kind of data structure to pass around. What about the other cases, the errors that should never happen (but usually do, sooner or later) because your program would be completely broken then, all your assumptions wrong, and you wouldn’t know what to do in those cases anyway?

In C with GLib we usually have three ways of handling these: 1) not at all (and crashing, doing something wrong, deadlocking, deleting all your files, …), 2) explicitly asserting what the assumptions in the code are and crashing cleanly otherwise (SIGABRT), or 3) returning some default value from the function, returning immediately and printing a warning instead of going on.

None of these three can be handled by the caller in any way, which seems fair because they should never happen, and if they do we wouldn’t know what to do anyway. 1) is obviously the least desirable but the most common; 3) is only slightly better (you get a warning, but usually sooner or later something will crash anyway because you’re in an inconsistent state); and 2) is the cleanest. However, 2) is not something you really want either: your application should somehow be able to return to a clean state if it can (by e.g. storing the current user data, stopping everything and loading up a new UI with the stored user data and some dialog).

Rust

Of course no Rust code should ever run into case 1) above and randomly crash, cause memory corruption or similar. But this can still happen due to bugs in Rust itself, use of unsafe code, or code wrapping e.g. a C library. There’s not really anything that can be done about that.

For the other two cases there is something, however: catching panics. Whenever something goes wrong in unexpected ways, the corresponding Rust code can call the panic!() macro in one way or another: via assertions, or by “asserting” that a Result is never the error case by calling unwrap() on it (you don’t have to handle errors, but you have to explicitly opt in to ignoring them by calling unwrap()).

What happens from there on is similar to exception handling in other languages (unless you compiled your code so that panics immediately kill the application). The stack gets unwound, everything gets cleaned up on the way, and at some point either everything stops or someone catches the panic. The boundary for the unwinding is either your main() in Rust or, if the code is called from C, that exact point (i.e. for the GStreamer plugins, the point where functions are called from GStreamer).
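
The standard library mechanism behind this is std::panic::catch_unwind() (stable since Rust 1.9). A minimal sketch of what such a boundary can look like:

use std::panic;

fn boundary_call() -> Result<(), ()> {
    let result = panic::catch_unwind(|| {
        // code that may panic!() somewhere deep inside
        assert_eq!(1 + 1, 2);
    });
    // Err means a panic was caught at this boundary
    result.map_err(|_| ())
}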

So what?

At the point where GStreamer calls into the Rust code, we now catch all unwinds that might happen and remember that one happened. The panic is converted into a GStreamer error message (so that the application can handle it in a meaningful way), and by remembering it we prevent any further calls into the Rust code, immediately turning them into error messages too and returning.

This keeps the inconsistent state inside the element and allows the application to e.g. remove the element and replace it with something else, restart the pipeline, or do whatever else it wants to do. Assertions are always local to the element and are not going to take down the whole application!
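
A sketch of the “remember that a panic happened” part, under assumptions about the wrapper’s internals (the names here are hypothetical, not the actual code from the repository):

use std::panic::{self, AssertUnwindSafe};
use std::sync::atomic::{AtomicBool, Ordering, ATOMIC_BOOL_INIT};

static PANICKED: AtomicBool = ATOMIC_BOOL_INIT;

fn guarded_call<F: FnOnce()>(f: F) -> Result<(), ()> {
    // Once a panic has happened, fail all further calls immediately
    if PANICKED.load(Ordering::SeqCst) {
        return Err(());
    }
    panic::catch_unwind(AssertUnwindSafe(f)).map_err(|_| {
        PANICKED.store(true, Ordering::SeqCst);
    })
}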

Threading

The other major change is that Sink and Source are now single-threaded. There is no reason why the code would have to worry about threading, as everything happens in exactly one thread (the streaming thread), except for the setting/getting of the URI (and possibly other “one-time” settings in the future).

To solve that, at the translation layer between C and Rust there is now a (Rust!) wrapper object that handles all the threading (in Rust, with mutexes, which work like the ones in C++, or atomic booleans/integers), stores the URI separately from the Source/Sink, and just passes the URI to the start() function. This made the code much cleaner and made it even simpler to write new sources or sinks. No more multi-threading headaches.
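
Conceptually, something like this (a simplified sketch; the type and field names are hypothetical, not the actual wrapper from the repository):

use std::sync::Mutex;

struct SourceWrapper<S> {
    // Settable/gettable from any thread via the property interface
    uri: Mutex<Option<String>>,
    // Only ever used from the streaming thread
    source: Mutex<S>,
}

impl<S> SourceWrapper<S> {
    fn set_uri(&self, uri: Option<String>) {
        *self.uri.lock().unwrap() = uri;
    }
}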

I think that in general we should move to such a simpler model in GStreamer and not require a full-fledged, multi-threaded GstElement subclass to be implemented, but instead something more use-case oriented (source, sink, encoder, decoder, …) that has a single-threaded API and hides all the gory details of GstElement. You don’t have to know these in most cases, so you shouldn’t have to know them as is required right now.

Simpler Source/Sink Traits

Overall the two traits look like this now, and that’s all you have to implement for a new source or sink:

pub type UriValidator = Fn(&Url) -> Result<(), UriError>;

pub trait Source {
    fn uri_validator(&self) -> Box<UriValidator>;
    fn is_seekable(&self) -> bool;
    fn get_size(&self) -> Option<u64>;
    fn start(&mut self, uri: Url) -> Result<(), ErrorMessage>;
    fn stop(&mut self) -> Result<(), ErrorMessage>;
    fn fill(&mut self, offset: u64, data: &mut [u8]) -> Result<(), FlowError>;
    fn seek(&mut self, start: u64, stop: Option<u64>) -> Result<(), ErrorMessage>;
}

pub trait Sink {
    fn uri_validator(&self) -> Box<UriValidator>;
    fn start(&mut self, uri: Url) -> Result<(), ErrorMessage>;
    fn stop(&mut self) -> Result<(), ErrorMessage>;
    fn render(&mut self, data: &[u8]) -> Result<(), FlowError>;
}
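
For example, a toy sink that merely counts the bytes it receives could be implemented roughly like this (a sketch written against the traits above, not code from the repository):

struct CountingSink {
    count: u64,
}

impl Sink for CountingSink {
    fn uri_validator(&self) -> Box<UriValidator> {
        // Accept any URI
        Box::new(|_uri: &Url| -> Result<(), UriError> { Ok(()) })
    }

    fn start(&mut self, _uri: Url) -> Result<(), ErrorMessage> {
        self.count = 0;
        Ok(())
    }

    fn stop(&mut self) -> Result<(), ErrorMessage> {
        Ok(())
    }

    fn render(&mut self, data: &[u8]) -> Result<(), FlowError> {
        self.count += data.len() as u64;
        Ok(())
    }
}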

Asynchronous IO

Last time I mentioned that a huge missing feature was asynchronous IO, in a composable way. There is news here: there is now an abstract implementation of futures, and a set of higher-level APIs around mio for doing actual IO, called tokio. Independent of that there’s also futures-cpupool, which allows running arbitrary computations as futures on the threads of a thread pool.

Recently the HTTP library Hyper, as used by the HTTP source (and Servo), got a branch that moves it to tokio to allow asynchronous IO. Once that has landed, it can relatively easily be used inside the HTTP source to allow interrupting HTTP requests at any time.

It seems like this area is moving in a very promising direction now, solving my biggest technical concern in a very pleasant way.

Mike Gabriel: Debian's GTK-3+ v3.21 breaks Debian MATE 1.14

Planet Debian - Wed, 07/09/2016 - 1:21pm
sunweaver sighs...

This short post is to inform all Debian MATE users that the recent GTK-3+ upload to Debian (GTK-3+ v3.21) broke most parts of the MATE 1.14 desktop environment as currently available in Debian testing (aka stretch). This raises some questions here on the MATE maintainers' side...

Questions
  1. Isn't GTK-3+ a shared library? This one was rhetorical... Yes, it is.

  2. One that breaks other applications with every point release? Well, unfortunately, as experience over the past years has shown: yes, this has happened several times so far, and it has happened again.

  3. Why is it that GTK-3+ uploads appear in Debian without going through a proper transition? This question is not rhetorical. If someone has an answer, please enlighten me.

Potential Counter Measures

For Debian MATE users running Debian testing: this is untested, but it is quite likely that your MATE desktop environment will work again once you have reverted your GTK-3+ library to v3.20. For obtaining old Debian package versions, please visit the https://snapshots.debian.org site.

Prospective

The MATE 1.16 release is expected for Sep 20th, 2016. We will do our best to provide MATE 1.16 in Debian before this month is over. MATE 1.16 will again run smoothly (so I heard) on GTK-3+ 3.21.


light+love
sunweaver (who is already scared of the 3.22 GTK+ release, luckily the last development release of the GTK+ 3-series)

Ubuntu App Developer Blog: Releasing the 4.1.0 Ubuntu SDK IDE

Planet UBUNTU - Wed, 07/09/2016 - 1:04pm

We are happy to announce the release of the Ubuntu SDK IDE 4.1.0 for the Trusty, Xenial and Yakkety Ubuntu series.

The testing phase took longer than we expected, but finally we are ready. To compensate for this delay we have even upgraded the IDE to the most recent QtCreator, 4.1.0.

Based on QtCreator 4.1.0

We have based the new IDE on the most recent QtCreator upstream release, which brings a lot of new features and fixes. To see what’s new, just check out: http://blog.qt.io/blog/2016/08/25/qt-creator-4-1-0-released/.

LXD based backend

The click chroot based builders are now deprecated. LXD allows us to download and use pre-built SDK images instead of having to bootstrap them every time a new build target is created. These LXD containers are used to run the applications from the IDE, which means that the host machine of the SDK IDE does not need any runtime dependencies.

Get started

It is good to know that all existing schroot based builders will no longer be used by the IDE. The click chroots will remain on the host but will be decoupled from the Ubuntu SDK IDE. If they are not required otherwise, just remove them using the Ubuntu dialog in Tools->Options.

If the beta IDE was already in use, make sure to recreate all containers; there were some bugs in the images that we do not fix automatically.

To get the new IDE use:

sudo add-apt-repository ppa:ubuntu-sdk-team/ppa

sudo apt update && sudo apt install ubuntu-sdk-ide

Check our first blog post about the LXD based IDE for more detailed instructions:

https://developer.ubuntu.com/en/blog/2016/06/14/calling-testers-new-ubuntu-sdk-ide-post/

Sebastian Kügler: KDE Store presentation video online

Planet UBUNTU - Wed, 07/09/2016 - 11:37am

The QtCon / Akademy organizers have published the videos of last weekend’s conference presentations. If you’re interested in the topic, you can watch the video of my presentation about the KDE Software Store here.

I’ve also uploaded my slides, and you can find the rest of the QtCon presentation videos here.

Reproducible builds folks: Reproducible Builds: week 71 in Stretch cycle

Planet Debian - Wed, 07/09/2016 - 10:14am

What happened in the Reproducible Builds effort between Sunday August 28 and Saturday September 3 2016:

Media coverage

Antonio Terceiro blogged about testing build reproducibility with debrepro.

GSoC and Outreachy updates

The next round is being planned now: see their page with a timeline and a list of participating organizations.

Maybe you want to participate this time? Then please reach out to us as soon as possible!

Packages reviewed and fixed, and bugs filed

The following packages have addressed reproducibility issues in other packages:

The following updated packages have become reproducible in our current test setup after being fixed:

The following updated packages appear to be reproducible now, for reasons we were not able to figure out yet. (Relevant changelogs did not mention reproducible builds.)

The following 4 packages were not changed, but have become reproducible due to changes in their build-dependencies:

Some uploads have addressed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Reviews of unreproducible packages

706 package reviews have been added, 22 have been updated and 16 have been removed this week, adding to our knowledge about identified issues.

5 issue types have been added:

1 issue type has been updated:

Weekly QA work

FTBFS bugs have been reported by:

  • Chris Lamb (8)
  • Lucas Nussbaum (3)
diffoscope development

diffoscope development on the next version (60) continued in git, taking in contributions from:

  • Mattia Rizzolo:
    • Better and more thorough testing
    • Improvements to packaging
    • Improvements to the ppu comparator
strip-nondeterminism development

Mattia Rizzolo uploaded strip-nondeterminism 0.023-2~bpo8+1 to jessie-backports.

A new version of strip-nondeterminism 0.024-1 was uploaded to unstable by Chris Lamb. It included contributions from:

  • Chris Lamb:
    • Improve code quality of zip, jar, ar, png processors
  • AYANOKOUZI, Ryuunosuke:
    • Preserve file attribute information of target file (#836075)

Holger added jobs on jenkins.debian.net to run testsuites on every commit. There is one job for the master branch and one for the other branches.

disorderfs development

Holger added jobs on jenkins.debian.net to run testsuites on every commit. There is one job for the master branch and one for the other branches.

tests.reproducible-builds.org

Debian: We now vary the GECOS records of the two build users. Thanks to Paul Wise for providing the patch.

Misc.

This week's edition was written by Ximin Luo, Holger Levsen & Chris Lamb and reviewed by a bunch of Reproducible Builds folks on IRC.

Markus Koschany: My Free Software Activities in August 2016

Planet Debian - Tue, 06/09/2016 - 11:28pm

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Android, Java, Games and LTS topics, this might be interesting for you.

Debian Android
  • This was the final month of the Google Summer of Code and the students achieved the main goal of packaging the Android SDK. It is now possible to build Android apps on Debian with packages only from the main distribution (apt install android-sdk). Chirayu Desai fixed the last remaining issue in android-platform-system-core (#827216).  That also means apktool is now ready to rebuild Android applications. You can find more information about the students’ work at wiki.debian.org and on their individual pages Chirayu Desai, Kai-Chung Yan and Mouaad Aallam.
  • I sponsored a new upstream release (2.2.0) of apktool for Chirayu Desai.
  • I also reviewed and sponsored the following packages for Kai-Chung and Chirayu Desai (RC bug fixes and new upstream releases): android-platform-dalvik, android-platform-frameworks-base, android-sdk-meta.
Debian Games
  • I started the month with package updates for foobillardplus, tuxpuck, etw, cube2, cube2-data and neverball.
  • I released a new revision of triplane to fix a reproducible build issue.
  • I packaged a new upstream release of springlobby.
  • I fixed GCC-6 FTBFS bugs in stormbaancoureur and love and updated both packages to use modern Debian helpers (stormbaancoureur needed it badly).
  • I invested some time in packaging Liquidwar 6 (#680023) and attached my preliminary work to the bug report. Liquidwar 6 has been in the works for a long time now and is a complete rewrite of the original Liquidwar game. The graphics are much more polished and dozens of new levels are available. I didn’t complete my work on Liquidwar 6 because, at least on my system, the game constantly consumes 100% CPU time. Network mode isn’t finished yet and it still depends on SDL 1. Nowadays I’m only interested in SDL 2 (or similar) games though, because I think the library is more future-proof and SDL 1 will probably become a burden for future maintainers.
  • In the second half of the month I fixed a couple of RC bugs again caused by the Boost 1.61 transition and, yes, still more GCC-6 bugs: libclaw (GCC-6 and Boost 1.61 issues, new upstream release), freeorion (Boost 1.61 FTBFS, #833773; this one was arguably a regression in Boost 1.61 and I filed #833794 because of it), pokerth (GCC-6 RC bugs; I also took the opportunity to implement systemd support for pokerth-server and modified the package to run the server as the _pokerth system user out of the box), 0ad (missing build-dependency on python).
  • Even music packages can pile up bug reports, so I went ahead and updated fretsonfire-songs-muldjord and fretsonfire-songs-sectoid.
  • In the last days of August 2016 I packaged a new upstream release of redeclipse and redeclipse-data, a first-person shooter. The older version was network-incompatible and long unsupported.
Debian Java

Debian LTS

This was my seventh month as a paid contributor and I have been paid to work 14.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 1 August to 7 August I was in charge of our LTS frontdesk. I triaged CVEs in wordpress, mysql-5.5, libsys-syslog-perl, libspring-java, curl and squid, and answered questions on the debian-lts mailing list.
  • DLA-586-1. Issued a security update for curl fixing 2 CVEs.
  • DLA-585-1. Announced the security update for firefox-esr which was prepared by Mike Hommey.
  • I was involved in an embargoed security issue that currently affects two source packages in Wheezy. The update will be released on 15 September 2016 and will be coordinated with Debian’s Security Team and other distributions. I will add more information next month.
  • DLA-610-1. I spent most of my time this month on triaging and fixing security issues in tiff3, a library providing support for the Tagged Image File Format (TIFF). 99 source packages currently build-depend on this library in Wheezy. In total I triaged 35 CVEs and fixed 23 of them. I could confirm that CVE-2015-1547, CVE-2016-5322, CVE-2016-5314, CVE-2016-5315, CVE-2016-5316, CVE-2016-5317 and CVE-2016-5320 were duplicates of other CVEs fixed in this update. The update hardened the library and fixed possible denial-of-service (application crash) and arbitrary code execution issues. I tested against the provided reproducers (malicious tiff images) whenever possible. The tiff3 package now includes all currently available patches. Most of the currently open vulnerabilities do not directly affect end users, since no binary package is provided for the tiff tools in Wheezy; however, they can still pose a threat to people who build these tools from source manually, even though the majority of users should not be affected. It is also unlikely that the remaining issues will be fixed by tiff’s upstream developers, since they decided to remove the affected applications from newer releases; but again, most of them can’t be exploited, since the tools are not built by default in this version.
Non-maintainer uploads
  • I did an NMU for pacman fixing one GCC-6 RC bug.
QA
  • I packaged a new upstream release of pygccxml and worked around an RC bug that threatened to remove spring. For similar reasons I filed #835121 against castxml, which got quickly fixed by Gert Wollny.

Elena 'valhalla' Grandi: Candy from Strangers

Planet Debian - Tue, 06/09/2016 - 8:46pm
Candy from Strangers

A few days ago I gave a talk at ESC https://www.endsummercamp.org/ about some reasons why I think that using software, and especially libraries, from the packages of a community-managed distribution is important and much better than alternatives such as pypi, npm etc. This article is a translation of what I planned to say, before forgetting bits of it and luckily adding them back as an answer to a question :)

When I was young, my parents taught me not to accept candy from strangers, unless they were present and approved of it, because there was a small risk of very bad things happening. It was of course a simplistic rule, but it had to be easy enough to follow for somebody who wasn't proficient (yet) in the subtleties of social interactions.

One of the reasons why it worked well was that following it wasn't a big burden: at home candy was plentiful and actual offers were rare: I only remember missing out on one piece of candy because of it, and while it may have been a great one, the ones I could have at home were also good.

Unlike candy, offers of gratis software from random strangers are quite common: from suspicious-looking websites to legit and professional-looking ones, to platforms that are explicitly designed to allow developers to publish their own software with little or no checks.

Just like with candy, there is also a source of trusted software: the Linux distributions, especially those led by a community. I mention mostly Debian because it's the one I know best, but the same principles apply to Fedora and, to some measure, to most of the other distributions. Like good parents, distributions can be wrong, and they do leave room for older children (and proficient users) to make their own choices, but they still provide a safe default.

Among the unsafe sources there are many different cases, and while they share some of the risks, they have different targets with different issues; for brevity, the scope of this article is limited to the ones that mostly concern software developers: language-specific package managers and software distribution platforms like PyPI, npm, rubygems etc.

These platforms are extremely convenient both for the writers of libraries, who are enabled to publish their work with minor hassle, and for the people who use such libraries, because they provide an easy way to install and use a huge amount of code. They are of course also an excellent place for distributions to find new libraries to package and distribute, and this I agree is a good thing.

What I believe, however, is that getting code from such sources and using it without carefully checking it is even more risky than accepting candy from a random stranger on the street in an unfamiliar neighbourhood.

The risks aren't trivial: while you probably won't be held hostage for ransom, your data could be, or your devices and the ones that run your programs could be used in some criminal act, causing at least some monetary damage both to yourself and to society at large.

If you're writing code that should be maintained over time there are also other risks, even when no malice is involved, because each package on these platforms has a different policy with regard to updates, their backwards compatibility, and what can be expected in case an old version is found to have security issues.

The very fact that everybody can publish anything on such platforms is both their biggest strength and their main source of vulnerability: while most of the people who publish their libraries do so with good intentions, attacks have been described and publicly tested, such as the fun typo-squatting one http://incolumitas.com/2016/06/08/typosquatting-package-managers/ (archived at http://web.archive.org/web/20160801161807/http://incolumitas.com/2016/06/08/typosquatting-package-managers/) that published harmless malicious code under common typos for famous libraries.

Contrast this with Debian, where everybody can contribute, but before they are allowed full unsupervised access to the archive they have to establish a relationship with the rest of the community, which includes meeting other developers in real life, at the very least to get their gpg keys signed.

This doesn't prevent malicious people from introducing software, but it significantly raises the effort required to do so, and once caught, people can usually be prevented from repeating it much more effectively than a simple ban on an online-only account could achieve.

It is true that not every Debian maintainer actually does a full code review of everything that they allow into the archive, and in some cases it would be unreasonable to expect it, but in most cases they are at least familiar enough with the code to do bug triage, and most importantly they are in an excellent position to establish a relationship of mutual trust with the upstream authors.

Additionally, package maintainers don't work in isolation: a growing number of packages are maintained by a team of people, and most importantly there are aspects that potentially involve the whole community, from the fact that new packages entering the distribution are publicly announced on a mailing list, to the various distribution-wide QA efforts.

Going back to the language-specific distribution platforms: sometimes even the people who manage the platform themselves can't be fully trusted to do the right thing. I believe everybody in the field remembers the npm fiasco https://lwn.net/Articles/681410/ where a lawyer's letter requesting the removal of a package started a series of events that resulted in potentially breaking a huge number of automated build systems.

There, some of the problems were caused by technical policies that made the whole ecosystem especially vulnerable, but one big issue was the fact that the managers of the npm platform are a private entity with no oversight from the user community.

Not all distributions are equal here, but contrast this with Debian, where the distribution is managed by a community that is based on a social contract https://www.debian.org/social_contract and is governed via democratic procedures established in its constitution https://www.debian.org/devel/constitution.

Additionally, the long history of the distribution model means that many issues have already been met, the mistakes have already been made, and there are established technical procedures to deal with them in a better way.

So, shouldn't we use language-specific distribution platforms at all? No! As developers we aren't children; we are adults who have the skills to distinguish between safe and unsafe libraries just as well as the average distribution maintainer can. What I believe we should do is stop treating them as a safe source that can be used blindly, reserve that status for actually trustworthy sources like Debian, and fall back to the language-specific platforms only when strictly needed. In that case we should:

  • actually check carefully what we are using, both by reading the code and by analysing the development and community practices of the authors;
  • if possible, share that work by ourselves becoming maintainers of that library in our favourite distribution, to prevent duplication of effort and to give back to the community whose work we take advantage of.

Guido Günther: Debian Fun in August 2016

Planet Debian - Tue, 06/09/2016 - 8:08pm
Debian LTS

August marked the sixteenth month I contributed to Debian LTS under the Freexian umbrella. I spent 9 hours (of 8 allocated) mostly on Rails-related CVEs, which resulted in DLA-603-1 and DLA-604-1, fixing 6 CVEs and marking others as not affecting the packages. The hardest part was proper testing, since the split packages in Wheezy don't allow running the upstream test suite as is. There's still CVE-2016-0753, which I need to check to see whether it affects activerecord or activesupport.

Additionally I had one relatively quiet week of LTS frontdesk work triaging 10 CVEs.

Other Debian stuff
  • I uploaded git-buildpackage 0.8.2 to experimental and 0.8.3 to unstable, the latter bringing all the enhancements and bugfixes since DebConf 16 to sid and testing.
  • The usual bunch of libvirt related uploads

Lasse Schuirmann: A Git Workflow for Humans

Planet GNOME - Tue, 06/09/2016 - 2:39pm
Introduction

This blog post serves as documentation for a Git workflow that I successfully use for my Open Source projects (e.g. coala) as well as for my clients. It’s focused on two things:

  • Code quality, because we need it. Otherwise our stuff will break.
  • Simplicity, because we’re humans and we don’t want to use something as complicated as Git flow. (I have seen a lot of people claiming to use Git flow; however, when we talked about it, it almost always turned out they don’t actually use it. :))

It gives general guidelines, and I encourage people to change the workflow according to their special needs; however, make sure that everything you do moves towards simplicity and quality and happens for a good reason.

The following paragraphs define the most simple and minimal approach, the base case of how this workflow works; the extensions paragraph defines some extensions which help you deal with several common use cases. You will likely end up using the base workflow with one or two extensions.

The last paragraph will recommend some tooling which allows you to run this workflow more efficiently.

Base Workflow

Branch Names

Branch names are important because they influence how we think about the workflow. The main branch for Git repositories is master. Master is supposed to be always stable and the main starting point for developers. The respect for a branch named master is higher than for e.g. develop, and you will get higher quality results just by naming it that.

For development you will want to go with user-owned branches. If I name my branch feature/newui, the name contains less information than naming it sils/newui, sils being my user identification. Any developer knows whom to contact if there is a stale branch or any problem: the owner of that branch.

As an owner of a branch, I can also reset my branch to a new commit that has nothing to do with the previous history. It’s my branch and it’s my responsibility.

Code Review

Great. I have my own branch, I developed a crazy new thing and I want it to be in master! How does it work?

Do the natural thing. Submit a Pull Request, Merge Request, patches on BugZilla or whatever review UI you already use.

Start reviewing: my strong recommendation is to make good commits and review every commit on its own. Make sure that every commit only changes one thing and is as small as possible. Reviewers will find more bugs and you will save a lot of time in the long run. “Reduce technical debt.” Of course you will also want to use continuous integration and code analysis on your project to save lots of review time and enable people to find and fix issues earlier. You can use git rebase --interactive for fixing up your commits; don’t be afraid, after you learn it once it’ll come in handy in a lot of situations.

Many workflows would now propose a merge commit. I recommend doing a fast-forward or implementing a semi-linear workflow. Why? If you have worked with merge commits for a longer time, you have probably seen failing builds on master or other critical branches even with CI on all branches: merge commits are changes. If you don’t review them (and that’s a hard thing to do) they may bite you. What does this mean?

Before doing a merge you have to rebase your commits onto the latest version of master. The continuous integration will be retriggered and your builds verify your code again. You should also check manually whether the commits that appeared underneath your existing ones could do any harm! After doing that you can either do a fast-forward (git merge --ff-only) or a merge commit (git merge --no-ff) if you want to keep the history of your PRs/MRs. I recommend doing the fast-forward and thinking in changes, not in features. This purely psychological thing can change the way you develop source code. Your builds will no longer fail for deterministic reasons.

Releases

I recommend doing continuous releases from your master branch. Either push your website to your server or your package as a prerelease to PyPI.

If you manually want to trigger releases, set up your CI to do it for you on your command right from master. (E.g. using the “when: manual” in GitLab CI or when tagging a commit.)

If that is sufficient for you, you won’t need any other branch than master and user owned branches.

Extensions

The following paragraphs explain how you can extend your workflow.

Hotfix Branches

You may have the need to be able to fix any production issues really quickly. You will want to bypass code review. You might even want to bypass continuous integration. The solution is simple:

Just set up automatic deployment for hotfix/… branches.

The most important thing, however, is not to use master! Master is always stable and reviewed. You deploy a hotfix *temporarily* and pause all other development until a clean equivalent of the hotfix is merged/fast-forwarded into master. This way master doesn’t get broken, but you are still able to temporarily deploy potentially dirty hacks when needed.

Release Branches

If you want to maintain bugfix releases featuring only selected bugfix commits, you will want to branch off a release/… branch when doing a release. Usually you’ll want to name it after the major and minor version but not include the micro, as your branch will move over your micro releases. (E.g. release/0.8 is good.) Whenever you want to do a bugfix release, just cherry-pick your commits onto that branch and trigger a release when needed.

Apply the same code review policies as for master. Doing automatic prereleases may be awesome for the people using your software, letting them get the latest stuff from master in no time.

Tooling

Long story short: keep away from GitHub. GitHub forces you into their merge-based workflow, cluttering history and compromising your code quality (with the advantage of being a bit simpler for them to implement and for you to use).

The best tool I found so far for this is the GitLab Enterprise Edition, which is sadly not free software. The recommended setup is:

  • Protect the master branch. Nobody can push. Everybody can merge.
  • Allow merges only when builds pass.
  • Allow merges only when at least one (potentially more) non-author has approved a merge request.
  • Set merges to fast-forward only. GitLab will offer coders a rebase button, so you don’t have to do it manually every time.
  • Automatic deployment, or “when: manual”, for master/release/hotfix branches.
  • Set up GitLab CI to build and test your stuff; if you’re deploying with Docker, test in Docker! (A sketch follows this list.)
  • Use static code analysis like coala in your GitLab CI.
  • Enforce a minimal test coverage; ideally, your coverage should always grow or stay the same. That’s a good way to handle legacy projects as well as mature, well-tested ones.
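A minimal .gitlab-ci.yml sketch combining these recommendations; the image, the scripts and the exact coala invocation are placeholders to adapt:

image: python:3.5               # placeholder build environment

stages:
  - analysis
  - test
  - deploy

static-analysis:
  stage: analysis
  script:
    - pip install coala-bears
    - coala --non-interactive   # placeholder invocation of the static analyzer

test:
  stage: test
  script:
    - pip install coverage
    - coverage run setup.py test
    - coverage report           # a project-level coverage regex can pick this up

deploy:
  stage: deploy
  only:
    - master
    - /^release\/.*$/
    - /^hotfix\/.*$/
  script:
    - ./deploy.sh               # placeholder; if you deploy with Docker, test in Docker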

Andrew Shadura: Manual control of OpenEmbedded -dbg packages

Planet Debian - Mar, 06/09/2016 - 2:28md

In December last year, OpenEmbedded introduced automatic debug packages. Prior to that, you’d need to construct the FILES_${PN}-dbg variable in your recipe manually. If you need to retain manual control over precisely what goes into debug packages, set the undocumented NOAUTOPACKAGEDEBUG variable to 1, the same way the Qt recipe does:

NOAUTOPACKAGEDEBUG = "1"
FILES_${PN}-dev = "${includedir}/${QT_DIR_NAME}/Qt/*"
FILES_${PN}-dbg = "/usr/src/debug/"
FILES_${QT_BASE_NAME}-demos-doc = "${docdir}/${QT_DIR_NAME}/qch/qt.qch"

P.S. Knowing this would have saved me and my colleagues days of work.

Norbert Preining: Yukio Mishima: Patriotism (憂国)

Planet Debian - Mar, 06/09/2016 - 1:59md

A masterpiece by Yukio Mishima – Patriotism – a story of love and death. It is a short story about the double suicide of a lieutenant and his wife following the Ni Ni Roku Incident, in which parts of the military tried to overthrow government and military leaders. Lieutenant Takeyama was not involved in the coup, because his friends wanted to safeguard him and his new wife, but he then found himself facing the prospect of fighting his friends and seeing them executed. Unable to cope with this situation, he commits suicide, followed by his wife.

Written in 1960 by one of the most interesting writers of modern Japanese history, Yukio Mishima, both this book and the movie Mishima made of it himself are deeply disturbing images of the relation between the individual and the state.

Although the English title says Patriotism, the Japanese one is 憂国 (Yukoku), which is closer to “concern for one’s own country”. It is this concern, together with the feeling of devotion to the Imperial system and the country, that leads the two to their deed. We are guided through the whole book and movie by a large scroll with 至誠 (shisei, devotion) written on it.

But indeed, Patriotism is a good title, I think; it names one of the most dangerous concepts mankind has brought forth. If patriotism were only the love for one’s own country, all would be fine. But reality shows that patriotism unfailingly brings along xenophobia and a feeling of superiority.

Coming from a small and unimportant country, I never felt even the slightest pull towards patriotism in the bad sense. And looking at the world and the people around me, I often have the feeling that it is mainly big countries that produce the biggest and worst style of patriotism. This is obvious in countries like China, but having recently learned that all US pupils have to recite (obviously without understanding) the Pledge of Allegiance, the shock of how badly patriotism can start washing the brains of even the smallest kids in a seemingly free country is still present.

But back to the book: here patriotism is exhibited by the presence of the Imperial images and the shrine in the entrance, in front of which the two pray for the last time before taking their lives.

Not only is the book a masterpiece by itself, the movie is a special piece of art too: filmed in silent-movie style with text inserts, the whole story takes place on a Noh stage. This is particularly interesting, as Mishima was one of the few, if not the only, modern Noh playwrights; he wrote several Noh plays.

Another very impressive scene for me was when, after her husband’s suicide, Reiko returns from putting on her final make-up into the central room. Her kimono is already blood-soaked, and the trailing kimono leaves traces on the Noh stage resembling the strokes of calligraphy, as if her movement, too, is guided by 至誠.

The final scene of the movie shows the two of them in a Zen stone garden, themselves forming the stone, the unreachable island of happiness.

Very impressive, both the book and the movie.

Guido Günther: Debian Fun in August 2016

Planet Debian - Mar, 06/09/2016 - 8:35pd
Debian LTS

August marked the sixteenth month I contributed to Debian LTS under the Freexian umbrella. I spent 9 hours (of the allocated 8), mostly on Rails-related CVEs, which resulted in DLA-603-1 and DLA-604-1, fixing 6 CVEs and marking others as not affecting the packages. The hardest part was proper testing, since the split packages in Wheezy don't allow running the upstream test suite as-is. There's still CVE-2016-0753, which I need to check for whether it affects activerecord or activesupport.

Additionally I had one relatively quiet week of LTS frontdesk work triaging 10 CVEs.

Other Debian stuff
  • I uploaded git-buildpackage 0.8.2 to experimental and 0.8.3 to unstable, the latter bringing all the enhancements and bugfixes since DebConf 16 to sid and testing.
  • The usual bunch of libvirt related uploads

Arun Raghavan: GStreamer on Android and universal builds

Planet GNOME - Mar, 06/09/2016 - 5:37pd

This is a quick PSA for those of you using the GStreamer binary builds for Android.

With the Android NDK r12, the default behaviour while building native code changed from building for armeabi to building for all ABIs. So if your app doesn’t specify APP_ABI in its Application.mk, you will now get an error about unsupported architectures. This was tracked as bug 770631.

The idea behind this change is that your Android app should ship versions of your native code for all supported architectures as a “universal” build, so it is accessible to as many devices as possible.
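In Application.mk terms, such a universal build simply means listing every ABI you ship. A minimal sketch (the list here is only an example, mirroring the architectures named below):

APP_ABI := armeabi armeabi-v7a arm64-v8a x86 x86_64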

To deal with this, we now provide a universal tarball which contains binaries for all architectures that we support. This is currently ARM, ARMv7-A, ARMv8-A (64-bit), x86, and x86-64. MIPS and MIPS64 are currently not supported.

If you’ve been using the GStreamer Android binaries from before GStreamer 1.9.2, then you should start using the universal tarball rather than the architecture-specific tarball. You will need minor updates to your native build, like the ones we made to the player example. You probably want to put the gstAndroidRoot variable in ~/.gradle/gradle.properties instead, though.
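For instance, a ~/.gradle/gradle.properties entry could look like this (the path is a placeholder for wherever you unpacked the universal tarball):

gstAndroidRoot=/path/to/gstreamer-1.0-android-universal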

As Sebastian announced, assuming all goes well with the universal tarballs, we will stop shipping the per-arch tarballs — they are redundant, and just take up CI and disk resources.

There are some things that I’d like for us to be able to do better. The first is that Android Studio doesn’t pick up native code with our current build approach. This is a limitation of the Android Gradle NDK plugin, which doesn’t support a custom build. This should change with Android Studio 2.2.

I would also like to integrate better with Android Studio — either be able to specify the GStreamer Android binary path in the UI (like you do for the SDK/NDK), or better yet, have it be possible to specify the dependency in Gradle, and have it be automatically pulled from the Internet. If any of you are familiar with how we can do this, please shout out!

Clint Adams: Can't put your arms around a memory

Planet Debian - Mar, 06/09/2016 - 3:41pd

“I think it stems from employing people who are capable of telling you what BGP stands for,” he said. “Watching my DevOps team in action is an infuriating mix of ‘Damn, that's a slick CI/CD process you’ve built,’ and ‘What do you mean you don't know what the output of netstat means?’”

Shaun McCance: Restricted Funds in Non-Profit Accounting

Planet GNOME - Hën, 05/09/2016 - 10:35md

I’ve served as treasurer for three separate organizations over the last six years. Two of them are US 501(c)(3) non-profit organizations. The other is a consumer-owned cooperative. I’m not an accountant, but I’ve learned a lot about accounting, and each organization has forced me to learn something new.

Today’s adventure is learning how to deal with restricted funds, or funds that have to be used for a particular purpose. I’m going to show four different techniques for dealing with restricted funds, along with some pros and cons.

Restricted funds are conceptually similar to earmarked funds and fiscal sponsorships. In all cases, you have a specific amount of money that is supposed to be used for a specific purpose. I’ll make the following distinction: Both restricted funds and earmarked funds are short-term situations where you expect to spend the money, zero the balance, and stop looking at it. When I talk about restricted funds, I mean funds where you are legally obligated to spend and account for every penny, and pay back any money you did not spend. This could be a grant where the granting organization places requirements on the funds. When I talk about earmarked funds, I mean funds that you have designated for a particular purpose, but where you’re not legally obligated to spend it all, and you could roll a small leftover balance into general funds. This could be money from a targeted fundraising campaign.

Fiscal sponsorships, however, are long-term arrangements where you handle the money for a group of people doing something in line with your charitable cause. They officially operate under your organization so they don’t have to deal with all the non-profit paperwork, but they have a certain amount of autonomy, and importantly, their funds are theirs.

I’ll use the term targeted funds to refer to all three. The following techniques all conceivably work for each type of targeted funds, with different pros and cons on each. None of these techniques are, in my opinion, perfect. Also, by the way, I have to assume you have a passing familiarity with double-entry accounting and split transactions, among other things. Explaining those is another blog post. Onward…

Asset Subaccount

With this technique, you create a subaccount of your primary asset account (probably a checking account), and post all transactions for the targeted funds to that account. This has the advantage of letting you use expense and income accounts however you want, and it might be OK if you have very small and very simple transactions.

I don’t like this technique, though, for a number of reasons. It can lead to really crazy split transactions. Split transactions where money comes from separate income accounts are fine. Split transactions where money goes to multiple asset accounts are weird. Split transactions where both happen hurt my brain.

This really falls apart when you have multiple very liquid asset accounts. By that I mean accounts you regularly make payments from. This could be a checking account, a PayPal account with money in it, a prepaid postage account for bulk mailings, or a prepaid purchase card at a store. (I’ve dealt with all of these.) Or you might have liability accounts you regularly expense from, like a credit card or an expense account at a store.

If your targeted funds are a subaccount of your primary checking, what happens when you use another account to pay with those funds? If you used a credit card, you probably need to use a split transaction when you make your monthly payment. I hope you remember how much. You’d better look through your transaction report. I don’t even know what to do if you carry a balance on the card. And if you paid with PayPal or another asset account, you should really transfer money from checking to cover it, and record that as coming out of the subaccount. What a pain. The easier approach is probably just to use a journal entry to move money from the targeted subaccount to its parent. But that basically means your books will be littered with fake adjustment entries.

Also, there’s a big question mark around how well your accounting tool deals with transactions in a subaccount of a checking account when doing things like reconciliation or automatic transaction importing. My advice is that asset accounts on your books should match real accounts. This technique is just a hack.

Actual Separate Account

If accounts in your books should always match real accounts, why not create an actual separate checking account for the targeted funds? This technique involves quite a bit of overhead, and it’s not something you’re going to do for short-term restricted or earmarked funds. But it could work for fiscal sponsors, with the added benefit that you could even allow your sponsored organization access to that account.

This at least ensures that accounts are accounts and transactions are transactions, but it can still be tricky if you want to spend those targeted funds with other payment methods like a prepaid account or a credit card. If you do that, you’ll have to write an actual check (or make an actual bank transfer) to pay one part of your organization from another part. That’s a hassle, although it is sensible.

There’s also an issue of minimum balances. Your bank may insist on a minimum balance on all checking accounts. You probably don’t care about a minimum balance for your sponsored organizations. You might even be OK with them holding a zero balance. (In the case of restricted or earmarked funds, a zero balance is your goal.) You just might be OK with allowing sponsored funds to be temporarily negative. Banks don’t care for that, so you would have to spend money from your general account, and then record the money owed in some sort of weird entry that is both accounts payable and accounts receivable.

Liability Account

This is probably the technique your accountant will recommend, especially for legally restricted funds like restricted donations. A liability account is a type of account you use to record money you owe. You can make purchases with a liability account. You can pay into it to pay it off. A credit card, for example, is a liability account. A credit card purchase is a transaction involving a liability account and (usually) an expense account. A payment toward your credit card balance is a transaction involving a liability account and an asset account. Liability accounts can be other types of debts, or other types of money you have to spend, like taxes payable.

Money you have to spend. Like that big grant that has to be spent for a specific purpose. It’s strange to think about a grant as a liability, but from an accounting perspective, it fits.
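For illustration, with invented numbers and account names, the two sides of a restricted grant might be recorded like this:

Receive a $10,000 restricted grant:  Checking +$10,000  /  Restricted Grant Liability +$10,000
Pay a $4,000 project invoice:        Checking -$4,000   /  Restricted Grant Liability -$4,000

The remaining liability balance, $6,000, is exactly the amount you are still obligated to spend.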

This has the advantage of quickly showing you how much you need to spend and how much money you have in general funds, right on the balance sheet. No other technique makes those numbers quite as clear. The downside is that it’s a black box on an activity statement. You don’t get to use expense and income accounts for anything, and if you want to get any sense of how much you’ve actually spent, or how you’ve spent it, you have to dig into transaction reports. That’s doable, but I don’t enjoy it.

One other thing to consider with this technique is what to do with a not-quite-zero balance. If you overspend, you’ll need to do a split transaction with part of it coming from a general expense account. Otherwise you’ll spend the rest of your life with a weird negative liability on your balance sheet. And if it’s earmarked funds where you can roll a small remaining balance into general funds, make sure to post a transaction that moves that money from the liability account to an income account.

If you deal with a very large number of restricted grants, and you need to be absolutely certain they all get spent entirely and in a timely fashion, you should strongly consider this approach. You should also talk to an actual accountant, preferably one with non-profit experience, instead of reading a blog by some guy on the internet.

Income and Expense Accounts

Finally, you can just create specific income and expense accounts to track money into and out of targeted funds, respectively. You can do this alongside the first two techniques involving separate asset accounts, but one of the advantages of those techniques is that you don’t absolutely have to use targeted income and expense accounts. With those techniques, you could record payments to a contractor in your general “Contractor Payments” expense account, if you want.

With this technique, you have to make absolutely certain that every transaction involving the targeted funds is posted to a targeted income or expense account. If you don’t, you will never be able to find the targeted fund balance, you’ll start treating that money as general funds, and some people will be very very upset with you. But frankly, with any technique, you need to be careful to file things correctly.

The biggest downside of this technique is that it tells you nothing on your balance sheet. Your balance sheet won’t tell you how much targeted funds you’ve spent, how much you still need to spend, or how much general funds you actually have after accounting for targeted funds. For some people, that will be a really big deal. If restricted grants or fiscal sponsorships constitute a very large portion of your finances, you probably want to see them on the balance sheet. If you handle targeted funds only rarely, and your general funds are comparatively large, it’s less of an issue.

The really nice thing about this technique is that you still get to see income and expenses in actual income and expense accounts. It shows up in your activity statement. It can show up in your budget report, if you want to budget it. You aren’t even limited to using a single account for expenses. You could have a parent expense account for the specific targeted fund, then subaccounts for things like contractor pay, material costs, etc.

Since this technique doesn’t show you anything on a balance sheet, you’ll need to use an activity statement to see the balance of any targeted funds. Grab an activity statement covering at least the entire period of time the targeted funds have been active (or just for all time), then subtract the targeted expense account from the targeted income account to see how much you have left to spend. Or much more simply, most accounting software lets you run an activity statement on a specific list of accounts. Run one on just your targeted accounts, and it will tell you the balance right at the bottom.
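For example, with invented numbers: if the targeted income account shows $10,000 received and the targeted expense account shows $7,250 spent over the life of the fund, the statement’s bottom line is $10,000 - $7,250 = $2,750 left to spend.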

This might seem awkward, but it’s actually no different from what you probably do for non-targeted funds with matched income and expense accounts. For example, if you run a large event, you probably have one or more income categories specifically for that event, and one or more expense categories specifically for that event. If you want to see if the event made or lost money, you have to compare numbers in an activity statement, just like you would do to see the balance of targeted funds in this technique.

This technique has no problem with overspending. Just use the same expense categories and don’t bother with splits. You’ll just have higher expenses than income, just like in an event that lost money. It doesn’t show up on your balance sheet, and eventually it will roll off the activity statements you regularly look at. And if you underspend earmarked funds and intend to retain them in general funds, you can just literally do nothing. Just stop looking at the targeted fund activity statement and the money is in general funds. Or, if you want to be really pedantic, post a transaction that moves money from the targeted expense account to an income account like “Retained Earmarked Income”. That way you can go back and see how much earmarked funds you retain over time, if you’re into that.

This technique is actually the one I’ve decided to use for restricted grants in the organization I’m currently treasurer of. Each year, we deal with just a handful of restricted grants, six at the most. We also run exactly one fundraising event where we earmark the funds, and we retain a portion of those earmarked funds into general funds upfront. I have an income account called “Restricted Grants” and an expense account called “Restricted Grant Spending”. For each restricted grant, I create subaccounts of both the income and expense accounts. I can set up custom activity reports for each grant and put them in the pile with other reports that I review monthly. When a restricted grant is spent and accounted for, I just remove it from my regular reports.

Ideally, I’d like a system where I can record transactions as if in income and expense accounts, but somehow group those accounts in a way that makes their difference show up as a liability on the balance sheet. I don’t know if that’s a thing, but it should be.

Joachim Breitner: The new CIS-194

Planet Debian - Hën, 05/09/2016 - 8:09md

The Haskell minicourse at the University of Pennsylvania, also known as CIS-194, has always had a reach beyond the students of Penn, at least since 2013, when Brent Yorgey gave the course, wrote extensive lecture notes and eventually put the material on GitHub.

This year, it is my turn to give the course. I could not resist making some changes, at least to the first few weeks: instead of starting with a locally installed compiler and exercises that revolve mostly around arithmetic and lists, I send my students to CodeWorld, which is a web programming environment created by Chris Smith1.

This greatly lowers the initial hurdle of having to set up the local toolchain, and is inclusive towards those who have had little exposure to the command line before. Not that I do not expect my students to handle that, but it does not hurt to move it towards later in the course.

But more importantly: CodeWorld comes with a nicely designed, simple API to create vector graphics, to animate these graphics and even to create interactive programs. This means that instead of having to come up with yet another set of exercises revolving around lists and numbers, I can have the students create Haskell programs that are visual. I believe that this is more motivating and stimulating, and will nudge the students to spend more time programming and thus practicing.

In fact, the goal is that in their third homework assignment, the students will implement a fully functional, interactive Sokoban game. And all that before talking about the built-in lists or tuples, just with higher-order functions and custom datatypes. (Click on the picture above, which is part of the second week's homework. You can use the arrow keys to move the figure around and press the escape key to reset the game. Boxes cannot be moved yet -- that will be part of homework 3.)

If this sounds interesting to you, and you have always wanted to learn Haskell from scratch, feel free to tag along. The lecture notes should be elaborate enough to learn from, and with the homework problems you should be able to tell whether you have solved them yourself. Just do not publish your solutions before the due date. Let me know if you have any comments about the course so far.

Eventually, I will move to local compilation, use of the interpreter and text-based IO and then start using more of the material of previous iterations of the course, which were held by Richard Eisenberg in 2014 and by Noam Zilberstein in 2015.

  1. Chris has been very helpful in making sure CodeWorld works in a way that suits my course, thanks for that!

Chris Lamb: How to write your first Lintian check

Planet Debian - Hën, 05/09/2016 - 5:33md

Lintian's humble description of "Debian package checker" belies its importance within the Debian GNU/Linux project. An extensive static analysis tool, it is not only used by the vast majority of developers; falling foul of some of its checks will even cause uploads to be automatically rejected by the archive maintenance software.

As you may have read in my recent monthly report, I've recently been hacking on Lintian itself. In particular:

  • #798983: Check for libjs-* binary package name outside of the web section
  • #814326: Warn if filenames contain wildcard characters
  • #829744: Add new-package-should-not-package-python2-module tag
  • #831864: Warn about Python packages that ship Coverage.py information
  • #832096: Check for common typos in debian/rules target names
  • #832099: Check for unnecessary SOURCE_DATE_EPOCH assignments
  • #832771: Warn about systemd .service files with a missing Install key

However, the rest of this post will go through the steps needed to start contributing yourself.

To demonstrate this I will be walking through submitting a patch for bug #831864 which warns about Python packages that ship .coverage files generated by Coverage.py.

Getting started

First, let's obtain the Lintian sources and create a branch for our work:

$ git clone https://anonscm.debian.org/git/lintian/lintian.git
[…]
$ cd lintian
$ git checkout -b warn-about-dotcoverage-files
Switched to a new branch 'warn-about-dotcoverage-files'

The most interesting files are under checks/*:

$ ls -l checks/ | head -n 9
total 1356
-rw-r--r-- 1 lamby lamby  6393 Jul 29 14:19 apache2.desc
-rw-r--r-- 1 lamby lamby  8619 Jul 29 14:19 apache2.pm
-rw-r--r-- 1 lamby lamby  1956 Jul 29 14:19 application-not-library.desc
-rw-r--r-- 1 lamby lamby  3285 Jul 29 14:19 application-not-library.pm
-rw-r--r-- 1 lamby lamby   544 Jul 29 14:19 automake.desc
-rw-r--r-- 1 lamby lamby  1354 Jul 29 14:19 automake.pm
-rw-r--r-- 1 lamby lamby 19506 Jul 29 14:19 binaries.desc
-rw-r--r-- 1 lamby lamby 25204 Jul 29 14:19 binaries.pm
-rw-r--r-- 1 lamby lamby 15641 Aug 24 21:42 changelog-file.desc
-rw-r--r-- 1 lamby lamby 19606 Jul 29 14:19 changelog-file.pm

Note that the files come in pairs: a foo.desc file that contains the descriptions of the tags, and a sibling foo.pm Perl module that actually performs the checks.


Let's add our new tag before we go any further. After poking around, it looks like files.{pm,desc} would be most appropriate, so we'll add our new tag definition to files.desc:

Tag: package-contains-python-coverage-file
Severity: normal
Certainty: certain
Info: The package contains a file that looks like output from the Python
 coverage.py tool. These are generated by python{,3}-coverage during a test
 run, noting which parts of the code have been executed. They can then be
 subsequently analyzed to identify code that could have been executed but
 was not.
 .
 As they are unlikely to be of any utility to end-users, these files should
 be removed from the package.

The Severity and Certainty fields are documented in the manual. Note the convention of using double spaces after full stops in the Info section.

Extending the testsuite

Lintian has many moving parts based on regular expressions and other subtle logic, so it's especially important to provide tests in order to handle edge cases and to catch any regressions in the future.

We create tests by combining a tiny Debian package that will deliberately violate our check, along with some metadata and the expected output of running Lintian against this package.

The tests themselves are stored under t/tests. There may be an existing test that it would be more appropriate to extend, but I've gone with creating a new directory called files-python-coverage:

$ mkdir -p t/tests/files-python-coverage
$ cd t/tests/files-python-coverage

First, we create a simple package:

$ mkdir -p debian/debian
$ printf '#!/usr/bin/make -f\n\n%%:\n\tdh $@\n' > debian/debian/rules
$ chmod +x debian/debian/rules

… and then we install a dummy file to trigger the check:

$ touch debian/.coverage
$ echo ".coverage /usr/share/files-python-coverage" > debian/debian/install

We then add the aforementioned metadata to t/tests/files-python-coverage/desc:

Testname: files-python-coverage
Sequence: 6000
Version: 1.0
Description: Check for Python .coverage files
Test-For: package-contains-python-coverage-file

… and the expected warning to t/tests/files-python-coverage/tags:

$ echo "W: files-python-coverage: package-contains-python-coverage-file" \ "usr/share/files-python-coverage/.coverage" > tags

When we run the testsuite, it should fail because we don't emit the check yet:

$ cd $(git rev-parse --show-toplevel)
$ debian/rules runtests onlyrun=tag:package-contains-python-coverage-file
[…]
--- t/tests/files-python-coverage/tags
+++ debian/test-out/tests/files-python-coverage/tags.files-python-coverage
@@ -1 +0,0 @@
-W: files-python-coverage: package-contains-python-coverage-file usr/share/files-python-coverage/.coverage
fail tests::files-python-coverage: output differs!
Failed tests (1)
    tests::files-python-coverage
debian/rules:48: recipe for target 'runtests' failed
make: *** [runtests] Error 1
$ echo $?
1

Specifying onlyrun= means we only run the tests that are designed to trigger this tag rather than the whole testsuite. This is controlled by the Test-For key in our desc file, not by scanning the tags files.

This recipe for creating a testcase could be used when submitting a regular bug against Lintian — providing a failing testcase not only clarifies misunderstandings resulting from the use of natural language, it also makes it easier, quicker and safer to correct the offending code itself.

Emitting the tag

Now, let's actually implement the check:

         tag 'package-installs-python-egg', $file;
     }

+    # ---------------- .coverage (coverage.py output)
+    if ($fname =~ m,\.coverage$,o) {
+        tag 'package-contains-python-coverage-file', $file;
+    }

     # ---------------- /usr/lib/site-python

Our testsuite now passes:

$ debian/rules runtests onlyrun=tag:package-contains-python-coverage-file
private/generate-profiles.pl
.... running tests ....
mkdir -p "debian/test-out"
t/runtests -k -j 9 t "debian/test-out" tag:package-contains-python-coverage-file
ENV[PATH]=[..]
pass tests::files-python-coverage
if [ "tag:package-contains-python-coverage-file" = "" ]; then touch runtests; fi
$ echo $?
0

Submitting the patch

Lastly, we create a patch for submission to the bug tracking system:

$ git commit -a -m "c/files: Warn about Python packages which ship coverage.py information. (Closes: #831864)"
$ git format-patch HEAD~
0001-c-files-Warn-about-Python-packages-which-ship-covera.patch

… and we finally attach it to the existing bug:

To: 831864@bugs.debian.org
Cc: 831864-submitter@bugs.debian.org
Bcc: control@bugs.debian.org

tags 831864 + patch
thanks

Patch attached.

/lamby
Summary

I hope this post will encourage some extra contributions towards this important tool.

(Be aware that I'm not a Lintian maintainer, so don't treat anything here as gospel, and expect this post to be edited over time if clarifications arise.)
