Feed aggregator

Jeremy Bicha: gksu is dead. Long live PolicyKit

Planet GNOME - Wed, 21/03/2018 - 6:37pm

Today, gksu was removed from Debian unstable. It was already removed 2 months ago from Debian Testing (which will eventually be released as Debian 10 “Buster”).

It’s not been decided yet if gksu will be removed from Ubuntu 18.04 LTS. There is one blocker bug there.

 

Jordan Petridis: Continuous Integration in Librsvg, Part 1

Planet GNOME - Wed, 21/03/2018 - 4:09pm
The base setup

Rust makes it trivial to write any kind of tests for your project. But what good are they if you do not run them? In this blog series I am gonna explore the capabilities of Gitlab-CI and document how it is used in Librsvg.
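As a reminder of how low the bar is (a trivial example of my own, not from the librsvg codebase), a test is just an annotated function that cargo test discovers and runs:

#[test]
fn addition_works() {
    // `cargo test` finds this function and fails the run if it panics
    assert_eq!(2 + 2, 4);
}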

First things first: what’s CI? It stands for Continuous Integration; basically, it makes sure that what you push to your repository continues to build and pass the tests. Even if someone committed something without testing it, or the tests happened to pass on their machine but not in a clean environment, we can know without having to clone and build manually.

CI can also have other uses, like enforcing a coding style or running resource-heavy tests.

What’s Librsvg?

As the README.md file puts it:

It’s a small library to render Scalable Vector Graphics (SVG), associated with the GNOME Project. It renders SVG files to Cairo surfaces. Cairo is the 2D, antialiased drawing library that GNOME uses to draw things to the screen or to generate output for printing.
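For a quick taste of that rendering path, librsvg ships the rsvg-convert command-line tool (the file names below are just placeholders):

$ rsvg-convert --width 200 --output drawing.png drawing.svg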

Basic test case

First of all we will add a .gitlab-ci.yml file in the repo.

We will start off with a simple case: a single stage and a single job. A job is a single action that can be done. A stage is a collection of jobs. Jobs of the same stage can be run in parallel.

Minor things were omitted, such as the full list of dependencies. The original file is here.

stages:
  - test
opensuse:tumbleweed:
  image: opensuse:tumbleweed
  stage: test
  before_script:
    - zypper install -y gcc rust
        ...
        gtk3-devel
  script:
    - ./autogen.sh --enable-debug
    - make check

Lines 1 and 2 define our stages. If a stage is defined but has no jobs attached, it is skipped.

Line 3 defines our job, with the name opensuse:tumbleweed.

Line 4 will fetch the opensuse:tumbleweed OCI image from Docker Hub.

In line 5 we specify that this job is part of the test stage that we defined in line 2.

before_script: is something like a setup phase. In our case we will install our dependencies.

after_script:, accordingly, is what runs after every job, including failed ones. We are not going to use it yet though.

Then in line 11 we write our script: the commands that would have to be run to build librsvg, as if we were doing it from a shell. Indeed, the script: part behaves like a shell script.

If everything went well, hopefully it will look like this.

Testing Multiple Distributions

Builds on opensuse based images work, but we can do better. We can test multiple distros!

Let’s add Debian testing and Fedora 27 builds to the pipeline.

fedora:latest:
  image: fedora:latest
  stage: test
  before_script:
    - dnf install -y gcc rust ... gtk3-devel
  script:
    - ./autogen.sh --enable-debug
    - make check

debian:testing:
  image: debian:testing
  stage: test
  before_script:
    - apt install -y gcc rust ... libgtk-3-dev
  script:
    - ./autogen.sh --enable-debug
    - make check

Similar to what we did for opensuse. Notice that the only things that change are the names of the container images and the before_script: specific to each distro’s package manager. This will work even better when we add caching and artifact extraction into the template. But that’s for a later post.

We could refactor the above by using a template (YAML anchors). Here is how our file will look after that.

stages:
  - test

.base_template: &distro_test
  stage: test
  script:
    - ./autogen.sh --enable-debug
    - make check

opensuse:tumbleweed:
  image: opensuse:tumbleweed
  before_script:
    - zypper install -y gcc rust ... gdk-pixbuf-devel gtk3-devel
  <<: *distro_test

fedora:latest:
  image: fedora:latest
  before_script:
    - dnf install -y gcc rust ... gdk-pixbuf-devel gtk3-devel
  <<: *distro_test

debian:testing:
  image: debian:testing
  before_script:
    - apt install -y gcc rust ... libgdk-pixbuf2.0-dev libgtk-3-dev
  <<: *distro_test

And Failure :(. I mean Success!


* Debian test was added later

Apparently the librsvg test-suite was failing on anything other than opensuse. Later we found out that this was the result of freetype being a bit outdated on the system Federico used to generate the reference “good” results. In Freetype 2.8/2.9 there was a bugfix that affected how the test cases were rendered. Thankfully this wasn’t librsvg’s code misbehaving, but rather a bug only in the test-suite. After regenerating the reference results with a newer version of Freetype everything worked.

Adding Rust Lints

Rust has its own style formatting tool, rustfmt, which is highly configurable. We will use it to make sure our codebase style stays consistent. By adding a test in the Gitlab-CI we can be sure that Merge Requests will be properly formatted before reviewing and merging them.

There’s also clippy! An amazing collection of lints for Rust code. Had we used it sooner, it would probably have caught a couple of bugs occurring when comparing floating point numbers. We haven’t decided yet on what lints to enable/deny, so it has a manual trigger for now and won’t be run unless explicitly triggered by someone. I hope that will change Soon.

First we will add another test stage called lint.

stages:
  - test
  - lint

Then we will add 2 jobs, one for each tool. Both tools require the nightly toolchain of the Rust compiler.

# Configure and run rustfmt on the nightly toolchain
# Exits and the build fails on bad formatting
rustfmt:
  image: "rustlang/rust:nightly"
  stage: lint
  script:
    - rustc --version && cargo --version
    - cargo install rustfmt-nightly --force
    - cargo fmt --all -- --write-mode=diff

# Configure and run clippy on the nightly toolchain
clippy:
  image: "rustlang/rust:nightly"
  stage: lint
  before_script:
    - apt update -yqq
    - apt-get install -y libgdk-pixbuf2.0-dev ... libxml2-dev
  script:
    - rustc --version && cargo --version
    - cargo install clippy --force
    - cargo clippy --all
  when: manual


And that’s it, with the only caveat that it takes 40-60 min for each pipeline run to complete. There are a couple of ways this could be sped up though, which will be the topic of part 2 and part 3.

** During the first experiments, rustfmt was set as a manual trigger (enabled by default later) and cross-distro tests were grouped into their own stage. But it’s functionally identical to the setup described in the post.

Federico Mena-Quintero: Making sure the repository doesn't break, automatically

Planet GNOME - Wed, 21/03/2018 - 2:37am

Gitlab has a fairly conventional Continuous Integration system: you push some commits, the CI pipelines build the code and presumably run the test suite, and later you can know whether this succeeded or failed.

But by the time something fails, the broken code is already in the public repository.

The Rust community uses Bors, a bot that prevents this from happening:

  • You push some commits and submit a merge request.

  • A human looks at your merge request; they may tell you to make changes, or they may tell Bors that your request is approved for merging.

  • Bors looks for approved merge requests. It merges each into a temporary branch and waits for the CI pipeline to run there. If CI passes, Bors automatically merges to master. If CI fails, Bors annotates the merge request with the failure, and the main repository stays working.

Bors also tells you if the mainline has moved forward and there's a merge conflict. In that case you need to do a rebase yourself; the repository stays working in the meantime.

This leads to a very fair, very transparent process for contributors and for maintainers. For all the details, watch Emily Dunham's presentation on Rust's community automation (transcript).

For a description of where Bors came from, read Graydon Hoare's blog.

Bors evolved into Homu and it is what Rust and Servo use currently. However, Homu depends on Github.

I just found out that there is a port of Homu for Gitlab. Would anyone care to set it up?

Bastian Ilsø Hougaard: Reflections on Distractions in Work, Productivity and Time Usage

Planet GNOME - Wed, 21/03/2018 - 1:48am

For the past year or so I have mostly worked at home or remotely in my daily life. Currently I’m engaged in my master’s thesis and need to manage my daily time and energy to work on it. It is no surprise to many of us that working on your internet-connected personal computer at home can make you prone to many distractions. However, managing your own time is not just about whipping and self-discipline. It is about setting yourself up in a structure which rewards you for hard work and gives your mind the breaks it needs. Based on reflections and experimentation with many scheduling systems and tools, I finally feel I have arrived at a set of principles I really like, and that’s what I’ll be sharing with you today.

Identifying the distractions

Here’s a typical scenario I used to experience: I would wake up and often the first thing I do is turn on my computer, check my e-mail, check social media, check the news. I then go eat my breakfast and start working. After a while I would find myself returning to check mail and social media. Not that much of importance necessarily happened. But it’s fairly easy for me to press “Super”, type “Gea” and press “Enter” (and Geary will show my e-mail inbox). It’s also fairly easy to press “Ctrl+L” to focus the address bar in Firefox and write “f” (and Facebook.com is autocompleted). Firefox is by default so (ironically) helpful as to suggest facebook.com. At other times, a distraction can simply be an innocent line of thought that hits you, e.g. “oh it would be so cool if I started sorting my pictures folder, let me just start on that quickly before I continue my work“.

From speaking with friends I am fairly sure this type of behavior is not uncommon at all. The first step in trying to combat it myself was to identify the scope of it. I don’t blame anyone else for dealing with this – I see it more as an unfortunate design consequence of the way our personal computers are “universal” and aren’t context-aware enough. After all, GNOME Shell was just trying to be helpful, and Firefox was also just trying to be helpful, although they are also in some aspects making it easier for me to distract myself like that.

Weapons against distractions

Let me start with a few practical suggestions, which helped me initially break the worst patterns (using big hammers).

  • Stylish: using inspection tools and CSS hacks I remove endless scrolling news feeds and news content from websites that I might otherwise open up and read on reflex when distracted. The CSS hacks are easy to turn off again of course, but it adds an extra step and makes it purposely less appealing for me to do unless it’s for something important.

  • BlockSite: I use BlockSite in “Whitelist mode” and turn it on while I work. This is a big hammer which essentially blocks all of internet except for whitelisted websites I use for work. Knowing that you can’t access anything really had a positive initial psychological effect for me.
  • Minimizing shell notifications: While I don’t have the same big hammer to “block access to my e-mail” here, I decided to change the order of my e-mail inboxes in Geary so my more relevant (and far less activity prone) student e-mail inbox appears first. I also turned off the background e-mail daemon and turned off notification banners in GNOME Shell.
  • Putting Phone in Ultra Battery Saving Mode: I restrict my phone to calls and SMS so that I don’t receive notifications from various chat apps which are irrelevant whilst working. This also saves the battery nicely.

My final weapon is The Work Schedule. This doesn’t sound new or surprising and we have probably all tried it, with more or less success.

…Schedules can be terrible.

I’m actually not that big a fan of microscheduling my life. Traditional time schedules are too focused on doing things from timestamp X to timestamp Y. They require that you “judge” how fast you work, and their structure just feels super inflexible. The truth is that in real life my day never looks like how I planned it. In fact, I sometimes found myself even more demotivated (and distracted) because I was failing to live up to my own schedule and by the end of the day never really managed to complete that “ideal day”. The traditional time schedule ended up completely missing what it was supposed to fix and help against.

But on the other hand, working without a schedule often results in:

  • Forgetting to take breaks from work, which is unhealthy and kills my productivity later.
  • No sense of progress except from the work itself, and if the work is ongoing for a long time it will feel endless and exhausting.
  • No set work duration, which meant my productivity continued to fluctuate between overwork and underwork, since it is hard to judge when it is okay to stop.
The resulting system

For the past couple of weeks I have been using a system which is a bit like a “semi-structured time schedule”. To you it might just seem like a list of checkboxes, and in some sense it is! However, the simplicity of this system rests on some important principles I have learned along the way:

  • Checking the checkboxes gives a sense of progress as I work throughout my day.
  • The schedule supports adding breaks in-between work sessions and puts my day in an order.
  • The schedule makes no assumptions about “what work” I will be doing or reaching that day. Instead it specifies that I work for 1 hour, and this enables me to funnel my energy. I use GNOME Clocks’ Timer function and let it count down for 1 hour until there’s a nice simple “ding” to be heard when it finishes. It’s up to you whether you then take the break or continue a bit longer.
  • The schedule makes no assumptions about “When” I will do work and only approximates for how long. In reality I might wake up at 7:00, 8:00 or 9:00 AM and it doesn’t really matter. What’s important is that I do as listed and take my breaks in the order presented.
  • If there are aspects of the order I end up changing, the schedule permits it – it is possible to tick off tasks independent of the order.
  • If I get ideas for additional things I need to do (banking, sending an important e-mail, etc) I can add them to the bottom of the list.
  • The list is made the day before. This makes it easier to follow it straight after waking up.
  • I always use the breaks for something which does not involve computers. I use dancing, going for a walk or various house duties (Interestingly house duties become more exciting for me to do as work break items, than as items I do in my free time).
  • At the start you won’t have much feel for how much work you can manage, and it is easy to overestimate, get out of breath, or be unable to complete everything. It works much better for me to underestimate my performance (e.g. 2 hours of focused work before lunch instead of 3 hours) and feel rewarded that I did everything I had planned, and perhaps even more than that.
  • I insert items I want to do in my free time into my scheduling after I finish work. These items are purely there to give additional incentive and motivation to finish.
  • The system is analog on purpose because I’m interested in keeping the list visually present on my desk at all times. I also think it is an advantage that making changes to the list doesn’t interfere with the work context I maintain on the computer.

Lastly, I want to give two additional tips. If you like listening to music while working, consider whether it might affect your productivity. For example, I found music with vocals distracting when I try to immerse myself in reading difficult literature. I can really recommend Doctor Turtle’s acoustic instrumental music while working though (all free). Secondly, I find that different types of tasks require different postures. For abstract, high-level or vaguely formulated tasks (e.g. formulating goals, reviewing something or reflecting), I find interacting with the computer whilst standing up and walking around really helps gather my thoughts. On the other hand, with practical tasks or tasks which require immersion (e.g. programming tasks), I find sitting down much more comfortable.

Hopefully my experiences here might be useful or interesting for some of you. Let me know!

Neil McGovern: ED Update – week 11

Planet GNOME - Tue, 20/03/2018 - 4:52pm

It’s time (well, long overdue) for a quick update on stuff I’ve been doing recently, and some things that are coming up. I’ve worked out a new way of doing these, so they should be more regular now, about every couple of weeks or so.

  • The annual report is moving ahead. I’ve moved up the timelines a bit here from previous years, so hopefully, the people who very kindly help author this can remember what we did in the 2016/17 financial year!
  • GUADEC/GNOME.Asia/LAS sponsorship – elements are coming together for the sponsorship brochure
    • Some sponsors are lined up, and these will be announced by the usual channels – thanks to everyone who supports the project and our conferences!
  • Shell Extensions – It’s been noticed that reviews of extensions have been taking quite some time recently, so I’ve stepped in to help. I still think that part of the process could be automated, but at the moment it’s quite manual. Help is very much appreciated!
  • The Code of Conduct consultation has been useful, and there’s been a couple of points raised where clarity could be added. I’m getting those drafted at the moment, and hope to get the board to approve this soon.
  • A couple of administrative bits:
    • We now have a filing system for paperwork in NextCloud
    • Reviewing accounts for the end of year accounts – it’s the end of the tax year, so our finances need to go to the IRS
    • Tracking of accounts receivable hasn’t been great in the past, probably not helped by GNUCash. I’m looking at alternatives at the moment.
  • Helping out with a couple of trademark issues that have come up
  • Regular working sessions for Flathub legal bits with our lawyers
  • I’ll be at LibrePlanet 2018 this weekend, and I’m giving a talk on Sunday. With the FSF, we’re hosting a SpinachCon on Friday. This aims to do some usability testing and find those small things which annoy people.

Sebastian Dröge: GStreamer Rust bindings 0.11 / plugin writing infrastructure 0.2 release

Planet GNOME - Tue, 20/03/2018 - 12:52pm

Following the GStreamer 1.14 release and the new round of gtk-rs releases, there are also new releases for the GStreamer Rust bindings (0.11) and the plugin writing infrastructure (0.2).

Thanks also to all the contributors for making these releases happen and adding lots of valuable changes and API additions.

GStreamer Rust Bindings

The main changes in the Rust bindings were the update to GStreamer 1.14 (which brings in quite some new API, like GstPromise), a couple of API additions (GstBufferPool specifically) and the addition of the GstRtspServer and GstPbutils crates. The former allows writing a full RTSP server in a couple of lines of code (with lots of potential for customizations), the latter provides access to the GstDiscoverer helper object that allows inspecting files and streams for their container format, codecs, tags and all kinds of other metadata.
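To give an idea of what “a couple of lines” means, here is a rough sketch of such an RTSP server (written from memory, so method names may not match the 0.11-era crate exactly; the pipeline string is just an example):

use gstreamer_rtsp_server::prelude::*;

fn main() {
    // Serve a test video stream at rtsp://127.0.0.1:8554/test
    gstreamer::init().unwrap();
    let main_loop = glib::MainLoop::new(None, false);

    let server = gstreamer_rtsp_server::RTSPServer::new();
    let factory = gstreamer_rtsp_server::RTSPMediaFactory::new();
    // Each connecting client gets an instance of this launch pipeline;
    // "pay0" is the RTP payloader the server picks up.
    factory.set_launch("( videotestsrc ! x264enc ! rtph264pay name=pay0 pt=96 )");

    let mounts = server.get_mount_points().unwrap();
    mounts.add_factory("/test", &factory);

    server.attach(None);
    main_loop.run();
}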

The GstPbutils crate will also get other features added in the near future, like encoding profile bindings to allow using the encodebin GStreamer element (a helper element for automatically selecting/configuring encoders and muxers) from Rust.

But the biggest change in my opinion is some refactoring that was done to the Event, Message and Query APIs. Previously you would have to use a view on a newly created query to be able to use the type-specific functions on it:

let mut q = gst::Query::new_position(gst::Format::Time);
if pipeline.query(q.get_mut().unwrap()) {
    match q.view() {
        QueryView::Position(ref p) => Some(p.get_result()),
        _ => None,
    }
} else {
    None
}

Now you can directly use the type-specific functions on a newly created query:

let mut q = gst::Query::new_position(gst::Format::Time);
if pipeline.query(&mut q) {
    Some(q.get_result())
} else {
    None
}

In addition, the views can now dereference directly to the event/message/query itself and provide access to their API, which simplifies some code even more.

Plugin Writing Infrastructure

While the plugin writing infrastructure did not see that many changes apart from a couple of bugfixes and updating to the new versions of everything else, this does not mean that development on it stalled. Quite the opposite. The existing code works very well already and there was just no need for adding anything new for the projects I and others did on top of it, most of the required API additions were in the GStreamer bindings.

So the status here is the same as last time, get started writing GStreamer plugins in Rust. It works well!

Philippe Normand: GStreamer’s playbin3 overview for application developers

Planet GNOME - Mon, 19/03/2018 - 8:13am

Multimedia applications based on GStreamer usually handle playback with the playbin element. I recently added support for playbin3 in WebKit. This post aims to document the changes needed on the application side to support this new-generation flavour of playbin.

So, first off, why is it named playbin3 anyway? The GStreamer 0.10.x series had a playbin element, but a first rewrite (playbin2) made it obsolete in the GStreamer 1.x series. So playbin2 was renamed to playbin. That’s why a second rewrite is nicknamed playbin3, I suppose :)

Why should you care about playbin3? Playbin3 (and the elements it’s using internally: parsebin, decodebin3, uridecodebin3 among others) is the result of a deep re-design of playbin2 (along with decodebin2 and uridecodebin) to better support:

  • gapless playback
  • audio cross-fading support (not yet implemented)
  • adaptive streaming
  • reduced CPU, memory and I/O resource usage
  • faster stream switching and full control over the stream selection process

This work was carried out mostly by Edward Hervey; he presented it in detail at 3 GStreamer conferences. If you want to learn more about this and the internals of playbin3, make sure to watch his awesome presentations at the 2015 gst-conf, 2016 gst-conf and 2017 gst-conf.

Playbin3 was added in GStreamer 1.10. It is still considered experimental but in my experience it works already very well. Just keep in mind you should use at least the latest GStreamer 1.12 (or even the upcoming 1.14) release before reporting any issue in Bugzilla. Playbin3 is not a drop-in replacement for playbin, both elements share only a sub-set of GObject properties and signals. However, if you don’t want to modify your application source code just yet, it’s very easy to try playbin3 anyway:

$ USE_PLAYBIN3=1 my-playbin-based-app

Setting the USE_PLAYBIN3 environment variable enables a code path inside the GStreamer playback plugin which swaps the playbin element for the playbin3 element. This trick provides a glimpse of the playbin3 element for the laziest people :) The problem is that depending on your use of playbin, you might get runtime warnings; here’s an example with the Totem player:

$ USE_PLAYBIN3=1 totem ~/Videos/Agent327.mp4
(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'video-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'
(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'audio-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'
(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'text-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'
(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'video-tags-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'
(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'audio-tags-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'
(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'text-tags-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'
sys:1: Warning: g_object_get_is_valid_property: object class 'GstPlayBin3' has no property named 'n-audio'
sys:1: Warning: g_object_get_is_valid_property: object class 'GstPlayBin3' has no property named 'n-text'
sys:1: Warning: ../../../../gobject/gsignal.c:3492: signal name 'get-video-pad' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'

As mentioned previously, playbin and playbin3 don’t share the same set of GObject properties and signals, so some changes in your application are required in order to use playbin3.

If your application is based on the GstPlayer library then you should set the GST_PLAYER_USE_PLAYBIN3 environment variable. GstPlayer already handles both playbin and playbin3, so no changes needed in your application if you use GstPlayer!

Ok, so what if your application relies directly on playbin? Some changes are needed! If you previously used playbin stream selection properties and signals, you will now need to handle the GstStream and GstStreamCollection APIs. Playbin3 will emit a stream collection message on the bus; this is very nice because the collection includes information (metadata!) about the streams (or tracks) the media asset contains. In playbin this was handled with a bunch of signals (audio-tags-changed, audio-changed, etc), properties (n-audio, n-video, etc) and action signals (get-audio-tags, get-audio-pad, etc). The new GstStream API provides a centralized and non-playbin-specific access point for all this information.

To select streams with playbin3 you now need to send a select_streams event so that the demuxer can know exactly which streams should be exposed to downstream elements. That means potentially improved performance! Once playbin3 has completed the stream selection it will emit a streams selected message; the application should handle this message and potentially update its internal state about the selected streams. This is also the best moment to update your UI regarding the selected streams (like audio track language, video track dimensions, etc).
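As a minimal sketch of what that bus handling could look like (the selection policy here, keeping only video streams, is purely an illustration):

#include <gst/gst.h>

static void
handle_message (GstElement *playbin, GstMessage *msg)
{
  switch (GST_MESSAGE_TYPE (msg)) {
    case GST_MESSAGE_STREAM_COLLECTION: {
      GstStreamCollection *collection = NULL;
      GList *selected = NULL;
      guint i;

      gst_message_parse_stream_collection (msg, &collection);
      for (i = 0; i < gst_stream_collection_get_size (collection); i++) {
        GstStream *stream = gst_stream_collection_get_stream (collection, i);
        /* Illustrative policy: select every video stream */
        if (gst_stream_get_stream_type (stream) & GST_STREAM_TYPE_VIDEO)
          selected = g_list_append (selected,
              (gchar *) gst_stream_get_stream_id (stream));
      }
      gst_element_send_event (playbin, gst_event_new_select_streams (selected));
      g_list_free (selected);
      gst_object_unref (collection);
      break;
    }
    case GST_MESSAGE_STREAMS_SELECTED:
      /* The selection took effect: update internal state and the UI here */
      break;
    default:
      break;
  }
}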

Another small difference between playbin and playbin3 is about the source element setup. In playbin there is a source read-only GObject property and a source-setup GObject signal. In playbin3 only the latter is available, so your application should rely on source-setup instead of the notify::source GObject signal.

The gst-play-1.0 playback utility program already supports playbin3 so it provides a good source of inspiration if you consider porting your application to playbin3. As mentioned at the beginning of this post, WebKit also now supports playbin3, however it needs to be enabled at build time using the CMake -DUSE_GSTREAMER_PLAYBIN3=ON option. This feature is not part of the WebKitGTK+ 2.20 series but should be shipped in 2.22. As a final note I wanted to acknowledge my favorite worker-owned coop Igalia for allowing me to work on this WebKit feature and also our friends over at Centricular for all the quality work on playbin3.

Replacing a lost Yubikey

Planet Debian - Wed, 14/03/2018 - 7:05am

Some weeks ago I lost my purse with everything in it: residency card, driving license, credit cards, cash cards, all kinds of ID cards, and last but not least my Yubikey NEO. Being in Japan, I expected that the purse would show up in a few days, most probably with the money gone but all the cards intact. Unfortunately not this time. So after having finally reissued most of the cards, I also took the necessary procedures concerning the Yubikey, which contained my GnuPG subkeys and was used as second factor for several services (see here and here).

Although the GnuPG keys on the Yubikey are considered safe from extraction, I still decided to revoke them and create new subkeys – one of the big advantages of subkeys: one does not start at zero, but just creates new subkeys instead of running around trying to get signatures again.

Another thing that has to be done is removing the old Yubikey from all the services where it was used as second factor. In my case that was quite a lot (Google, GitHub, Dropbox, NextCloud, WordPress, …). BTW, you do have a set of backup keys saved somewhere for all the services you are using, right? It helps a lot getting back into the system.

GnuPG keys renewal

To remind myself of what is necessary, here are the steps:

  • Get your master key from the backup USB stick
  • revoke the three subkeys that are on the Yubikey
  • create new subkeys
  • install the new subkeys onto a new Yubikey, update keyservers

All of that is quite straightforward: use gpg --expert --edit-key YOUR_KEY_ID; after this you select each subkey with key N, followed by a revkey. You can select all three subkeys and revoke them at the same time: just type key N for each of the subkeys (where N is the index, starting from 0, of the key).
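As a rough illustration, such a session looks something like this (the indices are only an example — check the key listing gpg prints to see which index belongs to which subkey; the annotations after # are mine):

$ gpg --expert --edit-key YOUR_KEY_ID
gpg> key 1      # toggle selection of one subkey; repeat for the others
gpg> key 2
gpg> key 3
gpg> revkey     # revokes all currently selected subkeys
gpg> save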

Next create new subkeys, here you can follow the steps laid out in the original blog. In the same way you can move them to a new Yubikey Neo (good that I bought three of them back then!).

Last but not least you have to update the key-servers with your new public key, which is normally done with gpg --send-keys (again see the original blog).

The most tricky part was setting up and distributing the keys on my various computers: The master key remains as usual on offline media only. On my main desktop at home I have the subkeys available, while on my laptop I only have stubs pointing at the Yubikey. This needs a bit of shuffling around, but should be obvious somehow when looking at the previous blogs.

Full disk encryption

I had my Yubikey also registered as an unlock device for the LUKS-based full disk encryption. The status before the update was as follows:

$ cryptsetup luksDump /dev/sdaN
Version:        1
Cipher name:    aes
....
Key Slot 0: ENABLED
...
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: ENABLED
...

I suspected that the slot for the old Yubikey was Slot 7, but I wasn’t sure. So I first registered the new Yubikey in slot 6 with

yubikey-luks-enroll -s 6 -d /dev/sdaN

and checked that I can unlock during boot using the new Yubikey. Then I cleared the slot information in slot 7 with

cryptsetup luksKillSlot /dev/sdaN 7

and again made sure that I can boot using my passphrase (in slot 0) and the new Yubikey (in slot 6).

TOTP/U2F second factor authentication

The last step was re-registering the new Yubikey with all the favorite services as second factor, removing the old key on the way. In my case the list comprises several WordPress sites, GitHub, Google, NextCloud, Dropbox and what else I have forgotten.

Although this is nearly the worst-case scenario (ok, the main key was not compromised!), everything went very smoothly and easily, much to my surprise. Even my Debian upload ability was not interrupted considerably. All in all it shows that having subkeys on a Yubikey is a very useful and effective solution.

Norbert Preining https://www.preining.info/blog There and back again

Playing with water

Planet Debian - Wed, 14/03/2018 - 5:00am

I'm currently taking a machine learning class and although it is an insane amount of work, I like it a lot. I initially had planned to use R to play around with the database I have, but the teacher recommended I use H2o, a FOSS machine learning framework.

I was a bit sceptical at first since I’m already pretty good with R, but then I found out you could simply import H2o as an R library. H2o replaces most R functions with its own parallelized ones to cut down on processing time (no more doParallel calls) and uses an "external" server you have to run on the side instead of running R calls directly.
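For the curious, the basic flow looks roughly like this (a sketch of my own; the file and column names are made up):

library(h2o)

h2o.init()                          # start/attach the local H2O server
df <- h2o.importFile("games.csv")   # parse the data into the H2O cluster
fit <- h2o.glm(y = "outcome",       # fit a simple model on the H2O side
               training_frame = df)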

I was pretty happy with this situation, that is until I actually started using H2o in R. With the huge database I'm playing with, the library felt clunky and I had a hard time doing anything useful. Most of the time, I just ended up with long Java traceback calls. Much love.

I'm sure in the right hands using H2o as a library could have been incredibly powerful, but sadly it seems I haven't earned my black belt in R-fu yet.

I was pissed for at least a whole day - not being able to achieve what I wanted to do - until I realised H2o comes with a WebUI called Flow. I'm normally not very fond of using web thingies to do important work like writing code, but Flow is simply incredible.

Automated graphing functions, integrated ETA when running resource-intensive models, descriptions for each and every model parameter (the parameters are even divided into sections based on your familiarity with the statistical models in question) – Flow seemingly has it all. In no time I was able to run 3 basic machine learning models and get actual interpretable results.

So yeah, if you've been itching to analyse very large databases using state of the art machine learning models, I would recommend using H2o. Try Flow at first instead of the Python or R hooks to see what it's capable of doing.

The only downside to all of this is that H2o is written in Java and depends on Java 1.7 to run... That, and be warned: it requires a metric fuckton of processing power and RAM. My poor server struggled quite a bit even with 10 available cores and 10Gb of RAM...

Louis-Philippe Véronneau https://veronneau.org/ Louis-Philippe Véronneau

Reproducible Builds: Weekly report #149

Planet Debian - Wed, 07/03/2018 - 4:21am

Here's what happened in the Reproducible Builds effort between Sunday February 25 and Saturday March 3 2018:

diffoscope development

Version 91 was uploaded to unstable by Mattia Rizzolo. It included contributions already covered by posts of the previous weeks as well as new ones from:

In addition, Juliana — our Outreachy intern — continued her work on parallel processing; the above work is part of it.

reproducible-website development

Packages reviewed and fixed, and bugs filed

An issue with the pydoctor documentation generator was merged upstream.

Reviews of unreproducible packages

73 package reviews were added, 37 were updated and 26 were removed this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (46)
  • Jeremy Bicha (4)
Misc.

This week's edition was written by Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible.alioth.debian.org/blog/ Reproducible builds blog

Skellam distribution likelihood

Planet Debian - Tue, 06/03/2018 - 10:37pm

I wondered if it was possible to make a ranking system based on the Skellam distribution, taking point spread as the only input; the first step is figuring out what the likelihood looks like, so here's an example for k=4 (i.e., one team beat the other by four goals):
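For reference, the quantity being plotted is the standard Skellam probability mass function for the difference k of two Poisson variables with rates $\mu_1$ and $\mu_2$, where $I_{|k|}$ is the modified Bessel function of the first kind:

$$ P(K = k;\, \mu_1, \mu_2) \;=\; e^{-(\mu_1+\mu_2)} \left(\frac{\mu_1}{\mu_2}\right)^{k/2} I_{|k|}\!\left(2\sqrt{\mu_1 \mu_2}\right) $$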

It's pretty, but unfortunately, it shows that the most likely combination is µ1 = 0 and µ2 = 4, which isn't really that realistic. I don't know what I expected, though :-)

Perhaps it's different when we start summing many of them (more games, more teams), but you get into too high dimensionality to plot. If nothing else, it shows that it's hard to solve symbolically by looking for derivatives, as the extreme point is on an edge, not on a hill.

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

Debian Bug Squashing Party in Tirana

Planet Debian - Tue, 06/03/2018 - 10:15pm

On 3 March I attended a Debian Bug Squashing Party in Tirana, organized by colleagues at Open Labs Albania – Anisa and friends – and Daniel. Debian is the second oldest GNU/Linux distribution still active and a launchpad for so many others.

A large number of participants from Kosovo took part, mostly female students. I chose to focus on adding Kosovo to country lists in Debian by verifying that Kosovo was missing and then filing bug reports or, even better, doing pull requests.

apt-cache rdepends iso-codes returns a list of packages that include ISO codes. However, this proved hard to examine by simply looking at these applications on Debian; one would have to search through their code to find out how the ISO 3166 codes are used. So I left that for another time.

I next moved to something I thought I would be able to complete within the event. Coding is becoming quite popular with children in Kosovo. I looked into MIT’s Scratch and Google’s Blockly, the second one being freer software and targeting younger children. They both work by snapping together logical building blocks into a program.

Translation of Blockly into Albanian is now complete and hopefully will get much use. You can improve on my work at Translatewiki.

Thanks for all the fish, and see you at the next Debian BSP.

Arianit https://arianit2.wordpress.com debian – Arianit's Blog

Ubuntu Insights: LXD weekly status #37

Planet Ubuntu - Mon, 05/03/2018 - 7:17pm
Introduction

So this past week was rather intense, in a nutshell, we’ve:

  • Merged LXD clustering support
  • Split python3-lxc, lua-lxc and lxc-templates out of the LXC codebase
  • Moved libpam-cgfs from lxcfs to lxc
  • Released 3.0.0 beta1 of python3-lxc and lxc-templates
  • Released 3.0.0 beta1 of lxcfs
  • Released 3.0.0 beta1 of lxc
  • Released 3.0.0 beta1 of lxd
  • Released 3.0.0 beta2 of lxd

So we’ve finally done it, most of the work that we wanted in for our 3.0 LTS release of all LXC/LXD/LXCFS repositories has been merged and we’re now focused on a few remaining tweaks, small additions and fixes with a plan to release the final 3.0 by the end of the month.

With all of this activity we’ve also had to update all the relevant packaging, moving a bunch of stuff around between packages and adding support for all the new features.

For those interested in trying the new betas, the easiest way to see everything working together is through the LXD beta snap:

snap install lxd --beta

Note that the betas aren’t supported; you may incur data loss when upgrading or later down the line. Testing would be very much appreciated, but please do this on systems you don’t mind reinstalling if something goes wrong.

This week, the entire LXD team is meeting in Budapest, Hungary to go through the list of remaining things and make progress towards the final 3.0 release.

Upcoming conferences and events

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

  • Various kernel work
  • Stable release work for LXC, LXCFS and LXD
Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.

LXD

LXC

LXCFS

Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.

Ubuntu
  • Uploaded python3-lxc 3.0.0~beta1 to Ubuntu 18.04 and PPAs.
  • Uploaded lxc-templates 3.0.0~beta1 to Ubuntu 18.04 and PPAs.
  • Uploaded lxcfs 3.0.0~beta1 to Ubuntu 18.04.
  • Uploaded lxc 3.0.0~beta1 to Ubuntu 18.04.
  • Uploaded lxd 3.0.0~beta1 to Ubuntu 18.04.
  • Uploaded lxd 3.0.0~beta2 to Ubuntu 18.04.
  • Several follow-up updates as we move content between packages and get automated tests to pass again.
Snap
  • Switched to Go 1.10.
  • Updated edge packaging to support LXD clustering.
  • Updated liblxc handling to reduce build time and automatically pick the right version of the library.
  • Created a new beta channel using the latest beta of all components.

Sean Davis: Parole Media Player 1.0.0 Released

Planet Ubuntu - Sun, 04/03/2018 - 2:38pm

It’s here, it’s finally here! The first 1.0 release of Parole Media Player has finally arrived. This release greatly improves the user experience for users without hardware-accelerated video and includes several fixes.

What’s New?

Parole 0.9.x Developments

If you’ve been following along with the stable release channel, you have a lot of updates to catch up on. Here’s a quick recap. For everybody else, skip to the next header.

  • Parole 0.9.0 introduced a new mini mode, boosted X11 playback, and made the central logo clickable. When your playlist is complete, the “play” logo changes to a “replay” logo.
  • Parole 0.9.1 improved support for remote files and live stream playback. Older code was stripped away to make Parole even leaner and faster.
  • Parole 0.9.2 introduced a keyboard shortcuts helper (Help > Keyboard Shortcuts), fixed numerous bugs, and included a huge code cleanup and refactor.
Parole 1.0.0: New Feature, Automatic Video Playback Output
  • We’ve finally resolved the long-standing “Could not initialise Xv output” error (Xfce #11950) that has plagued a number of our users, both in virtual machines and on real hardware.
  • In the past, we were delighted when we were able to implement the Clutter backend to solve this issue, but that API proved to be unstable and difficult to maintain between releases.
  • Now, we are using the “autoimagesink” for our newly defaulted “Automatic” video output option. This sink provides the best available sink (according to GStreamer) for the available environment, and should produce great results no matter the setup.
Parole 1.0.0: Bug Fixes
  • Fixed 32-bit crashes when using the MPRIS2 plugin (LP: #1374887)
  • Fixed crash on “Clear History” button press (LP: #1214514)
  • Fixed appdata validation (Xfce #13632)
  • Fixed full debug builds and resolved implicit-fallthrough build warning
  • Replaced stock icon by freedesktop.org compliant option (Xfce #13738)
Parole 1.0.0: Translations

Albanian, Arabic, Asturian, Basque, Bulgarian, Catalan, Chinese (China), Chinese (Taiwan), Croatian, Czech, Danish, Dutch, English (Australia), Finnish, French, Galician, German, Greek, Hebrew, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kazakh, Korean, Lithuanian, Malay, Norwegian Bokmål, Occitan (post 1500), Polish, Portuguese, Portuguese (Brazil), Russian, Serbian, Slovak, Spanish, Swedish, Thai, Turkish, Uighur, Ukrainian

Downloads

Parole Media Player 1.0.0 is included in Xubuntu 18.04. Check it out this week when you test out the Beta!

sudo apt update
sudo apt install parole

The latest version of Parole Media Player can always be downloaded from the Xfce archives. Grab version 1.0.0 from the below link.

https://archive.xfce.org/src/apps/parole/1.0/parole-1.0.0.tar.bz2

  • SHA-256: 6666b335aeb690fb527f77b62c322baf34834b593659fdcd21d21ed3f1e14010
  • SHA-1: ed56ab0ab34db6a5e0924a9da6bf2ee91233da8a
  • MD5: d00d3ca571900826bf5e1f6986e42992

Sean Davis: Xfce Settings 4.12.2 Released

Planet Ubuntu - Sun, 04/03/2018 - 1:33pm

Xfce has been steadily heading towards its GTK+ 3 future with Xfce 4.14, but that doesn’t mean our current stable users have been left behind. We’ve got some new features, bug fixes, and translations for you!

What’s New?

New Features
  • Default monospace font option in the Appearance dialog
  • Improved support for embedded DisplayPort connectors on laptops
  • Show location of the mouse pointer on keypress (as seen in the featured image)
Bug Fixes
  • Leave monitors where they were if possible (Xfce #14096)
  • syncdaemon not starting with certain locales
  • division by 0 crash from gdk_screen_height_mm()
Translation Updates

Arabic, Asturian, Basque, Bengali, Bulgarian, Catalan, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Croatian, Czech, Danish, Dutch, English (Australia), English (United Kingdom), Finnish, French, Galician, German, Greek, Hebrew, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kazakh, Korean, Lithuanian, Malay, Norwegian Bokmål, Norwegian Nynorsk, Occitan (post 1500), Polish, Portuguese, Portuguese (Brazil), Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Thai, Turkish, Uighur, Ukrainian

Downloads

The latest version of Xfce Settings can always be downloaded from the Xfce archives. Grab version 4.12.2 from the below link.

http://archive.xfce.org/src/xfce/xfce4-settings/4.12/xfce4-settings-4.12.2.tar.bz2

  • SHA-256: af0e3c0a6501fc99e874103f2597abd1723f06c98f4d9e19a9aabf1842cc2f0d
  • SHA-1: 5991f4a542a556c24b6d8f5fe4698992e42882ae
  • MD5: 32263f1b704fae2db57517a2aff4232d

Xubuntu: Testing for Xubuntu

Planet Ubuntu - Sun, 04/03/2018 - 9:21am

Xubuntu 18.04 “Bionic Beaver” is just around the corner. The first beta milestone arrives next week, and the final release is a little over a month away. 18.04 is an LTS release, meaning it has a 3-year support cycle and is definitely recommended for all users. Or it would be, if we knew it was ready. Stick around… this is a bit of a long read, but it’s important.

The ISO Tracker has seen little activity for the last few development cycles. We know we have some excited users already using and testing 18.04. But without testing results being recorded anywhere, we have to assume that nobody is testing the daily images and milestones. And this has major implications for both the 18.04 release and the project as a whole.

From the perspective of the QA team, and with full support from the development team – If we aren’t able to gauge an ISO at any of the milestones (Beta, Final Beta, Release Candidate, and the LTS Point Release), how can we possibly mark those as “Ready for Release”? And why should we?

It is notable that following any of our releases, often within less than a day, we have multiple reports of issues that were NEVER seen on the ISO Tracker. With the current SRU procedure, this means that all users will now have a minimum of 7 days before they can possibly see a fix. With development and testing time, these fixes may take significantly longer or never even make it into the 3-year support release.

Xubuntu is a community project. That includes all of you. If the community doesn’t care until it’s too late, what should we take from that? In fact, community support is part of the deal every flavor makes with Canonical to enable all of the things that make it possible for the flavor to exist. It’s actually the first bullet point in remaining a recognized flavor:

  • Image has track record of community interested in creating, supporting and promoting its use.
Ready to help? Let’s do this.

It is now time for the community to step up. Test ISOs, test the versions of packages you regularly use, check for any regressions, and record your results! Our ISO is built EVERY day around 02:00 UTC, and the newest daily ISO is available shortly after. The daily build can always be found on the daily builds page, regardless of the current development release name.

For those of you who do not believe you can help… you can!

Regression Testing

How hard is it to check for regression? Use the software you use every day. Does it work differently than it used to?

  • If not, no regression!
  • If it does, but works better than before, no regression!
  • Anything else, you’ve found a regression. Report it!
ISO Testing

How hard is it to check an ISO? If you have at least 1Gb of disk space available, read on.

  • If you have sufficient disk space for a 10Gb file, you can probably use a virtual machine to run installation and post-installation tests (see the example command after this list).
  • If you are able to virtualize but lack the disk space for a full installation, consider using a VM to verify the ISO boots and applications run on the live disk.
  • If you have physical media available, either a DVD-R (RW to not waste the media on daily tests) or 2+ Gb capacity USB stick, you can boot Xubuntu from the media and perform installation, post-installation, and live testing.
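As a minimal sketch of the virtual machine route (the ISO file name and memory size here are just examples), something like this is enough to boot a daily image in QEMU:

$ qemu-system-x86_64 -enable-kvm -m 2048 -boot d -cdrom bionic-desktop-amd64.iso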
More Information

In May of 2017, we ran a session on IRC for prospective testers. Other than our regular visitors, one new prospective tester attended and shared in the discussion. The logs for that session are still available if you want to spend 10 minutes checking out how easy it is to help.

We hope that you’ll join us in making Xubuntu 18.04 a success. We think it’s going to be the best release ever, but if the community can’t find the time to contribute to the release, we can’t guarantee we can have one.

David Tomaschik: OpenSSH Two Factor Authentication (But Not Service Accounts)

Planet Ubuntu - Sat, 03/03/2018 - 9:00am

Very often, people hear “SSH” and “two factor authentication” and assume you’re talking about an SSH keypair with the private key protected by a passphrase. And while this is a reasonable approximation of a two factor system, it’s not actually two factor authentication, because the server is not using two separate factors to authenticate the user. The only factor is the SSH keypair, and there’s no way for the server to know whether that key was protected with a passphrase. However, OpenSSH has supported true two factor authentication for nearly 5 years now, so it’s quite possible to build even more robust security.
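To illustrate the kind of setup involved (a sketch of my own, not necessarily the exact configuration the full post describes; it assumes a PAM module such as pam_google_authenticator supplies the second factor):

# /etc/ssh/sshd_config (excerpt)
# Require both an SSH key and a keyboard-interactive (e.g. OTP) factor:
AuthenticationMethods publickey,keyboard-interactive
ChallengeResponseAuthentication yes
UsePAM yes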

Read more...

The Fridge: Ubuntu 16.04.4 LTS released

Planet Ubuntu - Sat, 03/03/2018 - 1:01am
The Ubuntu team is pleased to announce the release of Ubuntu 16.04.4 LTS (Long-Term Support) for its Desktop, Server, and Cloud products, as well as other flavours of Ubuntu with long-term support.

Like previous LTS series', 16.04.4 includes hardware enablement stacks for use on newer hardware. This support is offered on all architectures except for 32-bit powerpc, and is installed by default when using one of the desktop images. Ubuntu Server defaults to installing the GA kernel, however you may select the HWE kernel from the installer bootloader.

As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 16.04 LTS.

Kubuntu 16.04.4 LTS, Xubuntu 16.04.4 LTS, Mythbuntu 16.04.4 LTS, Ubuntu GNOME 16.04.4 LTS, Lubuntu 16.04.4 LTS, Ubuntu Kylin 16.04.4 LTS, Ubuntu MATE 16.04.4 LTS and Ubuntu Studio 16.04.4 LTS are also now available. More details can be found in their individual release notes:

https://wiki.ubuntu.com/XenialXerus/ReleaseNotes#Official_flavours

Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, Ubuntu Base, and Ubuntu Kylin. All the remaining flavours will be supported for 3 years.

To get Ubuntu 16.04.4
---------------------

In order to download Ubuntu 16.04.4, visit:

http://www.ubuntu.com/download

Users of Ubuntu 14.04 will be offered an automatic upgrade to 16.04.4 via Update Manager. For further information about upgrading, see:

https://help.ubuntu.com/community/XenialUpgrades

 

https://lists.ubuntu.com/archives/ubuntu-announce/2018-March/000229.html

Originally posted to the Ubuntu Release mailing list on Thu Mar 1 21:09:03 UTC 2018 by Lukasz Zemczak, on behalf of the Ubuntu Release Team

Emacs #2: Introducing org-mode

Planet Debian - Wed, 28/02/2018 - 11:09pm

In my first post in my series on Emacs, I described returning to Emacs after over a decade of vim, and org-mode being the reason why.

I really am astounded at the usefulness, and simplicity, of org-mode. It is really a killer app.

So what exactly is org-mode?

I wrote yesterday:

It’s an information organization platform. Its website says “Your life in plain text: Org mode is for keeping notes, maintaining TODO lists, planning projects, and authoring documents with a fast and effective plain-text system.”

That’s true, but doesn’t quite capture it. org-mode is a toolkit for you to organize things. It has reasonable out-of-the-box defaults, but it’s designed throughout for you to customize.

To highlight a few things:

  • Maintaining TODO lists: items can be scattered across org-mode files, contain attachments, have tags, deadlines, schedules. There is a convenient “agenda” view to show you what needs to be done. Items can repeat.
  • Authoring documents: org-mode has special features for generating HTML, LaTeX, slides (with LaTeX beamer), and all sorts of other formats. It also supports direct evaluation of code in-buffer and literate programming in virtually any Emacs-supported language. If you want to bend your mind on this stuff, read this article on literate devops. The entire Worg website is made with org-mode.
  • Keeping notes: yep, it can do that too. With full-text search, cross-referencing by file (as a wiki), by UUID, and even into other systems (into mu4e by Message-ID, into ERC logs, etc, etc.)

Getting started

I highly recommend watching Carsten Dominik’s excellent Google Talk on org-mode. It is an excellent introduction.

org-mode is included with Emacs, but you’ll often want a more recent version. Debian users can apt-get install org-mode, or it comes with the Emacs packaging system; M-x package-install RET org-mode RET may do it for you.

Now, you’ll probably want to start with the org-mode compact guide’s introduction section, noting in particular to set the keybindings mentioned in the activation section.

A good tutorial…

I’ve linked to a number of excellent tutorials and introductory items; this post is not going to serve as a tutorial. There are two good videos linked at the end of this post, in particular.

Some of my configuration

I’ll document some of my configuration here, and go into a bit of what it does. This isn’t necessarily because you’ll want to copy all of this verbatim — but just to give you a bit of an idea of some of what can be configured, an idea of what to look up in the manual, and maybe a reference for “now how do I do that?”

First, I set up Emacs to work in UTF-8 by default.

(prefer-coding-system 'utf-8)
(set-language-environment "UTF-8")

org-mode can follow URLs. By default, it opens in Firefox, but I use Chromium.

(setq browse-url-browser-function 'browse-url-chromium)

I set the basic key bindings as documented in the Guide, plus configure the M-RET behavior.

(global-set-key "\C-cl" 'org-store-link)
(global-set-key "\C-ca" 'org-agenda)
(global-set-key "\C-cc" 'org-capture)
(global-set-key "\C-cb" 'org-iswitchb)

(setq org-M-RET-may-split-line nil)

Configuration: Capturing

I can press C-c c from anywhere in Emacs. It will capture something for me, and include a link back to whatever I was working on.

You can define capture templates to set how this will work. I am going to keep two journal files for general notes about meetings, phone calls, etc. One for personal, one for work items. If I press C-c c j, then it will capture a personal item. The %a in all of these includes the link to where I was (or a link I had stored with C-c l).

(setq org-default-notes-file "~/org/tasks.org")
(setq org-capture-templates
      '(("t" "Todo" entry (file+headline "inbox.org" "Tasks")
         "* TODO %?\n %i\n %u\n %a")
        ("n" "Note/Data" entry (file+headline "inbox.org" "Notes/Data")
         "* %? \n %i\n %u\n %a")
        ("j" "Journal" entry (file+datetree "~/org/journal.org")
         "* %?\nEntered on %U\n %i\n %a")
        ("J" "Work-Journal" entry (file+datetree "~/org/wjournal.org")
         "* %?\nEntered on %U\n %i\n %a")))
(setq org-irc-link-to-logs t)

I like to link by UUIDs, which lets me move things between files without breaking locations. This helps generate UUIDs when I ask Org to store a link target for future insertion.


(require 'org-id)
(setq org-id-link-to-org-use-id 'create-if-interactive)

Configuration: agenda views

I like my week to start on a Sunday, and for org to note the time when I mark something as done.


(setq org-log-done 'time)
(setq org-agenda-start-on-weekday 0)

Configuration: files and refiling

Here I tell it what files to use in the agenda, and to add a few more to the plain text search. I like to keep a general inbox (from which I can move, or “refile”, content), and then separate tasks, journal, and knowledge base for personal and work items.

(setq org-agenda-files (list "~/org/inbox.org"
                             "~/org/email.org"
                             "~/org/tasks.org"
                             "~/org/wtasks.org"
                             "~/org/journal.org"
                             "~/org/wjournal.org"
                             "~/org/kb.org"
                             "~/org/wkb.org"))
(setq org-agenda-text-search-extra-files
      (list "~/org/someday.org"
            "~/org/config.org"))
(setq org-refile-targets '((nil :maxlevel . 2)
                           (org-agenda-files :maxlevel . 2)
                           ("~/org/someday.org" :maxlevel . 2)
                           ("~/org/templates.org" :maxlevel . 2)))
(setq org-outline-path-complete-in-steps nil) ; Refile in a single go
(setq org-refile-use-outline-path 'file)

Configuration: Appearance

I like a pretty screen. After you’ve gotten used to org a bit, you might try this.

(require 'org-bullets)
(add-hook 'org-mode-hook (lambda () (org-bullets-mode t)))
(setq org-ellipsis "⤵")

Coming up next…

This hopefully showed a few things that org-mode can do. Coming up next, I’ll cover how to customize TODO keywords and tags, archiving old tasks, forwarding emails to org-mode, and using git to synchronize between machines.

You can also see a list of all articles in this series.

Resources to accompany this article

John Goerzen http://changelog.complete.org The Changelog

#17: Dependencies.

Planet Debian - Wed, 28/02/2018 - 10:45pm

Dependencies are invitations for other people to break your package.
-- Josh Ulrich, private communication

Welcome to the seventeenth post in the relentlessly random R ravings series of posts, or R4 for short.

Dependencies. A truly loaded topic.

As R users, we are spoiled. Early in the history of R, Kurt Hornik and Friedrich Leisch built support for packages right into R, and started the Comprehensive R Archive Network (CRAN). And R and CRAN had a fantastic run with it. Roughly twenty years later, we are looking at over 12,000 packages which can (generally) be installed with absolute ease and no surprises. No other (relevant) open source language has anything of comparable rigour and quality. This is a big deal.

And coding practices evolved and changed to play to this advantage. Packages are a near-unanimous recommendation, use of the install.packages() and update.packages() tooling is nearly universal, and most R users learned to their advantage to group code into interdependent packages. Obvious advantages are versioning and snap-shotting, attached documentation in the form of help pages and vignettes, unit testing, and of course continuous integration as a side effect of the package build system.

But the notion of 'oh, let me just build another package and add it to the pool of packages' can get carried away. A recent example I had was the work on the prrd package for parallel recursive dependency testing --- coincidentally, created entirely to allow for easier voluntary tests I do on reverse dependencies for the packages I maintain. It uses a job queue for which I relied on the liteq package by Gabor which does the job: enqueue jobs, and reliably dequeue them (also in a parallel fashion) and more. It looks light enough:

R> tools::package_dependencies(package="liteq", recursive=FALSE, db=AP)$liteq [1] "assertthat" "DBI" "rappdirs" "RSQLite" R>

Two dependencies because it uses an internal SQLite database, one for internal tooling and one for configuration.

All good then? Not so fast. The devil here is the very innocuous and versatile RSQLite package because when we look at fully recursive dependencies all hell breaks loose:

R> tools::package_dependencies(package="liteq", recursive=TRUE, db=AP)$liteq [1] "assertthat" "DBI" "rappdirs" "RSQLite" "tools" [6] "methods" "bit64" "blob" "memoise" "pkgconfig" [11] "Rcpp" "BH" "plogr" "bit" "utils" [16] "stats" "tibble" "digest" "cli" "crayon" [21] "pillar" "rlang" "grDevices" "utf8" R> R> tools::package_dependencies(package="RSQLite", recursive=TRUE, db=AP)$RSQLite [1] "bit64" "blob" "DBI" "memoise" "methods" [6] "pkgconfig" "Rcpp" "BH" "plogr" "bit" [11] "utils" "stats" "tibble" "digest" "cli" [16] "crayon" "pillar" "rlang" "assertthat" "grDevices" [21] "utf8" "tools" R>

Now we went from four to twenty-four, due to the twenty-two dependencies pulled in by RSQLite.

There, my dear friend, lies madness. The moment one of these packages breaks, we get potential side effects. And this is no laughing matter. Here is a tweet from Kieran, posted days before a book deadline of his, when he was forced to roll a CRAN package back because it broke his entire setup. (The original tweet has by now been deleted; why people do that to their entire tweet histories is something I fail to comprehend too; in any case the screenshot is from a private discussion I had with a few like-minded folks over slack.)

That illustrates the quote by Josh at the top. As I too have "production code" (well, CRANberries for one relies on it), I was interested to see if we could easily amend RSQLite. And yes, we can. A quick fork and a few commits later, we have something we could call 'RSQLighter' as it reduces the dependencies quite a bit:

R> IP <- installed.packages()   # using my installed mod'ed version
R> tools::package_dependencies(package="RSQLite", recursive=TRUE, db=IP)$RSQLite
 [1] "bit64"     "DBI"       "methods"   "Rcpp"      "BH"        "bit"
 [7] "utils"     "stats"     "grDevices" "graphics"
R>

That is less than half. I have not proceeded with the fork because I do not believe in needlessly splitting codebases. But this could be a viable candidate for an alternate or shadow repository with more minimal and hence more robust dependencies. Or, as Josh calls it, the tinyverse.

Another maddening aspect of dependencies is the ruthless application of what we could jokingly call Metcalfe's Law: the likelihood of breakage does of course increase with the number of edges in the dependency graph. A nice illustration is this post by Jenny trying to rationalize why one of the 87 (as of today) tidyverse packages now has state "ORPHANED" at CRAN:

An invitation for other people to break your code. Well put indeed. Or to put rocks up your path.

But things are not all that dire. Most folks appear to understand the issue, some even do something about it. The DBI and RMySQL packages have saner strict dependencies, maybe one day things will improve for RMariaDB and RSQLite too:

R> tools::package_dependencies(package=c("DBI", "RMySQL", "RMariaDB"), recursive=TRUE, db=AP)
$DBI
[1] "methods"

$RMySQL
[1] "DBI"     "methods"

$RMariaDB
 [1] "bit64"     "DBI"       "hms"       "methods"   "Rcpp"      "BH"
 [7] "plogr"     "bit"       "utils"     "stats"     "pkgconfig" "rlang"
R>

And to be clear, I do not believe in giving up and using everything via docker, or virtualenvs, or packrat, or ... A well-honed dependency system is wonderful and the right resource to get code deployed and updated. But it requires buy-in from everyone involved, and an understanding of the possible trade-offs. I think we can, and will, do better going forward.

Or else, there will always be the tinyverse ...

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box
