Planet Debian


Vasudev Kamath: debcargo: Replacing subprocess crate with git2 crate

Sat, 15/07/2017 - 12:05pm

In my previous post I talked about using the subprocess crate to extract the beginning and ending years from a git repository for generating the debian/copyright file. In this post I'm going to talk about how I replaced subprocess with the native git2 crate and achieved the same result in a much cleaner and safer way.

git2 is a native Rust crate which provides access to Git repository internals. git2 does not involve any unsafe invocation of external commands, as it is built on top of libgit2-sys, which uses Rust FFI to bind directly to the underlying libgit2 library. Below is the new copyright_fromgit function using the git2 crate.

// Imports this function needs (approximate); `Result` here is debcargo's own error alias.
use std::cmp::Ordering;

use chrono::{DateTime, Datelike, NaiveDateTime, Utc};
use git2::Repository;
use tempdir::TempDir;

fn copyright_fromgit(repo_url: &str) -> Result<String> {
    let tempdir = TempDir::new_in(".", "debcargo")?;
    let repo = Repository::clone(repo_url, tempdir.path())?;

    let mut revwalker = repo.revwalk()?;
    revwalker.push_head()?;

    // Get the latest and first commit id. This is a bit ugly.
    let latest_id = revwalker.next().unwrap()?;
    let first_id = revwalker.last().unwrap()?; // revwalker ends here; it is consumed by last()

    let first_commit = repo.find_commit(first_id)?;
    let latest_commit = repo.find_commit(latest_id)?;

    let first_year = DateTime::<Utc>::from_utc(
        NaiveDateTime::from_timestamp(first_commit.time().seconds(), 0),
        Utc).year();

    let latest_year = DateTime::<Utc>::from_utc(
        NaiveDateTime::from_timestamp(latest_commit.time().seconds(), 0),
        Utc).year();

    let notice = match first_year.cmp(&latest_year) {
        Ordering::Equal => format!("{}", first_year),
        _ => format!("{}-{},", first_year, latest_year),
    };

    Ok(notice)
}
So here is what I'm doing:
  1. Use git2::Repository::clone to clone the given URL. We thus avoid exec'ing the git clone command.
  2. Get a revision walker object. git2::Revwalk implements the Iterator trait and allows walking through the git history. This is what we use to avoid exec'ing the git log command.
  3. revwalker.push_head() is important because we need to tell the revwalker where to start walking the history from. In this case we ask it to walk from the repository HEAD. Without this line, the next step will not work. (Learned that the hard way :-).)
  4. Then we extract git2::Oid values, which are roughly equivalent to commit hashes and can be used to look up a particular commit. We take the latest commit using Revwalk::next and the first commit using Revwalk::last; note the order: Revwalk::last consumes the revwalker, so doing it the other way around would make the borrow checker unhappy :-). This replaces the exec of the head -n1 command.
  5. Look up the git2::Commit objects using git2::Repository::find_commit.
  6. Then convert the git2::Time to chrono::DateTime and take out the years.

After this change I found an obvious error which had gone unnoticed in the previous version: the case where there is no repository key in Cargo.toml. When there was no repo URL, the exec of git clone did not error out and our shell commands happily extracted the years from the debcargo repository itself! Since I was testing the code from inside the debcargo repository it never failed; when I executed it from a non-git folder, git did throw an error, but from git log rather than git clone. With git2 this error was spotted right away, because git2 complained that I had given it an empty URL.

When it comes to performance, debcargo is faster than the previous version. This makes sense because previously it was doing 5 fork-and-exec calls, which are now avoided.

Chris Lamb: Installation birthday

Fri, 14/07/2017 - 12:08pm

Fancy receiving congratulations on the anniversary of when you installed your system?

Installing the installation-birthday package on your Debian machines will celebrate each birthday of your system by automatically sending a message to the local system administrator.

The installation date is based on the system installation time, not the package itself.

installation-birthday is available in Debian 9 ("stretch") via the stretch-backports repository, as well as in the testing and unstable distributions:

$ apt install installation-birthday
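
For stretch systems the package comes from stretch-backports, so that repository needs to be enabled first. A minimal sketch, assuming the standard backports setup against the default Debian mirror:

$ echo "deb http://deb.debian.org/debian stretch-backports main" | sudo tee /etc/apt/sources.list.d/stretch-backports.list
$ sudo apt update
$ sudo apt install -t stretch-backports installation-birthday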

Enjoy, and patches welcome. :)

Jose M. Calhariz: Crossgrading a more typical server in Debian 9

Thu, 13/07/2017 - 7:32pm

First you need to install a 64-bit kernel and boot into it. See my previous post on how to do that.

Second, you need to bootstrap the crossgrade:

apt-get clean
apt-get upgrade
apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
dpkg --install /var/cache/apt/archives/*_amd64.deb
dpkg --install /var/cache/apt/archives/*_amd64.deb
dpkg --print-architecture
dpkg --print-foreign-architectures

Third, do a crossgrade of the libraries:

dpkg --list > original.dpkg
apt-get --fix-broken --allow-remove-essential install

for pack32 in $(grep :i386 original.dpkg | awk '{print $2}') ; do
    if dpkg --status $pack32 | grep -q "Multi-Arch: same" ; then
        apt-get install --yes --allow-remove-essential ${pack32%:i386}
    fi
done

Fourth, do a full crossgrade:

if ! apt-get install --allow-remove-essential $(grep :i386 original.dpkg | awk '{print $2}' | sed -e s/:i386//) ; then
    apt-get --fix-broken --allow-remove-essential install
    apt-get install --allow-remove-essential $(grep :i386 original.dpkg | awk '{print $2}' | sed -e s/:i386//)
fi
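
After the full crossgrade it is worth checking what is left behind. A small sketch of the kind of verification I would add (not part of the original procedure):

dpkg --print-architecture           # should now report amd64
dpkg --print-foreign-architectures  # i386 may still be listed while :i386 packages remain
dpkg --list | grep ':i386'          # any leftovers can then be crossgraded or removed one by one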

Lars Wirzenius: Adding Yakking to Planet Debian

Thu, 13/07/2017 - 12:07pm

In a case of blatant self-promotion, I am going to add the Yakking RSS feed to the Planet Debian aggregation. (But really because I think some of the readership of Planet Debian may be interested in the content.)

Yakking is a group blog by a few friends aimed at new free software contributors. From the front page description:

Welcome to Yakking.

This is a blog for topics relevant to someone new to free software development. We assume you are already familiar with computers, and are curious about participating in the production of free software. You don't need to be a programmer: software development requires a wide variety of skills, and you can be a valued core contributor to a project without being a programmer.

If anyone objects, please let me know.

Lucy Wayland: Basic Chilli Template

Thu, 13/07/2017 - 3:59am

Amounts are to taste:
[Stage one]
Chopped red onion
Chopped garlic
Chopped fresh ginger
Chopped big red chillies (mild)
Chopped birds eye chillies (red or green, quite hot)
Chopped scotch bonnets (hot)
[Fry onion in some olive oil. When it is getting translucent, add the rest of the ingredients. May need to add some more oil. When the garlic is browning, move on to stage two.]
[Stage two]
Some tins of chopped tomato
Some tomato puree
Some basil
Some thyme
Bay leaf optional
Some sliced mushroom
Some chopped capsicum pepper
Some kidney beans
Other beans optional (butter beans are nice)
Lentils optional (Pro tip: if adding lentils, especially red lentils, I recommend adding some garam masala as well. Lifts the flavour.)
Veggie mince optional
Pearled barley very optional
Stock (some reclaimed from swilling water around the tomato tins)
Water to keep topping up with if it gets too sticky or dry
Dash of red wine optional
Worcester sauce optional
Any other flavouring you feel like optional (I quite often add random herbs or spices)
[[Secret ingredient: a spoonful of Marmite]]
[Cook everything up together, but wait until there is enough fluid before you add the dry/sticky ingredients in.]
[Serve with carb of choice. I'm currently fond of using Ryvita as a dipper instead of tortilla chips.]
[Also serve with a “cooler” such as natural yogurt, soured cream or something else.]

You want more than one type of chilli in there to broaden the flavour. I use all three, plus occasionally others as well. If you are feeling masochistic you can go hotter than scotch bonnets, but although you may get something of the same heat, I think you lose something in the flavour.

BTW – if you get the chance, of all the tortilla chips, I think blue corn ones are the best. Only seem to find them in health food shops.

There you go. It’s a “Zen” recipe, which is why I couldn’t give you a link. You just do it until it looks right, feels right, tastes right. And with practice you get it better and better.


Steinar H. Gunderson: Nageru 1.6.1 released

Sun, 09/07/2017 - 12:14pm

I've released version 1.6.1 of Nageru, my live video mixer.

Now that Solskogen is coming up, there's been a lot of activity on the Nageru front, but hopefully everything is actually coming together now. Testing has been good, but we'll see whether it stands up to the battle-hardening of the real world or not. Hopefully I won't be needing any last-minute patches. :-)

Besides the previously promised Prometheus metrics (1.6.1 ships with a rather extensive set, as well as an example Grafana dashboard) and frame queue management improvements, a surprising late addition was that of a new transcoder called Kaeru (following the naming style of Nageru itself, from the Japanese verb kaeru (換える) which means roughly to replace or exchange—iKnow! claims it can also mean “convert”, but I haven't seen support for this anywhere else).

Normally, when I do streams, I just let Nageru do its thing and send out a single 720p60 stream (occasionally 1080p), usually around 5 Mbit/sec; less than that doesn't really give good enough quality for the high-movement scenarios I'm after. But Solskogen is different in that there's a pretty diverse audience when it comes to networking conditions; even though I have a few mirrors spread around the world (and some JavaScript to automatically pick the fastest one; DNS round-robin is really quite useless here!), not all viewers can sustain such a bitrate. Thus, there's also a 480p variant with around 1 Mbit/sec or so, and it needs to come from somewhere.

Traditionally, I've been using VLC for this, but streaming is really a niche thing for VLC. I've been told it will be an increased focus for 4.0 now that 3.0 is getting out the door, but over the last few years, there's been a constant trickle of little issues that have been breaking my transcoding pipeline. My solution for this was to simply never update VLC, but now that I'm up to stretch, this didn't really work anymore, and I'd been toying around with the idea of making a standalone transcoder for a while. (You'd ask “why not the ffmpeg(1) command-line client?”, but it's a bit too centered around files and not streams; I use it for converting to HLS for iOS devices, but it has a nasty habit of I/O blocking real work, and its HTTP server really isn't meant for production work. I could survive the latter if it supported Metacube and I could feed it into Cubemap, but it doesn't.)

It turned out Nageru had already grown most of the pieces I needed; it had video decoding through FFmpeg, x264 encoding with speed control (so that it automatically picks the best preset the machine can sustain at any given time) and muxing, audio encoding, proper threading everywhere, and a usable HTTP server that could output Metacube. All that was required was to add audio decoding to the FFmpeg input, and then replace the GPU-based mixer and GUI with a very simple driver that just connects the decoders to the encoders. (This means it runs fine on a headless server with no GPU, but it also means you'll get FFmpeg's scaling, which isn't as pretty or fast as Nageru's. I think it's an okay tradeoff.) All in all, this was only about 250 lines of delta, which pales compared to the ~28000 lines of delta that are between 1.3.1 (used for last Solskogen) and 1.6.1. It only supports a rather limited set of Prometheus metrics, and it has some limitations, but it seems to be stable and deliver pretty good quality. I've denoted it experimental for now, but overall, I'm quite happy with how it turned out, and I'll be using it for Solskogen.

Nageru 1.6.1 is on its way into Debian, but it depends on a new version of Movit which needs to go through the NEW queue (a soname bump), so it might be a few days. In the meantime, I'll be busy preparing for Solskogen. :-)

Daniel Silverstone: Gitano - Approaching Release - Access Control Changes

Sat, 08/07/2017 - 11:31am

As mentioned previously I am working toward getting Gitano into Stretch. A colleague and friend of mine (Richard Maw) did a large pile of work on Lace to support what we are calling sub-defines. These let us simplify Gitano's ACL files, particularly for individual projects.

In this posting, I'd like to cover what has changed with the access control support in Gitano, so if you've never used it then some of this may make little sense. Later on, I'll be looking at some better user documentation in conjunction with another friend of mine (Lars Wirzenius) who has promised to help produce a basic administration manual before Stretch is totally frozen.

Sub-defines

With a more modern lace (version 1.3 or later) there is a mechanism we are calling 'sub-defines'. Previously if you wanted to write a ruleset which said something like "Allow Steve to read my repository" you needed:

define is_steve user exact steve
allow "Steve can read my repo" is_steve op_read

And, as you'd expect, if you also wanted to grant read access to Jeff then you'd need yet another set of defines:

define is_jeff user exact jeff
define is_steve user exact steve
define readers anyof is_jeff is_steve
allow "Steve and Jeff can read my repo" readers op_read

This, while flexible (and still entirely acceptable), is wordy for small rulesets, and so we added sub-defines to create this syntax:

allow "Steve and Jeff can read my repo" op_read [anyof [user exact jeff] [user exact steve]]

Of course, this is generally neater for simpler rules; if you wanted to add another user then it might make sense to go for:

define readers anyof [user exact jeff] [user exact steve] [user exact susan]
allow "My friends can read my repo" op_read readers

The nice thing about this sub-define syntax is that it's basically usable anywhere you'd use the name of a previously defined thing; they're compiled in much the same way, and Richard worked hard to get good error messages out of them just in case.

No more auto_user_XXX and auto_group_YYY

As a result of the above being implemented, the support Gitano previously grew for automatically defining users and groups has been removed. The approach we took was pretty inflexible and risked compilation errors if a user was deleted or renamed, and so the sub-define approach is much much better.

If you currently use auto_user_XXX or auto_group_YYY in your rulesets then your upgrade path isn't bumpless but it should be fairly simple:

  1. Upgrade your version of lace to 1.3
  2. Replace any auto_user_FOO with [user exact FOO], and similarly any auto_group_BAR with [group exact BAR] (see the sketch after this list).
  3. You can now upgrade Gitano safely.
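
As a rough illustration of step 2, something along these lines could rewrite the old automatic defines in a ruleset file (a sketch only: the file path is hypothetical, and the result should be reviewed by hand):

sed -i -e 's/auto_user_\([^ ]*\)/[user exact \1]/g' \
       -e 's/auto_group_\([^ ]*\)/[group exact \1]/g' rules/main.lace
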
No more 'basic' matches

Since Gitano first gained support for ACLs using Lace, we had a mechanism called 'simple match' for basic inputs such as groups, usernames, repo names, ref names, etc. Simple matches looked like user FOO or group !BAR. The match syntax grew more and more arcane as we added Lua pattern support, such as ref ~^refs/heads/${user}/. When we wanted to add proper PCRE regex support we added a syntax of the form user pcre ^/.+?... where pcre could be any of: exact, prefix, suffix, pattern, or pcre. We had a complex set of rules for exactly what the sigils at the start of the match string might mean and in what order, and it was getting unwieldy.

To simplify matters, none of the "backward compatibility" remains in Gitano. You instead MUST use the explicit what how match form. To make this slightly more natural to use, we have added a bunch of aliases: is for exact, starts and startswith for prefix, and ends and endswith for suffix. In addition, the kind of match can be prefixed with a ! to invert it, and for natural-looking rules not is an alias for !is.

This means that your rulesets MUST be updated to use the more explicit syntax before you update Gitano, or else nothing will compile. Fortunately this form has been supported for a long time, so you can do this in three steps (a small grep sketch for spotting old-style matches follows the list below).

  1. Update your gitano-admin.git global ruleset. For example, the old form of the defines used to contain define is_gitano_ref ref ~^refs/gitano/ which can trivially be replaced with: define is_gitano_ref ref prefix refs/gitano/
  2. Update any non-zero rulesets your projects might have.
  3. You can now safely update Gitano
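
Before upgrading, a quick grep over the admin repository can help spot rules that still use the old sigil-based syntax (a sketch; the patterns are only a heuristic):

grep -rnE 'auto_(user|group)_|~\^| ![A-Za-z]' rules/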

If you want a reference for making those changes, you can look at the Gitano skeleton ruleset which can be found at https://git.gitano.org.uk/gitano.git/tree/skel/gitano-admin/rules/ or in /usr/share/gitano if Gitano is installed on your local system.

Next time, I'll likely talk about the deprecated commands which are no longer in Gitano, and how you'll need to adjust your automation to use the new commands.

Joachim Breitner: The Micro Two Body Problem

Thu, 06/07/2017 - 5:27pm

Inspired by recent PhD comic “Academic Travel” and not-so-recent xkcd comic “Movie Narrative Charts”, I created the following graphics, which visualizes the travels of an academic couple over the course of 10 months (place names anonymized).

Two bodies traveling the world

Holger Levsen: 20170706-fcmc.tv

Thu, 06/07/2017 - 4:54pm
a media experiment: fcmc.tv / G20 not welcome

Our current view, every day and night:

No one is illegal!

No football for fascists!

The FC/MC is a collective initiative to change the perception of the G20 in Hamburg - the summit itself and the protests surrounding it. FC/MC is a media experiment, located in the stadium of the amazing St.Pauli football club. We will operate until this Sunday, providing live coverage (text, photos, audio, video), back stories and much much more. Another world is possible!

Disclaimer: I'm not involved in content generation, I'm just doing computer stuff as usual, but that said, I really like the work of those who are!

Thadeu Lima de Souza Cascardo: News on Debian on apexqtmo

Thu, 06/07/2017 - 6:20am

I had been using my Samsung Galaxy S Relay 4G for almost three years when I decided to get a new phone. I would use this new phone for daily tasks and take the chance to get a new model for hacking in the future. My apexqtmo would still be my companion and would now be more available for real hacking.

And so it also happened that its power button got stuck. It was not the first time, but now it would happen every so often, and would require me to disassemble it. So I managed to remove the plastic button and leave it with a hole so I could press the button with a screwdriver or a paperclip. That was the excuse I needed to get it running Debian only.

Though it's now always plugged into my laptop, I got the chance to hack on it in my scarce free time. Once I managed to get a kernel I built myself running on it, I started fixing things like enabling devtmpfs. I didn't insist much on running systemd, though, and kept System V. The Xorg issues were either in the server or the client, depending on which client I ran.

I decided to give a chance to running the Android userspace on a chroot, but gave up after some work to get some firmware loaded.

I managed to get the ALSA controls right after saving them inside a chroot on my CyanogenMod system. Then, restoring them on Debian allowed me to play songs. Unfortunately, it seems I broke the audio jack when disassembling it. Otherwise, it would have been a great portable audio player. I even wrote a small program that would allow me to control mpd by swiping on the touchscreen.

Then, as the Debian release approached, I decided to investigate the framebuffer issue closely. I ended up finding out that it was really a bug in the driver, and after fixing it, the X server and client crashes were gone. It was beautiful to get some desktop environment running with the right colors, get a calculator started and really use the phone as a mobile device.

There are two lessons or findings here for me. The first one is that the current environments are really lacking. Even something like GPE can't work. The buttons are tiny, and scrollbars are still the only way of scrolling, some of the time. No automatic virtual keyboards. So, there needs to be some investment in the existing environments, and maybe even the development of new environments for these kinds of devices. This was something I expected somehow, but it's still disappointing to know that we had so many of those developed in the past and now gone. I really miss Maemo. Running something like Qtopia would mean grabbing very old unmaintained software not available in Debian. There is still matchbox, but it's as subpar as the others I tested.

The second lesson is that building a userspace to run on old kernels will still hit the problem of broken drivers. In my particular case, unless I wrote code for using Ion instead of the framebuffer, I would have had that problem. Or it would require me to add code to xorg-xserver that is not appropriate. Or fix the kernel drivers in the available kernel source code. But this does not scale much better than doing the right thing and adding upstream support for these devices.

So, I decided it was time I started working on upstream support for my device. I have it in progress and may send some upstream patches soon. I have USB and MMC/SDcard working fine. DRM is still a challenge, but thanks to Rob Clark, it's something I expect to get working soon, and after that, I would certainly celebrate. Maybe even consider starting the work on other devices a little sooner.

Trying to review my post on GNU on smartphones, here is where I would put some of the status of my device and some extra notes.

On Halium

I am really glad people started this project. This was one of the things I criticized: that though Ubuntu Phone and FirefoxOS built on Android userspace, they were not easily portable to many devices out there.

But as I am looking for a more pure GNU experience, let's call it that, Halium does not help much in that direction. But I'd like to see it flourish and allow people to use more OSes on more devices.

Unfortunately, it suffers from similar problems as the strategy I was trying to go with. If you have a device with a very old kernel, you won't be able to run some of the latest userspace, even with Android userspace help. So, lots of devices would be left unsupported, unless we start working on some upstream support.

On RYF Hardware

My device is one of the worst out there. It's a modem that has a peripheral CPU. Much has already been said about Qualcomm chips being some of the least freedom-friendly. Ironically, they have some of the best upstream support, as far as I found out while doing this upstreaming work. Guess we'll have to wait for opencores, openrisc and risc-v to catch up here.

Diversity

Though I have been experimenting with Debian, the upstream work would sure benefit lots of other OSes out there, mainly GNU+Linux based ones, but also other non-GNU Linux based ones. Not so much for other kernels.

On other options

After the demise of Ubuntu Phone, I am glad to see UBports catching up. I hope the project is sustainable and produce more releases for more devices.

Rooting

This needs documentation. Most of the procedures rely on booting a recovery system, which means we are already past the root requirement. We simply boot our own system, then. However, for some debugging strategies, getting root on the OEM system is useful. So, try to get root on your system, but beware of malware out there.

Booting

Most of these devices will have their bootloaders in there. They may be unlocked, allowing unsigned kernels to be booted. Replacing these bootloaders is still going to be a challenge for another future phase. Though adding a second bootloader there, one that is freedom-respecting and that gives the user more control over the boot step, is possible once you have good upstream support. One could either use kexec for that, or try to use the same device tree for U-Boot, and use the knowledge of the device drivers for Linux when writing drivers for U-Boot, GRUB or Libreboot.

Installation

If you have root on your OEM system, this is something that could be worked on. Otherwise, there is magic-device-tool, whose approach is one that could be used.

Kernels

While I am working on adding Linux upstream support for my device, it would be wonderful to see more kernels supporting those gadgets. Hopefully, some of the device driver writing and reverse engineering could help with that, though I am not too much optimistic. But there is hope.

Basic kernel drivers

Adding the basic support, like USB and MMC, after clocks, gpios, regulators and what not, is the first step of a long road. But it would allow using the device as a board computer, under better control of the user. Hopefully, lots of electronic garbage out there would have some use as control gadgets. Instead of buying a new board, just grab your old phone and put it to some nice use.

Sensors, input devices, LEDs

These are usually easy too. Some sensors may depend on your modem or some userspace code that is not that easily reverse engineered. But others would just require some device tree work, or some small input driver.

Graphics

Here, things may get complicated. Even basic video output is something I have some trouble with. Thanks to some other people's work, I have hope at least for my device. And using the vendor's linux source code, some framebuffer should be possible, even some DRM driver. But OpenGL or other 3D acceleration support requires much more work than that, and, at this moment, it's not something I am counting on. I am thankful for the work lots of people have been doing on this area, nonetheless.

Wireless

Be it Wifi or Bluetooth, things get ugly here. The vendor driver might be available. Rewriting it would take a long time. Even then, it would most likely require some non-free firmware loading. Using USB OTG here might be an option.

Modem/GSM

The work of the Replicant folks on that is what gives me some hope that it might be possible to get this working. Something I would leave to after I have a good interface experience in my hands.

GPS

Problem is similar to the Modem/GSM one, as some code lives in userspace, sometimes talking to the modem is a requirement to get GPS access, etc.

Shells

This is where I would like to see new projects, even if they work on current software to get them more friendly to these form factors. I consider doing some work there, though that's not really my area of expertise.

Next steps

For me, my next steps are getting what I have working upstream, keep working on DRM support, packaging GPE, then experimenting with some compositor code. In the middle of that, trying to get some other devices started.

But documenting some of my work is something I realized I need to do more often, and this post is some try on that.

Shirish Agarwal: Debian 9 release party at Co-hive

Wed, 05/07/2017 - 8:11pm

Dear all, This would be a biggish one so please have a chai/coffee or something stronger as it would take a while.

I will start with an attempt at some alcohol humor. Some people know that I have had a series of convulsive epileptic seizures; I had shared bits about it in another post as well. Recovery is going on through allopathic medicines as well as physiotherapy, which I go to every alternate day.

One of the exercises that I do in physiotherapy sessions is walking cross-legged on a line. While doing it today, it occurred to me that this is the same test a police inspector would make you do if you were caught drinking or suspected of drunk driving. While some in the police force now also have breath analyzer machines to determine the alcohol content in the breath and body (and there are ways to deceive them), the above exercise is still an integral part of the examination. Now, a few of my friends who do drink have made an art of walking on a line, while I, due to this neurological disorder, still have trouble walking on one. So while I don't foresee a drinking party in the near future (6 months at least), if I ever do get caught with a friend who is drunk (by association I would also be a suspect) by a policeman who doesn't have a breath analyzer machine, I could be in a lot of trouble. In addition, if I tell him I have a neurological disorder I am bound to land up in a cell, as he will think I'm trying to make a fool of him. If you can picture the situation, I'm sure you will get a couple of laughs.

Now coming to the release party, I was a bit apprehensive. It's been quite a while since I had faced an audience, and coming just out of illness I didn't know how well- or ill-prepared I would be for the session. I had given up exercising two days before the event as I wanted to have a loose body, loose limbs all over. I also took a mild sedative (1mg) the day before just so I would have a good night's sleep and be able to focus all my energies on the big day. (I don't recommend sedatives unless the doctor prescribes them; I did have a doctor's prescription, and so was able to have a nice sleep.) I didn't do any Debian study as I hoped my somewhat long experience with both Ubuntu and Debian would help me.

On the d-day, I had asked Dhanesh (the organizer of the event) to accompany me from home to the venue and back, as I was unsure of the journey: it was around 9-10 km from my place, and while I had been to the venue a couple of years back, I had only a mild remembrance of the place.

Anyways, Dhanesh complied with my request and together we reached the venue before the appointed 1500 hrs. As it was a Sunday I was unsure how many people would turn up, as people usually like to cozy up on a Sunday.

Around 1530 hrs everybody showed up

It included a couple of co-organizers, with most people being newbies, so while I had thought of showing how to contribute via reporting bugs or putting up patches, I had to set that aside and explain how things work in the free software and open-source world. We didn't get into the debate of free vs open-source or free/open-source/open-core as that would have been counter-productive and probably confusing for newbies.

We did however share the debian tree structure

I was stumped by /var and /proc. I hadn't taken my lappy as it is expensive (a Lenovo ThinkPad I love very dearly) and I was unsure if I would be able to take care of it (weight-wise). Dhanesh had told me that he had Debian on his lappy + zsh, both of which are my favourites.

While back at home I realized /var has been relegated to having apache/server logs and stuff like that, I do recall (vaguely) a thread on debian-devel about removing /var although that discussion went nowhere.

One of the issues that we hit early on was that nobody had enough space on their hdd to install Debian comfortably. It took a while to get an external hdd and push some of the content from somebody's lappy to the external drive to make space for the installation.

I did share about /, /home and optional swap, while Dhanesh helped by sharing about having a separate /boot partition as well, which I had forgotten. I can't even begin to remember the number of times having a separate /boot partition has helped me on all of my systems.

That done, we did try to install/show Debian 9 without network but were hit with #866629, so we weren't able to complete the installation. We had got the latest 9.0.1 as I had seen Steve's message about issues with the live images, but even then we were hit with the above bug. As shared in the bug history, it might be a good idea to treat the last couple of RCs (Release Candidate releases) as pre-release parties, so people have a chance to report bugs and get them fixed. It was also nice to see Praveen raising the seriousness of the bug shared above.

The next day I also filed #866971 as I had mistaken the release to be a regular release and not the live instance. I have pushed my rationale and hope the right thing happens.

As installation takes a bit of time, we used it to talk about Google's Summer of Code and the absence of Debian from GSoC this year. I should have directed them to an itfoss article I wrote some time ago, and I also shared that Debian is looking at having a similar apprenticeship programme within Debian itself. There were questions about why Debian would want to take on the administrative overhead; my response was that it probably had to do with Debian wanting more control over the process. While Debian has had great luck getting the number of seats it asks for in GSoC, the ball is always in Google's court, and having that uncertainty removed would benefit Debian in both the short and the long term. One interesting stat shared with me was that something like 89 students from India had been selected for GSoC this year even with the lower stipend, which is a far cry from the 10-15 students who are able to complete GSoC every year. Let's see what happens this year.

One interesting fact/bit of gossip I shared with them is that Google uses a modified/forked Debian internally which it will probably never release.

There were quite a few queries about GSoC, which led into how contributions are made and how git has become the master of all VCSes (Version Control Systems). I do have pet bugs about git, the biggest one being that in places/countries without much bandwidth, git fails many a time while cloning. I *think* the bug has been reported enough times but I haven't seen any improvements yet. There is a need for a solution like wget and wget -c, so that git just works even under the most trying circumstances (bandwidth-wise).
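
Until git grows proper resumable clones, the usual partial workaround on a weak link is to fetch the history in pieces rather than in one go; a sketch (not something we demonstrated at the event, and the URL is a placeholder):

git clone --depth 1 https://example.org/some/repo.git   # grab only the latest snapshot first
cd repo
git fetch --unshallow                                    # deepen to the full history once the link allows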

We also shared what a repo is and Dhanesh helpfully did a git shortlog to show people how commits are commented. It was from his official work so he couldn’t show anything apart from the shortlog.

I also shared how non-technical people can help with documentation and artwork, but didn't get into any concrete examples, although wiki.debian.org would have been a good start. I also don't know if it's a fact or not, but it seems that moinmoin (the current wiki solution used by Debian) has got a sectional edit feature, which I used some time back. If moinmoin has this feature then it is on par with mediawiki, although I do know that mediawiki has a lot more features going for it.

Dhanesh did manage to install Debian 8.0.7 (while 8.8.0 was the last release), which might have been better. The debian-installer (d-i) looks the same even though I know there are improvements with respect to UEFI and many updated components.

There are many bugs which I wanted to share but didn't know if it was the right forum or not, e.g. #597176, which probably needs improvements in other libraries, along with the JPEG 2000 support bug #604859, all of which I'm subscribed to.

We also had a talk about code documentation, code readability and Python (as almost everything in Debian is based on Python). I had actually wanted to show metrics.debian.net, but had seen it was down the day before; I checked again and it is down now as well, hence I reported it, and it will hopefully appear on the Debian BTS some time soonish. The idea was to share that Debian has/uses other programming languages as well, and is only limited by people being interested in a specific language and willing to package and maintain packages in that language.

As Dhanesh was part of Firefox OS and a Campus Ambassador we did discuss what went wrong in Firefox OS deployment and Firefox as a whole, specifically between the community and Mozilla the Corporation.

There was a lot here that I wasn't able to share as I had become tired; otherwise I would have shared that the ncurses debian-installer interface, although a bit ugly to look at, is still good, as it has speech for visually impaired people as well as people with poor sight.

There was lots more to share about Debian, but I was conscious that I might over-burden the audience, and my stamina was also stretched.

We also talked about automated builds but didn't show either travis.debian.net or jenkins.debian.net.

We also had a small discussion about Artificial Intelligence, Robotics and how the outlook for people coming in Computer Science looks today.

One thing I forgot to share: we did do the cake-cutting together

Dhanesh is on the left while I’m on the right.

We did have a cute 3-4 year old boy as our guest as well, but didn't get good pictures of him.

Lastly, the Debian cake itself

We also talked about the kernel and subsystem maintainers and how Linus is the overlord.

Look forward to comments. I am hoping Dhanesh might recollect some points that I might have missed/forgotten.

Update 07/07/17 – All programming language stats can be seen here


Filed under: Miscellenous Tagged: #alcohol humor, #co-hive, #crystal-gazing, #Debian bugs, #Debian cake, #Debian-release-party-9-pune, #live-installer, #moinmoin, #planet-debian, debian-installer, Pune

Patrick Matthäi: Bug in Stretch Linux vmxnet3 driver

Wed, 05/07/2017 - 6:21pm

Hey,

Am I the only one experiencing #864642 (vmxnet3: Reports suspect GRO implementation on vSphere hosts / one VM crashes)? Unfortunately I still have not got any answer to my report, so that I could help to track this issue down.

Help is welcome :)

Steve Kemp: I've never been more proud

Tue, 04/07/2017 - 11:00pm

This morning I remembered I had a beefy virtual-server setup to run some kernel builds upon (when I was playing with Linux security modules), and I figured before I shut it down I should use the power to run some fuzzing.

As I was writing some code in Emacs at the time I figured "why not fuzz emacs?"

After a few hours this was the result:

deagol ~ $ perl -e 'print "`" x ( 1024 * 1024 * 12);' > t.el
deagol ~ $ /usr/bin/emacs --batch --script ./t.el
..
..
Segmentation fault (core dumped)

Yup, evaluating a Lisp file caused a segfault, due to a stack overflow (no security implications). I've never been more proud, even though I have contributed code to GNU Emacs in the past.

Reproducible builds folks: Reproducible Builds: week 114 in Stretch cycle

Tue, 04/07/2017 - 2:49pm

Here's what happened in the Reproducible Builds effort between Sunday June 25 and Saturday July 1 2017:

Upcoming and past events

Our next IRC meeting is scheduled for July 6th at 17:00 UTC (agenda). Topics to be discussed include an update on our next Summit, a potential NMU campaign, a press release for buster, branding, etc.

Toolchain development and fixes
  • James McCoy reviewed and merged Ximin Luo's script debpatch into the devscripts Git repository. This is useful for rebasing our patches onto new versions of Debian packages.
Packages fixed and bugs filed

Ximin Luo uploaded dash, sensible-utils and xz-utils to the deferred uploads queue with a delay of 14 days. (We have had patches for these core packages for over a year now and the original maintainers seem inactive so Debian conventions allow for this.)

Patches submitted upstream:

Reviews of unreproducible packages

4 package reviews have been added, 4 have been updated and 35 have been removed this week, adding to our knowledge about identified issues.

One issue type has been updated:

One issue type has been added:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (68)
  • Daniel Schepler (1)
  • Michael Hudson-Doyle (1)
  • Scott Kitterman (6)
diffoscope development

tests.reproducible-builds.org
  • Vagrant Cascadian working on testing Debian:
    • Upgraded the 27 armhf build machines to stretch.
    • Fix mtu check to only display status when eth0 is present.
  • Helmut Grohne worked on testing Debian:
    • Limit diffoscope memory usage to 10GB virtual per process. It currently tends to use 50GB virtual, 36GB resident which is bad for everything else sharing the machine. (This is #865660)
  • Alexander Couzens working on testing LEDE:
    • Add multiple architectures / targets.
    • Limit LEDE and OpenWrt jobs to only allow one build at the same time using the jenkins build blocker plugin.
    • Move git log -1 > .html to node`document environment().
    • Update TODOs.
  • Mattia Rizzolo working on testing Debian:
    • Updated the maintenance script for postgresql-9.6.
    • Restored the postgresql database from backup after Holger accidentally purged it.
  • Holger Levsen working on:
    • Merging all the above commits.
    • Added a check for (known) Jenkins zombie jobs and reported them. (This is a long-known problem with Jenkins; deleted jobs sometimes come back…)
    • Upgraded the remaining amd64 nodes to stretch.
    • Accidentally purged postgres-9.4 from Jenkins, so we could test our backups.
    • Updated our stretch upgrade TODOs.
Misc.

This week's edition was written by Chris Lamb, Ximin Luo, Holger Levsen, Bernhard Wiedemann, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Foteini Tsiami: Internationalization, part two

Tue, 04/07/2017 - 10:35am

Now that sch-scripts has been renamed to ltsp-manager and translated to English, it was time to set up a proper project site for it in launchpad: http://www.launchpad.net/ltsp-manager

The following subpages were created for LTSP Manager there:

  • Code: a review of all the project code, which currently only includes the master git repository.
  • Bugs: a tracker where bugs can be reported. I already filed a few bugs there!
  • Translations: translators will use this to localize LTSP Manager to their languages. It’s not quite ready yet.
  • Answers: a place to ask the LTSP Manager developers for anything regarding the project.

We currently have an initial version of LTSP Manager running in Debian Stretch; although more testing and more bug reports will be needed before we start the localization phase. Attaching a first screenshot!

 


Arturo Borrero González: Netfilter Workshop 2017: I'm a new coreteam member!

Tue, 04/07/2017 - 10:00am

I was invited to attend the Netfilter Workshop 2017 in Faro, Portugal this week, so I’m here with all the folks enjoying some days of talks, discussions and hacking around Netfilter and general linux networking.

The coreteam of the Netfilter project, with active members Pablo Neira Ayuso (head), Jozsef Kadlecsik, Eric Leblond and Florian Westphal, has invited me to join them, and the appointment happened today.

You may contact me now at my new email address: arturo@netfilter.org

This is the result of my continued contribution to the Netfilter project over several years now (probably since 2012-2013). I'm really happy with this, and I appreciate their recognition. I will do my best in this new position. Thanks!

Regarding the workshop itself, we are having lots of interesting talks and discussions about the state of the Netfilter technology, open issues, missing features and where to go in the future.

Really interesting!

John Goerzen: Time, Frozen

Tue, 04/07/2017 - 5:00am

We’re expecting a baby any time now. The last few days have had an odd quality of expectation: any time, our family will grow.

It makes time seem to freeze, to stand still.

We have Jacob, about to start fifth grade and middle school. But here he is, still a sweet and affectionate kid as ever. He loves to care for cats and seeks them out often. He still keeps an eye out for the stuffed butterfly he’s had since he was an infant, and will sometimes carry it and a favorite blanket around the house. He will also many days prepare the “Yellow House News” on his computer, with headlines about his day and some comics pasted in — before disappearing to play with Legos for awhile.

And Oliver, who will walk up to Laura and “give baby a hug” many times throughout the day — and sneak up to me, try to touch my arm, and say “doink” before running off before I can “doink” him back. It was Oliver that had asked for a baby sister for Christmas — before he knew he’d be getting one!

In the past week, we’ve had out the garden hose a couple of times. Both boys will enjoy sending mud down our slide, or getting out the “water slide” to play with, or just playing in mud. The rings of dirt in the bathtub testify to the fun that they had. One evening, I built a fire, we made brats and hot dogs, and then Laura and I sat visiting and watching their water antics for an hour after, laughter and cackles of delight filling the air, and cats resting on our laps.

These moments, or countless others like Oliver’s baseball games, flying the boys to a festival in Winfield, or their cuddles at bedtime, warm the heart. I remember their younger days too, with fond memories of taking them camping or building a computer with them. Sometimes a part of me wants to just keep soaking in things just as they are; being a parent means both taking pride in children’s accomplishments as they grow up, and sometimes also missing the quiet little voice that can be immensely excited by a caterpillar.

And yet, all four of us are so excited and eager to welcome a new life into our home. We are ready. I can’t wait to hold the baby, or to lay her to sleep, to see her loving and excited older brothers. We hope for a smooth birth, for mom and baby.

Here is the crib, ready, complete with a mobile with a cute bear (and even a plane). I can’t wait until there is a little person here to enjoy it.

Antoine Beaupré: My free software activities, June 2017

Mon, 03/07/2017 - 6:37pm
Debian Long Term Support (LTS)

This is my monthly Debian LTS report. This time I worked on Mercurial, sudo and Puppet.

Mercurial remote code execution

I issued DLA-1005-1 to resolve problems with the hg server --stdio command that could be abused by "remote authenticated users to launch the Python debugger, and consequently execute arbitrary code, by using --debugger as a repository name" (CVE-2017-9462).

Backporting the patch was already a little tricky because, as is often the case in our line of work, the code had changed significantly in newer versions. In particular, the command-line dispatcher had been refactored, which made the patch non-trivial to port. On the other hand, Mercurial has an extensive test suite which allowed me to make those patches with confidence. I also backported a part of the test suite to detect certain failures better and to fix the output so that it matches the backported code. The test suite is slow, however, which meant slow progress when working on this package.

I also noticed a strange issue with the test suite: all hardlink operations would fail. Somehow it seems that my new sbuild setup doesn't support doing hardlinks. I ended up building a tarball schroot to build those types of packages, as it seems the issue is related to the use of overlayfs in sbuild. The odd part is that my tests of overlayfs, following those instructions, show that it does support hardlinks, so there may be something fishy here that I misunderstand.

This, however, allowed me to get a little more familiar with sbuild and schroots. I also took this opportunity to speed up the builds by installing an apt-cacher-ng proxy, which will also be useful for regular system updates.
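
For reference, the usual way to point builds at such a cache is an apt proxy snippet; a minimal sketch, assuming apt-cacher-ng listens on its default port 3142 on the same machine (how the chroots pick this up depends on the sbuild/schroot setup):

echo 'Acquire::http::Proxy "http://127.0.0.1:3142";' | sudo tee /etc/apt/apt.conf.d/02proxy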

Puppet remote code execution

I have issued DLA-1012-1 to resolve a remote code execution attack against puppetmaster servers, from authenticated clients. To quote the advisory: "Versions of Puppet prior to 4.10.1 will deserialize data off the wire (from the agent to the server, in this case) with a attacker-specified format. This could be used to force YAML deserialization in an unsafe manner, which would lead to remote code execution."

The fix was non-trivial. Normally, this would have involved fixing the YAML parsing, but this was considered problematic because the Ruby libraries themselves were vulnerable and it wasn't clear we could fix the problem completely by fixing YAML parsing. The update I proposed took the bold step of switching all clients to PSON and simply denying YAML parsing on the server. This means all clients need to be updated before the server can be updated, but thankfully, updated clients will run against an older server as well. Thanks to LeLutin at Koumbit for helping test the patches to solve this issue.

Sudo privilege escalation

I have issued DLA-1011-1 to resolve an incomplete fix for a privilege escalation issue (CVE-2017-1000368 from CVE-2017-1000367). The backport was not quite trivial as the code had changed quite a lot since wheezy as well. Whereas mercurial's code was more complex, it's nice to see that sudo's code was actually simpler and more straightforward in newer versions, which is reassuring. I uploaded the packages for testing and uploaded them a year later.

I also took extra time to share the patch in the Debian bugtracker, so that people working on the issue in stable may benefit from the backported patch, if needed. One issue that came up during that work is that sudo doesn't have a test suite at all, so it is quite difficult to test changes and make sure they do not break anything.

Should we upload on fridays?

I brought up a discussion on the mailing list regarding uploads on fridays. With the sudo and puppet uploads pending, it felt really ... daring to upload both packages, on a friday. Years of sysadmin work hardwired me to be careful on fridays; as the saying goes: "don't deploy on a friday if you don't want to work on the weekend!"

Feedback was great, but I was surprised to find that most people are not worried about those issues. I have tried to counter some of the arguments that were brought up: I wonder if there could be a disconnect here between the package maintainer / programmer work and the sysadmin work that is at the receiving end of it. Having had to deal with broken updates myself in the past, I'm surprised this has never come up in the discussions yet, or that the response is so underwhelming.

So far, I'll try to balance the need for prompt security updates and the need for stable infrastructure. One does not, after all, go without the other...

Triage

I also did small fry triage:

Hopefully some of those will come to fruition shortly.

Other work

My other work this month was a little all over the place.

Stressant

Uploaded a new release (0.4.1) of stressant to split up the documentation from the main package, as the main package was taking up too much space according to grml developers.

The release also introduces a limited anonymity option, by hiding serial numbers in the smartctl output.

Debiman

Also did some small followup on the debiman project to fix the FAQ links.

Local server maintenance

I upgraded my main server to Debian stretch. This generally went well, although the upgrade itself took way more time than I would have liked (4 hours!). This is partly because I have a lot of cruft installed on the server, but also because of what I consider to be issues in the automation of major Debian upgrades. For example, I was prompted for changes in configuration files at seemingly random moments during the upgrade, and got different debconf prompts to answer. These should really be batched together, and unfortunately I had forgotten to use the home-made script I established when I was working at Koumbit which shortens the upgrade a bit.

I wish we would improve on our major upgrade mechanism. I documented possible solutions for this in the AutomatedUpgrade wiki page, but I'm not sure I see exactly where to go from here.

I had a few regressions after the upgrade:

  • the infrared remote control stopped working: still need to investigate
  • my home-grown full-disk encryption remote unlocking script broke, but upstream has a nice workaround, see Debian bug #866786
  • gdm3 breaks bluetooth support (Debian bug #805414 - to be fair, this is not a regression in stretch, it's just that I switched my workstation from lightdm to gdm3 after learning that the latter can do rootless X11!)
Docker and Subsonic

I did my first (and late?) foray into Docker and containers. My rationale was that I wanted to try out Subsonic, an impressive audio server which some friends have shown me. Since Subsonic is proprietary, I didn't want it to contaminate the rest of my server and it seemed like a great occasion to try out containers to keep things tidy. Containers may also allow me to transparently switch to the FLOSS fork LibreSonic once the trial period is over.

I have learned a lot and may write more about the details of that experience soon, for now you can look at the contributions I made to the unofficial Subsonic docker image, but also the LibreSonic one.
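
For readers curious what such a containerized setup looks like in practice, here is a generic sketch; the image name and paths are placeholders, not the actual image referred to above:

docker run -d --name subsonic \
    -p 4040:4040 \
    -v /srv/music:/music:ro \
    -v /srv/subsonic-data:/data \
    example/subsonic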

Since Subsonic also promotes album covers as first-class citizens, I used beets to download a lot of album covers, which was really nice. I look forward to using beets more, but first I'll need to implement two plugins.

Wallabako

I did a small release of wallabako to fix the build with the latest changes in the underlying wallabago library, which led me to ask upstream to make versioned releases.

I also looked into creating a separate documentation site but it looks like mkdocs doesn't like me very much: the table of contents is really ugly...

Small fry

That's about it! And that was supposed to be a slow month...

Ben Hutchings: Debian LTS work, June 2017

Mon, 03/07/2017 - 6:11pm

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 5 hours. I worked all 20 hours.

I spent most of my time working - together with other Linux kernel developers - on backporting and testing several versions of the fix for CVE-2017-1000364, part of the "Stack Clash" problem. I uploaded two updates to linux and issued DLA-993-1 and DLA-993-2. Unfortunately the latest version still causes regressions for some applications, which I will be investigating this month.

I also released a stable update on the Linux 3.2 longterm stable branch (3.2.89) and prepared another (3.2.90) which I released today.

Vincent Bernat: Performance progression of IPv4 route lookup on Linux

Mon, 03/07/2017 - 3:25pm

TL;DR: Each of Linux 2.6.39, 3.6 and 4.0 brings notable performance improvements for the IPv4 route lookup process.

In a previous article, I explained how Linux implements an IPv4 routing table with compressed tries to offer excellent lookup times. The following graph shows the performance progression of Linux through history:

Two scenarios are tested:

  • 500,000 routes extracted from an Internet router (half of them are /24), and
  • 500,000 host routes (/32) tightly packed in 4 distinct subnets.

All kernels are compiled with GCC 4.9 (from Debian Jessie). This version is able to compile older kernels [1] as well as current ones. The kernel configuration used is the default one with CONFIG_SMP and CONFIG_IP_MULTIPLE_TABLES options enabled (however, no IP rules are used). Some other unrelated options are enabled to be able to boot them in a virtual machine and run the benchmark.

The measurements are done in a virtual machine with one vCPU [2]. The host is an Intel Core i5-4670K and the CPU governor was set to “performance”. The benchmark is single-threaded. Implemented as a kernel module, it calls fib_lookup() with various destinations in 100,000 timed iterations and keeps the median. Timings of individual runs are computed from the TSC (and converted to nanoseconds by assuming a constant clock).

The following kernel versions bring a notable performance improvement:

  • In Linux 2.6.39, commit 3630b7c050d9, David Miller removes the hash-based routing table implementation to switch to the LPC-trie implementation (available since Linux 2.6.13 as a compile-time option). This brings a small regression for the scenario with many host routes but boosts the performance for the general case.

  • In Linux 3.0, commit 281dc5c5ec0f, the improvement is not related to the network subsystem. Linus Torvalds disables the compiler size-optimization from the default configuration. It was believed that optimizing for size would help keeping the instruction cache efficient. However, compilers generated under-performing code on x86 when this option was enabled.

  • In Linux 3.6, commit f4530fa574df, David Miller adds an optimization to not evaluate IP rules when they are left unconfigured. From this point, the use of the CONFIG_IP_MULTIPLE_TABLES option doesn’t impact the performances unless some IP rules are configured. This version also removes the route cache (commit 5e9965c15ba8). However, this has no effect on the benchmark as it directly calls fib_lookup() which doesn’t involve the cache.

  • In Linux 4.0, notably commit 9f9e636d4f89, Alexander Duyck adds several optimizations to the trie lookup algorithm. It really pays off!

  • In Linux 4.1, commit 0ddcf43d5d4a, Alexander Duyck collapses the local and main tables when no specific IP rules are configured. For non-local traffic, both those tables were looked up.

  1. Compiling old kernels with an updated userland may still require some small patches

  2. The kernels are compiled with the CONFIG_SMP option to use the hierarchical RCU and activate more of the same code paths as actual routers. However, progress on parallelism are left unnoticed. 
