
Patching Firefox

Planet Debian - Thu, 05/10/2017 - 4:49 PM

At work, I help maintain a smartcard middleware that is provided to Belgian citizens who want to use their electronic ID card to, e.g., log on to government websites. This middleware is a piece of software that hooks into various browsers and adds a way to access the smartcard in question, through whatever APIs the operating system and the browser in question provide for that purpose. The details of how that is done differ between each browser (and in the case of Google Chrome, for the same browser between different operating systems); but for Firefox (and Google Chrome on free operating systems), this is done by way of a PKCS#11 module.

For Firefox 57, Mozilla decided to overhaul much of their browser. The changes are massive and in some ways revolutionary, so it's no surprise that some of them break compatibility with older things.

One of the areas in which breaking changes were made is extensions to the browser. Previously, Firefox had various APIs available to extensions; now, all APIs apart from the WebExtensions API are considered "legacy", and support for them is removed from Firefox 57 onwards.

Since installing a PKCS#11 module manually is a bit complicated, and since the legacy APIs provided a way to do so automatically (provided the user first installed an add-on, or the installer of the PKCS#11 module sideloaded one), most parties who provide a PKCS#11 module for use with Firefox also ship an add-on to install it. Since the alternative involves entering the right values in a dialog box hidden away somewhere deep in the preferences screen, the add-on option is much more user friendly.

I'm sure you can imagine my dismay when I found out that there was no WebExtensions API to provide the same functionality. So, after asking around a bit, I filed bug 1357391 to get a discussion started. While it took some convincing initially to get people to understand the reasons for wanting such an API, eventually the bug was assigned the "P5" priority -- essentially, a "we understand the need and won't block it, but we don't have the time to implement it. Patches welcome, though" statement.

Since having an add-on was something that work really wanted, and since I had the time, I got the go-ahead from management to look into implementing the required code myself. It became obvious rather quickly that my background in Firefox was fairly limited, though, and so I was assigned a mentor to help me through the process.

Having been a Debian Developer for the past fifteen years, I do understand how to develop free software. Yet the experience was different enough that I still learned some new things about free software development, which was somewhat unexpected.

Unfortunately, the process took much longer than I had hoped, which meant that the patch was not ready by the time Firefox 57 was branched off Mozilla's "central" repository. The result is that while my patch has been merged into what will eventually become Firefox 58, it strongly looks as though it won't make it into Firefox 57. That's going to cause some severe headaches, which I'm not looking forward to; and while I can certainly understand the reasons for not wanting to grant an exception for the merge into 57, I can't help feeling like this is a missed opportunity.
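
For the curious, the API that landed works through a native manifest describing the module to Firefox; an extension holding the pkcs11 permission can then call browser.pkcs11.installModule() with the manifest's name. A minimal sketch of such a manifest (the module name, library path and extension ID here are hypothetical):

{
  "name": "example_eid_module",
  "description": "Example PKCS#11 module",
  "type": "pkcs11",
  "path": "/usr/lib/libexample-pkcs11.so",
  "allowed_extensions": ["install-helper@example.org"]
}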

Anyway, writing code for the massive Open Source project that Mozilla is has been a load of fun, and in the process I've learned a lot -- not only about Open Source development in general, but also about this weird little thing that JavaScript is. That might actually be useful for this other project that I've got running here.

In closing, I'd like to thank Tomislav 'zombie' Jovanovic for mentoring me during the whole process, without whom it's doubtful that I would even have been ready by now. Apologies for any procedural mistakes I've made, and good luck in your future endeavours!

Wouter Verhelst https://grep.be/blog//pd/ pd

Tracking aircraft in real-time, via software-defined-radio

Planet Debian - Wed, 04/10/2017 - 11:00 PM

So my last blog-post was about creating a digital-radio, powered by an ESP8266 device; there's a joke there about wireless-control of a wireless. I'm not going to make it.

Sticking with a theme this post is also about radio, software-defined radio. I know almost nothing about SDR, except that it can be used to let your computer "do stuff" with radio. The only application I've ever read about that seemed interesting was tracking aircraft.

This post is about setting up a Debian GNU/Linux system to do exactly that: show aircraft in real-time above your head! This was almost painless to set up.

  • Buy the hardware.
  • Plug in the hardware.
  • Confirm it is detected.
  • Install the appropriate SDR development package(s).
  • Install the magic software.
    • Written by @antirez, no less, you know it is gonna be good!

So I bought this USB device from AliExpress for the grand total of €8.46. I have no idea if that URL is stable, but I suspect it is probably not. Good luck finding something similar if you're living in the future!

Once I connected the antenna to the USB stick and inserted it into a spare USB slot, it showed up in the output of lsusb:

$ lsusb
..
Bus 003 Device 043: ID 0bda:2838 Realtek Semiconductor Corp. RTL2838 DVB-T
..

In more detail I can see the vendor and product IDs:

idVendor           0x0bda Realtek Semiconductor Corp.
idProduct          0x2838 RTL2838 DVB-T

So far, so good. I installed the development headers/libraries I needed:

# apt-get install librtlsdr-dev libusb-1.0-0-dev

Once that was done I could clone antirez's repository, and build it:

$ git clone https://github.com/antirez/dump1090.git
$ cd dump1090
$ make

And run it:

$ sudo ./dump1090 --interactive --net

This failed initially as a kernel-module had claimed the device, but removing that was trivial:

$ sudo rmmod dvb_usb_rtl28xxu
$ sudo ./dump1090 --interactive --net

Once it was running I'd see live updates on the console, every second:

Hex     Flight   Altitude  Speed  Lat      Lon      Track  Messages  Seen
--------------------------------------------------------------------------------
4601fc           14200     0       0.000    0.000     0      11      1 sec
4601f2            9550     0       0.000    0.000     0      58      0 sec
45ac52  SAS1716   2650     177    60.252   24.770    47      26      1 sec

And opening a browser pointing at http://localhost:8080/ would show that graphically, like so:

NOTE: In this view I'm in Helsinki, and the airport is at Vantaa, just outside the city.

Of course there are tweaks to be made:

  • With the right udev-rules in place it is possible to run the tool as non-root, and to blacklist the default kernel module; see the sketch just after this list.
  • There are other forks of the dump1090 software that are more up-to-date to explore.
  • SDR can do more than track planes.
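
A sketch of what that first tweak can look like (untested here; the IDs are the ones lsusb reported above, and the file names are arbitrary):

# /etc/udev/rules.d/20-rtlsdr.rules -- let members of plugdev use the stick
SUBSYSTEM=="usb", ATTRS{idVendor}=="0bda", ATTRS{idProduct}=="2838", MODE="0660", GROUP="plugdev"

# /etc/modprobe.d/blacklist-rtlsdr.conf -- keep the DVB-T driver from claiming it
blacklist dvb_usb_rtl28xxu
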
Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

F/LOSS (in)activity, September 2017

Planet Debian - Wed, 04/10/2017 - 2:53 PM

In the interests of keeping myself "honest" regarding F/LOSS activity, here's a report; sadly it's not a very good one.

Unfortunately, September was a poor month for me in terms of motivation and energy for F/LOSS work. I did some amount of Gitano work, merging a patch from Richard Ipsum for the help text of the config command. I also submitted another patch to the STM32F103xx Rust repository, though it wasn't a particularly big thing. Otherwise I've been relatively quiet on the Rust/USB stuff and have kept away from other projects.

Sometimes one needs to take a step away from things in order to recuperate and care for oneself rather than the various demands on one's time. This is something I had been feeling I needed for a while, and with a lack of motivation toward the start of the month I gave myself permission to take a short break.

Next weekend is the next Gitano developer day and I hope to pick up my activity again then, so I should have more to report for October.

Daniel Silverstone http://blog.digital-scurf.org/ Digital-Scurf Ramblings

MAC Catching

Planet Debian - Wed, 04/10/2017 - 10:00 AM

As we walk around with mobile phones in our pockets, we carry multiple radios, each with identifiers that can be captured and recorded just through their normal operation. Bluetooth and WiFi devices have MAC addresses and can advertise their presence to other devices merely by sending traffic, or by probing for devices to connect to if they're not connected.

I found a simple tool, probemon, that allows anyone with a wifi card to track who is at which location at any given time. You could deploy a few of these with Raspberry Pis, or go even cheaper with a number of ESP8266s.

In the news recently was a report about TfL's WiFi data collection. Sky News reported that TfL "plans to make £322m by collecting data from passengers' mobiles". TfL has since denied this, but the fact remains that collecting this data is trivial.

I’ve been thinking about ideas for spoofing mass amounts of wireless devices making the collected data useless. I’ve found that people have had success in using Scapy to forge WiFi frames. When I have some free time I plan to look into some kind of proof-of-concept for this.

On the underground, WiFi is the way to do this, but above ground I've also heard of systems that use the TMSI from 3G/4G, rather than WiFi data, to identify mobile phones. You'll have to be rather more brave if you want to forge these (please don't: unless you are using alternative licensed frequencies, you may interfere with mobile service and prevent 999 calls).

If you want to spy on mobile phones near you, you can do this with the gr-gsm package, now available in Debian.

Iain R. Learmonth https://iain.learmonth.me/tags/planet-debian/ Planet Debian on Iain R. Learmonth

Einstein and Freud’s letters on “Why War?” – 85th anniversary

Planet Debian - Wed, 04/10/2017 - 5:57 AM

85 years ago, on 30 July 1932, Albert Einstein sent a letter to Sigmund Freud discussing the question: Why War? Freud answered the letter in early September 1932. To commemorate the 85th anniversary, the German typographer Harald Geisler started a project on Kickstarter to recreate the letters sent back then. Over the last weeks the two letters have arrived at my place in Japan:

But not only were the letters reproduced, they were typeset in the original digitized handwriting and sent from the original locations. Harald Geisler crafted fonts based on the handwriting of Einstein and Freud, and laid out the pages of the letters according to the originals. Since the letters were originally written in German, an English translation, also typeset in the handwriting fonts, was added.

In addition to Freud's somewhat archaic handwriting, which even many German natives will not be able to read, the German text of his letter has been included in normal print style. Not only that, Harald Geisler even managed to convince the Sigmund Freud Museum to let the letters rest for one night in the very office where the original letter was written, so all the letters sent out actually came from Freud's office.

This project was one of the first Kickstarter projects I supported, and I really liked the idea; I would like to thank Harald Geisler for realizing it. These kinds of activities, combining typography, history, action and dedication, keep our culture and history alive. Thanks.

Harald Geisler also invites us all to continue the dialog on Why War?, which is getting more and more relevant again, with war-mongering becoming respected practice.

Norbert Preining https://www.preining.info/blog There and back again

RProtoBuf 0.4.11

Planet Debian - Wed, 04/10/2017 - 2:28 AM

RProtoBuf provides R bindings for the Google Protocol Buffers ("ProtoBuf") data encoding and serialization library used and released by Google, and deployed fairly widely in numerous projects as a language and operating-system agnostic protocol.

A new release, RProtoBuf 0.4.11, appeared on CRAN earlier today. Not unlike the other recent releases, it is mostly a maintenance release which switches two of the vignettes over to using the pinp package and its template for vignettes.

Changes in RProtoBuf version 0.4.11 (2017-10-03)
  • The RProtoBuf-intro and RProtoBuf-quickref vignettes were converted to Rmarkdown using the templates and style file from the pinp package.

  • A few minor internal upgrades

CRANberries also provides a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the 'quick' overview vignette, a unit test summary vignette, and the pre-print for the JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Observations on Catalunya

Planet Debian - Wed, 04/10/2017 - 12:17 AM

Some things I don't really understand when reading German media

  • Suddenly the electoral system becomes a legitimacy problem. While it was never a problem for any of the previous decisions of the Catalunyan regional government, suddenly "only 48% of people voted for the government" makes its decisions illegitimate? This is also a property of many governments (Greece and the US president being obvious examples, but the German Bundestag can also have a majority government without a majority of the votes). Is this just the media trying to find something they can blame on "the other side"?

  • How can you ever possibly excuse violence against people peacefully and non-violently doing whatever they're doing? Sure, this referendum was considered illegal (and it may be legitimate to ignore the result, or to legally prosecute the initiators), but how can that ever possibly be an excuse for violence against half a population peacefully doing whatever they were about to do? How can you possibly claim that "both sides are to blame" for it? "Die Zeit" seems to be the only one with a somewhat convincing argument ("deciding to press on despite the obviously happening violence"), while "Welt", "Spiegel" and "Süddeutsche" all try to blame the regional government for the violence, with as much of an argument as having asked people to do something illegal in a totally peaceful way. Possibly an argument for legal consequences, sure -- but for violence?

Too bad I didn't keep the links / articles from Sunday night.

Christoph Egger https://weblog.siccegge.de/ Christoph's last Weblog entries

Reproducible Builds: Weekly report #127

Planet Debian - Tue, 03/10/2017 - 8:15 PM

Here's what happened in the Reproducible Builds effort between Sunday September 24 and Saturday September 30 2017:

Development and fixes in key packages

Kai Harries did an initial packaging of the Nix package manager for Debian. You can track his progress in #877019.

Uploads in Debian:

Packages reviewed and fixed, and bugs filed

Patches sent upstream:

Reproducible bugs (with patches) filed in Debian:

QA bugs filed in Debian:

Reviews of unreproducible packages

103 package reviews have been added, 153 have been updated and 78 have been removed this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (177)
  • Andreas Beckmann (2)
  • Daniel Schepler (1)
diffoscope development

Mattia Rizzolo uploaded version 87 to stretch-backports.

  • Holger Levsen:
    • Bump standards version to 4.1.1, no changes needed.
strip-nondeterminism development
  • Holger Levsen:
    • Bump Standards-Version to 4.1.1, no changes needed.
reprotest development
  • Ximin Luo:
    • New features:
      • Add a --env-build option for testing different env vars. (In-progress, requires the python-rstr package awaiting entry into Debian.)
      • Add a --source-pattern option to restrict copying of source_root.
    • Usability improvements:
      • Improve error messages in some common scenarios.
      • Output hashes after a successful --auto-build.
      • Print a warning message if we reproduced successfully but didn't vary everything.
      • Update examples in documentation.
    • Have dpkg-source extract to different build dir iff varying the build-path.
    • Pass --debug to diffoscope if verbosity >= 2.
    • Pass --exclude-directory-metadata to diffoscope(1) by default.
    • Much refactoring to support the other work and several minor bug fixes.
  • Holger Levsen:
    • Bump standards version to 4.1.1, no changes needed.
tests.reproducible-builds.org
  • Holger Levsen:
    • Fix scheduler to not send empty scheduling notifications in the rare cases nothing has been scheduled.
    • Fix colors in 'amount of packages build each day on $ARCH' graphs.
reproducible-website development
  • Holger Levsen:
    • Fix up HTML syntax
    • Announce that RWS3 will happen at Betahaus, Berlin
Misc.

This week's edition was written by Ximin Luo, Bernhard M. Wiedemann, Holger Levsen and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible.alioth.debian.org/blog/ Reproducible builds blog

Another Xor (CSAW 2017)

Planet Debian - Tue, 03/10/2017 - 6:40 PM

A short while ago, FAUST participated in this year's CSAW qualification and -- as usual -- I was working on the Crypto challenges again. The first puzzle I worked on was called "Another Xor" and, while there are quite some write-ups already, our solution was somewhat different (maybe even the intended solution, given how nicely things worked out) and certainly interesting.

The challenge provides a cipher-text. It's essentially a stream cipher, with the key repeated to generate the key stream. The plain-text was plain + key + checksum.

p = this is a plaintextThis is the keyfa5d46a2a2dcdeb83e0241ee2c0437f7
k = This is the keyThis is the keyThis is the keyThis is the keyThis i

Key length

Our first step was figuring out the key length. Let's assume for now the key was This is the key. Notice that the key is also part of the plain-text and we know something about its location -- it ends 32 characters from the end. If we only take a look at the encrypted key it should have the following structure:

p' = This is the key
k' = he keyThis is t

The thing to notice here is that every character of the key appears both in the plain-text and in the key-stream sequence, and the cipher-text is the XOR (⊕) of both. Therefore the XOR over the cipher-text span encrypting the key should equal 0 (⊕(p') ⊕ ⊕(k') = 0). So remove the last 32 characters and find all suffixes that result in a XOR of 0. Fortunately there is exactly one such suffix (there could have been multiple), and therefore we know the key size: 67.

To put it in code, this basically is the function we implemented for this:

def calculate(ciphertextcandidate):
    accumulator = 0
    for char in ciphertextcandidate:
        accumulator = accumulator ^ char
    return accumulator

Which, for the matching plain-text and key-stream fragments is equal (due to the XOR encryption) to

def calculate(plainfragment, keyfragment):
    accumulator = 0
    for i in range(len(plainfragment)):
        accumulator = accumulator ^ (plainfragment[i] ^ keyfragment[i])
    return accumulator

Now XOR lets us nicely reorder this to

def calculate(plainfragment, keyfragment):
    accumulator = 0
    for i in range(len(plainfragment)):
        accumulator = accumulator ^ (plainfragment[i] ^ keyfragment[(i + 6) % len(plainfragment)])
    return accumulator

And, as plainfragment[i] and keyfragment[(i + 6) % len(plainfragment)] are equal for the plain-text range encoding the key this becomes

def calculate(plainfragment, keyfragment):
    accumulator = 0
    for i in range(len(plainfragment)):
        accumulator = accumulator ^ 0
    return accumulator

Or simply 0 if the guess of the cipher-text range is correct.

Key recovery

Now the nice thing to notice is that the length of the key (67) is a prime (and 38, the plain-text length, is a generator). As a result, we only need to guess one byte of the key:

Assume you know one byte of the key (and its position). Now you can use that one byte of the key to decrypt the next byte of the key (using the area where the key is part of the plain-text). Due to the primeness of the key length this allows recovery of the full key.

Finally you can either print all 256 options and look for the one that looks reasonable, or you can verify the md5sum, which will give you the one valid solution: flag{sti11_us3_da_x0r_for_my_s3cratz}.
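
The solution below uses two helpers, xor and repeat, that the write-up does not show; definitions consistent with how they are called (plus the md5 import) might look like this:

from hashlib import md5

def xor(a, b):
    # XOR two byte strings, truncated to the shorter of the two
    return bytes(x ^ y for x, y in zip(a, b))

def repeat(s, n):
    # repeat byte string s and cut it to exactly n bytes
    return (s * (n // len(s) + 1))[:n]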

Code

cipher = b"'L\x10\x12\x1a\x01\x00I[P-U\x1cU\x7f\x0b\x083X]\x1b'\x03\x0bR(\x04\r7SI\n\x1c\x02T\x15\x05\x15%EQ\x18\x00\x19\x11SJ\x00RV\n\x14YO\x0b\x1eI\n\x01\x0cE\x14A\x1e\x07\x00\x14aZ\x18\x1b\x02R\x1bX\x03\x05\x17\x00\x02\x07K\n\x1aLAM\x1f\x1d\x17\x1d\x00\x15\x1b\x1d\x0fH\x0eI\x1e\x02I\x01\x0c\x15\x00P\x11\\PXPCB\x03B\x13TBL\x11PC\x0b^\tM\x14IW\x08\rDD%FC"

def keycover(guess):
    key = dict()
    pos = 38
    key[38] = guess
    for i in range(67):
        newpos = (pos % 67) + 38
        key[newpos] = xor(cipher[pos:], key[pos])
        pos = newpos
    try:
        return b''.join([ key[i] for i in range(38, 105, 1) ])
    except:
        return b'test'

for guess in range(256):
    keycand = keycover(bytes([guess]))
    plaincand = xor(cipher, repeat(keycand, len(cipher)))
    if md5(plaincand[:-32]).hexdigest().encode() == plaincand[-32:]:
        print(keycand, plaincand)

Christoph Egger https://weblog.siccegge.de/ Christoph's last Weblog entries

Looking for a mail program + desktop environment

Planet Debian - Tue, 03/10/2017 - 5:16 PM

It seems it is now almost a decade since I migrated from Thunderbird to GNUS. And GNUS is an awesome mail program that I still rather like. However GNUS is also heavily quirky. It's essentially single-threaded and synchronous, which means you either have to wait for the "IMAP check for new mails" to finish or you have to C-g abort it if you want the user interface to work; you have to wait for the "Move mail" to complete (which can take a while -- especially with dovecot-antispam training the filter) before you can continue working. It has its funny ways around TLS and certificate validation. And it seems to hang from time to time until it is C-g interrupted.

So when I set up my new desktop machine I decided to try something else. My first try was claws-mail, which seems OK but totally fails in the asynchronous area. While the GUI stays reactive, all actions that require IMAP interaction become incredibly slow when a background IMAP refresh is running. I do have quite a few mailboxes, and waiting the 5+ minutes after opening claws, or whenever it decides to do a refresh, is just too much.

Now my last try has been Kmail -- also driven by the idea of having a more integrated setup with CalDAV and CardDAV around, and similar goodies. And Kmail really compares nicely to claws in many ways. After all, I can use it while it's doing its things in the background. However, the KDE folks seem to have dropped all support for the \recent IMAP flag, which I heavily rely on. I do -- after all -- keep a GNUS-like workflow where all unread mail (ref \seen) still needs to be acted upon, which means there can easily be quite a few unread messages when I'm busy; just having a quick look at the new (ref \recent) mail to see if there's something super-urgent is essential.
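
For anyone unfamiliar with the flags: \Recent marks messages that have arrived since the mailbox was last opened, while \Seen marks messages that have been read. A quick sketch with Python's imaplib (server and credentials are hypothetical) shows the distinction:

import imaplib

conn = imaplib.IMAP4_SSL('imap.example.org')
conn.login('user', 'secret')
conn.select('INBOX', readonly=True)         # read-only EXAMINE keeps \Recent intact
typ, recent = conn.search(None, 'RECENT')   # new since the mailbox was last opened
typ, unread = conn.search(None, 'UNSEEN')   # everything not yet read
print('recent:', recent[0].split())
print('unread:', unread[0].split())
conn.logout()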

So I'm now looking for useful suggestions for a mail program (ideally with desktop integration) with the following essential features:

  • It stays usable at all times -- which means smarter queuing than claws -- so foreground actions are not delayed by any background task the mail program might be up to and tasks like moving mail are handled in the background.
  • Decent support for filtering. Apart from some basic stuff I need shortcut filtering for \recent mail.
  • Option to hide \seen mail (and ideally hide all folders that only contain \seen mail). Hopefully toggle-able by some hotkey. "Age in days" would be an acceptable approximation, but Kmail doesn't seem to allow that in search (it's available as a filter though).
Christoph Egger https://weblog.siccegge.de/ Christoph's last Weblog entries

An interesting bug - network-manager, glibc, dpkg-shlibdeps, systemd, and finally binutils

Planet Debian - Tue, 03/10/2017 - 3:27 PM
Not so long ago I went to effectively recompile NetworkManager and fix up a minor bug in it. It built fine across all architectures, was considered installable, etc., and I was expecting it to just migrate across. At the time, glibc was at 2.26 in artful-proposed and NetworkManager was built against it. However, the release pocket was at glibc 2.24. In Ubuntu we have a ProposedMigration process in place which ensures that newly built packages do not regress in the number of architectures built for; are installable; and do not regress themselves or any reverse dependencies at runtime.

Thus before my build of NetworkManager was considered for migration, it was tested in the release pocket against packages in the release pocket. Specifically, since the package metadata only requires glibc 2.17, NetworkManager was tested against the glibc currently in the release pocket, which should just work fine....
autopkgtest [21:47:38]: test nm: [-----------------------
test_auto_ip4 (__main__.ColdplugEthernet)
ethernet: auto-connection, IPv4 ... FAIL ----- NetworkManager.log -----
NetworkManager: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.25' not found (required by NetworkManager)

At first I only saw failing tests, which I thought were transient failures, thus they were retried a few times. Then I looked at the autopkgtest log and saw the above error messages. Perplexed, I started an lxd container with ubuntu artful, enabled proposed, and installed just network-manager from artful-proposed; indeed, a simple `NetworkManager --help` failed with the above error from the linker.
I am too young to know what dependency hell means, since for as long as I have used Linux (starting with Ubuntu 7.04) all glibc symbols have been versioned, and dpkg-shlibdeps would generate correct minimum dependencies for a package. Alas, in this case readelf confirmed that /usr/sbin/NetworkManager does indeed require 2.25, while the dpkg dependency says >= 2.17.
Reading the readelf output further, I checked that all of the glibc symbols used are 2.17 or lower, and only the "Version needs section '.gnu.version_r'" referenced a GLIBC_2.25 symbol version. Inspecting the dpkg-shlibdeps code I noticed that it does not parse that section, and only searches through the dynamic symbols used to establish the minimum required version.
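For the record, this kind of mismatch is easy to see by hand; something along these lines contrasts the version references with the versioned symbols actually used:

$ readelf -V /usr/sbin/NetworkManager | grep GLIBC_2.25          # version needs (.gnu.version_r)
$ readelf --dyn-syms /usr/sbin/NetworkManager | grep GLIBC_2.25  # symbols actually referenced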
Things started to smell fishy. On one hand, I trust dpkg-shlibdeps to generate the right dependencies. On the other hand I also trust linker to not tell lies either. Hence I opened a Debian BTS bug report about this issue.
At this point, I really wanted to figure out where the reference to 2.25 came from. Clearly it was not from any private symbols, as then the reference would be on 2.26. Checking the glibc abi lists I found there were only a handful of symbols marked as 2.25:

$ grep 2.25 ./sysdeps/unix/sysv/linux/x86_64/64/libc.abilist
GLIBC_2.25 GLIBC_2.25 A
GLIBC_2.25 __explicit_bzero_chk F
GLIBC_2.25 explicit_bzero F
GLIBC_2.25 getentropy F
GLIBC_2.25 getrandom F
GLIBC_2.25 strfromd F
GLIBC_2.25 strfromf F
GLIBC_2.25 strfroml F

Blindly grepping for these in the network-manager source tree I found the following:

$ grep explicit_bzero -r configure.ac src/
configure.ac: explicit_bzero],
src/systemd/src/basic/string-util.h:void explicit_bzero(void *p, size_t l);
src/systemd/src/basic/string-util.c:void explicit_bzero(void *p, size_t l) {
src/systemd/src/basic/string-util.c:        explicit_bzero(x, strlen(x));

First of all, it seems that network-manager includes a partial embedded copy of systemd. Secondly, that code is compiled into a temporary library and has autoconf detection logic to use explicit_bzero. It also has an embedded implementation of explicit_bzero for when it is not available in libc; however, it does not have the FORTIFY_SOURCE implementation of said function (__explicit_bzero_chk), as was later pointed out to me. And whilst this function is compiled into an intermediary noinst library, no functions that use explicit_bzero end up being used by the NetworkManager binary. To prove this, I dropped all code that uses explicit_bzero, rebuilt the package against glibc 2.26, and voilà: it only had a version reference on glibc 2.17, as expected from the end-result usage of shared symbols.
At this point a toolchain bug was the suspect: it seems that whilst the explicit_bzero shared symbol got optimised out, the version reference on 2.25 persisted in the linked binaries. At the time, a snapshot version of binutils was in use in the archive, and in fact forcefully downgrading binutils resulted in a correct compilation / versions table referencing only glibc 2.17.
Matthias then took over a tarball of object files and filed an upstream bug report against binutils: "[2.29 Regression] ld.bfd keeps a version reference in .gnu.version_r for symbols which are optimized out". The discussion in that bug report is a bit beyond me, as to me binutils is black magic. All I understood there was "we moved sweep and pass to another place due to some bugs"; doing that introduced this bug, thus do multiple sweeps and passes to make sure we fix old bugs and don't regress this either. Or something like that. Comments / better descriptions of the binutils fix are welcome.
Binutils got fixed by the upstream developers, the fix was cherry-picked into Debian and Ubuntu, network-manager got rebuilt, and everything is wonderful now. However, it does look like unused / dead-end code paths tripped up optimisations in the toolchain, which managed to slip by distribution package dependency generation and needlessly require a newer version of glibc. I guess the lesson here is: do not embed/compile unused code. Also, I'm not sure why network-manager uses networkd internals like this; maybe systemd should expose more APIs or serialise more state into /run, as most other things query things over dbus, a private socket, or by establishing watches on /run/systemd/netif. I'll look into that another day.
Thanks a lot to Guillem Jover, Matthias Klose, Alan Modra, H.J. Lu, and others for getting involved. I would not have been able to raise, debug, or fix this issue all by myself.

Dimitri John Ledkov noreply@blogger.com Surgut

Facebook Lies

Planet Debian - Tue, 03/10/2017 - 2:00 PM

In the past, I had a Facebook account. Long ago I “deleted” this account through the procedure outlined on their help pages. In theory, 14 days after I used this process my account would be irrevocably gone. This was all lies.

My account was not deleted and yesterday I received an email:

Screenshot of the email I received from Facebook

It took me a moment to figure it out, but what had happened is that someone had logged into my Facebook account using my email address and password. Facebook simply reactivated the account, which had not had its data deleted, as if I had logged in.

This was possible because:

  1. Facebook was clinging to the hope that I would like to return
  2. The last time I used Facebook I didn’t know what a password manager was and was using the same password for basically everything

When I logged back in, all I needed to provide to prove I was me was my date of birth. Given that old Facebook passwords are readily available from dumps (people think their accounts are gone, so why should they be changing their passwords?) and my date of birth is not secret either, this is not great.

I followed the deletion procedure again and in 2 weeks (you can’t immediately request deletion apparently) I’ll check to see if the account is really gone. I’ve updated the password so at least the deletion process can’t be interrupted by whoever has that password (probably lots of people - it’ll be in a ton of dumps where databases have been hacked).

If it’s still not gone, I hear you can just post obscene and offensive material until Facebook deletes you. I’d rather not have to take that route though.

If you’re interested to see if you’ve turned up in a hacked database dump yourself, I would recommend hibp.

Update (2017-10-04): Thanks for all the comments. Sorry I haven’t been able to reply to all of them. Discussion around this post occured at Hacker News if you would like to read more there. You can also read about a similar, and more frustrating, case that came up in the HN discussion.

Iain R. Learmonth https://iain.learmonth.me/tags/planet-debian/ Planet Debian on Iain R. Learmonth

My free software activities, September 2017

Planet Debian - Tue, 03/10/2017 - 1:55 AM
Debian Long Term Support (LTS)

This is my monthly Debian LTS report. I mostly worked on the git, git-annex and ruby packages this month but didn't have time to completely use my allocated hours because I started too late in the month.

Ruby

I was hoping someone would pick up the Ruby work I submitted in August, but it seems no one wanted to touch that mess, understandably. Since then, new issues came up, and not only did I have to work on the rubygems and ruby1.9 package, but now the ruby1.8 package also had to get security updates. Yes: it's bad enough that the rubygems code is duplicated in one other package, but wheezy had the misfortune of having two Ruby versions supported.

The Ruby 1.9 package also failed to build from source because of test suite issues, for which I haven't found a clean and easy fix, so I ended up making test suite failures non-fatal in 1.9, as they already were in 1.8. I did keep a close eye on changes in the test suite output to make sure tests introduced in the security fixes would pass and that I wouldn't introduce new regressions either.

So I published the following advisories:

  • ruby 1.8: DLA-1113-1, fixing CVE-2017-0898 and CVE-2017-10784. 1.8 doesn't seem affected by CVE-2017-14033, as the provided test does not fail (but it does fail in 1.9.1). The test suite was, before the patch:

    2199 tests, 1672513 assertions, 18 failures, 51 errors

    and after patch:

    2200 tests, 1672514 assertions, 18 failures, 51 errors
  • rubygems: uploaded the package prepared in August as-is in DLA-1112-1, fixing CVE-2017-0899, CVE-2017-0900, CVE-2017-0901. Here the test suite passed normally.

  • ruby 1.9: here I used the 2.2.8 release tarball to generate a patch that would cover all issues, and published DLA-1114-1, which fixes the CVEs of the two packages above. The test suite was, before the patches:

    10179 tests, 2232711 assertions, 26 failures, 23 errors, 51 skips

    and after patches:

    10184 tests, 2232771 assertions, 26 failures, 23 errors, 53 skips
Git

I also quickly issued an advisory (DLA-1120-1) for CVE-2017-14867, an odd issue affecting git in wheezy. The backport was tricky because the patch wouldn't apply cleanly, and the git package had a custom patching system which made it awkward to work on.

Git-annex

I did a quick stint on git-annex as well: I was able to reproduce the issue and confirm an approach to fixing it in wheezy, although I didn't have time to complete the work before the end of the month.

Other free software work

New project: feed2exec

I should probably make a separate blog post about this, but ironically, I don't want to spend too much time writing those reports, so this will be quick.

I wrote a new program, called feed2exec. It's basically a combination of feed2imap, rss2email and feed2tweet: it allows you to fetch RSS feeds and send them to a mailbox, but what's special about it, compared to the other programs above, is that it is more generic: you can basically make it do whatever you want on new feed items. I have, for example, replaced my feed2tweet instance with it, using this simple configuration:

[anarcat]
url = https://anarc.at/blog/index.rss
output = feed2exec.plugins.exec
args = tweet "%(title)0.70s %(link)0.70s"

The sample configuration file also has examples to talk with Mastodon, Pump.io and, why not, a torrent server to download torrent files available over RSS feeds. A trivial configuration can also make it work as a crude podcast client. My main motivation to work on this was that it was difficult to extend feed2imap to do what I needed (which was to talk to transmission to download torrent files) and rss2email didn't support my workflow (which is delivering to feed-specific mail folders). Because both projects also seemed abandoned, it seemed like a good idea at the time to start a new one, although the rss2email community has now restarted the project and may produce interesting results.

As an experiment, I tracked my time working on this project. It turns out it took about 45 hours to write that software. Considering feed2exec is about 1400 SLOC, that's 30 lines of code per hour. I don't know if that's slow or fast, but it's an interesting metric for future projects. It sure seems slow to me, but we need to keep in mind that those 30 lines of code don't include documentation and repeated head-banging on the keyboard. For example, I found two issues with the upstream feedparser package, which I use to parse feeds and which also seems unmaintained, unfortunately.

Feed2exec is beta software at this point, but it's working well enough for me and the design is much simpler than the other programs of its kind. The main issues people can expect from it at this point are formatting issues or parse errors on exotic feeds, and noisy error messages on network errors, all of which should be fairly easy to fix in the test suite. I hope it will be useful for the community and, as usual, I welcome contributions, help and suggestions on how to improve the software.

More Python templates

As part of the work on feed2exec, I cleaned up a few things in the ecdysis project, mostly to hook tests up in the CI, improve the advancedConfig logger and clean up more stuff.

While I was there, it turned out that I had built a pretty decent basic CI configuration for Python on GitLab. Whereas the previous templates only had a non-working Django example, you should now be able to choose a Python template when you configure CI on GitLab 10 and above, which should hook you up with normal Python setup procedures like setup.py install and setup.py test.
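
The resulting configuration boils down to something like this sketch (the template GitLab actually ships may differ in its details):

image: python:latest

test:
  script:
    - python setup.py install
    - python setup.py test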

Selfspy

I mentioned working on a monitoring tool in my last post, because it was a feature from Workrave missing in SafeEyes. It turns out there is already such a tool, called selfspy. I did an extensive review of the software to make sure it wouldn't leak confidential information before using it, and it looks, well... kind of okay. It crashed on me at least once so far, which is too bad because then it loses track of the precious activity. I have used it at least once to figure out what the heck I worked on during the day, so it's pretty useful. I particularly used it to backtrack my work on feed2exec, as I didn't originally track my time on the project.

Unfortunately, selfspy seems unmaintained. I have proposed a maintenance team and hopefully the project maintainer will respond and at least share access so we don't end up in a situation like linkchecker. I also sent a bunch of pull requests to fix some issues, like being secure by default and fixing the build. Apart from the crash, the main issue I have found with the software is that it doesn't detect idle time, which means certain apps are disproportionately represented in the statistics. There are also some weaknesses in the crypto that should be addressed for people who encrypt their database.

The next step is to package selfspy in Debian, which should hopefully be simple enough...

Restic documentation security

As part of a documentation patch on the Restic backup software, I improved on my previous Perl script to snoop on process commandline arguments. A common flaw in shell scripts and cron jobs is to pass secret material in the environment (usually safe) but often through commandline arguments (definitely not safe). The challenge, in this peculiar case, was the env binary; the last time I encountered such an issue was with the Drush commandline tool, which was passing database credentials in the clear to the mysql binary. Using my Perl sniffer, I could get to 60 checks per second (or 60 Hz). After reimplementing it in Python, this number went up to 160 Hz, which still wasn't enough to catch the elusive env command, which is much faster at hiding arguments than MySQL, in large part because it simply does an execve() once the environment is set up.

Eventually, I just went crazy and rewrote the whole thing in C, which was able to get 700-900 Hz and did catch the env command about 10-20% of the time. I could probably have rewritten this by simply walking /proc myself (since this is what all those libraries do in the end) to get better results, but by then my point was made. I was able to prove to the restic author the security issues that warranted the warning. It's too bad I need to repeat this again and again, but at least my tools are getting better at proving the issue... I suspect it's not the last time I'll have to deal with this, and I am happy to think that I can come up with an even more efficient proof-of-concept tool the next time around.
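
The tools themselves aren't included in the post; the polling approach is simple enough that a minimal Python sketch (busy-looping, so it will happily eat a core) conveys the idea:

import os

def snoop(pattern):
    """Poll /proc and print command lines containing pattern."""
    seen = set()
    while True:
        for pid in os.listdir('/proc'):
            if not pid.isdigit() or pid in seen:
                continue
            try:
                with open('/proc/%s/cmdline' % pid, 'rb') as f:
                    cmdline = f.read().replace(b'\0', b' ')
            except OSError:
                continue  # the process exited between listdir() and open()
            if pattern in cmdline:
                seen.add(pid)  # naive: PIDs do get recycled eventually
                print(pid, cmdline.decode(errors='replace'))

snoop(b'mysql')  # hypothetical target: catch credentials passed to mysql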

Ansible 101

After working on documentation last month, I ended up writing my first Ansible playbook this month, converting my tasksel list to a working Ansible configuration. This was a useful exercise: it allowed me to find a bunch of packages which have been removed from Debian, and it provides much better usability than tasksel. For example, it provides a --diff argument that shows which packages are missing from a given setup.
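
Such a playbook doesn't have to be fancy; a minimal sketch of the shape (package names are placeholders), runnable with ansible-playbook --check --diff:

- hosts: localhost
  become: true
  tasks:
    - name: install my standard package set
      apt:
        name: [git, vim, tmux]
        state: present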

I am still unsure about Ansible. Manifests do seem really verbose and I still can't get used to the YAML DSL. I could probably have done the same thing with Puppet and just run puppet apply on the resulting config. But I must admit my bias towards Python is showing here: I can't help but think Puppet is going to be way less accessible with its rewrite in Clojure and C (!)... But then again, I really like Puppet's approach of having generic types like package or service rather than Ansible's clunky apt/yum/dnf/package/win_package types...

Pat and Ham radio

After responding (too late) to a request for volunteers to help in Puerto Rico, I realized that my amateur radio skills were somewhat lacking in the "packet" (data transmission in ham jargon) domain, as I wasn't used to operating a Winlink node. Such a node can receive and transmit actual emails over the airwaves, for free, without direct access to the internet, which is very useful in disaster relief efforts. Through summary research, I stumbled upon the new and very promising Pat project, which provides one of the first user-friendly Linux-compatible Winlink programs. I provided improvements on the documentation and asked some questions regarding compatibility issues, which are still pending.

But my pet issue is the establishment of pat as a normal internet citizen by using standard protocols for receiving and sending email. Not sure how that can be implemented, but we'll see. I am also hoping to upload an official Debian package and hopefully write more about this soon. Stay tuned!

Random stuff

I ended up fixing my Kodi issue by starting it as a standalone systemd service instead of through gdm3, which is now completely disabled on the media box. I simply used the following /etc/systemd/system/kodi.service file:

[Unit]
Description=Kodi Media Center
After=systemd-user-sessions.service network.target sound.target

[Service]
User=xbmc
Group=video
Type=simple
TTYPath=/dev/tty7
StandardInput=tty
ExecStart=/usr/bin/xinit /usr/bin/dbus-launch --exit-with-session /usr/bin/kodi-standalone -- :1 -nolisten tcp vt7
Restart=on-abort
RestartSec=5

[Install]
WantedBy=multi-user.target

The downside of this is that it needs Xorg to run as root, whereas modern Xorg can now run rootless. Not sure how to fix this or where... But if I put needs_root_rights=no in Xwrapper.config, I get the following error in .local/share/xorg/Xorg.1.log:

[ 2502.533] (EE) modeset(0): drmSetMaster failed: Permission denied

After fooling around with iPython, I ended up trying the xonsh shell, which is supposed to provide a bash-compatible Python shell environment. Unfortunately, I found it pretty unusable as a shell: it works fine for doing Python stuff, but all my environment and legacy bash configuration files were basically ignored, so I couldn't get up and running quickly. This is too bad because the project looked very promising...

Finally, one of my TLS hosts using a Let's Encrypt certificate wasn't renewing properly, and I figured out why: the ProxyPass command was passing everything to the backend, including the /.well-known requests, which obviously broke ACME verification. The solution was simple enough, disable the proxy for that directory:

ProxyPass /.well-known/ !

Antoine Beaupré http://anarc.at/tag/debian-planet/ pages tagged debian-planet

PhD

Planet Debian - Hën, 02/10/2017 - 4:49md

I'm very excited to (finally) announce that I've embarked upon a part-time PhD in Computing Science at Newcastle University!

I'm at the very beginning of a journey that is expected to last about six years. The area I am going to be working in is functional stream processing and distributed systems architecture, in the context of IoT. This means investigating and working with technologies such as Apache Spark; containers (inc. Docker); Kubernetes and OpenShift; but also Haskell. My supervisor is Prof. Paul Watson. This would not be possible without the support of my employer, Red Hat, for which I am extremely grateful.

I hope to write much more about this topic here in the near future, so watch this space!

jmtd http://jmtd.net/log/ Jonathan Dowland's Weblog

Attracting contributors to a new project

Planet Debian - Mon, 02/10/2017 - 2:29 PM

How do you attract contributors to a new free software project?

I'm in the very early stages of a new personal project. It is irrelevant for this blog post what the new project actually is. Instead, I am thinking about the following question:

Do I want the project to be mainly for myself, and maybe a handful of others, or do I want to try to make it a more generally useful, possibly even a well-known, popular project? In other words, do I want to just solve a specific problem I have or try to solve it for a large group of people?

If it's a personal project, I'm all set. I can just start writing code. (In fact, I have.) If it's the latter, I'll need to attract contributions from others, and how do I do that?

I asked that question on Twitter and Mastodon and got several suggestions. This is a summary of those, with some editorialising from me.

  • The most important thing is probably that the project should aim for something that interests other people. The more people it interests, the easier it will be to attract contributors. This should be written up and displayed prominently: what the software does (or will do) and what it can be used for.

  • Having something that kind of works, and is easy to improve, seems to also be key. An empty project is daunting to do anything with. Part of this is that the software the project is producing should be easy to install and get running. It doesn't have to be fully featured. It doesn't even have to be alpha level quality. It needs to do something.

    If the project is about producing a spell checker, say, and it doesn't even try to read an input file, it's probably too early for anyone else to contribute. A spell checker that lists every word in the input file as badly spelt is probably more attractive to contribute to.

  • It helps to document where a new contributor should start, and how they would submit their contribution. A list of easy things to work on may also help. Having a roadmap of near-future development steps and a long-term vision will make things easier. Having an architectural document to explain how the system hangs together will help.

  • A welcoming, constructive atmosphere helps. People should get quick feedback on questions, issues, and patches, in order to build momentum. Make it fun for people to contribute, and they'll contribute more.

  • A public source code repository, and a public ticketing system, and public discussion forums (mailing lists, web forums, IRC channels, etc) will help.

  • Share the power in the project. Give others the power to make decisions, or merge things from other contributors. Having a clear, functioning governance structure from the start helps.

I don't know if these things are all correct, or that they're enough to grow a successful, popular project.

Karl Fogel's seminal book Producing Open Source Software should also be mentioned.

Lars Wirzenius' blog http://blog.liw.fi/englishfeed/ englishfeed

IPv6 in my home network

Planet Debian - Mon, 02/10/2017 - 12:15 PM

I am lucky and get both IPv4 (without CGNAT) and IPv6 from my provider. Recently, after upgrading my desk router (a Netgear WNDR3800 that serves the network on my desk) from OpenWRT to the latest LEDE, I looked into what could be improved in the IPv6 setup for both my home network (served by a FRITZ!Box) and my desk network.

Unfortunately I was unable to improve the situation compared to what I already had before.

Things that work

Making IPv6 work in general was easy, just a few clicks in the configuration of the FRITZ!Box and it mostly worked. After that I have:

  • IPv6 connectivity in the home net
  • IPv6 connectivity in the desk net
Things that don't work

There are a few things however that I'd like to have, that are not that easy it seems:

ULA for both nets

I let the two routers announce a ULA prefix each. Unfortunately I was unable to make the LEDE box announce its net on the wan interface for clients in the home net. So the hosts in the desk net know how to reach the hosts in the home net, but not the other way round, which makes it quite pointless. (It works fine as long as the FRITZ!Box announces a global net, but I'd like local communication to work independently of the global connectivity.)

To fix this I'd need something like radvd on my LEDE router, but that isn't provided by LEDE (or OpenWRT) any more; odhcpd is supposed to be used instead, but AFAICT it is unable to send RAs on the wan interface. OK, I could probably install bird, but that seems a bit oversized. I created an entry in the LEDE forum but haven't received any reply up to now.
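
For illustration, the radvd configuration this would amount to is tiny (a sketch only; fd00:db8:1::/64 stands in for the desk net's actual ULA prefix):

interface wan
{
    AdvSendAdvert on;
    # announce a route to the desk net so home-net hosts learn the way back
    route fd00:db8:1::/64
    {
    };
};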

Alternatively (but less prettily) I could set up an IPv6 route in the FRITZ!Box, but that only works with a newer firmware, and as this router is owned by my provider I cannot update it.

Firewalling

The FRITZ!Box has a firewall that is not very configurable. I can punch a hole in it for hosts with a given interface ID, but that only works for hosts in the home net, not for the machines in the delegated subnet behind the LEDE router. In fact I think the FRITZ!Box should delegate firewalling for a delegated net to the router of that subnet as well.

So having a global address on the machines on my desk doesn't allow me to reach them from the internet.

Update: according to the German changelog, firmware 6.83 seems to include that feature. Cheers, AVM. Now waiting for my provider to update ...

Uwe Kleine-König https://blog.kleine-koenig.org/ukl/ ukl's blog

Recently I was writing log analysis tools in javascript.

Planet Debian - Mon, 02/10/2017 - 9:33 AM
Recently I was writing log analysis tools in JavaScript. The JavaScript part is challenging.

Junichi Uekawa http://www.netfort.gr.jp/~dancer/diary/201710.html.en Dancer's daily hackings

Monthly FLOSS activity - 2017/09 edition

Planet Debian - Mon, 02/10/2017 - 5:38 AM
Debian devscripts

Before deciding to take an indefinite hiatus from devscripts, I prepared one more upload, merging various contributed patches and a bit of last-minute cleanup.

  • build-rdeps

    • Updated build-rdeps to work with compressed apt indices. (Debian bug #698240)
    • Added support for Build-Arch-{Conflicts,Depends} to build-rdeps. (adc87981)
    • Merged Andreas Henriksson's patch for setting remote.<name>.push-url when using debcheckout to clone a git repository. (Debian bug #753838)
  • debsign

    • Updated bash completion for gpg keys to use gpg --with-colons, instead of manually parsing gpg -K output. Aside from being the Right Way™ to get machine parseable information out of gpg, it fixed completion when gpg is a 2.x version. (Debian bug #837380)

I also set up integration with Travis CI to hopefully catch issues sooner than "while preparing an upload", as was typically the case before. Anyone with push access to the Debian/devscripts GitHub repo can take advantage of this to test out changes, or keep the development branches up to date. In the process, I was able to make some improvements to travis.debian.net, namely support for DEB_BUILD_PROFILES and using a separate, minimal docker image for running autopkgtests.
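
For reference, the Travis CI side of a travis.debian.net setup is just a few lines (a sketch; see the travis.debian.net documentation for the currently recommended snippet):

language: c
sudo: required
services:
  - docker
script:
  - wget -O- https://travis.debian.net/script.sh | sh -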

unibilium
  • Packaged the new upstream release (1.2.1)

  • Basic package maintenance (-dbgsym package, policy update, enabled hardening flags).

  • Uploaded 1.2.1-1

neovim
  • Attempted to nudge lua-nvim's builds along on a couple architectures where they were waiting for neovim to be installable

    • x32: Temporarily removed lua-nvim Build-Depends to break the BD-Uninstallable cycle between lua-nvim and neovim. ✓
    • powerpcspe: Temporarily removed luajit Build-Depends, reducing test scope, to fix the build. ❌
      • If memory serves, the test failures are fixed upstream for the next release.
  • Uploaded 0.2.0-4

Oddly, the mips64el builds were in BD-Uninstallable state, even though luajit's buildd status showed it was built. Looking further, I noticed the libluajit-5.1{,-dev} binary packages didn't have the mips64el architecture enabled, so I asked for it to be enabled.

msgpack-c

There were a few packages left which would FTBFS if I uploaded msgpack-c 2.x to unstable.

All of the bug reports had either trivial workarounds (i.e., forcing use of the v1 C++ API) or trivial patches. However, I didn't want to keep waiting for the packages to get fixed, since I knew other people had expressed interest in the new msgpack-c.

Trying to avoid making other packages insta-buggy, I NMUed autobahn-cpp with the v1 work around. That didn't go over well, partly because I didn't send a finalized "Hey, I'd like to get this done and here's my plan to NMU" email.

Based on that feedback, I decided to bump the remaining bugs to "serious" instead of NMUing, and uploaded msgpack-c. Thanks to Jonas Smedegaard for quickly integrating my proposed fix for libdata-messagepack-perl. Hopefully, upstream has some time to review the PR soon.

vim
  • Used the powerpc porterbox to debug and fix a 32-bit integer overflow that was causing test failures.

  • Asked the vim-perl folks about getting updated runtime files to Bram, after Jakub Wilk filed Debian bug #873755. This had been fixed 4+ years earlier, but not yet merged back into Vim. Thanks to Rob Hoelz for pulling things together and sending the updates to Bram.

  • I've continued to receive feedback from Debian users about their frustration with Vim's new "defaults.vim", both in regards to the actual default settings and its interaction with the system-wide vimrc file. While I still don't intend to deviate from upstream's behavior, I did push back some more on the existing behavior. I appreciate Christian Brabandt's effort, as always, to understand the issue at hand and have constructive discussions. His final suggestion seems like it will resolve the system vimrc interaction, so hopefully Bram is receptive to it.

  • Uploaded 2:8.0.1144-1

  • Thanks to a nudge from Salvatore Bonaccorso and Moritz Mühlenhoff, I uploaded 2:8.0.0197-4+deb9u1 which fixes CVE-2017-11109. I had intended to do this much sooner, but it fell through the cracks. Due to Adam Barratt's quick responses, this should make it into the upcoming Stretch 9.2 release.

subversion
  • Started work on updating the packaging
    • Converted to 3.0 (quilt) source format
    • Updated to debhelper 10 compat
    • Initial attempts at converting to a dh rules file
      • Running into various problems here and still trying to figure out whether they're in the upstream build system, Debian's patches, or both.
neovim
  • Worked with Niko Dittmann to fix build failures Niko was experiencing on OpenBSD 6.1 #7298

  • Merged upstream Vim patches into neovim from various contributors

  • Discussed focus detection behavior after a recent change in the implementation (#7221)

    • While testing focus detection in various terminal emulators, I noticed pangoterm didn't support this. I submitted a merge request on libvterm to provide an API for reporting focus changes. If that's merged, it will be trivial for pangoterm to notify applications when the terminal has focus.
  • Fixed a bug in our tooling around merging Vim patches, which was causing it to incorrectly drop certain files from the patches. #7328

James McCoy https://jamessan.com/~jamessan//tags/planet-debian/ pages tagged planet-debian

My Debian Activities in September 2017

Planet Debian - Sun, 01/10/2017 - 6:07 PM

FTP assistant

This month almost the same numbers as last month appeared in the statistics. I accepted 213 packages and rejected 15 uploads. The overall number of packages that got accepted this month was 425.

Debian LTS

This was my thirty-ninth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 15.75 hours. During that time I did LTS uploads of:

  • [DLA 1109-1] libraw security update for one CVE
  • [DLA 1117-1] opencv security update for 13 CVEs

I also took care of libstruts1.2-java, marking all CVEs as not-affected, and I marked all CVEs for jasper as no-dsa. I also started to work on sam2p.

Just as I wanted to upload a new version of libofx, a new CVE was discovered that was not closed in time. I tried to find a patch on my own but had difficulties reproducing the issue.

Other stuff

This month I made myself familiar with glewlwyd and, according to upstream, the Debian packages work out of the box. However, upstream does not stop working on that software, so I uploaded new versions of hoel, ulfius and glewlwyd.

As libjwt needs libb64, which was orphaned, I used it as my DOPOM and adopted it.

Does anybody still remember the Mayhem bugs? I could close one by uploading an updated version of siggen.

I also went through my packages and looked for patches that had piled up in the BTS. As a result I uploaded updated versions of radlib, te923con, node-starttls, harminv and uucp.

New upstream versions of openoverlayrouter and fasttree also made it into the archive.

Last but not least I moved several packages to the debian-mobcom group.

alteholz http://blog.alteholz.eu blog.alteholz.eu » planetdebian

FLOSS Activities September 2017

Planet Debian - Sun, 01/10/2017 - 3:39 AM
Changes Issues Review Administration
  • icns: merged patches
  • Debian: help guest user with access, investigate/escalate broken network, restart broken stunnels, investigate static.d.o storage, investigate weird RAID mails, ask hoster to investigate power issue
  • Debian mentors: lintian/security updates & reboot
  • Debian wiki: merged & deployed patch, redirect DDTSS translator, redirect user support requests, whitelist email addresses, update email for accounts with bouncing email
  • Debian derivatives census: merged/deployed patches
  • Debian PTS: debugged cron mails, deployed changes, reran scripts, fixed configuration file
  • Openmoko: debug reboot issue, debug load issues
Communication Sponsors

The samba bug was sponsored by my employer. All other work was done on a volunteer basis.

Paul Wise http://bonedaddy.net/pabs3/log/ Log
