Planet Debian


Lessons from OpenStack Telemetry: Incubation

Thu, 12/04/2018 - 2:50 PM

It was around this time of year in 2012 that I and a couple of fellow open-source enthusiasts started working on Ceilometer, the first piece of software from the OpenStack Telemetry project. Six years have passed since then. I've been thinking about this blog post for several months (even years, maybe), but lacked the time and the hindsight needed to lay out my thoughts properly. In a series of posts, I would like to share my observations about the Ceilometer development history.

To understand the full picture here, I think it is fair to start with a small retrospective on the project. I'll try to keep it short, and it will be unmistakably biased, even if I'll do my best to stay objective – bear with me.

Incubation

Early 2012, I remember discussing with the first Ceilometer developers the right strategy to solve the problem we were trying to address. The company I worked for wanted to run a public cloud, and billing the resources usage was at the heart of the strategy. The fact that no components in OpenStack were exposing any consumption API was a problem.

We debated how to implement those metering features in the cloud platform. There were two natural solutions: either add some resource-accounting reporting to each OpenStack project, or build a new piece of software on the side to compensate for the lack of those functionalities.

At that time, there were fewer than a dozen OpenStack projects. Still, the burden of patching every project seemed like an endless task. Having code reviewed and merged in the most significant projects took several weeks, which, considering our timeline, was a show-stopper. We wanted to go fast.

Pragmatism won, and we started implementing Ceilometer using the features each OpenStack project was offering to help us: very little.

Our first and obvious candidate for usage retrieval was Nova, where Ceilometer aimed to retrieve statistics about virtual machine instance utilization. Nova offered no API to retrieve that data – and still doesn't. Since waiting several months for such an API to be exposed was out of the question, we took the shortcut of polling libvirt, Xen, or VMware directly from Ceilometer.

That's precisely how temporary hacks become historical design. Implementing this design broke the basis of the abstraction layer that Nova aims to offer.

As time passed, several leads were followed to mitigate those trade-offs in better ways. But with each development cycle, getting anything merged in OpenStack became harder and harder. It went from patches taking a long time to be reviewed, to a long list of requirements to merge anything. Soon, you'd have to create a blueprint to track your work and write a full specification linked to that blueprint, with that specification itself being reviewed by a bunch of so-called core developers. The specification had to be a thorough document covering every aspect of the work, from the problem being solved to the technical details of the implementation. Once the specification was approved, which could take an entire cycle (6 months), you'd have to make sure that the Nova team would make your blueprint a priority. To make sure it was, you would have to fly a few thousand kilometers from home to an OpenStack Summit, and orally argue with developers in a room filled with hundreds of other folks about the urgency of your feature compared to other blueprints.

An OpenStack design session in Hong-Kong, 2013

Even if you passed all of those ordeals, the code you'd send could be rejected, and you'd get back to updating your specification to shed light on some particular points that confused people. Back to square one.

Nobody wanted to play that game. Not in the Telemetry team at least.

So Ceilometer continued to grow, surfing the OpenStack hype curve. More developers joined the project every cycle – each with their own list of ideas, features, or requirements cooked up by their in-house product manager.

But many features did not belong in Ceilometer. They should have been in different projects. Ceilometer was the first OpenStack project to pass through the OpenStack Technical Committee incubation process that existed before the rules were relaxed.

This incubation process was uncertain, long, and painful. We had to justify the existence of the project and many of the technical choices that had been made. Where we expected the committee to challenge us on fundamental decisions, such as breaking abstraction layers, it mostly nit-picked about Web frameworks or database storage.

Consequences

The rigidity of the process discouraged anyone from starting a new project for anything related to telemetry. Therefore, everyone went ahead and started dumping their ideas into Ceilometer itself. With more than ten companies interested, the friction was high, and the project was at some point pulled apart in all directions. This phenomenon was happening to every OpenStack project anyway.

On the one hand, many contributions brought marvelous pieces of technology to Ceilometer. We implemented several features you still don't find in any other metering system. Dynamically sharded, automatically horizontally scalable polling? Ceilometer has had that for years, whereas you can't have it in, e.g., Prometheus.

On the other hand, there were tons of crappy features. Half-baked code merged because somebody needed to ship something. As the project grew further, some of us developers started to feel that this was getting out of control and could be disastrous. The technical debt was growing as fast as the project was.

Several of the technical choices made were definitely bad. The architecture was a mess; the messaging bus was easily overloaded, the storage engine was non-performant, etc. People would come to me (as I was the Project Team Leader at that time) and ask why the REST API needed 20 minutes to reply to an autoscaling request. The willingness to solve everything for everyone was killing Ceilometer. It's around that time that I decided to step down from my role as PTL and started working on Gnocchi to, at least, solve one of our biggest challenges: efficient data storage.

Ceilometer was also suffering from the poor quality of many OpenStack projects. As Ceilometer retrieves data from a dozen other projects, it has to use their interfaces for data retrieval (API calls, notifications) – or sometimes work around their lack of any interface. Users were complaining about Ceilometer malfunctioning while the root of the problem was actually on the other side, in the polled project. The polling agent would try to retrieve the list of virtual machines running on Nova, but just listing and retrieving this information required several HTTP requests to Nova. And those basic retrieval requests would overload the Nova API, which offers no genuine interface from which the data could be retrieved in a small number of calls. And its performance was terrible.
From the users' point of view, the load was generated by Ceilometer; therefore, Ceilometer was the problem. We had to imagine new ways of circumventing tons of limitations in our sibling projects. That was exhausting.

At its peak, during the Juno and Kilo releases (early 2015), the code size of Ceilometer reached 54k lines of code, and the number of committers reached 100 individuals (20 regulars). We had close to zero happy users, operators hated us, and everybody was wondering what the hell was going on in those developers' minds.

Nonetheless, despite the impediments, most of us had a great time working on Ceilometer. Nothing's ever perfect. I've learned tons of things during that period, which were actually mostly non-technical. Community management, social interactions, human behavior and politics were at the heart of the adventure, offering a great opportunity for self-improvement.

In the next blog post, I will cover what happened in the years that followed that booming period, up until today. Stay tuned!

Julien Danjou https://julien.danjou.info/ Julien Danjou

Bursary applications for DebConf18 are closing in 48 hours!

Thu, 12/04/2018 - 12:30 PM

If you intend to apply for a DebConf18 bursary and have not yet done so, please proceed as soon as possible!

Bursary applications for DebConf18 will be accepted until April 13th at 23:59 UTC. Applications submitted after this deadline will not be considered.

You can apply for a bursary when you register for the conference.

Remember that giving a talk or organising an event is considered towards your bursary; if you have a submission to make, submit it even if it is only sketched-out. You will be able to detail it later. DebCamp plans can be entered in the usual Sprints page at the Debian wiki.

Please make sure to double-check your accommodation choices (dates and venue). Details about accommodation arrangements can be found on the wiki.

See you in Hsinchu!

Laura Arjona Reina https://bits.debian.org/ Bits from Debian

Streaming the Norwegian ultimate championships

Thu, 12/04/2018 - 1:36 AM

As the Norwegian indoor frisbee season is coming to a close, the Norwegian ultimate nationals are coming up, too. Much like in Trøndisk 2017, we'll be doing the stream this year, replacing a single-camera Windows/XSplit setup with a multi-camera free software stack based on Nageru.

The basic idea is the same as in Trøndisk; two cameras (one wide and one zoomed) for the main action and two static ones above the goal zones. (The hall has more amenities for TV productions than the one in Trøndisk, so a basic setup is somewhat simpler.) But there are so many tweaks:

  • We've swapped out some of the cameras for more suitable ones; the DSLRs didn't do too well under the flicker of the fluorescent tubes, for instance, and newer GoPros have rectilinear modes. And there's a camera on the commentators now, with side-by-side view as needed.

  • There are tally lights on the two human-operated cameras (new Nageru feature).

  • We're doing CEF directly in Nageru (new Nageru feature) instead of through CasparCG, to finally get those 60 fps buttery smooth transitions (and less CPU usage!).

  • HLS now comes directly out of Cubemap (new Cubemap feature) instead of being generated by a shell script using FFmpeg.

  • Speaking of CPU usage, we now have six cores instead of four, for more x264 oomph (we wanted to do 1080p60 instead of 720p60, but alas, even x264 at nearly superfast can't keep up when there's too much motion).

  • And of course, a ton of minor bugfixes and improvements based on our experience with Trøndisk—nothing helps as much as battle-testing.

For extra bonus, we'll be testing camera-over-IP from Android for interviews directly on the field, which will be a fun challenge for the wireless network. Nageru does have support for taking in IP streams through FFmpeg (incidentally, a feature originally added for the now-obsolete CasparCG integration), but I'm not sure if the audio support is mature enough to run in production yet—most likely, we'll do the reception with a laptop and use that as a regular HDMI input. But we'll see; thankfully, it's a non-essential feature this time, so we can afford to have it break. :-)

Streaming starts Saturday morning CEST (UTC+2), will progress until late afternoon, and then restart on Sunday with the playoffs (the final starts at 14:05). There will be commentary in a mix of Norwegian and English depending on the mood of the commentators, so head over to www.plastkast.no if you want to watch :-) Exact schedule on the page.

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

Debian LTS work, March 2018

Wed, 11/04/2018 - 10:41 PM

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 2 hours from February. I worked 15 hours and will again carry over 2 hours to April.

I made another two releases on the Linux 3.2 longterm stable branch (3.2.100 and 3.2.101), the latter including mitigations for Spectre on x86. I rebased the Debian package onto 3.2.101 but didn't upload an update to Debian this month. We will need to add gcc-4.9 to wheezy before we can enable all the mitigations for Spectre variant 2.

Ben Hutchings https://www.decadent.org.uk/ben/blog Better living through software

Debian SecureBoot Sprint 2018

Wed, 11/04/2018 - 5:01 PM

Monday morning I gave back the keys to Office Factory Fulda, who sponsored the location for the SecureBoot Sprint from Thursday, 4th April to Sunday, 8th April. Apparently we left a pretty positive impression (we managed to clean up), so we are welcome again for future sprints.

The goal of this sprint was enabling SecureBoot in/for Debian, so that users who have SecureBoot-enabled machines do not need to turn that off to be able to run Debian. That requires us to handle signing a certain set of packages in a defined way, automating it as much as possible while ensuring that everything is done in a safe/secure way.

Now add details like secure handling of keys, only signing pre-approved sets (to make abusing it harder), revocations, key rollovers; combine it all with the infrastructure and situation we have in Debian (dak, buildd, a security archive with somewhat different rules of visibility, reproducibility, a huge set of architectures only some of which do SecureBoot, proper audit logging of signatures), and you end up with 7 people from different teams taking the whole first day just discussing and hashing out a specification. Plus some joining in virtually.

I’m not going into actual details of all that, as a sprint report will follow soon.

Friday to Sunday was used for actual implementation of the agreed solution. The actual dak changes turned out not to be too large, and thankfully Ansgar was on them, so I could take time to push the FTPTeam's move to the new Salsa service forward. I still have a few of our less-important repositories to move, but that's a simple process I will be doing during this week; the most important step was coming up with a sane way of using Salsa.

That does not mean the actual web interface, but getting code changes from there to the various Debian hosts we run our services on. In the past, we pushed directly to the hosts, so all code changes appearing on them meant that someone who was in the right unix group on that machine made them appear.1 "Verified by ssh login", basically.

With Salsa, we now add a service that has a different set of administrators added on top. And a big piece of software too, with a huge possibility of bugs, worst case allowing random users access to our repositories. Which is a way larger problem area than “git push via ssh” as in the past, and as such more likely to be bad. If we blindly pull from a repository on such shared space, the confirmation “a FTPMaster said this code is good” is gone.

So it needs a way of adding that confirmation back, while still being able to use all the nice features that Salsa offers. Within Debian, what's better than using an already established way of trusting something: GnuPG-created signatures?!

So how to go forward? I was lucky: I did not need to invent this entirely on my own. Enrico had similar concerns for the New-Maintainer web pages. He set up CI to test his stuff and, if successful, install the tested code on the NM machine, provided that the commit is signed by a key from a defined set.

Unfortunately for me, he deals with a Django app that listens somewhere and can be pushed to. No such thing for me: I have neither Django nor a service listening that I can tell about changes to fetch.

We also have to take care when a database schema upgrade needs to be done: no automatic deployment on database-using FTPMaster hosts for that, a human needs to trigger it.

So the actual implementation that I developed for us, and which is in use on all hosts that we maintain code on, is implemented in our standard framework for regular jobs, cronscript.2

It turns out to live in multiple files (as usual with cronscript): the actual code is in deploy.functions and deploy.variables, and the order in which to call things is defined in deploy.tasks.

cronscript around it takes care of setting up the environment and keeping logs, and we now call the deploy every few minutes, securely getting our code deployed.
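
To make the idea concrete, here is a minimal sketch of the gating step; the paths, keyring location, and branch below are hypothetical, and the real logic lives in deploy.functions and deploy.tasks, which are not shown in this post:

#!/bin/sh
# Sketch of a signature-gated deploy: only move forward if the new
# tip commit is signed by an approved key.
set -e
export GNUPGHOME=/srv/code/keyring   # hypothetical path: contains only approved keys
cd /srv/code/dak                     # hypothetical checkout location
git fetch origin
# verify-commit exits non-zero unless the commit carries a valid
# signature from a key in our keyring, so set -e aborts the deploy
git verify-commit origin/master
# fast-forward only, so nobody can rewrite history underneath us
git merge --ff-only origin/master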

  1. Or someone abused root rights, but if you do not trust root, you lost anyways, and there is no reason to think that any DSA-member would do this. 

  2. A framework for FTPMaster scripts that ensures the same basic setup everywhere and makes it easy to call functions and stuff, with or without error checking, in background or foreground. Also easy to restart in the middle of a script run after breakage, as it keeps track of where it was. 

Joerg Jaspert https://blog.ganneff.de/ Ganneff's Little Blog

Preventing resume immediately after suspend on Dell Latitude 5580 (Debian testing)

Wed, 11/04/2018 - 1:14 PM

I’ve installed Debian buster (testing at the time of writing) on a new Dell Latitude 5580 laptop, and one annoyance I’ve found is that the laptop would almost always resume as soon as it was suspended.

AFAIU, it seems the culprit is the network card (Ethernet controller: Intel Corporation Ethernet Connection (4) I219-LM) which would be configured with Wake-On-Lan (wol) set to the “magic packet” mode (ethtool enp0s31f6 | grep Wake-on would return ‘g’). One hint is that grep enabled /proc/acpi/wakeup returns GLAN.

There are many ways to change that for the rest of the session with a command like ethtool -s enp0s31f6 wol d.

But I had a hard time figuring out, among the many hits in so many tutorials and forum posts, whether there was a preferred way to make this persistent.

My best hit so far is to add a file named /etc/systemd/network/50-eth0.link containing:

[Match]
Driver=e1000e

[Link]
WakeOnLan=off

The driver can be found by checking udev settings as reported by udevadm info -a /sys/class/net/enp0s31f6
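
As a quick sanity check (a sketch, using the same tools mentioned above and the same interface name), you can inspect and toggle the setting for the running session, and after a reboot the .link file should keep it disabled:

# current setting: 'g' means wake on magic packet, 'd' means disabled
ethtool enp0s31f6 | grep Wake-on
# turn it off for this session only
ethtool -s enp0s31f6 wol d
# after a reboot with the .link file in place, the first command
# should keep reporting: Wake-on: d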

There are other ways to do that with systemd, but so far it seems to be working for me. Hth,

Olivier Berger https://www-public.tem-tsp.eu/~berger_o/weblog debian-en – WebLog Pro Olivier Berger

Bread and data

Wed, 11/04/2018 - 11:01 AM

For the past two weeks I've mostly been baking bread. I'm not sure what made me decide to make some the first time, but it actually turned out pretty good, so I've been doing it every day or two ever since.

This is the first time I've made bread in the past 20 years or so - I recall in the past I got frustrated that it never rose, or didn't turn out well. I can't see that I'm doing anything differently, so I'll just write it off as younger-Steve being daft!

No doubt I'll get bored of the delicious bread in the future, but for the moment I've got a good routine going - juggling going to the shops, child-care, and making bread.

Bread I've made includes the following:

Beyond that I've spent a little while writing a simple utility to embed resources in golang projects, after discovering the tool I'd previously been using, go-bindata, had been abandoned.

In short you feed it a directory of files and it will generate a file static.go with contents like this:

files[ "data/index.html" ] = "<html>.... files[ "data/robots.txt" ] = "User-Agent: * ..."

It's a bit more complex than that, but not much. As expected getting the embedded data at runtime is trivial, and it allows you to distribute a single binary even if you want/need some configuration files, templates, or media to run.
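
As a sketch of the workflow (the tool's actual name and flags are not given in the post, so the invocation below is hypothetical):

# generate static.go from everything under data/
embed-resources -input ./data -output static.go   # hypothetical CLI name/flags
# static.go is compiled into the binary like any other source file
go build -o app .
# ./app now runs without needing the data/ directory on disk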

For example, in the project I discussed in my previous post there is an HTTP-server which serves a user-interface based upon Bootstrap. I want the HTML-files which make up that user-interface to be embedded in the binary, rather than distributed separately.

Anyway, it's not unique, it was a fun experience to write, and I've switched to using it now:

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

DRM, DRM, oh how I hate DRM...

Wed, 11/04/2018 - 6:43 AM

I love flexibility. I love when the rules of engagement are not set in stone and allow us to lead a full, happy, simple life. (Apologies to Felipe and Marianne for using their very nice sculpture for this rant. At least I am not desperately carrying a brick! ☺)

I have been very, very happy after I switched to a Thinkpad X230. This is the first computer I have with an option for a cellular modem, so after thinking about it a bit, I got myself one:

After waiting for a couple of weeks, it arrived in an unexciting little envelope straight from Hong Kong. If you look closely, you can even appreciate there's a line (just below the smaller barcode) that reads "Lenovo". I soon found how to open this laptop (kudos to Lenovo for a very sensible and easy opening process, great documentation... So far, it's the "openest" computer I have had!) and installed my new card!

The process was decently easy, and after patting myself on the back, I eagerly turned on my computer... Only to find the BIOS to halt with the following message:

1802: Unauthorized network card is plugged in - Power off and remove the miniPCI network card (1199/6813). System is halted

So... Got everything back to its original state. Stupid DRM in what I felt was the openest laptop I have ever had. Gah.

Anyway... As you can see, I have a brand new cellular modem. I am willing to give it to the first person that offers me a nice beer in exchange, here in Mexico or wherever you happen to cross my path (just tell me so I bring the little bugger along!)

Of course, I even tried to get one of the nice volunteers to install Libreboot on my computer while I was at LibrePlanet, which would have solved the issue. But they informed me that Libreboot is supported only on the (quite a bit older) X200 machines, not on the X230.

gwolf http://gwolf.org Gunnar Wolf

My Free Software Activities in March 2018

Mon, 09/04/2018 - 11:58 PM

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

Debian Java
  • I spent most of my free time on Java packages because…OpenJDK 9 is now the default Java runtime environment in Debian! As of today I count 319 RC bugs (bugs with severity normal would be serious today as well) of which 227 are already resolved. That means one third of the Java team's packages have to be adjusted for the new OpenJDK version. Java 9 comes with a new module system called Jigsaw. Undoubtedly it represents a lot of new interesting ideas but it is also a major paradigm shift. For us mere packagers it means more work than any other version upgrade in the past. Let's say we are a handful of regular contributors (I'm generous) and we spend most of our time stabilizing the Java ecosystem in Debian to the point that we can build all of our packages again. Repeat for every new Debian release. Unfortunately not much time is actually spent on packaging new and cool applications or libraries unless they are strictly required to fix a specific Java 9 issue. It just doesn't feel right at the moment. Most upstreams are rather indifferent or relaxed when it comes to porting their applications to Java 9 because they can still use Java 8, so why can't we? They don't have to provide security support for five years and can make the switch to Java 9 much later. They can also cherry-pick certain versions of libraries whereas we have to ensure that everything works with one specific version of a library. But that's not all: Java 9 will not be shipped with Buster and we even aim for OpenJDK 11! Releases of OpenJDK will be more frequent from now on (expect a new release every six months), and certain versions, like OpenJDK 11, will receive extended security support. One thing we can look forward to: apparently more commercial features of Oracle JDK will be merged into OpenJDK, and it appears the long-term goal is to make Oracle JDK and OpenJDK builds completely interchangeable. So maybe one day only one free software JDK for everything and everyone? I hope so.
  • I worked on the following packages to address Java 9 or other bugs: activemq, snakeyaml, libjchart2d-java, jackson-dataformat-yaml, jboss-threads, jboss-logmanager, jboss-logging-tools, qdox2, wildfly-common, activemq-activeio, jackson-datatype-joda, antlr, axis, libitext5-java, libitext1-java, libitext-java, jedit, conversant-disruptor, beansbinding, cglib, undertow, entagged, jackson-databind, libslf4j-java, proguard, libhtmlparser-java, libjackson-json-java and sweethome3d (patch by Emmanuel Bourg)
  • New upstream versions: jboss-threads, okio, libokhttp-java, snakeyaml, robocode.
  • I NMUed jtb and applied a patch from Tiago Stürmer Daitx.
Debian LTS

This was my twenty-fifth month as a paid contributor and I have been paid to work 23.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 19.03.2018 until 25.03.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVE in imagemagick, libvirt, freeplane, exempi, calibre, gpac, ipython, binutils, libraw, memcached, mosquitto, sdl-image1.2, slurm-llnl, graphicsmagick, libslf4j-java, radare2, sam2p, net-snmp, apache2, ldap-account-manager, librelp, ruby-rack-protection, libvncserver, zsh and xerces-c.
  • DLA-1310-1. Issued a security update for exempi fixing 6 CVE.
  • DLA-1315-1. Issued a security update for libvirt fixing 2 CVE.
  • DLA-1316-1. Issued a security update for freeplane fixing 1 CVE.
  • DLA-1322-1. Issued a security update for graphicsmagick fixing 6 CVE.
  • DLA-1325-1. Issued a security update for drupal7 fixing 1 CVE.
  • DLA-1326-1. Issued a security update for php5 fixing 1 CVE.
  • DLA-1328-1. Issued a security update for xerces-c fixing 1 CVE.
  • DLA-1335-1. Issued a security update for zsh fixing 2 CVE.
  • DLA-1340-1. Issued a security update for sam2p fixing 5 CVE. I also prepared a security update for Jessie. (#895144)
  • DLA-1341-1. Issued a security update for sdl-image1.2 fixing 6 CVE.
Misc
  • I triaged all open bugs in imlib2 and forwarded the issues upstream. The current developer of imlib2 was very responsive and helpful. Thanks to Kim Woelders several longstanding bugs could be fixed.
  • There was also a new upstream release for xarchiver. Check it out!

Thanks for reading and see you next time.

Apo https://gambaru.de/blog planetdebian – gambaru.de

Migrating PET features to distro-tracker

Mon, 09/04/2018 - 3:30 PM

After joining the Debian Perl Team some time ago, PET has helped me a lot to find work to do in the team context, and has also helped the whole team in our workflow. For those who do not know what PET is: "a collection of scripts that gather information about your (or your group's) packages. It allows you to see in a bird's eye view the health of hundreds of packages, instantly realizing where work is needed." PET became an important project since about 20 Debian teams were using it, including the Perl and Ruby teams, in which I am more active.

In Cape Town, during DebConf16, I had a conversation with Raphael Hertzog about the possibility of migrating PET features to distro-tracker. He is one of the distro-tracker maintainers, and we found some similarities between the tools. However, after that I did not have enough time to push it forward. Then, after the migration from Alioth to Salsa, PET became almost unusable, because a lot of things were built around Alioth. This brought me the motivation to get this migration idea off the drawing board and to support the PET features in distro-tracker's team visualization.

In the meantime, the Debian Outreach team published a GSoC call for mentors for this year. I was a Debian GSoC student in 2014 and 2015, and it was a great opportunity for me to join the community. With that in mind, and wishing to give this opportunity to others, I decided to become a mentor this year and proposed a project to implement the PET features in distro-tracker, called Improving distro-tracker to better support Debian Teams. We are in the student selection phase and I have received great proposals. I am looking forward to the start of the program and to finally having the PET features available in tracker.debian.org. And of course, to bringing new blood to the Debian Project, since this is the idea behind these outreach programs.

Lucas Kanashiro http://blog.kanashiro.xyz/ Lucas Kanashiro’s blog

New projects on Hosted Weblate

Mon, 09/04/2018 - 12:00 PM

Hosted Weblate also provides free hosting for free software projects. The hosting requests queue had grown too long and waited for more than a month, so it's time to process it and include new projects. I hope that gives you good motivation to spend the Christmas break translating free software.

This time, the newly hosted projects include:

If you want to support this effort, please donate to Weblate, especially recurring donations are welcome to make this service alive. You can do that easily on Liberapay or Bountysource.

Filed under: Debian English SUSE Weblate

Michal Čihař https://blog.cihar.com/archives/debian/ Michal Čihař's Weblog, posts tagged by Debian

Securing WordPress with AppArmor

Sat, 31/03/2018 - 12:24 PM

WordPress is a very popular CMS. According to one report, 30% of websites use WordPress, which is an impressive feat.

Despite this popularity, WordPress is built upon PHP, which is often lacking in the security department. Add to this that the user running the webserver often has a fair bit of access, that there is no distinguishing between the webserver code and the WordPress code, and you are setting yourself up for trouble.

So, let’s introduce something that not only can tell the difference between Apache running and WordPress running under it, but also limit what WordPress can access.

As the AppArmor wiki says “AppArmor is Mandatory Access Control (MAC) like security system for Linux. AppArmor confines individual programs to a set of files, capabilities, network access and rlimits…”.  AppArmor also has this concept of hats, so your webserver code (e.g. apache) can be one hat with one policy but the WordPress PHP code has another hat and therefore another policy. For some reason, AppArmor calls a policy a profile, so wherever you see profile translate that to policy.

The idea here is to limit what WordPress can access down to the files and directories it needs, and nothing more. What follows is how I have set up my system, but you may need to tweak it, especially for some plugins.

Change your hat

By default, apache will run in its own AppArmor profile, called something like the "/usr/sbin/apache2" profile. As the authors of this profile do not know what you will run on the webserver, it is very permissive, and with the standard AppArmor setup it is what the WordPress code will also run under.

First, you need to enable and install the mod_apparmor Apache module. This module allows you to change what profile is used, depending on what directory or URL is being requested. The link for mod_apparmor describes how to do this.

Once you have the module enabled, you need to tell Apache in which directories you want the hat or profile to be changed, and the name of the new hat. I put this into /etc/apache2/conf-available/wordpress and then ran "a2enconf wordpress":

<Directory "/usr/share/wordpress"> Require all granted <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> <IfModule mod_apparmor.c> AAHatName wordpress </IfModule> </Directory> Alias /wp-content /var/lib/wordpress/wp-content/ <Directory /var/lib/wordpress > Require all granted <IfModule mod_apparmor.c> AAHatName wordpress </IfModule> </Directory>

Most of this configuration is pretty standard WordPress setup for Apache. The important differences are the AAHatName lines.

What we have done here is say that if Apache serves up files from /usr/share/wordpress (where the WordPress code lives) or /var/lib/wordpress/wp-content (where things like plugins, themes, and uploaded images live), then it will use the wordpress sub-profile.

Defining the profile

Now that we have the right profile for our WordPress directories, we need to create a profile. This will tell AppArmor what files and directories WordPress is allowed to access and how they are accessed. Obvious ones are the directories where the code and content live, but you will also need to include the log file locations.

This definition needs to sit "inside" the apache profile proper. In Debian and most other systems, it is just a matter of making a file in the /etc/apparmor.d/apache.d/ directory.

^wordpress {
  include <abstractions/apache2-common>
  include <abstractions/base>
  include <abstractions/nameservice>
  include <abstractions/php5>

  /var/log/apache2/*.log w,

  /etc/wordpress/config-*.php r,
  /usr/share/wordpress/** r,
  /usr/share/wordpress/.maintenance w,

  # Change "/var/lib/wordpress/wp-content" to whatever you set
  # WP_CONTENT_DIR in the /etc/wordpress/config-*.php file
  /var/lib/wordpress/wp-content r,
  /var/lib/wordpress/wp-content/** r,
  /var/lib/wordpress/wp-content/uploads/** rw,
  /var/lib/wordpress/wp-content/upgrade/** rw,

  # Uncomment to permit plugins Install/Update via web
  /var/lib/wordpress/wp-content/plugins/** rw,

  # Uncomment to permit themes Install/Update via web
  #/var/lib/wordpress/wp-content/themes/** rw,

  # This is what PHP sys_get_temp_dir() returns
  /tmp/* rw,
}

What we have here is a policy that basically says you can read the WordPress code and the WordPress content. The plugins and themes sub-directories get their own lines because you can selectively permit write access if you want to update plugins and themes using the web GUI.

The /etc file glob is where the Debian package stores its configuration file. The surprise for me was the maintenance dot-file which is created when WordPress is updating some component. Without this write permission, it is unable to update plugins or do many other things.

Audit Log

So how do you know it's working? The simplest way is to apply the policy and then see what appears in your auditd log (mine is at /var/log/audit/audit.log).
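
For example, after editing the profile, reload it and watch the log while exercising the site (a sketch; the exact profile filename varies between systems):

# reload the apache profile, which pulls in the apache.d/ snippets
apparmor_parser -r /etc/apparmor.d/usr.sbin.apache2
systemctl reload apache2
# then browse the site and watch for AppArmor events
tail -f /var/log/audit/audit.log | grep apparmor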

Two main things will go wrong. The first is that the wrong profile gets used. I had this problem when I forgot to add the WordPress content directory.

Wrong Profile

type=AVC msg=audit(1522235033.386:43401): apparmor="ALLOWED" operation="open" profile="/usr/sbin/apache2//null-dropbear.xyz" name="/var/lib/wordpress/wp-content/plugins/akismet/akismet.php" pid=5036 comm="apache2" requested_mask="r" denied_mask="r" fsuid=33 ouid=33

So, what is AppArmor trying to say here? First, we have the wrong profile! It's not wordpress, but "/usr/sbin/apache2//null-dropbear.xyz", which is basically saying there was no specific sub-profile for this website, so we will use the apache2 profile.

The apache2 profile is in complain, not enforce mode, so that's why it says apparmor="ALLOWED" yet has denied_mask="r".

Adding that second <Directory> clause to use the wordpress AAHatName fixed this.

Profile missing entries

The second type of problem is that you have Apache switching to the correct profile but you missed a line in the profile.  Initially I didn’t know WordPress created a file in the top-level of its code directory when undergoing maintenance. The log file showed:

type=AVC msg=audit(1522318023.409:51143): apparmor="DENIED" operation="mknod" profile="/usr/sbin/apache2//wordpress" name="/usr/share/wordpress/.maintenance" pid=16165 comm="apache2" requested_mask="c" denied_mask="c" fsuid=33 ouid=33

We have the correct profile here (wordpress is always a sub-profile of apache). But we are getting a DENIED message because the profile (initially) didn't permit the file /usr/share/wordpress/.maintenance to be created.

Adding that file to the profile and reloading the profile and Apache fixed this.

Additional Tweaks

The given profile will probably work for most WordPress installations. Make sure you change the code and content directories to wherever yours live. Also, this profile will not let you auto-update the WordPress code. For a Debian package, this is a good thing, as the Apache process is not writing the new files; dpkg is, and it runs as root. If you are happy for a webserver to update PHP code that runs under it, you can change the permission to read/write for /usr/share/wordpress.

I imagine some plugins out there will need additional directories. I don't use many, and none I use do anything odd, but there are plenty of odd plugins out there. Check your audit logs for any DENIED lines.

Craig http://dropbear.xyz Small Dropbear

three conferences one week

Sat, 31/03/2018 - 3:52 AM

Thought I'd pack my entire year's conference schedule into one week...

First was a Neuroinformatics infrastructure interoperability workshop at McGill, my second trip to Montreal this year. Well outside my wheelhouse, but there's a fair amount of interest in that community in git-annex/datalad. This was a "roll with the acronyms and try to draw parallels to things I know" affair. Also excellent sushi and a bonus Secure Scuttlebutt meetup.

Then LibrePlanet. A unique and super special conference that utterly flew by this year. This is my sixth LibrePlanet and I enjoy it more each time. Highlights for me were Bassam's photogrammetry workshop, Karen receiving the Free Software award, and Seth's thought-provoking talk on "incompossibilities", especially as applied to social networks. And some epic dinner conversations in Central Square.

Finally today, a one-day local(!) functional programming(!!) conference in Knoxville TN. Lambda Squared was the best constructed single-track conference I've seen. Starting with an ex-pro figure skater getting the whole audience to pirouette to capture that uncomfortable out-of-your-element feeling you get learning FP, and ramping gradually past "functional javascript" to orthogonality, contravariant functors, the lambda cube, and constructivist logic.

I notice that I've spent a lot more time in Boston than I ever have in Knoxville -- Cambridge MA is starting to feel like my old haunts, though I've never really lived there. There are not a lot of functional programming conferences in the southeastern USA, and I think this explains how Lambda Squared attracted such a good lineup of speakers. Also, Knoxville has a surprisingly large and lively FP community shaping up. There will be another Lambda Squared next year, and this might be a good opportunity to visit with me and go to an FP conference too.

And now time to retreat into my retreaty place for a good long while.

Joey Hess http://joeyh.name/blog/ see shy jo

Cluster analysis lecture notes

Fri, 30/03/2018 - 5:33 PM

In the Winter Term 2017/2018 I was a substitute professor at Heidelberg University, giving the lecture "Knowledge Discovery in Databases", i.e., the data mining lecture.

While I won’t make all my slides available, I decided to make the chapter on cluster analysis available. Largely, because there do not appear to be good current books on this topic. Many of the books on data mining barely cover the basics. And I am constantly surprised to see how little people know beyond k-means. But clustering is much broader than k-means!

As I hope to give this lecture regularly at some point, I appreciate feedback to further improve the slides. This year, I almost completely reworked them, so there are a lot of things to fine-tune.

There exist three versions of the slides:

These slides took me about 9 sessions of 90 minutes each. I was not very fast this year, though, and I probably need to cut down on the extra blackboard material. Next time, I will try to use at most 8 sessions for this, to be able to cover other important topics, such as outlier detection, in more detail; they were a bit too short this time.

I hope the slides will be interesting and useful, and I would appreciate it if you give me credit, e.g., by citing my work appropriately.

Erich Schubert https://www.vitavonni.de/blog/ Techblogging

My Laptop

Pre, 30/03/2018 - 10:32pd

My laptop is an old used Samsung R439 that I bought for around Rs 12000/- (USD 185) from a local store when I was in my second year of college. It has a 14" screen, 2GB of RAM, a Pentium p1600 processor, and a 320GB HDD.

Recently it started showing performance issues: applications take a while to load and Firefox freezes. I am quite fond of this laptop and have some emotional attachment to it. I was reluctant to buy a new one, but at the same time I need the VT-x (virtualization) feature that my Pentium p1600 lacks. So I chose to upgrade it from the ground up. The hard part of upgrading is finding a compatible processor for the board. My friend Akhil Varkey, whom I met during a Debian packaging session, helped me find a processor that suits my board. I bought the suggested one from AliExpress, because I found it cheaply available there. I disassembled the laptop and installed the new processor myself. During disassembly I totally destroyed two screws (stripped), including the holes :(. Now I need to be more careful when carrying the laptop around. I had already changed my laptop battery and keyboard a couple of months after I bought it. I will be upgrading the RAM and hard disk soon.

Abhijith PA http://abhijithpa.me/ Abhijith PA

Rewriting some services in golang

Fri, 30/03/2018 - 9:00 AM

The past couple of days I've been reworking a few of my existing projects, and converting them from Perl into Golang.

Bytemark had a great alerting system for routing alerts to different engineers, via email, SMS, and chat-messages. The system is called mauvealert and is available here on github.

The system is built around the notion of alerts which have different states (such as "pending", "raised", or "acknowledged"). Each alert is submitted via a UDP packet getting sent to the server with a bunch of fields:

  • Source IP of the submitter (this is implicit).
  • A human-readable ID such as "heartbeat", "disk-space-/", "disk-space-/root", etc.
  • A raise-field.
  • More fields here ..

Each incoming submission is stored in a database, and events are considered unique based upon the source+ID pair, such that if you see a second submission from the same IP, with the same ID, then any existing details are updated. This update-on-receive behaviour is pretty crucial to the way things work, especially when coupled with the "raise"-field.

A raise field might have values such as:

  • +5m
    • This alert will be raised in 5 minutes.
  • now
    • This alert will be raised immediately.
  • clear
    • This alert will be cleared immediately.

One simple way the system is used is to maintain heartbeat-alerts. Imagine a system sends the following message, every minute:

  • id:heartbeat raise:+5m [source:1.2.3.4]
    • The first time this is received by the server it will be recorded in the database.
    • The next time this is received the existing event will be updated, and crucially the time to raise an alert will be bumped (i.e. it will become current-time + 5m).
    • The next time the update is received the raise-time will also be bumped
    • ..

At some point the submitting system crashes, and five minutes after the last submission the alert moves from "pending" to "raised" - which will make it visible in the web-based user-interface, and also notify an engineer.

With this system you could easily write trivial and stateless ad-hoc monitoring scripts like the following, which would raise or clear an alert:

curl https://example.com && send-alert --id http-example.com --raise clear --detail "site ok" || \ send-alert --id http-example.com --raise now --detail "site down"

In short, mauvealert allows aggregation of events, and centralises how/when engineers are notified. There's the flexibility to look at events and send them to different people at different times of the day, to decide some are urgent and must trigger SMSs, and some are ignorable and just generate emails.

(In mauvealert this routing is done by having a configuration file containing ruby; this attempts to match events, so you could do things like say "if the event-id contains 'failed-disc' then notify a DC-person", or "if the event was raised from $important-system then notify everybody".)

I thought the design was pretty cool, and wanted something similar for myself. My version, which I set up a couple of years ago, was based around HTTP+JSON rather than UDP-messages, and written in perl:

The advantage of using HTTP+JSON is that writing clients to submit events to the central system could easily and cheaply be done in multiple environments for multiple platforms. I didn't see the need for the efficiency of using binary UDP-based messages for submission, given that I have ~20 servers at the most.

Anyway, the point of this blog post is that I've now rewritten my simplified personal clone as a golang project, which makes deployment much simpler. Events are stored in an SQLite database, and when raised they get sent to me via Pushover:

The main difference is that I don't allow you to route events to different people, or notify via different mechanisms. Every raised alert gets sent to me, and only me, regardless of the time of day. (Albeit via a pluggable external process, such that you could add your own local logic.)

I've written too much already, getting sidetracked by explaining how neat mauvealert (and by extension purple) was, but I also rewrote the Perl DNS-lookup service at https://dns-api.org/ in golang too:

That had a couple of regressions which were soon reported and fixed by a kind contributor (lack of CORS headers, most obviously).

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

Debian Policy call for participation -- March 2018

Fri, 30/03/2018 - 2:23 AM

We’re getting close to a new release of Policy. Just this week Adam Borowski stepped up to get a patch written for #881431 – thanks for getting things moving along!

Please consider jumping into some of these bugs.

Consensus has been reached and help is needed to write a patch

#823256 Update maintscript arguments with dpkg >= 1.18.5

#833401 virtual packages: dbus-session-bus, dbus-default-session-bus

#835451 Building as root should be discouraged

#838777 Policy 11.8.4 for x-window-manager needs update for freedesktop menus

#845715 Please document that packages are not allowed to write outside thei…

#853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai…

#874019 Note that the '-e' argument to x-terminal-emulator works like '--'

#874206 allow a trailing comma in package relationship fields

Wording proposed, awaiting review from anyone and/or seconds by DDs

#756835 Extension of the syntax of the Packages-List field.

#786470 [copyright-format] Add an optional “License-Grant” field

#835451 Building as root should be discouraged

#845255 Include best practices for packaging database applications

#846970 Proposal for a Build-Indep-Architecture: control file field

#864615 please update version of posix standard for scripts (section 10.4)

#881431 Clarify a version number is unique field

#892142 update example to use default-mta instead of exim

Merged for the next release (no action needed)

#299007 Transitioning perms of /usr/local

#515856 remove get-orig-source

#742364 Document debian/missing-sources

#886890 Fix for found typos

#888437 Several example scripts are not valid.

#889960 stray line break at clean target in section 4.9

#892142 update example to use default-mta instead of exim

Sean Whitton https://spwhitton.name//blog/ Notes from the Library

A look at terminal emulators, part 1

Fri, 30/03/2018 - 2:00 AM

This article is the first in a two-part series about terminal emulators.

Terminals have a special place in computing history, surviving along with the command line in the face of the rising ubiquity of graphical interfaces. Terminal emulators have replaced hardware terminals, which themselves were upgrades from punched cards and toggle-switch inputs. Modern distributions now ship with a surprising variety of terminal emulators. While some people may be happy with the default terminal provided by their desktop environment, others take great pride in using exotic software for running their favorite shell or text editor. But as we'll see in this two-part series, not all terminals are created equal: they vary wildly in terms of functionality, size, and performance.

Some terminals have surprising security vulnerabilities and most have wildly different feature sets, from support for a tabbed interface to scripting. While we have covered terminal emulators in the distant past, this article provides a refresh to help readers determine which terminal they should be running in 2018. This first article compares features, while the second part evaluates performance.

Here are the terminals examined in the series:

Terminal         Debian          Fedora   Upstream  Notes
Alacritty        N/A             N/A      6debc4f   no releases, Git head
GNOME Terminal   3.22.2          3.26.2   3.28.0    uses GTK3, VTE
Konsole          16.12.0         17.12.2  17.12.3   uses KDE libraries
mlterm           3.5.0           3.7.0    3.8.5     uses VTE, "Multi-lingual terminal"
pterm            0.67            0.70     0.70      PuTTY without ssh, uses GTK2
st               0.6             0.7      0.8.1     "simple terminal"
Terminator       1.90+bzr-1705   1.91     1.91      uses GTK3, VTE
urxvt            9.22            9.22     9.22      main rxvt fork, also known as rxvt-unicode
Xfce Terminal    0.8.3           0.8.7    0.8.7.2   uses GTK3, VTE
xterm            327             330      331       the original X terminal

Those versions may be behind the latest upstream releases, as I restricted myself to stable software that managed to make it into Debian 9 (stretch) or Fedora 27. One exception to this rule is the Alacritty project, which is a poster child for GPU-accelerated terminals written in a fancy new language (Rust, in this case). I excluded web-based terminals (including those using Electron) because preliminary tests showed rather poor performance.

Unicode support

The first feature I considered is Unicode support. The first test was to display a string that was based on a string from the Wikipedia Unicode page: "é, Δ, Й, ק ,م, ๗,あ,叶, 葉, and 말". This tests whether a terminal can correctly display scripts from all over the world reliably. xterm fails to display the Arabic Mem character in its default configuration:

By default, xterm uses the classic "fixed" font which, according to Wikipedia, has "substantial Unicode coverage since 1997". Something is happening here that makes the character display as a box: only by bumping the font size to "Huge" (20 points) is the character finally displayed correctly, and then other characters fail to display correctly:

Those screenshots were generated on Fedora 27 as it gave better results than Debian 9, where some older versions of the terminals (mlterm, namely) would fail to properly fall back across fonts. Thankfully, this seems to have been fixed in later versions.

Now notice the order of the string displayed by xterm: it turns out that Mem and the following character, the Semitic Qoph, are both part of right-to-left (RTL) scripts, so technically, they should be rendered right to left when displayed. Web browsers like Firefox 57 handle this correctly in the above string. A simpler test is the word "Sarah" in Hebrew (שרה). The Wikipedia page about bi-directional text explains that:

Many computer programs fail to display bi-directional text correctly. For example, the Hebrew name Sarah (שרה) is spelled: sin (ש) (which appears rightmost), then resh (ר), and finally heh (ה) (which should appear leftmost).

Many terminals fail this test: Alacritty, VTE-derivatives (GNOME Terminal, Terminator, and XFCE Terminal), urxvt, st, and xterm all show Sarah's name backwards—as if we would display it as "Haras" in English.

The other challenge with bi-directional text is how to align it, especially mixed RTL and left-to-right (LTR) text. RTL scripts should start from the right side of the terminal, but what should happen in a terminal where the prompt is in English, on the left? Most terminals do not make special provisions and align all of the text on the left, including Konsole, which otherwise displays Sarah's name in the right order. Here, pterm and mlterm seem to be sticking to the standard a little more closely and align the test string on the right.

Paste protection

The next critical feature I have identified is paste protection. While it is widely known that incantations like:

$ curl http://example.com/ | sh

are arbitrary code execution vectors, a less well-known vulnerability is that hidden commands can sneak into copy-pasted text from a web browser, even after careful review. Jann Horn's test site brilliantly shows how the apparently innocuous command:

git clone git://git.kernel.org/pub/scm/utils/kup/kup.git

gets turned into this nasty mess (reformatted a bit for easier reading) when pasted from Horn's site into a terminal:

git clone /dev/null; clear;
echo -n "Hello ";
whoami|tr -d '\n';
echo -e '!\nThat was a bad idea. Don'"'"'t copy code from websites you don'"'"'t trust! \
Here'"'"'s the first line of your /etc/passwd: ';
head -n1 /etc/passwd
git clone git://git.kernel.org/pub/scm/utils/kup/kup.git

This works by hiding the evil code in a <span> block that's moved out of the viewport using CSS.

Bracketed paste mode is explicitly designed to neutralize this attack. In this mode, terminals wrap pasted text in a pair of special escape sequences to inform the shell of that text's origin. The shell can then ignore special editing characters found in the pasted text. Terminals going all the way back to the venerable xterm have supported this feature, but bracketed paste also needs support from the shell or application running on the terminal. For example, software using GNU Readline (e.g. Bash) needs the following in the ~/.inputrc file:

set enable-bracketed-paste on
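
To see the mechanism itself (a quick sketch; these are the standard xterm-style sequences): the application opts in with the DECSET 2004 sequence, and the terminal then wraps every paste in a pair of markers that no keyboard can produce:

# ask the terminal to bracket pastes (applications do this, not users)
printf '\033[?2004h'
# a paste now reaches the application as:
#   ESC[200~ ...pasted bytes... ESC[201~
# so the shell can refuse to execute anything between the markers
printf '\033[?2004l'   # switch it off again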

Unfortunately, Horn's test page also shows how to bypass this protection, by including the end-of-pasted-text sequence in the pasted text itself, thus ending the bracketed mode prematurely. This works because some terminals do not properly filter escape sequences before adding their own. For example, in my tests, Konsole fails to properly escape the second test, even with .inputrc properly configured. That means it is easy to end up with a broken configuration, either due to an unsupported application or misconfigured shell. This is particularly likely when logged on to remote servers where carefully crafted configuration files may be less common, especially if you operate many different machines.

A good solution to this problem is the confirm-paste plugin of the urxvt terminal, which simply prompts before allowing any paste with a newline character. I haven't found another terminal with such definitive protection against the attack described by Horn.

Tabs and profiles

A popular feature is support for a tabbed interface, which we'll define broadly as a single terminal window holding multiple terminals. This feature varies across terminals: while traditional terminals like xterm do not support tabs at all, more modern implementations like Xfce Terminal, GNOME Terminal, and Konsole all have tab support. Urxvt also features tab support through a plugin. But in terms of tab support, Terminator takes the prize: not only does it support tabs, but it can also tile terminals in arbitrary patterns (as seen at the right).

Another feature of Terminator is the capability to "group" those tabs together and to send the same keystrokes to a set of terminals all at once, which provides a crude way to do mass operations on multiple servers simultaneously. A similar feature is also implemented in Konsole. Third-party software like Cluster SSH, xlax, or tmux must be used to have this functionality in other terminals.
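
For the tmux route, the rough equivalent of Terminator's grouping is the synchronize-panes window option, which mirrors typed input to every pane in the current window:

$ tmux setw synchronize-panes on    # run inside a session; 'off' stops the broadcast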

Tabs work especially well with the notion of "profiles": for example, you may have one tab for your email, another for chat, and so on. This is well supported by Konsole and GNOME Terminal; both allow each tab to automatically start a profile. Terminator, on the other hand, supports profiles, but I could not find a way to have specific tabs automatically start a given program. Other terminals do not have the concept of "profiles" at all.
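
Konsole can even do this from the command line; for example, the following should open a new tab using a profile named "Mail", assuming such a profile has been created:

$ konsole --new-tab --profile Mail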

Eye candy

The last feature I considered is the terminal's look and feel. For example, GNOME Terminal, Xfce Terminal, and urxvt support transparency, background colors, and background images. Terminator also supports transparency, but recently dropped support for background images, which made some people switch to another tiling terminal, Tilix. I am personally happy with just an Xresources file setting a basic color scheme (Solarized) for urxvt. Such non-standard color themes can create problems, however: Solarized, for example, conflicts with color-using applications such as htop and IPTraf.
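
For the curious, a minimal sketch of such a setup in ~/.Xresources might look like the following, with values taken from the published Solarized dark palette (the remaining color2 through color15 entries are omitted here):

! Solarized dark for urxvt, trimmed to the basics
URxvt.background:  #002b36
URxvt.foreground:  #839496
URxvt.cursorColor: #93a1a1
URxvt.color0:      #073642
URxvt.color1:      #dc322f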

While the original VT100 terminal did not support colors, newer terminals usually do, although they were often limited to a 256-color palette. For power users styling their terminals, shell prompts, or status bars in more elaborate ways, this can be a frustrating limitation. A Gist keeps track of which terminals have "true color" support. My tests confirm that st, Alacritty, and the VTE-derived terminals I tested have excellent true color support. Other terminals, however, do not fare so well and actually fail to display even 256 colors. Comparing true color support in GNOME Terminal, st, and xterm, the latter still does a decent job at approximating the colors using its 256-color palette. Urxvt not only fails the test but even shows blinking characters instead of colors.
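
A quick way to check a terminal yourself is the one-liner that circulates along with that Gist: emit a 24-bit color escape sequence and see whether the word appears in a solid orange rather than a dithered approximation:

$ printf '\033[38;2;255;100;0mTRUECOLOR\033[0m\n'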

Some terminals also parse the text for URL patterns to make them clickable. This is the case for all VTE-derived terminals, while urxvt requires the matcher plugin to visit URLs through a mouse click or keyboard shortcut. Other terminals reviewed do not display URLs in any special way.
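
For urxvt, the matcher setup is again a matter of a few ~/.Xresources lines (these reflect the options documented in the plugin's manual page; combine the plugin list with confirm-paste if you use both):

URxvt.perl-ext-common: default,matcher,confirm-paste
URxvt.url-launcher:    /usr/bin/xdg-open
URxvt.matcher.button:  1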

Finally, a new trend treats scrollback buffers as an optional feature. For example, st has no scrollback buffer at all, pointing people toward terminal multiplexers like tmux and GNU Screen in its FAQ. Alacritty also lacks scrollback buffers but will add support soon because there was "so much pushback on the scrollback support". Apart from those outliers, every terminal I could find supports scrollback buffers.

Preliminary conclusions

In the next article, we'll compare performance characteristics like memory usage, speed, and latency of the terminals. But we can already see that some terminals have serious drawbacks. For example, users dealing with RTL scripts on a regular basis may be interested in mlterm and pterm, as they seem to have better support for those scripts; Konsole also scores well here. Users who do not normally work with RTL scripts will be equally happy with the other terminal choices.

In terms of paste protection, urxvt stands alone above the rest with its confirm-paste plugin, which I find particularly convenient. Those looking for all the bells and whistles will probably head toward terminals like Konsole. Finally, it should be noted that the VTE library provides an excellent basis for terminals offering true color support, URL detection, and so on. So at first glance, the default terminal provided by your favorite desktop environment might just fit the bill, but we'll reserve judgment until our look at performance in the next article.

This article first appeared in the Linux Weekly News.

Antoine Beaupré https://anarc.at/tag/debian-planet/ pages tagged debian-planet

The subjectification of a racial group

Fri, 30/03/2018 - 12:13am

In the philosophy department the other day we were discussing race-based sexual preferences. As well as considering the cases in which this is ethically problematic, we were trying to determine cases in which it might be okay.

A colleague suggested that a preference with the following history would not be problematic. There is a culture with which he feels a strong affiliation, having spent time living in it and having a keen interest in various of its aspects, such as its food. As a result, he is more likely, on average, to find himself sexually attracted to someone from that culture: he shares something with them. And since almost all members of that culture are of a particular racial group, he is more likely to find himself sexually attracted to someone of that race than of other races, ceteris paribus.

The cultural affiliation is something good. The sexual preference is then an ethically neutral side effect of that affiliation. My colleague suggested a name for the process which is responsible for the preference: he has subjectified his relationship with the culture. Instead of objectifying members of that group, as happens with problematic race-based sexual preferences, he has done something which counts as the opposite.

I am interested in thinking more about the idea of subjectification.

Sean Whitton https://spwhitton.name//blog/ Notes from the Library

Starting the Ayatana Indicators Transition in Debian

Thu, 29/03/2018 - 3:03pm

This is to make people aware of, and inform them about, an ongoing effort to replace Indicators in Debian (most people know the concept from Ubuntu) with a more generically developed and actively maintained fork: Ayatana Indicators.

TL;DR

In Debian, we will soon start sending out patches to SNI-supporting applications via Debian's BTS (and probably upstream trackers, too) that make the shift from Ubuntu AppIndicator (poorly maintained in Debian) to Ayatana AppIndicator.

Status of the work being done is documented here: https://wiki.debian.org/Ayatana/IndicatorsTransition

Why Ayatana Indicators

The fork is currently pushed forward by the Debian and Ubuntu MATE packaging team.

The Indicators concept was originally documented by Canonical; find your entry points in the readings here [1,2].

Canonical Ltd. did some great work around Ubuntu Indicators, and the Indicators concept has always been a distinctive identifying feature of Ubuntu. Now, with the switch to GNOME 3, the future of Indicators in Ubuntu is uncertain. This is where Ayatana Indicators comes in...

The main problem with Ubuntu Indicators today (and ever since their inception) is that they only work properly on Ubuntu, mostly because of one Ubuntu-specific patch against GTK-3 [3].

In Ayatana Indicators (speaking with my upstream hat on now), we are currently working on a re-implementation of the rendering part of the indicators (using GTK's popovers rather than menu shells), so that it works on vanilla GTK-3. Help from GTK-3 developers is highly welcome, in case you feel like chiming in.

Furthermore, the various indicator icons in Ubuntu (-session, -power, -sound, etc.; see below for more info) have increasingly been targeted at sole usage within the Unity 7 and 8 desktop environments. They can be used with other desktop environments, but there they tend to be oblivious to their surroundings (and sometimes behave outright oddly).

In Ayatana Indicators, we are working on generalizing the functionality of those indicator icon applications and making them more "gnostic" on other desktop environments, i.e. properly aware of the desktop they run on.

Ayatana Indicators as an upstream project will be very open to contributions from developers of other desktop environments who want to use the indicator icons with their desktop shell but need adaptations for their environment. Furthermore, we want to encourage Unity 7 and Unity 8 developers to consider switching over (and getting one step closer to the goal of shipping Unity on non-Ubuntu systems). First discussions have already taken place with the Unity 8 maintainers (the people from UBports / Ubuntu Touch).

The different Components of Ayatana Indicators

The 'indicator-renderer' Applets

These are mostly panel plugins that render the system tray icons, menus, and widgets defined by indicator-aware applications. They normally come with your desktop environment (if it supports indicators).

Letting the desktop environment render the system tray itself ensures that the indicator icons (i.e. the desktop system tray) look just like the rest of the desktop shell. With the classical (XEmbed-based) system tray (or notification area), each application renders its icon and menus itself, which can cause theming problems, accessibility (a11y) issues, and more.

Examples of indicator renderers are: mate-indicator-applet, budgie-indicator-applet, xfce4-indicator-plugin, etc.

Shared Library: Rendering and Loading of Indicators

The Ayatana Indicators project currently only provides a rendering shared library for GTK-2 and GTK-3 based applications. We still need to connect better with the Qt world.

The rendering library (used by the above renderers) is libayatana-indicator.

This library supports:

  • loading and rendering of old-style indicators
  • loading and rendering of NG (next-generation) indicators

The libayatana-indicator library also utilizes a variety of versatile GTK-3 widgets defined in another shared library: ayatana-ido.

Ayatana Indicator Applets

The Ayatana Indicators project continues and generalizes various indicator icon applications that are not really standalone applications, but rather system / desktop control elements:

  • ayatana-indicator-session (logout, lock screen, user guides, etc.)
  • ayatana-indicator-power (power management)
  • ayatana-indicator-sound (sound and multimedia control)
  • ayatana-indicator-datetime (clock, calendar, evolution-data-server integration)
  • ayatana-indicator-notifications (libnotify collector of system messages)
  • ayatana-indicator-printers (interact with CUPS print jobs and queues)

These indicators are currently under heavy re-development. The current effort in Ayatana Indicators is to make them far more generic and usable on all desktop environments that want to support them. For example, we recently added Xfce awareness to the -session and -power indicator icons.

One special indicator icon is the Ayatana Indicator Application indicator. It provides SNI support to third-party applications (see below). To the desktop applet, it appears just like any of the other indicators named above, but it opens the door to the world of SNI-supporting applications.

An easy-to-install test case for the indicator icons provided by the Ayatana Indicators project is the arctica-greeter package in Debian buster. The icons displayed in the greeter are Ayatana Indicators.

Ayatana AppIndicator API

The Ayatana AppIndicator API is just one way of talking to an SNI DBus service. The implementation lives in the shared library 'libayatana-appindicator'. This library provides an easy-to-use API that allows GTK-2/3 applications to create an indicator icon in a panel that has an indicator renderer added.

In the application, the developer creates a generic menu structure and defines one or more icons for the system tray (with more than one icon, only one is shown at a time, plus some text if needed, but the icon may change based on the application's status). This generic menu is sent to a DBus interface (org.kde.StatusNotifier). Sometimes people say that such applications have SNI support (StatusNotifier Interface support).
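
To give an idea of what this looks like from the application side, here is a minimal sketch in C against the classic AppIndicator API as carried over by libayatana-appindicator (the indicator id, icon name, and menu contents are made up for the example):

#include <gtk/gtk.h>
#include <libayatana-appindicator/app-indicator.h>

int main (int argc, char **argv)
{
    gtk_init (&argc, &argv);

    /* Create the indicator: a unique id, a themed icon name, a category. */
    AppIndicator *indicator =
        app_indicator_new ("example-app", "mail-unread",
                           APP_INDICATOR_CATEGORY_APPLICATION_STATUS);

    /* The generic menu structure that gets exported over DBus. */
    GtkWidget *menu = gtk_menu_new ();
    GtkWidget *quit = gtk_menu_item_new_with_label ("Quit");
    g_signal_connect (quit, "activate", G_CALLBACK (gtk_main_quit), NULL);
    gtk_menu_shell_append (GTK_MENU_SHELL (menu), quit);
    gtk_widget_show_all (menu);

    app_indicator_set_menu (indicator, GTK_MENU (menu));
    app_indicator_set_status (indicator, APP_INDICATOR_STATUS_ACTIVE);

    gtk_main ();
    return 0;
}

Build it against the gtk+-3.0 and (to the best of my knowledge of the Debian packaging) ayatana-appindicator3-0.1 pkg-config modules.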

The Ayatana Indicators project offers Ayatana AppIndicator to GTK-3 developers (and GTK-2 developers, but well...). Canonical implemented bindings for Python 2, Perl, GIR, and Mono/CLI, and we will continue to support these as long as it makes sense.

The nice part of the Ayatana AppIndicator shared library is that, if a desktop shell does not offer the SNI service, it tries to fall back to the XEmbed way of adding system tray icons to your panel / status bar.

In Debian, we will soon start sending out patches to SNI-supporting applications that make the shift from Ubuntu AppIndicator (poorly maintained in Debian) to Ayatana AppIndicator. The cool part of this is that you can convert your GTK-3 application from Ubuntu AppIndicator to Ayatana AppIndicator and use it on top of any(!) SNI implementation, be it an applet based on Ubuntu Indicators, one based on Ayatana Indicators, or some other implementation, like vala-sntray-applet or the SNI support in KDE.
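
In practice, such a patch tends to be mostly mechanical, since the Ayatana API is source-compatible with the Ubuntu one; assuming the pkg-config module names currently used in Debian, only the include and the build system reference change:

/* Before: Ubuntu AppIndicator (pkg-config module: appindicator3-0.1) */
#include <libappindicator/app-indicator.h>

/* After: Ayatana AppIndicator (pkg-config module: ayatana-appindicator3-0.1) */
#include <libayatana-appindicator/app-indicator.h>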

Further Readings

Some more URLs for deeper reading...

You can also find more info on my blog: https://sunweavers.net

References

sunweaver http://sunweavers.net/blog/blog/1 sunweaver's blog
