
Odd dependency on Google Chrome

Planet Debian - Wed, 20/06/2018 - 1:21pm

For weeks I have had problems with Google Chrome. It would work a few times and then, for reasons I didn’t understand, stop working. On the command line you would get several screens of text, but the Chrome window would never appear.

So I tried the Beta, and it worked… once.

Deleted all the cache and configuration and it worked… once.

Every time, on the second and subsequent starts of Chrome, the process would be in an infinite loop listening on a Unix socket (fd 7) but no window would appear.

By sheer luck, in the screenfuls of spam I noticed this:

Gkr-Message: 21:07:10.883: secret service operation failed: The name org.freedesktop.secrets was not provided by any .service files

Hmm: I noticed that every time I started a fresh new Chrome, I logged into my Google account. So, once again clearing things, I started Chrome, didn’t log in, and closed and reopened. I had Chrome running the second time! Alas, without all my stuff synchronised.

An issue for Mailspring put me onto the right path. Installing gnome-keyring (or the dependencies p11-kit and gnome-keyring-pkcs11) fixed Chrome.

So if Chrome starts but you get no window, especially if you use Cinnamon, try that trick.
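As a quick diagnostic, you can check whether any D-Bus activation file on the system provides org.freedesktop.secrets, the missing name from the Gkr-Message error above (a hedged sketch; the directory is the conventional D-Bus services path):

```python
import glob

def secrets_service_available(service_dir="/usr/share/dbus-1/services"):
    """Return True if some D-Bus .service file provides org.freedesktop.secrets.

    gnome-keyring ships such a file; if none is found, Chrome's secret
    service lookup fails with the error quoted above.
    """
    for path in glob.glob(f"{service_dir}/*.service"):
        with open(path) as f:
            if "Name=org.freedesktop.secrets" in f.read():
                return True
    return False
```

If this returns False, installing gnome-keyring should add the missing service file.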



Craig Small Dropbear

Plans for DebCamp18

Planet Debian - Wed, 20/06/2018 - 10:32am


I’m going to DebCamp18! I should arrive at NCTU around noon on Saturday, 2018-07-21.

My Agenda
  • DebConf Video: Research if/how MediaDrop can be used with existing Debian video archive backends (basically, just a bunch of files on http).
  • DebConf Video: Take a better look at PeerTube and prepare a summary/report for the video team so that we better know if/how we can use it for publishing videos.
  • Debian Live: I have a bunch of loose ideas that I’d like to formalize before then. At the very least I’d like to file a bunch of paper cut bugs for the live images that I just haven’t been getting to. Live team may also need some revitalization, and better co-ordination with packagers of the various desktop environments in terms of testing and release sign-offs. There’s a lot to figure out and this is great to do in person (might lead to a DebConf BoF as well).
  • Debian Live: Current live weekly images have Calamares installed, although it’s just a test and there’s no indication yet whether it will be available on the beta or final release images. We’ll have to do a good assessment of all the consequences and weigh up what will work out best. I want to put together an initial report with the live team members who are around.
  • AIMS Desktop: Get core AIMS meta-packages into Debian… no blockers on this, I just haven’t had enough quiet time to do it (and thanks to AIMS for covering my travel to Hsinchu!)
  • Get some help on ITPs that have been a little bit more tricky than expected:
    • gamemode – Adjust power saving and cpu governor settings when launching games
    • notepadqq – A Linux clone of Notepad++, a popular text editor on Windows
    • Possibly finish up zram-tools which I just don’t get the time for. It aims to be a set of utilities to manage compressed RAM disks that can be used for temporary space, compressed in-memory swap, etc.
  • Debian Package of the Day series: If there’s time and interest, make some in-person videos with maintainers about their packages.
  • Get to know more Debian people, relax and socialize!
jonathan Jonathan Carter

Triggering Debian Builds on OBS

Planet Debian - Wed, 20/06/2018 - 4:26am

This is the fifth post in my Google Summer of Code 2018 series. Links to the previous posts can be found below:

My GSoC contributions can be seen at the following links

Debian builds on OBS

OBS supports building Debian packages. To do so, one must properly configure a project so OBS knows it is building a .deb package, and the packages needed to handle and build Debian packages must be installed.

openSUSE’s OBS instance has repositories for Debian 8, Debian 9, and Debian testing.

We will use base Debian projects in our OBS instance as Download on Demand projects and use subprojects to achieve our final goal (building packages against Clang). By using the same configurations as the openSUSE public projects, we could perform builds for Debian 8 and Debian 9 in our local OBS deployments. However, builds for Debian Testing and Unstable were failing.

On further investigation, we realized the OBS version packaged in Debian cannot decompress control.tar.xz files in .deb packages; xz has been the default compression format for the control tarball since dpkg 1.19 (it used to be control.tar.gz before that). This issue was reported on the OBS repositories and was fixed in a pull request that is not yet included in the Debian OBS package. For now, we apply this patch in our OBS instance through our salt states.
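The underlying point is easy to illustrate outside OBS: the fix amounts to autodetecting the control tarball’s compression instead of assuming gzip. A small Python sketch (illustrative, not the actual OBS patch):

```python
import io
import tarfile

def open_control_tar(blob: bytes) -> tarfile.TarFile:
    """Open a .deb control tarball regardless of its compression.

    Tools that hard-code control.tar.gz break on control.tar.xz (the
    dpkg >= 1.19 default); tarfile's "r:*" mode autodetects gzip and xz alike.
    """
    return tarfile.open(fileobj=io.BytesIO(blob), mode="r:*")
```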

After applying the patch, the builds on Debian 8 and 9 still finish successfully, but builds against Debian Testing and Unstable get stuck in a blocked state: dependencies are downloaded, the OBS scheduler stalls for a while, the downloaded packages get cleaned up, and then the dependencies are downloaded again. The OBS backend loops through this procedure and never assigns a build to a worker. No logs with hints leading to a possible issue are emitted, giving us no clue about the problem.

Although I am inclined to believe we have a problem with our dependencies list, I am still debugging this issue during this week and will bring more news on my next post.

Refactoring project configuration files

Reshabh opened a pull request in our salt repository with the OBS configuration files for Ubuntu, also based on openSUSE’s public OBS configurations. Based on Sylvestre’s comments, I have been refactoring the Debian configuration files based on the OBS documentation. One of the proposed improvements is to use debootstrap to mount the builder chroot; this will allow us to reduce the number of dependencies listed in the project configuration files. The issue which led to debootstrap support in OBS may also point to more interesting resources on the matter.

Next steps (A TODO list to keep on the radar)
  • Fix OBS builds on Debian Testing and Unstable
  • Write patches for the OBS worker issue described in post 3
  • Change the default builder to perform builds with clang
  • Trigger new builds by using the dak/mailing lists messages
  • Verify the script idempotency and propose patch to opencollab repository
  • Separate salt recipes for workers and server (locally)
  • Properly set hostnames (locally)
Athos Ribeiro Debian on /home/athos

Google Summer of Code 2018 with Debian - Week 5

Planet Debian - Tue, 19/06/2018 - 8:30pm

During week 5, there were 3 merge requests undergoing the review process simultaneously. I learned a lot about how code should be written to assist the reader, since code is read many more times than it is written.

Services and Utility

After the user has entered their information on the signin or signup screen, the job of querying the database was given to a module named updatedb. Its job was to clean user input, hash the password, query the database, and respond with the appropriate result after the query is executed. In a discussion, Sanyam said updatedb’s name doesn’t match the functions it incorporates. He explained the virtue of service and utility modules/functions, and that this was the best place to restructure the code along those lines.

Utility functions can be described roughly as functions which perform some operation on data without caring much about the relationship of that data to the application. So generating a uuid, cleaning the email address, cleaning the full name and hashing the password become our utility functions, and can be seen for signup and similarly for signin.

Service functions can be described roughly as functions which, while performing operations on data, take its relationship with the application into account. Hence, these functions are not generic but application-specific. sign_up_user is one such service function: it receives user information, calls utility functions to modify that information, and queries the database with respect to the signup operation, i.e. adding the new user’s details to the database or raising SignUpError if the details are already present. This can be seen in the services module for signup and signin as well.
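A condensed sketch of the split (the function names follow the post; the bodies, the dict standing in for the real database, and the sha256 hashing are illustrative assumptions, not the project’s actual code):

```python
import hashlib
import uuid

# --- Utility functions: operate on data, know nothing about the app ---
def clean_email(email):
    return email.strip().lower()

def hash_password(password):
    return hashlib.sha256(password.encode()).hexdigest()

def generate_uuid():
    return str(uuid.uuid4())

# --- Service function: application-specific signup operation ---
class SignUpError(Exception):
    pass

def sign_up_user(db, email, password):
    """Add a new user to `db` (a dict standing in for the real database)."""
    email = clean_email(email)
    if email in db:
        raise SignUpError("details are already present")
    user_id = generate_uuid()
    db[email] = {"id": user_id, "password": hash_password(password)}
    return user_id
```

The utilities stay reusable anywhere; only the service knows what “signing up” means for this application.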

Persisting database connection

This is how the connection to the database used to work before the review: the settings module would create the connection to the database, create the table schema if not present, and close the connection. A few constants are saved in the module to be used by signup and signin in order to connect to the database. The problem is that a database connection then has to be established every time a query is executed by the signup or signin services. Since the sqlite3 database is saved in a file alongside the application, I thought it wouldn’t be a problem to make a connection whenever needed, but it adds overhead on the OS which can slow down the application when scaled. To resolve this, settings now returns the connection object, which can be reused in any other module.
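A minimal sketch of the pattern (the module-level cache, schema, and names are illustrative, not the project’s actual code): the connection is created once and every later call reuses it.

```python
import sqlite3

_connection = None  # module-level cache, reused across queries

def get_connection(path=":memory:"):
    """Create the database connection and schema once; later calls reuse it."""
    global _connection
    if _connection is None:
        _connection = sqlite3.connect(path)
        _connection.execute(
            "CREATE TABLE IF NOT EXISTS users (id TEXT PRIMARY KEY, email TEXT)"
        )
    return _connection
```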

Integrating SignUp with Dashboard

While the SignUp feature was being reviewed, the Dashboard was merged and I had to refactor the SignUp merge request accordingly. The natural flow should be SignUp as the default screen on the UI, with the Dashboard displayed after a successful signup. To achieve such a flow, I used a screen manager, which handles different screens and the transitions between them with predefined animations. This is defined in the main module and the entire flow can be seen in action below.

Designing Tutorials and Tools menu

Once the user is on the Dashboard, they have the option of picking from the different modules and going through the tutorials and tools available in each. The idea is to display a difficulty tip as well, so it becomes easier for the user to begin. Below is what I’ve designed to incorporate this.

Implementing Tutorials and Tools menu

Now comes the fun part: thinking about the architecture of the modules just designed, so they can take shape as code in the application. The idea is to define them in a JSON file to be picked up by the respective module afterwards. This way it’ll be easier to add new tutorials and tools, and hence we have this resultant JSON. The development of this feature can be followed on this merge request.

Now remains the quest to design and implement the structure of tutorials, generalized so it can be populated from a JSON file. This will give flexibility to tutorial authors, and a UI module could even be implemented to modify this JSON and add new tutorials without knowing how to code. Sounds amazing, right? We’ll see how it works out soon. If you have any suggestions, make sure to comment down below, on the merge request, or reach out to me.
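A toy illustration of the data-driven idea (the key names and entries here are made up, not the project’s actual JSON): adding a tutorial becomes a data change, not a code change.

```python
import json

# Hypothetical menu definition; "difficulty" follows the difficulty-tip idea above.
MENU_JSON = """
{
  "tutorials": [
    {"title": "Encryption basics", "difficulty": "beginner"},
    {"title": "Key signing",       "difficulty": "intermediate"}
  ],
  "tools": [
    {"title": "Key generator", "difficulty": "beginner"}
  ]
}
"""

menu = json.loads(MENU_JSON)
# The UI can now build its buttons by iterating over menu["tutorials"]
# and menu["tools"] instead of hard-coding each entry.
```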

The Conclusion

Since SignUp has also been merged, I’ll have to refactor SignIn now to integrate all of it into one happy application and complete the natural flow of things. Also, the design and development of tools/tutorials is underway, and by the time the next blog is out you might be able to test the application with at least one tool or tutorial from one of the modules on the dashboard.

Shashank Kumar Shanky's Brainchild

How markets coopted free software’s most powerful weapon (LibrePlanet 2018 Keynote)

Planet Debian - Tue, 19/06/2018 - 8:03pm

Several months ago, I gave the closing keynote address at LibrePlanet 2018. The talk was about the thing that scares me most about the future of free culture, free software, and peer production.

A video of the talk is online on YouTube and available as a WebM video file (both links should skip the first 3m 19s of thanks and introductions).

Here’s a summary of the talk:

App stores and the so-called “sharing economy” are two examples of business models that rely on techniques for the mass aggregation of distributed participation over the Internet and that simply didn’t exist a decade ago. In my talk, I argue that the firms pioneering these new models have learned and adapted processes from commons-based peer production projects like free software, Wikipedia, and CouchSurfing.

The result is an important shift: A decade ago, the kind of mass collaboration that made Wikipedia, GNU/Linux, or CouchSurfing possible was the exclusive domain of people producing freely and openly in commons. Not only is this no longer true; new proprietary, firm-controlled, and money-based models are increasingly replacing, displacing, outcompeting, and potentially reducing what’s available in the commons. For example, the number of people joining CouchSurfing to host others seems to have been in decline since Airbnb began its own meteoric growth.

In the talk, I discuss how this happened and what I think it means for folks who are committed to working in commons. I talk a little bit about what the free culture and free software communities should do now that mass collaboration, these communities’ most powerful weapon, is being used against them.

I’m very much interested in feedback, provided any way you want to reach me: in person, over email, in comments on my blog, on Mastodon, on Twitter, etc.

Work on the research that is reflected and described in this talk was supported by the National Science Foundation (awards IIS-1617129 and IIS-1617468). Some of the initial ideas behind this talk were developed while working on this paper (official link) which was led by Maximilian Klein and contributed to by Jinhao Zhao, Jiajun Ni, Isaac Johnson, and Haiyi Zhu.

Benjamin Mako Hill copyrighteous

I'm going to DebCamp18, Hsinchu, Taiwan

Planet Debian - Tue, 19/06/2018 - 5:43pm

Here’s what I’m planning to work on – please get in touch if you want to get involved with any of these items.

DebCamp work Throughout DebCamp and DebConf
  • Debian Policy: sticky bugs; process; participation; translations

  • Helping people use dgit and git-debrebase

    • Writing up or following up on feature requests and bugs

    • Design work with Ian and others

Sean Whitton Notes from the Library

Freexian’s report about Debian Long Term Support, May 2018

Planet Debian - Tue, 19/06/2018 - 10:27am

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, about 202 work hours were dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased to 190 hours per month thanks to a few new sponsors who joined to benefit from Wheezy’s Extended LTS support.

We are currently in a transition phase. Wheezy is no longer supported by the LTS team and the LTS team will soon take over security support of Debian 8 Jessie from Debian’s regular security team.

Thanks to our sponsors

New sponsors are in bold.


Raphaël Hertzog apt-get install debian-wizard

Predatory publishers: SciencePG

Planet Debian - Tue, 19/06/2018 - 10:12am

I got spammed again by SciencePG (“Science Publishing Group”).

One of many (usually Chinese or Indian) fake publishers that will publish anything as long as you pay their fees. Unfortunately, once you have published a few papers, you inevitably land on their spam list: they scrape the websites of good journals for email addresses, and you do want your contact email address on your papers.

However, this one is particularly hilarious: They have a spelling error right at the top of their home page!


Speaking of fake publishers. Here is another fun example:

Kim Kardashian, Satoshi Nakamoto, Tomas Pluskal
Wanion: Refinement of RPCs.
Drug Des Int Prop Int J 1(3)- 2018. DDIPIJ.MS.ID.000112.

Yes, that is a paper in the “Drug Designing & Intellectual Properties” International (Fake) Journal. And the content is a typical SciGen-generated paper that throws around random computer buzzwords and makes absolutely no sense. Not even the abstract. The references are also just made up. And so are the first two authors, VIP Kim Kardashian and missing Bitcoin inventor Satoshi Nakamoto…

In the PDF version, the first headline is “Introductiom”, with “m”…

So Lupine Publishers is another predatory publisher, one that neither peer reviews nor checks whether an article is on topic for the journal.

Via Retraction Watch

Conclusion: just because it was published somewhere does not mean this is real, or correct, or peer reviewed…

Erich Schubert Techblogging

Reproducible Builds: Weekly report #164

Planet Debian - Tue, 19/06/2018 - 9:40am

Here’s what happened in the Reproducible Builds effort between Sunday June 10 and Saturday June 16 2018:

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages. This week, version 96 was uploaded to Debian unstable by Chris Lamb. It includes contributions already covered by posts in previous weeks as well as new ones.

development

There were a number of changes to our Jenkins-based testing framework, including:

Packages reviewed and fixed, and bugs filed Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks

GSoC Status Update - First Month

Planet Debian - Tue, 19/06/2018 - 5:00am

In the past month I have been working on my GSoC project in Debian’s Distro Tracker. This project aims at designing and implementing new features in Distro Tracker to better support Debian teams to track the health of their packages and to prioritize their work efforts. In this post, I will describe the current status of my contributions, highlight the main challenges, and point the next steps.

Work Management and Communication

I communicate with Lucas Kanashiro (my mentor) constantly via IRC and personally at least once a week as we live in the same city. We have a weekly meeting with Raphael Hertzog at #debian-qa IRC channel to report advances, collect feedback, solve technical doubts, and planning the next steps.

I created a new repository in Salsa to save the logs of our IRC meetings and to track my tasks through the repository’s issue tracker. Besides that, once a month I’ll post a new status update on my blog, such as this one, with more details regarding my contributions.


When GSoC officially started, Distro Tracker already had some team-related features. Briefly, a team is an entity composed of one or more users interested in the same set of packages. Teams are created manually by users and anyone may join public teams. The team page aggregates some basic information about the team and the list of packages of interest.

Distro Tracker offers a page enabling users to browse public teams, which shows a paginated, sorted list of names. It used to be hard to find a team in this list, since Distro Tracker has more than 110 teams distributed over 6 pages. Hence, I created a new search field with auto-complete at the top of the teams page to enable users to find a team’s page faster, as shown in the following figure:

Also, I have been working on improving the current teams infrastructure to enable Debian’s teams to better track the health of their packages. Initially, we decided to use the current data available in Distro Tracker to create the first version of a new team’s page based on PET.

Presenting team package data in a table on the team’s page would be a relatively trivial task. However, the Distro Tracker architecture aims to provide a generic core which can be extended through distro-specific applications, such as the one for Kali Linux. The core source code provides generic infrastructure to import data related to deb packages and to present it in HTML pages. Therefore, we had to take this requirement into account to provide an extensible infrastructure for showing package data in tables, so that it is easy to add new table fields and to change the default behavior of existing columns provided by the core source code.

So, based on the previously existing panels feature and on Hertzog’s suggestions, I designed and developed a framework to create customizable package tables for teams. This framework is composed of two main classes:

  • BaseTableField - A base class representing fields to be displayed on package tables. Among other things, it must define the column name and a template to render the cell content for a package.
  • BasePackageTable - A base class representing package tables which are displayed on a team page. It may have several BaseTableFields to display package’s information. Different tables may show a different list of packages based on its scope.

We have been discussing my implementation in an open Merge Request, although we are very close to the version that should be incorporated. The following figures show the comparison between the earlier PET’s table and our current implementation.

PET Packages Table Distro Tracker Packages Table

Currently, the team’s page has only one table, which displays all packages related to that team. We already present a set of data very similar to PET’s table. More specifically, the following columns are shown:

  • Package - displays the package name. It is implemented by the core’s GeneralInformationTableField class.
  • VCS - by default, displays the type of the package’s repository (i.e. GIT, SVN) or Unknown. It is implemented by the core’s VcsTableField class. However, the Debian app extends this behavior by adding the changelog version of the latest repository tag and displaying issues identified by Debian’s VCSWatch.
  • Archive - displays the package version in the distro archive. It is implemented by the core’s ArchiveTableField class.
  • Bugs - displays the total number of bugs of a package. It is implemented by the core’s BugsTableField class. Ideally, each third-party app should extend this table field to add links to its own bug tracking system.
  • Upstream - displays the latest upstream version available. This is a specific table field implemented by the Debian app, since this data is imported through Debian-specific tasks; it is not available for other distros.

As the table’s cells are small to present detailed information, we have added Popper.js, a javascript library to display popovers. In this sense, some columns show a popover with more details regarding its content which is displayed on mouse hover. The following figure shows the popover to the Package column:

In addition to designing the table framework, the main challenge was to avoid the N+1 query problem, which introduces performance issues: for a set of N packages displayed in a table, each field element must perform one or more lookups for additional data for a given package. To solve this problem, each subclass of BaseTableField must define a set of Django Prefetch objects to enable BasePackageTable objects to load all required data in batch in advance through prefetch_related, as listed below.

class BasePackageTable(metaclass=PluginRegistry):
    @property
    def packages_with_prefetch_related(self):
        """
        Returns the list of packages with prefetched relationships
        defined by table fields
        """
        package_query_set = self.packages
        for field in self.table_fields:
            for lookup in field.prefetch_related_lookups:
                package_query_set = package_query_set.prefetch_related(lookup)
        additional_data, implemented = vendor.call(
            'additional_prefetch_related_lookups')
        if implemented and additional_data:
            for lookup in additional_data:
                package_query_set = package_query_set.prefetch_related(lookup)
        return package_query_set

    @property
    def packages(self):
        """
        Returns the list of packages shown in the table. One may define
        this based on the scope
        """
        return PackageName.objects.all().order_by('name')


class ArchiveTableField(BaseTableField):
    prefetch_related_lookups = [
        Prefetch(
            'data',
            queryset=PackageData.objects.filter(key='general'),
            to_attr='general_archive_data'
        ),
        Prefetch(
            'data',
            queryset=PackageData.objects.filter(key='versions'),
            to_attr='versions'
        ),
    ]

    @cached_property
    def context(self):
        try:
            info = self.package.general_archive_data[0]
        except IndexError:
            # There is no general info for the package
            return
        general = info.value

        try:
            info = self.package.versions[0].value
            general['default_pool_url'] = info['default_pool_url']
        except IndexError:
            # There is no versions info for the package
            general['default_pool_url'] = '#'

        return general

Finally, it is worth noticing that we also improved the team’s management page by moving all team management features to a single page and improving its visual structure:

Next Steps

Now, we are moving towards adding other tables with different scopes, such as the tables presented by PET:

To this end, we will introduce a Tag model class to categorize packages based on their characteristics, along with an additional task responsible for tagging packages based on their available data. The relationship between packages and tags should be ManyToMany. In the end, we want to perform a simple query to define the scope of a new table, such as the following example, which queries all packages with Release Critical (RC) bugs:

class RCPackageTable(BasePackageTable):
    @property
    def packages(self):
        tag = Tag.objects.get(name='rc-bugs')
        return tag.packages.all()

We will probably need to work on Debian’s VCSWatch to enable it to receive updates through Salsa’s webhooks, especially for real-time monitoring of repositories.

Let’s get moving on! \m/

Arthur Del Esposte

Demoting multi-factor authentication

Planet Debian - Tue, 19/06/2018 - 3:11am

I started teaching at Facultad de Ingeniería, UNAM in January 2013. Back then, I was somewhat surprised (for good!) that the university required me to create a digital certificate for registering student grades at the end of the semester. The setup had some not-so-minor flaws (i.e. the private key was not generated at my computer but centrally, so there could be copies of it outside my control — Not only could, but I noted for a fact a copy was kept at the relevant office at my faculty, arguably to be able to timely help poor teachers if they lost their credentials or patience), but was decent...
Authentication was done via a Java applet, as there needs to be a verifiably(?)-secure way to ensure the certificate is properly checked at the client without transferring it over the network. Good thing!
But... Java applets have fallen out of favor. I don't think I have ever been able to register my grades from a Linux desktop (of course, I don't have a typical Linux desktop, so luck might smile on other people). But last semester and this semester I struggled even to get the grades registered from Windows — it seems every browser has deprecated the extensions for the Java runtime, and applets are no longer a thing. I mean, I could get the Oracle site to congratulate me for having Java 8 installed, but it just would not run the university's applet!
So, after losing the better part of an already-busy evening... I got a mail. It says (partial translation mine):

Subject: Problems to electronically sign at UNAM

We are from the Advance Electronic Signature at UNAM. We are sending you this mail as we have detected you have problems to sign the grades, probably due to the usage of Java.

Currently, we have a new Electronic Signature system that does not use Java, we can migrate you to this system.

The certificate will thus be stored in the cloud, we will deposit it at signing time, you just have to enter the password you will have assigned.

Of course, I answered asking which kind of "cloud" was it, as we all know that the cloud does not exist, it's just other people's computers... And they decided to skip this question.

You can go see what is required for this implementation at Prueba de la firma (Test your signature): it asks me for my CURP (a publicly known number that identifies every Mexican resident). Then it asks me for a password. And that's it. Yay :-Þ

Anyway, I accepted, as losing so much time to grading is just too much. And... Yes, many people will be happy. Partly, I'm relieved by this (I have managed to hate Java for over 20 years). I am just saddened by the fact that we have lost an almost-decent-enough electronic signature implementation and fallen back to just a user-password scheme. There are many ways to do crypto verification on the client side nowadays; I know JavaScript is sandboxed and cannot escape to touch my filesystem, but... It is amazing we are losing this simple and proven use case.

And it's amazing they are pulling it off as if it were a good thing.

gwolf Gunnar Wolf

Debian Policy call for participation -- June 2018

Planet Debian - Wed, 13/06/2018 - 4:15pm

I’d like to push a substantive release of Policy but I’m waiting for DDs to review and second patches in the following bugs. I’d be grateful for your involvement!

If a bug already has two seconds, or three seconds if the proposer of the patch is not a DD, please consider reviewing one of the others, instead, unless you have a particular interest in the topic of the bug.

If you’re not a DD, you are welcome to review, but it might be a more meaningful contribution to spend your time writing patches for bugs that lack them instead.

#786470 [copyright-format] Add an optional “License-Grant” field

#846970 Proposal for a Build-Indep-Architecture: control file field

#864615 please update version of posix standard for scripts (section 10.4)

#880920 Document Rules-Requires-Root field

#891216 Require d-devel consultation for epoch bump

#897217 Vcs-Hg should support -b too

Sean Whitton Notes from the Library

Progress bar for file descriptors

Planet Debian - Wed, 13/06/2018 - 1:43pm

I ran gzip on an 80GB file. It's processing, but who knows how much it has done yet, and when it will end? I wish gzip had a progress bar. Or MySQL. Or…

Ok. Now every program that reads a file sequentially can have a progressbar:


Print progress indicators for programs that read files sequentially.

fdprogress monitors file descriptor offsets and prints progressbars comparing them to file sizes.

Pattern can be any glob expression.
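The underlying mechanism on Linux can be sketched in a few lines (an illustration of the idea, not fdprogress's actual source): read the descriptor's current offset from procfs and compare it with the file's size.

```python
import os

def fd_progress(pid, fd):
    """Return (offset, size) for one file descriptor of a process.

    /proc/<pid>/fdinfo/<fd> starts with a "pos:" line holding the current
    offset; stat on /proc/<pid>/fd/<fd> follows the link to the real file.
    """
    with open("/proc/%d/fdinfo/%d" % (pid, fd)) as f:
        offset = int(f.readline().split()[1])
    size = os.stat("/proc/%d/fd/%d" % (pid, fd)).st_size
    return offset, size
```

Polling this pair in a loop and drawing offset/size is essentially the progress bar.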

usage: fdprogress [-h] [--verbose] [--debug] [--pid PID] [pattern]

show progress from file descriptor offsets

positional arguments:
  pattern            file name to monitor

optional arguments:
  -h, --help         show this help message and exit
  --verbose, -v      verbose output
  --debug            debug output
  --pid PID, -p PID  PID of process to monitor

pv

pv has a --watchfd option that does most of what fdprogress is trying to do: use that instead.


fivi also exists, with specific features to show progressbars for filter commands.

Enrico Zini Enrico Zini: pdo

Hooking up Home Assistant to Alexa + Google Assistant

Planet Debian - Tue, 12/06/2018 - 10:21pm

I have an Echo Dot. Actually I have two; one in my study and one in the dining room. Mostly we yell at Alexa to play us music; occasionally I ask her to set a timer, tell me what time it is or tell me the news. Having set up Home Assistant, it seemed reasonable to try to enable control of the light in the dining room via Alexa.

Perversely I started with Google Assistant, even though I only have access to it via my phone. Why? Because the setup process was a lot easier. There are a bunch of hoops to jump through that are documented on the Google Assistant component page, but essentially you create a new home automation component in the Actions on Google interface, connect it with the Google OAuth stuff for account linking, and open up your Home Assistant instance to the big bad internet so Google can connect.

This final step is where I differed from the provided setup. My instance is accessible internally at home, but I haven’t wanted to expose it externally yet (and I suspect I never will, but will instead have the ability to VPN back in to access it or similar). The default instructions need you to open up API access publicly, and configure Google with your API password, which allows access to everything. I’d rather not.

So, firstly I configured up my external host with an Apache instance and a Let’s Encrypt cert (luckily I have a static IP, so this was actually the base host that the Home Assistant container runs on). Rather than using this to proxy the entire Home Assistant setup I created a unique /external/google/randomstring proxy just for the Google Assistant API endpoint. It looks a bit like this:

<VirtualHost *:443>
    ServerName
    ProxyPreserveHost On
    ProxyRequests off
    RewriteEngine on

    # External access for Google Assistant
    ProxyPassReverse /external/google/randomstring http://hass-host:8123/api/google_assistant
    RewriteRule ^/external/google/randomstring$ http://hass-host:8123/api/google_assistant?api_password=myapipassword [P]
    RewriteRule ^/external/google/randomstring/auth$ http://hass-host:8123/api/google_assistant/auth?%{QUERY_STRING}&&api_password=myapipassword [P]

    SSLEngine on
    SSLCertificateFile /etc/ssl/
    SSLCertificateKeyFile /etc/ssl/private/
    SSLCertificateChainFile /etc/ssl/lets-encrypt-x3-cross-signed.crt
</VirtualHost>

This locks down the external access to just being the Google Assistant end point, and means that Google have a specific shared secret rather than the full API password. I needed to configure up Home Assistant as well, so configuration.yaml gained:

google_assistant:
  project_id: homeautomation-8fdab
  client_id: oFqHKdawWAOkeiy13rtr5BBstIzN1B7DLhCPok1a6Jtp7rOI2KQwRLZUxSg00rIEib2NG8rWZpH1cW6N
  access_token: l2FrtQyyiJGo8uxPio0hE5KE9ZElAw7JGcWRiWUZYwBhLUpH3VH8cJBk4Ct3OzLwN1Fnw39SR9YArfKq
  agent_user_id:
  api_key: nyAxuFoLcqNIFNXexwe7nfjTu2jmeBbAP8mWvNea
  exposed_domains:
    - light

Setting up Alexa access is more complicated. Amazon Smart Home skills must call an AWS Lambda - the code that services the request is essentially a small service run within Lambda. Home Assistant supports all the appropriate requests, so the Lambda code is a very simple proxy these days. I used Haaska, which has a complete setup guide. You must do all 3 steps - the OAuth provider, the AWS Lambda and the Alexa Skill. Again, I wanted to avoid exposing the full API or the API password, so I forked Haaska to remove the use of a password and instead use a custom URL. I then added the following additional lines to the Apache config above:

# External access for Amazon Alexa
ProxyPassReverse /external/amazon/stringrandom http://hass-host:8123/api/alexa/smart_home
RewriteRule /external/amazon/stringrandom http://hass-host:8123/api/alexa/smart_home?api_password=myapipassword [P]

In the config.json I left the password field blank and set url to the custom proxy URL. The Home Assistant configuration.yaml required less configuration than the Google equivalent:

alexa:
  smart_home:
    filter:
      include_entities:
        - light.dining_room_lights
        - light.living_room_lights
        - light.snug

(I’ve added a few more lights, but more on the exact hardware details of those at another point.)

To enable in Alexa I went to the app on my phone, selected the “Smart Home” menu option, enabled my Home Assistant skill and was able to search for the available devices. I can then yell “Alexa, turn on the snug” and magically the light turns on.

Aside from being more useful (due to the use of the Dot rather than pulling out a phone) the Alexa interface is a bit smoother - the command detection is more reliable (possibly due to the more limited range of options it has to work out?) and adding new devices is a simple rescan. Adding new devices with Google Assistant seems to require unlinking and relinking the whole setup.

The only problem with this setup so far is that it’s only really useful for the room with the Alexa in it. Shouting from the living room in the hope the Dot will hear is a bit hit and miss, and I haven’t yet figured out a good alternative method for controlling the lights there that doesn’t mean using a phone or a tablet device.

Jonathan McDowell Noodles' Emptiness

Syncing with a memory: a unique use of tar –listed-incremental

Planet Debian - Mar, 12/06/2018 - 1:27md

I have a Nextcloud instance that various things automatically upload photos to. These automatic folders sync to a directory on my desktop. I wanted to pull things out of that directory without deleting them, and only once. (My wife might move them out of the directory on her computer, and I might arrange them into targets on my end.)

In other words, I wanted to copy a file from a source to a destination, but remember what had been copied before so it only ever copies once.

rsync doesn’t quite do this. But it turns out that tar’s listed-incremental feature can do exactly that. Ordinarily, it would delete files that were deleted on the source. But if we make the tar file with the incremental option, but extract it without, it doesn’t try to delete anything at extract time.

Here’s my synconce script:

#!/bin/bash

set -e

if [ -z "$3" ]; then
    echo "Syntax: $0 snapshotfile sourcedir destdir"
    exit 5
fi

SNAPFILE="$(realpath "$1")"
SRCDIR="$2"
DESTDIR="$(realpath "$3")"

cd "$SRCDIR"
if [ -e "$SNAPFILE" ]; then
    cp "$SNAPFILE" "${SNAPFILE}.new"
fi
tar "--listed-incremental=${SNAPFILE}.new" -cpf - . | \
    tar -xf - -C "$DESTDIR"
mv "${SNAPFILE}.new" "${SNAPFILE}"

Just have the snapshotfile be outside both the sourcedir and destdir and you’re good to go!
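To see the copy-once behaviour in action, here is a small demonstration run in an empty scratch directory (paths are throwaway; requires GNU tar):

```shell
mkdir -p src dest
echo one > src/a
sleep 1   # make sure the snapshot timestamp is newer than the file
# First run: 'a' is new, so it is copied and recorded in the snapshot.
tar --listed-incremental=snap.new -cpf - -C src . | tar -xf - -C dest
mv snap.new snap
# Simulate the destination side sorting the file away.
rm dest/a
# Second run: 'a' is unchanged since the snapshot, so it is not copied again.
cp snap snap.new
tar --listed-incremental=snap.new -cpf - -C src . | tar -xf - -C dest
mv snap.new snap
ls dest   # 'a' does not reappear
```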

John Goerzen The Changelog

R 3.5.0 on Debian and Ubuntu: An Update

Planet Debian - Mar, 12/06/2018 - 3:27pd

R 3.5.0 was released a few weeks ago. As it changes some (important) internals, packages installed with a previous version of R have to be rebuilt. This was known and expected, and we took several measured steps to get R binaries to everybody without breakage.

The question of but how do I upgrade without breaking my system was asked a few times, e.g., on the r-sig-debian list as well as in this StackOverflow question.


Core Distribution

As usual, we packaged R 3.5.0 as soon as it was released – but only for the experimental distribution, awaiting a green light from the release masters to start the transition. A one-off repository (drr35) was created to provide R 3.5.0 binaries more immediately; this was used, e.g., by the r-base Rocker Project container / the official R Docker container, which we also update after each release.

The actual transition was started last Friday, June 1, and concluded this Friday, June 8. Well over 600 packages have been rebuilt under R 3.5.0, and are now ready in the unstable distribution from which they should migrate to testing soon. The Rocker container r-base was also updated.

So if you use Debian unstable or testing, these are ready now (or will be soon once migrated to testing). This should include most Rocker containers built from Debian images.

Contributed CRAN Binaries

Johannes also provided backports with a -cran35 suffix in his CRAN-mirrored Debian backport repositories; see the README.


Core (Upcoming) Distribution

Ubuntu, for the upcoming 18.10, has undertaken a similar transition. Few users access this release yet, so the next section may be more important.

Contributed CRAN and PPA Binaries

Two new Launchpad PPA repositories were created as well. Given the rather large scope of thousands of packages, multiplied by several Ubuntu releases, this too took a moment but is now fully usable and should get mirrored to CRAN ‘soon’. It covers the most recent and still supported LTS releases as well as the current release 18.04.

One PPA contains base R and the recommended packages, RRutter3.5. This is source of the packages that will soon be available on CRAN. The second PPA (c2d4u3.5) contains over 3,500 packages mainly derived from CRAN Task Views. Details on updates can be found at Michael’s R Ubuntu Blog.

This can be used for, e.g., Travis if you manage your own sources as Dirk’s r-travis does. We expect to use this relatively soon, possibly as an opt-in via a variable which selects the appropriate repository set. It will also be used for Rocker releases built based off Ubuntu.

In both cases, you may need to adjust the sources list for apt accordingly.
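For example, on Ubuntu 18.04 (“bionic”) the adjustment amounts to sources.list entries along these lines (the marutter Launchpad account and exact paths are my assumption; check the README for the canonical lines):

```
deb https://ppa.launchpad.net/marutter/rrutter3.5/ubuntu bionic main
deb https://ppa.launchpad.net/marutter/c2d4u3.5/ubuntu bionic main
```

after which a routine apt-get update / apt-get install r-base picks up the 3.5 builds.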


There may also be ongoing efforts within Arch and other Debian-derived distributions, but we are not really aware of what is happening there. If you use those, and coordination is needed, please feel free to reach out via the r-sig-debian list.


In case of questions or concerns, please consider posting to the r-sig-debian list.

Dirk, Michael and Johannes, June 2018

Dirk Eddelbuettel Thinking inside the box

Reproducible Builds: Weekly report #163

Planet Debian - Hën, 11/06/2018 - 11:45md

Here’s what happened in the Reproducible Builds effort between Sunday June 3 and Saturday June 9 2018:

Development work Upcoming events development

There were a number of changes to our Jenkins-based testing framework that powers, including:

In addition, Mattia Rizzolo has been working on a large refactoring of the Python part of the setup.

Documentation updates Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo, Santiago Torres, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks

Kirigaming – Kolorfill

Planet Debian - Hën, 11/06/2018 - 9:07md

Last time, I was doing a recipe manager. This time I’ve been doing a game with javascript and QtQuick, and for the first time dipping my feet into the Kirigami framework.

I’ve named the game Kolorfill, because it is about filling colors. It looks like this:

The end goal is to make the board a single color in as few steps as possible. The way to do it is a “paint bucket” tool that fills from the top left corner with various colors.
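The core mechanic is a classic flood fill; the game itself is QML/JavaScript, but the idea can be sketched language-neutrally in Python (the function name and board representation are mine, not the game's code):

```python
def flood_fill(board, new_colour):
    """board is a list of lists of colour values; fill from the top left (0, 0)."""
    old = board[0][0]
    if old == new_colour:
        return board
    stack = [(0, 0)]
    while stack:
        r, c = stack.pop()
        # paint connected cells of the old colour, spreading to 4 neighbours
        if 0 <= r < len(board) and 0 <= c < len(board[0]) and board[r][c] == old:
            board[r][c] = new_colour
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return board
```

Each move is one such fill with a chosen colour; the step count is simply the number of moves until the board is one colour.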

But enough talk. Let’s see some code:

And of course, there are some QML tests for the curious.
A major todo item is saving the high score and getting that to work. Patches welcome. Or pointers to QML components that can help me with that.

Sune Vuorela english – Blog :: Sune Vuorela

Google Summer of Code 2018 with Debian - Week 4

Planet Debian - Hën, 11/06/2018 - 8:30md

After working on designs and getting my hands dirty with KIVY for the first 3 weeks, I became comfortable with my development environment and was able to deliver features within a couple of days with UI, tests, and documentation. In this blog, I explain how I converted all my Designs into Code and what I've learned along the way.

The Sign Up

In order to implement the above design in KIVY, the best way is to use kv-lang. It involves writing a kv file which contains the widget tree of the layout, and a lot more. One can learn more about kv-lang from the documentation. To begin with, let us look at the simplest kv file.

BoxLayout:
    Label:
        text: 'Hello'
    Label:
        text: 'World'

KV Language

In KIVY, widgets are used to build the UI. The Widget base class is what is derived to create all other UI elements like layouts, buttons, labels and so on. Indentation is used in kv just like in Python to define children. In our kv file above, we're using BoxLayout, which arranges all its children in either horizontal (the default) or vertical orientation. So both Labels will be oriented horizontally, one after another.

Just like children widgets, one can also set values to properties, like Hello for the text of the first Label in the above code. More information about which properties can be defined for BoxLayout and Label can be found in their API documentation. All that remains is importing this .kv file (say sample.kv) from the module which runs the KIVY app. You might notice that for now Language and Timezone are kept static. The reason is that the language support architecture is yet to be finalized, and both options would require a drop-down list, the design and implementation of which will be handled separately.

In order for me to build the UI following the design, I had to experiment with widgets. When all was done, signup.kv file contained the resultant UI.


Now, the good part is that we have a UI and the user can input data. And the bad part is that the user can input any data! So it's very important to validate whether the user is submitting data in the correct format or not. Specifically for the Sign Up module, I had to validate the Email, Passwords and Full Name submitted by the user. The validation module, which contains classes and methods for what I intended to do, can be found here.

It's important that the user gets feedback after validation if something is wrong with the input. This is done by exchanging the Label's text with an error message, and its color with bleeding red, by calling prompt_error_message on unsuccessful validation.
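A sketch of what such checks can look like (the names and rules here are hypothetical, not the project's actual validation module):

```python
import re

# crude shape check: something@something.tld
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_email(email):
    """True if the (stripped) email looks like name@domain.tld."""
    return bool(EMAIL_RE.match(email.strip()))

def validate_passwords(password, confirm):
    """Require a minimum length and a matching confirmation field."""
    return len(password) >= 8 and password == confirm

def validate_full_name(name):
    """Reject empty or whitespace-only names."""
    return bool(name.strip())
```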

Updating The Database

After successful validation, the Sign Up module steps forward to update the database using the sqlite3 module. But before that, the Email and Full Name are cleaned of any unnecessary whitespace, tabs and newline characters. A universally unique identifier, or uuid, is generated for the user_id. The plain text Password is changed to a sha256 hash string for security. Finally, sqlite3 is integrated to update the database. The SQLite database is stored in a single file named new_contributor_wizard.db. For user information, the table named USERS is created if not present during initialization of an UpdateDB instance. Finally, the information is stored, or an error is returned if the Email already exists. This is how the USERS schema looks:

id VARCHAR(36) PRIMARY KEY,
email UNIQUE,
pass VARCHAR(64),
fullname TEXT,
language TEXT,
timezone TEXT
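Put together, the user-creation flow described above can be sketched like this (the table and columns follow the USERS schema; the function name and argument order are mine):

```python
import hashlib
import sqlite3
import uuid

def create_user(db_path, email, password, fullname, language, timezone):
    """Clean the input, hash the password, and store a new USERS row."""
    user_id = str(uuid.uuid4())                                   # uuid for user_id
    pass_hash = hashlib.sha256(password.encode("utf-8")).hexdigest()  # sha256 hex digest
    conn = sqlite3.connect(db_path)
    with conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS USERS ("
            "id VARCHAR(36) PRIMARY KEY, email UNIQUE, pass VARCHAR(64), "
            "fullname TEXT, language TEXT, timezone TEXT)"
        )
        # raises sqlite3.IntegrityError if the email already exists
        conn.execute(
            "INSERT INTO USERS VALUES (?, ?, ?, ?, ?, ?)",
            (user_id, email.strip(), pass_hash, fullname.strip(), language, timezone),
        )
    conn.close()
    return user_id
```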

After the Database is updated, i.e. on successful account creation, the natural flow is to take the user to the Dashboard screen. In order to keep this feature atomic, integration with the Dashboard will be done once all 3 features (SignUp, SignIn, and Dashboard) are merged. So, in order to showcase a successful sign-up, I've used a text confirmation. Below is a screencast of how the feature looks and what changes it makes in the database.

The Sign In

If you compare the UI of the SignIn module with the SignUp, you might notice a few changes.

  • The New Contributor Wizard is now right-aligned
  • Instead of 2 columns taking user information, here we have just one with Email and Password

Hence, the UI experiences only a little change and the result can be seen in


Just like in the Sign Up module, we are not trusting the user's input to be sane. Hence, we validate whether the user is giving us a correctly formatted Email and Password. The resultant validations of the Sign In module can be seen in

Updating The Database

After successful validation, the next step is cleaning the Email and hashing the Password entered by the user. Here we have two possibilities for an unsuccessful sign-in:

  • Either the Email entered by the user doesn't exist in the database
  • Or the Password entered by the user is not correct

Else, the user is signed in successfully. For the unsuccessful sign-in cases, I have created a module to prompt the error correctly; it contains the database operations for the Sign In module.

The Exceptions of Sign In module contains the Exception classes, and they are defined as:

  • UserError - this class is used to throw an exception when Email doesn't exist
  • PasswordError - this class is used to throw an exception when Password doesn't match the one saved in the database with the corresponding email.
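In sketch form, the two exception classes and the check that raises them might look like this (the dict stands in for the real database lookup; names beyond UserError/PasswordError are mine):

```python
class UserError(Exception):
    """Raised when the Email does not exist in the database."""

class PasswordError(Exception):
    """Raised when the Password hash does not match the stored one."""

def sign_in(stored, email, password_hash):
    # 'stored' maps email -> sha256 password hash (stand-in for the USERS table)
    if email not in stored:
        raise UserError(email)
    if stored[email] != password_hash:
        raise PasswordError(email)
    return True
```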

All these modules are integrated with and the resultant feature can be seen in action in the screencast below. Also, here's the merge request for the same.

The Dashboard

The Dashboard is completely different from the above two modules. If New Contributor Wizard is the culmination of different user stories and interactive screens, then Dashboard is the protagonist of all the other features. A successful SignIn or SignUp will direct the user to the Dashboard. All the tutorials and tools will be available to the user from there on.

The UI

There are 2 segments of the Dashboard screen: one for all the menu options on the left, and another for the tutorials and tools of the selected menu option on the right. So the screen on the right needs to change every time a menu option is selected. KIVY provides a widget named ScreenManager to manage such an issue gracefully. But in order to have control over the transition of just a part of the screen, rather than the entire screen, one has to dig deep into the API and work it out. Here's when I remembered a sentence from the Zen of Python, "Simple is better than complex", and I chose the simple way of changing the screen, i.e. the add/remove widget functions.

I'm overriding the on_touch_down function to check which menu option the user clicks on, and calling enable_menu accordingly.

The menu options on the left are not Button widgets. I had the option of using Button directly, but it would need customization to make it look pretty. Instead, I used BoxLayout and Label to incorporate a button-like feature. In enable_menu I only check which option the user is clicking on, using the touch API. Now, all I have to do is highlight the selected option and unfocus all the other options. The final UI can be seen in dashboard.kv.


Along with highlighting the selected option, the Dashboard also changes the right side to the courseware, i.e. the tools and tutorials for the selected option. To give the application a modular structure, all these options are built as separate modules and then integrated into the Dashboard. Here are all the courseware modules built for the Dashboard:

  • blog - Users will be given tools to create and deploy their blogs and also learn the best practices.
  • cli - Understanding Command Line Interface will be the goal with all the tutorials provided in this module.
  • communication - Communication module will have tutorials for IRC and mailing lists and showcase best communication practices. The tools in this module will help user subscribe to the mailing lists of different open source communities.
  • encryption - Encrypting communication and data will be taught using this module.
  • how_to_use - This would be an introductory module for the user to understand how to use this application.
  • vcs - Version Control Systems like git are important while working on a project, whether personal or with a team and everything in between.
  • way_ahead - This module will help users reach out to different open source communities and organizations. It will also showcase open source projects to the user with respect to their preferences, and information about programs like Google Summer of Code and Outreachy.

Below the menu are the options for settings. These settings also have separate modules, just like the courseware. Specifically, they are described as:

  • application_settings - Would help the user manage settings which are specific to the KIVY application, like resolution.
  • theme_settings - The user can manage theme-related settings, like the color scheme, using this option.
  • profile_settings - Would help the user manage information about themselves.

The merge request which incorporates the Dashboard feature in the project can be seen in action in the screencast below.

The Conclusion

Week 4 was satisfying for me, as I felt like I was adding value to the project with these merge requests. As soon as the merge requests are reviewed and merged into the repository, I'll work on integrating all these features together to create the seamless experience it should be for the user. There are a few necessary modifications to be made to the features, like supporting multiple languages and adding the gradient to the background as seen in the design. I'll create issues on Redmine for these and will work on them as soon as the integration is done. My next task is designing how tutorials and tasks will look in the right segment of the Dashboard.

Shashank Kumar Shanky's Brainchild

Microsoft’s failed attempt on Debian packaging

Planet Debian - Hën, 11/06/2018 - 11:13pd

Just recently Microsoft R Open 3.5 was announced, an open source implementation of R with some improvements. Binaries are available for Windows, Mac, and Linux. I dared to download and play around with the files, only to be shocked at how incompetent Microsoft is at packaging.

From the microsoft-r-open-mro-3.5.0 postinstall script:

#!/bin/bash
#TODO: Avoid hard code VERSION number in all scripts
VERSION=`echo $DPKG_MAINTSCRIPT_PACKAGE | sed 's/[[:alpha:]|(|[:space:]]//g' | sed 's/\-*//' | awk -F. '{print $1 "." $2 "." $3}'`
INSTALL_PREFIX="/opt/microsoft/ropen/${VERSION}"
echo $VERSION
ln -s "${INSTALL_PREFIX}/lib64/R/bin/R" /usr/bin/R
ln -s "${INSTALL_PREFIX}/lib64/R/bin/Rscript" /usr/bin/Rscript
rm /bin/sh
ln -s /bin/bash /bin/sh

First of all, the ln -s calls will not work if the standard R package is installed, but much worse, forcibly relinking /bin/sh to bash is something I didn’t expect to see.

Then, looking at the prerm script, it is getting even more funny:

#!/bin/bash
VERSION=`echo $DPKG_MAINTSCRIPT_PACKAGE | sed 's/[[:alpha:]|(|[:space:]]//g' | sed 's/\-*//' | awk -F. '{print $1 "." $2 "." $3}'`
INSTALL_PREFIX="/opt/microsoft/ropen/${VERSION}/"
rm /usr/bin/R
rm /usr/bin/Rscript
rm -rf "${INSTALL_PREFIX}/lib64/R/backup"

Stop, wait, you are removing /usr/bin/R without even checking that it points to the R you have installed???

I guess Microsoft should read a bit up, in particular about dpkg-divert and proper packaging. What came in here was such an exhibition of incompetence that I can only assume they are doing it on purpose.

PostScriptum: A short look into the man page of dpkg-divert will give a nice example how it should be done.
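For the record, the man page's approach amounts to something like this sketch (the package name is taken from the scripts above; treat the exact invocations as illustrative, not as a tested maintainer script):

```shell
# In the package's preinst: divert the distro's R out of the way, atomically.
dpkg-divert --package microsoft-r-open-mro-3.5.0 \
    --divert /usr/bin/R.distrib --rename --add /usr/bin/R

# In the postinst: now the symlink cannot clobber another package's file.
ln -sf /opt/microsoft/ropen/3.5.0/lib64/R/bin/R /usr/bin/R

# In the postrm: remove the symlink, then restore the diverted original.
rm -f /usr/bin/R
dpkg-divert --package microsoft-r-open-mro-3.5.0 --rename --remove /usr/bin/R
```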

PPS: I first reported these problems in the R Open Forums and later got an answer that they will look into it.

Norbert Preining There and back again

