Progress bar for file descriptors

Planet Debian - Wed, 13/06/2018 - 1:43pm

I ran gzip on an 80GB file; it's processing, but who knows how much it has done yet, or when it will end? I wish gzip had a progressbar. Or MySQL. Or…

Ok. Now every program that reads a file sequentially can have a progressbar:

https://gitlab.com/spanezz/fdprogress

fdprogress

Print progress indicators for programs that read files sequentially.

fdprogress monitors file descriptor offsets and prints progressbars comparing them to file sizes.
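
On Linux, the underlying data is just /proc: each descriptor's current offset is exposed in /proc/PID/fdinfo/FD, and the file it points to is reachable via /proc/PID/fd/FD. A minimal shell sketch of the idea (not fdprogress's actual code; the PID and fd number are invented):

PID=12345; FD=3                      # invented values for illustration
grep '^pos:' /proc/$PID/fdinfo/$FD   # current offset of the descriptor
stat -Lc '%s' /proc/$PID/fd/$FD      # size of the file it points to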

Pattern can be any glob expression.

usage: fdprogress [-h] [--verbose] [--debug] [--pid PID] [pattern]

show progress from file descriptor offsets

positional arguments:
  pattern            file name to monitor

optional arguments:
  -h, --help         show this help message and exit
  --verbose, -v      verbose output
  --debug            debug output
  --pid PID, -p PID  PID of process to monitor

pv

pv has a --watchfd option that does most of what fdprogress is trying to do: use that instead.
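
For example, something along these lines attaches a progressbar to an already-running job (file name invented):

gzip huge-file.img &        # a long-running sequential reader
pv --watchfd $!             # watch the file descriptors of that PID
# pv --watchfd 12345:3      # or watch one specific descriptor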

fivi

fivi also exists, with specific features to show progressbars for filter commands.

Enrico Zini http://www.enricozini.org/tags/pdo/ Enrico Zini: pdo

Hooking up Home Assistant to Alexa + Google Assistant

Planet Debian - Tue, 12/06/2018 - 10:21pm

I have an Echo Dot. Actually I have two; one in my study and one in the dining room. Mostly we yell at Alexa to play us music; occasionally I ask her to set a timer, tell me what time it is or tell me the news. Having set up Home Assistant, it seemed reasonable to try and enable control of the light in the dining room via Alexa.

Perversely I started with Google Assistant, even though I only have access to it via my phone. Why? Because the setup process was a lot easier. There are a bunch of hoops to jump through that are documented on the Google Assistant component page, but essentially you create a new home automation component in the Actions on Google interface, connect it with the Google OAuth stuff for account linking, and open up your Home Assistant instance to the big bad internet so Google can connect.

This final step is where I differed from the provided setup. My instance is accessible internally at home, but I haven’t wanted to expose it externally yet (and I suspect I never will, but will instead have the ability to VPN back in to access it or similar). The default instructions need you to open up API access publicly, and configure up Google with your API password, which allows access to everything. I’d rather not.

So, firstly I configured up my external host with an Apache instance and a Let’s Encrypt cert (luckily I have a static IP, so this was actually the base host that the Home Assistant container runs on). Rather than using this to proxy the entire Home Assistant setup I created a unique /external/google/randomstring proxy just for the Google Assistant API endpoint. It looks a bit like this:

<VirtualHost *:443>
    ServerName my.external.host

    ProxyPreserveHost On
    ProxyRequests off

    RewriteEngine on

    # External access for Google Assistant
    ProxyPassReverse /external/google/randomstring http://hass-host:8123/api/google_assistant
    RewriteRule ^/external/google/randomstring$ http://hass-host:8123/api/google_assistant?api_password=myapipassword [P]
    RewriteRule ^/external/google/randomstring/auth$ http://hass-host:8123/api/google_assistant/auth?%{QUERY_STRING}&&api_password=myapipassword [P]

    SSLEngine on
    SSLCertificateFile /etc/ssl/my.external.host.crt
    SSLCertificateKeyFile /etc/ssl/private/my.external.host.key
    SSLCertificateChainFile /etc/ssl/lets-encrypt-x3-cross-signed.crt
</VirtualHost>

This locks down the external access to just being the Google Assistant end point, and means that Google have a specific shared secret rather than the full API password. I needed to configure up Home Assistant as well, so configuration.yaml gained:

google_assistant:
  project_id: homeautomation-8fdab
  client_id: oFqHKdawWAOkeiy13rtr5BBstIzN1B7DLhCPok1a6Jtp7rOI2KQwRLZUxSg00rIEib2NG8rWZpH1cW6N
  access_token: l2FrtQyyiJGo8uxPio0hE5KE9ZElAw7JGcWRiWUZYwBhLUpH3VH8cJBk4Ct3OzLwN1Fnw39SR9YArfKq
  agent_user_id: noodles@earth.li
  api_key: nyAxuFoLcqNIFNXexwe7nfjTu2jmeBbAP8mWvNea
  exposed_domains:
    - light

Setting up Alexa access is more complicated. Amazon Smart Home skills must call an AWS Lambda - the code that services the request is essentially a small service run within Lambda. Home Assistant supports all the appropriate requests, so the Lambda code is a very simple proxy these days. I used Haaska, which has a complete setup guide. You must do all 3 steps - the OAuth provider, the AWS Lambda and the Alexa Skill. Again, I wanted to avoid exposing the full API or the API password, so I forked Haaska to remove the use of a password and instead use a custom URL. I then added the following additional lines to the Apache config above:

# External access for Amazon Alexa
ProxyPassReverse /external/amazon/stringrandom http://hass-host:8123/api/alexa/smart_home
RewriteRule /external/amazon/stringrandom http://hass-host:8123/api/alexa/smart_home?api_password=myapipassword [P]

In the config.json I left the password field blank and set url to https://my.external.host/external/amazon/stringrandom. configuration.yaml required less configuration than the Google equivalent:

alexa:
  smart_home:
    filter:
      include_entities:
        - light.dining_room_lights
        - light.living_room_lights
        - light.kitchen
        - light.snug

(I’ve added a few more lights, but more on the exact hardware details of those at another point.)

To enable in Alexa I went to the app on my phone, selected the “Smart Home” menu option, enabled my Home Assistant skill and was able to search for the available devices. I can then yell “Alexa, turn on the snug” and magically the light turns on.

Aside from being more useful (due to the use of the Dot rather than pulling out a phone) the Alexa interface is a bit smoother - the command detection is more reliable (possibly due to the more limited range of options it has to work out?) and adding new devices is a simple rescan. Adding new devices with Google Assistant seems to require unlinking and relinking the whole setup.

The only problem with this setup so far is that it’s only really useful for the room with the Alexa in it. Shouting from the living room in the hope the Dot will hear is a bit hit and miss, and I haven’t yet figured out a good alternative method for controlling the lights there that doesn’t mean using a phone or a tablet device.

Jonathan McDowell https://www.earth.li/~noodles/blog/ Noodles' Emptiness

Syncing with a memory: a unique use of tar --listed-incremental

Planet Debian - Tue, 12/06/2018 - 1:27pm

I have a Nextcloud instance that various things automatically upload photos to. These automatic folders sync to a directory on my desktop. I wanted to pull things out of that directory without deleting them, and only once. (My wife might move them out of the directory on her computer, and I might arrange them into targets on my end.)

In other words, I wanted to copy a file from a source to a destination, but remember what had been copied before so it only ever copies once.

rsync doesn’t quite do this. But it turns out that tar’s --listed-incremental feature can do exactly that. Ordinarily, it would delete files that were deleted on the source. But if we make the tar file with the incremental option, but extract it without, it doesn’t try to delete anything at extract time.

Here’s my synconce script:

#!/bin/bash

set -e

if [ -z "$3" ]; then
    echo "Syntax: $0 snapshotfile sourcedir destdir"
    exit 5
fi

SNAPFILE="$(realpath "$1")"
SRCDIR="$2"
DESTDIR="$(realpath "$3")"

cd "$SRCDIR"
if [ -e "$SNAPFILE" ]; then
    cp "$SNAPFILE" "${SNAPFILE}.new"
fi

tar "--listed-incremental=${SNAPFILE}.new" -cpf - . | \
    tar -xf - -C "$DESTDIR"
mv "${SNAPFILE}.new" "${SNAPFILE}"

Just have the snapshotfile be outside both the sourcedir and destdir and you’re good to go!
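
A hypothetical invocation (all three paths invented) could then be run by hand or from cron:

# snapshot file outside both trees, then source, then destination
./synconce ~/.synconce/photos.snar ~/Nextcloud/InstantUpload ~/Pictures/incoming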

John Goerzen http://changelog.complete.org The Changelog

R 3.5.0 on Debian and Ubuntu: An Update

Planet Debian - Tue, 12/06/2018 - 3:27am
Overview

R 3.5.0 was released a few weeks ago. As it changes some (important) internals, packages installed with a previous version of R have to be rebuilt. This was known and expected, and we took several measured steps to get R binaries to everybody without breakage.

The question of “but how do I upgrade without breaking my system?” was asked a few times, e.g., on the r-sig-debian list as well as in this StackOverflow question.

Debian

Core Distribution

As usual, we packaged R 3.5.0 as soon as it was released – but only for the experimental distribution, awaiting a green light from the release masters to start the transition. A one-off repository drr35 (https://github.com/eddelbuettel/drr35) was created to provide R 3.5.0 binaries more immediately; this was used, e.g., by the r-base Rocker Project container / the official R Docker container, which we also update after each release.

The actual transition was started last Friday, June 1, and concluded this Friday, June 8. Well over 600 packages have been rebuilt under R 3.5.0, and are now ready in the unstable distribution from which they should migrate to testing soon. The Rocker container r-base was also updated.

So if you use Debian unstable or testing, these are ready now (or will be soon once migrated to testing). This should include most Rocker containers built from Debian images.

Contributed CRAN Binaries

Johannes also provided backports with a -cran35 suffix in his CRAN-mirrored Debian backport repositories; see the README.

Ubuntu

Core (Upcoming) Distribution

Ubuntu, for the upcoming 18.10, has undertaken a similar transition. Few users access this release yet, so the next section may be more important.

Contributed CRAN and PPA Binaries

Two new Launchpad PPA repositories were created as well. Given the rather large scope of thousands of packages, multiplied by several Ubuntu releases, this too took a moment, but it is now fully usable and should get mirrored to CRAN ‘soon’. It covers the most recent and still-supported LTS releases as well as the current release, 18.04.

One PPA contains base R and the recommended packages, RRutter3.5. This is the source of the packages that will soon be available on CRAN. The second PPA (c2d4u3.5) contains over 3,500 packages mainly derived from CRAN Task Views. Details on updates can be found at Michael’s R Ubuntu Blog.

This can be used for, e.g., Travis if you manage your own sources as Dirk’s r-travis does. We expect to use this relatively soon, possibly as an opt-in via a variable upon which run.sh selects the appropriate repository set. It will also be used for Rocker releases based off Ubuntu.

In both cases, you may need to adjust the sources list for apt accordingly.
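
On Ubuntu that adjustment can be done with add-apt-repository; the PPA paths below are an assumption based on the RRutter3.5 and c2d4u3.5 names above:

sudo add-apt-repository ppa:marutter/rrutter3.5   # base R and recommended packages
sudo add-apt-repository ppa:marutter/c2d4u3.5     # ~3,500 packages from CRAN Task Views
sudo apt-get update                               # refresh the package index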

Others

There may also be ongoing efforts within Arch and other Debian-derived distributions, but we are not really aware of what is happening there. If you use those, and coordination is needed, please feel free to reach out via the r-sig-debian list.

Closing

In case of questions or concerns, please consider posting to the r-sig-debian list.

Dirk, Michael and Johannes, June 2018

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Reproducible Builds: Weekly report #163

Planet Debian - Mon, 11/06/2018 - 11:45pm

Here’s what happened in the Reproducible Builds effort between Sunday June 3 and Saturday June 9 2018:

Development work

Upcoming events

tests.reproducible-builds.org development

There were a number of changes to our Jenkins-based testing framework that powers tests.reproducible-builds.org, including:

In addition, Mattia Rizzolo has been working on a large refactor of the Python part of the setup.

Documentation updates

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo, Santiago Torres, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible-builds.org/blog/ reproducible-builds.org

Kirigaming – Kolorfill

Planet Debian - Mon, 11/06/2018 - 9:07pm

Last time, I was doing a recipe manager. This time I’ve been doing a game with JavaScript and QtQuick, and for the first time dipping my feet into the Kirigami framework.

I’ve named the game Kolorfill, because it is about filling colors. It looks like this:

The end goal is to make the board into one color in as few steps as possible. The way to do it is a “paint bucket” tool that fills from the top left corner with various colors.

But enough talk. Let’s see some code:
https://cgit.kde.org/scratch/sune/kolorfill.git/

And of course, there are some QML tests for the curious.
A major todo item is saving the high score and getting that to work. Patches welcome. Or pointers to which QML components can help me with that.

Sune Vuorela http://pusling.com/blog english – Blog :: Sune Vuorela

Google Summer of Code 2018 with Debian - Week 4

Planet Debian - Mon, 11/06/2018 - 8:30pm

After working on designs and getting my hands dirty with KIVY for the first 3 weeks, I became comfortable with my development environment and was able to deliver features within a couple of days with UI, tests, and documentation. In this blog, I explain how I converted all my Designs into Code and what I've learned along the way.

The Sign Up

In order to implement the above design in KIVY, the best way is to use kv-lang. It involves writing a kv file which contains the widget tree of the layout and a lot more. One can learn more about kv-lang from the documentation. To begin with, let us look at the simplest kv file.

BoxLayout:
    Label:
        text: 'Hello'
    Label:
        text: 'World'

KV Language

In KIVY, widgets are used to build the UI. The Widget base class is what is derived to create all other UI elements in KIVY, like layouts, buttons, and labels. Indentation is used in kv just like in Python to define children. In our kv file above, we're using BoxLayout, which allows us to arrange all its children in either horizontal (by default) or vertical orientation. So, both Labels will be oriented horizontally, one after another.

Just like child widgets, one can also set values to properties, like Hello to the text of the first Label in the code above. More information about which properties can be defined for BoxLayout and Label can be found in their API documentation. All that remains is importing this .kv file (say, sample.kv) from the module which runs the KIVY app. You might notice that, for now, Language and Timezone are kept static. The reason is that the Language support architecture is yet to be finalized, and both options would require a drop-down list, whose design and implementation will be handled separately.

In order to build the UI following the design, I had to experiment with widgets. When all was done, the signup.kv file contained the resultant UI.

Validations

Now, the good part is that we have a UI and the user can input data. The bad part is that the user can input any data! So, it's very important to validate whether the user is submitting data in the correct format or not. Specifically for the Sign Up module, I had to validate the Email, Passwords and Full Name submitted by the user. The validation module can be found here, which contains classes and methods for what I intended to do.

It's important that the user gets feedback after validation if something is wrong with the input. This is done by swapping the Label's text with an error message, and its color with bleeding red, by calling prompt_error_message on unsuccessful validation.

Updating The Database

After successful validation, the Sign Up module steps forward to update the database using the sqlite3 module. But before that, the Email and Full Name are cleaned of any unnecessary whitespace, tab and newline characters. A universally unique identifier, or uuid, is generated for the user_id. The plain-text Password is changed to a sha256 hash string for security. Finally, sqlite3 is integrated into updatedb.py to update the database. The SQLite database is stored in a single file named new_contributor_wizard.db. For user information, the table named USERS is created, if not present, during initialization of an UpdateDB instance. Finally, the information is stored, or an error is returned if the Email already exists. This is how the USERS schema looks:

id VARCHAR(36) PRIMARY KEY,
email UNIQUE,
pass VARCHAR(64),
fullname TEXT,
language TEXT,
timezone TEXT
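
For illustration, the same table and the transformations described above can be reproduced from a shell; this is a sketch, not the project's code:

# create the USERS table in the application's database file
sqlite3 new_contributor_wizard.db 'CREATE TABLE IF NOT EXISTS USERS (
    id VARCHAR(36) PRIMARY KEY, email UNIQUE, pass VARCHAR(64),
    fullname TEXT, language TEXT, timezone TEXT);'

uuidgen                              # a uuid such as the one stored in id
printf '%s' 'hunter2' | sha256sum    # a 64-character hash such as the one stored in pass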

After the database is updated, i.e. the user's account is successfully created, the natural flow is to take the user to the Dashboard screen. In order to keep this feature atomic, integration with the Dashboard will be done once all 3 features (SignUp, SignIn, and Dashboard) are merged. So, in order to showcase a successful sign-up, I've used text confirmation. Below is the screencast of how the feature looks and what changes it makes in the database.

The Sign In

If you look at the UI of the SignIn module in comparison with the SignUp, you might notice a few changes.

  • The New Contributor Wizard is now right-aligned
  • Instead of 2 columns taking user information, here we have just one with Email and Password

Hence, the UI needs only a little change, and the result can be seen in signin.py.

Validations

Just like in the Sign Up module, we are not trusting the user's input to be sane. Hence, we validate whether the user is giving us a well-formatted Email and Password. The resultant validations of the Sign In module can be seen in validations.py.

Updating The Database

After successful validation, the next step is cleaning the Email and hashing the Password entered by the user. Here we have two possibilities for an unsuccessful sign-in:

  • Either the Email entered by the user doesn't exist in the database
  • Or the Password entered by the user is not correct

Otherwise, the user is signed in successfully. For unsuccessful sign-ins, I have created an exceptions.py module to prompt the error correctly. updatedb.py contains the database operations for the Sign In module.

The Exceptions

The exceptions.py of Sign In contains Exception classes, defined as:

  • UserError - this class is used to throw an exception when Email doesn't exist
  • PasswordError - this class is used to throw an exception when Password doesn't match the one saved in the database with the corresponding email.

All these modules are integrated with signin.py and the resultant feature can be seen in action in the screencast below. Also, here's the merge request for the same.

The Dashboard

The Dashboard is completely different from the above two modules. If the New Contributor Wizard is the culmination of different user stories and interactive screens, then the Dashboard is the protagonist of all the other features. A successful SignIn or SignUp will direct the user to the Dashboard. All the tutorials and tools will be available to the user henceforth.

The UI

There are 2 segments of the Dashboard screen: one for all the menu options on the left, and another for the tutorials and tools of the selected menu option on the right. So, the screen on the right needs to change every time a menu option is selected. KIVY provides a widget named ScreenManager to manage such an issue gracefully. But in order to control the transition of just a part of the screen, rather than the entire screen, one has to dig deep into the API and work it out. That's when I remembered a sentence from the Zen of Python, "Simple is better than complex", and I chose the simple way of changing the screen, i.e. the add/remove widget functions.

In dashboard.py, I'm overriding the on_touch_down function to check which menu option the user clicks on and calling enable_menu accordingly.

The menu options on the left are not Button widgets. I had the option of using Button directly, but it would have needed customization to look pretty. Instead, I used BoxLayout and Label to build a button-like feature. In enable_menu I only check which option the user is clicking on, using the touch API. Then, all I have to do is highlight the selected option and unfocus all the other options. The final UI can be seen here in dashboard.kv.

Courseware

Along with highlighting the selected option, the Dashboard also changes the courseware, i.e. the tools and tutorials for the selected option, on the right. To give the application a modular structure, all these options are built as separate modules and then integrated into the Dashboard. Here are all the courseware modules built for the Dashboard:

  • blog - Users will be given tools to create and deploy their blogs and also learn the best practices.
  • cli - Understanding Command Line Interface will be the goal with all the tutorials provided in this module.
  • communication - Communication module will have tutorials for IRC and mailing lists and showcase best communication practices. The tools in this module will help user subscribe to the mailing lists of different open source communities.
  • encryption - Encrypting communication and data will be taught using this module.
  • how_to_use - This would be an introductory module for the user, to understand how to use this application.
  • vcs - Version control systems like git are important while working on any project, whether personal, with a team, or anything in between.
  • way_ahead - This module will help users reach out to different open source communities and organizations. It will also showcase open source projects to the user according to their preferences, and information about programs like Google Summer of Code and Outreachy.
Settings

Below the menu are the options for settings. These settings also have separate modules, just like the courseware. Specifically, they are described as:

  • application_settings - Would help the user manage settings which are specific to the KIVY application, like resolution.
  • theme_settings - The user can manage theme-related settings, like the color scheme, using this option.
  • profile_settings - Would help the user manage information about themselves.

The merge request which incorporates the Dashboard feature in the project can be seen in action in the screencast below.

The Conclusion

Week 4 was satisfying for me, as I felt I was adding value to the project with these merge requests. As soon as the merge requests are reviewed and merged in the repository, I'll work on integrating all these features together to create a seamless experience, as it should be for the user. There are a few necessary modifications to be made to the features, like supporting multiple languages and adding the gradient to the background as seen in the design. I'll create issues on redmine for the same and will work on them as soon as integration is done. My next task will be designing how tutorials and tasks look in the right segment of the Dashboard.

Shashank Kumar https://blog.shanky.xyz/ Shanky's Brainchild

Microsoft’s failed attempt on Debian packaging

Planet Debian - Mon, 11/06/2018 - 11:13am

Just recently Microsoft Open R 3.5 was announced, as an open source implementation of R with some improvements. Binaries are available for Windows, Mac, and Linux. I dared to download and play around with the files, only to be shocked at how incompetent Microsoft is at packaging.

From the microsoft-r-open-mro-3.5.0 postinstall script:

#!/bin/bash
#TODO: Avoid hard code VERSION number in all scripts
VERSION=`echo $DPKG_MAINTSCRIPT_PACKAGE | sed 's/[[:alpha:]|(|[:space:]]//g' | sed 's/\-*//' | awk -F. '{print $1 "." $2 "." $3}'`
INSTALL_PREFIX="/opt/microsoft/ropen/${VERSION}"
echo $VERSION
ln -s "${INSTALL_PREFIX}/lib64/R/bin/R" /usr/bin/R
ln -s "${INSTALL_PREFIX}/lib64/R/bin/Rscript" /usr/bin/Rscript
rm /bin/sh
ln -s /bin/bash /bin/sh

First of all, the ln -s will not work in case the standard R package is installed, but much worse, forcibly relinking /bin/sh to bash is something I didn’t expect to see.

Then, looking at the prerm script, it is getting even more funny:

#!/bin/bash
VERSION=`echo $DPKG_MAINTSCRIPT_PACKAGE | sed 's/[[:alpha:]|(|[:space:]]//g' | sed 's/\-*//' | awk -F. '{print $1 "." $2 "." $3}'`
INSTALL_PREFIX="/opt/microsoft/ropen/${VERSION}/"
rm /usr/bin/R
rm /usr/bin/Rscript
rm -rf "${INSTALL_PREFIX}/lib64/R/backup"

Stop, wait, you are removing /usr/bin/R without even checking that it points to the R you have installed???

I guess Microsoft should read up a bit, in particular about dpkg-divert and proper packaging. What came in here was such an exhibition of incompetence that I can only assume they are doing it on purpose.

PostScriptum: A short look into the man page of dpkg-divert will give a nice example of how it should be done.
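
For the record, the usual pattern (an untested sketch along the lines of the man page example) diverts the distribution's file in preinst and undoes it in postrm:

# preinst: move any existing /usr/bin/R aside instead of clobbering it
dpkg-divert --package microsoft-r-open-mro-3.5.0 --add --rename \
    --divert /usr/bin/R.distrib /usr/bin/R

# postrm: restore the original file when the package is removed
dpkg-divert --package microsoft-r-open-mro-3.5.0 --remove --rename \
    --divert /usr/bin/R.distrib /usr/bin/R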

PPS: I first reported these problems in the R Open Forums and later got an answer that they will look into it.

Norbert Preining https://www.preining.info/blog There and back again

Running Digikam inside Docker

Planet Debian - Mon, 11/06/2018 - 9:35am

After my recent complaint about AppImage, I thought I’d describe how I solved my problem. I needed a small patch to Digikam, which was already in Debian’s 5.9.0 package, and the thought of rebuilding the AppImage was… unpleasant.

I thought – why not just run it inside Buster in Docker? There are various sources on the Internet for X11 apps in Docker. It took a little twiddling to make it work, but I did.

My Dockerfile was pretty simple:

FROM debian:buster
MAINTAINER John Goerzen
RUN apt-get update && \
    apt-get -yu dist-upgrade && \
    apt-get --install-recommends -y install firefox-esr digikam digikam-doc \
      ffmpegthumbs imagemagick minidlna hugin enblend enfuse minidlna pulseaudio \
      strace xterm less breeze && \
    apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN adduser --disabled-password --uid 1000 --gecos "John Goerzen" jgoerzen && \
    rm -r /home/jgoerzen/.[a-z]*
RUN rm /etc/machine-id
CMD /usr/bin/docker
RUN mkdir -p /nfs/personalmedia /run/user/1000 && chown -R jgoerzen:jgoerzen /nfs /run/user/1000

I basically create the container and my account in it.
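
Building the image is the standard step, with the tag matching the run script below:

docker build -t jgoerzen/digikam .   # run from the directory containing the Dockerfile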

Then this script starts up Digikam:

#!/bin/bash

set -e

# This will be unnecessary with docker 18.04 theoretically.... --privileged see
# https://stackoverflow.com/questions/48995826/which-capabilities-are-needed-for-statx-to-stop-giving-eperm
# and https://bugs.launchpad.net/ubuntu/+source/docker.io/+bug/1755250
docker run -ti \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v "/run/user/1000/pulse:/run/user/1000/pulse" \
    -v /etc/machine-id:/etc/machine-id \
    -v /etc/localtime:/etc/localtime \
    -v /dev/shm:/dev/shm \
    -v /var/lib/dbus:/var/lib/dbus \
    -v /var/run/dbus:/var/run/dbus \
    -v /run/user/1000/bus:/run/user/1000/bus \
    -v "$HOME:$HOME" \
    -v "/nfs/personalmedia/Pictures:/nfs/personalmedia/Pictures" \
    -e DISPLAY="$DISPLAY" \
    -e XDG_RUNTIME_DIR="$XDG_RUNTIME_DIR" \
    -e DBUS_SESSION_BUS_ADDRESS="$DBUS_SESSION_BUS_ADDRESS" \
    -e LANG="$LANG" \
    --user "$USER" \
    --hostname=digikam \
    --name=digikam \
    --privileged \
    --rm \
    jgoerzen/digikam "$@" /usr/bin/digikam

The goal here was not total security isolation; if it had been, then all the dbus and $HOME mounting would have been a poor idea. But as an alternative to AppImage — well, it worked perfectly. I could even get security updates if I wanted.

John Goerzen http://changelog.complete.org The Changelog

Weblate 3.0.1

Planet Debian - Sun, 10/06/2018 - 10:15pm

Weblate 3.0.1 has been released today. It contains several bug fixes, most importantly a possible migration issue on users when migrating from 2.20. There was no data corruption; just some of the foreign keys were possibly not properly migrated. Upgrading from 3.0 to 3.0.1 will fix this, as will going directly from 2.20 to 3.0.1.

Full list of changes:

  • Fixed possible migration issue from 2.20.
  • Localization updates.
  • Removed obsolete hook examples.
  • Improved caching documentation.
  • Fixed displaying of admin documentation.
  • Improved handling of long language names.

If you are upgrading from an older version, please follow our upgrading instructions; the upgrade is more complex this time.
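
As a rough sketch only (assuming a pip-based install; the official upgrading instructions remain authoritative and cover the 2.20-to-3.x specifics):

pip install --upgrade Weblate==3.0.1   # fetch the new release
./manage.py migrate                    # apply the corrected database migrations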

You can find more information about Weblate on https://weblate.org, the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used on https://hosted.weblate.org/ as an official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Michal Čihař https://blog.cihar.com/archives/debian/ Michal Čihař's Weblog, posts tagged by Debian

RcppZiggurat 0.1.5

Planet Debian - Sun, 10/06/2018 - 8:27pm

A maintenance release 0.1.5 of RcppZiggurat is now on the CRAN network for R.

The RcppZiggurat package updates the code for the Ziggurat generator which provides very fast draws from a Normal distribution. The package provides a simple C++ wrapper class for the generator improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure where Ziggurat from this package dominates accessing the implementations from the GSL, QuantLib and Gretl—all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).

Per a request from CRAN, we changed the vignette to accommodate pandoc 2.* just as we did with the most recent pinp release two days ago. No other changes were made. Other changes that have been pending are a minor rewrite of DOIs in DESCRIPTION, a corrected state setter thanks to a PR by Ralf Stubner, and a tweak for function registration to have user_norm_rand() visible.

The NEWS file entry below lists all changes.

Changes in version 0.1.5 (2018-06-10)
  • Description rewritten using doi for references.

  • Re-setting the Ziggurat generator seed now correctly re-sets state (Ralf Stubner in #7 fixing #3)

  • Dynamic registration reverts to manual mode so that user_norm_rand() is visible as well (#7).

  • The vignette was updated to accommodate pandoc 2* [CRAN request].

Courtesy of CRANberries, there is also a diffstat report for the most recent release. More information is on the RcppZiggurat page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

RcppGSL 0.3.6

Planet Debian - Sun, 10/06/2018 - 8:20pm

A maintenance update 0.3.6 of RcppGSL is now on CRAN. The RcppGSL package provides an interface from R to the GNU GSL using the Rcpp package.

Per a request from CRAN, we changed the vignette to accommodate pandoc 2.* just as we did with the most recent pinp release two days ago. No other changes were made. The (this time really boring) NEWS file entry follows:

Changes in version 0.3.6 (2018-06-10)
  • The vignette was updated to accommodate pandoc 2* [CRAN request].

Courtesy of CRANberries, a summary of changes to the most recent release is available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

RcppClassic 0.9.10

Planet Debian - Sun, 10/06/2018 - 6:36pm

A maintenance release RcppClassic 0.9.10 is now at CRAN. This package provides a maintained version of the otherwise deprecated first Rcpp API; no new projects should use it.

Per a request from CRAN, we changed the vignette to accommodate pandoc 2.* just as we did with the most recent pinp release two days ago. No other changes were made.

CRANberries also reports the changes relative to the previous release.

Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Debian LTS work, May 2018

Planet Debian - Sun, 10/06/2018 - 5:05pm

I was assigned 15 hours of work by Freexian's Debian LTS initiative and worked all those hours.

I uploaded the pending changes to linux at the beginning of the month, one of which had been embargoed. I prepared and released another update to the Linux 3.2 longterm stable branch (3.2.102). I then made a final upload of linux based on that.

Ben Hutchings https://www.decadent.org.uk/ben/blog Better living through software

Please stop making the library situation worse with attempts to fix it

Planet Debian - Sun, 10/06/2018 - 10:31am

I recently had a simple-sounding desire. I would like to run the latest stable version of Digikam. My desktop, however, runs Debian stable, which has 5.3.0, not 5.9.0.

This is not such a simple proposition.


$ ldd /usr/bin/digikam | wc -l
396

And many of those were required at versions that weren’t in stable.

I had long thought that AppImage was a rather bad idea, but I decided to give it a shot. I realized it was worse than I had thought.

The problems with AppImage

About a year ago, I wrote about the problems with Docker security. I go into much more detail there, but the summary for AppImage is quite similar. How can I trust that all the components in the (for instance) Digikam AppImage are being kept secure? Are they using the latest libssl and libpng, to avoid security issues? How will I get notified of a security update? (There seems to be no mechanism for this right now.) An AppImage user that wants to be secure has to manually answer every one of those questions for every application. Ugh.

Nevertheless, the call of better facial detection beckoned, and I downloaded the Digikam AppImage and gave it a whirl. The darn thing actually fired up. But when it would play videos, there was no sound. Hmmmm.

I found errors like this:

Cannot access file ././/share/alsa/alsa.conf

Nasty. I spent quite some time trying to make ALSA work, before a bunch of experimentation showed that if I ran alsoft-conf on the host, and selected only the PulseAudio backend, then it would work. I reported this bug to Digikam.

Then I thought it was working — until I tried to upload some photos. It turns out that SSL support in Qt in the AppImage was broken, since it was trying to dlopen an incompatible version of libssl or libcrypto on the host. More details are in the bug I reported about this also.

These are just two examples. In the rather extensive Googling I did about these problems, I came across issue after issue people had with running Digikam in an AppImage. These issues are not limited to the ALSA and SSL issues I describe here. And they are not occurring due to some lack of skill on the part of Digikam developers.

Rather, they’re occurring because AppImage packaging for a complex package like this is hard. It’s hard because it’s based on a fiction — the fiction that it’s possible to make an AppImage container for a complex desktop application act exactly the same, when the host environment is not exactly the same. Does the host run PulseAudio or ALSA? Where are its libraries stored? How do you talk to dbus?

And it’s not for lack of trying. The scripts to build the Digikam AppImage run to over 1000 lines of code in the AppImage directory, plus another 1300 lines of code (at least) in CMake files that handle much of the work, and another 3000 lines or so of patches to 3rd-party packages. That’s over 5000 lines of code! By contrast, the Debian packaging for the same version of Digikam, including Debian patches but excluding the changelog and copyright files, amounts to 517 lines. Of course, it is reusing OS packages for the dependencies that were already built, but that amounts to a much simpler build.

Frankly I don’t believe that AppImage really lives up to its hype. Requiring reinventing a build system and making some dangerous concessions on security for something that doesn’t really work in the end — not good in my book.

The library problem

But of course, AppImage exists for a reason. That reason is that it’s a real pain to deal with so many levels of dependencies in software. Even if we were to compile from source like in the old days, and even if it was compatible with the versions of the dependencies in my OS, that’s still a lot of work. And if I have to build dependencies from source, then I’ve given up automated updates that way too.

There’s a lot of good that ELF has brought us, but I can’t help but think that it wasn’t really designed for a world in which a program links 396 libraries (plus dlopens a few more). Further, this world isn’t the corporate Unix world of the 80s; Open Source developers aren’t big on maintaining backwards compatibility (heck, the KDE and Qt libraries under digikam have both been entirely rewritten in incompatible ways more than once!). The farther you get from libc, the less people seem to care about backwards compatibility. And really, who can blame volunteers? You want to work on new stuff, not supporting binaries from 5 years ago, right?

I don’t really know what the solution is here. Build-from-source approaches like FreeBSD and Gentoo have plenty of drawbacks too. Is there some grand solution I’m missing? Some effort to improve this situation without throwing out all the security benefits that individually-packaged libraries give us in distros like Debian?

John Goerzen http://changelog.complete.org The Changelog

Hacker Noir developments

Planet Debian - Sat, 09/06/2018 - 8:47pm

I've been slowly writing on my would-be novel, Hacker Noir. See also my Patreon post. I've just pushed out a new public chapter, Assault, to the website, and a patron-only chapter to Patreon: "Ambush", where the Team is ambushed, and then something bad happens.

The Assault chapter was hard to write. It's based on something that happened to me earlier this year. The Ambush chapter was much more fun.

Lars Wirzenius' blog http://blog.liw.fi/englishfeed/ englishfeed

New chapter of Hacker Noir on Patreon

Planet Debian - Sat, 09/06/2018 - 8:45pm

For the 2016 NaNoWriMo I started writing a novel about software development, "Hacker Noir". I didn't finish it during that November, and I still haven't finished it. I had a year-long hiatus, due to work and life being stressful, when I didn't write on the novel at all. However, inspired by both the Doctorow method and the Seinfeld method, I have recently started writing again.

I've just published a new chapter. However, unlike last year, I'm publishing it on my Patreon only, for the first month, and only for patrons. Then, next month, I'll be putting that chapter on the book's public site (noir.liw.fi), and another new chapter on Patreon.

I don't expect to make a lot of money, but I am hoping having active supporters will motivate me to keep writing.

I'm writing the first draft of the book. It's likely to be as horrific as every first-time author's first draft is. If you'd like to read it as raw as it gets, please do. Once the first draft is finished, I expect to read it myself, and be horrified, and throw it all away, and start over.

Also, I should go get some training on marketing.

Lars Wirzenius' blog http://blog.liw.fi/englishfeed/ englishfeed

RcppDE 0.1.6

Planet Debian - Sat, 09/06/2018 - 7:11pm

Another maintenance release, now at version 0.1.6, of our RcppDE package is now on CRAN. It follows the most recent (unblogged, my bad) 0.1.5 release in January 2016 and the 0.1.4 release in September 2015.

RcppDE is a "port" of DEoptim, a popular package for derivative-free optimisation using differential evolution optimization, to C++. By using RcppArmadillo, the code becomes a lot shorter and more legible. Our other main contribution is to leverage some of the excellence we get for free from using Rcpp, in particular the ability to optimise user-supplied compiled objective functions which can make things a lot faster than repeatedly evaluating interpreted objective functions as DEoptim (and, in fairness, just like most other optimisers) does.

That is also what led to this upload: Kyle Baron noticed an issue when nesting a user-supplied compiled function inside a user-supplied compiled objective function -- and when using the newest Rcpp. This has to do with some cleanups we made for how RNG state is, or is not, set and preserved. Kevin Ushey was (once again) a real trooper here and added a simple class to Rcpp (in what is now the development version 0.12.17.2 available on the Rcpp drat repo) and used that here to (selectively) restore behaviour similar to what we had in Rcpp (but which created another issue for another project). So all that is good now in all use cases. We also have some other changes contributed by Yi Kang some time ago for both JADE-style randomization and some internal tweaks. Some packaging details were updated, and that sums up release 0.1.6.

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Talk about the Debian GNU/Linux riscv64 port at RISC-V workshop

Planet Debian - Fri, 08/06/2018 - 11:20pm

About a month ago I attended the RISC-V workshop (conference, congress) co-organised by the Barcelona Supercomputing Center (BSC) and Universitat Politècnica de Catalunya (UPC).

There I presented a talk with the (unimaginative) name of “Debian GNU/Linux Port for RISC-V 64-bit”, talking about the same topic as many other posts of this blog.

There are 2-3 such RISC-V Workshop events per year, one somewhere in Silicon Valley (initially at UC Berkeley, its birthplace) and the others spread around the world.

The demographics of this gathering are quite different to those of planet-debian; the people attending usually know a lot about hardware and often Linux, GNU toolchains and other FOSS, but sometimes very little about the inner workings of FOSS organisations such as Debian. My talk had these demographics as target, so a lot of its content will not teach anything new for most readers of planet-debian.

Still, I know that some readers are interested in parts of this, now that the slides and videos are published, so here it is:

Also very relevant is that they were using Debian (our very own riscv64 port, recently imported into debian-ports infra) in two of the most important hardware demos in the corridors. The rest were mostly embedded distros to showcase FPS games like Quake2, Doom or similar.


All the feedback that I received from many of the attendees about the availability of the port was very positive and they were very enthusiastic, basically saying that they and their teams were really delighted to be able to use Debian to test their different prototypes and designs, and to drive development.

Also, many used Debian daily in their work and research for other purposes, for example a couple of people were proudly showing to me Debian installed on their laptops.

For me, this feedback is a testament to how much of what we do every day matters to the world out there.


For historical curiosity: I also presented a similar talk at a previous workshop (2 years back) at CSAIL / MIT.

At that time the port was in a much more incipient state, mostly a proof of concept (for example the toolchain had not even started to be upstreamed). Links:

Manuel A. Fernandez Montecelo https://people.debian.org/~mafm/ Manuel A. Fernandez Montecelo :: Personal Debian page - planet-debian

Recently I'm not writing any code.

Planet Debian - Fri, 08/06/2018 - 10:58pm
Recently I'm not writing any code.

Junichi Uekawa http://www.netfort.gr.jp/~dancer/diary/201806.html.en Dancer's daily hackings
