
Add a PGP subkey to Yubikey 4

Planet Debian - Thu, 26/07/2018 - 10:03 AM

I have a Yubikey from work and wanted to start signing git commits without copying my Debian PGP key to the work computer. No, I did not want to create a second-class PGP key just for work. Here are the instructions so that someone else can do the same.

On the master computer

  • Create a second home dir for gpg

Because of bug #904596 I recommend moving your GPG home directory out of the way and copying it back to the original location before starting.

mv ~/.gnupg ~/.gnupg.ref
cp -r ~/.gnupg.ref ~/.gnupg
  • Create a subkey just for signing.

Create a subkey and take note of its ID.

gpg --edit-key <KEY ID>
addkey
list
save
  • Move the subkey into the Yubikey.

Select the new subkey and move it into the Yubikey.

gpg --edit-key <KEY ID>
key <SUB KEY ID>
keytocard
save
  • Publish the updated PGP Key
gpg --keyserver http://... --send-keys <KEY ID>
  • Store the public URL of the key on the Yubikey
gpg --edit-card
url http://...
quit
  • Back up both GPG home dirs

On your master computer you need to keep using the old GPG home dir, but you need to store both for the future.

mv ~/.gnupg ~/.gnupg.yubikey4
mv ~/.gnupg.ref ~/.gnupg
cd ~
tar cf gnupg-homedir.backup.tar .gnupg .gnupg.yubikey4
  • Test
gpg --armor --sign

Should work without asking for the Yubikey.

  • Wait for the Key server to update your public key with the new subkey.
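
One way to check (a sketch only; substitute your key server and key ID) is to fetch the key into a scratch GPG home directory and verify that the new signing subkey shows up:

mkdir -m 700 /tmp/gpg-check
gpg --homedir /tmp/gpg-check --keyserver http://... --recv-keys <KEY ID>
gpg --homedir /tmp/gpg-check --list-keys <KEY ID>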

On a new computer

  • Plug in the Yubikey
  • Through the Yubikey, fetch the public PGP key
gpg --edit-card
fetch
quit
  • Test
gpg --armor --sign

Should ask for the Yubikey.

Jose M. Calhariz http://blog.calhariz.com/ One Suggestion by ... Calhariz

Inception: VM inside Docker inside KVM – Testing Debian VM installation builds on Travis CI

Planet Debian - Wed, 25/07/2018 - 5:31 PM

Back in 2006 I started to write a tool called grml-debootstrap. grml-debootstrap is a wrapper around debootstrap for installing Debian systems. Using grml-debootstrap, it’s possible to install Debian systems from the command line, without having to boot a Debian installer ISO. This is very handy when you’re running a live system (like Grml or Tails) and want to install Debian. It’s as easy as running:

% sudo grml-debootstrap --target /dev/sda1 --grub /dev/sda

I’m aware that grml-debootstrap is used in Continuous Integration/Delivery environments, installing Debian systems several hundreds or even thousands of times each month. Over time grml-debootstrap gained many new features. For example, since 2011 grml-debootstrap supports installation into VM images:

% sudo grml-debootstrap --vmfile --vmsize 3G --target debian.img

In 2016 we also added (U)EFI support (the target device in this example is a logical device on LVM):

% sudo grml-debootstrap --grub /dev/sdb --target /dev/mapper/debian--server-rootfs --efi /dev/sdb1

As you might imagine, every new feature we add also increases the risk of breaking something™ existing. Back in 2014, I contributed a setup using Packer to build automated machine images, using grml-debootstrap. That allowed me to generate Vagrant boxes with VirtualBox automation via Packer, serving as a base for reproducing customer environments, but also ensuring that some base features of grml-debootstrap work as intended (including backwards compatibility until Debian 5.0 AKA lenny).

The problem with this Packer setup, though, is that contributors usually don’t have Packer and VirtualBox (readily) available. They also might not have the proper network speed/bandwidth to run extensive tests. To get rid of those (local) dependencies and make contributing to grml-debootstrap more accessible (we’re currently working on e.g. systemd-networkd integration), I invested some time at DebCamp at DebConf18.

I decided to give Travis CI a spin. Travis CI is a well known Continuous Integration service in the open source community. Among others, it provides Ubuntu Linux environments, either container-based or as full virtual machines, giving us what we need. Working on the Travis CI integration, I started with enabling ShellCheck (which is also available as a Debian package, BTW!), serving as a lint tool for shell scripts. All of that takes place in an isolated docker container.

To be able to execute grml-debootstrap, we need to install the latest version of grml-debootstrap from Git. That’s where travis.debian.net helps us – it is a hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform to test builds on every update. The result is a Debian package (grml-debootstrap_*.deb) which we can use for installation, ensuring that we run exactly what we will ship to users (including scripts, configuration + dependencies). This also takes place in an isolated docker instance.

Then it’s time to start a Debian/stretch docker container and install there the grml-debootstrap*.deb file resulting from the travis.debian.net container run. Inside it, we execute grml-debootstrap with its VM installation feature, to install Debian into a qemu.img file. Via qemu-system-x86_64 we can boot this VM file. Finally, goss takes care of testing and validation of the resulting system.

The overall architecture looks like:

So Travis CI is booting a KVM instance on GCE (Google Compute Engine) for us, inside of which we start three docker instances:

  1. shellcheck (koalaman/shellcheck:stable)
  2. travis.debian.net (debian:stretch + debian:unstable, controlled via TRAVIS_DEBIAN_DISTRIBUTION)
  3. VM image installation + validation (debian:stretch)

Inside the debian/stretch docker environment, we install and execute grml-debootstrap. Finally we’re booting it via Qemu/KVM and running tests against it.
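
A rough sketch of what that last stage boils down to (illustrative only: the image file name, memory size and goss invocation are assumptions, not the exact CI configuration):

# inside the debian:stretch container: install the freshly built package
apt install ./grml-debootstrap_*.deb

# install Debian into a VM image file
grml-debootstrap --vmfile --vmsize 3G --target qemu.img

# boot the resulting image headless via QEMU/KVM
qemu-system-x86_64 -hda qemu.img -m 1024 -display none -daemonize

# goss then checks the booted system against a goss.yaml spec (run inside the VM)
goss validate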

An example of such a Travis CI run is available at https://travis-ci.org/grml/grml-debootstrap/jobs/407751811.

Travis CI builds heavily depend on a bunch of external resources, which might result in false negatives in builds; this is something that we might improve by further integrating and using our infrastructure with Jenkins, GitLab etc. Anyway, it serves as a great base to make contributions and refactoring of grml-debootstrap easier.

Thanks to Christian Hofstaedtler + Darshaka Pathirana for proof-reading this.

mika https://michael-prokop.at/blog Debian – mikas blog

Debian/TeX Live 2018.20180724-1

Planet Debian - Tue, 24/07/2018 - 12:13 PM

After more than two months finally an update to TeX Live in Debian again. I was a bit distracted by work, private life, travels, and above all the update to texdoc which required a few changes. Anyway, here is the new shipload, should be arriving at your computer in due time.

Having skipped more than two months brings a huge bunch of updates, so it is hard to pick out some interesting ones. As usual, the work by Michael Sharpe, this time the extension of the Stix2 fonts in the stickstoo package, is greatly admired by me. I never understood where he finds all the time.

On the update side I am happy to see that Takuto Asakura has taken over responsibility for texdoc, has already added fuzzy search and better command line parsing, and I am sure we will see great improvements over time in this very important puzzle piece for finding relevant documentation.

With this I am diving into the preparations for DebConf18 in Taiwan, where I will report, among other things, on the status of typesetting CJK languages with TeX in Debian. Looking forward to meeting a lot of interesting people in Taiwan.

Please enjoy.

New packages

axessibility, beamertheme-focus, biblatex-socialscienceshuberlin, cellprops, cqubeamer, ecothesis, endnotesj, erw-l3, etsvthor, gatherenum, guitartabs, hyperbar, jnuexam, kanaparser, lualatex-truncate, luavlna, modulus, onedown, padcount, pdfoverlay, pdfpc-movie, penrose, postage, powerdot-tuliplab, pst-contourplot, statistics, stickstoo, tagpdf, texdate, tikz-nef, tikzmarmots, tlc-article, topletter, xbmks.

Updated packages

academicons, achemso, acmart, alegreya, animate, apxproof, arabluatex, arara, babel, babel-french, babel-ukrainian, beebe, bezierplot, bib2gls, biblatex-archaeology, biblatex-caspervector, biblatex-ext, biblatex-gb7714-2015, biblatex-sbl, bibleref, bidi, bundledoc, bxjscls, cabin, caption, carlisle, cascade, catechis, classicthesis, clipboard, cochineal, colophon, colortbl, contracard, cooking-units, crossrefware, ctex, dashundergaps, datepicker-pro, datetime2, datetime2-galician, datetime2-irish, datetime2-latin, datetime2-lsorbian, dccpaper, doclicense, docsurvey, dozenal, dynkin-diagrams, elsarticle, esami, eso-pic, etoc, europecv, exercisebank, factura, fduthesis, fetchbibpes, filecontents, fira, fontawesome, fontawesome5, gbt7714, gentombow, geometry, getmap, glossaries, glossaries-extra, handin, ipaex-type1, isodoc, japanese-otf-uptex, japanese-otf-uptex-nonfree, jlreq, jsclasses, ketcindy, knowledge, komacv-rg, l3build, l3experimental, l3kernel, l3packages, latex, latex-make, latex-via-exemplos, latex2e-help-texinfo, latex2e-help-texinfo-spanish, latex2man, latexindent, latexmk, libertinus-otf, libertinust1math, lm, lni, lstbayes, luatexja, luaxml, lwarp, ly1, lyluatex, make4ht, marginnote, mcf2graph, media9, mhchem, minitoc, musicography, musixtex, na-position, ncctools, newtx, newtxsf, nicematrix, ocgx2, optidef, paracol, pgfornament-han, pkuthss, plantuml, platex, pst-ode, pstricks, ptex, ptex2pdf, pxjahyper, regexpatch, register, reledmac, roboto, scientific-thesis-cover, scsnowman, semantic-markup, serbian-lig, siunitx, stix, structmech, struktex, synctex, t2, tex-gyre, tex4ebook, tex4ht, texdoc, texdoctk, texlive-de, texlive-en, thesis-gwu, thucoursework, thuthesis, tikz-relay, tikzducks, tikzsymbols, todonotes, tools, toptesi, tracklang, turabian-formatting, uantwerpendocs, unicode-data, updmap-map, uptex, venndiagram, visualtikz, witharrows, xassoccnt, xcharter, xepersian, xint, xltabular, xsavebox, xurl, yathesis, zxjafont, zxjatype.

Norbert Preining https://www.preining.info/blog There and back again

libhandy 0.0.2

Planet Debian - Tue, 24/07/2018 - 11:32 AM

Last month we tagged the first release of libhandy, a GTK+ library to ease the development of GNOME applications for mobile devices and small screens. Two of the contained widgets, HdyLeaflet and HdyColumn, are containers to address the specific size constraints of phones (video by Adrien). The rest are special purpose widgets, needed more than once on mobile devices, e.g. a Keypad (video).

This time around for the v0.0.2 release we mostly have bugfixes. From the Debian package's changelog:

[ Adrien Plazas ]
* dialer: Make the grid visible and forbid show all.
* example: Drop usage of show_all()
* dialer: Add column-spacing and row-spacing props.
* example: Change the grid's spacing and minimum size request.
* flatpak: Allow access to the dconf config dir.
* Replace phone-dial-symbolic by call-start-symbolic.
* column: Fix height for width request.
[ Guido Günther ]
* Use source.puri.sm instead of code.puri.sm.
* Add AUTHORS file
* gitlab-ci: Build on Debian buster using provided build-deps.
* arrows: test object construction
* Multiple gtk-doc fixes
* docs: Abort on warnings.
* DialerButton: free letters

The Debian package was uploaded to Debian's NEW queue.

Guido Günther http://honk.sigxcpu.org/con/ Colors of Noise - Entries tagged planetdebian

Rcpp 0.12.18: Another batch of updates

Planet Debian - Tue, 24/07/2018 - 2:30 AM

Another bi-monthly update in the 0.12.* series of Rcpp landed on CRAN early this morning after less than two weekends in the incoming/ directory of CRAN. As always, thanks to CRAN for all the work they do so well.

So once more, this release follows the 0.12.0 release from July 2015, the 0.12.1 release in September 2015, the 0.12.2 release in November 2015, the 0.12.3 release in January 2016, the 0.12.4 release in March 2016, the 0.12.5 release in May 2016, the 0.12.6 release in July 2016, the 0.12.7 release in September 2016, the 0.12.8 release in November 2016, the 0.12.9 release in January 2017, the 0.12.10 release in March 2017, the 0.12.11 release in May 2017, the 0.12.12 release in July 2017, the 0.12.13 release in late September 2017, the 0.12.14 release in November 2017, the 0.12.15 release in January 2018, the 0.12.16 release in March 2018, and the 0.12.17 release in May 2018, making it the twenty-second release at the steady and predictable bi-monthly release frequency (which started with the 0.11.* series).

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1403 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with another 138 in the current BioConductor release 3.7.

A pretty decent number of changes, contributed by a number of Rcpp core team members as well as Rcpp users, went into this release. Full details are below.

Changes in Rcpp version 0.12.18 (2018-07-21)
  • Changes in Rcpp API:

    • The StringProxy::operator== is now const correct (Romain in #855 fixing #854).

    • The Environment::new_child() is now const (Romain in #858 fixing #854).

    • Next eval codes now properly unwind (Lionel in the large and careful #859 fixing #807).

    • In debugging mode, more type information is shown on abort() (Jack Wasey in #860 and #882 fixing #857).

    • A new class was added which allows suspension of the RNG synchronisation to address an issue seen in RcppDE (Kevin in #862).

    • Evaluation calls now happen in the base environment (which may fix an issue seen between conflicted and some BioConductor packages) (Kevin in #863 fixing #861).

    • Call stack display on error can now be controlled more finely (Romain in #868).

    • The new Rcpp_fast_eval is used instead of Rcpp_eval though this still requires setting RCPP_USE_UNWIND_PROTECT before including Rcpp.h (Qiang Kou in #867 closing #866).

    • The Rcpp::unwindProtect() function extracts the unwinding from the Rcpp_fast_eval() function and makes it more generally available. (Lionel in #873 and #877).

    • The tm_gmtoff part is skipped on AIX too (#876).

  • Changes in Rcpp Attributes:

    • The sourceCpp() function now evaluates R code in the correct local environment in which a function was compiled (Filip Schouwenaars in #852 and #869 fixing #851).

    • Filenames are now sorted in a case-insensitive way so that the RcppExports files are more stable across locales (Jack Wasey in #878).

  • Changes in Rcpp Sugar:

    • The sugar functions min and max now recognise empty vectors (Dirk in #884 fixing #883).

Thanks to CRANberries, you can also look at a diff to the previous release. As always, details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Extremely hot and humid - over 40℃ in Tokyo

Planet Debian - Mon, 23/07/2018 - 4:21 PM
I can't do anything, it's too hot and humid... hope it'd be better in Hsinchu, Taiwan.

Yes, I'll go to DebConf18, see you there.

Hideki Yamane noreply@blogger.com Henrich plays with Debian

Reproducible Builds: Weekly report #169

Planet Debian - Mon, 23/07/2018 - 3:35 PM

Here’s what happened in the Reproducible Builds effort between Sunday July 15 and Saturday July 21 2018:

Packages reviewed and fixed, and bugs filed

tests.reproducible-builds.org development

There were a number of updates to our Jenkins-based testing framework that powers tests.reproducible-builds.org:

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible-builds.org/blog/ reproducible-builds.org

Passwords Used by Daemons

Planet Debian - Mon, 23/07/2018 - 9:11 AM

There’s a lot of advice about how to create and manage user passwords, and some of it is even good. But there doesn’t seem to be much advice about passwords for daemons, scripts, and other system processes.

I’m writing this post with some rough ideas about the topic; please let me know if you have any better ideas. Also I’m considering passwords and keys in a fairly broad sense: a private key for an HTTPS certificate has more in common with a password to access another server than with most other data that a server might use. This also applies to SSH host secret keys, keys that are in ssh authorized_keys files, and other services too.

Passwords in Memory

When SSL support for Apache was first released, the standard practice was to have the SSL private key encrypted and to require the sysadmin to enter a password to start the daemon. This practice has mostly gone away; I would hope that is due to people realising that it offers little value, but it’s more likely just because it’s really annoying and doesn’t scale for cloud deployments.

If there was a benefit to having the password only in RAM (i.e. no readable file on disk) then there are options such as granting read access to the private key file only during startup. I have seen a web page recommending running “chmod 0” on the private key file after the daemon starts up.
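
As a minimal sketch of that idea (the file path and daemon are made up for illustration):

# make the key readable just long enough to start the daemon
chmod 400 /etc/ssl/private/example.key
service apache2 start
# afterwards nothing can re-read the key from disk; the daemon keeps its in-memory copy
chmod 0 /etc/ssl/private/example.key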

I don’t believe that there is a real benefit to having a password only existing in RAM. Many exploits target the address space of the server process; Heartbleed is one well known bug, still shipping in new products today, which reads server memory for encryption keys. If you run a program that is vulnerable to Heartbleed then its SSL private key (and probably a lot of other application data) is vulnerable to attackers regardless of whether you needed to enter a password at daemon startup.

If you have an application or daemon that might need a password at any time then there’s usually no way of securely storing that password such that a compromise of that application or daemon can’t get the password. In theory you could have a proxy for the service in question which runs as a different user and manages the passwords.

Password Lifecycle

Ideally you would be able to replace passwords at any time. Any time a password is suspected to have been leaked then it should be replaced. That requires that you know where the password is used (both which applications and which configuration files used by those applications) and that you are able to change all programs that use it in a reasonable amount of time.

The first thing to do to achieve this is to have one password per application, not one per use. For example, if you have a database storing accounts used for a mail server then you would be tempted to have an outbound mail server such as Postfix and an IMAP server such as Dovecot both use the same password to access the database. The correct thing to do is to have one database account for Dovecot and another for Postfix, so if you need to change the password for one of them you don’t need to change passwords in two locations and restart two daemons at the same time. Another good option is to have Postfix talk to Dovecot for authenticating outbound mail; that means you only have a single configuration location for storing the password, and it also means that a security flaw in Postfix (or more likely a misconfiguration) couldn’t give access to the database server.
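
A minimal sketch of the “one account per application” idea, assuming a MySQL database named mailaccounts (the names and grants are illustrative, not a recommended schema):

mysql -u root -p <<'EOF'
CREATE USER 'dovecot'@'localhost' IDENTIFIED BY 'one-long-random-password';
CREATE USER 'postfix'@'localhost' IDENTIFIED BY 'another-long-random-password';
GRANT SELECT ON mailaccounts.* TO 'dovecot'@'localhost';
GRANT SELECT ON mailaccounts.* TO 'postfix'@'localhost';
EOF

With separate accounts, rotating the Postfix password is then a single statement (e.g. ALTER USER) plus a Postfix restart, without touching Dovecot.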

Passwords Used By Web Services

It’s very common to run web sites on Apache backed by database servers, so common that the acronym LAMP is widely used for Linux, Apache, Mysql, and PHP. In a typical LAMP installation you have multiple web sites running as the same user which by default can read each other’s configuration files. There are some solutions to this.

There is an Apache module mod_apparmor to use the Apparmor security system [1]. This allows changing to a specified Apparmor “hat” based on the URI or a specified hat for the virtual server. Each Apparmor hat is granted access to different files and therefore files that contain passwords for MySQL (or any other service) can be restricted on a per vhost basis. This only works with the prefork MPM.

There is also an Apache module mpm-itk which runs each vhost under a specified UID and GID [2]. This also allows protecting sites on the same server from each other. The ITK MPM is also based on the prefork MPM.

I’ve been thinking of writing a SE Linux MPM for Apache to do similar things. It would have to be based on prefork too. Maybe a change to mpm-itk to support SE Linux context as well as UID and GID.

Managing It All

Once the passwords are separated such that each service runs with minimum privileges you need to track and manage it all. At the simplest that needs a document listing where all of the passwords are used and how to change them. If you use a configuration management tool then that could manage the passwords. Here’s a list of tools to manage service passwords in tools like Ansible [3].
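
As one example of what such tooling looks like, Ansible’s vault can keep service passwords encrypted at rest in the configuration repository (the file names here are arbitrary):

# create or edit an encrypted variables file
ansible-vault create group_vars/all/vault.yml
ansible-vault edit group_vars/all/vault.yml

# deploy, supplying the vault passphrase interactively
ansible-playbook site.yml --ask-vault-pass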

Related posts:

  1. Email Passwords I was doing some routine sysadmin work for a client...
  2. SE Linux Play Machine and Passwords My SE Linux Play Machine has been online again since...
  3. Case Sensitivity and Published Passwords When I first started running a SE Linux Play Machine...
etbe https://etbe.coker.com.au etbe – Russell Coker

SPAKE2 In Golang: Journey to Cryptoland begins

Planet Debian - Sun, 22/07/2018 - 6:37 PM
This post, and the series of SPAKE2-related posts that follow, is inspired by Jonathan Lange's series on SPAKE2 when he ported it to Haskell, which is also the reference I used for my implementation in Golang.

Brief Background

Before I go into detail I should tell why and how I came to implement SPAKE2 in Golang. The story starts a couple of months back when I started contributing to *magic-wormhole.rs*, a Rust port of the original Python magic-wormhole project. You can read this LWN article to understand more about what magic-wormhole is.

While I was contributing, my friend Ramakrishnan Muthukrishnan suggested that I should try to port magic-wormhole to Golang. I was not an expert Go programmer but I understood the language basics, and I thought: why not use this to improve my understanding of the language? And this is where it all started.

What is SPAKE2 and why is it used?

At this point we need to know why SPAKE2 is needed and how magic-wormhole uses it. SPAKE2 is a Simple Password Authenticated Key Exchange protocol. It allows two parties holding a shared weak password to derive a strong shared key, which can then be used by the parties to set up an encrypted and authenticated channel between them.

magic-wormhole uses SPAKE2 to negotiate a shared session key between the communicating parties, which magic-wormhole then uses to derive the different keys needed for different purposes.

So I hit my first roadblock after agreeing to implement magic-wormhole in Golang: there is no SPAKE2 implementation readily available in Go!

Enter Cryptography and My knowledge about it

Ram convinced me that it's easy to implement SPAKE2 in Go and I agreed. But my knowledge of cryptography was limited. I knew that cryptography is

  • basically math with big numbers, and relies on the fact that factoring big numbers is hard for computers.
  • These big numbers are drawn from Abelian groups and related concepts from number theory, which I studied in academics, but since I never learnt a practical use case for them, they have been gathering dust somewhere in my memory.

I've taken some theoretical courses on cryptography but never thought of implementing one of these schemes myself. With this weak foundation I set out for a new adventure.

Python SPAKE2 Implementation

Since magic-wormhole is implemented in Python, the SPAKE2 implementation in Python is considered the reference for implementations in other languages. The SPAKE2 paper does not specify much about where or how the required public constants are defined, so implementers can take some liberty in defining them. The Python code uses two kinds of groups: one is the twisted Edwards curve group Ed25519, and the others are multiplicative groups over the integers of 1024, 2048 and 3072 bits.

In the Python code Warner himself has defined Ed25519, the integer groups, and the related operations. In the Rust code there is only the Ed25519 group, but it is created using the curve25519-dalek library. The Haskell code also defines the group operations itself instead of depending on any other library (possibly Cryptonite). So as a first step I started searching for a library equivalent to curve25519-dalek, as I had no clue what an elliptic curve is (forget groups, I didn't know the basics themselves).

First Try, Bruteforce

I have this bad habit of tackling problems with brute force; sometimes it works, but most of the time it just exhausts me, taking me nowhere. So, following my normal habit, I started looking for an Ed25519 curve operations library (without actually knowing what the operations are or how they work). I tried to read through curve25519-dalek but in vain; nothing entered my head. I found the ed25519 package for Go written by Adam Langley, but it eventually turned out to actually be a signature package. I found an internal package defined in the ed25519 package called edwards25519, which seemed to have some operations defined, but I was unable to understand it nor figure out why it was made internal. I later even took a dig at embedding edwards25519 as part of my implementation of the ed25519 group, but finally had to drop it for my own version; that story will be part of another post in this series.

Conclusion

During all this I was constantly in touch with Ram, and the first thing he told me was to slow down a bit and start from scratch. And that was the learning point for me. In short, I can say the following.

Nothing can be done in a single day; before you code, understand the basic concepts and then build from there. As they say, you can't build a stable house on a weak foundation.

In the next post in this series I will write about my learning on elliptic curves and elliptic curve groups, followed by experiments with number groups, and finally the learning and decisions I had to make in writing gospake2.

copyninja https://copyninja.info/ Random Ramblings

WebGL, Fragment Shader, GHCJS and reflex-dom

Planet Debian - Sun, 22/07/2018 - 4:41 PM

What a potpourri of topics... too long to read? Click here!

On the side and very slowly I am working on a little game that involves breeding spherical patterns… more on that later (maybe). I want to implement it in Haskell, but have it run in the browser, so I reached for GHCJS, the Haskell-to-Javascript compiler.

WebGL for 2D images

A crucial question was: How do I draw a generative pattern onto an HTML canvas element? My first attempt was to calculate the pixel data into a bit array and use putImageData() to push it onto the canvas, but it was prohibitively slow. I might have done something stupid along the way, and some optimization might have helped, but I figured that I should not calculate the colors of each pixel myself, but leave this to what is best at it: the browser and (ideally) the graphics card.

So I took this as an opportunity to learn about WebGL, in particular fragment shaders. The term shader is misleading, and should mentally be replaced with “program”, because it is no longer (just) about shading. WebGL is intended to do 3D graphics, and one sends a bunch of coordinates for triangles, a vertex shader and a fragment shader to the browser. The vertex shader places the vertices, while the fragment shader colors each pixel on the visible triangles. This is a gross oversimplification, but that is fine: We only really care about the last step, and if our coordinates always just define a rectangle that fills the whole canvas, and the vertex shader does not do anything interesting, then what remains is an HTML canvas that takes a program (written in the GL shader language), which is run for each pixel and calculates the color to be shown at that pixel.

Perfect! Just what I need. Dynamically creating a program that renders the pattern I want to show is squarely within Haskell’s strengths.

A reflex-dom widget

As my game UI grows, I will at some point no longer want to deal with raw DOM access, events etc., and the abstraction that makes creating such user interfaces painless is Functional Reactive Programming (FRP). One of the main mature implementations is Ryan Trinkle's reflex-dom, and I want to use this project to get more hands-on experience with it.

Based on my description above, once I hide all the details of the WebGL canvas setup, what I really have is a widget that takes a text string (representing the fragment shader), and creates a DOM element for it. This would suggest a function with this type signature

fragmentShaderCanvas :: MonadWidget t m => Dynamic t Text -> m ()

where the input text is dynamic, meaning it can change over time (and the canvas will be updated) accordingly. In fact, I also want to specify attributes for the canvas (especially width and height), and if the supplied fragment shader source is invalid and does not compile, I want to get my hands on error messages, as provided by the browser. So I ended up with this:

fragmentShaderCanvas :: MonadWidget t m => Map Text Text -> Dynamic t Text -> m (Dynamic t (Maybe Text))

which very pleasingly hides all the complexity of setting up the WebGL context from the user. This is abstraction at excellence!

I published this widget in the hackage.haskell.org/package/reflex-dom-fragment-shader-canvas package on Hackage.

A Demo

And because reflex-dom makes it so nice, I created a little demo program; it is essentially a fragment shader playground!

On https://nomeata.github.io/reflex-dom-fragment-shader-canvas/ you will find a text area where you can edit the fragment shader code. All your changes are immediately reflected in the canvas on the right, and in the list of warnings and errors below the text area. The code for this demo is pretty short.

A few things could be improved, of course: For example, the canvas element should have its resolution automatically adjusted to the actual size on screen, but it is somewhat tricky to find out when and if a DOM element has changed size. Also, the WebGL setup should be rewritten to be more defensive, and fail more gracefully if things go wrong.

BTW, if you need a proper shader playground, check out Shadertoy.

Development and automatic deployment

The reflex authors all use Nix as their development environment, and if you want to use reflex-dom, then using Nix is certainly the path of least resistance. But I would like to point out that it is not a necessity, and you can stay squarely in cabal land if you want:

  • You don’t actually need ghcjs to develop your web application: reflex-dom builds on jsaddle which has a mode where you build your program using normal GHC, and it runs a web server that your browser connects to. It works better with Chrome than with Firefox at the moment, but is totally adequate to develop a program.

  • If you do want to install ghcjs, then it is actually relatively easy: The README on the ghc-8.2 branch of GHCJS tells you how to build and install GHCJS with cabal new-build.

  • cabal itself supports ghcjs just like ghc! Just pass --ghcjs -w ghcjs to it.

  • Because few people use ghcjs and reflex with cabal some important packages (ghcjs-base, reflex, reflex-dom) are not on Hackage, or only with old versions. You can point cabal to local checkouts using a cabal.project file or even directly to the git repositories. But it is simpler to just use a Hackage overlay that I created with these three packages, until they are uploaded to Hackage.

  • If the application you create is a pure client-based program and could therefore be hosted on any static web host, wouldn’t it be nice if you could just have it appear somewhere in the internet whenever you push to your project? Even that is possible, as I describe in an example repository!

It uses Travis CI to build GHCJS and the dependencies, caches them, builds your program and – if successful – uploads the result to GitHub Pages. In fact, the demo linked above is produced using that. Just push, and 5 minutes later the changes are available online!

I know about rumors that Herbert’s excellent multi-GHC PPA repository might provide .deb packages with GHCJS prebuilt soon. Once that happens, and maybe ghcjs-base and reflex get uploaded to Hackage, then the power of reflex-based web development will be conveniently available to all Haskell developers (even those who shunned Nix so far), and I am looking forward to many cool projects coming out of that.

Joachim Breitner mail@joachim-breitner.de nomeata’s mind shares

Review: The Power of Habit

Planet Debian - Sat, 21/07/2018 - 6:00 AM

Review: The Power of Habit, by Charles Duhigg

Publisher: Random House
Copyright: 2012, 2014
Printing: 2014
ISBN: 0-679-60385-9
Format: Kindle
Pages: 366

One problem with reading pop psychology is that one runs into a lot of books like this one: summaries of valid psychological research that still leave one with the impression that the author was more interested in being dramatic and memorable than accurate. But without reproducing the author's research, it's hard to tell whether that fear is well-grounded or unfair, so one comes away feeling vaguely dissatisfied and grumpy.

Or at least I do. I might be weird.

As readers of my book reviews may have noticed, and which will become more apparent shortly, I'm going through another round of reading "self-help" books. This time, I'm focusing on work habits, concentration, and how to more reliably reach a flow state. The Power of Habit isn't on that topic but it's adjacent to it, so I picked it up when a co-worker recommended it.

Duhigg's project here is to explain habits, both good ones and bad ones, at a scientific level. He starts with a memorable and useful model of the habit loop: a cue triggers a routine, which results in a reward. The reward reinforcement strengthens the loop, and the brain starts internalizing the routine, allowing it to spend less cognitive energy and essentially codifying the routine like a computer program. With fully-formed habits (one's daily bathing routine, for example), the routine is run by a small, tuned part of your brain and requires very little effort, which is why we can have profound shower thoughts about something else entirely. That example immediately shows why habits are valuable and why our brain is so good at creating them: they reduce the mental energy required for routine actions so that we can spend that energy elsewhere.

The problem, of course, is that this mechanism doesn't first consult our conscious intent. It works just as well for things that we do repeatedly but may not want to automatically do, like smoking a pack of cigarettes a day. It's also exploitable; you are not the only person involved in creating your habits. Essentially every consumer product company is trying to get you to form habits around their products, often quite successfully. Duhigg covers marketing-generated habits as well as social and societal habits, the science behind how habits can be changed, and the evidence that often a large collection of apparently unrelated habits are based in a "keystone habit" that, if changed, makes changing all of the other habits far easier.

Perhaps the most useful part of this book is Duhigg's discussion of how to break the habit loop through substitution. When trying to break habits, our natural tendency is to consciously resist the link between cue and routine. This is possible, but it's very hard. It requires making an unconscious process conscious, and we have a limited amount of conscious decision-making energy available to us in a day. More effective than fighting the cues is to build a replacement habit with the same cue, but this requires careful attention to the reward stage so that the substituted habit will complete the loop and have a chance of developing enough strength to displace the original habit.

So far, so good. All of this seems consistent with other psychological research I've read (particularly the reasons why trying to break habits by willpower alone is rarely successful). But there are three things that troubled me about this book and left me reluctant to recommend it or rely on it.

The first is that a useful proxy for checking the research of a book is to look at what the author says about a topic that one already knows something about. Here, I'm being a bit unfair by picking on a footnote, but Duhigg has one anecdote about a woman with a gambling problem that has the following definitive-sounding note attached:

It may seem irrational for anyone to believe they can beat the house in a casino. However, as regular gamblers know, it is possible to consistently win, particularly at games such as blackjack. Don Johnson of Bensalem, Pennsylvania, for instance, won a reported $15.1 million at blackjack over a six-month span starting in 2010. The house always wins in the aggregate because so many gamblers bet in a manner that doesn't maximize their odds, and most people do not have enough money to see themselves through losses. A gambler can consistently win over time, though, if he or she has memorized the complicated formulas and odds that guide how each hand should be played. Most players, however, don't have the discipline or mathematical skills to beat the house.

This is just barely this side of being outright false, and is dangerously deceptive to the point of being casino propaganda. And the argument from anecdote is both intellectually bogus (a lot of people gamble, which means that not only is it possible that someone will go on that sort of winning streak through pure chance, it is almost guaranteed) and disturbingly similar to how most points are argued in this book.

If one assumes an effectively infinite deck (in other words, assume each card dealt is an independent event), there is no complicated rule you can memorize to beat the house at blackjack. The best that you can do is to reduce the house edge to 1-2% depending on the exact local rules. Wikipedia has a comprehensive discussion if you want the details. Therefore, what Duhigg has to be talking about is counting cards (modifying your play based on what cards have already been dealt and therefore what cards are remaining in the deck).

However, and Duhigg should know this if he's going to make definitive statements about blackjack, US casinos except in Atlantic City (every other example in this book is from the US) can and do simply eject players who count cards. (There's a legal decision affecting Atlantic City that makes the story more complicated there.) They also use other techniques (large numbers of decks, frequent reshuffling) to make counting cards far less effective. Even if you are very good at counting cards, this is not a way to win "consistently over time" because you will be told to stop playing. Counting cards is therefore not a matter of memorizing complicated formulas and odds. It's a cat-and-mouse game against human adversaries to disguise your technique enough to not be ejected while still maintaining an edge over the house. This is rather far from Duhigg's description.

Duhigg makes another, if less egregious, error by uncritically accepting the popular interpretation of the Stanford marshmallow experiment. I'll spare you my usual rant about this because The Atlantic has now written it for me. Surprise surprise, new research shows that the original experiment was deeply flawed in its choice of subjects and that the effect drastically decreases once one controls for social and economic background.

So that's one problem: when writing on topics about which I already have some background, he makes some significant errors. The second problem is related: Duhigg's own sources in this book seem unconvinced by the conclusions he's drawing from their research.

Here, I have to give credit to Duhigg for publishing his own criticism, although you won't find it if you read only the main text of the book. Duhigg has extensive end notes (distinct from the much smaller number of footnotes that elaborate on some point) in which he provides excerpts from fact-checking replies he got from the researchers and interview subjects in this book. I read them all after finishing the rest of the book, and I thought a clear pattern emerged. After reading early drafts of portions of the book, many of Duhigg's sources replied with various forms of "well, but." They would say that the research is accurately portrayed, but Duhigg's conclusion isn't justified by the research. Or that Duhigg described part of the research but left out other parts that complicated the picture. Or that Duhigg has simplified dangerously. Or that Duhigg latched on to an ancillary part of their research or their story and ignored the elements that they thought were more central. Note after note reads as a plea to add more nuance, more complication, less certainty, and fewer sweeping conclusions.

Science is messy. Psychological research is particularly messy because humans are very good at doing what they're "supposed" to do, or changing behavior based on subtle cues from the researcher. And most psychological research of the type Duhigg is summarizing is based on very small sample sizes (20-60 people is common) drawn from very unrepresentative populations (often college students who are conveniently near the researchers and cheap to bribe to do weird things while being recorded). When those experiments are redone with larger sample sizes or more representative populations, often they can't be replicated. This is called the replication crisis.

Duhigg is not a scientist. He's a reporter. His job is to take complicated and messy stories and simplify them into entertaining, memorable, and understandable narratives for a mass audience. This is great for making difficult psychological research more approachable, but it also inherently involves amplifying tentative research into rules of human behavior and compelling statements about how humans work. Sometimes this is justified by the current state of the research. Sometimes it isn't. Are Duhigg's core points in this book justified? I don't know and, based on the notes, neither does Duhigg, but none of that uncertainty is on the pages of the main text.

The third problem is less foundational, but seriously hurt my enjoyment of The Power of Habit as a reader: Duhigg's examples are horrific. The first chapter opens with the story of a man whose brain was seriously injured by a viral infection and could no longer form new memories. Later chapters feature a surgeon operating on the wrong side of a stroke victim's brain, a woman who destroyed her life and family through gambling, and a man who murdered his wife in his sleep believing she was an intruder. I grant that these examples are memorable, and some are part of a long psychological tradition of learning about the brain from very extreme examples, but these were not the images that I wanted in my head while reading a book about the science of habits. I'm not sure this topic should require the reader brace themselves against nightmares.

The habit loop, habit substitution, and keystone habits are useful concepts. Capitalist manipulation of your habits is something everyone should be aware of. There are parts of this book that seem worth knowing. But there's also a lot of uncritical glorification of particular companies and scientific sloppiness and dubious assertions in areas I know something about. I didn't feel like I could trust this book, or Duhigg. The pop psychology I like the best is either written by practicing scientists who (hopefully) have a feel for which conclusions are justified by research and which aren't, or admits more questioning and doubt, usually by personalizing the research and talking about what worked for the author. This is neither, and I therefore can't bring myself to recommend it.

Rating: 6 out of 10

Russ Allbery https://www.eyrie.org/~eagle/ Eagle's Path

Freexian’s report about Debian Long Term Support, June 2018

Planet Debian - Fri, 20/07/2018 - 4:28 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In June, about 202 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 8 hours (out of 10 hours allocated, thus keeping 2 extra hours for July).
  • Antoine Beaupré did 24 hours (out of 12 hours allocated + 12 extra hours).
  • Ben Hutchings did 12 hours (out of 15 hours allocated, thus keeping 3 extra hours for July).
  • Brian May did 10 hours.
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort did 17 hours (out of 23.75 hours allocated, thus keeping 6.75 extra hours for July).
  • Holger Levsen did nothing (out of 8 hours allocated, thus keeping 8 extra hours for July).
  • Hugo Lefeuvre did 4.25 hours (out of 23.75 hours allocated, but gave back 10 hours, thus keeping 9.5 hours for July).
  • Markus Koschany did 23.75 hours.
  • Ola Lundqvist did 6 hours (out of 8 hours allocated + 17.5 remaining hours, but gave back 15.5 unused hours, thus keeping 4 extra hours for July).
  • Roberto C. Sanchez did 29.5 hours (out of 18 hours allocated + 11.5 extra hours).
  • Santiago Ruano Rincón did 5.5 hours (out of 8 hours allocated + 7 extra hours, thus keeping 9.5 extra hours for July).
  • Thorsten Alteholz did 23.75 hours.
Evolution of the situation

The number of sponsored hours increased to 210 hours per month. We lost a silver sponsor but gained a new platinum sponsor with the Civil Infrastructure Platform project (hosted by the Linux Foundation, see their announcement).

We are very happy to see the CIP project engage directly with the Debian project and try to work together to build the software stack for tomorrow’s world’s infrastructure.

The security tracker currently lists 57 packages with a known CVE and the dla-needed.txt file 52.

Thanks to our sponsors

New sponsors are in bold.


Raphaël Hertzog https://raphaelhertzog.com apt-get install debian-wizard

PKCS#11 v2.20

Planet Debian - Fri, 20/07/2018 - 12:18 PM

By way of experiment, I've just enabled the PKCS#11 v2.20 implementation in the eID packages for Linux, but for now only in the packages in the "continuous" repository. In the past, enabling this has caused issues; there have been a few cases where Firefox would deadlock when PKCS#11 v2.20 was enabled, rather than the (very old and outdated) v2.11 version that we support by default. We believe we have identified and fixed all outstanding issues that caused such deadlocks, but it's difficult to be sure. So, if you have a Belgian electronic ID card and are willing to help me out and experiment a bit, here's something I'd like you to do:

  • Install the eID software (link above) as per normal.
  • Enable the "continuous" repository and upgrade to the packages in that repository:

    • For Debian, Ubuntu, or Linux Mint: edit /etc/apt/sources.list.d/eid.list, and follow the instructions there to enable the "continuous" repository. Don't forget the dpkg-reconfigure eid-archive step. Then, run apt update; apt -t continuous upgrade.
    • For Fedora and CentOS: run yum --enablerepo=beid-continuous install eid-mw
    • For OpenSUSE: run zypper mr -e beid-continuous; zypper up

The installed version of the eid-mw-libs or libbeidpkcs11-0 package should be v4.4.3-42-gf78d786e or higher.
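
To check which version you ended up with, something like the following should work (package names taken from above):

dpkg-query -W -f='${Version}\n' libbeidpkcs11-0   # Debian, Ubuntu, Linux Mint
rpm -q eid-mw-libs                                # Fedora, CentOS, OpenSUSE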

One of the new features in version 2.20 of the PKCS#11 API is that it supports hotplugging of card readers; in version 2.11 of that API, this is not the case, since it predates USB (like I said, it is outdated). So, try experimenting with hotplugging your card reader a bit; it should generally work. Try leaving it installed and using your system (and webbrowser) for a while with that version of the middleware; you shouldn't have any issues doing so, but if you do I'd like to know about it.

Bug reports are welcome as issues on our github repository.

Thanks!

Wouter Verhelst https://grep.be/blog//pd/ pd

Building Debian packages in CI (ick)

Planet Debian - Thu, 19/07/2018 - 5:58 PM

I've recently made the first release of ick, my CI engine, which was built by ick itself. It went OK, but the process needs improvement. This blog post is some pondering on how the process of building Debian packages should happen in the best possible taste.

I'd appreciate feedback, preferably by email (liw@liw.fi).

Context

I develop a number of (fairly small) programs, as a hobby. Some of them I also maintain as packages in Debian. All of them I publish as Debian packages in my own APT repository. I want to make the process for making a release of any of my programs as easy and automated as possible, and that includes building Debian packages and uploading them to my personal APT repository, and to Debian itself.

My personal APT repository contains builds of my programs against several Debian releases, because I want people to have the latest version of my stuff regardless of what version of Debian they run. (This is somewhat similar to what OEMs that provide packages of their own software as Debian packages need to do. I think. I'm not an OEM and I'm extrapolating wildly here.)

I currently don't provide packages for anything but Debian. That's mostly because Debian is the only Linux distribution I know well, or use, or know how to make packages for. I could do Ubuntu builds fairly easily, but supporting Fedora, RHEL, Suse, Arch, Gentoo, etc, is not something I have the energy for at this time. I would appreciate help in doing that, however.

I currently don't provide Debian packages for anything other than the AMD64 (x86-64, "Intel 64-bit") architecture. I've previously provided packages for i386 (x86-32), and may in the future want to provide packages for other architectures (RISC-V, various Arm variants, and possibly more). I want to keep this in mind for this discussion.

Overview

For the context of this blog post, let's assume I have a project Foo. Its source code is stored in foo.git. When I make a release, I tag it using a signed git tag. From this tag, I want to build several things:

  • A release tarball. I will publish and archive this. I don't trust git and related tools (tar, compression programs, etc) to be able to reproducibly produce the same bit-by-bit compressed tarball in perpetuity. There are too many things that can go wrong. For security reasons it's important to be able to have the exact same tarball in the future as today. The simplest way to achieve this is not to try to reproduce it, but to archive it.

  • A Debian source package.

  • A Debian binary package built for each target version of Debian, and each target hardware architecture (CPU, ABI, possibly toolchain version). The binary package should be built from the source package, because otherwise we don't know the source package can be built.

The release tarball should be put in a (public) archive. A digital signature using my personal PGP key should also be provided.

The Debian source and binary packages should be uploaded to one or more APT repositories: my personal one, and for selected packages also the Debian one. For uploading to Debian, the packages will need to be signed with my personal PGP key.

(I am not going to give my CI access to my PGP key. Anything that needs to be signed with my own PGP key needs to be a manual step.)

Package versioning

In Debian, packages are uploaded to the "unstable" section of the package archive, and then automatically copied into the "testing" section, and from there to the "stable" section, unless there are problems in a specific version of a package. Thus all binary packages are built against unstable, using versions of build dependencies in unstable. The process of copying via testing to stable can take years, and is a core part of how Debian achieves quality in its releases. (This is simplified and skips consideration like security updates and other updates directly to stable, which bypass unstable. These details are not relevant to this discussion, I think.)

In my personal APT repository, no such copying takes place. A package built for unstable does not get copied into section with packages built for a released version of Debian, when Debian makes a release.

Thus, for my personal APT repository, there may be several builds of any one version of Foo available.

  • foo 1.2, built for unstable
  • foo 1.2, built for Debian 9
  • foo 1.2, built for Debian 8

In the future, that list may be expanded by having builds for several architectures:

  • foo 1.2, built for unstable, on amd64
  • foo 1.2, built for Debian 9, on amd64
  • foo 1.2, built for Debian 8, on amd64

  • foo 1.2, built for unstable, on riscv

  • foo 1.2, built for Debian 9, on riscv
  • foo 1.2, built for Debian 8, on riscv

When I or my users upgrade our Debian hosts, say from Debian 8 to Debian 9, any packages from my personal APT archive should be updated accordingly. When a host running Debian 8, with foo 1.2 built for Debian 8, gets upgraded to Debian 9, foo should be upgraded to the version of 1.2 built for Debian 9.

Because the Debian package manager works on combinations of package name and package version, that means that the version built for Debian 8 should have a different, and lesser, version than the one built for Debian 9, even if the source code is identical except for the version number. The easiest way to achieve this is probably to build a different source package for each target Debian release. That source package has no other differences than the debian/changelog entry with a new version number, so it doesn't necessarily need to be stored persistently.
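
A sketch of how such a per-release source package could be produced with standard tools (the dch invocation is one possible way to do it, not necessarily what my CI does):

# unpack the source package that was built for unstable
dpkg-source -x foo_1.2-1.dsc
cd foo-1.2

# add a changelog entry with a lower, release-specific version
dch -v 1.2-1~debian9 --distribution stretch "Rebuild for Debian 9."

# regenerate the source package; binaries are then built in a Debian 9 environment
dpkg-buildpackage -S -us -uc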

(This is effectively what Debian's "binary NMU" uploads do: use the same source package version, but do a build varying only the version number. Debian does this, among other reasons, to force a re-build of a package using a new version of a build dependency, for which it is unnecessary to do a whole new sourceful upload. For my CI build purposes, it may be useful to have a new source package, for cases where there are other changes than the version number. This will need further thought and research.)

Thus, I need to produce the following source and binary packages:

  • foo_1.2-1.dsc — source package for unstable
  • foo_1.2-1.orig.tar.xz — upstream tarball
  • foo_1.2-1.debian.tar.xz — Debian packaging and changes
  • foo_1.2-1_amd64.deb — binary package for unstable, amd64
  • foo_1.2-1_riscv.deb — binary package for unstable, riscv

  • foo_1.2-1~debian8.dsc — source package for Debian 8

  • foo_1.2-1~debian8.debian.tar.xz — Debian packaging and changes
  • foo_1.2-1~debian8_amd64.deb — binary package for Debian 8, amd64
  • foo_1.2-1~debian8_riscv.deb — binary package for Debian 8, riscv

  • foo_1.2-1~debian9.dsc — source package for Debian 9

  • foo_1.2-1~debian9.debian.tar.xz — Debian packaging and changes
  • foo_1.2-1~debian9_amd64.deb — binary package for Debian 9, amd64
  • foo_1.2-1~debian9_riscv.deb — binary package for Debian 9, riscv

The orig.tar.xz file is a bit-by-bit copy of the upstream release tarball. The debian.tar.xz files have the Debian packaging files, plus any Debian specific changes. (For simplicity, I'm assuming a specific Debian source package format. The actual list of files may vary, but the .dsc file is crucial, and references the other files in the source package. Again, these details don't really matter for this discussion.)

To upload to Debian, I would upload the foo_1.2-1.dsc source package from the list above, after downloading the files and signing them with my PGP key. To upload to my personal APT repository, I would upload all of them.

Where should Debian packaging be stored in version control?

There seems to be no strong consensus in Debian about where the packaging files (the debian/ subdirectory and its contents) should be stored in version control. Several approaches are common. The examples below use git as the version control system, as it's clearly the most common one now.

  • The "upstream does the packaging" approach: upstream's foo.git also contains the Debian packaging. Packages are built using that. This seems to be especially common for programs, where upstream and the Debian package maintainer are the same entity. That's also the OEM model.

  • The "clone upstream and add packaging" approach: the Debian package maintainer clonse the upstream repository, and adds the packaging files in a separate branch. When upstream makes a release, the master branch in the packaging repository is updated to match the upstream's master branch, and the packaging branch is rebased on top of that.

  • The "keep it separate" approach: the Debian packager puts the packaging files in their own repository, and the source tree is constructed from botht the upstream repository and the packaging repository.

For my own use, I prefer the "upstream does packaging" approach, as it's the least amount of friction for me. For ick, I want to support any approach.

There are various tools for maintaining package source in git (e.g., dgit and git-buildpackage), but those seem to not be relevant to this blog post, so I'm not discussing them in any detail.

The build process

Everything starts from a signed git tag in foo.git, plus additional tags in any packaging repository. The tags are made by the upstream developers and Debian package maintainers. CI will notice the new tag, and build a release from that.

  • Create the upstream tarball (foo-1.2.tar.gz).

  • Manually download and sign the upstream tarball with PGP.

  • Manually publish the upstream tarball and its signature in a suitable place.

  • Create the Debian source package for unstable (foo_1.2-1.dsc), using a copy of the upstream tarball, renamed.

  • Using the Debian source package, build a Debian binary package for unstable for each target architecture (foo_1.2-1_amd64.deb etc).

  • For each target Debian release other than unstable, create a new source package by unpacking the source package for unstable, and adding a debian/changelog entry with ~debianN appended to the version number (a command sketch follows after this list). If there is a need, make any additional Debian release specific changes to the source package.

  • Build each of those source packages for each target architecture, in a build environment with the target Debian release (producing foo_1.2-1~debianN_amd64.deb etc.).

  • Upload all the Debian source and binary packages to an APT repository that allows upload by CI. Have that APT repository sign the resulting Packages file with its own PGP key.

  • Manually download the unstable build, sign it, and upload it to Debian. (Source package only, except in cases where the binary package also needs to be uploaded, such as for new packages.)
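
A rough sketch of the per-release rebuild and build steps above, for Debian 9 (my tool choices for illustration; the CI could equally use git-buildpackage or something else, and the sbuild chroot name is an assumption):

dpkg-source -x foo_1.2-1.dsc
cd foo-1.2
dch -b --newversion 1.2-1~debian9 --distribution stretch "Rebuild for Debian 9."
dpkg-source -b .
cd ..
sbuild --dist=stretch --arch=amd64 foo_1.2-1~debian9.dsc

The -b flag tells dch to accept the new version even though it sorts lower than 1.2-1.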

Lars Wirzenius' blog http://blog.liw.fi/englishfeed/ englishfeed

My DebCamp/DebConf 18 plans

Planet Debian - Enj, 19/07/2018 - 3:30md


Tomorrow I am going to another DebCamp and DebConf; this time at Hsinchu, Taiwan. Thanks to the Debian project, I received sponsorship to attend the event, and I plan to make the following contributions:

  • Bootstrap the DebConf 19 website. I volunteered to lead the DebConf 19 website work, and to do that I intend to get in touch with more experienced people from the DebConf team.

  • Participate part-time in the Perl team sprint. Although I have not been as active in the team as I used to be, I’ll try to use the opportunity to help with package updates and some bug fixing.

  • Keep working with Arthur Del Esposte on our GSoC project, which aims to improve distro-tracker to better support Debian teams’ workflows. Also, prepare him to give an excellent presentation in the GSoC session. Hope to see you there!

  • If I have enough time I want to work on some of my packages too, especially Redmine.

If anyone is interested in what I’ll be doing these days, just reach out to me! It could be in person, via IRC (my nickname: kanashiro), or by email (kanashiro@debian.org).

I hope to meet you soon in Hsinchu!

Lucas Kanashiro http://blog.kanashiro.xyz/ Lucas Kanashiro’s blog

Things you can do with Debian: multimedia editing

Planet Debian - Enj, 19/07/2018 - 10:28pd

The Debian operating system serves many purposes and you can do amazing things with it. Apart from powering the servers behind big internet sites like Wikipedia and others, you can use Debian on your PC or laptop. I’ve been doing that for many years.

One of the great things you can do is some multimedia editing. It turns out I love nature, outdoor sports and adventures, and I usually take videos and photos with my friends while doing such activities. And when I arrive home I love editing them for my other blog, or putting them together in a video.

The setup I’ve been using is composed of several different programs:

  • gimp - image processing
  • audacity - quick audio recording / editing
  • ardour - audio recording / editing / mixing / mastering
  • kdenlive - video editing / mixing
  • openshot - video editing / mixing
  • handbrake - video transcoding

My usage of these tools ranges from very simple to more complex. In the case of gimp, for example, I mostly do quick editing: crop, resize, fix colours, etc. I use audacity for quick audio recording and editing, like cutting a song in half or quickly recording my mic. Ardour is such a powerful DAW, which is more complex to use. I can use it because of my background in the audio business (did you know I worked as a recording/mixing/mastering engineer in a recording studio 10 years ago?). The last amazing feature I discovered in Ardour was the ability to do side-chain compression, great!

For video editing, I started using openshot some years ago, but I recently switched to kdenlive, which from my point of view is more robust and more fine-tuned. You should try both and decide which one fits your needs.

And another awesome tool in my setup is handbrake, which allows you to easily convert and transcode video between many formats, so you can play your videos on different platforms.

It amazes me how these FLOSS tools can be so useful, powerful and easy to install/use. From here, I would like to send a big thank you to all those upstream communities.

In Debian, getting them is a matter of installing the packages from the repositories. All this setup is waiting for you in the Debian archive. This wouldn’t be possible without the hard work of the Debian Multimedia team and other collaborators, who maintain these packages ready to install and use. Well, in fact, thanks to every single Debian contributor :-)
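
As a minimal sketch, installing the whole toolbox can be as simple as the following (package names may vary slightly between Debian releases):

sudo apt install gimp audacity ardour kdenlive openshot handbrake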

Arturo Borrero González http://ral-arturo.org/ ral-arturo.org

Plans for DebCamp and DebConf 18

Planet Debian - Enj, 19/07/2018 - 5:00pd

I recently became an active contributor to the Debian project, something that has been consolidated throughout my GSoC project. In addition to the great learning with my mentors, Lucas Kanashiro and Raphaël Hertzog, the feedback from other community members has been very valuable to the progress we are making on the Distro Tracker. Tomorrow, thanks to Debian project sponsorship, I will take off for Hsinchu, Taiwan to attend DebCamp and DebConf18. It is my first DebConf and I’m looking forward to meeting new people from the Debian community, learning a lot, and making useful contributions during the time I am there.

During DebCamp, I plan to make the following contributions:

  • Keep working with Lucas Kanashiro on our GSoC project on Distro Tracker. In particular, I intend to finish my two open Merge Requests to improve the Team page’s performance and to highlight packages with RC bugs. Also, I plan to advance in adding new package tables to the Team page based on PET’s categories.
  • Help the DebConf19 team bootstrap the website and help with other things that are needed.

At DebConf, I’ll give a presentation in the GSoC session to present the progress of my project.

I hope to talk to more experienced people and collect feedback to improve my work. If anyone is interested in what I will be working on, feel free to talk to me personally, via IRC (nick: arthurmde), or email (arthurmde@gmail.com).

I am certainly looking forward to getting to know Taiwan too. I’m sure I’ll be positively surprised by the culture, food, places, and people on the other side of the world.

See you soon in Taiwan!

Let’s get moving on! ;)

Arthur Del Esposte http://localhost:4000/ Arthur Del Esposte

nanotime 0.2.2

Planet Debian - Enj, 19/07/2018 - 3:56pd

A new maintenance release of the nanotime package for working with nanosecond timestamps just arrived on CRAN.

nanotime uses the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it now uses a more rigorous S4-based approach thanks to a rewrite by Leonardo Silvestri.

This release re-disables tests for xts use. At some point we had hoped a new xts version would know what nanotime is. That xts version is out now, and it doesn’t. Our bad for making that assumption.

Changes in version 0.2.2 (2018-07-18)
  • Unit tests depending on future xts behaviour remain disabled (Dirk in #41).

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Ick version 0.53 released: CI engine

Planet Debian - Mër, 18/07/2018 - 5:17md

I have just made a new release of ick, my CI system. The new version number is 0.53, and a summary of the changes is below. The source code is pushed to my git server (git.liw.fi), and Debian packages to my APT repository (code.liw.fi/debian). See https://ick.liw.fi/download/ for instructions.

See the website for more information: https://ick.liw.fi/

A notable change from previous releases should be invisible to users: the release is built by ick2 itself, instead of my old mostly-manual CI script. This means I can abandon the old script and live in a brave, new world with tea, frozen-bubble, and deep meaningful relationships with good people.

Version 0.53, released 2018-07-18
  • Notification mails now include controller URL, so it's easy to see which ick instance they come from. They also include the exit code (assuming the notification itself doesn't fail), and a clear SUCCESS or FAILURE in the subject.

  • Icktool shows a more humane error message if getting a token fails, instead of a Python stack trace.

  • Icktool will now give a more humane error message if user triggers the build of a project that doesn't exist, instead of a Python stack trace.

  • Icktool now looks for credentials using both the controller URL, and the authentication URL.

  • Icktool can now download artifacts from the artifact store, with the new get-artifact subcommand.

  • The archive: workspace action now takes an optional globs field, which is a list of Unix filename globs specifying what to include in the artifact. Also, the optional name_from field can be used to specify the name of a project parameter that contains the name of the artifact. The default is the artifact_name parameter.

  • A Code of Conduct has been added to the ick project. https://ick.liw.fi/conduct/ has the canonical copy.

Lars Wirzenius' blog http://blog.liw.fi/englishfeed/ englishfeed

Facebook is overly optimistic with respect to Cambridge Analytica data scope

Planet Debian - Mar, 17/07/2018 - 11:20md

Facebook is too optimistic when it comes to the extent of the Cambridge Analytica data.

Sorry for this post on a fairly old topic. I just did not get around to writing it up earlier.

Several media outlets (e.g., Bloomberg) ran the story that Facebook privacy policy director Stephen Satterfield claimed in an EU hearing that “Europeans’ data” may not have been accessed by Cambridge Analytica.

This claim is nonsense. It is almost a lie - except that he used the weasel word “may”.

For fairly trivial reasons, you can be sure that the data of at least some Europeans has been accessed. Largely because it’s pretty much impossible to perfectly separate U.S. and EU users. People move. People use proxies. People use wrong locations. People forget to update their location. Location implies neither residency nor citizenship. People may have multiple nationalities. On Facebook, people may make up all of this, too.

Even if Dr. Aleksandr Kogan did try his best to provide only U.S. users to Cambridge Analytica, there are bound to be some mistakes. Even if he only provided the data of users he could map to U.S. voter records, there is likely someone in there who has both U.S. and EU citizenship. Or who has become an EU citizen since.

Because they shared the data of 87 million people. According to some numbers I found, there are around 70,000 people with U.S. and German citizenship. That is “just” a tiny 0.02% of U.S. citizens. Since Facebook users are younger than average, and in particular kids will often have both citizenships if their parents have different nationalities, we can expect the rate to be higher than that. If you now draw 87 million random samples, the chance of not having at least one of these U.S.-EU-citizens in your sample is effectively 0. This does not even take other EU nationalities into account yet.

Already a random sample of 100,000 U.S. citizens will with very high probability contain at least one EU citizen (in fact, at least one German citizen, because I didn’t include any other numbers but the 70,000 above). In 87 million, you likely even have several accounts created for cats.
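
A rough back-of-the-envelope version of that claim, using only the numbers above (about 70,000 dual citizens out of roughly 350 million U.S. citizens, i.e. p ≈ 0.0002):

P(no U.S.-German dual citizen in a sample of 100,000) ≈ (1 - 0.0002)^100000 ≈ e^(-20) ≈ 2×10^-9

For a sample of 87 million, the exponent becomes roughly -17,400, so the probability of missing every such person is zero for all practical purposes.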

Says math.

To anyone trained in statistics, this should be an obvious variant of the birthday paradox.

So yes, I bet that at least one EU citizen was affected.

Just because the data is too big (and too unreliable) to be able to rule this out.

Apparently, neither the U.S. nor Germany (nor the EU) even has reliable numbers on how many people have multiple nationalities. So do not trust Facebook’s (or Kogan’s) data to be better here…

Erich Schubert https://www.vitavonni.de/blog/ Techblogging
