Planet Debian - https://planet.debian.org/
Updated: 1 day 12 hours ago

Jaldhar Vyas: Sal Mubarak 2077!

Tue, 17/11/2020 - 9:05am

Best wishes to the entire Debian world for a happy, prosperous and safe Gujarati new year, Vikram Samvat 2077 named Paridhawi.

Louis-Philippe Véronneau: A better git diff

Tue, 17/11/2020 - 6:00am

A few days ago I wrote a quick patch and missed a dumb mistake that made the program crash. When reviewing the merge request on Salsa, the problem became immediately apparent; Gitlab's diff is much better than what git diff shows by default in a terminal.

Well, it turns out since version 2.9, git bundles a better pager, diff-highlight. À la Gitlab, it will highlight what changed in the line.

Sadly, even though diff-highlight comes with the git package in Debian, it is not built by default (925288). You will need to:

$ sudo make --directory /usr/share/doc/git/contrib/diff-highlight

You can then add this line to your .gitconfig file:

[core]
    pager = /usr/share/doc/git/contrib/diff-highlight/diff-highlight | less --tabs=4 -RFX

If you use tig, you'll also need to add this line in your tigrc:

set diff-highlight = /usr/share/doc/git/contrib/diff-highlight/diff-highlight
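
If you want to preview the effect before touching any configuration, diff-highlight is an ordinary filter that reads a diff on standard input, so you can pipe to it by hand (a one-off sketch; any diff-producing command works):

git diff HEAD~1 | /usr/share/doc/git/contrib/diff-highlight/diff-highlight | less -RFX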

Dirk Eddelbuettel: RcppArmadillo 0.10.1.2.0

Tue, 17/11/2020 - 3:03am

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 779 other packages on CRAN.

This release ties up a few loose ends from the recent 0.10.1.0.0.

Changes in RcppArmadillo version 0.10.1.2.0 (2020-11-15)
  • Upgraded to Armadillo release 10.1.2 (Orchid Ambush)

  • Remove three unused int constants (#313)

  • Include main armadillo header using quotes instead of brackets

  • Rewrite version number use in old-school mode because gcc 4.8.5

  • Skipping parts of sparse conversion on Windows as win-builder fails

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments, etc. should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel: RcppAnnoy 0.0.17

Tue, 17/11/2020 - 2:48am

A new release 0.0.17 of RcppAnnoy is now on CRAN. RcppAnnoy is the Rcpp-based R integration of the nifty Annoy library by Erik Bernhardsson. Annoy is a small and lightweight C++ template header library for very fast approximate nearest neighbours—originally developed to drive the famous Spotify music discovery algorithm.

This release brings a new upstream version 1.17, released a few weeks ago, which adds multithreaded index building. This changes the API: a new ‘threading policy’ parameter requires code using the main Annoy header to be updated. For this reason we waited a little for the dust to settle on the BioConductor 3.12 release before bringing the changes to BiocNeighbors via this commit and to uwot via this simple PR. Aaron and James updated their packages accordingly, so by the time I uploaded RcppAnnoy it made for very smooth sailing: we had all done our homework with proper conditional builds, and the package had no other issues preventing automated processing at CRAN. Yay. I also added a (somewhat overdue, one may argue) header file RcppAnnoy.h regrouping defines and includes, which should help going forward.

Detailed changes follow below.

Changes in version 0.0.17 (2020-11-15)
  • Upgrade to Annoy 1.17, but default to serial use.

  • Add new header file to regroup includes and defines.

  • Upgrade CI script to use R with bspm on focal.

Courtesy of my CRANberries, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Bits from Debian: New Debian Developers and Maintainers (September and October 2020)

Mon, 16/11/2020 - 8:00pm

The following contributors got their Debian Developer accounts in the last two months:

  • Benda XU (orv)
  • Joseph Nahmias (jello)
  • Marcos Fouces (marcos)
  • Hayashi Kentaro (kenhys)
  • James Valleroy (jvalleroy)
  • Helge Deller (deller)

The following contributors were added as Debian Maintainers in the last two months:

  • Ricardo Ribalda Delgado
  • Pierre Gruet
  • Henry-Nicolas Tourneur
  • Aloïs Micard
  • Jérôme Lebleu
  • Nis Martensen
  • Stephan Lachnit
  • Felix Salfelder
  • Aleksey Kravchenko
  • Étienne Mollier

Congratulations!

Adnan Hodzic: Degiro trading tracker – Simplified tracking of your investments

Mon, 16/11/2020 - 8:21am

TL;DR: Visit degiro-trading-tracker on GitHub. I was always interested in stocks and investing. While I wanted to get into trading for a long time, I could never...

The post Degiro trading tracker – Simplified tracking of your investments appeared first on FoolControl: Phear the penguin.

Jamie McClelland: Being your own Certificate Authority

Sun, 15/11/2020 - 4:11pm

There are many blogs and tutorials with nice shortcuts providing the necessary openssl commands to create and sign x509 certificates.

However, there are precious few instructions for how to easily create your own certificate authority.

You probably never want to do this in a production environment, but in a development environment it will make your life significantly easier.

Create the certificate authority

Create the key and certificate

Pick a directory to store things in. Then, make your certificate authority key and certificate:

openssl genrsa -out cakey.pem 2048
openssl req -x509 -new -nodes -key cakey.pem -sha256 -days 1024 -out cacert.pem

Some tips:

  • You will be prompted to enter some information about your certificate authority. Provide the minimum information - i.e., only overwrite the defaults. So, provide a value for Country, State or Province, and Organization Name, and leave the rest blank.
  • You probably want to leave the password blank if this is a development/testing environment.

Want to review what you created?

openssl x509 -text -noout -in cacert.pem

Prepare your directory

You can create your own /etc/ssl/openssl.cnf file and really customize things. But I find it safer to use your distribution's default file so you can benefit from changes to it every time you upgrade.

If you do take the default file, you may have the dir option coded to demoCA (in Debian at least, maybe it's the upstream default too).

So, to avoid changing any configuration files, let's just use this value. Which means... you'll need to create that directory. The setting is relative - so you can create this directory in the same directory you have your keys.

mkdir demoCA

Lastly, you have to have a file that keeps track of your certificates. If it doesn't exist, you get an error:

touch demoCA/index.txt

That's it! Your certificate authority is ready to go.

Create a key and certificate signing request

First, pick your domain names (aka "common" names). For example, example.org and www.example.org.

Set those values in an environment variable. If you just have one:

export subjectAltName=DNS:example.org

If you have more than one:

export subjectAltName=DNS:example.org,DNS:www.example.org

Next, create a key and a certificate signing request:

openssl req -new -nodes -out new.csr -keyout new.key
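
As an aside, the post doesn't show where the subjectAltName variable gets consumed; presumably the author's openssl.cnf references it. If yours does not, one possible way to pass the extension explicitly (an assumption on my part; it requires OpenSSL 1.1.1 or later) is the -addext flag:

openssl req -new -nodes -out new.csr -keyout new.key -addext "subjectAltName=$subjectAltName"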

Again, you will be prompted for some values (country, state, etc.) - be sure to choose the same values you used with your certificate authority! I honestly don't understand why this is necessary (when I set different values I get an error on the signing request step below). Maybe someone can add a comment to this post explaining why these values have to match?

Also, you must provide a common name for your certificate - you can choose the same name as one of the subjectAltName values you set above (but just one domain).

Want to review what you created?

openssl req -in new.csr -text -noout

Sign it!

At last, the moment we have been waiting for.

openssl ca -keyfile cakey.pem -cert cacert.pem -out new.crt -outdir . -rand_serial -infiles new.csr

Now you have a new.crt and new.key that you can install in your web server, mail server, etc. configuration.

Sanity check it

This command will confirm that the certificate is trusted by your certificate authority.

openssl verify -no-CApath -CAfile cacert.pem new.crt

But wait, there's still a question of trust

You probably want to tell your computer or browser that you want to trust your certificate signing authority.

Command line tools

Most tools on Linux will by default trust all the certificates in /etc/ssl/certs/ca-certificates.crt. (If that file doesn't exist, try installing the ca-certificates package.) If you want to add your certificate to that file:

cp cacert.pem /usr/local/share/ca-certificates/cacert.crt
sudo dpkg-reconfigure ca-certificates

Want to know what's funny? Ok, not really funny. If the certificate name ends with .pem the command above won't work. Seriously.
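
One quick sanity check, assuming the default bundle path: count the certificates in the bundle before and after running dpkg-reconfigure, and make sure the count went up by one:

grep -c 'BEGIN CERTIFICATE' /etc/ssl/certs/ca-certificates.crt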

Once your certificate is installed with your web server you can now test to make sure it's all working with:

gnutls-cli --print-cert $domainName

Want a second opinion?

curl https://$domainName
wget https://$domainName -O-

Both will report errors if the certificate can't be verified by a system certificate.

If you really want to narrow down the cause of the error (maybe reconfiguring ca-certificates didn't work), try:

curl --cacert /path/to/your/cacert.pem --capath /tmp https://$domainName

Those arguments tell curl to use your certificate authority file and not to load any other certificate authority files (well, unless you have some installed in the temp directory).

Web browsers

Firefox and Chrome have their own store of trusted certificates - you'll have to import your cacert.pem file into each browser that you want to trust your key.

Steinar H. Gunderson: Using Buypass card readers in Linux

Sun, 15/11/2020 - 12:15pm

If you want to know the result of your corona test in Norway (or really, any other health information), you'll need to either get an electronic ID with a confidential spec, where your bank holds the secret key, can use it towards other banks with no oversight, and whoever holds that key can take out huge loans and other binding agreements in your name in a matter of minutes (also known as “BankID”)… or you can get a smart card from Buypass, where you hold the private key yourself.

Since most browsers won't talk directly to a smart card, you'll need a small proxy that exposes a REST interface on 127.0.0.1. (It used to be solved with a Java applet, but, yeah. That was 40 Chrome releases ago.) Buypass publishes such proxies only for Windows and Mac, but the protocol was simple enough, so I made my own reimplementation called Linux Dallas Multipass. It's rough, it only really seems to work in Firefox (and only if you spoof your UA to be Windows), and you'll need to generate and install certificates yourself… but yes. You can log in to find out that you're negative.

Vincent Bernat: Zero-Touch Provisioning for Juniper

Sun, 15/11/2020 - 11:20am

Juniper’s official documentation on ZTP explains how to configure the ISC DHCP Server to automatically upgrade and configure a Juniper device on first boot. However, the proposed configuration could be a bit more elegant. This note explains how.

TL;DR

Do not redefine option 43. Instead, specify the vendor option space to use to encode parameters with vendor-option-space.

When booting for the first time, a Juniper device requests its IP address through a DHCP discover message, then requests additional parameters for autoconfiguration through a DHCP request message:

Dynamic Host Configuration Protocol (Request)
    Message type: Boot Request (1)
    Hardware type: Ethernet (0x01)
    Hardware address length: 6
    Hops: 0
    Transaction ID: 0x44e3a7c9
    Seconds elapsed: 0
    Bootp flags: 0x8000, Broadcast flag (Broadcast)
    Client IP address: 0.0.0.0
    Your (client) IP address: 0.0.0.0
    Next server IP address: 0.0.0.0
    Relay agent IP address: 0.0.0.0
    Client MAC address: 02:00:00:00:00:01 (02:00:00:00:00:01)
    Client hardware address padding: 00000000000000000000
    Server host name not given
    Boot file name not given
    Magic cookie: DHCP
    Option: (54) DHCP Server Identifier (10.0.2.2)
    Option: (55) Parameter Request List
        Length: 14
        Parameter Request List Item: (3) Router
        Parameter Request List Item: (51) IP Address Lease Time
        Parameter Request List Item: (1) Subnet Mask
        Parameter Request List Item: (15) Domain Name
        Parameter Request List Item: (6) Domain Name Server
        Parameter Request List Item: (66) TFTP Server Name
        Parameter Request List Item: (67) Bootfile name
        Parameter Request List Item: (120) SIP Servers
        Parameter Request List Item: (44) NetBIOS over TCP/IP Name Server
        Parameter Request List Item: (43) Vendor-Specific Information
        Parameter Request List Item: (150) TFTP Server Address
        Parameter Request List Item: (12) Host Name
        Parameter Request List Item: (7) Log Server
        Parameter Request List Item: (42) Network Time Protocol Servers
    Option: (50) Requested IP Address (10.0.2.15)
    Option: (53) DHCP Message Type (Request)
    Option: (60) Vendor class identifier
        Length: 15
        Vendor class identifier: Juniper-mx10003
    Option: (51) IP Address Lease Time
    Option: (12) Host Name
    Option: (255) End
    Padding: 00

It requests several options, including option 150 (the TFTP server address) and option 43, the Vendor-Specific Information Option (VSIO). The DHCP server can use option 60 to identify which vendor-specific information to send. For Juniper devices, option 43 encodes the image name and the configuration file name; both are fetched from the IP address provided in option 150.

The official documentation on ZTP provides a valid configuration to answer such a request. However, it does not leverage the ability of the ISC DHCP Server to support several vendors and redefines option 43 to be Juniper-specific:

option NEW_OP-encapsulation code 43 = encapsulate NEW_OP;

Instead, it is possible to define an option space for Juniper, using a self-descriptive name, without overriding option 43:

# Juniper vendor option space
option space juniper;
option juniper.image-file-name code 0 = text;
option juniper.config-file-name code 1 = text;
option juniper.image-file-type code 2 = text;
option juniper.transfer-mode code 3 = text;
option juniper.alt-image-file-name code 4 = text;
option juniper.http-port code 5 = text;

Then, when you need to set these suboptions, specify the vendor option space:

class "juniper-mx10003" { match if (option vendor-class-identifier = "Juniper-mx10003") { vendor-option-space juniper; option juniper.transfer-mode "http"; option juniper.image-file-name "/images/junos-vmhost-install-mx-x86-64-19.3R2-S4.5.tgz"; option juniper.config-file-name "/cfg/juniper-mx10003.txt"; }

This configuration returns the following answer:1

Dynamic Host Configuration Protocol (ACK)
    Message type: Boot Reply (2)
    Hardware type: Ethernet (0x01)
    Hardware address length: 6
    Hops: 0
    Transaction ID: 0x44e3a7c9
    Seconds elapsed: 0
    Bootp flags: 0x8000, Broadcast flag (Broadcast)
    Client IP address: 0.0.0.0
    Your (client) IP address: 10.0.2.15
    Next server IP address: 0.0.0.0
    Relay agent IP address: 0.0.0.0
    Client MAC address: 02:00:00:00:00:01 (02:00:00:00:00:01)
    Client hardware address padding: 00000000000000000000
    Server host name not given
    Boot file name not given
    Magic cookie: DHCP
    Option: (53) DHCP Message Type (ACK)
    Option: (54) DHCP Server Identifier (10.0.2.2)
    Option: (51) IP Address Lease Time
    Option: (1) Subnet Mask (255.255.255.0)
    Option: (3) Router
    Option: (6) Domain Name Server
    Option: (43) Vendor-Specific Information
        Length: 89
        Value: 00362f696d616765732f6a756e6f732d766d686f73742d69…
    Option: (150) TFTP Server Address
    Option: (255) End

Using the vendor-option-space directive allows different ZTP implementations to coexist. For example, you can add the option space for PXE:

option space PXE;
option PXE.mtftp-ip code 1 = ip-address;
option PXE.mtftp-cport code 2 = unsigned integer 16;
option PXE.mtftp-sport code 3 = unsigned integer 16;
option PXE.mtftp-tmout code 4 = unsigned integer 8;
option PXE.mtftp-delay code 5 = unsigned integer 8;
option PXE.discovery-control code 6 = unsigned integer 8;
option PXE.discovery-mcast-addr code 7 = ip-address;
option PXE.boot-server code 8 = { unsigned integer 16, unsigned integer 8, ip-address };
option PXE.boot-menu code 9 = { unsigned integer 16, unsigned integer 8, text };
option PXE.menu-prompt code 10 = { unsigned integer 8, text };
option PXE.boot-item code 71 = unsigned integer 32;

class "pxeclients" {
  match if substring (option vendor-class-identifier, 0, 9) = "PXEClient";
  vendor-option-space PXE;
  option PXE.mtftp-ip 10.0.2.2;
  # […]
}
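
Before deploying, the combined configuration can be syntax-checked with the server's test mode (assuming the stock ISC dhcpd and the usual Debian configuration path):

dhcpd -t -cf /etc/dhcp/dhcpd.conf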

On the same topic, do not override option 125 “VIVSO.” See “Zero-Touch Provisioning for Cisco IOS.”

  1. Wireshark knows how to decode option 43 for some vendors, thanks to option 60, but not for Juniper. ↩︎

Russ Allbery: PGP::Sign 1.04

Sun, 15/11/2020 - 1:05am

The refactor of PGP::Sign in the 1.00 release to use IPC::Run instead of hand-rolled process management code broke signing large files, which I discovered when trying to use the new module to sign checkgroups for the Big Eight Usenet hierarchies.

There were two problems: IPC::Run sets the sockets used to talk to the child process to non-blocking, and when you pass a scalar in as the data to send to a child socket, IPC::Run expects to use it as a queue and thus doesn't send EOF to the child process when the input is exhausted.

This release works around both problems by handling non-blocking writes to the child using select and using a socket to write the passphrase to the child process instead of a scalar variable. It also adds a test to ensure that signing long input keeps working.

You can get the latest release from CPAN or from the PGP::Sign distribution page.

Junichi Uekawa: Rewrote my build system in C++.

Sat, 14/11/2020 - 9:43am
Rewrote my build system in C++. I used to write build rules in Node.js, but I figured that if my projects are mostly C++, I should probably write the rules in C++ too. I wanted to make it a bit more like BUILD files, but couldn't really, and it ended up looking more C++ than I wanted. It seems key-value struct initialization (designated initializers) isn't available until C++20.

Martin Michlmayr: beancount2ledger 1.3 released

Fri, 13/11/2020 - 1:12pm

I released version 1.3 of beancount2ledger, the beancount to ledger converter that was moved from bean-report ledger into a standalone tool.

You can get beancount2ledger from GitHub or via pip install.
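
A minimal sketch of an install and conversion, assuming the command-line entry point carries the package's name and writes to standard output (file names are illustrative):

pip install beancount2ledger
beancount2ledger example.beancount > example.ledger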

Here are the changes in 1.3:

  • Add rounding postings only when required (issue #9)
  • Avoid printing too much precision for a currency (issue #21)
  • Avoid creating two or more postings with null amount (issue #23)
  • Add price to cost when needed by ledger (issue #22)
  • Preserve posting order (issue #18)
  • Add config option indent
  • Show metadata with hledger output
  • Support setting auxiliary dates and posting dates from metadata (issue #14)
  • Support setting the code of transactions from metadata
  • Support mapping of account and currency names (issue #24)
  • Improve documentation:
    • Add user guide
    • Document limitations (issue #12)

Dirk Eddelbuettel: tidyCpp 0.0.2: More documentation and features

Thu, 12/11/2020 - 4:03pm

A first update of the still fairly new package tidyCpp is now on CRAN. The package offers a C++ layer on top of the C API for R which aims to make its use a little easier and more consistent.

The vignette has been extended with a new example, a new section, and some general editing. A few new defines have been added, mostly from the Rinternals.h header. We also replaced the Shield class with a simpler yet updated class Protect; the name better represents the core functionality of offering a simpler alternative to the PROTECT and UNPROTECT macro pairing. We also added a short discussion to the vignette of a gotcha one has to be mindful of, and that we fell for ourselves in version 0.0.1, along with a typedef so that code using Shield can still be used.

The NEWS entry follows.

Changes in tidyCpp version 0.0.2 (2020-11-12)
  • Expanded definitions in internals.h to support new example.

  • The vignette has been extended with an example based on package uchardet.

  • Class Shield has been replaced by a new class Protect; a compatibility typedef has been added.

  • The examples and vignette have been clarified with respect to proper ownership of protected objects; a new vignette section was added.

Thanks to my CRANberries, there is also a diffstat report for this release.

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Bits from Debian: "Homeworld" will be the default theme for Debian 11

Thu, 12/11/2020 - 1:30pm

The theme "Homeworld" by Juliette Taka has been selected as default theme for Debian 11 'bullseye'. Juliette says that this theme has been inspired by the Bauhaus movement, an art style born in Germany in the 20th century.

After the Debian Desktop Team made the call for proposing themes, a total of eighteen choices were submitted. The desktop artwork poll was open to the public, and we received 5,613 responses ranking the different choices; Homeworld was ranked the winner among them.

This is the third time that a submission by Juliette has won. Juliette is also the author of the lines theme that was used in Debian 8 and the softWaves theme that was used in Debian 9.

We'd like to thank all the designers that have participated and have submitted their excellent work in the form of wallpapers and artwork for Debian 11.

Congratulations, Juliette, and thank you very much for your contribution to Debian!

Mike Hommey: Announcing git-cinnabar 0.5.6

Thu, 12/11/2020 - 3:40am
Please partake in the git-cinnabar survey.

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.
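
For example, mercurial repositories are addressed through the hg:: prefix (the URL below is hypothetical):

git clone hg::https://example.com/repo
cd repo
git pull
git push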

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.5?
  • Updated git to 2.29.2 for the helper.
  • git cinnabar git2hg and git cinnabar hg2git now have a --batch flag.
  • Fixed a few issues with experimental support for python 3.
  • Fixed compatibility issues with mercurial >= 5.5.
  • Avoid downloading unsupported clonebundles.
  • Provide more resilience to network problems during bundle download.
  • Prebuilt helper for Apple Silicon macOS now available via git cinnabar download.

Vincent Fourmond: Solution for QSoas quiz #1: averaging spectra

Wed, 11/11/2020 - 8:53pm
This post describes the solution to Quiz #1, based on the files found there. The point is to produce both the average and the standard deviation of a series of spectra. Below is how the final averaged spectrum should look. I will present here two different solutions.

Solution 1: using the definition of standard deviation

There is a simple solution using the definition of the standard deviation: $$\sigma_y = \sqrt{\langle y^2 \rangle - \langle y \rangle^2}$$ in which \(\langle y^2 \rangle\) is the average of \(y^2\) (and so on). So the simplest solution is to construct datasets with an additional column that contains \(y^2\), average these columns, and replace the average with the above formula. For that, we first need a companion script that loads a single data file and adds a column with \(y^2\). Let's call this script load-one.cmds:

load ${1}
apply-formula y2=y**2 /extra-columns=1
flag /flags=processed

When this script is run with the name of a spectrum file as argument, it loads it (replacing ${1} by the first argument, the file name), adds a column y2 containing the square of the y column, and flags it with the processed flag. The flag is not absolutely necessary, but it makes it much easier to refer to all the spectra once they are processed. Then, to process all the spectra, one just has to run the following commands:

run-for-each load-one.cmds Spectrum-1.dat Spectrum-2.dat Spectrum-3.dat
average flagged:processed
apply-formula y2=(y2-y**2)**0.5
dataset-options /yerrors=y2

The run-for-each command runs the load-one.cmds script for all the spectra (one could also have used Spectrum-*.dat to avoid spelling out all the file names). Then, average averages the values of the columns over all the datasets. To be clear: it finds all the values that have the same X (or very close X values) and averages them, column by column. The result of this command is therefore a dataset with the average of the original \(y\) data as the y column and the average of the original \(y^2\) data as the y2 column. So now the only thing left to do is to apply the above equation, which is done by the apply-formula line. The last command, dataset-options, is not absolutely necessary, but it signals to QSoas that the standard error of the y column should be found in the y2 column. This is now available as the script method-one.cmds in the git repository.

Solution 2: use QSoas's knowledge of standard deviation

The other method is a little more involved, but it demonstrates a good approach to problem solving with QSoas. The starting point is that, in apply-formula, the value $stats.y_stddev corresponds to the standard deviation of the whole y column... Loading the spectra yields just a series of x,y datasets. We can contract them into a single dataset with one x column and several y columns:

load Spectrum-*.dat /flags=spectra
contract flagged:spectra

After these commands, the current dataset contains data in the form of:

lambda1 a1_1 a1_2 a1_3
lambda2 a2_1 a2_2 a2_3
...

in which the ai_1 come from the first file, ai_2 from the second, and so on. We need to use transpose to transform that dataset into:

0 a1_1 a2_1 ...
1 a1_2 a2_2 ...
2 a1_3 a2_3 ...

In this dataset, the values of the absorbance for the same wavelength in each dataset are now stored in columns. The next step is just to use expand to obtain a series of datasets with the same x column and a single y column (each corresponding to a different wavelength in the original data). The game is now to replace these datasets with something that looks like:

0 a_average
1 a_stddev

For that, one takes advantage of the $stats.y_average and $stats.y_stddev values in apply-formula, together with the i special variable that represents the index of the point:

apply-formula "if i == 0; then y=$stats.y_average; end; if i == 1; then y=$stats.y_stddev; end"
strip-if i>1

Then, all that is left is to apply this to all the datasets created by expand, which can be done using run-for-datasets, and then we reverse the splitting by using contract and transpose! In summary, this looks like this. We need two files. The first, process-one.cmds, contains the following code:

apply-formula "if i == 0; then y=$stats.y_average; end; if i == 1; then y=$stats.y_stddev; end"
strip-if i>1
flag /flags=processed

The main file, method-two.cmds, looks like this:

load Spectrum-*.dat /flags=spectra
contract flagged:spectra
transpose
expand /flags=tmp
run-for-datasets process-one.cmds flagged:tmp
contract flagged:processed
transpose
dataset-options /yerrors=y2

Note that some of the code above can be greatly simplified using new features present in the upcoming 3.0 version, but that is the topic for another post.

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. The current version is 2.2. You can download its source code and compile it yourself, or buy precompiled versions for MacOS and Windows there.

Reproducible Builds: Reproducible Builds in October 2020

Wed, 11/11/2020 - 3:35pm

Welcome to the October 2020 report from the Reproducible Builds project.

In our monthly reports, we outline the major things that we have been up to over the past month. As a brief reminder, the motivation behind the Reproducible Builds effort is to ensure flaws have not been introduced in the binaries we install on our systems. If you are interested in contributing to the project, please visit our main website.

General

On Saturday 10th October, Morten Linderud gave a talk at Arch Conf Online 2020 on The State of Reproducible Builds in Arch. The video should be available later this month, but as a teaser:

The previous year has seen great progress in Arch Linux to get reproducible builds in the hands of the users and developers. In this talk we will explore the current tooling that allows users to reproduce packages, the rebuilder software that has been written to check packages and the current issues in this space.

During the Reproducible Builds summit in Marrakesh in 2019, developers from the GNU Guix, NixOS and Debian distributions were able to produce a bit-for-bit identical GNU Mes binary despite using three different versions of GCC. Since this summit, additional work resulted in a bit-for-bit identical Mes binary using tcc, and last month a fuller update was posted to this effect by the individuals involved. This month, however, David Wheeler updated his extensive page on Fully Countering Trusting Trust through Diverse Double-Compiling, remarking that:

GNU Mes rebuild is definitely an application of [Diverse Double-Compiling]. [..] This is an awesome application of DDC, and I believe it’s the first publicly acknowledged use of DDC on a binary

There was a small, followup discussion on our mailing list.

In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update.

This month, the Reproducible Builds project restarted our IRC meetings, managing to convene twice: the first time on October 12th (summary & logs), and later on the 26th (logs). As mentioned in previous reports, due to the unprecedented events throughout 2020, there will be no in-person summit event this year.

On our mailing list this month, Elías Alejandro posted a request for help with a local configuration.

Debian-related work

In August, Lucas Nussbaum performed an archive-wide rebuild of packages to test enabling the reproducible=+fixfilepath Debian build flag by default. Enabling this fixfilepath feature will likely fix reproducibility issues in an estimated 500-700 packages. However, this month Vagrant Cascadian posted to the debian-devel mailing list:

It would be great to see the reproducible=+fixfilepath feature enabled by default in dpkg-buildflags, and we would like to proceed forward with this soon unless we hear any major concerns or other outstanding issues. […] We would like to move forward with this change soon, so please raise any concerns or issues not covered already.
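
To preview what the feature does on a single build, one can enable it through the maintainer options and inspect the resulting flags (a sketch; the exact flags emitted depend on the dpkg version):

export DEB_BUILD_MAINT_OPTIONS="reproducible=+fixfilepath"
dpkg-buildflags --get CFLAGS
# with the feature enabled, CFLAGS should gain -ffile-prefix-map=<build path>=.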

Debian Developer Stuart Prescott has been improving python-debian, a Python library that is used to parse Debian-specific files such as changelogs, .dscs, etc. In particular, Stuart is working on adding support for .buildinfo files used for recording reproducibility-related build metadata:

This can mostly be a very thin layer around the existing Deb822 types, using the existing Changes code for the file listings, the existing PkgRelations code for the package listing and gpg_* functions for signature handling.

A total of 159 Debian packages were categorised, 69 had their categorisation updated, and 33 had their classification removed this month, adding to our knowledge about identified issues. As part of this, Chris Lamb identified and classified two new issues: build_path_captured_in_emacs_el_file and rollup_embeds_build_path.

Software development

This month, we tried to fix a large number of currently-unreproducible packages.

Bernhard M. Wiedemann also reported three issues against bison, ibus and postgresql12.

Tools

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it provides human-readable diffs of all kinds too. This month, Chris Lamb uploaded version 161 to Debian (later backported by Mattia Rizzolo) and made the following changes:

  • Move test_ocaml to the assert_diff helper. []
  • Update tests to support OCaml version 4.11.1. Thanks to Sebastian Ramacher for the report. (#972518)
  • Bump minimum version of the Black source code formatter to 20.8b1. (#972518)

In addition, Jean-Romain Garnier temporarily updated the dependency on radare2 to ensure our test pipelines continue to work [], and for the GNU Guix distribution Vagrant Cascadian updated diffoscope to version 161 [].

In related development, trydiffoscope is the web-based version of diffoscope. This month, Chris Lamb made the following changes:

  • Mark a --help-only test as being a ‘superficial’ test. (#971506)
  • Add a real, albeit flaky, test that interacts with the try.diffoscope.org service. []
  • Bump debhelper compatibility level to 13 [] and bump Standards-Version to 4.5.0 [].

Lastly, disorderfs version 0.5.10-2 was uploaded to Debian unstable by Holger Levsen, which enabled security hardening via DEB_BUILD_MAINT_OPTIONS [] and dropped debian/disorderfs.lintian-overrides [].
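
For reference, this kind of hardening is usually enabled with a single line in debian/rules (a generic sketch of the common idiom, not necessarily the exact disorderfs change):

export DEB_BUILD_MAINT_OPTIONS = hardening=+all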

Website and documentation

This month, a number of updates to the main Reproducible Builds website and related documentation were made by Chris Lamb:

  • Add a citation link to the academic article regarding dettrace [], and added yet another supply-chain security attack publication [].
  • Reformatted the Jekyll Liquid templating and CSS formatting to be consistent [] as well as expanded a number of tab characters [].
  • Used relative_url to fix missing translation icon on various pages. []
  • Published two announcement blog posts regarding the restarting of our IRC meetings. [][]
  • Added an explicit note regarding the lack of an in-person summit in 2020 to our events page. []
Testing framework

The Reproducible Builds project operates a Jenkins-based testing framework that powers tests.reproducible-builds.org. This month, Holger Levsen made the following changes:

  • Debian-related changes:

    • Refactor and improve the Debian dashboard. [][][]
    • Track bugs which are usertagged as ‘filesystem’, ‘fixfilepath’, etc. [][][]
    • Make a number of changes to package index pages. [][][]
  • System health checks:

    • Relax disk space warning levels. []
    • Specifically detect build failures reported by dpkg-buildpackage. []
    • Fix a regular expression to detect outdated package sets. []
    • Detect Lintian issues in diffoscope. []

  • Misc:

    • Make a number of updates to reflect that our sponsor Profitbricks has renamed itself to IONOS. [][][][]
    • Run a F-Droid maintenance routine twice a month to utilise its cleanup features. []
    • Fix the target name in OpenWrt builds to ath79 from ath97. []
    • Add a missing Postfix configuration for a node. []
    • Temporarily disable Arch Linux builds until a core node is back. []
    • Make a number of changes to our “thanks” page. [][][]

Build node maintenance was performed by both Holger Levsen [][] and Vagrant Cascadian [][][]. Vagrant Cascadian also updated the page listing the variations made when testing to reflect changes in build paths [], and Hans-Christoph Steiner made a number of changes for F-Droid, the free software app repository for Android devices, including:

  • Do not fail reproducibility jobs when their cleanup tasks fail. []
  • Skip libvirt-related sudo command if we are not actually running libvirt. []
  • Use direct URLs in order to eliminate a useless HTTP redirect. []

If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can also get in touch with us via our mailing list or IRC channel.

Thorsten Alteholz: My Debian Activities in October 2020

Tue, 10/11/2020 - 3:48pm

FTP master

This month I accepted 208 packages and rejected 29. The overall number of packages that got accepted was 563, so yeah, I was not alone this month :-).

Anyway, this month marked another milestone in my NEW package handling. My overall number of ACCEPTed packages exceeded the magic number of 20000 packages. This is almost 30% of all packages accepted in Debian. I am a bit proud of this achievement.

Debian LTS

This was the seventy-sixth month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 20.75h. During that time I did LTS uploads of:

  • [DLA 2415-1] freetype security update for one CVE
  • [DLA 2419-1] dompurify.js security update for two CVEs
  • [DLA 2418-1] libsndfile security update for eight CVEs
  • [DLA 2421-1] cimg security update for eight CVEs

I also started to work on golang-1.7 and golang-1.8.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the twenty-eighth ELTS month.

During my allocated time I uploaded:

  • ELA-289-2 for python3.4
  • ELA-304-1 for freetype
  • ELA-305-1 for libsndfile

The first upload of python3.4 last month did not build on armel, so I had to reupload an improved package this month. For amd64 and i386 the ELTS packages are built natively, whereas the packages on armel are cross-built. There is some magic in python's debian/rules to detect which mode the package is built in. This is important, as some tests of the testsuite do not really work in cross-build mode. Unfortunately I had to learn this the hard way …
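
For illustration, such a detection often boils down to comparing the build and host types (a hypothetical sketch, not python3.4's actual debian/rules logic):

if [ "$(dpkg-architecture -qDEB_BUILD_GNU_TYPE)" != "$(dpkg-architecture -qDEB_HOST_GNU_TYPE)" ]; then
    echo "cross build detected, skipping testsuite" >&2
else
    make test
fi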

The upload of libsndfile now aligns the number of fixed CVEs in all releases.

Last but not least I did some days of frontdesk duties.

Other stuff

Apart from my NEW-handling and LTS/ELTS stuff, I didn't have much fun with Debian packages this month. Given the approaching freeze, I hope this will change again in November.

Jonathan Dowland: Borg, confidence in backups, GtkPod and software preservation

Tue, 10/11/2020 - 12:01pm

Over the summer I decided to migrate my backups from rdiff-backup to borg, which offers some significant advantages, in particular de-duplication, but comes at a cost of complexity, and a corresponding sense of unease about how sound my backup strategy might be. I've now hit the Point Of No Return: my second external backup drive is overdue being synced with my NAS, which will delete the last copy of the older rdiff-backup backups.

Whilst I hesitate over this last action to commit to borg, something else happened. My wife wanted to put a copy of her iTunes music library on her new phone, and I couldn't find it: not only could I not find it on any of our computers, I also couldn't find a copy on the NAS, or in backups, or even in old DVD-Rs. This has further knocked my confidence in our family data management, and makes me even more nervous to commit to borg. I'm now wondering about stashing the contents of the second external backup disk on some cloud service as a fail-safe.

There was one known-good copy of Sarah's music: on her ancient iPod Nano. Apple have gone to varying lengths to prevent you from copying music from an iPod. When music is copied to an iPod, the files are stripped of all their metadata (artist, title, album, etc.) and renamed to something non-identifying (e.g. F01/MNRL.m4a), and the metadata (and its correlation to the obscure file name) is saved in separate database files. The partition of the flash drive containing all this is also marked as "hidden" to prevent it appearing on macOS and Windows systems. We are lucky that the iPod is so old, because Apple went even further in more recent models, adding a layer of encryption.

To get the music off the iPod, one has to undo all of these steps.

Luckily, other fine folks have worked out how to reverse all these steps and implemented it in software such as libgpod and its frontend, GtkPod, which is still currently available as a Debian package. It mostly worked, and I got back 95% of the tracks. (It would have been nice if GtkPod had reported the tracks it hadn't recovered; it was aware they existed, based on the errors it did print. But you can't have everything.)

GtkPod is a quirky, erratic piece of software, that is only useful for old Apple equipment that is long out of production, prior to the introduction of the encryption. The upstream homepage is dead, and I suspect it is unmaintained. The Debian package is orphaned. It's been removed from testing, because it won't build with GCC 10. On the other hand, my experience shows that it worked, and was useful for a real problem that someone had today.

I'm in two minds about GtkPod's fate. On the one hand, I think Debian has far too many packages, with a corresponding burden of maintenance responsibility (for the whole project, not just the individual package maintainers), and there's a quality problem: once upon a time, if software had been packaged in a distribution like Debian, that was a mark of quality, a vote of confidence, and you could have some hope that the software would work and integrate well with the rest of the system. That is no longer true, and hasn't been in my experience for many years. If we were more discerning about what software we included in the distribution, and what we kept, perhaps we could be a leaner distribution, faster to adapt to the changing needs in the world, and of a higher quality.

On the other hand, this story about GtkPod is just one of many similar stories. Real problems have been solved in open source software, and computing historians, vintage computer enthusiasts, researchers etc. can still benefit from that software long into the future. Throwing out all this stuff in the name of "progress", could be misguided. I'm especially sad when I see the glee which people have expressed when ditching libraries like Qt4 from the archive. Some software will not be ported on to Qt5 (or Gtk3, Qt6, Gtk4, Qt7, etc., in perpetuity). Such software might be all of: unmaintained, "finished", and useful for some purpose (however niche), all at the same time.

Jonathan Dowland: Red Hat at the Turing Institute

Tue, 10/11/2020 - 10:54am

In Summer 2019, Red Hat were invited to the Turing Institute to provide a workshop on issues around building and sustaining an Open Source community. I was part of a group of about six people who visited the Turing and delivered the workshop. It seemed to have been well received by the audience.

The Turing Institute is based within the British Library. For many years I have enjoyed visiting the British Library if I was visiting or passing through London for some reason or other: it's such a lovely serene space in a busy, hectic part of London. On one occasion they had Jack Kerouac's manuscript for "On The Road" on display in one of the public gallery spaces: it's a continuous 120-foot long piece of paper that Kerouac assembled to prevent the interruption of changing sheets of paper in his typewriter from disturbing his flow whilst writing.

The Institute itself is a really pleasant-looking working environment. I got a quick tour of it back in February 2019 when visiting a friend who worked there, but last year's visit was my first prolonged experience of working there. (I also snuck in this February, when passing through London, to visit my supervisor who is a Turing Fellow)

I presented a section of a presentation entitled "How to build a successful Open Source community". My section attempted to focus on the "how". We've put out all the presentations under a Creative Commons license, and we've published them on the Red Hat Research website: https://research.redhat.com/blog/2020/08/12/open-source-at-the-turing/

The workshop participants were drawn from PhD students, research associates, research software engineers and Turing Institute fellows. We had some really great feedback from them which we've fed back into revisions of the workshop material including the presentations.

I'm hoping to stay involved in further collaborations between the Turing and Red Hat. I'm pleased to say that we participated in a recent Tools, practices and systems seminar (although I was not involved).
