
Protecting Software Updates

Planet Debian - Thu, 28/06/2018 - 5:57pm

In my work at the ACLU, we fight for civil rights and civil liberties. This includes the ability to communicate privately, free from surveillance or censorship, and to control your own information. These are principles that I think most free software developers would agree with. In that vein, we just released a guide to securing software update channels in collaboration with students from NYU Law School.

The guide focuses specifically on what people and organizations that distribute software can do to ensure that their software update processes and mechanisms are actually things that their users can reliably trust. The goal is to make these channels trustworthy, even in the face of attempts by government agencies to force software vendors to ship malware to their users.

Why software updates specifically? Every well-engineered system on today's Internet will have a software update mechanism, since there are inevitably bugs that need fixing, or new features added to improve the system for the users. But update channels also represent a risk: they are an unclosable hole that enables installation of arbitrary software, often at the deepest, most-privileged level of the machine. This makes them a tempting target for anyone who wants to force the user to run malware, whether that's a criminal organization, a corporate or political rival, or a government surveillance agency.

I'm pleased to say that Debian has already implemented many of the technical recommendations we describe, including leading the way on reproducible builds. But as individual developers we might also be targeted, as lamby points out, and it's worth thinking about how you'd defend your users from such a situation.

As an organization, it would be great to see Debian continue to expand its protections for its users by holding ourselves even more accountable in our software update mechanisms than we already do. In particular, I'd love to see work on binary transparency, similar to what Mozilla has been doing, but that ensures that the archive signing keys (which our users trust) can't be abused/misused/compromised without public exposure, and that allows for easy monitoring and investigation of what binaries we are actually publishing.

In addition to technical measures, if you think you might ever get a government request to compromise your users, please make sure you are in touch with a lawyer who has your back, who knows how to challenge requests in court, and who understands why software update channels should not be used for deliberately shipping malware. If you're facing such a situation, and you're in the USA and don't yet have a lawyer yourself, you can reach out to the lawyers at my workplace, the ACLU's Speech, Privacy, and Technology Project, for help.

Protecting software update channels is the right thing for our users, and for free software -- Debian's priorities. So please take a look at the guidance, think about how it might affect you or the people you work with, and start a conversation about what you can do to defend these systems that everyone is obliged to trust in today's communications.

Daniel Kahn Gillmor (dkg) https://dkg.fifthhorseman.net/blog/ dkg's blog

Debian Perl Sprint 2018

Planet Debian - Wed, 27/06/2018 - 6:40pm

Three members of the Debian Perl team met in Hamburg between May 16 and May 20 2018 as part of the Mini-DebConf Hamburg to continue Perl development work for Buster and to work on QA tasks across our 3500+ packages.

The participants had a good time and met other Debian friends. The sprint was productive:

  • 21 bugs were filed or worked on, many uploads were accepted.
  • The transition to Perl 5.28 was prepared, and versioned provides were again worked on.
  • Several cleanup tasks were performed, especially around the move from Alioth to Salsa in documentation, website, and wiki.
  • For src:perl, autopkgtests were enabled, and work on Versioned Provides has been resumed.

The full report was posted to the relevant Debian mailing lists.

The participants would like to thank the Mini-DebConf Hamburg organizers for providing the framework for our sprint, and all donors to the Debian project who helped to cover a large part of our expenses.

Dominic Hargreaves https://bits.debian.org/ Bits from Debian

debian cryptsetup sprint report

Planet Debian - Wed, 27/06/2018 - 3:40pm
Cryptsetup sprint report

The Cryptsetup team – consisting of Guilhem and Jonas – met on June 15 to 17 in order to work on the Debian cryptsetup packages. We ended up working three days (and nights) on the packages, refactored the whole initramfs integration, the SysVinit init scripts and the package build process, and discussed numerous potential improvements as well as new features. The whole sprint was great fun, and we greatly enjoyed sitting next to each other, able to discuss design questions and implementation details in person instead of through clunky internet communication. Besides that, we had very nice and interesting chats, contacted other Debian folks from the Frankfurt area, and met with jfs on Friday evening.

Splitting cryptsetup into cryptsetup-run and cryptsetup-initramfs

First we split the cryptsetup initramfs integration into a separate package, cryptsetup-initramfs. The package containing the other Debian-specific features like SysVinit scripts, keyscripts, etc. is now called cryptsetup-run, and cryptsetup itself is a mere metapackage depending on both split-off packages. So from now on, people can install cryptsetup-run if they don't need the cryptsetup initramfs integration. Once Buster is released we intend to rename cryptsetup-run to cryptsetup, which then will no longer have a strict dependency on cryptsetup-initramfs. This transition over two releases is necessary to avoid unexpected breakage on (dist-)upgrades. Meanwhile cryptsetup-initramfs ships a hook that, upon generation of a new initramfs image, detects which devices need to be unlocked early in the boot process and, in case it doesn't find any, suggests that the user remove the package.

The package split allows us to define more fine-grained dependencies: since there are valid use cases for wanting the cryptsetup binaries and scripts but not the initramfs integration (in particular, on systems without an encrypted root device), cryptsetup ≤2:2.0.2-1 merely recommended initramfs-tools and busybox, while cryptsetup-initramfs now has hard dependencies on these packages.

We also updated the packages to the latest upstream release and uploaded 2:2.0.3-1 on Friday shortly before 15:00 UTC. Due to the cryptsetup → cryptsetup-{run,initramfs} package split we hit the NEW queue, and it was manually approved by an ftpmaster… a mere 2h later. Kudos to them! That allowed us to continue with subsequent uploads during the following days, which was beyond our expectations for this sprint :-)

Extensive refactoring work

Afterwards we started working on and merging some heavy refactoring commits that touched almost all parts of the packages. First was a refactoring of the whole cryptsetup initramfs implementation that dramatically downsized both the cryptroot hook and script (to less than half their former size). The logic to detect crypto disks was changed from parsing /etc/fstab to parsing /proc/mounts, and the sysfs(5) block hierarchy is now used to detect dm-crypt device dependencies. A lot of code duplication between the initramfs script and the SysVinit init script was removed by moving common functions into a shared shell include file that is sourced by both. To complete the package refactoring, we also overhauled the build process by migrating it to the latest Debhelper 11 style. debian/rules, too, was downsized to less than half its former size, and as an extra benefit we now run the upstream build-time test suite during the package build.
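The sysfs-based dependency detection can be sketched in a few lines of Python (an illustration only, not the actual hook code, which is shell; the `sysfs` argument is parameterized here so the walk can be pointed at a test tree instead of /sys):

```python
import os

def block_dependencies(dev, sysfs="/sys/class/block"):
    """Recursively collect the block devices a device is stacked on,
    by walking the 'slaves' directories of the sysfs block hierarchy."""
    deps = []
    slaves_dir = os.path.join(sysfs, dev, "slaves")
    if os.path.isdir(slaves_dir):
        for slave in sorted(os.listdir(slaves_dir)):
            deps.append(slave)
            deps.extend(block_dependencies(slave, sysfs))
    return deps
```

For a dm-crypt device on LVM2 on MD, this walks something like dm-0 → md0 → sda1/sdb1, which is exactly the kind of stack the refactored hook has to resolve.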

Some git statistics speak more than a thousand words:

$ git --no-pager diff --ignore-space-change --shortstat debian/2%2.0.2-1..debian/2%2.0.3-2 -- ./debian/
 92 files changed, 2247 insertions(+), 3180 deletions(-)
$ find ./debian -type f \! -path ./debian/changelog -print0 | xargs -r0 cat | wc -l
7342
$ find ./debian -type f \! -path ./debian/changelog -printf x | wc -c
106

On CVE-2016-4484

Since 2:1.7.3-2, our initramfs boot script went to sleep for a full minute when the number of failed unlocking attempts exceeded the configured value (the tries crypttab(5) option, which defaults to 3). This was added in order to defeat local brute force attacks and mitigate one aspect of CVE-2016-4484; back then Jonas wrote a blog post to cover that story. Starting with 2:2.0.3-2 we changed this behavior, and the script now sleeps for one second after each unsuccessful unlocking attempt. The new value should provide a better user experience while still offering protection against local brute force attacks on very fast password hashing functions. The other aspect mentioned in the security advisory — namely the fact that the initramfs boot process drops to a root (rescue/debug) shell after the user fails to unlock the root device too many times — was not addressed at the time, and still isn't. initramfs-tools has a boot parameter panic=<sec> to disable the debug shell, and while setting it is beyond the scope of cryptsetup, we're planning to ask the initramfs-tools maintainers to change the default. (Of course setting panic=<sec> alone doesn't gain much, and one would need to lock down the full boot chain, including BIOS and boot loader.)
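The throttling change amounts to sleeping briefly after every failed attempt instead of one long sleep at the end. A minimal sketch (not the actual initramfs shell code; the `sleep` callback is injectable so the behavior can be tested without waiting):

```python
import time

def unlock_with_retries(try_unlock, tries=3, delay=1.0, sleep=time.sleep):
    """Attempt to unlock a device up to `tries` times, pausing `delay`
    seconds after every failed attempt to slow down local brute force
    (the 2:2.0.3-2 behavior, replacing the old one-minute sleep)."""
    for _ in range(tries):
        if try_unlock():
            return True
        sleep(delay)  # per-attempt pause, applied after each failure
    return False
```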

New features (work started)

Apart from the refactoring work we started/continued work on several new features:

  • We started to integrate luksSuspend support into system suspend. The idea is to luksSuspend all dm-crypt devices before suspending the machine, in order to protect the storage while in suspend mode. In theory, this seemed as simple as creating a minimal chroot in ramfs with the tools required to unlock (luksResume) the disks after the machine resumes, running luksSuspend from that chroot, putting the machine into suspend mode, and running luksResume after it resumes. Unfortunately it turned out to be way more complicated due to unpredictable race conditions between luksSuspend and machine suspend. So we ended up spending quite some time on debugging (and understanding) the issue. In the end it seems like the final sync() before machine suspend ( https://lwn.net/Articles/582648/ ) causes races in some cases, as the dm-crypt device being synced to is already luksSuspended. We sent a request for help to the dm-crypt mailing list, but unfortunately haven't received a helpful response so far.
  • In order to get internationalization support for the messages and password prompts in the initramfs scripts, we patched gettext and locale support into initramfs-tools.
  • We started some preliminary work on adding beep support to the cryptsetup initramfs and SysVinit scripts for better accessibility.
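The luksSuspend sequence described in the first bullet can be sketched as a strict ordering of steps. This is a schematic only, with an injectable `run` callback standing in for command execution; the real implementation needs the ramfs chroot and runs into the sync-related races discussed above:

```python
def suspend_with_luks(devices, run):
    """Schematic ordering: freeze every dm-crypt device, suspend the
    machine, then luksResume each device after wakeup."""
    for dev in devices:
        run(["cryptsetup", "luksSuspend", dev])
    run(["machine-suspend"])  # placeholder for the actual suspend step
    for dev in devices:
        run(["cryptsetup", "luksResume", dev])  # prompts for passphrase
```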

The above features are not available in the current Debian package yet, but we hope they will be included in a future release.

Bugs and Documentation

We also squashed quite a few longstanding bugs and improved the crypttab(5) documentation. In total, we squashed 18 bugs during the sprint, the oldest one dating from June 2013.

On the need for better QA

In addition to the many crypttab(5) options, we also support a huge variety of block device stacks, such as LUKS-LVM2-MD combined in all the ways one can possibly imagine. And that's a Debian addition, hence something we, the cryptsetup package maintainers, have to develop and maintain ourselves. The many possibilities imply corner cases (it's no surprise that complex or unusual setups can break in subtle ways), which motivated us to completely refactor the Debian-specific code so it becomes easier to maintain.

While our final upload squashed 18 bugs, it also introduced new ones, in particular two rather serious regressions that slipped through our tests. We have thorough tests for the most usual setups, as well as for some complex stacks we hand-crafted in order to detect corner cases, but this approach doesn't scale to cover the full spectrum of user setups: even with minimal sid installations, the disk images would just take far too much space! Ideally we would have an automated test suite, with each test deploying a new transient sid VM with a particular setup. As the current and past regressions show, that's a behind-the-scenes area we should work on. (In fact, that's an effort we already started, but didn't touch during the sprint due to lack of time.)

More to come

There are some more things on our list that we didn't find time to work on. Apart from the unfinished new features mentioned above, these are mainly the LUKS nuke feature that Kali Linux ships and the lack of keyscript support for crypttab(5) in systemd.

Conclusion

In our eyes, the sprint was both a great success and great fun. We definitely want to repeat it sometime soon in order to keep working on the open tasks and further improve the Debian cryptsetup packages. There's still plenty of work to be done. We thank the Debian project and its generous donors for funding Guilhem's travel expenses.

Guilhem and Jonas, June 25th 2018

mejo roaming https://blog.freesources.org// mejo roaming

Montreal's Debian & Stuff June Edition

Planet Debian - Wed, 27/06/2018 - 6:00am

Hello world!

This is me inviting you to the next Montreal Debian & Stuff. This one will take place at Koumbit's offices in Montreal on June 30th from 10:00 to 17:00 EST.

The idea behind 'Debian & Stuff' is to have informal gatherings of the local Debian community to work on Debian-related stuff - or not. Everyone is welcome to drop by and chat with us, hack on a nice project or just hang out!

We've been trying to have monthly meetings of the Debian community in Montreal since April, so this will be the third event in a row.

Chances are we'll take a break in July because of DebConf, but I hope this will become a regular thing!

Louis-Philippe Véronneau https://veronneau.org/ Louis-Philippe Véronneau

Add-on to control the projector from within Kodi

Planet Debian - Tue, 26/06/2018 - 11:55pm

My movie playing setup involves Kodi, OpenELEC (probably soon to be replaced with LibreELEC) and an InFocus IN76 video projector. My projector can be controlled via both an infrared remote control and an RS-232 serial line. The vendor of my projector, InFocus, was sensible enough to document the serial protocol in its user manual, so it is easily available, and I used it some years ago to write a small script to control the projector. For a while now, I have longed for a setup where the projector is controlled by Kodi, for example in such a way that when the screen saver comes on, the projector is turned off, and when the screen saver exits, the projector is turned on again.
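As a hedged illustration of what driving such a projector over the serial line looks like, here is a small Python sketch. The command byte sequences below are placeholders, not the real IN76 protocol codes (those are in the InFocus user manual), and the `send` callback stands in for a pyserial write:

```python
# Hypothetical command table: the real byte sequences come from the
# InFocus IN76 user manual and are not reproduced here.
COMMANDS = {
    "power_on": b"(PWR1)\r",   # placeholder code
    "power_off": b"(PWR0)\r",  # placeholder code
}

def projector_command(name, send):
    """Look up a named command and hand its byte sequence to `send`;
    in real use, `send` would be a pyserial port's write method."""
    if name not in COMMANDS:
        raise ValueError("unknown command: %s" % name)
    payload = COMMANDS[name]
    send(payload)
    return payload
```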

A few days ago, with very good help from parts of my family, I managed to find a Kodi add-on for controlling an Epson projector, and got in touch with its author to see if we could join forces and make an add-on with support for several projectors. To my pleasure, he was open to the idea, and we set out to add InFocus support to his add-on and make it suitable for the official Kodi add-on repository.

The add-on is now working (for me, at least), with a few minor adjustments. The most important change I made relative to the master branch in the GitHub repository is embedding the pyserial module in the add-on. The long term solution is to make a "script" type pyserial module for Kodi that can be pulled in as a dependency. But until that is in place, I embed it.

The add-on can be configured to turn the projector on when Kodi starts and off when Kodi stops, as well as to turn the projector off when the screensaver starts and on when the screensaver stops. It can also be told to set the projector source when turning on the projector.

If this sounds interesting to you, check out the project's GitHub repository. Perhaps you can send patches to support your projector too? As soon as we find time to wrap up the latest changes, it should be available for easy installation from any Kodi instance.

For future improvements, I would like to add projector model detection and the ability to adjust the brightness level of the projector from within Kodi. We also need to figure out how to handle the projector's cooling period: my projector refuses to turn on for 60 seconds after being turned off, which the add-on does not handle well at the moment.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Petter Reinholdtsen http://people.skolelinux.org/pere/blog/ Petter Reinholdtsen - Entries tagged english

Historical inventory of collaborative editors

Planet Debian - Tue, 26/06/2018 - 8:19pm

A quick inventory of major collaborative editor efforts, in chronological order.

As with any such list, it must start with an honorable mention of the mother of all demos, during which Doug Engelbart presented what is basically an exhaustive preview of all software written since 1968. This includes not only a collaborative editor, but also graphics, programming, and math editors.

Everything else after that demo is just a slower implementation to compensate for the acceleration of hardware.

Software gets slower faster than hardware gets faster. - Wirth's law

So without further ado, here is the list of notable collaborative editors that I could find. By "notable" I mean that they introduce a notable feature or implementation detail.

  • SubEthaEdit (2003-2015?; Mac-only): first collaborative, real-time, multi-cursor editor I could find. A reverse-engineering attempt in Emacs failed to produce anything.
  • DocSynch (2004-2007; platform unknown): built on top of IRC!
  • Gobby (2005-now; C, multi-platform): first open, solid and reliable implementation, and still around! The protocol ("libinfinoted") is notoriously hard to port to other editors (e.g. Rudel failed to implement it in Emacs). The 0.7 release in January 2017 adds possible Python bindings that might improve this. Interesting plugins: autosave to disk.
  • Ethercalc (2005-now; Web, JavaScript): first spreadsheet, along with Google Docs.
  • MoonEdit (2005-2008?; platform unknown): original website died. Other users' cursors were visible, with emulated keystroke noises. Included a calculator and a music sequencer!
  • Synchroedit (2006-2007; platform unknown): first web app.
  • Inkscape (2007-2011; C++): first graphics editor with collaborative features, backed by the "whiteboard" plugin built on top of Jabber, now defunct.
  • Abiword (2008-now; C++): first word processor.
  • Etherpad (2008-now; Web): first solid web app. Originally developed as a heavy Java app in 2008, acquired and open-sourced by Google in 2009, then rewritten in Node.js in 2011. Widely used.
  • Wave (2009-2010; Web, Java): failed attempt at a grand protocol unification.
  • CRDT (2011; specification): standard for reliably replicating a document's data structure among different computers.
  • Operational transform (2013; specification): similar to CRDT, yet, well, different.
  • Floobits (2013-now; platform unknown): commercial, but open-source plugins for different editors.
  • LibreOffice Online (2015-now; Web): free Google Docs equivalent, now integrated in Nextcloud.
  • HackMD (2015-now; platform unknown): commercial but open source. Inspired by Hackpad, which was bought up by Dropbox.
  • Cryptpad (2016-now; web?): spin-off of XWiki. Encrypted, "zero-knowledge" on the server.
  • Prosemirror (2016-now; Web, Node.js): "Tries to bridge the gap between Markdown text editing and classical WYSIWYG editors." Not really an editor, but something that can be used to build one.
  • Quill (2013-now; Web, Node.js): rich text editor, also JavaScript. Not sure it is really collaborative.
  • Teletype (2017-now; WebRTC, Node.js): for GitHub's Atom editor; introduces a "portal" idea that makes guests follow what the host is doing across multiple documents. P2P with WebRTC after a visit to an introduction server; CRDT-based.
  • Tandem (2018-now; Node.js?): plugins for Atom, Vim, Neovim, Sublime... Uses a relay to set up p2p connections; CRDT-based. Dubious license issues were resolved thanks to the involvement of Debian developers, which makes it a promising standard to follow in the future.

Other lists

Antoine Beaupré https://anarc.at/tag/debian-planet/ pages tagged debian-planet

two security holes and a new library

Planet Debian - Tue, 26/06/2018 - 8:18pm

For the past week and a half, I've been working on embargoed security holes. The embargo is over, and git-annex 6.20180626 has been released, fixing those holes. I'm also announcing a new Haskell library, http-client-restricted, which could be used to avoid similar problems in other programs.

Working in secret under a security embargo is mostly new to me, and I mostly don't like it, but it seems to have been the right call in this case. The first security hole I found in git-annex turned out to have a wider impact, affecting code in git-annex plugins (aka external special remotes) that uses HTTP. And quite likely beyond git-annex to unrelated programs, but I'll let their developers talk about that. So quite a lot of people were involved in this behind the scenes.

See also: The RESTLESS Vulnerability: Non-Browser Based Cross-Domain HTTP Request Attacks

And then there was the second security hole in git-annex, which took several days to notice, working in collaboration with Daniel Dent. That one's potentially very nasty, allowing decryption of arbitrary gpg-encrypted files, although exploiting it would be hard. It logically followed from the first security hole, so it's good that the first one was under embargo long enough for us to think it all through.

These security holes involved HTTP servers doing things to exploit the clients that connect to them. For example, when a client asks an HTTP server for the content of a file stored on it, the server can redirect to a file:// URL on the client's disk, or to http://localhost/ or a private web server on the client's internal network. Once the client is tricked into downloading such private data, the confusion can result in private data being exposed. See the advisory for details.

Fixing this kind of security hole is not necessarily easy, because we use HTTP libraries, often via an API library, which may not give much control over following redirects. DNS rebinding attacks can be used to defeat security checks, if the HTTP library doesn't expose the IP address it's connecting to.

I faced this problem in git-annex's use of the Haskell http-client library. So I had to write a new library, http-client-restricted. Thanks to the good design of the http-client library, particularly its Manager abstraction, my library extends it rather than needing to replace it, and can be used with any API library built on top of http-client.

I get the impression that a lot of other languages' HTTP libraries need similar things developed. Much like web browsers need to enforce same-origin policies, HTTP clients need to be able to reject certain redirects according to the security needs of the program using them.
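The class of check such a library has to make can be sketched in Python. This is an illustration of the idea only, not the http-client-restricted API, and it only vets IP-literal hosts; real code must also pin DNS results and connect to the vetted address, or rebinding defeats the check:

```python
import ipaddress
from urllib.parse import urlsplit

def redirect_allowed(url):
    """Reject redirect targets that could reach private resources:
    non-HTTP schemes like file://, and loopback/private/link-local IPs."""
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https"):
        return False
    host = parts.hostname
    if host is None:
        return False
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # A hostname: a real implementation must resolve it, vet every
        # returned address, and connect to that same address to defeat
        # DNS rebinding. Here we optimistically allow it.
        return True
    return not (ip.is_private or ip.is_loopback or ip.is_link_local)
```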

I kept a private journal while working on these security holes, and am publishing it now.

Joey Hess http://joeyh.name/blog/ see shy jo

Hosted monitoring

Planet Debian - Tue, 26/06/2018 - 6:01pm

I don't run hosted monitoring as a service, I just happen to do some monitoring for a few (local) people, in exchange for money.

Setting up some new tests today I realised my monitoring software had an embarrassingly bad bug:

  • The IMAP probe would connect to an IMAP/IMAPS server.
  • Optionally it would log in with a username & password.
    • Thus it could test that the service was functional.

Unfortunately the IMAP probe would never log out after determining success/failure, which would lead to errors from the remote host after a few consecutive runs:

dovecot: imap-login: Maximum number of connections from user+IP exceeded (mail_max_userip_connections=10)

Oops. Anyway, that bug was fixed promptly once it manifested itself, and the probe also gained the ability to validate SMTP authentication as a result of a customer request.
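The shape of the fix is the classic one: whatever the probe's outcome, log out. A small sketch (not the actual monitoring code; the client is injectable for testing, but with Python's imaplib `make_client` would be something like `lambda: imaplib.IMAP4_SSL(host)`):

```python
def probe_imap(make_client, user=None, password=None):
    """Connect, optionally authenticate, and ALWAYS log out, so repeated
    probes don't exhaust mail_max_userip_connections on the server."""
    client = make_client()
    try:
        if user is not None:
            client.login(user, password)
        return True
    except Exception:
        return False
    finally:
        try:
            client.logout()  # the step the buggy probe was missing
        except Exception:
            pass
```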

Otherwise I think things have been mixed recently:

  • I updated the webserver of Charlie Stross
  • Did more geekery with hardware.
  • Had a fun time in a sauna, on a boat.
  • Reported yet another security issue in an online PDF generator/converter
    • If you read a remote URL and convert the contents to PDF then be damn sure you don't let people submit file:///etc/passwd.
    • I've talked about this previously.
  • Made plaited bread for the first time.
    • It didn't suck.

(Hosted monitoring is interesting; many people will give you ping/HTTP-fetch monitoring. If you want to remotely test your email service? Far far far fewer options. I guess firewalls get involved if you're testing self-hosted services, rather than cloud-based stuff. But still an interesting niche. Feel free to tell me your budget ;)

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

Reproducible Builds: Weekly report #165

Planet Debian - Tue, 26/06/2018 - 4:24pm

Here’s what happened in the Reproducible Builds effort between Sunday June 17 and Saturday June 23 2018:

Packages reviewed and fixed, and bugs filed
  • Bernhard M. Wiedemann:

    • gcc (sort, second attempt)
    • pip (sort hash)
    • librep (version update to fix embedded hostname)
  • Chris Lamb:

tests.reproducible-builds.org development

There were a large number of changes to our Jenkins-based testing framework that powers tests.reproducible-builds.org, including:

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible-builds.org/blog/ reproducible-builds.org

Forming, storming, norming, performing, and …chloroforming?

Planet Debian - Tue, 26/06/2018 - 4:21am

In 1965, Bruce Tuckman proposed a “developmental sequence in small groups.” According to his influential theory, most successful groups go through four stages with rhyming names:

  1. Forming: Group members get to know each other and define their task.
  2. Storming: Through argument and disagreement, power dynamics emerge and are negotiated.
  3. Norming: After conflict, groups seek to avoid conflict and focus on cooperation and setting norms for acceptable behavior.
  4. Performing: There is both cooperation and productive dissent as the team performs the task at a high level.

Fortunately for organizational science, 1965 was hardly the last stage of development for Tuckman’s theory!

Twelve years later, Tuckman suggested that adjourning or mourning reflected potential fifth stages (Tuckman and Jensen 1977). Since then, other organizational researchers have suggested other stages including transforming and reforming (White 2009), re-norming (Biggs), and outperforming (Rickards and Moger 2002).

What does the future hold for this line of research?

To help answer this question, we wrote a regular expression to identify candidate words and placed the full list at this page in the Community Data Science Collective wiki.
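A minimal sketch of such a regular expression (the actual expression and full word list live in the wiki; this version just collects distinct words ending in "-orming"):

```python
import re

def orming_candidates(text):
    """Return the distinct words ending in '-orming', lowercased,
    in order of first appearance."""
    seen = []
    for word in re.findall(r"\b\w+orming\b", text.lower()):
        if word not in seen:
            seen.append(word)
    return seen
```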

The good news is that despite the active stream of research producing new stages that end or rhyme with -orming, there are tons of great words left!

For example, stages in a group’s development might include:

  • Scorning: In this stage, group members begin mocking each other!
  • Misinforming: Groups that reach this stage start producing fake news.
  • Shoehorning: These groups try to make their products fit into ridiculous constraints.
  • Chloroforming: Groups become languid and fatigued?

One benefit of keeping our list in the wiki is that the organizational research community can use it to coordinate! If you are planning to use one of these terms—or if you know of a paper that has—feel free to edit the page in our wiki to “claim” it!

Also posted on the Community Data Science Collective blog. Although credit for this post goes primarily to Jeremy Foote and Benjamin Mako Hill, the other Community Data Science Collective members can’t really be called blameless in the matter either.

Benjamin Mako Hill https://mako.cc/copyrighteous copyrighteous

Yes! I am going to...

Planet Debian - Mon, 25/06/2018 - 1:44am

Having followed through some paperwork I was still missing...

I can finally say...

Dates

I’m going to DebCamp18! I should arrive at NCTU in the afternoon/evening of Tuesday, 2018-07-24.

I will spend the prior day in Tokyo, visiting a friend and probably doing some micro-tourism.

My Agenda

Of course, DebCamp is not a vacation, so we expect people who take part in DebCamp to have at least a rough sketch of activities. There are many, many things I want to tackle, and experience shows there's only time for a fraction of what's planned. But let's try:

keyring-maint training
We want to add one more member to the keyring-maint group. There is a lot to prepare before any announcements, but I expect a good chunk of DebCamp to be spent explaining the details to a new team member.
DebConf organizing
While I'm no longer a core orga-team member, I am still quite attached to helping out during the conference. This year, I took the Content Team lead, and we will surely be ironing out details such as fixing schedule bugs.
Raspberry Pi images
I replied to Michael Stapelberg's call for adoption of the unofficial-but-blessed Raspberry Pi 3 disk images. I will surely be spending some time on that.
Key Signing Party Coordination
I just sent out the Call for keys for keysigning in Hsinchu, Taiwan. At that point, I expect very little work to be needed, but it will surely be on my radar.

Of course... I *do* want to spend some time outside NCTU and get to know a bit of Taiwan. This is my first time in East Asia, and I don't know when, if ever, I will have the opportunity to be there again. So I will try to make at least some time to enjoy a little bit of Taiwan!

gwolf http://gwolf.org Gunnar Wolf

#19: Intel MKL in Debian / Ubuntu follow-up

Planet Debian - Sun, 24/06/2018 - 11:41pm

Welcome to the (very brief) nineteenth post in the ruefully recalcitrant R reflections series of posts, or R4 for short.

About two months ago, in the most recent post in the series, #18, we provided a short tutorial about how to add the Intel Math Kernel Library to a Debian or Ubuntu system thanks to the wonderful apt tool -- and the prepackaged binaries by Intel. This made for a simple, reproducible, scriptable, and even reversible (!!) solution---which a few people seem to have appreciated. Good.

In the meantime, more good things happened. Debian maintainer Mo Zhou had posted this 'intent-to-package' bug report leading to this git repo on salsa and this set of packages currently in the 'NEW' package queue.

So stay tuned: "soon" (for various definitions of "soon") we should be able to get the MKL directly onto Debian systems via apt, without needing Intel's repo. And in a release or two, Ubuntu should catch up. The fastest multithreaded BLAS and LAPACK for everybody, well integrated and packaged. That said, it is still a monstrously large package, so I mostly stick with the (truly open source rather than just 'gratis') OpenBLAS, but hey, choice is good. And yes, technically these packages are 'outside' of Debian in the non-free section, but they will be visible to almost all default configurations.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Review: The Trouble with Physics

Planet Debian - Sun, 24/06/2018 - 6:07am

Review: The Trouble with Physics, by Lee Smolin

Publisher: Mariner Copyright: 2006 Printing: 2007 ISBN: 0-618-91868-X Format: Trade paperback Pages: 355

A brief recap of the state of theoretical physics: Quantum mechanics and particle physics have settled on the standard model, which provides an apparently complete inventory of fundamental particles and explains three of the four fundamental forces. This has been very experimentally successful up to and including the recent tentative observation of the Higgs boson, one of the few predictions of the standard model that had yet to be confirmed by experiment. Meanwhile, Einstein's theory of general relativity continues as the accepted explanation of gravity, experimentally verified once again by LIGO and Virgo detection of gravitational waves.

However, there are problems. Perhaps the largest is the independence of these two branches of theoretical physics: quantum mechanics does not include or explain gravity, and general relativity does not sit easily alongside current quantum theory. This causes theoretical understanding to break down in situations where both theories need to be in play simultaneously, such as the very early universe or event horizons of black holes.

There are other problems within both theories as well. Astronomy shows that objects in the universe behave as if there is considerably more mass in galaxies than we've been able to observe (the dark matter problem), but we don't have a satisfying theory of what would make up that mass. Worse, the universe is expanding more rapidly than it should, requiring introduction of a "dark energy" concept with no good theoretical basis. And, on the particle physics side, the standard model requires a large number (around 20, depending on how you measure them) of apparently arbitrary free constants: numbers whose values don't appear to be predicted by any basic laws and therefore could theoretically be set to any value. Worse, if those values are set even very slightly differently than we observe in our universe, the nature of the universe would change beyond recognition. This is an extremely unsatisfying property for an apparently fundamental theory of nature.

Enter string theory, which is the dominant candidate for a deeper, unifying theory behind the standard model and general relativity that tries to account for at least some of these problems. And enter this book, which is a critique of string theory as both a scientific theory and a sociological force within the theoretical physics community.

I should admit up-front that Smolin's goal in writing this book is not the same as my goal in reading it. His primary concern is the hold that string theory has on theoretical physics and the possibility that it is stifling other productive avenues, instead spinning off more and more untestable theories that can be tweaked to explain any experimental result. It may even be leading people to argue against the principles of experimental science itself (more on that in a moment). But to mount his critique for the lay reader, he has to explain the foundations of both accepted theoretical physics and string theory (and a few of the competing alternative theories). That's what I was here for.

About a third of this book is a solid explanation of the history and current problems of theoretical physics for the lay person who is already familiar with basic quantum mechanics and general relativity. Smolin is a faculty member at the Perimeter Institute for Theoretical Physics and has done significant work in string theory, loop quantum gravity (one of the competing attempts to unify quantum mechanics and general relativity), and the (looking dubious) theory of doubly special relativity, so this is an engaged and opinionated overview from an active practitioner. He lays out the gaps in existing theories quite clearly, conveys some of the excitement and disappointment of recent (well, as of 2005) discoveries and unsolved problems, provides a solid if succinct summary of string theory, and manages all of that without relying on too much complex math. This is exactly the sort of thing I was looking for after Brian Greene's The Elegant Universe.

Another third of this book is a detailed critique of string theory, and specifically the assumption that string theory is correct despite its lack of testable predictions and its introduction of new problems. I noted in my review of Greene's book that I was baffled by his embrace of a theory that appears to add even more free variables than the standard model, an objection that he skipped over entirely. Smolin tackles this head-on, along with other troublesome aspects of a theory that is actually an almost infinitely flexible family of theories and whose theorized unification (M-theory) is still just an outline of a hoped-for idea.

The core of Smolin's technical objection to string theory is that it is background-dependent. Like quantum mechanics, it assumes a static space-time backdrop against which particle or string interactions happen. However, general relativity is background-independent; indeed, that's at the core of its theoretical beauty. It states that the shape of space-time itself changes, and is a participant in the physical effects we observe (such as gravity). Smolin argues passionately that background independence is a core requirement for any theory that aims to unify general relativity and quantum mechanics. As long as a theory remains background-dependent, it is, in his view, missing Einstein's key insight.

The core of his sociological objection is that he believes string theory has lost its grounding in experimental verification and has acquired a far greater aura of certainty than it deserves given its current state, and has done so partly because of the mundane but pernicious effects of academic and research politics. On this topic, I don't know nearly enough to referee the debate, but his firm dismissal of attempts to justify string theory's weaknesses via the anthropic principle rings true to me. (The anthropic principle, briefly, is the idea that the large number of finely-tuned free constants in theories of physics need not indicate a shortcoming in the theory, but may be that way simply because, if they weren't, we wouldn't be here to observe them.) Smolin's argument is that no other great breakthrough of physics has had to rely on that type of hand-waving, that the elegance of a theory isn't sufficient justification to reach for this sort of defense, and that to embrace the anthropic principle and its inherent non-refutability is to turn one's back on the practice of science. I suspect this ruffled some feathers, but Smolin put his finger squarely on the discomfort I feel whenever the anthropic principle comes up in scientific discussions.

The rest of the book lays out some alternatives to string theory and some interesting lines of investigation that, as Smolin puts it, may not pan out but at least are doing real science with falsifiable predictions. This is the place where the book shows its age, and where I frequently needed to do some fast Wikipedia searching. Most of the experiments Smolin points to have proven to be dead ends: we haven't found Lorentz violations, the Pioneer anomaly had an interesting but mundane explanation, and the predictions of modified Newtonian dynamics do not appear to be panning out. But I doubt this would trouble Smolin; as he says in the book, the key to physics for him is to make bold predictions that will often be proven wrong, but that can be experimentally tested one way or another. Most of them will lead to nothing, but each can reach a definitive result, unlike theories with so many tunable parameters that all of their observable effects can be hidden.

Despite not having quite the focus I was looking for, I thoroughly enjoyed this book and only wish it were more recent. The physics was pitched at almost exactly the level I wanted. The sociology of theoretical physics was unexpected but fascinating in a different way, although I'm taking it with a grain of salt until I read some opposing views. It's an odd mix of topics, so I'm not sure if it's what any other reader would be looking for, but hopefully I've given enough of an outline above for you to know if you'd be interested.

I'm still looking for the modern sequel to One Two Three... Infinity, and I suspect I may be for my entire life. It's hard to find good popularizations of theoretical physics that aren't just more examples of watching people bounce balls on trains or stand on trampolines with bowling balls. This isn't exactly that, but it's a piece of it, and I'm glad I read it. And I wish Smolin the best of luck in his quest for falsifiable theories and doable experiments.

Rating: 8 out of 10

Russ Allbery https://www.eyrie.org/~eagle/ Eagle's Path

OSSummit Japan 2018

Planet Debian - Dje, 24/06/2018 - 4:33pd

I participated in OSSummit Japan 2018 as a volunteer staff member for three days.

Some Debian developers (Jose from Microsoft and Michael from credativ) gave talks during this event.

Got some stickers. (Why Fedora? Because I got help with an improvement from Fedora people, as previously noted :)



Hideki Yamane noreply@blogger.com Henrich plays with Debian

nginx, lua, uuid and a nchan bug

Planet Debian - Sht, 23/06/2018 - 6:08md

At work we're running nginx in several instances, sometimes on Debian/stretch (Woooh) and sometimes on Debian/jessie (Boooo). To improve our request tracking abilities we set out to add a header with a UUID version 4 if it does not exist yet. We expected this to be a story we could implement in a few hours at most ...

/proc/sys/kernel/random/uuid vs lua uuid module

If you start to look around on how to implement it you might find out that there is a lua module to generate a UUID. Since this module is not packaged in Debian we started to think about packaging it, but on second thought we wondered if simply reading from the Linux /proc interface isn't faster after all. So we built a very unscientific test case that we deemed good enough:

$ cat uuid_by_kernel.lua
#!/usr/bin/env lua5.1
local i = 0
repeat
    local f = assert(io.open("/proc/sys/kernel/random/uuid", "rb"))
    local content = f:read("*all")
    f:close()
    i = i + 1
until i == 1000

$ cat uuid_by_lua.lua
#!/usr/bin/env lua5.1
package.path = package.path .. ";/home/sven/uuid.lua"
local i = 0
repeat
    local uuid = require("uuid")
    local content = uuid()
    i = i + 1
until i == 1000
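As a quick cross-check outside Lua (purely illustrative, not part of the original test), the same kernel interface can be exercised from the shell. Note that the /proc file emits a trailing newline, which is why the Lua code later trims the last character:

```shell
# Read one UUID from the kernel interface (Linux only).
u=$(cat /proc/sys/kernel/random/uuid)
echo "$u"

# A version-4 UUID is 36 characters: hex digits grouped 8-4-4-4-12.
[ "${#u}" -eq 36 ] && echo "length ok"
```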

The result is in favour of using the Linux /proc interface:

$ time ./uuid_by_kernel.lua

real    0m0.013s
user    0m0.012s
sys     0m0.000s

$ time ./uuid_by_lua.lua

real    0m0.021s
user    0m0.016s
sys     0m0.004s

nginx in Debian/stretch vs nginx in Debian/jessie

Now that we had settled on the lua code

if (ngx.var.http_correlation_id == nil or ngx.var.http_correlation_id == "") then
    local f = assert(io.open("/proc/sys/kernel/random/uuid", "rb"))
    local content = f:read("*all")
    f:close()
    return content:sub(1, -2)
else
    return ngx.var.http_correlation_id
end

and the nginx configuration

set_by_lua_file $ngx.var.http_correlation_id /etc/nginx/lua-scripts/lua_uuid.lua;

we started to roll this one out to our mixed setup of Debian/stretch and Debian/jessie hosts. While we tested this one on Debian/stretch, and it all worked fine, we never gave it a try on Debian/jessie. Within seconds of the rollout all our nginx instances on Debian/jessie started to segfault.

Half an hour later it was clear that the nginx release shipped in Debian/jessie does not yet allow writing directly into the internal variable $ngx.var.http_correlation_id. To work around this issue, we configured nginx to use the add_header configuration option to create the header:

set_by_lua_file $header_correlation_id /etc/nginx/lua-scripts/lua_uuid.lua;
add_header correlation_id $header_correlation_id;

This configuration works on Debian/stretch and Debian/jessie.

Another possibility we considered was using the backported version of nginx. But this one depends on a newer openssl release. I didn't want to walk down the road of manually tracking potential openssl bugs against a release not supported by the official security team. So we rejected this option. Next item on the todo list is for sure the migration to Debian/stretch, which is overdue now anyway.

and it just stopped

A few hours later we found that the nginx running on Debian/stretch was still running, but no longer responding. Attaching strace revealed that all processes (worker and master) were waiting on a futex() call. Logs showed an assert pointing in the direction of the nchan module. I think the bug we're seeing is #446; I've added the few bits of additional information I could gather. We just moved on and disabled the module on our systems. It has been running fine in all cases for a few weeks now.

Kudos to Martin for walking down this muddy road together on a Friday.

Sven Hoexter http://sven.stormbind.net/blog/ a blog

Nageru deployments

Planet Debian - Sht, 23/06/2018 - 12:31md

As we're preparing our Nageru video chains for another Solskogen, I thought it worthwhile to make some short posts about deployments in the wild (neither of which I had much involvement with myself):

  • The Norwegian municipality of Frøya is live streaming all of their council meetings using Nageru (Norwegian only). This is a fairly complex setup with a custom frontend controlling PTZ cameras, so that someone non-technical can just choose from a few select scenes and everything else clicks into place.
  • Breizhcamp, a French technology conference, used Nageru in 2018, transitioning from OBS. If you speak French, you can watch their keynote about it (itself produced with Nageru) and all their other videos online. Breizhcamp ran their own patched version of Nageru (available on Github); I've merged most of their patches into the main repository, but not all of them yet.

Also, someone thought it was a good idea to take an old version of Nageru, strip all the version history and put it on Github with (apparently) no further changes. Like, what. :-)

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

I’m a maker, baby

Planet Debian - Sht, 23/06/2018 - 1:34pd

 

What does the “maker movement” think of the song “Maker” by Fink?

Is it an accidental anthem or just unfortunate evidence of the semantic ambiguity around an overloaded term?

Benjamin Mako Hill https://mako.cc/copyrighteous copyrighteous

Ick ALPHA-6 released: CI/CD engine

Planet Debian - Enj, 21/06/2018 - 6:34md

It gives me no small amount of satisfaction to announce the ALPHA-6 version of ick, my fledgling continuous integration and deployment engine. Ick has now been deployed and used by people other than myself.

Ick can, right now:

  • Build system trees for containers.
  • Use system trees to run builds in containers.
  • Build Debian packages.
  • Publish Debian packages via its own APT repository.
  • Deploy to a production server.

There are still many missing features. Ick is by no means ready to replace your existing CI/CD system, but if you'd like to have a look at ick, and help us make it the CI/CD system of your dreams, now is a good time to give it a whirl.

(Big missing features: web UI, building for multiple CPU architectures, dependencies between projects, good documentation, a development community. I intend to make all of these happen in due time. Help would be welcome.)

Lars Wirzenius' blog http://blog.liw.fi/englishfeed/ englishfeed

Making a difference

Planet Debian - Mër, 20/06/2018 - 8:24md

Every day, ask yourself this question: What one thing can I do today that will make this democracy stronger and honor and support its institutions? It doesn’t have to be a big thing. And it probably won’t shake the Earth. The aggregation of them will shake the Earth.

– Benjamin Wittes

I have written some over the past year or two about the dangers facing the country. I have become increasingly alarmed about the state of it. And that Benjamin Wittes quote, along with the terrible tragedy, spurred me to action. Among other things, I did two things I never have done before:

I registered to protest on June 30.

I volunteered to do phone banking with SwingLeft.

And I changed my voter registration from independent to Republican.

No, I have not gone insane. The reason for the latter is that here in Kansas, the Democrats rarely field candidates for most offices. The real action happens in the Republican primary. So if I can vote in that primary, I can have a voice in keeping the crazy out of office. It’s not much, but it’s something.

Today we witnessed, hopefully, the first victory in our battle against the abusive practices happening to children at the southern border. Donald Trump caved, and in so doing, implicitly admitted the lies he and his administration have been telling about the situation. This only happened because enough people thought like Wittes: “I am small, but I can do SOMETHING.” When I called the three Washington offices of my senators and representatives — far-right Republicans all — it was apparent that I was by no means the first to give them an earful about this, and that they were changing their tone because of what they heard. Mind you, they hadn’t taken any ACTION yet, but the calls mattered. The reporting mattered. The attention mattered.

I am going to keep doing what little bit I can. I hope everyone else will too. Let us shake the Earth.

John Goerzen http://changelog.complete.org The Changelog

Stop merging your pull requests manually

Planet Debian - Mër, 20/06/2018 - 5:53md

If there's something that I hate, it's doing things manually when I know I could automate them. Am I alone in this situation? I doubt so.

Nevertheless, every day, there are thousands of developers using GitHub who do the same thing over and over again: they click on this button:

This does not make any sense.

Don't get me wrong. It makes sense to merge pull requests. It just does not make sense that someone has to push this damn button every time.

It does not make any sense because every development team in the world has a known list of prerequisites before they merge a pull request. Those requirements are almost always the same, something along these lines:

  • Is the test suite passing?
  • Is the documentation up to date?
  • Does this follow our code style guideline?
  • Have N developers reviewed this?

As this list gets longer, the merging process becomes more error-prone. "Oops, John just clicked on the merge button while not enough developers had reviewed the patch." Rings a bell?

In my team, we're like every team out there. We know what our criteria for merging code into our repository are. That's why we set up a continuous integration system that runs our test suite each time somebody creates a pull request. We also require the code to be reviewed by 2 members of the team before it's approved.

When those conditions are all set, I want the code to be merged.

Without clicking a single button.

That's exactly how Mergify started.

Mergify is a service that pushes that merge button for you. You define rules in the .mergify.yml file of your repository, and when the rules are satisfied, Mergify merges the pull request.

No need to press any button.

Take a random pull request, like this one:

This comes from a small project that does not have a lot of continuous integration services set up, just Travis. In this pull request, everything's green: one of the owners reviewed the code, and the tests are passing. Therefore, the code should already be merged; instead it's there, hanging, chilling, waiting for someone to push that merge button. Someday.

With Mergify enabled, you'd just have to put this .mergify.yml at the root of the repository:

rules:
  default:
    protection:
      required_status_checks:
        contexts:
          - continuous-integration/travis-ci
      required_pull_request_reviews:
        required_approving_review_count: 1

With such a configuration, Mergify enforces the desired restrictions, i.e., Travis passes and at least one project member has reviewed the code. As soon as those conditions are met, the pull request is automatically merged.

We built Mergify as a free service for open-source projects. The engine powering the service is also open-source.

Now go check it out and stop letting those pull requests hang out one second more. Merge them!

If you have any questions, feel free to ask us or write a comment below! And stay tuned — Mergify offers a few other features that I can't wait to talk about!

Julien Danjou https://julien.danjou.info/ Julien Danjou
