Planet Debian
https://planet.debian.org/

Junichi Uekawa: Already December.

Thu, 13/12/2018 - 12:39 PM
Already December. Nice. I tried using tramp for a while, but I am back to mosh. tramp is not usable when the ssh connection is not reliable.

Keith Packard: newt

Thu, 13/12/2018 - 8:55 AM
Newt: A Tiny Embeddable Python Subset

I've been helping teach robotics programming to students in grades 5 and 6 for a number of years. The class uses Lego models for the mechanical bits, and a variety of development environments, including Robolab and Lego Logo on both Apple ][ and older Macintosh systems. Those environments are quite good, but when the Apple ][ equipment died, I decided to try exposing the students to an Arduino environment so that they could get another view of programming languages.

The Arduino environment has produced mixed results. The general nature of a full C++ compiler and the standard Arduino libraries means that building even simple robots requires considerable typing, including a lot of punctuation and upper-case letters. Further, the edit/compile/test process is quite long, making fixing errors slow. On the positive side, many of the students have gone on to use Arduinos in science research projects for middle and upper school (grades 7-12).

In other environments, I've seen Python used as an effective teaching language; the direct interactive nature invites exploration and provides rapid feedback for the students. It seems like a pretty good language to consider for early education -- "real" enough to be useful in other projects, but simpler than C++/Arduino has been. However, I haven't found a version of Python that seems suitable for the smaller microcontrollers I'm comfortable building hardware with.

How Much Python Do We Need?

Python is a pretty large language in embedded terms, but there's actually very little I want to try and present to the students in our short class (about 6 hours of language introduction and another 30 hours or so of project work). In particular, all we're using on the Arduino are:

  • Numeric values
  • Loops and function calls
  • Digital and analog I/O

Remembering my childhood Z-80 machine with its BASIC interpreter, I decided to think along those lines in terms of capabilities. I think I can afford more than 8kB of memory for the implementation, and I really do want to have "real" functions, including lexical scoping and recursion.

I'd love to make this work on our existing Arduino Duemilanove compatible boards. Those have only 32kB of flash and 2kB of RAM, so that might be a stretch...

What to Include

Exploring Python, I think there's a reasonable subset that can be built here. Included in that are:

  • Lists, numbers and string types
  • Global functions
  • For/While/If control structures.

What to Exclude

It's hard to describe all that hasn't been included, but here are some major items:

  • Objects, Dictionaries, Sets
  • Comprehensions
  • Generators (with the exception of range)
  • All numeric types aside from single-precision float

Implementation

Newt is implemented in C, using flex and bison. It includes the incremental mark/sweep compacting GC system I developed for my small scheme interpreter last year, which provides a relatively simple-to-use and efficient memory system.

The Newt “Compiler”

Instead of directly executing a token stream as my old BASIC interpreter did, Newt compiles to byte codes for a virtual machine. Of course, we have no memory to spare, so we don't generate a parse tree and perform optimizations on that. Instead, code is generated directly in the grammar productions.

The Newt “Virtual Machine”

With the source compiled to byte codes, execution is pretty simple -- read a byte code, execute some actions related to it. To keep things simple, the virtual machine has a single accumulator register and a stack of other values.
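
To illustrate that model, here is a minimal sketch in Python (not the actual C implementation; the opcode names are invented for the example) of a byte-code interpreter with one accumulator register and a value stack:

def run(code, consts):
    # Minimal accumulator-plus-stack interpreter. Each instruction is an
    # (opcode, argument) pair; the opcodes here are invented for illustration.
    acc = None
    stack = []
    pc = 0
    while pc < len(code):
        op, arg = code[pc]
        pc += 1
        if op == "CONST":    # load a constant into the accumulator
            acc = consts[arg]
        elif op == "PUSH":   # save the accumulator on the stack
            stack.append(acc)
        elif op == "ADD":    # pop one value, add it to the accumulator
            acc = stack.pop() + acc
        elif op == "PRINT":
            print(acc)
    return acc

# Evaluates and prints 1 + 2
run([("CONST", 0), ("PUSH", None), ("CONST", 1), ("ADD", None), ("PRINT", None)],
    [1, 2])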

Global and local variables are stored in 'frames', with each frame implemented as a linked list of atom/value pairs. This isn't terribly efficient in space or time, but it made it quick to implement the required Python semantics for things like 'global'.

Lists and tuples are simple arrays in memory, just like CPython. I use the same sizing heuristic for lists that Python does; no sense inventing something new for that. Strings are C strings.

When calling a non-builtin function, a new frame is constructed that includes all of the formal names. Those get assigned values from the provided actuals and then the instructions in the function are executed. As new locals are discovered, the frame is extended to include them.
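
As a rough sketch of that structure (in Python rather than the C used by Newt, and purely illustrative), a frame can be modelled as a list of name/value pairs with a link to the enclosing frame:

class Frame:
    # A frame is a linked list of name/value bindings plus a parent link
    # (e.g. the global frame). Lookup walks the local bindings first.
    def __init__(self, parent=None):
        self.bindings = []
        self.parent = parent

    def lookup(self, name):
        for pair in self.bindings:
            if pair[0] == name:
                return pair[1]
        if self.parent is not None:
            return self.parent.lookup(name)
        raise NameError(name)

    def assign(self, name, value):
        for pair in self.bindings:
            if pair[0] == name:
                pair[1] = value
                return
        self.bindings.append([name, value])   # extend the frame with a new local

# Calling a function: bind formals to actuals in a fresh frame.
globals_frame = Frame()
globals_frame.assign("pi", 3.14159)
call_frame = Frame(parent=globals_frame)
call_frame.assign("r", 2.0)                   # formal parameter
call_frame.assign("area", call_frame.lookup("pi") * call_frame.lookup("r") ** 2)
print(call_frame.lookup("area"))              # 12.56636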

Testing

Any new language implementation really wants to have a test suite to ensure that the desired semantics are implemented correctly. One huge advantage for Newt is that we can cross-check the test suite by running it with Python.
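
For example, a test case written in the shared subset (this snippet is hypothetical, not taken from the newt test suite) can be run under python3 to produce the expected output and under newt to produce the actual output:

def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

for i in range(10):
    print(fib(i))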

Current Status

I think Newt is largely functionally complete at this point; I just finished adding the limited for statement capabilities this evening. I'm sure there are a lot of bugs to work out, and I expect to discover additional missing functionality as we go along.

I'm doing all of my development and testing on my regular x86 laptop, so I don't know how big the system will end up on the target yet.

I've written 4836 lines of code for the implementation and another 65 lines of Python for simple test cases. When compiled -Os for x86_64, the system is about 36kB of text and another few bytes of initialized data.

Links

The source code is available from my server at https://keithp.com/cgit/newt.git/, and also at github https://github.com/keith-packard/newt. It is licensed under the GPLv2 (or later version).

Jonathan Dowland: Game Engine Black Book: DOOM

Wed, 12/12/2018 - 5:50 PM

Fabien's proof copies

proud smug face

Today is Doom's 25th anniversary. To mark the occasion, Fabien Sanglard has written and released a book, Game Engine Black Book: DOOM.

It's a sequel of sorts to "Game Engine Black Book: Wolfenstein 3D", which was originally published in August 2017 and has now been fully revised for a second edition.

I had the pleasure of proof-reading an earlier version of the Doom book and it's a real treasure. It goes into great depth as to the designs, features and limitations of PC hardware of the era, from the 386 that Wolfenstein 3D targeted to the 486 for Doom, as well as the peripherals available such as sound cards. It covers NeXT computers in similar depth. These were very important because Id Software made the decision to move all their development onto NeXT machines instead of developing directly on PC. This decision had some profound implications on the design of Doom as well as the speed at which they were able to produce it. I knew very little about the NeXTs and I really enjoyed the story of their development.

Detailed descriptions of those two types of personal computer set the scene at the start of the book, before Doom itself is described. The point of this book is to focus on the engine and it is explored sub-system by sub-system. It's fair to say that this is the most detailed description of Doom's engine that exists anywhere outside of its own source code. Despite being very familiar with Doom's engine, having worked on quite a few bits of it, I still learned plenty of new things. Fabien made special modifications to a private copy of Chocolate Doom in order to expose how various phases of the renderer worked. The whole book is full of full colour screenshots and illustrations.

The main section of the book closes with some detailed descriptions of the architectures of various home games console systems of the time to which Doom was ported, as well as describing the fate of that particular version of Doom: some were impressive technical achievements, some were car-crashes.

I'm really looking forward to buying a hard copy of the final book. I would recommend this to anyone who has fond memories of that era, or is interested to know more about the low level voodoo that was required to squeeze every ounce of performance possible out of the machines from the time.

Edit: Fabien has now added a "pay what you want" option for the ebook. If the existing retailer prices were putting you off, now you can pay him for his effort at a level you feel is reasonable. The PDF is also guaranteed not to be mangled by Google Books or anyone else.

Petter Reinholdtsen: Non-blocking bittorrent plugin for vlc

Wed, 12/12/2018 - 7:20 AM

A few hours ago, a new and improved version (2.4) of the VLC bittorrent plugin was uploaded to Debian. This new version includes a complete rewrite of the bittorrent-related code, which seems to make the plugin non-blocking. This means you can actually exit VLC even when the plugin seems unable to get the bittorrent streaming started. The new version also includes support for filtering the playlist by file extension using command line options, if you want to avoid processing audio, video or images. The package is currently in Debian unstable, but should be available in Debian testing in two days. To test it, simply install it like this:

apt install vlc-plugin-bittorrent

After it is installed, you can try to use it to play a file downloaded live via bittorrent like this:

vlc https://archive.org/download/Glass_201703/Glass_201703_archive.torrent

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Louis-Philippe Véronneau: Montreal Bug Squashing Party - Jan 19th & 20th 2019

Wed, 12/12/2018 - 6:00 AM

We are organising a BSP in Montréal in January! Unlike the one we organised for the Stretch release, this one will be over a whole weekend so hopefully folks from other provinces in Canada and from the USA can come.

So yeah, come and squash bugs with us! Montreal in January can be cold, but it's usually snowy and beautiful too.

As always, the Debian Project is willing to reimburse 100 USD (or equivalent) of expenses to attend Bug Squashing Parties. If you can find a cheap flight or want to car pool with other people that are interested, going to Montréal for a weekend doesn't sound that bad, eh?

When: January 19th and 20th 2019

Where: Montréal, Eastern Bloc

Why: to squash bugs!

Matthew Palmer: Falsehoods Programmers Believe About Pagination

Wed, 12/12/2018 - 1:00 AM

The world needs it, so I may as well write it.

  • The number of items on a page is fixed for all time.
  • The number of items on a page is fixed for one user.
  • The number of items on a page is fixed for one result set.
  • The pages are only browsed in one direction.
  • No item will be added to the result set during retrieval.
  • No item will be removed from the result set during retrieval.
  • Item sort order is stable.
  • Only one page of results will be retrieved at one time.
  • Pages will be retrieved in order.
  • Pages will be retrieved in a timely manner.
  • No problem will result from two different users seeing different pagination of the same items at about the same time. (From @ronburk)
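
As an illustration of how a few of these bite in practice, here is a minimal keyset ("cursor") pagination sketch in Python; the data model and function are invented for the example, and it only addresses the stability-related falsehoods, not all of them:

def fetch_page(items, page_size, after_id=None):
    # Keyset pagination: order by a stable unique key and remember the last
    # key seen, instead of using offsets that shift when items come and go.
    ordered = sorted(items, key=lambda item: item["id"])
    if after_id is not None:
        ordered = [item for item in ordered if item["id"] > after_id]
    page = ordered[:page_size]
    next_cursor = page[-1]["id"] if len(page) == page_size else None
    return page, next_cursor

items = [{"id": i} for i in range(7)]
page1, cursor = fetch_page(items, 3)
items.append({"id": 99})                       # an item arrives mid-retrieval
page2, cursor = fetch_page(items, 3, after_id=cursor)
print([i["id"] for i in page1], [i["id"] for i in page2])   # [0, 1, 2] [3, 4, 5]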

Reproducible builds folks: Reproducible Builds: Weekly report #189

Tue, 11/12/2018 - 5:01 PM

Here’s what happened in the Reproducible Builds effort between Sunday December 2 and Saturday December 8 2018:

Packages reviewed and fixed, and bugs filed

Test framework development

There were a number of updates to our Jenkins-based testing framework that powers tests.reproducible-builds.org this week, including:

  • Chris Lamb:
    • Re-add support for calculating a PureOS package set. (MR: 115)
    • Support arbitrary package filters when generating deb822 output. (MR: 22)
    • Add missing DBDJSON_PATH import. (MR: 21)
    • Correct Tails’ build manifest URL. (MR: 20)
  • Holger Levsen:
    • Ignore disk full false-positives building the GNU C Library. []
    • Various node maintenance. (eg. [], [], etc.)
    • Exclusively use the database to track blacklisted packages in Arch Linux. []

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Muz & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Bits from Debian: Debian Cloud Sprint 2018

Tue, 11/12/2018 - 12:30 PM

The Debian Cloud team held a sprint for the third time, hosted by Amazon at its Seattle offices from October 8th to October 10th, 2018.

We discussed the status of images on various platforms, especially in light of moving to FAI as the only method for building images on all the cloud platforms. The next topic was building and testing workflows, including the use of Debian machines for building, testing, storing, and publishing built images. This was partially caused by the move of all repositories to Salsa, which allows for better management of code changes, especially reviewing new code.

Recently we have made progress supporting cloud use cases; a grub and kernel optimised for cloud images help reduce boot time and the required memory footprint. There is also growing interest in non-x86 images, and FAI can now build such images.

Discussion of support for LTS images, which started at the sprint, has now moved to the debian-cloud mailing list. We also discussed providing many image variants, which requires a more advanced and automated workflow, especially regarding testing. Further discussion touched upon providing newer kernels and software like cloud-init from backports. As interest in using secure boot is increasing, we might cooperate with other teams and use their work on UEFI to provide images with a signed boot loader and kernel.

Another topic of discussion was the management of accounts used by Debian to build and publish Debian images. SPI will create and manage such accounts for Debian, including user accounts (synchronised with Debian accounts). Buster images should be published using those new accounts. Our Cloud Team delegation proposal (prepared by Luca Fillipozzi) was accepted by the Debian Project Leader. Sprint minutes are available, including a summary and a list of action items for individual members.

Dirk Eddelbuettel: RQuantLib 0.4.7: Now with corrected Windows library

Tue, 11/12/2018 - 11:47 AM

A new version 0.4.7 of RQuantLib reached CRAN and Debian. It follows up on the recent 0.4.6 release post, which contained a dual call for help: RQuantLib was (and still is!) in need of a macOS library build, and it also experienced issues on Windows.

Since then we set up a new (open) mailing list for RQuantLib and, I am happy to report, sorted that Windows issue out! In short, with the older g++ 4.9.3 imposed for R via Rtools, we must add an explicit C++11 flag at configuration time. Special thanks to Josh Ulrich for tireless and excellent help with testing these configurations, and to everybody else on the list!

QuantLib is a very comprehensive free/open-source library for quantitative finance, and RQuantLib connects it to the R environment and language.

This release re-enables most examples and tests that were disabled when Windows performance was shaky (due to, as we now know, a misconfiguration of ours for the Windows binary library used). With the exception of the AffineSwaption example when running Windows i386, everything is back!

The complete set of changes is listed below:

Changes in RQuantLib version 0.4.7 (2018-12-10)
  • Changes in RQuantLib tests:

    • Thanks to the updated #rwinlib/quantlib Windows library provided by Josh, all tests that previously exhibited issues have been re-enabled (Dirk in #126).
  • Changes in RQuantLib documentation:

    • The CallableBonds example now sets an evaluation date (#124).

    • Thanks to the updated #rwinlib/quantlib Windows library provided by Josh, examples that were set to dontrun are re-activated (Dirk in #126). AffineSwaption remains the sole holdout.

  • Changes in RQuantLib build system:

    • The src/Makevars.win file was updated to reflect the new layout used by the upstream build.

    • The -DBOOST_NO_AUTO_PTR compilation flag is now set.

As stated above, we are still looking for macOS help though. Please get in touch on-list if you can help build a library for Simon’s recipes repo.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Reproducible builds folks: Reproducible Builds: Weekly report #188

Wed, 05/12/2018 - 8:20 AM

Here’s what happened in the Reproducible Builds effort between Sunday November 25 and Saturday December 1 2018:

Patches filed

Test framework development

There were a number of updates to our Jenkins-based testing framework that powers tests.reproducible-builds.org this week, including:

  • Chris Lamb prepared a merge request to generate and serve diffoscope JSON output in addition to the existing HTML and text formats (example output). This required Holger Levsen to increase the partition holding /var/lib/jenkins/userContent/reproducible from 255G to 400G. Thanks to Profitbricks for sponsoring this virtual hardware for more than 6 years now.

  • Holger Levsen and Jelle van der Waa started to integrate new Arch Linux build nodes, namely repro1.pkgbuild.com and repro2.pkgbuild.com.

  • In addition, Holger Levsen installed the needrestart package everywhere [], updated an interface to always use the short hostname [], explained what some nodes were doing [], as well as performed the usual node maintenance ([], [], [], etc.).

  • Jelle van der Waa also fixed a number of issues in the Arch Linux integration including showing the language in the first build [] and setting LANG/LC_ALL in the first build [].

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Jelle van der Waa & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Benjamin Mako Hill: Banana Peels

Wed, 05/12/2018 - 5:25 AM

Although it’s been decades since I last played, I still get flashbacks to Super Mario Kart and pangs of irrational fear every time I see a banana peel in the road.

Gunnar Wolf: New release of the Raspberry Pi 3 *unofficial Debian preview* image

Wed, 05/12/2018 - 2:35 AM

Back in June, Michael Stapelberg asked for somebody interested in adopting the unofficial Debian image for the Raspberry Pi 3 family. It didn't take me long to raise my hand.
What did take me long is to actually do it. I have adopted the Raspberry3 image spec repository, with the recipes to build the image using Lars' great vmdb2, as well as the raspi3-firmware non-free Debian package.
After delaying this for too long, first in order to understand it better, and second because of the workload I had this last semester, I think we are ready to announce...

There is a new, updated preview image!

You can look at the instructions at the Debian Wiki page on RaspberryPi3. Or you can just jump to the downloads, at my people.debian.org — xzipped image (388MB, unzips to 1.5GB, and resizes to the capacity of your boot SD at first boot), verification sha256sum, and PGP-signed verification sha256sum.
There are still many things that can be improved, for sure. The main issues for me are:

  • No wireless support. Due to a bug in Linux kernel 4.18, wlan0 support is broken. It is reported, and we expect it to be fixed in the next kernel upload.
  • Hardcoded root password. This will be tackled later on — part of the issue is that I cannot ensure how this computer will be booted. I have some ideas to tackle this, though...

Other than that, what we have is a very minimal Debian system, ready for installing software!
At some point in the future, I plan to add build profiles for some common configurations. But let's go one step at a time.

Jonathan Dowland: iPod refresh

Tue, 04/12/2018 - 8:46 PM

Recently I filled up the storage in my iPod and so planned to upgrade it. This is a process I've been through several times in the past. My routine used to be to buy the largest capacity SD card that existed at the time (usually twice the capacity of the current one) and spend around £90. Luckily, SD capacity has been growing faster than my music collection. You can buy 400G SD cards today, but I only bought a 200G one, and I only spent around £38.

As I wrote last time, I don't use iTunes: I can move music on and off it from any computer, and I choose music to listen to using a simple file manager. One drawback of this approach is that I tend to listen to the same artists over and over, and large swathes of my collection lie forgotten. The impression I get is that music managers like iTunes have various schemes to help you keep in touch with the rest of your collection, via playlists: "recently added", "stuff you listened to this time last year", or whatever.

As a first step in this direction, I decided it would be useful to build up playlists of recently modified (or added) files. It seemed easiest to hook this into my backup solution, and in case it's of interest to anyone else, I thought I'd share my solution. The scheme I describe there is used to run a shell script to perform the syncing, which now looks (mostly) like this:

date="$(/bin/date +%Y-%m-%d)"
plsd=/home/jon/pls

make_playlists() {
    grep -v deleting \
        | grep -v '/\._' \
        | grep -E '(m4a|mp3|ogg|wav|flac)$' \
        | tee -a "$plsd/$date.m3u8"
}

# set the attached blinkstick LED to a colour indicating "work in progress"
# systemd sets it to either red or green once the job is complete
blinkstick --index 1 --limit 10 --set-color 33c280

# sync changes from my iPod onto my NAS; feed the output of files changed
# into "make_playlists"
rsync -va --delete --no-owner --no-group --no-perms \
    --exclude=/.Spotlight-V100 --exclude=/.Trash-1000 \
    --exclude=/.Trashes --exclude=/lost+found /media/ipod/ /music/ \
    | make_playlists

# sync all generated playlists back onto the iPod
rsync -va --no-owner --no-group --no-perms \
    /home/jon/pls/ /media/ipod/playlists/

Time will tell whether this will help.

Daniel Lange: Google GMail continues to own the email market, Microsoft is catching up

Tue, 04/12/2018 - 7:41 PM

Back in 2009 I wrote about Google's GMail emerging as the dominant platform for email. It had 46% of all accounts I sampled from American bloggers for the Ph.D. thesis of a friend. Blogging was big back then.

Now I wondered how things have changed over the last decade while I was working on another email related job. Having access to a list of 2.3 million email addresses from a rather similar (US-centric) demographic, let's do some math:

Google's GMail has 39% in that (much larger, but still non-scientific and skewed) sample. This is down from 46% in 2009. Microsoft, with its various email domains from Hotmail to Live.com, has massively caught up from 10% to 35%. This is definitely also due to Microsoft now focusing more on the strong Microsoft Office brands, e.g. Office 365 and Outlook.com. Yahoo, the #2 player back in 2009, is at 18%, still up from the 12% back then.

So Google plus Microsoft command nearly ¾ of all email addresses in that US-centric sample. Adding Yahoo into the equation leaves the accounts covered at >92%. Wow.

Email has essentially centralized onto three infrastructure providers and with this the neutrality advantage of open standards will probably erode. Interoperability is something two or three players can make or break for 90% of the user base within a single meeting in Sunnyvale.

Google is already trying their luck with "confidential email", which carries expiry dates and revocable reading rights for the recipient. So ... not really email anymore. More like Snapchat. Microsoft has been famous for their winmail.dat attachments and other negligence of email best practices. Yahoo is probably busy trying to develop a sustainable business model and to find cash that Marissa didn't spend, so hopefully there is less risk of them trying out misguided "innovations" in the email space.

All other players are less than 1% of the email domains in the sample. AOL used to have 3.1% and now they are at 0.6%, which is in the same (tiny) ball park as the combined Apple offerings (mac.com, me.com) at 0.4%.

There is virtually no use of the new TLDs for (real, user)1 email. Just a few hundred .info and .name addresses. And very few who consider themselves .sexy or .guru and want to say so via their email TLD.

Domain owner    2009     2018
GMail          46.1%    38.6%
Yahoo          11.6%    18.3%
Microsoft       9.9%    35.4%
AOL             3.1%     0.6%
Apple           1.0%     0.4%
Comcast         2.3%     0.2%
SBCGlobal       0.9%     0.09%
  1. There is extensive use of cheap TLDs for "throw-away" spam operations

Russ Allbery: Review: The Winter Long

Tue, 04/12/2018 - 4:24 AM

Review: The Winter Long, by Seanan McGuire

Series: October Daye #8
Publisher: DAW
Copyright: 2014
ISBN: 1-101-60175-2
Format: Kindle
Pages: 368

This is the eighth book in the October Daye series and leans heavily on the alliances, friendship, world-building, and series backstory. This is not the sort of series that can be meaningfully started in the middle. And, for the same reason, it's also rather hard to review without spoilers, although I'll give it a shot.

Toby has had reason to fear Simon Torquill for the entire series. Everything that's happened to her was set off by him turning her into a fish and destroying her life. She's already had to deal with his partner (in Late Eclipses), so it's not a total surprise that he would show up again. But Toby certainly didn't expect him to show up at her house, or to sound weirdly unlike an enemy, or to reference a geas and an employer. She had never understood his motives, but there may be more to them than simple evil.

I have essentially struck out trying to recommend this series to other people. I think everyone else who's started it has bounced off of it for various reasons: unimpressed by Toby's ability to figure things out, feeling the bits borrowed from the mystery genre are badly done, not liking Irish folklore transplanted to the San Francisco Bay Area, or just finding it too dark. I certainly can't argue with people's personal preferences, but I want to, since this remains my favorite urban fantasy series and I want to talk about it with more people. Thankfully, the friends who started reading it independent of my recommendation all love it too. (Perhaps I'm cursing it somehow?)

Regardless, this is more of exactly what I like about this series, which was never the private detective bits (that have now been discarded entirely) and was always the maneuverings and dominance games of faerie politics, the comfort and solid foundation of Toby's chosen family, Toby's full-throttle-forward approach to forcing her way through problems, and the lovely layered world-building. There is so much going on in McGuire's faerie realm, so many hidden secrets, old grudges, lost history, and complex family relationships. I can see some of the shape of problems that the series will eventually resolve, but I still have no guesses as to how McGuire will resolve them.

The Winter Long takes another deep look at some of Toby's oldest relationships, including revisiting some events from Rosemary and Rue (the first book of the series) in a new light. It also keeps, and further deepens, my favorite relationships in this series: Tybalt, Mags and the Library (introduced in the previous book), and of course the Luidaeg, who is my favorite character in the entire series and the one I root for the most.

I've been trying to pinpoint what I like so much about this series, particularly given the number of people who disagree, and I think it's that Toby gets along with, and respects, a wide variety of difficult people, and brings to every interaction a consistent set of internal ethics and priorities. McGuire sets this against a backdrop of court politics, ancient rivalries and agreements, and hidden races with contempt for humans; Toby's role in that world is to stubbornly do the right thing based mostly on gut feeling and personal loyalty. It's not particularly complex ethics; most of the challenges she faces are eventually resolved by finding the right person to kick (or, more frequently now, use her slowly-growing power against) and the right place to kick them.

That simplicity is what I like. This is my comfort reading. Toby looks at tricky court intrigues, bull-headedly does the right thing, and manages to make that work out, which for me (particularly in this political climate) is escapism in the best sense. She has generally good judgment in her friends, those friends stand by her, and the good guys win. Sometimes that's just what I want in a series, particularly when it comes with an impressive range of mythological creations, an interesting and slowly-developing power set, enjoyable character banter, and a ton of world-building mysteries that I want to know more about.

Long story short, this is more of Toby and friends in much the same vein as the last few books in the series. It adds new depth to some past events, moves Toby higher into the upper echelons of faerie politics, and contains many of my favorite characters. Oh, and, for once, Toby isn't sick or injured or drugged for most of the story, which I found a welcome relief.

If you've read this far into the series, I think you'll love it. I certainly did.

Followed by A Red-Rose Chain.

Rating: 8 out of 10

Colin Watson: Deploying Swift

Tue, 04/12/2018 - 2:37 AM

Sometimes I want to deploy Swift, the OpenStack object storage system.

Well, no, that’s not true. I basically never actually want to deploy Swift as such. What I generally want to do is to debug some bit of production service deployment machinery that relies on Swift for getting build artifacts into the right place, or maybe the parts of the Launchpad librarian (our blob storage service) that use Swift. I could find an existing private or public cloud that offers the right API and test with that, but sometimes I need to test with particular versions, and in any case I have a terribly slow internet connection and shuffling large build artifacts back and forward over the relevant bit of wet string makes it painfully slow to test things.

For a while I’ve had an Ubuntu 12.04 VM lying around with an Icehouse-based Swift deployment that I put together by hand. It works, but I didn’t keep good notes and have no real idea how to reproduce it, not that I really want to keep limping along with manually-constructed VMs for this kind of thing anyway; and I don’t want to be dependent on obsolete releases forever. For the sorts of things I’m doing I need to make sure that authentication works broadly the same way as it does in a real production deployment, so I want to have Keystone too. At the same time, I definitely don’t want to do anything close to a full OpenStack deployment of my own: it’s much too big a sledgehammer for this particular nut, and I don’t really have the hardware for it.

Here’s my solution to this, which is compact enough that I can run it on my laptop, and while it isn’t completely automatic it’s close enough that I can spin it up for a test and discard it when I’m finished (so I haven’t worried very much about producing something that runs efficiently). It relies on Juju and LXD. I’ve only tested it on Ubuntu 18.04, using Queens; for anything else you’re on your own. In general, I probably can’t help you if you run into trouble with the directions here: this is provided “as is”, without warranty of any kind, and all that kind of thing.

First, install Juju and LXD if necessary, following the instructions provided by those projects, and also install the python-openstackclient package as you’ll need it later. You’ll want to set Juju up to use LXD, and you should probably make sure that the shells you’re working in don’t have http_proxy set as it’s quite likely to confuse things unless you’ve arranged for your proxy to be able to cope with your local LXD containers. Then add a model:

juju add-model swift

At this point there’s a bit of complexity that you normally don’t have to worry about with Juju. The swift-storage charm wants to mount something to use for storage, which with the LXD provider in practice ends up being some kind of loopback mount. Unfortunately, being able to perform loopback mounts exposes too much kernel attack surface, so LXD doesn’t allow unprivileged containers to do it. (Ideally the swift-storage charm would just let you use directory storage instead.) To make the containers we’re about to create privileged enough for this to work, run:

lxc profile set juju-swift security.privileged true
lxc profile device add juju-swift loop-control unix-char \
    major=10 minor=237 path=/dev/loop-control
for i in $(seq 0 255); do
    lxc profile device add juju-swift loop$i unix-block \
        major=7 minor=$i path=/dev/loop$i
done

Now we can start deploying things! Save this to a file, e.g. swift.bundle:

series: bionic
description: "Swift in a box"
applications:
  mysql:
    charm: "cs:mysql-62"
    channel: candidate
    num_units: 1
    options:
      dataset-size: 512M
  keystone:
    charm: "cs:keystone"
    num_units: 1
  swift-storage:
    charm: "cs:swift-storage"
    num_units: 1
    options:
      block-device: "/etc/swift/storage.img|5G"
  swift-proxy:
    charm: "cs:swift-proxy"
    num_units: 1
    options:
      zone-assignment: auto
      replicas: 1
relations:
  - ["keystone:shared-db", "mysql:shared-db"]
  - ["swift-proxy:swift-storage", "swift-storage:swift-storage"]
  - ["swift-proxy:identity-service", "keystone:identity-service"]

And run:

juju deploy swift.bundle

This will take a while. You can run juju status to see how it’s going in general terms, or juju debug-log for detailed logs from the individual containers as they’re putting themselves together. When it’s all done, it should look something like this:

Model  Controller  Cloud/Region  Version  SLA
swift  lxd         localhost     2.3.1    unsupported

App            Version  Status  Scale  Charm          Store       Rev  OS      Notes
keystone       13.0.1   active      1  keystone       jujucharms  290  ubuntu
mysql          5.7.24   active      1  mysql          jujucharms   62  ubuntu
swift-proxy    2.17.0   active      1  swift-proxy    jujucharms   75  ubuntu
swift-storage  2.17.0   active      1  swift-storage  jujucharms  250  ubuntu

Unit              Workload  Agent  Machine  Public address  Ports     Message
keystone/0*       active    idle   0        10.36.63.133    5000/tcp  Unit is ready
mysql/0*          active    idle   1        10.36.63.44     3306/tcp  Ready
swift-proxy/0*    active    idle   2        10.36.63.75     8080/tcp  Unit is ready
swift-storage/0*  active    idle   3        10.36.63.115              Unit is ready

Machine  State    DNS           Inst id        Series  AZ  Message
0        started  10.36.63.133  juju-d3e703-0  bionic      Running
1        started  10.36.63.44   juju-d3e703-1  bionic      Running
2        started  10.36.63.75   juju-d3e703-2  bionic      Running
3        started  10.36.63.115  juju-d3e703-3  bionic      Running

At this point you have what should be a working installation, but with only administrative privileges set up. Normally you want to create at least one normal user. To do this, start by creating a configuration file granting administrator privileges (this one comes verbatim from the openstack-base bundle):

_OS_PARAMS=$(env | awk 'BEGIN {FS="="} /^OS_/ {print $1;}' | paste -sd ' ')
for param in $_OS_PARAMS; do
    if [ "$param" = "OS_AUTH_PROTOCOL" ]; then continue; fi
    if [ "$param" = "OS_CACERT" ]; then continue; fi
    unset $param
done
unset _OS_PARAMS

_keystone_unit=$(juju status keystone --format yaml | \
    awk '/units:$/ {getline; gsub(/:$/, ""); print $1}')
_keystone_ip=$(juju run --unit ${_keystone_unit} 'unit-get private-address')
_password=$(juju run --unit ${_keystone_unit} 'leader-get admin_passwd')

export OS_AUTH_URL=${OS_AUTH_PROTOCOL:-http}://${_keystone_ip}:5000/v3
export OS_USERNAME=admin
export OS_PASSWORD=${_password}
export OS_USER_DOMAIN_NAME=admin_domain
export OS_PROJECT_DOMAIN_NAME=admin_domain
export OS_PROJECT_NAME=admin
export OS_REGION_NAME=RegionOne
export OS_IDENTITY_API_VERSION=3
# Swift needs this:
export OS_AUTH_VERSION=3
# Gnocchi needs this
export OS_AUTH_TYPE=password

Source this into a shell: for instance, if you saved this to ~/.swiftrc.juju-admin, then run:

. ~/.swiftrc.juju-admin

You should now be able to run openstack endpoint list and see a table for the various services exposed by your deployment. Then you can create a dummy project and a user with enough privileges to use Swift:

USERNAME=your-username
PASSWORD=your-password
openstack domain create SwiftDomain
openstack project create --domain SwiftDomain --description Swift \
    SwiftProject
openstack user create --domain SwiftDomain --project-domain SwiftDomain \
    --project SwiftProject --password "$PASSWORD" "$USERNAME"
openstack role add --project SwiftProject --user-domain SwiftDomain \
    --user "$USERNAME" Member

(This is intended for testing rather than for doing anything particularly sensitive. If you cared about keeping the password secret then you’d use the --password-prompt option to openstack user create instead of supplying the password on the command line.)

Now create a configuration file granting privileges for the user you just created. I felt like automating this to at least some degree:

touch ~/.swiftrc.juju
chmod 600 ~/.swiftrc.juju
sed '/^_password=/d;
     s/\( OS_PROJECT_DOMAIN_NAME=\).*/\1SwiftDomain/;
     s/\( OS_PROJECT_NAME=\).*/\1SwiftProject/;
     s/\( OS_USER_DOMAIN_NAME=\).*/\1SwiftDomain/;
     s/\( OS_USERNAME=\).*/\1'"$USERNAME"'/;
     s/\( OS_PASSWORD=\).*/\1'"$PASSWORD"'/' \
    <~/.swiftrc.juju-admin >~/.swiftrc.juju

Source this into a shell. For example:

. ~/.swiftrc.juju

You should now find that swift list works. Success! Now you can swift upload files, or just start testing whatever it was that you were actually trying to test in the first place.

This is not a setup I expect to leave running for a long time, so to tear it down again:

juju destroy-model swift

This will probably get stuck trying to remove the swift-storage unit, since nothing deals with detaching the loop device. If that happens, find the relevant device in losetup -a from another window and use losetup -d to detach it; juju destroy-model should then be able to proceed.

Credit to the Juju and LXD teams and to the maintainers of the various charms used here, as well as of course to the OpenStack folks: their work made it very much easier to put this together.

Sean Whitton: Debian Policy call for participation -- December 2018

Mon, 03/12/2018 - 8:20 PM

Here are some of the bugs against the Debian Policy Manual. Please consider getting involved.

Consensus has been reached and help is needed to write a patch

#853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai…

#874019 Note that the ’-e’ argument to x-terminal-emulator works like ’–’

#874206 allow a trailing comma in package relationship fields

#902612 Packages should not touch users’ home directories

#905453 Policy does not include a section on NEWS.Debian files

#906286 repository-format sub-policy

#907051 Say much more about vendoring of libraries

Wording proposed, awaiting review from anyone and/or seconds by DDs

#786470 [copyright-format] Add an optional “License-Grant” field

#845255 Include best practices for packaging database applications

#850156 Please firmly deprecate vendor-specific series files

#897217 Vcs-Hg should support -b too

Merged for the next release (no action needed)

#188731 Also strip .comment and .note sections

#845715 Please document that packages are not allowed to write outside thei…

#912581 Slightly relax the requirement to include verbatim copyright inform…

Gunnar Wolf: Chairing «Topics on Internet Censorship and Surveillance»

Mon, 03/12/2018 - 7:07 PM

I have been honored to be invited as a co-chair (together with Vasilis Ververis and Mario Isaakidis) for a Special Track called «Topics on Internet Censorship and Surveillance» (TICS), at The Eighteenth International Conference on Networks, which will be held in Valencia, Spain, 2019.03.24–2019.03.28, and organized under IARIA's name and umbrella.

I am reproducing here the Call for Papers. Please do note that if you are interested in participating, the relevant dates are those publicized for the Special Track (submission by 2019.01.29; notification by 2019.02.18; registration and camera-ready by 2019.02.27), not those on ICN's site.

Over the past years there has been a greater demand for online censorship and surveillance, as an understandable reaction against hate speech, copyright violations, and other cases related to citizen compliance with civil laws and regulations by national authorities. Unfortunately, this is often accompanied by a tendency of extensively censoring online content and massively spying on citizens actions. Numerous whistleblower revelations, leaks from classified documents, and a vast amount of information released by activists, researchers and journalists, reveal evidence of government-sponsored infrastructure that either goes beyond the requirements and scope of the law, or operates without any effective regulations in place. In addition, this infrastructure often supports the interests of big private corporations, such as the companies that enforce online copyright control.

TICS is a special track in the area of Internet censorship, surveillance and other adversarial burdens to technology that endanger, to a great extent, the safety (physical security and privacy) of its users.

Proposals for TICS 2019 should be situated within the field of Internet censorship, network measurements, information controls, surveillance and content moderation. Ideally, topics should connect to the following (but are not limited to):

  • Technical, social, political, and economical implications of Internet censorship and surveillance
  • Detection and analysis of network blocking and surveillance infrastructure (hardware or software)
  • Research on legal frameworks, regulations and policies that imply blocking or limitation of the availability of network services and online content
  • Online censorship circumvention and anti-surveillance practices
  • Network measurements methodologies to detect and categorize network interference
  • Research on the implications of automated or centralized user content regulation (such as for hate speech, copyright, or disinformation)

Please help me share this invitation with possible interested people!
Oh — And to make this more interesting and enticing for you, ICN will take place in the same city and just one week before the Internet Freedom Festival, the Global Unconference of the Internet Freedom Communities ☺

Julien Danjou: A multi-value syntax tree filtering in Python

Mon, 03/12/2018 - 2:29 PM

A while ago, we saw how to write a simple filtering syntax tree with Python. The idea was to provide a small abstract syntax tree with an easy-to-write data structure that would be able to filter a value. Filtering, meaning that once evaluated, our AST would return either True or False based on the passed value.

With that, we were able to write small rules like Filter({"eq": 3})(4) that would return False since, well, 4 is not equal to 3.

In this new post, I propose we enhance our filtering ability to support multiple values. The idea is to be able to write something like this:

>>> f = Filter(
...     {"and": [
...         {"eq": ("foo", 3)},
...         {"gt": ("bar", 4)},
...     ]},
... )
>>> f(foo=3, bar=5)
True
>>> f(foo=4, bar=5)
False

The biggest change here is that the binary operators (eq, gt, le, etc.) now support getting two values, and not only one, and that we can pass multiple values to our filter by using keyword arguments.

How should we implement that? Well, we can keep the same data structure we built previously. However, this time we're gonna do the following change:

  • The left value of the binary operator will be a string that is used as the key to access the keyword arguments passed to Filter.__call__.
  • The right value of the binary operator will be kept as it is (like before).

We therefore need to change our Filter.build_evaluator to accommodate this, as follows:

def build_evaluator(self, tree):
    try:
        operator, nodes = list(tree.items())[0]
    except Exception:
        raise InvalidQuery("Unable to parse tree %s" % tree)
    try:
        op = self.multiple_operators[operator]
    except KeyError:
        try:
            op = self.binary_operators[operator]
        except KeyError:
            raise InvalidQuery("Unknown operator %s" % operator)
        assert len(nodes) == 2  # binary operators take 2 values
        def _op(values):
            return op(values[nodes[0]], nodes[1])
        return _op
    # Iterate over every item in the list of the value linked
    # to the logical operator, and compile it down to its own
    # evaluator.
    elements = [self.build_evaluator(node) for node in nodes]
    return lambda values: op((e(values) for e in elements))

The algorithm is pretty much the same, the tree being browsed recursively.

First, the operator and its arguments (nodes) are extracted.

Then, if the operator takes multiple arguments (such as the and and or operators), each node is recursively evaluated and a function is returned that evaluates those nodes.
If the operator is a binary operator (such as eq, lt, etc.), it checks that the passed argument list has length 2. Then, it returns a function that will apply the operator (e.g., operator.eq) to values[nodes[0]] and nodes[1]: the former accesses the arguments (values) passed to the filter's __call__ method, while the latter is used directly as the value from the filter tree.

The full class looks like this:

import operator


class InvalidQuery(Exception):
    pass


class Filter(object):
    binary_operators = {
        u"=": operator.eq,
        u"==": operator.eq,
        u"eq": operator.eq,
        u"<": operator.lt,
        u"lt": operator.lt,
        u">": operator.gt,
        u"gt": operator.gt,
        u"<=": operator.le,
        u"≤": operator.le,
        u"le": operator.le,
        u">=": operator.ge,
        u"≥": operator.ge,
        u"ge": operator.ge,
        u"!=": operator.ne,
        u"≠": operator.ne,
        u"ne": operator.ne,
    }

    multiple_operators = {
        u"or": any,
        u"∨": any,
        u"and": all,
        u"∧": all,
    }

    def __init__(self, tree):
        self._eval = self.build_evaluator(tree)

    def __call__(self, **kwargs):
        return self._eval(kwargs)

    def build_evaluator(self, tree):
        try:
            operator, nodes = list(tree.items())[0]
        except Exception:
            raise InvalidQuery("Unable to parse tree %s" % tree)
        try:
            op = self.multiple_operators[operator]
        except KeyError:
            try:
                op = self.binary_operators[operator]
            except KeyError:
                raise InvalidQuery("Unknown operator %s" % operator)
            assert len(nodes) == 2  # binary operators take 2 values
            def _op(values):
                return op(values[nodes[0]], nodes[1])
            return _op
        # Iterate over every item in the list of the value linked
        # to the logical operator, and compile it down to its own
        # evaluator.
        elements = [self.build_evaluator(node) for node in nodes]
        return lambda values: op((e(values) for e in elements))

We can check that it works by building some filters:

x = Filter({"eq": ("foo", 1)})
assert x(foo=1, bar=1)

x = Filter({"eq": ("foo", "bar")})
assert not x(foo=1, bar=1)

x = Filter({"or": (
    {"eq": ("foo", "bar")},
    {"eq": ("bar", 1)},
)})
assert x(foo=1, bar=1)

Supporting multiple values is handy as it allows passing complete dictionaries to the filter, rather than just one value. That enables users to filter more complex objects.

Sub-dictionary support

It's also possible to support deeper data structures, like dictionaries of dictionaries. By replacing values[nodes[0]] with self._resolve_name(values, nodes[0]), using a _resolve_name method like this one, the filter is able to traverse dictionaries:

ATTR_SEPARATOR = "."

def _resolve_name(self, values, name):
    try:
        for subname in name.split(self.ATTR_SEPARATOR):
            values = values[subname]
        return values
    except KeyError:
        raise InvalidQuery("Unknown attribute %s" % name)

It then works like that:

x = Filter({"eq": ("baz.sub", 23)})
assert x(foo=1, bar=1, baz={"sub": 23})

x = Filter({"eq": ("baz.sub", 23)})
assert not x(foo=1, bar=1, baz={"sub": 3})

By using the syntax key.subkey.subsubkey, the filter is able to access items inside dictionaries in more complex data structures.

That basic filter engine can evolve quite easily into something powerful, as you can add new operators or new ways to access and manipulate the passed data structure.
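
For example, a contains operator can be added simply by extending the operator table of the Filter class shown above (a sketch; contains is not part of the original class):

import operator

class ExtendedFilter(Filter):
    # Reuse everything from Filter; only the operator table grows.
    binary_operators = dict(Filter.binary_operators,
                            **{u"contains": operator.contains})

f = ExtendedFilter({"contains": ("tags", "admin")})
assert f(tags=["admin", "staff"], name="alice")
assert not f(tags=["staff"], name="bob")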

If you have other ideas on nifty features that could be added, feel free to add a comment below!

Joachim Breitner: Sliding Right into Information Theory

Mon, 03/12/2018 - 10:56 AM

It's hardly news any more, but it seems I have not blogged about my involvement last year with an interesting cryptanalysis project, which resulted in the publication Sliding right into disaster: Left-to-right sliding windows leak by Daniel J. Bernstein, me, Daniel Genkin, Leon Groot Bruinderink, Nadia Heninger, Tanja Lange, Christine van Vredendaal and Yuval Yarom, which was published at CHES 2017 and on ePrint (ePrint is the cryptographer’s version of arXiv).

This project nicely touched upon many fields of computer science: First we need systems expertise to mount a side-channel attack that uses cache timing differences to observe which line of a square-and-multiply algorithm the target process is executing. Then we need algorithm analysis to learn, from these observations, partial information about the bits of the private key. This part includes nice PL-y concepts like rewrite rules (see Section 3.2). Once we know enough about the secret keys, we can use fancy cryptography to recover the whole secret key (Section 3.4). And finally, some theoretical questions arise, such as: “How much information do we need for the attack to succeed?” and “Do we obtain this much information?”, and we need some nice math and information theory to answer these.

Initially, I focused on the PL-related concepts. We programming language people are yak-shavers, and in particular “rewrite rules” just demands the creation of a DSL to express them, and an interpreter to execute them, doesn’t it? But it turned out that these rules are actually not necessary, as the key recovery can use the side-channel observation directly, as we found out later (see Section 4 of the paper). But now I was already hooked, and turned towards the theoretical questions mentioned above.

Shannon vs. Rényi

It felt good to shake the dust of some of the probability theory that I learned for my maths degree, and I also learned some new stuff. For example, it was intuitively clear that whether the attack succeeds depends on the amount of information obtained by the side channel attack, and based on prior work, the expectation was that if we know more than half the bits, then the attack would succeed. Note that for this purpose, two known “half bits” are as good as knowing one full bit; for example knowing that the secret key is either 01 or 11 (one bit known for sure) is just as good as knowing that the key is either 00 or 11.

Clearly, this is related to entropy somehow -- but how? Trying to prove that the attack works if the entropy rate of the leak is >0.5 just did not work, against all intuition. But when we started with a formula that describes when the attack succeeds, and then simplified it, we found a condition that looked suspiciously like what we wanted, namely H > 0.5, only that H was not the conventional entropy (also known as the Shannon entropy, H = −∑p ⋅ log p), but rather something else: H = −log ∑p², which turned out to be called the collision entropy or Rényi entropy (of order 2).
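
To make the two quantities concrete, here is a small computation of both (a sketch using base-2 logarithms; the distributions are just made-up examples, not data from the paper):

import math

def shannon_entropy(ps):
    # H = -sum(p * log2(p)), skipping zero-probability outcomes
    return -sum(p * math.log2(p) for p in ps if p > 0)

def collision_entropy(ps):
    # Rényi entropy of order 2: H2 = -log2(sum(p^2))
    return -math.log2(sum(p * p for p in ps))

known_bit = [0.0, 0.5, 0.0, 0.5]       # key is either 01 or 11
print(shannon_entropy(known_bit))      # 1.0 bit of remaining uncertainty
print(collision_entropy(known_bit))    # also 1.0 for this distribution

skewed = [0.7, 0.1, 0.1, 0.1]
print(shannon_entropy(skewed))         # ~1.357
print(collision_entropy(skewed))       # ~0.943, never larger than the Shannon entropy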

This resulted in Theorem 3 in the paper, and it neatly answers the question of when the Heninger and Shacham key recovery algorithm, extended to partial information, can be expected to succeed, in a much more general setting than just this particular side-channel attack.

Markov chains and an information theoretical spin-off

The other theoretical question is now: Why does this particular side-channel attack succeed, i.e. why is the entropy rate H > 0.5? As so often, Markov chains are an immensely powerful tool to answer that question. After some transformations, I managed to model the state of the square-and-multiply algorithm, together with the side-channel leak, as a Markov chain with a hidden state. Now I just had to calculate its Rényi entropy rate, right? I wrote some Haskell code to do this transformation, and also came up with an ad-hoc, intuitive way of calculating the rate. So when it was time to write up the paper, I was searching for a reference that describes the algorithm that I was using…

Only I could find none! I contacted researchers who have published related to Markov chains and entropies, but they just referred me in circles, until one of them, Maciej Skórski responded. Our conversation, highly condensed, went like this: “Nice idea, but it can’t be right, it would solve problem X” – “Hmm, but it feels so right. Here is a proof sketch.” – “Oh, indeed, cool. I can even generalize this! Let’s write a paper”. Which we did! Analytic Formulas for Renyi Entropy of Hidden Markov Models (preprint only, it is still under submission).

More details

Because I joined the sliding-right project late, not all my contributions made it into the actual paper, and therefore I published an “inofficial appendix” separately on ePrint. It contains

  1. an alternative way to find the definitively knowable bits of the secret exponent, which is complete and can (in rare corner cases) find more bits than the rewrite rules in Section 3.1
  2. an algorithm to calculate the collision entropy H, including how to model a side-channel attack like this one as a Markov chain, and how to calculate the entropy of such a Markov chain, and
  3. the proof of Theorem 3.

I also published the Haskell code that I wrote for this project, including the Markov chain collision entropy stuff. It is not written with public consumption in mind, but feel free to ask if you have questions about it.

Note that all errors, typos and irrelevancies in that document and the code are purely mine and not of any of the other authors of the sliding-right paper. I’d like to thank my coauthors for the opportunity to join this project.
