Planet Debian

Sean Whitton: I'm going to DebCamp17, Montréal, Canada

Sat, 22/07/2017 - 11:38pm

Here’s what I’m planning to work on – please get in touch if you want to get involved with any of these items. In rough order of priority/likelihood of completion:

  • Debian Policy sprint

  • conversations about using git for Debian packaging, especially with regard to dgit

    • writing up or following up on feature requests
  • Emacs team sprint

    • talking to maintainers about transitioning their packages to use dh-elpa
  • submitting and revising patch series to dgit

  • writing a test suite for git-remote-gcrypt

Niels Thykier: Improving bulk performance in debhelper

Sat, 22/07/2017 - 1:45pm

Since debhelper/10.3, there have been a number of performance-related changes.  The vast majority primarily improve bulk performance or only have visible effects at larger “input” sizes.

Most visible cases are:
  • dh + dh_* now scale a lot better for a large number of binary packages.  Even more so with parallel builds.
  • Most dh_* tools are now a lot faster when creating many directories or installing files.
  • dh_prep and dh_clean now bulk their removals.
  • dh_install can now bulk some installations.  For a concrete corner case, libssl-doc went from approximately 11 seconds to less than a second.  This optimization is implicitly disabled with --exclude (among others).
  • dh_installman now scales a lot better with many manpages.  Even more so with parallel builds.
  • dh_installman has restored its performance under fakeroot (a regression since 10.2.2).

 

For debhelper, this mostly involved (a short illustration follows the list):
  • avoiding fork+exec of commands for things doable natively in Perl, especially when each fork+exec only processes one file or dir.
  • bulking as many files/dirs into the call as possible, where fork+exec is still used.
  • caching / memoizing slow calls (e.g. in parts of pkgfile inside Dh_Lib)
  • adding an internal API for dh to do bulk checks for pkgfiles. This is useful for dh when checking if it should optimize out a helper.
  • and, of course, doing things in parallel where trivially possible.
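The bulking idea is easy to see outside of debhelper. Below is a minimal sketch, in Python rather than debhelper's Perl and with made-up file names, contrasting one fork+exec of install(1) per file with a single bulked invocation:

import subprocess

files = ["a.1", "b.1", "c.1"]                  # stand-in list of files to install
dest = "debian/pkg/usr/share/man/man1"         # hypothetical destination directory

# Per-file pattern: one fork+exec of install(1) for every single file.
for f in files:
    subprocess.run(["install", "-m", "0644", f, dest], check=True)

# Bulked pattern: a single fork+exec handles the whole list at once.
subprocess.run(["install", "-m", "0644", *files, dest], check=True)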

 

How to take advantage of these improvements in tools that use Dh_Lib:
  • If you use install_{file,prog,lib,dir}, then this works out of the box.  These functions are available in Debian/stable.  On a related note, if you use “doit” to call “install” (or “mkdir”), then please consider migrating to these functions instead.
  • If you need to reset owner+mode (chown 0:0 FILE + chmod MODE FILE), consider using reset_perm_and_owner.  This is also available in Debian/stable.
    • CAVEAT: It is not recursive and YMMV if you do not need the chown call (due to fakeroot).
  • If you have a lot of items to be processed by an external tool, consider using xargs().  Since 10.5.1, it is now possible to insert the items anywhere in the command rather than just at the end.
  • If you need to remove files, consider using the new rm_files function.  It removes files and silently ignores files that do not exist. It is also available since 10.5.1.
  • If you need to create symlinks, please consider using make_symlink (available in Debian/stable) or make_symlink_raw_target (since 10.5.1).  The former creates policy compliant symlinks (e.g. fixup absolute symlinks that should have been relative).  The latter is closer to a “ln -s” call.
  • If you need to rename a file, please consider using rename_path (since 10.5).  It behaves mostly like “mv -f” but requires dest to be a (non-existing) file.
  • Have a look at whether on_pkgs_in_parallel() / on_items_in_parallel() would be suitable for enabling parallelization in your tool.
    • The emphasis for these functions is on making parallelization easy to add with minimal code changes.  They pre-distribute the items, which can lead to unbalanced workloads where some processes are idle while a few keep working (see the sketch below).
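As a rough illustration of that pre-distribution strategy (again in Python rather than Dh_Lib's Perl, and not the real on_pkgs_in_parallel/on_items_in_parallel implementation), each worker is handed a fixed slice of the item list up front, so a slice full of slow items leaves the other workers idle:

from multiprocessing import Process

def process_item(item):
    print("processing", item)        # stand-in for the real per-item work

def worker(chunk):
    for item in chunk:
        process_item(item)

def run_items_in_parallel(items, nworkers=4):
    # Pre-distribute: worker i gets items i, i+nworkers, i+2*nworkers, ...
    chunks = [items[i::nworkers] for i in range(nworkers)]
    procs = [Process(target=worker, args=(c,)) for c in chunks if c]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

if __name__ == "__main__":
    run_items_in_parallel(["man/a.1", "man/b.1", "man/c.1", "man/d.1"])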
Credits:

I would like to thank the following for reporting performance issues or regressions, and/or providing patches.  The list is in no particular order:

  • Helmut Grohne
  • Kurt Roeckx
  • Gianfranco Costamagna
  • Iain Lane
  • Sven Joachim
  • Adrian Bunk
  • Michael Stapelberg

Should I have missed your contribution, please do not hesitate to let me know.

 


Filed under: Debhelper, Debian

Junichi Uekawa: asterisk fails to start on my raspberry pi.

Sat, 22/07/2017 - 10:02am
asterisk fails to start on my raspberry pi. I don't quite understand what the error message is but systemctl tells me there was a timeout. Don't know which timeout it hits.

Michal Čihař: Making Weblate more secure and robust

Fri, 21/07/2017 - 12:00pm

Having a publicly running web application always brings challenges in terms of security and, more generally, in handling untrusted data. Security-wise Weblate has always been quite good (mostly thanks to using Django, which comes with built-in protection against many vulnerabilities), but there were always things to improve in input validation or possible information leaks.

When Weblate joined HackerOne (see our first month experience with it), I was hoping to get some security-driven code review, but apparently most people there are focused on black-box testing. I can certainly understand that - it's easier to conduct and you need much less knowledge of the tested website to perform it.

One big area where reports against Weblate came in was authentication. Originally we were mostly relying on the default authentication pipeline coming with Python Social Auth, but that showed some possible security implications, and we ended up with a heavily customized authentication pipeline to avoid several risks. Some patches were submitted back, some issues reported, but we've still diverged quite a lot in this area.

The second area where scanning was apparently performed, but almost no reports came in, was input validation. Thanks to the excellent XSS protection in Django nothing was really found. On the other hand, this triggered several internal server errors on our side. At this point I was really happy to have Rollbar configured to track all errors happening in production. Thanks to having all such errors properly recorded and grouped, it was really easy to go through them and fix them in our codebase.

Most of the related fixes have landed in Weblate 2.14 and 2.15, but obviously this is an ongoing effort to make Weblate better with every release.

Filed under: Debian English SUSE Weblate

Gunnar Wolf: Hey, everybody, come share the joy of work!

Thu, 20/07/2017 - 7:17am

I got several interesting and useful replies, both via the blog and by personal email, to my two previous posts where I mentioned I would be starting a translation of the Made With Creative Commons book. It is my pleasure to say: Welcome everybody, come and share the joy of work!

Some weeks ago, our project was accepted as part of Hosted Weblate, lowering the bar for any interested potential contributor. So, whoever wants to be a part of this: You just have to log in to Weblate (or create an account if needed), and start working!

What is our current status? Amazingly better than anything I expected: Not only have we made great progress in Spanish, reaching >28% of translated source strings, but other people have also started translating into Norwegian Bokmål (hi Petter!) and Dutch (hats off to Heimen Stoffels!). So far, Spanish (where Leo Arias and I are working) is the most active, but anything can happen.

I still want to work a bit on the initial, pre-po4a text filtering, as there are a small number of issues to fix. But they are few and easy to spot, and your translations will not be hampered much while I solve the missing pieces.

So, go ahead and get to work! :-D Oh, and if you translate sizeable amounts of work into Spanish: As my university wants to publish (in paper) the resulting works, we would be most grateful if you can fill in the (needless! But still, they ask me to do this...) authorization for your work to be a part of a printed book.

Benjamin Mako Hill: Testing Our Theories About “Eternal September”

Thu, 20/07/2017 - 2:12am
Graph of subscribers and moderators over time in /r/NoSleep. The image is taken from our 2016 CHI paper.

Last year at CHI 2016, my research group published a qualitative study examining the effects of a large influx of newcomers to the /r/nosleep online community in Reddit. Our study began with the observation that most research on sustained waves of newcomers focuses on the destructive effect of newcomers and frequently invokes Usenet’s infamous “Eternal September.” Our qualitative study argued that the /r/nosleep community managed its surge of newcomers gracefully through strategic preparation by moderators, technological systems to rein in norm violations, and a shared sense among participants of protecting the community’s immersive environment.

We are thrilled that, less than a year after the publication of our study, Zhiyuan “Jerry” Lin and a group of researchers at Stanford have published a quantitative test of our study’s findings! Lin analyzed 45 million comments and upvote patterns from 10 Reddit communities that experienced a massive inundation of newcomers like the one we studied on /r/nosleep. Lin’s group found that these communities retained their quality despite a slight dip in the initial growth period.

Our team discussed doing a quantitative study like Lin’s at some length, and our paper ends with a lament that our findings merely reflected “propositions for testing in future work.” Lin’s study provides exactly such a test! Lin et al.’s results suggest that our qualitative findings generalize and that a sustained influx of newcomers need not doom a community to a descent into an “Eternal September.” Through strong moderation and the use of a voting system, the subreddits analyzed by Lin appear to retain their identities despite the surge of new users.

There are always limits to research projects, both quantitative and qualitative. We think Lin’s paper complements ours beautifully, we are excited that Lin built on our work, and we’re thrilled that our propositions seem to have held up!

This blog post was written with Charlie Kiene. Our paper about /r/nosleep, written with Charlie Kiene and Andrés Monroy-Hernández, was published in the Proceedings of CHI 2016 and is released as open access. Lin’s paper was published in the Proceedings of ICWSM 2017 and is also available online.

Dirk Eddelbuettel: RcppAPT 0.0.4

Wed, 19/07/2017 - 2:12pm

A new version of RcppAPT -- our interface from R to the C++ library behind the awesome apt, apt-get, apt-cache, ... commands and their cache powering Debian, Ubuntu and the like -- arrived on CRAN yesterday.

We added a few more functions in order to compute on the package graph. A concrete example is shown in this vignette which determines the (minimal) set of remaining Debian packages requiring a rebuild under R 3.4.* to update their .C() and .Fortran() registration code. It has been used for the binNMU request #868558.

As we also added a NEWS file, its (complete) content covering all releases follows below.

Changes in version 0.0.4 (2017-07-16)
  • New function getDepends

  • New function reverseDepends

  • Added package registration code

  • Added usage examples in scripts directory

  • Added vignette, also in docs as rendered copy

Changes in version 0.0.3 (2016-12-07)
  • Added dumpPackages, showSrc
Changes in version 0.0.2 (2016-04-04)
  • Added reverseDepends, dumpPackages, showSrc
Changes in version 0.0.1 (2015-02-20)
  • Initial version with getPackages and hasPackages

A bit more information about the package is available here as well as at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Lars Wirzenius: Dropping Yakking from Planet Debian

Wed, 19/07/2017 - 7:54am

A couple of people objected to having Yakking on Planet Debian, so I've removed it.

Daniel Silverstone: Yay, finished my degree at last

Tue, 18/07/2017 - 11:56pm

A little while back, in June, I sat my last exam for what I hoped would be the last module in my degree. For seven years, I've been working on a degree with the Open University and have been taking advantage of the opportunity to have a somewhat self-directed course load by taking the 'Open' degree track. When asked why I bothered to do this, I guess my answer has been a little varied. In principle it's because I felt like I'd already done a year's worth of degree and didn't want it wasted, but it's also because I have been, in the dim and distant past, overlooked for jobs simply because I had no degree and thus was an easy "bin the CV".

Fed up with this, I decided to commit to the Open University and thus began my journey toward 'qualification' in 2010. I started by transferring the level 1 credits from my stint at UCL back in 1998/1999 which were in a combination of basic programming in Java, some mathematics including things like RSA, and some psychology and AI courses which at the time were aiming at a degree called 'Computer Science with Cognitive Sciences'.

Then I took level 2 courses, M263 (Building blocks of software), TA212 (The technology of music) and MS221 (Exploring mathematics). I really enjoyed the mathematics course and so...

At level 3 I took MT365 (Graphs, networks and design), M362 (Developing concurrent distributed systems), TM351 (Data management and analysis - which I ended up hating), and finally finishing this June with TM355 (Communications technology).

I received an email this evening telling me the module result for TM355 had been posted, and I logged in to find I had done well enough to be offered my degree. I could have claimed my degree 18+ months ago, but I persevered through another two courses in order to qualify for an honours degree, which I have now been awarded. Since I don't particularly fancy any ceremonial awarding, I just went through the clicky clicky and accepted my qualification of 'Bachelor of Science (Honours) Open, Upper Second-class Honours (2.1)' which grants me the letters 'BSc (Hons) Open (Open)' which, knowing me, will likely never even make it onto my CV because I'm too lazy.

It has been a significant effort, over the course of the past few years, to complete a degree without giving up too much of my personal commitments. In addition to earning the degree, I have worked, for six of the seven years it has taken, for Codethink doing interesting work in and around Linux systems and Trustable software. I have designed and built Git server software which is in use in some universities, and many companies, along with a good few of my F/LOSS colleagues. And I've still managed to find time to attend plays, watch films, read an average of 2 novel-length stories a week (some of which were even real books), and be a member of the Manchester Hackspace.

Right now, I'm looking forward to a stress free couple of weeks, followed by an immense amount of fun at Debconf17 in Montréal!

Foteini Tsiami: Internationalization, part three

Tue, 18/07/2017 - 12:18pm

The first builds of the LTSP Manager were uploaded and ready for testing. Testing involves installing or purging the ltsp-manager package, along with its dependencies, and using its GUI to configure LTSP, create users, groups, shared folders etc. Obviously, those tasks are better done on a clean system. And the question that emerges is: how can we start from a clean state, without having to reinstall the operating system each time?

My mentors pointed me to an answer for that: VirtualBox snapshots. VirtualBox is a virtualization application (others are KVM or VMware) that allows users to install an operating system like Debian in a contained environment inside their host operating system. It comes with an easy to use GUI, and supports snapshots, which are points in time where we mark the guest operating system state, and can revert to that state later on.

So I started by installing Debian Stretch with the MATE desktop environment in VirtualBox, and I took a snapshot immediately after the installation. Now whenever I want to test LTSP Manager, I revert to that snapshot, and that way I have a clean system where I can properly check the installation procedure and all of its features!
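A minimal sketch of that revert-and-test loop, assuming a VM named ltsp-test with a snapshot called clean-install (both names made up for this example) and driving VirtualBox through VBoxManage from Python (the VM needs to be powered off before the restore):

import subprocess

VM = "ltsp-test"              # hypothetical VM name
SNAPSHOT = "clean-install"    # hypothetical snapshot taken right after installation

def vbox(*args):
    subprocess.run(["VBoxManage", *args], check=True)

def fresh_vm_for_testing():
    vbox("snapshot", VM, "restore", SNAPSHOT)    # revert to the clean system
    vbox("startvm", VM, "--type", "headless")    # boot it for the next test run

fresh_vm_for_testing()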


Reproducible builds folks: Reproducible Builds: week 116 in Stretch cycle

Tue, 18/07/2017 - 9:29am

Here's what happened in the Reproducible Builds effort between Sunday July 9 and Saturday July 15 2017:

Packages reviewed and fixed, and bugs filed Reviews of unreproducible packages

13 package reviews have been added, 12 have been updated and 19 have been removed this week, adding to our knowledge about identified issues.

2 issue types have been added:

3 issue types have been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (47)
diffoscope development

Version 84 was uploaded to unstable by Mattia Rizzolo. It included contributions already reported from the previous weeks, as well as new ones:

After the release, development continued in git with contributions from:

strip-nondeterminism development

Versions 0.036-1, 0.037-1 and 0.038-1 were uploaded to unstable by Chris Lamb. They included contributions from:

reprotest development

Development continued in git with contributions from:

buildinfo.debian.net development tests.reproducible-builds.org
  • Mattia Rizzolo:
    • Make database backups quicker to restore by avoiding pg_dump's --column-inserts option.
    • Fixup the deployment scripts after the stretch migration.
    • Fixup Apache redirects that were broken after introducing the buster suite
    • Fixup diffoscope jobs that were not always installing the highest possible version of diffoscope
  • Holger Levsen:
    • Add a node health check for a too big jenkins.log.
Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Mattia Rizzolo, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Matthew Garrett: Avoiding TPM PCR fragility using Secure Boot

Tue, 18/07/2017 - 8:48am
In measured boot, each component of the boot process is "measured" (ie, hashed and that hash recorded) in a register in the Trusted Platform Module (TPM) built into the system. The TPM has several different registers (Platform Configuration Registers, or PCRs) which are typically used for different purposes - for instance, PCR0 contains measurements of various system firmware components, PCR2 contains any option ROMs, PCR4 contains information about the partition table and the bootloader. The allocation of these is defined by the PC Client working group of the Trusted Computing Group. However, once the boot loader takes over, we're outside the spec[1].

One important thing to note here is that the TPM doesn't actually have any ability to directly interfere with the boot process. If you try to boot modified code on a system, the TPM will contain different measurements but boot will still succeed. What the TPM can do is refuse to hand over secrets unless the measurements are correct. This allows for configurations where your disk encryption key can be stored in the TPM and then handed over automatically if the measurements are unaltered. If anybody interferes with your boot process then the measurements will be different, the TPM will refuse to hand over the key, your disk will remain encrypted and whoever's trying to compromise your machine will be sad.

The problem here is that a lot of things can affect the measurements. Upgrading your bootloader or kernel will do so. At that point if you reboot your disk fails to unlock and you become unhappy. To get around this your update system needs to notice that a new component is about to be installed, generate the new expected hashes and re-seal the secret to the TPM using the new hashes. If there are several different points in the update where this can happen, this can quite easily go wrong. And if it goes wrong, you're back to being unhappy.

Is there a way to improve this? Surprisingly, the answer is "yes" and the people to thank are Microsoft. Appendix A of a basically entirely unrelated spec defines a mechanism for storing the UEFI Secure Boot policy and used keys in PCR 7 of the TPM. The idea here is that you trust your OS vendor (since otherwise they could just backdoor your system anyway), so anything signed by your OS vendor is acceptable. If someone tries to boot something signed by a different vendor then PCR 7 will be different. If someone disables secure boot, PCR 7 will be different. If you upgrade your bootloader or kernel, PCR 7 will be the same. This simplifies things significantly.

I've put together a (not well-tested) patchset for Shim that adds support for including Shim's measurements in PCR 7. In conjunction with appropriate firmware, it should then be straightforward to seal secrets to PCR 7 and not worry about things breaking over system updates. This makes tying things like disk encryption keys to the TPM much more reasonable.

However, there's still one pretty major problem, which is that the initramfs (ie, the component responsible for setting up the disk encryption in the first place) isn't signed and isn't included in PCR 7[2]. An attacker can simply modify it to stash any TPM-backed secrets or mount the encrypted filesystem and then drop to a root prompt. This, uh, reduces the utility of the entire exercise.

The simplest solution to this that I've come up with depends on how Linux implements initramfs files. In its simplest form, an initramfs is just a cpio archive. In its slightly more complicated form, it's a compressed cpio archive. And in its peak form of evolution, it's a series of compressed cpio archives concatenated together. As the kernel reads each one in turn, it extracts it over the previous ones. That means that any files in the final archive will overwrite files of the same name in previous archives.
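As an aside, that concatenation really is just appending bytes. The following is only an illustration of the mechanism with placeholder file names, not shim's or any bootloader's actual code:

def append_initramfs(user_initramfs, extra_initramfs, combined):
    # The kernel unpacks each cpio archive in turn, so files in the appended
    # archive overwrite same-named files from the earlier ones.
    with open(combined, "wb") as out:
        for part in (user_initramfs, extra_initramfs):
            with open(part, "rb") as f:
                out.write(f.read())

append_initramfs("initrd.img", "tpm-secrets.img", "initrd.combined.img")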

My proposal is to generate a small initramfs whose sole job is to get secrets from the TPM and stash them in the kernel keyring, and then measure an additional value into PCR 7 in order to ensure that the secrets can't be obtained again. Later disk encryption setup will then be able to set up dm-crypt using the secret already stored within the kernel. This small initramfs will be built into the signed kernel image, and the bootloader will be responsible for appending it to the end of any user-provided initramfs. This means that the TPM will only grant access to the secrets while trustworthy code is running - once the secret is in the kernel it will only be available for in-kernel use, and once PCR 7 has been modified the TPM won't give it to anyone else. A similar approach for some kernel command-line arguments (the kernel, module-init-tools and systemd all interpret the kernel command line left-to-right, with later arguments overriding earlier ones) would make it possible to ensure that certain kernel configuration options (such as the iommu) weren't overridable by an attacker.

There's obviously a few things that have to be done here (standardise how to embed such an initramfs in the kernel image, ensure that luks knows how to use the kernel keyring, teach all relevant bootloaders how to handle these images), but overall this should make it practical to use PCR 7 as a mechanism for supporting TPM-backed disk encryption secrets on Linux without introducing a huge support burden in the process.

[1] The patchset I've posted to add measured boot support to Grub uses PCRs 8 and 9 to measure various components during the boot process, but other bootloaders may have different policies.

[2] This is because most Linux systems generate the initramfs locally rather than shipping it pre-built. It may also get rebuilt on various userspace updates, even if the kernel hasn't changed. Including it in PCR 7 would entirely break the fragility guarantees and defeat the point of all of this.


Jonathan McDowell: Just because you can, doesn't mean you should

Mon, 17/07/2017 - 8:41pm

There was a recent Cryptoparty Belfast event that was aimed at a wider audience than usual; rather than concentrating on how to protect oneself on the internet, the 3 speakers concentrated more on why you might want to. As seems to be the way these days, I was asked to say a few words about the intersection of technology and the law. I think people were most interested in all the gadgets on show at the end, but I hope they got something out of my talk. It was a very high-level overview of some of the issues around the Investigatory Powers Act - if you’re familiar with it then I’m not adding anything new here, just trying to provide some detail about why it’s a bad thing from both a technological and a legal perspective.

Download

Steinar H. Gunderson: Solskogen 2017: Nageru all the things

Mon, 17/07/2017 - 5:47pm

Solskogen 2017 is over! What a blast that was; I especially enjoyed that so many old-timers came back to visit, it really made the party for me.

This was the first year we were using Nageru for not only the stream but also for the bigscreen mix, and I was very relieved to see the lack of problems; I've had nightmares about crashes with 150+ people watching (plus 200-ish more on stream), but there were no crashes and hardly a dropped frame. The transition to a real mixing solution as well as from HDMI to SDI everywhere gave us a lot of new opportunities, which allowed a number of creative setups, some of them cobbled together on-the-spot:

  • Nageru with two cameras, of which one was through an HDMI-to-SDI converter battery-powered from a 20000 mAh powerbank (and sent through three extended SDI cables in series): Live music compo (with some, er, interesting entries).
  • 1080p60 bigscreen Nageru with two computer inputs (one of them through a scaler) and CasparCG graphics run from an SQL database, sent on to a 720p60 mixer Nageru (SDI pass-through from the bigscreen) with two cameras mixed in: Live graphics compo
  • Bigscreen Nageru switching from 1080p50 to 1080p60 live (and stream between 720p50 and 720p60 correspondingly), running C64 inputs from the Framemeister scaler: combined intro compo
  • And finally, it's Nageru all the way down: A camera run through a long extended SDI cable to a laptop running Nageru, streamed over TCP to a computer running VLC, input over SDI to bigscreen Nageru and sent on to streamer Nageru: Outdoor DJ set/street basket compo (granted, that one didn't run entirely smoothly, and you can occasionally see Windows device popups :-) )

It's been a lot of fun, but also a lot of work. And work will continue for an even better show next year… after some sleep. :-)

Jose M. Calhariz: Crossgrading a complex Desktop and Debian Developer machine running Debian 9

Sun, 16/07/2017 - 6:49pm

This article is an experiment in progress, please recheck, while I am updating with the new information.

I have a very old installation of Debian, possibly since v2 (I do not remember), that I have upgraded since then both in software and hardware. Now the hardware is 64-bit and runs a 64-bit kernel, but the runtime is still 32-bit. For 99% of tasks this is very good. Now that I have run many simulations, I may have found a solution to crossgrade my desktop. I write here the tentative procedure, and I will update it with more ideas on the problems that I find.

First you need to install a 64bits kernel and boot with it. See my previous post on how to do it.

Second, you need to bootstrap the crossgrade and install all the libs as amd64:

apt-get update
apt-get upgrade
apt-get clean
dpkg --list > original.dpkg
apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
cd /var/cache/apt/archives/
dpkg --install dpkg_*amd64.deb tar_*amd64.deb apt_*amd64.deb bash_*amd64.deb dash_*amd64.deb *.deb
dpkg --configure --pending
dpkg -i --skip-same-version dpkg_*_amd64.deb apt_*_amd64.deb bash_*_amd64.deb dash_*_amd64.deb mawk_*_amd64.deb *.deb
for pack32 in $(grep i386 original.dpkg | egrep "^ii " | awk '{print $2}' ) ; do
    echo $pack32
    if dpkg --status $pack32 | grep -q "Multi-Arch: same" ; then
        apt-get --download-only install -y --allow-remove-essential ${pack32%:i386}:amd64
    fi
done
dpkg --install /var/cache/apt/archives/*_amd64.deb
dpkg --install /var/cache/apt/archives/*_amd64.deb
dpkg --print-architecture
dpkg --print-foreign-architectures

But this procedure does not prevent "apt-get install" from ending up with broken dependencies.

So the next attempt is to install the core packages and the libraries using "dpkg -i".

apt-get update
apt-get upgrade
apt-get autoremove
apt-get clean
dpkg --list > original.dpkg
apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
for pack32 in $(grep i386 original.dpkg | egrep "^ii " | awk '{print $2}' ) ; do
    echo $pack32
    if dpkg --status $pack32 | grep -q "Multi-Arch: same" ; then
        apt-get --download-only install -y --allow-remove-essential ${pack32%:i386}:amd64
    fi
done
cd /var/cache/apt/archives/
dpkg --install dpkg_*amd64.deb tar_*amd64.deb apt_*amd64.deb bash_*amd64.deb dash_*amd64.deb *.deb
dpkg --configure --pending
dpkg --install --skip-same-version dpkg_*_amd64.deb apt_*_amd64.deb bash_*_amd64.deb dash_*_amd64.deb mawk_*_amd64.deb *.deb
dpkg --remove libcurl4-openssl-dev
dpkg -i libcurl4-openssl-dev_*_amd64.deb

Remove packages until there are no broken packages left.

dpkg --print-architecture
dpkg --print-foreign-architectures
apt-get --fix-broken --allow-remove-essential install

Still broken, because apt-get removed dpkg

So instead of only installing the libs with dpkg -i, I am going to try to install all the packages with dpkg -i:

apt-get update
apt-get upgrade
apt-get autoremove
apt-get clean
dpkg --list > original.dpkg
apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
for pack32 in $(grep i386 original.dpkg | egrep "^ii " | awk '{print $2}' ) ; do
    echo $pack32
    apt-get --download-only install -y --allow-remove-essential ${pack32%:i386}:amd64
done
cd /var/cache/apt/archives/
dpkg --install dpkg_*amd64.deb tar_*amd64.deb apt_*amd64.deb bash_*amd64.deb dash_*amd64.deb *.deb
dpkg --configure --pending
dpkg --install --skip-same-version dpkg_*_amd64.deb apt_*_amd64.deb bash_*_amd64.deb dash_*_amd64.deb mawk_*_amd64.deb *.deb
dpkg --configure --pending

Remove packages and reinstall selected packages until you have fixed all of them. Here is the trail for my machine:

dpkg --remove rkhunter
dpkg --remove libmarco-private1:i386 marco mate-control-center mate-desktop-environment-core mate-desktop-environment-core mate-desktop-environment mate-desktop-environment-core mate-desktop-environment-extras
dpkg --remove libmate-menu2:i386 libmate-window-settings1:i386 mate-panel mate-screensaver python-mate-menu libmate-slab0:i386 mozo mate-menus
dpkg --remove libmate-menu2:i386 mate-panel python-mate-menu mate-applets mate-menus
dpkg -i libmate-menu2_1.16.0-2_amd64.deb
dpkg --remove gir1.2-ibus-1.0:i386 gnome-shell gnome-shell-extensions gdm3 gnome-session
dpkg --remove gir1.2-ibus-1.0:i386
dpkg --remove libmateweather1:i386
dpkg -i libmateweather1_1.16.1-2_amd64.deb
apt-get --fix-broken --download-only install
dpkg --skip-same-version --install dpkg_*amd64.deb tar_*amd64.deb apt_*amd64.deb bash_*amd64.deb dash_*amd64.deb *.deb
dpkg --configure --pending
dpkg -i python_2.7.13-2_amd64.deb
dpkg --configure --pending
dpkg -i perl_5.24.1-3+deb9u1_amd64.deb perl-base_5.24.1-3+deb9u1_amd64.deb
dpkg -i exim4-daemon-light_4.89-2+deb9u1_amd64.deb exim4-base_4.89-2+deb9u1_amd64.deb
dpkg -i libuuid-perl_0.27-1_amd64.deb
dpkg --configure --pending
dpkg --install gstreamer1.0-plugins-bad_1.10.4-1_amd64.deb libmpeg2encpp-2.1-0_1%3a2.1.0+debian-5_amd64.deb libmplex2-2.1-0_1%3a2.1.0+debian-5_amd64.deb
dpkg --configure --pending
dpkg --audit

Now fix the broken dependencies in apt-get. I found no other way than removing all the broken packages:

dpkg --remove $(apt-get --fix-broken install | cut -f 2 -d ' ' )
apt-get install $(grep -v ":i386" ~/original.dpkg | egrep "^ii" | grep -v "aiccu" | grep -v "acroread" | grep -v "flash-player-properties" | grep -v "flashplayer-mozilla" | egrep -v "tp-flash-marillat" | awk '{print $2}')

Vasudev Kamath: Overriding version information from setup.py with pbr

Sun, 16/07/2017 - 5:23pm

I recently raised a pull request on zfec for converting its Python packaging from a pure setup.py to a pbr-based one. Today I got a review from Brian Warner, and one of the issues mentioned was that python setup.py --version does not give the same output as the previous version of setup.py.

The previous version used versioneer, which extracts the version information needed from VCS tags. Versioneer also provides the flexibility of specifying the type of VCS used, version style, tag prefix (for the VCS), etc. pbr also extracts version information from git tags, but it expects tags of the form refs/tags/x.y.z, whereas zfec used a zfec- prefix in its tags (for example zfec-1.4.24), which pbr does not process. The end result: I get a version of the form 0.0.devNN, where NN is the number of commits in the repository since its inception.

Brian and I spent a few hours trying to figure out a way to tell pbr that we would like to override the version information it auto-deduces, but there was none other than putting the version string in the PBR_VERSION environment variable. That documentation was contributed by me to the pbr project 3 years back.

So finally I used versioneer to create a version string and put it in the environment variable PBR_VERSION.

import os
import versioneer

os.environ['PBR_VERSION'] = versioneer.get_version()

...

setup(
    setup_requires=['pbr'],
    pbr=True,
    ext_modules=extensions
)

And I added the snippet below to setup.cfg, which is how versioneer can be configured with various information, including tag prefixes.

[versioneer]
VCS = git
style = pep440
versionfile_source = zfec/_version.py
versionfile_build = zfec/_version.py
tag_prefix = zfec-
parentdir_prefix = zfec-

Though this workaround gets the job done, it does not feel correct to set an environment variable to change the logic of another part of the same program. If you know a better way, do let me know! I should also probably consider filing a feature request against pbr to provide a way to pass a tag prefix for the version calculation logic.

Lior Kaplan: PDO_IBM: tracking changes publicly

Sun, 16/07/2017 - 3:13pm

As part of my work at Zend (now a RogueWave company), I maintain various patch sets. One of those is the set of changes for the PDO_IBM extension for PHP.

After some patch exchanges I decided it would be easier to manage the whole process over a public git repository, and maybe gain some more review / feedback along the way. Info at https://github.com/kaplanlior/pecl-database-pdo_ibm/commits/zend-patches

Another aspect of this is having the IBMi-specific patches from YIPS (Young i Professionals) at http://www.youngiprofessionals.com/wiki/index.php/XMLService/PHP, which themselves are patches on top of the vanilla releases. Info at https://github.com/kaplanlior/pecl-database-pdo_ibm/commits/zend-patches-for-yips

Keeping track of these changes is also easier with git's ability to rebase efficiently: when a new release is done, I can adapt my patches quite easily and make sure the changes can be ported back and forth between the vanilla and IBMi versions of the extension.


Filed under: PHP

Joey Hess: Functional Reactive Propellor

Sat, 15/07/2017 - 11:43pm

I wrote this code, and it made me super happy!

data Variety = Installer | Target
    deriving (Eq)

seed :: UserInput -> Versioned Variety Host
seed userinput ver = host "foo"
    & ver (   (== Installer) --> hostname "installer"
          <|> (== Target)    --> hostname (inputHostname userinput)
          )
    & osDebian Unstable X86_64
    & Apt.stdSourcesList
    & Apt.installed ["linux-image-amd64"]
    & Grub.installed PC
    & XFCE.installed
    & ver (   (== Installer) --> desktopUser defaultUser
          <|> (== Target)    --> desktopUser (inputUsername userinput)
          )
    & ver (   (== Installer) --> autostartInstaller )

This is doing so much in so little space and with so little fuss! It's completely defining two different versions of a Host. One version is the Installer, which in turn installs the Target. The code above provides all the information that propellor needs to convert a copy of the Installer into the Target, which it can do very efficiently. For example, it knows that the default user account should be deleted, and a new user account created based on the user's input of their name.

The germ of this idea comes from a short presentation I made about propellor in Portland several years ago. I was describing RevertableProperty, and Joachim Breitner pointed out that to use it, the user essentially has to keep track of the evolution of their Host in their head. It would be better for propellor to know what past versions looked like, so it can know when a RevertableProperty needs to be reverted.

I didn't see a way to address the objection for years. I was hung up on the problem that propellor's properties can't be compared for equality, because functions can't be compared for equality (generally). And on the problem that it would be hard for propellor to pull old versions of a Host out of git. But then I ran into the situation where I needed these two closely related hosts to be defined in a single file, and it all fell into place.

The basic idea is that propellor first reverts all the revertible properties for other versions. Then it ensures the property for the current version.

Another use for it would be if you wanted to be able to roll back changes to a Host. For example:

foos :: Versioned Int Host
foos ver = host "foo"
    & hostname "foo.example.com"
    & ver (   (== 1) --> Apache.modEnabled "mpm_worker"
          <|> (>= 2) --> Apache.modEnabled "mpm_event"
          )
    & ver (   (>= 3) --> Apt.unattendedUpgrades )

foo :: Host
foo = foos `version` (4 :: Int)

Versioned properties can also be defined:

foobar :: Versioned Int -> RevertableProperty DebianLike DebianLike
foobar ver = ver
    (   (== 1) --> (Apt.installed "foo" <!> Apt.removed "foo")
    <|> (== 2) --> (Apt.installed "bar" <!> Apt.removed "bar")
    )

Notice that I've embedded a small DSL for versioning into the propellor config file syntax. While implementing versioning took all day, that part was super easy; Haskell config files win again!

API documentation for this feature

PS: Not really FRP, probably. But time-varying in a FRP-like way.

Development of this was sponsored by Jake Vosloo on Patreon.

Dirk Eddelbuettel: Rcpp 0.12.12: Rounding some corners

Sat, 15/07/2017 - 7:09pm

The twelfth update in the 0.12.* series of Rcpp landed on CRAN this morning, following two days of testing at CRAN preceded by five full reverse-depends checks we did (and which are always logged in this GitHub repo). The Debian package has been built and uploaded; Windows and macOS binaries should follow at CRAN as usual. This 0.12.12 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, the 0.12.5 release in May, the 0.12.6 release in July, the 0.12.7 release in September, the 0.12.8 release in November, the 0.12.9 release in January, the 0.12.10 release in March, and the 0.12.11 release in May, making it the sixteenth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1097 packages (and hence 71 more since the last release in May) on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

This release contains a fairly large number of small and focused pull requests, most of which either correct corner cases or improve other aspects. JJ tirelessly improved the package registration added in the previous release and following R 3.4.0. Kirill tidied up a number of small issues allowing us to run compilation in even more verbose modes---usually a good thing. Jeroen, Elias Pipping and Yo Gong all contributed as well, and we thank everybody for their contributions.

All changes are listed below in some detail.

Changes in Rcpp version 0.12.12 (2017-07-13)
  • Changes in Rcpp API:

    • The tinyformat.h header now ends in a newline (#701).

    • Fixed rare protection error that occurred when fetching stack traces during the construction of an Rcpp exception (Kirill Müller in #706).

    • Compilation is now also possible on Haiku-OS (Yo Gong in #708 addressing #707).

    • Dimension attributes are explicitly cast to int (Kirill Müller in #715).

    • Unused arguments are no longer declared (Kirill Müller in #716).

    • Visibility of exported functions is now supported via the R macro attribute_visible (Jeroen Ooms in #720).

    • The no_init() constructor accepts R_xlen_t (Kirill Müller in #730).

    • Loop unrolling used R_xlen_t (Kirill Müller in #731).

    • Two unused-variables warnings are now avoided (Jeff Pollock in #732).

  • Changes in Rcpp Attributes:

    • Execute tools::package_native_routine_registration_skeleton within package rather than current working directory (JJ in #697).

    • The R portion no longer uses dir.exists to no require R 3.2.0 or newer (Elias Pipping in #698).

    • Fix native registration for exports with name attribute (JJ in #703 addressing #702).

    • Automatically register init functions for Rcpp Modules (JJ in #705 addressing #704).

    • Add Shield around parameters in Rcpp::interfaces (JJ in #713 addressing #712).

    • Replace dot (".") with underscore ("_") in package names when generating native routine registrations (JJ in #722 addressing #721).

    • Generate C++ native routines with underscore ("_") prefix to avoid exporting when standard exportPattern is used in NAMESPACE (JJ in #725 addressing #723).

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Junichi Uekawa: revisiting libjson-spirit.

Sat, 15/07/2017 - 2:18pm
revisiting libjson-spirit. I tried compiling a program that uses libjson-spirit and noticed that it is still broken. New programs compiled against the header do not link with the provided static library. Trying to rebuild it fixes that, but it uses compat version 8, and that needs to be fixed (trivially). hmm... actually the code doesn't build anymore and there are multiple new upstream versions. ... and then I noticed that it was a stale copy already removed from the Debian repository. What's a good C++ json implementation these days?
