
Ubuntu Insights: Security Team Weekly Summary: September 27, 2017

Planet Ubuntu - Thu, 28/09/2017 - 4:14pm

The Security Team weekly reports are intended to be very short summaries of the Security Team’s weekly activities.

If you would like to reach the Security Team, you can find us at the #ubuntu-hardened channel on FreeNode. Alternatively, you can mail the Ubuntu Hardened mailing list at: ubuntu-hardened@lists.ubuntu.com

During the last week, the Ubuntu Security team:

  • Triaged 296 public security vulnerability reports, retaining the 81 that applied to Ubuntu.
  • Published 16 Ubuntu Security Notices which fixed 37 security issues (CVEs) across 18 supported packages.
Updates to Community Supported Packages
  • Simon Quigley (tsimonq2) provided debdiffs for trusty-zesty for jython (LP: #1714728)

Development
  • review
    • udisks2 PR 3931
    • snap-confine calls snap-update-ns PR 3621
    • bind mount relative to snap-confine PR 3956
    • snaps on NFS support
  • completed: create PR 3937 to use only ‘udevadm trigger --action=change’ instead of ‘udevadm control --reload-rules’
  • update snap-confine to unconditionally add the nvidia devices to the device cgroup and rely only on apparmor for mediation
  • wrote/tested libseccomp-golang changes to complement the libseccomp changes: https://github.com/seccomp/libseccomp-golang/pull/29

  • uploaded libseccomp, with the most minimal change needed to support snapd, to artful after receiving a Feature Freeze exception

Process Monitoring

Planet Debian - Thu, 28/09/2017 - 3:46pm

Since forking the Mon project to etbemon [1] I’ve been spending a lot of time working on the monitor scripts. Actually monitoring something is usually quite easy, deciding what to monitor tends to be the hard part. The process monitoring script ps.monitor is the one I’m about to redesign.

Here are some of my ideas for monitoring processes. Please comment if you have any suggestions for how to do things better.

For people who don’t use mon, the monitor scripts return 0 if everything is OK and 1 if there’s a problem, and use stdout to display an error message. While I’m not aware of anyone hooking mon scripts into a different monitoring system, that would be easy to do. One thing I plan to work on in the future is interoperability between mon and other systems such as Nagios.

Basic Monitoring

ps.monitor tor:1-1 master:1-2 auditd:1-1 cron:1-5 rsyslogd:1-1 dbus-daemon:1- sshd:1- watchdog:1-2

I’m currently planning some sort of rewrite of the process monitoring script. The current functionality is to have a list of process names on the command line with minimum and maximum numbers for the instances of the process in question. The above is a sample of the configuration of the monitor. There are some limitations to this: the “master” process in this instance refers to the main process of Postfix, but other daemons use the same process name (it’s one of those names that’s wrong because it’s so obvious). One obvious solution to this is to give the option of specifying the full path so that /usr/lib/postfix/sbin/master can be differentiated from all the other programs named master.
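
To make this concrete, here is a hypothetical shell sketch of the name:min-max idea, including the 0/1 exit contract described earlier. It is only an illustration (the real etbemon ps.monitor is a Perl script) and it assumes pgrep semantics for counting:

#!/bin/sh
# Hypothetical sketch of ps.monitor-style checks; NOT the real etbemon script.
# Usage: ps.monitor name:min-max [name:min-max ...]
# A name containing "/" is matched against the full command line via pgrep -f,
# which is how /usr/lib/postfix/sbin/master could be told apart from other
# programs named "master".
rc=0
for spec in "$@"; do
    name=${spec%:*}
    range=${spec##*:}
    min=${range%-*}
    max=${range#*-}
    case "$name" in
        */*) count=$(pgrep -c -f "$name") ;;   # match against full command line
        *)   count=$(pgrep -c -x "$name") ;;   # exact process name match
    esac
    if [ "$count" -lt "$min" ]; then
        echo "$name: $count running, expected at least $min"
        rc=1
    elif [ -n "$max" ] && [ "$count" -gt "$max" ]; then
        echo "$name: $count running, expected at most $max"
        rc=1
    fi
done
exit $rc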

The next issue is processes that may run on behalf of multiple users. With sshd there is a single process to accept new connections running as root and a process running under the UID of each logged in user. So the number of sshd processes running as root will be one greater than the number of root login sessions. This means that if a sysadmin logs in directly as root via ssh (which is controversial and not the topic of this post – merely something that people do which I have to support) and the master process then crashes (or the sysadmin stops it either accidentally or deliberately) there won’t be an alert about the missing process. Of course the correct thing to do is to have a monitor talk to port 22 and look for the string “SSH-2.0-OpenSSH_”. Sometimes there are multiple instances of a daemon running under different UIDs that need to be monitored separately. So obviously we need the ability to monitor processes by UID.
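
A banner check along those lines is easy to sketch in shell; this is a hypothetical monitor (a real one would take host and port from configuration and handle timeouts more carefully):

#!/bin/sh
# Hypothetical sketch: fail unless port 22 greets us with an OpenSSH banner.
banner=$(nc -w 5 localhost 22 </dev/null | head -1)
case "$banner" in
    SSH-2.0-OpenSSH_*) exit 0 ;;
    *) echo "no OpenSSH banner on port 22 (got: $banner)"; exit 1 ;;
esac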

In many cases process monitoring can be replaced by monitoring of service ports. So if something is listening on port 25 then it probably means that the Postfix “master” process is running, regardless of what other “master” processes there are. But for my use I find it handy to have multiple monitors: if I get a Jabber message about being unable to send mail to a server, immediately followed by a Jabber message from that server saying that “master” isn’t running, I don’t need to fully wake up to know where the problem is.

SE Linux

One feature that I want is monitoring SE Linux contexts of processes in the same way as monitoring UIDs. While I’m not interested in writing tests for other security systems, I would be happy to include code that other people write. So whatever I do I want to make it flexible enough to work with multiple security systems.

Transient Processes

Most daemons have a second process of the same name running during the startup process. This means if you monitor for exactly 1 instance of a process you may get an alert about 2 processes running when “logrotate” or something similar restarts the daemon. Also you may get an alert about 0 instances if the check happens to run at exactly the wrong time during the restart. My current way of dealing with this on my servers is to not alert until the second failure event with the “alertafter 2” directive. The “failure_interval” directive allows specifying the time between checks when the monitor is in a failed state, setting that to a low value means that waiting for a second failure result doesn’t delay the notification much.
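
In mon configuration that combination looks roughly like the following; this is an abridged sketch from memory, so check the mon documentation for the exact syntax:

watch servers
    service ps
        interval 1m
        monitor ps.monitor sshd:1- cron:1-5
        period wd {Sun-Sat}
            alertafter 2
            failure_interval 30s
            alert mail.alert sysadmin@example.com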

To deal with this I’ve been thinking of making the ps.monitor script automatically check again after a specified delay. I think that solving the problem with a single parameter to the monitor script is better than using 2 configuration directives to mon to work around it.

CPU Use

Mon currently has a loadavg.monitor script that checks the load average. But that won’t catch the case of a single process using too much CPU time but not enough to raise the system load average. Also it won’t catch the case of a CPU hungry process going quiet (e.g. when the SETI at Home server goes down) while another process goes into an infinite loop. One way of addressing this would be to have the ps.monitor script take yet another configuration option to monitor CPU use, but this might get confusing. Another option would be to have a separate script that alerts on any process that uses more than a specified percentage of CPU time over its lifetime or over the last few seconds, unless it’s in a whitelist of processes and users who are exempt from such checks. Probably every regular user would be exempt from such checks because you never know when they will run a file compression program. Also there is a short list of daemons that are excluded (like BOINC) and system processes (like gzip which is run from several cron jobs).
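
A rough shell sketch of that second option follows; the threshold and the whitelists are hypothetical parameters, and ps’s pcpu column is used as the lifetime-average CPU figure:

#!/bin/sh
# Hypothetical sketch: flag any process whose lifetime-average CPU use (ps pcpu)
# exceeds a threshold, unless its user or command name is whitelisted.
THRESHOLD=${1:-80}
USER_WHITELIST='alice|bob'      # hypothetical: regular users are exempt
COMM_WHITELIST='gzip|xz|boinc'  # hypothetical: compression tools, BOINC, etc.
ps -eo user:20,pcpu,comm --no-headers | awk \
    -v t="$THRESHOLD" -v uw="$USER_WHITELIST" -v cw="$COMM_WHITELIST" '
    $1 ~ "^(" uw ")$" { next }
    $3 ~ "^(" cw ")$" { next }
    $2 + 0 > t { printf "%s %s at %s%% CPU\n", $1, $3, $2; bad = 1 }
    END { exit bad }'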

Monitoring for Exclusion

A common programming mistake is to call setuid() before setgid(), which means that the program doesn’t have permission to call setgid(). If return codes aren’t checked (and people who make such rookie mistakes tend not to check return codes) then the process keeps elevated permissions. Checking for processes running as GID 0 but not UID 0 would be handy. As an aside, a quick examination of a Debian/Testing workstation didn’t show any obvious way that a process with GID 0 could gain elevated privileges, but that could change with one chmod 770 command.
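
That check is nearly a one-liner; a sketch, with the exit status following the mon convention:

# Sketch: list processes running with group root (GID 0) but a non-root user,
# the typical symptom of calling setuid() before setgid().
ps -eo pid,user,group,comm --no-headers |
    awk '$3 == "root" && $2 != "root" { print; bad = 1 } END { exit bad }'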

On a SE Linux system there should be only one process running with the domain init_t. Currently that doesn’t happen on Stretch systems running daemons such as mysqld and tor, because the policy hasn’t caught up with the recent systemd functionality requested by daemon service files. Such issues will keep occurring, so we need automated tests for them.
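
A sketch of such a test, using ps’s SE Linux label output:

#!/bin/sh
# Sketch: on an SE Linux system only PID 1 should be in the init_t domain.
count=$(ps -eo label= | grep -c ':init_t:')
if [ "$count" -ne 1 ]; then
    echo "$count processes in domain init_t, expected exactly 1"
    exit 1
fi
exit 0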

Automated tests for configuration errors that might impact system security is a bigger issue, I’ll probably write a separate blog post about it.

Related posts:

  1. Monitoring of Monitoring I was recently asked to get data from a computer...
  2. When to Use SE Linux Recently someone asked on IRC whether they should use SE...
  3. Health and Status Monitoring via Smart Phone Health Monitoring Eric Topol gave an interesting TED talk about...
etbe https://etbe.coker.com.au etbe – Russell Coker

LibreOffice community celebrates 7th anniversary

Planet Debian - Enj, 28/09/2017 - 2:52md

The Document Foundation blog has a post about the LibreOffice 7th anniversary:

Berlin, September 28, 2017 – Today, the LibreOffice community celebrates the 7th anniversary of the leading free office suite, adopted by millions of users in every continent. Since 2010, there have been 14 major releases and dozens of minor ones, fulfilling the personal productivity needs of both individuals and enterprises, on Linux, macOS and Windows.

I wanted to take a moment to remind people that 7 years ago the community decided to make the de facto fork of OpenOffice.org official after life under Sun (and then Oracle) proved problematic. From the very first hours the project showed its effectiveness. See my post about LibreOffice first steps. Not to mention what it achieved in the past 7 years.

This is still one of my favourite open source contributions, not because it was sophisticated or hard, but because it was about using the freedom part of free software:
Replace hardcoded “product by Oracle” with “product by %OOOVENDOR”.

On a personal note, for me, after years of trying to help with OOo l10n for Hebrew and RTL support, things started to go forward at a reasonable pace: getting patches in after years of trying, having upstream fix some of the issues, and actually being able to do the translation. We made it to 100% with LibreOffice 3.5.0 in February 2012 (something we should redo soon…).


Filed under: i18n & l10n, Israeli Community, LibreOffice Kaplan https://liorkaplan.wordpress.com Free Software Universe

Review: The Seventh Bride

Planet Debian - Thu, 28/09/2017 - 6:41am

Review: The Seventh Bride, by T. Kingfisher

Publisher: 47North
Copyright: 2015
ISBN: 1-5039-4975-3
Format: Kindle
Pages: 225

There are two editions of this book, although only one currently for sale. This review is of the second edition, released in November of 2015. T. Kingfisher is a pen name for Ursula Vernon when she's writing for adults.

Rhea is a miller's daughter. She's fifteen, obedient, wary of swans, respectful to her parents, and engaged to Lord Crevan. The last was a recent and entirely unexpected development. It's not that she didn't expect to get married eventually, since of course that's what one does. And it's not that Lord Crevan was a stranger, since that's often how it went with marriage for people like her. But she wasn't expecting to get married now, and it was not at all clear why Lord Crevan would want to marry her in particular.

Also, something felt not right about the entire thing. And it didn't start feeling any better when she finally met Lord Crevan for the first time, some days after the proposal to her parents. The decidedly non-romantic hand kissing didn't help, nor did the smug smile. But it's not like she had any choice. The miller's daughter doesn't say no to a lord and a friend of the viscount. The miller's family certainly doesn't say no when they're having trouble paying the bills, the viscount owns the mill, and they could be turned out of their livelihood at a whim.

They still can't say no when Lord Crevan orders Rhea to come to his house in the middle of the night down a road that quite certainly doesn't exist during the day, even though that's very much not the sort of thing that is normally done. Particularly before the marriage. Friends of the viscount who are also sorcerers can get away with quite a lot. But Lord Crevan will discover that there's still a limit to how far he can order Rhea around, and practical-minded miller's daughters can make a lot of unexpected friends even in dire circumstances.

The Seventh Bride is another entry in T. Kingfisher's series of retold fairy tales, although the fairy tale in question is less clear than with The Raven and the Reindeer. Kirkus says it's a retelling of Bluebeard, but I still don't quite see that in the story. I think one could argue equally easily that it's an original story. Nonetheless, it is a fairy tale: it has that fairy tale mix of magical danger and practical morality, and it's about courage and friendships and their consequences.

It also has a hedgehog.

This is a T. Kingfisher story, so it's packed full of bits of marvelous phrasing that I want to read over and over again. It has wonderful characters, the hedgehog among them, and it has, at its heart, a sort of foundational decency and stubborn goodness that's deeply satisfying for the reader.

The Seventh Bride is a lot closer to horror than the other T. Kingfisher books I've read, but it never fell into my dislike of the horror genre, despite a few gruesome bits. I think that's because neither Rhea nor the narrator treat the horrific aspects as representative of the true shape of the world. Rhea instead confronts them with a stubborn determination and an attempt to make the best of each moment, and with a practical self-awareness that I loved reading about.

The problem with crying in the woods, by the side of a white road that leads somewhere terrible, is that the reason for crying isn't inside your head. You have a perfectly legitimate and pressing reason for crying, and it will still be there in five minutes, except that your throat will be raw and your eyes will itch and absolutely nothing else will have changed.

Lord Crevan, when Rhea finally reaches him, toys with her by giving her progressively more horrible puzzle tasks, threatening her with the promised marriage if she fails at any of them. The way this part of the book finally resolves is one of the best moments I've read in any book. Kingfisher captures an aspect of moral decisions, and a way in which evil doesn't work the way that evil people expect it to work, that I can't remember seeing an author capture this well.

There are a lot of things here for Rhea to untangle: the nature of Crevan's power, her unexpected allies in his manor, why he proposed marriage to her, and of course how to escape his power. The plot works, but I don't think it was the best part of the book, and it tends to happen to Rhea rather than being driven by her. But I have rarely read a book quite this confident of its moral center, or quite as justified in that confidence.

I am definitely reading everything Vernon has published under the T. Kingfisher name, and quite possibly most of her children's books as well. Recommended, particularly if you liked the excerpt above. There's an entire book full of paragraphs like that waiting for you.

Rating: 8 out of 10

Russ Allbery https://www.eyrie.org/~eagle/ Eagle's Path

RcppZiggurat 0.1.4

Planet Debian - Thu, 28/09/2017 - 4:06am

A maintenance release of RcppZiggurat is now on the CRAN network for R. It switches the vignette to our new pinp package and its two-column pdf default.

The RcppZiggurat package updates the code for the Ziggurat generator which provides very fast draws from a Normal distribution. The package provides a simple C++ wrapper class for the generator improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure where Ziggurat from this package dominates accessing the implementations from the GSL, QuantLib and Gretl---all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).

The NEWS file entry below lists all changes.

Changes in version 0.1.4 (2017-07-27)
  • The vignette now uses the pinp package in two-column mode.

  • Dynamic symbol registration is now enabled.

Courtesy of CRANberries, there is also a diffstat report for the most recent release. More information is on the RcppZiggurat page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Systemd device units

Planet Debian - Thu, 28/09/2017 - 12:00am

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.device units

Several devices are automatically represented inside systemd by .device units, which can be used to activate services when a given device exists in the file system.

Run systemctl --all --full -t device to see a list of all devices for which systemd has a unit on your system.

For example, this .service unit plays a sound as long as a specific USB key is plugged into my system:

[Unit]
Description=Beeps while a USB key is plugged
DefaultDependencies=false
StopWhenUnneeded=true

[Install]
WantedBy=dev-disk-by\x2dlabel-ERLUG.device

[Service]
Type=simple
ExecStart=/bin/sh -ec 'while true; do /usr/bin/aplay -q /tmp/beep.wav; sleep 2; done'

If you need to work with a device not seen by default by systemd, you can add a udev rule that makes it available, by adding the systemd tag to the device with TAG+="systemd".

It is also possible to give the device an extra alias using ENV{SYSTEMD_ALIAS}="/dev/my-alias-name".
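
Putting the two together, a rule along these lines (hypothetical vendor/product IDs and file name) makes a USB device visible to systemd under a convenient alias:

# /etc/udev/rules.d/99-my-device.rules (hypothetical IDs)
SUBSYSTEM=="usb", ATTR{idVendor}=="abcd", ATTR{idProduct}=="0123", TAG+="systemd", ENV{SYSTEMD_ALIAS}="/dev/my-alias-name"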

To figure out all you can use for matching a device:

  1. Run udevadm monitor --environment and plug the device
  2. Look at the DEVNAME= values and pick one that addresses your device the way you prefer
  3. udevadm info --attribute-walk --name=*the value of devname* will give you all you can use for matching in the udev rule.


Enrico Zini http://www.enricozini.org/tags/pdo/ Enrico Zini: pdo

Qt cross-architecture development in Debian

Planet Debian - Wed, 27/09/2017 - 3:25pm

Use case: use Debian Stable as an environment to run amd64 development machines to develop Qt applications for Raspberry Pi or other smallish armhf devices.

Qt Creator is used as the Integrated Development Environment, and it supports cross-compiling, running the built source on the target system, and remote debugging.

Debian Stable (vanilla or Raspbian) runs on both the host and the target systems, so libraries can be kept in sync, and both systems have access to a vast amount of libraries, with security support.

On top of that, armhf libraries can also be installed on the host machine with multiarch, so cross-builders have access to the exact same libraries as the target system.

This sounds like a dream system. But. We're not quite there yet.

cross-compile attempts

I tried cross compiling a few packages:

$ sudo debootstrap stretch cross
$ echo "stretch_cross" | sudo tee cross/etc/debian_chroot
$ sudo systemd-nspawn -D cross
# dpkg --add-architecture armhf
# echo "deb-src http://deb.debian.org/debian stretch main" >> /etc/apt/sources.list
# apt update
# apt install --no-install-recommends build-essential crossbuild-essential-armhf

Some packages work:

# apt source bc
# cd bc-1.06.95/
# apt-get build-dep -a armhf .
# dpkg-buildpackage -aarmhf -j2 -b
…
dh_auto_configure -- --prefix=/usr --with-readline
./configure --build=x86_64-linux-gnu --prefix=/usr --includedir=\${prefix}/include --mandir=\${prefix}/share/man --infodir=\${prefix}/share/info --sysconfdir=/etc --localstatedir=/var --disable-silent-rules --libdir=\${prefix}/lib/arm-linux-gnueabihf --libexecdir=\${prefix}/lib/arm-linux-gnueabihf --disable-maintainer-mode --disable-dependency-tracking --host=arm-linux-gnueabihf --prefix=/usr --with-readline
…
dpkg-deb: building package 'dc-dbgsym' in '../dc-dbgsym_1.06.95-9_armhf.deb'.
dpkg-deb: building package 'bc-dbgsym' in '../bc-dbgsym_1.06.95-9_armhf.deb'.
dpkg-deb: building package 'dc' in '../dc_1.06.95-9_armhf.deb'.
dpkg-deb: building package 'bc' in '../bc_1.06.95-9_armhf.deb'.
dpkg-genbuildinfo --build=binary
dpkg-genchanges --build=binary >../bc_1.06.95-9_armhf.changes
dpkg-genchanges: info: binary-only upload (no source code included)
dpkg-source --after-build bc-1.06.95
dpkg-buildpackage: info: binary-only upload (no source included)

With qmake based Qt packages, qmake is not configured for cross-building, probably because it is not currently supported:

# apt source pumpa
# cd pumpa-0.9.3/
# apt-get build-dep -a armhf .
# dpkg-buildpackage -aarmhf -j2 -b
…
qmake -makefile -nocache "QMAKE_CFLAGS_RELEASE=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2" "QMAKE_CFLAGS_DEBUG=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2" "QMAKE_CXXFLAGS_RELEASE=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2" "QMAKE_CXXFLAGS_DEBUG=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2" "QMAKE_LFLAGS_RELEASE=-Wl,-z,relro -Wl,-z,now" "QMAKE_LFLAGS_DEBUG=-Wl,-z,relro -Wl,-z,now" QMAKE_STRIP=: PREFIX=/usr
qmake: could not exec '/usr/lib/x86_64-linux-gnu/qt5/bin/qmake': No such file or directory
…
debian/rules:19: recipe for target 'build' failed
make: *** [build] Error 2
dpkg-buildpackage: error: debian/rules build gave error exit status 2

With cmake based Qt packages it goes a little better in that it finds the cross compiler, pkg-config and some multiarch paths, but then it tries to run armhf moc, which fails:

# apt source caneda
# cd caneda-0.3.0/
# apt-get build-dep -a armhf .
# dpkg-buildpackage -aarmhf -j2 -b
…
cmake .. -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_VERBOSE_MAKEFILE=ON -DCMAKE_BUILD_TYPE=None -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_INSTALL_LOCALSTATEDIR=/var -DCMAKE_SYSTEM_NAME=Linux -DCMAKE_SYSTEM_PROCESSOR=arm -DCMAKE_C_COMPILER=arm-linux-gnueabihf-gcc -DCMAKE_CXX_COMPILER=arm-linux-gnueabihf-g++ -DPKG_CONFIG_EXECUTABLE=/usr/bin/arm-linux-gnueabihf-pkg-config -DCMAKE_INSTALL_LIBDIR=lib/arm-linux-gnueabihf
…
CMake Error at /usr/lib/arm-linux-gnueabihf/cmake/Qt5Core/Qt5CoreConfig.cmake:27 (message):
  The imported target "Qt5::Core" references the file
     "/usr/lib/arm-linux-gnueabihf/qt5/bin/moc"
  but this file does not exist. Possible reasons include:
  * The file was deleted, renamed, or moved to another location.
  * An install or uninstall procedure did not complete successfully.
  * The installation package was faulty and contained
     "/usr/lib/arm-linux-gnueabihf/cmake/Qt5Core/Qt5CoreConfigExtras.cmake"
  but not all the files it references.

Note: Although I improvised a chroot to be able to fool around with it, I would use pbuilder or sbuild to do the actual builds.

Helmut suggests pbuilder --host-arch or sbuild --host.
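
That would look something like this; an untested sketch, reusing the bc source from above:

# Cross building with the proper tools instead of an improvised chroot (sketch)
sbuild --host=armhf -d stretch bc_1.06.95-9.dsc
sudo pbuilder build --host-arch armhf bc_1.06.95-9.dsc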

Doing it the non-Debian way

This guide in the meantime explains how to set up a cross-compiling Qt toolchain in a rather dirty way, by recompiling Qt pointing it at pieces of the Qt deployed on the Raspberry Pi.

Following that guide, replacing the CROSS_COMPILE value with /usr/bin/arm-linux-gnueabihf- gave me a working qtbase, for which it is easy to create a Kit for Qt Creator that works, and supports linking applications with Debian development packages that do not use Qt.

However, at that point I need to recompile all dependencies that use Qt myself, and I quickly got stuck at that monster of QtWebEngine, whose sources embed the whole of Chromium.

Having a Qt based development environment in which I need to become the maintainer for the whole Qt toolchain is not a product I can offer to a customer. Cross compiling qmake based packages on stretch is not currently supported, so at the moment I had to suggest to postpone all plans for total world domination for at least two years.

Cross-building Debian

In the meantime, Helmut Grohne has been putting a lot of effort into making Debian packages cross-buildable:

helmut> enrico: yes, cross building is painful. we have ~26000 source packages. of those, ~13000 build arch-dep packages. of those, ~6000 have cross-satisfiable build-depends. of those, I tried cross building ~2300. of those 1300 cross built. so we are at about 10% working.

helmut> enrico: plus there are some 607 source packages affected by some 326 bugs with patches.

helmut> enrico: gogo nmu them

helmut> enrico: I've filed some 1000 bugs (most of them with patches) now. around 600 are fixed :)

He is doing it mostly alone, and I would like people not to be alone when they do a lot of work in Debian, so…

Join Helmut in the effort of making Debian cross-buildable!

Build any Debian package for any device right from the comfort of your own work computer!

Have a single development environment seamlessly spanning architecture boundaries, with the power of all that there is in Debian!

Join Helmut in the effort of making Debian cross-buildable!

Apply here, or join #debian-bootstrap on OFTC!

Cross-building Qt in Debian

mitya57 summarised the situation on the KDE team side:

mitya57> we have cross-building stuff on our TODO list, but it will likely require a lot of time and neither Lisandro nor I have it currently.

mitya57> see https://gobby.debian.org/export/Teams/KDE/qt-cross for a summary of what needs to be done.

mitya57> Any help or patches are always welcome :))

qemu-user-static

Helmut also suggested to use qemu-user-static to make the host system able to run binaries compiled for the target system, so that even if a non-cross-compiling Qt build tries to run moc and friends in their target architecture version, they would transparently succeed.

At that point, it would just be a matter of replacing compiler paths to point to the native cross-compiling gcc, and the build would not be slowed down by much.
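
For the chroot from the earlier experiments, the setup would look something like the sketch below; the moc path is taken from the cmake error above and assumes qtbase5-dev:armhf is installed:

# Sketch: let the amd64 host run armhf binaries transparently via binfmt_misc
sudo apt install qemu-user-static binfmt-support
# the static emulator must also be visible inside the chroot
sudo cp /usr/bin/qemu-arm-static cross/usr/bin/
# armhf binaries, such as the armhf moc, should now run under emulation:
sudo chroot cross /usr/lib/arm-linux-gnueabihf/qt5/bin/moc --version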

Fixing bug #781226 would help in making it possible to configure a multiarch version of qmake as the qmake used for cross compiling.

I have not had a chance of trying to cross-build in this way yet.

In the meantime...

Having qtcreator able to work on an amd64 devel machine and deploy/test/debug remotely on an arm target machine, where both machines run Debian stable and have libraries in sync, would be a great thing to have even though packages do not cross-build yet.

Helmut summarised the situation on IRC:

svuorela and others repeat that Qt upstream is not compatible with Debian's multiarch thinking, in that Qt upstream insists on having one toolchain for each pair of architectures, whereas the Debian way tends to be to make packages generic and split stuff such that it can be mixed and matched.

An example being that you need to run qmake (thus you need qmake for the build architecture), but qmake also embeds the relevant paths and you need to query it for them (so you need qmake for the host architecture)

Either you run it through qemu, or you have a particular cross qmake for your build/host pair, or you fix qt upstream to stop this madness

Building qmake in Debian for each host-target pair, even just limited to released architectures, would mean building Qt 100 times, and that's not going to scale.

I wonder:

  • to have a qmake-$ARCH binary that can build a source tree using locally installed multiarch Qt libraries, do I need to recompile and ship the whole of Qt, or just qmake?
  • is there a recipe for building a cross-building Qt environment that would be able to use Debian development libraries installed the normal multiarch way?
  • we can't do perfect yet, but can we do better than this?
Enrico Zini http://www.enricozini.org/tags/pdo/ Enrico Zini: pdo

Jonathan Riddell: KGraphViewer 2.4.2

Planet Ubuntu - Wed, 27/09/2017 - 3:23pm

KGraphViewer 2.4.2 has been released.

KGraphViewer is a visualiser for Graphviz’s DOT format of graphs.
https://www.kde.org/applications/graphics/kgraphviewer

Changelog compared to 2.4.0:

  • add missing find dependency macro https://build.neon.kde.org/job/xenial_unstable_kde-extras_kgraphviewer_lintcmake/lastCompletedBuild/testReport/libkgraphviewer-dev/KGraphViewerPart/find_package/
  • Fix broken reloading and broken layout changing due to lost filename https://phabricator.kde.org/D7932
  • kgraphviewer_part.rc: set fallback text for toplevel menu entries
  • desktop-mime-but-no-exec-code
  • Codefix, comparisons were meant to be assignments

KGraphViewer 2.4.1 was made with an incorrect internal version number and should be ignored.

It can be used by massif-visualizer to add graphing features.

Download from:
https://download.kde.org/stable/kgraphviewer/2.4.2/

sha256:
49438b4e6cca69d2e658de50059f045ede42cfe78ee97cece35959e29ffb85c9 kgraphviewer-2.4.2.tar.xz

Signed with my PGP key
2D1D 5B05 8835 7787 DE9E E225 EC94 D18F 7F05 997E
Jonathan Riddell <jr@jriddell.org>
kgraphviewer-2.4.2.tar.xz.sig


Alexander Larsson: Spotify and Skype flatpaks moved to flathub

Planet GNOME - Wed, 27/09/2017 - 3:05pm

This is a public service announcement.

I used to maintain two custom repositories of flatpaks for spotify and skype. These are now at flathub (in addition to a lot of other apps), and if you were using the old repository you should switch to the new one to continue getting updates.

This is easiest done by removing the current version and then following the directions on the flathub site for installing.
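
For Spotify the switch would look roughly like this sketch; the name of the old remote is an assumption, so check flatpak remotes for what yours is called:

# Sketch: drop the old repository and reinstall the app from flathub
flatpak uninstall com.spotify.Client
flatpak remote-delete spotify   # assumed name of the old remote
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub com.spotify.Client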

RcppAnnoy 0.0.10

Planet Debian - Wed, 27/09/2017 - 4:05am

A few short weeks after the more substantial 0.0.9 release of RcppAnnoy, we have a quick bug-fix update.

RcppAnnoy is our Rcpp-based R integration of the nifty Annoy library by Erik. Annoy is a small and lightweight C++ template header library for very fast approximate nearest neighbours.

Michaël Benesty noticed that our getItemsVector() function didn't, ahem, do much besides crashing. Simple bug, they happen--now fixed, and a unit test added.

Changes in this version are summarized here:

Changes in version 0.0.10 (2017-09-25)
  • The getItemsVector() function no longer crashes (#24)

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Systemd timer units

Planet Debian - Wed, 27/09/2017 - 12:00am

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.timer units

Configure activation of other units (usually a .service unit) at some given time.

The functionality is similar to cron, with more features and a finer time granularity. For example, in Debian Stretch apt has a timer for running apt update which runs at a random time to distribute load on servers:

# /lib/systemd/system/apt-daily.timer
[Unit]
Description=Daily apt download activities
After=network-online.target
Wants=network-online.target

[Timer]
OnCalendar=*-*-* 6,18:00
RandomizedDelaySec=12h
Persistent=true

[Install]
WantedBy=timers.target

The corresponding apt-daily.service file then only runs when the system is on mains power, to avoid unexpected battery drains for systems like laptops:

# /lib/systemd/system/apt-daily.service
[Unit]
Description=Daily apt download activities
Documentation=man:apt(8)
ConditionACPower=true

[Service]
Type=oneshot
ExecStart=/usr/lib/apt/apt.systemd.daily update

Note that if you want to schedule tasks with an accuracy under a minute (for example to play a beep every 5 seconds when running on battery), you need to also configure AccuracySec= for the timer to a delay shorter than the default 1 minute.

This is how to make your computer beep when on battery:

# /etc/systemd/system/beep-on-battery.timer
[Unit]
Description=Beeps every 10 seconds

[Install]
WantedBy=timers.target

[Timer]
AccuracySec=1s
OnUnitActiveSec=10s

# /etc/systemd/system/beep-on-battery.service
[Unit]
Description=Beeps when on battery
ConditionACPower=false

[Service]
Type=oneshot
ExecStart=/usr/bin/aplay /tmp/beep.wav


Enrico Zini http://www.enricozini.org/tags/pdo/ Enrico Zini: pdo

Simos Xenitellis: How to set up LXD on Packet.net (baremetal servers)

Planet Ubuntu - Tue, 26/09/2017 - 9:45pm

Packet.net has premium baremetal servers that start at $36.50 per month for a quad-core Atom C2550 with 8GB RAM and 80GB SSD, on a 1Gbps Internet connection. On the other end of the scale, there is an option for a 24-core (two Intel CPUs) system with 256GB RAM and a total of 2.8TB SSD disk space at around $1000 per month.

In this post we are trying out the most affordable baremetal server (type 0 from the list) with Ubuntu and LXD.

Starting the server is quite uneventful. Being baremetal, it takes more time than a VPS. It started, and we are SSHing into it.

$ ssh root@ip.ip.ip.ip
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.10.0-24-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@lxd:~#

Here is some information about the booted system,

root@lxd:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.2 LTS
Release:        16.04
Codename:       xenial
root@lxd:~#

And the CPU details,

root@lxd:~# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 77
model name      : Intel(R) Atom(TM) CPU C2550 @ 2.40GHz
stepping        : 8
microcode       : 0x122
cpu MHz         : 1200.000
cache size      : 1024 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes rdrand lahf_lm 3dnowprefetch epb tpr_shadow vnmi flexpriority ept vpid tsc_adjust smep erms dtherm ida arat
bugs            :
bogomips        : 4800.19
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

... omitting the other three cores ...

Let’s update the package list,

root@lxd:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
...

They are using the official Ubuntu repositories instead of caching the packages with local mirrors. In retrospect, not an issue because the Internet connectivity is 1Gbps, bonded from two identical interfaces.

Let’s upgrade the packages and deal with any issues. Upgraded packages sometimes complain that the local configuration files differ from what they expect.

root@lxd:~# apt upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  apt apt-utils base-files cloud-init gcc-5-base grub-common grub-pc grub-pc-bin grub2-common initramfs-tools
  initramfs-tools-bin initramfs-tools-core kmod libapparmor1 libapt-inst2.0 libapt-pkg5.0 libasn1-8-heimdal
  libcryptsetup4 libcups2 libdns-export162 libexpat1 libgdk-pixbuf2.0-0 libgdk-pixbuf2.0-common
  libgnutls-openssl27 libgnutls30 libgraphite2-3 libgssapi3-heimdal libgtk2.0-0 libgtk2.0-bin libgtk2.0-common
  libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal libhx509-5-heimdal libisc-export160 libkmod2
  libkrb5-26-heimdal libpython3.5 libpython3.5-minimal libpython3.5-stdlib libroken18-heimdal libstdc++6
  libsystemd0 libudev1 libwind0-heimdal libxml2 logrotate mdadm ntp ntpdate open-iscsi python3-jwt python3.5
  python3.5-minimal systemd systemd-sysv tcpdump udev unattended-upgrades
59 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 24.3 MB of archives.
After this operation, 77.8 kB of additional disk space will be used.
Do you want to continue? [Y/n]
...

First is grub, and the diff (not shown here) shows that it is a minor issue. The new version of grub.cfg changes the system to appear as Debian instead of Ubuntu. I did not investigate this further.

We are then asked where to install grub. We set it to /dev/sda and hope that the server can successfully reboot. We note that instead of an 80GB SSD disk as written in the description, we got a 160GB SSD. Not bad.

Setting up cloud-init (0.7.9-233-ge586fe35-0ubuntu1~16.04.2) ...

Configuration file '/etc/cloud/cloud.cfg'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** cloud.cfg (Y/I/N/O/D/Z) [default=N] ? N
Progress: [ 98%] [##################################################################################.]

Still during apt upgrade, it complains about /etc/cloud/cloud.cfg. Here is the diff between the installed and packaged versions. We keep the existing file and do not install the new generic packaged version (the system would not boot with it).

At the end, it complains about

W: Possible missing firmware /lib/firmware/ast_dp501_fw.bin for module ast

Time to reboot the server and check if we messed it up.

root@lxd:~# shutdown -r now

$ ssh root@ip.ip.ip.ip
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-24-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

Last login: Tue Sep 26 15:29:58 2017 from 1.2.3.4
root@lxd:~#

We are good! Note that now it says Ubuntu 16.04.3 while before it was Ubuntu 16.04.2.

LXD is not installed by default,

root@lxd:~# apt policy lxd
lxd:
  Installed: (none)
  Candidate: 2.0.10-0ubuntu1~16.04.1
  Version table:
     2.0.10-0ubuntu1~16.04.1 500
        500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
     2.0.0-0ubuntu4 500
        500 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages

There are two versions: 2.0.0, the stock version released initially with Ubuntu 16.04, and 2.0.10, currently the latest stable version for Ubuntu 16.04. Let’s install.

root@lxd:~# apt install lxd
...

We are now ready to add the non-root user account.

root@lxd:~# adduser myusername
Adding user `myusername' ...
Adding new group `myusername' (1000) ...
Adding new user `myusername' (1000) with group `myusername' ...
Creating home directory `/home/myusername' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for myusername
Enter the new value, or press ENTER for the default
        Full Name []:
        Room Number []:
        Work Phone []:
        Home Phone []:
        Other []:
Is the information correct? [Y/n] Y
root@lxd:~# ssh myusername@localhost
Permission denied (publickey).
root@lxd:~# cp -R ~/.ssh/ ~myusername/
root@lxd:~# chown -R myusername:myusername ~myusername/

We added the new username, then tested that password authentication is indeed disabled. Finally, we copied the authorized_keys file from ~root/ to the new non-root account, and adjusted the ownership of those files.

Let’s log out from the server and log in again as the new non-root account.

$ ssh myusername@ip.ip.ip.ip
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-24-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

**************************************************************************
# This system is using the EC2 Metadata Service, but does not appear to #
# be running on Amazon EC2 or one of cloud-init's known platforms that  #
# provide a EC2 Metadata service. In the future, cloud-init may stop    #
# reading metadata from the EC2 Metadata Service unless the platform can #
# be identified.                                                         #
#                                                                        #
# If you are seeing this message, please file a bug against              #
# cloud-init at                                                          #
#    https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid      #
# Make sure to include the cloud provider your instance is               #
# running on.                                                            #
#                                                                        #
# For more information see                                               #
#    https://bugs.launchpad.net/bugs/1660385                             #
#                                                                        #
# After you have filed a bug, you can disable this warning by            #
# launching your instance with the cloud-config below, or                #
# putting that content into                                              #
#    /etc/cloud/cloud.cfg.d/99-ec2-datasource.cfg                        #
#                                                                        #
# #cloud-config                                                          #
# datasource:                                                            #
#   Ec2:                                                                 #
#     strict_id: false                                                   #
**************************************************************************

Disable the warnings above by:
  touch /home/myusername/.cloud-warnings.skip
or
  touch /var/lib/cloud/instance/warnings/.skip

myusername@lxd:~$

This issue is related to our action to keep the existing cloud.cfg after we upgraded the cloud-init package. It is something that packet.net (the provider) should deal with.

We are ready to try out LXD on packet.net.

Configuring LXD

Let’s configure LXD. First, how much free space do we have?

myusername@lxd:~$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       136G  1.1G  128G   1% /
myusername@lxd:~$

There is plenty of space; we will use 100GB for LXD.

We are using ZFS as the LXD storage backend, therefore,

myusername@lxd:~$ sudo apt install zfsutils-linux

Now, we set up LXD.

myusername@lxd:~$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs
Create a new ZFS pool (yes/no) [default=yes]? yes
Name of the new ZFS pool [default=lxd]: lxd
Would you like to use an existing block device (yes/no) [default=no]? no
Size in GB of the new loop device (1GB minimum) [default=27]: 100
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? yes
LXD has been successfully configured.
myusername@lxd:~$ lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
myusername@lxd:~$

Trying out LXD

Let’s create a container, install nginx and then make the web server accessible through the Internet.

myusername@lxd:~$ lxc launch ubuntu:16.04 web
Creating web
Retrieving image: rootfs: 100% (47.99MB/s)
Starting web
myusername@lxd:~$

Let’s see the details of the container, called web.

myusername@lxd:~$ lxc list --columns ns4tS
+------+---------+---------------------+------------+-----------+
| NAME |  STATE  |        IPV4         |    TYPE    | SNAPSHOTS |
+------+---------+---------------------+------------+-----------+
| web  | RUNNING | 10.253.67.97 (eth0) | PERSISTENT | 0         |
+------+---------+---------------------+------------+-----------+
myusername@lxd:~$

We can see the container IP address. The parameter ns4tS simply omits the IPv6 address (‘6’) so that the table will look nice on the blog post.

Let’s enter the container and install nginx.

myusername@lxd:~$ lxc exec web -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@web:~$

We execute in the web container the whole command sudo --login --user ubuntu, which gives us a login shell in the container. All Ubuntu containers have a default non-root account called ubuntu.

ubuntu@web:~$ sudo apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease

3 packages can be upgraded. Run ‘apt list --upgradable’ to see them.
ubuntu@web:~$ sudo apt install nginx
Reading package lists… Done

Processing triggers for ufw (0.35-0ubuntu2) …
ubuntu@web:~$ sudo vi /var/www/html/index.nginx-debian.html
ubuntu@web:~$ logout

Before installing a package, we must update. We updated and then installed nginx. Subsequently, we touched up the default HTML file a bit to mention Packet.net and LXD. Finally, we logged out from the container.

Let’s test that the web server in the container is working.

myusername@lxd:~$ curl 10.253.67.97
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx on Packet.net in an LXD container!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx on Packet.net in an LXD container!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
myusername@lxd:~$

The last step is to get Ubuntu to forward any Internet connections from port 80 to the container at port 80. For this, we need the public IP of the server and the private IP of the container (it’s 10.253.67.97).

myusername@lxd:~$ ifconfig
bond0     Link encap:Ethernet  HWaddr 0c:c4:7a:de:51:a8
          inet addr:147.75.82.251  Bcast:255.255.255.255  Mask:255.255.255.254
          inet6 addr: 2604:1380:2000:600::1/127 Scope:Global
          inet6 addr: fe80::ec4:7aff:fee5:4462/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:144216 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14181 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:211518302 (211.5 MB)  TX bytes:1443508 (1.4 MB)

The interface is a bond, bond0. Two 1Gbps connections are bonded together.

myusername@lxd:~$ PORT=80 PUBLIC_IP=147.75.82.251 CONTAINER_IP=10.253.67.97 sudo -E bash -c 'iptables -t nat -I PREROUTING -i bond0 -p TCP -d $PUBLIC_IP --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment "forward to the Nginx container"'
myusername@lxd:~$

Let’s test it out!

That’s it!

Richard Hughes: fwupd about to break API and ABI

Planet GNOME - Tue, 26/09/2017 - 9:35pm

Soon I’m going to merge a PR to fwupd that breaks API and ABI and bumps the soname. If you want to use the stable branch, please track 0_9_X. The API break removes all the deprecated API and cruft we’ve picked up in the months since we started the project, and with the upcoming 1.0.0 version coming up in a few weeks it seems a sensible time to have a clean out. If it helps, I’m going to put 0.9.x in Fedora 26 and F27, so master branch probably only for F28/rawhide and jhbuild at this point.

In other news, 4 days ago I became a father again, so expect emails to be delayed and full of confusion. All doing great, but it turns out sleep is for the weak. :)

A mysterious bug with Twisted plugins

Planet Debian - Tue, 26/09/2017 - 5:20pm

I fixed a bug in Launchpad recently that led me deeper than I expected.

Launchpad uses Buildout as its build system for Python packages, and it’s served us well for many years. However, we’re using 1.7.1, which doesn’t support ensuring that packages required using setuptools’ setup_requires keyword only ever come from the local index URL when one is specified; that’s an essential constraint we need to be able to impose so that our build system isn’t immediately sensitive to downtime or changes in PyPI. There are various issues/PRs about this in Buildout (e.g. #238), but even if those are fixed it’ll almost certainly only be in Buildout v2, and upgrading to that is its own kettle of fish for other reasons. All this is a serious problem for us because newer versions of many of our vital dependencies (Twisted and testtools, to name but two) use setup_requires to pull in pbr, and so we’ve been stuck on old versions for some time; this is part of why Launchpad doesn’t yet support newer SSH key types, for instance. This situation obviously isn’t sustainable.

To deal with this, I’ve been working for some time on switching to virtualenv and pip. This is harder than you might think: Launchpad is a long-lived and complicated project, and it had quite a number of explicit and implicit dependencies on Buildout’s configuration and behaviour. Upgrading our infrastructure from Ubuntu 12.04 to 16.04 has helped a lot (12.04’s baseline virtualenv and pip have some deficiencies that would have required a more complicated bootstrapping procedure). I’ve dealt with most of these: for example, I had to reorganise a lot of our helper scripts (1, 2, 3), but there are still a few more things to go.

One remaining problem was that our Buildout configuration relied on building several different environments with different Python paths for various things. While this would technically be possible by way of building multiple virtualenvs, this would inflate our build time even further (we’re already going to have to cope with some slowdown as a result of using virtualenv, because the build system now has to do a lot more than constructing a glorified link farm to a bunch of cached eggs), and it seems like unnecessary complexity. The obvious thing to do seemed to be to collapse these into a single environment, since there was no obvious reason why it should actually matter if txpkgupload and txlongpoll were carefully kept off the path when running most of Launchpad: so I did that.

Then our build system got very sad.

Hmm, I thought. To keep our test times somewhat manageable, we run them in parallel across 20 containers, and we randomise the order in which they run to try to shake out test isolation bugs. It’s not completely unknown for there to be some oddities resulting from that. So I ran it again. Nope, but slightly differently sad this time. Furthermore, I couldn’t reproduce these failures locally no matter how hard I tried. Oh dear. This was obviously not going to be a good day.

In fact I spent a while on various different guesswork-based approaches. I found bug 571334 in Ampoule, an AMP-based process pool implementation that we use for some job runners, and proposed a fix for that, but cherry-picking that fix into Launchpad didn’t help matters. I tried backing out subsets of my changes and determined that if both txlongpoll and txpkgupload were absent from the Python module path in the context of the tests in question then everything was fine. I tried running strace locally and staring at the output for some time in the hope of enlightenment: that reminded me that the two packages in question install modules under twisted.plugins, which did at least establish a reason they might affect the environment that was more plausible than magic, but nothing much more specific than that.

On Friday I was fiddling about with this again and trying to insert some more debugging when I noticed some interesting behaviour around plugin caching. If I caused the txpkgupload plugin to raise an exception when loaded, the Twisted plugin system would remove its dropin.cache (because it was stale) and not create a new one (because there was now no content to put in it). After that, running the relevant tests would fail as I’d seen in our buildbot. Aha! This meant that I could also reproduce it by doing an even cleaner build than I’d previously tried to do, by removing the cached txpkgupload and txlongpoll eggs and allowing the build system to recreate them. When they were recreated, they didn’t contain dropin.cache, instead allowing that to be created on first use.

Based on this clue I was able to get to the answer relatively quickly. Ampoule has a specialised bootstrapping sequence for its worker processes that starts by doing this:

from twisted.application import reactors
reactors.installReactor(reactor)

Now, twisted.application.reactors.installReactor calls twisted.plugin.getPlugins, so the very start of this bootstrapping sequence is going to involve loading all plugins found on the module path (I assume it’s possible to write a plugin that adds an alternative reactor implementation). If dropin.cache is up to date, then it will just get the information it needs from that; but if it isn’t, it will go ahead and import the plugin. If the plugin happens (as Twisted code often does) to run from twisted.internet import reactor at some point while being imported, then that will install the platform’s default reactor, and then twisted.application.reactors.installReactor will raise ReactorAlreadyInstalledError. Since Ampoule turns this into an info-level log message for some reason, and the tests in question only passed through error-level messages or higher, this meant that all we could see was that a worker process had exited non-zero but not why.

The Twisted documentation recommends generating the plugin cache at build time for other reasons, but we weren’t doing that. Fixing that makes everything work again.
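
In practice that can be as simple as running a one-liner like this at the end of the build, on the final module path, so that every twisted/plugins directory gets a fresh dropin.cache (a sketch, not Launchpad's actual build rule):

# iterating over all plugins forces Twisted to (re)write the caches
python -c 'from twisted.plugin import IPlugin, getPlugins; list(getPlugins(IPlugin))'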

There are still a few more things needed to get us onto pip, but we’re now pretty close. After that we can finally start bringing our dependencies up to date.

Colin Watson https://www.chiark.greenend.org.uk/~cjwatson/blog/ Colin Watson's blog

Colin Watson: A mysterious bug with Twisted plugins

Planet Ubuntu - Mar, 26/09/2017 - 5:20md

I fixed a bug in Launchpad recently that led me deeper than I expected.

Launchpad uses Buildout as its build system for Python packages, and it’s served us well for many years. However, we’re using 1.7.1, which doesn’t support ensuring that packages required using setuptools’ setup_requires keyword only ever come from the local index URL when one is specified; that’s an essential constraint we need to be able to impose so that our build system isn’t immediately sensitive to downtime or changes in PyPI. There are various issues/PRs about this in Buildout (e.g. #238), but even if those are fixed it’ll almost certainly only be in Buildout v2, and upgrading to that is its own kettle of fish for other reasons. All this is a serious problem for us because newer versions of many of our vital dependencies (Twisted and testtools, to name but two) use setup_requires to pull in pbr, and so we’ve been stuck on old versions for some time; this is part of why Launchpad doesn’t yet support newer SSH key types, for instance. This situation obviously isn’t sustainable.

To deal with this, I’ve been working for some time on switching to virtualenv and pip. This is harder than you might think: Launchpad is a long-lived and complicated project, and it had quite a number of explicit and implicit dependencies on Buildout’s configuration and behaviour. Upgrading our infrastructure from Ubuntu 12.04 to 16.04 has helped a lot (12.04’s baseline virtualenv and pip have some deficiencies that would have required a more complicated bootstrapping procedure). I’ve dealt with most of these: for example, I had to reorganise a lot of our helper scripts (1, 2, 3), but there are still a few more things to go.

One remaining problem was that our Buildout configuration relied on building several different environments with different Python paths for various things. While this would technically be possible by way of building multiple virtualenvs, this would inflate our build time even further (we’re already going to have to cope with some slowdown as a result of using virtualenv, because the build system now has to do a lot more than constructing a glorified link farm to a bunch of cached eggs), and it seems like unnecessary complexity. The obvious thing to do seemed to be to collapse these into a single environment, since there was no obvious reason why it should actually matter if txpkgupload and txlongpoll were carefully kept off the path when running most of Launchpad: so I did that.

Then our build system got very sad.

Hmm, I thought. To keep our test times somewhat manageable, we run them in parallel across 20 containers, and we randomise the order in which they run to try to shake out test isolation bugs. It’s not completely unknown for there to be some oddities resulting from that. So I ran it again. Nope, but slightly differently sad this time. Furthermore, I couldn’t reproduce these failures locally no matter how hard I tried. Oh dear. This was obviously not going to be a good day.

In fact I spent a while on various different guesswork-based approaches. I found bug 571334 in Ampoule, an AMP-based process pool implementation that we use for some job runners, and proposed a fix for that, but cherry-picking that fix into Launchpad didn’t help matters. I tried backing out subsets of my changes and determined that if both txlongpoll and txpkgupload were absent from the Python module path in the context of the tests in question then everything was fine. I tried running strace locally and staring at the output for some time in the hope of enlightenment: that reminded me that the two packages in question install modules under twisted.plugins, which did at least establish a reason they might affect the environment that was more plausible than magic, but nothing much more specific than that.

On Friday I was fiddling about with this again and trying to insert some more debugging when I noticed some interesting behaviour around plugin caching. If I caused the txpkgupload plugin to raise an exception when loaded, the Twisted plugin system would remove its dropin.cache (because it was stale) and not create a new one (because there was now no content to put in it). After that, running the relevant tests would fail as I’d seen in our buildbot. Aha! This meant that I could also reproduce it by doing an even cleaner build than I’d previously tried to do, by removing the cached txpkgupload and txlongpoll eggs and allowing the build system to recreate them. When they were recreated, they didn’t contain dropin.cache, instead allowing that to be created on first use.

Based on this clue I was able to get to the answer relatively quickly. Ampoule has a specialised bootstrapping sequence for its worker processes that starts by doing this:

from twisted.application import reactors
reactors.installReactor(reactor)

Now, twisted.application.reactors.installReactor calls twisted.plugin.getPlugins, so the very start of this bootstrapping sequence is going to involve loading all plugins found on the module path (I assume it’s possible to write a plugin that adds an alternative reactor implementation). If dropin.cache is up to date, then it will just get the information it needs from that; but if it isn’t, it will go ahead and import the plugin. If the plugin happens (as Twisted code often does) to run from twisted.internet import reactor at some point while being imported, then that will install the platform’s default reactor, and then twisted.application.reactors.installReactor will raise ReactorAlreadyInstalledError. Since Ampoule turns this into an info-level log message for some reason, and the tests in question only passed through error-level messages or higher, this meant that all we could see was that a worker process had exited non-zero but not why.
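For illustration, this failure mode can be reproduced outside the test suite with a few lines like the following (a sketch; “epoll” is just one valid reactor short name on Linux):

# Importing twisted.internet.reactor installs the platform's default reactor
# as a side effect, so a later attempt to install a named reactor fails.
from twisted.internet import reactor
from twisted.internet.error import ReactorAlreadyInstalledError
from twisted.application import reactors

try:
    reactors.installReactor("epoll")
except ReactorAlreadyInstalledError as exc:
    print("reactor already installed:", exc)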

The Twisted documentation recommends generating the plugin cache at build time for other reasons, but we weren’t doing that. Fixing that makes everything work again.
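For reference, the cache can be (re)generated at build time with a one-off snippet along these lines, run once the final module path is in place (this mirrors the regeneration recipe in the Twisted plugin documentation):

# Iterating over all IPlugin providers rebuilds any stale dropin.cache
# files as a side effect, so packaged plugins ship with a fresh cache.
from twisted.plugin import IPlugin, getPlugins

list(getPlugins(IPlugin))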

There are still a few more things needed to get us onto pip, but we’re now pretty close. After that we can finally start bringing our dependencies up to date.

Colin Watson https://www.chiark.greenend.org.uk/~cjwatson/blog/ Colin Watson's blog

Debian/TeX Live 2017.20170926-1

Planet Debian - Mar, 26/09/2017 - 5:01md

A full month or more has passed since the last upload of TeX Live, so it was high time to prepare a new package. Nothing spectacular here, I have to say: two small bugs fixed and the usual long list of updates and new packages.

Among the new packages I found fontloader-luaotfload an interesting project. Loading fonts via Lua code in LuaTeX is by now standard, and this package allows for experiments with newer/alternative font loaders. Another very interesting newcomer is pdfreview, which lets you set pages of another PDF on a lined background and add notes to it, good for reviewing.

Enjoy.

New packages

abnt, algobox, beilstein, bib2gls, cheatsheet, coelacanth, dijkstra, dynkin-diagrams, endofproofwd, fetchcls, fixjfm, fontloader-luaotfload, forms16be, hithesis, ifxptex, komacv-rg, ku-template, latex-refsheet, limecv, mensa-tex, multilang, na-box, notes-tex, octave, pdfreview, pst-poker, theatre, upzhkinsoku, witharrows.

Updated packages

2up, acmart, acro, amsmath, animate, babel, babel-french, babel-hungarian, bangorcsthesis, beamer, beebe, biblatex-gost, biblatex-philosophy, biblatex-source-division, bibletext, bidi, bpchem, bxjaprnind, bxjscls, bytefield, checkcites, chemmacros, chet, chickenize, complexity, curves, cweb, datetime2-german, e-french, epstopdf, eqparbox, esami, etoc, fbb, fithesis, fmtcount, fnspe, fontspec, genealogytree, glossaries, glossaries-extra, hvfloat, ifptex, invoice2, jfmutil, jlreq, jsclasses, koma-script, l3build, l3experimental, l3kernel, l3packages, latexindent, libertinust1math, luatexja, lwarp, markdown, mcf2graph, media9, nddiss, newpx, newtx, novel, numspell, ocgx2, philokalia, phfqit, placeat, platex, poemscol, powerdot, pst-barcode, pst-cie, pst-exa, pst-fit, pst-func, pst-geometrictools, pst-ode, pst-plot, pst-pulley, pst-solarsystem, pst-solides3d, pst-tools, pst-vehicle, pst2pdf, pstricks, pstricks-add, ptex-base, ptex-fonts, pxchfon, quran, randomlist, reledmac, robustindex, scratch, skrapport, spectralsequences, tcolorbox, tetex, tex4ht, texcount, texdef, texinfo, texlive-docindex, texlive-scripts, tikzducks, tikzsymbols, tocloft, translations, updmap-map, uplatex, widetable, xepersian, xetexref, xint, xsim, zhlipsum.

Norbert Preining https://www.preining.info/blog There and back again

Simos Xenitellis: How to use Ubuntu and LXD on Alibaba Cloud

Planet Ubuntu - Mar, 26/09/2017 - 3:38md

Alibaba Cloud is similar to Amazon Web Services in that they offer much the same kinds of cloud services. They are part of the Alibaba Group, a huge Chinese conglomerate; for example, the retail component of the Alibaba Group is now bigger than Walmart. Here, we try out their cloud services.

The main reason to select Alibaba Cloud is to get a server running inside China. They also have several data centers outside China, but inside China it is mostly Alibaba Cloud. To get a server running inside mainland China though, you need to go through a registration process where you submit photos of your passport. We do not have time for that, therefore we select the data center closest to China: Hong Kong.

Creating an account on Alibaba Cloud

Click to create an account on Alibaba Cloud (update: the referral link has been removed). You get $300 of credit to use within two months, and up to $50 of that credit can go towards launching virtual private servers. Create that account now, before continuing with the rest of this section.

When creating the account, there is the option to verify either your email address or your phone number. Let’s do the email verification.

Let’s check our email. Where is that message from Alibaba Cloud? Nothing arrived!?!

The usability problem is evident here. When you get to the Verification page, the text says We need to verify your email. Please input the number you receive. But Alibaba Cloud has not already sent that email to us; we need to first click on Send to get it to send the email. The text should instead have said something like To use email verification, click Send below, then input the number code you have received.

You can pay Alibaba Cloud using either a bank card or PayPal. Let’s try PayPal! Actually, to make use of the $300 credit, it has to be a bank card instead.

We have added a bank card. The card has to go through a verification step: Alibaba Cloud will make a small debit (to be refunded later), and you can input either the transaction amount or the transaction code (see screenshot below) in order to verify that you do have access to your bank card.

After a couple of days, you get worried because there is no transaction with the description INTL*?????.ALIYUN.COM in your online banking. What went wrong? And what is this weird transaction with a different description in my bank statement?

Description: INTL*175 LUXEM LU ,44

Debit amount: 0.37€

What is LUXEM, a municipality in Germany, doing on my bank statement? Let’s hope that the payment processor for Alibaba in Europe is LUXEM, not ALIYUN.

Let’s try the number 175 from the description as the transaction code. It did not work. Four more tries remaining.

Let’s try the transaction amount, 0.37€. Of course, it did not work: it expects USD, not euros! Three tries remaining.

Let’s google a bit. The Add a payment method documentation on Alibaba Cloud talks only about dollars. A forum post about non-dollar currencies says:

I did not get an authorization charge, therefore there is no X.

Let’s do something really crazy:

We type 0.44 as the transaction amount. IT WORKED!

In retrospect, there is a reference to ,44 in the description; who would have thought that this undocumented detail might refer to the dollar amount.

After a week, the micro-transaction of 0.37€ had not been reimbursed. What’s more, I was also charged a 2.5€ commission which I am not getting back either.

We are now ready to use the $300 Free Credit!

Creating a server on Alibaba Cloud

When trying to create a server, you may end up on a page with the hostname YUNDUN.console.aliyun.com. If you get that, you are in the wrong place: you cannot add your SSH key there, nor create a server there.

Instead, it should say ECS, Elastic Compute Service.

Here is the full menu for ECS,

Under Networks & Security, there is Key Pairs. Let’s add the SSH public key there, not the whole key pair.

First of all, we need to select the appropriate data center. OK, we change to Hong Kong, which is listed in the middle.

But how do we add our own SSH key? There is only an option to Create Key Pair!?! Well, let’s create a pair.

Ah, okay. Although the page is called Create Key Pair, we can actually Import an Existing Key Pair.

Now, click back to Elastic Compute S…/Overview, which shows each data center.

If we were to try to create a server in Mainland China, we get

In that case, we would need to first send a photo of our passport or our driver’s license.

Let’s go back, and select Hong Kong.

We are ready to configure our server.

There is the option of either a Starter Package or an Advanced Purchase. The Starter Package is really cool: you can get a server for only $4.50. But the fine print for the $300 credit says that you cannot use the Starter Package here.

So, Advanced Purchase it will be.

There are two pricing models, Subscription and Pay As You Go. Subscription means that you pay monthly, Pay As You Go means that you pay hourly. We go for Subscription.

We select the 1-core, 1GB instance, and we can see the price at $12.29. We also pay separately for the Internet traffic. The cost is shown on an overlay, we still have more options to select before we create the server.

We change the default Security Group to the one shown above. We want our server to be accessible from outside on ports 80 and 443. Also port 22 is added by default, along with the port 3389 (Remote Desktop in Windows).

We select Ubuntu 16.04. The order of the operating systems is a bit weird; ideally, the order should reflect popularity.

There is an option for Server Guard. Let’s try it since it is free. (It requires installing a closed-source package on our Linux system; in the end I did not try it.)

The Ultra Cloud Disk is a network share and is included in the price shown earlier. The other option would be to select an SSD. It is nice that we can add up to 16 disks to our server.

We are ready to place the order. It correctly shows $0 and mentions the $50 credit. We select not to auto renew.

Now we pay the $0.

And that’s how we start a server. We received an email with the IP address, but we can also find the public IP address in the ECS settings.

Let’s have a look at the IP block for this IP address.

ffs.

How to set up LXD on an Alibaba server

First, we SSH to the server. The command looks like ssh root@_public_ip_address_

It looks like real Ubuntu, with a real Ubuntu Linux kernel. Let’s update.

root@iZj6c66d14k19wi7139z9eZ:~# apt update
Get:1 http://mirrors.cloud.aliyuncs.com/ubuntu xenial InRelease [247 kB]
Hit:2 http://mirrors.aliyun.com/ubuntu xenial InRelease
...
Get:45 http://mirrors.aliyun.com/ubuntu xenial-security/universe i386 Packages [147 kB]
Get:46 http://mirrors.aliyun.com/ubuntu xenial-security/universe Translation-en [89.8 kB]
Fetched 40.8 MB in 24s (1682 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
105 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@iZj6c66d14k19wi7139z9eZ:~#

We upgraded (apt upgrade) and there was a kernel update. We restarted (shutdown -r now) and the newly updated Ubuntu has the updated kernel. Nice!

Let’s check /proc/cpuinfo,

root@iZj6c66d14k19wi7139z9eZ:~# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 63
model name      : Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
stepping        : 2
microcode       : 0x1
cpu MHz         : 2494.224
cache size      : 30720 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt
bugs            :
bogomips        : 4988.44
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:
root@iZj6c66d14k19wi7139z9eZ:/proc#

How much free space from the 40GB disk?

root@iZj6c66d14k19wi7139z9eZ:~# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        40G  2,2G   36G   6% /
root@iZj6c66d14k19wi7139z9eZ:~#

Let’s add a non-root user.

root@iZj6c66d14k19wi7139z9eZ:~# adduser myusername
Adding user `myusername' ...
Adding new group `myusername' (1000) ...
Adding new user `myusername' (1000) with group `myusername' ...
Creating home directory `/home/myusername' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for myusername
Enter the new value, or press ENTER for the default
	Full Name []:
	Room Number []:
	Work Phone []:
	Home Phone []:
	Other []:
Is the information correct? [Y/n]
root@iZj6c66d14k19wi7139z9eZ:~#

Is LXD already installed?

root@iZj6c66d14k19wi7139z9eZ:~# apt policy lxd
lxd:
  Installed: (none)
  Candidate: 2.0.10-0ubuntu1~16.04.2
  Version table:
     2.0.10-0ubuntu1~16.04.2 500
        500 http://mirrors.cloud.aliyuncs.com/ubuntu xenial-updates/main amd64 Packages
        500 http://mirrors.aliyun.com/ubuntu xenial-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     2.0.2-0ubuntu1~16.04.1 500
        500 http://mirrors.cloud.aliyuncs.com/ubuntu xenial-security/main amd64 Packages
        500 http://mirrors.aliyun.com/ubuntu xenial-security/main amd64 Packages
     2.0.0-0ubuntu4 500
        500 http://mirrors.cloud.aliyuncs.com/ubuntu xenial/main amd64 Packages
        500 http://mirrors.aliyun.com/ubuntu xenial/main amd64 Packages
root@iZj6c66d14k19wi7139z9eZ:~#

Let’s install LXD.

root@iZj6c66d14k19wi7139z9eZ:~# apt install lxd

Now, we can add our user account myusername to the groups sudo, lxd.

root@iZj6c66d14k19wi7139z9eZ:~# usermod -a -G lxd,sudo myusername
root@iZj6c66d14k19wi7139z9eZ:~#

Let’s copy the SSH public key from root to the new non-root account.

root@iZj6c66d14k19wi7139z9eZ:~# cp -R /root/.ssh ~myusername/
root@iZj6c66d14k19wi7139z9eZ:~# chown -R myusername:myusername ~myusername/.ssh/
root@iZj6c66d14k19wi7139z9eZ:~#

Now, log out and log in as the new non-root account.

$ ssh myusername@IP.IP.IP.IP
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-96-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Welcome to Alibaba Cloud Elastic Compute Service !

myusername@iZj6c66d14k19wi7139z9eZ:~$

We are going to install the ZFS utilities so that LXD can use ZFS as a storage backend.

myusername@iZj6c66d14k19wi7139z9eZ:~$ sudo apt install zfsutils-linux
...
myusername@iZj6c66d14k19wi7139z9eZ:~$

Now, we can configure LXD. From before, the server had about 35GB free. We are allocating 20GB of that for LXD.

myusername@iZj6c66d14k19wi7139z9eZ:~$ sudo lxd init
sudo: unable to resolve host iZj6c66d14k19wi7139z9eZ
[sudo] password for myusername:  ********
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs
Create a new ZFS pool (yes/no) [default=yes]? yes
Name of the new ZFS pool [default=lxd]: lxd
Would you like to use an existing block device (yes/no) [default=no]? no
Size in GB of the new loop device (1GB minimum) [default=15]: 20
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? yes
Warning: Stopping lxd.service, but it can still be activated by:
lxd.socket

LXD has been successfully configured.
myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc list
Generating a client certificate. This may take a minute…
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
myusername@iZj6c66d14k19wi7139z9eZ:~$

Okay, we can now create our first LXD container. We are creating just a web server.

myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc launch ubuntu:16.04 web
Creating web
Retrieving image: rootfs: 100% (6.70MB/s)
Starting web
myusername@iZj6c66d14k19wi7139z9eZ:~$

Let’s see the container,

myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc list
+------+---------+---------------------+------+------------+-----------+
| NAME | STATE   | IPV4                | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+---------------------+------+------------+-----------+
| web  | RUNNING | 10.35.87.141 (eth0) |      | PERSISTENT | 0         |
+------+---------+---------------------+------+------------+-----------+
myusername@iZj6c66d14k19wi7139z9eZ:~$

Nice. We get into the container and install a web server.

myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc exec web -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@web:~$

We executed the command sudo --login --user ubuntu inside the web container. The container has a default non-root account, ubuntu.

ubuntu@web:~$ sudo apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Hit:2 http://archive.ubuntu.com/ubuntu xenial InRelease
...
Reading state information... Done
3 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@web:~$ sudo apt install nginx
Reading package lists... Done
...
Processing triggers for ufw (0.35-0ubuntu2) ...
ubuntu@web:~$ sudo vi /var/www/html/index.nginx-debian.html
ubuntu@web:~$ logout
myusername@iZj6c66d14k19wi7139z9eZ:~$ curl 10.35.87.141
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx running in an LXD container on Alibaba Cloud!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx running in an LXD container on Alibaba Cloud!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
myusername@iZj6c66d14k19wi7139z9eZ:~$

Obviously, the web server in the container is not accessible from the Internet. We need to add something like iptables rules to forward the connection appropriately.

Alibaba Cloud gives two IP addresses per server. One is the public IP address and the other is a private IP address (172.[16-31].*.*). The eth0 interface of the server has that private IP address. This information is important for the iptables rule below.

myusername@iZj6c66d14k19wi7139z9eZ:~$ PORT=80 PUBLIC_IP=my172.IPAddress CONTAINER_IP=10.35.87.141 sudo -E bash -c 'iptables -t nat -I PREROUTING -i eth0 -p TCP -d $PUBLIC_IP --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment "forward to the Nginx container"'
myusername@iZj6c66d14k19wi7139z9eZ:~$

Let’s load up our site using the public IP address from our own computer:

And that’s it!

Conclusion

Alibaba Cloud is yet another provider of cloud services. They are big in China, in fact the biggest in China, and they are trying to expand to the rest of the world. There are several teething problems, probably arising from the fact that the main website is in Mandarin and there is no infrastructure for immediate translation into English.

On HN there was a sort of relaunch a few months ago. It appears they are interested in getting international users. What they need is people to attend immediately to issues as they are discovered.

If you want to learn more about LXD, see https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/


Update #1

After a day of running a VPS on Alibaba Cloud, I received this email.

From: Alibaba Cloud
Subject: 【Immediate Attention Needed】Alibaba Cloud Fraud Prevention

We have detected a security risk with the card you are using to make purchases. In order to protect your account, please provide your account ID and the following information within one working day via your registered Alibaba Cloud email to compliance_support@aliyun.com for further investigation.

If you are using a credit card as your payment method, please provide the following information directly. Please provide clear copies of:

1. Any ONE of the following three forms of government-issued photo identification for the credit card holder or payment account holder of this Alibaba Cloud account: (i) National identification card; (ii) Passport; (iii) Driver's License.
2. A clear copy of the front side of your credit card in connection with this Alibaba Account; (Note: For security reasons, we advise you to conceal the middle digits of your card number. Please make sure that the card holder's name, card issuing bank and the last four digits of the card number are clearly visible).
3. A clear copy of your card's bank statement.

We will process your case within 3 working days of receiving the information listed above.

NOTE: Please do not provide information in this ticket. All the information needed should be sent to this email compliance_support@aliyun.com. If you fail to provide all the above information within one working day, your instances will be shut down.

Best regards,
Alibaba Cloud Customer Service Center

What this means is that Update #2 has to happen now.


Update #2

Newer versions of LXD have a utility called lxd-benchmark. This utility spawns, starts, and stops containers, and can be used to get an idea of how efficient a server may be. I suppose it is primarily used to figure out whether there is a regression in the LXD code. Let’s see it in action here anyway; the clock is ticking.

The new LXD is in a PPA at https://launchpad.net/~ubuntu-lxc/+archive/ubuntu/lxd-stable. Let’s install it on Alibaba Cloud.

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable
sudo apt update
sudo apt upgrade          # Now LXD will be upgraded.
sudo apt install lxd-tools    # Now lxd-benchmark will be installed.

Let’s see the options for lxd-benchmark.

Usage: lxd-benchmark spawn [--count=COUNT] [--image=IMAGE] [--privileged=BOOL] [--start=BOOL] [--freeze=BOOL] [--parallel=COUNT]
       lxd-benchmark start [--parallel=COUNT]
       lxd-benchmark stop [--parallel=COUNT]
       lxd-benchmark delete [--parallel=COUNT]

--count (= 100)
    Number of containers to create
--freeze (= false)
    Freeze the container right after start
--image (= "ubuntu:")
    Image to use for the test
--parallel (= -1)
    Number of threads to use
--privileged (= false)
    Use privileged containers
--report-file (= "")
    A CSV file to write test file to. If the file is present, it will be appended to.
--report-label (= "")
    A label for the report entry. By default, the action is used.
--start (= true)
    Start the container after creation

First, we need to spawn new containers that we can later start, stop or delete. Ideally, I would expect the terminology to be launch instead of spawn, to keep in sync with the existing container management commands.

Second, there are defaults for each command, as shown above. There is no indication yet of how much RAM you need to spawn the default 100 containers; obviously it would be more than the 1GB of RAM we have on this server. Regarding disk space, that would be fine because of copy-on-write with ZFS: newly created LXD containers do not use up additional space, as they are all based on the first container’s image. Perhaps after a day, when unattended-upgrades kicks in, each container would use up some space for any security updates that get automatically applied.

Let’s try out with 3 containers. We have stopped and deleted the original web container that we have created in this tutorial (lxc stop web ; lxc delete web).

$ lxd-benchmark spawn --count 3
Test environment:
  Server backend: lxd
  Server version: 2.18
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.4.0-96-generic
  Storage backend: zfs
  Storage version: 0.6.5.6-0ubuntu16
  Container backend: lxc
  Container version: 2.1.0

Test variables:
  Container count: 3
  Container mode: unprivileged
  Startup mode: normal startup
  Image: ubuntu:
  Batches: 3
  Batch size: 1
  Remainder: 0

[Sep 27 17:31:41.074] Importing image into local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 17:32:12.825] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 17:32:12.825] Batch processing start
[Sep 27 17:32:37.614] Processed 1 containers in 24.790s (0.040/s)
[Sep 27 17:32:42.611] Processed 2 containers in 29.786s (0.067/s)
[Sep 27 17:32:49.110] Batch processing completed in 36.285s

$ lxc list --columns ns4tS
+-------------+---------+---------------------+------------+-----------+
| NAME        | STATE   | IPV4                | TYPE       | SNAPSHOTS |
+-------------+---------+---------------------+------------+-----------+
| benchmark-1 | RUNNING | 10.35.87.252 (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
| benchmark-2 | RUNNING | 10.35.87.115 (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
| benchmark-3 | RUNNING | 10.35.87.72 (eth0)  | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
| web         | RUNNING | 10.35.87.141 (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
$

We created three extra containers, named benchmark-?, and got them started. They were launched in three batches, which means that one was started after another, not in parallel.

The total time on this server, when the storage backend is zfs, was 36.2 seconds. It is not clear what the numbers in the parentheses mean, as in Processed 1 containers in 18.770s (0.053/s).

The total time on this server, when the storage backend was dir, was 68.6 seconds instead.

Let’s stop them!

$ lxd-benchmark stop
Test environment:
  Server backend: lxd
  Server version: 2.18
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.4.0-96-generic
  Storage backend: zfs
  Storage version: 0.6.5.6-0ubuntu16
  Container backend: lxc
  Container version: 2.1.0

[Sep 27 18:06:08.822] Stopping 3 containers
[Sep 27 18:06:08.822] Batch processing start
[Sep 27 18:06:09.680] Processed 1 containers in 0.858s (1.165/s)
[Sep 27 18:06:10.543] Processed 2 containers in 1.722s (1.162/s)
[Sep 27 18:06:11.406] Batch processing completed in 2.584s
$

With dir, it was around 2.4 seconds.

And then delete them!

$ lxd-benchmark delete
Test environment:
  Server backend: lxd
  Server version: 2.18
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.4.0-96-generic
  Storage backend: zfs
  Storage version: 0.6.5.6-0ubuntu16
  Container backend: lxc
  Container version: 2.1.0

[Sep 27 18:07:12.020] Deleting 3 containers
[Sep 27 18:07:12.020] Batch processing start
[Sep 27 18:07:12.130] Processed 1 containers in 0.110s (9.116/s)
[Sep 27 18:07:12.224] Processed 2 containers in 0.204s (9.814/s)
[Sep 27 18:07:12.317] Batch processing completed in 0.297s
$

With dir, it was 2.5 seconds.

Let’s create three containers in parallel.

$ lxd-benchmark spawn --count=3 --parallel=3
Test environment:
  Server backend: lxd
  Server version: 2.18
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.4.0-96-generic
  Storage backend: zfs
  Storage version: 0.6.5.6-0ubuntu16
  Container backend: lxc
  Container version: 2.1.0

Test variables:
  Container count: 3
  Container mode: unprivileged
  Startup mode: normal startup
  Image: ubuntu:
  Batches: 1
  Batch size: 3
  Remainder: 0

[Sep 27 18:11:01.570] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 18:11:01.570] Batch processing start
[Sep 27 18:11:11.574] Processed 3 containers in 10.004s (0.300/s)
[Sep 27 18:11:11.574] Batch processing completed in 10.004s
$

With dir, it was 58.7 seconds.

Let’s push it further and try to hit some memory limits! First, we delete them all, then launch 5 in parallel.

$ lxd-benchmark spawn --count=5 --parallel=5
Test environment:
  Server backend: lxd
  Server version: 2.18
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.4.0-96-generic
  Storage backend: zfs
  Storage version: 0.6.5.6-0ubuntu16
  Container backend: lxc
  Container version: 2.1.0

Test variables:
  Container count: 5
  Container mode: unprivileged
  Startup mode: normal startup
  Image: ubuntu:
  Batches: 1
  Batch size: 5
  Remainder: 0

[Sep 27 18:13:11.171] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 18:13:11.172] Batch processing start
[Sep 27 18:13:33.461] Processed 5 containers in 22.290s (0.224/s)
[Sep 27 18:13:33.461] Batch processing completed in 22.290s
$

So, 5 containers can start in 1GB of RAM, in just 22 seconds.

We also tried the same with the dir storage backend, and got

[Sep 27 17:24:16.409] Batch processing start
[Sep 27 17:24:54.508] Failed to spawn container 'benchmark-5': Unpack failed, Failed to run: unsquashfs -f -d /var/lib/lxd/storage-pools/default/containers/benchmark-5/rootfs -n -da 99 -fr 99 -p 1 /var/lib/lxd/images/03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee.rootfs: .
[Sep 27 17:25:11.129] Failed to spawn container 'benchmark-3': Unpack failed, Failed to run: unsquashfs -f -d /var/lib/lxd/storage-pools/default/containers/benchmark-3/rootfs -n -da 99 -fr 99 -p 1 /var/lib/lxd/images/03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee.rootfs: .
[Sep 27 17:25:35.906] Processed 5 containers in 79.496s (0.063/s)
[Sep 27 17:25:35.906] Batch processing completed in 79.496s

Out of the five containers, it managed to create three (numbers 1, 2 and 4; the log above shows containers 3 and 5 failing). The reason is that unsquashfs needs to run to expand an image, and that process uses a lot of memory. When using zfs, the same process probably does not need that much memory.

Let’s delete the five containers (storage backend: zfs):

[Sep 27 18:18:37.432] Batch processing completed in 5.006s

Let’s close the post with

$ lxd-benchmark spawn --count=10 --parallel=5
Test environment:
  Server backend: lxd
  Server version: 2.18
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.4.0-96-generic
  Storage backend: zfs
  Storage version: 0.6.5.6-0ubuntu16
  Container backend: lxc
  Container version: 2.1.0

Test variables:
  Container count: 10
  Container mode: unprivileged
  Startup mode: normal startup
  Image: ubuntu:
  Batches: 2
  Batch size: 5
  Remainder: 0

[Sep 27 18:19:44.706] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 18:19:44.706] Batch processing start
[Sep 27 18:20:07.705] Processed 5 containers in 22.998s (0.217/s)
[Sep 27 18:20:57.114] Processed 10 containers in 72.408s (0.138/s)
[Sep 27 18:20:57.114] Batch processing completed in 72.408s

We launched 10 containers in two batches of five containers each. The lxd-benchmark command completed successfully, in just 72 seconds. However, after the command completed, each container would still start up, get an IP and start working. We hit the memory limit while the second batch of five containers was starting up. The network monitor on the Alibaba Cloud management console shows 100% CPU utilization, and it is not possible to access the server over SSH. Let’s delete the server from the management console and wind down this trial of Alibaba Cloud.

lxd-benchmark is quite useful and can be used to get a practical understanding of how many containers can fit on a server, and much more.

Update #3

I just restarted the server from the management console and connected using SSH.

Here are the ten containers from Update #2,

$ lxc list --columns ns4
+--------------+---------+------+
| NAME         | STATE   | IPV4 |
+--------------+---------+------+
| benchmark-01 | STOPPED |      |
+--------------+---------+------+
| benchmark-02 | STOPPED |      |
+--------------+---------+------+
| benchmark-03 | STOPPED |      |
+--------------+---------+------+
| benchmark-04 | STOPPED |      |
+--------------+---------+------+
| benchmark-05 | STOPPED |      |
+--------------+---------+------+
| benchmark-06 | STOPPED |      |
+--------------+---------+------+
| benchmark-07 | STOPPED |      |
+--------------+---------+------+
| benchmark-08 | STOPPED |      |
+--------------+---------+------+
| benchmark-09 | STOPPED |      |
+--------------+---------+------+
| benchmark-10 | STOPPED |      |
+--------------+---------+------+

The containers are in the stopped state. That is, they do not consume memory. How much free memory is there?

$ free
              total        used        free      shared  buff/cache   available
Mem:        1016020       56192      791752        2928      168076      805428
Swap:             0           0           0

About 792MB of free memory.

There is not enough memory to get them all running at the same time. It is good that they go into the stopped state when you reboot, so that you can fix things.

Felipe Borges: GNOME 3.26 Release Party in Brno, Czech Republic

Planet GNOME - Mar, 26/09/2017 - 1:55md

Last Monday our local GNOME community in Brno gathered together to celebrate once more one of our releases.

This time (after many releases) we had a cake! Other than that, we had drinks and great people chatting in a very cozy venue. It was a blast to see old friends and make new ones.

Pictures taken by our fellow GNOMEr Jiří Eischmann

I would like to thank the GNOME Foundation for sponsoring our meetup and Dominika Vágnerová for organizing it all!

Sebastian Kügler: Plasma Mobile and Convergence

Planet Ubuntu - Mar, 26/09/2017 - 1:12md

Convergence, or the ability to serve different form factors from the same code base, is an often discussed concept. Convergence is at the heart of Plasma‘s design philosophy, but what does this actually mean for how apps are developed? What’s in it for the user? Let’s have a look!

Plasma — same code, different devices
First, let’s have a look at different angles of “Convergence”. It can actually mean different things, and there is overlap between them. Depending on who you ask, convergence could mean any of the following:

  • Being able to plug a monitor, keyboard and mouse into a smartphone and use it as a full-fledged desktop replacement
  • Develop an application that works on a phone as well as on a desktop
  • Create different device user interfaces from the same code base

Convergence, in the broadest sense, has been one of the design goals of Plasma when we started creating it. When we work on Plasma, we ultimately expect components to run on a wide variety of target devices, we refer to that concept as the device spectrum.

Alex, one of Plasma’s designers, has created a visual concept for a convergent user interface that gives an impression of how a fully convergent Plasma could look to the user:

Input Methods and Screen Characteristics

Technically, there are a few aspects to convergence, the most important being input methods (for example mouse, keyboard, touchscreens, or combinations of those) and screen size (physical dimensions, portrait vs. landscape layout, and pixel density).

Touchscreen support is one aspect of getting KDE software to run on a mobile device or within Plasma Mobile. Touchscreens are not specific to phones any more, however, so making an app or a Plasma component ready for touchscreen usage also benefits people who run Plasma on their convertible laptops, for example. Another big factor is that the app needs to work well on the screen of a smartphone; this means support for high-dpi screens as well as a layout that presents the necessary controls in a way that is functional, attractive and user-friendly. With the Kirigami toolkit, which builds on top of QtQuick, we develop apps that work well on both target devices. From a more general point of view, KDE has always developed apps in a cross-platform way, so portability to other platforms is very much at the heart of our codebase.

The Kirigami toolkit, which offers a set of high-level application flow-controls for QtQuick applications achieves exactly that: it allows to built responsive apps that adapt to screen characteristics and input method.

(As an aside, there’s the case for Kirigami also supporting Android. Developing an app specifically for usage in Plasma may be easier, but it is also limiting its reach. Imagine an app running fine on your laptop, but also on your smartphone, be it Android or driven by Plasma Mobile (in the future). That would totally rock, and it would mean a target audience in the billions, not millions. Conversely, providing the technology to create such apps decreases the relative investment compared to the target audience, making technologies such as QtQuick and Kirigami an excellent choice for developers who want to maximize their target audience.)

Plasma Mobile vs. Plasma Desktop

Plasma Mobile is being developed in tandem with the popular Plasma desktop; in fact, it shares more than 90% of the code with it. This means that work done on either of the two, mobile and desktop, often benefits the other, and that there’s a large degree of compatibility between the two. The result is a system that feels the same across different devices, but makes use of the special capabilities of a given device, and supports different ways of using the software. On the development side, this means huge gains in terms of productivity and quality: a wider set of usage scenarios and having the code running on more machines means that it gets more real-world testing and bugs get shaken out quicker.

Who cares, anyway?

Is convergence something that users want? I think so. It takes a learning curve for users, and it takes advancements in technology to bring this to the market: you need rather powerful hardware, the right connectors, and the right hardware components, so it’s not an easy end-goal. But the path to convergence already bears huge benefits, as it means more efficient development, more consistency across different form factors and higher quality code.

Whether or not users care is only relevant up to a point. Arguably, the biggest benefit of convergence lies in the efficiency of the development process, especially when multiple devices are involved. It doesn’t actually matter all that much whether users are going to plug their mouse and keyboard into a phone and use it as a desktop device. Already today, users expect touchscreens to just work, even on laptops; users already expect a convertible to be usable when the keyboard is flipped away or unplugged; and users already expect to plug a 4K display into their 1024×768 resolution laptop without the UI becoming either unreadable or comically large.

In short: There really is no way around a large degree of convergence in Plasma (and similar products).

Reproducible Builds: Weekly report #126

Planet Debian - Mar, 26/09/2017 - 9:22pd

Here's what happened in the Reproducible Builds effort between Sunday September 17th and Saturday September 23rd 2017:

Media coverage
  • Christos Zoulas gave a talk entitled Reproducible builds on NetBSD at EuroBSDCon 2017
Reproducible work in other packages

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

1 package review was added, 49 were updated and 54 were removed this week, adding to our knowledge about identified issues.

One issue type was updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (56)
  • Bas Couwenberg (1)
  • Helmut Grohne (1)
  • Nobuhiro Iwamatsu (2)
diffoscope development

Version 87 was uploaded to unstable by Mattia Rizzolo. It included contributions from:

strip-nondeterminism development

reprotest development

Version 0.7 was uploaded to unstable by Ximin Luo:

tests.reproducible-builds.org

Vagrant Cascadian and Holger Levsen:

  • Re-added an armhf build node that had been disabled due to performance issues; it works with Linux 4.14-rc1 now! #876212

Holger Levsen:

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible.alioth.debian.org/blog/ Reproducible builds blog
