
Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/
Updated: 6 days 18 hours ago

Stephen Michael Kellat: Not Messing With Hot Wheels Car Insertion

Fri, 08/12/2017 - 7:03am

Being on furlough from your job for just under four full months and losing 20 pounds during that time can hardly be considered healthy. If anything, it means that something is wrong. I allude in various fora that I work for a bureau of the United States of America's federal government as a civil servant. I am not particularly high-ranking as I only come in at GS-7 Step 1 under "CLEVELAND-AKRON-CANTON, OH" locality pay. My job doesn't normally have me working a full 12 months out of the year (generally 6-8 months depending upon the needs of the bureau) and I am normally on-duty only 32 hours per week.

As you might imagine, I have been trying to leave that job. Unfortunately, working for this particular government bureau makes any resume look kinda weird. My local church has some domestic missions work to do and not much money to fund it. I already use what funding we have to help with our mission work reaching out to one of the local nursing homes to provide spiritual care as well as frankly one of the few lifelines to the outside world some of those residents have. Xubuntu and the bleeding edge of LaTeX2e plus CTAN help greatly in preparing devotional materials for use in the field at the nursing home. Funding held us back from letting me assist with Hurricane Harvey or Hurricane Maria relief especially since I am currently finishing off quite a bit of training in homeland security/emergency management. But for the lack of finances to back it up as well as the lack of a large enough congregation, there is quite a bit to do. Unfortunately the numbers we get on a Sunday morning are not what they once were when the congregation had over a hundred in attendance.

I don't like talking about numbers in things like this. If you take the 64 hours in a two-week pay period, multiply by the minimum of 20 pay periods that generally occur, and then multiply by the hourly equivalent rate for my grade and step, it comes out to a pre-tax gross of under $26,000. I rounded up to a whole number. Admittedly it isn't too much.
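The arithmetic is easy to check; the hourly rate below is an assumption back-figured from the stated total, not an official pay-table figure:

```shell
# 64 hours per pay period x 20 pay periods x an assumed ~$20/hour rate
echo $((64 * 20 * 20))   # prints 25600, a pre-tax gross under $26,000
```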

At this time of the year last year, many people across the Internet burned cash by investing in the Holiday Hole event put on by the Cards Against Humanity people. Over $100,000 was raised to dig a hole about 90 miles outside Chicago and then fill the thing back in. This year people spent money to help buy a piece of land to tie up the construction of President Trump's infamous border wall and even more which resulted in Cards Against Humanity raking in $2,250,000 in record time.

Now, the church I would be beefing up the missionary work with doesn't have a web presence. It doesn't have an e-mail address. It doesn't have a fax machine. Again, it is a small church in rural northeast Ohio. According to IRS Publication 526, contributions to them are deductible under current law provided you read through the stipulations in that thin booklet and are a taxpayer in the USA. Folks outside the USA could contribute in US funds but I don't know what the rules are for foreign tax administrations to advise about how such is treated if at all.

The congregation is best reached by writing to:

West Avenue Church of Christ
5901 West Avenue
Ashtabula, OH 44004
United States of America

With the continuing budget shenanigans about how to fund Fiscal Year 2018 for the federal government, I get left wondering if/when I might be returning to duty. Helping the congregation fund me to undertake missions for it removes that as a concern. Besides, any job that gives you gray hair and puts 30 pounds on you during eight months of work cannot be good for you to remain at. Too many co-workers took rides away in ambulances at times due to the pressures of the job during the last work season.


Not Messing With Hot Wheels Car Insertion by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Robert Ancell: Setting up Continuous Integration on gitlab.gnome.org

Fri, 08/12/2017 - 1:40am
Simple Scan recently migrated to the new gitlab.gnome.org infrastructure. With modern infrastructure I now have the opportunity to enable Continuous Integration (CI), which is a fancy name for automatically building and testing your software when you make changes (and it can do more than that too).

I've used CI in many projects in the past, and it's a really handy tool. However, I've never had to set it up myself and when I've looked it's been non-trivial to do so. The great news is this is really easy to do in GitLab!

There's lots of good documentation on how to set it up, but to save you some time I'll show how I set it up for Simple Scan, which is a fairly typical GNOME application.

To configure CI you need to create a file called .gitlab-ci.yml in your git repository. I started with the following:

build-ubuntu:
  image: ubuntu:rolling
  before_script:
    - apt-get update
    - apt-get install -q -y --no-install-recommends meson valac gcc gettext itstool libgtk-3-dev libgusb-dev libcolord-dev libpackagekit-glib2-dev libwebp-dev libsane-dev
  script:
    - meson _build
    - ninja -C _build install

The first line is the name of the job, "build-ubuntu". This is going to define how we build Simple Scan on Ubuntu.

The "image" is the name of a Docker image to build with. You can see all the available images on Docker Hub. In my case I chose an official Ubuntu image and used the "rolling" link which uses the most recently released Ubuntu version.

The "before_script" defines how to set up the system before building. Here I just install the packages I need to build simple-scan.

Finally the "script" is what is run to build Simple Scan. This is just what you'd do from the command line.
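The same job can be extended to exercise a test suite as well; a sketch, assuming the project's Meson setup defines a test target (Simple Scan's own configuration may differ):

```yaml
build-ubuntu:
  image: ubuntu:rolling
  before_script:
    - apt-get update
    - apt-get install -q -y --no-install-recommends meson valac gcc gettext itstool libgtk-3-dev libgusb-dev libcolord-dev libpackagekit-glib2-dev libwebp-dev libsane-dev
  script:
    - meson _build
    - ninja -C _build install
    - ninja -C _build test   # also run the tests, if any are defined
```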

And with that, every time a change is made to the git repository Simple Scan is built on Ubuntu and tells me if that succeeded or not! To make things more visible I added the following to the top of the README.md:

[![Build Status](https://gitlab.gnome.org/GNOME/simple-scan/badges/master/build.svg)](https://gitlab.gnome.org/GNOME/simple-scan/pipelines)

This gives the following image that shows the status of the build:

And because there are many more consumers of Simple Scan than just Ubuntu, I added the following to .gitlab-ci.yml:

build-fedora:
  image: fedora:latest
  before_script:
    - dnf install -y meson vala gettext itstool gtk3-devel libgusb-devel colord-devel PackageKit-glib-devel libwebp-devel sane-backends-devel
  script:
    - meson _build
    - ninja -C _build install

Now it builds on both Ubuntu and Fedora with every commit!

I hope this helps you getting started with CI and gitlab.gnome.org. Happy hacking.

Ubuntu Insights: Security Team Weekly Summary: December 7, 2017

Thu, 07/12/2017 - 4:11pm

The Security Team weekly reports are intended to be very short summaries of the Security Team’s weekly activities.

If you would like to reach the Security Team, you can find us at the #ubuntu-hardened channel on FreeNode. Alternatively, you can mail the Ubuntu Hardened mailing list at: ubuntu-hardened@lists.ubuntu.com

Due to the holiday last week, there was no weekly report, so this report covers the previous two weeks. During the last two weeks, the Ubuntu Security team:

  • Triaged 379 public security vulnerability reports, retaining the 74 that applied to Ubuntu.
  • Published 32 Ubuntu Security Notices which fixed 70 security issues (CVEs) across 34 supported packages.
Ubuntu Security Notices

Bug Triage

Mainline Inclusion Requests

Development

  • add max compressed size check to the review tools
  • adjust review-tools runtime errors output for store (final)
  • adjust review-tools for redflagged base snap overrides
  • adjust review-tools for resquashing with fakeroot
  • upload a couple of bad snaps to test r945 of the review tools in the store. The store is correctly not auto-approving, but is also not handling them right. Filed LP: #1733699
  • investigate SNAPCRAFT_BUILD_INFO=1 with snapcraft cleanbuild and attempt rebuilds
  • respond to feedback in PR 4245, close and resubmit as PR 4255 (interfaces/screen-inhibit-control: fix case in screen inhibit control)
  • investigate reported godot issue. Send up PR 4257 (interfaces/opengl: also allow ‘revision’ on /sys/devices/pci…)
  • investigation of potential biometrics-observe interface
  • snapd reviews
    • PR 4258: fix unmounting on systems without rshared
    • PR 4170: cmd/snap-update-ns: add planWritableMimic
    • PR 4306 (use #include instead of bare ‘include’)
    • PR 4224 – cmd/snap-update-ns: teach update logic to handle synthetic changes
    • PR 4312 – create mount target for lib32,vulkan on demand
    • PR 4323 – interfaces: add gpio-memory-control interface
    • PR 4325 (add test for netlink-connector interface) and investigate NETLINK_CONNECTOR denials
    • review design of PR 4329 – discard stale mountspaces (v2)
  • finalized squashfs fix for 1555305 and submitted it upstream (https://sourceforge.net/p/squashfs/mailman/message/36140758/)

  • investigation into a user's 16.04 AppArmor issues with tomcat
What the Security Team is Reading This Week

Weekly Meeting

More Info

Ubuntu Insights: Kernel Team Summary – December 6, 2017

Wed, 06/12/2017 - 9:14pm
November 21 through December 04

Development (18.04)

Every 6 months the Ubuntu Kernel Team is tasked to pick the kernel to be used in the next release. This is a difficult thing to do because we don’t definitively know what will be going into the upstream kernel over the next 6 months nor the quality of that kernel. We look at the Ubuntu release schedule and how that will line up with the upstream kernel releases. We talk to hardware vendors about when they will be landing their changes upstream and what they would prefer as the Ubuntu kernel version. We talk to major cloud vendors and ask them what they would like. We speak to large consumers of Ubuntu to solicit their opinion. We look at what will be the next upstream stable kernel. We get input from members of the Canonical product strategy team. Taking all of that into account we are tentatively planning to converge on 4.15 for the Bionic Beaver 18.04 LTS release.

On the road to 18.04 we have a 4.14 based kernel in the Bionic -proposed repository.

Stable (Released & Supported)
  • The kernels for the current SRU cycle are being respun to include fixes for CVE-2017-16939 and CVE-2017-1000405.

  • Kernel versions in -proposed:

    trusty                  3.13.0-137.186
    trusty/linux-lts-xenial 4.4.0-103.126~14.04.1
    xenial                  4.4.0-103.126
    xenial/linux-hwe        4.10.0-42.46~16.04.1
    xenial/linux-hwe-edge   4.13.0-19.22~16.04.1
    zesty                   4.10.0-42.46
    artful                  4.13.0-19.22
  • Current cycle: 17-Nov through 09-Dec

    17-Nov           Last day for kernel commits for this cycle.
    20-Nov - 25-Nov  Kernel prep week.
    26-Nov - 08-Dec  Bug verification & Regression testing.
    11-Dec           Release to -updates.
  • Next cycle: 08-Dec through 30-Dec (this cycle will only contain CVE fixes)

    08-Dec           Last day for kernel commits for this cycle.
    11-Dec - 16-Dec  Kernel prep week.
    17-Dec - 29-Dec  Bug verification & Regression testing.
    01-Jan           Release to -updates.
Misc
  • The current CVE status
  • If you would like to reach the kernel team, you can find us at the #ubuntu-kernel
    channel on FreeNode. Alternatively, you can mail the Ubuntu Kernel Team mailing
    list at: kernel-team@lists.ubuntu.com.

Lubuntu Blog: Join Phabricator

Tue, 05/12/2017 - 8:54pm
Inspired by the wonderful KDE folks, Lubuntu has created a Phabricator instance for our project. Phabricator is an open source, version control system-agnostic collaborative development environment similar in some ways to GitHub, GitLab, and perhaps a bit more remotely, like Launchpad. We were looking for tools to organize, coordinate, and collaborate, especially across teams within […]

Simos Xenitellis: How to migrate LXD from DEB/PPA package to Snap package

Tue, 05/12/2017 - 2:35pm

You are using LXD from a Linux distribution package and you would like to migrate your existing installation to the Snap LXD package. Let’s do the migration together!

This post is not about live container migration in LXD. Live container migration is about moving a running container from one LXD server to another.

If you do not have LXD installed already, then look for another guide about the installation and set up of LXD from a snap package. A fresh installation of LXD as a snap package is easy.

Note that from the end of 2017, LXD will be generally distributed as a Snap package. If you run LXD 2.0.x from Ubuntu 16.04, you are not affected by this.

Prerequisites

Let’s check the version of LXD (Linux distribution package).

$ lxd --version
2.20
$ apt policy lxd
lxd:
  Installed: 2.20-0ubuntu4~16.04.1~ppa1
  Candidate: 2.20-0ubuntu4~16.04.1~ppa1
  Version table:
 *** 2.20-0ubuntu4~16.04.1~ppa1 500
        500 http://ppa.launchpad.net/ubuntu-lxc/lxd-stable/ubuntu xenial/main amd64 Packages
        100 /var/lib/dpkg/status
     2.0.11-0ubuntu1~16.04.2 500
        500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
     2.0.2-0ubuntu1~16.04.1 500
        500 http://archive.ubuntu.com/ubuntu xenial-security/main amd64 Packages
     2.0.0-0ubuntu4 500
        500 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages

In this case, we run LXD version 2.20, and it was installed from the LXD PPA repository.

If you did not enable the LXD PPA repository, you would have an LXD version 2.0.x, the version that was released with Ubuntu 16.04 (which is not what is running above). LXD version 2.0.11 is currently the default version for Ubuntu 16.04.3 and will be supported in that form until 2016 + 5 = 2021. LXD version 2.0.0 is the original LXD version in Ubuntu 16.04 (when originally released) and LXD version 2.0.2 is the security update of that LXD 2.0.0.

We are migrating to the LXD snap package. Let’s see how many containers will be migrated.

$ lxc list | grep RUNNING | wc -l
6

This count gives us a good test to run again later, in case something goes horribly wrong.

Let’s check the available incoming LXD snap packages.

$ snap info lxd
name:      lxd
summary:   System container manager and API
publisher: canonical
contact:   https://github.com/lxc/lxd/issues
description: |
  LXD is a container manager for system containers. It offers a REST API to remotely manage
  containers over the network, using an image based workflow and with support for live migration.

  Images are available for all Ubuntu releases and architectures as well as for a wide number
  of other Linux distributions.

  LXD containers are lightweight, secure by default and a great alternative to virtual machines.
snap-id: J60k4JY0HppjwOjW8dZdYc8obXKxujRu
channels:
  stable:        2.20        (5182) 44MB -
  candidate:     2.20        (5182) 44MB -
  beta:          ↑
  edge:          git-b165982 (5192) 44MB -
  2.0/stable:    2.0.11      (4689) 20MB -
  2.0/candidate: 2.0.11      (4770) 20MB -
  2.0/beta:      ↑
  2.0/edge:      git-03e9048 (5131) 19MB -

There are several channels to choose from. The stable channel has LXD 2.20, just like the candidate channel. When the LXD 2.21 snap is ready, it will first be released in the candidate channel and stay there for 24 hours. If everything goes well, it will get propagated to the stable channel. LXD 2.20 was released some time ago, that’s why both channels have the same version (at the time of writing this blog post).

There is the edge channel, which has the auto-compiled version from the git source code repository. It is handy to use this channel if you know that a specific fix (that affects you) has been added to the source code, and you want to verify that it actually fixed the issue. Note that the beta channel is not used, therefore it inherits whatever is found in the channel below it, the edge channel.

Finally, there are these 2.0/ tagged channels that correspond to the stock 2.0.x LXD versions in Ubuntu 16.04. It looks like those who use the 5-year supported LXD (because of Ubuntu 16.04) have the option to switch to a snap version after all.

Installing the LXD snap

Install the LXD snap.

$ snap install lxd
lxd 2.20 from 'canonical' installed

Migrating to the LXD snap

Now, the LXD snap is installed, but the DEB/PPA package LXD is the one that is running. We need to run the migration script lxd.migrate that will move the data from the DEB/PPA version over to the Snap version of LXD. In practical terms, it will move files from /var/lib/lxd (the old DEB/PPA LXD location) over to the location used by the LXD snap.

$ sudo lxd.migrate
=> Connecting to source server
=> Connecting to destination server
=> Running sanity checks

=== Source server
LXD version: 2.20
LXD PID: 4414
Resources:
  Containers: 6
  Images: 3
  Networks: 1
  Storage pools: 1

=== Destination server
LXD version: 2.20
LXD PID: 30329
Resources:
  Containers: 0
  Images: 0
  Networks: 0
  Storage pools: 0

The migration process will shut down all your containers then move your data to the destination LXD.
Once the data is moved, the destination LXD will start and apply any needed updates.
And finally your containers will be brought back to their previous state, completing the migration.

Are you ready to proceed (yes/no) [default=no]? yes
=> Shutting down the source LXD
=> Stopping the source LXD units
=> Stopping the destination LXD unit
=> Unmounting source LXD paths
=> Unmounting destination LXD paths
=> Wiping destination LXD clean
=> Moving the data
=> Moving the database
=> Backing up the database
=> Opening the database
=> Updating the storage backends
=> Starting the destination LXD
=> Waiting for LXD to come online

=== Destination server
LXD version: 2.20
LXD PID: 2812
Resources:
  Containers: 6
  Images: 3
  Networks: 1
  Storage pools: 1

The migration is now complete and your containers should be back online.
Do you want to uninstall the old LXD (yes/no) [default=no]? yes

All done. You may need to close your current shell and open a new one to have the "lxc" command work.

Testing the migration to the LXD snap

Let’s check that the containers managed to start successfully,

$ lxc list | grep RUNNING | wc -l
6

But let’s check that we can still run Firefox from an LXD container, according to the following post,

How to run graphics-accelerated GUI apps in LXD containers on your Ubuntu desktop

Yep, all good. The artifact in the middle (over the c in packaged) is the mouse cursor in wait mode, while GNOME Screenshot is about to take the screenshot. I did not find a report about that in the GNOME Screenshot bugzilla. It is a minor issue and there are several workarounds (1. try one more time, 2. use timer screenshot).

Let’s do some actual testing,

Yep, works as well.

Exploring the LXD snap commands

Let’s type lxd and press Tab.

$ lxd<Tab>
lxd               lxd.check-kernel  lxd.migrate
lxd.benchmark     lxd.lxc

There are two commands left to try out, lxd.check-kernel and lxd.benchmark. The snap package is called lxd, therefore any additional commands are prefixed with lxd.. lxd is the actual LXD server executable. lxd.lxc is the lxc command that we use for all LXD actions. The LXD snap package makes the appropriate symbolic link so that we just need to write lxc instead of lxd.lxc.

Trying out lxd.check-kernel

Let’s run lxd.check-kernel.

$ sudo lxd.check-kernel
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /lib/modules/4.10.0-40-generic/build/.config
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
newuidmap is not installed
newgidmap is not installed
Network namespace: enabled

--- Control groups ---
Cgroups: enabled
Cgroup v1 mount points:
 /sys/fs/cgroup/systemd
 /sys/fs/cgroup/net_cls,net_prio
 /sys/fs/cgroup/freezer
 /sys/fs/cgroup/cpu,cpuacct
 /sys/fs/cgroup/memory
 /sys/fs/cgroup/devices
 /sys/fs/cgroup/perf_event
 /sys/fs/cgroup/cpuset
 /sys/fs/cgroup/hugetlb
 /sys/fs/cgroup/pids
 /sys/fs/cgroup/blkio
Cgroup v2 mount points:
Cgroup v1 clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
Macvlan: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
Vlan: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
Bridges: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
Advanced netfilter: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
CONFIG_NF_NAT_IPV4: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
CONFIG_NF_NAT_IPV6: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
CONFIG_IP_NF_TARGET_MASQUERADE: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
FUSE (for use with lxcfs): enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)

--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /snap/lxd/5182/bin/lxc-checkconfig

This is an important tool if you have issues getting LXD to run. In this example, the Misc section shows some errors about missing parameters. I suppose they are issues with the tool itself, as the appropriate kernel modules are indeed loaded. My installation of the LXD snap works okay.

Trying out lxd.benchmark

Let’s try out the command without parameters.

$ lxd.benchmark
Usage: lxd-benchmark launch [--count=COUNT] [--image=IMAGE] [--privileged=BOOL] [--start=BOOL] [--freeze=BOOL] [--parallel=COUNT]
       lxd-benchmark start [--parallel=COUNT]
       lxd-benchmark stop [--parallel=COUNT]
       lxd-benchmark delete [--parallel=COUNT]

--count (= 100)
    Number of containers to create
--freeze (= false)
    Freeze the container right after start
--image (= "ubuntu:")
    Image to use for the test
--parallel (= -1)
    Number of threads to use
--privileged (= false)
    Use privileged containers
--report-file (= "")
    A CSV file to write test file to. If the file is present, it will be appended to.
--report-label (= "")
    A label for the report entry. By default, the action is used.
--start (= true)
    Start the container after creation

error: A valid action (launch, start, stop, delete) must be passed.
Exit 1

It is a benchmark tool that lets you create many containers, and then use the tool again to remove them. There is an issue with the default number of containers, 100, which is too high: if you run lxd-benchmark launch without specifying a smaller count, you will mess up your LXD installation because you will run out of memory and maybe disk space. The original report got buried in pull request https://github.com/lxc/lxd/pull/3857 and needs to be reopened. Ideally, the default count should be 1, letting the user knowingly select a bigger number. Here is the new pull request: https://github.com/lxc/lxd/pull/4074

Let’s try carefully lxd-benchmark.

$ lxd.benchmark launch --count 3
Test environment:
  Server backend: lxd
  Server version: 2.20
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.10.0-40-generic
  Storage backend: zfs
  Storage version: 0.6.5.9-2
  Container backend: lxc
  Container version: 2.1.1
Test variables:
  Container count: 3
  Container mode: unprivileged
  Startup mode: normal startup
  Image: ubuntu:
  Batches: 0
  Batch size: 4
  Remainder: 3
[Dec 5 13:24:26.044] Found image in local store: 5f364e2e3f460773a79e9bec2edb5e993d236f035f70267923d43ab22ae3bb62
[Dec 5 13:24:26.044] Batch processing start
[Dec 5 13:24:28.817] Batch processing completed in 2.773s

It took just 2.8s to launch them on this computer. lxd-benchmark launched 3 containers, with names benchmark-%d. Obviously, refrain from using the word benchmark as a name for your own containers. Let's see these containers:

$ lxc list --columns ns4
+---------------+---------+----------------------+
|     NAME      |  STATE  |         IPV4         |
+---------------+---------+----------------------+
| benchmark-1   | RUNNING | 10.52.251.121 (eth0) |
+---------------+---------+----------------------+
| benchmark-2   | RUNNING | 10.52.251.20 (eth0)  |
+---------------+---------+----------------------+
| benchmark-3   | RUNNING | 10.52.251.221 (eth0) |
+---------------+---------+----------------------+
...

Let’s stop them, and finally remove them.

$ lxd.benchmark stop
Test environment:
  Server backend: lxd
  Server version: 2.20
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.10.0-40-generic
  Storage backend: zfs
  Storage version: 0.6.5.9-2
  Container backend: lxc
  Container version: 2.1.1
[Dec 5 13:31:16.517] Stopping 3 containers
[Dec 5 13:31:16.517] Batch processing start
[Dec 5 13:31:20.159] Batch processing completed in 3.642s

$ lxd.benchmark delete
Test environment:
  Server backend: lxd
  Server version: 2.20
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.10.0-40-generic
  Storage backend: zfs
  Storage version: 0.6.5.9-2
  Container backend: lxc
  Container version: 2.1.1
[Dec 5 13:31:24.902] Deleting 3 containers
[Dec 5 13:31:24.902] Batch processing start
[Dec 5 13:31:25.007] Batch processing completed in 0.105s

Note that the lxd-benchmark actions follow the naming of the lxc actions (launch, start, stop and delete).

Troubleshooting

Error "Target LXD already has images"

$ sudo lxd.migrate
=> Connecting to source server
=> Connecting to destination server
=> Running sanity checks
error: Target LXD already has images, aborting.
Exit 1

This means that the snap version of LXD has some images and it is not clean. lxd.migrate requires the snap version of LXD to be clean. Solution: remove the LXD snap and install again.

$ snap remove lxd
lxd removed
$ snap install lxd
lxd 2.20 from 'canonical' installed

Which "lxc" command am I running?

This is the lxc command of the DEB/PPA package,

$ which lxc
/usr/bin/lxc

This is the lxc command from the LXD snap package.

$ which lxc
/snap/bin/lxc

If you installed the LXD snap but you do not see the /snap/bin/lxc executable, it could be an artifact of your Unix shell. You may have to close that shell window and open a new one.
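Which of the two executables wins depends on the order of directories in $PATH: the first directory that contains an lxc shadows any later one. A quick way to see this, using a sample PATH for illustration (on a real system, inspect "$PATH" itself):

```shell
# Sample PATH for illustration only; the first matching directory wins.
SAMPLE_PATH="/snap/bin:/usr/local/bin:/usr/bin"
first=$(printf '%s\n' "$SAMPLE_PATH" | tr ':' '\n' | head -n 1)
echo "$first"   # prints /snap/bin, so /snap/bin/lxc would shadow /usr/bin/lxc
```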

Error “bash: /usr/bin/lxc: No such file or directory”

If you get the following,

$ which lxc
/snap/bin/lxc

but the lxc command is not found,

$ lxc
bash: /usr/bin/lxc: No such file or directory
Exit 127

then you must close the terminal window and open a new one.

Note: if you loudly refuse to close the current terminal window, you can just type

$ hash -r

which will refresh the list of executables from the $PATH. Applies to bash, zsh. Use rehash if on *csh.

Simos Xenitellis: https://blog.simos.info/

Sebastian Heinlein: Aptdaemon

Mon, 04/12/2017 - 9:01pm
I am glad to announce aptdaemon: it is a DBus-controlled, PolicyKit-using package management...

Raphaël Hertzog: My Free Software Activities in November 2017

Sun, 03/12/2017 - 6:52pm

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12h but I only spent 10h. During this time, I managed the LTS frontdesk during one week, reviewing new security issues and classifying the associated CVE (16 commits to the security tracker).

I prepared and released DLA-1171-1 on libxml-libxml-perl.

I prepared a new update for simplesamlphp (1.9.2-1+deb7u1) fixing 6 CVEs. I have not released a DLA yet since I have not yet been able to test the updated package. I'm hoping that the current maintainer can do it, since he wanted to work on the update a few months ago.

Distro Tracker

Distro Tracker has seen a high level of activity in the last month. Ville Skyttä continued to contribute a few patches, he helped notably to get rid of the last blocker for a switch to Python 3.

I then worked with DSA to get the production instance (tracker.debian.org) upgraded to stretch with Python 3.5 and Django 1.11. This resulted in a few regressions related to the Python 3 switch (despite the large number of unit tests) that I had to fix.

In parallel, Pierre-Elliott Bécue showed up on the debian-qa mailing list and started to contribute. I have been exchanging with him almost daily on IRC to help him improve his patches. He has been very responsive and I'm looking forward to continuing to cooperate with him. His first patch enabled the use of the "src:" and "bin:" prefixes in the search feature, to specify whether to look up source packages or binary packages.

I did some cleanup/refactoring work after the switch of the codebase to Python 3 only.

Misc Debian work

Sponsorship. I sponsored many new packages: python-envparse 0.2.0-1, python-exotel 0.1.5-1, python-aws-requests-auth 0.4.1-1, pystaticconfiguration 0.10.3-1, python-jira 1.0.10-1, python-twilio 6.8.2-1, python-stomp 4.1.19-1. All those are dependencies for elastalert 0.1.21-1 that I also sponsored.

I sponsored updates for vboot-utils 0~R63-10032.B-2 (new upstream release for openssl 1.1 compat), aircrack-ng 1:1.2-0~rc4-4 (introducing airgraph-ng package) and asciidoc 8.6.10-2 (last upstream release, tool is deprecated).

Debian Installer. I submitted a few patches a while ago to support finding ISO images in LVM logical volumes in the hd-media installation method. Colin Watson reviewed them and made a few suggestions and expressed a few concerns. I improved my patches to take into account his suggestions and I resolved all the problems he pointed out. I then committed everything to the respective git repositories (for details review #868848, #868859, #868900, #868852).

Live Build. I merged 3 patches for live-build (#879169, #881941, #878430).

Misc. I uploaded Django 1.11.7 to stretch-backports. I filed an upstream bug on zim for #881464.

Thanks

See you next month for a new summary of my activities.


Clive Johnston: Bye bye LastPass, hello bitwarden

Sun, 03/12/2017 - 5:42pm

I have been a loyal customer of a password manager called LastPass for a number of years now. It all started when I decided to treat myself to an early Christmas present by purchasing the "Premium" version back in 2013, in order to take advantage of the extra features such as the mobile app.

Now, don’t get me wrong, I do think $12 is very good value for money and I was very happy with LastPass, but I must say this article really, really got my back up. (Apparently I’m an "entitled user".) Not only that, but not one but three of the Google ads on the page were for LastPass (now there’s a spooky coincidence!).

I do agree with a lot of other users that doubling the price for absolutely no benefits is an extremely bitter pill to swallow, especially as there are a number of issues I have been having regarding the security of the mobile app. But anyways, I calmed down and the topic went out of my head until I received an email reminding me that they would automatically charge my credit card with the new $24 price. Then, about a week later, as I watched a YouTube video by TuxDigital, he mentioned another password manager called bitwarden.

So a big thank you to Michael for bringing this to my attention. Not only does it have way more features than LastPass, but it is also open source (code on GitHub), self-hostable, and the "Premium" version is only $10. My issues with the LastPass mobile app are gone in bitwarden, replaced with the option to lock the app with your fingerprint or a PIN code, a nice happy medium compared to having to log out of LastPass and then re-enter your entire master code to regain access!

Also another feature I *beeping* love (excuse my French), is the app and vault allows you to store a “Google Authenticator” key in the vault and then automatically generates a One Time Password (OTP) on the fly and copies it to the device clipboard.  This allows it to be easily copied in when auto-filling the username and password, great for those who use this feature on their blogs.

Simos Xenitellis: How to set the timezone in LXD containers

Sat, 02/12/2017 - 1:06pm

See https://blog.simos.info/trying-out-lxd-containers-on-our-ubuntu/ on how to set up and test LXD on Ubuntu (or another Linux distribution).

In this post we see how to set up the timezone in a newly created container.

The problem

The default timezone for a newly created container is Etc/UTC, which is what we used to call Greenwich Mean Time.

Let’s observe.

$ lxc launch ubuntu:16.04 mycontainer
Creating mycontainer
Starting mycontainer
$ lxc exec mycontainer -- date
Sat Dec 2 11:40:57 UTC 2017
$ lxc exec mycontainer -- cat /etc/timezone
Etc/UTC

That is, the observed time in a container follows a timezone that is different from the settings on the vast majority of our computers. When we connect with a shell inside the container, the time and date are not the same as those of our computer.

The time is recorded correctly inside the container; it is just the way it is presented that is off by a few hours.

Depending on our use of the container, this might or might not be an issue to pursue.

The workaround

We can set the environment variable TZ (for timezone) of each container to our preferred timezone setting.

$ lxc exec mycontainer -- date
Sat Dec 2 11:50:37 UTC 2017
$ lxc config set mycontainer environment.TZ Europe/London
$ lxc exec mycontainer -- date
Sat Dec 2 11:50:50 GMT 2017

That is, we use the lxc config set action to set, for mycontainer, the environment variable TZ to the proper timezone (here, Europe/London). UTC time and Europe/London time happen to be the same during the winter.
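You can observe the same behaviour on the host, since TZ is an ordinary environment variable that date (and most programs) respect; only the presentation changes, not the instant:

```shell
# Same instant, two presentations; TZ only affects how the time is formatted.
TZ=Etc/UTC date
TZ=Europe/London date
# In winter both lines show the same wall-clock time, since GMT equals UTC then.
```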

How do we unset the container timezone and return back to Etc/UTC?

$ lxc config unset mycontainer environment.TZ

Here we used the lxc config unset action to unset the environment variable TZ.

The solution

LXD supports profiles and you can edit the default profile in order to get the timezone setting automatically applied to any containers that follow this profile. Let’s get a list of the profiles.

$ lxc profile list
+---------+---------+
| NAME    | USED BY |
+---------+---------+
| default | 7       |
+---------+---------+

Only one profile, called default. It is used by 7 containers already on this LXD installation.

We set the environment variable TZ in the profile with the following,

$ lxc exec mycontainer -- date
Sat Dec 2 12:02:37 UTC 2017
$ lxc profile set default environment.TZ Europe/London
$ lxc exec mycontainer -- date
Sat Dec 2 12:02:43 GMT 2017

How do we unset the profile timezone and get back to Etc/UTC?

$ lxc profile unset default environment.TZ

Here we used the lxc profile unset action to unset the environment variable TZ.

Simos Xenitellis: https://blog.simos.info/

Daniel Pocock: Hacking with posters and stickers

Fri, 01/12/2017 - 9:27pm

The FIXME.ch hackerspace in Lausanne, Switzerland has started this weekend's VR Hackathon with a somewhat low-tech 2D hack: using the FSFE's Public Money Public Code stickers in lieu of sticky tape to place the NO CLOUD poster behind the bar.

Get your free stickers and posters

FSFE can send you these posters and stickers too.