Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/

Simos Xenitellis: I am running Steam/Wine on Ubuntu 19.10 (no 32-bit on the host)

Mon, 24/06/2019 - 12:24 AM

I like to take care of my desktop Linux, and I do so by not installing 32-bit libraries. If there are any old 32-bit applications, I prefer to install them in a LXD container, because in a LXD container you can install anything, and once you are done with it, you delete it and poof, it is gone forever!

In the following I will show the actual commands to set up a LXD container for a system with an NVidia GPU so that we can run graphical programs. Someone can take these and make some sort of easy-to-use GUI utility. Note that you can write a GUI utility that uses the LXD API to interface with the system container.
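
As a hedged illustration of that last point, the lxc client can itself talk to the LXD REST API that such a GUI utility would use; for example (a minimal sketch, assuming your version of LXD ships the lxc query sub-command):

# List containers through the LXD REST API (the same API a GUI utility would use)
lxc query /1.0/containers
# Inspect a specific container (replace steam with your container name)
lxc query /1.0/containers/steam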

Prerequisites

You are running Ubuntu 19.10.

You are using the snap package of LXD.

You have an NVidia GPU.

Setting up LXD (performed once)

Install LXD.

sudo snap install lxd

Set up LXD. Accept all defaults. Add your non-root account to the lxd group. Replace myusername with your own username.

sudo lxd init
sudo usermod -G lxd -a myusername
newgrp lxd

You have set up LXD. Now you can create containers.

Creating the system container

Launch a system container. You can create as many as you wish. We will call this one steam and put Steam in it.

lxc launch ubuntu:18.04 steam

Create a GPU passthrough device for your GPU.

lxc config device add steam gt2060 gpu

Create a proxy device for the X11 Unix socket of the host to this container. The proxy device is called X0. The abstract Unix socket @/tmp/.X11-unix/X0 of the host is proxied into the container. The 1000/1000 is the UID and GID of your desktop user on the host.

lxc config device add steam X0 proxy listen=unix:@/tmp/.X11-unix/X0 connect=unix:@/tmp/.X11-unix/X0 bind=container security.uid=1000 security.gid=1000
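
If you are not sure that 1000/1000 matches your desktop user, check the values on the host before adding the proxy device (a quick sanity check, nothing LXD-specific):

id -u    # prints your UID, e.g. 1000
id -g    # prints your primary GID, e.g. 1000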

Get a shell into the system container.

lxc exec steam -- sudo --user ubuntu --login

Add the NVidia 430 driver to this Ubuntu 18.04 LTS container, using the PPA. The driver in the container has to match the driver on the host. This is an NVidia requirement.

sudo add-apt-repository ppa:graphics-drivers/ppa
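
Before picking the driver package in the container, you can confirm which driver version the host is actually running (a hedged check, assuming the NVidia driver and tools are already installed on the host):

# On the host: the driver version (e.g. 430.xx) is shown in the banner of nvidia-smi
nvidia-smi
# Later, inside the container, verify that the installed library matches
dpkg -l | grep libnvidia-gl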

Install the NVidia library, both 32-bit and 64-bit. Also install utilities to test X11, OpenGL and Vulkan. (If apt cannot find the :i386 package, you may first need to enable the architecture with sudo dpkg --add-architecture i386 followed by sudo apt update.)

sudo apt install -y libnvidia-gl-430
sudo apt install -y libnvidia-gl-430:i386
sudo apt install -y x11-apps mesa-utils vulkan-utils

Set the $DISPLAY. You can add this into ~/.profile as well.

export DISPLAY=:0
echo export DISPLAY=:0 >> ~/.profile

Enjoy by testing X11, OpenGL and Vulkan.

xclock
glxinfo
vulkaninfo

[Screenshot: the xclock X11 application running in a LXD container]

ubuntu@steam:~$ glxinfo
name of display: :0
display: :0  screen: 0
direct rendering: Yes
server glx vendor string: NVIDIA Corporation
server glx version string: 1.4
server glx extensions:
    GLX_ARB_context_flush_control, GLX_ARB_create_context, ...

ubuntu@steam:~$ vulkaninfo
===========
VULKANINFO
===========

Vulkan Instance Version: 1.1.101

Instance Extensions:
====================
Instance Extensions count = 16
VK_EXT_acquire_xlib_display         : extension revision  1
...

The system is now ready to install Steam, and also Wine!

Installing Steam

We grab the deb package of Steam and install it.

wget https://steamcdn-a.akamaihd.net/client/installer/steam.deb
sudo dpkg -i steam.deb
sudo apt install -f

Then, we run it.

steam

Here is some sample output.

ubuntu@steam:~$ steam
Running Steam on ubuntu 18.04 64-bit
STEAM_RUNTIME is enabled automatically
Pins up-to-date!
Installing breakpad exception handler for appid(steam)/version(0)
Installing breakpad exception handler for appid(steam)/version(1.0)
Installing breakpad exception handler for appid(steam)/version(1.0)
...

Installing Wine

Here is how you install Wine in the container.

sudo dpkg --add-architecture i386
wget -nc https://dl.winehq.org/wine-builds/winehq.key
sudo apt-key add winehq.key
sudo apt update
sudo apt install --install-recommends winehq-stable

Conclusion

There are options to run legacy 32-bit software, and here we show how to do that using LXD containers. We pick NVidia (closed-source drivers) which entails a bit of extra difficulty. You can create many system containers and put in them all sorts of legacy software. Your desktop (host) remains clean and when you are done with a legacy app, you can easily remove the container and it is gone!

https://blog.simos.info/

Costales: Podcast Ubuntu y otras hierbas S03E06: Huawei and Android; IoT, more intrusion into our homes?

Sat, 22/06/2019 - 3:58 PM
Paco Molinero, Fernando Lanero and Marcos Costales debate the controversy between Huawei and the United States Government. We also talk about the privacy and security problems of devices connected to the Internet of Things.
Listen to Ubuntu y otras hierbas on:

Canonical Design Team: ROS 2 Command Line Interface

Fri, 21/06/2019 - 7:23 PM

Disclosure: read the post until the end, a surprise awaits you!

Moving from ROS 1 to ROS 2 can be a little overwhelming.
There are a lot of (new) concepts and tools, and a large codebase to get familiar with. And just like many of you, I am getting started with ROS 2.

One of the central pieces of the ROS ecosystem is its Command Line Interface (CLI). It allows for performing all kinds of actions, from retrieving information about the codebase and/or the runtime system, to executing code and of course helping with debugging in general. It’s a very valuable set of tools that ROS developers use on a daily basis. Fortunately, pretty much all of those tools were ported from ROS 1 to ROS 2.

To those already familiar with ROS, the ROS 2 CLI wording will sound very familiar. Commands such as roslaunch are ported to ros2 launch, rostopic becomes ros2 topic, while rosparam is now ros2 param.
Noticed the pattern already? Yes, that’s right! The keyword ‘ros2‘ has become the unique entry point for the CLI.

So what? ROS CLI keywords were broken in two and that’s it?


Well, yes pretty much.

Every command starts with the ros2 keyword, followed by a verb, a sub-verb and possibly positional/optional arguments. The pattern is then,

$ ros2 verb sub-verb <positional-argument> <optional-arguments>

Notice that throughout the CLI, the auto-completion (the infamous [tab][tab]) is readily available for verbs, sub-verbs and most positional arguments. Similarly, helpers are available at each stage,

$ ros2 verb --help
$ ros2 verb sub-verb -h

Let us see a few examples,

$ ros2 run demo_nodes_cpp talker
starts the talker cpp node from the demo_nodes_cpp package.

$ ros2 run demo_nodes_py listener
starts the listener python node from the demo_nodes_py package.

$ ros2 topic echo /chatter
outputs the messages sent from the talker node.

$ ros2 node info /listener
outputs information about the listener node.

$ ros2 param list
lists all parameters of every node.

Fairly similar to ROS 1, right?

Missing CLI tools

We mentioned earlier that most of the CLI tools were ported to ROS 2, but not all. We believe such missing tools are one of the barriers to greater adoption of ROS 2, so we’ve started adding some that we noticed were missing. Over the past week we contributed 5 sub-verbs, including one that is exclusive to ROS 2. Let us briefly review them,

$ ros2 topic find <message-type>
outputs a list of all topics publishing messages of a given type (#271).

$ ros2 topic type <topic-name>
outputs the message type of a given topic (#272).

$ ros2 service find <service-type>
outputs a list of all services of a given type (#273).

$ ros2 service type <service-name>
outputs the service type of a given service (#274).

These tools are pretty handy by themselves, especially to debug and get an overview of a running system. And they become even more interesting when combined, say, in handy little scripts,

$ ros2 topic pub /chatter $(ros2 topic type /chatter) "data: Hello ROS 2 Developers"
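
Another small combination along the same lines (a hedged sketch, assuming the demo talker from earlier is still running): list every topic that shares the message type of /chatter by chaining the new sub-verbs together,

$ ros2 topic find $(ros2 topic type /chatter)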

Advertisement:
Have you ever looked for the version of a package you are using?
Ever wondered who the package author is?
Or which other packages it depends upon?
All of this information, locked in the package’s xml manifest, is now easily available at your fingertips!

The new sub-verb we introduced allows one to retrieve any information contained in a package xml manifest (#280). The command,

$ ros2 pkg xml <package-name>
outputs the entirety of the xml manifest of a given package.
To retrieve solely a piece of it, or a tag in xml wording, use the --tag option,

$ ros2 pkg xml <package-name> --tag <tag-name>

A few examples are (at the time of writing),

$ ros2 pkg xml demo_nodes_cpp --tag version
0.7.6

$ ros2 pkg xml demo_nodes_py -t author
Mikael Arguedas
Esteve Fernandez

$ ros2 pkg xml intra_process_demo -t build_depend
libopencv-dev
rclcpp
sensor_msgs
std_msgs
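
And because these are plain shell commands, they compose nicely; here is a minimal sketch (package names taken from the examples above) that prints the version of several packages in one go,

$ for pkg in demo_nodes_cpp demo_nodes_py intra_process_demo; do
    echo "$pkg: $(ros2 pkg xml $pkg --tag version)"
  done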

This concludes our brief review of the changes that ROS 2 introduced to the CLI tools.

Before leaving, let me offer you a treat.

— A ROS 2 CLI Cheat Sheet that we put together —

Feel free to share it, print it and pin it above your screen, but also contribute to it, as it is hosted on GitHub!

Cheers.

The post ROS 2 Command Line Interface appeared first on Ubuntu Blog.

Jonathan Riddell: Plasma Vision

Fri, 21/06/2019 - 4:19 PM

The Plasma Vision was written a couple of years ago: a short text saying what Plasma is and hopes to create, and defining our approach to making a useful and productive work environment for your computer. Because of creative differences it was never promoted or used properly, but in my quest to make KDE look as up to date in its presence on the web as it does on the desktop, I’ve got the Plasma sprinters who are meeting in Valencia this week to agree to adding it to the KDE Plasma webpage.

 

Canonical Design Team: Kubernetes on Mac: how to set up

Thu, 20/06/2019 - 8:28 PM

MicroK8s can be used to run Kubernetes on Mac for testing and developing apps on macOS.

MicroK8s is the local distribution of Kubernetes developed by Ubuntu. It’s a compact Linux snap that installs a single node cluster on a local PC. Although MicroK8s is only built for Linux, Kubernetes on Mac works by setting up a cluster in an Ubuntu VM.

It runs all Kubernetes services natively on Ubuntu and any operating system (OS) which supports snaps. This is beneficial for testing and building apps, creating simple Kubernetes clusters and developing microservices locally –  essentially all dev work that needs deployment.

MicroK8s also provides another level of reliability because it ships the most current version of Kubernetes for development. The latest upstream version of Kubernetes is always available on Ubuntu within one week of official release.

Kubernetes and MicroK8s both need a Linux kernel to work and require an Ubuntu VM as mentioned above. Mac users also need Multipass, the tool for launching Ubuntu VMs on Mac, Windows and Linux.

Here are instructions to set up Multipass and to run Kubernetes on Mac.

Install a VM for Mac using Multipass

The latest Multipass package is available on GitHub. Double click the .pkg file to install it.

To start a VM with MicroK8s run:

multipass launch --name microk8s-vm --mem 4G --disk 40G
multipass exec microk8s-vm -- sudo snap install microk8s --classic
multipass exec microk8s-vm -- sudo iptables -P FORWARD ACCEPT

Make enough resources available for hosting. Above we’ve created a VM named microk8s-vm and given it 4GB of RAM and 40GB of disk.

The VM has an IP address that can be checked with the following command (take note of this IP, since our services will become available at it):

multipass list
Name          State    IPv4           Release
microk8s-vm   RUNNING  192.168.64.1   Ubuntu 18.04 LTS
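
If you prefer to look at just this one instance, multipass info shows the same details for a single VM (a quick optional check):

multipass info microk8s-vm
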
Interact with MicroK8s on the VM

This can be done in three ways:

  • Using a Multipass shell prompt (command line) by running:
multipass shell microk8s-vm
  • Using multipass exec to execute a command without a shell prompt by inputting:
multipass exec microk8s-vm -- /snap/bin/microk8s.status
  • Using the Kubernetes API server running in the VM. Here one would use the MicroK8s kubeconfig file with a local installation of kubectl to access the in-VM Kubernetes. Do this by running:
multipass exec microk8s-vm -- /snap/bin/microk8s.config > kubeconfig

Next, install kubectl on the host machine and then use the kubeconfig:

kubectl --kubeconfig=kubeconfig get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.152.183.1   <none>        443/TCP   3m12s
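
To avoid passing --kubeconfig on every invocation, kubectl can also read the file from the KUBECONFIG environment variable (a small convenience, assuming the kubeconfig file sits in the current directory):

export KUBECONFIG=$PWD/kubeconfig
kubectl get all --all-namespaces
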
Accessing in-VM Multipass services – enabling MicroK8s add-ons

A basic MicroK8s add-on to set up is the Grafana dashboard. Below we show one way of accessing Grafana to monitor and analyse a MicroK8s instance. To do this execute:

multipass exec microk8s-vm -- /snap/bin/microk8s.enable dns dashboard
Enabling DNS
Applying manifest
service/kube-dns created
serviceaccount/kube-dns created
configmap/kube-dns created
deployment.extensions/kube-dns created
Restarting kubelet
DNS is enabled
Enabling dashboard
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
service/monitoring-grafana created
service/monitoring-influxdb created
service/heapster created
deployment.extensions/monitoring-influxdb-grafana-v4 created
serviceaccount/heapster created
configmap/heapster-config created
configmap/eventer-config created
deployment.extensions/heapster-v1.5.2 created
dashboard enabled

Next, check the deployment progress by running:

multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl get all --all-namespaces

This should return output listing all the pods, services and deployments across every namespace.

Once all the necessary services are running, the next step is to access the dashboard, for which we need a URL to visit. To do this, run:

multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:16443
Heapster is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Grafana is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
InfluxDB is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/monitoring-influxdb:http/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

If we were inside the VM, we could access the Grafana dashboard by visiting the Grafana URL shown above. But we want to access the dashboard from the host (i.e. outside the VM). We can use a proxy to do this:

multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl proxy --address='0.0.0.0' --accept-hosts='.*'
Starting to serve on [::]:8001

Leave the Terminal open with this command running and take note of the port (8001). We will need this next.

To visit the Grafana dashboard from the host, take the in-VM Grafana URL and replace 127.0.0.1:16443 with the VM's IP address (from multipass list) and the proxy port 8001, for example: http://192.168.64.1:8001/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
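
Before opening the browser, here is a quick hedged check from the host that the proxy is reachable (192.168.64.1 is the VM IP from the earlier multipass list output; substitute your own):

curl http://192.168.64.1:8001/version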

Kubernetes on Mac in summary

Building apps that are easy to scale and distribute has taken pride of place for developers and DevOps teams. Developing and testing apps locally using MicroK8s should help teams to deploy their builds faster.

Useful reading

The post Kubernetes on Mac: how to set up appeared first on Ubuntu Blog.

Ubuntu Podcast from the UK LoCo: S12E11 – 1942

Thu, 20/06/2019 - 5:00 PM

This week we’ve been to FOSS Talk Live and created games in Bash. We have a little LXD love in and discuss 32-bit Intel being dropped from Ubuntu 19.10. OggCamp tickets are on sale and we round up some tech news.

It’s Season 12 Episode 11 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

Elizabeth K. Joseph: Building a PPA for s390x

Tue, 18/06/2019 - 4:59 PM

About 20 years ago a few clever, nerdy folks got together and ported Linux to the mainframe (s390x architecture). Reasons included "because it's there" and others you'd expect from technology enthusiasts, but if you read far enough, you'll learn that they also saw a business case, which has been realized today. You can read more about that history over on Linas Vepstas' Linux on the IBM ESA/390 Mainframe Architecture.

Today the s390x architecture not only officially supports Ubuntu, Red Hat Enterprise Linux (RHEL), and SUSE Linux Enterprise Server (SLES), but there’s an entire series of IBM Z mainframes available that are devoted to only running Linux, that’s LinuxONE. At the end of April I joined IBM to lend my Linux expertise to working on these machines and spreading the word about them to my fellow infrastructure architects and developers.

As its own architecture (not the x86 that we’re accustomed to), compiled code needs to be re-compiled in order to run on the s390x platform. In the case of Ubuntu, the work has already been done to get a large chunk of the Ubuntu repository ported, so you can now run thousands of Linux applications on a LinuxONE machine. In order to effectively do this, there’s a team at Canonical responsible for this port and they have access to an IBM Z server to do the compiling.

But the most interesting thing to you and me? They also lend the power of this machine to support community members, by allowing them to build PPAs as well!

By default, Launchpad builds PPAs for i386 and amd64, but if you select “Change details” of your PPA, you’re presented with a list of other architectures you can target.

Last week I decided to give this a spin with a super simple package: A “Hello World” program written in Go. To be honest, the hardest part of this whole process is creating the Debian package, but you have to do that regardless of what kind of PPA you’re creating and there’s copious amounts of documentation on how to do that. Thankfully there’s dh-make-golang to help the process along for Go packages, and within no time I had a source package to upload to Launchpad.
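
For anyone who has not built a PPA before, the upload side of that process looked roughly like the following (a hedged sketch with illustrative package and PPA names; yours will differ):

# After dh-make-golang has generated the debian/ packaging,
# build a source-only package and upload it to your PPA:
debuild -S -sa
dput ppa:myusername/hello-world ../hello-world_0.1-1_source.changes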

From there it was as easy as clicking the “IBM System z (s390x)” box under “Change details” and the builds were underway, along with build logs. Within a few minutes all three packages were built for my PPA!

Now, mine was the most simple Go application possible, so when coupled with the build success, I was pretty confident that it would work. Still, I hopped on my s390x Ubuntu VM and tested it.

It worked! But aren’t I lucky, as an IBM employee I have access to s390x Linux VMs.

I’ll let you in on a little secret: IBM has a series of mainframe-driven security products in the cloud: IBM Cloud Hyper Protect Services. One of these services is Hyper Protect Virtual Servers, which is currently Experimental and you can apply for access. Once granted access, you can launch an Ubuntu 18.04 VM for free to test your application, or do whatever other development or isolation testing you’d like on a VM for a limited time.

If this isn’t available to you, there’s also the LinuxONE Community Cloud. It’s also a free VM that can be used for development, but as of today the only distributions you can automatically provision are RHEL or SLES. You won’t be able to test your deb package on these, but you can test your application directly on one of these platforms to be sure the code itself works on Linux on s390x before creating the PPA.

And if you’re involved with an open source project that’s more serious about a long-term, Ubuntu-based development platform on s390x, drop me an email at lyz@ibm.com so we can have a chat!

Santiago Zarate: Permission denied for hugepages in QEMU without libvirt

Tue, 18/06/2019 - 2:00 AM

So, say you’re running qemu and decided to use hugepages. Nice, isn’t it? It helps with performance and stuff. However, a wild wall appears!

QEMU: qemu-system-aarch64: can't open backing store /dev/hugepages/ for guest RAM: Permission denied

This basically means that you’re using the amazing -mem-path /dev/hugepages, and that QEMU running as an unprivileged user can’t write there… This is how it looked for me:

sudo -u _openqa-worker qemu-system-aarch64 -device virtio-gpu-pci -m 4094 -machine virt,gic-version=host -cpu host \
  -mem-prealloc -mem-path /dev/hugepages -serial mon:stdio -enable-kvm -no-shutdown -vnc :102,share=force-shared \
  -cdrom openSUSE-Tumbleweed-DVD-aarch64-Snapshot20190607-Media.iso \
  -pflash flash0.img -pflash flash1.img -drive if=none,file=opensuse-Tumbleweed-aarch64-20190607-gnome-x11@aarch64.qcow2,id=hd0 \
  -device virtio-blk-device,drive=hd0

The machine tries to start, but ultimately I get that dreadful message. You can simply chmod the directory or use a udev rule and get away with it; it's quick and does the job. There are also a few options to solve this using libvirt. However, if you're not using hugeadm to manage those pools and just let the operating system take care of it, you can look at /usr/lib/systemd/system/dev-hugepages.mount. Since trying to add a udev rule failed for a colleague of mine, I decided to use the systemd approach, ending up with the following:

[Unit]
Description=Systemd service to fix hugepages + qemu ram problems.
After=dev-hugepages.mount

[Service]
Type=simple
ExecStart=/usr/bin/chmod o+w /dev/hugepages/

[Install]
WantedBy=multi-user.target
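
To put the unit in place, save it and enable it with systemd; a hedged sketch, assuming it is saved as /etc/systemd/system/hugepages-qemu.service (the file name is my own choice):

sudo systemctl daemon-reload
sudo systemctl enable --now hugepages-qemu.service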

The Fridge: Ubuntu Weekly Newsletter Issue 583

Tue, 18/06/2019 - 12:21 AM

Welcome to the Ubuntu Weekly Newsletter, Issue 583 for the week of June 9 – 15, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

Full Circle Magazine: Full Circle Weekly News #135

Mon, 17/06/2019 - 5:46 PM
Linux Command Line Editors Vulnerable to High Severity Bug
https://threatpost.com/linux-command-line-editors-high-severity-bug/145569/

KDE 5.16 Is Now Available for Kubuntu
https://news.softpedia.com/news/kde-plasma-5-16-desktop-is-now-available-for-kubuntu-and-ubuntu-19-04-users-526369.shtml

Debian 10 Buster-based Endless OS 3.6.0 Linux Distribution Now Available
https://betanews.com/2019/06/12/debian-10-buster-endless-os-linux/

Introducing Matrix 1.0 and the Matrix.org Foundation
https://www.pro-linux.de/news/1/27145/matrix-10-und-die-matrixorg-foundation-vorgestellt.html

System 76’s Supercharged Gazelle Laptop is Finally Available
https://betanews.com/2019/06/13/system76-linux-gazelle-laptop/

Lenovo Thinkpad P Laptops Are Available with Ubuntu
https://www.omgubuntu.co.uk/2019/06/lenovo-thinkpad-p-series-ubuntu-preinstalled

Atari VCS Linux-powered Gaming Console Is Now Available for Pre-order
https://news.softpedia.com/news/atari-vcs-linux-powered-gaming-console-is-now-available-for-pre-order-for-249-526387.shtml

Credits:
Ubuntu “Complete” sound: Canonical
  Theme Music: From The Dust – Stardust

https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

Simos Xenitellis: How to run LXD containers in WSL2

Mon, 17/06/2019 - 4:22 PM

Microsoft announced in May that the new version of the Windows Subsystem for Linux 2 (WSL 2) will be running on a real Linux kernel, itself running alongside the Windows kernel in Windows.

In June, the first version of WSL2 was made available, as long as you update your Windows 10 installation to the Windows Insider program and select to receive the bleeding-edge updates (fast ring).

In this post we are going to see how to get LXD running in WSL2. In a nutshell, LXD does not work out of the box yet, but LXD is versatile enough to actually make it work even when the default Linux kernel in Windows is not fully suitable yet.

Prerequisites

You need to have Windows 10, then join the Windows Insider program (Fast ring).

Then, follow the instructions on installing the components for WSL2 and switching your containers to WSL2 (if you have been using WSL1 already).

Install the Ubuntu container image from the Windows Store.

At the end, when you run wsl in CMD.exe or in Powershell, you should get a Bash prompt.

The problems

We are listing here the issues that do not let LXD run out of the box. Skip to the next section to get LXD going.

In WSL2, there is a modified Linux 4.19 kernel running in Windows, inside Hyper-V. It looks like this is a cut-down/optimized version of Hyper-V that is good enough for the needs of Linux.

The Linux kernel in WSL2 has a specific configuration, and some of the things that LXD needs, are missing. Specifically, here is the output of lxc-checkconfig.

ubuntu@DESKTOP-WSL2:~$ lxc-checkconfig
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled

--- Control groups ---
Cgroups: enabled
Cgroup v1 mount points:
 /sys/fs/cgroup/cpuset
 /sys/fs/cgroup/cpu
 /sys/fs/cgroup/cpuacct
 /sys/fs/cgroup/blkio
 /sys/fs/cgroup/memory
 /sys/fs/cgroup/devices
 /sys/fs/cgroup/freezer
 /sys/fs/cgroup/net_cls
 /sys/fs/cgroup/perf_event
 /sys/fs/cgroup/hugetlb
 /sys/fs/cgroup/pids
 /sys/fs/cgroup/rdma
Cgroup v2 mount points:
Cgroup v1 systemd controller: missing
Cgroup v1 clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled, not loaded
Macvlan: enabled, not loaded
Vlan: missing
Bridges: enabled, not loaded
Advanced netfilter: enabled, not loaded
CONFIG_NF_NAT_IPV4: enabled, not loaded
CONFIG_NF_NAT_IPV6: enabled, not loaded
CONFIG_IP_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_IP6_NF_TARGET_MASQUERADE: missing
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: missing
CONFIG_NETFILTER_XT_MATCH_COMMENT: missing
FUSE (for use with lxcfs): enabled, not loaded

--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities:

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

ubuntu@DESKTOP-WSL2:~$

The missing systemd-related mount point is OK in the sense that currently systemd does not work anyway in WSL (either WSL1 or WSL2). At some point it will get fixed in WSL2, and there are pending issues on this at GitHub. Talking about systemd, we cannot yet use the snap package of LXD because snapd depends on systemd. And no snapd means no snap package of LXD.

The missing netfilter kernel modules mean that we cannot use the managed LXD network interfaces (the one with default name lxdbr0). If you try to create a managed network interface, you will get the following error.

Error: Failed to create network 'lxdbr0': Failed to run: iptables -w -t filter -I INPUT -i lxdbr0 -p udp --dport 67 -j ACCEPT -m comment --comment generated for LXD network lxdbr0: iptables: No chain/target/match by that name.

For completeness, here is the LXD log. Notably, AppArmor is missing from the Linux kernel and there was no CGroup network class controller.

ubuntu@DESKTOP-WSL2:~$ cat /var/log/lxd/lxd.log
t=2019-06-17T10:17:10+0100 lvl=info msg="LXD 3.0.3 is starting in normal mode" path=/var/lib/lxd
t=2019-06-17T10:17:10+0100 lvl=info msg="Kernel uid/gid map:"
t=2019-06-17T10:17:10+0100 lvl=info msg=" - u 0 0 4294967295"
t=2019-06-17T10:17:10+0100 lvl=info msg=" - g 0 0 4294967295"
t=2019-06-17T10:17:10+0100 lvl=info msg="Configured LXD uid/gid map:"
t=2019-06-17T10:17:10+0100 lvl=info msg=" - u 0 100000 65536"
t=2019-06-17T10:17:10+0100 lvl=info msg=" - g 0 100000 65536"
t=2019-06-17T10:17:10+0100 lvl=warn msg="AppArmor support has been disabled because of lack of kernel support"
t=2019-06-17T10:17:10+0100 lvl=warn msg="Couldn't find the CGroup network class controller, network limits will be ignored."
t=2019-06-17T10:17:10+0100 lvl=info msg="Kernel features:"
t=2019-06-17T10:17:10+0100 lvl=info msg=" - netnsid-based network retrieval: no"
t=2019-06-17T10:17:10+0100 lvl=info msg=" - unprivileged file capabilities: yes"
t=2019-06-17T10:17:10+0100 lvl=info msg="Initializing local database"
t=2019-06-17T10:17:14+0100 lvl=info msg="Starting /dev/lxd handler:"
t=2019-06-17T10:17:14+0100 lvl=info msg=" - binding devlxd socket" socket=/var/lib/lxd/devlxd/sock
t=2019-06-17T10:17:14+0100 lvl=info msg="REST API daemon:"
t=2019-06-17T10:17:14+0100 lvl=info msg=" - binding Unix socket" socket=/var/lib/lxd/unix.socket
t=2019-06-17T10:17:14+0100 lvl=info msg="Initializing global database"
t=2019-06-17T10:17:14+0100 lvl=info msg="Initializing storage pools"
t=2019-06-17T10:17:14+0100 lvl=info msg="Initializing networks"
t=2019-06-17T10:17:14+0100 lvl=info msg="Pruning leftover image files"
t=2019-06-17T10:17:14+0100 lvl=info msg="Done pruning leftover image files"
t=2019-06-17T10:17:14+0100 lvl=info msg="Loading daemon configuration"
t=2019-06-17T10:17:14+0100 lvl=info msg="Pruning expired images"
t=2019-06-17T10:17:14+0100 lvl=info msg="Done pruning expired images"
t=2019-06-17T10:17:14+0100 lvl=info msg="Expiring log files"
t=2019-06-17T10:17:14+0100 lvl=info msg="Done expiring log files"
t=2019-06-17T10:17:14+0100 lvl=info msg="Updating images"
t=2019-06-17T10:17:14+0100 lvl=info msg="Done updating images"
t=2019-06-17T10:17:14+0100 lvl=info msg="Updating instance types"
t=2019-06-17T10:17:14+0100 lvl=info msg="Done updating instance types"
ubuntu@DESKTOP-WSL2:~$

Having said all that, let’s get LXD working.

Configuring LXD on WSL2

Let’s get a shell into WSL2.

C:\> wsl
ubuntu@DESKTOP-WSL2:~$

The apt package of LXD is already available in the Ubuntu 18.04.2 image, found in the Windows Store. However, the LXD service is not running by default and we will need to start it.

ubuntu@DESKTOP-WSL2:~$ sudo service lxd start

Now we can run sudo lxd init to configure LXD. We accept the defaults (btrfs storage driver, 50GB default storage). But for networking, we avoid creating the local network bridge, and instead we configure LXD to use an existing bridge. The existing-bridge configuration uses macvlan, which avoids the error above, but macvlan does not work yet anyway in WSL2.

ubuntu@DESKTOP-WSL2:~$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=50GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: eth0
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks: []
storage_pools:
- config:
    size: 50GB
  description: ""
  name: default
  driver: btrfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      nictype: macvlan
      parent: eth0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
cluster: null
ubuntu@DESKTOP-WSL2:~$

For some reason, LXD does not manage to mount sys for the containers, therefore we need to perform this ourselves.

ubuntu@DESKTOP-WSL2:~$ sudo mkdir /usr/lib/x86_64-linux-gnu/lxc/sys
ubuntu@DESKTOP-WSL2:~$ sudo mount sysfs -t sysfs /usr/lib/x86_64-linux-gnu/lxc/sys

The containers will not have direct Internet connectivity, therefore we need to use a Web proxy. In our case, it suffices to use privoxy, so let's install it. privoxy listens by default on port 8118, which means that if the containers can somehow get access to port 8118 on the host, they get access to the Internet!

ubuntu@DESKTOP-WSL2:~$ sudo apt update
...
ubuntu@DESKTOP-WSL2:~$ sudo apt install -y privoxy
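
A quick hedged check that privoxy is indeed listening on the host (by default it binds to 127.0.0.1:8118):

ubuntu@DESKTOP-WSL2:~$ ss -ltn | grep 8118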

Now, we are good to go! In the following we create a container with a Web server, and view it using Internet Explorer. Yes, IE has two uses, 1. to download Firefox, and 2. to view the Web server in the LXD container as evidence that all these are real.

Setting up a Web server in a LXD container in WSL2

Let’s create our first container, running Ubuntu 18.04.2. It does not get an IP address from the network because macvlan is not working. The container has no Internet connectivity!

ubuntu@DESKTOP-WSL2:~$ lxc launch ubuntu:18.04 mycontainer
Creating mycontainer
Starting mycontainer
ubuntu@DESKTOP-WSL2:~$ lxc list
+-------------+---------+------+------+------------+-----------+
|    NAME     |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+-------------+---------+------+------+------------+-----------+
| mycontainer | RUNNING |      |      | PERSISTENT | 0         |
+-------------+---------+------+------+------------+-----------+
ubuntu@DESKTOP-WSL2:~$

The container has no Internet connectivity, so we need to give it access to port 8118 on the host. But how can we do that, if the container does not even have network connectivity with the host? We can do this using a LXD proxy device. Run the following on the host. The command creates a proxy device called myproxy8118 that proxies the TCP port 8118 between the host and the container (the binding happens in the container because the port already exists on the host).

ubuntu@DESKTOP-WSL2:~$ lxc config device add mycontainer myproxy8118 proxy listen=tcp:127.0.0.1:8118 connect=tcp:127.0.0.1:8118 bind=container
Device myproxy8118 added to mycontainer
ubuntu@DESKTOP-WSL2:~$

Now, get a shell in the container and configure the proxy!

ubuntu@DESKTOP-WSL2:~$ lxc exec mycontainer bash
root@mycontainer:~# export http_proxy=http://localhost:8118/
root@mycontainer:~# export https_proxy=http://localhost:8118/
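
These exports only affect the current shell. To make apt in the container use the proxy regardless of the environment, you can also record it in an apt configuration file (a hedged sketch; the file name 95proxy is my own choice):

root@mycontainer:~# echo 'Acquire::http::Proxy "http://localhost:8118/";' > /etc/apt/apt.conf.d/95proxy
root@mycontainer:~# echo 'Acquire::https::Proxy "http://localhost:8118/";' >> /etc/apt/apt.conf.d/95proxy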

It’s time to install and start nginx!

root@mycontainer:~# apt update
...
root@mycontainer:~# apt install -y nginx
...
root@mycontainer:~# service nginx start

nginx is installed. For a finer touch, let's edit the default HTML file of the Web server a bit, so that it is evident that the Web server runs in the container. Add some text you think is suitable, using the command

root@mycontainer:~# nano /var/www/html/index.nginx-debian.html

Up to now, there is a Web server running in the container. This container is not accessible by the host, and obviously not by Windows either. So, how can we view the website from Windows? By creating an additional proxy device. The command creates a proxy device called myproxy80 that proxies the TCP port 80 between the host and the container (the binding happens on the host because the port already exists in the container).

root@mycontainer:~# logout
ubuntu@DESKTOP-WSL2:~$ lxc config device add mycontainer myproxy80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80 bind=host

Finally, find the IP address of your WSL2 Ubuntu host (hint: use ifconfig) and connect to that IP from your Web browser on Windows.
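
A hedged example of those last two steps, run on the WSL2 host itself (eth0 is typically the interface name in WSL2, and curl is assumed to be installed; adjust as needed):

ubuntu@DESKTOP-WSL2:~$ ip -4 addr show eth0     # the host IP to type into the Windows browser
ubuntu@DESKTOP-WSL2:~$ curl http://localhost/   # shows the nginx page, served through the myproxy80 device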

Conclusion

We managed to install LXD in WSL2 and got a container to start. Then, we installed a Web server in the container and viewed the page from Windows.

I hope future versions of WSL2 will be more friendly to LXD. In terms of networking, there is a need for more work to make it work out of the box. In terms of storage, btrfs is supported (over a loop file) and it is fine.

https://blog.simos.info/