Planet Ubuntu
http://planet.ubuntu.com/

Ubuntu Blog: An intro to MicroK8s

Mon, 27/01/2020 - 3:44 PM

MicroK8s is the smallest, fastest multi-node Kubernetes. Single-package fully conformant lightweight Kubernetes that works on 42 flavours of Linux as well as Mac and Windows using Multipass. Perfect for: Developer workstations, IoT, Edge, CI/CD.

Anyone who’s tried to work with Kubernetes knows the pain of getting set up and running with a deployment. There are minimalist solutions on the market that reduce time-to-deployment and complexity, but these lightweight solutions come at the expense of critical extensibility and missing add-ons.

If you don’t want to spend time jumping through hoops to get Kubernetes up and running, MicroK8s gets you started in under 60 seconds.

“Canonical might have assembled the easiest way to provision a single node Kubernetes cluster”

Kelsey Hightower, Google.

Join our webinar to learn why developers choose to work with MicroK8s as a reliable, fast, small and upstream version of Kubernetes, and how you can get started. The webinar will also feature the available add-ons, including Kubeflow for AI/ML work, Grafana and Prometheus for monitoring, service mesh tools and more.

Watch the webinar

Stuart Langridge: Write more

Sat, 25/01/2020 - 1:45 AM

I’ve written a couple of things here recently and I’d forgotten how much I enjoy doing that. I should do more of it.

Most of my creative writing energy goes into D&D, or stuff for work, or talks at conferences, or #sundayroastclub, but I think quite a lot of it is bled away by Twitter; an idea happens, and then while it’s still just an idea I tweet it and then it’s used up. There’s a certain amount of instant gratification involved in this, of course, but I think it’s like a pressure valve; because a tweet is so short, so immediate, it’s easy to release the steam in a hundred tiny bursts rather than one long exhalation. I’m not good at metaphors, but in my head this seems like one of those thermometers for charities: my creative wellspring builds up to the overflow point — call it the value of 50 — and so I tweet something which drops it back down to 48. Then it builds up again to 50 and another tweet drops it back to 48, and so on. In the old days, it’d run up to fifty and then keep going while I was consumed with the desire to write but also consumed with the time required to actually write something, and then there’d be something long and detailed and interesting which would knock me back down to thirty, or ten, or nought.

I kinda miss that. I’m not sure what to do about it, though. Swearing off Twitter isn’t really an option; even ignoring the catastrophic tsunami of FOMO that would ensue, I’d be hugely worried that if I’m not part of the conversation, part of the zeitgeist, I’d just vanish from the public discourse. Not sure my ego could cope with that.

So I’m between the devil and the deep blue sea. Neither of those are nice (which, obviously, is the point) but, like so many people before me, and I suspect me included, I think I’m going to make an effort to turn more thoughts into writing rather than into snide asides or half-finished thoughts where maybe a hundred likes will finish them.

Of course I don’t have comments, so your thoughts on this should be communicated to me via Twitter. The irony hurricane proceeds apace. (Or on your own weblog which then sends me a webmention via the form below, of course, but that’s not all that likely yet.) Check in a month whether I’ve even remotely stuck to this or if I’ve just taken the easy option.

Ubuntu Blog: Teaching Robotics with ROS on Ubuntu at SRU

Fri, 24/01/2020 - 11:11 PM

This week, as part of my work on the Ubuntu Robotics team, I headed up to Slippery Rock University in northwestern PA to meet with Dr. Sam Thangiah and to introduce students to the Robot Operating System (ROS).  New semester, lots of new opportunities for learning!

We started with a really simple robot environment.  Check out this build! This Raspberry Pi runs an Ubuntu 18.04 image, which gives it all the built-in LTS security advantages. It’s mounted on a piece of plexiglass with two motors and a motor controller board from the PiHut.  We worked through about 75 lines of sample Python code that used the RPi.GPIO library to control the general-purpose I/O pins, and we created an abstract Motor class.  This got our two-wheeled robot up and running…running right off the table. Oops.
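The abstract Motor class can be sketched in a few lines. This is a hypothetical reconstruction, not the course code: the pin numbers are invented, and a plain dictionary stands in for the RPi.GPIO calls so the sketch runs without a Raspberry Pi.

```python
# Hypothetical sketch of an abstract Motor class like the one described
# above. Pin numbers are made up; _set_pin records pin states in a dict
# where the real robot would call RPi.GPIO.output(pin, value).
class Motor:
    """One DC motor driven through a forward pin and a reverse pin."""

    def __init__(self, forward_pin, reverse_pin):
        self.forward_pin = forward_pin
        self.reverse_pin = reverse_pin
        self.pins = {forward_pin: 0, reverse_pin: 0}

    def _set_pin(self, pin, value):
        self.pins[pin] = value  # real robot: RPi.GPIO.output(pin, value)

    def forward(self):
        self._set_pin(self.reverse_pin, 0)
        self._set_pin(self.forward_pin, 1)

    def reverse(self):
        self._set_pin(self.forward_pin, 0)
        self._set_pin(self.reverse_pin, 1)

    def stop(self):
        self._set_pin(self.forward_pin, 0)
        self._set_pin(self.reverse_pin, 0)


# Two motors make a two-wheeled robot: both forward drives it straight.
left = Motor(forward_pin=17, reverse_pin=27)
right = Motor(forward_pin=23, reverse_pin=24)
left.forward()
right.forward()
```

Abstracting the motors this way means the drive logic never touches GPIO pins directly, which is what later makes it easy to bolt on a ROS interface.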

Getting moving was just the beginning.  With a robot active in the physical world, we identified plenty of new problems to solve.  One motor ran a bit faster than the other, so the robot drifted right. Sometimes one of the wheels lost traction, so the robot didn’t go where we sent it.  But probably the most important problem still to solve was keeping it from running into things… and from running off the table.

Many of these problems are solved by the Robot Operating System (ROS), the evolving product of a very active and innovative open-source robotics community.  With ROS installed on the Pi, another 25 lines of Python code created a ROS node listening for commands on the “/move” topic. Devices on the network were able to send motion commands directly to the robot, and we opened the door to the immense library of tools available within ROS.
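The shape of such a node can be sketched without a ROS installation. In this hypothetical sketch, a plain function stands in for the callback that rospy would invoke for each message arriving on the “/move” topic; the command strings and names are invented:

```python
# Hypothetical sketch of a "/move" command handler. On the robot the
# callback would be registered with something like
# rospy.Subscriber("/move", String, on_move); here we call it directly
# so the sketch runs without ROS installed.
actions = []  # records what the (imaginary) motors were told to do

def on_move(command):
    """Map a /move command string onto left/right motor actions."""
    dispatch = {
        "forward": ("left: forward", "right: forward"),
        "left":    ("left: stop",    "right: forward"),
        "right":   ("left: forward", "right: stop"),
        "stop":    ("left: stop",    "right: stop"),
    }
    actions.extend(dispatch.get(command, ()))  # ignore unknown commands

# Simulate three messages arriving on the topic.
for message in ["forward", "left", "stop"]:
    on_move(message)
```

The point of the pattern is that once the robot only reacts to topic messages, any device on the network can drive it just by publishing to “/move”.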

Robotics can be an outstanding learning tool where the digital realm meets the physical realm.  It’s a place where a student’s code produces real, observable actions and where they can experiment with their environment.  In just over an hour, our conversations wandered over everything from basic electrical theory to mechanical engineering, including a touch of kinematics, some mathematics, and a few lines of Python code to solve our problems.  If you’d like to learn more about building your own two-wheeled robot, see the “Your first robot” blog and video series by Kyle Fazzari, Canonical’s lead engineer in robotics.

Now that they’ve been given the basic building blocks, it’ll be exciting to see what a room full of motivated students can produce this semester!

Simos Xenitellis: How to use virtual machines in LXD

Fri, 24/01/2020 - 11:08 AM

Traditionally, LXD is used to create system containers: lightweight virtual-machine-like environments that use Linux container features rather than hardware virtualization.

However, starting from LXD 3.19, it is possible to create virtual machines as well. That is, now with LXD you can create both system containers and virtual machines.

In the following we see how to set up LXD for virtual machines, then start a virtual machine and use it. Finally, we go through some troubleshooting.

How to setup LXD for virtual machines

Launching LXD virtual machines requires some preparation. We need to pass some information to the virtual machine so that we can connect to it as soon as it boots up. We pass the necessary information to the virtual machine using a LXD profile, through cloud-init.

Creating a LXD profile for virtual machines

Here is such a profile. There is a cloud-init configuration that carries all the information to be passed to the virtual machine. Then there is a config disk device that is made available to the virtual machine, from which it can set up the VM-specific LXD components.

config:
  user.user-data: |
    #cloud-config
    ssh_pwauth: yes
    users:
      - name: ubuntu
        passwd: "$6$iBF0eT1/6UPE2u$V66Rk2BMkR09pHTzW2F.4GHYp3Mb8eu81Sy9srZf5sVzHRNpHP99JhdXEVeN0nvjxXVmoA6lcVEhOOqWEd3Wm0"
        lock_passwd: false
        groups: lxd
        shell: /bin/bash
        sudo: ALL=(ALL) NOPASSWD:ALL
description: LXD profile for virtual machines
devices:
  config:
    source: cloud-init:config
    type: disk
name: vm
used_by:

This profile

  • Enables password authentication in SSH (ssh_pwauth: yes)
  • Adds a non-root user ubuntu with password ubuntu. See Troubleshooting below on how to change this.
  • The password is not in a locked state.
  • The user account belongs to the lxd group, in case we want to run LXD inside the LXD virtual machine.
  • The shell is /bin/bash.
  • Can sudo to all without requiring a password.
  • Some extra configuration will be passed to the virtual machine through an ISO image named config.iso. Once you get a shell in the virtual machine, you can install the rest of the support by mounting this ISO image and running the installer.

We now need to create a profile with the above content. Here is how we do this. You first create an empty profile called vm. Then, you run the cat | lxc profile edit vm command, paste the above profile configuration, and finally hit Ctrl+D to save it. Alternatively, you can run lxc profile edit vm and paste the above text into the editor that opens. The profile was adapted from the LXD 3.19 announcement page.

$ lxc profile create vm
$ cat | lxc profile edit vm
config:
  user.user-data: |
    #cloud-config
    ssh_pwauth: yes
    users:
      - name: ubuntu
        passwd: "$6$iBF0eT1/6UPE2u$V66Rk2BMkR09pHTzW2F.4GHYp3Mb8eu81Sy9srZf5sVzHRNpHP99JhdXEVeN0nvjxXVmoA6lcVEhOOqWEd3Wm0"
        lock_passwd: false
        groups: lxd
        shell: /bin/bash
        sudo: ALL=(ALL) NOPASSWD:ALL
description: LXD profile for virtual machines
devices:
  config:
    source: cloud-init:config
    type: disk
name: vm
used_by:
Ctrl^D
$ lxc profile show vm

We have created the profile with the virtual machine-specific configuration. We now have the pieces in place to launch a LXD virtual machine.

Launching a LXD virtual machine

We launch a LXD virtual machine with the following command. It is the standard lxc launch command, with the addition of the --vm option to create a virtual machine (instead of a system container). We specify the default profile (whichever base configuration you use in your LXD installation) and on top of that we add our VM-specific configuration with --profile vm. Depending on your computer’s specifications, it takes a few seconds to launch the virtual machine, and then less than 10 seconds for the VM to boot up and receive an IP address from your network.

$ lxc launch ubuntu:18.04 vm1 --vm --profile default --profile vm
Creating vm1
Starting vm1
$ lxc list vm1
+------+---------+------+------+-----------------+-----------+
| NAME | STATE   | IPV4 | IPV6 |      TYPE       | SNAPSHOTS |
+------+---------+------+------+-----------------+-----------+
| vm1  | RUNNING |      |      | VIRTUAL-MACHINE | 0         |
+------+---------+------+------+-----------------+-----------+
$ lxc list vm1
+------+---------+--------------------+------+-----------------+-----------+
| NAME | STATE   |        IPV4        | IPV6 |      TYPE       | SNAPSHOTS |
+------+---------+--------------------+------+-----------------+-----------+
| vm1  | RUNNING | 10.10.10.20 (eth0) |      | VIRTUAL-MACHINE | 0         |
+------+---------+--------------------+------+-----------------+-----------+
$

We have enabled password authentication for SSH, which means that we can connect to the VM straight away with the following command.

$ ssh ubuntu@10.10.10.20
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-74-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Fri Jan 24 09:22:19 UTC 2020

  System load:  0.03              Processes:             100
  Usage of /:   10.9% of 8.68GB   Users logged in:       0
  Memory usage: 15%               IP address for enp3s5: 10.10.10.20
  Swap usage:   0%

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

ubuntu@vm1:~$

Using the console in a LXD VM

LXD has the lxc console command to give you a console to a running system container or virtual machine. You can use the console to view the boot messages as they appear, and also log in using a username and password. In the LXD profile we set up a password primarily to be able to connect through lxc console. Let’s get a shell through the console.

$ lxc console vm1
To detach from the console, press: Ctrl+a q
[NOTE: Press Enter at this point]

Ubuntu 18.04.3 LTS vm1 ttyS0

vm1 login: ubuntu
Password: **********
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-74-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Fri Jan 24 09:22:19 UTC 2020

  System load:  0.03              Processes:             100
  Usage of /:   10.9% of 8.68GB   Users logged in:       0
  Memory usage: 15%               IP address for enp3s5: 10.10.10.20
  Swap usage:   0%

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

ubuntu@vm1:~$

To exit from the console, logout from the shell first, then press Ctrl+A q.

ubuntu@vm1:~$ logout

Ubuntu 18.04.3 LTS vm1 ttyS0

vm1 login: [Press Ctrl+A q]
$

Bonus tip: when you launch a LXD VM, you can run lxc console vm1 straight away and watch the boot messages of the Linux kernel in the VM as they appear.

Setting up the LXD agent inside the VM

In any VM environment the VM is separated from the host. For usability purposes, we often add a service inside the VM that makes it easier to access the VM’s resources from the host. This service is provided in the config device that was made available to the VM through cloud-init. At some point in the future, the LXD virtual machine images will be adapted to apply the configuration from the config device automatically. But for now, we do this manually by setting up the LXD agent service. First, get a shell into the virtual machine, either through SSH or lxc console. We become root, mount the config device and inspect its files. We then run ./install.sh, which makes the LXD agent service run automatically in the VM. Finally, we reboot the VM so that the changes take effect.

ubuntu@vm1:~$ sudo -i
root@vm1:~# mount -t 9p config /mnt/
root@vm1:~# cd /mnt/
root@vm1:/mnt# ls -l
total 6390
-r-------- 1 999 root      745 Jan 24 09:18 agent.crt
-r-------- 1 999 root      288 Jan 24 09:18 agent.key
dr-x------ 2 999 root        5 Jan 24 09:18 cloud-init
-rwx------ 1 999 root      595 Jan 24 09:18 install.sh
-r-x------ 1 999 root 11495360 Jan 24 09:18 lxd-agent
-r-------- 1 999 root      713 Jan 24 09:18 server.crt
dr-x------ 2 999 root        4 Jan 24 09:18 systemd
root@vm1:/mnt# ./install.sh
Created symlink /etc/systemd/system/multi-user.target.wants/lxd-agent.service → /lib/systemd/system/lxd-agent.service.
Created symlink /etc/systemd/system/multi-user.target.wants/lxd-agent-9p.service → /lib/systemd/system/lxd-agent-9p.service.

LXD agent has been installed, reboot to confirm setup.
To start it now, unmount this filesystem and run: systemctl start lxd-agent-9p lxd-agent
root@vm1:/mnt# reboot

Now the LXD agent service is running in the VM. We are ready to use the LXD VM just like a LXD system container.

Using a LXD virtual machine

With the LXD agent installed inside the LXD VM, we can run the usual LXD commands such as lxc exec, lxc file, etc. Here is how to get a shell: either use the built-in alias lxc shell, or use lxc exec to get a shell as the non-root account of the Ubuntu container images (from the ubuntu: repository).

$ lxc shell vm1
root@vm1:~# logout
$ lxc exec vm1 -- sudo --user ubuntu --login
ubuntu@vm1:~$

We can transfer files between the host and the LXD virtual machine. We create a file mytest.txt on the host. We push that file to the virtual machine vm1. The destination of the push is vm1/home/ubuntu/, where vm1 is the name of the virtual machine (or system container). It is a bit weird that we do not use : to separate the name from the path, just like in SSH and elsewhere. The reason is that : is used to specify a remote LXD server, so it cannot be used to separate the name from the path. We then perform a recursive pull of the ubuntu home directory and place it in /tmp. Finally, we have a look at the retrieved directory.

$ echo "This is a test" > mytest.txt
$ lxc file push mytest.txt vm1/home/ubuntu/
$ lxc file pull --recursive vm1/home/ubuntu/ /tmp/
$ ls -ld /tmp/ubuntu/
drwxr-xr-x 4 myusername myusername 4096 Jan 28 01:00 /tmp/ubuntu/
$

We can view the lxc info of the virtual machine.

$ lxc info vm1
Name: vm1
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/01/27 20:20 UTC
Status: Stopped
Type: virtual-machine
Profiles: default, vm

Other functionality that is available to system containers should also become available to virtual machines in the following months.

Troubleshooting

Error: unknown flag: --vm

You will get this error message when you try to launch a virtual machine while your version of LXD is 3.18 or older. VM support was added in LXD 3.19, so the version should be 3.19 or newer.

Error: Failed to connect to lxd-agent

You have launched a LXD VM and are trying to connect to it using lxc exec to get a shell (or run other commands). The LXD VM needs a service running inside it to receive the lxc exec commands. This service has not been installed in the LXD VM yet, or for some reason it is not running; see the earlier section on setting up the LXD agent.

Error: The LXD VM does not get automatically an IP address

The LXD virtual machine should be able to get an IP address from LXD’s dnsmasq without issues.

macvlan works as well, but the IP address would not show up in lxc list vm1 until you set up the LXD agent.

$ lxc list vm1
+------+---------+----------------------+------+-----------------+-----------+
| NAME | STATE   |         IPV4         | IPV6 |      TYPE       | SNAPSHOTS |
+------+---------+----------------------+------+-----------------+-----------+
| vm1  | RUNNING | 192.168.1.9 (enp3s5) |      | VIRTUAL-MACHINE | 0         |
+------+---------+----------------------+------+-----------------+-----------+

I created a LXD VM and did not have to do any preparation at all!

When you run lxc launch or lxc init with the aim of creating a LXD VM, you need to remember to pass the --vm option in order to create a virtual machine instead of a container. To verify whether your newly created machine is a system container or a virtual machine, run lxc list and check the Type column.

How do I change the VM password in the LXD profile?

You can generate a new password hash using the following command. We do not need echo -n in this case because mkpasswd will take care of the trailing newline for us. We use the SHA-512 method, because this is the password-hashing algorithm used since Ubuntu 16.04.

$ echo "mynewpassword" | mkpasswd --method=SHA-512 --stdin
$6$BzEIxmCSyPK7$GQgw5i7SIIY0k2Oa/YmBVzmDZ4/zaxx/qJVzKBfG6uaaPYfb2efJGmJ8xxRsCaxxrYzO2NuPawrPd1DD/DsPk/
$

Then, run lxc profile edit vm and replace the old password field with your new one.
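If mkpasswd (from the whois package) is not available, the same kind of SHA-512 crypt hash can be produced with Python’s standard library. A minimal sketch; note that the crypt module is Unix-only and was removed in Python 3.13, where mkpasswd or the third-party passlib package is needed instead:

```python
# Generate a SHA-512 crypt(3) hash suitable for the passwd field of the
# cloud-init profile. Unix-only; the crypt module was removed in 3.13.
import crypt

hashed = crypt.crypt("mynewpassword", crypt.mksalt(crypt.METHOD_SHA512))
print(hashed)  # a "$6$..." string to paste into the profile's passwd field
```

The "$6$" prefix marks a SHA-512 crypt hash, the same format mkpasswd --method=SHA-512 emits.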

How do I set my public key instead of a password?

Instead of passwd, use ssh-authorized-keys. See the cloud-init example on ssh-authorized-keys.

Discussion

In LXD 3.19 there is initial support for virtual machines. As new versions of LXD are developed, more features from system containers will be implemented for virtual machines as well. In April 2020 we will be getting LXD 4.0, with long-term support for five to ten years. There is ongoing work to add as much virtual machine functionality as possible before the LXD 4.0 feature freeze. If you are affected, it makes sense to follow the development of virtual machine support in LXD closely in the run-up to the LXD 4.0 feature freeze.

blog.simos.info/

Podcast Ubuntu Portugal: Ep 74 – WSL by Nuno do Carmo (part 2)

Thu, 23/01/2020 - 11:45 PM

Episode 74 – WSL by Nuno do Carmo (part 2). And here comes the continuation of the story: 2 Ubuntus and 1 Windows walk into a bar and… You know the drill: listen, comment and share!

  • https://ulsoy.org/blog/experiencing-wsl-as-a-linux-veteran-part-1/
  • https://meta.wikimedia.org/wiki/WikiCon_Portugal
  • https://www.humblebundle.com/books/python-machine-learning-packt-books?partner=PUP
  • https://www.humblebundle.com/books/holiday-by-makecation-family-projects-books?partner=PUP
  • https://stackoverflow.com/questions/56979849/dbeaver-ssh-tunnel-invalid-private-key
  • https://fosdem.org
  • https://github.com/PixelsCamp/talks
  • https://pixels.camp/
Support

This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound capture, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–arroba–gmail.com.

You can support the podcast using our Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of it for 15 dollars, or different parts of it depending on whether you pay 1, or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want.

If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorization.

The Fridge: Ubuntu 19.04 (Disco Dingo) End of Life reached on January 23 2020

Thu, 23/01/2020 - 11:19 PM

This is a follow-up to the End of Life warning sent earlier this month to confirm that as of today (Jan 23, 2020), Ubuntu 19.04 is no longer supported. No more package updates will be accepted to 19.04, and it will be archived to old-releases.ubuntu.com in the coming weeks.

The original End of Life warning follows, with upgrade instructions:

Ubuntu announced its 19.04 (Disco Dingo) release almost 9 months ago, on April 18, 2019. As a non-LTS release, 19.04 has a 9-month support cycle and, as such, the support period is now nearing its end and Ubuntu 19.04 will reach end of life on Thursday, Jan 23rd.

At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 19.04.

The supported upgrade path from Ubuntu 19.04 is via Ubuntu 19.10. Instructions and caveats for the upgrade may be found at:

https://help.ubuntu.com/community/EoanUpgrades

Ubuntu 19.10 continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce

Since its launch in October 2004 Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

Originally posted to the ubuntu-announce mailing list on Thu Jan 23 21:13:01 UTC 2020 by Adam Conrad, on behalf of the Ubuntu Release Team

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, December 2019

Thu, 23/01/2020 - 7:19 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, 208.00 work hours have been dispatched among 14 paid contributors. Their reports are available online.

Evolution of the situation

Though December was as quiet as expected due to the holiday season, the usual amount of security updates was still released by our contributors.
We currently have 59 LTS sponsors, together sponsoring 219h each month. Still, as always, we are welcoming new LTS sponsors!

The security tracker currently lists 34 packages with a known CVE and the dla-needed.txt file has 33 packages needing an update.

Thanks to our sponsors


Ubuntu Studio: Ubuntu Studio 19.04 reaches End Of Life

Thu, 23/01/2020 - 1:00 AM
Our favorite Disco Dingo, Ubuntu Studio 19.04, has reached end-of-life and will no longer receive any updates. If you have not yet upgraded, please do so now or forever lose the ability to upgrade! Ubuntu Studio 20.04 LTS is scheduled for April of 2020. The transition from 19.10 to 20.04...

Ubuntu Blog: Ubuntu Server development summary – 21 January 2020

Tue, 21/01/2020 - 8:00 PM
Hello Ubuntu Server

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list or visit the Ubuntu Server discourse hub for more discussion.

Spotlight: cloud-init 19.4

On the very last days of 2019 we released version 19.4 of cloud-init. This new upstream release is currently available in the supported LTS releases of Ubuntu (Xenial and Bionic) and in the development version of the next LTS release, Focal Fossa. For a list of features released, see the full ChangeLog on GitHub. The 19.4 release was the last to support Python 2.7; new commits to cloud-init will no longer be required to support Python 2.

Spotlight: Ubuntu Pro for AWS

Ubuntu Pro is a premium Ubuntu image designed to provide the most comprehensive feature set for production environments running in the public cloud. Ubuntu Pro images based on Ubuntu 18.04 LTS (Bionic Beaver) are now available for AWS as an AMI through the AWS Marketplace.

Spotlight: Speed up project bug triage with grease monkey

Bryce Harrington, on the Ubuntu Server team, has written up an excellent post on how to speed up bug triage responses with grease monkey. It simplifies the inclusion of frequent responses the team uses for various projects when maintaining bugs in Launchpad for multiple Ubuntu packages. Thanks Bryce!

cloud-init
  • Add Rootbox & HyperOne to list of cloud in README (#176) [Adam Dobrawy]
  • docs: add proposed SRU testing procedure (#167)
  • util: rename get_architecture to get_dpkg_architecture (#173)
  • Ensure util.get_architecture() runs only once (#172)
  • Only use gpart if it is the BSD gpart (#131) [Conrad Hoffmann]
  • freebsd: remove superflu exception mapping (#166) [Gonéri Le Bouder]
  • ssh_auth_key_fingerprints_disable test: fix capitalization (#165) [Paride Legovini]
  • util: move uptime’s else branch into its own boottime function (#53) [Igor Galić] (LP: #1853160)
  • workflows: add contributor license agreement checker (#155)
  • net: fix rendering of ‘static6’ in network config (#77) (LP: #1850988)
  • Make tests work with Python 3.8 (#139) [Conrad Hoffmann]
  • fixed minor bug with mkswap in cc_disk_setup.py (#143) [andreaf74]
  • freebsd: fix create_group() cmd (#146) [Gonéri Le Bouder]
  • doc: make apt_update example consistent (#154)
  • doc: add modules page toc with links (#153) (LP: #1852456)
  • Add support for the amazon variant in cloud.cfg.tmpl (#119) [Frederick Lefebvre]
  • ci: remove Python 2.7 from CI runs (#137)
  • modules: drop cc_snap_config config module (#134)
  • migrate-lp-user-to-github: ensure Launchpad repo exists (#136)
  • docs: add initial troubleshooting to FAQ (#104) [Joshua Powers]
  • doc: update cc_set_hostname frequency and descrip (#109) [Joshua Powers] (LP: #1827021)
  • freebsd: introduce the freebsd renderer (#61) [Gonéri Le Bouder]
  • cc_snappy: remove deprecated module (#127)
  • HACKING.rst: clarify that everyone needs to do the LP->GH dance (#130)
  • freebsd: cloudinit service requires devd (#132) [Gonéri Le Bouder]
  • cloud-init: fix capitalisation of SSH (#126)
  • doc: update cc_ssh clarify host and auth keys [Joshua Powers] (LP: #1827021)
  • ci: emit names of tests run in Travis (#120)
  • Release 19.4 (LP: #1856761)
  • rbxcloud: fix dsname in RbxCloud [Adam Dobrawy] (LP: #1855196)
  • tests: Add tests for value of dsname in datasources [Adam Dobrawy]
  • apport: Add RbxCloud ds [Adam Dobrawy]
  • docs: Updating index of datasources [Adam Dobrawy]
  • docs: Fix anchor of datasource_rbx [Adam Dobrawy]
  • settings: Add RbxCloud [Adam Dobrawy]
  • doc: specify _ over – in cloud config modules [Joshua Powers] (LP: #1293254)
  • tools: Detect python to use via env in migrate-lp-user-to-github [Adam Dobrawy]
  • Partially revert “fix unlocking method on FreeBSD” (#116)
  • tests: mock uid when running as root (#113) [Joshua Powers] (LP: #1856096)
  • cloudinit/netinfo: remove unused getgateway (#111)
  • docs: clear up apt config sections (#107) [Joshua Powers] (LP: #1832823)
  • doc: add kernel command line option to user data (#105) [Joshua Powers] (LP: #1846524)
  • config/cloud.cfg.d: update README [Joshua Powers] (LP: #1855006)
  • azure: avoid re-running cloud-init when instance-id is byte-swapped (#84) [AOhassan]
  • fix unlocking method on FreeBSD [Igor Galić] (LP: #1854594)
  • debian: add reference to the manpages [Joshua Powers]
  • ds_identify: if /sys is not available use dmidecode (#42) [Igor Galić] (LP: #1852442)
  • docs: add cloud-id manpage [Joshua Powers]
  • docs: add cloud-init-per manpage [Joshua Powers]
  • docs: add cloud-init manpage [Joshua Powers]
  • docs: add additional details to per-instance/once [Joshua Powers]
  • Merge pull request #96 from fred-lefebvre/master [Joshua Powers]
  • Update doc-requirements.txt [Joshua Powers]
  • doc-requirements: add missing dep [Joshua Powers]
  • Merge pull request #95 from powersj/docs/bugs [Joshua Powers]
  • dhcp: Support RedHat dhcp rfc3442 lease format for option 121 (#76) [Eric Lafontaine] (LP: #1850642)
  • one more [Joshua Powers]
  • Address OddBloke review [Joshua Powers]
  • network_state: handle empty v1 config (#45) (LP: #1852496)
  • docs: Add document on how to report bugs [Joshua Powers]
  • Add an Amazon distro in the redhat OS family [Frederick Lefebvre]
  • Merge pull request #94 from gaughen/patch-1 [Joshua Powers]
  • removed a couple of “the”s [gaughen]
  • docs: fix line length and remove highlighting [Joshua Powers]
  • docs: Add security.md to readthedocs [Joshua Powers]
  • Multiple file fix for AuthorizedKeysFile config (#60) [Eduardo Otubo]
  • Merge pull request #88 from OddBloke/travis [Joshua Powers]
  • Revert “travis: only run CI on pull requests”
  • doc: update links on README.md [Joshua Powers]
  • doc: Updates to wording of README.md [Joshua Powers]
  • Add security.md [Joshua Powers]
  • setup.py: Amazon Linux sets libexec to /usr/libexec (#52) [Frederick Lefebvre]
  • Fix linting failure in test_url_helper (#83) [Eric Lafontaine]
  • url_helper: read_file_or_url should pass headers param into readurl (#66) (LP: #1854084)
  • dmidecode: log result after stripping \n [Igor Galić]
  • cloud_tests: add azure platform support to integration tests [ahosmanmsft]
  • set_passwords: support for FreeBSD (#46) [Igor Galić]
curtin
  • vmtests: skip Focal deploying Centos70 ScsiBasic
  • vmtests: fix network mtu tests, separating ifupdown vs networkd
  • doc: Fix kexec documentation bug. [Mike Pontillo]
  • vmtests: Add Focal Fossa
  • centos: Add centos/rhel 8 support, enable UEFI Secure Boot [Lee Trager] (LP: #1788088)
  • Bump XFS /boot skip-by date out a while
  • vmtest: Fix a missing unset of OUTPUT_FSTAB
  • curthooks: handle s390x/aarch64 kernel install hooks (LP: #1856038)
  • clear-holders: handle arbitrary order of devices to clear
  • curthooks: only run update-initramfs in target once (LP: #1842264)
  • test_network_mtu: bump fixby date for MTU tests
git-ubuntu

The git-ubuntu snap package has been updated to 0.8.0 for the ‘beta’ channel.

The lion’s share of effort since 0.7.4 has gone towards bug fixing and general stabilization. Documentation and tests received a fair share of attention, as did the snap and setup.py packaging.

The importer now uses a sqlite3 database to store persistent information such as the pending package import status.
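To illustrate the idea, here is a minimal sketch of persisting pending-import state with Python's built-in sqlite3 module. The table name, columns, and helper functions are invented for this example and are not git-ubuntu's actual schema.

```python
import sqlite3

def open_status_db(path=":memory:"):
    # Create (or reuse) a tiny status table; a real importer would
    # point this at a file on disk so state survives restarts.
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS import_status ("
        "  package TEXT PRIMARY KEY,"
        "  state   TEXT NOT NULL)"
    )
    return conn

def mark_pending(conn, package):
    # Upsert so requesting the same import twice is idempotent.
    conn.execute(
        "INSERT INTO import_status (package, state) VALUES (?, 'pending') "
        "ON CONFLICT(package) DO UPDATE SET state='pending'",
        (package,),
    )
    conn.commit()

def pending_packages(conn):
    rows = conn.execute(
        "SELECT package FROM import_status WHERE state = 'pending'"
    )
    return [r[0] for r in rows]
```

The upsert means the backend can safely re-enter a package that is already queued without creating duplicates.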

A new --only-request-new-imports-once option has been added for the backend source package importer. This makes the importer exit immediately after entering new imports into the database.

The --deconstruct option has been renamed to --split, to prevent confusion that led people to assume --deconstruct meant the opposite of “reconstruct”.

Launchpad object fetches are cached using Python’s cachetools module, as a performance improvement that reduces the excessive number of API calls to the Launchpad service.

Finally, the backend service is now managed using a systemd watchdog daemon. Prior to this the service would need to be manually restarted whenever it hung or crashed, such as due to Launchpad service outages or network instabilities.
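A hypothetical unit-file fragment (not git-ubuntu's actual unit) shows how systemd's watchdog works: the service must ping systemd within WatchdogSec, or systemd kills and restarts it, which also covers outright crashes via Restart=.

```
[Service]
Type=notify
ExecStart=/usr/bin/example-importer
# Service must send WATCHDOG=1 (e.g. via sd_notify) at least every 5 minutes
WatchdogSec=300
Restart=on-failure
NotifyAccess=main
```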

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Proposed Uploads to the Supported Releases

Please consider testing the following by enabling proposed, checking packages for update regressions, and making sure to mark affected bugs verified as fixed.

Total: 3

Uploads Released to the Supported Releases

Total: 80

Uploads to the Development Release

Total: 129

Ubuntu Studio: New Website!

Mar, 21/01/2020 - 7:48md
Ubuntu Studio has had the same website design for nearly 9 years. Today, that changed. We were approached by Shinta from Playmain, asking if they could contribute to the project by designing a new website theme for us. Today, after months of correspondence and collaboration, we are proud to unveil…

Ubuntu Blog: problem-oriented

Mar, 21/01/2020 - 5:22md

Once upon a time, Heathkit was a big business.

Yeah, I know I’m dating myself. Meh.

Heathkit kits were great, but honestly, I had an issue with them: They were either too focused on (re-)teaching basic electronics, or they assumed the tinkerer was an EE, so they didn’t give a lot of consideration to explaining what you could do with them. I mean, my first kit was an alarm clock, and it had a snooze button and big, red numbers that kept me waking up all night for a couple weeks to look for the fire trucks. But in general, most of their really cool items — frequency analyzers, oscilloscopes, and so on — didn’t come with much in the way of “how can I use this device?”

That’s why I’m going to start taking the MAAS blog and doc in a little different direction going forward. I want to start using real-world examples and neat networking configurations and other problem-oriented efforts as my baseline for writing. Heck, I’d even like to try using MAAS to control my little Raspberry Pi farm, although that’s probably not the recommended configuration, and I’m not sure how PXE-booting would work yet. (But if I get it going, I promise to blog it.)

Don’t get me wrong; the MAAS doc is pretty solid. I just want to do more with it. As in not just update it for new versions, but make it come alive and show off what MAAS can do. I also want to pick up some of the mid-range applications and situations. MAAS is well-established in large datacentres, and there are obviously hobbyists and small shops tinkering, but that’s not the bulk of people who could genuinely benefit from it. I want to dig into some of the middle-industry, small-to-medium-size possibilities.

Since I already know something about small hospital datacentres, having worked with them for about ten years, that might be a good place to start. Hospitals with 50-200 beds tend to have the same requirements as a full-size facility, but face the challenges of a somewhat smaller budget and lower IT headcount. It really feels like a good sample problem for MAAS.

Yeah, I’m gonna sleep on it for a week and tinker a little, so set your Heathkit alarm clock for next Tuesday and check back to see where it’s going. And turn over the other way, so you’re not staring at the bright-red, segmented LEDs all week.