
Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/
Updated: 3 weeks 1 day ago

Ubuntu Insights: Security Team Weekly Summary: September 27, 2017

Thu, 28/09/2017 - 4:14pm

The Security Team weekly reports are intended to be very short summaries of the Security Team’s weekly activities.

If you would like to reach the Security Team, you can find us at the #ubuntu-hardened channel on FreeNode. Alternatively, you can mail the Ubuntu Hardened mailing list at: ubuntu-hardened@lists.ubuntu.com

During the last week, the Ubuntu Security team:

  • Triaged 296 public security vulnerability reports, retaining the 81 that applied to Ubuntu.
  • Published 16 Ubuntu Security Notices which fixed 37 security issues (CVEs) across 18 supported packages.
Ubuntu Security Notices

Bug Triage

Mainline Inclusion Requests

Updates to Community Supported Packages
  • Simon Quigley (tsimonq2) provided debdiffs for trusty-zesty for jython (LP: #1714728)

Development
  • review
    • udisks2 PR 3931
  • snap-confine calls snap-update-ns PR 3621
    • bind mount relative to snap-confine PR 3956
    • snaps on NFS support
  • completed: create PR 3937 to use only ‘udevadm trigger --action=change’ instead of ‘udevadm control --reload-rules’
  • update snap-confine to unconditionally add the nvidia devices to the device cgroup and rely only on apparmor for mediation
  • wrote/tested libseccomp-golang changes to complement the libseccomp changes: https://github.com/seccomp/libseccomp-golang/pull/29

  • uploaded libseccomp, with the most minimal change needed to support snapd, to artful after receiving a Feature Freeze exception
What the Security Team is Reading This Week

Weekly Meeting

More Info

Jonathan Riddell: KGraphViewer 2.4.2

Wed, 27/09/2017 - 3:23pm

KGraphViewer 2.4.2 has been released.

KGraphViewer is a visualiser for Graphviz’s DOT format of graphs.
https://www.kde.org/applications/graphics/kgraphviewer

Changelog compared to 2.4.0:

  • add missing find dependency macro https://build.neon.kde.org/job/xenial_unstable_kde-extras_kgraphviewer_lintcmake/lastCompletedBuild/testReport/libkgraphviewer-dev/KGraphViewerPart/find_package/
  • Fix broken reloading and broken layout changing due to lost filename https://phabricator.kde.org/D7932
  • kgraphviewer_part.rc: set fallback text for toplevel menu entries
  • desktop-mime-but-no-exec-code
  • Codefix, comparisons were meant to be assignments

KGraphViewer 2.4.1 was made with an incorrect internal version number and should be ignored.

It can be used by massif-visualizer to add graphing features.

Download from:
https://download.kde.org/stable/kgraphviewer/2.4.2/

sha256:
49438b4e6cca69d2e658de50059f045ede42cfe78ee97cece35959e29ffb85c9 kgraphviewer-2.4.2.tar.xz

Signed with my PGP key
2D1D 5B05 8835 7787 DE9E E225 EC94 D18F 7F05 997E
Jonathan Riddell <jr@jriddell.org>
kgraphviewer-2.4.2.tar.xz.sig
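To check the download, the published sha256 and the detached signature can be verified along these lines (a sketch; it assumes sha256sum and gpg are available and that the key, whose long ID is taken from the fingerprint above, can be fetched from a public keyserver):

# verify the checksum (two spaces between hash and filename are required)
echo "49438b4e6cca69d2e658de50059f045ede42cfe78ee97cece35959e29ffb85c9  kgraphviewer-2.4.2.tar.xz" | sha256sum -c
# fetch the signing key and verify the detached signature
gpg --recv-keys EC94D18F7F05997E
gpg --verify kgraphviewer-2.4.2.tar.xz.sig kgraphviewer-2.4.2.tar.xz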


Simos Xenitellis: How to set up LXD on Packet.net (baremetal servers)

Tue, 26/09/2017 - 9:45pm

Packet.net has premium baremetal servers that start at $36.50 per month for a quad-core Atom C2550 with 8GB RAM and 80GB SSD, on a 1Gbps Internet connection. On the other end of the scale, there is an option for a 24-core (two Intel CPUs) system with 256GB RAM and a total of 2.8TB SSD disk space at around $1000 per month.

In this post we are trying out the most affordable baremetal server (type 0 from the list) with Ubuntu and LXD.

Starting the server is quite uneventful. Being baremetal, it takes more time to start than a VPS. It started, and we SSH into it.

$ ssh root@ip.ip.ip.ip
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.10.0-24-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@lxd:~#

Here there is some information about the booted system,

root@lxd:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.2 LTS
Release:        16.04
Codename:       xenial
root@lxd:~#

And the CPU details,

root@lxd:~# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 77
model name      : Intel(R) Atom(TM) CPU C2550 @ 2.40GHz
stepping        : 8
microcode       : 0x122
cpu MHz         : 1200.000
cache size      : 1024 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes rdrand lahf_lm 3dnowprefetch epb tpr_shadow vnmi flexpriority ept vpid tsc_adjust smep erms dtherm ida arat
bugs            :
bogomips        : 4800.19
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:
... omitting the other three cores ...

Let’s update the package list,

root@lxd:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
...

They are using the official Ubuntu repositories instead of caching packages on local mirrors. In retrospect, this is not an issue, because the Internet connectivity is 1Gbps, bonded from two identical interfaces.

Let’s upgrade the packages and deal with any issues. Upgraded packages sometimes complain that the local configuration files differ from the ones they ship.

root@lxd:~# apt upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  apt apt-utils base-files cloud-init gcc-5-base grub-common grub-pc grub-pc-bin grub2-common
  initramfs-tools initramfs-tools-bin initramfs-tools-core kmod libapparmor1 libapt-inst2.0
  libapt-pkg5.0 libasn1-8-heimdal libcryptsetup4 libcups2 libdns-export162 libexpat1
  libgdk-pixbuf2.0-0 libgdk-pixbuf2.0-common libgnutls-openssl27 libgnutls30 libgraphite2-3
  libgssapi3-heimdal libgtk2.0-0 libgtk2.0-bin libgtk2.0-common libhcrypto4-heimdal
  libheimbase1-heimdal libheimntlm0-heimdal libhx509-5-heimdal libisc-export160 libkmod2
  libkrb5-26-heimdal libpython3.5 libpython3.5-minimal libpython3.5-stdlib libroken18-heimdal
  libstdc++6 libsystemd0 libudev1 libwind0-heimdal libxml2 logrotate mdadm ntp ntpdate
  open-iscsi python3-jwt python3.5 python3.5-minimal systemd systemd-sysv tcpdump udev
  unattended-upgrades
59 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 24.3 MB of archives.
After this operation, 77.8 kB of additional disk space will be used.
Do you want to continue? [Y/n]
...

First is grub, and the diff shows (not shown here) that it is a minor issue. The new version of grub.cfg changes the system to appear as Debian instead of Ubuntu. We did not investigate this further.

We are then asked where to install grub. We set it to /dev/sda and hope that the server can successfully reboot. We note that instead of the 80GB SSD listed in the description, we got a 160GB SSD. Not bad.

Setting up cloud-init (0.7.9-233-ge586fe35-0ubuntu1~16.04.2) ...

Configuration file '/etc/cloud/cloud.cfg'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** cloud.cfg (Y/I/N/O/D/Z) [default=N] ? N
Progress: [ 98%] [##################################################################################.]

Still during apt upgrade, it complains about /etc/cloud/cloud.cfg. Comparing the installed and packaged versions, we keep the existing file and do not install the new generic packaged version (the server would likely not boot with it).

At the end, it complains about

W: Possible missing firmware /lib/firmware/ast_dp501_fw.bin for module ast

Time to reboot the server and check if we messed it up.

root@lxd:~# shutdown -r now

$ ssh root@ip.ip.ip.ip
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-24-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

Last login: Tue Sep 26 15:29:58 2017 from 1.2.3.4
root@lxd:~#

We are good! Note that now it says Ubuntu 16.04.3 while before it was Ubuntu 16.04.2.

LXD is not installed by default,

root@lxd:~# apt policy lxd
lxd:
  Installed: (none)
  Candidate: 2.0.10-0ubuntu1~16.04.1
  Version table:
     2.0.10-0ubuntu1~16.04.1 500
        500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
     2.0.0-0ubuntu4 500
        500 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages

There are two versions: 2.0.0, the stock version released initially with Ubuntu 16.04, and 2.0.10, currently the latest stable version for Ubuntu 16.04. Let’s install it.

root@lxd:~# apt install lxd ...

We are now ready to add the non-root user account.

root@lxd:~# adduser myusername
Adding user `myusername' ...
Adding new group `myusername' (1000) ...
Adding new user `myusername' (1000) with group `myusername' ...
Creating home directory `/home/myusername' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for myusername
Enter the new value, or press ENTER for the default
        Full Name []:
        Room Number []:
        Work Phone []:
        Home Phone []:
        Other []:
Is the information correct? [Y/n] Y
root@lxd:~# ssh myusername@localhost
Permission denied (publickey).
root@lxd:~# cp -R ~/.ssh/ ~myusername/
root@lxd:~# chown -R myusername:myusername ~myusername/

We added the new username, then tested that password authentication is indeed disabled. Finally, we copied the authorized_keys file from ~root/ to the new non-root account, and adjusted the ownership of those files.

Let’s log out from the server and log in again as the new non-root account.

$ ssh myusername@ip.ip.ip.ip
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-24-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

**************************************************************************
# This system is using the EC2 Metadata Service, but does not appear to #
# be running on Amazon EC2 or one of cloud-init's known platforms that  #
# provide a EC2 Metadata service. In the future, cloud-init may stop    #
# reading metadata from the EC2 Metadata Service unless the platform    #
# can be identified.                                                    #
#                                                                        #
# If you are seeing this message, please file a bug against cloud-init  #
# at https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid     #
# Make sure to include the cloud provider your instance is running on.  #
#                                                                        #
# For more information see https://bugs.launchpad.net/bugs/1660385      #
#                                                                        #
# After you have filed a bug, you can disable this warning by launching #
# your instance with the cloud-config below, or putting that content    #
# into /etc/cloud/cloud.cfg.d/99-ec2-datasource.cfg                     #
#                                                                        #
#   #cloud-config                                                        #
#   datasource:                                                          #
#     Ec2:                                                               #
#       strict_id: false                                                 #
**************************************************************************

Disable the warnings above by:
  touch /home/myusername/.cloud-warnings.skip
or
  touch /var/lib/cloud/instance/warnings/.skip

myusername@lxd:~$

This warning is related to our decision to keep the existing cloud.cfg when we upgraded the cloud-init package. It is something that Packet.net (the provider) should deal with.
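In the meantime, the banner itself spells out how to silence the warning; a sketch that writes the suggested cloud-config (the path and contents are taken verbatim from the warning above):

# write the cloud-config snippet suggested by the warning banner
sudo tee /etc/cloud/cloud.cfg.d/99-ec2-datasource.cfg <<'EOF'
#cloud-config
datasource:
  Ec2:
    strict_id: false
EOF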

We are ready to try out LXD on packet.net.

Configuring LXD

Let’s configure LXD. First, how much free space do we have?

myusername@lxd:~$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       136G  1.1G  128G   1% /
myusername@lxd:~$

There is plenty of space; we will use 100GB of it for LXD.

We are using ZFS as the LXD storage backend, therefore,

myusername@lxd:~$ sudo apt install zfsutils-linux

Now, we set up LXD.

myusername@lxd:~$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs
Create a new ZFS pool (yes/no) [default=yes]? yes
Name of the new ZFS pool [default=lxd]: lxd
Would you like to use an existing block device (yes/no) [default=no]? no
Size in GB of the new loop device (1GB minimum) [default=27]: 100
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? yes
LXD has been successfully configured.
myusername@lxd:~$ lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
myusername@lxd:~$

Trying out LXD

Let’s create a container, install nginx and then make the web server accessible through the Internet.

myusername@lxd:~$ lxc launch ubuntu:16.04 web
Creating web
Retrieving image: rootfs: 100% (47.99MB/s)
Starting web
myusername@lxd:~$

Let’s see the details of the container, called web.

myusername@lxd:~$ lxc list --columns ns4tS
+------+---------+---------------------+------------+-----------+
| NAME | STATE   | IPV4                | TYPE       | SNAPSHOTS |
+------+---------+---------------------+------------+-----------+
| web  | RUNNING | 10.253.67.97 (eth0) | PERSISTENT | 0         |
+------+---------+---------------------+------------+-----------+
myusername@lxd:~$

We can see the container IP address. The parameter ns4tS simply omits the IPv6 address (‘6’) so that the table will look nice on the blog post.
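Each letter in the --columns string selects a column: n is name, s is state, 4 is IPv4, 6 is IPv6, t is type and S is snapshots (treat the exact mapping as an assumption; lxc list --help has the authoritative list). For example, to bring the IPv6 column back:

myusername@lxd:~$ lxc list --columns ns46tS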

Let’s enter the container and install nginx.

myusername@lxd:~$ lxc exec web -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@web:~$

In the web container we execute the command sudo --login --user ubuntu, which gives us a login shell in the container. All Ubuntu containers have a default non-root account called ubuntu.

ubuntu@web:~$ sudo apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease

3 packages can be upgraded. Run ‘apt list --upgradable’ to see them.
ubuntu@web:~$ sudo apt install nginx
Reading package lists… Done

Processing triggers for ufw (0.35-0ubuntu2) …
ubuntu@web:~$ sudo vi /var/www/html/index.nginx-debian.html
ubuntu@web:~$ logout

Before installing a package, we must update the package list. We updated, then installed nginx. Subsequently, we touched up the default HTML file a bit to mention Packet.net and LXD. Finally, we logged out from the container.

Let’s test that the web server in the container is working.

myusername@lxd:~$ curl 10.253.67.97
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx on Packet.net in an LXD container!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx on Packet.net in an LXD container!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
myusername@lxd:~$

The last step is to get Ubuntu to forward any Internet connections from port 80 to the container at port 80. For this, we need the public IP of the server and the private IP of the container (it’s 10.253.67.97).
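As a side note, if you prefer not to copy the container IP out of the table by hand, it can be extracted from lxc info; a sketch (the awk pattern assumes the LXD 2.x output format of lxc info, so treat it as an assumption):

# hypothetical one-liner: grab the eth0 IPv4 address of the "web" container
CONTAINER_IP=$(lxc info web | awk '$1 == "eth0:" && $2 == "inet" {print $3; exit}')
echo "$CONTAINER_IP"    # should print 10.253.67.97 on this server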

myusername@lxd:~$ ifconfig
bond0     Link encap:Ethernet  HWaddr 0c:c4:7a:de:51:a8
          inet addr:147.75.82.251  Bcast:255.255.255.255  Mask:255.255.255.254
          inet6 addr: 2604:1380:2000:600::1/127 Scope:Global
          inet6 addr: fe80::ec4:7aff:fee5:4462/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:144216 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14181 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:211518302 (211.5 MB)  TX bytes:1443508 (1.4 MB)

The interface is a bond, bond0. Two 1Gbps connections are bonded together.

myusername@lxd:~$ PORT=80 PUBLIC_IP=147.75.82.251 CONTAINER_IP=10.253.67.97 sudo -E bash -c 'iptables -t nat -I PREROUTING -i bond0 -p TCP -d $PUBLIC_IP --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment "forward to the Nginx container"' myusername@lxd:~$
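To confirm the rule took, and to be able to remove it later, the standard iptables listing works; note also that the rule is not persistent across reboots unless saved (for example with the iptables-persistent package):

# list the NAT PREROUTING chain with rule numbers
sudo iptables -t nat -L PREROUTING -n --line-numbers
# delete the rule by its number in the listing, e.g. rule 1:
# sudo iptables -t nat -D PREROUTING 1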

Let’s test it out!

That’s it!

Colin Watson: A mysterious bug with Twisted plugins

Tue, 26/09/2017 - 5:20pm

I fixed a bug in Launchpad recently that led me deeper than I expected.

Launchpad uses Buildout as its build system for Python packages, and it’s served us well for many years. However, we’re using 1.7.1, which doesn’t support ensuring that packages required using setuptools’ setup_requires keyword only ever come from the local index URL when one is specified; that’s an essential constraint we need to be able to impose so that our build system isn’t immediately sensitive to downtime or changes in PyPI. There are various issues/PRs about this in Buildout (e.g. #238), but even if those are fixed it’ll almost certainly only be in Buildout v2, and upgrading to that is its own kettle of fish for other reasons. All this is a serious problem for us because newer versions of many of our vital dependencies (Twisted and testtools, to name but two) use setup_requires to pull in pbr, and so we’ve been stuck on old versions for some time; this is part of why Launchpad doesn’t yet support newer SSH key types, for instance. This situation obviously isn’t sustainable.

To deal with this, I’ve been working for some time on switching to virtualenv and pip. This is harder than you might think: Launchpad is a long-lived and complicated project, and it had quite a number of explicit and implicit dependencies on Buildout’s configuration and behaviour. Upgrading our infrastructure from Ubuntu 12.04 to 16.04 has helped a lot (12.04’s baseline virtualenv and pip have some deficiencies that would have required a more complicated bootstrapping procedure). I’ve dealt with most of these: for example, I had to reorganise a lot of our helper scripts (1, 2, 3), but there are still a few more things to go.

One remaining problem was that our Buildout configuration relied on building several different environments with different Python paths for various things. While this would technically be possible by way of building multiple virtualenvs, this would inflate our build time even further (we’re already going to have to cope with some slowdown as a result of using virtualenv, because the build system now has to do a lot more than constructing a glorified link farm to a bunch of cached eggs), and it seems like unnecessary complexity. The obvious thing to do seemed to be to collapse these into a single environment, since there was no obvious reason why it should actually matter if txpkgupload and txlongpoll were carefully kept off the path when running most of Launchpad: so I did that.

Then our build system got very sad.

Hmm, I thought. To keep our test times somewhat manageable, we run them in parallel across 20 containers, and we randomise the order in which they run to try to shake out test isolation bugs. It’s not completely unknown for there to be some oddities resulting from that. So I ran it again. Nope, but slightly differently sad this time. Furthermore, I couldn’t reproduce these failures locally no matter how hard I tried. Oh dear. This was obviously not going to be a good day.

In fact I spent a while on various different guesswork-based approaches. I found bug 571334 in Ampoule, an AMP-based process pool implementation that we use for some job runners, and proposed a fix for that, but cherry-picking that fix into Launchpad didn’t help matters. I tried backing out subsets of my changes and determined that if both txlongpoll and txpkgupload were absent from the Python module path in the context of the tests in question then everything was fine. I tried running strace locally and staring at the output for some time in the hope of enlightenment: that reminded me that the two packages in question install modules under twisted.plugins, which did at least establish a reason they might affect the environment that was more plausible than magic, but nothing much more specific than that.

On Friday I was fiddling about with this again and trying to insert some more debugging when I noticed some interesting behaviour around plugin caching. If I caused the txpkgupload plugin to raise an exception when loaded, the Twisted plugin system would remove its dropin.cache (because it was stale) and not create a new one (because there was now no content to put in it). After that, running the relevant tests would fail as I’d seen in our buildbot. Aha! This meant that I could also reproduce it by doing an even cleaner build than I’d previously tried to do, by removing the cached txpkgupload and txlongpoll eggs and allowing the build system to recreate them. When they were recreated, they didn’t contain dropin.cache, instead allowing that to be created on first use.

Based on this clue I was able to get to the answer relatively quickly. Ampoule has a specialised bootstrapping sequence for its worker processes that starts by doing this:

from twisted.application import reactors
reactors.installReactor(reactor)

Now, twisted.application.reactors.installReactor calls twisted.plugin.getPlugins, so the very start of this bootstrapping sequence is going to involve loading all plugins found on the module path (I assume it’s possible to write a plugin that adds an alternative reactor implementation). If dropin.cache is up to date, then it will just get the information it needs from that; but if it isn’t, it will go ahead and import the plugin. If the plugin happens (as Twisted code often does) to run from twisted.internet import reactor at some point while being imported, then that will install the platform’s default reactor, and then twisted.application.reactors.installReactor will raise ReactorAlreadyInstalledError. Since Ampoule turns this into an info-level log message for some reason, and the tests in question only passed through error-level messages or higher, this meant that all we could see was that a worker process had exited non-zero but not why.

The Twisted documentation recommends generating the plugin cache at build time for other reasons, but we weren’t doing that. Fixing that makes everything work again.
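For reference, regenerating the cache amounts to enumerating the plugins once at build time, which is roughly what the Twisted documentation suggests; a minimal sketch (run it with the virtualenv's python so the right module path is scanned):

# importing and listing the plugins once causes Twisted to (re)write dropin.cache
python -c 'from twisted.plugin import IPlugin, getPlugins; list(getPlugins(IPlugin))'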

There are still a few more things needed to get us onto pip, but we’re now pretty close. After that we can finally start bringing our dependencies up to date.

Simos Xenitellis: How to use Ubuntu and LXD on Alibaba Cloud

Tue, 26/09/2017 - 3:38pm

Alibaba Cloud is similar to Amazon Web Services, offering a comparable set of cloud services. It is part of the Alibaba Group, a huge Chinese conglomerate; for example, the retail arm of the Alibaba Group is now bigger than Walmart. Here, we try out the cloud services.

The main reason to select Alibaba Cloud is to get a server running inside China. They also have several data centers outside China, but inside China it is mostly Alibaba Cloud. To get a server running inside mainland China, though, you need to go through a registration process where you submit photos of your passport. We do not have time for that, so we select the closest data center to China: Hong Kong.

Creating an account on Alibaba Cloud

Click to create an account on Alibaba Cloud (update: no referral link). You get $300 of credit to use within two months, and up to $50 of that credit can go towards launching virtual private servers. Create your account now, before continuing with the rest of this section.

When creating the account, you can verify either your email address or your phone number. Let’s do the email verification.

Let’s check our mail. Where is that email from Alibaba Cloud? Nothing arrived!?!

The usability disaster is almost evident. When you get to this page about the verification, the text says We need to verify your email. Please input the number you receive. But Alibaba Cloud has not already sent that email to us; we need to first click on Send to get it to send the email. The text should instead have said something like To use email verification, click Send below, then input the code you receive.

You can pay Alibaba Cloud using either a bank card or PayPal. Let’s try PayPal! Actually, to make use of the $300 credit, it has to be a bank card instead.

We have added a bank card. The card has to go through a verification step: Alibaba Cloud makes a small debit (to be refunded later), and you input either the transaction amount or the transaction code (see screenshot) in order to verify that you do have access to your bank card.

After a couple of days, you get worried because there is no transaction with the description INTL*?????.ALIYUN.COM in your online banking. What went wrong? And what is this weird transaction with a different description on my bank statement?

Description: INTL*175 LUXEM LU ,44

Debit amount: 0.37€

What is LUXEM, a municipality in Germany, doing on my bank statement? Let’s hope that the processor for Alibaba in Europe is LUXEM, not ALIYUN.

Let’s try as transaction code the number 175. Did not work. Four more tries remaining.

Let’s try the transaction amount, 0.37€. Of course, it did not work. It expects USD, not euros! Three tries remaining.

Let’s google a bit, Add a payment method documentation on Alibaba Cloud talks only about dollars. A forum post about non-dollar currencies says:

I did not get an authorization charge, therefore there is no X.

Let’s do something really crazy:

We type 0.44 as the transaction amount. IT WORKED!

In retrospect, there is a reference to ',44' in the description; who would have thought that this undocumented detail refers to the dollar amount.

After a week, the micro transaction of 0.37€ was not reimbursed. What’s more, I was also charged with a 2.5€ commission which I am not getting back either.

We are now ready to use the $300 Free Credit!

Creating a server on Alibaba Cloud

When trying to create a server, you may encounter this website, with a hostname YUNDUN.console.aliyun.com. If you get that, you are in the wrong place. You cannot add your SSH key here, nor do you create a server.

Instead, it should say ECS, Elastic Compute Service.

Here is the full menu for ECS,

Under Networks & Security, there is Key Pairs. Let’s add there the SSH public key, not the whole key pair.

First of all, we need to select the appropriate data center. Ok, we change to Hong Kong which is listed in the middle.

But how do we add our own SSH key? There is only an option to Create Key Pair!?! Well, let’s create a pair.

Ah, okay. Although the page is called Create Key Pair, we can actually Import an Existing Key Pair.

Now, click back to Elastic Computer S…/Overview, which shows each data center.

If we were to try to create a server in Mainland China, we get

In that case, we would need to send first a photo of our passport or our driver’s license.

Let’s go back, and select Hong Kong.

We are ready to configure our server.

There is an option of either a Starter Package or an Advanced Purchase. The Starter Package is really cool: you can get a server for only $4.50. But the fine print for the $300 credit says that you cannot use the Starter Package here.

So, Advanced Purchase it will be.

There are two pricing models, Subscription and Pay As You Go. Subscription means that you pay monthly, Pay As You Go means that you pay hourly. We go for Subscription.

We select the 1-core, 1GB instance, and we can see the price at $12.29. We also pay separately for the Internet traffic. The cost is shown on an overlay; we still have more options to select before we create the server.

We change the default Security Group to the one shown above. We want our server to be accessible from outside on ports 80 and 443. Port 22 is added by default, along with port 3389 (Remote Desktop on Windows).

We select Ubuntu 16.04. The order of the operating systems is a bit weird; ideally, it should reflect their popularity.

There is an option for Server Guard. Let’s try it since it is free. (It requires installing a closed-source package on our Linux system; in the end I did not try it.)

The Ultra Cloud Disk is a network share and is included in the price shown earlier. The other option would be to select an SSD. It is nice that we can add up to 16 disks to our server.

We are ready to place the order. It correctly shows $0 and mentions the $50 credit. We select not to auto renew.

Now we pay the $0.

And that’s how we start a server. We have received an email with the IP address but can also find the public IP address from the ECS settings.

Let’s have a look at the IP block for this IP address.

ffs.

How to set up LXD on an Alibaba server

First, we SSH to the server. The command looks like ssh root@_public_ip_address_

It looks like real Ubuntu, with real Ubuntu Linux kernel. Let’s update.

root@iZj6c66d14k19wi7139z9eZ:~# apt update
Get:1 http://mirrors.cloud.aliyuncs.com/ubuntu xenial InRelease [247 kB]
Hit:2 http://mirrors.aliyun.com/ubuntu xenial InRelease
...
Get:45 http://mirrors.aliyun.com/ubuntu xenial-security/universe i386 Packages [147 kB]
Get:46 http://mirrors.aliyun.com/ubuntu xenial-security/universe Translation-en [89.8 kB]
Fetched 40.8 MB in 24s (1682 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
105 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@iZj6c66d14k19wi7139z9eZ:~#

We upgraded (apt upgrade) and there was a kernel update. We restarted (shutdown -r now) and the newly updated Ubuntu has the updated kernel. Nice!

Let’s check /proc/cpuinfo,

root@iZj6c66d14k19wi7139z9eZ:~# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 63
model name      : Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
stepping        : 2
microcode       : 0x1
cpu MHz         : 2494.224
cache size      : 30720 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt
bugs            :
bogomips        : 4988.44
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:
root@iZj6c66d14k19wi7139z9eZ:/proc#

How much free space from the 40GB disk?

root@iZj6c66d14k19wi7139z9eZ:~# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        40G  2,2G   36G   6% /
root@iZj6c66d14k19wi7139z9eZ:~#

Let’s add a non-root user.

root@iZj6c66d14k19wi7139z9eZ:~# adduser myusername
Adding user `myusername' ...
Adding new group `myusername' (1000) ...
Adding new user `myusername' (1000) with group `myusername' ...
Creating home directory `/home/myusername' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for myusername
Enter the new value, or press ENTER for the default
        Full Name []:
        Room Number []:
        Work Phone []:
        Home Phone []:
        Other []:
Is the information correct? [Y/n]
root@iZj6c66d14k19wi7139z9eZ:~#

Is LXD already installed?

root@iZj6c66d14k19wi7139z9eZ:~# apt policy lxd
lxd:
  Installed: (none)
  Candidate: 2.0.10-0ubuntu1~16.04.2
  Version table:
     2.0.10-0ubuntu1~16.04.2 500
        500 http://mirrors.cloud.aliyuncs.com/ubuntu xenial-updates/main amd64 Packages
        500 http://mirrors.aliyun.com/ubuntu xenial-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     2.0.2-0ubuntu1~16.04.1 500
        500 http://mirrors.cloud.aliyuncs.com/ubuntu xenial-security/main amd64 Packages
        500 http://mirrors.aliyun.com/ubuntu xenial-security/main amd64 Packages
     2.0.0-0ubuntu4 500
        500 http://mirrors.cloud.aliyuncs.com/ubuntu xenial/main amd64 Packages
        500 http://mirrors.aliyun.com/ubuntu xenial/main amd64 Packages
root@iZj6c66d14k19wi7139z9eZ:~#

Let’s install LXD.

root@iZj6c66d14k19wi7139z9eZ:~# apt install lxd

Now, we can add our user account myusername to the groups sudo, lxd.

root@iZj6c66d14k19wi7139z9eZ:~# usermod -a -G lxd,sudo myusername
root@iZj6c66d14k19wi7139z9eZ:~#

Let’s copy the SSH public key from root to the new non-root account.

root@iZj6c66d14k19wi7139z9eZ:~# cp -R /root/.ssh ~myusername/
root@iZj6c66d14k19wi7139z9eZ:~# chown -R myusername:myusername ~myusername/.ssh/
root@iZj6c66d14k19wi7139z9eZ:~#

Now, log out and log in as the new non-root account.

$ ssh myusername@IP.IP.IP.IP
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-96-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Welcome to Alibaba Cloud Elastic Compute Service !

myusername@iZj6c66d14k19wi7139z9eZ:~$

We are going to install the ZFS utilities so that LXD can use ZFS as a storage backend.

myusername@iZj6c66d14k19wi7139z9eZ:~$ sudo apt install zfsutils-linux
...
myusername@iZj6c66d14k19wi7139z9eZ:~$

Now, we can configure LXD. From before, the server had about 35GB free. We are allocating 20GB of that for LXD.

myusername@iZj6c66d14k19wi7139z9eZ:~$ sudo lxd init
sudo: unable to resolve host iZj6c66d14k19wi7139z9eZ
[sudo] password for myusername:  ********
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs
Create a new ZFS pool (yes/no) [default=yes]? yes
Name of the new ZFS pool [default=lxd]: lxd
Would you like to use an existing block device (yes/no) [default=no]? no
Size in GB of the new loop device (1GB minimum) [default=15]: 20
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? yes
Warning: Stopping lxd.service, but it can still be activated by:
lxd.socket

LXD has been successfully configured.
myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc list
Generating a client certificate. This may take a minute…
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
myusername@iZj6c66d14k19wi7139z9eZ:~$

Okay, we can now create our first LXD container. We are creating just a web server.

myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc launch ubuntu:16.04 web
Creating web
Retrieving image: rootfs: 100% (6.70MB/s)
Starting web
myusername@iZj6c66d14k19wi7139z9eZ:~$

Let’s see the container,

myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc list
+------+---------+---------------------+------+------------+-----------+
| NAME | STATE   | IPV4                | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+---------------------+------+------------+-----------+
| web  | RUNNING | 10.35.87.141 (eth0) |      | PERSISTENT | 0         |
+------+---------+---------------------+------+------------+-----------+
myusername@iZj6c66d14k19wi7139z9eZ:~$

Nice. We get into the container and install a web server.

myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc exec web -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@web:~$

In the web container we executed the command sudo --login --user ubuntu. The container has a default non-root account, ubuntu.

ubuntu@web:~$ sudo apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Hit:2 http://archive.ubuntu.com/ubuntu xenial InRelease
...
Reading state information... Done
3 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@web:~$ sudo apt install nginx
Reading package lists... Done
...
Processing triggers for ufw (0.35-0ubuntu2) ...
ubuntu@web:~$ sudo vi /var/www/html/index.nginx-debian.html
ubuntu@web:~$ logout
myusername@iZj6c66d14k19wi7139z9eZ:~$ curl 10.35.87.141
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx running in an LXD container on Alibaba Cloud!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx running in an LXD container on Alibaba Cloud!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
myusername@iZj6c66d14k19wi7139z9eZ:~$

Obviously, the web server in the container is not accessible from the Internet yet. We need to add iptables rules to forward incoming connections to the container.

Alibaba Cloud gives two IP addresses per server: one public IP address and one private IP address (172.[16-31].*.*). The eth0 interface of the server carries the private IP address. This information is important for the iptables rule below.
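A quick way to double-check which address eth0 actually carries before writing the rule (a sketch):

# show the IPv4 address on eth0; on Alibaba Cloud this is the private 172.x address
ip -4 addr show eth0
# the DNAT rule below must match this (private) address, because that is the
# destination on incoming packets by the time they reach the server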

myusername@iZj6c66d14k19wi7139z9eZ:~$ PORT=80 PUBLIC_IP=my172.IPAddress CONTAINER_IP=10.35.87.141 sudo -E bash -c 'iptables -t nat -I PREROUTING -i eth0 -p TCP -d $PUBLIC_IP --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment "forward to the Nginx container"' myusername@iZj6c66d14k19wi7139z9eZ:~$

Let’s load up our site using the public IP address from our own computer:

And that’s it!

Conclusion

Alibaba Cloud is yet another provider for cloud services. They are big in China, actually the biggest in China. They are trying to expand to the rest of the world. There are several teething problems, probably arising from the fact that the main website is in Mandarin and there is no infrastructure for immediate translation to English.

On HN there was a sort of relaunch a few months ago. It appears they are interested in attracting international users. What they need is people who attend immediately to issues as they are discovered.

If you want to learn more about LXD, see https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/


Update #1

After a day of running a VPS on Alibaba Cloud, I received this email.

From: Alibaba Cloud
Subject: 【Immediate Attention Needed】Alibaba Cloud Fraud Prevention

We have detected a security risk with the card you are using to make purchases. In order to protect your account, please provide your account ID and the following information within one working day via your registered Alibaba Cloud email to compliance_support@aliyun.com for further investigation. If you are using a credit card as your payment method, please provide the following information directly.

Please provide clear copies of:
1. Any ONE of the following three forms of government-issued photo identification for the credit card holder or payment account holder of this Alibaba Cloud account: (i) National identification card; (ii) Passport; (iii) Driver's License.
2. A clear copy of the front side of your credit card in connection with this Alibaba Account; (Note: For security reasons, we advise you to conceal the middle digits of your card number. Please make sure that the card holder's name, card issuing bank and the last four digits of the card number are clearly visible).
3. A clear copy of your card's bank statement.

We will process your case within 3 working days of receiving the information listed above.

NOTE: Please do not provide information in this ticket. All the information needed should be sent to this email compliance_support@aliyun.com. If you fail to provide all the above information within one working day, your instances will be shut down.

Best regards,
Alibaba Cloud Customer Service Center

What this means, is that update #2 has to happen now.


Update #2

Newer versions of LXD have a utility called lxd-benchmark. This utility spawns, starts and stops containers, and can be used to get an idea of how efficient a server is. I suppose it is primarily used to figure out whether there is a regression in the LXD code. Let's see it in action here anyway; the clock is ticking.

The new LXD is in a PPA at https://launchpad.net/~ubuntu-lxc/+archive/ubuntu/lxd-stable. Let's install it on Alibaba Cloud.

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable
sudo apt update
sudo apt upgrade              # Now LXD will be upgraded.
sudo apt install lxd-tools    # Now lxd-benchmark will be installed.

Let’s see the options for lxd-benchmark.

Usage: lxd-benchmark spawn [--count=COUNT] [--image=IMAGE] [--privileged=BOOL] [--start=BOOL] [--freeze=BOOL] [--parallel=COUNT]
       lxd-benchmark start [--parallel=COUNT]
       lxd-benchmark stop [--parallel=COUNT]
       lxd-benchmark delete [--parallel=COUNT]

--count (= 100)
    Number of containers to create
--freeze (= false)
    Freeze the container right after start
--image (= "ubuntu:")
    Image to use for the test
--parallel (= -1)
    Number of threads to use
--privileged (= false)
    Use privileged containers
--report-file (= "")
    A CSV file to write test file to. If the file is present, it will be appended to.
--report-label (= "")
    A label for the report entry. By default, the action is used.
--start (= true)
    Start the container after creation

First, we need to spawn new containers that we can later start, stop or delete. Ideally, I would expect the terminology to be launch instead of spawn, to keep in sync with the existing container management commands.

Second, there are defaults for each command, as shown above. There is no indication yet as to how much RAM you need to spawn the default 100 containers; obviously it would be more than the 1GB RAM we have on this server. Regarding disk space, that should be fine because of copy-on-write with ZFS: each newly created LXD container starts as a clone of the image, so it initially takes up almost no additional space. Perhaps after a day, when unattended-upgrades kicks in, each container would use up some space for any security updates that get automatically applied. See the sketch below for how to observe this.
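The copy-on-write behaviour can be observed directly with the ZFS tools; a sketch (it assumes the pool created by lxd init above is named lxd, and output layout varies by version):

# USED shows space unique to each container dataset; REFER shows data shared with the image
sudo zfs list -r -o name,used,refer lxd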

Let’s try it out with 3 containers. We have stopped and deleted the original web container that we created earlier in this tutorial (lxc stop web; lxc delete web).

$ lxd-benchmark spawn --count 3
Test environment:
  Server backend: lxd
  Server version: 2.18
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.4.0-96-generic
  Storage backend: zfs
  Storage version: 0.6.5.6-0ubuntu16
  Container backend: lxc
  Container version: 2.1.0

Test variables:
  Container count: 3
  Container mode: unprivileged
  Startup mode: normal startup
  Image: ubuntu:
  Batches: 3
  Batch size: 1
  Remainder: 0

[Sep 27 17:31:41.074] Importing image into local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 17:32:12.825] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 17:32:12.825] Batch processing start
[Sep 27 17:32:37.614] Processed 1 containers in 24.790s (0.040/s)
[Sep 27 17:32:42.611] Processed 2 containers in 29.786s (0.067/s)
[Sep 27 17:32:49.110] Batch processing completed in 36.285s

$ lxc list --columns ns4tS
+-------------+---------+---------------------+------------+-----------+
| NAME        | STATE   | IPV4                | TYPE       | SNAPSHOTS |
+-------------+---------+---------------------+------------+-----------+
| benchmark-1 | RUNNING | 10.35.87.252 (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
| benchmark-2 | RUNNING | 10.35.87.115 (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
| benchmark-3 | RUNNING | 10.35.87.72 (eth0)  | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
| web         | RUNNING | 10.35.87.141 (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
$

We created three extra containers, named benchmark-?, and got them started. They were launched in three batches, which means that one was started after another, not in parallel.

The total time on this server, with the zfs storage backend, was 36.2 seconds. It is not documented what the numbers in parentheses mean in Processed 1 containers in 18.770s (0.053/s); they appear to be the cumulative rate of containers processed per second (1/18.770 ≈ 0.053).

The total time on this server, when the storage backend was dir, was 68.6 seconds instead.

Let’s stop them!

$ lxd-benchmark stop
Test environment:
  Server backend: lxd
  Server version: 2.18
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.4.0-96-generic
  Storage backend: zfs
  Storage version: 0.6.5.6-0ubuntu16
  Container backend: lxc
  Container version: 2.1.0

[Sep 27 18:06:08.822] Stopping 3 containers
[Sep 27 18:06:08.822] Batch processing start
[Sep 27 18:06:09.680] Processed 1 containers in 0.858s (1.165/s)
[Sep 27 18:06:10.543] Processed 2 containers in 1.722s (1.162/s)
[Sep 27 18:06:11.406] Batch processing completed in 2.584s
$

With dir, it was around 2.4 seconds.

And then delete them!

$ lxd-benchmark delete
Test environment:
  Server backend: lxd
  Server version: 2.18
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.4.0-96-generic
  Storage backend: zfs
  Storage version: 0.6.5.6-0ubuntu16
  Container backend: lxc
  Container version: 2.1.0

[Sep 27 18:07:12.020] Deleting 3 containers
[Sep 27 18:07:12.020] Batch processing start
[Sep 27 18:07:12.130] Processed 1 containers in 0.110s (9.116/s)
[Sep 27 18:07:12.224] Processed 2 containers in 0.204s (9.814/s)
[Sep 27 18:07:12.317] Batch processing completed in 0.297s
$

With dir, it was 2.5 seconds.

Let’s create three containers in parallel.

$ lxd-benchmark spawn --count=3 --parallel=3
Test environment:
  Server backend: lxd
  Server version: 2.18
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.4.0-96-generic
  Storage backend: zfs
  Storage version: 0.6.5.6-0ubuntu16
  Container backend: lxc
  Container version: 2.1.0

Test variables:
  Container count: 3
  Container mode: unprivileged
  Startup mode: normal startup
  Image: ubuntu:
  Batches: 1
  Batch size: 3
  Remainder: 0

[Sep 27 18:11:01.570] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 18:11:01.570] Batch processing start
[Sep 27 18:11:11.574] Processed 3 containers in 10.004s (0.300/s)
[Sep 27 18:11:11.574] Batch processing completed in 10.004s
$

With dir, it was 58.7 seconds.

Let’s push it further and try to hit some memory limits! First, we delete them all, then launch 5 in parallel.

$ lxd-benchmark spawn --count=5 --parallel=5
Test environment:
  Server backend: lxd
  Server version: 2.18
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.4.0-96-generic
  Storage backend: zfs
  Storage version: 0.6.5.6-0ubuntu16
  Container backend: lxc
  Container version: 2.1.0

Test variables:
  Container count: 5
  Container mode: unprivileged
  Startup mode: normal startup
  Image: ubuntu:
  Batches: 1
  Batch size: 5
  Remainder: 0

[Sep 27 18:13:11.171] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 18:13:11.172] Batch processing start
[Sep 27 18:13:33.461] Processed 5 containers in 22.290s (0.224/s)
[Sep 27 18:13:33.461] Batch processing completed in 22.290s
$

So, 5 containers can start in 1GB of RAM, in just 22 seconds.

We also tried the same with the dir storage backend, and got

[Sep 27 17:24:16.409] Batch processing start
[Sep 27 17:24:54.508] Failed to spawn container 'benchmark-5': Unpack failed, Failed to run: unsquashfs -f -d /var/lib/lxd/storage-pools/default/containers/benchmark-5/rootfs -n -da 99 -fr 99 -p 1 /var/lib/lxd/images/03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee.rootfs: .
[Sep 27 17:25:11.129] Failed to spawn container 'benchmark-3': Unpack failed, Failed to run: unsquashfs -f -d /var/lib/lxd/storage-pools/default/containers/benchmark-3/rootfs -n -da 99 -fr 99 -p 1 /var/lib/lxd/images/03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee.rootfs: .
[Sep 27 17:25:35.906] Processed 5 containers in 79.496s (0.063/s)
[Sep 27 17:25:35.906] Batch processing completed in 79.496s

Out of the five containers, it managed to create three (numbers 1, 3 and 4). The reason is that unsquashfs has to run to unpack the image, and that process uses a lot of memory. When using zfs, the same step apparently does not need as much memory.
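A common workaround on a small-memory server like this one, assuming spare disk, would be to add a swap file so unsquashfs has some headroom; a hedged sketch (not something we tested in this post):

# create and enable a 2GB swap file
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# to make it permanent across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab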

Let’s delete the five containers (storage backend: zfs):

[Sep 27 18:18:37.432] Batch processing completed in 5.006s

Let’s close the post with

$ lxd-benchmark spawn --count=10 --parallel=5
Test environment:
  Server backend: lxd
  Server version: 2.18
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.4.0-96-generic
  Storage backend: zfs
  Storage version: 0.6.5.6-0ubuntu16
  Container backend: lxc
  Container version: 2.1.0

Test variables:
  Container count: 10
  Container mode: unprivileged
  Startup mode: normal startup
  Image: ubuntu:
  Batches: 2
  Batch size: 5
  Remainder: 0

[Sep 27 18:19:44.706] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 18:19:44.706] Batch processing start
[Sep 27 18:20:07.705] Processed 5 containers in 22.998s (0.217/s)
[Sep 27 18:20:57.114] Processed 10 containers in 72.408s (0.138/s)
[Sep 27 18:20:57.114] Batch processing completed in 72.408s

We launched 10 containers in two batches of five. The lxd-benchmark command completed successfully, in just 72 seconds. However, after the command completed, each container would start up, get an IP and begin working, and we hit the memory limit while the second batch of five containers were starting up. The monitor on the Alibaba Cloud management console shows 100% CPU utilization, and it is not possible to access the server over SSH. Let's delete the server from the management console and wind down this trial of Alibaba Cloud.

lxd-benchmark is quite useful and can be used to get a practical understanding of how many containers can fit on a server, and much more.
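For repeated comparisons like the zfs-versus-dir timings above, the CSV reporting flags from the usage text look handy; a sketch (the file and label names are ours, and whether every subcommand accepts the flags is an assumption):

lxd-benchmark spawn --count=5 --parallel=5 --report-file=bench.csv --report-label=zfs-5x5
lxd-benchmark delete --report-file=bench.csv --report-label=zfs-delete
cat bench.csv    # results accumulate here, one labelled row per run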

Update #3

I just restarted the server from the management console and connected using SSH.

Here are the ten containers from Update #2,

$ lxc list --columns ns4
+--------------+---------+------+
| NAME         | STATE   | IPV4 |
+--------------+---------+------+
| benchmark-01 | STOPPED |      |
+--------------+---------+------+
| benchmark-02 | STOPPED |      |
+--------------+---------+------+
| benchmark-03 | STOPPED |      |
+--------------+---------+------+
| benchmark-04 | STOPPED |      |
+--------------+---------+------+
| benchmark-05 | STOPPED |      |
+--------------+---------+------+
| benchmark-06 | STOPPED |      |
+--------------+---------+------+
| benchmark-07 | STOPPED |      |
+--------------+---------+------+
| benchmark-08 | STOPPED |      |
+--------------+---------+------+
| benchmark-09 | STOPPED |      |
+--------------+---------+------+
| benchmark-10 | STOPPED |      |
+--------------+---------+------+

The containers are in the stopped state. That is, they do not consume memory. How much free memory is there?

$ free
              total        used        free      shared  buff/cache   available
Mem:        1016020       56192      791752        2928      168076      805428
Swap:             0           0           0

About 792MB free memory.

There is not enough memory to get them all running at the same time. It is good that they end up in the stopped state when you reboot, so that you can fix things.
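With roughly 792MB free, one way to bring up only a subset, pausing between starts, is a small loop; a sketch (container names as listed above):

for c in benchmark-01 benchmark-02 benchmark-03; do
    lxc start "$c"
    sleep 10    # let each container settle before starting the next
done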

Sebastian Kügler: Plasma Mobile and Convergence

Tue, 26/09/2017 - 1:12pm

Convergence, or the ability to serve different form factors from the same code base, is an often discussed concept. Convergence is at the heart of Plasma‘s design philosophy, but what does this actually mean for how apps are developed? What’s in it for the user? Let’s have a look!

Plasma — same code, different devices
First, let’s have a look at different angles of “Convergence”. It can actually mean different things, and there is overlap between these. Depending on who you ask, convergence could mean any of the following:

  • Being able to plug a monitor, keyboard and mouse into a smartphone and use it as a full-fledged desktop replacement
  • Develop an application that works on a phone as well as on a desktop
  • Create different device user interfaces from the same code base

Convergence, in the broadest sense, has been one of the design goals of Plasma when we started creating it. When we work on Plasma, we ultimately expect components to run on a wide variety of target devices, we refer to that concept as the device spectrum.

Alex, one of Plasma’s designers, has created a visual concept for a convergent user interface that gives an impression of what a fully convergent Plasma could look like to the user:

Input Methods and Screen Characteristics

Technically, there are a few aspects of convergence, the most important being input methods (for example mouse, keyboard, touchscreens, or combinations of those) and screen size (physical dimensions, portrait vs. landscape layout, and pixel density).

Touchscreen support is one aspect of running KDE software on a mobile device or within Plasma Mobile. Touchscreens are not specific to phones any more, however, so making an app or a Plasma component ready for touchscreen usage also benefits people who run Plasma on their convertible laptops, for example. Another big factor is that the app needs to work well on the screen of a smartphone; this means support for high-dpi screens as well as a layout that presents the necessary controls in a way that is functional, attractive and user-friendly. With the Kirigami toolkit, which builds on top of QtQuick, we develop apps that work well on both target devices. From a more general point of view, KDE has always developed apps in a cross-platform way, so portability to other platforms is very much at the heart of our codebase.

The Kirigami toolkit, which offers a set of high-level application flow controls for QtQuick applications, achieves exactly that: it allows building responsive apps that adapt to screen characteristics and input methods.

(As an aside, there’s the case for Kirigami also supporting Android. Developing an app specifically for usage in Plasma may be easier, but it also limits its reach. Imagine an app running fine on your laptop, but also on your smartphone, be it Android or driven by Plasma Mobile (in the future). That would totally rock, and it would mean a target audience in the billions, not millions. Conversely, providing the technology to create such apps decreases the relative investment compared to the target audience, making technologies such as QtQuick and Kirigami an excellent choice for developers who want to maximize their target audience.)

Plasma Mobile vs. Plasma Desktop

Plasma Mobile is being developed in tandem with the popular Plasma desktop; in fact, it shares more than 90% of the code with it. This means that work done on either of the two, mobile or desktop, often benefits the other, and that there’s a large degree of compatibility between the two. The result is a system that feels the same across different devices, but makes use of the special capabilities of a given device and supports different ways of using the software. On the development side, this means huge gains in terms of productivity and quality: a wider set of usage scenarios and having the code running on more machines means that it gets more real-world testing and bugs get shaken out quicker.

Who cares, anyway?

Is convergence something that users want? I think so. It involves a learning curve for users, and I think it takes advancements in technology to bring this to market: you need rather powerful hardware, the right connectors, and the right hardware components, so it’s not an easy end goal. The path to convergence already bears huge benefits, though, as it means more efficient development, more consistency across different form factors and higher quality code.

Whether or not users care is only relevant up to a point. Arguably, the biggest benefit of convergence lies in the efficiency of the development process, especially when multiple devices are involved. It doesn't actually matter all that much whether users are going to plug a mouse and keyboard into a phone and use it as a desktop device. Already today, users expect touchscreens to just work, even on laptops; users already expect a convertible to remain usable when the keyboard is flipped away or unplugged; and users already expect to plug a 4K display into their 1024×768-resolution laptop without the UI becoming either unreadable or comically large.

In short: There really is no way around a large degree of convergence in Plasma (and similar products).

Kubuntu General News: Kubuntu Artful Aardvark (17.10) Beta 2 testing

Mar, 26/09/2017 - 5:17pd

Artful Aardvark (17.10) Beta 2 images are now available for testing.

The Kubuntu team will be releasing 17.10 in October. The final Beta 2 milestone will be available on September 28.

This is the first spin in preparation for the Beta 2 pre-release. Kubuntu Beta pre-releases are NOT recommended for:

  • Regular users who are not aware of pre-release issues
  • Anyone who needs a stable system
  • Anyone uncomfortable running a possibly frequently broken system
  • Anyone in a production environment with data or workflows that need to be reliable

Kubuntu Beta pre-releases are recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Kubuntu, KDE, and Qt developers

Getting Kubuntu 17.10 Beta 2:

To upgrade to Kubuntu 17.10 pre-releases from 17.04, run sudo do-release-upgrade -d from a command line.

Download a Bootable image and put it onto a DVD or USB Drive via the download link at http://iso.qa.ubuntu.com/qatracker/milestones/382/builds. This is also the direct link to report your findings and any bug reports you file.

See our release notes: https://wiki.ubuntu.com/ArtfulAardvark/Beta2/Kubuntu

Please report your results on the Release tracker.

Didier Roche: Ubuntu GNOME Shell in Artful: Day 14

Hën, 25/09/2017 - 11:35md

The Ubuntu desktop team and a lot of other people from the Ubuntu community are gathering this week in New York for the Ubuntu Rally. It's time to apply the final touches and bug fixes to Ubuntu artful, which will soon become Ubuntu 17.10. As you probably know if you follow this blog series, it will feature GNOME Shell by default, with slight modifications to ease the transition and adapt the new user experience to our audience. For more background on our current transition to GNOME Shell in artful, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 14: Badges and progress bar on Ubuntu Dock

One of the latest things we wanted to work on, as highlighted in our previous posts, is the notification experience for new emails or downloads in the Shell. We already ship the KStatusNotifier extension for application indicators, but we needed a way to signal the user (even if you are not looking at the screen when it happens) about new emails, IMs, or download/copy progress.

Andrea stepped up and worked with Dash to Dock upstream to implement the Unity API for this. Working with them, as usual, was a pleasure, and we got the green flag that it's going to be merged to master, possibly with some tweaks, which will make this work available to every Dash to Dock user! It means that after this update, Thunderbird handily shows the number of unread emails in your inbox, thanks to thunderbird-gnome-support, which we seeded back with Sébastien.

Similarly, we now have progress bar support for Nautilus, Firefox downloads, and every application using that API to report progress on transactional actions.

And with that, we are all done with our changes to adapt GNOME Shell to our targeted audience! Meanwhile, Marco is working on HiDPI (and SIM cards…) to deliver a fantastic fractional scaling experience.

As usual, if you are eager to experiment with these changes before they migrate to the artful release pocket, you can head over to our official Ubuntu desktop team transitions PPA to get a taste of what's cooking!

Let's see how many bugs we can squash. We will of course update you on the slight readjustments we plan to make during this week at the Ubuntu Rally and for the release. First up is the incoming beta, which will enable you to test all of this.

Julian Andres Klode: APT 1.5 is out

Dje, 24/09/2017 - 9:32md

APT 1.5 is out, almost 3 months after the release of 1.5 alpha 1, and almost six months after the release of 1.4 on April 1st. This release cycle was unusually short, as 1.4 was the release series for both stretch and zesty, and we waited for the latter of those releases before we started 1.5. In related news, 1.4.8 hit stretch-proposed-updates today, and is waiting in the unapproved queue for zesty.

This release series moves https support from apt-transport-https into apt proper, bringing with it support for https:// proxies, and support for auto-detect proxy scripts that return http, https, and socks5h proxies for both http and https.
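For illustration only (not from the release announcement; the proxy host and helper script path are made up, and Acquire::http::Proxy-Auto-Detect is the relevant option as I understand it), the new support can be wired up roughly like this:

$ cat /etc/apt/apt.conf.d/80proxy
// an https:// proxy can now be used even for plain-http sources
Acquire::http::Proxy "https://proxy.example.com:3128";
// a script that prints a proxy URL (http, https, or socks5h) to stdout
Acquire::http::Proxy-Auto-Detect "/usr/local/bin/apt-proxy-detect";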

Unattended updates and upgrades now work better: the dependency on network-online was removed, and we introduced a meta wait-online helper with support for NetworkManager, systemd-networkd, and connman that allows us to wait for the network even if we want to run updates directly after a resume (which might or might not have worked before, depending on whether the update ran before or after the network was back up again). This also fixes a boot performance regression for systems with rc.local files:

The rc.local.service unit specified After=network-online.target, and login stuff was After=rc.local.service, and apt-daily.timer was Wants=network-online.target, causing network-online.target to be pulled into the boot and the rc.local.service ordering dependency to take effect, significantly slowing down the boot.

An earlier, less intrusive variant of that fix is in 1.4.8: it just moves the network-online.target Want/After from apt-daily.timer to apt-daily.service, so most boots are uncoupled now. I hope we get the full solution into stretch in a later point release, but we should gather some experience first before discussing this with the release team.

Balint Reczey also provided a patch to increase the timeout before killing the daily upgrade service to 15 minutes, to actually give unattended-upgrades some time to finish an in-progress update. Honestly, I'd have thought the machine had hung and force-rebooted it after 5 seconds already. (This patch is also in 1.4.8.)

We also made sure that unreadable config files no longer cause an error, but only a warning, as the error was sort of a regression from previous releases; and we added documentation for /etc/apt/auth.conf, so people actually know the preferred place for sensitive data like passwords (and can make their sources.list files world-readable again).
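As a rough sketch of what that looks like (the host and credentials here are invented; auth.conf uses a netrc-style format):

$ cat /etc/apt/auth.conf
machine private.example.com
login myuser
password s3kr1t

With the credentials stored there, the matching sources.list entry can reference the repository without embedding the password.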

We also fixed apt-cdrom to support discs without MD5 hashes for Sources (the Files field), and re-enabled support for udev-based detection of cdrom devices, which had been accidentally broken for 4 years: apt was trying to load libudev.so.0 at runtime, but that library had an SONAME change to libudev.so.1 – we now link against it normally.

Furthermore, if certain information in Release files change, like the codename, apt will now request confirmation from the user, avoiding a scenario where a user has stable in their sources.list and accidentally upgrades to the next release when it becomes stable.

Paul Wise contributed patches to allow configuring the apt-daily intervals more easily – apt-daily is invoked twice a day by systemd but has more fine-grained internal timestamp files. You can now specify the intervals in second, minute, hour, and day units, or specify “always” to always run (that is, up to twice a day on systemd, once per day on non-systemd platforms).
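A sketch of what such a configuration might look like (the file name and values are examples, not taken from the patches):

$ cat /etc/apt/apt.conf.d/20periodic
APT::Periodic::Update-Package-Lists "1d";    // refresh package lists at most once a day
APT::Periodic::Unattended-Upgrade "always";  // run on every apt-daily invocation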

Development for the 1.6 series has started, and I intend to upload a first alpha to unstable in about a week, removing the apt-transport-https package and enabling compressed index files by default (saving space, a lot of space, at not much performance cost thanks to lz4). There will also be some small clean-ups in there, but I don't expect any life-changing changes for now.

I think our new approach of uploading development releases directly to unstable instead of parking them in experimental is working out well. Some people are confused why alpha releases appear in unstable, but let me just say one thing: These labels basically just indicate feature-completeness, and not stability. An alpha is just very likely to get a lot more features, a beta is less likely (all the big stuff is in), and the release candidates just fix bugs.

Also, we now have 3 active stable series: the 1.2 LTS series, 1.4 medium LTS, and 1.5. 1.2 receives updates as part of Ubuntu 16.04 (xenial); 1.4 as part of Debian 9.0 (stretch) and Ubuntu 17.04 (zesty); whereas 1.5 will only be supported for 9 months (as part of Ubuntu 17.10). I think the stable release series are working well, although 1.4 is a bit tricky, being shared by stretch and zesty right now (but zesty is history soon, so …).


Filed under: Debian, Ubuntu

Ubuntu Insights: Canonical Distribution of Kubernetes: Dev Summary (Sept 22 2017)

Pre, 22/09/2017 - 10:13md

This article originally appeared on Tim Van Steenburgh’s blog

September 15th concluded our most recent development sprint on the Canonical Distribution of Kubernetes (CDK). Here are some highlights:

Canal Bundle

Our new Canal bundle is published! If you need network policy support in your cluster, try it out:

juju deploy canonical-kubernetes-canal
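Once the deploy kicks off, the usual Juju workflow applies for watching it settle (a generic Juju command, not specific to the Canal bundle):

$ juju status
(wait until all units report active/idle before using the cluster)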

In the future you’ll be able to choose between Flannel and Calico when deploying Kubernetes via conjure-up.

Blogs and Demos

In case you missed them, check out some new blog posts and demos of CDK from members of the CDK engineering team:

RBAC

We added more tests for RBAC and updated CI to start testing an RBAC-enabled cluster. Our remaining task for RBAC is to plan and test the upgrade path for old clusters once we make RBAC on-by-default.

s390x

We built and published an s390x nginx-ingress-controller image and an e2e snap, and started testing an LXD CDK cluster on s390x. Since then we've gotten access to more hardware and are now testing on s390x VMs using the Juju manual provider.

1.8.0

In our current sprint we’ve started testing 1.8.0 in anticipation of the upstream release at the end of this month. We’re also testing with docker 1.13.1, which will soon become the default in CDK.

If you’d like to follow along more closely with CDK development, you can do so in the following places:

Until next time!

Sebastian K&uuml;gler: The Evolution of Plasma Mobile

Pre, 22/09/2017 - 5:19md
Plasma Mobile

Back around 2006, when the Plasma project was started by Aaron Seigo and a group of brave hackers (among them, yours truly), we wanted to create a user interface that is future-proof. We didn't want to create something that would only run on desktop devices (or laptops), but a code-base that grows with us into whatever the future might bring. Mobile devices were already getting more powerful, but they would usually run entirely different software than desktop devices. We wondered why. The Linux kernel served as a wonderful example: Linux runs on a wide range of devices, from supercomputers to embedded systems; you set it up for the target system and it runs largely without code changes. Linux's architecture is in fact convergent. Could we do something similar at the user interface level?

Plasma Netbook

In 2007, Asus introduced the Eee PC, a small, inexpensive laptop. Netbooks proved to be all the rage at that point, so around 2009 we created Plasma Netbook, proving for the first time that we could actually serve different device user interfaces from the same code-base. There was a decent amount of code-sharing, but Plasma Netbook also helped us identify areas in which we wanted to do better.

Plasma Mobile (I)

Come 2010, we got our hands on an N900 by Nokia, running Maemo, a mobile version of Linux. Within a week, during a sprint, we worked on a proof-of-concept mobile interface of Plasma:

Well, Nokia-as-we-knew-it is dead now, and Plasma never materialized on Nokia devices.

Plasma Active

Plasma Active was built as a successor to the early prototypes, and was our first attempt at creating something for end users. Conceived in 2011, the idea was not just to produce a simple Plasma user interface for a tablet device, but also to deliver on a range of novel ideas for interaction with the device, closely related to the semantic desktop: interlinked documents, contacts, and sharing built right into the core; not just a "dumb" platform to run apps on, but a holistic system that allows users to manage their digital life on the fly. While Plasma Active had great promise and a lot of innovative potential, it never materialized for end users, in part due to lack of interest both from the KDE community itself and from people on the outside. This doesn't mean that the work put into it was lost; thanks to a convergent code-base, many improvements made primarily with Plasma Active in mind have improved Plasma for all its users and continue to do so today. In many ways, Active proved valuable as a playground, as a clean slate for exploring where we want to take the technology and how we can improve our development process. It's no surprise that Plasma 5 today is developed in a process very similar to how we approached Plasma Active back then.

Plasma Mobile (II)

Learning from the Plasma Active project, in 2015 we regrouped and started to build a rather simple smartphone user interface, along with a reference software stack that would allow us not only to develop Plasma Mobile further, but also to run on a growing number of devices. Plasma Mobile (II)'s goal wasn't to get the most innovative of interfaces out, but to create a bread-and-butter platform, a base to develop applications on. From a technology point of view, Plasma Mobile is actually very small: it shares approximately 95% of the code with its desktop companion, and widgets, and increasingly applications, are interchangeable between the two.

Plasma Mobile (in any shape or form) has never been this close to actually making it into the hands and pockets of end users. Through a collaboration with Purism, a company bringing privacy and software freedom to end users, we may create the first Plasma phone for end users and have it on the market as soon as January 2019. If you want to support this project, the crowdfunding campaign has just passed the 40% mark, and you can be part of it, either by joining the development crew or by pre-ordering a device and thereby funding the development.

Ubuntu Insights: Ubuntu Desktop Weekly Update: September 22, 2017

Pre, 22/09/2017 - 3:45md

We’re less than a week away from Final Beta! It seems to have come round very quickly this cycle. Next week we’re at the Ubuntu Rally in New York City where we will be putting the finishing touches to the beta. In the meantime, here’s a quick rundown on what happened this week:

GNOME
  • The release of GNOME 3.26 last week meant lots of package updates in 17.10. Thanks Jeremy for leading the charge on this.
  • More work is happening on the progress bars in Dash to Dock.
  • We’re working on a fix for a bug which shows your desktop for a few seconds when resuming from suspend. This affects Unity and GNOME Shell.
  • We've made a few more tweaks to GDM, and you can now see the Ubuntu logo at the bottom of the greeter.
  • New additions to Didier's series of blog posts on the transition to GNOME Shell cover alt-tab behaviour and the transparency settings for Dash to Dock.
  • The new wallpaper and mascot were released.
Snaps

We’ve been working on a Platform Snap for GNOME 3.26 to allow you to run the latest GNOME apps on Xenial as well as making Snaps for the new apps. This should be ready for testing soon and we’d appreciate some feedback.

Some desktop-specific updates to snapd are also going to be rolling out soon; Snaps using the new Desktop interface will automatically get access to host system fonts and font caches.

Updates
  • Chromium 61.0.3163.79 is ready for publication. Chromium beta updated to 62.0.3202.18 and dev updated to 63.0.3213.3 for all series except Trusty.
  • Libreoffice 5.4.1-0ubuntu1 now in Artful.
In The News
  • OMG talks about the changes to the Dock.
  • Dustin Kirkland presents the results of the app survey at UbuCon Paris.

Ubuntu Podcast from the UK LoCo: S10E29 – Adamant Terrible Hammer - Ubuntu Podcast

Enj, 21/09/2017 - 10:10md

This is Le Crossover Ubuntu Mashup Podcast thingy recorded live at UbuCon Europe in Paris, France.

It’s Season Ten Episode Twenty-Nine of the Ubuntu Podcast! Alan Pope, Martin Wimpress, Marius Quabeck, Max Kristen, Rudy and Tiago Carrondo are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Ubuntu Insights: Microsoft and Canonical Increase Velocity with Azure Tailored Kernel

Enj, 21/09/2017 - 6:00md

By Leann Ogasawara, Director of Kernel Engineering

Ubuntu has long been a popular choice for Linux instances on Azure.  Our ongoing partnership with Microsoft has brought forth great results, such as the support of the latest Azure features, Ubuntu underlying SQL Server instances, bash on Windows, Ubuntu containers with Hyper-V Isolation on Windows 10 and Windows Servers, and much more.

Canonical, together with the team at Microsoft Azure, is delighted to announce that as of September 21, 2017, Ubuntu Cloud Images for Ubuntu 16.04 LTS on Azure are enabled with a new Azure tailored Ubuntu kernel by default. The Azure tailored Ubuntu kernel will receive the same level of support and security maintenance as all supported Ubuntu kernels for the duration of the Ubuntu 16.04 LTS support life.

The kernel itself is provided by the linux-azure kernel package. The most notable highlights for this kernel include:

  • InfiniBand and RDMA capability for Azure HPC to deliver optimized performance for compute-intensive workloads on Azure A8, A9, H-series, and NC24r.
  • Full support for Accelerated Networking in Azure.  Direct access to the PCI device provides gains in overall network performance offering the highest throughput and lowest latency for guests in Azure.  Transparent SR-IOV eliminates configuration steps for bonding network devices.  SR-IOV for Linux in Azure is in preview but will become generally available later this year.
  • NAPI and Receive Segment Coalescing for 10% greater throughput on guests not using SR-IOV.
  • 18% reduction in kernel size.
  • Hyper-V socket capability — a socket-based host/guest communication method that does not require a network.
  • The very latest Hyper-V device drivers and feature support available.

The ongoing collaboration between Canonical and Microsoft will also continue to produce upgrades to newer kernel versions providing access to the latest kernel features, bug fixes, and security updates.  Any Ubuntu 16.04 LTS image brought up from the Azure portal after September 21st will be running on this Azure tailored Ubuntu kernel.

How to verify which kernel is used:

$ uname -r
4.11.0-1011-azure

Instances using the Azure tailored Ubuntu kernel will, of course, be supportable through Canonical’s Ubuntu Advantage service, available for purchase on our online shop or through sales@canonical.com in three tiers:

  • Essential: designed for self-sufficient users, providing access to our self-support portal as well as a variety of Canonical tools and services.
  • Standard: adding business-hours web and email support on top of the contents of Essential, as well as a 2-hour to 2-business-day response time (severity 1-4).
  • Advanced: adding 24×7 web and email support on top of the contents of Essential, as well as a 1-hour to 1-business-day response time (severity 1-4).

The Azure tailored Ubuntu kernel will not support the Canonical Livepatch Service at the time of this announcement, but investigation is underway to evaluate delivery of this service in the future.

If, for now, you prefer livepatching at scale over the above performance improvements, it is possible to revert to the standard kernel using the following commands:

$ sudo apt install linux-virtual linux-cloud-tools-virtual
$ sudo apt purge linux*azure
$ sudo reboot

As we continue to collaborate closely with various Microsoft teams on public cloud, private cloud, containers and services, you can expect further boosts in performance, simplification of operations at scale, and enablement of new innovations and technologies.

Ante Karamatić: Ime odbijeno (Name rejected)

Enj, 21/09/2017 - 5:47md

After 8-9 days, and after I sent an email, I finally received word today about what is happening with my application. I am reproducing it in full:

on 12.09.2017 the reservation was sent to the Commercial Court in Zagreb (e-Tvrtka), and the documentation and the RZ form were sent by post to Hitro.hr Zagreb
The paper documentation was submitted to the court on 13.09.2017. The name reservation did not go through. The notice was picked up from the court on 18.09.2017 (Hitro.hr – Zagreb).
The notice arrived by post today at Hitro.hr – Šibenik (21.09.2017). I called your mobile so that you could pick up the confirmation, but nobody answers. I am therefore informing you that you can pick up the notice at HITRO.HR Šibenik.

So, e-Tvrtka is one big nothing; a plain lie and a fraud. Documents are still being sent around by post. To be clear, this is not the fault of the clerks, who were accommodating. This is a problem of how the state, that is, the Government, is organized. The clerks are victims here just as much as those of us who are trying to build something.

So, the name was rejected.

In the Republic of Croatia you need 10 days to find out whether you may start a company under a given name. In other countries such things don't even exist; companies are founded within a single day. If we want to be fertile ground for entrepreneurship, hitro.hr should be abolished (it is completely pointless) and modern technology introduced: algorithms can check names, and this should be nothing more than a web page. No protocols, no payments, no standing in line.

The Fridge: Ubuntu Community Council 2017 election under way!

Enj, 21/09/2017 - 5:30md

The Ubuntu Community Council election has begun and ballots sent out to all Ubuntu Members. Voting closes September 27th at end of day UTC.

The following candidates are standing for 7 seats on the council:

Please contact the community-council@lists.ubuntu.com list if you are an Ubuntu Member but did not receive a ballot. Voting instructions were sent to the public address defined in Launchpad, or your launchpad_id@ubuntu.com address if not. Please also make sure you check your spam folder first.

We'd like to thank all the candidates for their willingness to serve in this capacity, and all members for their considered votes.

Originally posted to the ubuntu-news-team mailing list on Tue Sep 12 14:22:49 UTC 2017 by Mark Shuttleworth

Ubuntu Insights: Kubernetes Snaps: The Quick Version

Enj, 21/09/2017 - 3:46md

This article originally appeared on George Kraft’s blog

When we built the Canonical Distribution of Kubernetes (CDK), one of our goals was to provide snap packages for the various Kubernetes clients and services: kubectl, kube-apiserver, kubelet, etc.

While we mainly built the snaps for use in CDK, they are freely available to use for other purposes as well. Let’s have a quick look at how to install and configure the Kubernetes snaps directly.

The Client Snaps

This covers: kubectl, kubeadm, kubefed

Nothing special to know about these. Just snap install and you can use them right away:

$ sudo snap install kubectl --classic
kubectl 1.7.4 from 'canonical' installed

$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

The Server Snaps

This covers: kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy

Example: kube-apiserver

We will use kube-apiserver as an example. The other services generally work the same way.

Install it with snap install:

sudo snap install kube-apiserver

This creates a systemd service named snap.kube-apiserver.daemon. Initially, it will be in an error state because it’s missing important configuration:

$ systemctl status snap.kube-apiserver.daemon
● snap.kube-apiserver.daemon.service - Service for snap application kube-apiserver.daemon
   Loaded: loaded (/etc/systemd/system/snap.kube-apiserver.daemon.service; enabled; vendor preset: enabled)
   Active: inactive (dead) (Result: exit-code) since Fri 2017-09-01 15:54:39 UTC; 11s ago
...

Configure kube-apiserver using snap set:

sudo snap set kube-apiserver \
    etcd-servers=https://172.31.9.254:2379 \
    etcd-certfile=/root/certs/client.crt \
    etcd-keyfile=/root/certs/client.key \
    etcd-cafile=/root/certs/ca.crt \
    service-cluster-ip-range=10.123.123.0/24 \
    cert-dir=/root/certs

Note: Any files used by the service, such as certificate files, must be placed within the /root/ directory to be visible to the service. This limitation allows us to run a few of the services in a strict confinement mode that offers better isolation and security.
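For instance, staging the certificates used above might look like this (the paths and file names are just an assumed layout):

$ sudo mkdir -p /root/certs
$ sudo cp ca.crt client.crt client.key /root/certs/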

After configuring, restart the service and you should see it running:

$ sudo service snap.kube-apiserver.daemon restart
$ systemctl status snap.kube-apiserver.daemon
● snap.kube-apiserver.daemon.service - Service for snap application kube-apiserver.daemon
   Loaded: loaded (/etc/systemd/system/snap.kube-apiserver.daemon.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2017-09-01 16:02:33 UTC; 6s ago
...

Configuration

The keys and values for snap set map directly to arguments that you would normally pass to the service. You can view a list of arguments by invoking the service directly, e.g. kube-apiserver -h.

For configuring the snaps, drop the leading dashes and pass them through snap set. For example, if you want kube-apiserver to be invoked like this:

kube-apiserver --etcd-servers https://172.31.9.254:2379 --allow-privileged

You would configure the snap like this:

snap set kube-apiserver etcd-servers=https://172.31.9.254:2379 allow-privileged=true

Note, also, that we had to specify a value of true for allow-privileged. This applies to all boolean flags.

Going deeper

Want to know more? Here are a couple good things to know:

If you're confused about what snap set ... is actually doing, you can read the snap configure hooks in

/snap/<snap-name>/current/meta/hooks/configure

to see how they work.

The configure hook creates an args file here:

/var/snap/<snap-name>/current/args

This contains the actual arguments that get passed to the service by the snap:

$ cat /var/snap/kube-apiserver/current/args
--cert-dir "/root/certs"
--etcd-cafile "/root/certs/ca.crt"
--etcd-certfile "/root/certs/client.crt"
--etcd-keyfile "/root/certs/client.key"
--etcd-servers "https://172.31.9.254:2379"
--service-cluster-ip-range "10.123.123.0/24"

Note: While you can technically bypass snap set and edit the args file directly, it’s best not to do so. The next time the configure hook runs, it will obliterate your changes. This can occur not only from a call to snap set but also during a background refresh of the snap.

The source code for the snaps can be found here: https://github.com/juju-solutions/release/tree/rye/snaps/snap

We’re working on getting these snaps added to the upstream Kubernetes build process. You can follow our progress on that here: https://github.com/kubernetes/release/pull/293

If you have any questions or need help, you can either find us at #juju on freenode, or open an issue against https://github.com/juju-solutions/bundle-canonical-kubernetes and we'll help you out as soon as we can.

Scarlett Clark: KDE: Randa 2017! KDE Neon Snappy and more

Enj, 21/09/2017 - 2:54md

Another successful Randa meeting! I spent most of my days working on snappy packaging for KDE core applications, and I have most of them done!

Snappy Builds on KDE Neon

We need testers! Please see Using snappy to get started.

In the evenings I worked on getting all my appimage work moved into the KDE infrastructure so that the community can take over.

I learned a great deal about accessibility and have been formulating ways to improve KDE neon in this area.

Randa meetings are crucial to the KDE community for developer interaction, brainstorming, and bringing great new things to KDE.
I encourage all of you to please consider a donation at https://www.kde.org/fundraisers/randameetings2017/

Jamie Strandboge: Easy ssh into libvirt VMs and LXD containers

Mër, 20/09/2017 - 11:39md

Finding your VMs and containers via DNS resolution so you can ssh into them can be tricky. I was talking with Stéphane Graber today about this and he reminded me of his excellent article: Easily ssh to your containers and VMs on Ubuntu 12.04.

These days, libvirt has the `virsh domifaddr` command, and LXD has a slightly different way of finding the IP address.
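For example, `virsh domifaddr` reports a guest's addresses roughly like this (the domain name and values are made up):

$ virsh domifaddr foo
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet0      52:54:00:aa:bb:cc    ipv4         192.168.122.50/24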

Here is an updated `~/.ssh/config` that I’m now using (thank you Stéphane for the update for LXD):

Host *.lxd
    #User ubuntu
    #StrictHostKeyChecking no
    #UserKnownHostsFile /dev/null
    ProxyCommand nc $(lxc list -c s4 $(echo %h | sed "s/\.lxd//g") | grep RUNNING | cut -d' ' -f4) %p
 
Host *.vm
    #StrictHostKeyChecking no
    #UserKnownHostsFile /dev/null
    ProxyCommand nc $(virsh domifaddr $(echo %h | sed "s/\.vm//g") | awk -F'[ /]+' '{if (NR>2 && $5) print $5}') %p

You may want to uncomment `StrictHostKeyChecking` and `UserKnownHostsFile` depending on your environment (see `man ssh_config` for details).

With the above, I can ssh in with:

$ ssh foo.vm uptime
16:37:26 up 50 min, 0 users, load average: 0.00, 0.00, 0.00
$ ssh bar.lxd uptime
21:37:35 up 12:39, 2 users, load average: 0.55, 0.73, 0.66

Enjoy!


Filed under: ubuntu, ubuntu-server

Serge Hallyn: Namespaced File Capabilities

Mër, 20/09/2017 - 5:37md
Namespaced file capabilities

As of this past week, namespaced file capabilities are available in the upstream kernel. (Thanks to Eric Biederman for many review cycles and for the final pull request)

TL;DR

Some packages install binaries with file capabilities, and fail to install if you cannot set the file capabilities. Such packages could not be installed from inside a user namespace. With this feature, that problem is fixed.

Yay!

What are they?

POSIX capabilities are pieces of root’s privilege which can be individually used.

File capabilities are POSIX capability sets attached to files. When files with associated capabilities are executed, the resulting task may end up with privilege even if the calling user was unprivileged.

What’s the problem

In single-user-namespace days, POSIX capabilities were completely orthogonal to userids. You can be a non-root user with CAP_SYS_ADMIN, for instance. This can happen by starting as root, setting PR_SET_KEEPCAPS through prctl(2), then dropping the capabilities you don't want and changing your uid. Or, it can happen by a non-root user executing a file with file capabilities. In order to attach such a capability to a file, you require the CAP_SETFCAP capability.
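As a quick illustration (the binary here is just an example), attaching and inspecting a file capability looks like this:

# grant a copy of ping CAP_NET_RAW instead of making it setuid root
$ sudo setcap cap_net_raw+ep ./ping
$ getcap ./ping
./ping = cap_net_raw+ep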

User namespaces had several requirements, including:

  1. an unprivileged user should be able to create a user namespace
  2. root in a user namespace should be privileged against its resources
  3. root in a user namespace should be unprivileged against any resources which it does not own.

So in a post-user-namespace age, an unprivileged user can "have privilege" with respect to files they own. However, if we allowed them to write a file capability onto one of their files, then they could execute that file as an unprivileged user on the host, thereby gaining that privilege. This violates the third user namespace requirement, and is therefore not allowed.

Unfortunately – and fortunately – some software wants to be installed with file capabilities. On the one hand that is great, but on the other hand, if the package installer isn't able to handle a failure to set file capabilities, then package installs are broken. This was the case for some common packages – for instance httpd on CentOS.

With namespaced file capabilities, file capabilities continue to be orthogonal with respect to userids mapped into the namespace. However, the capabilities are tagged as belonging to the host uid mapped to the container's root id (0). (If uid 0 is not mapped, then file capabilities cannot be assigned.) This prevents the namespace owner from gaining privilege in a namespace against which they should not be privileged.
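A rough sketch of the effect (the uid mapping and names are assumptions, not taken from the patch set): in a user namespace whose root maps to host uid 100000, root can still set a file capability on a file it owns, and the kernel tags the stored attribute with that rootid.

(ns) # setcap cap_net_bind_service+ep ./myserver   # looks like a normal setcap inside the namespace
# the capability xattr is tagged with rootid 100000, so it is only
# honored in namespaces where host uid 100000 maps to root; running
# the file on the host as uid 100000 grants no extra privilege there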

 

Disclaimer

The opinions expressed in this blog are my own views and not those of Cisco.

