
Planet Ubuntu

http://planet.ubuntu.com/

Sean Davis: Development Release: Xfce PulseAudio Plugin 0.3.4

Tue, 12/12/2017 - 1:12pm

With each new release, the Xfce PulseAudio Plugin becomes more refined and better suited for Xfce users. The latest release adds support for the MPRIS Playlists specification and improves support for Spotify and other media players.

What’s New?

New Feature: MPRIS Playlists Support
  • This is a basic implementation of the MediaPlayer2.Playlists specification.
  • The 5 most recently played playlists are displayed (if supported by the player). Admittedly, I have not found a player that seems to implement the ordering portion of this specification.
New Feature: Experimental libwnck Support
  • libwnck is a window management library. This feature adds the “Raise” method for media players that do not support it, allowing the user to display the application window after clicking the menu item in the plugin.
  • Spotify for Linux is the only media player that I have found which does not implement this method. Since this is the media player I use most of the time, this was an important issue for me to resolve.
General
  • Unexpected error messages sent via DBus are now handled gracefully. The previous release of Pithos (1.1.2) returned a Python error on some DBus queries, which used to crash the plugin.
  • Numerous memory leaks were patched.
Translation Updates

Chinese (Taiwan), Croatian, Czech, Danish, Dutch, French, German, Hebrew, Japanese, Korean, Lithuanian, Polish, Russian, Slovak, Spanish, Swedish, Thai

Downloads

The latest version of Xfce PulseAudio Plugin can always be downloaded from the Xfce archives. Grab version 0.3.4 from the link below.

http://archive.xfce.org/src/panel-plugins/xfce4-pulseaudio-plugin/0.3/xfce4-pulseaudio-plugin-0.3.4.tar.bz2

  • SHA-256: 43fa39400eccab1f3980064f42dde76f5cf4546a6ea0a5dc5c4c5b9ed2a01220
  • SHA-1: 171f49ef0ffd1e4a65ba0a08f656c265a3d19108
  • MD5: 05633b8776dd3dcd4cda8580613644c3
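
If you want to verify the download before unpacking it, the published SHA-256 checksum can be checked from the shell. A minimal sketch, assuming the tarball sits in the current directory under the filename from the link above:

# Verify the tarball against the published SHA-256 sum (note the two spaces between sum and filename).
echo "43fa39400eccab1f3980064f42dde76f5cf4546a6ea0a5dc5c4c5b9ed2a01220  xfce4-pulseaudio-plugin-0.3.4.tar.bz2" | sha256sum -c -

sha256sum prints "OK" on a match and a warning on a mismatch.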

Kubuntu General News: Testing a switch to default Breeze-Dark Plasma theme in Bionic daily isos and default settings

Mon, 11/12/2017 - 2:15am

Today’s daily ISO for Bionic Beaver 18.04 sees an experimental switch to the Breeze-Dark Plasma theme by default.

Users running the 18.04 development version who have not deliberately opted for Breeze/Breeze-Light in their systemsettings will also see the change after upgrading packages.

Users can easily revert to the Breeze/Breeze-Light Plasma themes by changing this in systemsettings.

Feedback on this change will be very welcome:

You can reach us on the Kubuntu IRC channel or Telegram group, on our user mailing list, or post feedback on the (unofficial) Kubuntu web forums.

Thank you to Michael Tunnell from TuxDigital.com for kindly suggesting this change.

Costales: 3rd Ubucon Europe 2018

Sun, 10/12/2017 - 8:21pm
Yes! A new edition for ubunteros around the world! :))

Ubucons around the world
Is Ubucon made for me? This event is just for you! ;) You don't need to be a developer: you'll enjoy a lot of talks about everything you can imagine about Ubuntu and share great moments with other users.
Even the language won't be a problem. There, you'll meet people from everywhere, and surely someone will speak your language :)

You can read different posts about the previous Ubucon in Paris here:
English: https://insights.ubuntu.com/2017/09/25/ubucon-europe-2017/
Portuguese: https://carrondo.net/wp/2017/09/19/ubucon-europa-2017-paris/
Spanish: http://thinkonbytes.blogspot.pt/2017/09/2-ubucon-europea.html
Another in Spanish: https://www.innerzaurus.com/ubuntu-touch/articulos-touch/cronicas-la-ubucon-paris-2017/

Where? Gijón/Xixón, Asturies, Spain: the Antiguo Instituto, right in the city center, built in 1797.

When? 27th, 28th and 29th of April 2018.

Organized by
  • Francisco Javier Teruelo de Luis 
  • Francisco Molinero 
  • Sergi Quiles Pérez 
  • Antonio Fernandes 
  • Paul Hodgetts 
  • Santiago Moreira 
  • Joan CiberSheep 
  • Fernando Lanero 
  • Manu Cogolludo 
  • Marcos Costales
Get in touch! We're still working on a few details, so please don't book a flight yet. Join our Telegram channel, Google+ or Twitter now to get the latest news and future discounts on hotels and transport.

Clive Johnston: Send SMS messages from your Plasma Desktop

Sat, 09/12/2017 - 5:07pm

Earlier this year I talked about using KDE Connect to send and receive SMS messages via your connected device. Back then sending messages was a bit of a faff and involved having to use the terminal, but as of today this is no longer an issue!

Meet the KDEConnect SMS sender Plasmoid, which was uploaded to the KDE Store earlier today.  Once installed on your system you can add it to your desktop as a widget (as shown above).  On first use you need to tell it which connection to use by going to the Settings page.

 

 

Once you have it configured to use the correct device, type the phone number of the person you wish to message into the first box (as below).  Please note this needs to include the international dialling code (i.e. +44 for the UK, +353 for Ireland).  Then type your message and click the Send button; it’s that simple!

Your mobile device will then send the message.  The project has a GitHub page – https://github.com/comexpertise/plasma-kdeconnect-sms so head over there for the code, new releases and bug reports/feedback.

You can try it out yourself, on Xenial (16.04), Artful (17.10) or Bionic (18.04) by adding my PPA:

sudo add-apt-repository ppa:clivejo/plasma-kdeconnect-sms
sudo apt update
sudo apt install plasma-kdeconnect-sms

Stephen Michael Kellat: Not Messing With Hot Wheels Car Insertion

Fri, 08/12/2017 - 7:03am

Being on furlough from your job for just under four full months and losing 20 pounds during that time can hardly be considered healthy. If anything, it means that something is wrong. I allude in various fora that I work for a bureau of the United States of America's federal government as a civil servant. I am not particularly high-ranking as I only come in at GS-7 Step 1 under "CLEVELAND-AKRON-CANTON, OH" locality pay. My job doesn't normally have me working a full 12 months out of the year (generally 6-8 months depending upon the needs of the bureau) and I am normally on-duty only 32 hours per week.

As you might imagine, I have been trying to leave that job. Unfortunately, working for this particular government bureau makes any resume look kinda weird. My local church has some domestic missions work to do and not much money to fund it. I already use what funding we have to help with our mission work reaching out to one of the local nursing homes to provide spiritual care as well as frankly one of the few lifelines to the outside world some of those residents have. Xubuntu and the bleeding edge of LaTeX2e plus CTAN help greatly in preparing devotional materials for use in the field at the nursing home. Funding held us back from letting me assist with Hurricane Harvey or Hurricane Maria relief especially since I am currently finishing off quite a bit of training in homeland security/emergency management. But for the lack of finances to back it up as well as the lack of a large enough congregation, there is quite a bit to do. Unfortunately the numbers we get on a Sunday morning are not what they once were when the congregation had over a hundred in attendance.

I don't like talking about numbers in things like this. If you take 64 hours in a two-week pay period, multiply it by the minimum of 20 pay periods that generally occur, and then multiply by the hourly equivalent rate for my grade and step, it comes out to a pre-tax gross under $26,000. I rounded up to a whole number. Admittedly it isn't too much.

At this time of the year last year, many people across the Internet burned cash by investing in the Holiday Hole event put on by the Cards Against Humanity people. Over $100,000 was raised to dig a hole about 90 miles outside Chicago and then fill the thing back in. This year people spent money to help buy a piece of land to tie up the construction of President Trump's infamous border wall, and more besides, which resulted in Cards Against Humanity raking in $2,250,000 in record time.

Now, the church I would be beefing up the missionary work with doesn't have a web presence. It doesn't have an e-mail address. It doesn't have a fax machine. Again, it is a small church in rural northeast Ohio. According to IRS Publication 526, contributions to them are deductible under current law provided you read through the stipulations in that thin booklet and are a taxpayer in the USA. Folks outside the USA could contribute in US funds but I don't know what the rules are for foreign tax administrations to advise about how such is treated if at all.

The congregation is best reached by writing to:

West Avenue Church of Christ
5901 West Avenue
Ashtabula, OH 44004
United States of America

With the continuing budget shenanigans about how to fund Fiscal Year 2018 for the federal government, I get left wondering if/when I might be returning to duty. Helping the congregation fund me to undertake missions for it removes that as a concern. Besides, any job that gives you gray hair and puts 30 pounds on you during eight months of work cannot be good for you to remain at. Too many co-workers took rides away in ambulances at times due to the pressures of the job during the last work season.


Not Messing With Hot Wheels Car Insertion by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Robert Ancell: Setting up Continuous Integration on gitlab.gnome.org

Fri, 08/12/2017 - 1:40am
Simple Scan recently migrated to the new gitlab.gnome.org infrastructure. With modern infrastructure I now have the opportunity to enable Continuous Integration (CI), which is a fancy name for automatically building and testing your software when you make changes (and it can do more than that too).

I've used CI in many projects in the past, and it's a really handy tool. However, I've never had to set it up myself and when I've looked it's been non-trivial to do so. The great news is this is really easy to do in GitLab!

There's lots of good documentation on how to set it up, but to save you some time I'll show how I set it up for Simple Scan, which is a fairly typical GNOME application.

To configure CI you need to create a file called .gitlab-ci.yml in your git repository. I started with the following:

build-ubuntu:
  image: ubuntu:rolling
  before_script:
    - apt-get update
    - apt-get install -q -y --no-install-recommends meson valac gcc gettext itstool libgtk-3-dev libgusb-dev libcolord-dev libpackagekit-glib2-dev libwebp-dev libsane-dev
  script:
    - meson _build
    - ninja -C _build install

The first line is the name of the job - "build-ubuntu". This is going to define how we build Simple Scan on Ubuntu.

The "image" is the name of a Docker image to build with. You can see all the available images on Docker Hub. In my case I chose an official Ubuntu image and used the "rolling" link which uses the most recently released Ubuntu version.

The "before_script" defines how to set up the system before building. Here I just install the packages I need to build simple-scan.

Finally the "script" is what is run to build Simple Scan. This is just what you'd do from the command line.

And with that, every time a change is made to the git repository Simple Scan is built on Ubuntu and tells me if that succeeded or not! To make things more visible I added the following to the top of the README.md:

[![Build Status](https://gitlab.gnome.org/GNOME/simple-scan/badges/master/build.svg)](https://gitlab.gnome.org/GNOME/simple-scan/pipelines)

This gives the following image that shows the status of the build:

And because there are many more consumers of Simple Scan than just Ubuntu, I added the following to .gitlab-ci.yml:

build-fedora:
  image: fedora:latest
  before_script:
    - dnf install -y meson vala gettext itstool gtk3-devel libgusb-devel colord-devel PackageKit-glib-devel libwebp-devel sane-backends-devel
  script:
    - meson _build
    - ninja -C _build install

Now it builds on both Ubuntu and Fedora with every commit!

I hope this helps you getting started with CI and gitlab.gnome.org. Happy hacking.

Ubuntu Insights: Security Team Weekly Summary: December 7, 2017

Thu, 07/12/2017 - 4:11pm

The Security Team weekly reports are intended to be very short summaries of the Security Team’s weekly activities.

If you would like to reach the Security Team, you can find us at the #ubuntu-hardened channel on FreeNode. Alternatively, you can mail the Ubuntu Hardened mailing list at: ubuntu-hardened@lists.ubuntu.com

Due to the holiday last week, there was no weekly report, so this report covers the previous two weeks. During the last two weeks, the Ubuntu Security team:

  • Triaged 379 public security vulnerability reports, retaining the 74 that applied to Ubuntu.
  • Published 32 Ubuntu Security Notices which fixed 70 security issues (CVEs) across 34 supported packages.
Ubuntu Security Notices

 

Bug Triage

 

Mainline Inclusion Requests

 

Development

 

  • add max compressed size check to the review tools
  • adjust review-tools runtime errors output for store (final)
  • adjust review-tools for redflagged base snap overrides
  • adjust review-tools for resquashing with fakeroot
  • upload a couple of bad snaps to test r945 of the review tools in the store. The store is correctly not auto-approving, but is also not handling them right. File LP: #1733699
  • investigate SNAPCRAFT_BUILD_INFO=1 with snapcraft cleanbuild and attempt rebuilds
  • respond to feedback in PR 4245, close and resubmit as PR 4255 (interfaces/screen-inhibit-control: fix case in screen inhibit control)
  • investigate reported godot issue. Send up PR 4257 (interfaces/opengl: also allow ‘revision’ on /sys/devices/pci…)
  • investigation of potential biometrics-observe interface
  • snapd reviews
    • PR 4258: fix unmounting on systems without rshared
    • PR 4170: cmd/snap-update-ns: add planWritableMimic
    • PR 4306 (use #include instead of bare ‘include’)
    • PR 4224 – cmd/snap-update-ns: teach update logic to handle synthetic changes
    • PR 4312 – ‘create mount target for lib32,vulkan on demand’
    • PR 4323 – interfaces: add gpio-memory-control interface
    • PR 4325 (add test for netlink-connector interface) and investigate NETLINK_CONNECTOR denials
    • review design of PR 4329 – discard stale mountspaces (v2)
  • finalized squashfs fix for 1555305 and submitted it upstream (https://sourceforge.net/p/squashfs/mailman/message/36140758/)

  • investigation into a user's 16.04 AppArmor issues with tomcat
What the Security Team is Reading This Week

 

Weekly Meeting

 

More Info

 

Ubuntu Insights: Kernel Team Summary – December 6, 2017

Wed, 06/12/2017 - 9:14pm
November 21 through December 04 Development (18.04)

Every 6 months the Ubuntu Kernel Team is tasked to pick the kernel to be used in the next release. This is a difficult thing to do because we don’t definitively know what will be going into the upstream kernel over the next 6 months nor the quality of that kernel. We look at the Ubuntu release schedule and how that will line up with the upstream kernel releases. We talk to hardware vendors about when they will be landing their changes upstream and what they would prefer as the Ubuntu kernel version. We talk to major cloud vendors and ask them what they would like. We speak to large consumers of Ubuntu to solicit their opinion. We look at what will be the next upstream stable kernel. We get input from members of the Canonical product strategy team. Taking all of that into account we are tentatively planning to converge on 4.15 for the Bionic Beaver 18.04 LTS release.

On the road to 18.04 we have a 4.14 based kernel in the Bionic -proposed repository.

Stable (Released & Supported)
  • The kernels for the current SRU cycle are being respun to include fixes for CVE-2017-16939 and CVE-2017-1000405.

  • Kernel versions in -proposed:

    trusty                  3.13.0-137.186
    trusty/linux-lts-xenial 4.4.0-103.126~14.04.1
    xenial                  4.4.0-103.126
    xenial/linux-hwe        4.10.0-42.46~16.04.1
    xenial/linux-hwe-edge   4.13.0-19.22~16.04.1
    zesty                   4.10.0-42.46
    artful                  4.13.0-19.22
  • Current cycle: 17-Nov through 09-Dec

    17-Nov           Last day for kernel commits for this cycle.
    20-Nov - 25-Nov  Kernel prep week.
    26-Nov - 08-Dec  Bug verification & Regression testing.
    11-Dec           Release to -updates.
  • Next cycle: 08-Dec through 30-Dec (this cycle will only contain CVE fixes)

    08-Dec           Last day for kernel commits for this cycle.
    11-Dec - 16-Dec  Kernel prep week.
    17-Dec - 29-Dec  Bug verification & Regression testing.
    01-Jan           Release to -updates.
Misc
  • The current CVE status
  • If you would like to reach the kernel team, you can find us at the #ubuntu-kernel
    channel on FreeNode. Alternatively, you can mail the Ubuntu Kernel Team mailing
    list at: kernel-team@lists.ubuntu.com.

Lubuntu Blog: Join Phabricator

Tue, 05/12/2017 - 8:54pm
Inspired by the wonderful KDE folks, Lubuntu has created a Phabricator instance for our project. Phabricator is an open source, version control system-agnostic collaborative development environment similar in some ways to GitHub, GitLab, and perhaps a bit more remotely, like Launchpad. We were looking for tools to organize, coordinate, and collaborate, especially across teams within […]

Simos Xenitellis: How to migrate LXD from DEB/PPA package to Snap package

Tue, 05/12/2017 - 2:35pm

You are using LXD from a Linux distribution package and you would like to migrate your existing installation to the Snap LXD package. Let’s do the migration together!

This post is not about live container migration in LXD. Live container migration is about moving a running container from one LXD server to another.

If you do not have LXD installed already, then look for another guide about the installation and set up of LXD from a snap package. A fresh installation of LXD as a snap package is easy.

Note that from the end of 2017, LXD will be generally distributed as a Snap package. If you run LXD 2.0.x from Ubuntu 16.04, you are not affected by this.

Prerequisites

Let’s check the version of LXD (Linux distribution package).

$ lxd --version
2.20
$ apt policy lxd
lxd:
  Installed: 2.20-0ubuntu4~16.04.1~ppa1
  Candidate: 2.20-0ubuntu4~16.04.1~ppa1
  Version table:
 *** 2.20-0ubuntu4~16.04.1~ppa1 500
        500 http://ppa.launchpad.net/ubuntu-lxc/lxd-stable/ubuntu xenial/main amd64 Packages
        100 /var/lib/dpkg/status
     2.0.11-0ubuntu1~16.04.2 500
        500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
     2.0.2-0ubuntu1~16.04.1 500
        500 http://archive.ubuntu.com/ubuntu xenial-security/main amd64 Packages
     2.0.0-0ubuntu4 500
        500 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages

In this case, we run LXD version 2.20, and it was installed from the LXD PPA repository.

If you had not enabled the LXD PPA repository, you would have an LXD version 2.0.x, the version that was released with Ubuntu 16.04 (rather than the 2.20 shown above). LXD version 2.0.11 is currently the default version for Ubuntu 16.04.3 and will be supported in that form until 2016 + 5 = 2021. LXD version 2.0.0 is the original LXD version in Ubuntu 16.04 (when originally released) and LXD version 2.0.2 is the security update of that LXD 2.0.0.

We are migrating to the LXD snap package. Let’s see how many containers will be migrated.

$ lxc list | grep RUNNING | wc -l
6

This count gives us a quick sanity check, in case something goes horribly wrong during the migration.

Let’s check the available incoming LXD snap packages.

$ snap info lxd
name:      lxd
summary:   System container manager and API
publisher: canonical
contact:   https://github.com/lxc/lxd/issues
description: |
  LXD is a container manager for system containers. It offers a REST API to remotely manage
  containers over the network, using an image based workflow and with support for live migration.
  Images are available for all Ubuntu releases and architectures as well as for a wide number of
  other Linux distributions. LXD containers are lightweight, secure by default and a great
  alternative to virtual machines.
snap-id: J60k4JY0HppjwOjW8dZdYc8obXKxujRu
channels:
  stable:        2.20        (5182) 44MB -
  candidate:     2.20        (5182) 44MB -
  beta:          ↑
  edge:          git-b165982 (5192) 44MB -
  2.0/stable:    2.0.11      (4689) 20MB -
  2.0/candidate: 2.0.11      (4770) 20MB -
  2.0/beta:      ↑
  2.0/edge:      git-03e9048 (5131) 19MB -

There are several channels to choose from. The stable channel has LXD 2.20, just like the candidate channel. When the LXD 2.21 snap is ready, it will first be released in the candidate channel and stay there for 24 hours. If everything goes well, it will get propagated to the stable channel. LXD 2.20 was released some time ago, that’s why both channels have the same version (at the time of writing this blog post).

There is the edge channel, which has the auto-compiled version from the git source code repository. It is handy to use this channel if you know that a specific fix (that affects you) has been added to the source code, and you want to verify that it actually fixed the issue. Note that the beta channel is not used, therefore it inherits whatever is found in the channel below it: the edge channel.

Finally, there are these 2.0/ tagged channels that correspond to the stock 2.0.x LXD versions in Ubuntu 16.04. It looks like those who use the 5-year supported LXD (because of Ubuntu 16.04) have the option to switch to a snap version after all.
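
If you later want to follow one of these channels, an installed snap can be moved between them. A small sketch, with the channel names taken from the snap info lxd output above (switching across major versions such as 2.0/stable may not be supported):

# Switch the installed lxd snap between channels.
sudo snap refresh lxd --channel=edge    # follow the auto-compiled git builds
sudo snap refresh lxd --channel=stable  # return to the stable channel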

Installing the LXD snap

Install the LXD snap.

$ snap install lxd
lxd 2.20 from 'canonical' installed

Migrating to the LXD snap

Now the LXD snap is installed, but the DEB/PPA package LXD is the one that is running. We need to run the migration script lxd.migrate, which will move the data from the DEB/PPA version over to the Snap version of LXD. In practical terms, it will move files from /var/lib/lxd (the old DEB/PPA LXD location) to the snap's data area under /var/snap/lxd/.

$ sudo lxd.migrate
=> Connecting to source server
=> Connecting to destination server
=> Running sanity checks

=== Source server
LXD version: 2.20
LXD PID: 4414
Resources:
  Containers: 6
  Images: 3
  Networks: 1
  Storage pools: 1

=== Destination server
LXD version: 2.20
LXD PID: 30329
Resources:
  Containers: 0
  Images: 0
  Networks: 0
  Storage pools: 0

The migration process will shut down all your containers then move your data to the destination LXD.
Once the data is moved, the destination LXD will start and apply any needed updates.
And finally your containers will be brought back to their previous state, completing the migration.

Are you ready to proceed (yes/no) [default=no]? yes
=> Shutting down the source LXD
=> Stopping the source LXD units
=> Stopping the destination LXD unit
=> Unmounting source LXD paths
=> Unmounting destination LXD paths
=> Wiping destination LXD clean
=> Moving the data
=> Moving the database
=> Backing up the database
=> Opening the database
=> Updating the storage backends
=> Starting the destination LXD
=> Waiting for LXD to come online

=== Destination server
LXD version: 2.20
LXD PID: 2812
Resources:
  Containers: 6
  Images: 3
  Networks: 1
  Storage pools: 1

The migration is now complete and your containers should be back online.
Do you want to uninstall the old LXD (yes/no) [default=no]? yes

All done. You may need to close your current shell and open a new one to have the "lxc" command work.

Testing the migration to the LXD snap

Let’s check that the containers managed to start successfully,

$ lxc list | grep RUNNING | wc -l
6

But let’s check that we can still run Firefox from an LXD container, according to the following post,

How to run graphics-accelerated GUI apps in LXD containers on your Ubuntu desktop

Yep, all good. The artifact in the middle (over the c in packaged) is the mouse cursor in wait mode, while GNOME Screenshot is about to take the screenshot. I did not find a report about that in the GNOME Screenshot bugzilla. It is a minor issue and there are several workarounds (1. try one more time, 2. use timer screenshot).

Let’s do some actual testing,

Yep, works as well.

Exploring the LXD snap commands

Let’s type lxd and press Tab.

$ lxd<Tab>
lxd  lxd.check-kernel  lxd.migrate  lxd.benchmark  lxd.lxc

There are two commands left to try out, lxd.check-kernel and lxd.benchmark. The snap package is called lxd, therefore any additional commands are prefixed with lxd.. lxd is the actual LXD server executable. lxd.lxc is the lxc command that we use for all LXD actions. The LXD snap package makes the appropriate symbolic link so that we just need to write lxc instead of lxd.lxc.
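
To double-check which lxc your shell resolves after the migration (the troubleshooting section below returns to this), here is a quick sketch; the exact symlink layout may differ between snapd versions:

# Confirm which lxc binary is picked up from the $PATH.
command -v lxc             # expect /snap/bin/lxc once the snap's path takes precedence
readlink -f /snap/bin/lxc  # follow the symlink to see where it points on your system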

Trying out lxd.check-kernel

Let’s run lxd.check-kernel.

$ sudo lxd.check-kernel
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /lib/modules/4.10.0-40-generic/build/.config
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
newuidmap is not installed
newgidmap is not installed
Network namespace: enabled
--- Control groups ---
Cgroups: enabled
Cgroup v1 mount points:
 /sys/fs/cgroup/systemd
 /sys/fs/cgroup/net_cls,net_prio
 /sys/fs/cgroup/freezer
 /sys/fs/cgroup/cpu,cpuacct
 /sys/fs/cgroup/memory
 /sys/fs/cgroup/devices
 /sys/fs/cgroup/perf_event
 /sys/fs/cgroup/cpuset
 /sys/fs/cgroup/hugetlb
 /sys/fs/cgroup/pids
 /sys/fs/cgroup/blkio
Cgroup v2 mount points:
Cgroup v1 clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
Macvlan: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
Vlan: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
Bridges: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
Advanced netfilter: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
CONFIG_NF_NAT_IPV4: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
CONFIG_NF_NAT_IPV6: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
CONFIG_IP_NF_TARGET_MASQUERADE: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
FUSE (for use with lxcfs): enabled, not loaded (modprobe: ERROR: missing parameters. See -h.)
--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /snap/lxd/5182/bin/lxc-checkconfig

This is an important tool if you have issues getting LXD to run. In this example, the Misc section shows some errors about missing parameters. I suppose they are issues with the tool itself, as the appropriate kernel modules are indeed loaded. My installation of the LXD snap works okay.

Trying out lxd.benchmark

Let’s try out the command without parameters.

$ lxd.benchmark
Usage: lxd-benchmark launch [--count=COUNT] [--image=IMAGE] [--privileged=BOOL] [--start=BOOL] [--freeze=BOOL] [--parallel=COUNT]
       lxd-benchmark start [--parallel=COUNT]
       lxd-benchmark stop [--parallel=COUNT]
       lxd-benchmark delete [--parallel=COUNT]

--count (= 100)
    Number of containers to create
--freeze (= false)
    Freeze the container right after start
--image (= "ubuntu:")
    Image to use for the test
--parallel (= -1)
    Number of threads to use
--privileged (= false)
    Use privileged containers
--report-file (= "")
    A CSV file to write test file to. If the file is present, it will be appended to.
--report-label (= "")
    A label for the report entry. By default, the action is used.
--start (= true)
    Start the container after creation

error: A valid action (launch, start, stop, delete) must be passed.
Exit 1

It is a benchmark tool that creates many containers; we can then use the same tool to remove them. There is an issue with the default number of containers, 100, which is too high. If you run lxd-benchmark launch without specifying a smaller count, you will mess up your LXD installation because you will run out of memory and maybe disk space. Looking for a bug report… it got buried in this pull request, https://github.com/lxc/lxd/pull/3857, and needs to be reopened. Ideally, the default count should be 1, letting the user knowingly select a bigger number. Here is the new pull request: https://github.com/lxc/lxd/pull/4074

Let’s carefully try out lxd-benchmark.

$ lxd.benchmark launch --count 3
Test environment:
  Server backend: lxd
  Server version: 2.20
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.10.0-40-generic
  Storage backend: zfs
  Storage version: 0.6.5.9-2
  Container backend: lxc
  Container version: 2.1.1

Test variables:
  Container count: 3
  Container mode: unprivileged
  Startup mode: normal startup
  Image: ubuntu:
  Batches: 0
  Batch size: 4
  Remainder: 3

[Dec  5 13:24:26.044] Found image in local store: 5f364e2e3f460773a79e9bec2edb5e993d236f035f70267923d43ab22ae3bb62
[Dec  5 13:24:26.044] Batch processing start
[Dec  5 13:24:28.817] Batch processing completed in 2.773s

It took just 2.8s to launch them on this computer. lxd-benchmark launched 3 containers, with names benchmark-%d. Obviously, refrain from using the word benchmark as a name for your own containers. Let’s see these containers.

$ lxc list --columns ns4
+-------------+---------+----------------------+
|    NAME     |  STATE  |         IPV4         |
+-------------+---------+----------------------+
| benchmark-1 | RUNNING | 10.52.251.121 (eth0) |
+-------------+---------+----------------------+
| benchmark-2 | RUNNING | 10.52.251.20 (eth0)  |
+-------------+---------+----------------------+
| benchmark-3 | RUNNING | 10.52.251.221 (eth0) |
+-------------+---------+----------------------+
...

Let’s stop them, and finally remove them.

$ lxd.benchmark stop
Test environment:
  Server backend: lxd
  Server version: 2.20
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.10.0-40-generic
  Storage backend: zfs
  Storage version: 0.6.5.9-2
  Container backend: lxc
  Container version: 2.1.1

[Dec  5 13:31:16.517] Stopping 3 containers
[Dec  5 13:31:16.517] Batch processing start
[Dec  5 13:31:20.159] Batch processing completed in 3.642s

$ lxd.benchmark delete
Test environment:
  Server backend: lxd
  Server version: 2.20
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.10.0-40-generic
  Storage backend: zfs
  Storage version: 0.6.5.9-2
  Container backend: lxc
  Container version: 2.1.1

[Dec  5 13:31:24.902] Deleting 3 containers
[Dec  5 13:31:24.902] Batch processing start
[Dec  5 13:31:25.007] Batch processing completed in 0.105s

Note that the lxd-benchmark actions follow the naming of the lxc actions (launch, start, stop and delete).

Troubleshooting

Error “Target LXD already has images”

$ sudo lxd.migrate
=> Connecting to source server
=> Connecting to destination server
=> Running sanity checks
error: Target LXD already has images, aborting.
Exit 1

This means that the snap version of LXD already has some images, so it is not clean; lxd.migrate requires the snap version of LXD to be clean. Solution: remove the LXD snap and install it again.

$ snap remove lxd
lxd removed
$ snap install lxd
lxd 2.20 from 'canonical' installed

Which “lxc” command am I running?

This is the lxc command of the DEB/PPA package,

$ which lxc
/usr/bin/lxc

This is the lxc command from the LXD snap package.

$ which lxc
/snap/bin/lxc

If you installed the LXD snap but you do not see the /snap/bin/lxc executable, it could be an artifact of your Unix shell. You may have to close that shell window and open a new one.

Error “bash: /usr/bin/lxc: No such file or directory”

If you get the following,

$ which lxc
/snap/bin/lxc

but the lxc command is not found,

$ lxc
bash: /usr/bin/lxc: No such file or directory
Exit 127

then you must close the terminal window and open a new one.

Note: if you loudly refuse to close the current terminal window, you can just type

$ hash -r

which will refresh the list of executables from the $PATH. Applies to bash, zsh. Use rehash if on *csh.

 

Simos Xenitellis, https://blog.simos.info/

Sebastian Heinlein: Aptdaemon

Mon, 04/12/2017 - 9:01pm
I am glad to announce aptdaemon: it is a DBus-controlled and PolicyKit-using package management...

Raphaël Hertzog: My Free Software Activities in November 2017

Sun, 03/12/2017 - 6:52pm

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12h but I only spent 10h. During this time, I managed the LTS frontdesk during one week, reviewing new security issues and classifying the associated CVEs (16 commits to the security tracker).

I prepared and released DLA-1171-1 on libxml-libxml-perl.

I prepared a new update for simplesamlphp (1.9.2-1+deb7u1) fixing 6 CVEs. I have not released any DLA yet since I was not able to test the updated package. I’m hoping that the current maintainer can do it, since he wanted to work on the update a few months ago.

Distro Tracker

Distro Tracker has seen a high level of activity in the last month. Ville Skyttä continued to contribute a few patches; notably, he helped get rid of the last blocker for a switch to Python 3.

I then worked with DSA to get the production instance (tracker.debian.org) upgraded to stretch with Python 3.5 and Django 1.11. This resulted in a few regressions related to the Python 3 switch (despite the large number of unit tests) that I had to fix.

In parallel, Pierre-Elliott Bécue showed up on the debian-qa mailing list and started to contribute. I have been exchanging with him almost daily on IRC to help him improve his patches. He has been very responsive and I’m looking forward to continuing to cooperate with him. His first patch enabled the use of the “src:” and “bin:” prefixes in the search feature, to specify whether to look up source packages or binary packages.

I did some cleanup/refactoring work after the switch of the codebase to Python 3 only.

Misc Debian work

Sponsorship. I sponsored many new packages: python-envparse 0.2.0-1, python-exotel 0.1.5-1, python-aws-requests-auth 0.4.1-1, pystaticconfiguration 0.10.3-1, python-jira 1.0.10-1, python-twilio 6.8.2-1, python-stomp 4.1.19-1. All those are dependencies for elastalert 0.1.21-1 that I also sponsored.

I sponsored updates for vboot-utils 0~R63-10032.B-2 (new upstream release for openssl 1.1 compat), aircrack-ng 1:1.2-0~rc4-4 (introducing airgraph-ng package) and asciidoc 8.6.10-2 (last upstream release, tool is deprecated).

Debian Installer. I submitted a few patches a while ago to support finding ISO images in LVM logical volumes in the hd-media installation method. Colin Watson reviewed them and made a few suggestions and expressed a few concerns. I improved my patches to take into account his suggestions and I resolved all the problems he pointed out. I then committed everything to the respective git repositories (for details review #868848, #868859, #868900, #868852).

Live Build. I merged 3 patches for live-build (#879169, #881941, #878430).

Misc. I uploaded Django 1.11.7 to stretch-backports. I filed an upstream bug on zim for #881464.

Thanks

See you next month for a new summary of my activities.


Clive Johnston: Bye bye LastPass, hello bitwarden

Sun, 03/12/2017 - 5:42pm

I have been a loyal customer of a password manager called LastPass for a number of years now.  It all started when I decided to treat myself to an early Christmas present by purchasing the “Premium” version back in 2013, in order to take advantage of extra features such as the mobile app.

Now, don’t get me wrong, I do think $12 is very good value for money and I was very happy with LastPass, but I must say this article really, really got my back up.  (Apparently I’m an “entitled user”.)  Not only that, but not one but three of the Google ads on the page are for LastPass (now there’s a spooky coincidence!).

I do agree with a lot of other users that doubling the price for absolutely no benefit is an extremely bitter pill to swallow, especially as there are a number of issues I have been having regarding the security of the mobile app.  But anyway, I calmed down and the topic went out of my head until I received an email reminding me that they would automatically charge my credit card the new $24 price.  Then, about a week later, as I watched a YouTube video by TuxDigital, he mentioned another password manager called bitwarden.

So a big thank you to Michael for bringing this to my attention. Not only does it have way more features than LastPass, but it is also open source (code on GitHub), self-hostable, and the “Premium” version is only $10. My issues with the LastPass mobile app are gone in bitwarden, replaced with the option to lock the app with your fingerprint or a PIN code: a nice happy medium compared with having to log out of LastPass and then re-enter your entire master code to regain access!

Another feature I *beeping* love (excuse my French) is that the app and vault allow you to store a “Google Authenticator” key in the vault, automatically generate a One Time Password (OTP) on the fly, and copy it to the device clipboard.  This allows it to be easily pasted in when auto-filling the username and password, great for those who use this feature on their blogs.

Simos Xenitellis: How to set the timezone in LXD containers

Sat, 02/12/2017 - 1:06pm

See https://blog.simos.info/trying-out-lxd-containers-on-our-ubuntu/ on how to set up and test LXD on Ubuntu (or another Linux distribution).

In this post we see how to set up the timezone in a newly created container.

The problem

The default timezone for a newly created container is Etc/UTC, which is what we used to call Greenwich Mean Time.

Let’s observe.

$ lxc launch ubuntu:16.04 mycontainer
Creating mycontainer
Starting mycontainer
$ lxc exec mycontainer -- date
Sat Dec  2 11:40:57 UTC 2017
$ lxc exec mycontainer -- cat /etc/timezone
Etc/UTC

That is, the observed time in a container follows a timezone that differs from the settings of the vast majority of our computers. When we connect with a shell inside the container, the time and date are not the same as those of our computer.

The time is recorded correctly inside the container; it is just the way it is presented that is off by a few hours.

Depending on our use of the container, this might or might not be an issue to pursue.

The workaround

We can set the environment variable TZ (for timezone) of each container to our preferred timezone setting.

$ lxc exec mycontainer -- date
Sat Dec  2 11:50:37 UTC 2017
$ lxc config set mycontainer environment.TZ Europe/London
$ lxc exec mycontainer -- date
Sat Dec  2 11:50:50 GMT 2017

That is, we use the lxc config set action to set, for mycontainer,  the environment variable TZ to the proper timezone (here, Europe/London). UTC time and Europe/London time happen to be the same during the winter.
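
If you are unsure of the exact timezone name to use, the host's tzdata database lists the valid values. A small sketch, assuming a systemd-based host with tzdata installed; any name it prints can be used as the TZ value:

# List valid timezone names and filter for a city.
timedatectl list-timezones | grep -i london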

How do we unset the container timezone and return back to Etc/UTC?

$ lxc config unset mycontainer environment.TZ

Here we used the lxc config unset action to unset the environment variable TZ.

The solution

LXD supports profiles and you can edit the default profile in order to get the timezone setting automatically applied to any containers that follow this profile. Let’s get a list of the profiles.

$ lxc profile list
+---------+---------+
|  NAME   | USED BY |
+---------+---------+
| default | 7       |
+---------+---------+

Only one profile, called default. It is used by 7 containers already on this LXD installation.

We set the environment variable TZ in the profile with the following,

$ lxc exec mycontainer -- date
Sat Dec  2 12:02:37 UTC 2017
$ lxc profile set default environment.TZ Europe/London
$ lxc exec mycontainer -- date
Sat Dec  2 12:02:43 GMT 2017

How do we unset the profile timezone and get back to Etc/UTC?

$ lxc profile unset default environment.TZ

Here we used the lxc profile unset action to unset the environment variable TZ.

 

Simos Xenitellis, https://blog.simos.info/

Daniel Pocock: Hacking with posters and stickers

Fri, 01/12/2017 - 9:27pm

The FIXME.ch hackerspace in Lausanne, Switzerland has started this weekend's VR Hackathon with a somewhat low-tech 2D hack: using the FSFE's Public Money Public Code stickers in lieu of sticky tape to place the NO CLOUD poster behind the bar.

Get your free stickers and posters

FSFE can send you these posters and stickers too.

Kubuntu General News: Kubuntu Kafe Live approaching

Wed, 29/11/2017 - 9:32pm

This Saturday (December 2nd) the second Kubuntu Kafe Live, our online video cafe, will be taking place from 21:00 UTC.
Join the Kubuntu development community and guests as our intrepid hosts

  • Aaron Honeycutt
  • Ovidiu-Florin Bogdan
  • Rick Timmis

discuss a cornucopia of topics in this free-format, magazine-style show.

The show includes Technical Design and Planning for a Kubuntu CI Rebuild, a Live Testing workshop in the Kubuntu Dojo, Kubuntu product development, and more.

We will be attempting to run a live stream on our YouTube channel, but we encourage you to join us on our Big Blue Button conference server: enter your name, and you are welcome to join room 1, interact with us, and be part of the show.

See you there!

Chris Glass: Serving a static blog from a Snap

Wed, 29/11/2017 - 10:31am

Out of curiosity, I decided to try and package this blog as a snap package, and it turns out to be an extremely easy and convenient way to deploy a static blog!

Why?

There are several advantages that the snappy packaging format brings to the table as far as application developers are concerned (which I am, my application in this case being my blog).

Snapcraft makes it very easy to package things; there are per-application jails sandboxing your applications/services that basically come for free; and it also comes with a distribution mechanism that takes care of auto-upgrading your snap on any platform.

Sweet!

How?

Since this blog is generated using the excellent "pelican" static blog generator from a bunch of markdown articles and a theme, there are not a lot of things to package in the first place :)

A webserver for the container age

A static blog obviously needs to be served by a webserver.

Packaging a "full" traditional webserver like apache2 (what I used before) or nginx is a little outside the scope of what I would have liked to do with my spare time, so I looked around for another way to serve it.

Requirements:

  • A static files webserver.
  • Able to set headers for cache control and HSTS.
  • Ideally self-contained / statically linked (because snapping the whole thing would be much faster/easier this way)
  • SSL ready. I've had an A+ rating on SSLlabs for years and intend to keep it that way.
  • Easy to configure.

After toying with the idea of writing my own in Rust, I instead settled on an already existing project that fits the bill perfectly and is amazingly easy to deploy and configure - Caddy.
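
For reference, here is a hypothetical minimal Caddyfile for a static pelican site covering the headers requirement above; the domain, site root, and max-age values are assumptions, not my actual production config:

# Write a minimal Caddyfile (Caddy v1 syntax); Caddy reads it from its working directory by default.
cat > Caddyfile <<'EOF'
example.com {
    root /var/www/blog
    gzip
    header / {
        Strict-Transport-Security "max-age=31536000"
        Cache-Control "public, max-age=3600"
    }
}
EOF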

A little bit of snapcraft magic

Of course, a little bit of code was needed in the snapcraft recipe to make it all happen.

All of the code is available on a github project, and most of the logic can be found in the snapcraft.yaml file.

Simply copying the Caddyfile and the snap/ subfolder to your existing pelican project should be all you need to get going; then run the following to get a snap package:

# On an Ubuntu system.
snap install snapcraft
snapcraft

With your site's FQDN added to the Caddyfile and pushed to production, you can marvel at all the code and configuration you did not have to write to get an A+ rating with SSLlabs :)

Questions? Comments?

As usual, feel free to reach out with any question or comment you may have!

Stephen Michael Kellat: Looking Towards A Retrospective Future

Wed, 29/11/2017 - 5:40am

I wish this was about Ubuntu MATE. It isn't, alas. With the general freak-out again over net neutrality in the United States, not to mention the Internet blackout in Pakistan, it is time to run some ideas.1

The Internet hasn't been healthy for a while. Even with net neutrality rules in the United States, I have my Internet Service Provider neutrally blocking all IPv6 traffic and throttling me. As you can imagine, that now makes an apt update quite a pain. When I have asked my provider, they have said they have no plans to offer this on residential service. When I have raised the point that my employer wants me to verify the ability to potentially work from home in crisis situations, they said I would need to subscribe to "business class" service and said they would happily terminate my residential service for me if I tried to use a Virtual Private Network.

At this point, my view of the proposed repeal of net neutrality rules in the United States is simple. To steal a line from a former presidential candidate: What difference at this point does it make?2 I have exactly one broadband provider available to me.3 Unless I move to HughesNet or maybe something exotic, I have what is generally available.4

The Internet, if we can even call it a coherent whole anymore, has been quite stressed over the past few years. After all, a simple hurricane can wipe out Internet companies with their servers and networks based in New York City.5 In Puerto Rico, mail carriers of the United States Postal Service were the communications lifeline for quite a while until services could come back online.6 It can be popular on the African continent to simply make Internet service disappear at times to meet the needs of the government of the day.7 Sometimes bad things simply happen, too.8

Now, this is not to say people are not trying to drive forward. I have found concept papers with ideas that are not totally "pie in the sky".9 Librarians see the world as one where it is littered with PirateBoxes that are instead called LibraryBoxes.10 Alphabet's own Project Loon has been field tested in the skies of Puerto Rico thanks to the grant of a "Special Temporary Authority" by the Federal Communications Commission's Office of Engineering Technology.11

Now, I can imagine life without an Internet. My first e-mail address was tremendously long as it had a gateway or two in it to get the message to the BBS I dialed into that was tied into FidoNet. I was hunting around for FidoNews and, after reading a recent issue, noticed some names that correlate in an interesting fashion with the Debian & Ubuntu realms. That was a very heartening thing for me to find. With the seeding of apt-offline on at least the Xubuntu installation disc, I know that I would be able to update a Xubuntu installation whenever I actually found access somewhere, even if it was not readily available to me at home. Thankfully, with that bit of seeding, we solved the "chicken and egg" problem of how you install the very tool you need when you don't have the access to fetch it.

We can and likely will adapt. We can and likely will overcome. These bits of madness come and go. As it was, I already started pricing the build-out of a communications hub with a Beverage antenna as well as an AN/FLR-9 Wullenweber array at a minimum. On a local property like a couple acres of farm land I could probably set this up for just under a quarter million dollars with sufficient backups.12 One farm was positioned close enough to a physical corridor to the PIT Internet Exchange Point but that would still be a little over 100 miles to traverse. As long as I could get the permissions, could get the cable laid, and find a peer, peering with somebody who uses YYZ as their Internet Exchange Point is oddly closer due to quirks of geography.

Earlier in today's news, it appeared that the Democratic People's Republic of Korea made yet another unauthorized missile launch.13 This one appears to have been an ICBM that landed offshore from Japan.14 The DPRK's leader has threatened missile strikes of various sorts over the past year on the United States.15 A suborbital electromagnetic pulse blast near our Pacific coast, for example, would likely wipe out the main offices of companies ranging from Google to Yahoo to Apple to Amazon to Microsoft in terms of their computers and other electronic hardware.16

I'm not really worried right now about the neutrality of internetworking. I want there to still be something carried on it. With the increasingly real threat of an EMP possibly wiping out the USA's tech sector due to one rogue missile, bigger problems exist than mere paid prioritization.17

  1. Megan McArdle, "The Internet Had Already Lost Its Neutrality," Bloomberg.Com, November 21, 2017, https://www.bloomberg.com/view/articles/2017-11-21/the-internet-had-already-lost-its-neutrality. ; M. Ilyas Khan, "The Politics behind Pakistan's Protests," BBC News, November 26, 2017, sec. Asia, http://www.bbc.com/news/world-asia-42129605.

  2. The candidate in this case is Hillary Clinton. That sentence, often taken out of context, was uttered before the Senate Foreign Relations Committee in 2013.

  3. Sadly the National Broadband Map project was not funded to be continually updated. It would have continued to show that, even though cell phone services are available, those are not meant for use in place of a wired broadband connection. Updates stopped in 2014.

  4. I am not made of gold but this is an example of an offering on the Iridium constellation: http://www.bluecosmo.com/iridium-go/rate-plans.

  5. Sinead Carew, "Hurricane Sandy Disrupts Northeast U.S. Telecom Networks," Reuters, October 30, 2012, https://www.reuters.com/article/us-storm-sandy-telecommunications/hurricane-sandy-disrupts-northeast-u-s-telecom-networks-idUSBRE89T0YU20121030.

  6. Hugh Bronstein, "U.S. Mail Carriers Emerge as Heroes in Puerto Rico Recovery," Reuters, October 9, 2017, https://www.reuters.com/article/us-usa-puertorico-mail/u-s-mail-carriers-emerge-as-heroes-in-puerto-rico-recovery-idUSKBN1CE15G.

  7. "Why Has Cameroon Blocked the Internet?," BBC News, February 8, 2017, sec. Africa, http://www.bbc.com/news/world-africa-38895541.

  8. "Marshall Islands' 10-Day Internet Blackout Extended," BBC News, January 9, 2017, sec. News from Elsewhere, http://www.bbc.com/news/blogs-news-from-elsewhere-38559117.

  9. Pekka Abrahamsson et al., "Bringing the Cloud to Rural and Remote Areas - Cloudlet by Cloudlet," ArXiv:1605.03622 [Cs], May 11, 2016, http://arxiv.org/abs/1605.03622.

  10. Jason Griffey, "LibraryBox: Portable Private Digital Distribution," Make: DIY Projects and Ideas for Makers, January 6, 2014, https://makezine.com/projects/make-37/librarybox/.

  11. Nick Statt, "Alphabet's Project Loon Deploys LTE Balloons in Puerto Rico," The Verge, October 20, 2017, https://www.theverge.com/2017/10/20/16512178/alphabet-project-loon-puerto-rico-lte-balloons-disaster-relief-connectivity.

  12. One property reviewed with a house, two barns, and a total of six acres of land came to $130,000. The rest of the money would be for licensing, equipment, and construction.

  13. "North Korea Fires New Ballistic Missile." BBC News, November 28, 2017, sec. Asia. http://www.bbc.com/news/world-asia-42160227.

  14. "N Korea 'Tested New Long-Range Missile.'" BBC News, November 29, 2017, sec. Asia. http://www.bbc.com/news/world-asia-42162462.

  15. Kim, Christine, and Phil Stewart. "North Korea Says Tests New ICBM, Can Reach All U.S. Mainland." Reuters, November 29, 2017. https://www.reuters.com/article/us-northkorea-missiles/north-korea-fires-ballistic-missile-u-s-government-sources-idUSKBN1DS2MB.

  16. For example: Zimmerman, Malia. "Electromagnetic Pulse Attack on Hawaii Would Devastate the State." Fox News, May 12, 2017. http://www.foxnews.com/us/2017/05/12/electromagnetic-pulse-attack-on-hawaii-would-devaste-state.html.

  17. Apparently the last missile test can reach the Pacific coast of the United States. See: Smith, Josh. "How North Korea’s Latest ICBM Test Stacks up." Reuters, November 29, 2017. https://www.reuters.com/article/us-northkorea-missiles-technology-factbo/how-north-koreas-latest-icbm-test-stacks-up-idUSKBN1DT0IF.

Riccardo Padovani: A generic introduction to Gitlab CI

Tue, 28/11/2017 - 10:00pm

At fleetster we have our own instance of Gitlab and we rely a lot on Gitlab CI. Also our designers and QA guys use (and love) it, thanks to its advanced features.

Gitlab CI is a very powerful system of Continuous Integration, with a lot of different features, and new features land with every release. It has very rich technical documentation, but it lacks a generic introduction for those who want to use it in an already existing setup. A designer or a tester doesn’t need to know how to autoscale it with Kubernetes, or the difference between an image and a service.

But they still need to know what a pipeline is, and how to see a branch deployed to an environment. In this article I will therefore try to cover as many features as possible, highlighting how end users can enjoy them. In the last months I have explained these features to some members of our team, developers included: not everyone knows what Continuous Integration is or has used Gitlab CI in a previous job.

If you want to know why Continuous Integration is important, I suggest reading this article, while for the reasons to use Gitlab CI specifically, I leave the job to Gitlab.com itself.

Introduction

Every time a developer changes some code he saves his changes in a commit. He can then push that commit to Gitlab, so other developers can review the code.

Gitlab will also start some work on that commit, if Gitlab CI has been configured. This work is executed by a runner. A runner is basically a server (it can be a lot of different things, even your PC, but we can simplify it as a server) that executes the instructions listed in the .gitlab-ci.yml file, and reports the result back to Gitlab itself, which will show it in its graphical interface.

When a developer has finished implementing a new feature or a bugfix (an activity that usually requires multiple commits), they can open a merge request, where other members of the team can comment on the code and the implementation.

As we will see, designers and testers can (and really should!) join this process too, giving feedback and suggesting improvements, especially thanks to two features of Gitlab CI: environments and artifacts.

Pipelines

Every commit that is pushed to Gitlab generates a pipeline attached to that commit. If multiple commits are pushed together the pipeline will be created only for the last of them. A pipeline is a collection of jobs split in different stages.

All the jobs in the same stage run in concurrency (if there are enough runners) and the next stage begins only if all the jobs from the previous stage have finished with success.

As soon as a job fails, the entire pipeline fails. There is an exception to this, as we will see below: if a job is marked as manual, a failure will not make the pipeline fail.

The stages are just a logical division between batches of jobs, where it doesn’t make sense to execute the next jobs if the previous ones failed. We can have a build stage, where all the jobs to build the application are executed, and a deploy stage, where the built application is deployed. It doesn’t make much sense to deploy something that failed to build, does it?

No job should depend on any other job in the same stage, while jobs can expect results from jobs in a previous stage.
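
To make stages and jobs concrete, here is a deliberately tiny, hypothetical .gitlab-ci.yml written from the shell; the job names and make targets are invented placeholders, not from any project mentioned here:

# Write a minimal two-stage pipeline definition.
cat > .gitlab-ci.yml <<'EOF'
stages:
  - build
  - deploy

build-app:
  stage: build
  script:
    - make build

deploy-app:
  stage: deploy
  script:
    - make deploy
EOF

With this file, build-app runs first; deploy-app starts only if build-app succeeds.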

Let’s see how Gitlab shows information about stages and stages’ status.

Jobs

A job is a collection of instructions that a runner has to execute. You can see in real time what’s the output of the job, so developers can understand why a job fails.

A job can be automatic, so it starts automatically when a commit is pushed, or manual. A manual job has to be triggered by someone. This can be useful, for example, to automate a deploy, but still deploy only when someone manually approves it. There is also a way to limit who can run a job, so that only trustworthy people can deploy, to continue the previous example.

A job can also build artifacts that users can download; for instance, it can create an APK you can install and test on your device. In this way both designers and testers can try an application without having to ask developers for help.

Other than creating artifacts, a job can deploy an environment, usually reachable at a URL, where users can test the commit.

Job statuses are the same as stage statuses: indeed, stages inherit their status from their jobs.

Artifacts

As we said, a job can create an artifact that users can download and test. It can be anything: an application for Windows, an image generated during the build, or an APK for Android.
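For example, a job that builds an Android APK and exposes it as an artifact could be sketched like this; the Gradle command and the paths are assumptions about a typical Android project:

    build:android:
      stage: build
      script:
        - ./gradlew assembleDebug   # hypothetical build command
      artifacts:
        # Everything matching these paths is uploaded to Gitlab and
        # offered for download from the pipeline and the merge request.
        paths:
          - app/build/outputs/apk/
        expire_in: 1 week   # optional: clean up old artifacts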

So you are a designer, and the merge request has been assigned to you: you need to validate the implementation of the new design!

But how do you do that?

You need to open the merge request and download the artifact, as shown in the figure.

Every pipeline collects all the artifacts from all the jobs, and every job can have multiple artifacts. When you click the download button, a dropdown appears where you can select which artifact you want. After the review, you can leave a comment on the MR.

You can always download artifacts also from pipelines that do not have an open merge request ;-)

I am focusing on merge requests because that is usually where testers, designers, and stakeholders in general enter the workflow.

But merge requests are not linked to pipelines: while they integrate nicely with each other, they do not have any formal relation.

Environments

In a similar way, a job can deploy something to an external server, so you can reach it through the merge request itself.

As you can see, the environment has a name and a link. Just clicking the link takes you to a deployed version of your application (if, of course, your team has set it up correctly).

You can also click on the name of the environment, because Gitlab has other cool features for environments, like monitoring.
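A sketch of a job that deploys a review environment for each branch could look like this; the deploy script and the URL scheme are assumptions, while CI_COMMIT_REF_NAME is a variable provided by Gitlab:

    deploy:review:
      stage: deploy
      script:
        - ./deploy.sh review   # hypothetical deploy script
      environment:
        # The name and URL are what Gitlab shows in the merge
        # request and on the Environments page.
        name: review/$CI_COMMIT_REF_NAME
        url: https://$CI_COMMIT_REF_NAME.review.example.com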

Conclusion

This was a small introduction to some of the features of Gitlab CI: it is very powerful, and using it in the right way allows the whole team to use just one tool to go from planning to deploying. A lot of new features are introduced every month, so keep an eye on the Gitlab blog.

For setting it up, or for more advanced features, take a look at the documentation.

At fleetster we use it not only for running tests, but also for automatic versioning of the software and automatic deploys to testing environments. We have automated other jobs as well (building apps and publishing them on the Play Store, and so on).

Speaking of which, do you want to work in a young and dynamic office with me and a lot of other amazing people? Take a look at the open positions at fleetster!

Kudos to the Gitlab team (and to everyone who helps in their free time) for their awesome work!

If you have any questions or feedback about this blog post, please drop me an email at riccardo@rpadovani.com or tweet me :-) Feel free to suggest additions, or clearer ways to phrase paragraphs (English is not my mother tongue).

Bye for now,
R.

P.S.: if you have found this article helpful and you'd like us to write others, would you mind helping us reach the Ballmer peak and buying me a beer?

Sebastian Kügler: KDE’s Goal: Privacy

Mar, 28/11/2017 - 8:29md

At Akademy 2016, the KDE community started a long-term project to invigorate its development (both technically and organizationally) with more focus. This process of soul-searching has already yielded some very useful results, the most important one so far being agreement on a common community-wide vision:

A world in which everyone has control over their digital life and enjoys freedom and privacy.

This presents a very high-level vision, so a logical follow-up question has been how it influences KDE's activities and actions in practice. KDE, being a fairly loose community with many separate sub-communities and products, is not an easy target to align to a common goal. A common goal may have a very different impact on each of KDE's products: for an email and groupware client it may be very straightforward (e.g. support high-end crypto, work very well with privacy-respecting and/or self-hosted services); for others it may be mostly irrelevant (a natural painting app such as Krita simply doesn't have a lot of privacy exposure); yet for a product such as Plasma, the implications may be fundamental and varied.
So in the pursuit of common ground and a common goal, we had to concentrate on what unites us. There is of course Software Freedom, but that is somewhat vague as well, and it is already entrenched in KDE's DNA: it is not a very useful goal, since it doesn't give us something to strive for, but something we maintain anyway. A “good goal” has to be more specific, yet it should have a clear connection to Free Software, since that is probably the single most important thing that unites us. Almost two years ago, I posited that privacy is Free Software's new milestone, trying to set a new goal post for us to head for. Now the point where these streams join has come, and KDE has chosen privacy as one of its primary goals for the next 3 to 4 years. The full proposal can be read here.
“In 5 years, KDE software enables and promotes privacy”

Privacy, being a vague concept, especially given the diversity within the KDE community, needs some explanation, some operationalization, to make it specific and to establish how we can create software that enables privacy. There are three general focus areas we will concentrate on: security, privacy-respecting defaults, and offering the right tools in the first place.

Security

Improving security means improving our processes to make it easier to spot and fix security problems, and avoiding single points of failure in both software and development processes. This entails code review and quick turnaround times for security fixes.

Privacy-respecting defaults

Defaulting to encrypted connections where possible and storing sensitive data in a secure way. The user should be able to expect that KDE software Does The Right Thing and protects his or her data in the best possible way. Surprises should be avoided as much as possible, and reasonable expectations should be met with best effort.

Offering the right tools

KDE prides itself on providing a very wide range of useful software. From a privacy point of view, some functions are more important than others, of course. We want to offer the tools that most users need in a way that allows them to lead their lives privately, so the toolset needs to be comprehensive and cover as many needs as possible. The tools themselves should make it easy and straightforward to achieve privacy. Some examples:

  • An email client allowing encrypted communication
  • Chat and instant messaging with state-of-the-art protocol security
  • Support for online services that can be operated as a private instance, not depending on a 3rd-party provider

Of course, this is only a small part, and the needs of our userbase vary wildly.

Onwards from here…

In the past, KDE software has come a long way in providing privacy tools, but the toolset is neither comprehensive, nor are privacy and its implications widely seen as critical to our success in this area. Setting privacy as a central goal for KDE means that we will put more focus on this topic, leading to improved tools that allow users to increase their level of privacy. Moreover, it will set an example for others to follow and hopefully raise standards across the whole software ecosystem. There is much work to do, and we are excited to put our shoulders to the wheel and get on with it.
