
Ubuntu Insights: Kubernetes Snaps: The Quick Version

Planet Ubuntu - Thu, 21/09/2017 - 3:46pm

This article originally appeared on George Kraft’s blog

When we built the Canonical Distribution of Kubernetes (CDK), one of our goals was to provide snap packages for the various Kubernetes clients and services: kubectl, kube-apiserver, kubelet, etc.

While we mainly built the snaps for use in CDK, they are freely available to use for other purposes as well. Let’s have a quick look at how to install and configure the Kubernetes snaps directly.

The Client Snaps

This covers: kubectl, kubeadm, kubefed

Nothing special to know about these. Just snap install them and you can use them right away:

$ sudo snap install kubectl --classic
kubectl 1.7.4 from 'canonical' installed
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

The Server Snaps

This covers: kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy

Example: kube-apiserver

We will use kube-apiserver as an example. The other services generally work the same way.

Install with snap install:

$ sudo snap install kube-apiserver

This creates a systemd service named snap.kube-apiserver.daemon. Initially, it will be in an error state because it’s missing important configuration:

$ systemctl status snap.kube-apiserver.daemon
● snap.kube-apiserver.daemon.service - Service for snap application kube-apiserver.daemon
   Loaded: loaded (/etc/systemd/system/snap.kube-apiserver.daemon.service; enabled; vendor preset: enabled)
   Active: inactive (dead) (Result: exit-code) since Fri 2017-09-01 15:54:39 UTC; 11s ago
...

Configure kube-apiserver using snap set:

sudo snap set kube-apiserver \
    etcd-servers=https://172.31.9.254:2379 \
    etcd-certfile=/root/certs/client.crt \
    etcd-keyfile=/root/certs/client.key \
    etcd-cafile=/root/certs/ca.crt \
    service-cluster-ip-range=10.123.123.0/24 \
    cert-dir=/root/certs

Note: Any files used by the service, such as certificate files, must be placed within the /root/ directory to be visible to the service. This limitation allows us to run a few of the services in a strict confinement mode that offers better isolation and security.

After configuring, restart the service and you should see it running:

$ sudo service snap.kube-apiserver.daemon restart
$ systemctl status snap.kube-apiserver.daemon
● snap.kube-apiserver.daemon.service - Service for snap application kube-apiserver.daemon
   Loaded: loaded (/etc/systemd/system/snap.kube-apiserver.daemon.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2017-09-01 16:02:33 UTC; 6s ago
...

Configuration

The keys and values for snap set map directly to arguments that you would
normally pass to the service. You can view a list of arguments by invoking the
service directly, e.g. kube-apiserver -h.

For configuring the snaps, drop the leading dashes and pass them through
snap set. For example, if you want kube-apiserver to be invoked like this:

kube-apiserver --etcd-servers https://172.31.9.254:2379 --allow-privileged

You would configure the snap like this:

snap set kube-apiserver etcd-servers=https://172.31.9.254:2379 allow-privileged=true

Note, also, that we had to specify a value of true for allow-privileged. This
applies to all boolean flags.
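The key-to-flag translation can be pictured with a few lines of shell. This is an illustration of the mapping only, not the snap's actual configure hook code:

```shell
# Sketch: turn key=value pairs into the --key "value" flags the daemon sees.
# Illustrative only; the real configure hook ships inside the snap.
to_args() {
    for kv in "$@"; do
        key=${kv%%=*}   # part before the first '='
        val=${kv#*=}    # part after the first '='
        printf -- '--%s "%s" ' "$key" "$val"
    done
}

to_args etcd-servers=https://172.31.9.254:2379 allow-privileged=true
# prints: --etcd-servers "https://172.31.9.254:2379" --allow-privileged "true"
```

Booleans go through the same path as everything else, which is why allow-privileged needs an explicit true on the right-hand side.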

Going deeper

Want to know more? Here are a couple good things to know:

If you’re confused about what snap set ... is actually doing, you can read
the snap configure hooks in

/snap/<snap-name>/current/meta/hooks/configure

to see how they work.

The configure hook creates an args file here:

/var/snap/<snap-name>/current/args

This contains the actual arguments that get passed to the service by the snap:

$ cat /var/snap/kube-apiserver/current/args
--cert-dir "/root/certs"
--etcd-cafile "/root/certs/ca.crt"
--etcd-certfile "/root/certs/client.crt"
--etcd-keyfile "/root/certs/client.key"
--etcd-servers "https://172.31.9.254:2379"
--service-cluster-ip-range "10.123.123.0/24"

Note: While you can technically bypass snap set and edit the args file directly, it’s best not to do so. The next time the configure hook runs, it will obliterate your changes. This can occur not only from a call to snap set but also during a background refresh of the snap.

The source code for the snaps can be found here: https://github.com/juju-solutions/release/tree/rye/snaps/snap

We’re working on getting these snaps added to the upstream Kubernetes build process. You can follow our progress on that here: https://github.com/kubernetes/release/pull/293

If you have any questions or need help, you can either find us at #juju on
freenode, or open an issue against https://github.com/juju-solutions/bundle-canonical-kubernetes and we’ll help you out as soon as we can.

Scarlett Clark: KDE: Randa 2017! KDE Neon Snappy and more

Planet Ubuntu - Thu, 21/09/2017 - 2:54pm

Another successful Randa meeting! I spent most of my days working on snappy packaging for KDE core applications, and I have most of them done!

Snappy Builds on KDE Neon

We need testers! Please see Using snappy to get started.

In the evenings I worked on getting all my appimage work moved into the KDE infrastructure so that the community can take over.

I learned a great deal about accessibility and have been formulating ways to improve KDE neon in this area.

Randa meetings are crucial to the KDE community for developer interaction, brainstorming, and bringing great new things to KDE.
I encourage all of you to please consider a donation at https://www.kde.org/fundraisers/randameetings2017/

Jamie Strandboge: Easy ssh into libvirt VMs and LXD containers

Planet Ubuntu - Wed, 20/09/2017 - 11:39pm

Finding your VMs and containers via DNS resolution so you can ssh into them can be tricky. I was talking with Stéphane Graber today about this and he reminded me of his excellent article: Easily ssh to your containers and VMs on Ubuntu 12.04.

These days, libvirt has the `virsh domifaddr` command, and LXD has a slightly different way of finding the IP address.

Here is an updated `~/.ssh/config` that I’m now using (thank you Stéphane for the update for LXD):

Host *.lxd
    #User ubuntu
    #StrictHostKeyChecking no
    #UserKnownHostsFile /dev/null
    ProxyCommand nc $(lxc list -c s4 $(echo %h | sed "s/\.lxd//g") | grep RUNNING | cut -d' ' -f4) %p
 
Host *.vm
    #StrictHostKeyChecking no
    #UserKnownHostsFile /dev/null
    ProxyCommand nc $(virsh domifaddr $(echo %h | sed "s/\.vm//g") | awk -F'[ /]+' '{if (NR>2 && $5) print $5}') %p

You may want to uncomment `StrictHostKeyChecking` and `UserKnownHostsFile` depending on your environment (see `man ssh_config` for details).
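The nested `sed` in each `ProxyCommand` simply strips the pseudo-domain suffix so the bare machine name can be handed to `lxc list` or `virsh domifaddr`; for example:

```shell
# Strip the ".lxd" / ".vm" pseudo-suffixes, as the ProxyCommand lines do
echo "bar.lxd" | sed "s/\.lxd//g"   # prints: bar
echo "foo.vm"  | sed "s/\.vm//g"    # prints: foo
```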

With the above, I can ssh in with:

$ ssh foo.vm uptime
16:37:26 up 50 min, 0 users, load average: 0.00, 0.00, 0.00
$ ssh bar.lxd uptime
21:37:35 up 12:39, 2 users, load average: 0.55, 0.73, 0.66

Enjoy!


Filed under: ubuntu, ubuntu-server

Retiring the Debian-Administration.org site

Planet Debian - Wed, 20/09/2017 - 11:00pm

So previously I've documented the setup of the Debian-Administration website, and now that I'm going to retire it, I'm planning how that will work.

There are currently 12 servers powering the site:

  • web1
  • web2
  • web3
  • web4
    • These perform the obvious role, serving content over HTTPS.
  • public
    • This is a HAProxy host which routes traffic to one of the four back-ends.
  • database
    • This stores the site-content.
  • events
    • There was a simple UDP-based protocol which sent notices here, from various parts of the code.
    • e.g. "Failed login for bob from 1.2.3.4".
  • mailer
    • Sends out emails. ("You have a new reply", "You forgot your password..", etc)
  • redis
    • This stored session-data, and short-term cached content.
  • backup
    • This contains backups of each host, via Obnam.
  • beta
    • A test-install of the codebase
  • planet
    • The blog-aggregation site

I've made a bunch of commits recently to drop the event-sending, since no more dynamic actions will be possible. So events can be retired immediately. redis will go when I turn off logins, as there will be no need for sessions/cookies. beta is only used for development, so I'll kill that too. Once logins are gone and anonymous content is disabled, there will be no need to send out emails, so mailer can be shut down.

That leaves a bunch of hosts left:

  • database
    • I'll export the database and kill this host.
    • I will install mariadb on each web-node, and each host will be configured to talk to localhost only
    • I don't need to worry about the four databases receiving diverging content, as updates will be disabled.
  • backup
  • planet
    • This will become orphaned, so I think I'll just move the content to the web-nodes.

All in all I think we'll just have five hosts left:

  • public to do the routing
  • web1-web4 to do the serving.

I think that's sane for the moment. I'm still pondering whether to export the code to static HTML; there's a lot of appeal, as the load would drop a lot, but equally I have a hell of a lot of mod_rewrite redirections in place, and reworking all of them would be a pain. I suspect this is something that will be done in the future, maybe next year.
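For a sense of the rework involved: the redirections in question are Apache mod_rewrite rules of roughly this shape (an invented example for illustration, not one of the site's actual rules), and each one would need a static-file equivalent:

```
RewriteEngine on
# Map a legacy dynamic URL onto the front controller
RewriteRule ^/article/([0-9]+)/?$ /cgi-bin/index.cgi?article=$1 [L]
```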

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

Serge Hallyn: Namespaced File Capabilities

Planet Ubuntu - Wed, 20/09/2017 - 5:37pm
Namespaced file capabilities

As of this past week, namespaced file capabilities are available in the upstream kernel. (Thanks to Eric Biederman for many review cycles and for the final pull request)

TL;DR

Some packages install binaries with file capabilities, and fail to install if you cannot set the file capabilities. Such packages could not be installed from inside a user namespace. With this feature, that problem is fixed.

Yay!

What are they?

POSIX capabilities are pieces of root’s privilege which can be individually used.

File capabilities are POSIX capability sets attached to files. When a file with associated capabilities is executed, the resulting task may end up with privilege even if the calling user was unprivileged.

What’s the problem

In single-user-namespace days, POSIX capabilities were completely orthogonal to userids. You can be a non-root user with CAP_SYS_ADMIN, for instance. This can happen by starting as root, setting PR_SET_KEEPCAPS through prctl(2), and dropping the capabilities you don’t want and changing your uid.  Or, it can happen by a non-root user executing a file with file capabilities.  In order to append such a capability to a file, you require the CAP_SETFCAP capability.

User namespaces had several requirements, including:

  1. an unprivileged user should be able to create a user namespace
  2. root in a user namespace should be privileged against its resources
  3. root in a user namespace should be unprivileged against any resources which it does not own.

So in a post-user-namespace age, an unprivileged user can “have privilege” with respect to files they own. However, if we allowed them to write a file capability onto one of their files, then they could execute that file as an unprivileged user on the host, thereby gaining that privilege. This violates the third user namespace requirement, and is therefore not allowed.

Unfortunately – and fortunately – some software wants to be installed with file capabilities. On the one hand that is great, but on the other hand, if the package installer isn’t able to handle the failure to set file capabilities, then package installs are broken. This was the case for some common packages – for instance httpd on centos.

With namespaced file capabilities, file capabilities continue to be orthogonal with respect to userids mapped into the namespace. However, the capabilities are tagged as belonging to the host uid mapped to the container’s root id (0).  (If uid 0 is not mapped, then file capabilities cannot be assigned.)  This prevents the namespace owner from gaining privilege in a namespace against which they should not be privileged.

 

Disclaimer

The opinions expressed in this blog are my own views and not those of Cisco.


Ubuntu Insights: Kernel Team Summary – September 20, 2017

Planet Ubuntu - Wed, 20/09/2017 - 4:00pm

September 13 through September 18

Development (Artful / 17.10)

https://wiki.ubuntu.com/ArtfulAardvark/ReleaseSchedule

Important upcoming dates:

  • Final Beta - Sept 28 (~1 week away)
  • Kernel Freeze - Oct 5 (~2 weeks away)
  • Final Freeze - Oct 12 (~3 weeks away)
  • Ubuntu 17.10 - Oct 19 (~4 weeks away)

We intend to target a 4.13 kernel for the Ubuntu 17.10 release. A 4.13.1 based kernel is available for testing from the artful-proposed pocket of the Ubuntu archive. As a reminder, the Ubuntu 17.10 Kernel Freeze is Thurs Oct 5, 2017.

Stable (Released & Supported)
  • All kernels have been re-spun to include a fix for high priority CVE-2017-1000251.

  • SRU cycle completed successfully and the following kernel updates have been released:

    trusty 3.13.0-132.181
    trusty/lts-xenial 4.4.0-96.119~14.04.1
    xenial 4.4.0-96.119
    xenial/snapdragon 4.4.0-1076.81
    xenial/raspi2 4.4.0-1074.82
    xenial/aws 4.4.0-1035.44
    xenial/gke 4.4.0-1031.31
    xenial/gcp 4.10.0-1006.6
    zesty 4.10.0-35.39
    zesty/raspi2 4.10.0-1018.21
    • The following kernel snap updates have been released in the snap store:
      gke-kernel 4.4.0.1031.32
      aws-kernel 4.4.0.1035.37
      dragonboard-kernel 4.4.0.1076.68
      pi2-kernel 4.4.0.1074.74
      pc-kernel 4.4.0.96.101
  • Current cycle: 15-Sep through 07-Oct

    15-Sep Last day for kernel commits for this cycle.
    18-Sep - 23-Sep Kernel prep week.
    24-Sep - 06-Oct Bug verification & Regression testing.
    09-Oct Release to -updates.
  • Next cycle: 06-Oct through 28-Oct

    06-Oct Last day for kernel commits for this cycle.

    09-Oct - 14-Oct Kernel prep week.
    15-Oct - 27-Oct Bug verification & Regression testing.
    30-Oct Release to -updates.

Misc

Didier Roche: Ubuntu GNOME Shell in Artful: Day 13

Planet Ubuntu - Wed, 20/09/2017 - 1:25pm

Now that GNOME 3.26 is released, available in Ubuntu artful, and the final GNOME Shell UI is confirmed, it’s time to adapt our default user experience to it. Let’s discuss how we worked with the Dash to Dock upstream on the transparency feature. For more background on our current transition to GNOME Shell in artful, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 13: Adaptive transparency for Ubuntu Dock

The excellent new GNOME Shell 3.26 release ships with dynamic panel transparency by default. If no window is next to the top panel, the bar itself is translucent; if any window is next to it, the panel becomes opaque. This feature is highlighted in the GNOME 3.26 release notes. As we already discussed in a previous blog post, this means that the Ubuntu Dock default opacity level doesn’t fit very well with the transparent top panel on an empty desktop.

Even if there were some discussions within GNOME about keeping or reverting this dynamic transparency feature, we reached out to the Dash to Dock developers during the 3.25.9x period to be prepared. Some excellent discussions then started on the pull request, which was already rolling full speed ahead.

The first idea was to have fully dynamic transparency: one status for the top panel, and another one for the Dock itself. However, this made for a somewhat weird user experience after playing with it a little bit:

We can feel there is too much flickering, with both parts of the UI behaving independently. The idea I raised upstream was thus to consider all Shell UI (which, in the Ubuntu session, is the top panel and Ubuntu Dock) as a single entity. Their opacity status is thus linked, as one UI element. François agreed, having had the same idea in mind, and implemented it. The result is way more natural:

These are implemented as options in the Dash to Dock settings panel, and we simply set this last behavior as the default in Ubuntu Dock.

We made sure that this option is working well with the various dock settings we expose in the Settings application:

In particular, you can see that intelli-hide is working as expected: the dock opacity changes while the Dock is vanishing, and when forcing it to show up again, it appears at the maximum opacity that we set.

The default with no application next to the panel or dock now looks quite good:

The best part is the following: as we are getting closer to release, and there is still a little bit of work upstream to get everything merged in Dash to Dock itself (for options and settings UI that don’t impact Ubuntu Dock), Michele has prepared a cleaned-up branch that we can cherry-pick from directly in our ubuntu-dock branch, and that they will keep compatible with master for us! Now that the Feature Freeze and UI Freeze exceptions have been approved, the Ubuntu Dock package is currently building in the artful repository alongside other fixes and some shortcut improvements.

As usual, if you are eager to experiment with these changes before they migrate to the artful release pocket, you can head over to our official Ubuntu desktop team transitions ppa to get a taste of what’s cooking!

It’s really a pleasure to work with the Dash to Dock upstream, and I’m using this blog opportunity to thank them again for everything they do and for the smooth cooperation on our use case.

James Page: OpenStack Charms @ Denver PTG

Planet Ubuntu - Wed, 20/09/2017 - 12:51pm

Last week, myself and a number of the OpenStack Charms team had the pleasure of attending the OpenStack Project Teams Gathering in Denver, Colorado.

The first two days of the PTG were dedicated to cross project discussions, with the last three days focused on project specific discussion and work in dedicated rooms.

Here’s a summary of the charm related discussion over the week.

Cross Project Discussions

Skip Level Upgrades

This topic was discussed at the start of the week, in the context of supporting upgrades across multiple OpenStack releases for operators.  What was immediately evident was this was really a discussion around ‘fast-forward’ upgrades, rather than actually skipping any specific OpenStack series as part of a cloud upgrade.  Deployments would still need to step through each OpenStack release series in turn, so the discussion centred around how to make this much easier for operators and deployment tools to consume than it has been to-date.

There was general agreement on the principle that all steps required to update a service between series should be supported whilst the service is offline – i.e. all database migrations can be completed without the services actually running; this would allow multiple upgrade steps to be completed without having to start services up on interim steps. Note that a lot of projects already support this approach, but it’s never been agreed as a general policy as part of the ‘supports-upgrade‘ tag, which was one of the actions resulting from this discussion.

In the context of the OpenStack Charms, we already follow something along these lines for minimising the amount of service disruption in the control plane during OpenStack upgrades; with implementation of this approach across all projects, we can avoid having to start up services on each series step as we do today, further optimising the upgrade process delivered by the charms for services that don’t support rolling upgrades.

Policy in Code

Most services in OpenStack rely on a policy.{json,yaml} file to define the policy for role based access into API endpoints – for example, what operations require admin level permissions for the cloud. Moving all policy default definitions to code rather than in a configuration file is a goal for the Queens development cycle.

This approach will make adapting policies as part of an OpenStack Charm based deployment much easier, as we only have to manage the delta on top of the defaults, rather than having to manage the entire policy file for each OpenStack release.  Notably Nova and Keystone have already moved to this approach during previous development cycles.
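With defaults defined in code, a deployment-managed policy file shrinks to just the overrides. A hypothetical delta (the rule name and value here are examples for illustration, not charm defaults) could be as small as:

```json
{
    "os_compute_api:servers:create": "rule:admin_api"
}
```

Everything not listed falls back to the in-code default for that OpenStack release, which is exactly what makes per-release policy management cheaper for the charms.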

Deployment (SIG)

During the first two days, some cross deployment tool discussions were held for a variety of topics; of specific interest for the OpenStack Charms was the discussion around health/status middleware for projects so that the general health of a service can be assessed via its API – this would cover in-depth checks such as access to database and messaging resources, as well as access to other services that the checked service might depend on – for example, can Nova access Keystone’s API for authentication of tokens etc. There was general agreement that this was a good idea, and it will be proposed as a community goal for the OpenStack project.

OpenStack Charms Devroom

Keystone: v3 API as default

The OpenStack Charms have optionally supported Keystone v3 for some time; the Keystone v2 API is officially deprecated, so we had a discussion around the approach for switching the default API deployed by the charms going forward; in summary:

  • New deployments should default to the v3 API and associated policy definitions
  • Existing deployments that get upgraded to newer charm releases should not switch automatically to v3, limiting the impact of services built around v2 based deployments already in production.
  • The charms already support switching from v2 to v3, so v2 deployments can upgrade as and when they are ready to do so.

At some point in time, we’ll have to automatically switch v2 deployments to v3 on OpenStack series upgrade, but that does not have to happen yet.

Keystone: Fernet Token support

The charms currently only support UUID based tokens (since PKI was dropped from Keystone); the preferred format is now Fernet, so we should implement this in the charms – we should be able to leverage the existing PKI key management code to an extent to support Fernet tokens.

Stable Branch Life-cycles

Currently the OpenStack Charms team actively maintains two branches – the current development focus in the master branch, and the most recent stable branch – which right now is stable/17.08.  At the point of the next release, the stable/17.08 branch is no longer maintained, being superseded by the new stable/XX.XX branch.  This is reflected in the promulgated charms in the Juju charm store as well.  Older versions of charms remain consumable (albeit there appears to be some trimming of older revisions which needs investigating). If a bug is discovered in a charm version from an inactive stable branch, the only course of action is to upgrade to the latest stable version for fixes, which may also include new features and behavioural changes.

There are some technical challenges with regard to consumption of multiple stable branches from the charm store – we discussed using a different team namespace for an ‘old-stable’ style consumption model, which is not that elegant but would work.  Maintaining more branches means more resource effort for cherry-picks and reviews, which is not feasible with the current amount of time the development team has for these activities, so no change for the time being!

Service Restart Coordination at Scale

tl;dr no one wants enabling debug logging to take out their rabbits

When running the OpenStack Charms at scale, parallel restarts of daemons for services with large numbers of units (we specifically discussed hundreds of compute units) can generate a high load on underlying control plane infrastructure as daemons drop and re-connect to message and database services potentially resulting in service outages. We discussed a few approaches to mitigate this specific problem, but ended up with focus on how we could implement a feature which batched up restarts of services into chunks based on a user provided configuration option.

You can read the full details in the proposed specification for this work.

We also had some good conversation around how unit level overrides for some configuration options would be useful – supporting the use case where a user wants to enable debug logging for a single unit of a service (maybe it’s causing problems) without having to restart services across all units to support this.  This is not directly supported by Juju today – but we’ll make the request!

Cross Model Relations – Use Cases

We brainstormed some ideas about how we might make use of the new cross-model relation features being developed for future Juju versions; some general ideas:

  • Multiple Region Cloud Deployments
    • Keystone + MySQL and Dashboard in one model (supporting all regions)
    • Each region (including region specific control plane services) deployed into a different model and controller, potentially using different MAAS deployments in different DC’s.
  • Keystone Federation Support
    • Use of Keystone deployments in different models/controllers to build out federated deployments, with one lead Keystone acting as the identity provider to other peon Keystones in different regions or potentially completely different OpenStack Clouds.

We’ll look to use the existing relations for some of these ideas, so as the implementation of this feature in Juju becomes more mature we can be well positioned to support its use in OpenStack deployments.

Deployment Duration

We had some discussion about the length of time taken to deploy a fully HA OpenStack Cloud onto hardware using the OpenStack Charms and how we might improve this by optimising hook executions.

There was general agreement that scope exists in the charms to improve general hook execution time – specifically in charms such as RabbitMQ and Percona XtraDB Cluster which create and distribute credentials to consuming applications.

We also need to ensure that we’re tracking any improvements made with good baseline metrics on charm hook execution times on reference hardware deployments so that any proposed changes to charms can be assessed in terms of positive or negative impact on individual unit hook execution time and overall deployment duration – so expect some work in CI over the next development cycle to support this.

As a follow up to the PTG, the team is looking at whether we can use the presence of a VIP configuration option to signal to the charm to postpone any presentation of access relation data to the point after which HA configuration has been completed and the service can be accessed across multiple units using the VIP.  This would potentially reduce the number (and associated cost) of interim hook executions due to pre-HA relation data being presented to consuming applications.

Mini Sprints

On the Thursday of the PTG, we held a few mini-sprints to get some early work done on features for the Queens cycle; specifically we hacked on:

Good progress was made in most areas with some reviews already up.

We had a good turnout with 10 charm developers in the devroom – thanks to everyone who attended and a special call-out to Billy Olsen who showed up with team T-Shirts for everyone!

We have some new specs already up for review, and I expect to see a few more over the next two weeks!

EOM


Ante Karamatić: Cvrčci (Crickets)

Planet Ubuntu - Wed, 20/09/2017 - 10:05am

 

hitro.hr says that a name reservation is handled within 3 (three) working days. The request was submitted on Tuesday, 12.9. Today is 20.9. All you can hear is crickets.

Mohamad Faizul Zulkifli: APRX On Ubuntu Repository

Planet Ubuntu - Wed, 20/09/2017 - 10:01am
Good news! I just noticed that the aprx package is already listed in the Ubuntu repository.



Aprx is a software package designed to run on any POSIX platform (Linux/BSD/Unix/etc.) and act as an APRS Digipeater and/or Internet Gateway. Aprx is able to support most APRS infrastructure deployments, including single stand-alone digipeaters, receive-only Internet gateways, full RF-gateways for bi-directional routing of traffic, and multi-port digipeaters operating on multiple channels or with multiple directional transceivers.

For more info visit:-

If you want to know more about aprs and ham radio visit:-

The Fridge: Ubuntu Weekly Newsletter 519

Planet Ubuntu - Wed, 20/09/2017 - 6:11am

Welcome to the Ubuntu Weekly Newsletter. This is issue #519 for the weeks of September 5 – 18, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Chris Guiver
  • Athul Muralidhar
  • Alan Diggs (Le Schyken, El Chicken)
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Kubuntu General News: Call for design: Artful Banner for Kubuntu.org website

Planet Ubuntu - Wed, 20/09/2017 - 4:25am

Kubuntu 17.10 — code-named Artful Aardvark — will be released on October 19th, 2017. We need a new banner for the website, and invite artists and designers to submit designs to us based on the Plasma wallpaper and perhaps the mascot design.

The banner is a 1500×385 SVG.

Please submit designs to the Kubuntu-devel mail list.

Call to Mexicans: Open up your wifi #sismo

Planet Debian - Tue, 19/09/2017 - 11:52pm

Hi friends,

~3hr ago, we had a big earthquake, quite close to Mexico City. Fortunately, we are fine, as are (at least) most of our friends and family. Hopefully all of them. But there are many (as in, tens of) damaged or destroyed buildings; there have been over 50 deaths, and the numbers will surely rise until a good understanding of the event's strength is reached.

Mainly in these early hours after the quake, many people need to get in touch with their families and friends. There is a little bit of help we can all provide: communication.

Open up your wireless network. Set it up unencrypted, for anybody to use.

Refrain from over-sharing graphical content: your social network groups don't need to see every video and every photo of the shaking moments and of broken buildings. Downloading all those images takes up valuable capacity on the saturated cellular networks.

This advice might be slow to flow... The important moment to act was two or three hours ago, or even now... But we are likely to have aftershocks; we are likely to have panic moments again. Do a little bit to help others in need!

gwolf http://gwolf.org Gunnar Wolf

Ubuntu Insights: Ubuntu Server Development Summary – 19 Sep 2017

Planet Ubuntu - Tue, 19/09/2017 - 11:16pm
Hello Ubuntu Server!

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list.

Spotlight: KVM Integration test backend

Cloud-init recently added changes to our integration testing backend to support KVM images in addition to LXC. This will allow us to exercise more complex network and storage scenarios to ensure proper behavior in complex environments.

cloud-init and curtin

cloud-init
  • integration tests: Enable NoCloud KVM platform as a test suite. Launches KVM via a wrapper around qemu-system and sets a custom SSH port for communication with the test images.
  • chef users can now pin a desired omnibus_version in the #cloud-config they wish to install with chef: omnibus_version: X.Y.Z
  • Add cloud-init collect-logs command-line utility to tar and gzip cloud-init logs needed to file bugs
  • Apport integration so ubuntu-bug cloud-init works on Artful (to be SRU’d)
  • Fix Bug #1715690 and Bug #1715738 – Cloud-init config modules are now skipped if the distribution doesn’t match a module’s supported distros. Modules can be forced to run by setting ‘unverified_module’.
  • LP: #1675063 VMWare datasource now reacts to user network customization and renders network configuration version 1 from the datasource.
  • LP: #1717147 Fix a CentOS 7 regression which ignored dhclient lease files; the datasource now handles both the ‘dhclient-‘ and ‘dhclient.’ prefixes.
  • LP: #1717611 walinuxagent sometimes fails to deliver certificates during service upgrade. Cloud-init now waits 15 mins instead of 1 for the wireservice to become healthy.
  • LP: #1717598 Fix GCE cloud instance user-data processing.
curtin
  • bzr r.526 – Ensure iscsi service handles shutdown properly. The following iscsi configurations are validated as passing shutdown (plain, lvm over iscsi, and raid over iscsi)
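To illustrate the chef version pin mentioned above, a #cloud-config fragment might look roughly like this (the install_type key and the version number are illustrative assumptions; check the cloud-init chef module documentation for the exact schema):

```yaml
#cloud-config
chef:
  install_type: omnibus      # illustrative: use the omnibus installer
  omnibus_version: "12.3.0"  # hypothetical version; pins the exact omnibus release
```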
Bug Work and Triage

IRC Meetings

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

SRU bug closure highlights
  • LP: #1713990 – virt-install fails on s390x with options --location and --extra-args
  • LP: #1393842 – Cannot create a default RHEL7 VM in virt-manager
  • LP: #1099947 – nut-server fails to execute udevadm for UPS solutions due to permissions issues.
Uploads to the Development Release (Artful)
  • apache2, 2.4.27-2ubuntu3, mdeslaur
  • cloud-init, 0.7.9-283-g7eb3460b-0ubuntu1, smoser
  • cloud-init, 0.7.9-281-g10f067d8-0ubuntu1, smoser
  • cloud-init, 0.7.9-280-ge626966e-0ubuntu1, smoser
  • docker.io, 1.13.1-0ubuntu3, vorlon
  • dpdk, 17.05.2-0ubuntu1, paelzer
  • groovy, 2.4.8-2, None
  • haproxy, 1.7.9-1ubuntu1, chiluk
  • lxc, 2.1.0-0ubuntu1, stgraber
  • maas, 2.3.0~alpha3-6250-g58f83f3-0ubuntu1, andreserl
  • nagios-nrpe, 3.2.0-4ubuntu2, brian-murray
  • ndg-httpsclient, 0.4.3-1, None
  • nmap, 7.60-1ubuntu1, doko
  • simplestreams, 0.1.0~bzr450-0ubuntu1, smoser
  • sysstat, 11.5.7-1ubuntu1, paelzer
  • tomcat8, 8.5.16-1, None
  • walinuxagent, 2.2.17-0ubuntu1, sil2100
Total: 17

Uploads to Supported Releases (Trusty, Xenial, Yakkety, Zesty)
  • apache2, zesty, 2.4.25-3ubuntu2.3, mdeslaur
  • apache2, xenial, 2.4.18-2ubuntu3.5, mdeslaur
  • apache2, trusty, 2.4.7-1ubuntu4.18, mdeslaur
  • bind9, zesty, 1:9.10.3.dfsg.P4-10.1ubuntu5.2, mdeslaur
  • bind9, xenial, 1:9.10.3.dfsg.P4-8ubuntu1.8, mdeslaur
  • bind9, trusty, 1:9.9.5.dfsg-3ubuntu0.16, mdeslaur
  • cloud-init, xenial, 0.7.9-233-ge586fe35-0ubuntu1~16.04.2, smoser
  • cloud-init, zesty, 0.7.9-233-ge586fe35-0ubuntu1~17.04.2, smoser
  • cloud-init, zesty, 0.7.9-233-ge586fe35-0ubuntu1~17.04.1, smoser
  • cloud-init, xenial, 0.7.9-233-ge586fe35-0ubuntu1~16.04.1, smoser
  • libvirt, trusty, 1.2.2-0ubuntu13.1.22, paelzer
  • nut, xenial, 2.7.2-4ubuntu1.2, paelzer
  • nut, trusty, 2.7.1-1ubuntu1.2, paelzer
  • qemu, zesty, 1:2.8+dfsg-3ubuntu2.4, mdeslaur
  • qemu, xenial, 1:2.5+dfsg-5ubuntu10.15, mdeslaur
  • qemu, trusty, 2.0.0+dfsg-2ubuntu1.35, mdeslaur
  • walinuxagent, trusty, 2.2.17-0ubuntu1~14.04.1, sil2100
  • walinuxagent, xenial, 2.2.17-0ubuntu1~16.04.1, sil2100
  • walinuxagent, zesty, 2.2.17-0ubuntu1~17.04.1, sil2100
  • walinuxagent, trusty, 2.2.17-0ubuntu1~14.04.1, sil2100
  • walinuxagent, xenial, 2.2.17-0ubuntu1~16.04.1, sil2100
  • walinuxagent, zesty, 2.2.17-0ubuntu1~17.04.1, sil2100
Total: 22

Contact the Ubuntu Server team

dot-zed extractor

Planet Debian - Tue, 19/09/2017 - 9:29pm

Following last week's .zed format reverse-engineered specification, Loïc Dachary contributed a POC extractor!
It's available at http://www.dachary.org/loic/zed/; it can list non-encrypted metadata without a password, and extract files with a password (or a .pem file).
Leveraging python-olefile and pycrypto, only 500 lines of code (test cases excluded) are enough to implement it.

Beuc's Blog https://blog.beuc.net/tags/planet_debian/ pages tagged planet debian

Reproducible Builds: Weekly report #125

Planet Debian - Tue, 19/09/2017 - 7:45pm

Here's what happened in the Reproducible Builds effort between Sunday September 10 and Saturday September 16 2017:

Upcoming events

Reproducibility work in Debian

devscripts/2.17.10 was uploaded to unstable, fixing #872514. This adds a script, written by Chris Lamb, to report on the reproducibility status of installed packages.

#876055 was opened against Debian Policy to decide the precise requirements we should have on a build's environment variables.

Bugs filed:

Non-maintainer uploads:

  • Holger Levsen:
Reproducibility work in other projects

Patches sent upstream:

  • Bernhard M. Wiedemann:
Reviews of unreproducible packages

16 package reviews have been added, 99 have been updated and 92 have been removed this week, adding to our knowledge about identified issues.

1 issue type has been updated:

diffoscope development
  • Juliana Oliveira Rodrigues:
    • Fix comparisons between different container types not comparing inside files. It was caused by falling back to binary comparison for different file types even for unextracted containers.
    • Add many tests for the fixed behaviour.
    • Other code quality improvements.
  • Chris Lamb:
    • Various code quality and style improvements, some of it using Flake8.
  • Mattia Rizzolo:
    • Add a check to prevent installation with Python < 3.4.
reprotest development
  • Ximin Luo:
    • Split up the very large __init__.py and remove obsolete earlier code.
    • Extend the syntax for the --variations flag to support parameters to certain variations like user_group, and document examples in README.
    • Add a --vary flag for the new syntax and deprecate --dont-vary.
    • Heavily refactor internals to support > 2 builds.
    • Support >2 builds using a new --extra-build flag.
    • Properly sanitize artifact_pattern to avoid arbitrary shell execution.
trydiffoscope development

Version 65 was uploaded to unstable by Chris Lamb including these contributions:

  • Chris Lamb:
    • Packaging maintenance updates.
    • Developer documentation updates.
Reproducible websites development

tests.reproducible-builds.org
  • Vagrant Cascadian and Holger Levsen:
    • Added two armhf boards to the build farm. #874682
  • Holger also:
    • use timeout to limit the diffing of the two build logs to 30 minutes, which greatly reduced Jenkins load again.
Misc.

This week's edition was written by Ximin Luo, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen and Daniel Shahaf & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible.alioth.debian.org/blog/ Reproducible builds blog

Kubuntu General News: Plasma 5.11 beta available in unofficial PPA for testing on Artful

Planet Ubuntu - Tue, 19/09/2017 - 1:12am

Adventurous users and developers running the Artful development release can now also test the beta version of Plasma 5.11. This is experimental and can possibly kill kittens!

Bug reports on this beta go to https://bugs.kde.org, not to Launchpad.

The PPA comes with a WARNING: Artful will ship with Plasma 5.10.5, so please be prepared to use ppa-purge to revert changes. Plasma 5.11 will ship too late for inclusion in Kubuntu 17.10, but should be available via backports soon after release day, October 19th, 2017.

Read more about the beta release: https://www.kde.org/announcements/plasma-5.10.95.php

If you want to test on Artful: sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt-get update && sudo apt full-upgrade -y

The purpose of this PPA is testing, and bug reports go to bugs.kde.org.

Dustin Kirkland: Results of the Ubuntu Desktop Applications Survey

Planet Ubuntu - Tue, 19/09/2017 - 12:34am

I had the distinct honor to deliver the closing keynote of the UbuCon Europe conference in Paris a few weeks ago.  First off -- what a beautiful conference and venue!  Kudos to the organizers who really put together a truly remarkable event.  And many thanks to the gentleman (Elias?) who brought me a bottle of his family's favorite champagne, as a gift on Day 2 :-)  I should give more talks in France!

In my keynote, I presented the results of the Ubuntu 18.04 LTS Default Desktops Applications Survey, which was discussed at length on HackerNews, Reddit, and Slashdot.  With the help of the Ubuntu Desktop team (led by Will Cooke), we processed over 15,000 survey responses and in this presentation, I discussed some of the insights of the data.

The team is now hard at work evaluating many of the suggested applications, for those of you that aren't into the all-Emacs spin of Ubuntu ;-)

Moreover, we're also investigating a potential approach to make the Ubuntu Desktop experience perhaps a bit like those Choose-Your-Own-Adventure books we loved when we were kids, where users have the opportunity to select each of their preferred applications (or stick with the distro default) for a handful of categories during installation.

Marius Quabeck recorded the session and published the audio and video of the presentation here on YouTube:


You can download the slides here, or peruse them below:

Let's talk about the Ubuntu 18.04 LTS Roadmap! from Dustin Kirkland
Cheers,
Dustin

Andres Rodriguez: MAAS 2.3.0 Alpha 3 release!

Planet Ubuntu - Mon, 18/09/2017 - 4:24pm
MAAS 2.3.0 (alpha3)

New Features & Improvements

Hardware Testing (backend only)

MAAS has now introduced an improved hardware testing framework. This new framework allows MAAS to test individual components of a single machine, as well as providing better feedback to the user for each of those tests. This feature has introduced:

  • Ability to define a custom testing script with a YAML definition – Each custom test can be defined with YAML that will provide information about the test. This information includes the script name, description, required packages, and other metadata about what information the script will gather. This information can then be displayed in the UI.

  • Ability to pass parameters – Adds the ability to pass specific parameters to the scripts. For example, in upcoming beta releases, users would be able to select which disks they want to test if they don’t want to test all disks.

  • Running tests individually – Improves how hardware tests are run per component. This allows MAAS to run tests against any individual component (such as a single disk).

  • Adding additional performance tests

    • Added a CPU performance test with 7z.

    • Added a storage performance test with fio.

Please note that individual results for each of the components are currently only available over the API. Upcoming beta releases will include various UI improvements that will allow the user to better surface and interact with these new features.
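For a sense of what the YAML definition for a custom test might contain, here is a rough sketch (the field names and layout are illustrative assumptions, not the final MAAS schema; consult the MAAS documentation for the real format):

```yaml
name: cpu-stress-test            # script name shown in the UI
title: CPU stress test
description: Exercise all CPU cores and report throughput.
script_type: testing
packages:
  apt:
    - stress-ng                  # packages the script requires
timeout: 00:10:00                # maximum runtime before the test is failed
```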

Rack Controller Deployment in Whitebox Switches

MAAS now has the ability to install and configure a MAAS rack controller once a machine has been deployed. As of today, this feature is only available when MAAS detects that the machine is a whitebox switch. As such, all MAAS-certified whitebox switches will be deployed with a MAAS rack controller. Currently certified switches include the Wedge 100 and the Wedge 40.

Please note that this feature makes use of the MAAS snap to configure the rack controller on the deployed machine. Since snap store mirrors are not yet available, the machine will need access to the internet to install the MAAS snap.

Improved DNS Reloading

This new release introduces various improvements to the DNS reload mechanism. This allows MAAS to be smarter about when to reload DNS after changes have been automatically detected or made.

UI – Controller Versions & Notifications

MAAS now surfaces the version of each running controller, and notifies users of any version mismatch between the region and rack controllers. This helps administrators identify mismatches when upgrading a multi-node MAAS cluster, such as an HA setup.

Issues fixed in this release
  • #1702703    Cannot run maas-regiond without /bin/maas-rack
  • #1711414    [2.3, snap] Cannot delete a rack controller running from the snap
  • #1712450    [2.3] 500 error when uploading a new commissioning script
  • #1714273    [2.3, snap] Rack Controller from the snap fails to power manage on IPMI
  • #1715634    ‘tags machines’ takes 30+ seconds to respond with list of 9 nodes
  • #1676992    [2.2] Zesty ISO install fails on region controller due to postgresql not running
  • #1703035    MAAS should warn on version skew between controllers
  • #1708512    [2.3, UI] DNS and Description Labels misaligned on subnet details page
  • #1711700    [2.x] MAAS should avoid updating DNS if nothing changed
  • #1712422    [2.3] MAAS does not report form errors on script upload
  • #1712423    [2.3] 500 error when clicking the ‘Upload’ button with no script selected.
  • #1684094    [2.2.0rc2, UI, Subnets] Make the contextual menu language consistent across MAAS
  • #1688066    [2.2] VNC/SPICE graphical console for debugging purpose on libvirt pod created VMs
  • #1707850    [2.2] MAAS doesn’t report cloud-init failures post-deployment
  • #1711714    [2.3] cloud-init reporting not configured for deployed ubuntu core systems
  • #1681801    [2.2, UI] Device discovery – Tooltip misspelled
  • #1686246    [CLI help] set-storage-layout says Allocated when it should say Ready
  • #1621175    BMC acc setup during auto-enlistment fails on Huawei model RH1288 V3

For full details please visit:

https://launchpad.net/maas/+milestone/2.3.0alpha3

David Tomaschik: Getting Started in Offensive Security

Planet Ubuntu - Mon, 18/09/2017 - 9:00am

Please note that this post, like all of those on my blog, represents only my views, and not those of my employer. Nothing in here implies official hiring policy or requirements.

I’m not going to pretend that this article is unique or has magic bullets to get you into the offensive security space. I also won’t pretend to speak for others in that space or in other areas of information security. It’s a big field, and it turns out that a lot of us have opinions about it. Mubix maintains a list of posts like this so you can see everyone’s opinions. I highly recommend the post “So You Want to Work in Security” by Parisa Tabriz for a view that’s not specific to offensive security. (Though there’s a lot of cross-over.)

My personal area of interest – some would even say expertise – is offensive application security, which includes activities like black box application testing, reverse engineering (but not, generally, malware reversing), penetration testing, and red teaming. I also do whitebox code review and various other things, but mostly I attack things using the same tools and techniques that an illicit attacker would. Of course, I do this in the interest of securing those systems and learning from the experience to help engineer stronger and more robust systems.

I do a lot of work with recruiting and outreach in our company, so I’ve had the chance to talk to many people about what I think makes a good offensive security engineer. After a few dozen times and much reflection, I decided to write out my thoughts on getting started. Don’t believe this is all you need, but it should help you get started.

A Strong Sense of Curiosity and a Desire to Learn

This isn’t a field or a speciality that you get into after a few courses and can stop there. To be successful, you’ll have to constantly keep learning. To keep learning like that, you have to want to keep learning. I spend a lot of my weekends and evenings playing with technology because I want to understand how it works (and consequently, how I can break it). There’s a lot of ways to learn things that are relevant to this field:

  • Reddit
  • Twitter (follow a bunch of industry people)
  • Blogs (perhaps even mine…)
  • Books (my favorites in the resources section)
  • Courses
  • Attend Conferences (Network! Ask people what they’re doing!)
  • Watch Conference Videos
  • Hands on Exercises

Everyone has a different learning style; you'll have to learn what works for you. I learn best by doing (hands-on) and somewhat by reading. Videos are just inspiration for me to look more into something. Twitter and Reddit are the starting grounds to find all the other resources.

I see an innate passion for this field in most of the successful people I know. Many of us would do this even if we weren’t paid (and do some of it in our spare time anyway). You don’t have to spend every waking moment working, but you do have to keep moving forward or get left behind.

Understanding the Underlying System

To identify, understand, and exploit security vulnerabilities, you have to understand the underlying system. I’ve seen “penetration testers” who don’t know that paths on Linux/Unix systems start with / and use / as the path separator. Watching someone try to exploit a potential LFI with \etc\passwd is just painful. (Hint: it doesn’t work.)
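The separator rule is trivial to verify for yourself; a quick sketch using Python's stdlib path modules:

```python
import posixpath
import ntpath

# An absolute POSIX path starts with "/" and uses "/" as the separator.
unix_path = posixpath.join("/", "etc", "passwd")
# Windows paths use "\" (and usually a drive letter).
windows_path = ntpath.join("C:\\", "Windows", "win.ini")

print(unix_path)      # /etc/passwd
print(windows_path)   # C:\Windows\win.ini

# Backslashes mean nothing special to POSIX: "\etc\passwd" is not absolute,
# which is why that LFI attempt fails.
print(posixpath.isabs("\\etc\\passwd"))  # False
```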

If you’re attacking web applications, you should at least have some understanding of:

  • The HTTP Protocol
  • The Same Origin Policy
  • The programming language used
  • The operating system underneath
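As a toy illustration of the first two items: an origin is essentially the (scheme, host, port) triple, and the Same Origin Policy boils down to comparing those triples. A simplified sketch with nothing more than Python's urllib.parse (real browsers handle many more edge cases):

```python
from urllib.parse import urlsplit

# Default ports for the schemes we care about in this sketch.
DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url):
    """Return the (scheme, host, port) triple that defines an origin."""
    parts = urlsplit(url)
    # An implicit port counts as the scheme's default port.
    port = parts.port if parts.port is not None else DEFAULT_PORTS.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(a, b):
    return origin(a) == origin(b)

print(same_origin("https://example.com/a", "https://example.com:443/b"))  # True
print(same_origin("https://example.com/", "http://example.com/"))         # False: scheme differs
print(same_origin("https://example.com/", "https://api.example.com/"))    # False: host differs
```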

For non-web networked applications:

  • A basic understanding of TCP/IP (or UDP/IP, if applicable)
  • The OSI Model
  • Basic computer architecture (stack, heap, etc.)
  • Language used for implementation

You don’t have to know everything about every layer, but each item you don’t know is either something you’ll potentially miss, or something that will cost you time. You’ll learn more as you develop your skills, but there’s some fundamentals that will help you get started:

  • Learn at least one interpreted and one compiled programming language.
    • Python and Ruby are good choices for interpreted languages, as most security tools are written in one of them, so you can modify and create your own tools when needed.
    • C is the classic language for demonstrating memory corruption vulnerabilities, and doesn’t hide a lot of the underlying system, so it's a good choice for a compiled language.
  • Know basic use of both Linux and Windows. Basic use includes:
    • Network configuration
    • Command line basics
    • How services are run
  • Learn a bit about x86/x86-64 architecture.
    • What are pointers?
    • What is the stack and the heap?
    • What are registers?
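Even before dropping into C, the stdlib ctypes module can sketch what a pointer is: just an address through which you can read and write the underlying value (a minimal illustration, not idiomatic Python):

```python
import ctypes

# A pointer is the memory address where a value lives.
value = ctypes.c_int(42)
ptr = ctypes.pointer(value)

print(ptr.contents.value)   # dereference the pointer: prints 42
ptr.contents.value = 1337   # writing through the pointer mutates the original
print(value.value)          # prints 1337
```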

You don’t have to have a full CS degree (but it certainly wouldn’t hurt), but if you don’t understand how developers do their work, you’ll have a much harder time looking for and exploiting vulnerabilities. Many of the best penetration testers and security researchers have had experience as network administrators, systems administrators, or developers – this experience is incredibly useful in understanding the underlying systems.

The CIA Triad

To understand security at all, you should understand the CIA triad. This has nothing to do with the American intelligence agency, but everything to do with 3 pillars of information security: Confidentiality, Integrity, and Availability.

Confidentiality refers to allowing only authorized access to data. For example, preventing access to someone else’s email falls into confidentiality. This idea has strong parallels to the notion of privacy. Encryption is often used (and misused) in the pursuit of confidentiality. Heartbleed is an example of a well-known bug affecting confidentiality.

Integrity refers to allowing only authorized changes to state. This can be the state of data (avoiding file tampering), the state of execution (avoiding remote code execution), or some combination. Most of the “exciting” vulnerabilities in information security impact integrity. GHOST is an example of a well-known bug affecting integrity.

Availability is, perhaps, the easiest concept to understand. It refers to the ability of a service to be accessed by legitimate users when they want to access it (and probably also at the speed they’d like).

These 3 concepts are the main areas of concern for security engineers.

Understanding Vulnerabilities

There are many ways to categorize vulnerabilities, so I won’t try to list them all, but find some and understand how they work. The OWASP Top 10 is a good start for web vulnerabilities. The Modern Binary Exploitation course from RPISEC is a good choice for understanding “Binary Exploitation”.

It’s really valuable to distinguish a bug from a vulnerability. Most vulnerabilities are bugs, but most bugs are not vulnerabilities. Bugs are accidentally-introduced misbehavior in software. Vulnerabilities are ways to gain access to a higher (or different) privilege level in an unintended fashion. Generally, a bug must violate one of the 3 pillars of the CIA triad to be classified as a vulnerability. (Though this is often subjective, see [systemd bug].)

Doing Security

At some point, it stops being about what you know and starts being about what you can do. Knowing things is useful in being able to do, but merely reciting facts is not very useful in actual offensive security. Getting hands-on experience is critical, and this is one field where you need to be careful how to do it. Please remember that, however you choose to practice, you should stay legal and observe all applicable laws.

There’s a number of different options here that build relevant skills:

  • Formal classes with hands-on components
  • CTFs (please note that most CTF challenges have little resemblance to actual security work)
  • Wargames (see CTFs, but some are closer)
  • Lab work
  • Bug bounties

Of these, lab work is the most relevant to me, but also the one requiring the most time investment to set up. Typically, a lab will involve setting up one or more practice machines with known-vulnerable software (though feel free to progress to finding unknown issues). I’ll have a follow-up post with information on building an offensive security practice lab.

Bug bounties are a good option, but to a beginner they’ll be very daunting: much of the low-hanging fruit will be gone, and there are no known vulnerabilities to practice on. Getting into bug bounties without any prior experience at all is likely to teach only frustration and anger.

Resources

Here are some suggested resources for getting started in offensive security. I’ll try to maintain them if I receive suggestions from other members of the community.

Web Resources (Reading/Watching)

Books

Courses

Lab Resources

I’ll have a follow-up about building a lab soon, but there’s some things worth looking at here:

Conclusion

This isn’t an especially easy field to get started in, but it’s the challenge that keeps most of us into it. I know I need to constantly be pushing the edge of my capabilities and of technology for it to stay satisfying. Good luck, and maybe you’ll soon be the author of one of the great resources in our community.

If you have other tips/resources that you think should have been included here, drop me a line or reach me on Twitter.
