Planet GNOME


Debarshi Ray: Stable GNOME Photos Flatpaks moved to Flathub

Tue, 10/10/2017 - 4:22 PM

Starting from version 3.26, the stable GNOME Photos Flatpaks have been moved to Flathub. They are no longer available from GNOME’s Flatpak repository.

To migrate, first delete the old build:

$ flatpak uninstall org.gnome.Photos/x86_64/stable

Then install it from Flathub:

$ flatpak remote-add --from flathub https://flathub.org/repo/flathub.flatpakrepo
$ flatpak install flathub org.gnome.Photos

Note that this is only about the stable build. The nightly continues to be available from its existing location in GNOME’s repository. You can keep updating it with:

$ flatpak update --user org.gnome.Photos/x86_64/master
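If you want to double-check what is installed and which remotes it came from, the usual commands suffice (a quick sketch; the output format varies between flatpak versions):

$ flatpak remotes
$ flatpak list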


Alberto Ruiz: Fleet Commander: production ready!

Tue, 10/10/2017 - 4:06 PM

It’s been a while since I last wrote any updates about Fleet Commander, but that’s not to say there hasn’t been any progress since 0.8. In many senses we (Oliver and I) feel we should present Fleet Commander as a shiny new project: many changes have gone in, and this is the first release we feel is robust enough to call production ready.

What is Fleet Commander?

For those missing some background, let me introduce Fleet Commander to you. Fleet Commander is an integrated solution for large Linux desktop deployments that provides a centrally controlled configuration management interface covering desktop, application and network configuration. For people familiar with Group Policy Objects in Active Directory on Windows, it is very similar.

Many people ask why we don’t use other popular Linux configuration management tools like Ansible or Puppet. The answer is simple: those are designed for servers that run in a controlled environment like a data center or the cloud. They follow a push model, where configuration changes happen as a series of commands run on the server, and if something goes wrong it is easy to audit and roll back as long as you have access to that server. Desktop machines in large corporate environments, however, often run behind a NAT on a public WiFi network; think of a laptop owned by an on-site support engineer who roams from site to site. Fleet Commander instead pulls configuration data and makes it available to apps without running intrusive shell scripts or walking into users’ $HOME directories. Ansible and Puppet did not solve the core problems of desktop session configuration management, so we had to create something new.

At Red Hat we talk to many enterprise desktop customers with a mixed environment of Windows, Macs and Linux desktops and our interaction with them has helped us identify this gap in the GNOME ecosystem and motivated us to roll up our sleeves and try to come up with an answer.

How to build a profile

The way Fleet Commander builds profiles is somewhat interesting compared to its competitors. Our approach is inspired by the good old Sabayon tool. In our admin web UI you get a VM desktop session where you run and configure your apps; Fleet Commander records those changes and lists them. You then select the changes you want to keep, and the final selection is bound together as part of the profile.

You can then apply the profile to individual users, groups, hosts or host groups.

Supported apps/settings

Right now we support anything dconf-based (GSettings), GNOME Online Accounts, LibreOffice and NetworkManager. In the near future we plan to tackle our main gap, which is browser support; we will probably start with just bookmarks, as it is the most demanded use case.
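To give a feel for the kind of settings such a profile ends up carrying, here is a plain GSettings example of a key an admin might pin (this uses the standard gsettings command line tool for illustration, not Fleet Commander’s own profile format; the wallpaper path is hypothetical):

$ gsettings set org.gnome.desktop.background picture-uri 'file:///usr/share/backgrounds/corporate.png'
$ gsettings get org.gnome.desktop.background picture-uri
'file:///usr/share/backgrounds/corporate.png'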

Cockpit integration

The Fleet Commander UI runs on top of the Cockpit admin UI. Cockpit has given us a lot of stuff for free (a basic UI framework, a web service, built-in websocket support for our SPICE javascript client, among many other things).

FreeIPA Integration

A desktop configuration management solution has to be tightly tied to an identity management solution (as it is in Active Directory). FreeIPA is the best Free Software corporate identity management project out there, and integrating with it allowed us to remove quite a bit of complexity from our code base while improving security. FreeIPA now stores the profile data and the assignments to users, groups and hosts.

SSSD

SSSD is the client daemon that enrolls and authenticates a Linux machine in a FreeIPA or Active Directory domain. Having Fleet Commander hook into it was a perfect fit for us, and it also allowed us to remove a bunch of code from previous versions while getting a much more robust implementation. SSSD now retrieves and stores the profile data from FreeIPA.

fleet-commander.org

Our new website is live! We have updated introduction materials and documentation and jimmac has put together a great design and layout. Check it out!
I’d like to thank Alexander Bokovoy and Fabiano Fidencio for their invaluable help extending FreeIPA and SSSD to integrate with Fleet Commander, and Jakub for his help on the website design. If you want to know more, join us on our IRC channel (#fleet-commander @ freenode) and check out our GitHub project page.

It is currently available in Fedora 26 and we are in the process of releasing EPEL packages for CentOS/RHEL.

Lennart Poettering: IP Accounting and Access Lists with systemd

Mon, 09/10/2017 - 10:06 PM

TL;DR: systemd now can do per-service IP traffic accounting, as well as access control for IP address ranges.

Last Friday we released systemd 235. I already blogged about its Dynamic User feature in detail, but there's one more piece of new functionality that I think deserves special attention: IP accounting and access control.

Before v235 systemd already provided per-unit resource management hooks for a number of different kinds of resources: consumed CPU time, disk I/O, memory usage and number of tasks. With v235 another kind of resource can be controlled per-unit with systemd: network traffic (specifically IP).

Three new unit file settings have been added in this context:

  1. IPAccounting= is a boolean setting. If enabled for a unit, all IP traffic sent and received by processes associated with it is counted both in terms of bytes and of packets.

  2. IPAddressDeny= takes an IP address prefix (that means: an IP address with a network mask). All traffic from and to this address will be prohibited for processes of the service.

  3. IPAddressAllow= is the matching positive counterpart to IPAddressDeny=. All traffic matching this IP address/network mask combination will be allowed, even if otherwise listed in IPAddressDeny=.
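To illustrate how these combine before we dive into the details, here's a hypothetical unit snippet (the binary name and the address range are placeholders) that enables accounting and restricts traffic to one internal network (the any shortcut is explained further down):

[Service]
ExecStart=/usr/bin/mydaemon
IPAccounting=yes
IPAddressDeny=any
IPAddressAllow=10.0.0.0/8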

The three options are thin wrappers around kernel functionality introduced with Linux 4.11: the control group eBPF hooks. The actual work is done by the kernel, systemd just provides a number of new settings to configure this facet of it. Note that cgroup/eBPF is unrelated to classic Linux firewalling, i.e. NetFilter/iptables. It's up to you whether you use one or the other, or both in combination (or of course neither).

IP Accounting

Let's have a closer look at the IP accounting logic mentioned above. Let's write a simple unit /etc/systemd/system/ip-accounting-test.service:

[Service]
ExecStart=/usr/bin/ping 8.8.8.8
IPAccounting=yes

This simple unit invokes the ping(8) command to send a series of ICMP/IP ping packets to the IP address 8.8.8.8 (which is the Google DNS server IP; we use it for testing here, since it's easy to remember, reachable everywhere and known to react to ICMP pings; any other IP address responding to pings would be fine to use, too). The IPAccounting= option is used to turn on IP accounting for the unit.

Let's start this service after writing the file. Let's then have a look at the status output of systemctl:

# systemctl daemon-reload
# systemctl start ip-accounting-test
# systemctl status ip-accounting-test
● ip-accounting-test.service
   Loaded: loaded (/etc/systemd/system/ip-accounting-test.service; static; vendor preset: disabled)
   Active: active (running) since Mon 2017-10-09 18:05:47 CEST; 1s ago
 Main PID: 32152 (ping)
       IP: 168B in, 168B out
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/ip-accounting-test.service
           └─32152 /usr/bin/ping 8.8.8.8

Okt 09 18:05:47 sigma systemd[1]: Started ip-accounting-test.service.
Okt 09 18:05:47 sigma ping[32152]: PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
Okt 09 18:05:47 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=1 ttl=59 time=29.2 ms
Okt 09 18:05:48 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=2 ttl=59 time=28.0 ms

This shows the ping command running — it's currently at its second ping cycle as we can see in the logs at the end of the output. More interesting however is the IP: line further up showing the current IP byte counters. It currently shows 168 bytes have been received, and 168 bytes have been sent. That the two counters are at the same value is not surprising: ICMP ping requests and responses are supposed to have the same size. Note that this line is shown only if IPAccounting= is turned on for the service, as only then this data is collected.

Let's wait a bit, and invoke systemctl status again:

# systemctl status ip-accounting-test
● ip-accounting-test.service
   Loaded: loaded (/etc/systemd/system/ip-accounting-test.service; static; vendor preset: disabled)
   Active: active (running) since Mon 2017-10-09 18:05:47 CEST; 4min 28s ago
 Main PID: 32152 (ping)
       IP: 22.2K in, 22.2K out
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/ip-accounting-test.service
           └─32152 /usr/bin/ping 8.8.8.8

Okt 09 18:10:07 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=260 ttl=59 time=27.7 ms
Okt 09 18:10:08 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=261 ttl=59 time=28.0 ms
Okt 09 18:10:09 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=262 ttl=59 time=33.8 ms
Okt 09 18:10:10 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=263 ttl=59 time=48.9 ms
Okt 09 18:10:11 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=264 ttl=59 time=27.2 ms
Okt 09 18:10:12 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=265 ttl=59 time=27.0 ms
Okt 09 18:10:13 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=266 ttl=59 time=26.8 ms
Okt 09 18:10:14 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=267 ttl=59 time=27.4 ms
Okt 09 18:10:15 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=268 ttl=59 time=29.7 ms
Okt 09 18:10:16 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=269 ttl=59 time=27.6 ms

As we can see, after 269 pings the counters are much higher: at 22K.

Note that while systemctl status shows only the byte counters, packet counters are kept as well. Use the low-level systemctl show command to query the current raw values of the in and out packet and byte counters:

# systemctl show ip-accounting-test -p IPIngressBytes -p IPIngressPackets -p IPEgressBytes -p IPEgressPackets
IPIngressBytes=37776
IPIngressPackets=449
IPEgressBytes=37776
IPEgressPackets=449

Of course, the same information is also available via the D-Bus APIs. If you want to process this data further consider talking proper D-Bus, rather than scraping the output of systemctl show.
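For example, here's a rough sketch of reading one of the counters with busctl (assuming the D-Bus property names match the fields systemctl show prints; note how the unit name is escaped in the object path):

# busctl get-property org.freedesktop.systemd1 \
      /org/freedesktop/systemd1/unit/ip_2daccounting_2dtest_2eservice \
      org.freedesktop.systemd1.Service IPIngressBytes
t 37776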

Now, let's stop the service again:

# systemctl stop ip-accounting-test

When a service with such accounting turned on terminates, a log line about all its consumed resources is written to the logs. Let's check with journalctl:

# journalctl -u ip-accounting-test -n 5
-- Logs begin at Thu 2016-08-18 23:09:37 CEST, end at Mon 2017-10-09 18:17:02 CEST. --
Okt 09 18:15:50 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=603 ttl=59 time=26.9 ms
Okt 09 18:15:51 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=604 ttl=59 time=27.2 ms
Okt 09 18:15:52 sigma systemd[1]: Stopping ip-accounting-test.service...
Okt 09 18:15:52 sigma systemd[1]: Stopped ip-accounting-test.service.
Okt 09 18:15:52 sigma systemd[1]: ip-accounting-test.service: Received 49.5K IP traffic, sent 49.5K IP traffic

The last line shown is the interesting one, that shows the accounting data. It's actually a structured log message, and among its metadata fields it contains the more comprehensive raw data:

# journalctl -u ip-accounting-test -n 1 -o verbose
-- Logs begin at Thu 2016-08-18 23:09:37 CEST, end at Mon 2017-10-09 18:18:50 CEST. --
Mon 2017-10-09 18:15:52.649028 CEST [s=89a2cc877fdf4dafb2269a7631afedad;i=14d7;b=4c7e7adcba0c45b69d612857270716d3;m=137592e75e;t=55b1f81298605;x=c3c9b57b28c9490e]
    PRIORITY=6
    _BOOT_ID=4c7e7adcba0c45b69d612857270716d3
    _MACHINE_ID=e87bfd866aea4ae4b761aff06c9c3cb3
    _HOSTNAME=sigma
    SYSLOG_FACILITY=3
    SYSLOG_IDENTIFIER=systemd
    _UID=0
    _GID=0
    _TRANSPORT=journal
    _PID=1
    _COMM=systemd
    _EXE=/usr/lib/systemd/systemd
    _CAP_EFFECTIVE=3fffffffff
    _SYSTEMD_CGROUP=/init.scope
    _SYSTEMD_UNIT=init.scope
    _SYSTEMD_SLICE=-.slice
    CODE_FILE=../src/core/unit.c
    _CMDLINE=/usr/lib/systemd/systemd --switched-root --system --deserialize 25
    _SELINUX_CONTEXT=system_u:system_r:init_t:s0
    UNIT=ip-accounting-test.service
    CODE_LINE=2115
    CODE_FUNC=unit_log_resources
    MESSAGE_ID=ae8f7b866b0347b9af31fe1c80b127c0
    INVOCATION_ID=98a6e756fa9d421d8dfc82b6df06a9c3
    IP_METRIC_INGRESS_BYTES=50880
    IP_METRIC_INGRESS_PACKETS=605
    IP_METRIC_EGRESS_BYTES=50880
    IP_METRIC_EGRESS_PACKETS=605
    MESSAGE=ip-accounting-test.service: Received 49.6K IP traffic, sent 49.6K IP traffic
    _SOURCE_REALTIME_TIMESTAMP=1507565752649028

The interesting fields of this log message are of course IP_METRIC_INGRESS_BYTES=, IP_METRIC_INGRESS_PACKETS=, IP_METRIC_EGRESS_BYTES=, IP_METRIC_EGRESS_PACKETS= that show the consumed data.

The log message carries a message ID that may be used to quickly search for all such resource log messages (ae8f7b866b0347b9af31fe1c80b127c0). We can combine a search term for messages of this ID with journalctl's -u switch to quickly find out about the resource usage of any invocation of a specific service. Let's try:

# journalctl -u ip-accounting-test MESSAGE_ID=ae8f7b866b0347b9af31fe1c80b127c0
-- Logs begin at Thu 2016-08-18 23:09:37 CEST, end at Mon 2017-10-09 18:25:27 CEST. --
Okt 09 18:15:52 sigma systemd[1]: ip-accounting-test.service: Received 49.6K IP traffic, sent 49.6K IP traffic

Of course, the output above shows only one message at the moment, since we started the service only once, but a new one will appear every time you start and stop it again.

The IP accounting logic is also hooked up with systemd-run, which is useful for transiently running a command as a systemd service with IP accounting turned on. Let's try it:

# systemd-run -p IPAccounting=yes --wait wget https://cfp.all-systems-go.io/en/ASG2017/public/schedule/2.pdf
Running as unit: run-u2761.service
Finished with result: success
Main processes terminated with: code=exited/status=0
Service runtime: 878ms
IP traffic received: 231.0K
IP traffic sent: 3.7K

This uses wget to download the PDF version of the 2nd day schedule of everybody's favorite Linux user-space conference All Systems Go! 2017 (BTW, have you already booked your ticket? We are very close to selling out, be quick!). The IP traffic this command generated was 231K ingress and 4K egress. In the systemd-run command line two parameters are important. First of all, we use -p IPAccounting=yes to turn on IP accounting for the transient service (as above). And secondly we use --wait to tell systemd-run to wait for the service to exit. If --wait is used, systemd-run will also show you various statistics about the service that just ran and terminated, including the IP statistics you are seeing if IP accounting has been turned on.

It's fun to combine this sort of IP accounting with interactive transient units. Let's try that:

# systemd-run -p IPAccounting=1 -t /bin/sh
Running as unit: run-u2779.service
Press ^] three times within 1s to disconnect TTY.
sh-4.4# dnf update
…
sh-4.4# dnf install firefox
…
sh-4.4# exit
Finished with result: success
Main processes terminated with: code=exited/status=0
Service runtime: 5.297s
IP traffic received: …B
IP traffic sent: …B

This uses systemd-run's --pty switch (or short: -t), which opens an interactive pseudo-TTY connection to the invoked service process, which is a Bourne shell in this case. Doing this means we have a full, comprehensive shell with job control and everything. Since the shell is running as part of a service with IP accounting turned on, all IP traffic we generate or receive will be accounted for. And as soon as we exit the shell, we'll see what it consumed. (For the sake of brevity I actually didn't paste the whole output above, but truncated core parts. Try it out for yourself, if you want to see the output in full.)

Sometimes it might make sense to turn on IP accounting for a unit that is already running. For that, use systemctl set-property foobar.service IPAccounting=yes, which will instantly turn on accounting for it. Note that it won't count retroactively though: only the traffic sent/received after the point in time you turned it on will be collected. You may turn off accounting for the unit with the same command.
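For example (a quick sketch; the --runtime switch makes the change non-persistent, so it won't survive a reboot):

# systemctl set-property foobar.service IPAccounting=yes
# systemctl set-property --runtime foobar.service IPAccounting=no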

Of course, sometimes it's interesting to collect IP accounting data for all services, and turning on IPAccounting=yes in every single unit is cumbersome. To deal with that there's a global option DefaultIPAccounting= available which can be set in /etc/systemd/system.conf.
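That is, roughly the following snippet (the option lives in the [Manager] section; the manager needs to be reloaded, or the system rebooted, for it to take effect):

# /etc/systemd/system.conf
[Manager]
DefaultIPAccounting=yes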

IP Access Lists

So much for IP accounting. Let's now have a look at IP access control with systemd 235. As mentioned above, the two new unit file settings IPAddressAllow= and IPAddressDeny= may be used for that. They operate in the following way:

  1. If the source address of an incoming packet or the destination address of an outgoing packet matches one of the IP addresses/network masks in the relevant unit's IPAddressAllow= setting then it will be allowed to go through.

  2. Otherwise, if a packet matches an IPAddressDeny= entry configured for the service it is dropped.

  3. If the packet matches neither of the above it is allowed to go through.

Or in other words, IPAddressDeny= implements a blacklist, but IPAddressAllow= takes precedence.

Let's try that out. Let's modify our last example above in order to get a transient service running an interactive shell which has such an access list set:

# systemd-run -p IPAddressDeny=any -p IPAddressAllow=8.8.8.8 -p IPAddressAllow=127.0.0.0/8 -t /bin/sh
Running as unit: run-u2850.service
Press ^] three times within 1s to disconnect TTY.
sh-4.4# ping 8.8.8.8 -c1
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=59 time=27.9 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 27.957/27.957/27.957/0.000 ms
sh-4.4# ping 8.8.4.4 -c1
PING 8.8.4.4 (8.8.4.4) 56(84) bytes of data.
ping: sendmsg: Operation not permitted
^C
--- 8.8.4.4 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

sh-4.4# ping 127.0.0.2 -c1
PING 127.0.0.2 (127.0.0.2) 56(84) bytes of data.
64 bytes from 127.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms

--- 127.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
sh-4.4# exit

The access list we set up uses IPAddressDeny=any in order to define an IP white-list: all traffic will be prohibited for the session, except for what is explicitly white-listed. In this command line, we white-listed two address prefixes: 8.8.8.8 (with no explicit network mask, which means the mask with all bits turned on is implied, i.e. /32), and 127.0.0.0/8. Thus, the service can communicate with Google's DNS server and everything on the local loop-back, but nothing else. The commands run in this interactive shell show this: first we try pinging 8.8.8.8, which happily responds. Then we try to ping 8.8.4.4 (that's Google's other DNS server, but excluded from this white-list), and as we see it is immediately refused with an Operation not permitted error. As a last step we ping 127.0.0.2 (which is on the local loop-back), and we see it works fine again, as expected.

In the example above we used IPAddressDeny=any. The any identifier is a shortcut for writing 0.0.0.0/0 ::/0, i.e. it's a shortcut for everything, on both IPv4 and IPv6. A number of other such shortcuts exist. For example, instead of spelling out 127.0.0.0/8 we could also have used the more descriptive shortcut localhost which is expanded to 127.0.0.0/8 ::1/128, i.e. everything on the local loopback device, on both IPv4 and IPv6.

Being able to configure IP access lists individually for each unit is pretty nice already. However, typically one wants to configure this comprehensively, not just for individual units, but for a set of units in one go or even the system as a whole. In systemd, that's possible by making use of .slice units (for those who don't know systemd that well, slice units are a concept for organizing services in a hierarchical tree for the purpose of resource management): the IP access list in effect for a unit is the combination of the individual IP access lists configured for the unit itself and those of all slice units it is contained in.

By default, system services are assigned to system.slice, which in turn is a child of the root slice -.slice. Either of these two slice units is hence suitable for locking down all system services at once. If an access list is configured on system.slice it will apply only to system services; if configured on -.slice, however, it will apply to all user processes of the system, including all user session processes (which are by default assigned to user.slice, a child of -.slice) in addition to the system services.

Let's make use of this:

# systemctl set-property system.slice IPAddressDeny=any IPAddressAllow=localhost
# systemctl set-property apache.service IPAddressAllow=10.0.0.0/8

The two commands above are a very powerful way to first turn off all IP communication for all system services (with the exception of loop-back traffic), followed by an explicit white-listing of 10.0.0.0/8 (which could refer to the local company network, you get the idea) but only for the Apache service.
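If you'd rather ship this policy as configuration than issue commands, roughly the same can be expressed with unit drop-ins, along these lines (the drop-in file names are arbitrary):

# /etc/systemd/system/system.slice.d/50-ip-lockdown.conf
[Slice]
IPAddressDeny=any
IPAddressAllow=localhost

# /etc/systemd/system/apache.service.d/50-ip-allow.conf
[Service]
IPAddressAllow=10.0.0.0/8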

Use-cases

After playing around a bit with this, let's talk about use-cases. Here are a few ideas:

  1. The IP access list logic can in many ways provide a more modern replacement for the venerable TCP Wrapper, but unlike it, it applies to all IP sockets of a service unconditionally and requires no explicit support in the service's code: no patching required. On the other hand, TCP Wrappers have a number of features this scheme cannot cover; most importantly, systemd's IP access lists operate solely on the level of IP addresses and network masks, and there is no way to configure access by DNS name (though quite frankly, that is a very dubious feature anyway, as doing networking, unsecured networking even, in order to restrict networking sounds quite questionable, at least to me).

  2. It can also replace (or augment) some facets of IP firewalling, i.e. Linux NetFilter/iptables. Right now, systemd's access lists are of course a lot more minimal than NetFilter, but they have one major benefit: they understand the service concept, and thus are a lot more context-aware than NetFilter. Classic firewalls, such as NetFilter, derive most service context from the IP port number alone, but we live in a world where IP port numbers are a lot more dynamic than they used to be. As one example, a BitTorrent client or server may use any IP port it likes for its file transfer, and writing IP firewalling rules matching that precisely is hence hard. With systemd's IP access lists, implementing this is easy: just set the list for your BitTorrent service unit, and all is good.

    Let me stress though that you should be careful when comparing NetFilter with systemd's IP address list logic, it's really like comparing apples and oranges: to start with, the IP address list logic has a clearly local focus, it only knows what a local service is and manages access of it. NetFilter on the other hand may run on border gateways, at a point where the traffic flowing through is pure IP, carrying no information about a systemd unit concept or anything like that.

  3. It's a simple way to lock down distribution/vendor supplied system services by default. For example, if you ship a service that you know never needs to access the network, then simply set IPAddressDeny=any (possibly combined with IPAddressAllow=localhost) for it, and it will live in a very tight networking sand-box it cannot escape from. systemd itself makes use of this for a number of its services by default now. For example, the logging service systemd-journald.service, the login manager systemd-logind or the core-dump processing unit systemd-coredump@.service all have such a rule set out-of-the-box, because we know that none of these services should be able to access the network, under any circumstances.

  4. Because the IP access list logic can be combined with transient units, it can be used to quickly and effectively sandbox arbitrary commands, and even include them in shell pipelines and such. For example, let's say we don't trust our curl implementation (maybe it got modified locally by a hacker, and phones home?), but want to use it anyway to download the slides of my most recent casync talk in order to print them, but want to make sure it doesn't connect anywhere except where we tell it to (and to make this even more fun, let's minimize privileges further, by setting DynamicUser=yes):

    # systemd-resolve 0pointer.de
    0pointer.de: 85.214.157.71
                 2a01:238:43ed:c300:10c3:bcf3:3266:da74
    -- Information acquired via protocol DNS in 2.8ms.
    -- Data is authenticated: no
    # systemd-run --pipe -p IPAddressDeny=any \
         -p IPAddressAllow=85.214.157.71 \
         -p IPAddressAllow=2a01:238:43ed:c300:10c3:bcf3:3266:da74 \
         -p DynamicUser=yes \
         curl http://0pointer.de/public/casync-kinvolk2017.pdf | lp

So much about use-cases. This is by no means a comprehensive list of what you can do with it, after all both IP accounting and IP access lists are very generic concepts. But I do hope the above inspires your fantasy.

What does that mean for packagers?

IP accounting and IP access control are primarily concepts for the local administrator. However, as suggested above, it's a very good idea to ship services that by design have no network-facing functionality with an access list of IPAddressDeny=any (and possibly IPAddressAllow=localhost), in order to improve the out-of-the-box security of our systems.

An option for security-minded distributions might be a more radical approach: ship the system with -.slice or system.slice configured to IPAddressDeny=any by default, and ask the administrator to punch holes into that for each network facing service with systemctl set-property … IPAddressAllow=…. But of course, that's only an option for distributions willing to break compatibility with what was before.

Notes

A couple of additional notes:

  1. IP accounting and access lists may be mixed with socket activation. In this case, it's a good idea to configure access lists and accounting for both the socket unit that activates and the service unit that is activated, as both units maintain fully separate settings. Note that IP accounting and access lists configured on the socket unit apply to all sockets created on behalf of that unit, and even if these sockets are passed on to the activated services, they will still remain in effect and belong to the socket unit. This also means that IP traffic done on such sockets will be accounted to the socket unit, not the service unit. The fact that IP access lists are maintained separately for the kernel sockets created on behalf of the socket unit and for the kernel sockets created by the service code itself enables some interesting uses. For example, it's possible to set a relatively open access list on the socket unit, but a very restrictive access list on the service unit, thus making the sockets configured through the socket unit the only way in and out of the service.

  2. systemd's IP accounting and access lists apply to IP sockets only, not to sockets of any other address families. That also means that AF_PACKET (i.e. raw) sockets are not covered, so it's a good idea to combine IP access lists with RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6 in order to lock this down (see the sketch after this list).

  3. You may wonder if the per-unit resource log message and systemd-run --wait may also show you details about other types of resources consumed by a service. The answer is yes: if you turn on CPUAccounting= for a service, you'll also see a summary of consumed CPU time in the log message and the command output. And we are planning to hook up IOAccounting= the same way too, soon.

  4. Note that IP accounting and access lists aren't entirely free. systemd inserts an eBPF program into the IP pipeline to make this functionality work. However, eBPF execution has already been optimized for speed in recent kernel versions, and given that it is currently a focus of interest for many, I'd expect it to be optimized even further, so that the cost of enabling these features will be negligible, if it isn't already.

  5. IP accounting is currently not recursive. That means you cannot use a slice unit to join the accounting of multiple units into one. This is something we definitely want to add, but requires some more kernel work first.

  6. You might wonder how the PrivateNetwork= setting relates to IPAddressDeny=any. Superficially they have similar effects: they make the network unavailable to services. However, looking more closely there are a number of differences. PrivateNetwork= is implemented using Linux network name-spaces. As such it entirely detaches all networking of a service from the host, including non-IP networking. It does so by creating a private little environment the service lives in, where communication with itself is still allowed. In addition, using the JoinsNamespaceOf= dependency, additional services may be added to the same environment, thus permitting communication with each other but not with anything outside of this group. IPAddressAllow= and IPAddressDeny= are much less invasive. First of all they apply to IP networking only, and can match against specific IP addresses. A service running with PrivateNetwork= turned off but IPAddressDeny=any turned on may enumerate the network interfaces and their configured IP addresses, even though it cannot actually do any IP communication. On the other hand, if you turn on PrivateNetwork=, all network interfaces besides lo disappear. Long story short: depending on your use-case, one, the other, both or neither might be suitable for sand-boxing your service. If possible I'd always turn on both, for best security, and that's what we do for all of systemd's own long-running services.
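Regarding note 2 above, here's a minimal sketch of what such a combined lock-down could look like in a unit file (the binary name is a placeholder):

[Service]
ExecStart=/usr/bin/mylocaldaemon
IPAddressDeny=any
IPAddressAllow=localhost
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6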

And that's all for now. Have fun with per-unit IP accounting and access lists!

Richard Hughes: fwupd hits 1.0.0

Mon, 09/10/2017 - 3:05 PM

Today I released fwupd version 1.0.0, a version number most Open Source projects seldom reach. Unusually it bumps the soname so any applications that link against libfwupd will need to be rebuilt. The reason for bumping is that we removed a lot of the cruft we’ve picked up over the couple of years since we started the project, and also took the opportunity to rename some public interfaces that are now used differently to how they were envisaged. Since we started the project, we’ve basically re-architected the way the daemon works, re-imagined how the metadata is downloaded and managed, and changed core ways we’ve done the upgrades themselves. It’s no surprise that removing all that crufty code makes the core easier to understand and maintain. I’m intending to support the 0_9_X branch for a long time, as that’s what’s going to stay in Fedora 26 and the upcoming Fedora 27.

We now support 72 different kinds of hardware, with support for another dozen or so currently being worked on. Lots of vendors are now either using the LVFS to distribute firmware, or are testing with one or two devices in secret. Although we already have 10 (!) different ways of applying firmware, vendors are slowly either switching to a more standard mechanism for new products (UpdateCapsule/DFU/Redfish) or building custom plugins for fwupd to update existing hardware.

Every month 165,000+ devices get updated with fwupd using firmware from the LVFS; possibly more, as people behind corporate mirrors and caching servers don’t show up in the stats. Since we started this project, at least 600,000 pieces of hardware have received new firmware. Many people have updated firmware, fixing bugs and solving security issues, without having to understand all the horrible details involved.
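For those who haven’t tried it yet, the typical client-side flow is just a few commands (a sketch; supported devices and output obviously vary by machine):

$ fwupdmgr refresh      # fetch the latest firmware metadata from the LVFS
$ fwupdmgr get-updates  # list devices that have updates available
$ fwupdmgr update       # download and apply the updates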

I guess I should say thanks; to all the people both uploading firmware, and the people using, testing, and reporting bugs. Dell have been a huge supporter since the very early days, and now smaller companies and giants like Logitech are also supporting the project. Red Hat have given me the time and resources that I need to build something as complicated and political as shared infrastructure like this. There is literally no other company on the planet that I would rather work for.

So, go build fwupd 1.0.0 in your distro development branch and report any problems. 1.0.1 will follow soon with fixes I’m sure, and hopefully we can make some more vendor announcements in the near future. There are a few big vendors working on things in secret that I’m sure you’ll all know :)

Max Huang: GUADEC 2017 Notes

Mon, 09/10/2017 - 9:16 AM

This is my 2nd time attending GUADEC :)

It’s great to see geeko there. I got an openSUSE 2017 t-shirt because I couldn’t attend this year.


GNOME Love and GNOME newcomers always catch my eye at many events.


Karen’s speeches always touch people’s hearts :)


It’s great to learn a brief history of GNOME, since I am new to GNOME.



This year, I reported on GNOME.Asia’s status at the GNOME Annual General Meeting and invited everyone to attend GNOME.Asia Summit 2017 in Chongqing, China.

I also gave a lightning talk about the { GNOME, openSUSE }.Asia call for papers.

Thanks to the GUADEC team and sponsors for giving us this wonderful event.

BoF time is always a good time for teams to meet and contribute.


Lennart Poettering: Dynamic Users with systemd

Fri, 06/10/2017 - 7:21 PM

TL;DR: you may now configure systemd to dynamically allocate a UNIX user ID for service processes when it starts them and release it when it stops them. It's pretty secure, mixes well with transient services, socket activated services and service templating.

Today we released systemd 235. Among other improvements this greatly extends the dynamic user logic of systemd. Dynamic users are a powerful but little known concept, supported in its basic form since systemd 232. With this blog story I hope to make it a bit better known.

The UNIX user concept is the most basic and well-understood security concept in POSIX operating systems. It is UNIX/POSIX' primary security concept, the one everybody can agree on, and most security concepts that came after it (such as process capabilities, SELinux and other MACs, user name-spaces, …) in some form or another build on it, extend it or at least interface with it. If you build a Linux kernel with all security features turned off, the user concept is pretty much the one you'll still retain.

Originally, the user concept was introduced to make multi-user systems a reality, i.e. systems enabling multiple human users to share the same system at the same time, cleanly separating their resources and protecting them from each other. The majority of today's UNIX systems don't really use the user concept like that anymore though. Most of today's systems probably have only one actual human user (or even less!), but their user databases (/etc/passwd) list a good number more entries than that. Today, the majority of UNIX users in most environments are system users, i.e. users that are not the technical representation of a human sitting in front of a PC anymore, but the security identity a system service — an executable program — runs as. Even though traditional, simultaneous multi-user systems slowly became less relevant, their ground-breaking basic concept became the cornerstone of UNIX security. The OS is nowadays partitioned into isolated services — and each service runs as its own system user, and thus within its own, minimal security context.

The people behind the Android OS realized the relevance of the UNIX user concept as the primary security concept on UNIX, and took its use even further: on Android not only system services take benefit of the UNIX user concept, but each UI app gets its own, individual user identity too — thus neatly separating app resources from each other, and protecting app processes from each other, too.

Back in the more traditional Linux world things are a bit less advanced in this area. Even though users are the quintessential UNIX security concept, allocation and management of system users is still a pretty limited, raw and static affair. In most cases, RPM or DEB package installation scripts allocate a fixed number of (usually one) system users when you install the package of a service that wants to take benefit of the user concept, and from that point on the system user remains allocated on the system and is never deallocated again, even if the package is later removed again. Most Linux distributions limit the number of system users to 1000 (which isn't particularly a lot). Allocating a system user is hence expensive: the number of available users is limited, and there's no defined way to dispose of them after use. If you make use of system users too liberally, you are very likely to run out of them sooner rather than later.

You may wonder why system users are generally not deallocated when the package that registered them is uninstalled from a system (at least on most distributions). The reason for that is one relevant property of the user concept (you might even want to call this a design flaw): user IDs are sticky to files (and other objects such as IPC objects). If a service running as a specific system user creates a file at some location, and is then terminated and its package and user removed, then the created file still belongs to the numeric ID ("UID") the system user originally got assigned. When the next system user is allocated and — due to ID recycling — happens to get assigned the same numeric ID, then it will also gain access to the file, and that's generally considered a problem, given that the file belonged to a potentially very different service once upon a time, and likely should not be readable or changeable by anything coming after it. Distributions hence tend to avoid UID recycling which means system users remain registered forever on a system after they have been allocated once.

The above is a description of the status quo ante. Let's now focus on what systemd's dynamic user concept brings to the table, to improve the situation.

Introducing Dynamic Users

With systemd dynamic users we hope to make it easier and cheaper to allocate system users on-the-fly, thus substantially increasing the possible uses of this core UNIX security concept.

If you write a systemd service unit file, you may enable the dynamic user logic for it by setting the DynamicUser= option in its [Service] section to yes. If you do, a system user is dynamically allocated the instant the service binary is invoked, and released again when the service terminates. The user is automatically allocated from the UID range 61184–65519, by looking for a so far unused UID.

Now you may wonder, how does this concept deal with the sticky user issue discussed above? In order to counter the problem, two strategies easily come to mind:

  1. Prohibit the service from creating any files/directories or IPC objects

  2. Automatically remove the files/directories or IPC objects the service created when it shuts down.

In systemd we implemented both strategies, but for different parts of the execution environment. Specifically:

  1. Setting DynamicUser=yes implies ProtectSystem=strict and ProtectHome=read-only. These sand-boxing options turn off write access to pretty much the whole OS directory tree, with a few relevant exceptions, such as the API file systems /proc, /sys and so on, as well as /tmp and /var/tmp. (BTW: setting these two options on your regular services that do not use DynamicUser= is a good idea too, as it drastically reduces the exposure of the system to exploited services.)

  2. Setting DynamicUser=yes implies PrivateTmp=yes. This option sets up /tmp and /var/tmp for the service in a way that it gets its own, disconnected version of these directories, that are not shared by other services, and whose life-cycle is bound to the service's own life-cycle. Thus if the service goes down, the user is removed and all its temporary files and directories with it. (BTW: as above, consider setting this option for your regular services that do not use DynamicUser= too, it's a great way to lock things down security-wise.)

  3. Setting DynamicUser=yes implies RemoveIPC=yes. This option ensures that when the service goes down all SysV and POSIX IPC objects (shared memory, message queues, semaphores) owned by the service's user are removed. Thus, the life-cycle of the IPC objects is bound to the life-cycle of the dynamic user and service, too. (BTW: yes, here too, consider using this in your regular services, too!)

With these four settings in effect, services with dynamic users are nicely sand-boxed. They cannot create files or directories, except in /tmp and /var/tmp, where they will be removed automatically when the service shuts down, as will any IPC objects created. Sticky ownership of files/directories and IPC objects is hence dealt with effectively.

The RuntimeDirectory= option may be used to open up the sandbox a bit to external programs. If you set it to a directory name of your choice, it will be created below /run when the service is started, and removed in its entirety when it is terminated. The ownership of the directory is assigned to the service's dynamic user. This way, a dynamic user service can expose API interfaces (AF_UNIX sockets, …) to other services at a well-defined place and again bind its life-cycle to the service's own run-time. Example: set RuntimeDirectory=foobar in your service, and watch how a directory /run/foobar appears at the moment you start the service, and disappears the moment you stop it again. (BTW: much like the other settings discussed above, RuntimeDirectory= may be used outside of the DynamicUser= context too, and is a nice way to run any service with a properly owned, life-cycle-managed run-time directory.)
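Putting the pieces discussed so far into a unit, here's a minimal sketch (the binary and directory names are placeholders):

[Service]
ExecStart=/usr/bin/foobard
DynamicUser=yes
RuntimeDirectory=foobar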

Persistent Data

Of course, a service running in such an environment (although already very useful for many cases!) has a major limitation: it cannot leave persistent data around that it can reuse on a later run. As pretty much the whole OS directory tree is read-only to it, there's simply no place it could put data that survives from one service invocation to the next.

With systemd 235 this limitation is removed: there are now three new settings: StateDirectory=, LogsDirectory= and CacheDirectory=. In many ways they operate like RuntimeDirectory=, but create sub-directories below /var/lib, /var/log and /var/cache, respectively. There's one major difference beyond that however: directories created that way are persistent, they will survive the run-time cycle of a service, and thus may be used to store data that is supposed to stay around between invocations of the service.
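Extending the sketch from above with persistent directories (again, all names are placeholders):

[Service]
ExecStart=/usr/bin/foobard
DynamicUser=yes
StateDirectory=foobar
CacheDirectory=foobar
LogsDirectory=foobar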

Of course, the obvious question to ask now is: how do these three settings deal with the sticky file ownership problem?

For that we lifted a concept from container managers. Container managers have a very similar problem: each container and the host typically end up using a very similar set of numeric UIDs, and unless user name-spacing is deployed this means that host users might be able to access the data of specific containers that also have a user by the same numeric UID assigned, even though it actually refers to a very different identity in a different context. (Actually, it's even worse than just getting access: due to the existence of setuid file bits, access might translate into privilege elevation.) The way container managers protect the container images from the host (and from each other to some level) is by placing the container trees below a boundary directory, with very restrictive access modes and ownership (0700 and root:root or so). A host user hence cannot take advantage of the files/directories of a container user of the same UID inside of a local container tree, simply because the boundary directory makes it impossible to even reference files in it. After all on UNIX, in order to get access to a specific path you need access to every single component of it.

How is that applied to dynamic user services? Let's say StateDirectory=foobar is set for a service that has DynamicUser= turned off. The instant the service is started, /var/lib/foobar is created as state directory, owned by the service's user and remains in existence when the service is stopped. If the same service now is run with DynamicUser= turned on, the implementation is slightly altered. Instead of a directory /var/lib/foobar a symbolic link by the same path is created (owned by root), pointing to /var/lib/private/foobar (the latter being owned by the service's dynamic user). The /var/lib/private directory is created as boundary directory: it's owned by root:root, and has a restrictive access mode of 0700. Both the symlink and the service's state directory will survive the service's life-cycle; the state directory continues to be owned by the now disposed dynamic UID, but it is protected from other host users (and from other services which might get the same dynamic UID assigned due to UID recycling) by the boundary directory.

The obvious question to ask now is: but if the boundary directory prohibits access to the directory from unprivileged processes, how can the service itself which runs under its own dynamic UID access it anyway? This is achieved by invoking the service process in a slightly modified mount name-space: it will see most of the file hierarchy the same way as everything else on the system (modulo /tmp and /var/tmp as mentioned above), except for /var/lib/private, which is over-mounted with a read-only tmpfs file system instance, with a slightly more liberal access mode permitting the service read access. Inside of this tmpfs file system instance another mount is placed: a bind mount to the host's real /var/lib/private/foobar directory, onto the same name. Putting this together, this means that superficially everything looks the same and is available at the same place on the host and from inside the service, but two important changes have been made: the /var/lib/private boundary directory lost its restrictive character inside the service, and has been emptied of the state directories of any other service, thus making the protection complete. Note that the symlink /var/lib/foobar hides the fact that the boundary directory is used (making it little more than an implementation detail), as the directory is available this way under the same name as it would be if DynamicUser= was not used. Long story short: for the daemon and from the view from the host the indirection through /var/lib/private is mostly transparent.

This logic of course raises another question: what happens to the state directory if a dynamic user service is started with a state directory configured, gets UID X assigned on this first invocation, then terminates and is restarted and now gets UID Y assigned on the second invocation, with X ≠ Y? On the second invocation the directory — and all the files and directories below it — will still be owned by the original UID X so how could the second instance running as Y access it? Our way out is simple: systemd will recursively change the ownership of the directory and everything contained within it to UID Y before invoking the service's executable.

Of course, such recursive ownership changing (chown()ing) of whole directory trees can become expensive (though according to my experiences, IRL and for most services it's much cheaper than you might think), hence in order to optimize behavior in this regard, the allocation of dynamic UIDs has been tweaked in two ways to avoid the necessity to do this expensive operation in most cases: firstly, when a dynamic UID is allocated for a service an allocation loop is employed that starts out with a UID hashed from the service's name. This means a service by the same name is likely to always use the same numeric UID. That means that a stable service name translates into a stable dynamic UID, and that means recursive file ownership adjustments can be skipped (of course, after validation). Secondly, if the configured state directory already exists, and is owned by a suitable currently unused dynamic UID, it's preferably used above everything else, thus maximizing the chance we can avoid the chown()ing. (That all said, ultimately we have to face it, the currently available UID space of 4K+ is very small still, and conflicts are pretty likely sooner or later, thus a chown()ing has to be expected every now and then when this feature is used extensively).

Note that CacheDirectory= and LogsDirectory= work very similarly to StateDirectory=. The only difference is that they manage directories below the /var/cache and /var/log directories, and their boundary directories hence are /var/cache/private and /var/log/private, respectively.

Examples

So, after all this introduction, let's have a look how this all can be put together. Here's a trivial example:

# cat > /etc/systemd/system/dynamic-user-test.service <<EOF
[Service]
ExecStart=/usr/bin/sleep 4711
DynamicUser=yes
EOF
# systemctl daemon-reload
# systemctl start dynamic-user-test
# systemctl status dynamic-user-test
● dynamic-user-test.service
   Loaded: loaded (/etc/systemd/system/dynamic-user-test.service; static; vendor preset: disabled)
   Active: active (running) since Fri 2017-10-06 13:12:25 CEST; 3s ago
 Main PID: 2967 (sleep)
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/dynamic-user-test.service
           └─2967 /usr/bin/sleep 4711

Okt 06 13:12:25 sigma systemd[1]: Started dynamic-user-test.service.
# ps -e -o pid,comm,user | grep 2967
 2967 sleep           dynamic-user-test
# id dynamic-user-test
uid=64642(dynamic-user-test) gid=64642(dynamic-user-test) groups=64642(dynamic-user-test)
# systemctl stop dynamic-user-test
# id dynamic-user-test
id: ‘dynamic-user-test’: no such user

In this example, we create a unit file with DynamicUser= turned on, start it, check if it's running correctly, have a look at the service process' user (which is named like the service; systemd does this automatically if the service name is suitable as user name, and you didn't configure any user name to use explicitly), stop the service and verify that the user ceased to exist too.

That's already pretty cool. Let's step it up a notch, by doing the same in an interactive transient service (for those who don't know systemd well: a transient service is a service that is defined and started dynamically at run-time, for example via the systemd-run command from the shell. Think: run a service without having to write a unit file first):

# systemd-run --pty --property=DynamicUser=yes --property=StateDirectory=wuff /bin/sh
Running as unit: run-u15750.service
Press ^] three times within 1s to disconnect TTY.
sh-4.4$ id
uid=63122(run-u15750) gid=63122(run-u15750) groups=63122(run-u15750) context=system_u:system_r:initrc_t:s0
sh-4.4$ ls -al /var/lib/private/
total 0
drwxr-xr-x. 3 root       root        60  6. Okt 13:21 .
drwxr-xr-x. 1 root       root       852  6. Okt 13:21 ..
drwxr-xr-x. 1 run-u15750 run-u15750   8  6. Okt 13:22 wuff
sh-4.4$ ls -ld /var/lib/wuff
lrwxrwxrwx. 1 root root 12  6. Okt 13:21 /var/lib/wuff -> private/wuff
sh-4.4$ ls -ld /var/lib/wuff/
drwxr-xr-x. 1 run-u15750 run-u15750 0  6. Okt 13:21 /var/lib/wuff/
sh-4.4$ echo hello > /var/lib/wuff/test
sh-4.4$ exit
exit
# id run-u15750
id: ‘run-u15750’: no such user
# ls -al /var/lib/private
total 0
drwx------. 1 root  root   66  6. Okt 13:21 .
drwxr-xr-x. 1 root  root  852  6. Okt 13:21 ..
drwxr-xr-x. 1 63122 63122   8  6. Okt 13:22 wuff
# ls -ld /var/lib/wuff
lrwxrwxrwx. 1 root root 12  6. Okt 13:21 /var/lib/wuff -> private/wuff
# ls -ld /var/lib/wuff/
drwxr-xr-x. 1 63122 63122 8  6. Okt 13:22 /var/lib/wuff/
# cat /var/lib/wuff/test
hello

The above invokes an interactive shell as transient service run-u15750.service (systemd-run picked that name automatically, since we didn't specify anything explicitly) with a dynamic user whose name is derived automatically from the service name. Because StateDirectory=wuff is used, a persistent state directory for the service is made available as /var/lib/wuff. In the interactive shell running inside the service, the ls commands show the /var/lib/private boundary directory and its contents, as well as the symlink that is placed for the service. Finally, before exiting the shell, a file is created in the state directory. Back in the original command shell we check if the user is still allocated: it is not, of course, since the service ceased to exist when we exited the shell and with it the dynamic user associated with it. From the host we check the state directory of the service, with similar commands as we did from inside of it. We see that things are set up pretty much the same way in both cases, except for two things: first of all the user/group of the files is now shown as raw numeric UIDs instead of the user/group names derived from the unit name. That's because the user ceased to exist at this point, and "ls" shows the raw UID for files owned by users that don't exist. Secondly, the access mode of the boundary directory is different: when we look at it from outside of the service it is not readable by anyone but root, whereas from inside we saw it being world-readable.

Now, let's see how things look if we start another transient service, reusing the state directory from the first invocation:

# systemd-run --pty --property=DynamicUser=yes --property=StateDirectory=wuff /bin/sh
Running as unit: run-u16087.service
Press ^] three times within 1s to disconnect TTY.
sh-4.4$ cat /var/lib/wuff/test
hello
sh-4.4$ ls -al /var/lib/wuff/
total 4
drwxr-xr-x. 1 run-u16087 run-u16087  8  6. Okt 13:22 .
drwxr-xr-x. 3 root       root       60  6. Okt 15:42 ..
-rw-r--r--. 1 run-u16087 run-u16087  6  6. Okt 13:22 test
sh-4.4$ id
uid=63122(run-u16087) gid=63122(run-u16087) groups=63122(run-u16087) context=system_u:system_r:initrc_t:s0
sh-4.4$ exit
exit

Here, systemd-run picked a different auto-generated unit name, but the used dynamic UID is still the same, as it was read from the pre-existing state directory, and was otherwise unused. As we can see the test file we generated earlier is accessible and still contains the data we left in there. Do note that the user name is different this time (as it is derived from the unit name, which is different), but the UID it is assigned to is the same one as on the first invocation. We can thus see that the mentioned optimization of the UID allocation logic (i.e. that we start the allocation loop from the UID owner of any existing state directory) took effect, so that no recursive chown()ing was required.

And that's the end of our example, which hopefully illustrated a bit how this concept and implementation works.

Use-cases

Now that we had a look at how to enable this logic for a unit and how it is implemented, let's discuss where this actually could be useful in real life.

  • One major benefit of dynamic user IDs is that running a privilege-separated service leaves no artifacts in the system. A system user is allocated and made use of, but it is discarded automatically in a safe and secure way after use, in a fashion that is safe for later recycling. Thus, quickly invoking a short-lived service for processing some job can be protected properly through a user ID without having to pre-allocate it and without this draining the available UID pool any longer than necessary.

  • In many cases, starting a service no longer requires package-specific preparation. Or in other words, quite often useradd/mkdir/chown/chmod invocations in "post-inst" package scripts, as well as sysusers.d and tmpfiles.d drop-ins become unnecessary, as the DynamicUser= and StateDirectory=/CacheDirectory=/LogsDirectory= logic can do the necessary work automatically, on-demand and with a well-defined life-cycle.

  • By combining dynamic user IDs with the transient unit concept, new creative ways of sand-boxing are made available. For example, let's say you don't trust the correct implementation of the sort command. You can now lock it into a simple, robust, dynamic UID sandbox with a simple systemd-run and still integrate it into a shell pipeline like any other command. Here's an example, showcasing a shell pipeline whose middle element runs under a dynamically allocated, on-the-fly UID that is released when the pipeline ends.

    # cat some-file.txt | systemd-run --pipe --property=DynamicUser=1 sort -u | grep -i foobar > some-other-file.txt
  • By combining dynamic user IDs with the systemd templating logic it is now possible to do much more fine-grained and fully automatic UID management. For example, let's say you have a template unit file /etc/systemd/system/foobard@.service:

    [Service]
    ExecStart=/usr/bin/myfoobarserviced
    DynamicUser=1
    StateDirectory=foobar/%i

    Now, let's say you want to start one instance of this service for each of your customers. All you need to do now for that is:

    # systemctl enable foobard@customerxyz.service --now

    And you are done. (Invoke this as many times as you like, each time replacing customerxyz by some customer identifier, you get the idea.)

  • By combining dynamic user IDs with socket activation you may easily implement a system where each incoming connection is served by a process instance running as a different, fresh, newly allocated UID within its own sandbox. Here's an example waldo.socket:

    [Socket]
    ListenStream=2048
    Accept=yes

    With a matching waldo@.service:

    [Service]
    ExecStart=-/usr/bin/myservicebinary
    DynamicUser=yes

    With the two unit files above, systemd will listen on TCP/IP port 2048, and for each incoming connection invoke a fresh instance of waldo@.service, each time utilizing a different, new, dynamically allocated UID, neatly isolated from any other instance.

  • Dynamic user IDs combine very well with state-less systems, i.e. systems that come up with an unpopulated /etc and /var. A service using dynamic user IDs and the StateDirectory=, CacheDirectory=, LogsDirectory= and RuntimeDirectory= concepts will implicitly allocate the users and directories it needs for running, right at the moment it needs them.
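
To make the packaging point above concrete, here is a minimal sketch of a complete unit relying on this logic (the foobard binary and the directory names are hypothetical):

[Unit]
Description=Example foobar daemon

[Service]
ExecStart=/usr/bin/foobard
# Allocate a transient user/group for the lifetime of the service
DynamicUser=yes
# Created automatically as /var/lib/foobar, /var/cache/foobar and /var/log/foobar
StateDirectory=foobar
CacheDirectory=foobar
LogsDirectory=foobar

No useradd call and no sysusers.d or tmpfiles.d drop-in is needed: the user and the directories appear when the service starts, and the user vanishes again when it stops.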

Dynamic users are a very generic concept, hence a multitude of other uses are thinkable; the list above is just supposed to trigger your imagination.

What does this mean for you as a packager?

I am pretty sure that a large number of services shipped with today's distributions could benefit from using DynamicUser= and StateDirectory= (and related settings). It often allows removal of post-inst packaging scripts altogether, as well as any sysusers.d and tmpfiles.d drop-ins by unifying the needed declarations in the unit file itself. Hence, as a packager please consider switching your unit files over. That said, there are a number of conditions where DynamicUser= and StateDirectory= (and friends) cannot or should not be used. To name a few:

  1. Services that need to write to files outside of /run/<package>, /var/lib/<package>, /var/cache/<package>, /var/log/<package>, /var/tmp, /tmp, /dev/shm are generally incompatible with this scheme. This rules out, for example, daemons that upgrade the system, as that involves writing to /usr.

  2. Services that maintain a herd of processes with different user IDs. Some SMTP services are like this. If your service has such a super-server design, UID management needs to be done by the super-server itself, which rules out systemd doing its dynamic UID magic for it.

  3. Services which run as root (obviously…) or are otherwise privileged.

  4. Services that need to live in the same mount name-space as the host system (for example, because they want to establish mount points visible system-wide). As mentioned DynamicUser= implies ProtectSystem=, PrivateTmp= and related options, which all require the service to run in its own mount name-space.

  5. Your focus is on older distributions, i.e. distributions that do not yet have systemd 232 (for DynamicUser=) or systemd 235 (for StateDirectory= and friends).

  6. If your distribution's packaging guides don't allow it. Consult your packaging guides, and possibly start a discussion on your distribution's mailing list about this.

Notes

A couple of additional, random notes about the implementation and use of these features:

  1. Do note that allocating or deallocating a dynamic user leaves /etc/passwd untouched. A dynamic user is added into the user database through the glibc NSS module nss-systemd, and this information never hits the disk.

  2. On traditional UNIX systems it was the job of the daemon process itself to drop privileges, while the DynamicUser= concept is designed around the service manager (i.e. systemd) being responsible for that. That said, since v235 there's a way to marry DynamicUser= and such services which want to drop privileges on their own. For that, turn on DynamicUser= and set User= to the user name the service wants to setuid() to. This has the effect that systemd will allocate the dynamic user under the specified name when the service is started. Then, prefix the command line you specify in ExecStart= with a single ! character. If you do, the user is allocated for the service, but the daemon binary is invoked as root instead of the allocated user, under the assumption that the daemon changes its UID on its own the right way. Note that after registration the user will show up instantly in the user database, and is hence resolvable like any other by the daemon process. Example: ExecStart=!/usr/bin/mydaemond

  3. You may wonder why systemd uses the UID range 61184–65519 for its dynamic user allocations (side note: in hexadecimal this reads as 0xEF00–0xFFEF). That's because distributions (specifically Fedora) tend to allocate regular users from below the 60000 range, and we don't want to step into that. We also want to stay away from 65535 and a bit around it, as some of these UIDs have special meanings (65535 is often used as a special value for "invalid" or "no" UID, as it is identical to the 16bit value -1; 65534 is generally mapped to the "nobody" user, and is where some kernel subsystems map unmappable UIDs). Finally, we want to stay within the 16bit range. In a user name-spacing world each container tends to have much less than the full 32bit UID range available that Linux kernels theoretically provide. Everybody apparently can agree that a container should at least cover the 16bit range though — already to include a nobody user. (And quite frankly, I am pretty sure assigning 64K UIDs per container is nicely systematic, as the higher 16bit of the 32bit UID values this way become a container ID, while the lower 16bit become the logical UID within each container, if you still follow what I am babbling here…) And before you ask: no, this range cannot be changed right now, it's compiled in. We might change that eventually, however.

  4. You might wonder what happens if you already used UIDs from the 61184–65519 range on your system for other purposes. systemd should handle that mostly fine, as long as that usage is properly registered in the user database: when allocating a dynamic user we pick a UID, see if it is currently used somehow, and if yes pick a different one, until we find a free one. Whether a UID is used right now or not is checked through NSS calls. Moreover the IPC object lists are checked to see if there are any objects owned by the UID we are about to pick. This means systemd will avoid using UIDs you have assigned otherwise. Note however that this of course makes the pool of available UIDs smaller, and in the worst cases this means that allocating a dynamic user might fail because there simply are no unused UIDs in the range.

  5. If not specified otherwise the name for a dynamically allocated user is derived from the service name. Not everything that's valid in a service name is valid in a user-name however, and in some cases a randomized name is used instead to deal with this. Often it makes sense to pick the user names to register explicitly. For that use User= and choose whatever you like.

  6. If you pick a user name with User= and combine it with DynamicUser= and the user already exists statically, it will be used for the service and the dynamic user logic is automatically disabled. This permits automatic up- and downgrades between static and dynamic UIDs. For example, it provides a nice way to move a system from static to dynamic UIDs in a compatible way: as long as you select the same User= value before and after switching DynamicUser= on, the service will continue to use the statically allocated user if it exists, and only operates in the dynamic mode if it does not. This is useful for other cases as well, for example to adapt a service that normally would use a dynamic user to concepts that require statically assigned UIDs, for example to marry classic UID-based file system quota with such services. (A minimal sketch of this follows after these notes.)

  7. systemd always allocates a pair of dynamic UID and GID at the same time, with the same numeric ID.

  8. If the Linux kernel had a "shiftfs" or similar functionality, i.e. a way to mount an existing directory to a second place, but map the exposed UIDs/GIDs in some way configurable at mount time, this would be excellent for the implementation of StateDirectory= in conjunction with DynamicUser=. It would make the recursive chown()ing step unnecessary, as the host version of the state directory could simply be mounted into the service's mount name-space, with a shift applied that maps the directory's owner to the service's UID/GID. But I don't have high hopes in this regard, as all work being done in this area appears to be bound to user name-spacing — which is a concept not used here (and I guess one could say user name-spacing is probably more a source of problems than a solution to one, but you are welcome to disagree on that).
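
To illustrate note 6 above, a minimal sketch (the foobard binary and the user name are hypothetical):

[Service]
# If a static user "foobar" already exists it is used as-is and the dynamic
# logic is disabled; otherwise a dynamic user is allocated under that name.
User=foobar
DynamicUser=yes
StateDirectory=foobar
ExecStart=/usr/bin/foobard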

And that's all for now. Enjoy your dynamic users!

Adrien Plazas: retro-gtk: Postmortem

Pre, 06/10/2017 - 5:44md

This article is the first of a small series about retro-gtk, a library I develop in tandem with Games and which allows it to use Libretro cores. This first article focuses on the initial goals of the library, its design and the problems that arose during its development, while the next ones will focus on what I am working on to fix these problems.

Libretro? retro-gtk? Are These Edible?

The Libretro project defines an API to be implemented by so-called Libretro cores — typically video game console emulators — to expose them as shared libraries with a common ABI. These cores can then be used by so-called Libretro frontends via this API. Here is the main C header if you want to know what it actually looks like.

You can see Libretro as a videogame console emulator plugin definition without a plugin system to make it usable.

Initially, retro-gtk was designed and implemented as a library easing the use of Libretro cores from higher-level languages like Vala. It allowed cores to be loaded dynamically and used via a GObject API mimicking the names and behavior of the Libretro one, while overcoming some of its limitations. The main limitation of Libretro is that you can't pass user-defined data when calling a core's functions and get it back when the core calls back, which makes it impossible as-is to get the identity of the calling-back core, and hence impossible to have multiple cores running at the same time. This is something we want to avoid, as having a parametrable singleton would artificially hinder retro-gtk's API and, by extension, the code of its users.

You can see retro-gtk as a GObject-based plugin system based on the Libretro plugin definition.

To ease its development, the library was written in Vala, which at the time seemed like a good candidate to simplify the implementation of a GObject introspectable library — also, when I started writing it I was more proficient in Vala than in GObject C. To allow multiple cores to coexist, two solutions were explored. The first consists in storing the calling core in a thread-specific static variable and running each core in its own thread; this forces cores to be run from different threads and doesn't allow reentrant calls from callbacks. The second consists in pushing the calling core onto a static stack before each call to one of its module's functions, and popping it off the stack just after; this allows reentrant calls from callbacks but forces cores to be run from the same thread.

A third solution could consist in a mixture of the first two, using thread-specific static stacks, forcing the usage of multiple threads and allowing reentrance; but since thread-specific static variables are a non-standard compiler feature, the second solution was retained.

Emergent Problems

While developing and using retro-gtk I noticed several problems. Here are the main ones.

Staying close to Libretro's API means staying close to a large API that is tedious to use. Each user of retro-gtk would have to solve the same problems in about the same way; hence, even though retro-gtk simplifies discovering the available cores, simplifies managing their resources and makes Libretro available from other languages, it — by design — doesn't fix the complexity of the original API, making it not as useful as it could be.

You may be wondering how annoying is the API to use. Well, here is a pseudo-code (hence, simplified) example of what Libretro and by extension retro-gtk requires you to do for something as trivial as loading a game into a core.


load_game(core, medias):
    assert medias.length > 0
    core.init()
    if core.has_disk_interface():
        core.disk_interface.open_tray()
        foreach media in medias:
            core.disk_interface.add_index()
            gameinfo = prepare_gameinfo(core, media)
            core.disk_interface.set_index(gameinfo)
        core.disk_interface.close_tray()
    else:
        assert medias.length == 1
        gameinfo = prepare_gameinfo(core, medias[0])
    core.load_game(gameinfo)

prepare_gameinfo(core, media):
    if core.needs_media_path:
        return new gameinfo_from_path(media.path)
    else:
        file = new file(media.path)
        content = file.read()
        return new gameinfo_from_content(content)

The tricks used to allow multiple cores to run side by side were working, but not as well as expected. Storing the calling core on a static stack or in a thread-specific static variable means that if the core calls back from a different thread than the calling one, the identity of the caller can't be retrieved: either the caller has already been removed from the stack, or the thread-specific variable hasn't been set in the calling-back thread. In both cases you get a wrong value, making cores that behave this way not usable at all. A solution would be to ensure only one core is loaded at a time but, as explained earlier, that's a no-go.

The goal of staying close to the original low-level API while exposing it to higher levels is tedious to pursue, and hinders the inclusion of some of the more complex features of the original API. And again, doing this offers no real added value to retro-gtk's users.

And finally, though it's a bit out of retro-gtk's scope, some Libretro cores aren't very stable and can crash the applications using retro-gtk. It would be great for retro-gtk to fix that in some way.

So… What Now?

All of this really doesn't sound good, but don't worry as the next article will focus on what I am working on to improve the library and make it actually usable and useful!

Henri Bergius: Building an IoT dashboard with NASA Open MCT

Enj, 05/10/2017 - 10:52pd

One important aspect of any Internet of Things setup is being able to collect and visualize data for analysis. Seeing trends in sensor readings over time can be useful for identifying problems, and for coming up with new ways to use the data.

We wanted an easy solution for this for the c-base IoT setup. Since the c-base backstory is that of a crashed space station, using space technology for this made sense.

NASA Open MCT is a framework for building web-based mission control tools and dashboards that they’ve released as open source. It is intended for bringing together tools and both historical and real-time data, as can be seen in their Mars Science Laboratory dashboard demo.

c-beam telemetry server

As a dashboard framework, Open MCT doesn’t really come with batteries included. You get a bunch of widgets and library functionality, but out of the box there is no integration with data sources.

However, they do provide a tutorial project for integrating data sources. We started with that, and built the cbeam-telemetry-server project which gives a very easy way to integrate Open MCT with an existing IoT setup.

With the c-beam telemetry server we combine Open MCT with the InfluxDB timeseries database and the MQTT messaging bus. This gives a “turnkey” setup for persisting and visualizing IoT information.

Getting started

The first step is to install the c-beam telemetry server. If you want to do a manual setup, first install an MQTT broker, InfluxDB and Node.js. Optionally you can also install CouchDB for sharing custom dashboard layouts between users.

Then just clone the c-beam telemetry server repo:

$ git clone https://github.com/c-base/cbeam-telemetry-server.git

Install the dependencies and build Open MCT with:

$ npm install

Now you should be able to start the service with:

$ npm start

Running with Docker

There is also an easier way to get going: we provide pre-built Docker images of the c-beam telemetry server for both x86 and ARM.

There are also docker-compose configuration files for both environments. To install and start the whole service with all its dependencies, grab the docker-compose.yml file (or the Raspberry Pi 3 version) and start with:

$ docker-compose up -d

We’re building these images as part of our continuous integration pipeline (ARM build with this recipe), so they should always be reasonably up-to-date.

Configuring your data

The next step is to create a JavaScript configuration file for your Open MCT. This is where you need to provide a “dictionary” listing all data you want your dashboard to track.

Data sets are configured like the following (configuring a temperature reading tracked for the 2nd floor):

var floor2 = new app.Dictionary('2nd floor', 'floor2');
floor2.addMeasurement('temperature', 'floor2_temperature', [
  {
    units: 'degrees',
    format: 'float'
  }
], {
  topic: 'bitraf/temperature/1'
});

You can have multiple dictionaries in the same Open MCT installation, allowing you to group related data sets. Each measurement needs to have a name and a unit.
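
For instance, adding a second dictionary for another data set could look like this (a sketch; the names, units and topic are purely illustrative):

var floor3 = new app.Dictionary('3rd floor', 'floor3');
floor3.addMeasurement('humidity', 'floor3_humidity', [
  {
    units: 'percent',
    format: 'float'
  }
], {
  topic: 'bitraf/humidity/1'
});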

Getting data in

In the example above we also supply an MQTT topic to read the measurement from. Now sending data to the dashboard is as easy as writing numbers to that MQTT topic. On the command line that would be done with:

$ mosquitto_pub -t bitraf/temperature/1 -m 27.3

If you were running the telemetry server when you sent that message, you should’ve seen it appear in the appropriate dashboard.
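
If you want a steady stream of test data rather than a single reading, a minimal shell loop will do (assuming the mosquitto clients are installed and the broker runs locally; the random values are just for demonstration):

$ while true; do
    # publish a random reading between 20.0 and 30.0 degrees
    mosquitto_pub -t bitraf/temperature/1 \
      -m "$(awk 'BEGIN { srand(); printf "%.1f", 20 + rand() * 10 }')"
    sleep 60
  done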

There are MQTT libraries available for most programming languages, making it easy to connect existing systems with this dashboard.

The telemetry server is also compatible with our MsgFlo framework, meaning that you can also configure the connections between your data sources and Open MCT visually in Flowhub.

This makes it possible to utilize the existing MsgFlo libraries for implementing data sources. For example, with msgflo-arduino you can transmit sensor data from Tiva-C or NodeMcu microcontrollers to the dashboard.

Status and how you can help

The c-beam telemetry server is currently in production use in a couple of hackerspaces, and seems to run quite happily.

We’d love to get feedback from other deployments!

If you’d like to help with the project, here are couple of areas that would be great:

  • Adding tests to the project
  • Implementing downsampling of historical data
  • Figuring out ways to control IoT devices via the dashboard (so, to write to MQTT instead of just reading)

Please file issues or make pull requests to the repository.

Julita Inca: Hadoop talk at Untelstronics

Mër, 04/10/2017 - 5:38pd

Today I did an introduction to Hadoop at UNTELS university. The event was called Untelstronics and it was organized by IEEE UNTELS. It was programmed as a two-hour talk. There were a lot of things to juggle before the talk, so thanks to the local Linux guys in Lima for helping me set everything up. Thanks Toto, Solanch, Chavez and Brunito Avila.

I started by defining what BigData is, then pointed out the 5 Vs of BigData, the challenges BigData is facing, the use cases, and statistics related to the exponential growth of data towards 2020. Then I presented Hadoop as one solution to handle BigData. HDFS and MapReduce were explained in concept, as well as the Hadoop architecture. I also talked about the configuration of the master and slave nodes from my previous experience. My slides. Finally, I shared my experience in Frankfurt, Germany, where I presented my poster at ISC 2016. It was a pleasure to show the experience and the BigData solution on Fedora and GNOME. Thanks so much to the organizers of Untelstronics for the invitation!



Georges Basile Stavracas Neto: Improved half tiling available in Mutter 3.26.1

Mër, 04/10/2017 - 3:24pd

A late night announcement: the improved tiling patches (shown in a previous blog post) were merged in Mutter and GTK+3, and will be available in GNOME 3.26.1 / GTK 3.22.23 (not yet released; should be available this week).

I’d like to thank Florian Muellner, Matthias Clasen, Jonas Adahl and AlexGS for all their support, time, code reviews and testing.

Have a wonderful night!

Julita Inca: Fedora Women Day in Lima, Peru

Hën, 02/10/2017 - 8:23md

On Saturday the 30th we celebrated Fedora Women's Day in Lima, Peru, at LabV207 – PUCP, from 8:00 a.m. to 2:00 p.m.

Acknowledged with Thanks

I’ve just wrapped up and I wanted to say thanks for the support throughout the process in having a nice place. Thanks to the staff of the Pontificia Universidad Catolica del Peru: Giohanny Mueck, Felipe Solari, Corrado Daly and Walter Segama. Congrats to the initiative of the Fedora Diversity team to foster more women involve in Linux. In addition, thanks to the help of Chhavi in the design and Bee for the help in planning the event. These were our FWD peruvian speakers:

We had three previous sessions with the speakers and members of our local Linux team. In the following picture you can see our work behind the scenes. I must highlight the support and help of Solanch Ccasa in this new endeavour:

The core Day

I started my talk by giving a brief history of Fedora, from 1985, when GNU was formed, until 2017 with Fedora 26. I also showed the help I received from other Fedora women such as Marina, Robyn, Bee, Chhavi and Amita whenever I had technical and administrative issues. The "Google Summer of Code" program, how to join the Fedora community, its philosophy and related topics were also explained. My talk lasted twenty minutes, as I had prepared.

Other women's talks and workshops followed as planned: DNF, Git, Fedora Loves Python, Linux commands and D3.

It was great to see many women interested in the Linux world. In more than seven years of organising Linux-related events in Lima, this was the first time I saw several women using Fedora with GNOME at the same time.

We shared a special FWD cake, and posted on a Fedora board the pros and cons of why you do or don't use Fedora.

Special thanks to the guys who helped us during the whole event: Martin Vuelta, Rodrigo Lindo and Rommel Zavaleta.



Daniel G. Siegel: summing up 91

Hën, 02/10/2017 - 1:06md

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

The Best Way to Predict the Future is to Issue a Press Release, by Audrey Watters

Some of us might adopt technology products quickly, to be sure. Some of us might eagerly buy every new Apple gadget that’s released. But we can’t claim that the pace of technological change is speeding up just because we personally go out and buy a new iPhone every time Apple tells us the old model is obsolete. Removing the headphone jack from the latest iPhone does not mean “technology changing faster than ever,” nor does showing how headphones have changed since the 1970s. None of this is really a reflection of the pace of change; it’s a reflection of our disposable income and an ideology of obsolescence.

Some economic historians like Robert J. Gordon actually contend that we’re not in a period of great technological innovation at all; instead, we find ourselves in a period of technological stagnation. The changes brought about by the development of information technologies in the last 40 years or so pale in comparison, Gordon argues, to those “great inventions” that powered massive economic growth and tremendous social change in the period from 1870 to 1970 – namely electricity, sanitation, chemicals and pharmaceuticals, the internal combustion engine, and mass communication. But that doesn’t jibe with “software is eating the world,” does it?

we are making computers in all forms available, but we're far away from generating new thoughts or breaking up thought patterns. instead of augmenting humans with the use of computers like imagined by the fathers of early personal computing, our computers have turned out to be mind-numbing consumption devices rather than a bicycle for the mind that steve jobs envisioned.

Eliminating the Human, by David Byrne

I have a theory that much recent tech development and innovation over the last decade or so has an unspoken overarching agenda. It has been about creating the possibility of a world with less human interaction. This tendency is, I suspect, not a bug—it’s a feature.

Human interaction is often perceived, from an engineer’s mind-set, as complicated, inefficient, noisy, and slow. Part of making something “frictionless” is getting the human part out of the way.

But our random accidents and odd behaviors are fun—they make life enjoyable. I’m wondering what we’re left with when there are fewer and fewer human interactions. “We” do not exist as isolated individuals. We, as individuals, are inhabitants of networks; we are relationships. That is how we prosper and thrive.

the computer claims sovereignty over the whole range of human experience, and supports its claim by showing that it “thinks” better than we can. the fundamental metaphorical message of the computer is that we become machines. our nature, our biology, our emotions and our spirituality become subjects of second order. but in order for this to work perfectly, society has to dumb itself down in order to level the playing field between humans and computers. what is most significant about this line of thinking is the dangerous reductionism it represents.

User Interface: A Personal View, by Alan Kay

That the printing press was the dominant force that transformed the hermeneutic Middle Ages into our scientific society should not be taken too lightly – especially because the main point is that the press didn’t do it just by making books more available, it did it by changing the thought patterns of those who learned to read.

I had always thought of the computer as a tool, perhaps a vehicle–a much weaker conception. But if the personal computer is a truly new medium then the very use of it would actually change the thought patterns of an entire civilization. What kind of a thinker would you become if you grew up with an active simulator connected, not just to one point of view, but to all the points of view of the ages represented so they could be dynamically tried out and compared?

the tragic notion is that alan kay assumed people would be smart enough to try out and see different point of views. but in reality, people stick rigidly to the point of view they learned and consider all others to be only noise or worse.

Alexander Larsson: Spotify and Skype flatpaks moved to flathub

Mër, 27/09/2017 - 3:05md

This is a public service announcement.

I used to maintain two custom repositories of flatpaks for spotify and skype. These are now at flathub (in addition to a lot of other apps), and if you were using the old repository you should switch to the new one to continue getting updates.

This is easiest done by removing the current version and then following the directions on the flathub site for installing.
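
For Spotify, the migration looks roughly like this (a sketch; com.spotify.Client and com.skype.Client are assumed to be the current Flathub application IDs):

$ flatpak uninstall com.spotify.Client
$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
$ flatpak install flathub com.spotify.Client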

Richard Hughes: fwupd about to break API and ABI

Mar, 26/09/2017 - 9:35md

Soon I’m going to merge a PR to fwupd that breaks API and ABI and bumps the soname. If you want to use the stable branch, please track 0_9_X. The API break removes all the deprecated API and cruft we’ve picked up in the months since we started the project, and with the upcoming 1.0.0 version coming up in a few weeks it seems a sensible time to have a clean out. If it helps, I’m going to put 0.9.x in Fedora 26 and F27, so master branch probably only for F28/rawhide and jhbuild at this point.

In other news, 4 days ago I became a father again, so expect emails to be delayed and full of confusion. All doing great, but it turns out sleep is for the weak. :)

Felipe Borges: GNOME 3.26 Release Party in Brno, Czech Republic

Mar, 26/09/2017 - 1:55md

Last Monday our local GNOME community in Brno gathered to celebrate one of our releases once more.

This time (after many releases) we had a cake! Other than that, we had drinks and great people chatting in a very cozy venue. It was a blast to see old friends and make new ones.

Pictures taken by our fellow GNOMEr Jiří Eischmann

I would like to thank the GNOME Foundation for sponsoring our meetup and Dominika Vágnerová for organizing it all!

Didier Roche: Ubuntu GNOME Shell in Artful: Day 14

Hën, 25/09/2017 - 10:39md

The Ubuntu desktop team and a lot of other people from the Ubuntu community are gathering this week in New York for the Ubuntu Rally. It’s time for the final touches and bug fixes for Ubuntu artful, which will soon turn into Ubuntu 17.10. As you probably know if you follow this blog series, it will feature GNOME Shell by default, with slight modifications to adapt this new user experience to our audience. For more background on our current transition to GNOME Shell in artful, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 14: Badges and progress bar on Ubuntu Dock

One of the last things we wanted to work on, as highlighted in our previous posts, is the notification experience for new emails and downloads in the Shell. We already ship the KStatusNotifier extension for application indicators, but we needed a way to signal the user (even if you are not looking at the screen when it happens) about new emails, IMs, or download/copy progress.

Andrea stepped up and worked with Dash to Dock upstream to implement the Unity launcher API for this. Working with them was, as usual, a pleasure, and we got the green flag that it’s going to be merged to master, possibly with some tweaks, which will make this work available to every Dash to Dock user! It means that after this update Thunderbird handily shows the number of unread emails in your inbox, thanks to thunderbird-gnome-support, which we seeded back with Sébastien.

Similarly, we now have progress-bar support for Nautilus, Firefox downloads, and every application using that API to report progress on transactional actions.
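
For the curious, the signal involved looks roughly like the following hand-rolled sketch (assuming a GLib new enough to provide gdbus emit, and the com.canonical.Unity.LauncherEntry interface that Dash to Dock listens for; the object path, application ID and values are purely illustrative):

$ gdbus emit --session \
    --object-path /com/example/launcher \
    --signal com.canonical.Unity.LauncherEntry.Update \
    "application://org.gnome.Nautilus.desktop" \
    "{'progress': <0.75>, 'progress-visible': <true>, 'count': <int64 3>, 'count-visible': <true>}"

Applications normally emit this through a library rather than by hand, but it shows the shape of the data the dock consumes.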

And with that we are done with our changes to adapt GNOME Shell to our targeted audience! Meanwhile, Marco is working on HiDPI (and SIM cards…) to deliver a fantastic fractional scaling experience.

As usual, if you are eager to experiment with these changes before they migrate to the artful release pocket, you can head over to our official Ubuntu desktop team transitions ppa to get a taste of what’s cooking!

Let’s see how many bugs we can squash. We will of course update you on the slight readjustment we are planning to do during this week at the Ubuntu rally and for the release. Let’s target first the incoming beta which will enable you to test all of this.