
Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/

Clive Johnston: New artwork for Falkon, do you have any ideas?

Mon, 29/01/2018 - 10:37 PM

These past few days there have been some interesting developments over at the Falkon project on KDE Phabricator; especially interesting is this task: https://phabricator.kde.org/T6859

The lead developer has called for submissions on a new logo for Falkon.  One of the current submissions, which I must say I love, is shown below by Andres Betts who is on the KDE VDG team.

Designed by Andres Betts KDE VDG

Do you have ideas on what the new logo should be?  Please head over to the task and post your ideas.

Also, if you want to help test Falkon, you can grab it from my PPA – https://launchpad.net/~clivejo/+archive/ubuntu/falkon/

Jeremy Bicha: GNOME Tweaks 3.28 Progress Report 1

Mon, 29/01/2018 - 10:07 PM

A few days ago, I released GNOME Tweaks 3.27.4, a development snapshot on the way to the next stable version 3.28 which will be released alongside GNOME 3.28 in March. Here are some highlights of what’s changed since 3.26.

New Name (Part 2)

For 3.26, we renamed GNOME Tweak Tool to GNOME Tweaks. It was only a partial rename since many underlying parts still used the gnome-tweak-tool name. For 3.28, we have completed the rename. We have renamed the binary, the source tarball releases, the git repository, the .desktop, and app icons. For upgrade compatibility, the autostart file and helper script for the Suspend on Lid Close inhibitor keep the old name.

New Home

GNOME Tweaks has moved from the classic GNOME Git and Bugzilla to the new GNOME-hosted gitlab.gnome.org. The new hosting includes git hosting, a bug tracker and merge requests. Much of GNOME Core has moved this cycle, and I expect many more projects will move for the 3.30 cycle later this year.

Dark Theme Switch Removed

As promised, the Global Dark Theme switch has been removed. Read my previous post for more explanation of why it’s removed and a brief mention of how theme developers should adapt (provide a separate Dark theme!).

Improved Theme Handling

The theme chooser has been improved in several small ways. Now that it’s quite possible to have a GNOME desktop without any gtk2 apps, it doesn’t make sense to require that a theme provide a gtk2 version to show up in the theme chooser, so that requirement has been dropped.

The theme chooser will no longer show the same theme name multiple times if you have a system-wide installed theme and a theme in your user theme directory with the same name. Additionally, GNOME Tweaks does better at supporting the  XDG_DATA_DIRS standard in case you use custom locations to store your themes or gsettings overrides.

GNOME Tweaks 3.27.4 with the HighContrastInverse theme

Finally, gtk3 still offers a HighContrastInverse theme but most people probably weren’t aware of that since it didn’t show up in Tweaks. It does now! It is much darker than Adwaita Dark.

Several of these theme improvements (including HighContrastInverse) have also been included in 3.26.4.

For more details about what’s changed and who’s done the changing, see the project NEWS file.

Simos Xenitellis: Checking the Ubuntu Linux kernel updates on Spectre and Meltdown

Mon, 29/01/2018 - 5:40 PM
Here is the status page for the Ubuntu updates on Spectre and Meltdown. For a background on these vulnerabilities, see the Meltdown and Spectre Attacks website. In this post we are trying out the Spectre & Meltdown Checker on different versions of the stock Ubuntu Linux kernel. Trying the Spectre & Meltdown Checker before any …

Continue reading

Simos Xenitellis: How to make your LXD containers get IP addresses from your LAN using a bridge

Mon, 29/01/2018 - 5:11 PM
Background: LXD is a hypervisor that manages machine containers on Linux distributions. You install LXD on your Linux distribution and then you can launch machine containers into your distribution running all sort of (other) Linux distributions. In the previous post, we saw how to get our LXD container to receive an IP address from the …

Continue reading

Ted Gould: Jekyll and Mastodon

Mon, 29/01/2018 - 1:00 AM

A while back I moved my website to Jekyll for all the static-y goodness that provides. Recently I was looking to add Mastodon to my domain as well. Doing so with Jekyll isn't hard, but searching for it seemed like something no one had written up. For your searchable pleasure I am writing it up.

I used Masto.host to put the Mastodon instance at social.gould.cx. But I wanted my Mastodon address to be @ted@gould.cx. To do that, the gould.cx domain needs to point webfinger lookups at social.gould.cx, which means serving a .well-known/host-meta file that redirects webfinger to the Mastodon instance:

<?xml version='1.0' encoding='UTF-8'?>
<XRD xmlns='http://docs.oasis-open.org/ns/xri/xrd-1.0'>
  <!-- Needed for Mastodon -->
  <Link rel='lrdd' type='application/xrd+xml'
        template='https://social.gould.cx/.well-known/webfinger?resource={uri}' />
</XRD>

The issue is that Jekyll doesn't copy static files that are in hidden directories. This is good if you have a Git repository, since it keeps the .git directory from being copied. We can get around this by using Jekyll's YAML front matter to set the location of the file.

---
layout: null
permalink: /.well-known/host-meta
---
<?xml version='1.0' encoding='UTF-8'?>
<XRD xmlns='http://docs.oasis-open.org/ns/xri/xrd-1.0'>
  <!-- Needed for Mastodon -->
  <Link rel='lrdd' type='application/xrd+xml'
        template='https://social.gould.cx/.well-known/webfinger?resource={uri}' />
</XRD>

This file can then be placed anywhere, and Jekyll will put it in the right location on the static site. And you can follow me as @ted@gould.cx even though my Mastodon instance is social.gould.cx.
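
If you want to verify the delegation is in place, fetching the published host-meta is enough. Here is a minimal sketch (mine, not from the original post) that assumes the reqwest crate with its "blocking" feature enabled:

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Fetch the host-meta document served by the Jekyll site.
    let body = reqwest::blocking::get("https://gould.cx/.well-known/host-meta")?.text()?;
    // It should delegate webfinger to the Mastodon host.
    if body.contains("social.gould.cx/.well-known/webfinger") {
        println!("host-meta looks good, webfinger is delegated to the Mastodon instance");
    } else {
        println!("unexpected host-meta contents:\n{}", body);
    }
    Ok(())
}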

Jorge Castro: Updating your CNCF Developer Affiliation

Mon, 29/01/2018 - 1:00 AM

The Cloud Native Computing Foundation uses gitdm to figure out who is contributing and from where. This is used to generate reports and so forth.

There is a huge text file where they map email addresses to affiliations. It probably doesn’t hurt to check your entry; for example, here’s mine:

Jorge O. Castro*: jorge.castro!gmail.com
    Lemon Ice
    Lemon Location City until 2017-05-01
    Lemon Travel Smart Vacation Club until 2015-06-01

Whoa? What? Here is what a corrected entry looks like; as you can see, it takes into account where you used to work:

Jorge O. Castro*: jorge!heptio.com, jorge!ubuntu.com, jorge.castro!gmail.com
    Heptio
    Canonical until 2017-03-31

As an aside this also really makes a nice rolodex for looking up people. :D

Sean Davis: Catfish 1.4.4 Released

Mon, 29/01/2018 - 12:35 AM

I’ve got some great news for fans of Catfish, the fast and powerful graphical search utility for Linux. The latest version, 1.4.4, has arrived with performance improvements and tons of localization updates!

What’s New

This update covers both versions 1.4.3 and 1.4.4.

General
  • Improved theming support
  • Improved error handling with thumbnails
  • Improved search performance by excluding .cache and .gvfs when not explicitly requested
  • Improved locate method performance with the addition of the --basename flag
  • Added keywords to the launcher for improved discoverability and Debian packaging improvements
  • Updated included AppData to latest standards
Bug Fixes
  • All search methods are stopped when the search activity is canceled. This results in a much faster response time when switching search terms.
  • Debian #798074: New upstream release available
  • Debian #794544: po/en_AU.po has Sinhalese (not English) translations for catfish.desktop
Translation Updates

Afrikaans, Brazilian Portuguese, Bulgarian, Catalan, Chinese (Traditional), Croatian, Czech, Danish, Dutch, French, Greek, Italian, Kurdish, Lithuanian, Portuguese, Serbian, Slovak, Spanish, Swedish, Turkish, Ukrainian

Downloads

Debian Unstable and Ubuntu Bionic users can install Catfish 1.4.4 from the repositories.

sudo apt update && sudo apt install catfish

The latest version of Catfish can always be downloaded from the Launchpad archives. Grab version 1.4.4 from the below link.

https://launchpad.net/catfish-search/1.4/1.4.4/+download/catfish-1.4.4.tar.gz

  • SHA-256: a2d452780bf51f80afe7621e040fe77725021c24a0fe4a9744c89ba88dbf87d7
  • SHA-1: b149b454fba75de6e6f9029cee8eec4adfb4be0e
  • MD5: 8fd7e8bb241f2396ebc3d9630b47a635
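
To check a download against the published SHA-256 above, something like the following sketch works (mine, not from the release announcement; it assumes the sha2 crate and that the tarball sits in the current directory):

use sha2::{Digest, Sha256};
use std::fs;

fn main() -> std::io::Result<()> {
    // Published SHA-256 for catfish-1.4.4.tar.gz (from the list above).
    let expected = "a2d452780bf51f80afe7621e040fe77725021c24a0fe4a9744c89ba88dbf87d7";
    let bytes = fs::read("catfish-1.4.4.tar.gz")?;
    let digest = Sha256::digest(&bytes);
    // Render the digest as lowercase hex and compare with the published value.
    let actual: String = digest.iter().map(|b| format!("{:02x}", b)).collect();
    println!("checksum {}", if actual == expected { "OK" } else { "MISMATCH" });
    Ok(())
}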

David Tomaschik: Playing with the Gigastone Media Streamer Plus

Sun, 28/01/2018 - 9:00 AM
Background

A few months ago, I was shopping on woot.com and discovered the Gigastone Media Streamer Plus for about $25. I figured this might be something occasionally useful, or at least fun to look at for security vulnerabilities. When it arrived, I didn’t get around to it for quite a while, and then when I finally did, I was terribly disappointed in it as a security research target – it was just too easy.

The Gigastone Media Streamer Plus is designed to provide streaming from an attached USB drive or SD card over a wireless network. It features a built-in battery that can be used to charge a device as well. In concept, it sounds pretty awesome (and there are many such devices on the market) but it turns out there’s no security to speak of in this particular device.

Exploration

By default the device creates its own wireless network that you can connect to in order to configure and stream, but it can quickly be reconfigured as a client on another wireless network. I chose the latter and joined it to my lab network so I wouldn’t need to be connected to just the device during my research.

NMAP Scan

The first thing I do when something touches the network is perform an NMAP scan. I like to use the version scan as well, though it’s not nearly as accurate on embedded devices as it is on more common client/server setups. NMAP quickly returned some interesting findings:

# Nmap 7.40 scan initiated as: nmap -sV -T4 -p1-1024 -Pn -o gigastone.nmap 192.168.40.114
Nmap scan report for 192.168.40.114
Host is up (0.14s latency).
Not shown: 1020 closed ports
PORT   STATE SERVICE VERSION
21/tcp open  ftp     vsftpd 2.0.8 or later
23/tcp open  telnet  security DVR telnetd (many brands)
53/tcp open  domain  dnsmasq 2.52
80/tcp open  http    Boa httpd
MAC Address: C0:34:B4:80:29:EB (Gigastone)
Service Info: Host: use

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
# Nmap done -- 1 IP address (1 host up) scanned in 22.33 seconds

Hrrm, FTP and Telnet. I’m sure they’re for a good reason.

Web Interface

The web interface is functional, but not attractive. It provides functionality for uploading and downloading files as well as changing settings, such as wireless configuration, WAN/LAN settings, and storage usage.

I noticed that, when loading the Settings page, you would sometimes get the settings visible before authenticating to the admin interface.

Problems with Burp Suite

While playing with this device, I did notice a bug in Burp Suite. The Gigastone Media Streamer Plus does not adhere to the HTTP RFCs, and all of their cgi-bin scripts send only \r at the end of each line, instead of \r\n per the RFC. Browsers are forgiving, so they handled this gracefully. Unfortunately, when passing the traffic through Burp Suite, it transformed the ‘\r\r’ at the end of the response headers to \n\r\n\r\n. This causes the browser to interpret an extra blank line at the beginning of the response. Still not a problem for the browser parsing things, but slightly more of a problem for the Gigastone Javascript parsing its own custom response format (newline-separated).

I reported the bug to PortSwigger and not only got a prompt confirmation of the bug, but a Python Burp extension to work around the issue until a proper fix lands in Burp Suite. That’s an incredible level of support from the authors of a quality tool.

Vulnerabilities

Telnet with Default Credentials

The device exposes telnet to the local network and accepts username ‘root’ and password ‘root’. This gives full control of the device to anyone on the local network.

Information Disclosure: Administrative PIN (and Other Settings)

The administrative PIN can be retrieved by an unauthenticated request to an API. In fact, the default admin interface uses this API to compare the entered PIN entirely on the client side.

% curl 'http://192.168.40.114/cgi-bin/gadmin'
get
1234

In fact, all of the administrative settings can be retrieved by unauthenticated requests, such as the WiFi settings. (Though, on a shared network, this is of limited value.)

% curl 'http://192.168.40.114/cgi-bin/cgiNK'
AP_SSID=LabNet
AP_SECMODE=WPA2
PSK_KEY=ThisIsNotAGoodPassphrase
AP_PRIMARY_KEY=1
WEPKEY_1=
WEPKEY_2=
WEPKEY_3=
WEPKEY_4=

Authentication Bypass: Everything

None of the administrative APIs actually require any authentication. The admin PIN is never sent with requests, no session cookie is set, and there are no other authentication controls. For example, the admin PIN can be set via a GET request as follows:

% curl 'http://192.168.40.114/cgi-bin/gadmin?set=4444'
set
0
4444

Timeline
  • Discovered in ~May 2017
  • Reported Jan 28 2018
  • Response from Gigastone on Jan 28 2018:

Media Streamer Plus provides convenient functions for portable use. It is not to replace or to be comparable to normal networking devices. However, we do not recommend users to change internal setup to avoid unrecoverable errors.

Sebastian Schauenburg: Local OsmAnd and Geo URLs

Fri, 26/01/2018 - 11:50 PM

Earlier this year I went on a long holiday to Japan and China. I have an Android phone and am a very big fan of OpenStreetMap. So I used OsmAnd (which uses OpenStreetMap data) to navigate through those countries. I made a spreadsheet with LibreOffice, which included a few links to certain locations which are hard to find or do not have an address. Then I exported that .ods to a .pdf and was able to click on the links, which then opened perfectly in OsmAnd.

The URL I was able to use in my PDF document was this one (of course you can substitute longitude and latitude):

http://osmand.net/go?lat=51.4404&lon=4.3294&z=16

And then I helped a friend of mine with something similar to use on a website. Of course the link above did not work. After a short look on Wikipedia I found the page about the Geo URI scheme. Constructing a URL with the Geo URI scheme will trigger the default navigation application on a mobile device to open the location. And of course, here you can also substitute the longitude and latitude.

<a href="geo:51.4404,4.3294;u=15">Hoogerheide</a>

Which will result in this link (usable on mobile devices), and of course you can still create a "normal one" for non-mobile devices, such as this one.
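
Putting the two together, here is a small sketch (mine, not from the original post) that builds both kinds of links for a coordinate, using the Hoogerheide example above:

// Build an osmand.net share link, as used in the PDF spreadsheet.
fn osmand_link(lat: f64, lon: f64, zoom: u8) -> String {
    format!("http://osmand.net/go?lat={}&lon={}&z={}", lat, lon, zoom)
}

// Build a Geo URI, which triggers the default navigation app on a mobile device.
fn geo_uri(lat: f64, lon: f64, uncertainty_m: u32) -> String {
    format!("geo:{},{};u={}", lat, lon, uncertainty_m)
}

fn main() {
    println!("{}", osmand_link(51.4404, 4.3294, 16));
    println!("{}", geo_uri(51.4404, 4.3294, 15));
}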

Salih Emin: ucaresystem core 4.4.0 : Pkexec, check for reboot and minor fix

Fri, 26/01/2018 - 10:49 PM
The new release 4.4.0 of ucaresystem core introduces two internal but important features and a minor bug fix for Debian Jessie. Let’s check them out… Thanks to an idea of Mark Drone on launchpad, I added in ucaresystem core the feature to recognize and inform the user in case they need to restart the system after… Continue reading "ucaresystem core 4.4.0 : Pkexec, check for reboot and minor fix"

David Tomaschik: Psychological Issues in the Security Industry

Fri, 26/01/2018 - 9:00 AM

I’ve unfortunately had the experience of dealing with a number of psychological issues (either personally or through personal connections) during my tenure in the security fold. I hope to shed some light on them and encourage others to take them seriously.

If you are hoping this post will be some grand reveal of security engineers going psychotic and stabbing users who enter passwords into phishing pages with poor grammar and spelling, web site administrators who can’t be bothered to set up HTTPS, and ransomware authors, then I hate to disappoint you. If, on the other hand, you’re interested in observations of people who have experienced various psychological problems while in the security industry, then I’ll probably still disappoint, just not as much.

Impostor Syndrome

According to Wikipedia:

Impostor syndrome is a concept describing individuals who are marked by an inability to internalize their accomplishments and a persistent fear of being exposed as a “fraud”. Despite external evidence of their competence, those exhibiting the syndrome remain convinced that they are frauds and do not deserve the success they have achieved.

I know many, many people in this industry who suffer from this and do not have the ability to recognize their own successes. They may only credit themselves with having “helped out” or done the “non-technical” parts. Some do not take career opportunities or refuse to believe their work is interesting to others.

I, myself, sit on the border of impostor syndrome, and it took me years to be convinced that it was only impostor syndrome and that I am not actually incompetent. Even after promotions, performance reviews “exceeding expectations”, and other signs that a rational individual would take as signs of success, I still believed that I was not doing the right things. I still believe that I’m not as technically strong or effective as any of my coworkers, despite repeated statements by my manager, my skip-level manager, and my director.

I’m not sure if it is possible to “get over” impostor syndrome, but I think most are able to recognize that their self-doubt is a figment of their imagination. It doesn’t necessarily make it easier to swallow, but at a certain point, if you really believe you’re a failure, you will sabotage yourself into being a failure. If you’re concerned about your performance, you don’t have to admit to impostor syndrome, but ask your teammates and manager: am I performing up to your expectations for someone at my level? Am I on track to keep progressing?

Impostor syndrome, though not a diagnosable mental illness, is probably the most common psychological issue faced by those in the security industry. It’s a fast-paced, competitive field, and it’s hard not to compare yourself to others who are more visible and have achieved so much.

Depression

I don’t think I need to define depression. I think it’s important to acknowledge that depression is not the same as “feeling down” or occasionally having a bad day. Most people who have suffered from depression describe it as a feeling that things will never get better or a complete lack of desire to do anything.

Depression doesn’t seem to be quite as widespread as impostor syndrome, but it’s clearly still a big issue. I have known multiple people in the industry who have suffered from depression, and it’s not something that gets “cured” – you just learn how to live with it (and sometimes use medication to help with the worst of it).

Depression is obviously not unique to our field, but I’ve known several people who suffered in silence because of the social aversion/shyness so prevalent. I strongly encourage those who think they might have depression to seek professional help. It’s not an easy thing to deal with, and the consequences can be terrible.

Working as part of a team can help with depression by increasing exposure to other individuals. If you currently work remotely/from home, consider a change that gets you out more and spending time with coworkers or others. It’s clearly not a fix or a panacea, but social interactions can help.

Anxiety

Anxiety is an entire spectrum of issues that members of our field deal with. There are many “introverts” in this industry, so social anxiety is a common issue, especially in situations like conferences or other events with large crowds. I use the term introverts loosely, because it turns out many people who call themselves introverted actually like to be around others and enjoy social interactions, but find them hard to do for reasons of anxiety. Social anxiety and introversion, it turns out, are not the same thing. (I’ve heard that shyness is the bottom end of a spectrum that leads to social anxiety at the upper end.)

Beyond social anxiety, we have generalized anxiety disorder. Given that we work in a field where we spend all day long looking at the way things can fail and the problems that can occur, it’s not surprising that we can tend to have a somewhat negative and anxious view of things. This tends to present with anxiety about a variety of topics, and often also has panic attacks associated.

There are, of course, many other forms of anxiety. I have long had anxiety in the form of so-called “Pure-O” OCD – that is, Obsessive-Compulsive Disorder with only the Obsessive Thoughts and not the Compulsions. This leads to worst-case scenario thinking and an inability to avoid “intrusive” thoughts. It also makes it incredibly hard to manage my work-life balance because I cannot separate my thoughts from my work. I have spent entire weekends unable to do anything because I’ve been thinking about projects I’m dreading or meetings I have the next week. I also tend to obsess about stupid mistakes I make or whether or not I have missed something. At the end of the day, I value certainty and hate the unknowns. (Security is a perfect field for discovering that you don’t handle uncertainty!) At times it can lead to depression as well.

Feeling Overwhelmed (Burn Out)

Obviously this isn’t a diagnosable issue either, but a lot of people in this industry get quite overwhelmed. Burn out is a big problem, and one I’m trying to cope with even as I write this. There’s a number of reasons I see for this:

  1. It’s hard to keep up with this industry. If you just work a 9-5 job in security and spend no time outside that keeping current, I don’t think you’ll have an easy time keeping up.
  2. In many companies, once someone has interacted with you on one project, you’re their permanent “security contact” – and they’ll ask you every question they possibly can.
  3. At least on my team, you’re never able to work on a single thing – I currently have at least a half-dozen projects in parallel. Context switching is very hard for me, and the fastest way to lead to burn out for me.
  4. A lot of being a security professional is not technical, even if that’s the part you love the most. You’ll spend a big part of your time explaining things to product managers, non-security engineers, and others to get your point across.

I wish I had an instant solution for burnout, but if I did, I probably wouldn’t be feeling burnt out. If you have a supportive manager, get them involved early if you’re feeling burnout coming on. I have a great management chain, but I was too “proud” to admit to approaching burnout (because I viewed it as a personal failure) until I was nearly at the point of rage quitting. I still haven’t really fixed it, but I’ve discussed some steps I’ll be taking over the next couple of months to see if I can get myself back to a sane and productive state.

Other Reading

Conclusion

I’m hoping this is a helpful tour of some of the mental issues I’ve dealt with in my life, my career, and 5 years in the security industry. It’s not easy, and by no means do I think I hold the answers (if I did, I would probably feel a lot better myself), but I think it’s important to recognize these issues exist. Most of them are not unique to our industry, but I feel that our industry tends to exacerbate them when they exist. I hope that we, as an industry and a community, can work to help those who are suffering or have issues to work past them and become a more successful member of the community.

Matthew Helmke: Attacking Network Protocols

Thu, 25/01/2018 - 11:22 PM

I am always trying to expand the boundaries of my knowledge. While I have a basic understanding of networking and a high-level understanding of security issues, I have never studied or read up on the specifics of packet sniffing or other network traffic security topics. This book changed that.

Attacking Network Protocols: A Hacker’s Guide to Capture, Analysis, and Exploitation takes a network attacker’s perspective while probing topics related to data and system vulnerability over a network. The author, James Forshaw, takes an approach similar to the perspective taken by penetration testers (pen testers), the so-called white hat security people who test a company’s security by trying to break through its defenses. The premise is that if you understand the vulnerabilities and attack vectors, you will be better equipped to protect against them. I agree with that premise.

Most of us in the Free and Open Source software world know about Wireshark and using it to capture network traffic information. This book mentions that tool, but focuses on using a different tool that was written by the author, called CANAPE.Core. Along the way, the author calls out multiple other resources for further study. I like and appreciate that very much! This is a complex topic and even a detailed and technically complex book like this one cannot possibly cover every aspect of the topic in 300 pages. What is covered is clearly expressed, technically deep, and valuable.

The book covers topics ranging from network basics to passive and active traffic capture all the way to the reverse engineering of applications. Along the way Forshaw covers network protocols and their structures, compilers and assemblers, operating system basics, CPU architectures, dissectors, cryptography, and the many causes of vulnerabilities.

Closing the book is an appendix (additional chapter? It isn’t precisely defined, but it is extra content dedicated to a specific topic) that describes a multitude of tools and libraries that the author finds useful, but may not have had an excuse to mention earlier in the book. This provides a set of signposts for the reader to follow for further research and is, again, much appreciated.

While I admit I am a novice in this domain, I found the book helpful and interesting, of sufficient depth to be immediately useful, and with enough high-level description and clarification to give me the context and ideas for further study.

Disclosure: I was given my copy of this book by the publisher as a review copy. See also: Are All Book Reviews Positive?

Kubuntu General News: Plasma 5.12 LTS beta available in PPA for testing on Artful & Bionic

Wed, 24/01/2018 - 4:10 PM

Adventurous users, testers and developers running Artful 17.10 or our development release Bionic 18.04 can now test the beta version of Plasma 5.12 LTS.

An upgrade to the required Frameworks 5.42 is also provided.

As with previous betas, this is experimental and is only suggested for people who are prepared for possible bugs and breakages.

In addition, please be prepared to use ppa-purge to revert changes, should the need arise at some point.

Read more about the beta release.

If you want to test then:

sudo add-apt-repository ppa:kubuntu-ppa/beta

and then update packages with

sudo apt update
sudo apt full-upgrade

A Wayland session can be made available at the SDDM login screen by installing the package plasma-workspace-wayland. Please note the information on Wayland sessions in the KDE announcement.

Note: Due to Launchpad builder downtime and maintenance due to Meltdown/Spectre fixes, limiting us to amd64/i386 architectures, these builds may be superseded with a rebuild once the builders are back to normal availability.

The primary purpose of this PPA is to assist testing for bugs and quality of the upcoming final Plasma 5.12 LTS release, due for release by KDE on 6th February.

It is anticipated that Kubuntu Bionic Beaver 18.04 LTS will ship with Plasma 5.12.4, the latest point release of 5.12 LTS available at release date.

Bug reports on the beta itself should be reported to bugs.kde.org.

Packaging bugs can be reported as normal to: Kubuntu PPA bugs: https://bugs.launchpad.net/kubuntu-ppa

Should any issues occur, please provide feedback on our mailing lists [1] or IRC [2].

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net

Stephen Michael Kellat: Damage Report

Wed, 24/01/2018 - 4:24 AM

In no particular order:

  • There was a "partial government shutdown" of the federal government of the United States of America. As a federal civil servant, I rated an "essential-excepted" designation this time which required working without pay until the end of the crisis. Fortunately this did not change my tour of duty. Deroy Murdock has a good write-up of the sordid affair. Not all my co-workers at my bureau in the department were rated "essential-excepted", so I had to staff a specific appointment hotline to reschedule taxpayer appointments as no specific outreach was made to tell any taxpayers that the in-person offices were closed.
  • The federal government of the United States of America remains without full-year appropriations for Fiscal Year 2018. Appropriations are set to lapse again on February 8, 2018. I've talked to family about apportioning costs of the household bills and have told them that even though I am working now nothing is guaranteed. Donations are always accepted via PayPal although they are totally not tax-deductible. I'm open to considering proposed transitions from the federal civil service and the data on LinkedIn is probably a good starting point if anybody wants to talk.
  • Sporadic power outages are starting to erupt in Ashtabula. A sudden outage happened on Monday during my Regular Day Off that stressed the many UPS units littered around the house. Multiple outages happened today that also stressed the UPS units. Nothing too unusual has been going on other than snow has been melting.
  • Finances are balanced on a knife edge. Whoever said a government job was a life of luxury?

Positives:

  • I haven't broken any computers recently
  • I haven't run any cell phones or other electronics through the washer/dryer combo
  • My limited work with the church in mission outreach to one of the local nursing homes is still going on
  • I own my home

It isn't all bad. Tallying up the damage lately has just taken a bit of energy. There has been a lot of bad stuff going on.

Didier Roche: Welcome To The (Ubuntu) Bionic Age: Nautilus, a LTS and desktop icons

Tue, 23/01/2018 - 10:36 AM
Nautilus, Ubuntu 18.04 LTS and desktop icons: upstream and downstream views.

If you are following the news of various tech websites closely, one of the latest hot topics in the community was Nautilus removing desktop icons. Let’s try to clarify some points to ensure the various discussions around it have enough background information and aren’t driven by emotion alone, as could be seen lately. You will have both downstream (mine) and upstream (Carlos) perspectives here.

Why upstream Nautilus developers are removing the desktop icons

First, I wasn’t personally really surprised by the announcement. Let’s be clear: GNOME, since its 3.0 release, doesn’t have any icons on the desktop by default. There was an option in Tweaks to turn it back on, but let’s be honest: this wasn’t really supported.

The proof is that this code was never really maintained for 7 years and didn’t transition to newer view technologies like the ones Nautilus is migrating to. Having patched this code myself many years ago for Unity (moving desktop icons to the right depending on the icon size, in intellihide mode, which thus doesn’t set a workarea STRUT), I can testify that this code was getting old. Consequently, it became old and rotten for something not even used in the default upstream GNOME experience! It would be some irony to keep it that way.

I’m reading a lot of comments about “just keep it as an option, the answer is easy”. Let me disagree with this. As already stated previously during my artful blog post series, and for the same reason that we keep Ubuntu Dock with a very small set of supported options, any added option has a cost:

  • It’s another code path to test (manually, most of the time, unfortunately), and the exploding combination of options which can interact badly with each other just produces an unfinished project, where you have to be careful not to enable this and that option together, or it crashes or causes side-effects… People having played enough with Compiz Config Settings Manager should know what I’m talking about.
  • Not only that, but more code means more bugs, and if you have to transition to a newer technology, you have to modify that code as well. And working on that is detrimental to other bug fixes, features, tests or documentation that could benefit the project. So, this piece of code that you keep and don’t use has a very negative impact on your whole project. Worse, it indirectly impacts even users who are using the defaults, as they don’t benefit from planned enhancements in other parts of the project, due to the maintainer’s time constraints.

So, yeah, there is never “just an option”.

In addition to that argument, which I took to defend upstream’s position (even in front of the French ubuntu community), I also want to highlight that the plan to remove desktop icons was really well executed in terms of communication. However, seeing the feedback the upstream developers got when following this communication plan, which takes time, doesn’t motivate them to do it again, when in my opinion it should be the standard for any important (or considered as such) decisions:

  • Carlos blogged about it on planet GNOME by the end of December. He didn’t only explain the context, why this change, who is impacted by it, and possible solutions, but he also presented some proposals. So, there is a complete what/why/who/how!
  • In addition to this, there is a very good abstract, more technically oriented, in an upstream bug report.
  • A GNOME Shell proof of concept extension was even built to show that the long term solution for users who want icons on the desktop is feasible. It wasn’t just a throwaway experiment, as clear goals and targets were defined on what needs to be done to move from a PoC to a working extension. And yes, it was built by the exact same group who are removing desktop icons from Nautilus; I guess that means a lot!

Consequently, that is the detailed information users need to understand why this change is happening and what its long-term consequences will be. It is the foundation for good comments and exchanges on the various striking news blog posts. Very well done Carlos! I hope that more and more of those decisions on any free software project will be presented and explained as well as this one. ;)

A word from Nautilus upstream maintainer

Now that I’ve said what I wanted to tell about my view on the upstream changes, and before detailing what we are going to do for Ubuntu 18.04 LTS, let me introduce you to the Nautilus upstream maintainer already mentioned many times: Carlos Soriano.

Hello Ubuntu and GNOME community,

Thanks Didier for the detailed explanation and giving me a place in your blog!

I’m writing here because I wanted to clarify some details in the interaction with downstreams, in this case Ubuntu and Canonical developers. When I wrote the blog post with all the details, I explained only the part that purely refers to upstream Nautilus. And that was actually quite well received. However, two weeks after my blog post some websites explained the change in a not very good way (the clickbait magic).

Usually that is not very important; those who want factual information know that we are in IRC all the time and that we usually write blog posts about upcoming changes in our blog aggregation at Planet GNOME or here in Didier’s blog for Ubuntu. This time though, some people in our closer communities (both GNOME and Ubuntu) got splashed with misconceptions, and I wanted to address that, and what better than to do it with Didier :)

One misconception was that Ubuntu and Canonical were ‘yet again’ using an older version of software just because. Well, maybe you are surprised now: my recommendation for Ubuntu and Canonical was actually to stay on Nautilus 3.26. With an LTS version coming, that is by far the most reasonable option. While for a regular user the upstream recommendation is to try out nemo-desktop (by the way, this is another misconception: we said nemo-desktop, not Nemo the app; for a user those are in practice two different things), for a distribution that needs to support and maintain all kinds of requests and stability promises for years, staying with a single codebase that they have already worked with is the best option.

Another misconception I saw these days is that it seems we take decisions in a rush. In short, I became Nautilus maintainer 3 years and 4 months ago. Exactly 3 years and one month ago I realized that we needed to remove that part from Nautilus. It has been quite hard to reconcile within myself during these 3 years that an option which upstream did not consider the experience we wanted to provide was holding back most of the major work on Nautilus, including driving away contributions from new contributors, given the poor state of the desktop part, which unfortunately impacted the whole code of the application itself. In all this time, downstreams like Ubuntu were a major reason for me to hold this code. Discussions about this decision happened all this time with the other developers of Nautilus.

And the last misconception was that it looks like GNOME devs and Ubuntu devs are in completely separate niches where no one communicates with the other. While we are usually focused on our personal tasks, when a change is going to happen we do communicate. In this case, I reached out to the desktop team at Canonical before taking the final decision, providing a draft of the blog post to check the impact, and giving possible options for the LTS release of Ubuntu and beyond.

In summary, the takeaway from here is that while we might have slightly different visions, at the end of the day we just want to provide the best experience to the users, and for that, believe me, we do the best we can.

In case you have any question you can always reach out to us Nautilus upstream in #nautilus IRC channel at irc.gnome.org or in our mailing list.

Hope you enjoy this read, and hopefully the benefits of this work will be shown soon. Thanks again Didier!

What does this mean for Ubuntu?

We thought about this as the Ubuntu Desktop team. Our next release is an LTS (Long-Term Support) version, meaning that Ubuntu 18.04 (currently named Bionic Beaver during its development) will have 5 years of support in terms of bug fixes and security updates.

It also means that most of our user audience will upgrade from our last Ubuntu 16.04 LTS to Ubuntu 18.04 LTS (or even 14.04 -> 16.04 -> 18.04!). The changes over those last 2 years are quite large in terms of software updates and new features. On top of this, those users will experience the Unity -> GNOME Shell transition for the first time, and we want to give them a feeling of comfort and familiar landmarks in our default experience, despite the huge changes underneath.

On the Ubuntu desktop, we are shipping a Dock, visible by default. Consequently, the desktop view itself, without any application on top, is more important in our user experience than it is in the upstream GNOME Shell default. We think that shipping icons on the desktop is still relevant for our user base.

Where does this leave us regarding those changes? Thinking about the problem, we came to approximately the same conclusions that upstream Nautilus developers have:

  • Staying on Nautilus 3.26 for the LTS release: the pro is that it’s battle-tested code that we already know we can support (it shipped in 17.10). This matches the fact that the LTS is a very important and strong commitment for us. The con is that it won’t be the latest and greatest upstream Nautilus release by release date, and maybe some integrations with other parts of the GNOME 3.28 code would require more downstream work from us.
  • Using an alternative file manager for the desktop, like Nemo. Shipping entirely new code in an LTS, having to support 2 file managers (Nautilus for normal file browsing and Nemo for the desktop), and ensuring the integration between those two and all other applications works well quickly ruled out that solution.
  • Upgrading to Nautilus 3.28 and shipping the PoC GNOME-Shell extension, contributing to it as much as possible before release. The issue (despite this being the long-term solution) is that, as in the previous option, we would ship entirely new code, and the extension needs new APIs from Nautilus which aren’t fully shaped yet (and maybe won’t even be ready for GNOME 3.28). Also, we planned a long time in advance (end of September for 18.04 LTS) the features and work needed for the next release, and we still have a lot to do for Ubuntu 18.04 LTS, some of it being GNOME upstream code. Consequently, rushing into coding this extension, with our Feature Freeze deadline approaching on March 1st, would mean that we either drop some initially planned features, or fix fewer bugs and do less polish, which would be detrimental to our overall release quality.

As in every release, we decide on a component-by-component basis what to do (upgrade to latest or not), weighing the pros and cons and trying to take the best decision for our end user audience. We think that most GNOME components will be upgraded to 3.28. However, in this particular instance, we decided to keep Nautilus at version 3.26 in Ubuntu 18.04 LTS. You can read the discussion that took place during our weekly Ubuntu Desktop meeting on IRC, leading to that decision.

Another pro of that decision is that it gives flavors shipping Nautilus by default, like Ubuntu Budgie and Edubuntu, a little bit more time to find a solution matching their needs, as they don’t run GNOME Shell and so can’t use that extension.

The experience will thus be: desktop icons (with Nautilus 3.26) in the default ubuntu session. The vanilla GNOME session I talked about many times will still be running Nautilus 3.26 (as we can only have one version of a piece of software in the archive and installed on the user’s machine with traditional deb packages), but with icons on the desktop disabled, as we did on Ubuntu Artful. I think some motivated users will build Nautilus 3.28 in a PPA, but it won’t receive official security and bug fix support, of course.

Meanwhile, we will start contributing to a more long-term plan for this new GNOME Shell extension with the Nautilus developers, to shape a proper API, get good Drag and Drop support and so on, progressively… This will give better long-term code, and we hope that the following ubuntu releases will be able to move to it once it reaches the minimal set of features we want from it (and consequently, update to the latest Nautilus version!).

I hope that sheds some light on both the GNOME upstream and our ubuntu decisions, showing the two perspectives and why those actions were taken, as well as the long-term plan. Hopefully, these posts explaining a little bit of the context will lead to informed and constructive comments as well!

Benjamin Mako Hill: Introducing Computational Methods to Social Media Scientists

Tue, 23/01/2018 - 1:38 AM

The ubiquity of large-scale data and improvements in computational hardware and algorithms have enabled researchers to apply computational approaches to the study of human behavior. One of the richest contexts for this kind of work is social media datasets like Facebook, Twitter, and Reddit.

We were invited by Jean Burgess, Alice Marwick, and Thomas Poell to write a chapter about computational methods for the Sage Handbook of Social Media. Rather than simply listing what sorts of computational research have been done with social media data, we decided to use the chapter both to introduce a few computational methods and to use those methods in order to analyze the field of social media research.

A “hairball” diagram from the chapter illustrating how research on social media clusters into distinct citation network neighborhoods.

Explanations and Examples

In the chapter, we start by describing the process of obtaining data from web APIs and use as a case study our process for obtaining bibliographic data about social media publications from Elsevier’s Scopus API.  We follow this same strategy in discussing social network analysis, topic modeling, and prediction. For each, we discuss some of the benefits and drawbacks of the approach and then provide an example analysis using the bibliographic data.

We think that our analyses provide some interesting insight into the emerging field of social media research. For example, we found that social network analysis and computer science drove much of the early research, while recently consumer analysis and health research have become more prominent.

More importantly though, we hope that the chapter provides an accessible introduction to computational social science and encourages more social scientists to incorporate computational methods in their work, either by gaining computational skills themselves or by partnering with more technical colleagues. While there are dangers and downsides (some of which we discuss in the chapter), we see the use of computational tools as one of the most important and exciting developments in the social sciences.

Steal this paper!

One of the great benefits of computational methods is their transparency and their reproducibility. The entire process—from data collection to data processing to data analysis—can often be made accessible to others. This has both scientific benefits and pedagogical benefits.

To aid in the training of new computational social scientists, and as an example of the benefits of transparency, we worked to make our chapter pedagogically reproducible. We have created a permanent website for the chapter at https://communitydata.cc/social-media-chapter/ and uploaded all the code, data, and material we used to produce the paper itself to an archive in the Harvard Dataverse.

Through our website, you can download all of the raw data that we used to create the paper, together with code and instructions for how to obtain, clean, process, and analyze the data. Our website walks through what we have found to be an efficient and useful workflow for doing computational research on large datasets. This workflow even includes the paper itself, which is written using LaTeX + knitr. These tools let changes to data or code propagate through the entire workflow and be reflected automatically in the paper itself.

If you  use our chapter for teaching about computational methods—or if you find bugs or errors in our work—please let us know! We want this chapter to be a useful resource, will happily consider any changes, and have even created a git repository to help with managing these changes!

The book chapter and this blog post were written with Jeremy Foote and Aaron Shaw. You can read the book chapter here. This blog post was originally published on the Community Data Science Collective blog.

Dustin Kirkland: Dell XPS 13 with Ubuntu -- The Ultimate Developer Laptop of 2018!

Mon, 22/01/2018 - 9:51 AM

I'm the proud owner of a new Dell XPS 13 Developer Edition (9360) laptop, pre-loaded from the Dell factory with Ubuntu 16.04 LTS Desktop.

Kudos to the Dell and the Canonical teams that have engineered a truly remarkable developer desktop experience.  You should also check out the post from Dell's senior architect behind the XPS 13, Barton George.
As it happens, I'm also the proud owner of a long loved, heavily used, 1st Generation Dell XPS 13 Developer Edition laptop :-)  See this post from May 7, 2012.  You'll be happy to know that machine is still going strong.  It's now my wife's daily driver.  And I use it almost every day, for any and all hacking that I do from the couch, after hours, after I leave the office ;-)

Now, this latest XPS edition is a real dream of a machine!

From a hardware perspective, this newer XPS 13 sports an Intel i7-7660U 2.5GHz processor and 16GB of memory.  While that's mildly exciting to me (as I've long used i7's and 16GB), here's what I am excited about...

The 500GB NVME storage and a whopping 1239 MB/sec I/O throughput!

kirkland@xps13:~$ sudo hdparm -tT /dev/nvme0n1
/dev/nvme0n1:
Timing cached reads: 25230 MB in 2.00 seconds = 12627.16 MB/sec
Timing buffered disk reads: 3718 MB in 3.00 seconds = 1239.08 MB/sec

And on top of that, this is my first QHD+ touch screen laptop display, sporting a magnificent 3200x1800 resolution.  The graphics are nothing short of spectacular.  Here's nearly 4K of Hollywood hard "at work" :-)


The keyboard is super comfortable.  I like it a bit better than the 1st generation.  Unlike your Apple friends, we still have our F-keys, which is important to me as a Byobu user :-)  The placement of the PgUp, PgDn, Home, and End keys (as Fn + Up/Down/Left/Right) takes a while to get used to.


The speakers are decent for a laptop, and the microphone is excellent.  The webcam is placed in an odd location (lower left of the screen), but it has quite nice resolution and focus quality.


And Bluetooth and WiFi, well, they "just work".  I got 98.2 Mbits/sec of throughput over WiFi.

kirkland@xps:~$ iperf -c 10.0.0.45
------------------------------------------------------------
Client connecting to 10.0.0.45, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.0.149 port 40568 connected with 10.0.0.45 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.1 sec 118 MBytes 98.2 Mbits/sec

There's no external display port, so you'll need something like this USB-C-to-HDMI adapter to project to a TV or monitor.


There's 1x USB-C port, 2x USB-3 ports, and an SD-Card reader.


One of the USB-3 ports can be used to charge your phone or other devices, even while your laptop is suspended.  I use this all the time, to keep my phone topped up while I'm aboard planes, trains, and cars.  To do so, you'll need to enable "USB PowerShare" in the BIOS.  Here's an article from Dell's KnowledgeBase explaining how.


Honestly, I have only one complaint...  And that's that there is no Trackstick mouse (which is available on some Dell models).  I'm not a huge fan of the Touchpad.  It's too sensitive, and my palms are always touching it inadvertently.  So I need to use an external mouse to be effective.  I'll continue to provide this feedback to the Dell team, in the hopes that one day I'll have my perfect developer laptop!  Otherwise, this machine is a beauty.  I'm sure you'll love it too.

Cheers,
Dustin

Sebastian Dröge: Speeding up RGB to grayscale conversion in Rust by a factor of 2.2 – and various other multimedia related processing loops

Sun, 21/01/2018 - 2:48 PM

In the previous blog post I wrote about how to write a RGB to grayscale conversion filter for GStreamer in Rust. In this blog post I’m going to write about how to optimize the processing loop of that filter, without resorting to unsafe code or SIMD instructions by staying with plain, safe Rust code.

I also tried to implement the processing loop with faster, a Rust crate for writing safe SIMD code. It looks very promising, but unless I missed something in the documentation it currently is missing some features to be able to express this specific algorithm in a meaningful way. Once it works on stable Rust (waiting for SIMD to be stabilized) and includes runtime CPU feature detection, this could very well be a good replacement for the ORC library used for the same purpose in GStreamer in various places. ORC works by JIT-compiling a minimal “array operation language” to SIMD assembly for your specific CPU (and has support for x86 MMX/SSE, PPC Altivec, ARM NEON, etc.).

If someone wants to prove me wrong and implement this with faster, feel free to do so and I’ll link to your solution and include it in the benchmark results below.

All code below can be found in this GIT repository.

Table of Contents
  1. Baseline Implementation
  2. First Optimization – Assertions
  3. First Optimization – Assertions Try 2
  4. Second Optimization – Iterate a bit more
  5. Third Optimization – Getting rid of the bounds check finally
  6. Summary
  7. Addendum: slice::split_at
Baseline Implementation

This is what the baseline implementation looks like.

pub fn bgrx_to_gray_chunks_no_asserts(
    in_data: &[u8],
    out_data: &mut [u8],
    in_stride: usize,
    out_stride: usize,
    width: usize,
) {
    let in_line_bytes = width * 4;
    let out_line_bytes = width * 4;

    for (in_line, out_line) in in_data
        .chunks(in_stride)
        .zip(out_data.chunks_mut(out_stride))
    {
        for (in_p, out_p) in in_line[..in_line_bytes]
            .chunks(4)
            .zip(out_line[..out_line_bytes].chunks_mut(4))
        {
            let b = u32::from(in_p[0]);
            let g = u32::from(in_p[1]);
            let r = u32::from(in_p[2]);
            let x = u32::from(in_p[3]);

            let grey = ((r * RGB_Y[0]) + (g * RGB_Y[1]) + (b * RGB_Y[2]) + (x * RGB_Y[3])) / 65536;
            let grey = grey as u8;
            out_p[0] = grey;
            out_p[1] = grey;
            out_p[2] = grey;
            out_p[3] = grey;
        }
    }
}

This basically iterates over each line of the input and output frame (outer loop), and then for each BGRx chunk of 4 bytes in each line it converts the values to u32, multiplies with a constant array, converts back to u8 and stores the same value in the whole output BGRx chunk.

Note: This is only doing the actual conversion from linear RGB to grayscale (and in BT.601 colorspace). To do this conversion correctly you need to know your colorspaces and use the correct coefficients for conversion, and also do gamma correction. See this about why it is important.
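
For reference, here is a plausible definition of the RGB_Y table assumed by the code above (its declaration is not shown in the post): BT.601 luma weights scaled to 16.16 fixed point, matching the 19595/38470/7471 constants that show up in the generated assembly below.

// Hypothetical declaration of the coefficient table used above: BT.601 luma
// weights (0.299, 0.587, 0.114, 0.0) scaled by 65536, for the R, G, B and
// unused x components respectively.
const RGB_Y: [u32; 4] = [19595, 38470, 7471, 0];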

So what can be improved on this? For starters, let’s write a small benchmark for this so that we know whether any of our changes actually improve something. This is using the (unfortunately still) unstable benchmark feature of Cargo.

#![feature(test)]
#![feature(exact_chunks)]

extern crate test;

pub fn bgrx_to_gray_chunks_no_asserts(...)
    [...]
}

#[cfg(test)]
mod tests {
    use super::*;
    use test::Bencher;
    use std::iter;

    fn create_vec(w: usize, h: usize) -> Vec<u8> {
        iter::repeat(0).take(w * h * 4).collect::<_>()
    }

    #[bench]
    fn bench_chunks_1920x1080_no_asserts(b: &mut Bencher) {
        let i = test::black_box(create_vec(1920, 1080));
        let mut o = test::black_box(create_vec(1920, 1080));

        b.iter(|| bgrx_to_gray_chunks_no_asserts(&i, &mut o, 1920 * 4, 1920 * 4, 1920));
    }
}

This can be run with cargo bench and then prints the number of nanoseconds each iteration of the closure took. To really measure only the processing itself, allocations and initializations of the input/output frames happen outside of the closure. We’re not interested in times for those.

First Optimization – Assertions

To actually start optimizing this function, let’s take a look at the assembly that the compiler is outputting. The easiest way of doing that is via the Godbolt Compiler Explorer website. Select “rustc nightly” and use “-C opt-level=3” for the compiler flags, and then copy & paste your code in there. Once it compiles, to find the assembly that corresponds to a line, simply right-click on the line and “Scroll to assembly”.

Alternatively you can use cargo rustc --release -- -C opt-level=3 --emit asm and check the assembly file that is output in the target/release/deps directory.

What we see then for our inner loop is something like the following

.LBB4_19:
    cmp r15, r11
    mov r13, r11
    cmova r13, r15
    mov rdx, r8
    sub rdx, r13
    je .LBB4_34
    cmp rdx, 3
    jb .LBB4_35
    inc r9
    movzx edx, byte ptr [rbx - 1]
    movzx ecx, byte ptr [rbx - 2]
    movzx esi, byte ptr [rbx]
    imul esi, esi, 19595
    imul edx, edx, 38470
    imul ecx, ecx, 7471
    add ecx, edx
    add ecx, esi
    shr ecx, 16
    mov byte ptr [r10 - 3], cl
    mov byte ptr [r10 - 2], cl
    mov byte ptr [r10 - 1], cl
    mov byte ptr [r10], cl
    add r10, 4
    add r8, -4
    add r15, -4
    add rbx, 4
    cmp r9, r14
    jb .LBB4_19

This is already quite optimized. For each loop iteration the first few instructions are doing some bounds checking and if they fail jump to the .LBB4_34 or .LBB4_35 labels. How to understand that this is bounds checking? Scroll down in the assembly to where these labels are defined and you’ll see something like the following

.LBB4_34:
    lea rdi, [rip + .Lpanic_bounds_check_loc.D]
    xor esi, esi
    xor edx, edx
    call core::panicking::panic_bounds_check@PLT
    ud2
.LBB4_35:
    cmp r15, r11
    cmova r11, r15
    sub r8, r11
    lea rdi, [rip + .Lpanic_bounds_check_loc.F]
    mov esi, 2
    mov rdx, r8
    call core::panicking::panic_bounds_check@PLT
    ud2

Also if you check (with the colors, or the “scroll to source” feature) which Rust code these correspond to, you’ll see that it’s the first and third access to the 4-byte slice that contains our BGRx values.

Afterwards in the assembly, the following steps are happening:
0) incrementing the “loop counter” representing the number of iterations we’re going to do (r9),
1) reading the B, G and R values and converting them to u32 (the 3 movzx; note that the read of the x value is optimized away, as the compiler sees that it is always multiplied by 0 later),
2) multiplying with the array elements (the 3 imul),
3) combining the results and dividing, i.e. shifting (the 2 add and the shr),
4) storing the result in the output (the 4 mov).
Afterwards the slice pointers are increased by 4 (rbx and r10) and the lengths (used for bounds checking) are decreased by 4 (r8 and r15). Finally there’s a check (cmp) to see whether r9 (our loop counter) has reached the end of the slice, and if not we jump back to the beginning and operate on the next BGRx chunk.

Generally what we want to do for optimizations is to get rid of unnecessary checks (bounds checking), memory accesses, conditions (cmp, cmov) and jumps (the instructions starting with j). These are all things that are slowing down our code.

So the first thing that seems useful to optimize here is the bounds checking at the beginning. It definitely seems unnecessary to do two checks instead of one for the two slices (the checks cover both slices at once, but Godbolt does not detect that and attributes them only to the input slice). And ideally we could teach the compiler that no bounds checking is needed at all.

As I wrote in the previous blog post, often this knowledge can be given to the compiler by inserting assertions.

To go from two checks to a single one, you can insert an assert_eq!(in_p.len(), 4) at the beginning of the inner loop, and the same for the output slice, as sketched below. Now we only have a single bounds check left per iteration.
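A minimal sketch of what the inner loop could look like with these assertions added — the in_p/out_p names and the RGB_Y coefficient table follow the full function shown later in this post, so treat this as an illustration rather than the exact original code:

for (in_p, out_p) in in_line[..in_line_bytes]
    .chunks(4)
    .zip(out_line[..out_line_bytes].chunks_mut(4))
{
    // Tell the compiler that both chunks are exactly 4 bytes long so the
    // two per-chunk bounds checks can be collapsed into a single one.
    assert_eq!(in_p.len(), 4);
    assert_eq!(out_p.len(), 4);

    let b = u32::from(in_p[0]);
    let g = u32::from(in_p[1]);
    let r = u32::from(in_p[2]);
    let x = u32::from(in_p[3]);

    let grey = ((r * RGB_Y[0]) + (g * RGB_Y[1]) + (b * RGB_Y[2]) + (x * RGB_Y[3])) / 65536;
    let grey = grey as u8;
    out_p[0] = grey;
    out_p[1] = grey;
    out_p[2] = grey;
    out_p[3] = grey;
}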

As a next step we might want to try to move this knowledge outside the inner loop so that there is no bounds checking left in there at all. We might then want to add assertions like the following outside the outer loop to give the compiler all the knowledge we have

assert_eq!(in_data.len() % 4, 0);
assert_eq!(out_data.len() % 4, 0);
assert_eq!(out_data.len() / out_stride, in_data.len() / in_stride);

assert!(in_line_bytes <= in_stride);
assert!(out_line_bytes <= out_stride);

Unfortunately adding those has no effect at all on the inner loop, but having them outside the outer loop for good measure is not the worst idea, so let’s just keep them. At least they can serve as some kind of documentation of the invariants of this code for future readers.

So let’s benchmark these two implementations now. The results on my machine are the following

test tests::bench_chunks_1920x1080_no_asserts ... bench: 4,420,145 ns/iter (+/- 139,051)
test tests::bench_chunks_1920x1080_asserts    ... bench: 4,897,046 ns/iter (+/- 166,555)

This is surprising: our version without the assertions is actually faster by a factor of ~1.1, even though the version with assertions should have fewer bounds-checking conditions. So let’s take a closer look at the assembly at the top of the loop again, where the bounds checking happens, in the version with assertions

.LBB4_19:
    cmp rbx, r11
    mov r9, r11
    cmova r9, rbx
    mov r14, r12
    sub r14, r9
    lea rax, [r14 - 1]
    mov qword ptr [rbp - 120], rax
    mov qword ptr [rbp - 128], r13
    mov qword ptr [rbp - 136], r10
    cmp r14, 5
    jne .LBB4_33
    inc rcx
    [...]

While this indeed has only one jump for the bounds checking, as expected, the number of comparisons is the same and, even worse, 3 memory writes to the stack are happening right before the jump. If we follow the .LBB4_33 label we will see that the assert_eq! macro is going to do something with core::fmt::Debug. This is setting up the information needed for printing the assertion failure, the “expected X equals Y” output. This is certainly not good, and it is the reason why everything is slower now.

First Optimization – Assertions Try 2

All the additional instructions and memory writes were happening because the assert_eq! macro is outputting something user-friendly that actually contains the values of both sides. Let’s try again with the assert! macro instead, as sketched below.
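Concretely, that means replacing the two assertions in the inner loop with something like the following (same sketch as before, only the macro changes):

// assert! only has to reference a static message on failure, so the happy
// path does not need to set up any Debug formatting of the two values.
assert!(in_p.len() == 4);
assert!(out_p.len() == 4);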

test tests::bench_chunks_1920x1080_no_asserts ... bench: 4,420,145 ns/iter (+/- 139,051)
test tests::bench_chunks_1920x1080_asserts    ... bench: 4,897,046 ns/iter (+/- 166,555)
test tests::bench_chunks_1920x1080_asserts_2  ... bench: 3,968,976 ns/iter (+/- 97,084)

This already looks more promising. Compared to our baseline version this gives us a speedup of a factor of 1.12, and compared to the version with assert_eq! 1.23. If we look at the assembly for the bounds checks (everything else stays the same), it also looks more like what we would’ve expected

.LBB4_19:
    cmp rbx, r12
    mov r13, r12
    cmova r13, rbx
    add r13, r14
    jne .LBB4_33
    inc r9
    [...]

One cmp less, only one jump left. And no memory writes anymore!

So keep in mind that assert_eq! is more user-friendly, but quite a bit more expensive than assert! even in the “good case”.

Second Optimization – Iterate a bit more

This is still not very satisfying though. No bounds checking should be needed at all, as each chunk is going to be exactly 4 bytes. We’re just not able to convince the compiler that this is the case. While it may be possible (let me know if you find a way!), let’s try something different. The zip iterator is done when the shorter of the two iterators is done, and there are optimizations implemented specifically for zipped slice iterators. Let’s try that and replace the grayscale value calculation with

let grey = in_p.iter()
    .zip(RGB_Y.iter())
    .map(|(i, c)| u32::from(*i) * c)
    .sum::<u32>() / 65536;

If we run that through our benchmark after removing the assert!(in_p.len() == 4) (and the same for the output slice), these are the results

test tests::bench_chunks_1920x1080_asserts_2 ... bench:  3,968,976 ns/iter (+/- 97,084)
test tests::bench_chunks_1920x1080_iter_sum  ... bench: 11,393,600 ns/iter (+/- 347,958)

We’re actually 2.9 times slower! Even when adding back the assert!(in_p.len() == 4) assertion (and the same for the output slice) we’re still slower

test tests::bench_chunks_1920x1080_asserts_2 ... bench:  3,968,976 ns/iter (+/- 97,084)
test tests::bench_chunks_1920x1080_iter_sum  ... bench: 11,393,600 ns/iter (+/- 347,958)
test tests::bench_chunks_1920x1080_iter_sum_2 ... bench: 10,420,442 ns/iter (+/- 242,379)

If we look at the assembly of the assertion-less variant, it’s a complete mess now

.LBB0_19:
    cmp rbx, r13
    mov rcx, r13
    cmova rcx, rbx
    mov rdx, r8
    sub rdx, rcx
    cmp rdx, 4
    mov r11d, 4
    cmovb r11, rdx
    test r11, r11
    je .LBB0_20
    movzx ecx, byte ptr [r15 - 2]
    imul ecx, ecx, 19595
    cmp r11, 1
    jbe .LBB0_22
    movzx esi, byte ptr [r15 - 1]
    imul esi, esi, 38470
    add esi, ecx
    movzx ecx, byte ptr [r15]
    imul ecx, ecx, 7471
    add ecx, esi
    test rdx, rdx
    jne .LBB0_23
    jmp .LBB0_35
.LBB0_20:
    xor ecx, ecx
.LBB0_22:
    test rdx, rdx
    je .LBB0_35
.LBB0_23:
    shr ecx, 16
    mov byte ptr [r10 - 3], cl
    mov byte ptr [r10 - 2], cl
    cmp rdx, 3
    jb .LBB0_36
    inc r9
    mov byte ptr [r10 - 1], cl
    mov byte ptr [r10], cl
    add r10, 4
    add r8, -4
    add rbx, -4
    add r15, 4
    cmp r9, r14
    jb .LBB0_19

In short, there are now various new conditions and jumps for short-circuiting the zip iterator in the various cases. And because of all the noise added, the compiler was not even able to optimize the bounds check for the output slice away anymore (.LBB0_35 cases). While it was able to unroll the iterator (note that the 3 imul multiplications are not interleaved with jumps and are actually 3 multiplications instead of yet another loop), which is quite impressive, it couldn’t do anything meaningful with that information it somehow got (it must’ve understood that each chunk has 4 bytes!). This looks like something going wrong somewhere in the optimizer to me.

If we take a look at the variant with the assertions, things look much better

.LBB3_19:
    cmp r11, r12
    mov r13, r12
    cmova r13, r11
    add r13, r14
    jne .LBB3_33
    inc r9
    movzx ecx, byte ptr [rdx - 2]
    imul r13d, ecx, 19595
    movzx ecx, byte ptr [rdx - 1]
    imul ecx, ecx, 38470
    add ecx, r13d
    movzx ebx, byte ptr [rdx]
    imul ebx, ebx, 7471
    add ebx, ecx
    shr ebx, 16
    mov byte ptr [r10 - 3], bl
    mov byte ptr [r10 - 2], bl
    mov byte ptr [r10 - 1], bl
    mov byte ptr [r10], bl
    add r10, 4
    add r11, -4
    add r14, 4
    add rdx, 4
    cmp r9, r15
    jb .LBB3_19

This is literally the same as the assertion version we had before, except that the reading of the input slice, the multiplications and the additions are happening in iterator order instead of being batched all together. It’s quite impressive that the compiler was able to completely optimize away the zip iterator here, but unfortunately it’s still roughly 2.4 times slower than the original version. The reason must be this instruction ordering: the previous version had all memory reads batched and then the operations batched, which is apparently much better for the internal pipelining of the CPU (it can already execute upcoming instructions that don’t depend on previous ones while waiting for the pending instructions to finish).

It’s also not clear to me why the LLVM optimizer is not able to schedule the instructions the same way here. It apparently has all the information it needs if no iterator is involved, and both versions lead to exactly the same assembly except for the order of instructions. This also seems fishy.

Nonetheless, we still have our manual bounds check (the assertion) left here and we should really try to get rid of that. No progress so far.

Third Optimization – Getting rid of the bounds check finally

Let’s tackle this from a different angle now. Our problem is apparently that the compiler is not able to understand that each chunk is exactly 4 bytes.

So why don’t we write a new chunks iterator that always has exactly the requested number of items, instead of potentially fewer for the very last iteration? And instead of panicking if there are leftover elements, it seems useful to just ignore them. That way we have an API that is functionally different from the existing chunks iterator and provides behaviour that is useful in various cases. It’s basically the slice equivalent of the exact_chunks iterator of the ndarray crate.

By having it functionally different from the existing one, and not just an optimization, I also submitted it for inclusion in Rust’s standard library, and it’s nowadays available as an unstable feature in nightly, like all newly added API. Nonetheless, the same can also be implemented inside your own code with basically the same effect; there are no dependencies on standard library internals. A sketch of such an iterator follows below.
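A minimal sketch of how such an exact-chunks iterator could be implemented on top of safe slice operations; this is an illustration of the idea described above, not the exact code that went into the standard library:

// A simplified exact-chunks iterator over an immutable slice: it always
// yields chunks of exactly `chunk_size` elements and silently ignores any
// leftover elements at the end of the slice.
struct ExactChunks<'a, T: 'a> {
    rest: &'a [T],
    chunk_size: usize,
}

impl<'a, T> Iterator for ExactChunks<'a, T> {
    type Item = &'a [T];

    fn next(&mut self) -> Option<&'a [T]> {
        if self.rest.len() < self.chunk_size {
            // Fewer than chunk_size elements left: stop and ignore the rest.
            return None;
        }
        let (chunk, rest) = self.rest.split_at(self.chunk_size);
        self.rest = rest;
        Some(chunk)
    }
}

fn exact_chunks<'a, T>(slice: &'a [T], chunk_size: usize) -> ExactChunks<'a, T> {
    assert!(chunk_size != 0);
    ExactChunks { rest: slice, chunk_size }
}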

So, let’s use our new exact_chunks iterator that is guaranteed (by API) to always give us exactly 4 bytes. In our case this is exactly equivalent to the normal chunks as by construction our slices always have a length that is a multiple of 4, but the compiler can’t infer that information. The resulting code looks as follows

pub fn bgrx_to_gray_exact_chunks(
    in_data: &[u8],
    out_data: &mut [u8],
    in_stride: usize,
    out_stride: usize,
    width: usize,
) {
    assert_eq!(in_data.len() % 4, 0);
    assert_eq!(out_data.len() % 4, 0);
    assert_eq!(out_data.len() / out_stride, in_data.len() / in_stride);

    let in_line_bytes = width * 4;
    let out_line_bytes = width * 4;

    assert!(in_line_bytes <= in_stride);
    assert!(out_line_bytes <= out_stride);

    for (in_line, out_line) in in_data
        .exact_chunks(in_stride)
        .zip(out_data.exact_chunks_mut(out_stride))
    {
        for (in_p, out_p) in in_line[..in_line_bytes]
            .exact_chunks(4)
            .zip(out_line[..out_line_bytes].exact_chunks_mut(4))
        {
            assert!(in_p.len() == 4);
            assert!(out_p.len() == 4);

            let b = u32::from(in_p[0]);
            let g = u32::from(in_p[1]);
            let r = u32::from(in_p[2]);
            let x = u32::from(in_p[3]);

            let grey = ((r * RGB_Y[0]) + (g * RGB_Y[1]) + (b * RGB_Y[2]) + (x * RGB_Y[3])) / 65536;
            let grey = grey as u8;
            out_p[0] = grey;
            out_p[1] = grey;
            out_p[2] = grey;
            out_p[3] = grey;
        }
    }
}

It’s exactly the same as the previous version with assertions, except for using exact_chunks instead of chunks, and the same for the mutable iterator. The resulting benchmark of all our variants now looks as follows

test tests::bench_chunks_1920x1080_no_asserts ... bench:  4,420,145 ns/iter (+/- 139,051)
test tests::bench_chunks_1920x1080_asserts    ... bench:  4,897,046 ns/iter (+/- 166,555)
test tests::bench_chunks_1920x1080_asserts_2  ... bench:  3,968,976 ns/iter (+/- 97,084)
test tests::bench_chunks_1920x1080_iter_sum   ... bench: 11,393,600 ns/iter (+/- 347,958)
test tests::bench_chunks_1920x1080_iter_sum_2 ... bench: 10,420,442 ns/iter (+/- 242,379)
test tests::bench_exact_chunks_1920x1080      ... bench:  2,007,459 ns/iter (+/- 112,287)

Compared to our initial version this is a speedup of a factor of 2.2; compared to our version with assertions, a factor of 1.98. This seems like a worthwhile improvement, and if we look at the resulting assembly there are no bounds checks at all anymore

.LBB0_10:
    movzx edx, byte ptr [rsi - 2]
    movzx r15d, byte ptr [rsi - 1]
    movzx r12d, byte ptr [rsi]
    imul r13d, edx, 19595
    imul edx, r15d, 38470
    add edx, r13d
    imul ebx, r12d, 7471
    add ebx, edx
    shr ebx, 16
    mov byte ptr [rcx - 3], bl
    mov byte ptr [rcx - 2], bl
    mov byte ptr [rcx - 1], bl
    mov byte ptr [rcx], bl
    add rcx, 4
    add rsi, 4
    dec r10
    jne .LBB0_10

Also, due to this the compiler is able to apply some more optimizations: we only have a single loop counter for the number of iterations (r10) and the two pointers (rcx and rsi) that are advanced in each iteration. There is no tracking of the remaining slice lengths anymore, as in the assembly of the original version (and the versions with assertions).

Summary

So overall we got a speedup of a factor of 2.2 while still writing very high-level Rust code with iterators and not falling back to unsafe code or using SIMD. The optimizations the Rust compiler is applying are quite impressive and the Rust marketing line of zero-cost abstractions is really visible in reality here.

The same approach should also work for many similar algorithms, and thus many similar multimedia related algorithms where you iterate over slices and operate on fixed-size chunks.

Also, the above shows that as a first step it’s better to write clean and understandable high-level Rust code without worrying too much about performance (assume the compiler can optimize well), and only afterwards take a look at the generated assembly and check which instructions should really go away (like bounds checking). In many cases this can be achieved by adding assertions in strategic places, or, like in this case, by switching to a slightly different abstraction that is closer to the actual requirements (however, I believe the compiler should be able to produce the same code with the help of assertions and the normal chunks iterator; making that possible probably requires improvements to the LLVM optimizer).

And if all does not help, there’s still the escape hatch of unsafe (for using functions like slice::get_unchecked() or going down to raw pointers) and the possibility of using SIMD instructions (by using faster or stdsimd directly). But in the end this should be a last resort for those little parts of your code where optimizations are needed and the compiler can’t be easily convinced to do it for you.
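For completeness, a rough sketch of what the unsafe escape hatch could look like for the inner loop of this example — purely illustrative, reusing the in_p/out_p/RGB_Y names from above; get_unchecked skips the bounds checks entirely, so the caller must guarantee that both chunks really are at least 4 bytes long:

// SAFETY: only valid if in_p and out_p are guaranteed to be at least 4 bytes long.
unsafe {
    let b = u32::from(*in_p.get_unchecked(0));
    let g = u32::from(*in_p.get_unchecked(1));
    let r = u32::from(*in_p.get_unchecked(2));
    let x = u32::from(*in_p.get_unchecked(3));

    let grey = ((r * RGB_Y[0]) + (g * RGB_Y[1]) + (b * RGB_Y[2]) + (x * RGB_Y[3])) / 65536;
    let grey = grey as u8;
    *out_p.get_unchecked_mut(0) = grey;
    *out_p.get_unchecked_mut(1) = grey;
    *out_p.get_unchecked_mut(2) = grey;
    *out_p.get_unchecked_mut(3) = grey;
}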

Addendum: slice::split_at

User newpavlov suggested on Reddit using repeated slice::split_at in a while loop for similar performance.

This would for example look like the following

pub fn bgrx_to_gray_split_at(
    in_data: &[u8],
    out_data: &mut [u8],
    in_stride: usize,
    out_stride: usize,
    width: usize,
) {
    assert_eq!(in_data.len() % 4, 0);
    assert_eq!(out_data.len() % 4, 0);
    assert_eq!(out_data.len() / out_stride, in_data.len() / in_stride);

    let in_line_bytes = width * 4;
    let out_line_bytes = width * 4;

    assert!(in_line_bytes <= in_stride);
    assert!(out_line_bytes <= out_stride);

    for (in_line, out_line) in in_data
        .exact_chunks(in_stride)
        .zip(out_data.exact_chunks_mut(out_stride))
    {
        let mut in_pp: &[u8] = in_line[..in_line_bytes].as_ref();
        let mut out_pp: &mut [u8] = out_line[..out_line_bytes].as_mut();
        assert!(in_pp.len() == out_pp.len());

        while in_pp.len() >= 4 {
            let (in_p, in_tmp) = in_pp.split_at(4);
            let (out_p, out_tmp) = { out_pp }.split_at_mut(4);
            in_pp = in_tmp;
            out_pp = out_tmp;

            let b = u32::from(in_p[0]);
            let g = u32::from(in_p[1]);
            let r = u32::from(in_p[2]);
            let x = u32::from(in_p[3]);

            let grey = ((r * RGB_Y[0]) + (g * RGB_Y[1]) + (b * RGB_Y[2]) + (x * RGB_Y[3])) / 65536;
            let grey = grey as u8;
            out_p[0] = grey;
            out_p[1] = grey;
            out_p[2] = grey;
            out_p[3] = grey;
        }
    }
}

Performance-wise this brings us very close to the exact_chunks version

test tests::bench_exact_chunks_1920x1080 ... bench: 1,965,631 ns/iter (+/- 58,832)
test tests::bench_split_at_1920x1080     ... bench: 2,046,834 ns/iter (+/- 35,990)

and the assembly is also very similar

.LBB0_10:
    add rbx, -4
    movzx r15d, byte ptr [rsi]
    movzx r12d, byte ptr [rsi + 1]
    movzx edx, byte ptr [rsi + 2]
    imul r13d, edx, 19595
    imul r12d, r12d, 38470
    imul edx, r15d, 7471
    add edx, r12d
    add edx, r13d
    shr edx, 16
    movzx edx, dl
    imul edx, edx, 16843009
    mov dword ptr [rcx], edx
    lea rcx, [rcx + 4]
    add rsi, 4
    cmp rbx, 3
    ja .LBB0_10

Here the compiler even optimizes the storing of the value into a single write operation of 4 bytes, at the cost of an additional multiplication and zero-extend register move.
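The multiplication constant 16843009 is 0x01010101, so multiplying the zero-extended grayscale byte by it broadcasts that byte into all four bytes of a u32, which is what permits the single dword store. A small illustration of the trick (not from the original post):

// 16843009 == 0x01010101: multiplying a zero-extended byte by it replicates
// that byte into all four bytes of the resulting u32.
let grey: u8 = 0x2a;
let broadcast = u32::from(grey) * 0x0101_0101;
assert_eq!(broadcast, 0x2a2a_2a2a);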

Overall this code performs very well too, but in my opinion it looks rather ugly compared to the versions using the different chunks iterators. Also, this is basically what the exact_chunks iterator does internally: repeatedly calling slice::split_at. In theory both versions could lead to the very same assembly, but the LLVM optimizer currently handles them slightly differently.

Joe Barker: Configuring MSMTP On Ubuntu 16.04 (Again)

Pre, 19/01/2018 - 11:30pd

This post exists as a copy of what I had on my previous blog about configuring MSMTP on Ubuntu 16.04; I’m posting it as-is for posterity, and have no idea if it’ll work on later versions. As I’m not hosting my own Ubuntu/MSMTP server anymore I can’t see any updates being made to this, but if I ever do have to set this up again I’ll create an updated post! Anyway, here’s what I had…

I previously wrote an article around configuring msmtp on Ubuntu 12.04, but as I hinted at in a previous post that sort of got lost when the upgrade of my host to Ubuntu 16.04 went somewhat awry. What follows is essentially the same post, with some slight updates for 16.04. As before, this assumes that you’re using Apache as the web server, but I’m sure it shouldn’t be too different if your web server of choice is something else.

I use msmtp for sending emails from this blog to notify me of comments and upgrades etc. Here I’m going to document how I configured it to send emails via a Google Apps account, although this should also work with a standard Gmail account too.

To begin, we need to install 3 packages:
sudo apt-get install msmtp msmtp-mta ca-certificates
Once these are installed, a default config is required. By default msmtp will look at /etc/msmtprc, so I created that using vim, though any text editor will do the trick. This file looked something like this:

# Set defaults.
defaults

# Enable or disable TLS/SSL encryption.
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt

# Setup WP account's settings.
account ACCOUNT_NAME
host smtp.gmail.com
port 587
auth login
user USERNAME
password PASSWORD
from EMAIL_ADDRESS
logfile /var/log/msmtp/msmtp.log

account default : ACCOUNT_NAME

Any of the uppercase items (i.e. ACCOUNT_NAME, USERNAME, PASSWORD, EMAIL_ADDRESS) are things that need replacing specific to your configuration. The exception to that is the log file, which can of course be placed wherever you wish to log any msmtp activity/warnings/errors to.

Once that file is saved, we’ll update the permissions on the above configuration file — msmtp won’t run if the permissions on that file are too open — and create the directory for the log file.

sudo mkdir /var/log/msmtp
sudo chown -R www-data:adm /var/log/msmtp
sudo chmod 0600 /etc/msmtprc

Next I chose to configure logrotate for the msmtp logs, to make sure that the log files don’t get too large as well as keeping the log directory a little tidier. To do this, we create /etc/logrotate.d/msmtp and configure it with the following file. Note that this is optional, you may choose to not do this, or you may choose to configure the logs differently.

/var/log/msmtp/*.log {
    rotate 12
    monthly
    compress
    missingok
    notifempty
}

Now that the logging is configured, we need to tell PHP to use msmtp by editing /etc/php/7.0/apache2/php.ini and updating the sendmail path from
sendmail_path =
to
sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a -t"
Here I ran into an issue where, even though I specified the account name, it wasn’t sending emails correctly when I tested it. This is why the line account default : ACCOUNT_NAME was placed at the end of the msmtp configuration file. To test the configuration, ensure that the PHP file has been saved, run sudo service apache2 restart, then run php -a and execute the following

mail ('personal@email.com', 'Test Subject', 'Test body text'); exit();

Any errors that occur at this point will be displayed in the output, which should make diagnosing any problems after the test relatively easy. If all is successful, you should now be able to use PHP’s sendmail (which at the very least WordPress uses) to send emails from your Ubuntu server using Gmail (or Google Apps).

I make no claims that this is the most secure configuration, so if you come across this and realise it’s grossly insecure or something is drastically wrong please let me know and I’ll update it accordingly.

Sean Davis: MenuLibre 2.1.4 Released

Enj, 18/01/2018 - 12:50md

The wait is over. MenuLibre 2.1.4 is now available for public testing and translations! With well over 100 commits, numerous bug fixes, and a lot of polish, the best menu editing solution for Linux is ready for primetime.

What’s New?
New Features
  • New Test Launcher button to try out new changes without saving (LP: #1315875)
  • New Sort Alphabetically button to instantly sort subdirectories (LP: #1315536)
  • Added ability to create subdirectories in system-installed paths (LP: #1315872)
  • New Parsing Errors tool to review invalid launcher files
  • New layout preferences! Budgie, GNOME, and Pantheon users will find that MenuLibre uses client side decorations (CSD) by default, while other desktops will use the more traditional server side decorations with a toolbar. Users are able to set their preference via the commandline.
    • -b, --headerbar to use the headerbar layout
    • -t, --toolbar to use the toolbar layout
General
  • The folder icon is now used in place of applications-other for directories (LP: #1605905)
  • DBusActivatable and Hidden keys are now represented by switches instead of text entries
  • Additional non-standard but commonly used categories have been added
  • Support for the Implements key has been added
  • Cinnamon, EDE, LXQt, and Pantheon have been added to the list of supported ShowIn environments
  • All file handling has been replaced with the better-maintained GLib KeyFile library
  • The Version key has been bumped to 1.1 to comply with the latest version of the specification
Bug Fixes
  • TypeError when adding a launcher and nothing is selected in the directory view (LP: #1556664)
  • Invalid categories added to first launcher in top-level directory under Xfce (LP: #1605973)
  • Categories created by Alacarte not respected, custom launchers deleted (LP: #1315880)
  • Exit application when Ctrl-C is pressed in the terminal (LP: #1702725)
  • Some categories cannot be removed from a launcher (LP: #1307002)
  • Catch exceptions when saving and display an error (LP: #1444668)
  • Automatically replace ~ with full home directory (LP: #1732099)
  • Make hidden items italic (LP: #1310261)
Translation Updates
  • This is the first release with complete documentation for every translated string in MenuLibre. This allows translators to better understand the context of each string when they adapt MenuLibre to their language, and should lead to more and better quality translations in the future.
  • The following languages were updated since MenuLibre 2.1.3:
    • Brazilian Portuguese, Catalan, Croatian, Danish, French, Galician, German, Italian, Kazakh, Lithuanian, Polish, Russian, Slovak, Spanish, Swedish, Ukrainian
Downloads

The latest version of MenuLibre can always be downloaded from the Launchpad archives. Grab version 2.1.4 from the below link.

https://launchpad.net/menulibre/2.1/2.1.4/+download/menulibre-2.1.4.tar.gz

  • SHA-256: 36a6350019e45fbd1219c19a9afce29281e806993d4911b45b371dac50064284
  • SHA-1: 498fdd0b6be671f4388b6fa77a14a7d1e127e7ce
  • MD5: 0e30f24f544f0929621046d17874ecf0
