Planet Ubuntu - http://planet.ubuntu.com/

Michael Zanetti: nymea

Mon, 15/10/2018 - 7:01pm

It’s been quite a while since I last wrote a post. Lots of things have changed around here, but even though I am not actively developing for Ubuntu itself any more, that doesn’t mean I’ve left the Ubuntu and FOSS world in general. In fact, I’ve been pretty busy hacking on some more free software goodness. A few of you will surely have heard about it already, but for everyone else, allow me to introduce you to nymea.

nymea is an IoT platform mainly based on Ubuntu. Well, that’s where we develop it; we also provide packages for Debian and snaps for all the platforms that support snaps.

It consists of 3 parts: nymea:core, nymea:app and nymea:cloud.
The purpose of this project is to enable easy integration of various things with each other. Being plugin-based, it lets you make all sorts of things (devices, online services…) work together.

Practically speaking this means two things:

– It will allow users to have a completely open source smart home setup in which everything is processed offline, including the smartness. Turning your living room lights on when it gets dark? nymea will do it, and it’ll do it even without your internet connection. It comes with nymea:core, to be installed on a gateway device in your home (a Raspberry Pi or any other device that can run Ubuntu/Debian or snapd), and nymea:app, available in app stores and also as a desktop app in the snap store.

– It delivers a developer platform for device makers. Looking for a solution that easily allows you to make your device smart? Ubuntu Core + nymea:core together will get you sorted in no time: you’ll have an app for your “thing”, and it will be able to react to just about any input it gets.

nymea:cloud is an optional addition to nymea:core and nymea:app and extends the nymea system with features like remote connection, push notifications, or Alexa integration (not released yet).

So if that got you curious, check out https://wiki.nymea.io (and perhaps https://nymea.io in general) or simply install nymea and nymea-app and get going (on snap systems you need to connect some plugs and interfaces for all the bits and pieces to work; alternatively, we have a PPA ready for use too).
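
For reference, here is a minimal sketch of the snap route; the snap names follow the text above, but the exact plugs to connect vary, so the connect line is only a hypothetical example (consult the nymea documentation for the real plug names):

sudo snap install nymea
sudo snap install nymea-app
# List the plugs/slots of the nymea snap (use `snap interfaces nymea` on older snapd):
snap connections nymea
# Connect a plug manually, for example a hypothetical hardware-observe plug:
# sudo snap connect nymea:hardware-observe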

Jeremy Bicha: Google Cloud Print in Ubuntu

Sun, 14/10/2018 - 4:31pm

There is an interesting hidden feature available in Ubuntu 18.04 LTS and newer. To enable this feature, first install cpdb-backend-gcp.

sudo apt install cpdb-backend-gcp

Make sure you are signed in to Google with GNOME Online Accounts. Open the Settings app (gnome-control-center) to the Online Accounts page. If your Google account is near the top above the Add an account section, then you’re all set.
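
If you prefer to jump straight to that panel from a terminal, the Settings app can be launched with the panel name as an argument (assuming the GNOME Settings binary, gnome-control-center, is installed):

gnome-control-center online-accounts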

Currently, only LibreOffice is supported. Hopefully, for 19.04, other GTK+ apps will be able to use the feature.

This feature was developed by Nilanjana Lodh and Abhijeet Dubey when they were Google Summer of Code 2017 participants. Their mentors were Till Kamppeter, Aveek Basu, and Felipe Borges.

Till has been trying to get this feature installed by default in Ubuntu since 18.04 LTS, but it looks like it won’t make it in until 19.04.

I haven’t seen this feature packaged in any other Linux distros yet. That might be because people don’t know about it, so that’s why I’m posting about it today! If you are a distro packager, the three packages you need are cpdb-libs, cpdb-backend-gcp, and cpdb-backend-cups. The final package enables easy printing to any IPP printer. (I didn’t mention it earlier because I believe Ubuntu 18.04 LTS already supports that feature through a different package.)
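
On a Debian-based system, pulling in all three packages is a single apt line; this is only a sketch using the package names listed above, and availability will vary by distro and release:

sudo apt install cpdb-libs cpdb-backend-gcp cpdb-backend-cups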

Save to Google Drive

In my original blog post, I confused the cpdb feature with a feature that already exists in GTK3 built with GNOME Online Accounts support. This should already work on most distros.

When you print a document, there will be an extra Save to Google Drive option. Saving to Google Drive saves a PDF of your document to your Google Drive account.

This post was edited on October 16 to mention that cpdb only supports LibreOffice now and that Save to Google Drive is a GTK3 feature instead.

October 17: Please see Felipe’s comments. It turns out that even Google Cloud Print works fine in distros with recent GTK3. The point of the cpdb feature is to make this work in apps that don’t use GTK3. So I guess the big benefit now is that you can use Google Cloud Print or Save to Google Drive from LibreOffice.

Julian Andres Klode: The demise of G+ and return to blogging (w/ mastodon integration)

Sat, 13/10/2018 - 11:03pm

I’m back to blogging, after shutting down my wordpress.com hosted blog in spring. This time, fully privacy-aware, self-hosted, and integrated with Mastodon.

Let’s talk details: In spring, I shut down my wordpress.com hosted blog, due to concerns about GDPR implications with comment hosting and ads and stuff. I’d like to apologize for using that: back when I started (in 2007), it was the easiest way to get into blogging. Please forgive me for subjecting you to that!

Recently, Google announced the end of Google+. As some of you might know, I posted a lot of medium-long posts there, rather than doing blog posts; especially after I disabled the wordpress site.

With the end of Google+, I want to try something new: I’ll host longer pieces on this blog, and post shorter messages on @juliank@mastodon.social. If you follow the Mastodon account, you will see toots for each new blog post as well, linking to the blog post.

Mastodon integration and privacy

Now comes the interesting part: If you reply to the toot, your reply will be shown on the blog itself. This works with a tiny bit of JavaScript that talks to a simple server-side script, which finds toots from me mentioning the blog post and then the replies to those toots.

This protects your privacy: mastodon.social does not see which blog post you are looking at, because it is contacted by the server, not by you. Rendering avatars requires loading images from mastodon.social’s file server, however; to improve your privacy, all avatars are loaded with referrerpolicy='no-referrer', so assuming your browser is halfway sane, it should not be telling mastodon.social which post you visited either. In fact, the entire domain also sets Referrer-Policy: no-referrer as an HTTP header, so any link you follow will not have a referrer set.
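
The server-side script itself is not shown in this post, but the lookups it performs can be approximated with the public Mastodon REST API; the account and status IDs below are placeholders, and the blog URL is just an example:

# Recent statuses of an account (to find toots mentioning a blog post):
curl -s "https://mastodon.social/api/v1/accounts/123456/statuses?limit=20"
# The reply tree (context) of one of those toots:
curl -s "https://mastodon.social/api/v1/statuses/987654321/context"
# A quick check of the site-wide referrer policy header:
curl -sI "https://blog.example.org/" | grep -i referrer-policy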

The integration was originally written by @bjoern@mastodon.social – I have done some moderate improvements to adapt it to my theme, make it more reusable, and replace and extend the caching done in a JSON file with a Redis database.

Source code

This blog is free software; generated by the Hugo snap. All source code for it is available:

(Yes I am aware that hosting the repositories on GitHub is a bit ironic given the whole focus on privacy and self-hosting).

The theme makes use of Hugo pipes to minify and fingerprint JavaScript, and vendorizes all dependencies instead of embedding CDN links, to, again, protect your privacy.

Future work

I think I want to make the theme dark, to be friendlier to the eyes. I also might want to make the Mastodon integration a bit friendlier to use. And I want to get rid of jQuery; it’s only used for a handful of calls in the Mastodon integration JavaScript.

If you have any other idea for improvements, feel free to join the conversation in the mastodon toot, send me an email, or open an issue at the github projects.

Closing thoughts

I think the end of Google+ will be an interesting time, requiring a lot of people in the open source world to replace one of their main communication channels with a different approach.

Mastodon and Diaspora are both in the race, and I fear the community will split or everyone will have two accounts in the end. I personally think that Mastodon + syndicated blogs provide a good balance: You can quickly write short posts (up to 500 characters), and you can host long articles on your own and link to them.

I hope that one day diaspora* and mastodon federate together. If we end up with one federated network that would be the best outcome.

Jeremy Bicha: Shutter removed from Debian & Ubuntu

Sat, 13/10/2018 - 8:29pm

This week, the popular screenshot app Shutter was removed from Debian Unstable & Ubuntu 18.10. (It had already been removed from Debian “Buster” 6 months ago and some of its “optional” dependencies had already been removed from Ubuntu 18.04 LTS).

Shutter will need to be ported to gtk3 before it can return to Debian. (Ideally, it would support Wayland desktops too but that’s not a blocker for inclusion in Debian.)

See the Debian bug for more discussion.

I am told that flameshot is a nice well-maintained screenshot app.

I believe Snap or Flatpak are great ways to make apps that use obsolete libraries available on modern distros that can no longer keep those libraries around. There isn’t a Snap or Flatpak version of Shutter yet, so hopefully someone interested in that will help create one.

David Tomaschik: Course Review: Adversarial Attacks and Hunt Teaming

Fri, 12/10/2018 - 9:00am

At DerbyCon 8, I had the opportunity to take the “Adversarial Attacks and Hunt Teaming” course presented by Ben Ten and Larry Spohn from TrustedSec. I went into the course hoping to get a refresher on the latest techniques for Windows domains (I do mostly Linux, IoT & Web Apps at work) as well as to get a better understanding of how hunt teaming is done. (As a Red Teamer, I feel understanding the work done by the blue team is critical to better success and reducing detection.) From the course description:

This course is completely hands-on, focusing on the latest attack techniques and building a defense strategy around them. This workshop will cover both red and blue team efforts and provide methods for understanding how to best detect threats in an enterprise. It will give penetration testers the ability to learn the newest techniques, as well as teach blue teamers how to defend against them.

The Good

The course was definitely hands-on, which I really appreciate as someone who learns by “doing” rather than by listening to someone talk. Both instructors were obviously knowledgeable and able to answer questions about how tools and techniques work. It’s really valuable to understand why things work instead of just running commands blindly. Having the why lets you pivot your knowledge to other tools when your first choice isn’t working for some reason. (AV, endpoint protection, etc.)

Both instructors are strong teachers with an obvious passion for what they do. They presented the material well and mostly at a reasonable pace. They also tag-team well: while one is presenting, the other can help students having issues without delaying the entire class.

The final lab/exam was really good. We were challenged to get Domain Admin on a network we hadn’t seen so far, with the top 5 finishers receiving challenge coins. Despite how little I do with Windows, I was happy to be one of the recipients!

The Bad

The course began quite slowly for my experience level. The first half-day or so involved basic reconnaissance with nmap and an introduction to Metasploit. While I understand that not everyone has experience with these tools, the course description did not make me feel like it would be as basic as was presented.

There was a section on physical attacks that, while extremely interesting, was not really a good fit for the rest of the course material. It was too brief to really learn how to execute these attacks from a Red Team perspective, and physical security is often out of scope for the Blue Team (or handled by a different group). Other than entertainment value, I do not feel like it added anything to the course.

I would have liked a little more “Blue” content. The hunt-teaming section was mostly about configuring Windows Logging and pointing it to an ELK server for aggregation and analysis. Again, this was interesting, but we did not dive into other sources of data (network firewalls, non-Windows systems, etc.) like I hoped we would. It also did not spend any time discussing how to relate different events, only how to log the events you would want to look for.

Summary

Overall, I think this is a good course presented by excellent instructors. If you’ve done an OSCP course or even basic penetration testing, expect some duplication in the first day or so, but there will still be techniques that you might not have seen (or had the chance to try out) before. This was my first time trying the “Kerberoasting” attack, so it was nice to be able to do it hands-on. Overall a solid course, but I’d generally recommend it to those early in their careers or transitioning to an offensive security role.

Simos Xenitellis: How to create a minimal container image for LXC/LXD with distrobuilder

Wed, 10/10/2018 - 10:12pm

In the previous post,

Using distrobuilder to create container images for LXC and LXD

we saw how to build distrobuilder, then use it to create a LXD container image for Ubuntu. We used one of the existing configuration files for an Ubuntu container image.

In this post, we are going to see how to compose such YAML configuration files that describe what the container image will look like. The aim of this post is to deal with a minimal configuration file to create a container image for Alpine Linux. A future post will deal with a more complete configuration file.

Creating a minimal configuration for a container image

Here is the minimal configuration for an Alpine Linux container image. Note that we have omitted some parts that would make the container more useful (namespaces, etc). Containers from this container image will still work for our humble purposes.

image:
  description: My Alpine Linux
  distribution: minimalalpine
  release: 3.8.1

source:
  downloader: alpinelinux-http
  url: http://dl-cdn.alpinelinux.org/alpine/
  keys:
    - 0482D84022F52DF1C4E7CD43293ACD0907D9495A
  keyserver: keyserver.ubuntu.com

packages:
  manager: apk

Save this to a file with a filename such as myalpine.yaml, and then build the container image. It takes a couple of seconds to build. We will come back to the minimal configuration and explain it in detail in the next section.

$ sudo $HOME/go/bin/distrobuilder build-lxd myalpine.yaml
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
v3.8.1-27-g42946288bd [http://dl-cdn.alpinelinux.org/alpine/v3.8/main]
v3.8.1-23-ga2d8d72222 [http://dl-cdn.alpinelinux.org/alpine/v3.8/community]
OK: 9539 distinct packages available
Parallel mksquashfs: Using 4 processors
Creating 4.0 filesystem on /home/username/ContainerImages/minimal/rootfs.squashfs, block size 131072.
[==================================================|] 90/90 100%
Exportable Squashfs 4.0 filesystem, gzip compressed, data block size 131072
compressed data, compressed metadata, compressed fragments, compressed xattrs
duplicates are removed
Filesystem size 2093.68 Kbytes (2.04 Mbytes)
48.30% of uncompressed filesystem size (4334.32 Kbytes)
Inode table size 3010 bytes (2.94 Kbytes)
17.41% of uncompressed inode table size (17290 bytes)
Directory table size 4404 bytes (4.30 Kbytes)
54.01% of uncompressed directory table size (8154 bytes)
Number of duplicate files found 5
Number of inodes 481
Number of files 64
Number of fragments 5
Number of symbolic links 329
Number of device nodes 1
Number of fifo nodes 0
Number of socket nodes 0
Number of directories 87
Number of ids (unique uids + gids) 2
Number of uids 1
root (0)
Number of gids 2
root (0)
shadow (42)
$

And here is the container image. The size of the container image is about 2MB.

$ ls -l
total 2108
-rw-r--r-- 1 root root 364 Oct 10 20:30 lxd.tar.xz
-rw-rw-r-- 1 user user 287 Oct 10 20:30 myalpine.yaml
-rw-r--r-- 1 root root 2146304 Oct 10 20:30 rootfs.squashfs

Let’s import it into our LXD installation.

$ lxc image import --alias myminimal lxd.tar.xz rootfs.squashfs
Image imported with fingerprint: ee9208767e745bb980a074006fa462f6878e763539c439e6bfa34c029cfc318b

And now launch a container from this container image.

$ lxc launch myminimal mycontainer
Creating mycontainer
Starting mycontainer

Let’s see the container running. It’s running, but did not get an IP address. That’s part of the cost-cutting in the initial minimal configuration file.

$ lxc list mycontainer
+-------------+---------+------+------+
| NAME | STATE | IPV4 | IPV6 |
+-------------+---------+------+------+
| mycontainer | RUNNING | | |
+-------------+---------+------+------+

Let’s get a shell in the container and start doing things! First, set up the network configuration.

$ lxc exec mycontainer -- sh
~ # pwd
/root
~ # cat /etc/network/interfaces
cat: can't open '/etc/network/interfaces': No such file or directory
~ # echo "auto eth0" > /etc/network/interfaces
~ # echo "iface eth0 inet dhcp" >> /etc/network/interfaces

Then, get an IP address using DHCP.

~ # ifup eth0
udhcpc: started, v1.28.4
udhcpc: sending discover
udhcpc: sending discover
udhcpc: sending select for 10.50.250.150
udhcpc: lease of 10.50.250.150 obtained, lease time 3600

We got a lease, but for some reason the network was not configured. Both ifconfig and route showed no configuration. So, we complete the network configuration manually. And it works; we have access to the Internet!

~ # ifconfig eth0 10.50.250.150 up
~ # route add -net default gw 10.50.250.1
~ # ping -c 1 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=120 time=17.451 ms
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 17.451/17.451/17.451 ms
~ # exit
$

Let’s clear up and start studying the configuration file. We force-delete the container, and then delete the container image.

$ lxc delete --force mycontainer
$ lxc image delete myminimal
Understanding the configuration file of a container image

Here again is the configuration file for a minimal Alpine container image. It has three sections:

  1. image, with information about the image. We can put anything for the description and distribution name. The release version, though, should exist.
  2. source, which describes where to get the image (ISO or packages) of the distribution. The downloader is a plugin in distrobuilder that knows how to get the appropriate files, as long as it knows the URL and the release version. The url is the URL prefix of the location with the files. keys and keyserver are used to digitally verify the authenticity of the files.
  3. packages, which indicates the plugin that knows how to deal with the specific package manager of the distribution. In general, you can also indicate here which additional packages to install, which to remove and which to update.

image:
  description: My Alpine Linux
  distribution: minimalalpine
  release: 3.8.1

source:
  downloader: alpinelinux-http
  url: http://dl-cdn.alpinelinux.org/alpine/
  keys:
    - 0482D84022F52DF1C4E7CD43293ACD0907D9495A
  keyserver: keyserver.ubuntu.com

packages:
  manager: apk

The downloader and url go hand in hand. The URL is the prefix for the repository that the downloader will use to get the necessary files.

The keys are necessary to verify the authenticity of the files. The keyserver is used to download the actual public keys of the IDs that were specified in the keys. You could very well not specify a keyserver, and distrobuilder would request the keys from the root PGP servers. However, those servers are often overloaded and the attempt can easily fail. It has happened to me several times, which is why I now explicitly use the Ubuntu keyserver.
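
If you want to check that the key ID above is actually reachable from the chosen keyserver, you can fetch it manually with gpg; this is just a sanity check, not something distrobuilder requires:

gpg --keyserver keyserver.ubuntu.com --recv-keys 0482D84022F52DF1C4E7CD43293ACD0907D9495A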

Summary

We have seen how to use a minimal configuration file for an Alpine container image. In future posts, we are going to see how to create more complete configuration files.

Simos Xenitellis, https://blog.simos.info/

Benjamin Mako Hill: What we lose when we move from social to market exchange

Tue, 09/10/2018 - 7:02pm

Couchsurfing and Airbnb are websites that connect people with an extra guest room or couch with random strangers on the Internet who are looking for a place to stay. Although Couchsurfing predates Airbnb by about five years, the two sites are designed to help people do the same basic thing and they work in extremely similar ways. They differ, however, in one crucial respect. On Couchsurfing, the exchange of money in return for hosting is explicitly banned. In other words, couchsurfing only supports the social exchange of hospitality. On Airbnb, users must use money: the website is a market on which people can buy and sell hospitality.

Comparison of yearly sign-ups of trusted hosts on Couchsurfing and Airbnb. Hosts are “trusted” when they have any form of references or verification in Couchsurfing and at least one review in Airbnb.

The figure above compares the number of people with at least some trust or verification on both  Couchsurfing and Airbnb based on when each user signed up. The picture, as I have argued elsewhere, reflects a broader pattern that has occurred on the web over the last 15 years. Increasingly, social-based systems of production and exchange, many like Couchsurfing created during the first decade of the Internet boom, are being supplanted and eclipsed by similar market-based players like Airbnb.

In a paper led by Max Klein that was recently published and will be presented at the ACM Conference on Computer-supported Cooperative Work and Social Computing (CSCW) which will be held in Jersey City in early November 2018, we sought to provide a window into what this change means and what might be at stake. At the core of our research were a set of interviews we conducted with “dual-users” (i.e. users experienced on both Couchsurfing and Airbnb). Analyses of these interviews pointed to three major differences, which we explored quantitatively from public data on the two sites.

First, we found that users felt that hosting on Airbnb appears to require higher quality services than Couchsurfing. For example, we found that people who at some point only hosted on Couchsurfing often said that they did not host on Airbnb because they felt that their homes weren’t of sufficient quality. One participant explained that:

“I always wanted to host on Airbnb but I didn’t actually have a bedroom that I felt would be sufficient for guests who are paying for it.”

Another interviewee said:

“If I were to be paying for it, I’d expect a nice stay. This is why I never Airbnb-hosted before, because recently I couldn’t enable that [kind of hosting].”

We conducted a quantitative analysis of rates of Airbnb and Couchsurfing in different cities in the United States and found that median home prices are positively related to the number of per capita Airbnb hosts and negatively related to the number of Couchsurfing hosts. Our exploratory models predicted that for each $100,000 increase in median house price in a city, there will be about 43.4 more Airbnb hosts per 100,000 citizens, and 3.8 fewer hosts on Couchsurfing.

A second major theme we identified was that, while Couchsurfing emphasizes people, Airbnb places more emphasis on places. One of our participants explained:

“People who go on Airbnb, they are looking for a specific goal, a specific service, expecting the place is going to be clean […] the water isn’t leaking from the sink. I know people who do Couchsurfing even though they could definitely afford to use Airbnb every time they travel, because they want that human experience.”

In a follow-up quantitative analysis we conducted of the profile text from hosts on the two websites with a commonly-used system for text analysis called LIWC, we found that, compared to Couchsurfing, a lower proportion of words in Airbnb profiles were classified as being about people while a larger proportion of words were classified as being about places.

Finally, our research suggested that although hosts are the powerful parties in exchange on Couchsurfing, social power shifts from hosts to guests on Airbnb. Reflecting a much broader theme in our interviews, one of our participants expressed this concisely, saying:

“On Airbnb the host is trying to attract the guest, whereas on Couchsurfing, it works the other way round. It’s the guest that has to make an effort for the host to accept them.”

Previous research on Airbnb has shown that guests tend to give their hosts lower ratings than vice versa. Sociologists have suggested that this asymmetry in ratings will tend to reflect the direction of underlying social power balances.

Average sentiment score of reviews in Airbnb and Couchsurfing, separated by direction (guest-to-host, or host-to-guest). Error bars show the 95% confidence interval.

We both replicated this finding from previous work and found that, as suggested in our interviews, the relationship is reversed on Couchsurfing. As shown in the figure above, we found that Airbnb guests will typically give a less positive review to their host than vice versa, while on Couchsurfing guests will typically give a more positive review to their host.

As Internet-based hospitality shifts from social systems to the market, we hope that our paper can point to some of what is changing and some of what is lost. For example, our first result suggests that less wealthy participants may be cut out by market-based platforms. Our second theme suggests a shift toward less human-focused modes of interaction brought on by increased “marketization.” We see the third theme as providing somewhat of a silver lining, in that shifting power toward guests was seen by some of our participants as a positive change in terms of safety and trust. Guests and travelers in unfamiliar places are often vulnerable, and shifting power toward them can be helpful.

Although our study is only of Couchsurfing and Airbnb, we believe that the shift away from social exchange and toward markets has broad implications across the sharing economy. We end our paper by speculating a little about the generalizability of our results. I have recently spoken at much more length about the underlying dynamics driving the shift we describe in  my recent LibrePlanet keynote address.

More details are available in our paper which we have made available as a preprint on our website. The final version is behind a paywall in the ACM digital library.

This blog post, and paper that it describes, is a collaborative project by Maximilian Klein, Jinhao Zhao, Jiajun Ni, Isaac Johnson, Benjamin Mako Hill, and Haiyi Zhu. Versions of this blog post were posted on several of our personal and institutional websites. Support came from GroupLens Research at the University of Minnesota and the Department of Communication at the University of Washington.

Simos Xenitellis: Using distrobuilder to create container images for LXC and LXD

Tue, 09/10/2018 - 2:20pm

With LXC and LXD you can run system containers, which are containers that behave like a full operating system (like a Virtual Machine does). There are already official container images for most Linux distributions. When you run lxc launch ubuntu:18.04 mycontainer, you are using the ubuntu: repository of container images to launch a container with Ubuntu 18.04.

In this post, we are going to see

  1. an introduction to distrobuilder, the tool that creates container images
  2. how to recreate a container image
  3. how to customize a container image
Introduction to distrobuilder

The following are the command line options of distrobuilder. You can use distrobuilder to create container images for both LXC and LXD.

$ distrobuilder
System container image builder for LXC and LXD

Usage:
  distrobuilder [command]

Available Commands:
  build-dir   Build plain rootfs
  build-lxc   Build LXC image from scratch
  build-lxd   Build LXD image from scratch
  help        Help about any command
  pack-lxc    Create LXC image from existing rootfs
  pack-lxd    Create LXD image from existing rootfs

Flags:
      --cache-dir   Cache directory
      --cleanup     Clean up cache directory (default true)
  -h, --help        help for distrobuilder
  -o, --options     Override options (list of key=value)

Use "distrobuilder [command] --help" for more information about a command.

The build-dir command builds the root filesystem (rootfs) of the distribution and stops there. This option makes sense if we plan to make some custom manual changes to the rootfs. We would then need to use either pack-lxc or pack-lxd to package up the rootfs into a container image.
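
As a rough sketch of that workflow (the argument order follows distrobuilder's --help output; the paths are examples only, and the YAML file is the one we prepare later in this post):

sudo $HOME/go/bin/distrobuilder build-dir ubuntu.yaml $HOME/ContainerImages/ubuntu/rootfs
# ...make any custom manual changes inside the rootfs directory...
sudo $HOME/go/bin/distrobuilder pack-lxd ubuntu.yaml $HOME/ContainerImages/ubuntu/rootfs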

The build-lxc and build-lxd commands create container images for either LXC or LXD, both from scratch. They both require a YAML configuration file, and that is all they need to produce a container image.

Installation

Currently, there are no binary packages of distrobuilder. Therefore, you will need to compile it from source. To do so, first install the Go programming language, and some other dependencies. Here are the commands to do this.

sudo apt update
sudo apt install -y golang-go debootstrap rsync gpg squashfs-tools

Second, download the source code of the distrobuilder repository. The source will be placed in $HOME/go/src/github.com/lxc/distrobuilder/. Here is the command to do this.

go get -d -v github.com/lxc/distrobuilder

Third, enter the directory with the source code of distrobuilder and run make to compile the source code. This will generate the executable program distrobuilder, and it will be located at $HOME/go/bin/distrobuilder. Here are the commands to do this.

cd $HOME/go/src/github.com/lxc/distrobuilder
make
cd

Creating a container image

To create a container image, first create a directory where you will be placing the container images, and enter that directory.

mkdir -p $HOME/ContainerImages/ubuntu/
cd $HOME/ContainerImages/ubuntu/

Then, copy one of the example yaml configuration files for container images into this directory. In this example, we are creating an Ubuntu container image.

cp $HOME/go/src/github.com/lxc/distrobuilder/doc/examples/ubuntu ubuntu.yaml

Finally, run distrobuilder to create the container image. We are using the build-lxd option to create a container image for LXD. We need sudo because preparing the rootfs requires setting the ownership and permissions of files to IDs that a non-root account cannot use. Also note the way we invoke distrobuilder (as $HOME/go/bin/distrobuilder). It has to be an absolute path because under sudo the $PATH is different from that of our current non-root user account.

sudo $HOME/go/bin/distrobuilder build-lxd ubuntu.yaml

It takes about five minutes to build the Ubuntu container image. Be patient.

If the command is successful, you will get an output similar to the following. The lxd.tar.xz file is the description of the container image. The rootfs.squashfs file is the root filesystem (rootfs) of the container image. The set of these two files is the container image.

multipass@dazzling-termite:~/ContainerImages/ubuntu$ ls -l
total 121032
-rw-r--r-- 1 root      root            560 Oct 3 13:28 lxd.tar.xz
-rw-r--r-- 1 root      root      123928576 Oct 3 13:28 rootfs.squashfs
-rw-rw-r-- 1 multipass multipass      3317 Oct 3 13:19 ubuntu.yaml
multipass@dazzling-termite:~/ContainerImages/ubuntu$

Adding the container image to LXD

To add the container image to a LXD installation, use the lxc image import command as follows.

multipass@dazzling-termite:~/ContainerImages/ubuntu$ lxc image import lxd.tar.xz rootfs.squashfs --alias mycontainerimage
Image imported with fingerprint: ae81c04327b5b115383a4f90b969c97f5ef417e02d4210d40cbb17a038729a27

Let’s see the container image in LXD. The ubuntu.yaml had a setting to create an Ubuntu 17.10 (artful) image. The size is 118MB.

$ lxc image list mycontainerimage
+------------------+--------------+--------+---------------+--------+----------+------------------------------+
|      ALIAS       | FINGERPRINT  | PUBLIC |  DESCRIPTION  |  ARCH  |   SIZE   |         UPLOAD DATE          |
+------------------+--------------+--------+---------------+--------+----------+------------------------------+
| mycontainerimage | ae81c04327b5 | no     | Ubuntu artful | x86_64 | 118.19MB | Oct 3, 2018 at 12:09pm (UTC) |
+------------------+--------------+--------+---------------+--------+----------+------------------------------+

Launching a container from the container image

To launch a container from the freshly created container image, use lxc launch as follows. Note that you do not specify a repository of container images (like ubuntu: or images:) because the image is located locally.

$ lxc launch mycontainerimage c1
Creating c1
Starting c1

How to customize a container image

The ubuntu.yaml configuration file contains all the details that are required to create an Ubuntu container image. We can edit the file and make changes to the generated container image.

Changing the distribution release

The file that is currently included in the distrobuilder repository has the following section:

image:
  distribution: ubuntu
  release: artful
  description: Ubuntu {{ image.release }}
  architecture: amd64

We can change the release to either bionic (for Ubuntu 18.04) or cosmic (for Ubuntu 18.10), save, and finally build the container image again.
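
If you prefer a one-liner over opening an editor, a simple sed substitution does the same thing (this assumes the file still contains the stock "release: artful" line):

sed -i 's/release: artful/release: bionic/' ubuntu.yaml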

Troubleshooting

Error "gpg: no valid OpenPGP data found"

$ sudo $HOME/go/bin/distrobuilder build-lxd ubuntu.yaml
Error: Error while downloading source: Failed to create keyring:
gpg: keyring `/tmp/distrobuilder.920564219/secring.gpg' created
gpg: keyring `/tmp/distrobuilder.920564219/pubring.gpg' created
gpg: requesting key C0B21F32 from hkp server pgp.mit.edu
gpgkeys: key 790BC7277767219C42C86F933B4FE6ACC0B21F32 can't be retrieved
gpg: no valid OpenPGP data found.
gpg: Total number processed: 0
gpg: keyserver communications error: keyserver helper general error
gpg: keyserver communications error: unknown pubkey algorithm
gpg: keyserver receive failed: unknown pubkey algorithm

The keyserver pgp.mit.edu is often under load and does not respond. You can edit the YAML configuration file and replace pgp.mit.edu with keyserver.ubuntu.com.
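
The same kind of substitution works here; this assumes pgp.mit.edu only appears in the keyserver field of the configuration file:

sed -i 's/pgp.mit.edu/keyserver.ubuntu.com/' ubuntu.yaml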

Error “gpg: keyserver timed out”
$ sudo $HOME/go/bin/distrobuilder build-lxd ubuntu.yaml
Error: Error while downloading source: Failed to create keyring:
gpg: keyring `/tmp/distrobuilder.854636592/secring.gpg' created
gpg: keyring `/tmp/distrobuilder.854636592/pubring.gpg' created
gpg: requesting key C0B21F32 from hkp server pgp.mit.edu
gpg: keyserver timed out
gpg: keyserver receive failed: keyserver error

The keyserver pgp.mit.edu is often under load and does not respond. You can edit the YAML configuration file and replace pgp.mit.edu with keyserver.ubuntu.com.

Simos Xenitellis, https://blog.simos.info/

Harald Sitter: KDiff3 master as git mergetool? Yes, please!

Tue, 09/10/2018 - 1:00pm

I like using kdiff3, I also like using git, I also like using bundles for applications. Let’s put the three together!

Set up the KDE git flatpak repo and install kdiff3

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak remote-add --if-not-exists kdeapps --from https://distribute.kde.org/kdeapps.flatpakrepo
flatpak install kdeapps org.kde.kdiff3

Write a tiny shim around this so we can use it from git. Put it in /usr/bin/kdiff3 or $HOME/bin/kdiff3 if $PATH is set up to include bins from $HOME.

#!/bin/sh
exec flatpak run org.kde.kdiff3 "$@"

Don’t forget to chmod +x kdiff3 it!

git mergetool should now pick up our kdiff3 wrapper automatically. So all that’s left to do is to have a merge conflict, and off we go with git mergetool.
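
If git does not pick it up on its own, you can point it at the wrapper explicitly; this is plain git configuration, nothing kdiff3-specific (the path below assumes the $HOME/bin variant):

git config --global merge.tool kdiff3
git config --global mergetool.kdiff3.path "$HOME/bin/kdiff3"
git mergetool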

Nathan Haines: Announcing the Ubuntu 18.10 Free Culture Showcase winners

Fri, 28/09/2018 - 9:00am

October approaches, and Ubuntu marches steadily along the road from one LTS to another. Ubuntu 18.10 is another step in Ubuntu’s future. And now it’s time to unveil a small part of that change: the community wallpapers to be included in Ubuntu 18.10!

Every cycle, talented artists around the world create media and release it under licenses that encourage sharing and adaptation. This cycle we had some amazing images submitted to the Ubuntu 18.10 Free Culture Showcase photo pool on Flickr, where all eligible submissions can be found. The competition was fierce; narrowing down the options to the final selections was painful!

But there can be only 12, and the final images that will be included in Ubuntu 18.10 are:

A big congratulations to the winners, and thanks to everyone who submitted a wallpaper. You can find these wallpapers (along with dozens of other stunning wallpapers) today at the links above, or in your desktop wallpaper list after you upgrade to or install Ubuntu 18.10 on October 18th.

Ubuntu Studio: Ubuntu Studio 18.10 (Cosmic Cuttlefish) Beta released

Fri, 28/09/2018 - 7:09am
The Ubuntu Studio team is pleased to announce the final beta release of Ubuntu Studio 18.10 Cosmic Cuttlefish. While this beta is reasonably free of any showstopper CD build or installer bugs, you may find some bugs within. This image is, however, reasonably representative of what you will find when Ubuntu Studio 18.10 is released […]

Ubuntu MATE: Ubuntu MATE 18.10 Beta

Fri, 28/09/2018 - 1:30am

Ubuntu MATE 18.10 is a modest, yet strategic, upgrade over our 18.04 release. If you want bug fixes and improved hardware support then 18.10 is for you. For those who prefer staying on the LTS then everything in this 18.10 release is also important for the upcoming 18.04.2 release. Read on to learn more...

We are preparing Ubuntu MATE 18.10 (Cosmic Cuttlefish) for distribution on October 18th, 2018. With this Beta pre-release, you can see what we are trying out in preparation for our next (stable) version.


Superposition on the Intel Core i7-8809G Radeon RX Vega M powered Hades Canyon NUC

What works?

People tell us that Ubuntu MATE is stable. You may, or may not, agree.

Ubuntu MATE Beta Releases are NOT recommended for:

  • Regular users who are not aware of pre-release issues
  • Anyone who needs a stable system
  • Anyone uncomfortable running a possibly frequently broken system
  • Anyone in a production environment with data or workflows that need to be reliable

Ubuntu MATE Beta Releases are recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Ubuntu MATE, MATE, and GTK+ developers
What changed since the Ubuntu MATE 18.04 final release?

Curiously, the work during this Ubuntu MATE 18.10 release has really been focused on what will become Ubuntu MATE 18.04.2. Let me explain.

MATE Desktop

The upstream MATE Desktop team have been working on many bug fixes for MATE Desktop 1.20.x, which has resulted in a lot of maintenance updates in the upstream releases of MATE Desktop. The Debian packaging team for MATE Desktop, of which I am a member, has been updating all the MATE packages to track these upstream bug fixes and new releases. Just about all MATE Desktop packages and associated components, such as AppMenu and MATE Dock Applet, have been updated. Now that all these fixes exist in the 18.10 release, we will start the process of SRU'ing (backporting) them to 18.04 so that they will feature in the Ubuntu MATE 18.04.2 release due in February 2019. The fixes should start landing in Ubuntu MATE 18.04 very soon, well before the February deadline.

Hardware Enablement

Ubuntu MATE 18.04.2 will include a hardware enablement stack (HWE) based on what is shipped in Ubuntu 18.10. Ubuntu users are increasingly adopting the current generation of AMD RX Vega GPUs, both discrete and integrated solutions such as the Intel Core i7-8809G Radeon RX Vega M found in the Hades Canyon NUC and some laptops. I have been lobbying people within the Ubuntu project to upgrade to newer versions of the Linux kernel, firmware, Mesa and Vulkan that offer the best possible "out of box" support for AMD GPUs. Consequently, Ubuntu 18.10 (of any flavour) is great for owners of AMD graphics solutions and these improvements will soon be available in Ubuntu 18.04.2 too.

Download Ubuntu MATE 18.10 Beta

We've even redesigned the download page so it's even easier to get started.

Download

Known Issues

Here are the known issues.

Ubuntu MATE
  • The Software Boutique doesn't list any available software.
    • An update, due very soon, will re-stock the software library and add a few new applications too.
Ubuntu family issues

This is our known list of bugs that affect all flavours.

You'll also want to check the Ubuntu MATE bug tracker to see what has already been reported. These issues will be addressed in due course.

Feedback

Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

Benjamin Mako Hill: Shannon’s Ghost

Wed, 26/09/2018 - 4:34am

I’m spending the 2018-2019 academic year as a fellow at the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford.

Claude Shannon on a bicycle.

Every CASBS study is labeled with a list of  “ghosts” who previously occupied the study. This year, I’m spending the year in Study 50 where I’m haunted by an incredible cast that includes many people whose scholarship has influenced and inspired me.

The top part of the list of ghosts in Study #50 at CASBS.

Foremost among this group is Study 50’s third occupant: Claude Shannon

At 21 years old, Shannon proved in his master’s thesis (sometimes cited as the most important master’s thesis in history) that electrical circuits could encode any relationship expressible in Boolean logic, opening the door to digital computing. Incredibly, this is almost never cited as Shannon’s most important contribution. That came in 1948 when he published a paper titled A Mathematical Theory of Communication which effectively created the field of information theory. Less than a decade after its publication, Aleksandr Khinchin (the mathematician behind my favorite mathematical constant) described the paper saying:

Rarely does it happen in mathematics that a new discipline achieves the character of a mature and developed scientific theory in the first investigation devoted to it…So it was with information theory after the work of Shannon.

As someone whose own research is seeking to advance computation and mathematical study of communication, I find it incredibly propitious to be sharing a study with Shannon.

Although I teach in a communication department, I know Shannon from my background in computing. I’ve always found it curious that, despite the fact that Shannon’s 1948 paper is almost certainly the most important single thing ever published with the word “communication” in its title, Shannon is rarely taught in communication curricula and is sometimes completely unknown to communication scholars.

In this regard, I’ve thought a lot about this passage in Robert Craig’s influential article “Communication Theory as a Field” which argued:

In establishing itself under the banner of communication, the discipline staked an academic claim to the entire field of communication theory and research—a very big claim indeed, since communication had already been widely studied and theorized. Peters writes that communication research became “an intellectual Taiwan-claiming to be all of China when, in fact, it was isolated on a small island” (p. 545). Perhaps the most egregious case involved Shannon’s mathematical theory of information (Shannon & Weaver, 1948), which communication scholars touted as evidence of their field’s potential scientific status even though they had nothing whatever to do with creating it, often poorly understood it, and seldom found any real use for it in their research.

In preparation for moving into Study 50, I read a new biography of Shannon by Jimmy Soni and Rob Goodman and was excited to find that Craig—although accurately describing many communication scholars’ lack of familiarity—almost certainly understated the importance of Shannon to communication scholarship.

For example, the book form of Shannon’s 1948 article was published by the University of Illinois at the urging of, and under the editorial supervision of, Wilbur Schramm (one of the founders of modern mass communication scholarship), who was a major proponent of Shannon’s work. Everett Rogers (another giant in communication) devotes a chapter of his “History of Communication Studies”² to Shannon and to tracing his impact in communication. Both Schramm and Rogers built on Shannon in parts of their own work. Shannon has had an enormous impact, it turns out, in several subareas of communication research (e.g., attempts to model communication processes).

Although I find these connections exciting, my own research—like most of the rest of communication—is far from the substance of technical communication processes at the center of Shannon’s own work. In this sense, it can be a challenge to explain to my colleagues in communication—and to my fellow CASBS fellows—why I’m so excited to be sharing a space with Shannon this year.

Upon reflection, I think it boils down to two reasons:

  1. Shannon’s work is both mathematically beautiful and incredibly useful. His seminal 1948 article points to concrete ways that his theory can be useful in communication engineering including in compression, error correcting codes, and cryptography. Shannon’s focus on research that pushes forward the most basic type of basic research while remaining dedicated to developing solutions to real problems is a rare trait that I want to feature in my own scholarship.
  2. Shannon was incredibly playful. Shannon played games, juggled constantly, and was always seeking to teach others to do so. He tinkered, rode unicycles, built a flame-throwing trumpet, and so on. With Marvin Minsky, he invented the “ultimate machine”—a machine whose only function is to turn itself off—which he kept on his desk.

    A version of Shannon’s “ultimate machine” that is sitting on my desk at CASBS.

I have no misapprehension that I will accomplish anything like Shannon’s greatest intellectual achievements during my year at CASBS. I do hope to be inspired by Shannon’s creativity, focus on impact, and playfulness. In my own little ways, I hope to build something at CASBS that will advance mathematical and computational theory in communication in ways that Shannon might have appreciated.

  1. Incredibly, the year that Shannon was in Study 50, his neighbor in Study 51 was Milton Friedman. Two thoughts: (i) Can you imagine?! (ii) I definitely chose the right study!
  2. Rogers’ book was written, I found out, during his own stint at CASBS. Alas, it was not written in Study 50.

Stephen Michael Kellat: Work Items To Remember

Wed, 26/09/2018 - 4:30am

Sometimes I truly cannot remember everything. There have been many, many things going on as of late. Being on medical leave has not been helpful, either.

As we look to the last quarter of 2018, there are some matters I need to remind myself about keeping in the work plan:

  1. Finish the write-up on the research for Outernet/Othernet.

  2. Begin looking at what I need to do to set up a FidoNet node. I haven’t been involved in FidoNet since high school during President Bill Clinton’s second term in office.

  3. Consider the possibility that the folks of DarkNetPlan failed. After looking at this post I honestly need to look at finding a micrographics artist that I can set up a working relationship with. Passing digital data via microfilm sounds old-fashioned but seems more durable these days.

  4. Construct a proper permanent HF antenna for operating. I am a ham radio operator with General class privileges in the United States that remain barely used even though I am only a few years away from joining the Quarter Century Wireless Association.

  5. Figure out what I’m doing wrong setting up multiple HDHomeRun receivers to be tapped by a PVR-styled computer.

  6. Pick up 18 graduate semester hours so I can teach as an adjunct somewhere. This would generally have to happen in a graduate certificate program in the US or at the halfway mark in a master’s degree program.

With my day job being constantly in flux, I am sure I’ve missed something in the listing above.

Riccardo Padovani: Responsible disclosure: retrieving a user's private Facebook friends.

Sun, 23/09/2018 - 11:00am

Data access control isn’t easy. While it can sound quite simple (just give access to the authorized entities), it is very difficult, both on a theoretical side (who is an authorized entity? What does authorized mean? And how do we identify an entity?) and on a practical side.

On the practical side, as we will see, disclosure of private data is often an unwanted side effect of a useful feature.

Facebook and Instagram

Facebook bought Instagram back in 2012. Since then, a lot of integrations have been implemented between them: among others, when you sign up to Instagram, it will suggest who to follow based on your Facebook friends.

Your Instagram and Facebook accounts are then somehow linked: this happens if you sign up to Instagram using your Facebook account (doh!), but also if you create a new Instagram account using the same email you use for your Facebook account (there are other ways Instagram links your new account with an existing Facebook account, but they are not of interest here).

So if you want to create a secret Instagram account, create a new mail for it ;-)

Back on topic: Instagram used to enable all its features for new users before they had confirmed their email address. This was done so as not to “interrupt” usage of the website/app; users would have time to confirm the email later on.

Email address confirmation is useful to confirm you are signing up using your own email address, and not someone else’s.

Data leak

One of the features available before confirming the email address was the suggestion of who to follow, based on the Facebook friends of the account Instagram had automatically linked.

This made it super easy to retrieve the Facebook friend list of anyone who doesn’t have an Instagram account, and since there are more than 2 billion Facebook accounts but just 800 million Instagram accounts, it means that at least 1.5 billion accounts were vulnerable.

The method was simple: knowing the email address of the target (and an email address is anything but secret), the attacker just had to sign up to Instagram with that email and then go to the suggestions of people to follow to see the victim’s friends.

Conclusion

The combination of two useful features (suggestion of people to follow based on a linked Facebook account, being able to use the new Instagram account immediately) made this data leak possible.

It didn’t matter whether the attacker was Facebook friends with the victim, or what the privacy settings of the victim’s Facebook account were. Heck, the attacker didn’t need a Facebook account at all!

Timeline
  • 20 August 2018: first disclosure to Facebook
  • 20 August 2018: request of other information from Facebook
  • 20 August 2018: more information provided to Facebook
  • 21 August 2018: Facebook closed the issue, saying it wasn’t a security issue
  • 21 August 2018: I submitted a new demo with more information
  • 23 August 2018: Facebook confirmed the issue
  • 30 August 2018: Facebook deployed a fix and asked for a test
  • 12 September 2018: Facebook awarded me a bounty
Bounty

Facebook awarded me a $3000 bounty for the disclosure. This was the first time I have been awarded for a security disclosure by Facebook; I am quite happy with the result, and I applaud Facebook for making the whole process really straightforward.

For any comment, feedback, critic, write me on Twitter (@rpadovani93) or drop an email at riccardo@rpadovani.com.

Regards, R.

Stephen Michael Kellat: And Another Thing

Sun, 23/09/2018 - 4:53am

My Zotero database has some unfortunate comparisons and contrasts in it. For example:

Crowe, J. (2018, September 21). Google Employees Considered Changing Search Algorithm to Fight Travel Ban. Retrieved September 22, 2018, from https://www.nationalreview.com/news/google-employees-considered-changing-search-algorithm-to-fight-travel-ban/

Not the happiest of news that, apparently, President Donald John Trump isn't totally unjustified in his paranoia. The black box that is search at Google can potentially be tampered with. Without any understanding of what goes on inside Google's "black box" system, there isn't really much to assuage President Trump's fears.

That this sort of a possibility could come up in 2018 should not be surprising. After all, here are some further citations from my Zotero database:

Kellat, S. M. (2006). Intellectual Terrorism and the Church: The Case of the Google Bomb. Conference paper. Retrieved from http://eprints.rclis.org/10147/

Kellat, S. M. (2007). Print-Based Culture Meets An “Amazoogle” World: New Challenges To A Priesthood of Readers. Conference paper. Retrieved from http://eprints.rclis.org/10146/

I suppose I merely wrote about the matter initially in terms of malicious external actors twelve years ago. The idea of internal malicious actors came up eleven years ago in my writing. After that I began following the various color uprisings and the like but forgot to keep writing. I used to be a working academic but for some reason detoured into being a tax collector these days after spending time as a podcaster.

There seems to be low-hanging fruit to pursue again in research about this digital life.

Alberto Milone: NVIDIA PRIME in Ubuntu 18.04 and 18.10, and a call for testing

Thu, 20/09/2018 - 11:19am

Ubuntu 18.04 marked the transition to a new, more granular, packaging of the NVIDIA drivers, which, unfortunately, combined with a change in logind, and with the previous migration from Lightdm to Gdm3, caused (Intel+NVIDIA) hybrid laptops to stop working the way they used to in Ubuntu 16.xx and older.

The following are the main issues experienced by our users:

  • An increase in power consumption when using the power saving profile (i.e. when the discrete GPU is off).
  • The inability to switch between power profiles on log out (thus requiring a reboot); the prime-select commands involved are sketched right after this list.
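
For reference, switching profiles on these systems is done with the prime-select tool from the nvidia-prime package; a quick sketch (output omitted):

prime-select query           # show the current profile
sudo prime-select intel      # power saving profile (discrete GPU off)
sudo prime-select nvidia     # performance profile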

We have backported a commit to solve the problem with logind, and I have worked on a few changes in gpu-manager, and in the other key components, to improve the experience when using Gdm3.

NOTE: fixes for Lightdm, and for SDDM still need some work, and will be made available in the next update.

Both issues should be fixed in Ubuntu 18.10, and I have backported my work to Ubuntu 18.04, which is now available for testing.

If you run Ubuntu 18.04 and own a hybrid laptop with an Intel and an NVIDIA GPU (supported by the 390 NVIDIA driver), we would love to get your feedback on the updates in Ubuntu 18.04.

If you are interested, head over to the bug report, follow the instructions at the end of the bug description, and let us know about your experience.

Jono Bacon: Linus, His Apology, And Why We Should Support Him

Mon, 17/09/2018 - 12:12am

Today, Linus Torvalds, the creator of Linux, which powers everything from smartwatches to electrical grids, posted a pretty remarkable note on the kernel mailing list.

As a little bit of backstory, Linus has sometimes come under fire for the ways in which he has expressed feedback, provided criticism, and reacted to various scenarios on the kernel mailing list. This criticism has been fair in many cases: he has been overly aggressive at times, and while the kernel maintainers are a tight-knit group, the optics (not just what it looks like, but what is actually happening), particularly for those new to kernel development, have often been pretty bad.

Like many conflict scenarios, this feedback has been communicated back to him in both constructive and non-constructive ways. Historically he has been seemingly reluctant to really internalize this feedback, I suspect partially because (a) the Linux kernel is a very successful project, and (b) some of the critics have at times gone nuclear at him (which often doesn’t work as a strategy towards defensive people). Well, things changed today.

In his post today he shared some self-reflection on this feedback:

This week people in our community confronted me about my lifetime of not understanding emotions. My flippant attacks in emails have been both unprofessional and uncalled for. Especially at times when I made it personal. In my quest for a better patch, this made sense to me. I know now this was not OK and I am truly sorry.

He went on to not just share an admission that this has been a problem, but to also share a very personal acceptance that he struggles to understand and engage with people’s emotions:

The above is basically a long-winded way to get to the somewhat painful personal admission that hey, I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely. I am going to take time off and get some assistance on how to understand people’s emotions and respond appropriately.

His post is sure to light up the open source, Linux, and tech world for the next few weeks. For some it will be celebrated as a step in the right direction. For some it will be too little too late, and their animus will remain. For some they will be cautiously supportive, but defer judgement until they have seen his future behavior demonstrate substantive changes.

My Take

I wouldn’t say I know Linus very closely; we have a casual relationship. I see him at conferences from time to time, and we often bump into each other and catch up. I interviewed him for my book and for the Global Learning XPRIZE. From my experience he is a funny, genuine, friendly guy. Interestingly, and not unusually at all for open source, his online persona is rather different from his in-person persona. I won’t deny that the dust-ups I would see on LKML did not reflect the Linus I know. I chalked it up to a mixture of his struggles with social skills, dogmatic pragmatism, and ego.

His post today is a pretty remarkable change of posture for him, and I encourage us, as a community, to support him in making these changes.

Accepting these personal challenges is tough, particularly for someone in his position. Linux is a global phenomenon. It has resulted in billions of dollars of technology creation, powering thousands of companies, and changing the norms around how software is consumed and created. It is easy to forget that Linux was started by a quiet Finnish kid in his university dorm room. It is important to remember that just because Linux has scaled elegantly, it doesn’t mean that Linus has been able to. He isn’t a codebase, he is a human being, and bugs are harder to spot and fix in humans. You can’t just deploy a fix immediately. It takes time to identify the problem and foster and grow a change. The starting point for this is to support people in that desire for change, not re-litigate the ills of the past: that will get us nowhere quickly.

I am also mindful of ego. None of us like to admit we have an ego, but we all do. You don’t get to build one of the most fundamental technologies of the last thirty years and not have an ego. He built it…they came…and a revolution was energized because of what he created. While Linus’s ego is more subtle, and certainly does not extend to faddish self-promotion, overly expensive suits, and forays into Hollywood (quite the opposite), it has naturally resulted in abrupt and fixed opinions on how his project should run. This sometimes results in him plugging his fingers in his ears to particularly challenging viewpoints from others (he is not the only person guilty of this; many people in similar positions do too). His post today is a clear example of him putting Linux as a project ahead of his own personal ego.

This is important for a few reasons. Firstly, being in such a public position and accepting your personal flaws isn’t a problem many people face, and isn’t a situation many people handle well. I work with a lot of CEOs, and they often say it is the loneliest job on the planet. I have heard American presidents say the same in interviews. This is because they are at the top of the tree, with all the responsibility and expectations on their shoulders. Put yourself in Linus’s position: his little project has blown up into a global phenomenon, and he didn’t necessarily have the social tools to be able to handle this change. Ego forces these internal struggles under the surface, pushing them down rather than confronting them. So, to accept them as publicly and openly as he did today is a very firm step in the right direction. Now, the true test will be results, but we all need to provide the breathing space for him to accomplish them.

So, I would encourage everyone to give Linus a shot. This doesn’t mean the frustrations of the past are erased, but he has acknowledged and apologized for these mistakes as a first step. He has accepted that he struggles to understand others’ emotions, and has expressed a desire to improve, for the betterment of the project and himself. He is a human, and the best tonic for humans resolving their own internal struggles is the support and encouragement of other humans. This is not unique to Linus; it applies to anyone who faces similar struggles.

All the best, Linus.

The post Linus, His Apology, And Why We Should Support Him appeared first on Jono Bacon.

David Tomaschik: Course Review: Software Defined Radio with HackRF

Pre, 14/09/2018 - 9:00pd

Over the past two days, I had the opportunity to attend Michael Ossmann’s course “Software Defined Radio with HackRF” at Toorcon XX. This is a course I’ve wanted to take for several years, and I’m extremely happy that I finally had the chance. I wanted to write up a short review for others considering taking the course.

Course Material

The material in the course focuses predominantly on the basics of Software Defined Radio and Digital Signal Processing. This includes the math necessary to understand how the DSP handles the signal. The math is presented in a practical, rather than academic, way. It’s not a math class, but a review of the necessary basics, mostly of complex mathematics and a bit of trigonometry. (My high school teachers are now vindicated. I did use that math again.) You don’t need the math background coming in, but you do need to be prepared to think about math during the class. Extracting meaningful information from the ether is, it turns out, an exercise in mathematics.

There is a lot of discussion of frequencies, frequency mixers, and how frequency, amplitude, and phase are related. Also, despite more than 20 years as an amateur radio operator, I finally understand dB properly. It’s possible to reason about dB values well enough without doing any logarithms, using a few rules of thumb (see the quick sketch after the list):

  • +3 dB = x2
  • +10 dB = x10
  • -3 dB = 1/2
  • -10 dB = 1/10
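
If you want to check the arithmetic behind those rules of thumb, here is a minimal Python sketch (my own, not course material) that converts between dB and power ratios:

import math

def db_to_ratio(db):
    """Convert a dB value to a power ratio."""
    return 10 ** (db / 10)

def ratio_to_db(ratio):
    """Convert a power ratio to a dB value."""
    return 10 * math.log10(ratio)

print(db_to_ratio(3))    # ~1.995 -- close enough to x2
print(db_to_ratio(10))   # 10.0   -- exactly x10
print(db_to_ratio(-3))   # ~0.501 -- about 1/2
print(db_to_ratio(-10))  # 0.1    -- exactly 1/10
print(ratio_to_db(2))    # ~3.01 dB

# Because dB values add while ratios multiply, +13 dB is roughly x20:
print(db_to_ratio(13))   # ~19.95

The handy part is the last line: once you remember that adding dB means multiplying ratios, you can combine the rules of thumb in your head instead of reaching for a calculator.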

In terms of DSP, he demonstrated extracting signals of interest, clock recovery, and other techniques necessary for understanding digital signals. It really just scratches the surface, but is enough to get a basic signal understood.
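
As a rough illustration of what “extracting a signal of interest” means at the sample level, here is a small NumPy sketch of the classic mix-to-baseband step. This is my own example, not code from the course: the sample rate, frequency offset, and noise level are made-up numbers, and a real GNU Radio flowgraph would use a proper low-pass filter rather than the crude block averaging shown here.

import numpy as np

samp_rate = 1_000_000          # 1 Msps -- an arbitrary, assumed sample rate
offset = 250_000               # pretend the signal of interest sits 250 kHz from center
t = np.arange(100_000) / samp_rate

# Fake complex (I/Q) samples: a tone at +250 kHz plus a little noise.
iq = 0.5 * np.exp(2j * np.pi * offset * t) + 0.05 * (
    np.random.randn(t.size) + 1j * np.random.randn(t.size))

# Mix (multiply) with a complex exponential at -250 kHz to shift the tone to 0 Hz.
baseband = iq * np.exp(-2j * np.pi * offset * t)

# Crude low-pass + decimate by averaging blocks of 100 samples;
# a real receiver chain would use a proper filter block instead.
decimated = baseband.reshape(-1, 100).mean(axis=1)
print(abs(decimated[:5]))      # values near 0.5, the amplitude of the recovered tone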

From a security point of view, there was only a single system that we “attacked” in the class. I was hoping for a little bit more of this, but given the detail of the other content, I am not disappointed.

Mike pointed out that the course primarily focuses on getting signals from the air into a digital series of 0 and 1 bits, and then leaves the remainder to tools like Python for adding meaning and interpretation to the bits. While I understand this (and, admittedly, at that point it’s similar to decoding an unknown network protocol), I would still have liked to go into more detail.
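
To make that hand-off concrete, here is a tiny, hypothetical Python example (mine, not the course’s) of the kind of interpretation step that happens after demodulation, once you are holding a string of recovered bits:

# A made-up bitstream, as it might come out of the demodulation/clock-recovery stage.
bits = "0100100001101001"

# Group the bits into bytes and interpret them as ASCII.
decoded = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
print(decoded)  # b'Hi'

In practice the framing, bit order, and encoding are rarely this tidy, which is exactly the unknown-protocol-style analysis the paragraph above refers to.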

Course Style

At the very beginning of the course, Mike makes it clear that no two classes he teaches are exactly the same. He adapts the course to the experience and background of each class, and that was very evident from our small group this week. With such a small class, it became more like a guided conversation than a formal class.

Overall, the course was very interactive, with lots of student questions, as well as “Socratic Method” questions from the instructor. This was punctuated with a number of hands-on exercises. One of the best parts of the hands-on exercises is that Mike provides a flash drive with a preconfigured Ubuntu Linux installation containing all the tools that are needed for the course. This allows students to boot into a working environment, rather than having to play around with tool installation or virtual machine settings. (We were, in fact, warned that VMs often do not play well with SDR, because the USB forwarding has overhead resulting in lost samples.)

Mike made heavy use of the poster pad in the room, diagramming waveforms and information about the processes involved in the SDR architecture and the DSP done in the computer. This works well because he customizes the diagrams to explain each part and answer student questions. It also feels much more engaging than just pointing at slides. In fact, the only thing displayed on the projector is Mike’s live screen from his laptop, displaying things like the work he’s doing in GNURadio Companion and other pieces of software.

If you have devices you’re interested in studying, you should bring them along with you. If time permits, Mike tries to work these devices into the analysis during the course.

Opinions & Conclusion

This was a great class that I really enjoyed. However, I really wish there had been more emphasis on how you decode and interpret unknown signals, such as a discussion of common packet types over RF, or of tools for signal analysis that could be built in either Python or GNURadio. Perhaps he (or someone) could offer an advanced class that focuses on the signal analysis, interpretation, and “spoofing” portions of the problem of attacking RF-based systems.

If you’re interested in doing assessments of physical devices, or are into radio at all, I highly recommend this course. Mike obviously really knows the material, and getting a HackRF One is a pretty nice bonus. Watching the videos on his website will help you prepare for the math, but will also result in a good portion of the content being duplicated in the course. I’m not disappointed that I did that, and I still feel that I more than made good use of the time in the course, but it is something to be aware of.
