
Feed aggregator

Ubuntu Podcast from the UK LoCo: S11E07 – Seven Years in Tibet - Ubuntu Podcast

Planet Ubuntu - Thu, 19/04/2018 - 4:00 PM

This week we meet a sloth and buy components for the Hades Canyon NUC. The Windows File Manager gets open sourced, Iran are going to block Telegram, PostmarketOS explore creating an open source baseband, Microsoft make a custom Linux distro called Azure Sphere and we round up the community news.

It’s Season 11 Episode 07 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Daniel Holbach: A month with Dell XPS 13 (9370)

Planet Ubuntu - Thu, 19/04/2018 - 8:55 AM

After years of using Thinkpads, I went for a Dell XPS 13 with Ubuntu. Although I had bought devices with Linux pre-installed before, and laptops for friends as well, this was going to be the first laptop of my own to come with Ubuntu straight from the factory.

 

The hardware

The specs looked great (big SSD disk, enough memory to play around with VMs/containers, etc.), but I had to brush away some fond memories of old laptops, where I was able to easily replace parts (memory, screen, disk, power jack, keyboard and more for my x220). With the XPS this was not easily going to be possible anymore.

Anyway, the new machine arrived in the office. It looked great, it was light, it was really well-built, and whatever task I threw at it, it dealt with it nicely. In general I really liked the hardware and how the machine felt. I knew I was going to be happy with this.

A few things bothered me somewhat though. The placement of the webcam simply does not make sense. It’s at the bottom of the screen, so you get an upwards angle no matter what you do, and people in calls with you will always see a close-up of your fingers typing. Small face, huge fingers. It’s really awkward. I won’t go unmanicured into meetings anymore!

The software

It came with an old image of Ubuntu 16.04 LTS pre-installed and after pulling a lot of updates, I thought I was going to get a nice fresh start with everything just working out of the box. Not quite.

The super key was disabled. As 16.04 came with Unity, the super key is one of the key ingredients for starting apps or bringing up the dash. There was a package called super-key-dell (or some such) installed which I had to find and remove, and some GNOME config I had to change to make it work again. Why oh why?

Hardware support. I thought this was going to be straightforward. Unfortunately it wasn’t. During the purchase Dell recommended I get a DA300, a USB-C mobility adapter. That looked like a great suggestion, ensuring I could still use all my Old World devices. Unfortunately, its Ethernet port just didn’t work with 16.04.

The laptop’s own screen flickered in many circumstances, and external screens (even some Dell devices) flickered even more; sometimes they went on and off.

I got a case with a USB-C adapter for the SSD of my laptop and copied some data over, only to find that some disk I/O load nearly brought the system to a grinding halt.

Palm detection of the touchpad was throwing me off again and again. I can’t count how many times I messed up documents or typed text in the wrong places. This was simply infuriating.

Enter Ubuntu 18.04 LTS

I took the plunge, wiped the disk and did a fresh install of Bionic, and I’m not looking back. Palm detection is LOADS better, disk I/O is better, the screen flickering is gone, and the Ethernet port over USB-C works. And I’m using a recent Ubuntu, which is just great! Nice work, everyone involved at Ubuntu!

I hope Dell will reconsider shipping this new release to users with recent machines (and as an update) – the experience is dramatically different.

I’m really happy with this machine now. Got to go, I have a manicure appointment…

Jeremy Bicha: gksu removed from Ubuntu

Planet Ubuntu - Thu, 19/04/2018 - 2:49 AM

Today, gksu was removed from Ubuntu 18.04, four weeks after it was removed from Debian.

Thomas Ward: NGINX Updates: Ubuntu Bionic, and Mainline and Stable PPAs

Planet Ubuntu - Wed, 18/04/2018 - 8:58 PM

NGINX has been updated in multiple places.

Ubuntu Bionic 18.04

Ubuntu Bionic 18.04 now has NGINX 1.14.0 in the repositories and, once 18.04 is released, will very likely keep 1.14.0 for the entire lifecycle of the release, from April 2018 through April 2023.

NGINX PPAs: Mainline and Stable

There are two major things to note:

First: Ubuntu Trusty 14.04 is no longer supported in the PPAs, and will not receive the updated NGINX versions. This is because the libraries in the 14.04 release are too old to compile the third-party modules included from the Debian packages. Individuals using 14.04 who want newer releases should strongly consider using the nginx.org repositories instead, as those packages don’t need the newer libraries that the PPA versions do.

Second: with the exception of Ubuntu Trusty 14.04, the NGINX PPAs are in the process of being updated with NGINX Stable 1.14.0 and NGINX Mainline 1.13.12. Please note that 1.14.0 is feature-equivalent to 1.13.12, and you should probably use NGINX 1.14.0 instead of 1.13.12 for now. NGINX Mainline will be updated to 1.15.x when NGINX has a ‘new’ Mainline release that is ahead of NGINX Stable.

Didier Roche: Welcome To The (Ubuntu) Bionic Age: Behind communitheme: interviewing Mads

Planet Ubuntu - Wed, 18/04/2018 - 1:35 PM
Interviewing people behind communitheme. Today: Mads Rosendahl

As discussed last week when unveiling the communitheme snap for Ubuntu 18.04 LTS, here is a series of interviews this week with some members of the core contributor team shaping this entirely community-driven theme.

Today is the turn of Mads, madsrh on the community hub.

Who are you? What are you doing/where are you working? Give us some words and background about you!

My name is Mads Rosendahl (MadsRH) and I’m from Denmark. My dayjob has two sides, half the time I work as a teacher at a school of music and the other half I work in PR (no, not pull requests ;) ) where I do things like brochures, ads, website graphics, etc.

I’m no saint - I use OSX, Windows and Linux.

I got involved with Ubuntu back when everything was brown - around 7.10. When I read about Ubuntu, Linux and how Mark Shuttleworth fits into the story, a fire was lit inside me and I wanted to give something back to this brilliant project. In the beginning I set out to make people’s desktops brown and pretty by posting wallpaper suggestions to the artwork mailing list.

Because I can’t write any code, I mostly piggyback on awesome people in the community, like when I worked on the very first slideshow in Ubiquity installer with Dylan McCall.

I attended UDS in Dallas back in 2009 (an amazing experience!), but have since had to take a long break from contributing. This theme work is my first contribution since then.

What are your main contribution areas on communitheme?

I do mockups, design, find bugs and participate in the conversations. I also suggested new system sounds and have a cursor project in the works - let’s see if it’ll make it into the final release of the theme.

How did you hear about the new theming effort on Ubuntu, and what made you want to participate actively in it?

I’ve been asking for this for a long time, and suddenly Merlijn suggested a community theme in a comment on a blog post, so of course I signed up. It’s obvious that the best Linux distribution should have the most beautiful out-of-the-box desktop ;)

How is the interaction with the larger community? How do you deal with different ideas and opinions on the community hub, issues opened against the projects, and PRs?

There’s an awesome community within Ubuntu and there has been a ton of great feedback and conversations around the decisions. It comes as no surprise that with (almost) every change, there are people both for and against. Luckily we’re not afraid of experimenting. I’m sure that with the final release we’ll have found a good balance between UX (what works best), design (what looks best) and branding (what feels like Ubuntu).

We have a small but awesome team, put together back in November when the project was first announced, but we’ve also seen a lot of other contributors file issues and step up with PRs - fantastic!

It’s easy to see that people are passionate about the Ubuntu desktop.

What did you think (honestly) about the decision not to ship it by default on 18.04, but to curate it for a little while?

It’s the right move. I rest comfortably knowing that Canonical values stability over beauty. Especially when you’ll be able to just install a snap to get the new theme. Rather dusty and stable, than shiny and broken.

Any idea or wish on what the theme name (communitheme is a project codename) should be?

No, but off the top of my head how about: “Dewy” or “Muutos” (Finnish for change)

Any last words or questions I should have asked you?

Nope.

Thanks Mads!

Next interview coming up tomorrow, stay tuned! :)

Jono Bacon: Open Collaboration Conference (at Open Source Summit) Call For Papers

Planet Ubuntu - Wed, 18/04/2018 - 6:34 AM

Back in February I announced that the Call For Papers for the Open Collaboration Conference was open. For those of you in the dark, last year I ran the Open Community Conference as part of the Linux Foundation’s Open Source Summit events in North America and Europe. The events were a great success, but this year we decided to change the name. From the original post:

As the event has evolved, I have wanted it to incorporate as many elements as possible focused on people collaborating together. While one component of this is certainly people building communities, other elements such as governance, remote working, innersource, cultural development, and more fit under the banner of “collaboration”, but don’t necessarily fit under the traditional banner of “community”. As such, we decided to change the name of the conference to the Open Collaboration Conference. I am confident this will then provide both a home to the community strategy and tactics content, as well as these other related areas. This way the entire event serves as a comprehensive capsule for collaboration in technology.

I am really excited about this year’s events. They are taking place:

  • North America in Vancouver from 29th – 31st August 2018
  • Europe in Edinburgh from 22nd – 24th October 2018

Last year there was a wealth of tremendous material and truly talented speakers, and I am looking forward to even more focused, valuable, and pragmatic content.

North America Call For Papers Closing Soon

…this neatly leads to the point.

The Call For Papers for the Vancouver event closes on 29th April 2018. So, be sure to go and get your papers in right away.

Also, don’t forget that the CFP for the European event closes on 1st July 2018. Go and submit your papers there too!

For both events I am really looking for a diverse set of content that offers genuine pragmatic value. Example topics include:

  • Open Source Metrics
  • Incentivization and Engagement
  • Software Development Methodologies and Platforms
  • Building Internal Innersource Communities
  • Remote Team Management and Methods
  • Bug/Issue Management and Triage
  • Communication Platforms and Methods
  • Open Source Governance and Models
  • Mentoring and Training
  • Event Strategy
  • Content Management and Social Media
  • DevOps Culture
  • Community Management
  • Advocacy and Evangelism
  • Government and Compliance

Also, here’s a pro tip for helping to get your papers picked.

Many people who submit papers to conferences send in very generic “future of open source” style topics. For the Open Collaboration Conference I am eager to have a few of these, but I am particularly interested in seeing deep dives into specific areas, technologies and approaches. Your submission will be especially well received if it offers pragmatic approaches and value that the audience can immediately take away and apply in their own world. So, consider how you package up your recommendations and best practices, and I look forward to seeing your submissions and seeing you there!

The post Open Collaboration Conference (at Open Source Summit) Call For Papers appeared first on Jono Bacon.

Andres Rodriguez: MAAS 2.4.0 beta 2 released!

Planet Ubuntu - Tue, 17/04/2018 - 7:56 PM
Hello MAASters! I’m happy to announce that MAAS 2.4.0 beta 2 is now released and is available for Ubuntu Bionic.

MAAS Availability

MAAS 2.4.0 beta 2 is currently available in Bionic’s Archive or in the following PPA: ppa:maas/next

MAAS 2.4.0 (beta2) New Features & Improvements

MAAS Internals optimisation

Continuing with MAAS’ internal surgery, a few more improvements have been made:

  • Backend improvements

  • Improve the image download process, to ensure rack controllers immediately start image download after the region has finished downloading images.

  • Reduce the service monitor interval to 30 seconds. The monitor tracks the status of the various services provided alongside MAAS (DNS, NTP, Proxy).

  • UI Performance optimizations for machines, pods, and zones, including better filtering of node types.

KVM pod improvements

Continuing with the improvements for KVM pods, beta 2 adds the ability to:

  • Define a default storage pool

This feature allows users to select the default storage pool to use when composing machines, in case multiple pools have been defined. Otherwise, MAAS will pick the storage pool automatically, depending on which pool has the most available space.

  • API – Allow allocating machines with different storage pools

Allows users to request a machine with multiple storage devices from different storage pools. This feature uses storage tags to automatically map a storage pool in libvirt with a storage tag in MAAS.

UI Improvements
  • Remove remaining YUI in favor of AngularJS.

As of beta 2, MAAS has now fully dropped the use of YUI for the web interface. The last sections using YUI were the Settings page and the login page; both have now been transitioned to AngularJS.

  • Re-organize Settings page

The MAAS settings have now been reorganized into multiple tabs.

Minor improvements
  • API for default DNS domain selection

Adds the ability to define the default DNS domain. This is currently only available via the API.

  • Vanilla framework upgrade

We would like to thank the Ubuntu web team for their hard work upgrading MAAS to the latest version of the Vanilla framework. MAAS is looking better and more consistent every day!

Bug fixes

Please refer to the following for all 37 bug fixes in this release, which address issues with MAAS across the board:

https://launchpad.net/maas/+milestone/2.4.0beta2

 

The Fridge: Ubuntu Weekly Newsletter Issue 523

Planet Ubuntu - Tue, 17/04/2018 - 5:35 AM

Welcome to the Ubuntu Weekly Newsletter, Issue 523 for the week of April 8 – 14, 2018 – the full version is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Rozz Welford
  • Elizabeth K. Joseph
  • Bashing-om
  • wildmanne39
  • Krytarik Raido
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

David Tomaschik: The IoT Hacker's Toolkit

Planet Ubuntu - Mon, 16/04/2018 - 9:00 PM

Today, I’m giving a talk entitled “The IoT Hacker’s Toolkit” at BSides San Francisco. I thought I’d release a companion blog post to go along with the slide deck. I’ll also include a link to the video once it gets posted online.

Introduction

From my talk synopsis:

IoT and embedded devices provide new challenges to security engineers hoping to understand and evaluate the attack surface these devices add. From new interfaces to uncommon operating systems and software, the devices require both skills and tools just a little outside the normal security assessment. I’ll show both the hardware and software tools, where they overlap and what capabilities each tool brings to the table. I’ll also talk about building the skillset and getting the hands-on experience with the tools necessary to perform embedded security assessments.

While some IoT devices can be evaluated from a purely software standpoint (perhaps reverse engineering the mobile application is sufficient for your needs), a lot more can be learned about the device by interacting with all the interfaces available (often including ones not intended for access, such as debug and internal interfaces).

Background

I’ve always had a fascination with both hacking and electronics. I became a radio amateur at age 11, and in college, since my school had no concentration in computer security, I selected an embedded systems concentration. As a hacker, I’ve viewed the growing population of IoT devices with fascination. These devices introduce a variety of new challenges to hackers, including the security engineers tasked with evaluating these devices for security flaws:

  • Unfamiliar architectures (mostly ARM and MIPS)
  • Unusual interfaces (802.15.4, Bluetooth LE, etc.)
  • Minimal software (stripped C programs are common)

Of course, these challenges also present opportunities for hackers (white-hat and black-hat alike) who understand the systems. While finding a memory corruption vulnerability in an enterprise web application is all but unheard of, on an IoT device, it’s not uncommon for web requests to be parsed and served using basic C, with all the memory management issues that entails. In 2016, I found memory corruption vulnerabilities in a popular IP phone.

Think Capabilities, Not Toys

A lot of hackers, myself included, are “gadget guys” (or “gadget girls”). It’s hard not to look at every possible tool as something new to add to the toolbox, but at the end of the day, one has to consider how the tool adds new capabilities. It needn’t be a completely distinct capability, perhaps it offers improved speed or stability.

Of course, this is a “do as I say, not as I do” area. I, in fact, have quite a number of devices with overlapping capabilities. I’d love to claim this was just to compare devices for the benefit of those attending my presentation or reading this post, but honestly, I do love my technical toys.

Software

Much of the software does not differ from that for application security or penetration testing. For example, Wireshark is commonly used for network analysis (IP and Bluetooth), and Burp Suite for HTTP/HTTPS.

The website fccid.io is very useful in reconnaissance of devices, providing information about the frequencies and modulations used, as well as often internal pictures of devices, which can also reveal information such as chipsets, overall architecture, etc., all without lifting a screwdriver.

Reverse Engineering

Firmware images are often multiple files concatenated together, or contain proprietary metadata headers. Binwalk walks the image, looking for known file signatures, and extracts the components. Often this will include entire Linux filesystems, kernel images, etc.
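
If you prefer to script this step, binwalk also ships a Python API (assuming your binwalk installation includes the Python module). A minimal sketch, with firmware.bin standing in for whatever image you pulled off the device:

import binwalk

# Signature-scan a firmware image and extract anything recognized
# (squashfs, gzip, kernel images, ...); extracted files land in a
# _firmware.bin.extracted/ directory next to the input file.
for module in binwalk.scan("firmware.bin", signature=True, extract=True, quiet=True):
    for result in module.results:
        print("0x%.8X  %s" % (result.offset, result.description))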

Once you have extracted this, you might be interested in analyzing the binaries or other software contained inside. Often a disassembler is useful. My current favorite disassembler is Binary Ninja, but there are a number of options:

Basic Tools

There are a few tools that I consider absolutely essential to any sort of hardware hacking exercise. These tools are fundamental to gaining an understanding of the device and accessing multiple types of interfaces on it.

Screwdriver Set

A screwdriver set might be an obvious thing, but you’ll want one with bits that can get into tight places and are appropriately sized to the screws on your device (using the wrong size Phillips bit is one of the easiest ways to strip a screw). Many devices also use “security screws”, which seems to be a term applied to just about any screw that doesn’t come in your standard household tool kit. (I’ve seen Torx, triangle bits, square bits, Torx with a center pin, etc.)

I have a wonderful driver kit from iFixit, and I’ve found almost nothing that it won’t open. The extension driver helps get into smaller spaces, and the 64 bits cover just about everything. I personally like to support iFixit because they have great write-ups and tear downs, but there are also cheaper clones of this toolkit.

Openers

Many devices are sealed with plastic catches or pieces that are press-fit together. For these, you’ll need some kind of opener (sometimes called a “spudger”) to pry them apart. I find a variety of shapes useful. You can get these as part of a combined tool kit from iFixit, as iFixit clones, or as openers by themselves. I have found the iFixit model to be of slightly higher quality, but I also carry a cheap clone for occasional travel use.

The very thin metal one with a plastic handle is probably my favorite opener – it fits into the thinnest openings, but consequently it also bends fairly easily. I’ve been through a few due to bending damage. Be careful how you use these tools, and make sure your hand is not where they will go if they slip! They are not quite razor-blade sharp, but they will cut your hand with a bit of force behind them.

Multimeter

I get it, you’re looking to hack the device, not rewire your car. That being said, for a lot of tasks, a halfway decent multimeter is somewhere between an absolute requirement and a massive time saver. Some of the tasks a multimeter will help with include:

  • Identifying unknown pinouts
  • Finding the ground pin for a UART
  • Checking which components are connected
  • Figuring out what kind of power supply you need
  • Checking the voltage on an interface to make sure you don’t blow something up

I have several multimeters (more than one is important for electronics work), but you can get by with a single one for your IoT hacking projects. The UNI-T UT-61E is a popular model with a good price/performance ratio, but its safety ratings are a little optimistic. The EEVBlog BM235 is my favorite of my meters, but a little higher end (aka more expensive). If you’re buying for work, the Fluke 87V is the gold standard of multimeters.

If you buy a cheap meter, it will probably work for IoT projects, but there are many multimeters that are unsafe out there. Please do not use these cheap meters on “mains” electricity, high voltage power supplies, anything coming out of the wall, etc. Your personal safety is not worth saving $40.

Soldering Iron

You will find a lot of unpopulated headers (just the holes in the circuit board) on production IoT devices. The headers for various debug interfaces are left out, either as a cost savings, or for space reasons, or perhaps both. The headers were used during the development process, but often the manufacturer wants to leave the connections either to avoid redoing the printed circuit board (PCB) layout, or to be able to debug failures in the field.

In order to connect to these unpopulated headers, you will want to solder your own headers in their place. To do so, you’ll need a soldering iron. To minimize the risk of damaging the board in the process, use a soldering iron with a variable temperature and a small tip. The Hakko FX-888D is very popular and a very nice option, but you can still do good work with something like this Aoyue or other options. Just don’t use a soldering iron designed for a plumber or similar uses – you’ll just end up burning the board.

Likewise, you’ll want to practice your soldering skills before you start work on your target board – find some small soldering projects to practice on, or some throw-away scrap electronics to work on.

Network Interfaces

Obviously, these devices have network interfaces. After all, they are the “Internet of Things”, so a network connection would seem to be a requirement. Nearly universally, 802.11 connectivity is present (sometimes only on a base station), and Ethernet (10/100 or Gigabit) interfaces are also very common.

Wired Network Sniffing

The easiest way to sniff a wired network is often a 2nd interface on your computer. I’m a huge fan of this USB 3.0 to Dual Gigabit Adapter, which even has a USB-C version for those using one of the newer laptops or Macbooks that only support USB-C. Either option gives you two network ports to work with, even on laptops without built-in wired interfaces.

Beyond this, you’ll need software for the sniffing. Wireshark is an obvious tool for raw packet capture, but you’ll often also want HTTP/HTTPS interception, for which Burp Suite is the de facto standard, while mitmproxy is an up-and-coming contender with a lot of nice features.
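
If Burp feels heavyweight for a quick look at a device’s phone-home traffic, a tiny mitmproxy addon can log exactly what you care about. A minimal sketch (the iot_log.py name is arbitrary); run it with mitmdump -s iot_log.py and point the device at the proxy:

# iot_log.py - log every HTTP(S) request an IoT device makes through the proxy
from mitmproxy import http


def request(flow: http.HTTPFlow) -> None:
    # Called by mitmproxy for each client request
    print(flow.request.method, flow.request.pretty_url)
    if "Authorization" in flow.request.headers:
        # Hard-coded tokens and credentials show up here surprisingly often
        print("  Authorization:", flow.request.headers["Authorization"])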

Wireless Network Sniffing

Most common wireless network interfaces on laptops can run in monitor mode, but perhaps you’d like to keep your wireless connected to the internet while sniffing on another interface. Alfa wireless cards like the AWUS036NH and the AWUS036ACH have been quite popular for a while, but I personally like using the tiny RT5370-based adapters for assessments not requiring long range, due to their compact size and portability.
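
Once an interface is in monitor mode (however you prefer to set that up), even a few lines of scapy will let you eyeball nearby 802.11 traffic. A rough sketch, assuming the monitor interface is named wlan0mon:

from scapy.all import sniff
from scapy.layers.dot11 import Dot11


def show(pkt):
    # addr2 is the transmitter address on most 802.11 frames
    if pkt.haslayer(Dot11):
        print(pkt[Dot11].type, pkt[Dot11].subtype, pkt[Dot11].addr2)


# Capture 100 frames from the monitor-mode interface and summarize them
sniff(iface="wlan0mon", prn=show, count=100)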

Wired (Debug/Internal) Interfaces

There are many subtle interfaces on IoT devices, intended for either debug use, or for various components to communicate with each other. For example:

  • SPI/I2C for flash chips
  • SPI/SD for wifi chips
  • UART for serial consoles
  • UART for bluetooth/wifi controllers
  • JTAG/SWD for debugging processors
  • ICSP for In-Circuit Programming

UART

Though there are many universal devices that can do other things, I run into UARTs so often that I like having a standalone adapter for this. Additionally, having a standalone adapter allows me to maintain a UART connection at the same time as I’m working with JTAG/SWD or other interfaces.

You can get a standalone cable for around $10, that can be used for most UART interfaces. (On most devices I’ve seen, the UART interface is 3.3v, and these cables work well for that.) Most of these cables have the following pinout, but make sure you check your own:

  • Red: +5V (Don’t connect on most boards)
  • Black: GND
  • Green: TX from Computer, RX from Device
  • White: RX from Computer, TX from Device

There are also a number of breakouts for the FT232RL or the CH340 chips for UART to USB. These provide a row of headers to connect jumpers between your target device and the adapter. I prefer the simplicity of the cables (and fewer jumper ends to come loose during my testing), but this is further evidence that there are a number of options to provide the same capabilities.
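
Whichever adapter you use, once the wiring is right the software side is simple. A minimal sketch using pyserial, assuming the adapter enumerates as /dev/ttyUSB0 and the console runs at the common 115200 8N1 settings:

import serial  # pip install pyserial

# Open the UART console exposed by the USB adapter
with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1) as console:
    console.write(b"\n")              # nudge the console for a prompt
    while True:
        line = console.readline()     # returns b"" when the timeout expires
        if line:
            print(line.decode(errors="replace"), end="")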

Universal Interfaces (JTAG/SWD/I2C/SPI)

There are a number of interface boards referred to as “universal interfaces” that have the capability to interface with a wide variety of protocols. These largely fit into two categories:

  • Bit-banging microcontrollers
  • Hardware interfaces (dominated by the FT*232 series from FTDI)

There are a number of options for implementing a bit-banging solution for speaking these protocols, ranging from software projects to run on an Arduino, to projects like the Bus Pirate, which uses a PIC microcontroller. These generally present a serial interface (UART) to the host computer and applications, and use in-band signalling for configuration and settings. There may be some timing issues on certain devices, as microcontrollers often cannot update multiple output pins in the same clock cycle.

Hardware interfaces expose a dedicated USB endpoint to talk to the device, and though this can be configured, it is done via USB endpoints and registers. The protocols are implemented in semi-dedicated hardware. In my experience, these devices are both faster and more reliable than bit-banging microcontrollers, but you are limited to whatever protocols are supported by the particular device, or the capabilities of the software to drive them. (For example, the FT*232H series can do most protocols via bit-banging, but it updates an entire register at a time, and has high enough speed to run the clock rate of many protocols.)

The FT2232H and FT232H (not to be confused with the FT232RL, which is UART only), in particular, have been incorporated into a number of different breakout boards that make excellent universal interfaces.
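
As a taste of what these boards make possible, here is a rough sketch of reading a SPI flash chip’s JEDEC ID through an FT232H breakout with the pyftdi library; the ftdi:// URL, chip-select line and clock rate are assumptions you would adjust for your own wiring:

from pyftdi.spi import SpiController  # pip install pyftdi

spi = SpiController()
spi.configure("ftdi://ftdi:232h/1")           # first FT232H found on USB
flash = spi.get_port(cs=0, freq=1e6, mode=0)  # CS0, 1 MHz, SPI mode 0
jedec_id = flash.exchange(b"\x9f", 3)         # 0x9F = JEDEC "Read ID" opcode
print("JEDEC ID:", jedec_id.hex())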

Logic Analyzer

When you have an unknown protocol, unknown pinout, or unknown protocol settings (baud rate, polarity, parity, etc.), a logic analyzer can dramatically help by giving you a direct look at the signals being passed between chips or interfaces.

I have a Saleae Logic 8, which is a great value logic analyzer. It has a compact size and their software is really excellent and easy to use. I’ve used it to discover the pinout for many unlabeled ports, discover the settings for UARTs, and just generally snoop on traffic between two chips on a board.

Though there are cheap knock-offs available on eBay or AliExpress, I have tried them and they are of very poor quality, and unfortunately the open-source sigrok software is not quite up to the quality of the Saleae software. Additionally, the knock-offs rarely have any input protection to prevent you from blowing up the device yourself.

Wireless

Obviously, the Internet of Things has quite a number of wireless devices. Some of these devices use WiFi (discussed above), but many use other wireless protocols. Bluetooth (particularly Bluetooth LE) is quite common, but in other areas, such as home automation, other protocols prevail. Many of these are based on 802.15.4 (e.g. Zigbee), on Z-Wave, or on proprietary protocols in the 433 MHz, 915 MHz, or 2.4 GHz ISM bands.

Bluetooth

Bluetooth devices are incredibly common, and Bluetooth Low Energy (starting with Bluetooth 4.0) is very popular for IoT devices. Most devices that do not stream audio, provide IP connectivity, or have other high-bandwidth needs seem to be moving to Bluetooth Low Energy, probably because of several reasons:

  1. Lower power consumption (battery friendly)
  2. Cheaper chipsets
  3. Less complex implementation

There is essentially only one tool I can really recommend for assessing Bluetooth, and that is the Ubertooth One (Amazon). It can follow and capture Bluetooth communications, providing output in pcap or pcapng format, allowing you to import the communications into Wireshark for later analysis. (You can also use other pcap-based tools like scapy to analyze the resulting captures.) The Ubertooth tools are available as packages in Debian, Ubuntu and Kali, but you can get a more up-to-date version of the software from their GitHub repository.
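
Once you have a capture on disk, post-processing it with scapy is straightforward. A small sketch, assuming a previously saved file named capture.pcap (the filename is just an example):

from scapy.all import rdpcap

packets = rdpcap("capture.pcap")   # load the Ubertooth capture
print(len(packets), "frames in capture")
for pkt in packets[:10]:
    # summary() gives a one-line description of the layers in each frame
    print(pkt.summary())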

Adafruit also offers a BLE Sniffer, which works only for Bluetooth Low Energy and utilizes a Nordic Semiconductor BLE chip with special sniffing firmware. The software for this works well on Windows, but not so well on Linux, where it is a Python script that tends to be more difficult to use than the Ubertooth tools.

Software Defined Radio

For custom protocols, or to enable lower-level evaluation or attacks of radio-based systems, Software Defined Radio presents an excellent opportunity for direct interaction with the RF side of the IoT device. This can range from only receiving (for purposes of understanding and reverse engineering the device) to being able to simultaneously receive and transmit (full-duplex) depending upon the needs of your assessment.

For simply receiving, there are simple DVB-T dongles that have been repurposed as general-purpose SDRs, often referred to as “RTL-SDRs”, a name based on the Realtek RTL2832U chip present in the devices. These can be used because the chip is capable of providing the raw samples to the host operating system, and because of their low cost a large open source community has emerged. Companies like NooElec are now even offering custom-built hardware based on these chips for the SDR community. There’s also a kit that expands the receive range of the RTL-SDR dongles.
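
With one of these dongles and the pyrtlsdr bindings, grabbing raw IQ samples takes only a few lines. A sketch, with 433.92 MHz picked as an example ISM-band frequency used by many cheap remotes; tune to whatever your target actually uses:

from rtlsdr import RtlSdr  # pip install pyrtlsdr

sdr = RtlSdr()
sdr.sample_rate = 2.048e6        # samples per second
sdr.center_freq = 433.92e6       # Hz
sdr.gain = "auto"

samples = sdr.read_samples(256 * 1024)   # complex IQ samples (numpy array)
sdr.close()
print("captured", len(samples), "samples, mean magnitude", abs(samples).mean())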

In order to transmit as well, the hardware is significantly more complex, and most options in this space are driven by an FPGA or other powerful processor. Even a few years ago, the capabilities here were very expensive with tools like the USRP. However, the HackRF by Great Scott Gadgets and the BladeRF by Nuand have offered a great deal of capability for a hacker-friendly price.

I personally have a BladeRF, but I honestly wish I had bought a HackRF instead. The HackRF has a wider usable frequency range (especially at the low end), while the BladeRF requires a relatively expensive upconverter to cover those bands. The HackRF also seems to have a much more active community and better support in some areas of open source software.

Other Useful Tools

It is occasionally useful to use an oscilloscope to see RF signals or signal integrity, but I have almost never found this necessary.

Specialized JTAG programmers for specific hardware often work better, but cost quite a bit more and are specialized to those specific items.

For dumping Flash chips, Xeltec programmers/dumpers are considered the “top of the line” and do an incredible job, but are at a price point such that only labs doing this on a regular basis find it worthwhile.

Slides

PDF: The IoT Hacker’s Toolkit

Lubuntu Blog: This Week in Lubuntu Development #3

Planet Ubuntu - Mon, 16/04/2018 - 6:45 PM
Here is the third issue of This Week in Lubuntu Development. You can read last week's issue here.

Changes

General

Some work was done on the Lubuntu Manual by Lubuntu contributor Lyn Perrine. Here's what she has been working on:

  • Start page for Evince.
  • Start docs for the Document Viewer.
  • Start work on the GNOME […]

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, March 2018

Planet Ubuntu - Mon, 16/04/2018 - 4:07 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In March, about 214 work hours have been dispatched among 13 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change.

The security tracker currently lists 31 packages with a known CVE and the dla-needed.txt file lists 26. Thanks to a few extra hours dispatched this month (the accumulated backlog of a contributor), the number of open issues came back to a more usual value.

Thanks to our sponsors

New sponsors are in bold.


Elizabeth K. Joseph: SCaLE16x with Ubuntu, CI/CD and more!

Planet Ubuntu - Fri, 13/04/2018 - 10:49 PM

Last month I made my way down to Pasadena for one of my favorite conferences of the year, the Southern California Linux Expo. Like most years, I split my time between Ubuntu and stuff I was working on for my day job. This year that meant doing two talks and attending UbuCon on Thursday and half of Friday.

As with past years, UbuCon at SCALE was hosted by Nathan Haines and Richard Gaskin. The schedule this year was very reflective about the history and changes in the project. In a talk from Sriram Ramkrishna of System76 titled “Unity Dumped Us! The Emotional Healing” he talked about the closing of development on the Unity desktop environment. System76 is primarily a desktop company, so the abrupt change of direction from Canonical took some adjusting to and was a little painful. But out of it came their Ubuntu derivative Pop!_OS and a community around it that they’re quite proud of. In the talk “The Changing Face of Ubuntu” Nathan Haines walked through Ubuntu history to demonstrate the changes that have happened within the project over the years, and allow us to look at the changes today with some historical perspective. The Ubuntu project has always been about change. Jono Bacon was in the final talk slot of the event to give a community management talk titled “Ubuntu: Lessons Learned”. Another retrospective, he drew from his experience when he was the Ubuntu Community Manager to share some insight into what worked and what didn’t in the community. Particularly noteworthy for me were his points about community members needing direction more than options (something I’ve also seen in my work, discrete tasks have a higher chance of being taken than broad contribution requests) and the importance of setting expectations for community members. Indeed, I’ve seen that expectations are frequently poorly communicated in communities where there is a company controlling direction of the project. A lot of frustration could be alleviated by being more clear about what is expected from the company and where the community plays a role.


UbuCon group photo courtesy of Nathan Haines (source)

The UbuCon this year wasn’t as big as those in years past, but we did pack the room with nearly 120 people for a few talks, including the one I did on “Keeping Your Ubuntu Systems Secure”. Nathan Haines suggested this topic when I was struggling to come up with a talk idea for the conference. At first I wasn’t sure what I’d say, but as I started taking notes about what I know about Ubuntu, both from a systems administration perspective with servers and as someone who has done a fair amount of user support in the community over the past decade, it turned out that I did have an entire talk worth of advice! None of what I shared was complicated or revolutionary; there was no kernel hardening in my talk or much use of third-party security tools. Instead the talk focused on things like keeping your system updated, developing a fundamental understanding of how your system and Debian packages work, and tips around software management. The slides for my presentation are pretty wordy, so you can glean the tips I shared from them: Keeping_Your_Ubuntu_Systems_Secure-UbuConSummit_Scale16x.pdf.


Thanks to Nathan Haines for taking this photo during my talk (source)

The team running Ubuntu efforts at the conference rounded off SCALE by staffing a booth through the weekend. The Ubuntu booths have certainly evolved over the years; when I ran them it was always a bit cluttered and had quite the grassroots feeling to it (the booth in 2012). The booths the team put together now are simpler and more polished. This is definitely in line with the trend of a more polished open source software presence in general, so kudos to the team for making sure our little Ubuntu California crew of volunteers keeps up.

Shifting over to the more work-focused parts of the conference, on Friday I spoke at Container Day, with my talk being the first of the day. The great thing about going first is that I get to complete my talk and relax for the rest of the conference. The less great thing about it is that I get to experience all the A/V gotchas and be awake and ready to give a talk at 9:30AM. Still, I think the pros outweighed the cons and I was able to give a refresh of my “Advanced Continuous Delivery Strategies for Containerized Applications Using DC/OS” talk, which included a new demo that I finished writing the week before. The talk seemed to generate interest that led to good discussions later in the conference, and to my relief the live demo concluded without a problem. Slides from the talk can be found here: Advanced_CD_Using_DCOS-SCALE16x.pdf


Thanks to Nathan Handler for taking this photo during my talk (source)

Saturday and Sunday brought a duo of keynotes that I wouldn’t have expected at an open source conference five years ago, from Microsoft and Amazon. In both these keynotes the speaker recognized the importance of open source today in the industry, which has fueled the shift in perspective and direction regarding open source for these companies. There’s certainly a celebration to be had around this, when companies are contributing to open source because it makes business sense to do so, we all benefit from the increased opportunities that presents. On the other hand, it has caused disruption in the older open source communities, and some have struggled to continue to find personal value and meaning in this new open source world. I’ve been thinking a lot about this since the conference and have started putting together a talk about it, nicely timed for the 20th anniversary of the “open source” term. I want to explore how veteran contributors stay passionate and engaged, and how we can bring this same feeling to new contributors who came down different paths to join open source communities.

Regular talks began on Saturday with me attending Nathan Handler’s talk on “Terraforming all the things”, where he shared some of the work they’ve been doing at Yelp that has resulted in things like DNS records and CDN configuration being handled by Terraform. From there I went to a talk by Brian Proffitt where he talked about metrics in communities and the Community Health Analytics Open Source Software (CHAOSS) project. I spent much of the rest of the day in the “hallway track” catching up with people, but at the end I popped into a talk by Steve Wong on “Running Containerized Workloads in an on-prem Datacenter”, where he discussed the role that bare metal continues to have in the industry, even as many rush to the cloud for a turnkey solution.

It was at this talk where I had the pleasure of meeting one of our newest Account Executives at Mesosphere, Kelly Bond, and also had some time to catch up with my colleague Jörg Schad.


Jörg, me, Kelly

Nuritzi Sanchez presented my favorite talk on Sunday, on Endless OS. They build a Linux distribution using Flatpak and, as an organization, work on the problem of access to technology in developing nations. I’ve long been concerned about cellphone-only access in these countries. You need a mix of a system that’s tolerant of being offline and that has input devices (like keyboards!) that allow work to be done on them. They’re doing really interesting work on the technical side related to offline content and general architecture around a system that needs to be conscious of offline status, but they’re also developing deployment strategies on the ground in places like Indonesia that will ensure the local community can succeed long term. I have a lot of respect for the people working toward all this, and really want to see this organization succeed.

I’m always grateful to participate in this conference. It’s grown a lot over the years and it certainly has changed, but the autonomy given to special events like UbuCon allows for a conference that brings together lots of different voices and perspectives all in one place. I also have a lot of friends who attend this conference, many of whom span jobs and open source projects I’ve worked on over more than a decade. Building friendships and reconnecting with people is part of what makes the work I do in open source so important to me, and not just a job. Thanks to everyone who continues to make this possible year after year in beautiful Pasadena.

More photos from the event here: https://www.flickr.com/photos/pleia2/albums/72157693153653781

Simon Raffeiner: I went to Fukushima

Planet Ubuntu - Fri, 13/04/2018 - 1:36 PM

I'm an engineer and interested in all kinds of technology, especially if it is used to build something big. But I'm also fascinated by what happens when things suddenly change and don't go as expected, and especially by everything that's left behind after technological and social revolutions or disasters. In October 2017 I travelled across Japan and decided to visit one of the places where technology had failed in the worst way imaginable: the Fukushima Evacuation Zone.

The post I went to Fukushima appeared first on LIEBERBIBER.

Kees Cook: security things in Linux v4.16

Planet Ubuntu - Fri, 13/04/2018 - 2:04 AM

Previously: v4.15

Linux kernel v4.16 was released last week. I really should write these posts in advance, otherwise I get distracted by the merge window. Regardless, here are some of the security things I think are interesting:

KPTI on arm64

Will Deacon, Catalin Marinas, and several other folks brought Kernel Page Table Isolation (via CONFIG_UNMAP_KERNEL_AT_EL0) to arm64. While most ARMv8+ CPUs were not vulnerable to the primary Meltdown flaw, the Cortex-A75 does need KPTI to be safe from memory content leaks. It’s worth noting, though, that KPTI does protect other ARMv8+ CPU models from having privileged register contents exposed. So, whatever your threat model, it’s very nice to have this clean isolation between kernel and userspace page tables for all ARMv8+ CPUs.

hardened usercopy whitelisting

While whole-object bounds checking was implemented in CONFIG_HARDENED_USERCOPY already, David Windsor and I finished another part of the porting work of grsecurity’s PAX_USERCOPY protection: usercopy whitelisting. This further tightens the scope of slab allocations that can be copied to/from userspace. Now, instead of allowing all objects in slab memory to be copied, only the whitelisted areas (where a subsystem has specifically marked the memory region allowed) can be copied. For example, only the auxv array out of the larger mm_struct.

As mentioned in the first commit from the series, this reduces the scope of slab memory that could be copied out of the kernel in the face of a bug to under 15%. As can be seen, one area of remaining work is the kmalloc regions. Those are regularly used for copying things in and out of userspace, but they’re also used for small simple allocations that aren’t meant to be exposed to userspace. Working to separate these kmalloc users needs some careful auditing.

Total Slab Memory:     48074720
Usercopyable Memory:    6367532  13.2%
  task_struct             0.2%      4480/1630720
  RAW                     0.3%        300/96000
  RAWv6                   2.1%       1408/64768
  ext4_inode_cache        3.0%    269760/8740224
  dentry                 11.1%    585984/5273856
  mm_struct              29.1%      54912/188448
  kmalloc-8             100.0%      24576/24576
  kmalloc-16            100.0%      28672/28672
  kmalloc-32            100.0%      81920/81920
  kmalloc-192           100.0%      96768/96768
  kmalloc-128           100.0%    143360/143360
  names_cache           100.0%    163840/163840
  kmalloc-64            100.0%    167936/167936
  kmalloc-256           100.0%    339968/339968
  kmalloc-512           100.0%    350720/350720
  kmalloc-96            100.0%    455616/455616
  kmalloc-8192          100.0%    655360/655360
  kmalloc-1024          100.0%    812032/812032
  kmalloc-4096          100.0%    819200/819200
  kmalloc-2048          100.0%  1310720/1310720

This series took quite a while to land (you can see David’s original patch date as back in June of last year). Partly this was due to having to spend a lot of time researching the code paths so that each whitelist could be explained for commit logs, partly due to making various adjustments from maintainer feedback, and partly due to the short merge window in v4.15 (when it was originally proposed for merging) combined with some last-minute glitches that made Linus nervous. After baking in linux-next for almost two full development cycles, it finally landed. (Though be sure to disable CONFIG_HARDENED_USERCOPY_FALLBACK to gain enforcement of the whitelists — by default it only warns and falls back to the full-object checking.)

automatic stack-protector

While the stack-protector features of the kernel have existed for quite some time, it has never been enabled by default. This was mainly due to needing to evaluate compiler support for the feature, and Kconfig didn’t have a way to check the compiler features before offering CONFIG_* options. As a defense technology, the stack protector is pretty mature. Having it on by default would have greatly reduced the impact of things like the BlueBorne attack (CVE-2017-1000251), as fewer systems would have lacked the defense.

After spending quite a bit of time fighting with ancient compiler versions (*cough*GCC 4.4.4*cough*), I landed CONFIG_CC_STACKPROTECTOR_AUTO, which is default on, and tries to use the stack protector if it is available. The implementation of the solution, however, did not please Linus, though he allowed it to be merged. In the future, Kconfig will gain the knowledge to make better decisions which lets the kernel expose the availability of (the now default) stack protector directly in Kconfig, rather than depending on rather ugly Makefile hacks.

That’s it for now; let me know if you think I should add anything! The v4.17 merge window is open. :)

Edit: added details on ARM register leaks, thanks to Daniel Micay.

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Ubuntu Podcast from the UK LoCo: S11E06 – Six Feet Over It - Ubuntu Podcast

Planet Ubuntu - Thu, 12/04/2018 - 4:15 PM

This week we review the Dell XPS 13 (9370) Developer Edition laptop, bring you some command line lurve and go over all your feedback.

It’s Season 11 Episode 06 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Lessons from OpenStack Telemetry: Incubation

Planet Debian - Thu, 12/04/2018 - 2:50 PM

It was mostly around that time in 2012 that I and a couple of fellow open-source enthusiasts started working on Ceilometer, the first piece of software from the OpenStack Telemetry project. Six years have passed since then. I've been thinking about this blog post for several months (even years, maybe), but lacked the time and the hindsight needed to lay out my thoughts properly. In a series of posts, I would like to share my observations about the Ceilometer development history.

To understand the full picture here, I think it is fair to start with a small retrospective on the project. I'll try to keep it short, and it will be unmistakably biased, even if I'll do my best to stay objective – bear with me.

Incubation

Early 2012, I remember discussing with the first Ceilometer developers the right strategy to solve the problem we were trying to address. The company I worked for wanted to run a public cloud, and billing the resources usage was at the heart of the strategy. The fact that no components in OpenStack were exposing any consumption API was a problem.

We debated about how to implement those metering features in the cloud platform. There were two natural solutions: either achieving some resource accounting report in each OpenStack projects or building a new software on the side, covering for the lack of those functionalities.

At that time there were fewer than a dozen OpenStack projects. Still, the burden of patching every project seemed like an infinite task. Having code reviewed and merged in the most significant projects took several weeks, which, considering our timeline, was a show-stopper. We wanted to go fast.

Pragmatism won, and we started implementing Ceilometer using the features each OpenStack project was offering to help us: very little.

Our first and obvious candidate for usage retrieval was Nova, where Ceilometer aimed to retrieve statistics about virtual machine instance utilization. Nova offered no API to retrieve those data – and still doesn’t. Since waiting several months to have such an API exposed was out of the question, we took the shortcut of polling libvirt, Xen or VMware directly from Ceilometer.

That's precisely how temporary hacks become historical design. Implementing this design broke the basis of the abstraction layer that Nova aims to offer.

As time passed, several leads were followed to mitigate those trade-offs in better ways. But with each development cycle, getting anything merged in OpenStack became harder and harder. It went from patches taking a long time to review, to having a long list of requirements to merge anything. Soon, you’d have to create a blueprint to track your work, and write a full specification linked to that blueprint, with the specification itself being reviewed by a bunch of the so-called core developers. The specification had to be a thorough document covering every aspect of the work, from the problem that was trying to be solved, to the technical details of the implementation. Once the specification was approved, which could take an entire cycle (6 months), you’d have to make sure that the Nova team would make your blueprint a priority. To make sure it was, you would have to fly a few thousand kilometers from home to an OpenStack Summit, and orally argue with developers in a room filled with hundreds of other folks about the urgency of your feature compared to other blueprints.

An OpenStack design session in Hong-Kong, 2013

Even if you passed all of those ordeals, the code you'd send could be rejected, and you'd get back to updating your specification to shed light on some particular points that confused people. Back to square one.

Nobody wanted to play that game. Not in the Telemetry team at least.

So Ceilometer continued to grow, surfing the OpenStack hype curve. More developers joined the project every cycle – each with their own list of ideas, features or requirements cooked up by their in-house product manager.

But many features did not belong in Ceilometer. They should have been in different projects. Ceilometer was the first OpenStack project to pass through the OpenStack Technical Committee incubation process that existed before the rules were relaxed.

This incubation process was uncertain, long, and painful. We had to justify the existence of the project, and many technical choices that have been made. Where we were expecting the committee to challenge us at fundamental decisions, such as breaking abstraction layers, it was mostly nit-picking about Web frameworks or database storage.

Consequences

The rigidity of the process discouraged anyone to start a new project for anything related to telemetry. Therefore, everyone went ahead and started dumping its idea in Ceilometer itself. With more than ten companies interested, the frictions were high, and the project was at some point pulled apart in all directions. This phenomenon was happening to every OpenStack projects anyway.

On the one hand, many contributions brought marvelous pieces of technology to Ceilometer. We implemented several features you still don’t find in any other metering system. Dynamically sharded, automatically horizontally scalable polling? Ceilometer has had that for years, whereas you still can’t have it in, e.g., Prometheus.

On the other hand, there were tons of crappy features. Half-baked code merged because somebody needed to ship something. As the project grew further, some of us developers started to feel that this was getting out of control and could be disastrous. The technical debt was growing as fast as the project was.

Several technical choices made were definitely bad. The architecture was a mess; the messaging bus was easily overloaded, the storage engine was non-performant, etc. People would come to me (as I was the Project Team Leader at that time) and ask why the REST API would need 20 minutes to reply to an autoscaling request. The willingness to solve everything for everyone was killing Ceilometer. It's around that time that I decided to step out of my role of PTL and started working on Gnocchi to, at least, solve one of our biggest challenge: efficient data storage.

Ceilometer was also suffering from the poor quality of many OpenStack projects. As Ceilometer retrieves data from a dozen other projects, it has to use their interfaces for data retrieval (API calls, notifications) – or sometimes, palliate for their lack of any interface. Users were complaining about Ceilometer malfunctioning while the root of the problem was actually on the other side, in the polled project. The polling agent would try to retrieve the list of virtual machines running on Nova, but just listing and retrieving this information required several HTTP requests to Nova. And those basic retrieval requests would overload the Nova API, which does not offer any genuine interface from which the data could be retrieved in a small number of calls. And it had terrible performance. From the point of view of the users, the load was generated by Ceilometer. Therefore, Ceilometer was the problem. We had to imagine new ways of circumventing tons of limitations from our siblings. That was exhausting.

At its peak, during the Juno and Kilo releases (early 2015), the code size of Ceilometer reached 54k lines of code, and the number of committers reached 100 individuals (20 regulars). We had close to zero happy users, operators were hating us, and everybody was wondering what the hell was going on in those developers’ minds.

Nonetheless, despite the impediments, most of us had a great time working on Ceilometer. Nothing's ever perfect. I learned tons of things during that period, most of them non-technical. Community management, social interactions, human behavior and politics were at the heart of the adventure, offering a great opportunity for self-improvement.

In the next blog post, I will cover what happened in the years that followed that booming period, up until today. Stay tuned!

Julien Danjou https://julien.danjou.info/ Julien Danjou

Bursary applications for DebConf18 are closing in 48 hours!

Planet Debian - Thu, 12/04/2018 - 12:30pm

If you intend to apply for a DebConf18 bursary and have not yet done so, please proceed as soon as possible!

Bursary applications for DebConf18 will be accepted until April 13th at 23:59 UTC. Applications submitted after this deadline will not be considered.

You can apply for a bursary when you register for the conference.

Remember that giving a talk or organising an event counts towards your bursary; if you have a submission to make, submit it even if it is only sketched out. You will be able to fill in the details later. DebCamp plans can be entered on the usual Sprints page on the Debian wiki.

Please make sure to double-check your accommodation choices (dates and venue). Details about accommodation arrangements can be found on the wiki.

See you in Hsinchu!

Laura Arjona Reina https://bits.debian.org/ Bits from Debian

Ante Karamatić: Spaces – uncomplicating your network

Planet Ubuntu - Thu, 12/04/2018 - 6:44am
An old OpenStack network architecture

For the past 5-6 years I've been in the business of deploying cloud solutions for our customers. The vast majority of that was some form of OpenStack, either a simple cloud or a complicated one. But when you think about it – what is a simple cloud? It's easy to say that a small number of machines makes a simple cloud, and a large number makes a complicated one. But that is not true. The complexity of a typical IaaS solution is pretty much determined by network complexity. Network, in all shapes and forms, from the underlay network to the customer's overlay network requirements. I'll try to explain how we deal with the underlay part in this blog post.

It's no secret that a traditional tree-like network architecture just doesn't work for cloud environments. There are multiple reasons why: it doesn't scale very well, it requires big OSI layer 2 domains and… well, it's based on OSI layer 2. Debugging issues at that level is never a joyful experience. Therefore, for IaaS environments one really wants a modern design in the form of a spine-leaf architecture. A layer 3 spine-leaf architecture. This allows us to have a bunch of smaller layer 2 domains, which then nicely correlate to availability zones, power zones, etc. However, managing environments with multiple layer 2 and therefore even more layer 3 domains requires a bit of rethinking. If we truly want to be effective in deploying and operating a cloud across multiple different layer 2 domains, we need to think about the network in a slightly more abstract way. Luckily, this is nothing new.

In the traditional approach to networking, we talk about TORs, management fabric, BMC/OOB fabric, etc. These are, most of the time, layer 2 concepts. A fabric, after all, is a collection of switches. But the approach is correct; we should always talk about networks in abstract terms. Instead of talking about subnets and VLANs, we should talk about the purpose of the network. This becomes important when we talk about a spine-leaf architecture and multiple different subnets that serve the same purpose. In rack 1, subnet 172.16.1.0/24 is the management network, but in rack 2 the management network is on subnet 192.168.1.0/24, and so on. It's obvious that it's much nicer to abstract those subnets into a 'management network'. Still, nothing new. We do this every day.
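
In code-ish terms, the mental model is nothing more than a tiny mapping like the one below (purely illustrative, reusing the subnets from the example above):

# Purely illustrative: one logical 'management' space backed by a
# different subnet in each rack (subnets from the example above).
MANAGEMENT_SPACE = {
    'rack-1': '172.16.1.0/24',
    'rack-2': '192.168.1.0/24',
}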

So… why do our tools and applications still require us to use VLANs, subnets and IPs? If we deploy the same application across different racks, why do we have to keep separate configurations for each unit of the same application? What we really want is to have all of our Keystones listening on the OpenStack Public API network, and not on subnets 192.168.10.0/24, 192.168.20.0/24 and 192.168.30.0/24. We end up thinking about an application on a network, but we configure exact copies of the same application (its units) differently on different subnets. Clearly our configuration tools are not doing what we want; rather, they force us to bend our way of thinking to what those tools need. It's a paradox: OpenStack itself is not that complicated, it's made complicated by the tools used to deploy it.

While trying to solve this problem in our deployments at Canonical, we came up with the concept of spaces. A space is the abstracted network that we have in our heads but somehow fail to put into our tools. Again, spaces are not a revolutionary concept in networking; they have been in our heads all this time. So, how do we implement spaces at Canonical?

We have grown the concept of spaces across all of our tooling: MAAS, Juju and charms. When we configure MAAS to manage our bare metal machines, we do not define networks as subnets or VLANs; we define them as spaces. A space has a purpose, a description and a few other attributes. VLANs, and indirectly subnets too, become properties of the space, instead of the other way around. This also means that when we deploy a machine, we deploy it connected to a space. When we deploy a machine, we usually do not deploy it on a specific network, but rather with specific requirements: it must be able to talk to X, it must have Y CPUs and Z RAM. If you ever asked yourself why it takes so much time to rack and stack a server, it's because of this disconnect between what we want and how we handle the configuration.

We've also enabled Juju to make this kind of request: it asks MAAS for machines that are connected to a space, or to a set of spaces. It then exposes these spaces to the charms, so that each charm knows what kind of networks the application has at its disposal. This allows us to run 'juju deploy keystone --bind public=public-space -n3': deploy three keystones and connect them to public-space, a space defined in MAAS. Which VLAN that will be, which subnet or IP, we do not care; the charm will get the information about these "low level" terms (VLANs, IPs) from Juju. We humans do not think in terms of VLANs, subnets and IPs; at best we think in OSI layer 1 terms.
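
To illustrate the charm side, resolving such bindings back into concrete addresses inside a hook can look roughly like the following sketch. It assumes the charmhelpers library; the endpoint names are illustrative and simply match the keystone example further down, and the helper wraps Juju's network-get hook tool.

# Minimal sketch, assuming the charmhelpers library; the endpoint names
# are illustrative and match the keystone example below.
from charmhelpers.core.hookenv import network_get_primary_address, unit_get

def binding_addresses():
    """Resolve each bound endpoint to the address Juju assigned to it."""
    addresses = {}
    for endpoint in ('public', 'internal', 'shared-db'):
        try:
            # Wraps the 'network-get <endpoint> --primary-address' hook tool.
            addresses[endpoint] = network_get_primary_address(endpoint)
        except NotImplementedError:
            # Older Juju without network-get: fall back to the unit address.
            addresses[endpoint] = unit_get('private-address')
    return addresses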

Sounds a bit complicated? Let's flip it the other way around. What I can do now is define my application as "3 units of keystone, which use the internal network for SQL, the public network for exposing the API, the internal network for OpenStack's internal communication, and are also exposed on the OAM network for management purposes" – and this is precisely how we deploy OpenStack. In fact, the Juju bundle looks like this:

keystone:
  charm: cs:keystone
  num_units: 3
  bindings:
    # "" is the default binding, used for every endpoint not listed below
    "": oam-space
    public: public-space
    internal: internal-space
    shared-db: internal-space
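
With the rest of the bundle filled out in the same way, deploying the whole model comes down to a single 'juju deploy ./bundle.yaml' against a controller bootstrapped on top of MAAS (the file name here is just an example).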

Those who follow OpenStack development will notice that something similar has landed in OpenStack recently: routed provider networks. It's the same concept, solving the same problem. It's nice to see that Juju supports this out of the box.

Big thanks to the MAAS, Juju, charms and OpenStack communities for making this possible. It has allowed us to deploy complex applications with ease, and has therefore shifted our focus to the bigger picture: IaaS modelling and some other, new challenges!

Streaming the Norwegian ultimate championships

Planet Debian - Thu, 12/04/2018 - 1:36am

As the Norwegian indoor frisbee season is coming to a close, the Norwegian ultimate nationals are coming up, too. Much like in Trøndisk 2017, we'll be doing the stream this year, replacing a single-camera Windows/XSplit setup with a multi-camera free software stack based on Nageru.

The basic idea is the same as in Trøndisk; two cameras (one wide and one zoomed) for the main action and two static ones above the goal zones. (The hall has more amenities for TV productions than the one in Trøndisk, so a basic setup is somewhat simpler.) But there are so many tweaks:

  • We've swapped out some of the cameras for more suitable ones; the DSLRs didn't do too well under the flicker of the fluorescent tubes, for instance, and newer GoPros have rectilinear modes. And there's a camera on the commentators now, with side-by-side view as needed.

  • There are tally lights on the two human-operated cameras (new Nageru feature).

  • We're doing CEF directly in Nageru (new Nageru feature) instead of through CasparCG, to finally get those 60 fps buttery smooth transitions (and less CPU usage!).

  • HLS now comes directly out of Cubemap (new Cubemap feature) instead of being generated by a shell script using FFmpeg.

  • Speaking of CPU usage, we now have six cores instead of four, for more x264 oomph (we wanted to do 1080p60 instead of 720p60, but alas, even x264 at nearly superfast can't keep up when there's too much motion).

  • And of course, a ton of minor bugfixes and improvements based on our experience with Trøndisk—nothing helps as much as battle-testing.

As an extra bonus, we'll be testing camera-over-IP from Android for interviews directly on the field, which will be a fun challenge for the wireless network. Nageru does have support for taking in IP streams through FFmpeg (incidentally, a feature originally added for the now-obsolete CasparCG integration), but I'm not sure if the audio support is mature enough to run in production yet—most likely, we'll do the reception with a laptop and use that as a regular HDMI input. But we'll see; thankfully, it's a non-essential feature this time, so we can afford to have it break. :-)

Streaming starts Saturday morning CEST (UTC+2), runs until late afternoon, and then restarts on Sunday with the playoffs (the final starts at 14:05). There will be commentary in a mix of Norwegian and English depending on the mood of the commentators, so head over to www.plastkast.no if you want to watch :-) The exact schedule is on that page.

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

Debian LTS work, March 2018

Planet Debian - Wed, 11/04/2018 - 10:41pm

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 2 hours from February. I worked 15 hours and will again carry over 2 hours to April.

I made another two releases on the Linux 3.2 longterm stable branch (3.2.100 and 3.2.101), the latter including mitigations for Spectre on x86. I rebased the Debian package onto 3.2.101 but didn't upload an update to Debian this month. We will need to add gcc-4.9 to wheezy before we can enable all the mitigations for Spectre variant 2.

Ben Hutchings https://www.decadent.org.uk/ben/blog Better living through software
