
Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/
Updated: 4 months 2 weeks ago

Kees Cook: UEFI booting and RAID1

Fri, 20/04/2018 - 2:34am

I spent some time yesterday building out a UEFI server that didn’t have on-board hardware RAID for its system drives. In these situations, I always use Linux’s md RAID1 for the root filesystem (and/or /boot). This worked well for BIOS booting since BIOS just transfers control blindly to the MBR of whatever disk it sees (modulo finding a “bootable partition” flag, etc, etc). This means that BIOS doesn’t really care what’s on the drive, it’ll hand over control to the GRUB code in the MBR.

With UEFI, the boot firmware is actually examining the GPT partition table, looking for the partition marked with the “EFI System Partition” (ESP) UUID. Then it looks for a FAT32 filesystem there, and does more things like looking at NVRAM boot entries, or just running BOOT/EFI/BOOTX64.EFI from the FAT32. Under Linux, this .EFI code is either GRUB itself, or Shim which loads GRUB.

So, if I want RAID1 for my root filesystem, that’s fine (GRUB will read md, LVM, etc), but how do I handle /boot/efi (the UEFI ESP)? In everything I found answering this question, the answer was “oh, just manually make an ESP on each drive in your RAID and copy the files around, add a separate NVRAM entry (with efibootmgr) for each drive, and you’re fine!” I did not like this one bit since it meant things could get out of sync between the copies, etc.
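For reference, a minimal sketch of that conventional per-drive approach (device paths, partition numbers, labels, and the loader path are illustrative and would need to match the actual install):

# mkfs.fat -F32 /dev/sda1
# mkfs.fat -F32 /dev/sdb1
# efibootmgr --create --disk /dev/sda --part 1 --label "debian (sda)" --loader '\EFI\debian\shimx64.efi'
# efibootmgr --create --disk /dev/sdb --part 1 --label "debian (sdb)" --loader '\EFI\debian\shimx64.efi'

Every change under /boot/efi then has to be copied to the other ESP by hand (or by a hook script), which is exactly the out-of-sync risk described above.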

The current implementation of Linux’s md RAID puts metadata at the front of a partition. This solves more problems than it creates, but it means the RAID isn’t “invisible” to something that doesn’t know about the metadata. In fact, mdadm warns about this pretty loudly:

# mdadm --create /dev/md0 --level 1 --raid-disks 2 /dev/sda1 /dev/sdb1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90

Reading from the mdadm man page:

-e, --metadata=
      ...
      1, 1.0, 1.1, 1.2 default
             Use the new version-1 format superblock.  This has fewer
             restrictions.  It can easily be moved between hosts with
             different endian-ness, and a recovery operation can be
             checkpointed and restarted.  The different sub-versions store
             the superblock at different locations on the device, either
             at the end (for 1.0), at the start (for 1.1) or 4K from the
             start (for 1.2).  "1" is equivalent to "1.2" (the commonly
             preferred 1.x format).  "default" is equivalent to "1.2".

First we toss a FAT32 on the RAID (mkfs.fat -F32 /dev/md0), and looking at the results, the first 4K is entirely zeros, and file doesn’t see a filesystem:

# dd if=/dev/sda1 bs=1K count=5 status=none | hexdump -C
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00001000  fc 4e 2b a9 01 00 00 00  00 00 00 00 00 00 00 00  |.N+.............|
...
# file -s /dev/sda1
/dev/sda1: Linux Software RAID version 1.2 ...

So, instead, we’ll use --metadata 1.0 to put the RAID metadata at the end:

# mdadm --create /dev/md0 --level 1 --raid-disks 2 --metadata 1.0 /dev/sda1 /dev/sdb1
...
# mkfs.fat -F32 /dev/md0
# dd if=/dev/sda1 bs=1 skip=80 count=16 status=none | xxd
00000000: 2020 4641 5433 3220 2020 0e1f be77 7cac    FAT32   ...w|.
# file -s /dev/sda1
/dev/sda1: ... FAT (32 bit)

Now we have a visible FAT32 filesystem on the ESP. UEFI should be able to boot whatever disk hasn’t failed, and grub-install will write to the RAID mounted at /boot/efi.

However, we’re left with a new problem: on (at least) Debian and Ubuntu, grub-install attempts to run efibootmgr to record which disk UEFI should boot from. This fails, though, since it expects a single disk, not a RAID set. The disk detection comes up empty, and grub-install ends up running efibootmgr with an empty -d argument:

Installing for x86_64-efi platform.
efibootmgr: option requires an argument -- 'd'
...
grub-install: error: efibootmgr failed to register the boot entry: Operation not permitted.
Failed: grub-install --target=x86_64-efi
WARNING: Bootloader is not properly installed, system may not be bootable

Luckily my UEFI boots without NVRAM entries, and I can disable the NVRAM writing via the “Update NVRAM variables to automatically boot into Debian?” debconf prompt when running: dpkg-reconfigure -p low grub-efi-amd64
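As an aside, grub-install also has a --no-nvram option that skips the efibootmgr step entirely; assuming the installed GRUB supports it, something like this would be another way to avoid the failed NVRAM write:

grub-install --target=x86_64-efi --no-nvram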

So, now my system will boot with both or either drive present, and updates from Linux to /boot/efi are visible on all RAID members at boot-time. HOWEVER there is one nasty risk with this setup: if UEFI writes anything to one of the drives (which this firmware did when it wrote out a “boot variable cache” file), it may lead to corrupted results once Linux mounts the RAID (since the member drives won’t have identical block-level copies of the FAT32 any more).

To deal with this “external write” situation, I see some solutions:

  • Make the partition read-only when not under Linux. (I don’t think this is a thing.)
  • Build higher-level knowledge of the root-filesystem RAID configuration to keep a collection of filesystems manually synchronized instead of doing block-level RAID. (Seems like a lot of work and would need redesign of /boot/efi into something like /boot/efi/booted, /boot/efi/spare1, /boot/efi/spare2, etc)
  • Prefer one RAID member’s copy of /boot/efi and rebuild the RAID at every boot. If there were no external writes, there’s no issue. (Though what’s really the right way to pick the copy to prefer?)

Since mdadm has the “--update=resync” assembly option, I can actually do the last option. This required updating /etc/mdadm/mdadm.conf to add <ignore> on the RAID’s ARRAY line to keep it from auto-starting:

ARRAY <ignore> metadata=1.0 UUID=123...

(Since it’s ignored, I’ve chosen /dev/md100 for the manual assembly below.) Then I added the noauto option to the /boot/efi entry in /etc/fstab:

/dev/md100 /boot/efi vfat noauto,defaults 0 0

And finally I added a systemd oneshot service that assembles the RAID with resync and mounts it:

[Unit]
Description=Resync /boot/efi RAID
DefaultDependencies=no
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/sbin/mdadm -A /dev/md100 --uuid=123... --update=resync
ExecStart=/bin/mount /boot/efi
RemainAfterExit=yes

[Install]
WantedBy=sysinit.target
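Assuming the unit above is saved as something like /etc/systemd/system/boot-efi-resync.service (the filename is illustrative), it still needs to be enabled so it runs at boot:

# systemctl daemon-reload
# systemctl enable boot-efi-resync.service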

(And don’t forget to run “update-initramfs -u” so the initramfs has an updated copy of /etc/mdadm/mdadm.conf.)

If mdadm.conf supported an “update=” option for ARRAY lines, this would have been trivial. Looking at the source, though, that kind of change doesn’t look easy. I can dream!

And if I wanted to keep a “pristine” version of /boot/efi that UEFI couldn’t update I could rearrange things more dramatically to keep the primary RAID member as a loopback device on a file in the root filesystem (e.g. /boot/efi.img). This would make all external changes in the real ESPs disappear after resync. Something like:

# truncate --size 512M /boot/efi.img
# losetup -f --show /boot/efi.img
/dev/loop0
# mdadm --create /dev/md100 --level 1 --raid-disks 3 --metadata 1.0 /dev/loop0 /dev/sda1 /dev/sdb1

And at boot just rebuild it from /dev/loop0, though I’m not sure how to “prefer” that partition…

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Rhonda D'Vine: Diversity Update

Thu, 19/04/2018 - 9:53pm

I have to apologize for being silent for so long. Way too many things happened. In fact I already wrote most of this last fall, but then something happened that impacted me too much to finalize this entry. And with that I want to go into a bit of detail about how I write my blog entries:
I start writing them in English, I like to cross-reference things, and after I'm done I go over it and write it again in German. That process helps me proof-read the English part, but it also means that it takes a fair amount of time. And the longer the entries get, the more energy the translation and proof-reading part takes, too. That's mostly also the reason why I tend to only write longer entries when I find the energy and time for it.

Anyway, the first thing that I want to mention here finally happened last June: I officially got my name and gender/sex marker changed in my papers! That was a very happy moment in so many ways. A week later I got my new passport and finally managed to book my flight to Debconf in my name. Yay me, I exist!

Then, Stretch was released. I have to admit I had very little to do with it; I wasn't involved in the release process, neither on the website team nor anywhere else, because ...

... because I was packing my stuff that weekend, because on June 21st a second thing finally happened: I got the keys to my flat in the Que[e]rbau!! Yes, I'm aware that we still need to work on the website. The building company actually made a big event out of it, called every single person onto stage and handed over the keys. And it made me happy to be able to receive my key in my name and not one I haven't related to for a long while. It did hurt seeing that happen to someone else from our house, even though they knew what the Que[e]rbau is about ... And: I moved right in the same day. I gave up my old flat the following week, even though I didn't have much furniture or a kitchen, but I had waited way too long to not be there. And just watch that sunset from my balcony. <3

And as I mentioned in the last blog post already, the European Lesbian* Conference organization needed more and more work, too. The program for it started to take its final shape, but there were still more than enough things to do. I totally fell into this; this was the first time I really felt what intersectionality means and that it's not just a label but an internal part of this conference. The energy going on in the team on those grounds is really outstanding, and I'm totally happy to be part of this effort.

And then came along Debconf17 in Montreal. It was nice to be with a fair number of people who have grown on me like family over the years. And interestingly I got notice that there was a Trans March going on, so I joined it. It was a pleasure meeting Sophie LaBelle and Chase Ross there. I wasn't aware that Chase was from Montreal, so that part was a surprise. Sophie I knew, and I brought her back to Vienna in November, right before the Transgender Day of Remembrance. :)

But one of the two moving speeches at the march was from Charlie Rose, titled My Gender Is Black. I managed to get a recording of this and of another great speech from another Black Lives Matter activist, and hope I'll be able to put them online at some point. For the time being the link to the text should help.

And then Debconf itself started. And I held the Debian Diversity Round Table. While the title might have been misleading, because this group isn't officially formed yet, it turned out to attract a fair amount of interest. I started off with why I called for it and why I intentionally chose not to have it videotaped, so that people would be able to speak more freely. After a short introduction round with names, pronouns and other things people wanted to share, we had some interesting discussions on why people think this is a good idea and what direction to move in. A few ideas did spring up, and then ... time ran out. So we scheduled a continuation BoF to work on the topic further. At the end of that we came up with a pretty good consensual view on how to move forward. Unfortunately I haven't yet managed to follow up on that and feel quite bad about it. :/

Because, after returning, getting back into work, and needing a bit more time for EL*C, I started to feel serious pain in my back and my leg, which seems to be a slipped disc, and I was on sick leave for about two months. The pain was too much; I even had to stay at the hospital for two weeks because my stomach acted up too.

At the end of October we had a grand opening: we have a community space in our Que[e]rbau in which we built sort of a bar, with a cooking facility and hi-fi equipment. And we intentionally opened it up to the public. Its name is Yella Yella! Nachbar_innentreff. We named it after Yella Hertzka, who was an important feminist at the start of the 20th century. The park on the other side of the street is called Yella Hertzka park, so the pun in the name, with the connection to the Arabic expression Yalla Yalla, is intentional.

With the Yella Yella a fair amount of internal discussion emerged; we had all only just started to live together, so naturally this took a fair amount of energy and discussion. It takes time to get a feeling for all the people. Several interviews were given, and events had to be organized to get it running.

And then all of a sudden it turned 2018 and I still hadn't published this post. I'm sorry 'bout that, but sometimes there are other things needing time. And here I am. Time moves on even if we don't look at it.

A recent project that I had the honor to be part of is my movement is limitless [trans_non-binary short]. It was interesting to think about whether gender identity affects the way you dance. And to see and hear other people's approaches to it.

At the upcoming Linuxtage Graz there will be a session about Common misconceptions about names and spaces and communities, because they were enforcing a realname policy – at a community event. Not only is this a huge issue for trans people, but it also works against privacy researchers or people from the community whom no one really knows by the name in their papers. The discussions that happened on Twitter or in the background were partly a fair bit disturbing. Let's hope that we'll manage to make a good panel.

Which brings us to a panel for the upcoming Debconf in Taiwan. There is a suggestion to have a Gender Forum at the Openday. I'm still not completely sure what it should cover or what is expected of it, and I guess it's still open for suggestions. There will be a plan; let's see to it that it's diverse and great!

I won't promise to send the next update sooner, but I'll try to get back into it. Right now I'm also working on a (German language) submission for a non-binary YouTube project and it would be great to see that thing lift off. I'll be more verbose on that front.

Thanks for reading so far, and read you soon. :)


Didier Roche: Welcome To The (Ubuntu) Bionic Age: Behind communitheme: interviewing Frederik

Thu, 19/04/2018 - 6:10pm
Interviewing people behind communitheme. Today: Frederik

As discussed last week when unveiling the communitheme snap for ubuntu 18.04 LTS, here is a series of interviews this week with some members of the core contributor team shaping this entirely community-driven theme.

Today is the turn of Frederik, frederik-f on the community hub.

Who are you? What are you doing/where are you working? Give us some words and background about you!

My name is Frederik, I live in Germany and I am working as a java software developer in my daily job.

I have been using Ubuntu for 5 years and quickly started to report bugs and issues when they jumped into my face. Apart from that, I like good music and beautiful software. I also make my own music in my free time.

What are your main contribution areas on communitheme?

I mainly contribute to the shell theme but also work on implementing some design ideas in the gtk theme.

How did you hear about the new theming effort on ubuntu, and what made you want to participate actively in it?

I followed the design process from the beginning on the community website and was very interested in it. Not only because I love ubuntu but also because I finished my thesis last year, for which I needed to read some design books about UX and interaction design. I loved how they created the mockups and discussed them in a very professional, mature, friendly and yet unemotional way - accepting and rejecting different opinions.

How is the interaction with the larger community? How do you deal with different ideas and opinions on the community hub, issues opened against the projects, and PRs?

I feel there could be even more interaction and I hope there will be more promotion about this website so more people would share their opinions.

What did you think (honestly) about the decision not to ship it by default on 18.04, but to curate it for a little while?

While Ambiance uses very antiquated design ideas, it still represents the ubuntu brand. Of course I was a little disappointed, but that was also the point where I decided to contribute actual code and make PRs. I felt like they needed more help.

I think if the snap gets promoted in the software center, like for example Spotify or Skype, many LTS users could try it, and then in the end we'd have our theme shining on the LTS as well.

Do you think the snap approach for 18.04 will give us more flexibility before shipping a final version?

Yes - this was a very good idea. I am curious about how it will work out with all the other snaps which fall back to Adwaita at the moment.

Any idea or wish on what the theme name (communitheme is a project codename) should be?

My idea would be: Orenji, which means “orange” in Japanese, which could fit with our origami icon theme suru.

Any last words or questions I should have asked you?

This sounds like you want to execute me! So why didn’t you ask for my last meal? :)

Thanks Frederik!

Next interview coming up soon, stay tuned! :)

Ubuntu Podcast from the UK LoCo: S11E07 – Seven Years in Tibet - Ubuntu Podcast

Thu, 19/04/2018 - 4:00pm

This week we meet a sloth and buy components for the Hades Canyon NUC. The Windows File Manager gets open sourced, Iran is going to block Telegram, PostmarketOS explores creating an open source baseband, Microsoft makes a custom Linux distro called Azure Sphere, and we round up the community news.

It’s Season 11 Episode 07 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Daniel Holbach: A month with Dell XPS 13 (9370)

Thu, 19/04/2018 - 8:55am

After years of using Thinkpads, I went for a Dell XPS 13 with Ubuntu. Although I had bought devices with Linux pre-installed and laptops for friends as well, this was going to be my first laptop of my own coming with Ubuntu straight from the factory.

 

The hardware

The specs looked great (big SSD disk, enough memory to play around with VMs/containers, etc.), but I had to brush away some fond memories of old laptops, where I was able to easily replace parts (memory, screen, disk, power jack, keyboard and more for my x220). With the XPS this was not easily going to be possible anymore.

Anyway, the new machine arrived in the office. It looked great, it was light, it was really well-built, and whatever task I threw at the machine, it dealt with it nicely. In general I really liked the hardware and how the machine felt. I knew I was going to be happy with this.

A few things bothered me somewhat though. The placement of the webcam simply does not make sense. It’s at the bottom of the screen, so you get an upwards-angle no matter what you do and people in calls with you will always see a close up of your fingers typing. Small face, huge fingers. It’s really awkward. I won’t go unmanicured into meetings anymore!

The software

It came with an old image of Ubuntu 16.04 LTS pre-installed and after pulling a lot of updates, I thought I was going to get a nice fresh start with everything just working out of the box. Not quite.

The super key was disabled. As 16.04 came with Unity, the super key is one of the key ingredients for starting apps or bringing up the Dash. There was a package called super-key-dell (or some such) installed which I had to find and remove, and some GNOME config I had to change to make it work again. Why oh why?

Hardware support. I thought this was going to be straightforward. Unfortunately it wasn't. In the process of the purchase Dell recommended I get a DA300, a USB-C mobility adapter. That looked like a great suggestion, ensuring I could still use all my Old World devices. Unfortunately its Ethernet port just didn't work with 16.04.

The laptop's own screen flickered in many circumstances, and connected external screens (even some Dell devices) flickered even more; sometimes screens went on and off.

I got a case with USB-C adapter for the SSD disk of my laptop and copied some data over only to find that some disk I/O load nearly brought the system to a grinding halt.

Palm detection of the touchpad was throwing me off again and again. I can’t count how many times I messed up documents or typed text in the wrong places. This was simply infuriating.

Enter Ubuntu 18.04 LTS

I took the plunge, wiped the disk and made a fresh install of Bionic and I’m not looking back. Palm detection is LOADS better, Disk I/O is better, screen flickering gone, Ethernet port over USB-C works. And I’m using a recent Ubuntu, which is just great! Nice work everyone involved at Ubuntu!

I hope Dell will reconsider shipping this new release to users with recent machines (and as an update) – the experience is dramatically different.

I’m really happy with this machine now, got to go now, got a manicure appointment…

Jeremy Bicha: gksu removed from Ubuntu

Thu, 19/04/2018 - 2:49am

Today, gksu was removed from Ubuntu 18.04, four weeks after it was removed from Debian.

Thomas Ward: NGINX Updates: Ubuntu Bionic, and Mainline and Stable PPAs

Wed, 18/04/2018 - 8:58pm

NGINX has been updated in multiple places.

Ubuntu Bionic 18.04

Ubuntu Bionic 18.04 now has NGINX 1.14.0 in the repositories, and will very likely keep 1.14.0 for the lifecycle of 18.04, from April 2018 through April 2023, once it is released.

NGINX PPAs: Mainline and Stable

There are two major things to note:

First: Ubuntu Trusty 14.04 is no longer supported in the PPAs and will not receive the updated NGINX versions. This is due to the older versions of libraries in the 14.04 release, which are too old to compile the third-party modules included from the Debian packages. Individuals using 14.04 should strongly consider using the nginx.org repositories instead for newer releases, as those packages do not need the newer libraries that the PPA versions do.
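For reference, switching a Trusty system over to the upstream repository looks roughly like this (a sketch only; check the nginx.org documentation for the current repository line and signing key):

$ echo "deb http://nginx.org/packages/ubuntu/ trusty nginx" | sudo tee /etc/apt/sources.list.d/nginx.list
$ wget -qO - https://nginx.org/keys/nginx_signing.key | sudo apt-key add -
$ sudo apt-get update && sudo apt-get install nginx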

Second: With the exception of Ubuntu Trusty 14.04, the NGINX PPAs are in the process of being updated with NGINX Stable 1.14.0 and NGINX Mainline 1.13.12. Please note that 1.14.0 is feature-equivalent to 1.13.12, and you should probably use NGINX 1.14.0 instead of 1.13.12 for now. NGINX Mainline will be updated to 1.15.x when NGINX has a ‘new’ Mainline release that is ahead of NGINX Stable.
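As a reminder, the PPAs are added in the usual way; a sketch, assuming the stable and mainline PPAs are ppa:nginx/stable and ppa:nginx/development respectively (verify the names on Launchpad before adding them):

$ sudo add-apt-repository ppa:nginx/stable
$ sudo apt-get update
$ sudo apt-get install nginx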

Didier Roche: Welcome To The (Ubuntu) Bionic Age: Behind communitheme: interviewing Mads

Wed, 18/04/2018 - 1:35pm
Interviewing people behind communitheme. Today: Mads Rosendahl

As discussed last week when unveiling the communitheme snap for ubuntu 18.04 LTS, here is a series of interviews this week with some members of the core contributor team shaping this entirely community-driven theme.

Today is the turn of Mads, madsrh on the community hub.

Who are you? What are you doing/where are you working? Give us some words and background about you!

My name is Mads Rosendahl (MadsRH) and I’m from Denmark. My dayjob has two sides, half the time I work as a teacher at a school of music and the other half I work in PR (no, not pull requests ;) ) where I do things like brochures, ads, website graphics, etc.

I’m no saint - I use OSX, Windows and Linux.

I got involved with Ubuntu back when everything was brown - around 7.10. When I read about Ubuntu, Linux and how Mark Shuttleworth fits into the story, a fire was lit inside me and I wanted to give something back to this brilliant project. In the beginning I set out to make people's desktops brown and pretty by posting wallpaper suggestions to the artwork mailing list.

Because I can't write any code, I mostly piggyback on awesome people in the community, like when I worked on the very first slideshow in the Ubiquity installer with Dylan McCall.

I attended UDS in Dallas back in 2009 (an amazing experience!) and have had to take a long break from contributing. This theme work is my first contribution since then.

What are your main contribution areas on communitheme?

I do mockups, design, find bugs and participate in the conversations. I also suggested new system sounds and have a cursor project in the works - let’s see if it’ll make it into the final release of the theme.

How did you hear about the new theming effort on ubuntu, and what made you want to participate actively in it?

I've been asking for this for a long time, and suddenly Merlijn suggested a community theme in a comment on a blogpost, so of course I signed up. It's obvious that the best Linux distribution should have the most beautiful out-of-the-box desktop ;)

How is the interaction with the larger community? How do you deal with different ideas and opinions on the community hub, issues opened against the projects, and PRs?

There’s an awesome community within Ubuntu and there has been a ton of great feedback and conversations around the decisions. It comes as no surprise that with (almost) every change, there are people both for and against. Luckily we’re not afraid of experimenting. I’m sure that with the final release we’ll have found a good balance between UX (what works best), design (what looks best) and branding (what feels like Ubuntu).

We have a small but awesome team put together back in November when the project was first announced, but we've also seen a lot of other contributors file issues and step up with PRs - fantastic!

It's easy to see that people are passionate about the Ubuntu desktop.

What did you think (honestly) about the decision not to ship it by default on 18.04, but to curate it for a little while?

It’s the right move. I rest comfortably knowing that Canonical values stability over beauty. Especially when you’ll be able to just install a snap to get the new theme. Rather dusty and stable, than shiny and broken.

Any idea or wish on what the theme name (communitheme is a project codename) should be?

No, but off the top of my head how about: “Dewy” or “Muutos” (Finnish for change)

Any last words or questions I should have asked you?

Nope.

Thanks Mads!

Next interview coming up tomorrow, stay tuned! :)

Jono Bacon: Open Collaboration Conference (at Open Source Summit) Call For Papers

Wed, 18/04/2018 - 6:34am

Back in February I announced the Call For Papers for the Open Collaboration Conference was open. For those of you in the dark, last year I ran the Open Community Conference as part of the Linux Foundation’s Open Source Summit events in North America and Europe. The events were a great success, but this year we decided to change the name. From the original post:

As the event has evolved, I have wanted it to incorporate as many elements as possible focused on people collaborating together. While one component of this is certainly people building communities, other elements such as governance, remote working, innersource, cultural development, and more fit under the banner of “collaboration”, but don’t necessarily fit under the traditional banner of “community”. As such, we decided to change the name of the conference to the Open Collaboration Conference. I am confident this will then provide both a home to the community strategy and tactics content, as well as these other related areas. This way the entire event serves as a comprehensive capsule for collaboration in technology.

I am really excited about this year’s events. They are taking place:

  • North America in Vancouver from 29th – 31st August 2018
  • Europe in Edinburgh from 22nd – 24th October 2018

Last year there was a wealth of tremendous material and truly talented speakers, and I am looking forward to even more focused, valuable, and pragmatic content.

North America Call For Papers Closing Soon

…this neatly leads to the point.

The Call For Papers for the Vancouver event closes on 29th April 2018. So, be sure to go and get your papers in right away.

Also, don’t forget that the European event has its CFP closing on 1st July 2018. Go and submit your papers there too!

For both events I am really looking for a diverse set of content that offers genuine pragmatic value. Example topics include:

  • Open Source Metrics
  • Incentivization and Engagement
  • Software Development Methodologies and Platforms
  • Building Internal Innersource Communities
  • Remote Team Management and Methods
  • Bug/Issue Management and Triage
  • Communication Platforms and Methods
  • Open Source Governance and Models
  • Mentoring and Training
  • Event Strategy
  • Content Management and Social Media
  • DevOps Culture
  • Community Management
  • Advocacy and Evangelism
  • Government and Compliance

Also, here’s a pro tip for helping to get your papers picked.

Many people who submit papers to conferences send in very generic “future of open source” style topics. For the Open Collaboration Conference I am eager to have a few of these, but I am particularly interested in seeing deep dives into specific areas, technologies and approaches. Your submission will be especially well received if it offers pragmatic approaches and value that the audience can immediately take away and apply in their own world. So, consider how you package up your recommendations and best practice, and I look forward to seeing your submissions and seeing you there!

The post Open Collaboration Conference (at Open Source Summit) Call For Papers appeared first on Jono Bacon.

Andres Rodriguez: MAAS 2.4.0 beta 2 released!

Tue, 17/04/2018 - 7:56pm
Hello MAASters! I’m happy to announce that MAAS 2.4.0 beta 2 is now released and is available for Ubuntu Bionic.

MAAS Availability

MAAS 2.4.0 beta 2 is currently available in Bionic’s Archive or in the following PPA: ppa:maas/next

MAAS 2.4.0 (beta2) New Features & Improvements

MAAS Internals optimisation

Continuing with MAAS’ internal surgery, a few more improvements have been made:

  • Backend improvements

  • Improve the image download process, to ensure rack controllers immediately start image download after the region has finished downloading images.

  • Reduce the service monitor interval to 30 seconds. The monitor tracks the status of the various services provided alongside MAAS (DNS, NTP, Proxy).

  • UI Performance optimizations for machines, pods, and zones, including better filtering of node types.

KVM pod improvements

Continuing with the improvements for KVM pods, beta 2 adds the ability to:

  • Define a default storage pool

This feature allows users to select the default storage pool to use when composing machines, in case multiple pools have been defined. Otherwise, MAAS will pick the storage pool automatically, depending on which pool has the most available space.

  • API – Allow allocating machines with different storage pools

Allows users to request a machine with multiple storage devices from different storage pools. This feature uses storage tags to automatically map a storage pool in libvirt with a storage tag in MAAS.

UI Improvements
  • Remove remaining YUI in favor of AngularJS.

As of beta 2, MAAS has now fully dropped the use of YUI for the Web Interface. The last sections using YUI were the Settings page and the login page. Both sections have now been transitioned to use AngularJS instead.

  • Re-organize Settings page

The MAAS settings  have now been reorganized into multiple tabs.

Minor improvements
  • API for default DNS domain selection

Adds the ability to define the default DNS domain. This is currently only available via the API.

  • Vanilla framework upgrade

We would like to thank the Ubuntu web team for their hard work upgrading MAAS to the latest version of the Vanilla framework. MAAS is looking better and more consistent every day!

Bug fixes

Please refer to the following for all 37 bug fixes in this release, which address issues with MAAS across the board:

https://launchpad.net/maas/+milestone/2.4.0beta2

 

The Fridge: Ubuntu Weekly Newsletter Issue 523

Tue, 17/04/2018 - 5:35am

Welcome to the Ubuntu Weekly Newsletter, Issue 523 for the week of April 8 – 14, 2018 – the full version is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Rozz Welford
  • Elizabeth K. Joseph
  • Bashing-om
  • wildmanne39
  • Krytarik Raido
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

David Tomaschik: The IoT Hacker's Toolkit

Mon, 16/04/2018 - 9:00pm

Today, I’m giving a talk entitled “The IoT Hacker’s Toolkit” at BSides San Francisco. I thought I’d release a companion blog post to go along with the slide deck. I’ll also include a link to the video once it gets posted online.

Introduction

From my talk synopsis:

IoT and embedded devices provide new challenges to security engineers hoping to understand and evaluate the attack surface these devices add. From new interfaces to uncommon operating systems and software, the devices require both skills and tools just a little outside the normal security assessment. I’ll show both the hardware and software tools, where they overlap and what capabilities each tool brings to the table. I’ll also talk about building the skillset and getting the hands-on experience with the tools necessary to perform embedded security assessments.

While some IoT devices can be evaluated from a purely software standpoint (perhaps reverse engineering the mobile application is sufficient for your needs), a lot more can be learned about the device by interacting with all the interfaces available (often including ones not intended for access, such as debug and internal interfaces).

Background

I’ve always had a fascination with both hacking and electronics. I became a radio amateur at age 11, and in college, since my school had no concentration in computer security, I selected an embedded systems concentration. As a hacker, I’ve viewed the growing population of IoT devices with fascination. These devices introduce a variety of new challenges to hackers, including the security engineers tasked with evaluating these devices for security flaws:

  • Unfamiliar architectures (mostly ARM and MIPS)
  • Unusual interfaces (802.15.4, Bluetooth LE, etc.)
  • Minimal software (stripped C programs are common)

Of course, these challenges also present opportunities for hackers (white-hat and black-hat alike) who understand the systems. While finding a memory corruption vulnerability in an enterprise web application is all but unheard of, on an IoT device, it’s not uncommon for web requests to be parsed and served using basic C, with all the memory management issues that entails. In 2016, I found memory corruption vulnerabilities in a popular IP phone.

Think Capabilities, Not Toys

A lot of hackers, myself included, are “gadget guys” (or “gadget girls”). It’s hard not to look at every possible tool as something new to add to the toolbox, but at the end of the day, one has to consider how the tool adds new capabilities. It needn’t be a completely distinct capability, perhaps it offers improved speed or stability.

Of course, this is a “do as I say, not as I do” area. I, in fact, have quite a number of devices with overlapping capabilities. I’d love to claim this was just to compare devices for the benefit of those attending my presentation or reading this post, but honestly, I do love my technical toys.

Software

Much of the software does not differ from that for application security or penetration testing. For example, Wireshark is commonly used for network analysis (IP and Bluetooth), and Burp Suite for HTTP/HTTPS.

The website fccid.io is very useful in reconnaissance of devices, providing information about the frequencies and modulations used, as well as often internal pictures of devices, which can also reveal information such as chipsets, overall architecture, etc., all without lifting a screwdriver.

Reverse Engineering

Firmware images are often multiple files concatenated, or contain proprietary metadata headers. Binwalk walks the image, looking for known file signatures, and extracts the components. Often this will include entire Linux filesystems, kernel images, etc.
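A minimal sketch of that workflow, assuming a downloaded image named firmware.bin (the -e flag asks binwalk to extract everything it recognizes into a working directory):

$ binwalk -e firmware.bin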

Once you have extracted this, you might be interested in analyzing the binaries or other software contained inside. Often a disassembler is useful. My current favorite disassembler is Binary Ninja, but there are a number of options.

Basic Tools

There are a few tools that I consider absolutely essential to any sort of hardware hacking exercise. These tools are fundamental to gaining an understanding of the device and accessing multiple types of interfaces on the device.

Screwdriver Set

A screwdriver set might be an obvious thing, but you’ll want one with bits that can get into tight places and are appropriately sized to the screws on your device (using the wrong size Phillips bit is one of the easiest ways to strip a screw). Many devices also use “security screws”, which seems to be a term applied to just about any screw that doesn’t come in your standard household tool kit. (I’ve seen Torx, triangle bits, square bits, Torx with a center pin, etc.)

I have a wonderful driver kit from iFixit, and I’ve found almost nothing that it won’t open. The extension driver helps get into smaller spaces, and the 64 bits cover just about everything. I personally like to support iFixit because they have great write-ups and tear downs, but there are also cheaper clones of this toolkit.

Openers

Many devices are sealed with plastic catches or pieces that are press-fit together. For these, you’ll need some kind of opener (sometimes called a “spudger”) to pry them apart. I find a variety of shapes useful. You can get these as part of a combined tool kit from iFixit, as iFixit clones, or as openers by themselves. I have found the iFixit model to be of slightly higher quality, but I also carry a cheap clone for occasional travel use.

The very thin metal one with a plastic handle is probably my favorite opener – it fits into the thinnest openings, but consequently it also bends fairly easily. I’ve been through a few due to bending damage. Be careful how you use these tools, and make sure your hand is not where they will go if they slip! They are not quite razor-blade sharp, but they will cut your hand with a bit of force behind them.

Multimeter

I get it, you’re looking to hack the device, not rewire your car. That being said, for a lot of tasks, a halfway decent multimeter is somewhere between an absolute requirement and a massive time saver. Some of the tasks a multimeter will help with include:

  • Identifying unknown pinouts
  • Finding the ground pin for a UART
  • Checking which components are connected
  • Figuring out what kind of power supply you need
  • Checking the voltage on an interface to make sure you don’t blow something up

I have several multimeters (more than one is important for electronics work), but you can get by with a single one for your IoT hacking projects. The UNI-T UT-61E is a popular model at a good price/performance ratio, but its safety ratings are a little optimistic. The EEVBlog BM235 is my favorite of my meters, but a little higher end (aka expensive). If you’re buying for work, the Fluke 87V is the “gold standard” of multimeters.

If you buy a cheap meter, it will probably work for IoT projects, but there are many multimeters that are unsafe out there. Please do not use these cheap meters on “mains” electricity, high voltage power supplies, anything coming out of the wall, etc. Your personal safety is not worth saving $40.

Soldering Iron

You will find a lot of unpopulated headers (just the holes in the circuit board) on production IoT devices. The headers for various debug interfaces are left out, either as a cost savings, or for space reasons, or perhaps both. The headers were used during the development process, but often the manufacturer wants to leave the connections either to avoid redoing the printed circuit board (PCB) layout, or to be able to debug failures in the field.

In order to connect to these unpopulated headers, you will want to solder your own headers in their place. To do so, you’ll need a soldering iron. To minimize the risk of damaging the board in the process, use a soldering iron with variable temperature and a small tip. The Hakko FX-888D is very popular and a very nice option, but you can still do good work with something like this Aoyue or other options. Just don’t use a soldering iron designed for a plumber or similar uses – you’ll just end up burning the board.

Likewise, you’ll want to practice your soldering skills before you start work on your target board – find some small soldering projects to practice on, or some throwaway scrap electronics to work on.

Network Interfaces

Obviously, these devices have network interfaces. After all, they are the “Internet of Things”, so a network connection would seem to be a requirement. Nearly universally, 802.11 connectivity is present (sometimes on just a base station), and ethernet (10/100 or Gigabit) interfaces are also very common.

Wired Network Sniffing

The easiest way to sniff a wired network is often a 2nd interface on your computer. I’m a huge fan of this USB 3.0 to Dual Gigabit Adapter, which even has a USB-C version for those using one of the newer laptops or Macbooks that only support USB-C. Either option gives you two network ports to work with, even on laptops without built-in wired interfaces.

Beyond this, you’ll need software for the sniffing. Wireshark is an obvious tool for raw packet capture, but you’ll often also want HTTP/HTTPS sniffing, for which Burp Suite is the de facto standard, though mitmproxy is an up-and-coming contender with a lot of nice features.
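For raw captures, something as simple as tcpdump on the second interface works fine; a sketch, with the interface name being whatever your adapter enumerates as:

$ sudo tcpdump -i eth1 -w device-traffic.pcap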

Wireless Network Sniffing

Most common wireless network interfaces on laptops can perform monitor mode, but perhaps you’d like to keep your wireless connected to the internet as well as sniff on another interface. Alfa wireless cards like the AWUS036NH and the AWUS036ACH have been quite popular for a while, but I personally like using the tiny RT5370-based adapters for assessments not requiring long range, due to their compact size and portability.
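A minimal sketch of putting one of these adapters into monitor mode with iw (the interface name and channel are illustrative; airmon-ng is another common way to do this):

$ sudo ip link set wlan1 down
$ sudo iw dev wlan1 set type monitor
$ sudo ip link set wlan1 up
$ sudo iw dev wlan1 set channel 6
$ sudo tcpdump -i wlan1 -w wifi-capture.pcap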

Wired (Debug/Internal) Interfaces

There are many subtle interfaces on IoT devices, intended for either debug use, or for various components to communicate with each other. For example:

  • SPI/I2C for flash chips
  • SPI/SD for wifi chips
  • UART for serial consoles
  • UART for bluetooth/wifi controllers
  • JTAG/SWD for debugging processors
  • ICSP for In-Circuit Programming
UART

Though there are many universal devices that can do other things, I run into UARTs so often that I like having a standalone adapter for this. Additionally, having a standalone adapter allows me to maintain a UART connection at the same time as I’m working with JTAG/SWD or other interfaces.

You can get a standalone cable for around $10 that can be used for most UART interfaces. (On most devices I’ve seen, the UART interface is 3.3V, and these cables work well for that.) Most of these cables have the following pinout, but make sure you check your own:

  • Red: +5V (Don’t connect on most boards)
  • Black: GND
  • Green: TX from Computer, RX from Device
  • White: RX from Computer, TX from Device

There are also a number of breakouts for the FT232RL or the CH340 chips for UART to USB. These provide a row of headers to connect jumpers between your target device and the adapter. I prefer the simplicity of the cables (and fewer jumper ends to come loose during my testing), but this is further evidence that there are a number of options to provide the same capabilities.
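Once the cable is wired to the header, any serial terminal program can attach to it; a minimal sketch, assuming the adapter enumerates as /dev/ttyUSB0 and the console runs at the common 115200 baud:

$ screen /dev/ttyUSB0 115200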

Universal Interfaces (JTAG/SWD/I2C/SPI)

There are a number of interface boards referred to as “universal interfaces” that have the capability to interface with a wide variety of protocols. These largely fit into two categories:

  • Bit-banging microcontrollers
  • Hardware interfaces (dominated by the FT*232 series from FTDI)

There are a number of options for implementing a bit-banging solution for speaking these protocols, ranging from software projects to run on an Arduino, to projects like the Bus Pirate, which uses a PIC microcontroller. These generally present a serial interface (UART) to the host computer and applications, and use in-band signalling for configuration and settings. There may be some timing issues on certain devices, as microcontrollers often cannot update multiple output pins in the same clock cycle.

Hardware interfaces expose a dedicated USB endpoint to talk to the device, and though this can be configured, it is done via USB endpoints and registers. The protocols are implemented in semi-dedicated hardware. In my experience, these devices are both faster and more reliable than bit-banging microcontrollers, but you are limited to whatever protocols are supported by the particular device, or the capabilities of the software to drive them. (For example, the FT*232H series can do most protocols via bit-banging, but it updates an entire register at a time, and has high enough speed to run the clock rate of many protocols.)

The FT2232H and FT232H (not to be confused with the FT232RL, which is UART only), in particular, have been incorporated into a number of different breakout boards that make excellent universal interfaces.

Logic Analyzer

When you have an unknown protocol, unknown pinout, or unknown protocol settings (baud rate, polarity, parity, etc.), a logic analyzer can dramatically help by giving you a direct look at the signals being passed between chips or interfaces.

I have a Saleae Logic 8, which is a great value logic analyzer. It has a compact size and their software is really excellent and easy to use. I’ve used it to discover the pinout for many unlabeled ports, discover the settings for UARTs, and just generally snoop on traffic between two chips on a board.

Though there are cheap knock-offs available on eBay or AliExpress, I have tried them and they have very poor quality, and unfortunately the open-source sigrok software is not quite the quality of the Saleae software. Additionally, they rarely have any input protection to prevent you from blowing up the device yourself.

Wireless

Obviously, the Internet of Things has quite a number of wireless devices. Some of these devices use WiFi (discussed above), but many use other wireless protocols. Bluetooth (particularly Bluetooth LE) is quite common, but in other areas, such as home automation, other protocols prevail. Many of these are based on 802.15.4 (Zigbee), on Z-Wave, or on proprietary protocols in the 433 MHz, 915 MHz, or 2.4 GHz ISM bands.

Bluetooth

Bluetooth devices are incredibly common, and Bluetooth Low Energy (starting with Bluetooth 4.0) is very popular for IoT devices. Most devices that do not stream audio, provide IP connectivity, or have other high-bandwidth needs seem to be moving to Bluetooth Low Energy, probably because of several reasons:

  1. Lower power consumption (battery friendly)
  2. Cheaper chipsets
  3. Less complex implementation

There is essentially only one tool I can really recommend for assessing Bluetooth, and that is the Ubertooth One (Amazon). This can follow and capture Bluetooth communications, providing output in pcap or pcap-ng format, allowing you to import the communications into Wireshark for later analysis. (You can also use other pcap-based tools like scapy for analysis of the resulting pcaps.) The Ubertooth tools are available in Debian, Ubuntu, or Kali as packages, but you can get a more up to date version of the software from their Github repository.

Adafruit also offers a BLE Sniffer which works only for Bluetooth Low Energy and utilizes a Nordic Semiconductor BLE chip with a special firmware for sniffing. The software for this works well on Windows, but not so well on Linux where it is a python script that tends to be more difficult to use than the Ubertooth tools.

Software Defined Radio

For custom protocols, or to enable lower-level evaluation or attacks of radio-based systems, Software Defined Radio presents an excellent opportunity for direct interaction with the RF side of the IoT device. This can range from only receiving (for purposes of understanding and reverse engineering the device) to being able to simultaneously receive and transmit (full-duplex) depending upon the needs of your assessment.

For simply receiving, there are simple DVB-T dongles that have been repurposed as general-purpose SDRs, often referred to as “RTL SDRs”, a name based on the Realtek RTL2832U chips present in the device. These can be used because the chip is capable of providing the raw samples to the host operating system, and because of their low cost, a large open source community has emerged. Companies like NooElec are now even offering custom built hardware based on these chips for the SDR community. There’s also a kit that expands the receive range of the RTL-SDR dongles.
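As an example of the receive-only workflow, the open source rtl_433 tool can decode many common ISM-band sensors straight from one of these dongles; a sketch, assuming the tool is installed and the target transmits around 433.92 MHz:

$ rtl_433 -f 433920000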

In order to transmit as well, the hardware is significantly more complex, and most options in this space are driven by an FPGA or other powerful processor. Even a few years ago, the capabilities here were very expensive with tools like the USRP. However, the HackRF by Great Scott Gadgets and the BladeRF by Nuand have offered a great deal of capability for a hacker-friendly price.

I personally have a BladeRF, but I honestly wish I had bought a HackRF instead. The HackRF has a wider usable frequency range (especially at the low end), while the BladeRF requires a relatively expensive upconverter to cover those bands. The HackRF also seems to have a much more active community and better support in some areas of open source software.

Other Useful Tools

It is occasionally useful to use an oscilloscope to see RF signals or signal integrity, but I have almost never found this necessary.

Specialized JTAG programmers for specific hardware often work better, but cost quite a bit more and are specialized to those specific items.

For dumping Flash chips, Xeltec programmers/dumpers are considered the “top of the line” and do an incredible job, but are at a price point such that only labs doing this on a regular basis find it worthwhile.

Slides

PDF: The IoT Hacker’s Toolkit

Lubuntu Blog: This Week in Lubuntu Development #3

Mon, 16/04/2018 - 6:45pm
Here is the third issue of This Week in Lubuntu Development. You can read last week's issue here.

Changes

General

Some work was done on the Lubuntu Manual by Lubuntu contributor Lyn Perrine. Here's what she has been working on:

  • Start page for Evince.
  • Start docs for the Document Viewer.
  • Start work on the GNOME […]

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, March 2018

Mon, 16/04/2018 - 4:07pm

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In March, about 214 work hours were dispatched among 13 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change.

The security tracker currently lists 31 packages with a known CVE and the dla-needed.txt file lists 26. Thanks to a few extra hours dispatched this month (accumulated backlog of a contributor), the number of open issues came back to a more usual value.

Thanks to our sponsors

New sponsors are in bold.


Elizabeth K. Joseph: SCaLE16x with Ubuntu, CI/CD and more!

Fri, 13/04/2018 - 10:49pm

Last month I made my way down to Pasadena for one of my favorite conferences of the year, the Southern California Linux Expo. Like most years, I split my time between Ubuntu and stuff I was working on for my day job. This year that meant doing two talks and attending UbuCon on Thursday and half of Friday.

As with past years, UbuCon at SCALE was hosted by Nathan Haines and Richard Gaskin. The schedule this year was very reflective about the history and changes in the project. In a talk from Sriram Ramkrishna of System76 titled “Unity Dumped Us! The Emotional Healing” he talked about the closing of development on the Unity desktop environment. System76 is primarily a desktop company, so the abrupt change of direction from Canonical took some adjusting to and was a little painful. But out of it came their Ubuntu derivative Pop!_OS and a community around it that they’re quite proud of. In the talk “The Changing Face of Ubuntu” Nathan Haines walked through Ubuntu history to demonstrate the changes that have happened within the project over the years, and allow us to look at the changes today with some historical perspective. The Ubuntu project has always been about change. Jono Bacon was in the final talk slot of the event to give a community management talk titled “Ubuntu: Lessons Learned”. Another retrospective, he drew from his experience when he was the Ubuntu Community Manager to share some insight into what worked and what didn’t in the community. Particularly noteworthy for me were his points about community members needing direction more than options (something I’ve also seen in my work, discrete tasks have a higher chance of being taken than broad contribution requests) and the importance of setting expectations for community members. Indeed, I’ve seen that expectations are frequently poorly communicated in communities where there is a company controlling direction of the project. A lot of frustration could be alleviated by being more clear about what is expected from the company and where the community plays a role.


UbuCon group photo courtesy of Nathan Haines (source)

The UbuCon this year wasn’t as big as those in years past, but we did pack the room with nearly 120 people for a few talks, including the one I did on “Keeping Your Ubuntu Systems Secure”. Nathan Haines suggested this topic when I was struggling to come up with a talk idea for the conference. At first I wasn’t sure what I’d say, but as I started taking notes about what I know about Ubuntu both from a systems administration perspective with servers, and as someone who has done a fair amount of user support in the community over the past decade, it turned out that I did have an entire talk worth of advice! None of what I shared was complicated or revolutionary, there was no kernel hardening in my talk or much use of third party security tools. Instead the talk focused on things like keeping your system updated, developing a fundamental understanding of how your system and Debian packages work, and tips around software management. The slides for my presentation are pretty wordy, so you can glean the tips I shared from them: Keeping_Your_Ubuntu_Systems_Secure-UbuConSummit_Scale16x.pdf.


Thanks to Nathan Haines for taking this photo during my talk (source)

The team running Ubuntu efforts at the conference rounded off SCALE by staffing a booth through the weekend. The Ubuntu booths have certainly evolved over the years; when I ran them they were always a bit cluttered and had quite the grassroots feeling (the booth in 2012). The booths the team put together now are simpler and more polished. This is definitely in line with the trend of a more polished open source software presence in general, so kudos to the team for making sure our little Ubuntu California crew of volunteers keeps up.

Shifting over to the more work-focused parts of the conference, on Friday I spoke at Container Day, with my talk being the first of the day. The great thing about going first is that I get to complete my talk and relax for the rest of the conference. The less great thing about it is that I get to experience all the A/V gotchas and be awake and ready to give a talk at 9:30AM. Still, I think the pros outweighed the cons and I was able to give a refresh of my “Advanced Continuous Delivery Strategies for Containerized Applications Using DC/OS” talk, which included a new demo that I finished writing the week before. The talk seemed to generate interest that led to good discussions later in the conference, and to my relief the live demo concluded without a problem. Slides from the talk can be found here: Advanced_CD_Using_DCOS-SCALE16x.pdf


Thanks to Nathan Handler for taking this photo during my talk (source)

Saturday and Sunday brought a duo of keynotes that I wouldn’t have expected at an open source conference five years ago: one from Microsoft and one from Amazon. In both of these keynotes the speaker recognized the importance of open source in the industry today, which has fueled the shift in these companies’ perspective and direction regarding open source. There’s certainly a celebration to be had around this: when companies contribute to open source because it makes business sense to do so, we all benefit from the increased opportunities that presents. On the other hand, it has caused disruption in older open source communities, and some contributors have struggled to continue to find personal value and meaning in this new open source world. I’ve been thinking a lot about this since the conference and have started putting together a talk about it, nicely timed for the 20th anniversary of the term “open source”. I want to explore how veteran contributors stay passionate and engaged, and how we can bring this same feeling to new contributors who came down different paths to join open source communities.

Regular talks began on Saturday with me attending Nathan Handler’s talk on “Terraforming all the things”, where he shared some of the work they’ve been doing at Yelp to manage things like DNS records and CDN configuration with Terraform. From there I went to a talk by Brian Proffitt on metrics in communities and the Community Health Analytics Open Source Software (CHAOSS) project. I spent much of the rest of the day in the “hallway track” catching up with people, but at the end I popped into a talk by Steve Wong on “Running Containerized Workloads in an on-prem Datacenter”, where he discussed the role that bare metal continues to have in the industry, even as many rush to the cloud for a turnkey solution.

It was at this talk where I had the pleasure of meeting one of our newest Account Executives at Mesosphere, Kelly Bond, and also had some time to catch up with my colleague Jörg Schad.


Jörg, me, Kelly

Nuritzi Sanchez presented my favorite talk on Sunday, on Endless OS. They build a Linux distribution using Flatpak and, as an organization, work on the problem of access to technology in developing nations. I’ve long been concerned about cellphone-only access in these countries; you need a system that’s tolerant of being offline and that has input devices (like keyboards!) that allow work to be done on it. They’re doing really interesting work on the technical side related to offline content and the general architecture of a system that needs to be conscious of its offline status, but they’re also developing deployment strategies on the ground in places like Indonesia that will ensure the local community can succeed long term. I have a lot of respect for the people working toward all this, and really want to see this organization succeed.

I’m always grateful to participate in this conference. It’s grown a lot over the years and it certainly has changed, but the autonomy given to special events like UbuCon allows for a conference that brings together lots of different voices and perspectives all in one place. I also have a lot of friends who attend this conference, many of whom span jobs and open source projects I’ve worked on over more than a decade. Building friendships and reconnecting with people is part of what makes the work I do in open source so important to me, and not just a job. Thanks to everyone who continues to make this possible year after year in beautiful Pasadena.

More photos from the event here: https://www.flickr.com/photos/pleia2/albums/72157693153653781

Simon Raffeiner: I went to Fukushima

Pre, 13/04/2018 - 1:36md

I'm an engineer and interested in all kinds of technology, especially if it is used to build something big. But I'm also fascinated by what happens when things suddenly change and don't go as expected, and especially by everything that's left behind after technological and social revolutions or disasters. In October 2017 I travelled across Japan and decided to visit one of the places where technology had failed in the worst way imaginable: the Fukushima Evacuation Zone.

The post I went to Fukushima appeared first on LIEBERBIBER.

Kees Cook: security things in Linux v4.16

Pre, 13/04/2018 - 2:04pd

Previously: v4.15

Linux kernel v4.16 was released last week. I really should write these posts in advance; otherwise I get distracted by the merge window. Regardless, here are some of the security things I think are interesting:

KPTI on arm64

Will Deacon, Catalin Marinas, and several other folks brought Kernel Page Table Isolation (via CONFIG_UNMAP_KERNEL_AT_EL0) to arm64. While most ARMv8+ CPUs were not vulnerable to the primary Meltdown flaw, the Cortex-A75 does need KPTI to be safe from memory content leaks. It’s worth noting, though, that KPTI does protect other ARMv8+ CPU models from having privileged register contents exposed. So, whatever your threat model, it’s very nice to have this clean isolation between kernel and userspace page tables for all ARMv8+ CPUs.

hardened usercopy whitelisting

While whole-object bounds checking was implemented in CONFIG_HARDENED_USERCOPY already, David Windsor and I finished another part of the porting work of grsecurity’s PAX_USERCOPY protection: usercopy whitelisting. This further tightens the scope of slab allocations that can be copied to/from userspace. Now, instead of allowing all objects in slab memory to be copied, only the whitelisted areas (where a subsystem has specifically marked the memory region allowed) can be copied. For example, only the auxv array out of the larger mm_struct.
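To make that concrete, here is a rough sketch (not the exact upstream patch) of how a subsystem can declare such a whitelist when creating its slab cache, using the kmem_cache_create_usercopy() API that this series added. The function and variable names, flags, and alignment below are illustrative; only the saved_auxv window mirrors the mm_struct example above:

/*
 * Illustrative sketch, assuming a kernel with hardened usercopy
 * whitelisting (v4.16+). Only the saved_auxv array inside mm_struct
 * is marked as copyable to/from userspace; copies touching any other
 * part of the object are rejected (or warned about, if the fallback
 * option is enabled).
 */
#include <linux/slab.h>
#include <linux/mm_types.h>
#include <linux/stddef.h>

static struct kmem_cache *example_mm_cachep;

void example_cache_init(void)
{
        example_mm_cachep = kmem_cache_create_usercopy("mm_struct",
                        sizeof(struct mm_struct), 0,
                        SLAB_HWCACHE_ALIGN | SLAB_PANIC,
                        /* usercopy window: offset and size of saved_auxv */
                        offsetof(struct mm_struct, saved_auxv),
                        sizeof_field(struct mm_struct, saved_auxv),
                        NULL);
}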

As mentioned in the first commit from the series, this reduces the scope of slab memory that could be copied out of the kernel in the face of a bug to under 15%. As can be seen below, one area of work remaining is the kmalloc regions. Those are regularly used for copying things in and out of userspace, but they’re also used for small simple allocations that aren’t meant to be exposed to userspace. Working to separate these kmalloc users needs some careful auditing.

Total Slab Memory:           48074720
Usercopyable Memory:          6367532  13.2%
         task_struct            0.2%         4480/1630720
         RAW                    0.3%          300/96000
         RAWv6                  2.1%         1408/64768
         ext4_inode_cache       3.0%       269760/8740224
         dentry                11.1%       585984/5273856
         mm_struct             29.1%        54912/188448
         kmalloc-8            100.0%        24576/24576
         kmalloc-16           100.0%        28672/28672
         kmalloc-32           100.0%        81920/81920
         kmalloc-192          100.0%        96768/96768
         kmalloc-128          100.0%       143360/143360
         names_cache          100.0%       163840/163840
         kmalloc-64           100.0%       167936/167936
         kmalloc-256          100.0%       339968/339968
         kmalloc-512          100.0%       350720/350720
         kmalloc-96           100.0%       455616/455616
         kmalloc-8192         100.0%       655360/655360
         kmalloc-1024         100.0%       812032/812032
         kmalloc-4096         100.0%       819200/819200
         kmalloc-2048         100.0%      1310720/1310720

This series took quite a while to land (you can see David’s original patch date as back in June of last year). Partly this was due to having to spend a lot of time researching the code paths so that each whitelist could be explained for commit logs, partly due to making various adjustments from maintainer feedback, and partly due to the short merge window in v4.15 (when it was originally proposed for merging) combined with some last-minute glitches that made Linus nervous. After baking in linux-next for almost two full development cycles, it finally landed. (Though be sure to disable CONFIG_HARDENED_USERCOPY_FALLBACK to gain enforcement of the whitelists — by default it only warns and falls back to the full-object checking.)

automatic stack-protector

While the stack-protector features of the kernel have existed for quite some time, they have never been enabled by default. This was mainly due to needing to evaluate compiler support for the feature, and Kconfig didn’t have a way to check compiler features before offering CONFIG_* options. As a defense technology, the stack protector is pretty mature. Having it on by default would have greatly reduced the impact of things like the BlueBorne attack (CVE-2017-1000251), as fewer systems would have lacked the defense.

After spending quite a bit of time fighting with ancient compiler versions (*cough*GCC 4.4.4*cough*), I landed CONFIG_CC_STACKPROTECTOR_AUTO, which is on by default and tries to use the stack protector if it is available. The implementation of the solution, however, did not please Linus, though he allowed it to be merged. In the future, Kconfig will gain the ability to check compiler features itself, which will let the availability of the (now default) stack protector be exposed directly in Kconfig, rather than depending on rather ugly Makefile hacks.
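As a rough illustration of what the protection buys you (my own example, not kernel code): built with -fstack-protector-strong, a function like the hypothetical parse_record() below gets a canary placed between its local buffer and the saved return address, and an overflow trips __stack_chk_fail() on return instead of silently handing control to attacker-controlled data.

/* Illustration only: the class of stack buffer overflow the
 * stack protector mitigates. With -fstack-protector-strong, this
 * function is instrumented because it has a local array. */
#include <stddef.h>
#include <string.h>

void parse_record(const char *data, size_t len)
{
        char buf[64];

        /* If len exceeds 64, the copy overruns buf and clobbers the
         * canary; the check on function exit calls __stack_chk_fail()
         * and aborts, rather than returning through a corrupted
         * return address. */
        memcpy(buf, data, len);
}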

That’s it for now; let me know if you think I should add anything! The v4.17 merge window is open. :)

Edit: added details on ARM register leaks, thanks to Daniel Micay.

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Ubuntu Podcast from the UK LoCo: S11E06 – Six Feet Over It - Ubuntu Podcast

Enj, 12/04/2018 - 4:15md

This week we review the Dell XPS 13 (9370) Developer Edition laptop, bring you some command line lurve and go over all your feedback.

It’s Season 11 Episode 06 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Ante Karamatić: Spaces – uncomplicating your network

Enj, 12/04/2018 - 6:44pd
An old OpenStack network architecture

For the past 5-6 years I’ve been in the business of deploying cloud solutions for our customers. The vast majority of that was some form of OpenStack, either a simple cloud or a complicated one. But when you think about it – what is a simple cloud? It’s easy to say that a small number of machines makes a simple cloud and a large number makes a complicated one. But that is not true. The complexity of a typical IaaS solution is pretty much determined by network complexity. Network, in all shapes and forms, from the underlay network to the customer’s overlay network requirements. I’ll try to explain how we deal with the underlay part in this blog.

It’s not a secret that a traditional tree-like network architecture just doesn’t work for cloud environments. There are multiple reasons why: it doesn’t scale very well, it requires big OSI layer 2 domains and… well, it’s based on OSI layer 2. Debugging issues on that level is never a joyful experience. Therefore, for IaaS environments one really wants a modern design in the form of a spine-leaf architecture. A layer 3 spine-leaf architecture. This allows us to have a bunch of smaller layer 2 domains, which then nicely correlate to availability zones, power zones, etc. However, managing environments with multiple layer 2 and therefore even more layer 3 domains requires a bit of rethinking. If we truly want to be effective in deploying and operating a cloud across multiple different layer 2 domains, we need to think of the network in a more abstract way. Luckily, this is nothing new.

In the traditional approach to networking, we talk about ToRs, management fabric, BMC/OOB fabric, etc. These are, most of the time, layer 2 concepts. Fabric, after all, is a collection of switches. But the approach is correct; we should always talk about networks in abstract terms. Instead of talking about subnets and VLANs, we should talk about the purpose of the network. This becomes important when we talk about spine-leaf architecture and multiple different subnets that serve the same purpose. In rack 1, subnet 172.16.1.0/24 is the management network, but in rack 2 the management network is on subnet 192.168.1.0/24, and so on. It’s obvious that it’s much nicer to abstract those subnets into a ‘management network’. Still, nothing new. We do this every day.

So… why do our tools and applications still require us to use VLANs, subnets and IPs? If we deploy the same application across different racks, why do we have to keep separate configurations for each unit of the same application? What we really want is to have all of our Keystones listening on the OpenStack public API network, and not on subnets 192.168.10.0/24, 192.168.20.0/24 and 192.168.30.0/24. We end up thinking about an application on a network, but we end up configuring identical copies of the same application (units) differently on different subnets. Clearly our configuration tools are not doing what we want, but rather forcing us to transform our way of thinking into what those tools need. It’s a paradox: OpenStack is not that complicated, rather it’s made complicated by the tools used to deploy it.

While trying to solve this problem in our deployments at Canonical, we came up with the concept of spaces. A space would be this abstracted network that we have in our heads but somehow fail to put into our tools. Again, spaces are not a revolutionary concept in networking; they have been in our heads all this time. So, how do we implement spaces at Canonical?

We have grown the concept of spaces across all of our tooling: MAAS, Juju and charms. When we configure MAAS to manage our bare metal machines, we do not define networks as subnets or VLANs; rather, we define networks as spaces. A space has a purpose, a description and a few other attributes. VLANs, and indirectly subnets too, become properties of the space, instead of the other way around. This also means that when we deploy a machine, we deploy it connected to a space. When we deploy a machine, we usually do not deploy it on a specific network, but rather with specific requirements: it must be able to talk to X, it must have Y CPUs and Z RAM. If you have ever asked yourself why it takes so much time to rack and stack a server, it’s because of this disconnect between what we want and how we handle the configuration.

We’ve also enabled Juju to make this kind of request – it asks MAAS for machines that are connected to a space, or a set of spaces. It then exposes these spaces to charms, so that each charm knows what kind of networks the application has at its disposal. This allows us to do ‘juju deploy keystone --bind public=public-space -n3’: deploy three Keystones and connect them to public-space, a space defined in MAAS. Which VLAN that will be, which subnet or IP – we do not care; the charm will get information from Juju about these “low level” terms (VLANs, IPs). We humans do not think of VLANs and subnets and IPs; at best we think in OSI layer 1 terms.

Sounds a bit complicated? Let’s flip it the other way around. What I can do now is define my application as “3 units of keystone, which use the internal network for SQL, the public network for exposing the API, the internal network for OpenStack’s internal communication, and are also exposed on the OAM network for management purposes” – and this is precisely how we deploy OpenStack. In fact, the Juju bundle looks like this:

keystone:
  charm: cs:keystone
  num_units: 3
  bindings:
    "": oam-space
    public: public-space
    internal: internal-space
    shared-db: internal-space

Those who follow OpenStack development will notice that something similar has landed in OpenStack recently: routed provider networks. It’s the same concept, solving the same problem. It’s nice to see how Juju handles this out of the box.

Big thanks to the MAAS, Juju, charms and OpenStack communities for doing this. It has allowed us to deploy complex applications with ease, and has therefore shifted our focus to the bigger picture: IaaS modeling and some other, new challenges!

Launchpad News: Launchpad security advisory: cross-site-scripting in site search

Mër, 11/04/2018 - 10:40pd
Summary

Mohamed Alaa reported that Launchpad’s Bing site search implementation had a cross-site-scripting vulnerability.  This was introduced on 2018-03-29, and fixed on 2018-04-10.  We have not found any evidence of this bug being actively exploited by attackers; the rest of this post is an explanation of the problem for the sake of transparency.

Details

Some time ago, Google announced that they would be discontinuing their Google Site Search product on 2018-04-01.  Since this served as part of the backend for Launchpad’s site search feature (“Search Launchpad” on the front page), we began to look around for a replacement.  We eventually settled on Bing Custom Search, implemented appropriate support in Launchpad, and switched over to it on 2018-03-29.

Unfortunately, we missed one detail.  Google Site Search’s XML API returns excerpts of search results as pre-escaped HTML, using <b> tags to indicate where search terms match.  This makes complete sense given its embedding in XML; it’s hard to see how that API could do otherwise.  The Launchpad integration code accordingly uses TAL code along these lines, using the structure keyword to explicitly indicate that the excerpts in question do not require HTML-escaping (like most good web frameworks, TAL’s default is to escape all variable content, so successful XSS attacks on Launchpad have historically been rare):

<div class="summary" tal:content="structure page/summary" />

However, Bing Custom Search’s JSON API returns excerpts of search results without any HTML escaping.  Again, in the context of the API in question, this makes complete sense as a default behaviour (though a textFormat=HTML switch is available to change this); but, in the absence of appropriate handling, this meant that those excerpts were passed through to the TAL code above without escaping.  As a result, if you could craft search terms that match a portion of an existing page on Launchpad that shows scripting tags (such as a bug about an XSS vulnerability in another piece of software hosted on Launchpad), and convince other people to follow a suitable search link, then you could cause that code to be executed in other users’ browsers.

The fix was, of course, to simply escape the data returned by Bing Custom Search.  Thanks to Mohamed Alaa for their disclosure.
