Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/

Serge Hallyn: Outdoors laptop (part 2)

Fri, 26/10/2018 - 12:59am

Some time ago I posted about wanting an outdoor laptop. The first option I listed was a panasonic toughbook. Recently (a year and a half later) I finally ordered one. I ordered from bobjohnson.com, because the people there are a class act who’ve been calmly answering my questions for a long time.

Some highlights:

* It has a transflective display. This means that it is emissive, but also uses reflected sunlight to boost brightness, up to 6000 nits. In comparison, my previous thinkpad was 350 nits (unusable sometimes even in shade), and my macbook was 500 nits. With this laptop, I can leave the display on 25% brightness and move from a dark basement to hurt-my-skin bright sunlight.

* It’s ‘fully rugged’, so using it in rain or dust storms should not be an issue. (I lost 4G ram in my thinkpad to dust).

* It has a shoulder strap ($20 extra) screwed on solidly.

* It has a touchscreen with stylus. (To use this under Ubuntu 18.04 I had to install xserver-xorg-input-evdev and remove xserver-xorg-input-libinput; note that just installing evdev was not enough. The commands are listed after this list.) I may look like a dweeb, but I prefer this to smudging my screen.
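
For reference, the package swap from the last bullet boils down to the following on a stock Ubuntu 18.04 install (restart X, or reboot, afterwards):

sudo apt install xserver-xorg-input-evdev
sudo apt remove xserver-xorg-input-libinput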

The laptop I got is a CF-19 MK6. This is several years old and refurbished. The reason I went with this instead of a new toughbook (besides price) is that, as far as I can tell, only the CF-19 MK5 through MK8 have the transflective display. The replacement for the CF-19 (the CF-20) may have a better screen (I’ve not seen it), but it is not transflective and comes in at “only” 1000 nits. Same with the slightly larger non-convertible laptops.

Mind you, there is (I trust) a reason these screens did not take off – the colors are kind of washed out, and it’s low resolution. But for reading kernel code by the pool without draining the battery in 1 hr, the only thing I can imagine being better is an eink screen.

The CF-19 is compact: it’s a 10″ (convertible) netbook. This keyboard is more cramped than on my old s100 netbook. I do actually kind of like the keys – they have a good travel depth and a nice click. But it’s weird going back to a full-size keyboard.

The first time I measured the battery life, it shut down with the battery listing 36% remaining, after a mere 3 hours. Panasonic had advertised 10 hours for this laptop. Three hours was unacceptable, and I was about ready to send it back. But, reading the powertop output, I noticed that the sound card was listed as taking tons of battery power. So for my next run I did a powertop --auto-tune, and got over 4 hours of battery life. Then I noticed the bluetooth radio doing the same, so I did rmmod btusb. These are now all done on startup by systemd. The battery still stops at 35%, which takes getting used to, but it’s acceptable.
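
A minimal sketch of how those startup tweaks can be wired up with systemd; the unit name and binary paths below are illustrative, not necessarily the exact setup described above:

sudo tee /etc/systemd/system/outdoor-powersave.service <<'EOF'
[Unit]
Description=Apply powertop tunings and unload the bluetooth driver

[Service]
Type=oneshot
ExecStart=/usr/sbin/powertop --auto-tune
# the leading '-' tells systemd to ignore a failure (e.g. btusb already unloaded)
ExecStart=-/sbin/rmmod btusb

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable outdoor-powersave.service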

4.5 hours is still limiting, so I picked up a second battery and an external charger. I can charge one battery while using the other, or take both batteries along for a longer trip.

In summary – I may have found my outdoors laptop. I’d still prefer it be thinner, with a slightly larger and mechanical keyboard, and have 12 hour battery life, delivered on a unicorn…

(Here is an attempt to show the screen in very bright sunlight. It’s hard to get a good photo, since the camera wants to play its own games) :

Julian Andres Klode: Migrated website from ikiwiki to Hugo

Thu, 25/10/2018 - 8:42pm

So, I’ve been using ikiwiki for my website since 2011. At the time, I was hosting the website on a tiny hosting package included in a DSL contract - nothing dynamic possible, so a static site generator seemed like a good idea. ikiwiki was a good social fit at the time, as it was packaged in Debian and developed by a Debian Developer.

Today, I finished converting it to Hugo.

Why?

I did not really have a huge problem with ikiwiki, but I recently converted my blog from wordpress to hugo and it seemed to make sense to have one technology for both, especially since I don’t update the website very often and forget ikiwiki’s special things.

One thing that was somewhat annoying is that I built a custom ikiwiki plugin for the menu in my template, so I had to clone its repository into ~/.ikiwiki every time, rather than having a self-contained website. Well, it was a submodule of my dotfiles repo.

Another thing was that ikiwiki had a lot of git integration, and when you build your site it tries to push things to git repositories and all sorts of weird stuff – Hugo just does one thing: It builds your page.

One thing that Hugo does a lot better than ikiwiki is the built-in server: running `hugo server` gives you a local HTTP URL you can open in the browser, with live reload as you save files. Super convenient to check changes (and of course, for writing this blog post)!
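
For example (1313 is Hugo’s default port):

hugo server
# then open http://localhost:1313/ in the browser; saved changes show up immediately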

Also, in general, Hugo feels a lot more modern. ikiwiki is from 2006, Hugo is from 2013. Especially recent Hugo versions added quite a few features for asset management.

  • Fingerprinting of assets like CSS (inserting a hash into the filename) - ikiwiki just keeps its style in style.css (and your templates in other statically named files), so if you switch theming details, you could break things because the CSS the browser has cached does not match the CSS the page expects.
  • Asset minification - Hugo can minify CSS and JavaScript for you. This means browsers have to fetch less data.
  • Asset concatenation - Hugo can concatenate CSS and JavaScript. This allows you to serve only one file per type, reducing the number of round trips a client has to make.

There’s also proper theming support, so you can easily clone a theme into the themes/ directory, or add it as a submodule like I do for my blog. But I don’t use it for the website yet.

Oh, and Hugo automatically generates sitemap.xml files for your website, teaching search engines which pages exist and when they have been modified.

I also like that it’s written in Go rather than Perl, but I think that’s just another aspect of it being more modern. Gotta keep up with the world!

Basic conversion

The first part of the conversion was to split the repository of the website: ikiwiki puts templates into a templates/ subdirectory of the repository and mixes all other content. Hugo on the other hand splits things into content/ (where pages go), layouts/ (page templates), and static/ (other files).
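
The mechanical part of that split looks roughly like this (a sketch assuming the layout described above; in practice files were sorted case by case):

mkdir -p content layouts static
git mv templates/* layouts/      # ikiwiki templates become Hugo layouts
# markdown pages then move into content/, images and other files into static/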

The second part was to inject the frontmatter into the markdown files. See, ikiwiki uses shortcuts like this to set up the title, and gets its dates from git:

[[!meta title="My page title"]]

Hugo, on the other hand, uses frontmatter - some YAML at the beginning of the markdown file - and specifies the creation date in there:

---
title: "My page title"
date: Thu, 18 Oct 2018 21:36:18 +0200
---

You can also have lastmod in there when modifying it, but I set enableGitInfo = true in config.toml so Hugo picks up the mtime from the git repo.
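
In config.toml that is a single setting:

enableGitInfo = true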

I wrote a small script to automate those steps, but it was obviously not perfect (also, it inserted lastmod, which it should not have).
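
A rough sketch of that kind of conversion script, assuming each page carries exactly one [[!meta title="..."]] directive and takes its creation date from the first git commit touching it (the actual script is not reproduced here):

for f in $(find content -name '*.mdown'); do
    title=$(sed -n 's/.*\[\[!meta title="\(.*\)"\]\].*/\1/p' "$f")
    date=$(git log --follow --reverse --format=%aD -- "$f" | head -n1)
    sed -i '/\[\[!meta title=/d' "$f"
    printf -- '---\ntitle: "%s"\ndate: %s\n---\n\n' "$title" "$date" | cat - "$f" > "${f%.mdown}.md"
    rm "$f"
done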

One thing it took me some time to figure out was that index.mdown needs to become _index.md in the content/ directory of Hugo, otherwise no pages below it are rendered - not entirely obvious.

The theme

Converting the template was surprisingly easy: it was just a matter of replacing <TMPL_VAR BASEURL> and friends with {{ .Site.BaseURL }} and friends - the names are basically the same, just sometimes there’s .Site at the front of it.

Then I had to take care of the menu generation loop. I had my bootmenu plugin for ikiwiki which allowed me to generate menus from the configuration file. The template for it looked like this:

<TMPL_LOOP BOOTMENU>
  <TMPL_IF FIRSTNAV>
    <li <TMPL_IF ACTIVE>class="active"</TMPL_IF>><a href="<TMPL_VAR URL>"><TMPL_VAR PAGE></a></li>
  </TMPL_IF>
</TMPL_LOOP>

I converted this to:

{{ $currentPage := . }}
{{ range .Site.Menus.main }}
  <li class="{{ if $currentPage.IsMenuCurrent "main" . }}active{{ end }}">
    <a href="{{ .URL }}">
      {{ .Pre | safeHTML }}
      <span>{{ .Name }}</span>
    </a>
    {{ .Post }}
  </li>
{{ end }}

This allowed me to configure my menu in config.toml like this:

[menu]

  [[menu.main]]
    name = "dh-autoreconf"
    url = "/projects/dh-autoreconf"
    weight = -110

I can also specify pre and post parts and a right menu, and I use pre and post in the right menu to render a few icons before and after items, for example:

[[menu.right]]
  pre = "<i class='fab fa-mastodon'></i>"
  post = "<i class='fas fa-external-link-alt'></i>"
  url = "https://mastodon.social/@juliank"
  name = "Mastodon"
  weight = -70

Setting class="active" on the menu item does not seem to work yet, though; I think I need to find out the right code for that…

Fixing up the details

Once I was done with those steps, the next stage was to convert ikiwiki shortcodes to something Hugo understands. This took four parts:

The first part was converting tables. In ikiwiki, tables look like this:

[[!table format=dsv data="""
Status|License|Language|Reference
Active|GPL-3+|Java|[github](https://github.com/julian-klode/dns66)
"""]]

The generated HTML table had class="table" set, which the bootstrap framework needs to render a nice table. Converting that to a straightforward markdown table in Hugo did not work: Hugo did not add the class, so I had to convert pages with tables in them to the mmark variant of markdown, which allows classes to be set like this: {.table}. The end result then looked like this:

{.table}
Status|License|Language|Reference
------|-------|--------|---------
Active|GPL-3+|Java|[github](https://github.com/julian-klode/dns66)

I’ll be able to get rid of this in the future by using the bootstrap sources and then having table inherit the .table properties, but this requires Sass or Less, and I only have the CSS at the moment, so using mmark was slightly easier.

The second part was converting ikiwiki links like [[MyPage]] and [[my title|MyPage]] to Markdown links. This was quite easy: the first one became [MyPage](MyPage) and the second one [my title](MyPage).
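
A sketch of that conversion with GNU sed (basic regular expressions; directives that start with ! are deliberately not touched, since they are handled in the later steps):

sed -i 's/\[\[\([^]|!][^]|]*\)|\([^]]*\)\]\]/[\1](\2)/g' content/*.md    # [[my title|MyPage]]
sed -i 's/\[\[\([^]|!][^]|]*\)\]\]/[\1](\1)/g' content/*.md              # [[MyPage]]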

The third part was converting custom shortcuts: I had [[!lp <number>]] to generate a link LP: #<number> to the corresponding launchpad bug, and [[!Closes <number>]] to generate Closes: #<number> links to the Debian bug tracker. I converted those to normal markdown links, but I could have converted them to Hugo shortcodes. But meh.
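
A possible sed sketch of that conversion, should anyone want to do it mechanically; the URL patterns are the standard public Launchpad and Debian bug tracker ones, not anything taken from the original setup:

sed -i 's|\[\[!lp \([0-9][0-9]*\)\]\]|[LP: #\1](https://bugs.launchpad.net/bugs/\1)|g' content/*.md
sed -i 's|\[\[!Closes \([0-9][0-9]*\)\]\]|[Closes: #\1](https://bugs.debian.org/\1)|g' content/*.md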

The fourth part was about converting some directory indexes I had. For example, [[!map pages="projects/dir2ogg/0.12/* and ! projects/dir2ogg/0.12/*/*"]] generated a list of all files in projects/dir2ogg/0.12. There was a very useful shortcode for that posted in the Hugo documentation; I used a variant of it and then converted pages like this to {{< directoryindex path="/static/projects/dir2ogg/0.12" pathURL="/projects/dir2ogg/0.12" >}}. As a bonus, the new directory index also generates SHA256 hashes for all files!

Further work

The website is using an old version of bootstrap, and the theme is not split out yet. I’m not sure if I want to keep a bootstrap theme for the website, seeing as the blog theme is Bulma-based - it would be easier to have both use bulma.

I also might want to update both the website and the blog by pushing to GitHub and then using CI to build and push it. That would allow me to write blog posts when I don’t have my laptop with me. But I’m not sure; I might lose control if there’s a breach at Travis.

Ubuntu Podcast from the UK LoCo: S11E33 – Thirty-Three Teeth

Thu, 25/10/2018 - 5:30pm

This week we’ve been playing with DXVK and Volumio. We discuss Wifi getting a new naming scheme, interpreting the Linux CoC, Motorola partnering with iFixIt, KDE adding scaling for GTK applications and what’s been going on in the Ubuntu Community.

It’s Season 11 Episode 33 of the Ubuntu Podcast! Alan Pope, Mark Johnson and guest presenter Andy Jesse are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Kubuntu General News: Plasma 5.14.2 available in Cosmic backports PPA

Thu, 25/10/2018 - 11:51am

After the 5.14.1 release of Plasma 5.14 was made available for Cosmic 18.10 via our backports PPA last week, we are pleased to say that the PPA has now been updated to the second bugfix release, 5.14.2.

The full changelog for 5.14.2 can be found here.

Already released in the PPA is an update to the latest KDE Frameworks 5.51.

To upgrade:

Add the following repository to your software sources list:

ppa:kubuntu-ppa/backports

or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade

 

IMPORTANT

Please note that more bugfix releases are scheduled by KDE for Plasma 5.14, so while we feel these backports will be beneficial to enthusiastic adopters, users wanting to use a Plasma release with more stabilisation/bugfixes ‘baked in’ may find it advisable to stay with Plasma 5.13.5 as included in the original 18.10 Cosmic release.

Should any issues occur, please provide feedback on our mailing list [1], IRC [2], and/or file a bug against our PPA packages [3].

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
3. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, September 2018

Wed, 24/10/2018 - 12:13pm

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September, about 227 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 11 hours (out of 10 hours allocated + 5 extra hours, thus keeping 4 extra hours for October).
  • Antoine Beaupré did 24 hours.
  • Ben Hutchings did 29 hours (out of 15 hours allocated + 18 extra hours, thus keeping 4 extra hours for October).
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort did not publish his report yet (he had 29.25 hours allocated).
  • Holger Levsen did 2.5 hours (out of 8 hours allocated + 14 extra hours, thus keeping 19.5 extra hours for October).
  • Hugo Lefeuvre did 10 hours.
  • Markus Koschany did 29.25 hours.
  • Mike Gabriel did 10 hours (out of 8 hours allocated + 2 extra hours).
  • Ola Lundqvist did 7 hours (out of 8 hours allocated + 11.5 remaining hours, but gave back 4.5 hours, thus keeping 8 extra hours for October).
  • Roberto C. Sanchez did 15 hours (out of 18h allocated + 12 extra hours, and gave back the 15 remaining hours).
  • Santiago Ruano Rincón did 4 hours (out of 20 hours allocated + 12 extra hours, thus keeping 28 extra hours for October).
  • Thorsten Alteholz did 29.25 hours.
Evolution of the situation

The number of sponsored hours decreased to 205 hours per month, as we lost another small sponsor. Hopefully this trend will not continue. Time to subscribe your company if it’s not yet done!

The security tracker currently lists 30 packages with a known CVE and the dla-needed.txt file has 24 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Stephen Michael Kellat: Hard Question

Wed, 24/10/2018 - 4:54am

Well, there is a big question I get asked frequently during the day that I have trouble answering. You might be surprised at what it is. Simply put: "How are you doing?"

Things are not okay right now. I try to put on a brave face at work and keep on slogging away. We have a bit of "command uncertainty" in which the CEO and board of directors cannot come to a decision on how we're supposed to do things and the shareholders are electing people for seats on the board in two weeks. Hopefully we do not get a repeat of citizens howling at the moon shortly.

Remember, I work for USA Inc. so to say.

There has been an awesomely high level of stress at work. A major chunk of September into October has been spent on trying to get that under control. With greater responsibility in serving the public comes greater degrees of madness, it appears. The citizenry certainly hasn't calmed down since last year's howling at the moon, let alone the Brett Kavanaugh affair.

I have at least opened an account on Liberapay since they're back up with two payment processors now. When time permits I need to talk to some legal specialists at work to make sure I have my interpretations nailed down on income definitions and how to appropriately handle any income that would be derived from money sourced via Liberapay. There are plenty of murky gray areas in tax law when it comes to things like Patreon and Liberapay in the United States where, unfortunately, I have seen equally persuasive yet completely contradictory outcomes happen. If I take a leap of faith and seek pledges to go back to writing documentation as well as other works, I do want to keep a roof over my head if possible. Until I get answers I am satisfied with, I won't directly link to my Liberapay profile but it is able to be found if you search for it.

As we start to wind down 2018, I have some leaps of faith coming up, it seems. There aren't many safety nets under me. Exciting adventures may lie ahead?

Kees Cook: security things in Linux v4.19

Tue, 23/10/2018 - 1:17am

Previously: v4.18.

Linux kernel v4.19 was released today. Here are some security-related things I found interesting:

L1 Terminal Fault (L1TF)

While it seems like ages ago, the fixes for L1TF actually landed at the start of the v4.19 merge window. As with the other speculation flaw fixes, lots of people were involved, and the scope was pretty wide: bare metal machines, virtualized machines, etc. LWN has a great write-up on the L1TF flaw and the kernel’s documentation on L1TF defenses is equally detailed. I like how clean the solution is for bare-metal machines: when a page table entry should be marked invalid, instead of only changing the “Present” flag, it also inverts the address portion so even a speculative lookup ignoring the “Present” flag will land in an unmapped area.

protected regular and fifo files

Salvatore Mesoraca implemented an O_CREAT restriction in /tmp directories for FIFOs and regular files. This is similar to the existing symlink restrictions, which take effect in sticky world-writable directories (e.g. /tmp) when the opening user does not match the owner of the existing file (or directory). When a program opens a FIFO or regular file with O_CREAT and this kind of user mismatch, it is treated like it was also opened with O_EXCL: it gets rejected because there is already a file there, and the kernel wants to protect the program from writing possibly sensitive contents to a file owned by a different user. This has become a more common attack vector now that symlink and hardlink races have been eliminated.
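
The restriction is controlled by two new sysctls; the value semantics below follow the kernel documentation (1 covers world-writable sticky directories, 2 also group-writable ones):

# check the current values
sysctl fs.protected_fifos fs.protected_regular
# enable the restrictions, for example:
sudo sysctl -w fs.protected_fifos=1 fs.protected_regular=2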

syscall register clearing, arm64

One of the ways attackers can influence potential speculative execution flaws in the kernel is to leak information into the kernel via “unused” register contents. Most syscalls take only a few arguments, so all the other calling-convention-defined registers can be cleared instead of just left with whatever contents they had in userspace. As it turns out, clearing registers is very fast. Similar to what was done on x86, Mark Rutland implemented a full register-clearing syscall wrapper on arm64.

Variable Length Array removals, part 3

As mentioned in part 1 and part 2, VLAs continue to be removed from the kernel. While CONFIG_THREAD_INFO_IN_TASK and CONFIG_VMAP_STACK cover most issues with stack exhaustion attacks, not all architectures have those features, so getting rid of VLAs makes sure we keep a few classes of flaws out of all kernel architectures and configurations. It’s been a long road, and it’s shaping up to be a 4-part saga with the remaining VLA removals landing in the next kernel. For v4.19, several folks continued to help grind away at the problem: Arnd Bergmann, Kyle Spiers, Laura Abbott, Martin Schwidefsky, Salvatore Mesoraca, and myself.

shift overflow helper
Jason Gunthorpe noticed that while the kernel recently gained add/sub/mul/div helpers to check for arithmetic overflow, we didn’t have anything for shift-left. He added check_shl_overflow() to round out the toolbox and Leon Romanovsky immediately put it to use to solve an overflow in RDMA.

Edit: I forgot to mention this next feature when I first posted:

trusted architecture-supported RNG initialization

The Random Number Generator in the kernel seeds its pools from many entropy sources, including any architecture-specific sources (e.g. x86’s RDRAND). Because many people do not want to trust the architecture-specific source, given the inability to audit its operation, entropy from those sources was not credited to RNG initialization, which wants to gather “enough” entropy before claiming to be initialized. However, because some systems don’t generate enough entropy at boot time, it was taking a while to gather enough system entropy (e.g. from interrupts) before the RNG became usable, which might block userspace from starting (e.g. systemd wants to get early entropy). To help these cases, Ted Ts’o introduced a toggle to trust the architecture-specific entropy completely (i.e. the RNG is considered fully initialized as soon as it gets the architecture-specific entropy). To use this, the kernel can be built with CONFIG_RANDOM_TRUST_CPU=y (or booted with “random.trust_cpu=on”).

That’s it for now; thanks for reading. The merge window is open for v4.20! Wish us luck. :)

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Benjamin Mako Hill: Why organizational culture matters for online groups

Tue, 23/10/2018 - 12:55am

Leaders and scholars of online communities tend to think of community growth as the aggregate effect of inexperienced individuals arriving one-by-one. However, there is increasing evidence that growth in many online communities today involves newcomers arriving in groups with previous experience together in other communities. This difference has deep implications for how we think about the process of integrating newcomers. Instead of focusing only on individual socialization into the group culture, we must also understand how to manage mergers of existing groups with distinct cultures. Unfortunately, online community mergers have, to our knowledge, never been studied systematically.

To better understand mergers, my student Charlie Kiene spent six months in 2017 conducting ethnographic participant observation in two World of Warcraft raid guilds planning and undergoing mergers. The results—visible in the attendance plot below—show that the top merger led to a thriving and sustainable community while the bottom merger led to failure and the eventual dissolution of the group. Why did one merger succeed while the other failed? What can managers of other communities learn from these examples?

In a new paper that will be published in the Proceedings of the ACM Conference on Computer-supported Cooperative Work and Social Computing (CSCW) and that Charlie will present in New Jersey next month, I teamed up with Charlie and Aaron Shaw to try to answer these questions.

Raid team attendance before and after merging. Guilds were given pseudonyms to protect the identity of the research subjects.

In our research setting, World of Warcraft (WoW), players form organized groups called “guilds” to take on the game’s toughest bosses in virtual dungeons that are called “raids.” Raids can be extremely challenging, and they require a large number of players to be successful. Below is a video demonstrating the kind of communication and coordination needed to be successful as a raid team in WoW.

Because participation in a raid guild requires time, discipline, and emotional investment, raid guilds are constantly losing members and recruiting new ones to resupply their ranks. One common strategy for doing so is arranging formal mergers. Our study involved following two such groups as they completed mergers. To collect data for our study, Charlie joined both groups, attended and recorded all activities, took copious field notes, and spent hours interviewing leaders.

Although our team did not anticipate the divergent outcomes shown in the figure above when we began, we analyzed our data with an eye toward identifying themes that might point to reasons for the success of one merger and the failure of the other. The answers that emerged from our analysis suggest that the key differences that led one merger to be successful and the other to fail revolved around differences in the ways that the two mergers managed organizational culture. This basic insight is supported by a body of research about organizational culture in firms but seems not to have made it onto the radar of most members or scholars of online communities. My coauthors and I think more attention to the role that organizational culture plays in online communities is essential.

We found evidence of cultural incompatibility in both mergers and it seems likely that some degree of cultural clash is inevitable in any merger. The most important results of our analysis are three observations we drew about specific things that the successful merger did to effectively manage organizational culture. Drawn from our analysis, these themes point to concrete things that other communities facing mergers—either formal or informal—can do.

A recent, random example of a guild merger recruitment post found on the WoW forums.

First, when planning mergers, groups can strategically select other groups with similar organizational culture. The successful merger in our study involved a carefully planned process of advertising for a potential merger on forums, testing out group compatibility by participating in “trial” raid activities with potential guilds, and selecting the guild that most closely matched their own group’s culture. In our settings, this process helped prevent conflict from emerging and ensured that there was enough common ground to resolve it when it did.

Second, leaders can plan intentional opportunities to socialize members of the merged or acquired group. The leaders of the successful merger held community-wide social events in the game to help new members learn their community’s norms. They spelled out these norms in a visible list of rules. They even included the new members in both the brainstorming and voting process of changing the guild’s name to reflect that they were a single, new, cohesive unit. The leaders of the failed merger lacked any explicitly stated community rules, and opportunities for socializing the members of the new group were virtually absent. Newcomers from the merged group would only learn community norms when they broke one of the unstated social codes.

The guild leaders in the successful merger documented every successful high end raid boss achievement in a community-wide “Hall of Fame” journal. A screenshot is taken with every guild member who contributed to the achievement and uploaded to a “Hall of Fame” page.

Third and finally, our study suggested that social activities can be used to cultivate solidarity between the two merged groups, leading to increased retention of new members. We found that the successful guild merger organized an additional night of activity that was socially oriented. In doing so, they provided a setting where solidarity between new and existing members could develop, motivating members to stick around and keep playing with each other—even when it gets frustrating.

Our results suggest that by preparing in advance, ensuring some degree of cultural compatibility, and providing opportunities to socialize newcomers and cultivate solidarity, the potential for conflict resulting from mergers can be mitigated. While mergers between firms often occur to make more money or consolidate resources, the experience of the failed merger in our study shows that mergers between online communities put their entire communities at stake. We hope our work can be used by leaders in online communities to successfully manage potential conflict resulting from merging or acquiring members of other groups in a wide range of settings.

Much more detail is available in our paper, which will be published open access and which is currently available as a preprint.

Both this blog post and the paper it is based on are collaborative work by Charles Kiene from the University of Washington, Aaron Shaw from Northwestern University, and Benjamin Mako Hill from the University of Washington. We are also thrilled to mention that the paper received a Best Paper Honorable Mention award at CSCW 2018!

Lubuntu Blog: Lubuntu 18.10 (Cosmic Cuttlefish) Released!

Sat, 20/10/2018 - 5:06am
Thanks to all the hard work from our contributors, Lubuntu 18.10 has been released! With the codename Cosmic Cuttlefish, Lubuntu 18.10 is the 15th release of Lubuntu and the first release of Lubuntu with LXQt as the default desktop environment, with support until July of 2019. What is Lubuntu? Lubuntu is an […]

Xubuntu: Xubuntu 18.10 released!

Fri, 19/10/2018 - 1:56am

The Xubuntu team is happy to announce the immediate release of Xubuntu 18.10!

Xubuntu 18.10 is a regular release and will be supported for 9 months, until July 2019. If you need a stable environment with longer support time, we recommend that you use Xubuntu 18.04 LTS instead.

The final release images are available as torrents and direct downloads from xubuntu.org/getxubuntu/

As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.

We’d like to thank everybody who contributed to this release of Xubuntu!

Highlights and Known Issues

Highlights
  • Several Xfce components and apps were updated to their 4.13 development releases, bringing us closer to a Gtk+3-only desktop
  • elementary Xfce Icon Theme 0.13 with the manila folder icons as seen in the upstream elementary icon theme
  • Greybird 3.22.9, which improves the look and feel of our window manager, alt-tab dialog, Chromium, and even pavucontrol
  • A new default wallpaper featuring a gentle purple tone that greatly complements our Gtk+ and icon themes
Known Issues
  • At times the panel could show two network icons; this appears to be a race condition which we have not been able to rectify in time for release
  • In the Settings Manager, the mouse fails to scroll apps (GTK+ 3 regression)

For more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions, please refer to the Xubuntu Release Notes.

The main Ubuntu Release Notes cover many of the other packages we carry as well as more generic issues.

Support

For support with the release, navigate to Help & Support for a complete list of methods to get help.

Kubuntu General News: Kubuntu 18.10 is released today

Thu, 18/10/2018 - 7:53pm

Kubuntu 18.10 has been released, featuring the beautiful Plasma 5.13 desktop from KDE.

Codenamed “Cosmic Cuttlefish”, Kubuntu 18.10 continues our proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.

The team has been hard at work through this cycle, introducing new features and fixing bugs.

Under the hood, there have been updates to many core packages, including a new 4.18-based kernel, Qt 5.11, KDE Frameworks 5.50, Plasma 5.13.5 and KDE Applications 18.04.3.

Kubuntu has seen some exciting improvements, with newer versions of Qt, updates to major packages like Krita, Kdeconnect, Kstars, Peruse, Latte-dock, Firefox and LibreOffice, and stability improvements to KDE Plasma. In addition, Snap integration in Plasma Discover software center is now enabled by default, while Flatpak integration is also available to add on the settings page.

For a list of other application updates, upgrading notes and known bugs be sure to read our release notes:

https://wiki.ubuntu.com/CosmicCuttlefish/ReleaseNotes/Kubuntu

Download 18.10 or read about how to upgrade from 18.04.

Additionally, users who wish to test the latest Plasma 5.14.1 and Frameworks 5.51, which came too late in our release cycle to make it into 18.10 as default, can install these via our backports PPA. This represents only the first bugfix release of Plasma 5.14, with four more to be released in the coming months, so early adopters should be aware that there may be more bugs to be found (and reported).

Ubuntu Studio: Ubuntu Studio 18.10 Released

Thu, 18/10/2018 - 7:45pm
The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 18.10 “Cosmic Cuttlefish”. As a regular release, this version of Ubuntu Studio will be supported for 9 months. Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes […]

Ubuntu MATE: Ubuntu MATE 18.10 Final Release

Thu, 18/10/2018 - 7:00pm

Ubuntu MATE 18.10 is a modest, yet strategic, upgrade over our 18.04 release. If you want bug fixes and improved hardware support then 18.10 is for you. For those who prefer staying on the LTS then everything in this 18.10 release is also important for the upcoming 18.04.2 release. Oh yeah, we've also made a bespoke Ubuntu MATE 18.10 image for the GPD Pocket and GPD Pocket 2. Read on to learn more...


Superposition on the Intel Core i7-8809G Radeon RX Vega M powered Hades Canyon NUC

What changed since the Ubuntu MATE 18.04 final release?

Curiously, the work during this Ubuntu MATE 18.10 release has really been focused on what will become Ubuntu MATE 18.04.2. Let me explain.

MATE Desktop

The upstream MATE Desktop team have been working on many bug fixes for MATE Desktop 1.20.3, which has resulted in a lot of maintenance updates in the upstream releases of MATE Desktop. The Debian packaging team for MATE Desktop, of which I am a member, has been updating all the MATE packages to track these upstream bug fixes and new releases. Just about all MATE Desktop packages and associated components, such as AppMenu and MATE Dock Applet, have been updated. Now that all these fixes exist in the 18.10 release, we will start the process of SRU'ing (backporting) them to 18.04 so that they will feature in the Ubuntu MATE 18.04.2 release due in February 2019. The fixes should start landing in Ubuntu MATE 18.04 very soon, well before the February deadline.

Hardware Enablement

Ubuntu MATE 18.04.2 will include a hardware enablement stack (HWE) based on what is shipped in Ubuntu 18.10. Ubuntu users are increasingly adopting the current generation of AMD RX Vega GPUs, both discrete and integrated solutions such as the Intel Core i7-8809G Radeon RX Vega M found in the Hades Canyon NUC and some laptops. I have been lobbying people within the Ubuntu project to upgrade to newer versions of the Linux kernel, firmware, Mesa and Vulkan that offer the best possible "out of box" support for AMD GPUs. Consequently, Ubuntu 18.10 (of any flavour) is great for owners of AMD graphics solutions and these improvements will soon be available in Ubuntu 18.04.2 too.

GPD Pocket

Alongside the generic image for 64-bit Intel PCs we're also releasing a bespoke image for the GPD Pocket and GPD Pocket 2 that includes the hardware specific tweaks to get these devices working "out of the box" without any faffing about. See our GPD Pocket page for more details.


Ubuntu MATE 18.10 running on the GPD Pocket (left) and GPD Pocket 2 (right)


Raspberry Pi images

We're planning on releasing Ubuntu MATE images for the Raspberry Pi after the 18.10 release is out, in October 2018. It usually takes about a month to get the Raspberry Pi images built and tested, but we've encountered some challenges with the 18.04 based images which have delayed their release. Hopefully we'll have something in time for Christmas 2018 :-)

Major Applications

Accompanying MATE Desktop 1.20.3 and Linux 4.18 are Firefox 59.0.2, VLC 3.0.4, LibreOffice 6.1.2.1 and Thunderbird 60.2.1.


See the Ubuntu 18.10 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 18.10

We've redesigned the download page so it's even easier to get started.

Download

Known Issues

Here are the known issues.

Ubuntu MATE
  • Nothing significant.
Ubuntu family issues

This is our known list of bugs that affects all flavours.

You'll also want to check the Ubuntu MATE bug tracker to see what has already been reported. These issues will be addressed in due course.

Feedback

Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

Stuart Langridge: Print to Google Drive in a non-Gnome desktop

Tue, 16/10/2018 - 12:31am

Jeremy Bicha wrote up an unknown Ubuntu feature: “printing” direct to a Google Drive PDF. I rather wanted this, but I don’t run the Gnome desktop, so I thought I might be out of luck. But no! It works fine on my Ubuntu MATE desktop too. A couple of extra tweaks are required, though. This is unfortunately a bit technical, but it should only need setting up once.

You need the Gnome Control Centre and Gnome Online Accounts installed, if you don’t have them already, as well as the Google Cloud Print extension that Jeremy mentions. From a terminal, run sudo apt install gnome-control-center gnome-online-accounts cpdb-backend-gcp.

Next, you need to launch the Control Centre, but it doesn’t like you if you’re not running the Gnome desktop. So, we lie to it. In that terminal, run XDG_CURRENT_DESKTOP=GNOME gnome-control-center online-accounts. This should correctly start the Control Centre, showing the online accounts. Sign in to your Google account using that window. (I only have Files and Printers selected; you don’t need Mail and Calendars and so on to get this printing working.)

Then… it all works. From now on, when you go to print something, the print dialogue will, after a couple of seconds, show a new entry: “Save to Google Drive”. Choose that, and your document will “print” to a PDF stored in Google Drive. Easy peasy. Nice one Jeremy for the write-up. It’d be neat if Ubuntu MATE could integrate this a little more tightly.

Michael Zanetti: nymea

Mon, 15/10/2018 - 7:01pm

It’s been quite a while since I last wrote a post here. Lots of things have changed around here, but even though I am not actively developing for Ubuntu itself any more, it doesn’t mean that I’ve left the Ubuntu and FOSS world in general. In fact, I’ve been pretty busy hacking on some more free software goodness. Some of you have surely heard about it, but for the biggest part, allow me to introduce you to nymea.

nymea is an IoT platform mainly based on Ubuntu. Well, that’s what we develop on; we provide packages for Debian and snaps for all the platforms supporting snaps too.

It consists of 3 parts: nymea:core, nymea:app and nymea:cloud.
The purpose of this project is to enable easy integration of various things with each other. Being plugin-based, it can make all sorts of things (devices, online services…) work together.

Practically speaking this means two things:

– It will allow users to have a completely open source smart home setup which does everything offline. Everything is processed offline, including the smartness. Turning your living room lights on when it gets dark? nymea will do it, and it’ll do it even without your internet connection. It comes with nymea:core to be installed on a gateway device in your home (a Raspberry Pi, or any other device that can run Ubuntu/Debian or snapd) and nymea:app, available in app stores and also as a desktop app in the snap store.

– It delivers a developer platform for device makers. Looking for a solution that easily allows you to make your device smart? Ubuntu:core + nymea:core together will get you sorted in no time to have an app for your “thing” and allow it to react on just about any input it gets.

nymea:cloud is an optional addition to nymea:core and nymea:app and allows extending the nymea system with features like remote connection, push notifications or Alexa integration (not released yet).

So if that got you curious, check out https://wiki.nymea.io (and perhaps https://nymea.io in general) or simply install nymea and nymea-app and get going (on snap systems you need to connect some plugs and interfaces for all the bits and pieces to work; alternatively, we have a PPA ready for use too).

Jeremy Bicha: Google Cloud Print in Ubuntu

Sun, 14/10/2018 - 4:31pm

There is an interesting hidden feature available in Ubuntu 18.04 LTS and newer. To enable this feature, first install cpdb-backend-gcp.

sudo apt install cpdb-backend-gcp

Make sure you are signed in to Google with GNOME Online Accounts. Open the Settings app (gnome-control-center) to the Online Accounts page. If your Google account is near the top, above the Add an account section, then you’re all set.

Currently, only LibreOffice is supported. Hopefully, for 19.04, other GTK+ apps will be able to use the feature.

This feature was developed by Nilanjana Lodh and Abhijeet Dubey when they were Google Summer of Code 2017 participants. Their mentors were Till Kamppeter, Aveek Basu, and Felipe Borges.

Till has been trying to get this feature installed by default in Ubuntu since 18.04 LTS, but it looks like it won’t make it in until 19.04.

I haven’t seen this feature packaged in any other Linux distros yet. That might be because people don’t know about this feature, so that’s why I’m posting about it today! If you are a distro packager, the 3 packages you need are cpdb-libs, cpdb-backend-gcp, and cpdb-backend-cups. The final package enables easy printing to any IPP printer. (I didn’t mention it earlier because I believe Ubuntu 18.04 LTS already supports that feature through a different package.)

Save to Google Drive

In my original blog post, I confused the cpdb feature with a feature that already exists in GTK3 built with GNOME Online Accounts support. This should already work on most distros.

When you print a document, there will be an extra Save to Google Drive option. Saving to Google Drive saves a PDF of your document to your Google Drive account.

This post was edited on October 16 to mention that cpdb only supports LibreOffice now and that Save to Google Drive is a GTK3 feature instead.

October 17: Please see Felipe’s comments. It turns out that even Google Cloud Print works fine in distros with recent GTK3. The point of the cpdb feature is to make this work in apps that don’t use GTK3. So I guess the big benefit now is that you can use Google Cloud Print or Save to Google Drive from LibreOffice.

Julian Andres Klode: The demise of G+ and return to blogging (w/ mastodon integration)

Sat, 13/10/2018 - 11:03pm

I’m back to blogging, after shutting down my wordpress.com hosted blog in spring. This time, fully privacy aware, self hosted, and integrated with mastodon.

Let’s talk details: In spring, I shut down my wordpress.com hosted blog, due to concerns about GDPR implications with comment hosting and ads and stuff. I’d like to apologize for using that; back when I started (in 2007), it was the easiest way to get into blogging. Please forgive me for subjecting you to that!

Recently, Google announced the end of Google+. As some of you might know, I posted a lot of medium-long posts there, rather than doing blog posts; especially after I disabled the wordpress site.

With the end of Google+, I want to try something new: I’ll host longer pieces on this blog, and post shorter messages on @juliank@mastodon.social. If you follow the Mastodon account, you will see toots for each new blog post as well, linking to the blog post.

Mastodon integration and privacy

Now comes the interesting part: If you reply to the toot, your reply will be shown on the blog itself. This works with a tiny bit of JavaScript that talks to a simple server-side script that finds toots from me mentioning the blog post, and then the replies to that toot.
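
For the curious, the kind of lookup involved can be approximated with the public Mastodon API; this is just an illustration of the mechanism, not the actual server-side script (the status id is a placeholder):

curl -s "https://mastodon.social/api/v1/statuses/<status-id>/context" \
  | jq '.descendants[] | {account: .account.acct, content: .content}'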

This protects your privacy, because mastodon.social does not see which blog post you are looking at, because it is contacted by the server, not by you. Rendering avatars requires loading images from mastodon.social’s file server, however - to improve your privacy, all avatars are loaded with referrerpolicy='no-referrer', so assuming your browser is half-way sane, it should not be telling mastodon.social which post you visited either. In fact, the entire domain also sets Referrer-Policy: no-referrer as an http header, so any link you follow will not have a referrer set.

The integration was originally written by @bjoern@mastodon.social – I have done some moderate improvements to adapt it to my theme, make it more reusable, and replace and extend the caching done in a JSON file with a Redis database.

Source code

This blog is free software; generated by the Hugo snap. All source code for it is available:

(Yes I am aware that hosting the repositories on GitHub is a bit ironic given the whole focus on privacy and self-hosting).

The theme makes use of Hugo pipes to minify and fingerprint JavaScript, and vendorizes all dependencies instead of embedding CDN links, to, again, protect your privacy.

Future work

I think I want to make the theme dark, to be more friendly to the eyes. I also might want to make the mastodon integration a bit more friendly to use. And I want to get rid of jQuery; it’s only used for a handful of calls in the Mastodon integration JavaScript.

If you have any other idea for improvements, feel free to join the conversation in the mastodon toot, send me an email, or open an issue at the github projects.

Closing thoughts

I think the end of Google+ will be an interesting time, requiring a lot of people in the open source world to replace one of their main communication channels with a different approach.

Mastodon and Diaspora are both in the race, and I fear the community will split or everyone will have two accounts in the end. I personally think that Mastodon + syndicated blogs provide a good balance: You can quickly write short posts (up to 500 characters), and you can host long articles on your own and link to them.

I hope that one day diaspora* and mastodon federate together. If we end up with one federated network that would be the best outcome.

Jeremy Bicha: Shutter removed from Debian & Ubuntu

Sat, 13/10/2018 - 8:29pm

This week, the popular screenshot app Shutter was removed from Debian Unstable & Ubuntu 18.10. (It had already been removed from Debian “Buster” 6 months ago and some of its “optional” dependencies had already been removed from Ubuntu 18.04 LTS).

Shutter will need to be ported to gtk3 before it can return to Debian. (Ideally, it would support Wayland desktops too but that’s not a blocker for inclusion in Debian.)

See the Debian bug for more discussion.

I am told that flameshot is a nice well-maintained screenshot app.

I believe Snap or Flatpak are great ways to make apps that use obsolete libraries available on modern distros that can no longer keep those libraries around. There isn’t a Snap or Flatpak version of Shutter yet, so hopefully someone interested in that will help create one.

David Tomaschik: Course Review: Adversarial Attacks and Hunt Teaming

Fri, 12/10/2018 - 9:00am

At DerbyCon 8, I had the opportunity to take the “Adversarial Attacks and Hunt Teaming” presented by Ben Ten and Larry Spohn from TrustedSec. I went into the course hoping to get a refresher on the latest techniques for Windows domains (I do mostly Linux, IoT & Web Apps at work) as well as to get a better understanding of how hunt teaming is done. (As a Red Teamer, I feel understanding the work done by the blue team is critical to better success and reducing detection.) From the course description:

This course is completely hands-on, focusing on the latest attack techniques and building a defense strategy around them. This workshop will cover both red and blue team efforts and provide methods for understanding how to best detect threats in an enterprise. It will give penetration testers the ability to learn the newest techniques, as well as teach blue teamers how to defend against them.

The Good

The course was definitely hands-on, which I really appreciate as someone who learns by “doing” rather than by listening to someone talk. Both instructors were obviously knowledgeable and able to answer questions about how tools and techniques work. It’s really valuable to understand why things work instead of just running commands blindly. Having the why lets you pivot your knowledge to other tools when your first choice isn’t working for some reason. (AV, endpoint protection, etc.)

Both instructors are strong teachers with an obvious passion for what they do. They presented the material well and mostly at a reasonable pace. They also tag-team well: while one is presenting, the other can help students having issues without delaying the entire class.

The final lab/exam was really good. We were challenged to get Domain Admin on a network we hadn’t seen so far, with the top 5 finishers receiving challenge coins. Despite how little I do with Windows, I was happy to be one of the recipients!

The Bad

The course began quite slowly for my experience level. The first half-day or so involved basic reconnaissance with nmap and an introduction to Metasploit. While I understand that not everyone has experience with these tools, the course description did not make me feel like it would be as basic as was presented.

There was a section on physical attacks that, while extremely interesting, was not really a good fit for the rest of the course material. It was too brief to really learn how to execute these attacks from a Red Team perspective, and physical security is often out of scope for the Blue Team (or handled by a different group). Other than entertainment value, I do not feel like it added anything to the course.

I would have liked a little more “Blue” content. The hunt-teaming section was mostly about configuring Windows Logging and pointing it to an ELK server for aggregation and analysis. Again, this was interesting, but we did not dive into other sources of data (network firewalls, non-Windows systems, etc.) like I hoped we would. It also did not spend any time discussing how to relate different events, only how to log the events you would want to look for.

Summary

Overall, I think this is a good course presented by excellent instructors. If you’ve done an OSCP course or even basic penetration testing, expect some duplication in the first day or so, but there will still be techniques that you might not have seen (or had the chance to try out) before. This was my first time trying the “Kerberoasting” attack, so it was nice to be able to do it hands-on. Overall a solid course, but I’d generally recommend it to those early in their careers or transitioning to an offensive security role.

Simos Xenitellis: How to create a minimal container image for LXC/LXD with distrobuilder

Wed, 10/10/2018 - 10:12pm

In the previous post,

Using distrobuilder to create container images for LXC and LXD

we saw how to build distrobuilder, then use it to create a LXD container image for Ubuntu. We used one of the existing configuration files for an Ubuntu container image.

In this post, we are going to see how to compose such YAML configuration files, which describe what the container image will look like. The aim of this post is to deal with a minimal configuration file to create a container image for Alpine Linux. A future post will deal with a more complete configuration file.

Creating a minimal configuration for a container image

Here is the minimal configuration for an Alpine Linux container image. Note that we have omitted some parts that would make the container more useful (namespaces, etc.). The containers from this container image will still work for our humble purposes.

image:
  description: My Alpine Linux
  distribution: minimalalpine
  release: 3.8.1

source:
  downloader: alpinelinux-http
  url: http://dl-cdn.alpinelinux.org/alpine/
  keys:
    - 0482D84022F52DF1C4E7CD43293ACD0907D9495A
  keyserver: keyserver.ubuntu.com

packages:
  manager: apk

Save this as a file with filename such as myalpine.yaml, and then build the container image. It takes a couple of seconds to build the container image. We will come back to the minimal configuration and explain in detail in the next section.

$ sudo $HOME/go/bin/distrobuilder build-lxd myalpine.yaml
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
v3.8.1-27-g42946288bd [http://dl-cdn.alpinelinux.org/alpine/v3.8/main]
v3.8.1-23-ga2d8d72222 [http://dl-cdn.alpinelinux.org/alpine/v3.8/community]
OK: 9539 distinct packages available
Parallel mksquashfs: Using 4 processors
Creating 4.0 filesystem on /home/username/ContainerImages/minimal/rootfs.squashfs, block size 131072.
[==================================================|] 90/90 100%
Exportable Squashfs 4.0 filesystem, gzip compressed, data block size 131072
compressed data, compressed metadata, compressed fragments, compressed xattrs
duplicates are removed
Filesystem size 2093.68 Kbytes (2.04 Mbytes)
48.30% of uncompressed filesystem size (4334.32 Kbytes)
Inode table size 3010 bytes (2.94 Kbytes)
17.41% of uncompressed inode table size (17290 bytes)
Directory table size 4404 bytes (4.30 Kbytes)
54.01% of uncompressed directory table size (8154 bytes)
Number of duplicate files found 5
Number of inodes 481
Number of files 64
Number of fragments 5
Number of symbolic links 329
Number of device nodes 1
Number of fifo nodes 0
Number of socket nodes 0
Number of directories 87
Number of ids (unique uids + gids) 2
Number of uids 1
root (0)
Number of gids 2
root (0)
shadow (42)
$

And here is the container image. The size of the container image is about 2MB.

$ ls -l
total 2108
-rw-r--r-- 1 root root 364 Oct 10 20:30 lxd.tar.xz
-rw-rw-r-- 1 user user 287 Oct 10 20:30 myalpine.yaml
-rw-r--r-- 1 root root 2146304 Oct 10 20:30 rootfs.squashfs

Let’s import it into our LXD installation.

$ lxc image import --alias myminimal lxd.tar.xz rootfs.squashfs
Image imported with fingerprint: ee9208767e745bb980a074006fa462f6878e763539c439e6bfa34c029cfc318b

And now launch a container from this container image.

$ lxc launch myminimal mycontainer
Creating mycontainer
Starting mycontainer

Let’s see the container running. It’s running, but did not get an IP address. That’s part of the cost-cutting in the initial minimal configuration file.

$ lxc list mycontainer
+-------------+---------+------+------+
| NAME | STATE | IPV4 | IPV6 |
+-------------+---------+------+------+
| mycontainer | RUNNING | | |
+-------------+---------+------+------+

Let’s get a shell in the container and start doing things! First, set up the network configuration.

$ lxc exec mycontainer -- sh
~ # pwd
/root
~ # cat /etc/network/interfaces
cat: can't open '/etc/network/interfaces': No such file or directory
~ # echo "auto eth0" > /etc/network/interfaces
~ # echo "iface eth0 inet dhcp" >> /etc/network/interfaces

Then, get an IP address using DHCP.

~ # ifup eth0
udhcpc: started, v1.28.4
udhcpc: sending discover
udhcpc: sending discover
udhcpc: sending select for 10.50.250.150
udhcpc: lease of 10.50.250.150 obtained, lease time 3600

We got a lease, but for some reason the network was not configured. Both ifconfig and route showed no configuration. So, we complete the network configuration manually. And it works, we have access to the Internet!

~ # ifconfig eth0 10.50.250.150 up
~ # route add -net default gw 10.50.250.1
~ # ping -c 1 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=120 time=17.451 ms
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 17.451/17.451/17.451 ms
~ # exit
$

Let’s clear up and start studying the configuration file. We force-delete the container, and then delete the container image.

$ lxc delete --force mycontainer
$ lxc image delete myminimal
Understanding the configuration file of a container image

Here again is the configuration file for a minimal Alpine container image. It has three sections:

  1. image, with information about the image. We can put anything for the description and distribution name. The release version, though, should exist.
  2. source, which describes where to get the image, ISO or packages of the distribution. The downloader is a plugin in distrobuilder that knows how to get the appropriate files, as long as it knows the URL and the release version. The url is the URL prefix of the location with the files. keys and keyserver are used to verify digitally the authenticity of the files.
  3. packages, which indicates the plugin that knows how to deal with the specific package manager of the distribution. In general, you can also indicate here which additional packages to install, which to remove and which to update.
image:
  description: My Alpine Linux
  distribution: minimalalpine
  release: 3.8.1

source:
  downloader: alpinelinux-http
  url: http://dl-cdn.alpinelinux.org/alpine/
  keys:
    - 0482D84022F52DF1C4E7CD43293ACD0907D9495A
  keyserver: keyserver.ubuntu.com

packages:
  manager: apk

The downloader and url go hand in hand. The URL is the prefix for the repository that the downloader will use to get the necessary files.

The keys are necessary to verify the authenticity of the files. The keyserver is used to download the actual public keys of the IDs that were specified in keys. You could very well not specify a keyserver, and distrobuilder would request the keys from the root PGP servers. However, those servers are often overloaded and the attempt can easily fail. It happened to me several times, so I now explicitly use the Ubuntu keyserver.
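
This is not required, since distrobuilder fetches the key itself, but a quick way to check that the key ID from the configuration resolves on the chosen keyserver is:

$ gpg --keyserver keyserver.ubuntu.com --recv-keys 0482D84022F52DF1C4E7CD43293ACD0907D9495A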

Summary

We have seen how to use a minimal configuration file for an Alpine container image. In future posts, we are going to see how to create more complete configuration files.

Simos Xenitellis - https://blog.simos.info/
