
Planet Debian


Digital Minimalism and Deep Work

Tue, 18/09/2018 - 2:44pm

Russ Allbery of the Debian project writes reviews of books he has read on his blog. It was through Russ's review that I learned of "Deep Work" by Cal Newport, and duly requested it from my local library.

I have a long-held skepticism of self-help books, but several aspects of this one strike the right notes for me. The author is a computer scientist, so there's a sense of kinship there, and the writing follows the standard academic patterns: sources are cited, and there is a certain rigour to the new ideas that are presented. Despite this, there are a few sections of the book which I felt lacked much supporting evidence, or where some obvious questions about the relevant concept went unasked. One of the case studies in the book is of a part-time PhD student with a full-time job and a young child, which I can relate to. The author obviously follows his own advice: he runs a productivity blog at calnewport.com and has no other social media presence. One of the key productivity tips he espouses in the book (and elsewhere) is simply "quit social media".

Through Newport's blog I learned that the title of his next book is Digital Minimalism. This intrigued me because, since I started thinking about minimalism myself, I've wondered about the difference in approach needed between minimalism in the "real world" and in the digital domain. It turns out Newport's next book is about something different: from what I can tell, it focusses on controlling how one spends one's time online for maximum productivity.

That's an interesting topic which I have more to write about at some point. However, my line of thought for the title "digital minimalism" spawned from reading Marie Kondo, Fumio Sasaki and others. Many of the tips they offer their readers revolve around moving meaning away from physical clutter and into the digital domain: scan your important papers, photograph your keepsakes, and throw away the physical copies. It struck me that whilst this is useful advice for addressing the immediate problem of clutter in the physical world, it exacerbates the problem of digital clutter, especially if we don't have good systems for effectively managing digital archives. Broadly speaking, I don't think we do: at least, not ones that are readily accessible to the majority of people. I have a hunch that most people have no form of data backup in place at all, switch between digital hosting services in a relatively ad-hoc manner (flickr, snapchat, instagram…) and treat losing data (such as when an old laptop breaks, or a tablet or phone is stolen) as a fact of life, rather than something that could be avoided if our tools (or habits, or both) were better.

jmtd http://jmtd.net/log/ Jonathan Dowland's Weblog

Review: The Collapsing Empire

Tue, 18/09/2018 - 5:39am

Review: The Collapsing Empire, by John Scalzi

Series: Interdependency #1
Publisher: Tor
Copyright: March 2017
ISBN: 0-7653-8889-8
Format: Kindle
Pages: 333

Cardenia Wu-Patrick was never supposed to become emperox. She had a quiet life with her mother, a professor of ancient languages who had a brief fling with the emperox but otherwise stayed well clear of the court. Her older half-brother was the imperial heir and seemed to enjoy the position and the politics. But then Rennered got himself killed while racing and Cardenia ended up heir whether she wanted it or not, with her father on his deathbed and unwanted pressure on her to take over Rennered's role in a planned marriage of state with the powerful Nohamapetan guild family.

Cardenia has far larger problems than those, but she won't find out about them until becoming emperox.

The Interdependency is an interstellar human empire balanced on top of a complex combination of hereditary empire, feudal guild system, state religion complete with founding prophet, and the Flow. The Flow is this universe's equivalent of the old SF trope of a wormhole network: a strange extra-dimensional space with well-defined entry and exit points and a disregard for the speed of light. The Interdependency relies on it even more than one might expect. As part of the same complex and extremely long-term plan of engineered political stability that created the guild, empire, and church balance of power, the Interdependency created an economic web in which each system is critically dependent on imports from other systems. This plus the natural choke points of the Flow greatly reduces the chances of war.

It also means that Cardenia has inherited an empire that is more fragile than it may appear. Secret research happening at the most far-flung system in the Interdependency is about to tell her just how fragile.

John Clute and Malcolm Edwards provided one of the most famous backhanded compliments in SF criticism in The Encyclopedia of Science Fiction when they described Isaac Asimov as the "default voice" of science fiction: a consistent but undistinguished style that became the baseline that other writers built on or reacted against. The field is now far too large for there to be one default voice in that same way, but John Scalzi's writing reminds me of that comment. He is very good at writing a specific sort of book: a light science fiction story that draws as much on Star Trek as it does on Heinlein, comfortably sits on the framework of standard SF tropes built by other people, adds a bit of humor and a lot of banter, and otherwise moves reliably and competently through a plot. It's not hard to recognize Scalzi's writing, so in that sense he has less of a default voice than Asimov had, but if I had to pick out an average science fiction novel his writing would come immediately to mind. At a time when the field is large enough to splinter into numerous sub-genres that challenge readers in different ways and push into new ideas, Scalzi continues writing straight down the middle of the genre, providing the same sort of comfortable familiarity as the latest summer blockbuster.

This is not high praise, and I am sometimes mystified at the amount of attention Scalzi gets (both positive and negative). I think his largest flaw (and certainly the largest flaw in this book) is that he has very little dynamic range, particularly in his characters. His books have a tendency to collapse into barely-differentiated versions of the same person bantering with each other, all of them sounding very much like Scalzi's own voice on his blog. The Collapsing Empire has emperox Scalzi grappling with news from scientist Scalzi carried by dutiful Scalzi with the help of profane impetuous Scalzi, all maneuvering against devious Scalzi. The characters are easy to keep track of by the roles they play in the plot, and the plot itself is agreeably twisty, but if you're looking for a book to hook into your soul and run you through the gamut of human emotions, this is not it.

That is not necessarily a bad thing. I like that voice; I read Scalzi's blog regularly. He's reliable, and I wonder if that's the secret to his success. I picked up this book because I wanted to read a decent science fiction novel and not take a big risk. It delivered exactly what I asked for. I enjoyed the plot, laughed at some of the characters, felt for Cardenia, enjoyed the way some villainous threats fell flat because of characters who had a firm grasp of what was actually important and acted on it, and am intrigued enough by what will happen next that I'm going to read the sequel. Scalzi aimed to entertain, succeeded, and got another happy customer. (Although I must note that I would have been happier if my favorite character in the book, by far, did not make a premature exit.)

I am mystified at how The Collapsing Empire won a Locus Award for best science fiction novel, though. This is just not an award sort of book, at least in my opinion. It's the equivalent of book four in an urban fantasy series, or of the sixth book of Louis L'Amour's Sackett westerns. If you like this sort of thing, you'll like this version of it, and much of the appeal is that it's not risky and requires little investment of effort. I think an award winner should be the sort of book that lingers, that you find yourself thinking about at odd intervals, that expands your view of what's possible to do or feel or understand.

But that complaint is more about awards voters than about Scalzi, who competently executed on exactly what was promised on the tin. I liked the setup and I loved the structure of Cardenia's inheritance of empire, so I do kind of wish I could read the book that, say, Ann Leckie would have written with those elements, but I was entertained in exactly the way that I wanted to be entertained. There's real skill and magic in that.

Followed by The Consuming Fire. This book ends on a cliffhanger, as apparently does the next one, so if that sort of thing bothers you, you may want to wait until they're all available.

Rating: 7 out of 10

Russ Allbery https://www.eyrie.org/~eagle/ Eagle's Path

You Think the Visual Studio Code binary you use is Free Software? Think again.

Tue, 18/09/2018 - 12:00am

Did you download your binary of Visual Studio Code directly from the official website? If so, you're not using Free Software, and only Microsoft knows what was added to that binary. You should assume the worst.

It says « Open Source » and offers to download non-open-source binary packages. Very misleading.

The Microsoft Trick

I'm not a lawyer and I could be wrong or not accurate enough in my analysis (sorry!), but I'll nonetheless try to give my understanding of the situation, because the current state of Visual Studio Code's licensing tries to fool most users.

Microsoft uses a simple but clever trick here, allowed by the license of Visual Studio Code's source code: the MIT license, a permissive Free Software license.

Indeed, the MIT license is really straightforward: do whatever you want with this software, keep the original copyright notice, and the authors are not responsible for what happens with it. OK. Except that, in the case of Visual Studio Code, it only covers the source code, not the binary.

Unlike most GPL-based licenses, under which both the source code and the binary built from it are covered by the terms of the license, the MIT license authorizes Microsoft to make the source code of the software available while doing whatever they want with the binary. And let's be crystal clear: 99.99% of VSC users will never use the source code directly.

What a non-free Microsoft license looks like

And of course Microsoft purposely does not use the MIT license for the binary of Visual Studio Code. Instead they use a fully-armed, freedom-restricting license: the Microsoft Software License.

Let's have a look at some pieces of it. You can find the full license here: https://code.visualstudio.com/license

This license applies to the Visual Studio Code product. The source code is available under the MIT license agreement.

First sentence of the license. The difference between the license of the source code and that of the « product », meaning the binary you're going to use, is clearly stated.

Data Collection. The software may collect information about you and your use of the software, and send that to Microsoft.

Yeah right, no kidding. Big Surprise from Microsoft.

UPDATES. The software may periodically check for updates, and download and install them for you. You may obtain updates only from Microsoft or authorized sources. Microsoft may need to update your system to provide you with updates. You agree to receive these automatic updates without any additional notice. Updates may not include or support all existing software features, services, or peripheral devices.

In other words: we'll break your installation without further notice, and we don't care what you were doing with it before, because, you know.

SCOPE OF LICENSE (…) you may not:

  • work around any technical limitations in the software;

Also known as « hacking » since… well, forever.

  • reverse engineer, decompile or disassemble the software, or otherwise attempt to derive the source code for the software, except and to the extent required by third party licensing terms governing use of certain open source components that may be included in the software;

Because there is no way anybody should be allowed to find out what we are doing with the binary running on your computer.

  • share, publish, rent or lease the software, or provide the software as a stand-alone offering for others to use.

I may be wrong (again, I'm not a lawyer), but it seems to me they forbid you from redistributing this binary, except under the conditions mentioned in the INSTALLATION AND USE RIGHTS section (mostly for the needs of your company and/or for giving demos of your products using VSC).

The following sections EXPORT RESTRICTIONS and CONSUMER RIGHTS; REGIONAL VARIATIONS include more and more restrictions about using and sharing the binary.

DISCLAIMER OF WARRANTY. The software is licensed “as-is.”

At last, a term which could be mistaken for a term of a Free Software license. But in this case it is of course there to limit any obligation Microsoft could have towards you.

So the Microsoft Software License is definitely not a Free Software license, if you were not already convinced by the clever trick of licensing the source code and the binary differently.

What You Could Do

There are ways to use VSC in good conditions. After all, the source code of VSC is Free Software, so why not build it yourself? Some initiatives have also appeared, like this repository. That could be a good start.

As for GNU/Linux distributions, packaging VSC (see here for the discussion in Debian) would be a great way to keep people from being taken in by the Microsoft trick and using a « product » that breaks almost every term of what makes Free Software.

About Me

Carl Chenet, Free Software Indie Hacker, Founder of LinuxJobs.io, a Job board dedicated to Free and Open Source Jobs in the US.


Carl Chenet https://carlchenet.com debian – Carl Chenet's Blog

which spare laptop?

Mon, 17/09/2018 - 3:48pm

I'm in a perpetual state of downsizing and ridding my life (and my family's life) of things we don't need: sometimes old computers. My main (nearly my sole) machine is my work-provided Thinkpad T470s: a fantastic laptop that works so well I haven't had anything to write about it. However, I decided that it was worth keeping just one spare, for emergencies or other odd situations. I have two candidate machines in my possession.

In the blue corner

left: X61S; right: R600

Toshiba Portégé R600. I've actually owned this for 7 years now, buying it originally to replace my beloved X40, which I loaned to my partner. At the time my main work machine was still a desktop. I received a new work laptop soon after buying this, so it ended up gathering dust in a cupboard.

It's an extremely light laptop, even by today's standards. It compares favourably with the Apple MacBook Air 11" in that respect. A comfortable keyboard, but no trackpoint and a bog-standard trackpad. A 1280x800 16:10 display, albeit TN panel technology with very limited viewing angles. Analog VGA video out on the laptop, but digital DVI-D out is possible via a separate dock, which was cheap and easy to acquire and very stowable. An integrated optical media drive which could be useful. Max 3G RAM (1G soldered, 2G in DIMM slot).

The CPU is apparently a generation newer but lower voltage and thus slower than its rival, which is…

In the red corner

x61s

Thinkpad X61s. The proportions match the Thinkpad X40, so it has a high nostalgia factor. Great keyboard, I love trackpoints, robust build. It has the edge on CPU over the Toshiba. A theoretical maximum of 8G (2x4) RAM, but practically nearer 4G (2x2), as the 4G sticks are too expensive. This is probably the "heart" choice.

The main drawback of the X61s is the display options: a 1024x768 TN panel, and no digital video out: VGA only on the laptop, and VGA only on the optional dock. It's possible to retro-fit a better panel, but it's not easy and the parts are now very hard to find. It's also a surprisingly heavy machine: heavier than I remember the X40 being, but it's been long enough ago that my expectations have changed.

The winner

Surprising myself perhaps more than anyone else, I've ended up opting for the Toshiba. The weight was the clincher. The CPU performance difference was too close to matter, and 3G RAM is sufficient for my spare laptop needs. Once I'd installed a spare SSD as the main storage device, day-to-day performance is very good. The resolution difference didn't turn out to be that important: it's still low enough that side-by-side text editor and browser feels crowded, so I end up using the same window management techniques as I would on the X61s.

What do I use it for? I've taken it on a couple of trips or holidays which I wouldn't want to risk my work machine for. I wrote nearly all of liquorice on it in downtime on a holiday to Turkey whilst my daughter was having her afternoon nap. I'm touching up this blog post on it now!

I suppose I should think about passing on the X61s to something/someone else.

jmtd http://jmtd.net/log/ Jonathan Dowland's Weblog

PAM HaveIBeenPwned module

Mon, 17/09/2018 - 11:01am

So the PAM module which I pondered about in my previous post now exists:

I did mention "sponsorship" in my post, which led to a couple of emails, and the end result of that was that a couple of folk donated to charity in my/its name. Good enough.

Perhaps in the future I'll explore patreon/similar, but I don't feel very in-demand so I'll avoid it for the moment.

Anyway I guess it should be Debian-packaged for neatness, but I'll resist for the moment.

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

Linus apologising

Mon, 17/09/2018 - 8:45am

Someone pointed me towards this email, in which Linus apologizes for some of his more unhealthy behaviour.

The above is basically a long-winded way to get to the somewhat painful personal admission that hey, I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely.

To me, this came somewhat as a surprise. I'm not really involved in Linux kernel development, and so the history of what led up to this email mostly passed unnoticed, at least for me; but that doesn't mean I cannot recognize how difficult this must have been to write for him.

As I know from experience, admitting that you have made a mistake is hard. Admitting that you have been making the same mistake over and over again is even harder. Doing so publicly? Even more so, since you're placing yourself in a vulnerable position, one that the less honorably inclined will take advantage of if you're not careful.

There isn't much I can contribute to the whole process, but there is this: Thanks, Linus, for being willing to work on those things, which can only make the community healthier as a result. It takes courage to admit things like that, and that is only to be admired. Hopefully this will have the result we're hoping for, too; but that, only time can tell.

Wouter Verhelst https://grep.be/blog//pd/ pd

Lookalikes

Sun, 16/09/2018 - 8:18pm

Was my festive shirt the model for the men’s room signs at Daniel K. Inouye International Airport in Honolulu? Did I see the sign on arrival and subconsciously decide to dress similarly when I returned to the airport to depart Hawaii?

Benjamin Mako Hill https://mako.cc/copyrighteous copyrighteous

GIMP 2.10

Sun, 16/09/2018 - 6:00am

GIMP 2.10 landed in Debian Testing a few weeks ago and I have to say I'm very happy about it. The last major version of GIMP (2.8) was released in 2012, and the new version fixes a lot of bugs and improves the user interface.

I've updated my Beginner's Guide to GIMP (sadly only in French) and in the process I found out a few things I thought I would share:

Theme

The default theme is Dark. Although it looks very nice in my opinion, I don't feel it's a good choice for productivity. The icon pack the theme uses is a monochrome flat 2D render and I feel it makes it hard to differentiate the icons from one another.

I would instead recommend using the Light theme with the Color icon pack.

Single Window Mode

GIMP now enables Single Window Mode by default. That means that Dockable Dialog Windows like the Toolbar or the Layer Window cannot be moved around, but instead are locked to two docks on the right and the left of the screen.

Although you can hide and show these docks using Tab, I feel Single Window Mode is more suitable for larger screens. On my laptop, I still prefer moving the windows around as I used to do in 2.8.

You can disable Single Window Mode in the Windows tab.

Louis-Philippe Véronneau https://veronneau.org/ Louis-Philippe Véronneau

Two days afterward

Sun, 16/09/2018 - 2:03am

Sheena plodded down the stairs barefoot, her shiny bunions glinting in the cheap fluorescent light. “My boobs hurt,” she announced.

“That happens every month,” mumbled Luke, not looking up from his newspaper.

“It does not!” she retorted. “I think I'm perimenopausal.”

“At age 29?” he asked skeptically.

“Don't mansplain perimenopause to me!” she shouted.

“Okay,” he said, putting down the paper and walking over to embrace her.

“My boobs hurt,” she whispered.

Posted on 2018-09-16 Tags: mintings C https://xana.scru.org Yammering

Backing the wrong horse?

Sat, 15/09/2018 - 2:13pm

I started using the Ruby programming language in around 2003 or 2004, but stopped at some point later, perhaps around 2008. At the time I was frustrated with the approach the Ruby community took to managing packages of Ruby software: Ruby Gems. They interact really badly with distribution packaging and made the jobs of organisations like Debian more difficult. This was around the time that Ruby on Rails was making a big splash for web application development (I think version 2.0 had just come out). I did fork out for the predominant Ruby on Rails book to try it out. Unfortunately the software was evolving so quickly that the very first examples in the book no longer worked with the latest versions of Rails. I wasn't doing a lot of web development at the time anyway, so I put the book, Rails and Ruby itself on the shelf and moved on to looking at the Python programming language instead.

Since then I've written lots of Python, both professionally and personally. Whenever it looked like a job was best solved with scripting, I'd pick up Python. I hadn't stopped to reflect on the experience much at all, beyond being glad I wasn't writing Perl any more (the first language I had any real traction with, 20 years ago).

I'm still writing Python on most work days, and there are bits of it that I do really like, but there are also aspects I really don't. Some of the stuff I work on needs to work in both Python 2 and 3, and that can be painful. The whole 2-versus-3 situation is awkward: I'd much rather just focus on 3, but Python 3 didn't ship in (at least) RHEL 7, although it looks like it will in 8.
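For readers who haven't had the pleasure: a lot of the 2-and-3 pain lives at the bytes/text boundary, since Python 2's str is bytes while Python 3's is text. A hypothetical pair of helpers like this (my own illustration, not taken from any particular codebase) ends up scattered through code that has to run on both:

```python
def ensure_text(value, encoding="utf-8"):
    """Return a text string (unicode on Python 2, str on Python 3)."""
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value

def ensure_bytes(value, encoding="utf-8"):
    """Return a byte string on either interpreter."""
    if isinstance(value, bytes):
        return value
    return value.encode(encoding)
```

Every function that touches file contents or network data ends up calling one of these, just in case.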

Recently I dusted off some 12-year-old Ruby code and had a pleasant experience interacting with Ruby again. It made me wonder: had I perhaps backed the wrong horse? In some respects, clearly not: being proficient with Python was immediately helpful when I started my current job (and may have had a hand in getting me hired). But in other respects, I wonder how much time I've wasted wrestling with e.g. Python's verbose, rigid regular expression library when Ruby has nice language-native regular expression operators (taken straight from Perl), or with the really awkward support for Unicode in Python 2 (which reminds me of Perl for all the wrong reasons).

Next time I have a computing problem to solve where it looks like a script is the right approach, I'm going to give Ruby another try. Assuming I don't go for Haskell instead, of course. Or, perhaps I should try something completely different? One piece of advice that resonated with me from the excellent book The Pragmatic Programmer was "Learn a new (programming) language every year". It was only recently that I reflected that I haven't learned a completely new language for a very long time. I tried Go in 2013 but my attempt petered out. Should I pick that back up? It has a lot of traction in the stuff I do in my day job (Kubernetes, Docker, Openshift, etc.). "Rust" looks interesting, but a bit impenetrable at first glance. Idris? Lua? Something else?

jmtd http://jmtd.net/log/ Jonathan Dowland's Weblog

Recommendations for software?

Sat, 15/09/2018 - 11:01am

A quick post with two questions:

  • What spam-filtering software do you recommend?
  • Is there a PAM module for testing with HaveIBeenPwnd?
    • If not would you sponsor me to write it? ;)

So I've been using crm114 to perform spam-filtering on my incoming mail, via procmail, for the past few years.

Today I discovered it had archived about 12Gb of my email history, because I'd never pruned it. (Beneath ~/.crm/.)

So I wonder if there are better/simpler/different Bayesian filters out there that I should be switching to? Recommendations welcome - but don't say "SpamAssassin", thanks!

Secondly, the excellent Have I Been Pwned site provides an API which allows you to test whether a password has previously been included in a leak. This is great, and I've integrated their API into a couple of my own applications, but I was thinking on the bus home tonight that it might be worth tying it into PAM.

Sure, in the interests of security people should use key-based authentication for SSH, but .. most people don't. Even so, if keys are used exclusively, a PAM module would allow you to validate that the password used for sudo hasn't previously been leaked.

So it seems like there is value in a PAM module to do a lookup at authentication-time, via libcurl.
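For what it's worth, the Pwned Passwords API supports a k-anonymity range lookup, so such a module never has to send the full password (or even its full hash) over the wire. A rough Python sketch of the protocol follows — the function names are mine, and a real PAM module would do the same dance in C with libcurl:

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password):
    """Split the uppercase SHA-1 hex digest into the 5-character prefix
    sent to the API and the 35-character suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password):
    """Return how many known breaches the password appears in (0 if none).
    Only the 5-character hash prefix ever leaves the machine."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = "https://api.pwnedpasswords.com/range/" + prefix
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8")
    # Each response line is "HASH-SUFFIX:COUNT".
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

A PAM module built on this idea would deny (or at least warn about) any password with a non-zero count.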

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

Autobuilding Debian packages on salsa with Gitlab CI

Fri, 14/09/2018 - 4:45pm

Now that Debian has migrated away from alioth and towards a gitlab instance known as salsa, we get a pretty advanced Continuous Integration system for (almost) free. Given that, it might make sense to use that setup to autobuild and autotest a package when committing something. I had a look at doing so for one of my packages, ola; the reason I chose that package is that it comes with an autopkgtest, which makes testing it slightly easier (even if the autopkgtest is far from complete).

Gitlab CI is configured through a .gitlab-ci.yml file, which supports many options and may therefore be a bit complicated for first-time users. Since I've worked with it before, I understand how it works, so I thought it might be useful to show people how you can do things.

First, let's look at the .gitlab-ci.yml file which I wrote for the ola package:

stages:
  - build
  - autopkgtest

.build: &build
  before_script:
    - apt-get update
    - apt-get -y install devscripts adduser fakeroot sudo
    - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
    - adduser --disabled-password --gecos "" builduser
    - chown -R builduser:builduser .
    - chown builduser:builduser ..
  stage: build
  artifacts:
    paths:
      - built
  script:
    - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
    - mkdir built
    - dcmd mv ../*ges built/

.test: &test
  before_script:
    - apt-get update
    - apt-get -y install autopkgtest
  stage: autopkgtest
  script:
    - autopkgtest built/*ges -- null

build:testing:
  <<: *build
  image: debian:testing

build:unstable:
  <<: *build
  image: debian:sid

test:testing:
  <<: *test
  dependencies:
    - build:testing
  image: debian:testing

test:unstable:
  <<: *test
  dependencies:
    - build:unstable
  image: debian:sid

That's a bit much. How does it work?

Let's look at every individual toplevel key in the .gitlab-ci.yml file:

stages:
  - build
  - autopkgtest

Gitlab CI has a "stages" feature. A stage can have multiple jobs, which will run in parallel, and gitlab CI won't proceed to the next stage unless and until all the jobs in the previous stage have finished. Jobs from one stage can use files from a previous stage by way of the "artifacts" or "cache" features (which we'll get to later). However, in order to be able to use the stages feature, you have to create the stages first. That's what we do here.

.build: &build
  before_script:
    - apt-get update
    - apt-get -y install devscripts autoconf automake adduser fakeroot sudo
    - autoreconf -f -i
    - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
    - adduser --disabled-password --gecos "" builduser
    - chown -R builduser:builduser .
    - chown builduser:builduser ..
  stage: build
  artifacts:
    paths:
      - built
  script:
    - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
    - mkdir built
    - dcmd mv ../*ges built/

This tells gitlab CI what to do when building the ola package. The main bit is the script: key in this template: it essentially tells gitlab CI to run dpkg-buildpackage. However, before we can do so, we need to install all the build-dependencies and a few helper things, as well as create a non-root user (since ola refuses to be built as root). This we do in the before_script: key. Finally, once the packages have been built, we create a built directory, and use devscripts' dcmd to move the output of the dpkg-buildpackage command into the built directory.

Note that the name of this key starts with a dot. This signals to gitlab CI that it is a "hidden" job, which it should not start by default. Additionally, we create an anchor (the &build at the end of that line) that we can refer to later. This makes it a job template, not a job itself, that we can reuse if we want to.

The reason we split up the script to be run into three different scripts (before_script, script, and after_script) is simply so that gitlab can understand the difference between "something is wrong with this commit" and "we failed to even configure the build system". It's not strictly necessary, but I find it helpful.

Since we configured the built directory as the artifacts path, gitlab will do two things:

  • First, it will create a .zip file in gitlab, which allows you to download the packages from the gitlab webinterface (and inspect them if need be). The length of time for which the artifacts are stored can be configured by way of the artifacts:expire_in key; if not set, it defaults to 30 days or to whatever the salsa maintainers have configured (I'm not sure which).
  • Second, it will make the artifacts available in the same location on jobs in the next stage.

The first can be avoided by using the cache feature rather than the artifacts one, if preferred.
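For reference, the retention period mentioned above is set with the artifacts:expire_in key inside the artifacts section; a minimal sketch (the "1 week" value is made up for illustration, any gitlab duration string works):

```yaml
.build: &build
  # ... rest of the template as above ...
  artifacts:
    paths:
      - built
    expire_in: 1 week  # hypothetical value; "30 days", "3 mins", etc. also work
```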

.test: &test
  before_script:
    - apt-get update
    - apt-get -y install autopkgtest
  stage: autopkgtest
  script:
    - autopkgtest built/*ges -- null

This is very similar to the build template that we had before, except that it sets up and runs autopkgtest rather than dpkg-buildpackage, and that it does so in the autopkgtest stage rather than the build one, but there's nothing new here.

build:testing:
  <<: *build
  image: debian:testing

build:unstable:
  <<: *build
  image: debian:sid

These two use the build template that we defined before. This is done by way of the <<: *build line, which is YAML-ese to say "inject the other template here". In addition, we add extra configuration -- in this case, we simply state that we want to build inside the debian:testing docker image in the build:testing job, and inside the debian:sid docker image in the build:unstable job.
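To make the merge explicit: after YAML anchor resolution, build:testing behaves as if it had been written out in full, with the image key added on top of everything inherited from the .build template. Sketched out (abbreviated, not literally what gitlab stores):

```yaml
build:testing:
  image: debian:testing
  stage: build
  before_script:
    - apt-get update
    # ... remaining before_script lines from the .build template ...
  script:
    - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
    - mkdir built
    - dcmd mv ../*ges built/
  artifacts:
    paths:
      - built
```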

test:testing:
  <<: *test
  dependencies:
    - build:testing
  image: debian:testing

test:unstable:
  <<: *test
  dependencies:
    - build:unstable
  image: debian:sid

This is almost the same as the build:testing and the build:unstable jobs, except that:

  • We instantiate the test template, not the build one;
  • We say that the test:testing job depends on the build:testing one. This does not cause the job to start before the end of the previous stage (that is not possible); instead, it tells gitlab that the artifacts created in the build:testing job should be copied into the test:testing working directory. Without this line, all artifacts from all jobs from the previous stage would be copied, which in this case would create file conflicts (since the files from the build:testing job have the same name as the ones from the build:unstable one).

It is also possible to run autopkgtest in the same image in which the build was done. The downside, however, is that if one of your built packages is missing a dependency that happens to be installed anyway as an indirect dependency of one of your build dependencies, you won't notice; by throwing away the docker container in which the package was built and running autopkgtest in a pristine container, we avoid this issue.

With that, you have a complete working example of how to do continuous integration for Debian packaging. To see it work in practice, you might want to look at the ola version.

UPDATE (2018-09-16): dropped the autoreconf call; it isn't needed (it was there because the build didn't work on the first try and I thought that might be related, but that turned out to be a red herring, and I forgot to drop it).

Wouter Verhelst https://grep.be/blog//pd/ pd

New website for vmdb2

Fri, 14/09/2018 - 3:00pm

I've set up a new website for vmdb2, my tool for building Debian images (basically "debootstrap, except in a disk image"). As usual for my websites, it's ugly. Feedback welcome.

Lars Wirzenius' blog http://blog.liw.fi/englishfeed/ englishfeed

Gaming: Rise of the Tomb Raider

Fri, 14/09/2018 - 4:07am

Over the last weekend I finally finished The Rise of the Tomb Raider. As I wrote exactly 4 months ago when I started the game, I am a complete newbie to this kind of game, and was blown away by the visual quality and great gameplay.

I was really surprised by how huge the areas I had to explore over those four months were. Many of them feature really excellent nature scenery; some have a depressingly dark and solemn atmosphere.

Another thing I learned is that the Challenge Tombs – a kind of puzzle challenge – weren't that important in previous games. I enjoyed these puzzles much more than the fighting sequences (also because I am really bad at combat and had to die so many times before succeeding!).

Lots of sliding down on ropes, jumping, running, diving, often into the unknown.

In the last part of the game, when Lara enters the city underneath the glacier, one is reminded of the scene where Frodo tries to enter Mordor, seeing all the dark riders and troops.

The final approach starts, there is still a long way (and many strange creatures to fight), but at least the final destination is in sight!

I finished the game at 100%, because I went back to all the areas and used Stella's Tomb Raider Site walkthrough to help me find all the items. I think it is practically impossible within a lifetime to find all the items alone, without help. This is especially funny because in one of the trailers one of the developers reckons on 15h of gameplay, or 30h if you want to finish at 100%. It took me 58h (!) to finish at 100% … and that with a walkthrough!!!

Anyway, tomorrow Shadow of the Tomb Raider will be released, and I could also start the first game of the series, Tomb Raider, but I got a bit worn out by all the combat activity and decided to concentrate on a hard-core puzzle game, The Witness, which features loads and loads of puzzles that build from simple to complex and combine into a very interesting game. Now I only need the time …

Norbert Preining https://www.preining.info/blog There and back again

20180913-reproducible-builds-paris-meeting

Thu, 13/09/2018 - 11:54pm
Reproducible Builds 2018 Paris meeting

Many lovely people interested in reproducible builds will meet again at a three-day event in Paris. We will welcome both previous attendees and new projects alike! We hope to discuss, connect and exchange ideas in order to grow the reproducible builds effort, and we would be delighted if you'd join! This is the space we'll bring to life:

And whilst the exact content of the meeting will be shaped by the participants once we're there, the main goals will include:

  • Updating & exchanging the status of reproducible builds in various projects.
  • Improving collaboration both between and inside projects.
  • Expanding the scope and reach of reproducible builds to more projects.
  • Working and hacking together on solutions.
  • Brainstorming designs for tools enabling end-users to get the most benefits from reproducible builds.
  • Discussing how reproducible builds will be usable and meaningful to users and developers alike.

Please reach out if you'd like to participate in hopefully interesting, inspiring and intense technical sessions about reproducible builds and beyond!

Holger Levsen http://layer-acht.org/thinking/ Any sufficiently advanced thinking is indistinguishable from madness

What is the difference between moderation and censorship?

Thu, 13/09/2018 - 11:09pm

FSFE fellows recently started discussing my blog posts about Who were the fellowship? and An FSFE Fellowship Representative's dilemma.

Fellows making posts in support of reform have reported that their emails were rejected. Some fellows had CC'd me on their posts to the list, and these posts never appeared publicly. These are some examples of responses received by a fellow trying to post on the list:

The list moderation team decided now to put your email address on moderation for one month. This is not censorship.

One fellow forwarded me a rejected message to look at. It isn't obscene, doesn't attack anybody and doesn't violate the code of conduct. The fellow writes:

+1 for somebody to answer the original questions with real answers
-1 for more character assassination

Censors moderators responded to that fellow:

This message is in constructive and unsuited for a public discussion list.

Why would moderators block something like that? In the same thread, they allowed some very personal attack messages in favour of existing management.

Moderation + Bias = Censorship

Even links to the public list archives are giving errors and people are joking that they will only work again after the censors PR team change all the previous emails to comply with the censorship communications policy exposed in my last blog.

Fellows have started noticing that the blog of their representative is not being syndicated on Planet FSFE any more.

Some people complained that my last blog didn't provide evidence to justify my concerns about censorship. I'd like to thank FSFE management for helping me respond to that concern so conclusively with these heavy-handed actions against the community over the last 48 hours.

The collapse of the fellowship described in my earlier blog has been caused by FSFE management decisions. The solutions need to come from the grass roots. A totalitarian crackdown on all communications is a great way to make sure that never happens.

FSFE claims to be a representative of the free software community in Europe. Does this behaviour reflect how other communities operate? How successful would other communities be if they suffocated ideas in this manner?

This is what people see right now trying to follow links to the main FSFE Discussion list archive:

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Debian LTS work, August 2018

Thu, 13/09/2018 - 1:54pm

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 8 hours from July. I worked only 5 hours and therefore carried over 18 hours to September.

I prepared and uploaded updates to the linux-4.9 (DLA 1466-1, DLA 1481-1) and linux-latest-4.9 packages.

Ben Hutchings https://www.decadent.org.uk/ben/blog Better living through software

TeX Live contrib updates

Thu, 13/09/2018 - 4:56am

It has now been more than a year since I took over tlcontrib from Taco and started providing it at the TeX Live contrib repository. It now serves the old TeX Live 2017 as well as the current TeX Live 2018, and since last year the number of packages has increased from 52 to 70.

Recent changes include pTeX support packages for non-free fonts and more packages from the AcroTeX bundle. In particular since the last post the following packages have been added: aeb-mobile, aeb-tilebg, aebenvelope, cjk-gs-integrate-macos, comicsans, datepicker-pro, digicap-pro, dps, eq-save, fetchbibpes, japanese-otf-nonfree, japanese-otf-uptex-nonfree, mkstmpdad, opacity-pro, ptex-fontmaps-macos, qrcstamps.

Here I want to thank Jürgen Gilg for reminding me consistently of updates I have missed, big thanks!

To recall what TLContrib is for: it collects packages that are not distributed inside TeX Live proper for one or another of the following reasons:

  • because it is not free software according to the FSF guidelines;
  • because it is an executable update;
  • because it is not available on CTAN;
  • because it is an intermediate release for testing.

In short, anything related to TeX that cannot be in TeX Live but can still legally be distributed over the Internet can have a place on TLContrib. The full list of packages can be seen here.

Please see the main page for Quickstart, History, and details about how to contribute packages.

Last but not least, while this is a service to make access to non-free packages easier for users of the TeX Live Manager, our aim is to have as many packages as possible made completely free and included in TeX Live proper!

Enjoy.

Norbert Preining https://www.preining.info/blog There and back again

digest 0.6.17

Thu, 13/09/2018 - 2:06am

digest version 0.6.17 arrived on CRAN earlier today after a day of gestation in the bowels of CRAN, and should get uploaded to Debian in due course.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64 and murmur32 algorithms) permitting easy comparison of R language objects.
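The core idea described above (serialize an object to bytes, then hash the bytes, so that equal objects get equal digests) can be sketched in Python. This is an illustration of the technique only, not digest's actual implementation, and the function name object_digest is made up for this sketch:

```python
import hashlib
import pickle

def object_digest(obj, algo="sha256"):
    """Serialize an object and hash the resulting bytes.

    Mirrors the idea behind digest: two structurally equal objects
    serialize to the same bytes and therefore hash to the same digest.
    """
    data = pickle.dumps(obj, protocol=4)  # fixed protocol for stable output
    return hashlib.new(algo, data).hexdigest()

a = {"x": [1, 2, 3], "y": "hello"}
b = {"x": [1, 2, 3], "y": "hello"}
print(object_digest(a) == object_digest(b))  # equal objects, equal digests
```

Note that, as in R, serialization details matter: a different pickle protocol (or a different serializer) would yield different digests for the same object.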

This release brings another robustification, thanks to Radford Neal, who noticed a segfault in 32-bit mode on Sparc running Solaris. Yay for esoteric setups. Thanks to his very nice pull request, this is taken care of, and it also squashed one UBSAN error under the standard gcc setup. Two files remain with UBSAN issues, though; help would be welcome!

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Disappointment on the new commute

Wed, 12/09/2018 - 6:53pm

Imagine my disappointment when I discovered that signs on Stanford's campus pointing to their "Enchanted Broccoli Forest" and "Narnia"—both of which I have been passing daily on my new commute—merely indicate the location of student living groups with whimsical names.

Benjamin Mako Hill https://mako.cc/copyrighteous copyrighteous
