
Feed aggregator

Freexian’s report about Debian Long Term Support, August 2018

Planet Debian - Wed, 19/09/2018 - 10:41am

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, about 220 work hours have been dispatched among 14 paid contributors. Their reports are available:

  • Abhijith PA did 5 hours (out of 10 hours allocated, thus keeping 5 extra hours for September).
  • Antoine Beaupré did 23.75 hours.
  • Ben Hutchings did 5 hours (out of 15 hours allocated + 8 extra hours, thus keeping 18 extra hours for September).
  • Brian May did 10 hours.
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort did not manage to work during August and returned all his hours (23.75 hours allocated + 19.5 extra hours) to the pool.
  • Holger Levsen did 10 hours (out of 8 hours allocated + 16 extra hours, thus keeping 14 extra hours for September).
  • Hugo Lefeuvre did nothing (out of 10 hours allocated, but he gave back those hours).
  • Markus Koschany did 23.75 hours.
  • Mike Gabriel did 6 hours (out of 8 hours allocated, thus keeping 2 extra hours for September).
  • Ola Lundqvist did 4.5 hours (out of 8 hours allocated + 8 remaining hours, thus keeping 11.5 extra hours for September).
  • Roberto C. Sanchez did 6 hours (out of 18h allocated, thus keeping 12 extra hours for September).
  • Santiago Ruano Rincón did 8 hours (out of 20 hours allocated, thus keeping 12 extra hours for September).
  • Thorsten Alteholz did 23.75 hours.
Evolution of the situation

The number of sponsored hours decreased to 206 hours per month: we lost two sponsors and gained only one.

The security tracker currently lists 38 packages with a known CVE and the dla-needed.txt file has 24 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Raphaël Hertzog https://raphaelhertzog.com apt-get install debian-wizard

What is the relationship between FSF and FSFE?

Planet Debian - Wed, 19/09/2018 - 1:21am

Ever since I started blogging about my role in FSFE as Fellowship representative, I've been receiving communications and queries from various people, both in public and in private, about the relationship between FSF and FSFE. I've written this post to try and document my own experiences of the issue; maybe some people will find this helpful. These comments have also been shared on the LibrePlanet mailing list for discussion (subscribe here).

Being the elected Fellowship representative means I am both a member of FSFE e.V. and also possess a mandate to look out for the interests of the community of volunteers and donors (they are not members of FSFE e.V.). In both capacities, I feel uncomfortable about the current situation because of the confusion it creates in the community and the risk that volunteers or donors may be misled.

The FSF has a well known name associated with a distinctive philosophy. Whether people agree with that philosophy or not, they usually know what FSF believes in. That is the power of a brand.

When people see the name FSFE, they often believe it is a subsidiary or group working within the FSF. The way that brands work, people associate the philosophy with the name, just as somebody buying a Ferrari in Berlin expects it to do the same things that a Ferrari does in Boston.

To give an example, when I refer to "our president" in any conversation, people not knowledgeable about the politics believe I am referring to RMS. More specifically, if I say to somebody "would you like me to see if our president can speak at your event?", some people think it is a reference to RMS. In fact, FSFE was set up as a completely independent organization with distinct membership and management and therefore a different president. When I try to explain this to people, they sometimes lose interest and the conversation can go cold very quickly.

FSFE leadership have sometimes diverged from FSF philosophy, for example, it is not hard to find some quotes about "open source" and one fellow recently expressed concern that some people behave like "FSF Light". But given that FSF's crown jewels are the philosophy, how can an "FSF Light" mean anything? What would "Ferrari Light" look like, a red lawnmower? Would it be a fair use of the name Ferrari?

Some concerned fellows have recently gone as far as accusing the FSFE staff of effectively domain squatting or trolling the FSF (I can't link to that because of FSFE's censorship regime). When questions appear about the relationship in public, there is sometimes a violent response with no firm details. (I can't link to that either because of FSFE's censorship regime)

The FSFE constitution calls on FSFE to "join forces" with the FSF and sometimes this appears to happen but I feel this could be taken further.

FSF people have also produced vast amounts of code (the GNU Project) and some donors appear to be contributing funds to FSFE in gratitude for that or in the belief they are supporting that. However, it is not clear to me that funds given to FSFE support that work. As Fellowship representative, a big part of my role is to think about the best interests of those donors and so the possibility that they are being confused concerns me.

Given the vast amounts of money and goodwill contributed by the community to FSFE e.V., including a recent bequest of EUR 150,000, and the direct questions about this issue, I feel it is becoming more important for both organizations to clarify the relationship.

FSFE has a transparency page on the web site and this would be a good place to publish all documents about their relationship with FSF. For example, FSFE could publish the documents explaining their authorization to use a name derived from FSF and the extent to which they are committed to adhere to FSF's core philosophy and remain true to that in the long term. FSF could also publish some guidelines about the characteristics of a sister organization, especially when that organization is authorized to share the FSF's name.

In the specific case of sister organizations who benefit from the tremendous privilege of using the FSF's name, could it also remove ambiguity if FSF mandated the titles used by officers of sister organizations? For example, the "FSFE President" would be referred to as "FSFE European President", or maybe the word president could be avoided in all sister organizations.

People also raise the question of whether FSFE can speak for all Europeans given that it only has a large presence in Germany and other organizations are bigger in other European countries. Would it be fair for some of those other groups to aspire to sister organization status and name-sharing rights too? Could dozens of smaller FSF sister organizations dilute the impact of one or two who go off-script?

Even if FSFE were to distance itself from FSF or even start using a new name and philosophy, as a member, representative and also volunteer I would feel uncomfortable with that, as there is a legacy of donations and volunteering that has brought FSFE to the position the organization is in today.

That said, I would like to emphasize that I regard RMS and the FSF, as the original FSF, as having the final authority over the use of the name and I fully respect FSF's right to act unilaterally, negotiate with sister organizations or simply leave things as they are.

If you have questions or concerns about this topic, I would invite you to raise them on the LibrePlanet-discuss mailing list or feel free to email me directly.

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Using ARP via netlink to detect presence

Planet Debian - Tue, 18/09/2018 - 9:18pm

If you remember my first post about home automation I mentioned a desire to use some sort of presence detection as part of deciding when to turn the heat on. Home Assistant has a wide selection of presence detection modules available, but the easy ones didn’t seem like the right solutions. I don’t want something that has to run on my phone to say where I am, but using the phone as the proxy for presence seemed reasonable. It connects to the wifi when at home, so watching for that involves no overhead on the phone and should be reliable (as long as I haven’t let my phone run down). I run OpenWRT on my main house router, and there are a number of solutions which work by scraping the web interface. openwrt_hass_devicetracker is a bit better, but it watches the hostapd logs, and my wifi is actually handled by some UniFis.

So how to do it more efficiently? Learn how to watch for ARP requests via Netlink! That way I could have something sitting idle and only doing any work when it sees a new event, something small enough to run directly on the router. I could then tie it together with the Mosquitto client libraries and announce presence via MQTT, tying it into Home Assistant with the MQTT Device Tracker.

I’m going to go into a bit more detail about the Netlink side of things, because I found it hard to find simple documentation and ended up reading kernel source code to figure out what I wanted. If you’re not interested in that you can find my mqtt-arp (I suck at naming simple things) tool locally or on GitHub. It ends up as an 8k binary for my MIPS based OpenWRT box and just needs to be fed a list of MAC addresses to watch for and details of the MQTT server. When it sees a device it cares about make an ARP request, it reports the presence for that device as “home” (configurable), rate limiting it to at most once every 2 minutes. Once it hasn’t seen anything from the device for 10 minutes it declares the location to be unknown. I have found Samsung phones are a little prone to disconnecting from the wifi when not in use, so you might need to lengthen the timeout if all you have are Samsung devices.

Home Assistant configuration is easy:

device_tracker:
  - platform: mqtt
    devices:
      noodles: 'location/by-mac/0C:11:22:33:44:55'
      helen: 'location/by-mac/4C:11:22:33:44:55'

On to the Netlink stuff…

Firstly, you can watch the netlink messages we’re interested in using iproute2 - just run ip monitor. This works as an unprivileged user, which is nice. The messages arrive via an AF_NETLINK routing socket (rtnetlink(7)):

int sock;

sock = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

We then want to indicate we’re listening for neighbour events:

struct sockaddr_nl group_addr;

bzero(&group_addr, sizeof(group_addr));
group_addr.nl_family = AF_NETLINK;
group_addr.nl_pid = getpid();
group_addr.nl_groups = RTMGRP_NEIGH;

bind(sock, (struct sockaddr *) &group_addr, sizeof(group_addr));

At this point we’re good to go and can wait for an event message:

received = recv(sock, buf, sizeof(buf), 0);

This will be a struct nlmsghdr message and the nlmsg_type field will provide details of what type. In particular I look for RTM_NEWNEIGH, indicating a new neighbour has been seen. This is of type struct ndmsg and immediately follows the struct nlmsghdr in the received message. That has details of the address family type (IPv6 vs IPv4), the state and various flags (such as whether it’s NUD_REACHABLE indicating presence). The only slightly tricky bit comes in working out the MAC address, which is one of potentially several struct nlattr attributes which follow the struct ndmsg. In particular I’m interested in an nla_type of NDA_LLADDR, in which case the attribute data is the MAC address. The main_loop function in mqtt-arp.c shows this - it’s fairly simple stuff, and works nicely. It was just figuring out the relationship between it all and the exact messages I cared about that took me a little time to track down.
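
Putting those pieces together, here is a minimal sketch of the receive loop. This is my own illustration rather than the actual mqtt-arp code: it uses the standard rtnetlink macros (NLMSG_OK, NLMSG_NEXT, RTA_OK, RTA_NEXT) to walk the messages and attributes described above and simply prints any MAC address it finds; error handling and the MQTT side are left out.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <linux/neighbour.h>

/* Handle one netlink message: filter for RTM_NEWNEIGH + NUD_REACHABLE,
 * then walk the attributes looking for NDA_LLADDR (the MAC address). */
static void handle_msg(struct nlmsghdr *nlh)
{
    struct ndmsg *ndm = NLMSG_DATA(nlh);
    struct rtattr *rta;
    int len;

    if (nlh->nlmsg_type != RTM_NEWNEIGH)
        return;                         /* only new-neighbour events */
    if (!(ndm->ndm_state & NUD_REACHABLE))
        return;                         /* only devices that are actually present */

    /* Attributes follow the struct ndmsg in the same buffer. */
    rta = (struct rtattr *)((char *)ndm + NLMSG_ALIGN(sizeof(*ndm)));
    len = nlh->nlmsg_len - NLMSG_LENGTH(sizeof(*ndm));
    for (; RTA_OK(rta, len); rta = RTA_NEXT(rta, len)) {
        if (rta->rta_type == NDA_LLADDR && RTA_PAYLOAD(rta) == 6) {
            unsigned char *mac = RTA_DATA(rta);
            printf("saw %02x:%02x:%02x:%02x:%02x:%02x\n",
                   mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
        }
    }
}

int main(void)
{
    struct sockaddr_nl group_addr;
    char buf[8192];
    int sock;

    sock = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
    memset(&group_addr, 0, sizeof(group_addr));
    group_addr.nl_family = AF_NETLINK;
    group_addr.nl_pid = getpid();
    group_addr.nl_groups = RTMGRP_NEIGH;
    bind(sock, (struct sockaddr *) &group_addr, sizeof(group_addr));

    for (;;) {
        ssize_t received = recv(sock, buf, sizeof(buf), 0);
        struct nlmsghdr *nlh;

        if (received <= 0)
            break;
        for (nlh = (struct nlmsghdr *) buf; NLMSG_OK(nlh, received);
             nlh = NLMSG_NEXT(nlh, received))
            handle_msg(nlh);
    }
    return 0;
}

In the real tool, instead of printing, the MAC would be checked against the configured watch list and a suitably rate-limited MQTT message published.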

Jonathan McDowell https://www.earth.li/~noodles/blog/ Noodles' Emptiness

censored Amazon review of Sandisk Ultra 32GB Micro SDHC Card

Planet Debian - Tue, 18/09/2018 - 8:03pm

★ counterfeits in amazon pipeline

The 32 gb card I bought here at Amazon turned out to be fake. Within days I was getting read errors, even though the card was still mostly empty.

The logo is noticeably blurry compared with a 32 gb card purchased elsewhere. Also, the color of the grey half of the card is subtly wrong, and the lettering is subtly wrong.

Amazon apparently has counterfeit stock in their pipeline; google "amazon counterfeit" for more.

You will not find this review on Sandisk Ultra 32GB Micro SDHC UHS-I Card with Adapter - 98MB/s U1 A1 - SDSQUAR-032G-GN6MA because it was rejected. As far as I can tell my review violates none of Amazon's posted guidelines. But it's specific about how to tell this card is counterfeit, and it mentions a real and ongoing issue that Amazon clearly wants to cover up.

Joey Hess http://joeyh.name/blog/ see shy jo

Reproducible Builds: Weekly report #177

Planet Debian - Tue, 18/09/2018 - 7:35pm

Here’s what happened in the Reproducible Builds effort between Sunday September 9 and Saturday September 15 2018:

Patches filed

diffoscope development

Chris Lamb made a large number of changes to diffoscope, our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages:

These changes were then uploaded as diffoscope version 101.

Test framework development

There were a number of updates by Holger Levsen this month to our Jenkins-based testing framework that powers tests.reproducible-builds.org, including:

Misc.

This week’s edition was written by Arnout Engelen, Bernhard M. Wiedemann, Chris Lamb, heinrich5991, Holger Levsen and Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible-builds.org/blog/ reproducible-builds.org

Digital Minimalism and Deep Work

Planet Debian - Tue, 18/09/2018 - 2:44pm

Russ Allbery of the Debian project writes reviews of books he has read on his blog. It was through Russ's review that I learned of "Deep Work" by Cal Newport, and duly requested it from my local library.

I've a long-held skepticism of self-help books, but several aspects of this one strike the right notes for me. The author is a Computer Scientist, so there's a sense of kinship there, but the writing also follows the standard academic practice of citing sources and applies a certain rigour to the new ideas that are presented. Despite this, there are a few sections of the book which I felt lacked much supporting evidence, or where some obvious questions about the relevant concept were not being asked. One of the case studies in the book is of a part-time PhD student with a full-time job and a young child, which I can relate to. The author obviously follows his own advice: he runs a productivity blog at calnewport.com and has no other social media presence. One of the key productivity tips he espouses in the book (and elsewhere) is simply "quit social media".

Through Newport's blog I learned that the title of his next book is Digital Minimalism. This intrigued me, because since I started thinking about minimalism myself, I've wondered about the difference of approach needed between minimalism in the "real world" and in digital domains. It turns out Newport's next book is about something different: from what I can tell, it focusses on controlling how one spends one's time online for maximum productivity.

That's an interesting topic which I have more to write about at some point. However, my line of thought for the title "digital minimalism" spawned from reading Marie Kondo, Fumio Sasaki and others. Many of the tips they offer to their readers revolve around moving meaning away from physical clutter and into the digital domain: scan your important papers, photograph your keepsakes, and throw away the physical copies. It struck me that whilst this is useful advice for addressing the immediate problem of clutter in the physical world, it exacerbates the problem of digital clutter, especially if we don't have good systems for effectively managing digital archives. Broadly speaking, I don't think we do: at least, not ones that are readily accessible to the majority of people. I have a hunch that most people have no form of data backup in place at all, switch between digital hosting services in a relatively ad-hoc manner (flickr, snapchat, instagram…) and treat losing data (such as when an old laptop breaks, or a tablet or phone is stolen) as a fact of life, rather than something that could be avoided if our tools (or habits, or both) were better.

jmtd http://jmtd.net/log/ Jonathan Dowland's Weblog

Review: The Collapsing Empire

Planet Debian - Tue, 18/09/2018 - 5:39am

Review: The Collapsing Empire, by John Scalzi

Series: Interdependency #1
Publisher: Tor
Copyright: March 2017
ISBN: 0-7653-8889-8
Format: Kindle
Pages: 333

Cardenia Wu-Patrick was never supposed to become emperox. She had a quiet life with her mother, a professor of ancient languages who had a brief fling with the emperox but otherwise stayed well clear of the court. Her older half-brother was the imperial heir and seemed to enjoy the position and the politics. But then Rennered got himself killed while racing and Cardenia ended up heir whether she wanted it or not, with her father on his deathbed and unwanted pressure on her to take over Rennered's role in a planned marriage of state with the powerful Nohamapetan guild family.

Cardenia has far larger problems than those, but she won't find out about them until becoming emperox.

The Interdependency is an interstellar human empire balanced on top of a complex combination of hereditary empire, feudal guild system, state religion complete with founding prophet, and the Flow. The Flow is this universe's equivalent of the old SF trope of a wormhole network: a strange extra-dimensional space with well-defined entry and exit points and a disregard for the speed of light. The Interdependency relies on it even more than one might expect. As part of the same complex and extremely long-term plan of engineered political stability that created the guild, empire, and church balance of power, the Interdependency created an economic web in which each system is critically dependent on imports from other systems. This plus the natural choke points of the Flow greatly reduces the chances of war.

It also means that Cardenia has inherited an empire that is more fragile than it may appear. Secret research happening at the most far-flung system in the Interdependency is about to tell her just how fragile.

John Clute and Malcolm Edwards provided one of the most famous backhanded compliments in SF criticism in The Encyclopedia of Science Fiction when they described Isaac Asimov as the "default voice" of science fiction: a consistent but undistinguished style that became the baseline that other writers built on or reacted against. The field is now far too large for there to be one default voice in that same way, but John Scalzi's writing reminds me of that comment. He is very good at writing a specific sort of book: a light science fiction story that draws as much on Star Trek as it does on Heinlein, comfortably sits on the framework of standard SF tropes built by other people, adds a bit of humor and a lot of banter, and otherwise moves reliably and competently through a plot. It's not hard to recognize Scalzi's writing, so in that sense he has less of a default voice than Asimov had, but if I had to pick out an average science fiction novel his writing would come immediately to mind. At a time when the field is large enough to splinter into numerous sub-genres that challenge readers in different ways and push into new ideas, Scalzi continues writing straight down the middle of the genre, providing the same sort of comfortable familiarity as the latest summer blockbuster.

This is not high praise, and I am sometimes mystified at the amount of attention Scalzi gets (both positive and negative). I think his largest flaw (and certainly the largest flaw in this book) is that he has very little dynamic range, particularly in his characters. His books have a tendency to collapse into barely-differentiated versions of the same person bantering with each other, all of them sounding very much like Scalzi's own voice on his blog. The Collapsing Empire has emperox Scalzi grappling with news from scientist Scalzi carried by dutiful Scalzi with the help of profane impetuous Scalzi, all maneuvering against devious Scalzi. The characters are easy to keep track of by the roles they play in the plot, and the plot itself is agreeably twisty, but if you're looking for a book to hook into your soul and run you through the gamut of human emotions, this is not it.

That is not necessarily a bad thing. I like that voice; I read Scalzi's blog regularly. He's reliable, and I wonder if that's the secret to his success. I picked up this book because I wanted to read a decent science fiction novel and not take a big risk. It delivered exactly what I asked for. I enjoyed the plot, laughed at some of the characters, felt for Cardenia, enjoyed the way some villainous threats fell flat because of characters who had a firm grasp of what was actually important and acted on it, and am intrigued enough by what will happen next that I'm going to read the sequel. Scalzi aimed to entertain, succeeded, and got another happy customer. (Although I must note that I would have been happier if my favorite character in the book, by far, did not make a premature exit.)

I am mystified at how The Collapsing Empire won a Locus Award for best science fiction novel, though. This is just not an award sort of book, at least in my opinion. It's book four in an urban fantasy series, or the sixth book of Louis L'Amour's Sackett westerns. If you like this sort of thing, you'll like this version of it, and much of the appeal is that it's not risky and requires little investment of effort. I think an award winner should be the sort of book that lingers, that you find yourself thinking about at odd intervals, that expands your view of what's possible to do or feel or understand.

But that complaint is more about awards voters than about Scalzi, who competently executed on exactly what was promised on the tin. I liked the setup and I loved the structure of Cardenia's inheritance of empire, so I do kind of wish I could read the book that, say, Ann Leckie would have written with those elements, but I was entertained in exactly the way that I wanted to be entertained. There's real skill and magic in that.

Followed by The Consuming Fire. This book ends on a cliffhanger, as apparently does the next one, so if that sort of thing bothers you, you may want to wait until they're all available.

Rating: 7 out of 10

Russ Allbery https://www.eyrie.org/~eagle/ Eagle's Path

You Think the Visual Studio Code binary you use is a Free Software? Think again.

Planet Debian - Tue, 18/09/2018 - 12:00am

Did you download your binary of Visual Studio Code directly from the official website? If so, you’re not using Free Software, and only Microsoft knows what was added to this binary. You should assume the worst.

It says « Open Source » and offers to download non open source binary packages. Very misleading.

The Microsoft Trick

I’m not a lawyer and I could be wrong or not accurate enough in my analysis (sorry!), but I’ll try nonetheless to give my understanding of the situation, because the current state of Visual Studio Code licensing misleads most users.

Microsoft uses a simple but clever trick here, allowed by the license of the Visual Studio Code source code: the MIT license, a permissive Free Software license.

Indeed, the MIT license is really straightforward: do whatever you want with this software, keep the original copyright notice, and I’m not responsible for what happens with it. OK. Except that, in the case of Visual Studio Code, it only covers the source code, not the binary.

Unlike the GPL family of licenses, under which both the source code and the binaries built from it are covered by the terms of the license, the MIT license allows Microsoft to make the source code of the software available while doing whatever they want with the binary built from it. And let’s be crystal clear: 99.99% of VSC users will never use the source code directly.

What Microsoft's non-free license looks like

And of course Microsoft purposely does not use the MIT license for the Visual Studio Code binary. Instead they use a fully-armed, freedom-restricting license, the Microsoft Software License.

Let’s have a look at some pieces of it. You can find the full license here: https://code.visualstudio.com/license

This license applies to the Visual Studio Code product. The source code is available under the MIT license agreement.

First sentence of the license. The difference between the license of the source code and the « product », meaning the binary you’re going to use, is clearly stated.

Data Collection. The software may collect information about you and your use of the software, and send that to Microsoft.

Yeah right, no kidding. Big Surprise from Microsoft.

UPDATES. The software may periodically check for updates, and download and install them for you. You may obtain updates only from Microsoft or authorized sources. Microsoft may need to update your system to provide you with updates. You agree to receive these automatic updates without any additional notice. Updates may not include or support all existing software features, services, or peripheral devices.

I’ll break your installation without further notice and I don’t care what you were doing with it before, because, you know.

SCOPE OF LICENSE (…) you may not:

  • work around any technical limitations in the software;

Also known as « hacking » for… years now.

  • reverse engineer, decompile or disassemble the software, or otherwise attempt to derive the source code for the software, except and to the extent required by third party licensing terms governing use of certain open source components that may be included in the software;

Because there is no way anybody should try to find out what we are doing with the binary running on your computer.

  • share, publish, rent or lease the software, or provide the software as a stand-alone offering for others to use.

I may be wrong (again, I’m not a lawyer), but it seems to me they forbid you to redistribute this binary, except under the conditions mentioned in the INSTALLATION AND USE RIGHTS section (mostly for the needs of your company and/or for giving demos of your products using VSC).

The following sections EXPORT RESTRICTIONS and CONSUMER RIGHTS; REGIONAL VARIATIONS include more and more restrictions about using and sharing the binary.

DISCLAIMER OF WARRANTY. The software is licensed “as-is.”

At last a term which could be identified as a term of a Free Software license. But in this case it’s of course to limit any obligation Microsoft could have towards you.

So the Microsoft Software License is definitely not a Free Software license, in case the clever trick of licensing the source code and the binary differently had not already convinced you.

What You Could Do

There are ways to use VSC under good conditions. After all, the VSC source code is Free Software, so why not build it yourself? Some initiatives have also appeared, like this repository. That could be a good start.

As for GNU/Linux distributions, packaging VSC (see here for the discussion in Debian) would be a great way to keep people from being taken in by the Microsoft trick and ending up using a « product » that breaks almost every term of what makes software Free.

About Me

Carl Chenet, Free Software Indie Hacker, Founder of LinuxJobs.io, a Job board dedicated to Free and Open Source Jobs in the US.


Carl Chenet https://carlchenet.com debian – Carl Chenet's Blog

which spare laptop?

Planet Debian - Mon, 17/09/2018 - 3:48pm

I'm in a perpetual state of downsizing and ridding my life (and my family's life) of things we don't need: sometimes old computers. My main (nearly my sole) machine is my work-provided Thinkpad T470s: a fantastic laptop that works so well I haven't had anything to write about it. However, I decided that it was worth keeping just one spare, for emergencies or other odd situations. I have two candidate machines in my possession.

In the blue corner

left: X61S; right: R600

Toshiba Portégé R600. I've actually owned this now for 7 years, buying it originally to replace my beloved x40 which I loaned to my partner. At the time my main work machine was still a desktop. I received a new work laptop soon after buying this so it ended up gathering dust in a cupboard.

It's an extremely light laptop, even by today's standards. It compares favourably with the Apple Macbook Air 11" in that respect. A comfortable keyboard, but no trackpoint and a bog-standard trackpad. 1280x800 16:9 display, albeit TN panel technology with very limited viewing angles. Analog VGA video out on the laptop, but digital DVI-D out is possible via a separate dock, which was cheap and easy to acquire and very stowable. An integrated optical media drive which could be useful. Max 3G RAM (1G soldered, 2G in DIMM slot).

The CPU is apparently a generation newer but lower voltage and thus slower than its rival, which is…

In the red corner

x61s

Thinkpad X61s. The proportions match the Thinkpad X40, so it has a high nostalgia factor. Great keyboard, I love trackpoints, robust build. It has the edge on CPU over the Toshiba. A theoretical maximum of 8G (2x4) RAM, but practically nearer 4G (2x2), as the 4G sticks are too expensive. This is probably the "heart" choice.

The main drawback of the X61s is the display options: a 1024x768 TN panel, and no digital video out: VGA only on the laptop, and VGA only on the optional dock. It’s possible to retro-fit a better panel, but it’s not easy and the parts are now very hard to find. It’s also a surprisingly heavy machine: heavier than I remember the X40 being, but it was long enough ago that my expectations have changed.

The winner

Surprising myself perhaps more than anyone else, I've ended up opting for the Toshiba. The weight was the clincher. The CPU performance difference was too close to matter, and 3G RAM is sufficient for my spare laptop needs. Once I'd installed a spare SSD as the main storage device, day-to-day performance is very good. The resolution difference didn't turn out to be that important: it's still low enough that side-by-side text editor and browser feels crowded, so I end up using the same window management techniques as I would on the X61s.

What do I use it for? I've taken it on a couple of trips or holidays which I wouldn't want to risk my work machine for. I wrote nearly all of liquorice on it in downtime on a holiday to Turkey whilst my daughter was having her afternoon nap. I'm touching up this blog post on it now!

I suppose I should think about passing on the X61s to something/someone else.

jmtd http://jmtd.net/log/ Jonathan Dowland's Weblog

PAM HaveIBeenPwned module

Planet Debian - Mon, 17/09/2018 - 11:01am

So the PAM module which I pondered about in my previous post now exists:

I did mention "sponsorship" in my post which lead to a couple of emails, and the end result of that was that a couple of folk donated to charity in my/its name. Good enough.

Perhaps in the future I'll explore patreon/similar, but I don't feel very in-demand so I'll avoid it for the moment.

Anyway I guess it should be Debian-packaged for neatness, but I'll resist for the moment.

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

Linus apologising

Planet Debian - Mon, 17/09/2018 - 8:45am

Someone pointed me towards this email, in which Linus apologizes for some of his more unhealthy behaviour.

The above is basically a long-winded way to get to the somewhat painful personal admission that hey, I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely.

To me, this came somewhat as a surprise. I'm not really involved in Linux kernel development, and so the history of what led up to this email mostly passed unnoticed, at least for me; but that doesn't mean I cannot recognize how difficult this must have been to write for him.

As I know from experience, admitting that you have made a mistake is hard. Admitting that you have been making the same mistake over and over again is even harder. Doing so publicly? Even more so, since you're placing yourself in a vulnerable position, one that the less honorably inclined will take advantage of if you're not careful.

There isn't much I can contribute to the whole process, but there is this: Thanks, Linus, for being willing to work on those things, which can only make the community healthier as a result. It takes courage to admit things like that, and that is only to be admired. Hopefully this will have the result we're hoping for, too; but that, only time can tell.

Wouter Verhelst https://grep.be/blog//pd/ pd

Jono Bacon: Linus, His Apology, And Why We Should Support Him

Planet Ubuntu - Mon, 17/09/2018 - 12:12am

Today, Linus Torvalds, the creator of Linux, which powers everything from smartwatches to electrical grids, posted a pretty remarkable note on the kernel mailing list.

As a little bit of backstory, Linus has sometimes come under fire for the ways in which he has expressed feedback, provided criticism, and reacted to various scenarios on the kernel mailing list. This criticism has been fair in many cases: he has been overly aggressive at times, and while the kernel maintainers are a tight-knit group, the optics (not just how it looks, but what is actually happening), particularly for those new to kernel development, have often been pretty bad.

Like many conflict scenarios, this feedback has been communicated back to him in both constructive and non-constructive ways. Historically he has been seemingly reluctant to really internalize this feedback, I suspect partially because (a) the Linux kernel is a very successful project, and (b) some of the critics have at times gone nuclear at him (which often doesn’t work as a strategy towards defensive people). Well, things changed today.

In his post today he shared some self-reflection on this feedback:

This week people in our community confronted me about my lifetime of not understanding emotions. My flippant attacks in emails have been both unprofessional and uncalled for. Especially at times when I made it personal. In my quest for a better patch, this made sense to me. I know now this was not OK and I am truly sorry.

He went on to not just share an admission that this has been a problem, but to also share a very personal acceptance that he struggles to understand and engage with people’s emotions:

The above is basically a long-winded way to get to the somewhat painful personal admission that hey, I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely. I am going to take time off and get some assistance on how to understand people’s emotions and respond appropriately.

His post is sure to light up the open source, Linux, and tech world for the next few weeks. For some it will be celebrated as a step in the right direction. For some it will be too little too late, and their animus will remain. For some they will be cautiously supportive, but defer judgement until they have seen his future behavior demonstrate substantive changes.

My Take

I wouldn’t say I know Linus very closely; we have a casual relationship. I see him at conferences from time to time, and we often bump into each other and catch up. I interviewed him for my book and for the Global Learning XPRIZE. From my experience he is a funny, genuine, friendly guy. Interestingly, and not unusually at all for open source, his online persona is rather different from his in-person persona. I am not going to deny that the dust-ups I would see on LKML didn’t reflect the Linus I know. I chalked it up to a mixture of his struggles with social skills, dogmatic pragmatism, and ego.

His post today is a pretty remarkable change of posture for him, and I encourage that we as a community support him in making these changes.

Accepting these personal challenges is tough, particularly for someone in his position. Linux is a global phenomenon. It has resulted in billions of dollars of technology creation, powering thousands of companies, and changing the norms around how software is consumed and created. It is easy to forget that Linux was started by a quiet Finnish kid in his university dorm room. It is important to remember that just because Linux has scaled elegantly, it doesn’t mean that Linus has been able to. He isn’t a codebase, he is a human being, and bugs are harder to spot and fix in humans. You can’t just deploy a fix immediately. It takes time to identify the problem and foster and grow a change. The starting point for this is to support people in that desire for change, not re-litigate the ills of the past: that will get us nowhere quickly.

I am also mindful of ego. None of us like to admit we have an ego, but we all do. You don’t get to build one of the most fundamental technologies of the last thirty years and not have an ego. He built it…they came…and a revolution was energized because of what he created. While Linus’s ego is more subtle, and certainly does not extend to faddish self-promotion, overly expensive suits, or forays into Hollywood (quite the opposite), it has naturally resulted in abrupt and fixed opinions on how his project should run. This sometimes results in him plugging his fingers in his ears when faced with particularly challenging viewpoints from others (he is not the only person guilty of this; many people in similar positions do too). His post today is a clear example of him putting Linux as a project ahead of his own personal ego.

This is important for a few reasons. Firstly, being in such a public position and accepting your personal flaws isn’t a problem many people face, and isn’t a situation many people handle well. I work with a lot of CEOs, and they often say it is the loneliest job on the planet. I have heard American presidents say the same in interviews. This is because they are at the top of the tree with all the responsibility and expectations on their shoulders. Put yourself in Linus’s position: his little project has blown up into a global phenomenon, and he didn’t necessarily have the social tools to be able to handle this change. Ego forces these internal struggles under the surface, pushing them down so they can be avoided. So, to accept them as publicly and openly as he did today is a very firm step in the right direction. Now, the true test will be results, but we all need to provide the breathing space for him to accomplish them.

So, I would encourage everyone to give Linus a shot. This doesn’t mean the frustrations of the past are erased, and he has acknowledged and apologized for these mistakes as a first step. He has accepted that he struggles to understand others’ emotions, and has expressed a desire to improve this for the betterment of the project and himself. He is a human, and the best tonic for humans resolving their own internal struggles is the support and encouragement of other humans. This is not unique to Linus; it is true for anyone who faces similar struggles.

All the best, Linus.

The post Linus, His Apology, And Why We Should Support Him appeared first on Jono Bacon.

Lookalikes

Planet Debian - Sun, 16/09/2018 - 8:18pm

Was my festive shirt the model for the men’s room signs at Daniel K. Inouye International Airport in Honolulu? Did I see the sign on arrival and subconsciously decide to dress similarly when I returned to the airport to depart Hawaii?

Benjamin Mako Hill https://mako.cc/copyrighteous copyrighteous

GIMP 2.10

Planet Debian - Sun, 16/09/2018 - 6:00am

GIMP 2.10 landed in Debian Testing a few weeks ago and I have to say I'm very happy about it. The last major version of GIMP (2.8) was released in 2012, and the new version fixes a lot of bugs and improves the user interface.

I've updated my Beginner's Guide to GIMP (sadly only in French) and in the process I found out a few things I thought I would share:

Theme

The default theme is Dark. Although it looks very nice in my opinion, I don't feel it's a good choice for productivity. The icon pack the theme uses is a monochrome flat 2D render and I feel it makes it hard to differentiate the icons from one another.

I would instead recommend using the Light theme with the Color icon pack.

Single Window Mode

GIMP now enables Single Window Mode by default. That means that Dockable Dialog Windows like the Toolbar or the Layer Window cannot be moved around, but instead are locked to two docks on the right and the left of the screen.

Although you can hide and show these docks using Tab, I feel Single Window Mode is more suitable for larger screens. On my laptop, I still prefer moving the windows around as I used to do in 2.8.

You can disable Single Window Mode in the Windows tab.

Louis-Philippe Véronneau https://veronneau.org/ Louis-Philippe Véronneau

Two days afterward

Planet Debian - Sun, 16/09/2018 - 2:03am

Sheena plodded down the stairs barefoot, her shiny bunions glinting in the cheap fluorescent light. “My boobs hurt,” she announced.

“That happens every month,” mumbled Luke, not looking up from his newspaper.

“It does not!” she retorted. “I think I'm perimenopausal.”

“At age 29?” he asked skeptically.

“Don't mansplain perimenopause to me!” she shouted.

“Okay,” he said, putting down the paper and walking over to embrace her.

“My boobs hurt,” she whispered.

Posted on 2018-09-16 Tags: mintings C https://xana.scru.org Yammering

Backing the wrong horse?

Planet Debian - Sat, 15/09/2018 - 2:13pm

I started using the Ruby programming language in around 2003 or 2004, but stopped at some point later, perhaps around 2008. At the time I was frustrated with the approach the Ruby community took for managing packages of Ruby software: Ruby Gems. They interact really badly with distribution packaging and made the jobs of organisations like Debian more difficult. This was around the time that Ruby on Rails was making a big splash for web application development (I think version 2.0 had just come out). I did fork out for the predominant Ruby on Rails book to try it out. Unfortunately the software was evolving so quickly that the very first examples in the book no longer worked with the latest versions of Rails. I wasn't doing a lot of web development at the time anyway, so I put the book, Rails and Ruby itself on the shelf and moved on to looking at the Python programming language instead.

Since then I've written lots of Python, both professionally and personally. Whenever it looked like a job was best solved with scripting, I'd pick up Python. I hadn't stopped to reflect on the experience much at all, beyond being glad I wasn't writing Perl any more (the first language I had any real traction with, 20 years ago).

I'm still writing Python on most work days, and there are bits of it that I do really like, but there are also aspects I really don't. Some of the stuff I work on needs to work in both Python 2 and 3, and that can be painful. The whole 2-versus-3 situation is awkward: I'd much rather just focus on 3, but Python 3 didn't ship in (at least) RHEL 7, although it looks like it will in 8.

Recently I dusted off some 12-year-old Ruby code and had a pleasant experience interacting with Ruby again. It made me wonder, had I perhaps backed the wrong horse? In some respects, clearly not: being proficient with Python was immediately helpful when I started my current job (and may have had a hand in getting me hired). But in other respects, I wonder how much time I've wasted wrestling with e.g. Python's verbose, rigid regular expression library when Ruby has nice language-native regular expression operators (taken straight from Perl), or the really awkward support for Unicode in Python 2 (this reminds me of Perl for all the wrong reasons).

Next time I have a computing problem to solve where it looks like a script is the right approach, I'm going to give Ruby another try. Assuming I don't go for Haskell instead, of course. Or, perhaps I should try something completely different? One piece of advice that resonated with me from the excellent book The Pragmatic Programmer was "Learn a new (programming) language every year". It was only recently that I reflected that I haven't learned a completely new language for a very long time. I tried Go in 2013 but my attempt petered out. Should I pick that back up? It has a lot of traction in the stuff I do in my day job (Kubernetes, Docker, Openshift, etc.). "Rust" looks interesting, but a bit impenetrable at first glance. Idris? Lua? Something else?

jmtd http://jmtd.net/log/ Jonathan Dowland's Weblog

Recommendations for software?

Planet Debian - Sat, 15/09/2018 - 11:01am

A quick post with two questions:

  • What spam-filtering software do you recommend?
  • Is there a PAM module for testing with HaveIBeenPwnd?
    • If not would you sponsor me to write it? ;)

So I've been using crm114 to perform spam-filtering on my incoming mail, via procmail, for the past few years.

Today I discovered it had archived about 12Gb of my email history, because I'd never pruned it. (Beneath ~/.crm/.)

So I wonder if there are better/simpler/different Bayesian filters out there that I should be switching to? Recommendations welcome - but don't say "SpamAssassin", thanks!

Secondly the excellent Have I Been Pwned site provides an API which allows you to test if a password has been previously included in a leak. This is great, and I've integrated their API in a couple of my own applications, but I was thinking on the bus home tonight it might be worth tying into PAM.

Sure, in the interests of security people should use key-based authentication for SSH, but… most people don't. And even if keys are used exclusively, a PAM module would allow you to validate that the password used for sudo hasn't previously been leaked.

So it seems like there is value in a PAM module to do a lookup at authentication-time, via libcurl.
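
As a rough sketch of what the lookup half of such a module might look like (my own illustration, not an existing module; the PAM glue such as pam_sm_authenticate is omitted), the documented Pwned Passwords range API only ever sees the first five hex characters of the password's SHA-1 and returns a list of matching hash suffixes with counts:

#include <stdio.h>
#include <string.h>
#include <curl/curl.h>
#include <openssl/sha.h>

struct buf { char data[65536]; size_t len; };

/* libcurl write callback: accumulate the response body into a fixed buffer. */
static size_t collect(char *ptr, size_t size, size_t nmemb, void *userdata)
{
    struct buf *b = userdata;
    size_t n = size * nmemb;

    if (b->len + n >= sizeof(b->data))
        n = sizeof(b->data) - b->len - 1;
    memcpy(b->data + b->len, ptr, n);
    b->len += n;
    b->data[b->len] = '\0';
    return size * nmemb;
}

/* Returns 1 if the password appears in a known breach, 0 if not, -1 on error. */
static int password_pwned(const char *password)
{
    unsigned char digest[SHA_DIGEST_LENGTH];
    char hex[2 * SHA_DIGEST_LENGTH + 1], url[128];
    struct buf response = { .len = 0 };
    CURL *curl;
    int i, found = -1;

    SHA1((const unsigned char *) password, strlen(password), digest);
    for (i = 0; i < SHA_DIGEST_LENGTH; i++)
        sprintf(hex + 2 * i, "%02X", digest[i]);

    /* Only the first 5 hex characters of the hash are sent (k-anonymity). */
    snprintf(url, sizeof(url),
             "https://api.pwnedpasswords.com/range/%.5s", hex);

    curl = curl_easy_init();
    if (!curl)
        return -1;
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);
    if (curl_easy_perform(curl) == CURLE_OK)
        /* Response lines look like "SUFFIX:COUNT"; search for our suffix. */
        found = strstr(response.data, hex + 5) != NULL;
    curl_easy_cleanup(curl);
    return found;
}

A real module would wire something like this into pam_sm_authenticate, decide whether to fail open or closed when the API is unreachable, and take care never to log the password or the full hash.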

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

Autobuilding Debian packages on salsa with Gitlab CI

Planet Debian - Fri, 14/09/2018 - 4:45pm

Now that Debian has migrated away from alioth and towards a gitlab instance known as salsa, we get a pretty advanced Continuous Integration system for (almost) free. Having that, it might make sense to use that setup to autobuild and autotest a package on every commit. I had a look at doing so for one of my packages, ola; the reason I chose that package is because it comes with an autopkgtest, so that makes testing it slightly easier (even if the autopkgtest is far from complete).

Gitlab CI is configured through a .gitlab-ci.yml file, which supports many options and may therefore be a bit complicated for first-time users. Since I've worked with it before, I understand how it works, so I thought it might be useful to show people how you can do things.

First, let's look at the .gitlab-ci.yml file which I wrote for the ola package:

stages:
  - build
  - autopkgtest

.build: &build
  before_script:
    - apt-get update
    - apt-get -y install devscripts adduser fakeroot sudo
    - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
    - adduser --disabled-password --gecos "" builduser
    - chown -R builduser:builduser .
    - chown builduser:builduser ..
  stage: build
  artifacts:
    paths:
      - built
  script:
    - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
    - mkdir built
    - dcmd mv ../*ges built/

.test: &test
  before_script:
    - apt-get update
    - apt-get -y install autopkgtest
  stage: autopkgtest
  script:
    - autopkgtest built/*ges -- null

build:testing:
  <<: *build
  image: debian:testing

build:unstable:
  <<: *build
  image: debian:sid

test:testing:
  <<: *test
  dependencies:
    - build:testing
  image: debian:testing

test:unstable:
  <<: *test
  dependencies:
    - build:unstable
  image: debian:sid

That's a bit much. How does it work?

Let's look at every individual toplevel key in the .gitlab-ci.yml file:

stages:
  - build
  - autopkgtest

Gitlab CI has a "stages" feature. A stage can have multiple jobs, which will run in parallel, and gitlab CI won't proceed to the next stage unless and until all the jobs in the last stage have finished. Jobs from one stage can use files from a previous stage by way of the "artifacts" or "cache" features (which we'll get to later). However, in order to be able to use the stages feature, you have to create stages first. That's what we do here.

.build: &build
  before_script:
    - apt-get update
    - apt-get -y install devscripts autoconf automake adduser fakeroot sudo
    - autoreconf -f -i
    - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
    - adduser --disabled-password --gecos "" builduser
    - chown -R builduser:builduser .
    - chown builduser:builduser ..
  stage: build
  artifacts:
    paths:
      - built
  script:
    - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
    - mkdir built
    - dcmd mv ../*ges built/

This tells gitlab CI what to do when building the ola package. The main bit is the script: key in this template: it essentially tells gitlab CI to run dpkg-buildpackage. However, before we can do so, we need to install all the build-dependencies and a few helper things, as well as create a non-root user (since ola refuses to be built as root). This we do in the before_script: key. Finally, once the packages have been built, we create a built directory, and use devscripts' dcmd to move the output of the dpkg-buildpackage command into the built directory.

Note that the name of this key starts with a dot. This signals to gitlab CI that it is a "hidden" job, which it should not start by default. Additionally, we create an anchor (the &build at the end of that line) that we can refer to later. This makes it a job template, not a job itself, that we can reuse if we want to.

The reason we split up the script to be run into three different scripts (before_script, script, and after_script) is simply so that gitlab can understand the difference between "something is wrong with this commit" and "we failed to even configure the build system". It's not strictly necessary, but I find it helpful.

Since we configured the built directory as the artifacts path, gitlab will do two things:

  • First, it will create a .zip file in gitlab, which allows you to download the packages from the gitlab webinterface (and inspect them if needs be). The length of time for which the artifacts are stored can be configured by way of the artifacts:expire_in key; if not set, it defaults to 30 days or to whatever the salsa maintainers have configured (I'm not sure what that is).
  • Second, it will make the artifacts available in the same location on jobs in the next stage.

The first can be avoided by using the cache feature rather than the artifacts one, if preferred.

.test: &test
  before_script:
    - apt-get update
    - apt-get -y install autopkgtest
  stage: autopkgtest
  script:
    - autopkgtest built/*ges -- null

This is very similar to the build template that we had before, except that it sets up and runs autopkgtest rather than dpkg-buildpackage, and that it does so in the autopkgtest stage rather than the build one, but there's nothing new here.

build:testing:
  <<: *build
  image: debian:testing

build:unstable:
  <<: *build
  image: debian:sid

These two use the build template that we defined before. This is done by way of the <<: *build line, which is YAML-ese to say "inject the other template here". In addition, we add extra configuration -- in this case, we simply state that we want to build inside the debian:testing docker image in the build:testing job, and inside the debian:sid docker image in the build:unstable job.

test:testing:
  <<: *test
  dependencies:
    - build:testing
  image: debian:testing

test:unstable:
  <<: *test
  dependencies:
    - build:unstable
  image: debian:sid

This is almost the same as the build:testing and the build:unstable jobs, except that:

  • We instantiate the test template, not the build one;
  • We say that the test:testing job depends on the build:testing one. This does not cause the job to start before the end of the previous stage (that is not possible); instead, it tells gitlab that the artifacts created in the build:testing job should be copied into the test:testing working directory. Without this line, all artifacts from all jobs from the previous stage would be copied, which in this case would create file conflicts (since the files from the build:testing job have the same name as the ones from the build:unstable one).

It is also possible to run autopkgtest in the same image in which the build was done. However, the downside of doing that is that if one of your built packages lacks a dependency that is an indirect dependency of one of your build dependencies, you won't notice; by blowing away the docker container in which the package was built and running autopkgtest in a pristine container, we avoid this issue.

With that, you have a complete working example of how to do continuous integration for Debian packaging. To see it work in practice, you might want to look at the ola version

UPDATE (2018-09-16): dropped the autoreconf call, isn't needed (it was there because it didn't work from the first go, and I thought that might have been related, but that turned out to be a red herring, and I forgot to drop it)

Wouter Verhelst https://grep.be/blog//pd/ pd

New website for vmdb2

Planet Debian - Fri, 14/09/2018 - 3:00pm

I've set up a new website for vmdb2, my tool for building Debian images (basically "debootstrap, except in a disk image"). As usual for my websites, it's ugly. Feedback welcome.

Lars Wirzenius' blog http://blog.liw.fi/englishfeed/ englishfeed

David Tomaschik: Course Review: Software Defined Radio with HackRF

Planet Ubuntu - Fri, 14/09/2018 - 9:00am

Over the past two days, I had the opportunity to attend Michael Ossman’s course “Software Defined Radio with HackRF” at Toorcon XX. This is a course I’ve wanted to take for several years, and I’m extremely happy that I finally had the chance. I wanted to write up a short review for others considering taking the course.

Course Material

The material in the course focuses predominantly on the basics of Software Defined Radio and Digital Signal Processing. This includes the math necessary to understand how the DSP handles the signal. The math is presented in a practical, rather than academic, way. It’s not a math class, but a review of the necessary basics, mostly of complex mathematics and a bit of trigonometry. (My high school teachers are now vindicated. I did use that math again.) You don’t need the math background coming in, but you do need to be prepared to think about math during the class. Extracting meaningful information from the ether is, it turns out, an exercise in mathematics.

There’s a lot of discussion of frequencies, frequency mixers, and how frequency, amplitude, and phase are related. Also, despite more than 20 years as an amateur radio operator, I finally understand dB properly. It’s possible to reason about dB well enough without having to do logarithms, using a few rules of thumb (a quick numeric check follows the list):

  • +3 dB = x2
  • +10 dB = x10
  • -3 dB = 1/2
  • -10 dB = 1/10
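
Those rules of thumb are just the definition dB = 10 * log10(power ratio), rounded to friendly numbers; a few lines of C (my own check, not course material) confirm them:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Convert each dB step back into a power ratio: ratio = 10^(dB / 10). */
    double steps[] = { 3.0, 10.0, -3.0, -10.0 };

    for (int i = 0; i < 4; i++)
        printf("%+6.1f dB -> x%.3f\n", steps[i], pow(10.0, steps[i] / 10.0));
    return 0;
}

+3 dB comes out as x1.995, which is why treating it as a straight doubling is close enough in practice.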

In terms of DSP, he demonstrated extracting signals of interest, clock recovery, and other techniques necessary for understanding digital signals. It really just scratches the surface, but is enough to get a basic signal understood.

From a security point of view, there was only a single system that we “attacked” in the class. I was hoping for a little bit more of this, but given the detail in the other content, I am not disappointed.

Mike pointed out that the course primarily focuses on getting signals from the air to a digital series of 0 and 1 bits, and then leaves the remainder to tools like python for adding meaning and interpretation of the bits. While I understand this (and, admittedly, at that point it’s similar to decoding an unknown network protocol), I would still like to have gone into more detail.

Course Style

At the very beginning of the course, Mike makes it clear that no two classes he teaches are exactly the same. He adapts the course to the experience and background of each class, and that was very evident from our small group this week. With such a small class, it became more like a guided conversation than a formal class.

Overall, the course was very interactive, with lots of student questions, as well as “Socratic Method” questions from the instructor. This was punctuated with a number of hands-on exercises. One of the best parts of the hands-on exercises is that Mike provides a flash drive with a preconfigured Ubuntu Linux installation containing all the tools that are needed for the course. This allows students to boot into a working environment, rather than having to play around with tool installation or virtual machine settings. (We were, in fact, warned that VMs often do not play well with SDR, because the USB forwarding has overhead resulting in lost samples.)

Mike made heavy use of the poster pad in the room, diagramming waveforms and information about the processes involved in the SDR architecture and the DSP done in the computer. This works well because he customizes the diagrams to explain each part and answer student questions. It also feels much more engaging than just pointing at slides. In fact, the only thing displayed on the projector is Mike’s live screen from his laptop, displaying things like the work he’s doing in GNURadio Companion and other pieces of software.

If you have devices you’re interested in studying, you should bring them along with you. If time permits, Mike tries to work these devices into the analysis during the course.

Tools Used

Additional Resources

Opinions & Conclusion

This was a great class that I really enjoyed. However, I really wish there had been more emphasis on how you decode and interpret unknown signals, such as discussion of common packet types over RF, or tools for signal analysis that could be built in either Python or GNURadio. Perhaps he (or someone) could offer an advanced class that focuses on the signal analysis, interpretation, and “spoofing” portions of the problem of attacking RF-based systems.

If you’re interested in doing assessments of physical devices, or into radio at all, I highly recommend this course. Mike obviously really knows the material, and getting a HackRF One is a pretty nice bonus. Watching the videos on his website will help you prepare for the math, but will also result in a good portion of the content being duplicated in the course. I’m not disappointed that I did that, and I still feel that I more than made good use of the time in the course, but it is something to be aware of.
