Feed aggregator

Weblate 2.19.1

Planet Debian - Tue, 20/02/2018 - 3:00 PM

Weblate 2.19.1 has been released today. This is a bugfix-only release, mostly to fix the problematic migration from 2.18 which some users have observed.

Full list of changes:

  • Fixed migration issue on upgrade from 2.18.
  • Improved file upload API validation.

If you are upgrading from an older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, and the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used on https://hosted.weblate.org/ as the official translation service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared, and you can influence this by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Michal Čihař https://blog.cihar.com/archives/debian/ Michal Čihař's Weblog, posts tagged by Debian

Listing and loading of Debian repositories: now live on Software Heritage

Planet Debian - Tue, 20/02/2018 - 2:52 PM

Software Heritage is the project for which I’ve been working during the past two and a half years now. The grand vision of the project is to build the universal software archive, which will collect, preserve and share the Software Commons.

Today, we’ve announced that Software Heritage is archiving the contents of Debian daily. I’m reposting this article on my blog as it will probably be of interest to readers of Planet Debian.

TL;DR: Software Heritage now archives all source packages of Debian as well as its security archive daily. Everything is ready for archival of other Debian derivatives as well. Keep on reading to get details of the work that made this possible.

History

When we first announced Software Heritage, back in 2016, we had archived the historical contents of Debian as present on the snapshot.debian.org service, as a one-shot proof of concept import.

This code was then left in a drawer and never touched again, until last summer when Sushant came to do an internship with us. We’ve had the opportunity to rework the code that was originally written, and to make it more generic: instead of being tied to the specifics of snapshot.debian.org, the code can now work with any Debian repository. This means that we can now archive any of the numerous Debian derivatives that are available out there.

This has been live for a few months, and you can find Debian package origins in the Software Heritage archive now.

Mapping a Debian repository to Software Heritage

The main challenge in listing and saving Debian source packages in Software Heritage is mapping the content of the repository to the generic source history data model we use for our archive.

Organization of a Debian repository

Before we start looking at a bunch of unpacked Debian source packages, we need to know how a Debian repository is actually organized.

At the top level of a Debian repository lies a set of suites, representing versions of the distribution, that is to say a set of packages that have been tested and are known to work together. For instance, Debian currently has 6 active suites, from wheezy (“old old stable” version), all the way up to experimental; Ubuntu has 8, from precise (12.04 LTS), up to bionic (the future 18.04 release), as well as a devel suite. Each of those suites also has a bunch of “overlay” suites, such as backports, which are made available in the archive alongside full suites.

Under the suites, there’s another level of subdivision, which Debian calls components, and Ubuntu calls areas. Debian uses its components to segregate packages along licensing terms (main, contrib and non-free), while Ubuntu uses its areas to denote the level of support of the packages (main, universe, multiverse, …).

Finally, components contain source packages, which merge upstream sources with distribution-specific patches, as well as machine-readable instructions on how to build the package.

Organization of the Software Heritage archive

The Software Heritage archive is project-centric rather than version-centric. What this means is that we are interested in keeping the history of what was available in software origins, which can be thought of as a URL of a repository containing software artifacts, tagged with a type representing the means of access to the repository.

For instance, the origin for the GitHub mirror of the Linux kernel repository has the following data:

  • type: git
  • url: https://github.com/torvalds/linux

For each visit of an origin, we take a snapshot of all the branches (and tagged versions) of the project that were visible during that visit, complete with their full history. See for instance one of the latest visits of the Linux kernel. For the specific case of GitHub, pull requests are also visible as virtual branches, so we fetch those as well (as branches named refs/pull/<pull request number>/head).

Bringing them together

As we’ve seen, Debian archives (just like the archives of other “traditional” Linux distributions) are release-centric rather than package-centric. Mapping distributions to the Software Heritage archive therefore takes a little bit of gymnastics, to transpose the list of source packages available in each suite into a list of available versions per source package. We do this step by step:

  1. Download the Sources indices for all the suites and components known in the Debian repository
  2. Parse the Sources indices, listing all source packages inside
  3. For each source package, tell the Debian loader to load all the available versions (grouped by name), generating a complete snapshot of the state of the source package across the Debian repository

The source packages are mapped to origins using the following format:

  • type: deb
  • url: deb://<repository name>/packages/<source package name> (e.g. deb://Debian/packages/linux)

We use a repository name rather than the actual URL to a repository so that links can persist even if a given mirror disappears.
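To make this flow concrete, here is a rough sketch in Python of steps 1-3 and of the origin mapping, assuming the python-debian and requests libraries; the mirror URL, the suite/component lists and the load_package() hook are illustrative placeholders, not the actual Software Heritage lister code:

import gzip
import io

import requests
from debian import deb822

MIRROR = "http://deb.debian.org/debian"        # placeholder mirror URL
SUITES = ["stretch", "buster", "sid"]          # configured per repository
COMPONENTS = ["main", "contrib", "non-free"]   # currently hardcoded, as noted later

# Steps 1 and 2: download and parse the Sources indices, collecting every
# version of every source package that appears in them.
versions_per_package = {}
for suite in SUITES:
    for component in COMPONENTS:
        url = "%s/dists/%s/%s/source/Sources.gz" % (MIRROR, suite, component)
        raw = gzip.decompress(requests.get(url).content).decode("utf-8")
        for src in deb822.Sources.iter_paragraphs(io.StringIO(raw)):
            versions_per_package.setdefault(src["Package"], []).append(
                {"suite": suite,
                 "component": component,
                 "version": src["Version"],
                 "directory": src["Directory"],
                 "files": src["Files"]})

# Step 3: hand each package, with all of its known versions, to the loader.
# The origin uses a repository *name*, not a mirror URL, so it stays stable.
for name, versions in versions_per_package.items():
    origin = {"type": "deb", "url": "deb://Debian/packages/%s" % name}
    # load_package(origin, versions)   # hypothetical entry point into the Debian loader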

Loading Debian source packages

To load Debian source packages into the Software Heritage archive, we have to convert them: Debian-based distributions distribute source packages as a set of files, a dsc (Debian Source Control) and a set of tarballs (usually, an upstream tarball and a Debian-specific overlay). On the other hand, Software Heritage only stores version-control information such as revisions, directories, files.

Unpacking the source packages

Our philosophy at Software Heritage is to store the source code of software in the precise form that allows a developer to start working on it. For Debian source packages, this is the unpacked source code tree, with all patches applied. After checking that the files we have downloaded match the checksums published in the index files, we simply use dpkg-source -x to extract the source package, with patches applied, ready to build. This also means that we currently fail to import packages that don’t extract with the version of dpkg-source available in Debian Stretch.
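A minimal sketch of that check-then-extract step, in Python (the function and its arguments are illustrative placeholders; the real loader does considerably more bookkeeping):

import hashlib
import subprocess

def extract_source_package(dsc_path, files_with_sha256, destdir):
    # Verify that every downloaded file matches the checksum published in the
    # Sources index before trusting it.
    for path, expected in files_with_sha256.items():
        with open(path, "rb") as f:
            if hashlib.sha256(f.read()).hexdigest() != expected:
                raise ValueError("checksum mismatch for %s" % path)

    # dpkg-source -x unpacks the .dsc and its tarballs with all patches
    # applied, ready to build. Packages that the locally installed
    # dpkg-source cannot handle will fail at this step.
    subprocess.run(["dpkg-source", "-x", dsc_path, destdir], check=True)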

Generating a synthetic revision

After walking the extracted source package tree, computing identifiers for all its contents, we get the identifier of the top-level tree, which we will reference in the synthetic revision.

The synthetic revision contains the “reproducible” metadata that is completely intrinsic to the Debian source package. With the current implementation, this means the following (a sketch in code follows the list):

  • the author of the package, and the date of modification, as referenced in the last entry of the source package changelog (referenced as author and committer)
  • the original artifact (i.e. the information about the original source package)
  • basic information about the history of the package (using the parsed changelog)
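As a rough illustration of where this metadata comes from, here is a sketch in Python using the python-debian changelog parser (the field names are simplified and this is not the actual Software Heritage schema):

from debian.changelog import Changelog

def synthetic_revision(changelog_path, tree_id, original_artifacts):
    with open(changelog_path) as f:
        changelog = Changelog(f)

    return {
        "directory": tree_id,            # identifier of the unpacked top-level tree
        "author": changelog.author,      # last changelog entry, used as author...
        "committer": changelog.author,   # ...and as committer
        "date": changelog.date,          # date of the last changelog entry
        "metadata": {
            "original_artifacts": original_artifacts,        # .dsc/tarballs + checksums
            "history": [(str(block.version), block.date)     # basic package history
                        for block in changelog],
        },
        "parents": [],                   # never set, for the reasons explained below
    }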

However, we never set parent revisions in the synthetic commits, for two reasons:

  • there is no guarantee that packages referenced in the changelog have been uploaded to the distribution, or imported by Software Heritage (our update frequency is lower than that of the Debian archive)
  • even if this guarantee existed, and all versions of all packages were available in Software Heritage, there would be no guarantee that the version referenced in the changelog is indeed the version we imported in the first place

This makes the information stored in the synthetic revision fully intrinsic to the source package, and reproducible. In turn, this allows us to keep a cache, mapping the original artifacts to synthetic revision ids, to avoid loading packages again once we have loaded them once.

Storing the snapshot

Finally, we can generate the top-level object in the Software Heritage archive, the snapshot. For instance, you can see the snapshot for the latest visit of the glibc package.

To do so, we generate a list of branches by concatenating the suite, the component, and the version number of each detected source package (e.g. stretch/main/2.24-10 for version 2.24-10 of the glibc package available in stretch/main). We then point each branch to the synthetic revision that was generated when loading the package version.

In case a version of a package fails to load (for instance, if the package version disappeared from the mirror between the moment we listed the distribution, and the moment we could load the package), we still register the branch name, but we make it a “null” pointer.
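A condensed sketch of that branch-generation step, in Python (the snapshot and revision structures are simplified placeholders):

def snapshot_branches(detected_packages, loaded_revisions):
    branches = {}
    for pkg in detected_packages:
        # e.g. "stretch/main/2.24-10" for glibc 2.24-10 in stretch/main
        name = "%s/%s/%s" % (pkg["suite"], pkg["component"], pkg["version"])
        revision_id = loaded_revisions.get(pkg["version"])
        if revision_id is not None:
            branches[name] = {"target_type": "revision", "target": revision_id}
        else:
            # The package version failed to load (e.g. it vanished from the
            # mirror between listing and loading): keep the branch name, but
            # make it a "null" pointer.
            branches[name] = None
    return branches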

There are still some improvements to make to the lister specific to Debian repositories: it currently hardcodes the list of components/areas in the distribution, as the repository format provides no programmatic way of eliciting them. Currently, only Debian and its security repository are listed.

Looking forward

We believe that the model we developed for the Debian use case is generic enough to capture not only Debian-based distributions, but also RPM-based ones such as Fedora, Mageia, etc. With some extra work, it should also be possible to adapt it for language-centric package repositories such as CPAN, PyPI or Crates.

Software Heritage is now well on its way to providing the foundations for a generic and unified source browser for the history of traditional package-based distributions.

We’ll be delighted to welcome contributors that want to lend a hand to get there.

olasd https://blog.olasd.eu english – olasd's corner of the 'tubes

Hacking at EPFL Toastmasters, Lausanne, tonight

Planet Debian - Tue, 20/02/2018 - 12:39 PM

As mentioned in my earlier blog, I'm giving a talk about Hacking at the Toastmasters club at EPFL tonight. Please feel free to join us, and remember to turn off your mobile device or leave it at home; you never know when it might ring or become part of a demonstration.

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Freexian’s report about Debian Long Term Support, January 2018

Planet Debian - Mon, 19/02/2018 - 6:18 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In January, about 160 work hours have been dispatched among 11 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased slightly to 187 hours per month. It would be nice if the slow growth could continue, as the amount of work seems to be slowly growing too.

The security tracker currently lists 23 packages with a known CVE and the dla-needed.txt file lists 23 packages as well. The number of open issues seems to be stable compared to last month, which is a good sign.

Thanks to our sponsors

New sponsors are in bold.


Raphaël Hertzog https://raphaelhertzog.com apt-get install debian-wizard

How we care for our child

Planet Debian - Mon, 19/02/2018 - 11:15 AM

This post is a departure from the regular content, which is supposed to be "Debian and Free Software", but has accidentally turned into a hardware blog recently!

Anyway, we have a child who is now about 14 months old. The way that my wife and I care for him seems logical to us, but often amuses local people. So in the spirit of sharing this is what we do:

  • We divide the day into chunks of time.
  • At any given time one of us is solely responsible for him.
    • The other parent might be nearby, and might help a little.
    • But there is always a designated person who will be changing nappies, feeding, and playing at any given point in the day.
  • The end.

So our weekend routine, covering Saturday and Sunday, looks like this:

  • 07:00-08:00: Husband
  • 08:01-13:00: Wife
  • 13:01-17:00: Husband
  • 17:01-18:00: Wife
  • 18:01-19:30: Husband

Our child, Oiva, seems happy enough with this and he sometimes starts walking from one parent to the other at the appropriate time. But the real benefit is that each of us gets some time off - in my case I get "the morning" off, and my wife gets the afternoon off. We can hide in our bedroom, go shopping, eat cake, or do anything we like.

Week-days are similar, but with the caveat that we both have jobs. I take the morning, and the evenings, and in exchange if he wakes up overnight my wife helps him sleep and settle between 8PM-5AM, and if he wakes up later than 5AM I deal with him.

Most of the time our child sleeps through the night, but if he does wake up it tends to be in the 4:30AM/5AM timeframe. I'm "happy" to wake up at 5AM and stay up until I go to work because I'm a morning person and I tend to go to bed early these days.

Day-care is currently a complex process. There are three families with small children, and ourselves. Each day of the week one family hosts all the children, and the baby-sitter arrives there too (all the families live within a few blocks of each other).

All of the parents go to work, leaving one carer in charge of 4 babies for the day, from 08:15-16:15. On the days when we're hosting the children I greet the carer then go to work - on the days the children are at a different family's house I take him there in the morning, on my way to work, and then my wife collects him in the evening.

At the moment things are a bit terrible because most of the children have been a bit sick, and the carer too. When a single child is sick it's mostly OK, unless that is the child which is supposed to be host-venue. If that child is sick we have to panic and pick another house for that day.

Unfortunately if the child-carer is sick then everybody is screwed, and one parent has to stay home from each family. I guess this is the downside compared to sending the children to public-daycare.

This is private day-care, Finnish-style. The social services (Kela) will reimburse each family €700/month if you're in such a scheme, and carers are limited to a maximum of 4 children. The net result is that prices are stable, averaging €900-€1000 per child, per month.

(The €700 is refunded after a month or two, so in real terms people like us pay €200-€300/month for Monday-Friday day-care. Plus a bit of bureaucracy over deciding which family is hosting, and which parents are providing food. With the size being capped, and the fees being pretty standard, the carers earn €3600-€4000/month, which is a good amount. To be a school-teacher you need to be very qualified, but to do this caring is much simpler. It turns out that being an English-speaker can be a bonus too, for some families ;)

Currently our carer has a sick-note for three days, so I'm staying home today, and will likely stay tomorrow too. Then my wife will skip work on Wednesday. (We usually take it in turns but sometimes that can't happen easily.)

But all of this is due to change in the near future, because we've had too many sick days, and both of us have missed too much work.

More news on that in the future, unless I forget.

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

Bryan Quigley: Stop changing the clocks

Planet Ubuntu - Mon, 19/02/2018 - 1:00 AM

Florida, Tennessee, the EU and more are considering one timezone for the entire year - no more changing the clocks. Massachusetts had a group study the issue and recommend making the switch, but only if a majority of Northeast states decide to join them. I would like to see the NJ legislature vote to join them.

Interaction between countries would be helped by having one less factor that can impact collaboration. Below are two examples of ways this will help.

Meeting Times

Let's consider a meeting scheduled in EST with participants from NJ, the EU, and Arizona.
NJ - normal disruption of changing times, but the clock time for the meeting stays the same.
Arizona - The clock time for the meeting changes twice a year.
EU - they also change their clocks, but at different points in the year. Due to this, they see 4 clock time changes for the meeting each year.

This gets more complicated as we add participants from more countries. UTC can help, but any location that has a time change has to be considered for both of its timezones.
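As a rough illustration of that bookkeeping, here is a sketch in Python 3.9+ using zoneinfo; the 15:00 meeting time, the cities and the 2018 sample dates are chosen purely to show the effect:

from datetime import datetime
from zoneinfo import ZoneInfo

NJ = ZoneInfo("America/New_York")        # observes US DST
BRUSSELS = ZoneInfo("Europe/Brussels")   # observes EU DST, on different dates
PHOENIX = ZoneInfo("America/Phoenix")    # Arizona: no DST at all

# A recurring meeting pinned to 15:00 New Jersey time, sampled on Mondays
# around the 2018 US (Mar 11 / Nov 4) and EU (Mar 25 / Oct 28) transitions.
for y, m, d in [(2018, 3, 5), (2018, 3, 19), (2018, 4, 2), (2018, 10, 29), (2018, 11, 12)]:
    meeting = datetime(y, m, d, 15, 0, tzinfo=NJ)
    print(meeting.date(),
          "NJ", meeting.strftime("%H:%M"),
          "Brussels", meeting.astimezone(BRUSSELS).strftime("%H:%M"),
          "Phoenix", meeting.astimezone(PHOENIX).strftime("%H:%M"))

# The New Jersey clock time never moves, Phoenix sees it move twice a year
# (13:00 <-> 12:00), and Brussels sees it move four times (21:00 <-> 20:00).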

Global shift work or On-call

Generally, these are scheduled in UTC, but the shifts people actually work are in their local time. That can be disruptive in other ways, like finding child care.

In conclusion, while these may be minor compared to other concerns (like the potential health effects associated with changing the clocks), the concerns of global collaboration should also be considered.

SwissPost putting another nail in the coffin of Swiss sovereignty

Planet Debian - Sun, 18/02/2018 - 11:17 PM

A few people have recently asked me about the SwissID, as SwissPost has just been sending spam emails out to people telling them "Link your Swiss Post user account to SwissID".

This coercive new application of technology demands users' email addresses and mobile phone numbers "for security". A web site coercing people to use text messages "for security" has quickly become a red flag for most people, and many blogs have already covered why it is only an illusion of security, putting your phone account at risk so companies can profit from another vector for snooping on you.

SwissID is not the only digital identity solution in Switzerland but as it is run by SwissPost and has a name similar to another service it is becoming very well known.

In 2010 they began offering a solution which they call SuisseID (notice the difference? They are pronounced the same way.) based on digital certificates and compliant with Swiss legislation. Public discussion focussed on the obscene cost with little comment about the privacy consequences and what this means for Switzerland as a nation.

Digital certificates often embed an email address in the certificate.

With SwissID, however, they have a web site that looks like little more than vaporware, giving no details at all about whether certificates are used. It appears they are basically promoting an app that is designed to harvest the email addresses and phone numbers of any Swiss people who install it, lulling them into that folly by using a name that looks like their original SuisseID. If it looks like phishing, if it feels like phishing and if it smells like phishing to any expert who takes a brief sniff of their FAQ, then what else is it?

The thing is, the original SuisseID runs on a standalone smartcard, so it doesn't need your mobile phone number, doesn't need permissions to all the data in your phone, and isn't limited to working in areas with mobile phone signal.

The emails currently being sent by SwissPost tell people they must "Please use a private e-mail address for this purpose" but they don't give any information about the privacy consequences of creating such an account or what their app will do when it has access to read all the messages and contacts in your phone.

The actions you can take that they didn't tell you about
  • You can post a registered letter to SwissPost and tell them that for privacy reasons, you are immediately retracting the email addresses and mobile phone numbers they currently hold on file and that you are exercising your right not to give an email address or mobile phone number to them in future.
  • If you do decide you want a SwissID, create a unique email address for it and only use that email address with SwissPost so that it can't be cross-referenced with other companies. This email address is also like a canary in a coal mine: if you start receiving spam on that email address then you know SwissPost/SwissID may have been hacked or the data has been leaked or sold.
  • Don't install their app and if you did, remove it and you may want to change your mobile phone number.

Oddly enough, none of these privacy-protecting ideas were suggested in the email from SwissPost. Whose side are they on?

Why should people be concerned?

SwissPost, like every postal agency, has seen traditional revenues drop and so they seek to generate more revenue from direct marketing and they are constantly looking for ways to extract and profit from data about the public. They are also a huge company with many employees: when dealing with vast amounts of data in any computer system, it only takes one employee to compromise everything: just think of how Edward Snowden was able to act alone to extract many of the NSA's most valuable secrets.

SwissPost is going to great lengths to get accurate data on every citizen and resident in Switzerland, including deploying an app to get your mobile phone number and demanding an email address when you use their web site. That also allows them to cross-reference with your IP addresses.

  • Any person or organization who has your email address or mobile number may find it easier to get your home address.
  • Any person or organization who has your home address may be able to get your email address or mobile phone number.
  • When you call a company from your mobile phone and their system recognizes your phone number, it becomes easier for them to match it to your home address.
  • If SwissPost and the SBB successfully convince a lot of people to use a SwissID, some other large web sites may refuse to allow access without getting you to link them to your SwissID and all the data behind it too. Think of how many websites already try to coerce you to give them your mobile phone number and birthday to "secure" your account, but worse.

The Google factor

The creepiest thing is that over seventy percent of people are apparently using Gmail addresses in Switzerland and these will be a dependency of their registration for SwissID.

Given that SwissID is being promoted as a solution compliant with ZertES legislation that can act as an interface between citizens and the state, the intersection with such a powerful foreign actor as Gmail is extraordinary. For example, if people are registering to vote in Switzerland's renowned referendums and their communication is under the surveillance of a foreign power like the US, that is a mockery of democracy and it makes the allegations of Russian election hacking look like child's play.

Switzerland's referendums, decentralized system of Government, part-time army and privacy regime are all features that maintain a balance between citizen and state: by centralizing power in the hands of SwissID and foreign IT companies, doesn't it appear that the very name SwissID is a mockery of the Swiss identity?

No canaries were harmed in the production of this blog.

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Vnlog!

Planet Debian - Sun, 18/02/2018 - 12:58 PM

In the last few jobs I've worked at I ended up writing a tool to store data in a nice format, and to be able to manipulate it easily. I'd rewrite this from scratch each time partly because I was never satisfied with the previous version. Each iteration was an improvement on the previous one, and this version is the good one. I wrote it at NASA/JPL, went through the release process (this thing was called asciilog then), added a few more features, and I'm now releasing it. The toolkit lives here and here's the initial README:

Summary

Vnlog (pronounced "vanillog") is a trivially-simple log format:

  • A whitespace-separated table of ASCII human-readable text
  • Lines beginning with # are comments
  • The first line that begins with a single # (not ## or #!) is a legend, naming each column

Example:

#!/usr/bin/whatever
# a b c
1 2 3
## another comment
4 5 6

Such data works very nicely with normal UNIX tools (awk, sort, join), can be easily read by fancier tools (numpy, matlab (yuck), excel (yuck), etc), and can be plotted with feedgnuplot. This toolkit provides some tools to manipulate vnlog data and a few libraries to read/write it. The core philosophy is to keep everything as simple and light as possible, and to provide methods to enable existing (and familiar!) tools and workflows to be utilized in nicer ways.

Synopsis

In one terminal, sample the CPU temperature over time, and write the data to a file as it comes in, at 1Hz:

$ ( echo '# time temp1 temp2 temp3'; while true; do echo -n "`date +%s` "; < /proc/acpi/ibm/thermal awk '{print $2,$3,$4; fflush()}'; sleep 1; done ) > /tmp/temperature.vnl

In another terminal, I sample the consumption of CPU resources, and log that to a file:

$ (echo "# user system nice idle waiting hardware_interrupt software_interrupt stolen"; top -b -d1 | awk '/%Cpu/ {print $2,$4,$6,$8,$10,$12,$14,$16; fflush()}') > /tmp/cpu.vnl

These logs are now accumulating, and I can do stuff with them. The legend and the last few measurements:

$ vnl-tail /tmp/temperature.vnl
# time temp1 temp2 temp3
1517986631 44 38 34
1517986632 44 38 34
1517986633 44 38 34
1517986634 44 38 35
1517986635 44 38 35
1517986636 44 38 35
1517986637 44 38 35
1517986638 44 38 35
1517986639 44 38 35
1517986640 44 38 34

I grab just the first temperature sensor, and align the columns:

$ < /tmp/temperature.vnl vnl-tail | vnl-filter -p time,temp=temp1 | vnl-align
# time       temp
1517986746   45
1517986747   45
1517986748   46
1517986749   46
1517986750   46
1517986751   46
1517986752   46
1517986753   45
1517986754   45
1517986755   45

I do the same, but read the log data in realtime, and feed it to a plotting tool to get a live reporting of the cpu temperature. This plot updates as data comes in. I then spin a CPU core (while true; do true; done), and see the temperature climb. Here I'm making an ASCII plot that's pasteable into the docs.

$ < /tmp/temperature.vnl vnl-tail -f | vnl-filter --unbuffered -p time,temp=temp1 | feedgnuplot --stream --domain --lines --timefmt '%s' --set 'format x "%M:%S"' --ymin 40 --unset grid --terminal 'dumb 80,40'

[ASCII gnuplot output: temperature (y axis, 40-70) against time (x axis, MM:SS from 21:00 to 31:00), showing the CPU temperature moving between roughly 45 and 65 as the CPU core is spun up and released]

Cool. I can then join the logs, pull out simultaneous CPU consumption and temperature numbers, and plot the path in the temperature-cpu space:

$ vnl-join -j time /tmp/temperature.vnl /tmp/cpu.vnl | vnl-filter -p temp1,user | feedgnuplot --domain --lines --unset grid --terminal 'dumb 80,40'

[ASCII gnuplot output: user CPU consumption (y axis, 0-45) plotted against temp1 (x axis, 40-70), tracing the path through the temperature-CPU space]

Description

As stated before, vnlog tools are designed to be very simple and light. There exist other tools that are similar. For instance:

These all provide facilities to run various analyses, and are neither simple nor light. Vnlog by contrast doesn't analyze anything, but makes it easy to write simple bits of awk or perl to process stuff to your heart's content. The main envisioned use case is one-liners, and the tools are geared for that purpose. The above mentioned tools are much more powerful than vnlog, so they could be a better fit for some use cases.

In the spirit of doing as little as possible, the provided tools are wrappers around tools you already have and are familiar with. The provided tools are:

  • vnl-filter is a tool to select a subset of the rows/columns in a vnlog and/or to manipulate the contents. This is effectively an awk wrapper where the fields can be referenced by name instead of index. 20-second tutorial:
vnl-filter -p col1,col2,colx=col3+col4 'col5 > 10' --has col6

will read the input, and produce a vnlog with 3 columns: col1 and col2 from the input, and a column colx that's the sum of col3 and col4 in the input. Only those rows for which col5 > 10 is true will be output. Finally, only those rows that have a non-null value for col6 will be selected. A null entry is signified by a single - character.

vnl-filter --eval '{s += x} END {print s}'

will evaluate the given awk program on the input, but the column names work as you would hope they do: if the input has a column named x, this would produce the sum of all values in this column.

  • vnl-sort, vnl-join, vnl-tail are wrappers around the corresponding GNU Coreutils tools. These work exactly as you would expect also: the columns can be referenced by name, and the legend comment is handled properly. These are wrappers, so all the commandline options those tools have "just work" (except options that don't make sense in the context of vnlog). As an example, vnl-tail -f will follow a log: data will be read by vnl-tail as it is written into the log (just like tail -f, but handling the legend properly). And you already know how to use these tools without even reading the manpages! Note: these were written for and have been tested with the Linux kernel and GNU Coreutils sort, join and tail. Other kernels and tools probably don't (yet) work. Send me patches.
  • vnl-align aligns vnlog columns for easy interpretation by humans. The meaning is unaffected.
  • Vnlog::Parser is a simple Perl library to read a vnlog.
  • libvnlog is a C library to simplify writing a vnlog. Clearly all you really need is printf(), but this is useful if we have lots of columns, many containing null values in any given row, and/or if we have parallel threads writing to a log.
  • vnl-make-matrix converts a one-point-per-line vnlog to a matrix of data. I.e.
$ cat dat.vnl
# i j x
0 0 1
0 1 2
0 2 3
1 0 4
1 1 5
1 2 6
2 0 7
2 1 8
2 2 9
3 0 10
3 1 11
3 2 12

$ < dat.vnl vnl-filter -p i,x | vnl-make-matrix --outdir /tmp
Writing to '/tmp/x.matrix'

$ cat /tmp/x.matrix
1 2 3
4 5 6
7 8 9
10 11 12

All the tools have manpages that contain more detail. And tools will probably be added with time.

C interface

For most uses, these logfiles are simple enough to be generated with plain prints. But then each print statement has to know which numeric column we're populating, which becomes effortful with many columns. In my usage it's common to have a large parallelized C program that's writing logs with hundreds of columns where any one record would contain only a subset of the columns. In such a case, it's helpful to have a library that can output the log files. This is available. Basic usage looks like this:

In a shell:

$ vnl-gen-header 'int w' 'uint8_t x' 'char* y' 'double z' 'void* binary' > vnlog_fields_generated.h

In a C program test.c:

#include "vnlog_fields_generated.h" int main() { vnlog_emit_legend(); vnlog_set_field_value__w(-10); vnlog_set_field_value__x(40); vnlog_set_field_value__y("asdf"); vnlog_emit_record(); vnlog_set_field_value__z(0.3); vnlog_set_field_value__x(50); vnlog_set_field_value__w(-20); vnlog_set_field_value__binary("\x01\x02\x03", 3); vnlog_emit_record(); vnlog_set_field_value__w(-30); vnlog_set_field_value__x(10); vnlog_set_field_value__y("whoa"); vnlog_set_field_value__z(0.5); vnlog_emit_record(); return 0; }

Then we build and run, and we get

$ cc -o test test.c -lvnlog
$ ./test
# w x y z binary
-10 40 asdf - -
-20 50 - 0.2999999999999999889 AQID
-30 10 whoa 0.5 -

The binary field is base64-encoded. This is a rarely-used feature, but sometimes you really need to log binary data for later processing, and this makes it possible.
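For example, to get the raw bytes back out of that column you can base64-decode it; a tiny sketch in Python, using the "AQID" value from the output above:

import base64

raw = base64.b64decode("AQID")   # the 'binary' column emitted above
print(raw)                       # b'\x01\x02\x03', the bytes passed to vnlog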

So you

  1. Generate the header to define your columns
  2. Call vnlog_emit_legend()
  3. Call vnlog_set_field_value__...() for each field you want to set in that row.
  4. Call vnlog_emit_record() to write the row and to reset all fields for the next row. Any fields unset with a vnlog_set_field_value__...() call are written as null: -

This is enough for 99% of the use cases. Things get a bit more complex if we have threading or if we have multiple vnlog output streams in the same program. For both of these we use vnlog contexts.

To support reentrant writing into the same vnlog by multiple threads, each log-writer should create a context, and use it when talking to vnlog. The context functions will make sure that the fields in each context are independent and that the output records won't clobber each other:

void child_writer( // the parent context also writes to this vnlog. Pass NULL to
                   // use the global one
                   struct vnlog_context_t* ctx_parent )
{
    struct vnlog_context_t ctx;
    vnlog_init_child_ctx(&ctx, ctx_parent);

    while(records)
    {
        vnlog_set_field_value_ctx__xxx(&ctx, ...);
        vnlog_set_field_value_ctx__yyy(&ctx, ...);
        vnlog_set_field_value_ctx__zzz(&ctx, ...);
        vnlog_emit_record_ctx(&ctx);
    }
}

If we want to have multiple independent vnlog writers to different streams (with different columns and legends), we do this instead:

file1.c:

#include "vnlog_fields_generated1.h" void f(void) { // Write some data out to the default context and default output (STDOUT) vnlog_emit_legend(); ... vnlog_set_field_value__xxx(...); vnlog_set_field_value__yyy(...); ... vnlog_emit_record(); }

file2.c:

#include "vnlog_fields_generated2.h" void g(void) { // Make a new session context, send output to a different file, write // out legend, and send out the data struct vnlog_context_t ctx; vnlog_init_session_ctx(&ctx); FILE* fp = fopen(...); vnlog_set_output_FILE(&ctx, fp); vnlog_emit_legend_ctx(&ctx); ... vnlog_set_field_value__a(...); vnlog_set_field_value__b(...); ... vnlog_emit_record(); }

Note that it's the user's responsibility to make sure the new sessions go to a different FILE by invoking vnlog_set_output_FILE(). Furthermore, note that the included vnlog_fields_....h file defines the fields we're writing to; and if we have multiple different vnlog field definitions in the same program (as in this example), then the different writers must live in different source files. The compiler will barf if you try to #include two different vnlog_fields_....h files in the same source.

More APIs:

vnlog_printf(...) and vnlog_printf_ctx(ctx, ...) write to a pipe like printf() does. This exists for comments.

vnlog_clear_fields_ctx(ctx, do_free_binary): Clears out the data in a context and makes it ready to be used for the next record. It is rare for the user to have to call this manually. The most common case is handled automatically (clearing out a context after emitting a record). One area where this is useful is when making a copy of a context:

struct vnlog_context_t ctx1;
// .... do stuff with ctx1 ... add data to it ...

struct vnlog_context_t ctx2 = ctx1;
// ctx1 and ctx2 now both have the same data, and the same pointers to
// binary data. I need to get rid of the pointer references in ctx1
vnlog_clear_fields_ctx(&ctx1, false);

vnlog_free_ctx(ctx):

Frees memory for a vnlog context. Do this before throwing the context away. Currently this is only needed for contexts that have binary fields, but it should be called for all contexts, just in case.

numpy interface

The built-in numpy.loadtxt and numpy.savetxt functions work well to read and write these files. For example, to write to standard output a vnlog with fields a, b and c:

numpy.savetxt(sys.stdout, array, fmt="%g", header="a b c")

Note that numpy automatically adds the # to the header. To read a vnlog from a file on disk, do something like

array = numpy.loadtxt('data.vnl')

These functions know that # lines are comments, but don't interpret anything as field headers. That's easy to do, so I'm not providing any helper libraries. I might do that at some point, but in the meantime, patches are welcome.
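For example, a small helper along those lines might look like this (an illustrative sketch, not part of vnlog):

import numpy as np

def loadvnl(path):
    # The legend is the first line starting with a single '#'
    # (not '##' or '#!'); it names the columns.
    names = None
    with open(path) as f:
        for line in f:
            if line.startswith("#") and not line.startswith(("##", "#!")):
                names = line.lstrip("#").split()
                break
    data = np.loadtxt(path)   # '#' lines are skipped as comments
    return names, data

# Usage: pick a column by name instead of by index
names, data = loadvnl("data.vnl")
temp1 = data[:, names.index("temp1")]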

Caveats and bugs

The tools that wrap GNU coreutils (vnl-sort, vnl-join, vnl-tail) are written specifically to work with the Linux kernel and the GNU coreutils. None of these have been tested with BSD tools or with non-Linux kernels, and I'm sure things don't just work. It's probably not too effortful to get that running, but somebody needs to at least bug me for that. Or better yet, send me nice patches :)

These tools are meant to be simple, so some things are hard requirements. A big one is that columns are whitespace-separated. There is no mechanism for escaping or quoting whitespace into a single field. I think supporting something like that is more trouble than it's worth.

Repository

https://github.com/dkogan/vnlog/

Author

Dima Kogan (dima@secretsauce.net)

License and copyright

This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version.

Copyright 2016-2017 California Institute of Technology

Copyright 2017-2018 Dima Kogan (dima@secretsauce.net)

b64_cencode.c comes from cencode.c in the libb64 project. It is written by Chris Venter (chris.venter@gmail.com) who placed it in the public domain. The full text of the license is in that file.

Dima Kogan http://notes.secretsauce.net Dima Kogan

Dustin Kirkland: 10 Amazing Years of Ubuntu and Canonical

Planet Ubuntu - Fri, 16/02/2018 - 6:12 PM
February 2008, Canonical's office in Lexington, MA

10 years ago today, I joined Canonical, on the very earliest version of the Ubuntu Server Team!
And in the decade since, I've had the tremendous privilege to work with so many amazing people, and the opportunity to contribute so much open source software to the Ubuntu ecosystem.
Marking the occasion, I've reflected about much of my work over that time period and thought I'd put down in writing a few of the things I'm most proud of (in chronological order)... Maybe one day, my daughters will read this and think their daddy was a real geek :-)

1. update-motd / motd.ubuntu.com (September 2008)

Throughout the history of UNIX, the "message of the day" was always manually edited and updated by the local system administrator. Until Ubuntu's message-of-the-day. In fact, I received an email from Dennis Ritchie and Jon "maddog" Hall, confirming this, in April 2010. This started as a feature request for the Landscape team, but has turned out to be tremendously useful and informative to all Ubuntu users. Just last year, we launched motd.ubuntu.com, which provides even more dynamic information about important security vulnerabilities and general news from the Ubuntu ecosystem. Mathias Gug helped me with the design and publication.

2. manpages.ubuntu.com (September 2008)

This was the first public open source project I worked on, in my spare time at Canonical. I had a local copy of the Ubuntu archive and I was thinking about what sorts of automated jobs I could run on it. So I wrote some scripts that extracted the manpages out of each one, formatted them as HTML, and published them into a structured set of web directories. 10 years later, it's still up and running, serving thousands of hits per day. In fact, this was one of the ways we were able to shrink the Ubuntu minimal image, by removing the manpages, since they're readable online. Colin Watson and Kees Cook helped me with the initial implementation, and Matthew Nuzum helped with the CSS and Ubuntu theme in the HTML.

3. Byobu (December 2008)

If you know me at all, you know my passion for the command line UI/UX that is "Byobu". Byobu was born as the "screen-profiles" project, over lunch at Google in Mountain View, in December of 2008, at the Ubuntu Developer Summit. Around the lunch table, several of us (including Nick Barcet, Dave Walker, Michael Halcrow, and others) shared our tips and tricks from our own ~/.screenrc configuration files. In Cape Town, February 2010, at the suggestion of Gustavo Niemeyer, I ported Byobu from Screen to Tmux. Since Ubuntu Servers don't generally have GUIs, Byobu is designed to be a really nice interface to the Ubuntu command line environment.

4. eCryptfs / Ubuntu Encrypted Home Directories (October 2009)

I was familiar with eCryptfs from its inception in 2005, in the IBM Linux Technology Center's Security Team, sitting next to Michael Halcrow who was the original author. When I moved to Canonical, I helped Michael maintain the userspace portion of eCryptfs (ecryptfs-utils) and I shepherded it into Ubuntu. eCryptfs was super powerful, with hundreds of options and supported configurations, but all of that proved far too difficult for users at large. So I set out to simplify it drastically, with an opinionated set of basic defaults. I started with a simple command to mount a "Private" directory inside of your home directory, where you could stash your secrets. A few months later, on a long flight to Paris, I managed to hack a new PAM module, pam_ecryptfs.c, that actually encrypted your entire home directory! This was pretty revolutionary at the time -- predating Apple's FileVault or Microsoft's Bitlocker, even. Today, tens of millions of Ubuntu users have used eCryptfs to secure their personal data.
I worked closely with Tyler Hicks, Kees Cook, Jamie Strandboge, Michael Halcrow, Colin Watson, and Martin Pitt on this project over the years.

5. ssh-import-id (March 2010)

With the explosion of virtual machines and cloud instances in 2009 / 2010, I found myself constantly copying public SSH keys around. Moreover, given Canonical's globally distributed nature, I also regularly found myself asking someone for their public SSH keys, so that I could give them access to an instance, perhaps for some pair programming or assistance debugging. As it turns out, everyone I worked with had a Launchpad.net account, and had their public SSH keys available there. So I created (at first) a simple shell script to securely fetch and install those keys. Scott Moser helped clean up that earliest implementation. Eventually, I met Casey Marshall, who helped rewrite it entirely in Python. Moreover, we contacted the maintainers of GitHub, and asked them to expose user public SSH keys by the API -- which they did! Now, ssh-import-id is integrated directly into Ubuntu's new subiquity installer and used by many other tools, such as cloud-init and MAAS.

6. Orchestra / MAAS (August 2011)

In 2009, Canonical purchased 5 Dell laptops, which was the Ubuntu Server team's first "cloud". These laptops were our very first lab for deploying and testing Eucalyptus clouds. I was responsible for those machines at my house for a while, and I automated their installation with PXE, TFTP, DHCP, DNS, and a ton of nasty debian-installer preseed data. That said -- it worked! As it turned out, Scott Moser and Mathias Gug had both created similar setups at their houses for the same reason. I was mentoring a new hire at Canonical, named Andres Rodriguez at the time, and he took over our part-time hacks and we worked together to create the Orchestra project. Orchestra itself was short-lived. It was severely limited by Cobbler as a foundation technology. So the Orchestra project was killed by Canonical. But, six months later, a new project was created, based on the same general concept -- physical machine provisioning at scale -- with an entire squad of engineers led by... Andres Rodriguez :-) MAAS today is easily one of the most important projects in the Ubuntu ecosystem and one of the most successful products in Canonical's portfolio.

7. pollinate / pollen / entropy.ubuntu.com (February 2014)

In 2013, I set out to secure Ubuntu at large from a set of attacks stemming from insufficient entropy at first boot. This was especially problematic in virtual machine instances, in public clouds, where every instance is, by design, exactly identical to many others. Moreover, the first thing that instance does is usually... generate SSH keys. This isn't hypothetical -- it's quite real. Raspberry Pis running Debian were deemed susceptible to this exact problem in November 2015. So I designed and implemented a client (a shell script that runs at boot, and fetches some entropy from one to many sources), as well as a high-performance server (golang). The client is the 'pollinate' script, which runs on the first boot of every Ubuntu server, and the server is the cluster of physical machines processing hundreds of requests per minute at entropy.ubuntu.com. Many people helped review the design and implementation, including Kees Cook, Jamie Strandboge, Seth Arnold, Tyler Hicks, James Troup, Scott Moser, Steve Langasek, Gustavo Niemeyer, and others.

8. The Orange Box (May 2014)

In December of 2011, in my regular 1:1 with my manager, Mark Shuttleworth, I told him about these new "Intel NUCs", which I had bought and placed around my house. I had 3, each of which was running Ubuntu, and attached to a TV around the house, as a media player (music, videos, pictures, etc). In their spare time, though, they were OpenStack Nova nodes, capable of running a couple of virtual machines. Mark immediately asked, "How many of those could you fit into a suitcase?" Within 24 hours, Mark had reached out to the good folks at TranquilPC and introduced me to my new mission -- designing the Orange Box. I worked with the Tranquil folks through Christmas, and we took our first delivery of 5 of these boxes in January of 2014. Each chassis held 10 little Intel NUC servers, and a switch, as well as a few peripherals. Effectively, it's a small data center that travels. We spent the next 4 months working on the hardware under wraps and then unveiled them at the OpenStack Summit in Atlanta in May 2014. We've gone through a couple of iterations on the hardware and software over the last 4 years, and these machines continue to deliver tremendous value, from live demos on the booth, to customer workshops on premises, or simply accelerating our own developer productivity by "shipping them a lab in a suitcase". I worked extensively with Dan Poler on this project, over the course of a couple of years.

9. Hollywood (December 2014)

Perhaps the highlight of my professional career came in October of 2016. Watching Saturday Night Live with my wife Kim, we were laughing at a skit that poked fun at another of my favorite shows, Mr. Robot. On the computer screen behind the main character, I clearly spotted Hollywood! Hollywood is just a silly, fun little project I created on a plane one day, mostly to amuse Kim. But now, it's been used in Saturday Night Live, NBC Dateline News, and an Experian TV commercial! Even Jess Frazelle created a Docker container.

10. petname / golang-petname / python-petname (January 2015)

From "warty warthog" to "bionic beaver", we've always had a focus on fun and user experience here in Ubuntu. How hard is it to talk to your colleague about your Amazon EC2 instance, "i-83ab39f93e"? Or your container "adfxkenw"? We set out to make something a little more user-friendly with our "petnames". Petnames are randomly generated "adjective-animal" names, which are easy to pronounce, spell, and remember. I curated and created libraries that are easily usable in Shell, Golang, and Python. With the help of colleagues like Stephane Graber and Andres Rodriguez, we now use these in many places in the Ubuntu ecosystem, such as LXD and MAAS.

If you've read this post, thank you for indulging me in a nostalgic little trip down memory lane!  I've had an amazing time designing, implementing, creating, and innovating with some of the most amazing people in the entire technology industry.  And here's to a productive, fun future!

Cheers,
:-Dustin

Xubuntu: Xubuntu 18.04 community wallpaper contest

Planet Ubuntu - Fri, 16/02/2018 - 5:17 PM

We’re on our way to the 18.04 LTS release and it’s time for another community wallpaper contest!

How to participate?

For a chance to win, submit your entry at contest.xubuntu.org.

Important dates
  • Start of submissions: Immediately
  • Submission deadline: March 15th, 2018
  • Announcement of selections: Late March

All dates are in UTC.

Contest terms

All submissions must adhere to the Terms and Guidelines, including specifics about subject matter, image resolution and attribution.

After the submission deadline, the Xubuntu team will pick 6 winners from all submissions for inclusion on the Xubuntu 18.04 ISO; the winning wallpapers will also be available to users of other Xubuntu versions as a xubuntu-community-wallpaper package. The winners will also receive some Xubuntu stickers.

Any questions?

Please join #xubuntu-devel on Freenode for assistance or email the Xubuntu developer mailing list if you have any problems with your submission.

Stephen Michael Kellat: Damage Control Report

Planet Ubuntu - Fri, 16/02/2018 - 4:39 AM

In no particular order:

  • There was another "partial government shutdown" of the federal government of the United States of America last Thursday. As a federal civil servant, I still rated an "essential-excepted" designation which required working without pay until the end of the crisis. President Trump could have solved the matter if anybody could have rousted him from bed at 0940Z on February 9th. That didn't happen. We had a "technical" shutdown that lasted two hours at the start of the working day with resolution at roughly 1300Z on February 9th. A good chunk of staff "technically" did not bothering to show up for duty when it was required and escaped any consequences.
  • Except for the Department of Defense, the remainder of the federal government of the United States of America remains without full-year appropriations for Fiscal Year 2018 which started on October 1, 2017. Appropriations are set to lapse once again on March 22, 2018. I've been given provisional approval for a vacation day on March 23rd but if we have another government shutdown that would be revoked and I would have to report to duty as "essential-excepted" personnel. Under current command guidance that designation lapses as of 0400Z on April 18, 2018. Chances remain pretty high this will happen again.
  • Donations are always accepted via PayPal although they are totally not tax-deductible. I've been trying to broaden the scope of the Domestic Mission Field Activity at West Avenue Church of Christ a bit. One area of interest is moving beyond just the outreach to one of the local nursing homes where we've been the main spiritual link for some of the residents for the past several months regardless of the denomination they're normally part of. Fortunately I'm not alone in conducting the Activity's functions.
  • I'm open to considering proposed transitions from the federal civil service and the data on LinkedIn is probably a good starting point if anybody wants to talk. My current job puts me at the forefront of seeing broken and shattered lives while I try to both protect the federal government's financial interests and also help meet the needs of callers. A change is needed. There is a limit to how much misery and suffering you end up seeing that you cannot help alleviate.
  • The house is still standing. We haven't lost anything due to wintry weather. With luck we'll be able to mount the VHF/UHF aerial, which is currently attached inside the garage to the underside of the roof, on top of the garage roof.
  • Being away for 12 hours per day for work plus commute time leaves little time for Xubuntu let alone Ubuntu MATE unless I give up sleeping. This long of a commute is a problem.
  • I am looking at edX MicroMasters as ways to jumpstart picking up the second graduate degree to be able to teach at the community college level. Beyond that, there is a program from Bowling Green State University as well as one at Thomas Edison State University in New Jersey and something at the Holden University Center if I am not feeling daring. I have one earned master's so an organized program leading to an accredited award from a US institution bearing at least 18 semester hours of postgraduate-level credit is the minimum sought.

This year has gotten off to a rocky start, but things are looking up.

Kubuntu General News: Plasma 5.12.1 bugfix update lands in backports PPA for Artful 17.10

Planet Ubuntu - Mër, 14/02/2018 - 9:35md

After the initial release of Plasma 5.12 was made available for Artful 17.10 via our backports PPA last week, we are pleased to say that the PPA has now been updated to the first bugfix release, 5.12.1.

The full changelog for 5.12.1 can be found here.

It includes fixes and polish for Discover and the desktop.

Also included is an update to the latest KDE Frameworks 5.43.

Upgrade instructions and caveats are as per last week’s blog post, which can be found here.

The Kubuntu team wishes users a happy experience with the excellent 5.12 LTS desktop, and thanks the KDE/Plasma team for such a wonderful desktop to package.

Andres Rodriguez: MAAS 2.4.0 alpha 1 & python-libmaas 0.6.0 released!

Planet Ubuntu - Mër, 14/02/2018 - 8:44md
Hello MAASters! I’m happy to announce that MAAS 2.4.0 alpha 1 and python-libmaas 0.6.0 have now been released and are available for Ubuntu Bionic.

MAAS Availability

MAAS 2.4.0 alpha 1 is available in the Bionic -proposed archive or in the following PPA: ppa:maas/next

Python-libmaas Availability

Libmaas is available in the Ubuntu Bionic archive, or you can download the source from: https://github.com/maas/python-libmaas/releases

MAAS 2.4.0 (alpha1)

Important announcements

Dependency on tgt (iSCSI) has now been dropped

Starting from MAAS 2.3, the way MAAS runs ephemeral environments and performs deployments was changed away from using iSCSI. Instead, we introduced the ability to do the same using a squashfs image. With that, we completely removed the requirement for having tgt at all, but we didn’t drop the dependency in 2.3. As of 2.4, however, tgt has now been completely removed.

Dependency on apache2 has now been dropped in the debian packages

Starting from MAAS 2.0, MAAS made the UI available on port 5240 and deprecated the use of port 80. However, to avoid breaking users upgrading from the previous LTS, MAAS continued to depend on apache2 to provide a reverse proxy allowing users to connect via port 80.

The MAAS snap changed that behavior, no longer providing access to MAAS via port 80. In order to keep the debian package consistent with the snap, starting from MAAS 2.4 it no longer depends on apache2 to provide a reverse proxy capability on port 80.

Python libmaas (0.6.0) now available in the Ubuntu Archive

I’m happy to announce that the new MAAS Client Library is now available in the Ubuntu Archives for Bionic. Libmaas is an asyncio based client library that provides a nice interface to interact with MAAS. More details below.

New Features & Improvements Machine Locking

MAAS now adds the ability to lock machines, which prevents users from performing actions that could change a machine's state. This gives MAAS a mechanism to prevent potentially catastrophic actions: for example, mistakenly powering off or releasing machines that could bring workloads down.

Audit logging

MAAS 2.4 now allows administrators to audit users' actions with the introduction of audit logging. The audit logs are available via the MAAS CLI/API, giving administrators a centralized location to access them.

Documentation is in the process of being published. For raw access please refer to the following link:

https://github.com/CanonicalLtd/maas-docs/pull/766/commits/eb05fb5efa42ba850446a21ca0d55cf34ced2f5d

Commissioning Harness – Supporting firmware upgrade and hardware specific scripts

The commissioning harness has been expanded with various improvements to help administrators write their own firmware upgrade and hardware-specific scripts. These improvements address several of the challenges administrators face when performing such tasks at scale, and include the following (an illustrative script sketch follows the lists below):

  • Ability to auto-select all the firmware upgrade/storage hardware changes (API only, UI will be available soon)

  • Ability to run scripts only for the hardware they are intended to run on.

  • Ability to reboot the machine while in the commissioning environment without disrupting the commissioning process.

This allows administrators to:

  • Create a hardware-specific script by declaring which machines it needs to run on, specifying the PCI ID, modalias, vendor or model of the machine or device.

  • Create firmware upgrade scripts that require a reboot before the machine finishes the commissioning process, by describing this in the script’s metadata.

  • Define where the script can obtain proprietary firmware and/or proprietary tools to perform any of the operations required.
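
To make this concrete, here is a rough sketch of what such a hardware-specific commissioning script could look like. The embedded metadata block and its field names (for_hardware, may_reboot) are assumptions based on the features listed above, not a verified schema; consult the MAAS 2.4 documentation for the exact format:

#!/usr/bin/env python3
# --- Start MAAS 1.0 script metadata ---
# name: example_nic_firmware_upgrade
# title: Example NIC firmware upgrade (illustrative only)
# description: Sketch of a hardware-specific commissioning script.
# script_type: commissioning
# for_hardware: pci:8086:1521
# may_reboot: True
# --- End MAAS 1.0 script metadata ---
# The metadata above is illustrative; the field names are assumptions.

import subprocess
import sys

def main():
    # A real script would fetch the vendor's firmware tool from a location
    # the administrator controls, apply the upgrade, and possibly reboot.
    result = subprocess.run(["echo", "pretend firmware upgrade"], check=False)
    sys.exit(result.returncode)

if __name__ == "__main__":
    main()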

Minor improvements – Gather information about BIOS & firmware

MAAS now gathers more information about the underlying system, such as the Model, Serial, BIOS and firmware information of a machine (where available). It also gathers this information for storage devices and network interfaces.

MAAS Client Library (python-libmaas)

New upstream release – 0.6.0

A new upstream release is now available in the Ubuntu Archive for Bionic. The new release includes the following changes (a short usage sketch follows the list):

  • Add/read/update/delete storage devices attached to machines.

  • Configure partitions and mount points

  • Configure Bcache

  • Configure RAID

  • Configure LVM
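
As a rough illustration of the client library, here is a minimal sketch. It is only a sketch: the connect() helper and the attribute names are taken from the upstream examples as I remember them, so treat them as assumptions and check the libmaas documentation:

# Sketch of python-libmaas usage; the endpoint and key are placeholders.
from maas.client import connect

# The API key comes from the MAAS UI or the region controller.
client = connect(
    "http://maas.example.com:5240/MAAS/",
    apikey="<key>:<token>:<secret>",
)

# List machines and the block devices that 0.6.0 lets us manage.
for machine in client.machines.list():
    print(machine.hostname, machine.status)
    for device in machine.block_devices:  # assumed attribute name
        print("  ", device.name, device.size)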

Known issues & work arounds LP: #1748712  – 2.4.0a1 upgrade failed with old node event data

It has been reported that an upgrade to MAAS 2.4.0a1 failed due to old data from a non-existent node stored in the database. This could have been caused by an older devel version of MAAS which would have left an entry in the node event table. A workaround is provided in the bug report.

If you hit this issue, please update the bug report immediately so the MAAS developers can investigate.

Bug fixes

Please refer to the following for all bug fixes in this release.

https://launchpad.net/maas/+milestone/2.4.0alpha1

What is the best online dating site and the best way to use it?

Planet Debian - Mër, 14/02/2018 - 6:25md

Somebody recently shared this with me: this is what happens when you attempt to access Parship, an online dating site, from the anonymous Tor Browser.

Experian is basically a private spy agency. Their website boasts about how they can:

  • Know who your customers are regardless of channel or device
  • Know where and how to reach your customers with optimal messages
  • Create and deliver exceptional experiences every time

Is that third objective, an "exceptional experience", what you were hoping for with their dating site honey trap? You are out of luck: you are not the customer, you are the product.

When the Berlin wall came down, people were horrified at what they found in the archives of the Stasi. Don't companies like Experian and Facebook gather far more data than this?

So can you succeed with online dating?

There are only three strategies that are worth mentioning:

  • Access sites you can't trust (which includes all dating sites, whether free or paid for) using anonymous services like Tor Browser and anonymous email addresses. Use fake photos and fake all other data. Don't send your real phone number through the messaging or chat facility in any of these sites because they can use that to match your anonymous account to a real identity: instead, get an extra SIM card that you pay for and top up with cash. One person told me they tried this for a month as an experiment, expediently cutting and pasting a message to each contact to arrange a meeting for coffee. At each date they would give the other person a card that apologized for their completely fake profile photos and offered to start over now that they could communicate beyond the prying eyes of the corporation.
  • Join online communities that are not primarily about dating and if a relationship comes naturally, it is a bonus.
  • If you really care about your future partner and don't want your photo to be a piece of bait used to exploit and oppress them, why not expand your real-world activities?
Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Jo Shields: Packaging is hard. Packager-friendly is harder.

Planet Ubuntu - Mër, 14/02/2018 - 12:21md

Releasing software is no small feat, especially in 2018. You could just upload your source code somewhere (a Git, Subversion, CVS, etc, repo – or tarballs on Sourceforge, or whatever), but it matters what that source looks like and how easy it is to consume. What does the required build environment look like? Are there any dependencies on other software, and if so, which versions? What if the versions don’t match exactly?

Most languages feature solutions to the build environment dependency – Ruby has Gems, Perl has CPAN, Java has Maven. You distribute a manifest with your source, detailing the versions of the dependencies which work, and users who download your source can just use those.
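
In the Python world, for example, the same idea looks roughly like this (a sketch with made-up package names and version pins):

# Sketch of a Python dependency manifest: a setup.py with pinned versions.
from setuptools import setup

setup(
    name="example-app",            # illustrative project name
    version="1.0.0",
    install_requires=[
        "requests>=2.18,<3",       # any 2.18+ release of requests works
        "click==6.7",              # the exact version tested upstream
    ],
)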

Then, however, we have distributions. If openSUSE or Debian wants to include your software, then it’s not just a case of calling into CPAN during the packaging process – distribution builds need to be repeatable, and work offline. And it’s not feasible for packagers to look after 30 versions of every library – generally a distribution will contain 1-3 versions of a given library, and all software in the distribution will be altered one way or another to build against their version of things. It’s a long, slow, arduous process.

Life is easier for distribution packagers, the more the software released adheres to their perfect model – no non-source files in the distribution, minimal or well-formed dependencies on third parties, swathes of #ifdefs to handle changes in dependency APIs between versions, etc.

Problem is, this can actively work against upstream development.

Developers love npm or NuGet because it’s so easy to consume – asking them to abandon those tools is a significant impediment to developer flow. And it doesn’t scale – maybe a friendly upstream can drop one or two dependencies. But 10? 100? If you’re consuming a LOT of packages via the language package manager, as a developer, being told “stop doing that” isn’t just going to slow you down – it’s going to require a monumental engineering effort. And there’s the other side effect – moving from Yarn or Pip to a series of separate download/build/install steps will slow down CI significantly – and if your project takes hours to build as-is, slowing it down is not going to improve the project.

Therein lies the rub. When a project has limited developer time allocated to it, spending that time on an effort which will literally make development harder and worse, for the benefit of distribution maintainers, is a hard sell.

So, a concrete example: MonoDevelop. MD in Debian is pretty old. Why isn’t it newer? Well, because the build system moved away from a packager ideal so far it’s basically impossible at current community & company staffing levels to claw it back. Build-time dependency downloads went from a half dozen in the 5.x era (somewhat easily patched away in distributions) to over 110 today. The underlying build system changed from XBuild (Mono’s reimplementation of Microsoft MSBuild, a build system for Visual Studio projects) to real MSbuild (now FOSS, but an enormous shipping container of worms of its own when it comes to distribution-shippable releases, for all the same reasons & worse). It’s significant work for the MonoDevelop team to spend time on ensuring all their project files work on XBuild with Mono’s compiler, in addition to MSBuild with Microsoft’s compiler (and any mix thereof). It’s significant work to strip out the use of NuGet and Paket packages – especially when their primary OS, macOS, doesn’t have “distribution packages” to depend on.

And then there’s the integration testing problem. When a distribution starts messing with your dependencies, all your QA goes out the window – users are getting a combination of literally hundreds of pieces of software which might carry your app’s label, but you have no idea what the end result of that combination is. My usual anecdote here is when Ubuntu shipped Banshee built against a new, not-regression-tested version of SQLite, which caused a huge performance regression in random playback. When a distribution ships a broken version of an app with your name on it – broken by their actions, because you invested significant engineering resources in enabling them to do so – users won’t blame the distribution, they’ll blame you.

Releasing software is hard.


Sean Davis: Exo 0.12.0 Stable Release

Planet Ubuntu - Mër, 14/02/2018 - 12:05md

With full GTK+ 2 and 3 support and numerous enhancements, Exo 0.12.0 provides a solid development base for new and refreshed Xfce applications.

What’s New?

Since this is the first stable release in nearly 2.5 years, I am going to provide a quick summary of the changes since version 0.10.7, released September 13, 2015.

New Features

GTK Extensions

Helpers
  • WebBrower: Added Brave, Google Chrome, and Vivaldi
  • MailReader: Added Geary, dropped Opera Mail (no longer available for Linux)
Utilities
  • exo-csource: Added a new --output flag to write the generated output to a file
  • exo-helper: Added a new --query flag to determine the preferred application
ICONS
  • Replaced non-standard gnome-* icons
  • Replaced non-existent “missing-image” icon
BUILD CHANGES
  • Build requirements were updated. Exo now requires GTK+ 2.24, GTK+ 3.22, GLib 2.42, libxfce4ui 4.12, and libxfce4util 4.12. Building GTK+ 3 libraries is not optional.
  • Default debug setting is now “yes” instead of “full”.
DOCUMENTATION UPDATES
  • Added missing per-release API indices
  • Resolved undocumented symbols (100% symbol coverage)
  • Updated project documentation (HACKING, README, THANKS)
Release Notes
  • The full release notes can be found here.
  • The full change log can be found here.
Downloads

The latest version of Exo can always be downloaded from the Xfce archives. Grab version 0.12.0 from the below link.

https://archive.xfce.org/src/xfce/exo/0.12/exo-0.12.0.tar.bz2

  • SHA-256: 64b88271a37d0ec7dca062c7bc61ca323116f7855092ac39698c421a2f30a18f
  • SHA-1: 364a9aaa1724b99fe33f46b93969d98e990e9a1f
  • MD5: 724afcca224f5fb22b510926d2740e52

David Tomaschik: Preparing for Penetration Testing with Kali Linux

Planet Ubuntu - Mër, 14/02/2018 - 9:00pd
The Penetration Testing with Kali Linux (PWK) course is one of the most popular information security courses, culminating in a hands-on exam for the Offensive Security Certified Professional certification. It provides a hands-on learning experience for those looking to get into penetration testing or other areas of offensive security. These are some of the things you might want to know before attempting the PWK class or the OSCP exam.

Read more...

Using VLC to stream bittorrent sources

Planet Debian - Mër, 14/02/2018 - 8:00pd

A few days ago, a new major version of VLC was announced, and I decided to check whether it now supported streaming over bittorrent and webtorrent. Bittorrent is one of the most efficient ways to distribute large files on the Internet, and Webtorrent is a variant of Bittorrent using WebRTC as its transport channel, allowing web pages to stream and share files using the same technique. The network protocols are similar but not identical, so a client supporting one of them cannot talk to a client supporting the other. I was a bit surprised by what I discovered when I started to look. The release notes did not help answer this question, so I started searching the web. I found several news articles from 2013, most of them tracing the news back to Torrentfreak ("Open Source Giant VLC Mulls BitTorrent Streaming Support"), about an initiative to pay someone to create a VLC patch for bittorrent support.

To figure out what happened with this initiative, I headed over to the #videolan IRC channel and asked if there were any bug or feature request tickets tracking such a feature. I got an answer from lead developer Jean-Baptiste Kempf, telling me that there was a patch but neither he nor anyone else knew where it was. So I searched a bit more, and came across an independent VLC plugin adding bittorrent support, created by Johan Gunnarsson in 2016/2017. Again according to Jean-Baptiste, this is not the patch he was talking about.

Anyway, to test the plugin, I made a working Debian package from the git repository, with some modifications. After installing this package, I could stream videos from The Internet Archive using VLC commands like this:

vlc https://archive.org/download/LoveNest/LoveNest_archive.torrent

The plugin is supposed to handle magnet links too, but since The Internet Archive does not have magnet links and I did not want to spend time tracking down another source, I have not tested it. It can take quite a while before the video starts playing, without any indication from VLC of what is going on. It took 10-20 seconds when I measured it. Sometimes the plugin seems unable to find the correct video file to play, and shows the metadata XML file name in the VLC status line instead. I have no idea why.

I have created a request for a new package in Debian (RFP) and asked if the upstream author is willing to help make this happen. Now we wait to see what comes out of this. I do not want to maintain a package that is not maintained upstream, nor do I really have time to maintain more packages myself, so I might leave it at this. But I really hope someone steps up to do the packaging, and I hope upstream is still maintaining the source. If you want to help, please update the RFP request or the upstream issue.

I have not found any traces of webtorrent support for VLC.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Petter Reinholdtsen http://people.skolelinux.org/pere/blog/ Petter Reinholdtsen - Entries tagged english
