
David Tomaschik: Course Review: Software Defined Radio with HackRF

Planet Ubuntu - Fri, 14/09/2018 - 9:00am

Over the past two days, I had the opportunity to attend Michael Ossmann’s course “Software Defined Radio with HackRF” at Toorcon XX. This is a course I’ve wanted to take for several years, and I’m extremely happy that I finally had the chance. I wanted to write up a short review for others considering taking the course.

Course Material

The material in the course focuses predominantly on the basics of Software Defined Radio and Digital Signal Processing. This includes the math necessary to understand how the DSP handles the signal. The math is presented in a practical, rather than academic, way. It’s not a math class, but a review of the necessary basics, mostly of complex mathematics and a bit of trigonometry. (My high school teachers are now vindicated. I did use that math again.) You don’t need the math background coming in, but you do need to be prepared to think about math during the class. Extracting meaningful information from the ether is, it turns out, an exercise in mathematics.
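
To make that concrete, here is a tiny NumPy sketch of the two ideas the math is serving (my own illustration, not course material; the sample rate and tone frequency are arbitrary values picked for the example): a signal is handled as complex I/Q samples, and multiplying by a complex exponential (a mixer) shifts its frequency.

import numpy as np

fs = 1_000_000                      # sample rate in Hz (arbitrary for this example)
t = np.arange(4096) / fs            # time axis for 4096 samples

# A tone 100 kHz above the tuned centre frequency, as complex I/Q samples
tone = np.exp(2j * np.pi * 100_000 * t)

# "Mixing": multiplying by a -100 kHz complex exponential shifts the tone to 0 Hz
mixed = tone * np.exp(-2j * np.pi * 100_000 * t)

print(np.argmax(np.abs(np.fft.fft(tone))))    # FFT bin of the original tone (~410)
print(np.argmax(np.abs(np.fft.fft(mixed))))   # 0: after mixing, the tone sits at DC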

There’s a lot of discussion of frequencies, frequency mixers, and how frequency, amplitude, and phase are related. Also, despite more than 20 years as an amateur radio operator, I finally understand dB properly. It’s possible to get a reasonable working understanding without having to do logarithms:

  • +3 dB = x2
  • +10 dB = x10
  • -3 dB = 1/2
  • -10 dB = 1/10
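
A quick numerical check of these rules of thumb (my own illustration, not from the course), using the definition that a power ratio corresponds to 10 * log10(ratio) decibels:

# Inverting dB = 10 * log10(ratio) gives ratio = 10 ** (dB / 10)
for db in (3, 10, -3, -10):
    print(f"{db:+d} dB -> x{10 ** (db / 10):.3f}")

# +3 dB -> x1.995 (~2)    +10 dB -> x10.000
# -3 dB -> x0.501 (~1/2)  -10 dB -> x0.100 (1/10)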

In terms of DSP, he demonstrated extracting signals of interest, clock recovery, and other techniques necessary for understanding digital signals. It really just scratches the surface, but is enough to get a basic signal understood.

From a security point of view, there was only a single system that we “attacked” in the class. I was hoping for a little bit more of this, but given the detail in the other content, I am not disappointed.

Mike pointed out that the course primarily focuses on getting signals from the air to a digital series of 0 and 1 bits, and then leaves the remainder to tools like Python for adding meaning and interpretation to the bits. While I understand this (and, admittedly, at that point it’s similar to decoding an unknown network protocol), I would still like to have gone into more detail.

Course Style

At the very beginning of the course, Mike makes it clear that no two classes he teaches are exactly the same. He adapts the course to the experience and background of each class, and that was very evident from our small group this week. With such a small class, it became more like a guided conversation than a formal class.

Overall, the course was very interactive, with lots of student questions, as well as “Socratic Method” questions from the instructor. This was punctuated with a number of hands-on exercises. One of the best parts of the hands-on exercises is that Mike provides a flash drive with a preconfigured Ubuntu Linux installation containing all the tools that are needed for the course. This allows students to boot into a working environment, rather than having to play around with tool installation or virtual machine settings. (We were, in fact, warned that VMs often do not play well with SDR, because the USB forwarding has overhead resulting in lost samples.)

Mike made heavy use of the poster pad in the room, diagramming waveforms and information about the processes involved in the SDR architecture and the DSP done in the computer. This works well because he customizes the diagrams to explain each part and answer student questions. It also feels much more engaging than just pointing at slides. In fact, the only thing displayed on the projector is Mike’s live screen from his laptop, displaying things like the work he’s doing in GNURadio Companion and other pieces of software.

If you have devices you’re interested in studying, you should bring them along with you. If time permits, Mike tries to work these devices into the analysis during the course.

Tools Used

Additional Resources

Opinions & Conclusion

This was a great class that I really enjoyed. However, I really wish there had been more emphasis on how you decode and interpret unknown signals, such as a discussion of common packet types over RF, or tools for signal analysis that could be built either in Python or in GNURadio. Perhaps he (or someone) could offer an advanced class that focuses on the signal analysis, interpretation, and “spoofing” portions of the problem of attacking RF-based systems.

If you’re interested in doing assessments of physical devices, or into radio at all, I highly recommend this course. Mike obviously really knows the material, and getting a HackRF One is a pretty nice bonus. Watching the videos on his website will help you prepare for the math, but will also result in a good portion of the content being duplicated in the course. I’m not disappointed that I did that, and I still feel that I more than made good use of the time in the course, but it is something to be aware of.

Gaming: Rise of the Tomb Raider

Planet Debian - Fri, 14/09/2018 - 4:07am

Over the last weekend I finally finished Rise of the Tomb Raider. As I wrote exactly four months ago when I started the game, I am a complete newbie to this kind of game, and was blown away by the visual quality and great gameplay.

I was really surprised by how huge an area I had to explore over those four months. Many of the areas feature really excellent nature scenery; some have a depressingly dark and solemn atmosphere.

Another thing I learned is that the Challenge Tombs – some kind of puzzle challenges – haven’t been that important in previous games. I enjoyed these puzzles much more than the fighting sequences (also because I am really bad at combat and have to die so many times before I succeed!).

Lots of sliding down on ropes, jumping, running, diving, often into the unknown.

In the last part of the game, when Lara enters the city underneath the glacier, one is reminded of the scene where Frodo tries to enter Mordor, seeing all the dark riders and troops.

The final approach starts, there is still a long way (and many strange creatures to fight), but at least the final destination is in sight!

I finished the game with 100%, because I went back to all the areas and used Stella’s Tomb Raider Site walkthrough to help me find all the items. I think it is practically impossible to find all the items on your own, without help, within a lifetime. This is especially funny because in one of the trailers one of the developers mentions that they reckon on 15h of gameplay, and 30h if you want to finish at 100%. It took me 58h (!) to finish with 100% … and that with a walkthrough!!!

Anyway, tomorrow Shadow of the Tomb Raider will be released, and I could also start the first game of the series, Tomb Raider, but I got a bit worn out by all the combat activity and decided to concentrate on a hard-core puzzle game, The Witness, which features loads and loads of puzzles, building from simple to complex and combining them to create a very interesting game. Now I only need the time …

Norbert Preining https://www.preining.info/blog There and back again

Stephen Kelly: API Changes in Clang

Planet Ubuntu - Fri, 14/09/2018 - 12:45am

I’ve started contributing to Clang, in the hope that I can improve the API for tooling. This will eventually mean changes to the C++ API of Clang, the CMake buildsystem, and new features in the tooling. Hopefully I’ll remember to blog about changes I make.

The Department of Redundancy Department

I’ve been implementing custom clang-tidy checks and have become quite familiar with the AST Node API. Because of my background in Qt, I was immediately disoriented by some API inconsistency. Certain API classes had both getStartLoc and getLocStart methods, as well as both getEndLoc and getLocEnd etc. The pairs of methods return the same content, so at least one set of them is redundant.

I’m used to working on stable library APIs, but Clang is different in that it offers no API stability guarantees at all. As an experiment, we staggered the introduction of new API and removal of old API. I ended up replacing the getStartLoc and getLocStart methods with getBeginLoc for consistency with other classes, and replaced getLocEnd with getEndLoc. Both old and new APIs are in the Clang 7.0.0 release, but the old APIs are already removed from Clang master. Users of the old APIs should port to the new ones at the next opportunity as described here.

Wait a minute, Where’s me dump()er?

Clang AST classes have a dump() method which is very useful for debugging. Several tools shipped with Clang are based on dumping AST nodes.

The SourceLocation type also provides a dump() method which outputs the file, line and column corresponding to a location. The problem with it though has always been that it does not include a newline at the end of the output, so the output gets lost in noise. This 2013 video tutorial shows the typical developer experience using that dump method. I’ve finally fixed that in Clang, but it did not make it into Clang 7.0.0.

In the same vein, I also added a dump() method to the SourceRange class. This prints out locations in an angle-bracket format which shows only what changed between the beginning and end of the range.

Let it bind

When writing clang-tidy checks using AST Matchers, it is common to factor out intermediate variables for re-use or for clarity in the code.

auto valueMethod = cxxMethodDecl(hasName("value"));

Finder->addMatcher(valueMethod.bind("methodDecl"));

clang-query has an analogous way to create intermediate matcher variables, but binding to them did not work. As of my recent commit, it is possible to create matcher variables and bind them later in a matcher:

let valueMethod cxxMethodDecl(hasName("value"))
match valueMethod.bind("methodDecl")
match callExpr(callee(valueMethod.bind("methodDecl"))).bind("methodCall")

Preload your Queries

Staying on the same topic, I extended clang-query with a --preload option. This allows starting clang-query with some commands already invoked, and then continue using it as a REPL:

bash$ cat cmds.txt
let valueMethod cxxMethodDecl(hasName("value"))

bash$ clang-query --preload cmds.txt somefile.cpp
clang-query> match valueMethod.bind("methodDecl")

Match #1:

somefile.cpp:4:2: note: "methodDecl" binds here
  void value();
  ^~~~~~~~~~~~

1 match.

Previously, the -c option only made it possible to run commands from a file without also getting a REPL. The --preload option with the REPL is useful when experimenting with matchers and having to restart clang-query regularly. This happens a lot when modifying code to examine changes to AST nodes.

Enjoy!

20180913-reproducible-builds-paris-meeting

Planet Debian - Thu, 13/09/2018 - 11:54pm
Reproducible Builds 2018 Paris meeting

Many lovely people interested in reproducible builds will meet again at a three-day event in Paris, where we will welcome both previous attendees and new projects alike! We hope to discuss, connect and exchange ideas in order to grow the reproducible builds effort, and we would be delighted if you'd join! And this is the space we'll bring to life.

And whilst the exact content of the meeting will be shaped by the participants, the main goals will include:

  • Updating & exchanging the status of reproducible builds in various projects.
  • Improving collaboration both between and inside projects.
  • Expanding the scope and reach of reproducible builds to more projects.
  • Working and hacking together on solutions.
  • Brainstorming designs for tools enabling end-users to get the most benefits from reproducible builds.
  • Discussing how reproducible builds will be usable and meaningful to users and developers alike.

Please reach out if you'd like to participate in hopefully interesting, inspiring and intense technical sessions about reproducible builds and beyond!

Holger Levsen http://layer-acht.org/thinking/ Any sufficiently advanced thinking is indistinguishable from madness

What is the difference between moderation and censorship?

Planet Debian - Thu, 13/09/2018 - 11:09pm

FSFE fellows recently started discussing my blog posts about Who were the fellowship? and An FSFE Fellowship Representative's dilemma.

Fellows making posts in support of reform have reported their emails were rejected. Some fellows had CC'd me on their posts to the list and these posts never appeared publicly. These are some examples of responses received by a fellow trying to post on the list:

The list moderation team decided now to put your email address on moderation for one month. This is not censorship.

One fellow forwarded me a rejected message to look at. It isn't obscene, doesn't attack anybody and doesn't violate the code of conduct. The fellow writes:

+1 for somebody to answer the original questions with real answers
-1 for more character assassination

The censors (sorry, moderators) responded to that fellow:

This message is in constructive and unsuited for a public discussion list.

Why would moderators block something like that? In the same thread, they allowed some very personal attack messages in favour of existing management.

Moderation + Bias = Censorship

Even links to the public list archives are giving errors and people are joking that they will only work again after the censors' PR team change all the previous emails to comply with the censorship communications policy exposed in my last blog.

Fellows have started noticing that the blog of their representative is not being syndicated on Planet FSFE any more.

Some people complained that my last blog didn't provide evidence to justify my concerns about censorship. I'd like to thank FSFE management for helping me respond to that concern so conclusively with these heavy-handed actions against the community over the last 48 hours.

The collapse of the fellowship described in my earlier blog has been caused by FSFE management decisions. The solutions need to come from the grass roots. A totalitarian crackdown on all communications is a great way to make sure that never happens.

FSFE claims to be a representative of the free software community in Europe. Does this behaviour reflect how other communities operate? How successful would other communities be if they suffocated ideas in this manner?

This is what people see right now when trying to follow links to the main FSFE Discussion list archive: an error page.

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Debian LTS work, August 2018

Planet Debian - Thu, 13/09/2018 - 1:54pm

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 8 hours from July. I worked only 5 hours and therefore carried over 18 hours to September.

I prepared and uploaded updates to the linux-4.9 (DLA 1466-1, DLA 1481-1) and linux-latest-4.9 packages.

Ben Hutchings https://www.decadent.org.uk/ben/blog Better living through software

TeX Live contrib updates

Planet Debian - Thu, 13/09/2018 - 4:56am

It is now more than a year since I took over tlcontrib from Taco and started providing it at the TeX Live contrib repository. It now serves the old TeX Live 2017 as well as the current TeX Live 2018, and since last year the number of packages has increased from 52 to 70.

Recent changes include pTeX support packages for non-free fonts and more packages from the AcroTeX bundle. In particular since the last post the following packages have been added: aeb-mobile, aeb-tilebg, aebenvelope, cjk-gs-integrate-macos, comicsans, datepicker-pro, digicap-pro, dps, eq-save, fetchbibpes, japanese-otf-nonfree, japanese-otf-uptex-nonfree, mkstmpdad, opacity-pro, ptex-fontmaps-macos, qrcstamps.

Here I want to thank Jürgen Gilg for reminding me consistently of updates I have missed, big thanks!

To recall what TLContrib is for: it collects packages that are not distributed inside TeX Live proper for one or another of the following reasons:

  • because it is not free software according to the FSF guidelines;
  • because it is an executable update;
  • because it is not available on CTAN;
  • because it is an intermediate release for testing.

In short, anything related to TeX that can not be on TeX Live but can still legally be distributed over the Internet can have a place on TLContrib. The full list of packages can be seen here.

Please see the main page for Quickstart, History, and details about how to contribute packages.

Last but not least, while this is a service to make access to non-free packages easier for users of the TeX Live Manager, our aim is to have as many packages as possible made completely free and included in TeX Live proper!

Enjoy.

Norbert Preining https://www.preining.info/blog There and back again

Lubuntu Blog: Lubuntu Development Newsletter #11

Planet Ubuntu - Thu, 13/09/2018 - 3:13am
This is the eleventh issue of The Lubuntu Development Newsletter. You can read the last issue here.

Changes

General

This list is a bit short because we have been focusing on general, behind-the-scenes (and admittedly tedious) administration tasks, but we plan on ramping up development progress in time for the next newsletter.

Desktop Experience

We […]

digest 0.6.17

Planet Debian - Thu, 13/09/2018 - 2:06am

digest version 0.6.17 arrived on CRAN earlier today after a day of gestation in the bowels of CRAN, and should get uploaded to Debian in due course.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64 and murmur32 algorithms) permitting easy comparison of R language objects.

This release brings another robustification thanks to Radford Neal, who noticed a segfault in 32-bit mode on Sparc running Solaris. Yay for esoteric setups. But thanks to his very nice pull request, this is taken care of, and it also squashed one UBSAN error under the standard gcc setup. But two files remain with UBSAN issues; help would be welcome!

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Disappointment on the new commute

Planet Debian - Wed, 12/09/2018 - 6:53pm

Imagine my disappointment when I discovered that signs on Stanford’s campus pointing to their “Enchanted Broccoli Forest” and “Narnia”—both of which I have been passing daily on my new commute—merely indicate the location of student living groups with whimsical names.

Benjamin Mako Hill https://mako.cc/copyrighteous copyrighteous

Distributing static routes with DHCP

Planet Debian - Wed, 12/09/2018 - 10:00am

This week I had to deal with a setup in which I needed to distribute additional static network routes using DHCP.

The setup is easy but there are some caveats to take into account. Also, DHCP clients might not behave as one would expect.

The starting situation was a working DHCP client/server deployment. Some standard virtual machines would request their network setup over the network. Nothing new. The DHCP server is dnsmasq, and the daemon is running under OpenStack control, but this has nothing to do with the DHCP problem itself.

By default, it seems dnsmasq sends clients the Routers (code 3) option, which usually contains the gateway for clients in the subnet to use. My situation required distributing one additional static route for another subnet. My idea was for DHCP clients to end up with this simple routing table:

user@dhcpclient:~$ ip r
default via 10.0.0.1 dev eth0
10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.100
172.16.0.0/21 via 10.0.0.253 dev eth0    <--- extra static route

To distribute this extra static route, you only need to edit the dnsmasq config file and add a line like this:

dhcp-option=option:classless-static-route,172.16.0.0/21,10.0.0.253

For my initial tests of this config I was simply requesting a lease refresh from the DHCP client. This got my new static route installed, but in the case of a reboot, the DHCP client would not get the default route. The different behaviour is documented in dhclient-script(8).

To try something similar to a reboot situation, I had to use this command:

user@dhcpclient:~$ sudo ifup --force eth0
Internet Systems Consortium DHCP Client 4.3.1
Copyright 2004-2014 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

Listening on LPF/eth0/xx:xx:xx:xx:xx:xx
Sending on   LPF/eth0/xx:xx:xx:xx:xx:xx
Sending on   Socket/fallback
DHCPREQUEST on eth0 to 255.255.255.255 port 67
DHCPACK from 10.0.0.1
RTNETLINK answers: File exists
bound to 10.0.0.100 -- renewal in 20284 seconds.

Anyway, this was really surprising at first, and led me to debug DHCP packets using dhcpdump:

  TIME: 2018-09-11 18:06:03.496
    IP: 10.0.0.1 (xx:xx:xx:xx:xx:xx) > 10.0.0.100 (xx:xx:xx:xx:xx:xx)
    OP: 2 (BOOTPREPLY)
 HTYPE: 1 (Ethernet)
  HLEN: 6
  HOPS: 0
   XID: xxxxxxxx
  SECS: 8
 FLAGS: 0
CIADDR: 0.0.0.0
YIADDR: 10.0.0.100
SIADDR: xx.xx.xx.x
GIADDR: 0.0.0.0
CHADDR: xx:xx:xx:xx:xx:xx:00:00:00:00:00:00:00:00:00:00
OPTION:  53 (  1) DHCP message type         2 (DHCPOFFER)
OPTION:  54 (  4) Server identifier         10.0.0.1
OPTION:  51 (  4) IP address leasetime      43200 (12h)
OPTION:  58 (  4) T1                        21600 (6h)
OPTION:  59 (  4) T2                        37800 (10h30m)
OPTION:   1 (  4) Subnet mask               255.255.255.0
OPTION:  28 (  4) Broadcast address         10.0.0.255
OPTION:  15 ( 13) Domainname                xxxxxxxx
OPTION:  12 ( 21) Host name                 xxxxxxxx
OPTION:   3 (  4) Routers                   10.0.0.1
OPTION: 121 (  8) Classless Static Route    xxxxxxxxxxxxxx .....D..
[...]
---------------------------------------------------------------------------

(you can use this handy command on both the server and the client side)

So, the DHCP server was sending both the Routers (code 3) and the Classless Static Route (code 121) options to the clients. Why, then, would the client fail to install both routes?

I obtained some help from folks on IRC and they pointed me towards RFC3442:

DHCP Client Behavior [...] If the DHCP server returns both a Classless Static Routes option and a Router option, the DHCP client MUST ignore the Router option.

So, clients are supposed to ignore the Routers (code 3) option if they get an additional static route. This is very counter-intuitive, but it can easily be worked around by just distributing the default gateway route as another classless static route:

dhcp-option=option:classless-static-route,0.0.0.0/0,10.0.0.1,172.16.0.0/21,10.0.0.253
# ^^ default route (0.0.0.0/0 via 10.0.0.1)  ^^ extra static route (172.16.0.0/21 via 10.0.0.253)
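
As an aside, this is also why the Classless Static Route option showed up as 8 bytes in the dhcpdump output above: RFC 3442 packs each route as the prefix length, only the significant octets of the destination, and then the router. A small Python sketch decoding the two-route payload that the dnsmasq line above produces (my own illustration; the actual bytes in the dump are masked, so they are reconstructed here from the configuration):

import ipaddress

def decode_classless_routes(data):
    """Decode an RFC 3442 Classless Static Route (option 121) payload."""
    routes, i = [], 0
    while i < len(data):
        width = data[i]                          # prefix length in bits
        nbytes = (width + 7) // 8                # significant destination octets
        dest = bytes(data[i + 1:i + 1 + nbytes]) + bytes(4 - nbytes)
        router = bytes(data[i + 1 + nbytes:i + 5 + nbytes])
        routes.append((f"{ipaddress.IPv4Address(dest)}/{width}",
                       str(ipaddress.IPv4Address(router))))
        i += 1 + nbytes + 4
    return routes

# 0.0.0.0/0 via 10.0.0.1 plus 172.16.0.0/21 via 10.0.0.253, as configured above
payload = bytes([0, 10, 0, 0, 1, 21, 172, 16, 0, 10, 0, 0, 253])
print(decode_classless_routes(payload))
# [('0.0.0.0/0', '10.0.0.1'), ('172.16.0.0/21', '10.0.0.253')]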

Obviously this was my first time in my career dealing with this setup and situation. My conclusion is that even old-enough protocols like DHCP can sometimes behave in a counter-intuitive way. Reading RFCs is not always fun, but it can help you understand what’s going on.

You can read the original issue in Wikimedia Foundation’s Phabricator ticket T202636, including all the back-and-forth work I did. Yes, it is open to the public ;-)

Arturo Borrero González http://ral-arturo.org/ ral-arturo.org

o-tour 2018 (Halbmarathon)

Planet Debian - Wed, 12/09/2018 - 8:51am

My first race redo at the same distance/ascent meters. Let’s see how it went… 45.2km, 1’773m altitude gain (officially: 45km, 1’800m). This was the Halbmarathon distance, compared to the full Marathon one, which is 86km/3’000m.

Pre-race

I registered for this race right after my previous one, and despite it having much more altitude meters, I was looking forward to it.

That is, until the week of the race. The entire week was just off. Work life, personal life, everything seemed out of sync. Including a half-sleepless night on Wednesday, which ruined my sleep schedule for the rest of the week and also my plans for the light maintenance rides before the event. And which also made me feel half-sick due to lack of sleep.

I prepared for my ride on Saturday (bike check, tyre pressure check, load bike on car), and I went to bed—late, again difficult to fall asleep—not being sure I’ll actually go to the race. I had a difficult night sleep, but actually I managed to wake up on the alarm. OK, chances looking somewhat better for getting to the race. Total sleep: 5 hours. Ouch!

So I get in the car—about 15 minutes later than planned—and start, only to find a road closure on the most direct route to the highway, and police directing the traffic—at around 07:10—onto the “new” route, resulting in yet another detour, and me getting stressed enough about which way to go, and not paying attention to my exact speed on a downhill, that I got flashed by a speed camera. Sigh…

The rest of the drive was uneventful, I reach Alpnach, I park, I get to the start/finish location, get my number, and finally get to the start line with two minutes (!!) to spare. The most “just-in-time” I ever was at a race, as I’m usually way early. By this time I was even in a later starting block since mine was already setup and would have been difficult to reach.

Oh, and because I was so late, and because this is smaller race (number of participants, setup, etc.), I didn’t find a place to fill my water bottle. And this, for the one time I didn’t fill it in advance. Fun!

The race

So given all this, I set low expectations for the race, and decided to consider it a simple Sunday ride. Will take it easy on the initial 12.5km, 1’150m climb, and then will see how it goes. There was a food station two thirds in the climb, so I said I’ll hopefully not get too dehydrated until then.

The climb starts relaxed (I was among the last people starting), and 15 minutes in, my friend the lower back says “ah, you’re climbing again, remember I’m here too”, which was way too early. So, I said to myself, nothing to lose, let’s just switch to standing every time my back gets tired, and stand until my legs get tired, then switch again.

The climb here was on pavement, so standing was pretty easy. And, to my surprise, this worked quite well: while standing I also went much faster (by much, I mean probably ~2-3km/h) than sitting so I was advancing in the long stretch of people going up the mountain, and my back was very relieved every time I switched.

So, up and down and up and down in the saddle, and up and up and up on the mountain, until I get to the food station. Water! I quickly drink an entire bottle (750ml!!), refill, and go on.

After the food station, the route changed to gravel, and this made pedalling while standing more difficult, due to less grip and slipping if you’re not careful. I tried the sit/stand/sit routine, but it was getting more difficult, so I went on, slower, until at one point I had to stop. I was by now in the sun, hot, and tired. And annoyed at the low flow out of the water bottle, so I opened it, drank from it just like from a glass, and emptied it quickly - yet again! I felt much better, and restarted pedalling, eager to get to the top.

The last part of the climb is quite steep and more or less on a trail, so here I was pushing the bike, but since I didn’t have any goals did not feel guilty about it. Up and up, and finally I reach the top (altitude: 1’633m, elevation gained: 1’148m out of ~1’800m), and I can breathe easier knowing that the most difficult part is over.

From here, it was finally a good race. The o-tour route is much more beautiful than I remembered, but also more technically difficult, to the point of being quite annoying: it runs for long stretches on very uneven artificial paths, like if someone built a paved road, but the goal was to have the most uneven surface, all rocks being at an angle, instead of aiming for an even surface. For hikers this is excellent, especially in wet conditions, but for trying to move a bike forward, or even more, forward uphill, is annoying. There were stretches of ~5% grade that I was pushing the bike, due to how annoying biking on that surface was.

The route also has nice single track sections, some easily navigable, some not, at least for me, and some that I had to carry the bike. Or even carry the bike on my shoulder while I was climbing over roots. A very nice thing, and sadly uncommon in this series of races.

One other fun aspect of the race was the mud. Especially in the forests, there was enough water left on the tracks that one got splashed quite often, and outside (where the soil doesn’t have the support of the roots), less water but quite deep mud. Deep enough that at one point, I misjudged how deep the roughly 3-meter-long muddy section was, and I had enough speed that my front wheel got stuck in mud, and slowly (and I imagine gracefully as well :P, of course) I went over the handlebars into the softest mud I ever landed in. Landed, as in halfway up my elbows (!), hands full of mud, gloves muddy as hell, legs down to the ankle in mud so shoes also muddy, and me finding the situation the funniest moment of the race. The guy behind me asked if everything was alright, and I almost couldn’t answer due to laughing out loud.

Back to serious stuff now. The rest of the “meters of climbing left”, about 600+ meters, were supposed to be distributed in about 4 sections, all about the same profile except the first one which was supposed to be a bit longer and flatter. At least, that’s what the official map was showing, in a somewhat stylised way. And that’s what I based my effort dosage on.

Of course, real life is not stylised, and there were 2 small climbs (as expected), and then a long and slow climb (definitely unexpected). I managed to stay on the bike, but the unexpected long climb—two kilometres—exhausted my reserves, despite being a relatively small grade (~5%, gained ~100m). I really was not planning for it, and I paid for that. Then a bit of downhill, then another short climb, another downhill—on road, 60km/h!—and then another medium-sized climb: 1km long, gaining 60m. Then a slow and long descent, a 700m/50m climb, a descent, and another climb, short but more difficult: 900m/80m (~9%). By this time, I was spent, and was really looking forward to the final descent, which I remembered was half pavement, half very nice single-track. And indeed it was superb, after all that climbing. Yay!

And then, reaching basically the end of the race (a few kilometres left), I remembered something else: this race has a climb at the end! This is where the missing climbing meters were hiding!

So, after eight kilometres of fun, 1.5km of easy climbing to gain 80m of ascent. Really trivial, a regular commute almost, but for me at this stage, it was painful and the most annoying thing ever…

And then, reaching the final two kilometres of light descent on paved road, and finishing the race. Yay!

Overall, given the way the week went, this race was much easier than I hoped, and quite enjoyable. Why? No idea. I’ll just take the bonus points and not complain ☺

Real champions

After about two minutes of me finishing, I hear the speaker saying that the second placed woman in the long distance was nearing, and that it was Esther Süss! I’ve never seen her in person as far as I know, nor any of the other leaders in these races, since usually the end times are much apart. In this case, I apparently finished between the first and second places in the women’s race (there was 3m05s difference between them). This also explained what all those photographers with telephotos at the finish line were waiting for, and why they didn’t take my picture :)))))) In any case, I was very happy to see her in person, since I’m very impressed that at 44 years old, she’s still competing and most of the time winning against other women, 10 or even 20 years younger than her. Gives a bit of hope for older people like me. Of course minus being on the thinner side (unlike me), and actually liking long climbs (unlike me), and winning (definitely unlike me). Not even bringing up the world championships gold medals, OK?

Race analysis

Hydration, hydration…

As I mentioned above, I drank a lot at the beginning of the race. I continued to drink, and by 2 hours I was 3 full bottles in, at 2:40 I finished the fourth bottle.

Four bottles is 3 litres of liquid, which is way more than my usual consumption since I stopped carrying my hydration pack. In the Eiger bike challenge, done in much hotter temperatures and for longer, I think I drank about the same or only slightly more (not sure exactly). Back then: 19° average, 33° max, over 6½ hours; this time: 16.2° average, 20° max, over ~4 hours. And this time, with 4L in 4 hours, I didn’t need to run to the bathroom as I finished (at all).

The only conclusion I can make is that I sweat much more than I think, and that I must more actively drink water. I don’t want to go back to hydration pack in a race (definitely yes for more relaxed rides), so I need to use all the food stops to drink and refill.

General fitness vs. leg muscles

I know my core is weak, but it’s getting hilarious that 15 minutes into the climbing, I start getting signals. This does not happen on the flat or indoors for at least 2-2½ hours, so the conclusion is that I need to get fitter (core) and also do more real outdoor climbing training—just long (slower) climbs.

The sit-stand-sit routine was very useful, but it did result in even my hands getting tired from having to move and stabilise the bike. So again, need to get fitter overall and do more cross-training.

That is, as if I didn’t know it already ☹

Numbers

This is now beyond subjective; let’s see what the numbers look like:

  • 2016:
    • time: overall 3h49m34.4s, start-Langis 2h44m31s, Langis-finish: 1h05m02s.
    • age category: overall 70/77, start-Langis ranking: 70, Langis-finish: 72.
    • overall gender ranking: overall 251/282, start-Langis: 250, Langis-finish: 255.
  • 2018:
    • time: 3h53m43.4s, start-Langis: 2h50m11s, Langis-finish: 1h03m31s.
    • age category 70/84, start-Langis: 71, Langis-finish: 70.
    • overall gender ranking: overall 191/220, start-Langis: 191, Langis-finish: 189.

The first conclusion is that I’ve done infinitesimally better in the overall rankings: 252/282=0.893, 191/220=0.868, so better but only trivially so, especially given the large decline in participants on the short distance (the long one had the same). I cannot compare age category, because ☺

The second interesting titbit is that in 2016, I was relatively faster on the climb plus first part of the high-altitude route, and relatively slower on the second half plus descent, both in the age category and the overall category. In 2018, this reversed, and I gained places on the descent. Time comparison, ~6 minutes slower in the first half, 1m30s faster on the second one.

But I find my numbers so close that I’m surprised I neither significantly improved nor slowed down in two years. Yes, I’m not consistently training, but still… I kind of expect some larger difference, one way or another. Strava also says that I beat my 2016 numbers on 7 segments, but only got second place to that on 14 others, so again a wash.

So, weight gain aside, it seems nothing much has changed. I need to improve my consistency in training 10× probably to see a real difference. On the other hand, maybe this result is quite good, given my much less consistent training than in 2016 — ¯\_(ツ)_/¯.

Equipment-wise, I had a different bike now (full suspension vs. hardtail), and—compared to the previous race two weeks ago, at least—I had the tyre pressure quite well dialled in for this event. So I was able to go fast, and indeed overtake a couple of people on the flat/light descents, and more importantly, was not overtaken by other people on the long descent. My brakes were much better as well, so I was a bit more confident, but the front brake started squeaking again when it got hot, so I need to improve this even more. But again, not even the changed equipment made much of a difference ☺

I’ll finish here with an image of my “heroic efforts”:

Not very proud of this…

I’m very surprised that they put a photographer at the top of a climb, except maybe to motivate people to pedal up the next year… I’ll try to remember this ☺

Iustin Pop https://k1024.org iustin - all posts

TensorFlow on Debian/sid (including Keras via R)

Planet Debian - Wed, 12/09/2018 - 3:04am

I have been struggling with getting TensorFlow running on Debian/sid for quite some time. The main problem is that the CUDA libraries installed by Debian are CUDA 9.1 based, and the precompiled pip installable TensorFlow packages require CUDA 9.0, which resulted in an unusable installation. But finally I got around to it and found all the pieces.

Step 1: Install CUDA 9.0

The best way I found was going to the CUDA download page, selecting Linux, then x86_64, then Ubuntu, then 17.04, and finally deb (network). In the text that appears, click on the download button to obtain (currently) cuda-repo-ubuntu1704_9.0.176-1_amd64.deb.

After installing this package as root with

dpkg -i cuda-repo-ubuntu1704_9.0.176-1_amd64.deb

the nvidia repository signing key needs to be added

apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1704/x86_64/7fa2af80.pub

and finally install the CUDA 9.0 libraries (not all of cuda-9-0 because this would create problems with the normally installed nvidia libraries):

apt-get update
apt-get install cuda-libraries-9-0

This will install lots of libs into /usr/local/cuda-9.0 and add the respective directory to the ld.so path by creating a file /etc/ld.so.conf.d/cuda-9-0.conf.

Step 2: Install CUDA 9.0 CuDNN

One difficult-to-satisfy dependency is the CuDNN libraries. In our case we need the version 7 library for CUDA 9.0. To download these files one needs to have an NVIDIA developer account, which is quick and painless to set up. After that, go to the CuDNN page where one needs to select Download for CUDA 9.0 and then cuDNN v7.2.1 Runtime Library for Ubuntu 16.04 (Deb).

This will download a file libcudnn7_7.2.1.38-1+cuda9.0_amd64.deb which needs to be installed with dpkg -i libcudnn7_7.2.1.38-1+cuda9.0_amd64.deb.

Step 3: Install Tensorflow for GPU

This is the easiest one and can be done as explained on the TensorFlow installation page using

pip3 install --upgrade tensorflow-gpu

This will install several other dependencies, too.

Step 4: Check that everything works

Last but not least, make sure that TensorFlow can be loaded and find your GPU. This can be done with the following one-liner, and in my case gives the following output:

$ python3 -c "import tensorflow as tf; sess = tf.Session() ; print(tf.__version__)"
2018-09-11 16:30:27.075339: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-09-11 16:30:27.143265: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:897] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-09-11 16:30:27.143671: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1405] Found device 0 with properties:
name: GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.4175
pciBusID: 0000:01:00.0
totalMemory: 3.94GiB freeMemory: 3.85GiB
2018-09-11 16:30:27.143702: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1484] Adding visible gpu devices: 0
2018-09-11 16:30:27.316389: I tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-09-11 16:30:27.316432: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971]      0
2018-09-11 16:30:27.316439: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 0:   N
2018-09-11 16:30:27.316595: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3578 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
1.10.1
$

Addendum: Keras and R

With the above settled, the installation of Keras can be done via

apt-get install python3-keras

and this should pick up the TensorFlow backend automatically.
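
A quick sanity check from Python that the pieces fit together (my own suggestion, not from the original post; it assumes the python3-keras and tensorflow-gpu installations from the previous steps):

import keras                      # from the python3-keras package
import tensorflow as tf

print(keras.__version__)
print(keras.backend.backend())    # expected to print "tensorflow"

# Same GPU visibility check as the one-liner above, via a TF session
sess = tf.Session()
print(sess.list_devices())        # the GPU device should appear in this list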

For R there is a Keras library that can be installed with

install.packages('keras')

on the R command line (as root).

After that running a simple MNIST code example should use your GPU from R (taken from Deep Learning with R from Manning Publications):

library(keras)

mnist <- dataset_mnist()
train_images <- mnist$train$x
train_labels <- mnist$train$y
test_images <- mnist$test$x
test_labels <- mnist$test$y

network <- keras_model_sequential() %>%
  layer_dense(units = 512, activation = "relu", input_shape = c(28 * 28)) %>%
  layer_dense(units = 10, activation = "softmax")

network %>% compile(
  optimizer = "rmsprop",
  loss = "categorical_crossentropy",
  metrics = c("accuracy")
)

train_images <- array_reshape(train_images, c(60000, 28 * 28))
train_images <- train_images / 255
test_images <- array_reshape(test_images, c(10000, 28 * 28))
test_images <- test_images / 255

train_labels <- to_categorical(train_labels)
test_labels <- to_categorical(test_labels)

network %>% fit(train_images, train_labels, epochs = 5, batch_size = 128)

metrics <- network %>% evaluate(test_images, test_labels)
metrics

Norbert Preining https://www.preining.info/blog There and back again

PSA: the.earth.li ceasing Debian mirror service

Planet Debian - Tue, 11/09/2018 - 9:22pm

This is a public service announcement that the.earth.li (the machine that hosts this blog) will cease service as a Debian mirror on 1st February 2019 at the latest.

It has already been removed from the official list of Debian mirrors. Please update your sources.list to point to an alternative sooner rather than later.

The removal has been driven by a number of factors:

  • This mirror was originally set up when I was running Black Cat Networks, and a local mirror was generally useful to us. It’s 11+ years since Black Cat was sold, and 7+ since it moved away from that network.
  • the.earth.li currently lives with Bytemark, who already have an official secondary mirror. It does not add any useful resilience to the mirror network.
  • For a long time I’ve been unable to mirror all release architectures due to disk space limitations; I think such mirrors are of limited usefulness unless located in locations with dubious connectivity to alternative full mirrors.
  • Bytemark have been acquired by IOMart and I’m uncertain as to whether my machine will remain there long term - the acquisition announcement focuses on their cloud service rather than mentioning physical server provision. Disk space requirements are one of my major costs and the Debian mirror makes up ⅔ of my current disk usage. Dropping it will make moving host easier for me, should it prove necessary.

I can’t find an exact record of when I started running a mirror, but it was certainly before April 2005. 13 years doesn’t seem like a bad length of time to have been providing the service. Personally I’ve moved to deb.debian.org, but if the network location of the mirror is the reason you chose it then mirror.bytemark.co.uk should be a good option.

Jonathan McDowell https://www.earth.li/~noodles/blog/ Noodles' Emptiness

Thinkpad X1 Carbon Gen 6

Planet Debian - Tue, 11/09/2018 - 12:33pm

In February I reviewed a Thinkpad X1 Carbon Gen 1 [1] that I bought on Ebay.

I have just been supplied the 6th Generation of the Thinkpad X1 Carbon for work, which would have cost about $1500 more than I want to pay for my own gear. ;)

The first thing to note is that it has USB-C for charging. The charger continues the trend towards smaller and lighter chargers and also allows me to charge my phone from the same charger so it’s one less charger to carry. The X1 Carbon comes with a 65W charger, but when I got a second charger it was only 45W but was also smaller and lighter.

The laptop itself is also slightly smaller in every dimension than my Gen 1 version as well as being noticeably lighter.

One thing I noticed is that the KDE power applet disappears when the battery is full – maybe due to my history of buying refurbished laptops I haven’t had a battery report itself as full before.

Disabling the touch pad in the BIOS doesn’t work. This is annoying; there are 2 devices for mouse-type input, so I need to configure Xorg to only read from the Trackpoint.

The labels on the lid are upside down from the perspective of the person using it (but right way up for people sitting opposite them). This looks nice for observers, but means that you tend to put your laptop the wrong way around on your desk a lot before you get used to it. It is also fancier than the older model, the red LED on the cover for the dot in the I in Thinkpad is one of the minor fancy features.

As the new case is thinner than the old one (which was thin compared to most other laptops) it’s difficult to open. You can’t easily get your fingers under the lid to lift it up.

One really annoying design choice was to have a proprietary Ethernet socket with a special dongle. If the dongle is lost or damaged it will probably be expensive to replace. An extra USB socket and a USB Ethernet device would be much more useful.

The next deficiency is that it has one USB-C/DisplayPort/Thunderbolt port and 2 USB 3.1 ports. USB-C is going to be used for everything in the near future and a laptop with only a single USB-C port will be as annoying then as one with a single USB 2/3 port would be right now. Making a small laptop requires some engineering trade-offs and I can understand them limiting the number of USB 3.1 ports to save space. But having two or more USB-C ports wouldn’t have taken much space – it would take no extra space to have a USB-C port in place of the proprietary Ethernet port. It also has only a HDMI port for display, the USB-C/Thunderbolt/DisplayPort port is likely to be used for some USB-C device when you want an external display. The Lenovo advertising says “So you get Thunderbolt, USB-C, and DisplayPort all rolled into one”, but really you get “a choice of one of Thunderbolt, USB-C, or DisplayPort at any time”. How annoying would it be to disconnect your monitor because you want to read a USB-C storage device?

As an aside this might work out OK if you can have a DisplayPort monitor that also acts as a USB-C hub on the same cable. But if so requiring a monitor that isn’t even on sale now to make my laptop work properly isn’t a good strategy.

One problem I have is that resuming from suspend requires holding down the power button. I’m not sure if it’s a hardware or software issue. But suspend on lid close works correctly, as does suspend on inactivity when running on battery power. The X1 Carbon Gen 1 that I own doesn’t suspend on lid close or inactivity (due to a Linux configuration issue). So I have one laptop that won’t suspend correctly and one that won’t resume correctly.

The CPU is an i5-8250U which rates 7,678 according to cpubenchmark.net [2]. That’s 92% faster than the i7 in my personal Thinkpad, and more importantly I’m likely to actually get that performance without having the CPU overheat and slow down; that said, I got a thermal warning during the Debian install process, which is a bad sign. It’s also only 114% faster than the CPU in the Thinkpad T420 I bought in 2013. The model I got doesn’t have the fastest possible CPU, but I think that the T420 didn’t either. A 114% increase in CPU speed over 5 years is a long way from the factor of 4 or more that Moore’s law would have predicted.

The keyboard has the stupid positions for the PgUp and PgDn keys I noted on my last review. It’s still annoying and slows me down, but I am starting to get used to it.

The display is FullHD, it’s nice to have a laptop with the same resolution as my phone. It also has a slider to cover the built in camera which MIGHT also cause the microphone to be disconnected. It’s nice that hardware manufacturers are noticing that some customers care about privacy.

The storage is NVMe. That’s a nice feature, although being only 240G may be a problem for some uses.

Conclusion

Definitely a nice laptop if someone else is paying.

The fact that it had cooling issues from the first install is a concern. Laptops have always had problems with cooling and when a laptop has cooling problems before getting any dust inside it’s probably going to perform poorly in a few years.

Lenovo has gone too far trying to make it thin and light. I’d rather have the same laptop but slightly thicker, with a built-in Ethernet port, more USB ports, and a larger battery.

Related posts:

  1. More About the Thinkpad X301 Last month I blogged about the Thinkpad X301 I got...
  2. Thinkpad T420 I’ve owned a Thinkpad T61 since February 2010 [1]. In...
  3. Thinkpad X1 Carbon I just bought a Thinkpad X1 Carbon to replace my...
etbe https://etbe.coker.com.au etbe – Russell Coker

AsioHeaders 1.12.1-1

Planet Debian - Tue, 11/09/2018 - 3:21am

A first update to the AsioHeaders package arrived on CRAN today. Asio provides a cross-platform C++ library for network and low-level I/O programming. It is also included in Boost – but requires linking when used as part of Boost. This standalone version of Asio is a header-only C++ library which can be used without linking (just like our BH package with parts of Boost).

This release is the first following the initial upload of version 1.11.0-1 in 2015. I had noticed the updated 1.12.1 version a few days ago, and then Joe Cheng surprised me with a squeaky clean PR as he needed it to get RStudio’s websocket package working with OpenSSL 1.1.0.

I actually bumbled up the release a little bit this morning, uploading 1.12.1 first and then 1.12.1-1, as we like having a packaging revision. Old habits die hard. So technically there are now two uploads on CRAN, but we may clean that up and remove the 1.12.1 release from the archive, as 1.12.1-1 is identical but for two bytes in DESCRIPTION.

The NEWS entry follows; it really is just the header update done by Joe plus some Travis maintenance.

Changes in version 1.12.1-1 (2018-09-10)
  • Upgraded to Asio 1.12.1 (Joe Cheng in #2)

  • Updated Travis CI support via newer run.sh

Via CRANberries, there is a diffstat report relative to the previous release, as well as this time also one between the version-corrected upload and the main one.

Comments and suggestions about AsioHeaders are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

The Commons Clause doesn't help the commons

Planet Debian - Tue, 11/09/2018 - 1:26am
The Commons Clause was announced recently, along with several projects moving portions of their codebase under it. It's an additional restriction intended to be applied to existing open source licenses with the effect of preventing the work from being sold[1], where the definition of being sold includes being used as a component of an online pay-for service. As described in the FAQ, this changes the effective license of the work from an open source license to a source-available license. However, the site doesn't go into a great deal of detail as to why you'd want to do that.

Fortunately one of the VCs behind this move wrote an opinion article that goes into more detail. The central argument is that Amazon make use of a great deal of open source software and integrate it into commercial products that are incredibly lucrative, but give little back to the community in return. By adopting the commons clause, Amazon will be forced to negotiate with the projects before being able to use covered versions of the software. This will, apparently, prevent behaviour that is not conducive to sustainable open-source communities.

But this is where things get somewhat confusing. The author continues:

Our view is that open-source software was never intended for cloud infrastructure companies to take and sell. That is not the original ethos of open source.

which is a pretty astonishingly unsupported argument. Open source code has been incorporated into proprietary applications without giving back to the originating community since before the term open source even existed. MIT-licensed X11 became part of not only multiple Unixes, but also a variety of proprietary commercial products for non-Unix platforms. Large portions of BSD ended up in a whole range of proprietary operating systems (including older versions of Windows). The only argument in favour of this assertion is that cloud infrastructure companies didn't exist at that point in time, so they weren't taken into consideration[2] - but no argument is made as to why cloud infrastructure companies are fundamentally different to proprietary operating system companies in this respect. Both took open source code, incorporated it into other products and sold them on without (in most cases) giving anything back.

There's one counter-argument. When companies sold products based on open source code, they distributed it. Copyleft licenses like the GPL trigger on distribution, and as a result selling products based on copyleft code meant that the community would gain access to any modifications the vendor had made - improvements could be incorporated back into the original work, and everyone benefited. Incorporating open source code into a cloud product generally doesn't count as distribution, and so the source code disclosure requirements don't trigger. So perhaps that's the distinction being made?

Well, no. The GNU Affero GPL has a clause that covers this case - if you provide a network service based on AGPLed code then you must provide the source code in a similar way to if you distributed it under a more traditional copyleft license. But the article's author goes on to say:

AGPL makes it inconvenient but does not prevent cloud infrastructure providers from engaging in the abusive behavior described above. It simply says that they must release any modifications they make while engaging in such behavior.

IE, the problem isn't that cloud providers aren't giving back code, it's that they're using the code without contributing financially. There's no difference between what cloud providers are doing now and what proprietary operating system vendors were doing 30 years ago. The argument that "open source" was never intended to permit this sort of behaviour is simply untrue. The use of permissive licenses has always allowed large companies to benefit disproportionately when compared to the authors of said code. There's nothing new to see here.

But that doesn't mean that the status quo is good - the argument for why the commons clause is required may be specious, but that doesn't mean it's bad. We've seen multiple cases of open source projects struggling to obtain the resources required to make a project sustainable, even as many large companies make significant amounts of money off that work. Does the commons clause help us here?

As hinted at in the title, the answer's no. The commons clause attempts to change the power dynamic of the author/user role, but it does so in a way that's fundamentally tied to a business model and in a way that prevents many of the things that make open source software interesting to begin with. Let's talk about some problems.

The power dynamic still doesn't favour contributors

The commons clause only really works if there's a single copyright holder - if not, selling the code requires you to get permission from multiple people. But the clause does nothing to guarantee that the people who actually write the code benefit, merely that whoever holds the copyright does. If I rewrite a large part of a covered work and that code is merged (presumably after I've signed a CLA that assigns a copyright grant to the project owners), I have no power in any negotiations with any cloud providers. There's no guarantee that the project stewards will choose to reward me in any way. I contribute to them but get nothing back in return - instead, my improved code allows the project owners to charge more and provide stronger returns for the VCs. The inequity has shifted, but individual contributors still lose out.

It discourages use of covered projects

One of the benefits of being able to use open source software is that you don't need to fill out purchase orders or start commercial negotiations before you're able to deploy. Turns out the project doesn't actually fill your needs? Revert it, and all you've lost is some development time. Adding additional barriers is going to reduce uptake of covered projects, and that does nothing to benefit the contributors.

You can no longer meaningfully fork a project

One of the strengths of open source projects is that if the original project stewards turn out to violate the trust of their community, someone can fork it and provide a reasonable alternative. But if the project is released with the commons clause, it's impossible to sell any forked versions - anyone who wishes to do so would still need the permission of the original copyright holder, and they can refuse that in order to prevent a fork from gaining any significant uptake.

It doesn't inherently benefit the commons

The entire argument here is that the cloud providers are exploiting the commons, and by forcing them to pay for a license that allows them to make use of that software the commons will benefit. But there's no obvious link between these things. Maybe extra money will result in more development work being done and the commons benefiting, but maybe extra money will instead just result in greater payout to shareholders. Forcing cloud providers to release their modifications to the wider world would be of benefit to the commons, but this is explicitly ruled out as a goal. The clause isn't inherently incompatible with this - the negotiations between a vendor and a project to obtain a license to be permitted to sell the code could include a commitment to provide patches rather than money, for instance, but the focus on money makes it clear that this wasn't the authors' priority.

What we're left with is a license condition that does nothing to benefit individual contributors or other users, and costs us the opportunity to fork projects in response to disagreements over design decisions or governance. What it does is ensure that a range of VC-backed projects are in a better position to improve their returns, without any guarantee that the commons will be left better off. It's an attempt to solve a problem that's existed since before the term "open source" was even coined, by simply layering on a business model that's also existed since before the term "open source" was even coined[3]. It's not anything new, and open source derives from an explicit rejection of this sort of business model.

That's not to say we're in a good place at the moment. It's clear that there is a giant level of power disparity between many projects and the consumers of those projects. But we're not going to fix that by simply discarding many of the benefits of open source and going back to an older way of doing things. Companies like Tidelift[4] are trying to identify ways of making this sustainable without losing the things that make open source a better way of doing software development in the first place, and that's what we should be focusing on rather than just admitting defeat to satisfy a small number of VC-backed firms that have otherwise failed to develop a sustainable business model.

[1] It is unclear how this interacts with licenses that include clauses that assert you can remove any additional restrictions that have been applied
[2] Although companies like Hotmail were making money from running open source software before the open source definition existed, so this still seems like a reach
[3] "Source available" predates my existence, let alone any existing open source licenses
[4] Disclosure: I know several people involved in Tidelift, but have no financial involvement in the company

Matthew Garrett (https://mjg59.dreamwidth.org/)

Reproducible Builds: Weekly report #176

Planet Debian - Mon, 10/09/2018 - 7:12pm

Here’s what happened in the Reproducible Builds effort between Sunday September 2 and Saturday September 8 2018:

Patches filed

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible Builds folks (https://reproducible-builds.org/blog/)

Sebastian Dröge: GStreamer Rust bindings 0.12 and GStreamer Plugin 0.3 release

Planet Ubuntu - Mon, 10/09/2018 - 1:41pm

After almost 6 months, a new release of the GStreamer Rust bindings and the GStreamer plugin writing infrastructure for Rust is out. As usual, this coincided with the release of all the gtk-rs crates, so the bindings can make use of all the new features they contain.

Thanks to all the contributors of both gtk-rs and the GStreamer bindings for all the nice changes that happened over the last 6 months!

And as usual, if you find any bugs please report them and if you have any questions let me know.

GStreamer Bindings

For the full changelog check here.

Most changes this time were internal, especially because many user-facing changes (like Debug impls for various types) had already been backported to the minor releases in the 0.11 release series.

WebRTC

The biggest change this time is probably the inclusion of bindings for the GStreamer WebRTC library.

This allows building all kinds of WebRTC applications outside the browser (or providing a WebRTC implementation for a browser). While not as full-featured as Google’s own implementation, it interoperates well with the various browsers and generally works much better on embedded devices.

A small example application in Rust is available here.

Serde

Optionally, implementations of serde’s Serialize and Deserialize traits can be enabled for various fundamental GStreamer types, including caps, buffers, events, messages and tag lists. This allows serializing them into any format that serde can handle (and there are many!), and deserializing them back into the corresponding Rust types.
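
As a rough illustration of what such a round-trip can look like, here is a minimal sketch, assuming the bindings were built with their serde support feature enabled and that serde_json is used as the target format (the exact feature name should be checked against the release documentation):

extern crate gstreamer as gst;
extern crate serde_json;

fn main() {
    // GStreamer needs to be initialized before caps can be created.
    gst::init().unwrap();

    // Build a simple caps structure...
    let caps = gst::Caps::new_simple(
        "video/x-raw",
        &[("format", &"BGRA"), ("width", &1920i32), ("height", &1080i32)],
    );

    // ...serialize it to JSON via serde...
    let json = serde_json::to_string(&caps).expect("failed to serialize caps");
    println!("{}", json);

    // ...and deserialize it back into a gst::Caps again.
    let restored: gst::Caps = serde_json::from_str(&json).expect("failed to deserialize caps");
    println!("{:?}", restored);
}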

Generic Tag API

Previously, only a strongly-typed tag API was exposed, which made it impossible to use the wrong data type for a specific tag; e.g. code that tries to store a string for the track number or an integer for the title simply would not compile:

let mut tags = gst::TagList::new();
{
    let tags = tags.get_mut().unwrap();
    tags.add::<Title>(&"some title", gst::TagMergeMode::Append);
    tags.add::<TrackNumber>(&12, gst::TagMergeMode::Append);
}

While this is convenient, it made it rather complicated to work with tag lists if you only wanted to handle them in a generic way, for example by iterating over the list and simply checking what kinds of tags are available. To solve that, a new generic API was added in addition. It works on glib::Values, which can store any kind of type, and using the wrong type for a specific tag causes an error at runtime instead of at compile time.

let mut tags = gst::TagList::new();
{
    let tags = tags.get_mut().unwrap();
    tags.add_generic(&gst::tags::TAG_TITLE, &"some title", gst::TagMergeMode::Append)
        .expect("wrong type for title tag");
    tags.add_generic(&gst::tags::TAG_TRACK_NUMBER, &12, gst::TagMergeMode::Append)
        .expect("wrong type for track number tag");
}
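
Reading tags back in the same generic fashion is then a matter of iterating over the list. The following is only a rough sketch of the idea: the iter_generic() method name and the shape of its items are assumptions made for illustration, so the exact iterator API should be looked up in the release documentation.

// Hypothetical sketch: walking a tag list without knowing the tag types in
// advance. iter_generic() is an assumed method name, not confirmed API.
for (name, value) in tags.iter_generic() {
    // `value` is a type-erased glib value, so print its debug representation
    // instead of converting it to a concrete Rust type.
    println!("{}: {:?}", name, value);
}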

This also greatly simplified the serde serialization/deserialization for tag lists.

GStreamer Plugins

For the full changelog check here.

gobject-subclass

The main change this time is that all the generic GObject subclassing infrastructure was moved out of the gst-plugin crate and into its own gobject-subclass crate as part of the gtk-rs organization.

As part of this, some major refactoring has happened that allows subclassing more types while also making it simpler to add support for new ones. There are also experimental crates for adding subclassing support to gio and gtk, and a PR for autogenerating part of the code via the gir code generator.

More classes!

The other big addition this time is that it’s now possible to subclass GStreamer Pads and GhostPads, to implement the ChildProxy interface, and to subclass the Aggregator and AggregatorPad classes.

This now makes it possible to write custom mixer/muxer-style elements (or, generally, elements that have multiple sink pads) in Rust via the Aggregator base class, and to give elements custom pad types so that custom properties can be set on individual pads (e.g. to control the opacity of a single video mixer input).

There is currently no example for such an element, but I’ll add a very simple video mixer to the repository some time in the next few weeks and will also write a blog post explaining all the steps.

Sean Davis: Xubuntu Development Update September 2018

Planet Ubuntu - Sat, 08/09/2018 - 5:52am

A week later than expected, it’s the September development update! The theme for August (and early September) has been visual improvements, with a few bug fixes tossed in for good measure. Check it out! 

Xubuntu Bionic Beaver (18.04)

Bionic was pretty stable in August, with only one of our packages making it through the SRU process and into the hands of our users.

We’re currently looking to get these new releases and fixes into Bionic.

  • Catfish 1.4.6
    • This is a new release with numerous bug fixes and a much-improved thumbnailer.
  • Xfce Settings: Mouse acceleration not configurable in Xubuntu 18.04 (LP: #1758023)
    • This issue is related to libinput taking over mouse configuration, and will be resolved by building xfce4-settings with libinput support.
Cosmic Cuttlefish (18.10)

The following source package updates landed in Cosmic in August.

  • elementary-xfce 0.12-1ubuntu1
    • The elementary-xfce icon theme is now available on Debian! Previously, this theme was part of the xubuntu-icon-theme package.
  • ristretto 0.8.3-1 (Debian sync)
  • thunar-archive-plugin 0.4.0-1 (Debian sync)
  • thunar-media-tags-plugin 0.3.0-1 (Debian sync)
  • xfce4-taskmanager 1.2.1-1 (Debian sync)
  • xubuntu-artwork 18.10
    • The development wallpaper is now set for both the boot screen and the desktop.
    • The xubuntu-icon-theme is now a branding package that upgrades the elementary-xfce icon theme with the Xubuntu distributor logo.
  • xubuntu-default-settings 18.10
    • “Square icons” were enabled in all supported plugins, improving size and shape consistency on the panel.
    • Orage configuration was updated to use the “paplay” command for sounds by default (LP: #1054396)
    • The panel was updated to be 80% transparent at all times. This is a settings migration to support the new GTK+ 3 panel.
    • The Xubuntu session was updated to correctly set XDG_CURRENT_DESKTOP (LP: #1590089)
  • xubuntu-meta 2.227
    • This package release replaces the fwupdate dependency with the newly minted fwupd.
Xfce New Releases

Xfce had 3 new releases in August, featuring a variety of bug fixes and usability improvements.

Xfce Display Profiles (Preview)

Simon has been hard at work implementing a nifty new feature for the Xfce Display Settings: display profiles! This feature allows you to save and switch between various display setups, which is useful for users on the go or for presenters. This hasn’t been merged into master yet, so the designs are not final.

Spacing Improvements

I’ve spent the last week submitting patches to the Xfce core applications, panel plugins, and Thunar plugins. My goal is to improve the overall look and feel of Xfce by making its preference dialogs more consistent. A few before and after screenshots are below.

I’ve based my work on the excellent GNOME 2 HIG Window Layout documentation. Xfce has long borrowed the design philosophies from this document (to varying degrees) and is once again benefiting from the well-written work.

Shimmer Project Elementary Xfce Icon Theme

Xubuntu’s beloved icon theme has had a few significant updates in recent weeks. From build optimizations to new upstream icons, there’s a lot to unpack.

The theme has added a Makefile and build tool to convert the theme’s SVG sources to PNG. Xubuntu has included the PNG-building functionality for some time, and now it’s available for everyone. PNG-based themes are faster to load and generally crisper at various sizes. Also included in the new build options is PNG optimization: optipng is now used to optimize each of the generated files, reducing the overall file size.

Updated icons coming from upstream this month include dialog-password, selection icons, graphics icons, and manila-colored folders. The manila folders are a sharp contrast to the longtime blue, but after using them for a few days, they’re actually pretty nice.

Other Updates

Mugshot 0.4.1

I released Mugshot 0.4.1 early last month with a number of bug fixes and code quality improvements. You can check out the release notes and find downloads here.

Contributing

Ready to start giving back to your favorite open source projects? Remember that there’s something for everyone, and you can get started quickly with the Xubuntu and Xfce contributor docs. If you don’t know where to start, join us at #xubuntu-devel on Freenode… we’ll point you in the right direction.
