
Feed aggregator

o-tour 2018 (Halbmarathon)

Planet Debian - Wed, 12/09/2018 - 8:51am

My first race redo at the same distance/ascent meters. Let’s see how it went… 45.2km, 1’773m altitude gain (officially: 45km, 1’800m). This was the Halbmarathon distance, compared to the full Marathon one, which is 86km/3’000m.

Pre-race

I registered for this race right after my previous one, and despite it having many more meters of climbing, I was looking forward to it.

That is, until the week of the race. The entire week was just off. Work life, personal life, everything seemed out of sync. Including a half-sleepless night on Wednesday, which ruined my sleep schedule for the rest of the week and also my plans for the light maintenance rides before the event. And which also made me feel half-sick due to lack of sleep.

I prepared for my ride on Saturday (bike check, tyre pressure check, load bike on car), and I went to bed—late, again difficult to fall asleep—not being sure I’ll actually go to the race. I had a difficult night sleep, but actually I managed to wake up on the alarm. OK, chances looking somewhat better for getting to the race. Total sleep: 5 hours. Ouch!

So I get in the car—about 15 minutes later than planned—and start, only to find a road closure on the most direct route to the highway, and police directing the traffic—at around 07:10—onto the “new” route. That meant yet another detour, and I got stressed enough about which way to go, and paid so little attention to my exact speed on a downhill, that I got flashed by a speed camera. Sigh…

The rest of the drive was uneventful: I reach Alpnach, I park, I get to the start/finish location, get my number, and finally get to the start line with two minutes (!!) to spare. The most “just-in-time” I’ve ever been at a race, as I’m usually way early. By this time I was even in a later starting block, since mine was already set up and would have been difficult to reach.

Oh, and because I was so late, and because this is a smaller race (number of participants, setup, etc.), I didn’t find a place to fill my water bottle. And this, for the one time I didn’t fill it in advance. Fun!

The race

So given all this, I set low expectations for the race, and decided to consider it a simple Sunday ride. I would take it easy on the initial 12.5km, 1’150m climb, and then see how it goes. There was a food station two thirds of the way up the climb, so I told myself I’d hopefully not get too dehydrated before reaching it.

The climb starts relaxed-I was among the last people starting—and 15 minutes in, my friend the lower back says “ah, you’re climbing again, remember I’m here too”, which was way too early. So, I said to myself, nothing to lose, let’s just switch to standing every time my back gets tired, and stand until my legs get tired, then switch again.

The climb here was on pavement, so standing was pretty easy. And, to my surprise, this worked quite well: while standing I also went much faster (by much, I mean probably ~2-3km/h) than sitting so I was advancing in the long stretch of people going up the mountain, and my back was very relieved every time I switched.

So, up and down and up and down in the saddle, and up and up and up on the mountain, until I get to the food station. Water! I quickly drink an entire bottle (750ml!!), refill, and go on.

After the food station, the route changed to gravel, and this made pedalling while standing more difficult, due to less grip and slipping if you’re not careful. I tried the sit/stand/sit routine, but it was getting more difficult, so I went on, more slowly, until at one point I had to stop. I was by now in the sun, hot, and tired. And annoyed at the low flow out of the water bottle, so I opened it, drank from it as from a glass, and emptied it quickly, yet again! I felt much better, and restarted pedalling, eager to get to the top.

The last part of the climb is quite steep and more or less on a trail, so here I was pushing the bike, but since I didn’t have any goals, I did not feel guilty about it. Up and up, and finally I reach the top (altitude: 1’633m, elevation gained: 1’148m out of ~1’800m), and I can breathe easier knowing that the most difficult part is over.

From here, it was finally a good race. The o-tour route is much more beautiful than I remembered, but also more technically difficult, to the point of being quite annoying: it runs for long stretches on very uneven artificial paths, as if someone built a paved road but aimed for the most uneven surface possible, all rocks set at an angle, instead of an even surface. For hikers this is excellent, especially in wet conditions, but for trying to move a bike forward, and even more so uphill, it is annoying. There were stretches of ~5% grade where I was pushing the bike, due to how annoying biking on that surface was.

The route also has nice single-track sections, some easily navigable, some not (at least for me), and some where I had to carry the bike. Or even carry the bike on my shoulder while climbing over roots. A very nice thing, and sadly uncommon in this series of races.

One other fun aspect of the race was the mud. Especially in the forests, there was enough water left on the tracks that one got splashed quite often, and outside (where the soil doesn’t have the support of the roots), less water but quite deep mud. Deep enough that at one point I misjudged how deep the roughly 3-metre-long mud-like section was, and I had enough speed that my front wheel got stuck in the mud, and slowly (and I imagine gracefully as well :P, of course) I went over the handlebars into the softest mud I ever landed in. Landed, as in: halfway up my elbows (!), hands full of mud, gloves muddy as hell, legs down to the ankles in mud so shoes also muddy, and me finding the situation the funniest moment of the race. The guy behind me asked if everything was alright, and I almost couldn’t answer due to laughing out loud.

Back to serious stuff now. The rest of the “meters of climbing left”, about 600+ meters, were supposed to be distributed in about 4 sections, all about the same profile except the first one which was supposed to be a bit longer and flatter. At least, that’s what the official map was showing, in a somewhat stylised way. And that’s what I based my effort dosage on.

Of course, real life is not stylised, and there were 2 small climbs (as expected), and then a long and slow climb (definitely unexpected). I managed to stay on the bike, but the unexpected long climb—two kilometres—exhausted my reserves, despite being a relatively small grade (~5%, gained ~100m). I really was not planning for it, and I paid for that. Then a bit of downhill, then another short climb, another downhill—on road, 60km/h!—and then another medium-sized climb: 1km long, gaining 60m. Then a slow and long descent, a 700m/50m climb, a descent, and another climb, short but more difficult: 900m/80m (~9%). By this time, I was spent, and was really looking forward to the final descent, which I remembered was half pavement, half very nice single-track. And indeed it was superb, after all that climbing. Yay!

And then, reaching basically the end of the race (a few kilometres left), I remembered something else: this race has a climb at the end! This is where the missing climbing meters were hiding!

So, after eight kilometres of fun, 1.5km of easy climbing to gain 80m of ascent. Really trivial, a regular commute almost, but for me at this stage, it was painful and the most annoying thing ever…

And then, reaching the final two kilometres of light descent on paved road, and finishing the race. Yay!

Overall, given the way the week went, this race was much easier than I hoped, and quite enjoyable. Why? No idea. I’ll just take the bonus points and not complain ☺

Real champions

About two minutes after my finish, I hear the speaker saying that the second-placed woman in the long distance was nearing, and that it was Esther Süss! I’ve never seen her in person as far as I know, nor any of the other leaders in these races, since usually the finishing times are far apart. In this case, I apparently finished between the first and second places in the women’s race (there was a 3m05s difference between them). This also explained what all those photographers with telephotos at the finish line were waiting for, and why they didn’t take my picture :)))))) In any case, I was very happy to see her in person, since I’m very impressed that at 44 years old, she’s still competing and most of the time winning against other women 10 or even 20 years younger than her. It gives a bit of hope for older people like me. Of course minus being on the thinner side (unlike me), and actually liking long climbs (unlike me), and winning (definitely unlike me). Not even bringing up the world championship gold medals, OK?

Race analysis

Hydration, hydration…

As I mentioned above, I drank a lot at the beginning of the race. I continued to drink, and by 2 hours I was 3 full bottles in, at 2:40 I finished the fourth bottle.

Four bottles is 3 litres of liquid, which is way more than my usual consumption since I stopped carrying my hydration pack. In the Eiger bike challenge, done in much hotter conditions and over a longer time, I think I drank about the same or only slightly more (not sure exactly). The temperatures then: 19° average, 33° max, over 6½ hours; this time: 16.2° average, 20° max, over ~4 hours. And this time, with all that liquid in about 4 hours, I didn’t need to run to the bathroom as I finished (at all).

The only conclusion I can draw is that I sweat much more than I think, and that I must drink water more actively. I don’t want to go back to a hydration pack in a race (definitely yes for more relaxed rides), so I need to use all the food stops to drink and refill.

General fitness vs. leg muscles

I know my core is weak, but it’s getting hilarious that 15 minutes into the climbing, I start getting signals. This does not happen on flat rides or indoors for at least 2-2½ hours, so the conclusion is that I need to get fitter (core) and also do more real outdoor climbing training—just long (slower) climbs.

The sit-stand-sit routine was very useful, but it did result in even my hands getting tired from having to move and stabilise the bike. So again, need to get fitter overall and do more cross-training.

That is, as if I didn’t know it already ☹

Numbers

So much for the subjective part; let’s see what the numbers look like:

  • 2016:
    • time: overall 3h49m34.4s, start-Langis: 2h44m31s, Langis-finish: 1h05m02s.
    • age category: overall 70/77, start-Langis: 70, Langis-finish: 72.
    • overall gender ranking: overall 251/282, start-Langis: 250, Langis-finish: 255.
  • 2018:
    • time: overall 3h53m43.4s, start-Langis: 2h50m11s, Langis-finish: 1h03m31s.
    • age category: overall 70/84, start-Langis: 71, Langis-finish: 70.
    • overall gender ranking: overall 191/220, start-Langis: 191, Langis-finish: 189.

The first conclusion is that I’ve done infinitesimally better in the overall rankings: 251/282=0.890 vs. 191/220=0.868, so better, but only trivially so, especially given the large decline in participants on the short distance (the long one kept about the same numbers). I cannot compare age category, because ☺

The second interesting titbit is that in 2016, I was relatively faster on the climb plus first part of the high-altitude route, and relatively slower on the second half plus descent, both in the age category and the overall category. In 2018, this reversed, and I gained places on the descent. Time comparison, ~6 minutes slower in the first half, 1m30s faster on the second one.

But I find my numbers so close that I’m surprised I neither significantly improved nor slowed down in two years. Yes, I’m not training consistently, but still… I kind of expected some larger difference, one way or another. Strava also says that I beat my 2016 numbers on 7 segments, but only managed second place (behind my 2016 self) on 14 others, so again a wash.

So, weight gain aside, it seems nothing much has changed. I need to improve my consistency in training 10× probably to see a real difference. On the other hand, maybe this result is quite good, given my much less consistent training than in 2016 — ¯\_(ツ)_/¯.

Equipment-wise, I had a different bike now (full suspension vs. hardtail), and—compared to the previous race two weeks ago, at least—I had the tyre pressure quite well dialled in for this event. So I was able to go fast, and indeed overtake a couple of people on the flat/light descents, and more importantly, was not overtaken by other people on the long descent. My brakes were much better as well, so I was a bit more confident, but the front brake started squeaking again when it got hot, so I need to improve this even more. But again, not even the changed equipment made much of a difference ☺

I’ll finish here with an image of my “heroic efforts”:

Not very proud of this…

I’m very surprised that they put a photographer at the top of a climb, except maybe to motivate people to pedal up the next year… I’ll try to remember this ☺

Iustin Pop https://k1024.org iustin - all posts

next-20180912: linux-next

Kernel Linux - Wed, 12/09/2018 - 6:37am
Version: next-20180912 (linux-next)
Released: 2018-09-12

TensorFlow on Debian/sid (including Keras via R)

Planet Debian - Wed, 12/09/2018 - 3:04am

I have been struggling with getting TensorFlow running on Debian/sid for quite some time. The main problem is that the CUDA libraries installed by Debian are CUDA 9.1 based, while the precompiled pip-installable TensorFlow packages require CUDA 9.0, which resulted in an unusable installation. But finally I got around to it and found all the pieces.

Step 1: Install CUDA 9.0

The best way I found was going to the CUDA download page, selecting Linux, then x86_64, then Ubuntu, then 17.04, and finally deb (network). In the text that appears, click on the download button to obtain (currently) cuda-repo-ubuntu1704_9.0.176-1_amd64.deb.

After installing this package as root with

dpkg -i cuda-repo-ubuntu1704_9.0.176-1_amd64.deb

the nvidia repository signing key needs to be added

apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1704/x86_64/7fa2af80.pub

and finally install the CUDA 9.0 libraries (not all of cuda-9-0 because this would create problems with the normally installed nvidia libraries):

apt-get update
apt-get install cuda-libraries-9-0

This will install lots of libs into /usr/local/cuda-9.0 and add the respective directory to the ld.so path by creating a file /etc/ld.so.conf.d/cuda-9-0.conf.
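A quick sanity check at this point (my addition, not from the original post) is to ask the dynamic linker whether it now sees the CUDA 9.0 runtime:

ldconfig -p | grep libcudart

This should list libcudart.so.9.0 from the /usr/local/cuda-9.0 tree; if it doesn't, running ldconfig once as root should refresh the cache.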

Step 2: Install CUDA 9.0 CuDNN

One difficult-to-satisfy dependency is the CuDNN libraries. In our case we need the version 7 library for CUDA 9.0. To download these files one needs an NVIDIA developer account, which is quick and painless to create. After that, go to the CuDNN page, where one needs to select Download for CUDA 9.0 and then cuDNN v7.2.1 Runtime Library for Ubuntu 16.04 (Deb).

This will download a file libcudnn7_7.2.1.38-1+cuda9.0_amd64.deb which needs to be installed with dpkg -i libcudnn7_7.2.1.38-1+cuda9.0_amd64.deb.

Step 3: Install TensorFlow for GPU

This is the easiest one and can be done as explained on the TensorFlow installation page using

pip3 install --upgrade tensorflow-gpu

This will install several other dependencies, too.

Step 4: Check that everything works

Last but not least, make sure that TensorFlow can be loaded and find your GPU. This can be done with the following one-liner, and in my case gives the following output:

$ python3 -c "import tensorflow as tf; sess = tf.Session() ; print(tf.__version__)"
2018-09-11 16:30:27.075339: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-09-11 16:30:27.143265: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:897] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-09-11 16:30:27.143671: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1405] Found device 0 with properties:
name: GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.4175
pciBusID: 0000:01:00.0
totalMemory: 3.94GiB freeMemory: 3.85GiB
2018-09-11 16:30:27.143702: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1484] Adding visible gpu devices: 0
2018-09-11 16:30:27.316389: I tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-09-11 16:30:27.316432: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971]      0
2018-09-11 16:30:27.316439: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 0:   N
2018-09-11 16:30:27.316595: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3578 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
1.10.1
$

Addendum: Keras and R

With the above settled, the installation of Keras can be done via

apt-get install python3-keras

and this should pick up the TensorFlow backend automatically.
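To quickly confirm that Keras really did pick up TensorFlow (my own check, not part of the original post), a one-liner in the style of the check above should print "tensorflow":

python3 -c "import keras; print(keras.backend.backend())"

On import, Keras also prints a "Using TensorFlow backend." message on stderr.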

For R there is a Keras library that can be installed with

install.packages('keras')

on the R command line (as root).

After that running a simple MNIST code example should use your GPU from R (taken from Deep Learning with R from Manning Publications):

library(keras)

mnist <- dataset_mnist()
train_images <- mnist$train$x
train_labels <- mnist$train$y
test_images <- mnist$test$x
test_labels <- mnist$test$y

network <- keras_model_sequential() %>%
  layer_dense(units = 512, activation = "relu", input_shape = c(28 * 28)) %>%
  layer_dense(units = 10, activation = "softmax")

network %>% compile(
  optimizer = "rmsprop",
  loss = "categorical_crossentropy",
  metrics = c("accuracy")
)

train_images <- array_reshape(train_images, c(60000, 28 * 28))
train_images <- train_images / 255
test_images <- array_reshape(test_images, c(10000, 28 * 28))
test_images <- test_images / 255

train_labels <- to_categorical(train_labels)
test_labels <- to_categorical(test_labels)

network %>% fit(train_images, train_labels, epochs = 5, batch_size = 128)

metrics <- network %>% evaluate(test_images, test_labels)
metrics

Norbert Preining https://www.preining.info/blog There and back again

PSA: the.earth.li ceasing Debian mirror service

Planet Debian - Tue, 11/09/2018 - 9:22pm

This is a public service announcement that the.earth.li (the machine that hosts this blog) will cease service as a Debian mirror on 1st February 2019 at the latest.

It has already been removed from the official list of Debian mirrors. Please update your sources.list to point to an alternative sooner rather than later.
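For illustration only (my example, not part of the announcement; adjust the suite and components to match your installation), a stretch system switching to the deb.debian.org service mentioned below would use a sources.list line such as:

deb http://deb.debian.org/debian stretch main

followed by the usual apt-get update.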

The removal has been driven by a number of factors:

  • This mirror was originally set up when I was running Black Cat Networks, and a local mirror was generally useful to us. It’s 11+ years since Black Cat was sold, and 7+ since it moved away from that network.
  • the.earth.li currently lives with Bytemark, who already have an official secondary mirror. It does not add any useful resilience to the mirror network.
  • For a long time I’ve been unable to mirror all release architectures due to disk space limitations; I think such mirrors are of limited usefulness unless located in locations with dubious connectivity to alternative full mirrors.
  • Bytemark have been acquired by IOMart and I’m uncertain as to whether my machine will remain there long term - the acquisition announcement focuses on their cloud service rather than mentioning physical server provision. Disk space requirements are one of my major costs and the Debian mirror makes up ⅔ of my current disk usage. Dropping it will make moving host easier for me, should it prove necessary.

I can’t find an exact record of when I started running a mirror, but it was certainly before April 2005. 13 years doesn’t seem like a bad length of time to have been providing the service. Personally I’ve moved to deb.debian.org, but if the network location of the mirror is the reason you chose it, then mirror.bytemark.co.uk should be a good option.

Jonathan McDowell https://www.earth.li/~noodles/blog/ Noodles' Emptiness

Thinkpad X1 Carbon Gen 6

Planet Debian - Tue, 11/09/2018 - 12:33pm

In February I reviewed a Thinkpad X1 Carbon Gen 1 [1] that I bought on Ebay.

I have just been supplied the 6th Generation of the Thinkpad X1 Carbon for work, which would have cost about $1500 more than I want to pay for my own gear. ;)

The first thing to note is that it has USB-C for charging. The charger continues the trend towards smaller and lighter chargers and also allows me to charge my phone from the same charger so it’s one less charger to carry. The X1 Carbon comes with a 65W charger, but when I got a second charger it was only 45W but was also smaller and lighter.

The laptop itself is also slightly smaller in every dimension than my Gen 1 version as well as being noticeably lighter.

One thing I noticed is that the KDE power applet disappears when battery is full – maybe due to my history of buying refurbished laptops I haven’t had a battery report itself as full before.

Disabling the touchpad in the BIOS doesn’t work. This is annoying; there are 2 devices for mouse-type input, so I need to configure Xorg to only read from the TrackPoint.
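For what it’s worth, a minimal sketch of such an Xorg snippet (my guess at a workaround, not something from this review; it assumes the touchpad is matched by MatchIsTouchpad), dropped into /etc/X11/xorg.conf.d/, would be:

Section "InputClass"
    Identifier "ignore internal touchpad"
    MatchIsTouchpad "on"
    Option "Ignore" "true"
EndSection

which leaves only the TrackPoint visible to the X server.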

The labels on the lid are upside down from the perspective of the person using it (but right way up for people sitting opposite them). This looks nice for observers, but means that you tend to put your laptop the wrong way around on your desk a lot before you get used to it. It is also fancier than the older model; the red LED on the cover, forming the dot of the “i” in ThinkPad, is one of the minor fancy features.

As the new case is thinner than the old one (which was thin compared to most other laptops) it’s difficult to open. You can’t easily get your fingers under the lid to lift it up.

One really annoying design choice was to have a proprietary Ethernet socket with a special dongle. If the dongle is lost or damaged it will probably be expensive to replace. An extra USB socket and a USB Ethernet device would be much more useful.

The next deficiency is that it has one USB-C/DisplayPort/Thunderbolt port and 2 USB 3.1 ports. USB-C is going to be used for everything in the near future and a laptop with only a single USB-C port will be as annoying then as one with a single USB 2/3 port would be right now. Making a small laptop requires some engineering trade-offs and I can understand them limiting the number of USB 3.1 ports to save space. But having two or more USB-C ports wouldn’t have taken much space – it would take no extra space to have a USB-C port in place of the proprietary Ethernet port. It also has only a HDMI port for display, the USB-C/Thunderbolt/DisplayPort port is likely to be used for some USB-C device when you want an external display. The Lenovo advertising says “So you get Thunderbolt, USB-C, and DisplayPort all rolled into one”, but really you get “a choice of one of Thunderbolt, USB-C, or DisplayPort at any time”. How annoying would it be to disconnect your monitor because you want to read a USB-C storage device?

As an aside this might work out OK if you can have a DisplayPort monitor that also acts as a USB-C hub on the same cable. But if so requiring a monitor that isn’t even on sale now to make my laptop work properly isn’t a good strategy.

One problem I have is that resuming from suspend requires holding down the power button. I’m not sure if it’s a hardware or software issue. But suspend on lid close works correctly, and so does suspend on inactivity when running on battery power. The X1 Carbon Gen 1 that I own doesn’t suspend on lid close or inactivity (due to a Linux configuration issue). So I have one laptop that won’t suspend correctly and one that won’t resume correctly.

The CPU is an i5-8250U, which rates 7,678 according to cpubenchmark.net [2]. That’s 92% faster than the i7 in my personal Thinkpad, and more importantly I’m likely to actually get that performance without having the CPU overheat and slow down. That said, I got a thermal warning during the Debian install process, which is a bad sign. It’s also only 114% faster than the CPU in the Thinkpad T420 I bought in 2013. The model I got doesn’t have the fastest possible CPU, but I think that the T420 didn’t either. A 114% increase in CPU speed over 5 years is a long way from the factor of 4 or more that Moore’s law would have predicted.

The keyboard has the stupid positions for the PgUp and PgDn keys I noted on my last review. It’s still annoying and slows me down, but I am starting to get used to it.

The display is FullHD, it’s nice to have a laptop with the same resolution as my phone. It also has a slider to cover the built in camera which MIGHT also cause the microphone to be disconnected. It’s nice that hardware manufacturers are noticing that some customers care about privacy.

The storage is NVMe. That’s a nice feature, although being only 240G may be a problem for some uses.

Conclusion

Definitely a nice laptop if someone else is paying.

The fact that it had cooling issues from the first install is a concern. Laptops have always had problems with cooling and when a laptop has cooling problems before getting any dust inside it’s probably going to perform poorly in a few years.

Lenovo has gone too far trying to make it thin and light. I’d rather have the same laptop but slightly thicker, with a built-in Ethernet port, more USB ports, and a larger battery.

etbe https://etbe.coker.com.au etbe – Russell Coker

Debian/TeX Live binaries update 2018.20180907.48586-1

Planet Debian - Tue, 11/09/2018 - 4:36am

A new set of TeX Live binaries has been uploaded to Debian, based on the Subversion status as of 7 September (rev 48586). The aim was mostly fixing a bug in (x)dvipdfm(x) introduced by a previous upload. But besides fixing this, it also brought the new version of dvisvgm (2.5) into Debian.

The last update of the TeX Live binaries was not so long ago, but with it a bug in dvipdfmx crept in and strange things happened with xetex compilations. Upstream had already fixed this one, so I decided to upload a new set of binaries to Debian. At the same time, dvisvgm saw a version update to 2.5, which produced a few complications; to get it into Debian I first packaged the C version of xxHash (Debian QA package page).

The current sources also contain another cherry picked bug fix for dvipdfmx, but unfortunately I will have to stop now using the subversion tree as is, due to the inclusion of an intermediate luatex release I am not convinced I want to see in Debian before the proper release of TeX Live 2019. That means, from now on I have to cherry pick till the next TeX Live release.

As usual, please report problems to the Debian Bug Tracking System.
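For example (my suggestion, not part of the announcement), from a Debian system the usual route is reportbug against the binaries package:

reportbug texlive-binaries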

Enjoy

Norbert Preining https://www.preining.info/blog There and back again

AsioHeaders 1.12.1-1

Planet Debian - Tue, 11/09/2018 - 3:21am

A first update to the AsioHeaders package arrived on CRAN today. Asio provides a cross-platform C++ library for network and low-level I/O programming. It is also included in Boost – but requires linking when used as part of Boost. This standalone version of Asio is a header-only C++ library which can be used without linking (just like our BH package with parts of Boost).
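As a hedged usage sketch (mine, not from the post): an R package consuming these headers typically just declares

LinkingTo: AsioHeaders

in its DESCRIPTION and then uses

#include <asio.hpp>

in its C++ sources; being header-only, no additional linker flags are required.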

This release is the first following the initial upload of version 1.11.0-1 in 2015. I had noticed the updated 1.12.1 version a few days ago, and then Joe Cheng surprised me with a squeaky clean PR as he needed it to get RStudio’s websocket package working with OpenSSL 1.1.0.

I actually bumbled the release a little bit this morning, uploading 1.12.1 first and then 1.12.1-1, as we like having a packaging revision. Old habits die hard. So technically CRAN now has both, but we may clean that up and remove the 1.12.1 release from the archive, as 1.12.1-1 is identical but for two bytes in DESCRIPTION.

The NEWS entry follows; it really is just the header update done by Joe plus some Travis maintenance.

Changes in version 1.12.1-1 (2018-09-10)
  • Upgraded to Asio 1.12.1 (Joe Cheng in #2)

  • Updated Travis CI support via newer run.sh

Via CRANberries, there is a diffstat report relative to the previous release, as well as this time also one between the version-corrected upload and the main one.

Comments and suggestions about AsioHeaders are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

The Commons Clause doesn't help the commons

Planet Debian - Tue, 11/09/2018 - 1:26am
The Commons Clause was announced recently, along with several projects moving portions of their codebase under it. It's an additional restriction intended to be applied to existing open source licenses with the effect of preventing the work from being sold[1], where the definition of being sold includes being used as a component of an online pay-for service. As described in the FAQ, this changes the effective license of the work from an open source license to a source-available license. However, the site doesn't go into a great deal of detail as to why you'd want to do that.

Fortunately one of the VCs behind this move wrote an opinion article that goes into more detail. The central argument is that Amazon make use of a great deal of open source software and integrate it into commercial products that are incredibly lucrative, but give little back to the community in return. By adopting the commons clause, Amazon will be forced to negotiate with the projects before being able to use covered versions of the software. This will, apparently, prevent behaviour that is not conducive to sustainable open-source communities.

But this is where things get somewhat confusing. The author continues:

Our view is that open-source software was never intended for cloud infrastructure companies to take and sell. That is not the original ethos of open source.

which is a pretty astonishingly unsupported argument. Open source code has been incorporated into proprietary applications without giving back to the originating community since before the term open source even existed. MIT-licensed X11 became part of not only multiple Unixes, but also a variety of proprietary commercial products for non-Unix platforms. Large portions of BSD ended up in a whole range of proprietary operating systems (including older versions of Windows). The only argument in favour of this assertion is that cloud infrastructure companies didn't exist at that point in time, so they weren't taken into consideration[2] - but no argument is made as to why cloud infrastructure companies are fundamentally different to proprietary operating system companies in this respect. Both took open source code, incorporated it into other products and sold them on without (in most cases) giving anything back.

There's one counter-argument. When companies sold products based on open source code, they distributed it. Copyleft licenses like the GPL trigger on distribution, and as a result selling products based on copyleft code meant that the community would gain access to any modifications the vendor had made - improvements could be incorporated back into the original work, and everyone benefited. Incorporating open source code into a cloud product generally doesn't count as distribution, and so the source code disclosure requirements don't trigger. So perhaps that's the distinction being made?

Well, no. The GNU Affero GPL has a clause that covers this case - if you provide a network service based on AGPLed code then you must provide the source code in a similar way to if you distributed it under a more traditional copyleft license. But the article's author goes on to say:

AGPL makes it inconvenient but does not prevent cloud infrastructure providers from engaging in the abusive behavior described above. It simply says that they must release any modifications they make while engaging in such behavior.

IE, the problem isn't that cloud providers aren't giving back code, it's that they're using the code without contributing financially. There's no difference between what cloud providers are doing now and what proprietary operating system vendors were doing 30 years ago. The argument that "open source" was never intended to permit this sort of behaviour is simply untrue. The use of permissive licenses has always allowed large companies to benefit disproportionately when compared to the authors of said code. There's nothing new to see here.

But that doesn't mean that the status quo is good - the argument for why the commons clause is required may be specious, but that doesn't mean it's bad. We've seen multiple cases of open source projects struggling to obtain the resources required to make a project sustainable, even as many large companies make significant amounts of money off that work. Does the commons clause help us here?

As hinted at in the title, the answer's no. The commons clause attempts to change the power dynamic of the author/user role, but it does so in a way that's fundamentally tied to a business model and in a way that prevents many of the things that make open source software interesting to begin with. Let's talk about some problems.

The power dynamic still doesn't favour contributors

The commons clause only really works if there's a single copyright holder - if not, selling the code requires you to get permission from multiple people. But the clause does nothing to guarantee that the people who actually write the code benefit, merely that whoever holds the copyright does. If I rewrite a large part of a covered work and that code is merged (presumably after I've signed a CLA that assigns a copyright grant to the project owners), I have no power in any negotiations with any cloud providers. There's no guarantee that the project stewards will choose to reward me in any way. I contribute to them but get nothing back in return - instead, my improved code allows the project owners to charge more and provide stronger returns for the VCs. The inequity has shifted, but individual contributors still lose out.

It discourages use of covered projects

One of the benefits of being able to use open source software is that you don't need to fill out purchase orders or start commercial negotiations before you're able to deploy. Turns out the project doesn't actually fill your needs? Revert it, and all you've lost is some development time. Adding additional barriers is going to reduce uptake of covered projects, and that does nothing to benefit the contributors.

You can no longer meaningfully fork a project

One of the strengths of open source projects is that if the original project stewards turn out to violate the trust of their community, someone can fork it and provide a reasonable alternative. But if the project is released with the commons clause, it's impossible to sell any forked versions - anyone who wishes to do so would still need the permission of the original copyright holder, and they can refuse that in order to prevent a fork from gaining any significant uptake.

It doesn't inherently benefit the commons

The entire argument here is that the cloud providers are exploiting the commons, and by forcing them to pay for a license that allows them to make use of that software the commons will benefit. But there's no obvious link between these things. Maybe extra money will result in more development work being done and the commons benefiting, but maybe extra money will instead just result in greater payout to shareholders. Forcing cloud providers to release their modifications to the wider world would be of benefit to the commons, but this is explicitly ruled out as a goal. The clause isn't inherently incompatible with this - the negotiations between a vendor and a project to obtain a license to be permitted to sell the code could include a commitment to provide patches rather money, for instance, but the focus on money makes it clear that this wasn't the authors' priority.

What we're left with is a license condition that does nothing to benefit individual contributors or other users, and costs us the opportunity to fork projects in response to disagreements over design decisions or governance. What it does is ensure that a range of VC-backed projects are in a better position to improve their returns, without any guarantee that the commons will be left better off. It's an attempt to solve a problem that's existed since before the term "open source" was even coined, by simply layering on a business model that's also existed since before the term "open source" was even coined[3]. It's not anything new, and open source derives from an explicit rejection of this sort of business model.

That's not to say we're in a good place at the moment. It's clear that there is a giant level of power disparity between many projects and the consumers of those projects. But we're not going to fix that by simply discarding many of the benefits of open source and going back to an older way of doing things. Companies like Tidelift[4] are trying to identify ways of making this sustainable without losing the things that make open source a better way of doing software development in the first place, and that's what we should be focusing on rather than just admitting defeat to satisfy a small number of VC-backed firms that have otherwise failed to develop a sustainable business model.

[1] It is unclear how this interacts with licenses that include clauses that assert you can remove any additional restrictions that have been applied
[2] Although companies like Hotmail were making money from running open source software before the open source definition existed, so this still seems like a reach
[3] "Source available" predates my existence, let alone any existing open source licenses
[4] Disclosure: I know several people involved in Tidelift, but have no financial involvement in the company

Matthew Garrett https://mjg59.dreamwidth.org/ Matthew Garrett

Reproducible Builds: Weekly report #176

Planet Debian - Mon, 10/09/2018 - 7:12pm

Here’s what happened in the Reproducible Builds effort between Sunday September 2 and Saturday September 8 2018:

Patches filed

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible-builds.org/blog/ reproducible-builds.org

Firefox 60, Yubikey, U2F vs my Google Account

Planet Debian - Mon, 10/09/2018 - 2:13pm

tl;dr; Yes, you can use Firefox 60 in Debian/stretch with your U2F device to authenticate your Google account, but you have to use Chrome for the registration.

Thanks to Mike, Moritz and probably others there's now Firefox 60 ESR in Debian/stretch. So I took it as a chance to finally activate my for-work YubiKey Nano as a U2F/2FA device for my at-work Google account. Turns out it's not so simple. Basically Google told me that this browser is not supported and that I should install the trojan horse (Chrome) to use this feature. So I gave in, installed Chrome, logged in to my Google account and added the YubiKey as the default 2FA device. Then I quit Chrome, went back to Firefox and logged in again to my Google account. Bäm, it works! The YubiKey blinks, I can touch it, and I'm logged in.

Just in case: you probably want to install "u2f-host" to have "libu2f-host0" available which ships all the udev rules to detect common U2F devices correctly.
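On Debian/stretch that boils down to something like (my wording, not the author's):

apt install u2f-host

after which re-plugging the key should make it usable from the browser without root privileges, thanks to those udev rules.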

Sven Hoexter http://sven.stormbind.net/blog/ a blog

An FSFE Fellowship Representative's dilemma

Planet Debian - Mon, 10/09/2018 - 10:33am

The FSFE Fellowship representative role may appear trivial, but it is surprisingly complicated. What's best for FSFE, what is best for the fellows and what is best for free software are not always the same thing.

As outlined in my blog Who are/were the FSFE Fellowship?, fellows have generously donated over EUR 1,000,000 to FSFE and one member of the community recently bequeathed EUR 150,000. Fellows want to know that this money is spent well, even beyond their death.

FSFE promised them an elected representative, which may have given them great reassurance about the checks and balances in the organization. In practice, I feel that FSFE hasn't been sincere about this role and it is therefore my duty to make fellows aware of what representation means in practice right now.

This blog has been held back for some time in the hope that things at FSFE would improve. Alas, that is not the case and with the annual general meeting in Berlin only four weeks away, now is the time for the community to take an interest. As fellowship representative, I would like to invite members of the wider free software community to attend as guests of the fellowship and try to help FSFE regain legitimacy.

Born with a conflict of interest

According to the FSFE e.V. constitution, as it was before elections were abolished, the Fellows elected according to §6 become members of FSFE e.V.

Yet all the other fellows who voted, the people being represented, are not considered members of FSFE e.V. Sometimes it is possible to view all fellows together as a unit, a separate organization, The Fellowship. Sometimes not all fellows want the same thing and a representative has to view them each as individuals.

Any representative of this organization, The Fellowship and the individual fellows, has a strong ethical obligation to do what is best for The Fellowship and each fellow.

Yet as the constitution recognizes the representative as a member of FSFE e.V., some people have also argued that he/she should do what is best for FSFE e.V.

What happens when what is best for The Fellowship is not in alignment with what is best for FSFE e.V.?

It is also possible to imagine situations where doing what is best for FSFE e.V. and doing what is best for free software in general is not the same thing. In such a case the representative and other members may want to resign.

Censorship of the Fellowship representatives by FSFE management

On several occasions management argued that communications to fellows need to be censored adapted to help make money. For example, when discussing an email to be sent to all fellows in February about the risk of abolishing elections, the president warned:

"people might even stop to support us financially"

if they found out about the constitutional changes. He subsequently subjected the email to censorship modification by other people.

This was not a new theme: in a similar discussion in August 2017 about communications from the representatives, another senior member of the executive team had commented:

"It would be beneficial if our PR team could support in this, who have the experience from shaping communication in ways which support retention of our donors."

A few weeks later, on 20 March, FSFE's management distributed a new censorship communications policy, requiring future emails to prioritize FSFE's interests and mandating that all emails go through the censors PR team. As already explained, a representative has an ethical obligation to prioritize the interests of the people represented, The Fellowship, not FSFE's interests. The censorship communications policy appears deliberately incompatible with that obligation.

As the elected representative of a 1500-strong fellowship, it seems obscene that communications to the people represented are subject to censorship by the very staff the representative scrutinizes. The situation is even more ludicrous when the organization concerned claims to be an advocate of freedom.

This gets to the core of our differences: FSFE appeared to be hoping a representative would be a stooge, puppet or cheerleader whose existence might "support retention of ... donors". Personally, I never imagined myself like that. Given the generosity of fellows and the large amounts of time and money contributed to FSFE, I feel obliged to act as a genuine representative, ensuring money already donated is spent effectively on the desired objectives and ensuring that communications are accurate. FSFE management appear to hope their clever policy document will mute those ambitions.

Days later, on 25 March, FSFE management announced the extraordinary general meeting to be held in the staff office in Berlin, to confirm the constitutional change and as a bonus, try to abruptly terminate the last representative, myself. Were these sudden changes happening by coincidence, or rather, a nasty reprisal for February's email about constitutional changes? I had simply been trying to fulfill my ethical obligations to fellows and suddenly I had become persona non grata.

When I first saw this termination proposal in March, it really made me feel quite horrible. They were basically holding a gun to my head and planning a vote on whether to pull the trigger. For all purposes, it looked like gangster behavior happening right under my nose in a prominent free software organization.

Both the absurdity and the hostility of these tactics were further underlined by taking this vote on my role behind my back on 26 May, while I was on a 10-day trip to the Balkans pursuing real free software activities in Albania and Kosovo, starting with OSCAL.

In the end, while the motion to abolish elections was passed and fellows may never get to vote again, only four of the official members of the association backed the abusive motion to knife me and that motion failed. Nonetheless, it left me feeling I would be reluctant to trust FSFE again. An organization that relies so heavily on the contributions of volunteers shouldn't even contemplate treating them, or their representatives, with such contempt. The motion should never have been on the agenda in the first place.

Bullet or boomerang?

In May, I thought I missed the bullet but it appears to be making another pass.

Some senior members of FSFE e.V. remain frustrated that a representative's ethical obligations can't be hacked with policy documents and other juvenile antics. They complain that telling fellows the truth is an act of treason and speaking up for fellows in a discussion is a form of obstruction. Both of these crimes are apparently grounds for reprisals, threats, character assassination and potentially expulsion.

In the most outrageous act of scapegoating, the president has even tried to suggest that I am responsible for the massive exodus from the fellowship examined in my previous blog. The chart clearly shows the exodus coincides with the attempt to force-migrate fellows to the supporter program, long after the date when I took up this role.

Senior members have sent me threats to throw me out of office, most recently the president himself, simply for observing the basic ethical responsibilities of a representative.

Leave your conscience at the door

With the annual general meeting in Berlin only four weeks away, the president is apparently trying to assemble a list of people to throw the last remaining representative out of the association completely. It feels like something out of a gangster movie. After all, altering and suppressing the results of elections and controlling the behavior of the candidates are the modus operandi of dictators and gangsters everywhere.

Will other members of the association exercise their own conscience and respect the commitment of representation that was made to the community? Or will they leave their conscience at the door and be the president's puppets, voting as a bloc as in many previous general meetings?

The free software ecosystem depends on the goodwill of volunteers and donors, a community that can trust our leaders and each other. If every free software organization behaved like this, free software wouldn't exist.

A president who conspires to surround himself with people who agree with him, appointing all his staff to be voting members of the FSFE e.V. and expelling his critics appears unlikely to get far promoting the organization's mission when he first encounters adults in the real world.

The conflict of interest in this role is not of my own making, it is inherent in FSFE's structure. If they do finally kill off the last representative, I'll wear it like a badge of honor, for putting the community first. After all, isn't that a representative's role?

As the essayist John Gardner wrote

“The citizen can bring our political and governmental institutions back to life, make them responsive and accountable, and keep them honest. No one else can.”

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Short-term contracting work?

Planet Debian - Mon, 10/09/2018 - 8:48am

I'm starting a new job in about a month. Until then, it'd be really helpful if I could earn some money via a short-term contracting or consulting job. If your company or employer could benefit from any of the following, please get in touch. I will invoice via a Finnish company, not as a person (within the EU, at least, this makes it easier for the clients). I also reside in Finland, if that matters (meaning, meeting outside of Helsinki gets tricky).

  • software architecture design and review
  • coding in Python, C, shell, or code review
  • documentation: writing, review
  • git training
  • help with automated testing: unit tests, integration tests
  • help with Ansible
  • packaging and distributing software as .deb packages
Lars Wirzenius' blog http://blog.liw.fi/englishfeed/ englishfeed

Marko Lukša: Kubernetes in Action

Planet Debian - Mon, 10/09/2018 - 5:42am

The rise of Kubernetes as one of the most important tools for devops engineers and developers is beyond dispute. But until I moved to my current company I never had any chance to actually use Docker, let alone Kubernetes. But it became necessary for me to learn it, so …

I chose the Kubernetes book Kubernetes in Action from Manning, mostly because I have had very good experiences with Manning books (and have actually collected quite a few of them), and I wasn’t disappointed.

The book explains practically everything, and much more than I will ever need, with lots of examples, well-designed graphics, and great detail. It is structured into an initial part, “Overview”, which gives a very light intro to Kubernetes and Docker. The second part, “Core Concepts”, introduces in 8 well-separated chapters everything I had to use for the micro-service deployment of the application I have developed. The final part, “Beyond the basics”, goes into more advanced details and specifics relevant for cluster administrators.

If I miss anything from the book, it is Rancher: while the last chapter briefly discusses systems built on top of Kubernetes, namely OpenShift, Deis Workflow (no longer supported, final release in 2017) and Helm, another very popular platform, Rancher, has been left out, although I have had very good experiences with it.

A very recommendable book if one wants to learn about Kubernetes.

Norbert Preining https://www.preining.info/blog There and back again

4.19-rc3: mainline

Kernel Linux - Mon, 10/09/2018 - 2:26am
Version: 4.19-rc3 (mainline)
Released: 2018-09-10
Source: linux-4.19-rc3.tar.gz
Patch: full (incremental)

3.18.122: longterm

Kernel Linux - Sun, 09/09/2018 - 8:07pm
Version: 3.18.122 (EOL) (longterm)
Released: 2018-09-09
Source: linux-3.18.122.tar.xz
PGP Signature: linux-3.18.122.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-3.18.122

4.4.155: longterm

Kernel Linux - Sun, 09/09/2018 - 8:04pm
Version: 4.4.155 (longterm)
Released: 2018-09-09
Source: linux-4.4.155.tar.xz
PGP Signature: linux-4.4.155.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-4.4.155

4.9.126: longterm

Kernel Linux - Sun, 09/09/2018 - 8:01pm
Version: 4.9.126 (longterm)
Released: 2018-09-09
Source: linux-4.9.126.tar.xz
PGP Signature: linux-4.9.126.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-4.9.126

4.14.69: longterm

Kernel Linux - Sun, 09/09/2018 - 7:56pm
Version: 4.14.69 (longterm)
Released: 2018-09-09
Source: linux-4.14.69.tar.xz
PGP Signature: linux-4.14.69.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-4.14.69

Earthquake struck Hokkaido and caused blackout, but security.d.o run without trouble

Planet Debian - Sun, 09/09/2018 - 7:01pm
In December 2014, a security.debian.org mirror came to Hokkaido, Japan. And in September 2018, a huge earthquake (magnitude 6.7) hit Hokkaido. It was a surprise, because the Japanese government had said the probability of such a large earthquake shaking Hokkaido was less than 0.2% within 30 years.


Below pics: left: after the earthquake / right: before the earthquake.
And it caused a blackout for the whole of Hokkaido, which of course included the Sakura Internet Ishikari DC. The Ishikari DC ran on its emergency power supply for almost 60 hours(!), so the security mirror kept running without any error. The embedded tweet (translated from Japanese): "This is the moment the emergency power generators at the Ishikari data center finished their run. At nearly 60 hours, probably one of the longest such runs in the DC's history, and I am grateful they worked without trouble to the very end. As it turned out, we had about 70 hours of stored fuel, and with power-saving operation about 100 hours would have been possible without refuelling." pic.twitter.com/016aQg10Pj — 田中邦裕 (@kunihirotanaka), September 8, 2018

Hideki Yamane noreply@blogger.com Henrich plays with Debian

Printing paper: matte vs. glossy revisited

Planet Debian - Sun, 09/09/2018 - 6:11pm

Let’s revisit some choices… whether they were explicit or not.

For the record, a Google search for “matte vs glossy” says “about 180.000.000 results found”, so it’s like emacs versus vi, except that only gets a paltry 10 million hits.

Tech background

Just a condensed summary that makes some large simplifications, skip if you already know this.

Photographic printing paper is normally of three main types: matte, glossy, and canvas. Glossy is the type one usually finds for normal small prints out of a printing shop/booth, matte is, well, like the normal document print paper, and canvas is really stretchable “fabric”. In the matte camp, there is the smooth vs. textured vs. rag-type (alternatively, smooth, light texture, textured), and in the glossy land, there’s luster (semi-gloss) and glossy (with the very glossy ones being unpleasant to the touch, even). Making some simplifications here, of course. In canvas land, I have no idea ☺

The black ink used for printing differs between glossy and matte, since you need a specific type to ensure that you get the deepest blacks possible for that type of paper. Some printers have two black ink “heads”, others—like (most?) Epson printers—have a single one and can switch between the two inks. This switching is costly since it needs to flush all current ink and then load the new ink, thus it wastes ink.

OK, with this in mind, let’s proceed.

My original paper choices

When I originally bought my photo printer (about five years ago), I thought at the beginning I’ll mostly print on matte paper. Good quality matte paper has a very special feel (in itself), whereas (very) glossy paper is what you usually see cheap prints on (the kind of you would have gotten 20 years ago from a photo developing shop). Good glossy paper is much more subdued, but still on the same “shiny” basis (compared to matte).

So I bought my printer, started printing—on matte paper—and because of the aforementioned switching cost, for a while all was good in matte-only land. I did buy quite a few sample packs for testing, including glossy.

Of course, at one point, the issue of printing small (e.g. the usual 10×15cm format) appeared, and because most paper you find in this format in non-specialist stores is glossy, I started printing on glossy as well. And then I did some large format prints also using glossy, and… well, glossy does have the advantage of more “impact” (colours jump up at you much more), so I realised it’s not that bad in glossy land. Time to use/test with all that sample paper!

Thus, I did do quite a bit of experimenting to decide which are my “go-to” papers and settled on four, two matte and two glossy. But because there’s always “need one small photo printed”, I never actively used the matte papers beyond my tests… Both matte papers were smooth matte, since the texture effect I found quite unpleasant with some photos, especially portraits.

So many years passed, with one-off printing and the usual replacement of all the other colours. But the matte black cartridge, which I wasn’t using, still had ~20% ink left, so I ended up still having the original cartridge. Its manufacture date is 2013/08, so it’s more than five years old now. Epson says “for best results, use within 6 months”, so at this time it’s about ten times the recommended age.

Accidental revisiting the matte-vs-glossy

Fast forward to earlier this week: as I was printing a small photo for a friend, it reminded me that the Epson paper I find in shops in Switzerland is much thinner than what I once found in the US, and that for a long time I had wanted to look up what other small formats (10×15cm, A5, 5×7in, etc.) I can find in higher quality. I look at my preferred brands, and I actually find fine art paper in small format, but to my surprise, there’s also the option of smooth matte paper!

Small-format matte paper, especially for portraits, sounded very strange; I wondered how this would actually feel (in hand). Some of the best money spent during my paper research went on a sample (printed) book from Hahnemühle in A5 format (this one, which I can’t find on the Hahnemühle web site, hence the link to a shop), which contains almost all their papers with—let’s hope—appropriate subjects. I grab it, search for the specific matte paper I saw available in small format (Photo Rag 308), and… WOW. I couldn’t believe my eyes and fingers. Definitely different from any small photo I’ve (personally) ever seen.

The glossy paper, Fine Art Pearl (285gsm), also looked much superior to the Epson Premium Glossy Photo paper I was using. So, time for a three-way test.

OK, but that still left a problem - while I do have some (A4) paper of Photo Rag, I didn’t have matte ink; or rather, I had some but a very, very old one. Curiosity got the better of me - at worst, some clogging and some power cleaning (more ink waste), but I had to try it.

I chose one recent portrait photo in some subdued colours, printed (A4) using standard Epson Photo Glossy paper, then printed using Fine Art Pearl (again, what a difference!) and then, prepare to print using Photo Rag… switch black ink, run a quick small test pattern print (OK-ish), and print away. To my surprise, it did manage to print, with no problems even on this on-the-dark-side photograph.

And yes, it was as good as the sample was promising, at least for this photograph. I can only imagine how things will look and feel in small format. And I say feel because a large part of the printed photograph appeal is in the paper texture, not only the look.

Conclusion

So, two takeaways.

First, comparing these three papers, I’ve wasted a lot of prints (for friends/family/etc.) on sub-standard paper. Why didn’t I think of small-paper choices before, and only focused on large formats? Doesn’t make sense, but I’m glad I learned this now, at least.

Second, what’s with the “best used within 6 months”? Sure, 6 months is nothing if you’re a professional (as in, doing this for $day job), so maybe Epson didn’t test more than 1 year lifetimes, but still, I’m talking here about printing after 5 years.

The only thing left now is to actually order some packs and see how a small photo book will look in the matte version. And in any case, I’ve found a better choice even for the glossy option.

What about textured matte?

In all this, where are the matte textured papers? Being very textured and much different from everything I talked above (Photo Rag is smooth matte), the normal uses for these are art reproductions. The naming of this series (for Hahnemühle) is also in-line: Albrecht Dürer, William Turner, German and Museum Etching, etc.

The sample book has these papers as well, with the following subjects:

  • Torchon: a photograph of a fountain; so-so effect, IMHO;
  • Albrecht Dürer: abstract art reproduction
  • William Turner: a family picture (photograph, not paint)!!
  • German Etching: something that looks like a painting
  • Museum Etching: abstract art

I was very surprised that between all those “art reproductions”, the William Turner one, a quite textured paper, had a well matching family picture that is, IMHO, excellent. I really don’t have a feeling on “would this paper match this photograph” or “what kind paper would match it”, so I’m often surprised like this. In this case, it wasn’t just passable, it was an excellent match. You can see it on the product page—you need to go to the third picture in the slideshow, and of course that’s the digital picture, not what you get in real life.

Unless I get some epiphany soon, “what can one use textured matte paper for” will remain an unsolved mystery. Or just a research item, assuming I find the time, the same way I find Hahnemühle’s rice paper very cool but I have no idea what to print on it. Ah, amateurs ☺

As usual, comments are welcome.

Iustin Pop https://k1024.org iustin - all posts
