Planet Debian

Successive Heresies

Thu, 28/12/2017 - 2:37 PM

I prefer the book The Hobbit to The Lord Of The Rings.

I much prefer the Hobbit movies to the LOTR movies.

I like the fact the Hobbit movies were extended with material not in the original book: I'm glad there are female characters. I love the additional material with Radagast the Brown. I love the singing and poems and sense of fun preserved from what was a novel for children.

I find the foreshadowing of Sauron in The Hobbit movies to more effectively convey a sense of dread and power than actual Sauron in the LOTR movies.

Whilst I am generally bored by large CGI battles, I find the skirmishes in The Hobbit movies to be less boring than the epic-scale ones in LOTR.

jmtd http://jmtd.net/log/ Jonathan Dowland's Weblog

Reproducible Builds: Weekly report #139

Thu, 28/12/2017 - 1:55 PM

Here's what happened in the Reproducible Builds effort between Sunday December 17 and Saturday December 23 2017:

Packages reviewed and fixed, and bugs filed

Bugs filed in Debian:

Bugs filed in openSUSE:

  • Bernhard M. Wiedemann:
    • WindowMaker (merged) - use modification date of ChangeLog, upstreamable
    • ntp (merged) - drop date
    • bzflag - version upgrade to include already-upstreamed SOURCE_DATE_EPOCH patch
Reviews of unreproducible packages

20 package reviews have been added, 36 have been updated and 32 have been removed this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (6)
  • Matthias Klose (8)
diffoscope development

strip-nondeterminism development

disorderfs development

reprotest development

reproducible-website development
  • Chris Lamb:
    • rws3:
      • Huge number of formatting improvements, typo fixes, capitalisation
      • Add section headings to make splitting up easier.
  • Holger Levsen:
    • rws3:
      • Add a disclaimer that this part of the website is a Work-In-Progress.
      • Split notes from each session into separate pages (6 sessions).
      • Other formatting and style fixes.
      • Link to Ludovic Courtès' notes on GNU Guix.
  • Ximin Luo:
    • rws3:
      • Format agenda.md to look like previous years', and other fixes
      • Split notes from each session into separate pages (1 session).
jenkins.debian.net development

Misc.

This week's edition was written by Ximin Luo and Bernhard M. Wiedemann & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible.alioth.debian.org/blog/ Reproducible builds blog

(Micro)benchmarking Linux kernel functions

Thu, 28/12/2017 - 10:27 AM

Usually, the performance of a Linux subsystem is measured through an external (local or remote) process stressing it. Depending on the input point used, a large portion of code may be involved. To benchmark a single function, one solution is to write a kernel module.

Minimal kernel module

Let’s suppose we want to benchmark the IPv4 route lookup function, fib_lookup(). The following kernel function executes 1,000 lookups for 8.8.8.8 and returns the average value.1 It uses the get_cycles() function to compute the execution “time.”

/* Execute a benchmark on fib_lookup() and put result into the provided buffer `buf`. */
static int do_bench(char *buf)
{
    unsigned long long t1, t2;
    unsigned long long total = 0;
    unsigned long i;
    unsigned count = 1000;
    int err = 0;
    struct fib_result res;
    struct flowi4 fl4;
    memset(&fl4, 0, sizeof(fl4));
    fl4.daddr = in_aton("8.8.8.8");
    for (i = 0; i < count; i++) {
        t1 = get_cycles();
        err |= fib_lookup(&init_net, &fl4, &res, 0);
        t2 = get_cycles();
        total += t2 - t1;
    }
    if (err != 0)
        return scnprintf(buf, PAGE_SIZE, "err=%d msg=\"lookup error\"\n", err);
    return scnprintf(buf, PAGE_SIZE, "avg=%llu\n", total / count);
}

Now, we need to embed this function in a kernel module. The following code registers a sysfs directory containing a pseudo-file run. When a user queries this file, the module runs the benchmark function and returns the result as content.

#define pr_fmt(fmt) "kbench: " fmt

#include <linux/kernel.h>
#include <linux/version.h>
#include <linux/module.h>
#include <linux/inet.h>
#include <linux/timex.h>
#include <net/ip_fib.h>

/* When a user fetches the content of the "run" file, execute the benchmark function. */
static ssize_t run_show(struct kobject *kobj,
                        struct kobj_attribute *attr,
                        char *buf)
{
    return do_bench(buf);
}

static struct kobj_attribute run_attr = __ATTR_RO(run);
static struct attribute *bench_attributes[] = {
    &run_attr.attr,
    NULL
};
static struct attribute_group bench_attr_group = {
    .attrs = bench_attributes,
};
static struct kobject *bench_kobj;

int init_module(void)
{
    int rc;
    /* ❶ Create a simple kobject named "kbench" in /sys/kernel. */
    bench_kobj = kobject_create_and_add("kbench", kernel_kobj);
    if (!bench_kobj)
        return -ENOMEM;
    /* ❷ Create the files associated with this kobject. */
    rc = sysfs_create_group(bench_kobj, &bench_attr_group);
    if (rc) {
        kobject_put(bench_kobj);
        return rc;
    }
    return 0;
}

void cleanup_module(void)
{
    kobject_put(bench_kobj);
}

/* Metadata about this module */
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Microbenchmark for fib_lookup()");

In ❶, kobject_create_and_add() creates a new kobject named kbench. A kobject is the abstraction behind the sysfs filesystem. This new kobject is visible as the /sys/kernel/kbench/ directory.

In ❷, sysfs_create_group() attaches a set of attributes to our kobject. These attributes are materialized as files inside /sys/kernel/kbench/. Currently, we declare only one of them, run, with the __ATTR_RO macro. The attribute is therefore read-only (0444); when a user tries to fetch the content of the file, the run_show() function is invoked with a buffer of PAGE_SIZE bytes as its last argument and is expected to return the number of bytes written.

For more details, you can look at the documentation in the kernel and the associated example. Beware, random posts found on the web (including this one) may be outdated.2

The following Makefile will compile this example:

# Kernel module compilation
KDIR = /lib/modules/$(shell uname -r)/build

obj-m += kbench_mod.o

kbench_mod.ko: kbench_mod.c
	make -C $(KDIR) M=$(PWD) modules

After executing make, you should get a kbench_mod.ko file:

$ modinfo kbench_mod.ko
filename:       /home/bernat/code/…/kbench_mod.ko
description:    Microbenchmark for fib_lookup()
license:        GPL
depends:
name:           kbench_mod
vermagic:       4.14.0-1-amd64 SMP mod_unload modversions

You can load it and execute the benchmark:

$ insmod ./kbench_mod.ko
$ ls -l /sys/kernel/kbench/run
-r--r--r-- 1 root root 4096 déc. 10 16:05 /sys/kernel/kbench/run
$ cat /sys/kernel/kbench/run
avg=75

The result is a number of cycles. You can get an approximate time in nanoseconds if you divide it by the frequency of your processor in gigahertz (25 ns if you have a 3 GHz processor).3
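
For example, a quick conversion from user space once the module is loaded (a minimal shell sketch; the 3 GHz frequency is an assumption to replace with your own value, e.g. as reported by cpupower frequency-info):

freq_ghz=3                                        # assumed CPU frequency in GHz
cycles=$(sed 's/^avg=//' /sys/kernel/kbench/run)  # strip the "avg=" prefix
awk -v c="$cycles" -v f="$freq_ghz" 'BEGIN { printf "%.1f ns\n", c / f }'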

Configurable parameters

The module hard-codes two constants: the number of loops and the destination address to test. We can make these parameters user-configurable by exposing them as additional attributes of our kobject and defining a pair of functions to read and write them:

static unsigned long loop_count = 5000;
static u32 flow_dst_ipaddr = 0x08080808;

/* A mutex is used to ensure we are thread-safe when altering attributes. */
static DEFINE_MUTEX(kb_lock);

/* Show the current value for loop count. */
static ssize_t loop_count_show(struct kobject *kobj,
                               struct kobj_attribute *attr,
                               char *buf)
{
    ssize_t res;
    mutex_lock(&kb_lock);
    res = scnprintf(buf, PAGE_SIZE, "%lu\n", loop_count);
    mutex_unlock(&kb_lock);
    return res;
}

/* Store a new value for loop count. */
static ssize_t loop_count_store(struct kobject *kobj,
                                struct kobj_attribute *attr,
                                const char *buf, size_t count)
{
    unsigned long val;
    int err = kstrtoul(buf, 0, &val);
    if (err < 0)
        return err;
    if (val < 1)
        return -EINVAL;
    mutex_lock(&kb_lock);
    loop_count = val;
    mutex_unlock(&kb_lock);
    return count;
}

/* Show the current value for destination address. */
static ssize_t flow_dst_ipaddr_show(struct kobject *kobj,
                                    struct kobj_attribute *attr,
                                    char *buf)
{
    ssize_t res;
    mutex_lock(&kb_lock);
    res = scnprintf(buf, PAGE_SIZE, "%pI4\n", &flow_dst_ipaddr);
    mutex_unlock(&kb_lock);
    return res;
}

/* Store a new value for destination address. */
static ssize_t flow_dst_ipaddr_store(struct kobject *kobj,
                                     struct kobj_attribute *attr,
                                     const char *buf, size_t count)
{
    mutex_lock(&kb_lock);
    flow_dst_ipaddr = in_aton(buf);
    mutex_unlock(&kb_lock);
    return count;
}

/* Define the new set of attributes. They are read/write attributes. */
static struct kobj_attribute loop_count_attr = __ATTR_RW(loop_count);
static struct kobj_attribute flow_dst_ipaddr_attr = __ATTR_RW(flow_dst_ipaddr);
static struct kobj_attribute run_attr = __ATTR_RO(run);

static struct attribute *bench_attributes[] = {
    &loop_count_attr.attr,
    &flow_dst_ipaddr_attr.attr,
    &run_attr.attr,
    NULL
};

The IPv4 address is stored as a 32-bit integer but displayed and parsed using the dotted quad notation. The kernel provides the appropriate helpers for this task.

After this change, we have two new files in /sys/kernel/kbench. We can read the current values and modify them:

# cd /sys/kernel/kbench
# ls -l
-rw-r--r-- 1 root root 4096 déc. 10 19:10 flow_dst_ipaddr
-rw-r--r-- 1 root root 4096 déc. 10 19:10 loop_count
-r--r--r-- 1 root root 4096 déc. 10 19:10 run
# cat loop_count
5000
# cat flow_dst_ipaddr
8.8.8.8
# echo 9.9.9.9 > flow_dst_ipaddr
# cat flow_dst_ipaddr
9.9.9.9

We still need to alter the do_bench() function to make use of these parameters:

static int do_bench(char *buf)
{
    /* … */
    mutex_lock(&kb_lock);
    count = loop_count;
    fl4.daddr = flow_dst_ipaddr;
    mutex_unlock(&kb_lock);
    for (i = 0; i < count; i++) {
        /* … */

Meaningful statistics

Currently, we only compute the average lookup time. This value is usually inadequate:

  • A small number of outliers can raise this value quite significantly. An outlier can happen because we were preempted off the CPU while executing the benchmarked function. This doesn’t happen often if the function execution time is short (less than a millisecond), but when it does, the outliers can be off by several milliseconds, which is enough to skew the average when most values are several orders of magnitude smaller. For this reason, the median usually gives a better view.

  • The distribution may be asymmetrical or have several local maxima. It’s better to keep several percentiles or even a distribution graph.

To be able to extract meaningful statistics, we store the results in an array.

static int do_bench(char *buf)
{
    unsigned long long *results;
    /* … */
    results = kmalloc(sizeof(*results) * count, GFP_KERNEL);
    if (!results)
        return scnprintf(buf, PAGE_SIZE, "msg=\"no memory\"\n");
    for (i = 0; i < count; i++) {
        t1 = get_cycles();
        err |= fib_lookup(&init_net, &fl4, &res, 0);
        t2 = get_cycles();
        results[i] = t2 - t1;
    }
    if (err != 0) {
        kfree(results);
        return scnprintf(buf, PAGE_SIZE, "err=%d msg=\"lookup error\"\n", err);
    }
    /* Compute and display statistics */
    display_statistics(buf, results, count);
    kfree(results);
    return strnlen(buf, PAGE_SIZE);
}

Then, we need a helper function to compute percentiles:

static unsigned long long percentile(int p,
                                     unsigned long long *sorted,
                                     unsigned count)
{
    int index = p * count / 100;
    int index2 = index + 1;
    if (p * count % 100 == 0)
        return sorted[index];
    if (index2 >= count)
        index2 = index - 1;
    if (index2 < 0)
        index2 = index;
    return (sorted[index] + sorted[index2]) / 2;
}

This function needs a sorted array as input. The kernel provides a heapsort function, sort(), for this purpose. Another useful value to have is the deviation from the median. Here is a function to compute the median absolute deviation:4

static unsigned long long mad(unsigned long long *sorted,
                              unsigned long long median,
                              unsigned count)
{
    unsigned long long *dmedian = kmalloc(sizeof(unsigned long long) * count,
                                          GFP_KERNEL);
    unsigned long long res;
    unsigned i;
    if (!dmedian)
        return 0;
    for (i = 0; i < count; i++) {
        if (sorted[i] > median)
            dmedian[i] = sorted[i] - median;
        else
            dmedian[i] = median - sorted[i];
    }
    /* compare_ull() is a three-way comparator for unsigned long long values,
     * as expected by sort(); its definition is not shown in this excerpt. */
    sort(dmedian, count, sizeof(unsigned long long), compare_ull, NULL);
    res = percentile(50, dmedian, count);
    kfree(dmedian);
    return res;
}

With these two functions, we can provide additional statistics:

static void display_statistics(char *buf,
                               unsigned long long *results,
                               unsigned long count)
{
    unsigned long long p95, p90, p50;
    sort(results, count, sizeof(*results), compare_ull, NULL);
    if (count == 0) {
        scnprintf(buf, PAGE_SIZE, "msg=\"no match\"\n");
        return;
    }
    p95 = percentile(95, results, count);
    p90 = percentile(90, results, count);
    p50 = percentile(50, results, count);
    scnprintf(buf, PAGE_SIZE,
              "min=%llu max=%llu count=%lu 95th=%llu 90th=%llu 50th=%llu mad=%llu\n",
              results[0], results[count - 1], count,
              p95, p90, p50, mad(results, p50, count));
}

We can also append a graph of the distribution function (and of the cumulative distribution function):

min=72 max=33364 count=100000 95th=154 90th=142 50th=112 mad=6

value │                      ┊                              count
   72 │                                                        51
   77 │▒                                                     3548
   82 │▒▒░░                                                  4773
   87 │▒▒░░░░░                                               5918
   92 │░░░░░░░                                               1207
   97 │░░░░░░░                                                437
  102 │▒▒▒▒▒▒░░░░░░░░                                       12164
  107 │▒▒▒▒▒▒▒░░░░░░░░░░░░░░                                15508
  112 │▒▒▒▒▒▒▒▒▒▒▒░░░░░░░░░░░░░░░░░░░░░░                    23014
  117 │▒▒▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░                  6297
  122 │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░                   905
  127 │▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░                3845
  132 │▒▒▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░            6687
  137 │▒▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░          4884
  142 │▒▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░        4133
  147 │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░       1015
  152 │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░       1123

Benchmark validity

While the benchmark produces some figures, we may question their validity. There are several traps when writing a microbenchmark:

dead code
The compiler may optimize away our benchmark because its result is not used. In our example, we ensure the result is combined into a variable to avoid this.
warmup phase
One-time initializations may negatively affect the benchmark. This is less likely to happen with C code since there is no JIT. Nonetheless, you may want to add a small warmup phase.
too small dataset
If the benchmark runs with the same input parameters over and over, the input data may fit entirely in the L1 cache, which artificially improves the results. Therefore, it is important to iterate over a large dataset.
too regular dataset
A regular dataset may still artificially improve the results despite its size. While the whole dataset will not fit into the L1/L2 cache, the previous run may have loaded most of the data needed for the current run. In the route lookup example, as route entries are organized in a tree, it’s important not to scan the address space linearly. The address space could be explored randomly (a simple linear congruential generator brings reproducible randomness).
large overhead
If the benchmarked function runs in a few nanoseconds, the overhead of the benchmark infrastructure may be too high. Typically, the overhead of the method presented here is around 5 nanoseconds. get_cycles() is a thin wrapper around the RDTSC instruction: it returns the number of cycles for the current processor since the last reset. It is also virtualized with low overhead in case you run the benchmark in a virtual machine. If you want to measure a function with greater precision, you need to wrap it in a loop. However, the loop itself adds to the overhead, notably if you need to compute a large input set (in this case, the input can be prepared). Compilers also like to mess with loops. Finally, a loop hides the result distribution.
preemption
While the benchmark is running, the thread executing it can be preempted (or when running in a virtual machine, the whole virtual machine can be preempted by the host). When the function takes less than a millisecond to execute, one can assume preemption is rare enough to be filtered out by using a percentile function.
noise
When running the benchmark, noise from unrelated processes (or from sibling hosts when benchmarking in a virtual machine) needs to be avoided as it may change from one run to another. Therefore, it is not a good idea to benchmark in a public cloud. On the other hand, adding controlled noise to the benchmark may lead to less artificial results: in our example, route lookup is only a small part of routing a packet, and measuring it alone in a tight loop artificially improves the results.
syncing parallel benchmarks
While it is possible (and safe) to run several benchmarks in parallel, it may be difficult to ensure they really run in parallel: some invocations may work in better conditions because other threads are not running yet, skewing the results. Ideally, each run should execute bogus iterations and start measuring only when all runs are present. This does not seem to be a trivial addition.

As a conclusion, the benchmark module presented here is quite primitive (notably compared to a framework like JMH for Java) but, with care, can deliver some conclusive results like in these posts: “IPv4 route lookup on Linux” and “IPv6 route lookup on Linux.”

Alternative

Use of a tracing tool is an alternative approach. For example, if we want to benchmark IPv4 route lookup times, we can use the following process:

while true; do
  ip route get $((RANDOM%100)).$((RANDOM%100)).$((RANDOM%100)).5
  sleep 0.1
done

Then, we instrument the __fib_lookup() function with eBPF (through BCC):

$ sudo funclatency-bpfcc __fib_lookup
Tracing 1 functions for "__fib_lookup"... Hit Ctrl-C to end.
^C
     nsecs               : count     distribution
         0 -> 1          : 0        |                    |
         2 -> 3          : 0        |                    |
         4 -> 7          : 0        |                    |
         8 -> 15         : 0        |                    |
        16 -> 31         : 0        |                    |
        32 -> 63         : 0        |                    |
        64 -> 127        : 0        |                    |
       128 -> 255        : 0        |                    |
       256 -> 511        : 3        |*                   |
       512 -> 1023       : 1        |                    |
      1024 -> 2047       : 2        |*                   |
      2048 -> 4095       : 13       |******              |
      4096 -> 8191       : 42       |********************|

Currently, the overhead is quite high, as a route lookup on an empty routing table is less than 100 ns. Once Linux supports inter-event tracing, the overhead of this solution may be reduced to be usable for such microbenchmarks.

  1. In this simple case, it may be more accurate to use:

    t1 = get_cycles();
    for (i = 0; i < count; i++) {
        err |= fib_lookup(…);
    }
    t2 = get_cycles();
    total = t2 - t1;

    However, this prevents us from computing more statistics. Moreover, when you need to provide a non-constant input to the fib_lookup() function, the first way is likely to be more accurate. 

  2. In-kernel API backward compatibility is a non-goal of the Linux kernel. 

  3. You can get the current frequency with cpupower frequency-info. As the frequency may vary (even when using the performance governor), this may not be accurate but this still provides an easier representation (comparable results should use the same frequency). 

  4. Only integer arithmetic is available in the kernel. While it is possible to approximate a standard deviation using only integers, the median absolute deviation just reuses the percentile() function defined above. 

Vincent Bernat https://vincent.bernat.im/en Vincent Bernat

Freezing of tasks failed

Thu, 28/12/2017 - 7:33 AM

It is interesting how a user-space task can hinder a Linux kernel software suspend operation.

[11735.155443] PM: suspend entry (deep) [11735.155445] PM: Syncing filesystems ... done. [11735.215091] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch [11735.215172] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch [11735.558676] rfkill: input handler enabled [11735.608859] (NULL device *): firmware: direct-loading firmware rtlwifi/rtl8723befw_36.bin [11735.609910] (NULL device *): firmware: direct-loading firmware rtl_bt/rtl8723b_fw.bin [11735.611871] Freezing user space processes ... [11755.615603] Freezing of tasks failed after 20.003 seconds (1 tasks refusing to freeze, wq_busy=0): [11755.615854] digikam D 0 13262 13245 0x00000004 [11755.615859] Call Trace: [11755.615873] __schedule+0x28e/0x880 [11755.615878] schedule+0x2c/0x80 [11755.615889] request_wait_answer+0xa3/0x220 [fuse] [11755.615895] ? finish_wait+0x80/0x80 [11755.615902] __fuse_request_send+0x86/0x90 [fuse] [11755.615907] fuse_request_send+0x27/0x30 [fuse] [11755.615914] fuse_send_readpages.isra.30+0xd1/0x120 [fuse] [11755.615920] fuse_readpages+0xfd/0x110 [fuse] [11755.615928] __do_page_cache_readahead+0x200/0x2d0 [11755.615936] filemap_fault+0x37b/0x640 [11755.615940] ? filemap_fault+0x37b/0x640 [11755.615944] ? filemap_map_pages+0x179/0x320 [11755.615950] __do_fault+0x1e/0xb0 [11755.615953] __handle_mm_fault+0xc8a/0x1160 [11755.615958] handle_mm_fault+0xb1/0x200 [11755.615964] __do_page_fault+0x257/0x4d0 [11755.615968] do_page_fault+0x2e/0xd0 [11755.615973] page_fault+0x22/0x30 [11755.615976] RIP: 0033:0x7f32d3c7ff90 [11755.615978] RSP: 002b:00007ffd887c9d18 EFLAGS: 00010246 [11755.615981] RAX: 00007f32d3fc9c50 RBX: 000000000275e440 RCX: 0000000000000003 [11755.615982] RDX: 0000000000000002 RSI: 00007ffd887c9f10 RDI: 000000000275e440 [11755.615984] RBP: 00007ffd887c9f10 R08: 000000000275e820 R09: 00000000018d2f40 [11755.615986] R10: 0000000000000002 R11: 0000000000000000 R12: 000000000189cbc0 [11755.615987] R13: 0000000001839dc0 R14: 000000000275e440 R15: 0000000000000000 [11755.616014] OOM killer enabled. [11755.616015] Restarting tasks ... done. [11755.817640] PM: suspend exit [11755.817698] PM: suspend entry (s2idle) [11755.817700] PM: Syncing filesystems ... done. [11755.983156] rfkill: input handler disabled [11756.030209] rfkill: input handler enabled [11756.073529] Freezing user space processes ... [11776.084309] Freezing of tasks failed after 20.010 seconds (2 tasks refusing to freeze, wq_busy=0): [11776.084630] digikam D 0 13262 13245 0x00000004 [11776.084636] Call Trace: [11776.084653] __schedule+0x28e/0x880 [11776.084659] schedule+0x2c/0x80 [11776.084672] request_wait_answer+0xa3/0x220 [fuse] [11776.084680] ? finish_wait+0x80/0x80 [11776.084688] __fuse_request_send+0x86/0x90 [fuse] [11776.084695] fuse_request_send+0x27/0x30 [fuse] [11776.084703] fuse_send_readpages.isra.30+0xd1/0x120 [fuse] [11776.084711] fuse_readpages+0xfd/0x110 [fuse] [11776.084721] __do_page_cache_readahead+0x200/0x2d0 [11776.084730] filemap_fault+0x37b/0x640 [11776.084735] ? filemap_fault+0x37b/0x640 [11776.084743] ? __update_load_avg_blocked_se.isra.33+0xa1/0xf0 [11776.084749] ? 
filemap_map_pages+0x179/0x320 [11776.084755] __do_fault+0x1e/0xb0 [11776.084759] __handle_mm_fault+0xc8a/0x1160 [11776.084765] handle_mm_fault+0xb1/0x200 [11776.084772] __do_page_fault+0x257/0x4d0 [11776.084777] do_page_fault+0x2e/0xd0 [11776.084783] page_fault+0x22/0x30 [11776.084787] RIP: 0033:0x7f31ddf315e0 [11776.084789] RSP: 002b:00007ffd887ca068 EFLAGS: 00010202 [11776.084793] RAX: 00007f31de13c350 RBX: 00000000040be3f0 RCX: 000000000283da60 [11776.084795] RDX: 0000000000000001 RSI: 00000000040be3f0 RDI: 00000000040be3f0 [11776.084797] RBP: 00007f32d3fca1e0 R08: 0000000005679250 R09: 0000000000000020 [11776.084799] R10: 00000000058fc1b0 R11: 0000000004b9ac50 R12: 0000000000000000 [11776.084801] R13: 0000000000000001 R14: 0000000000000000 R15: 0000000000000000 [11776.084806] QXcbEventReader D 0 13268 13245 0x00000004 [11776.084810] Call Trace: [11776.084817] __schedule+0x28e/0x880 [11776.084823] schedule+0x2c/0x80 [11776.084827] rwsem_down_write_failed_killable+0x25a/0x490 [11776.084832] call_rwsem_down_write_failed_killable+0x17/0x30 [11776.084836] ? call_rwsem_down_write_failed_killable+0x17/0x30 [11776.084842] down_write_killable+0x2d/0x50 [11776.084848] do_mprotect_pkey+0xa9/0x2f0 [11776.084854] SyS_mprotect+0x13/0x20 [11776.084859] system_call_fast_compare_end+0xc/0x97 [11776.084861] RIP: 0033:0x7f32d1f7c057 [11776.084863] RSP: 002b:00007f32cbb8c8d8 EFLAGS: 00000206 ORIG_RAX: 000000000000000a [11776.084867] RAX: ffffffffffffffda RBX: 00007f32c4000020 RCX: 00007f32d1f7c057 [11776.084869] RDX: 0000000000000003 RSI: 0000000000001000 RDI: 00007f32c4024000 [11776.084871] RBP: 00000000000000c5 R08: 00007f32c4000000 R09: 0000000000024000 [11776.084872] R10: 00007f32c4024000 R11: 0000000000000206 R12: 00000000000000a0 [11776.084874] R13: 00007f32c4022f60 R14: 0000000000001000 R15: 00000000000000e0 [11776.084906] OOM killer enabled. [11776.084907] Restarting tasks ... done. [11776.289655] PM: suspend exit [11776.459624] IPv6: ADDRCONF(NETDEV_UP): wlp1s0: link is not ready [11776.469521] rfkill: input handler disabled [11776.978733] IPv6: ADDRCONF(NETDEV_UP): wlp1s0: link is not ready [11777.038879] IPv6: ADDRCONF(NETDEV_UP): wlp1s0: link is not ready [11778.022062] wlp1s0: authenticate with 50:8f:4c:82:4d:dd [11778.033155] wlp1s0: send auth to 50:8f:4c:82:4d:dd (try 1/3) [11778.038522] wlp1s0: authenticated [11778.041511] wlp1s0: associate with 50:8f:4c:82:4d:dd (try 1/3) [11778.059860] wlp1s0: RX AssocResp from 50:8f:4c:82:4d:dd (capab=0x431 status=0 aid=5) [11778.060253] wlp1s0: associated [11778.060308] IPv6: ADDRCONF(NETDEV_CHANGE): wlp1s0: link becomes ready [11778.987669] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch [11779.117608] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch [11779.160930] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch [11779.784045] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch [11779.913668] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch [11779.961517] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch

Ritesh Raj Sarraf https://www.researchut.com/taxonomy/term/2 RESEARCHUT - Debian-Blog

Testing Ansible Playbooks With Vagrant

Thu, 28/12/2017 - 12:00 AM

I use Ansible to automate the deployments of my websites (LinuxJobs.fr, Journal du hacker) and my applications (Feed2toot, Feed2tweet). In this blog post, I’ll describe the setup I use to test my Ansible Playbooks locally on my laptop.

Why test the Ansible Playbooks

I need a simple and fast way to test the deployment of my Ansible Playbooks locally on my laptop, especially at the beginning of writing a new Playbook, because deploying directly on the production server is both reeeeally slow… and risky for my services in production.

Instead of deploying on a remote server, I deploy my Playbooks in a VirtualBox virtual machine using Vagrant. This allows me to get the result of a new modification quickly, iterating and fixing as fast as possible.

Disclaimer: I am not a professional programmer. There might exist better solutions and I’m only describing one way of testing Ansible Playbooks that I find both easy and efficient for my own use cases.

My process
  1. Begin writing the new Ansible Playbook
  2. Launch a fresh virtual machine (VM) and deploy the playbook on this VM using Vagrant
  3. Fix the issues, either in the playbook or in the application deployed by Ansible itself
  4. Relaunch the deployment on the VM
  5. If there are more errors, go back to step 3. Otherwise destroy the VM, recreate it and deploy one last time to test with a fresh install
  6. If no error remains, tag the version of your Ansible Playbook and you’re ready to deploy in production
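
For step 6, a minimal tagging sketch, assuming the Playbook lives in a Git repository (the tag name is just an example):

$ git tag -a 1.0.0 -m "playbook ready for production"
$ git push origin 1.0.0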
What you need

First, you need VirtualBox. If you use the Debian distribution, this link describes how to install it, either from the Debian repositories or from upstream.

Second, you need Vagrant. Why Vagrant? Because it’s a kind of middleware between your development environment and your virtual machine, allowing programmatically reproducible operations and easily linking your deployments to the virtual machine. Install it with the following command:

# apt install vagrant

Setting up Vagrant

Everything about Vagrant lies in the file Vagrantfile. Here is mine:

Vagrant.require_version ">= 2.0.0"
Vagrant.configure(1) do |config|
  config.vm.box = "debian/stretch64"
  config.vm.provision "shell", inline: "apt install --yes git python3-pip"
  config.vm.provision "ansible" do |ansible|
    ansible.verbose = "v"
    ansible.playbook = "site.yml"
    ansible.vault_password_file = "vault_password_file"
  end
end

Debian, the best OS to operate your online services

  1. The 1st line defines what versions of Vagrant should execute your Vagrantfile.
  2. In the first block of the file, you could define the following operations for as many virtual machines as you wish (here just one).
  3. The 3rd line defines the official Vagrant image we’ll use for the virtual machine.
  4. The 4th line is really important: these are the apps missing from the VM. Here we install git and python3-pip with apt.
  5. The next line indicates the start of the Ansible configuration.
  6. On the 6th line, we want a verbose output of Ansible.
  7. On the 7th line, we define the entry point of your Ansible Playbook.
  8. On the 8th line, if you use Ansible Vault to encrypt some files, just define here the file with your Ansible Vault passphrase.

When Vagrant launches Ansible, it’s going to launch something like:

$ ansible-playbook --inventory-file=/home/me/ansible/test-ansible-playbook/.vagrant/provisioners/ansible/inventory -v --vault-password-file=vault_password_file site.yml

Executing Vagrant

After writing your Vagrantfile, you need to launch your VM. It’s as simple as using the following command:

$ vagrant up

That’s a slow operation, because the VM will be launched, the additionnal apps you defined in the Vagrantfile will be installed and finally your Playbook will be deployed on it. You should sparsely use it.

Ok, now we’re really ready to iterate fast. Between your different modifications, in order to test your deployments fast and on a regular basis, just use the following command:

$ vagrant provision

Once your Ansible Playbook is finally ready, usually after lots of iterations (at least that’s my case), you should test it on a fresh install, because your different iterations may have modified your virtual machine and could trigger unexpected results.

In order to test it from a fresh install, use the following command:

$ vagrant destroy && vagrant up

That’s again a slow operation. You should use it when you’re pretty sure your Ansible Playbook is almost finished. After testing your deployment on a fresh VM, you’re now ready to deploy in production.Or at least better prepared :p

Possible improvements? Let me know

I find the setup described in this blog post quite useful for my use cases. I can iterate quite fast especially when I begin writing a new playbook, not only on the playbook but sometimes on my own latest apps, not yet ready to be deployed in production. Deploying on a remote server would be both slow and dangerous for my services in production.

I could use a continuous integration (CI) server, but that’s not the topic of this blog post. As said before, the goal is to iterate as fast as possible in the beginning of writing a new Ansible Playbook.

Gitlab, offering Continuous Integration and Continuous Deployment services

Committing, pushing to your Git repository and waiting for the execution of your CI tests is overkill at the beginning of your Ansible Playbook, when it’s full of errors waiting to be debugged one by one. I think CI is more useful later in the life of the Ansible Playbooks, especially when different people work on it and you have a set of code quality rules to enforce. That’s only my opinion and it’s open to discussion; one more time, I’m not a professional programmer.

If you have better solutions to test Ansible Playbooks or to improve the one described here, let me know by writing a comment or by contacting me through my accounts on social networks below, I’ll be delighted to listen to your improvements.

About Me

Carl Chenet, Free Software Indie Hacker, Founder of LinuxJobs.fr, a job board for Free and Open Source Jobs in France.

Follow Me On Social Networks

 

Carl Chenet https://carlchenet.com debian – Carl Chenet's Blog

Translating my website to Finnish

Wed, 27/12/2017 - 11:00 PM

I've now been living in Finland for two years, and I'm pondering a small project to translate my main website into Finnish.

Obviously if my content is solely Finnish it will become of little interest to the world - if my vanity lets me even pretend it is useful at the moment!

The traditional way to do this, with Apache, is to render pages in multiple languages and let the client(s) request their preferred version with Accept-Language:. Though it seems that many clients are terrible at this, and the whole approach is a mess. Pretending it works, though, we render pages such as:

index.html
index.en.html
index.fi.html

Then "magic happens", such that the right content is served. I can then do extra-things, like add links to "English" or "Finnish" in the header/footers to let users choose.

Unfortunately I have an immediate problem! I host a bunch of websites on a single machine and I don't want to allow a single site compromise to affect other sites. To do that I run each website under its own Unix user. For example I have the website "steve.fi" running as the "s-fi" user, and my blog runs as "s-blog", or "s-blogfi":

root@www ~ # psx -ef | egrep '(s-blog|s-fi)'
s-blogfi /usr/sbin/lighttpd -f /srv/blog.steve.fi/lighttpd.conf -D
s-blog   /usr/sbin/lighttpd -f /srv/blog.steve.org.uk/lighttpd.conf -D
s-fi     /usr/sbin/lighttpd -f /srv/steve.fi/lighttpd.conf -D

There you can see the Unix user, and the per-user instance of lighttpd which hosts the website. Each instance binds to a high-port on localhost, and I have a reverse proxy listening on the public IP address to route incoming connections to the appropriate back-end instance.

I used to use thttpd but switched to lighttpd to allow CGI scripts to be used - some of my sites are slightly/mostly dynamic.

Unfortunately lighttpd doesn't support multiviews without some Lua hacks which will require rewriting - as the supplied example only handles Accept rather than the language-header I want.

It seems my simplest solution is to switch from having lighttpd on the back-end to running apache2 instead, but I've not yet decided which way to jump.

Food for thought, anyway.

hyvää joulua!

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

Debian-Med bug squashing

Tue, 26/12/2017 - 12:16 PM

The Debian Med Advent Calendar was again really successful this year. As announced on the mailinglist, this year the second highest number of bugs has been closed during that bug squashing:

year    number of bugs closed
2011     63
2012     28
2013     73
2014      5
2015    150
2016     95
2017    105

Well done everybody who participated!

alteholz http://blog.alteholz.eu blog.alteholz.eu » planetdebian

Dockerizing Compiled Software

Tue, 26/12/2017 - 8:00 AM

I recently went through a stint of closing a huge number of issues in the docker-library/php repository, and one of the oldest (and longest) discussions was related to installing dependencies for compiling extensions. I wrote a semi-long comment explaining how I do this in a general way for any software I wish to Dockerize.

I’m going to copy most of that comment here and perhaps expand a little bit more in order to have a better/cleaner place to link to!

The first step I take is to write the naïve version of the Dockerfile: download the source, run ./configure && make etc, clean up. I then try building my naïve creation, and in doing so hope for an error message. (yes, really!)

The error message will usually take the form of something like error: could not find "xyz.h" or error: libxyz development headers not found.

If I’m building in Debian, I’ll hit https://packages.debian.org/file:xyz.h (replacing “xyz.h” with the name of the header file from the error message), or even just Google something like “xyz.h debian”, to figure out the name of the package I require.

If I’m building in Alpine, I’ll use https://pkgs.alpinelinux.org/contents to perform a similar search.

The same works to some extent for “libxyz development headers”, but in my experience Google works better for those since different distributions and projects will call these development packages by different names, so sometimes it’s a little harder to figure out exactly which one is the “right” one to install.

Once I’ve got a package name, I add that package name to my Dockerfile, rinse, and repeat. Eventually, this usually leads to a successful build. Occationally I find that some library either isn’t in Debian or Alpine, or isn’t new enough, and I’ve also got to build it from source, but those instances are rare in my own experience – YMMV.

I’ll also often check the source for the Debian (via https://sources.debian.org) or Alpine (via https://git.alpinelinux.org/cgit/aports/tree) package of the software I’m looking to compile, especially paying attention to Build-Depends (ala php7.0=7.0.26-1’s debian/control file) and/or makedepends (ala php7’s APKBUILD file) for package name clues.

Personally, I find this sort of detective work interesting and rewarding, but I realize I’m probably a bit of a unique creature. Another good technique I use occasionally is to determine whether anyone else has already Dockerized the thing I’m trying to, so I can simply learn directly from their Dockerfile which packages I’ll need to install.

For the specific case of PHP extensions, there’s almost always someone who’s already figured out what’s necessary for this or that module, and all I have to do is some light detective work to find them.

Anyways, that’s my method! Hope it’s helpful, and happy hunting!

Tianon Gravi admwiggin@gmail.com Tianon's Ramblings

Salsa batch import

Mon, 25/12/2017 - 4:43 PM

Now that Salsa is in beta, it's time to import projects (= GitLab speak for "repository"). This is probably best done automated. Head to Access Tokens and generate a token with "api" scope, which you can then use with curl:

$ cat salsa-import
#!/bin/sh
set -eux
PROJECT="${1%.git}"
DESCRIPTION="$PROJECT packaging"
ALIOTH_URL="https://anonscm.debian.org/git"
ALIOTH_GROUP="collab-maint"
SALSA_URL="https://salsa.debian.org/api/v4"
SALSA_NAMESPACE="2" # 2 is "debian"
SALSA_TOKEN="yourcryptictokenhere"
curl -f "$SALSA_URL/projects?private_token=$SALSA_TOKEN" \
  --data "path=$PROJECT&namespace_id=$SALSA_NAMESPACE&description=$DESCRIPTION&import_url=$ALIOTH_URL/$ALIOTH_GROUP/$PROJECT&visibility=public"

This will create the GitLab project in the chosen namespace, and import the repository from Alioth.

To get the namespace id, use something like:

curl https://salsa.debian.org/api/v4/groups | jq . | less
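
To jump straight to the id of one group, a jq filter like the following works (assuming the group you want is the one whose path is "debian", as in the script above):

curl -s https://salsa.debian.org/api/v4/groups | jq '.[] | select(.path == "debian") | .id'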

Pro tip: To import a whole Alioth group to GitLab, run this on Alioth:

for f in *.git; do sh salsa-import $f; done

Christoph Berg http://www.df7cb.de/blog/tag/debian.html Myon's Debian Blog

Kitten Block equivalent for Firefox 57

Tue, 19/12/2017 - 1:00 AM

I’ve been using Kitten Block for years, since I don’t really need the blood pressure spike caused by accidentally following links to certain UK newspapers. Unfortunately it hasn’t been ported to Firefox 57. I tried emailing the author a couple of months ago, but my email bounced.

However, if your primary goal is just to block the websites in question rather than seeing kitten pictures as such (let’s face it, the internet is not short of alternative sources of kitten pictures), then it’s easy to do with uBlock Origin. After installing the extension if necessary, go to Tools → Add-ons → Extensions → uBlock Origin → Preferences → My filters, and add www.dailymail.co.uk and www.express.co.uk, each on its own line. (Of course you can easily add more if you like.) Voilà: instant tranquility.

Incidentally, this also works fine on Android. The fact that it was easy to install a good ad blocker without having to mess about with a rooted device or strange proxy settings was the main reason I switched to Firefox on my phone.

Colin Watson https://www.chiark.greenend.org.uk/~cjwatson/blog/ Colin Watson's blog

littler 0.3.3

Sun, 17/12/2017 - 5:37 PM

The fourth release of littler as a CRAN package is now available, following in the now more than ten-year history as a package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R and predates Rscript. In my very biased eyes it is better, as it allows for piping as well as shebang scripting via #!, uses command-line arguments more consistently and still starts faster. Last but not least, it is also less silly than Rscript and always loads the methods package, avoiding those bizarro bugs between code running in R itself and a scripting front-end.

littler prefers to live on Linux and Unix, has its difficulties on OS X due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems were a good idea?) and simply does not exist on Windows (yet -- the build system could be extended -- see RInside for an existence proof, and volunteers welcome!).

A few examples as highlighted at the Github repo:
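
(The examples themselves are not reproduced here. As a flavour of what littler usage looks like, both piping into the r binary and #!/usr/bin/env r shebang scripts work; a trivial sketch, not taken from the repo:)

$ echo 'cat(2 + 2, "\n")' | r          # pipe an R expression into littler
$ cat > hello.r <<'EOF'
#!/usr/bin/env r
cat("hello from littler\n")
EOF
$ chmod +x hello.r && ./hello.r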

This release brings a few new examples scripts, extends a few existing ones and also includes two fixes thanks to Carl. Again, no internals were changed. The NEWS file entry is below.

Changes in littler version 0.3.3 (2017-12-17)
  • Changes in examples

    • The script installGithub.r now correctly uses the upgrade argument (Carl Boettiger in #49).

    • New script pnrrs.r to call the package-native registration helper function added in R 3.4.0

    • The script install2.r now has more robust error handling (Carl Boettiger in #50).

    • New script cow.r to use R Hub's check_on_windows

    • Scripts cow.r and c4c.r use #!/usr/bin/env r

    • New option --fast (or -f) for scripts build.r and rcc.r for faster package build and check

    • The build.r script now defaults to using the current directory if no argument is provided.

    • The RStudio getters now use the rvest package to parse the webpage with available versions.

  • Changes in package

    • Travis CI now uses https to fetch script, and sets the group

Courtesy of CRANberries, there is a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page. The code is available via the GitHub repo, from tarballs off my littler page and the local directory here -- and now of course all from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

A second X server on vt8, running a different Debian suite, using systemd-nspawn

Fri, 15/12/2017 - 1:18 AM
Two tensions
  1. Sometimes the contents of the Debian archive isn’t yet sufficient for working in a software ecosystem in which I’d like to work, and I want to use that ecosystem’s package manager which downloads the world into $HOME – e.g. stack, pip, lein and friends.

    But I can’t use such a package manager when $HOME contains my PGP subkeys and other valuable files, and my X session includes Firefox with lots of saved passwords, etc.

  2. I want to run Debian stable on my laptop for purposes of my day job – if I can’t open Emacs on a Monday morning, it’s going to be a tough week.

    But I also want to do Debian development on my laptop, and most of that’s a pain without either Debian testing or Debian unstable.

The solution

Have Propellor provision and boot a systemd-nspawn(1) container running Debian unstable, and start a window manager in that container with $DISPLAY pointing at an X server in vt8. Wooo!

In more detail:

  1. Laptop runs Debian stable. Main account is spwhitton.
  2. Achieve isolation from /home/spwhitton by creating a new user account, spw, that can’t read /home/spwhitton. Also, in X startup scripts for spwhitton, run xhost -local:.
  3. debootstrap a Debian unstable chroot into /var/lib/container/develacc.
  4. Install useful desktop things like task-british-desktop into /var/lib/container/develacc.
  5. Boot /var/lib/container/develacc as a systemd-nspawn container called develacc.
  6. dm-tool switch-to-greeter to start a new X server on vt8. Login as spw.
  7. Propellor installs a script enter-develacc which uses nsenter(1) to run commands in the develacc container. Create a further script enter-develacc-i3 which does

    /usr/local/bin/enter-develacc sh -c "cd ~spw; DISPLAY=$1 su spw -c i3"
  8. Finally, /home/spw/.xsession starts i3 in the chroot pointed at vt8’s X server:

    sudo /usr/local/bin/enter-develacc-i3 $DISPLAY
  9. Phew. May now pip install foo. And Ctrl-Alt-F7 to go back to my secure session. That session can read and write /home/spw, so I can dgit push etc.

The Propellor configuration

develaccProvisioned :: Property (HasInfo + DebianLike)
develaccProvisioned = propertyList "develacc provisioned" $ props
    & User.accountFor (User "spw")
    & Dotfiles.installedFor (User "spw")
    & User.hasDesktopGroups (User "spw")
    & withMyAcc "Sean has 'spw' group"
        (\u -> tightenTargets $ User.hasGroup u (Group "spw"))
    & withMyHome "Sean's homedir chmodded"
        (\h -> tightenTargets $ File.mode h 0O0750)
    & "/home/spw" `File.mode` 0O0770
    & "/etc/sudoers.d/spw" `File.hasContent`
        ["spw ALL=(root) NOPASSWD: /usr/local/bin/enter-develacc-i3"]
    & "/usr/local/bin/enter-develacc-i3" `File.hasContent`
        [ "#!/bin/sh"
        , ""
        , "echo \"$1\" | grep -q -E \"^:[0-9.]+$\" || exit 1"
        , ""
        , "/usr/local/bin/enter-develacc sh -c \\"
        , "\t\"cd ~spw; DISPLAY=$1 su spw -c i3\""
        ]
    & "/usr/local/bin/enter-develacc-i3" `File.mode` 0O0755
    -- we have to start xss-lock outside of the container in order that it
    -- can interface with host logind
    & "/home/spw/.xsession" `File.hasContent`
        [ "if [ -e \"$HOME/local/wallpaper.png\" ]; then"
        , " xss-lock -- i3lock -i $HOME/local/wallpaper.png &"
        , "else"
        , " xss-lock -- i3lock -c 3f3f3f -n &"
        , "fi"
        , "sudo /usr/local/bin/enter-develacc-i3 $DISPLAY"
        ]
    & Systemd.nspawned develAccChroot
    & "/etc/network/if-up.d/develacc-resolvconf" `File.hasContent`
        [ "#!/bin/sh"
        , ""
        , "cp -fL /etc/resolv.conf \\"
        , "\t/var/lib/container/develacc/etc/resolv.conf"
        ]
    & "/etc/network/if-up.d/develacc-resolvconf" `File.mode` 0O0755
  where
    develAccChroot = Systemd.debContainer "develacc" $ props
        -- Prevent propellor passing --bind=/etc/resolv.conf which
        -- - won't work when system first boots as WLAN won't be up yet,
        --   so /etc/resolv.conf is a dangling symlink
        -- - doesn't keep /etc/resolv.conf up-to-date as I move between
        --   wireless networks
        ! Systemd.resolvConfed
        & osDebian Unstable X86_64
        & Apt.stdSourcesList
        & Apt.suiteAvailablePinned Experimental 1
        -- use host apt cacher (we assume I have that on any system with
        -- develaccProvisioned)
        & Apt.proxy "http://localhost:3142"
        & Apt.installed [ "i3"
                        , "task-xfce-desktop"
                        , "task-british-desktop"
                        , "xss-lock"
                        , "emacs"
                        , "caffeine"
                        , "redshift-gtk"
                        , "gnome-settings-daemon"
                        ]
        & Systemd.bind "/home/spw"
        -- note that this won't create /home/spw because that is
        -- bind-mounted, which is what we want
        & User.accountFor (User "spw")
        -- ensure that spw inside the container can read/write ~spw
        & scriptProperty [ "usermod -u $(stat --printf=\"%u\" /home/spw) spw"
                         , "groupmod -g $(stat --printf=\"%g\" /home/spw) spw"
                         ] `assume` NoChange

Comments

I first tried using a traditional chroot. I bound lots of /dev into the chroot and then tried to start lightdm on vt8. This way, the whole X server would be within the chroot; this is in a sense more straightforward and there is not the overhead of booting the container. But lightdm refuses to start.

It might have been possible to work around this, but after reading a number of reasons why chroots are less good under systemd as compared with sysvinit, I thought I’d try systemd-nspawn, which I’ve used before and rather like in general. I couldn’t get lightdm to start inside that, either, because systemd-nspawn makes it difficult to mount enough of /dev for X servers to be started. At that point I realised that I could start only the window manager inside the container, with the X server started from the host’s lightdm, and went from there.

The security isn’t that good. You shouldn’t be running anything actually untrusted, just stuff that’s semi-trusted.

  • chmod 750 /home/spwhitton, xhost -local: and the argument validation in enter-develacc-i3 are pretty much the extent of the security here. The containerisation is to get Debian sid on a Debian stable machine, not for isolation

  • lightdm still runs X servers as root even though it’s been possible to run them as non-root in Debian for a few years now (there’s a wishlist bug against lightdm)

I now have a total of six installations of Debian on my laptop’s hard drive … four traditional chroots, one systemd-nspawn container and of course the host OS. But this is easy to manage with propellor!

Bugs

Screen locking is weird because logind sessions aren’t shared into the container. I have to run xss-lock in /home/spw/.xsession before entering the container, and the window manager running in the container cannot have a keybinding to lock the screen (as it does in my secure session). To lock the spw X server, I have to shut my laptop lid, or run loginctl lock-sessions from my secure session, which requires entering the root password.

Sean Whitton https://spwhitton.name//blog/ Notes from the Library

Huawei Mate9

Thu, 14/12/2017 - 12:33 PM
Warranty Etc

I recently got a Huawei Mate 9 phone. My previous phone was a Nexus 6P that died shortly before its one-year warranty ran out. As there have apparently been many Nexus 6P phones dying, there are no stocks of replacements, so Kogan (the company I bought the phone from) offered me a choice of 4 phones in the same price range as a replacement.

Previously I had chosen to avoid the extended warranty offerings based on the idea that after more than a year the phone won’t be worth much and therefore getting it replaced under warranty isn’t as much of a benefit. But now that it seems that getting a phone replaced with a newer and more powerful model is a likely outcome it seems that there are benefits in a longer warranty. I chose not to pay for an “extended warranty” on my Nexus 6P because getting a new Nexus 6P now isn’t such a desirable outcome, but when getting a new Mate 9 is a possibility it seems more of a benefit to get the “extended warranty”. OTOH Kogan wasn’t offering more than 2 years of “warranty” recently when buying a phone for a relative, so maybe they lost a lot of money on replacements for the Nexus 6P.

Comparison

I chose the Mate 9 primarily because it has a large screen. Its 5.9″ display is only slightly larger than the 5.7″ displays in the Nexus 6P and the Samsung Galaxy Note 3 (my previous phone). But it is large enough to force me to change my phone use habits.

I previously wrote about matching phone size to the user’s hand size [1]. When writing that I had the theory that a Note 2 might be too large for me to use one-handed. But when I owned those phones I found that the Note 2 and Note 3 were both quite usable in one-handed mode. But the Mate 9 is just too big for that. To deal with this I now use the top corners of my phone screen for icons that I don’t tend to use one-handed, such as Facebook. I chose this phone knowing that this would be an issue because I’ve been spending more time reading web pages on my phone and I need to see more text on screen.

Adjusting my phone usage to the unusually large screen hasn’t been a problem for me. But I expect that many people will find this phone too large. I don’t think there are many people who buy jeans to fit a large phone in the pocket [2].

A widely touted feature of the Mate 9 is the Leica lens which apparently gives it really good quality photos. I haven’t noticed problems with my photos on my previous two phones and it seems likely that phone cameras have in most situations exceeded my requirements for photos (I’m not a very demanding user). One thing that I miss is the slow-motion video that the Nexus 6P supports. I guess I’ll have to make sure my wife is around when I need to make slow motion video.

My wife’s Nexus 6P is well out of warranty. Her phone was the original Nexus 6P I had. When her previous phone died I had a problem with my phone that needed a factory reset. As it’s easier to duplicate the configuration to a new phone than to restore it after a factory reset (as an aside, I believe Apple does this better), I copied my configuration to the new phone and then wiped it for my wife to use.

One noteworthy but mostly insignificant feature of the Mate 9 is that it comes with a phone case. The case is hard plastic and cracked when I unsuccessfully tried to remove it, so it seems to effectively be a single-use item. But it is good to have that in the box so that you don’t have to use the phone without a case on the first day, this is something almost every other phone manufacturer misses. But there is the option of ordering a case at the same time as a phone and the case isn’t very good.

I regard my Mate 9 as fairly unattractive. Maybe if I had a choice of color I would have been happier, but it still wouldn’t have looked like EVE from Wall-E (unlike the Nexus 6P).

The Mate 9 has a resolution of 1920*1080, while the Nexus 6P (and many other modern phones) has a resolution of 2560*1440. I don’t think that’s a big deal, the pixels are small enough that I can’t see them. I don’t really need my phone to have the same resolution as the 27″ monitor on my desktop.

The Mate 9 has 4G of RAM and apps seem significantly less likely to be killed than on the Nexus 6P with 3G. I can now switch between memory hungry apps like Pokemon Go and Facebook without having one of them killed by the OS.

Security

The OS support from Huawei isn’t nearly as good as a Nexus device. Mine is running Android 7.0 and has a security patch level of the 5th of June 2017. My wife’s Nexus 6P today got an update from Android 8.0 to 8.1 which I believe has the fixes for KRACK and Blueborne among others.

Kogan is currently selling the Pixel XL with 128G of storage for $829, if I was buying a phone now that’s probably what I would buy. It’s a pity that none of the companies that have manufactured Nexus devices seem to have learned how to support devices sold under their own name as well.

Conclusion

Generally this is a decent phone. As a replacement for a failed Nexus 6P it’s pretty good. But at this time I tend to recommend not buying it as the first generation of Pixel phones are now cheap enough to compete. If the Pixel XL is out of your price range then instead of saving $130 for a less secure phone it would be better to save $400 and choose one of the many cheaper phones on offer.

Remember when Linux users used to mock Windows for poor security? Now it seems that most Android devices are facing the security problems that Windows used to face and the iPhone and Pixel are going to take the role of the secure phone.

etbe https://etbe.coker.com.au etbe – Russell Coker

#13: (Much) Faster Package (Re-)Installation via Binaries

Enj, 14/12/2017 - 3:07pd

Welcome to the thirteenth post in the ridiculously rapid R recommendation series, or R4 for short. A few days ago we riffed on faster installation thanks to ccache. Today we show another way to get equally drastic gains for some (if not most) packages.

In a nutshell, there are two ways to get your R packages off CRAN. Either you install as a binary, or you use source. Most people do not think too much about this as on Windows, binary is the default. So why wouldn't one? Precisely. (Unless you are on Windows, and you develop, or debug, or test, or ... and need source. Another story.) On other operating systems, however, source is the rule, and binary is often unavailable.

Or is it? Exactly how to find out what is available will be left for another post as we do have a tool just for that. But today, just hear me out when I say that binary is often an option even when source is the default. And it matters. See below.

As a (mostly-to-always) Linux user, I sometimes whistle between my teeth that we "lost all those battles" (i.e. for the desktop(s) or laptop(s)) but "won the war". That topic merits a longer post I hope to write one day, and I won't do it justice today, but my main gist is that everybody (and here I mean mostly developers/power users) now at least also runs on Linux. And by that I mean that we all test our code in Linux environments such as e.g. Travis CI, and that many of us run deployments on cloud instances (AWS, GCE, Azure, ...) which are predominantly based on Linux. Or on local clusters. Or, if one may dream, the top500. And on and on. And frequently these are Ubuntu machines.

So here is an Ubuntu trick: Install from binary, and save loads of time. As an illustration, consider the chart below. It carries over the logic from the 'cached vs non-cached' compilation post and contrasts two ways of installing: from source, or as a binary. I use pristine and empty Docker containers as the base, and rely of course on the official r-base image which is supplied by Carl Boettiger and yours truly as part of our Rocker Project (and for which we have a forthcoming R Journal piece I might mention). So for example the timings for the ggplot2 installation were obtained via

time docker run --rm -ti r-base /bin/bash -c 'install.r ggplot2'

and

time docker run --rm -ti r-base /bin/bash -c 'apt-get update && apt-get install -y r-cran-ggplot2'

Here docker run --rm -ti just means to launch Docker, in 'remove leftovers at end' mode, use terminal and interactive mode and invoke a shell. The shell command then is, respectively, to install a CRAN package using install.r from my littler package, or to install the binary via apt-get after updating the apt indices (as the Docker container may have been built a few days or more ago).

Let's not focus on Docker here---it is just a convenient means to an end of efficiently measuring via a simple (wall-clock counting) time invocation. The key really is that install.r is just a wrapper to install.packages(), meaning source installation on Linux (as used inside the Docker container). And apt-get install ... is how one gets a binary. Again, I will try to post another piece on how one finds out whether a suitable binary for a CRAN package exists; a rough stop-gap heuristic is sketched at the end of this post. For now, just allow me to proceed.

So what do we see then? Well, have a look:

A few things stick out. RQuantLib really is a monster. And dplyr is also fairly heavy---both rely on Rcpp, BH and lots of templating. At the other end, data.table is still a marvel. No external dependencies, and just plain C code make the source installation essentially the same speed as the binary installation. Amazing. But I digress.

We should add that one of the source installations also required installing additional libraries: QuantLib is needed along with Boost for RQuantLib. Similarly for another package (not shown) which needed curl and libcurl.

So what is the upshot? If you can, consider binaries. I will try to write another post on how I do that e.g. for Travis CI where all my tests use binaries. (Yes, I know. This mattered more in the past when they did not cache. It still matters today as you a) do not need to fill the cache in the first place and b) do not need to worry about details concerning compilation from source which still throws enough people off. But yes, you can of course survive as is.)

The same approach is equally valid on AWS and related instances: I have answered many StackOverflow questions where folks were trying to compile "large-enough" pieces from source on minimal installations with minimal RAM, running out of resources and failing with bizarre errors. In short: Don't. Consider binaries. It saves time and trouble.
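
As for checking whether a binary exists in the first place (the topic deferred above), one rough stop-gap heuristic is that a CRAN package foo is frequently shipped as a Debian/Ubuntu binary named r-cran-foo (lower-cased). The sketch below simply asks apt about that name; it is not the proper tool alluded to earlier, just a guess that happens to work for many packages, and the package names used are only examples.

# Rough heuristic only: CRAN package "foo" is often packaged for
# Debian/Ubuntu as "r-cran-foo" (lower-cased). Ask apt whether such a
# binary is installable. A stop-gap, not the tool hinted at in the post.
import subprocess

def has_binary(cran_package):
    deb_name = "r-cran-" + cran_package.lower()
    out = subprocess.run(["apt-cache", "policy", deb_name],
                         capture_output=True, text=True).stdout
    # apt-cache prints "Candidate: (none)" (or nothing at all) when no
    # installable binary is known to the current apt indices.
    return "Candidate:" in out and "(none)" not in out

for pkg in ["ggplot2", "data.table", "Rcpp"]:
    print(pkg, "->", "binary available" if has_binary(pkg) else "source only")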

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Compute shaders

Enj, 14/12/2017 - 12:13pd

Movit, my GPU-accelerated video filter library, is getting compute shaders. But the experience really makes me happy that I chose to base it on fragment shaders originally and not compute! (I.e., many would claim CUDA would be the natural choice, even if it's single-vendor.) The deinterlace filter is significantly faster (10–70%, depending a bit on various factors) on my Intel card, so I'm hoping the resample filter is also going to get some win, but at this point, I'm not actually convinced it's going to be much faster… and the complexity of managing local memory effectively is sky-high. And then there's the fun of chaining everything together.

Hooray for already having an extensive battery of tests, at least!

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

Collecting statistics from TP-Link HS110 SmartPlugs using collectd

Mër, 13/12/2017 - 7:15md

Running a 3D printer alone at home is not necessarily the best idea, so I was looking for a way to force it off remotely. As an OctoPrint user I stumbled upon a plugin to control TP-Link Smartplugs, so I decided to give them a try. What I found especially nice about the HS110 model is that it is possible to monitor power usage, current and voltage. Of course I wanted to have a long term graph of it. The result is a small collectd plugin, written in Python. It is available on github: https://github.com/bzed/collectd-tplink_hs110. Enjoy!
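
For the curious, the core of talking to an HS110 is small enough to sketch here. The snippet below is not the code from the repository above; it is a rough illustration that assumes the commonly documented HS110 protocol (a JSON command sent over TCP port 9999, obfuscated with an XOR "autokey" scheme starting at key 171 and prefixed with a 4-byte length), uses a placeholder IP address, and only handles the older firmware field names, which differ between revisions.

# Illustrative sketch only -- the real plugin lives at
# https://github.com/bzed/collectd-tplink_hs110 and may differ substantially.
import json
import socket
import struct

import collectd  # provided by collectd's Python plugin host

PLUG_HOST = "192.168.1.50"   # placeholder address of the smart plug
EMETER_CMD = '{"emeter":{"get_realtime":{}}}'

def _encrypt(plain):
    # XOR "autokey" obfuscation with initial key 171, plus a length prefix.
    key = 171
    out = bytearray()
    for ch in plain.encode():
        key ^= ch
        out.append(key)
    return struct.pack(">I", len(plain)) + bytes(out)

def _decrypt(data):
    key = 171
    out = bytearray()
    for ch in data:
        out.append(key ^ ch)
        key = ch
    return out.decode()

def read_emeter(data=None):
    # Query the plug and dispatch the values to collectd. For brevity we
    # assume the whole reply fits into a single recv().
    with socket.create_connection((PLUG_HOST, 9999), timeout=5) as sock:
        sock.sendall(_encrypt(EMETER_CMD))
        raw = sock.recv(4096)
    reply = json.loads(_decrypt(raw[4:]))
    realtime = reply["emeter"]["get_realtime"]
    # Older firmware reports voltage/current/power directly; newer firmware
    # uses *_mv/*_ma/*_mw instead -- only the former is handled here.
    for name in ("voltage", "current", "power"):
        if name in realtime:
            vl = collectd.Values(plugin="tplink_hs110", type="gauge",
                                 type_instance=name)
            vl.dispatch(values=[realtime[name]])

collectd.register_read(read_emeter, 60)  # poll once a minute

For anything real, the plugin in the repository above is the place to start.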

Bernd Zeimetz https://bzed.de/categories/linux/ Linux on linux & the mountains

Debsources now in sources.debian.org

Mër, 13/12/2017 - 6:40md

Debsources is a web application for publishing, browsing and searching an unpacked Debian source mirror on the Web. With Debsources, all the source code of every Debian release is available in https://sources.debian.org, both via an HTML user interface and a JSON API.
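
As a quick, unofficial illustration of the JSON API, a minimal client could look like the sketch below. The endpoint path and response fields used here are assumptions based on the Debsources API documentation rather than something verified against the live service, so treat it as a starting point only.

# Minimal sketch: list the versions of a source package via the Debsources
# JSON API. The /api/src/<package>/ endpoint layout and the "versions" /
# "suites" fields are assumptions taken from the API documentation.
import json
import urllib.request

def package_versions(package):
    url = "https://sources.debian.org/api/src/{}/".format(package)
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return [(v.get("version"), v.get("suites")) for v in data.get("versions", [])]

if __name__ == "__main__":
    for version, suites in package_versions("coreutils"):
        print(version, suites)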

This service was first offered in 2013 with the sources.debian.net instance, which was kindly hosted by IRILL, and is now becoming official under sources.debian.org, hosted on the Debian infrastructure.

This new instance offers all the features of the old one (an updater that runs four times a day, various plugins to count lines of code or measure the size of packages, and sub-apps to show lists of patches and copyright files), plus integration with other Debian services such as codesearch.debian.net and the PTS.

The Debsources Team has taken the opportunity of this move of Debsources onto the Debian infrastructure to officially announce the service. Read their message as well as the Debsources documentation page for more details.

Laura Arjona Reina https://bits.debian.org/ Bits from Debian

Idea for finding all public domain movies in the USA

Mër, 13/12/2017 - 10:15pd

While looking at the scanned copies of the copyright renewal entries for movies published in the USA, an idea occurred to me. The number of renewals per year is so small that it should be fairly quick to transcribe them all and add references to the corresponding IMDB title IDs. This would give the (presumably) complete list of movies published 28 years earlier that did _not_ enter the public domain for the transcribed year. By fetching the list of USA movies published 28 years earlier and subtracting the movies with renewals, we should be left with the movies registered in IMDB that are now in the public domain. For the year 1955 (which is the one I have looked at the most), the total number of pages to transcribe is 21. For the 28 years from 1950 to 1978, it should be in the range of 500-600 pages. It is just a few days of work, and spread among a small group of people it should be doable in a few weeks of spare time.

A typical copyright renewal entry look like this (the first one listed for 1955):

ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); 10Jun55; R151558.

The movie title as well as the registration and renewal dates are easy enough for a program to locate (split on the first comma and look for DDmmmYY). The rest of the text is not required to find the movie in IMDB, but is useful to confirm that the correct movie is found. I am not quite sure what the L and R numbers mean, but suspect they are reference numbers into the archive of the US Copyright Office.
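
To illustrate, a minimal sketch of such a parser, applied to the example entry above, could look like this (the regular expression is a first guess and would need tuning against the real scans):

# Sketch of extracting the title, registration date and renewal date from a
# renewal entry, following the "split on first comma, look for DDmmmYY" idea.
import re

ENTRY = ("ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer "
         "Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); "
         "10Jun55; R151558.")

DATE_RE = re.compile(r"\b(\d{1,2})([A-Z][a-z]{2})(\d{2})\b")

def parse_entry(entry):
    title, rest = entry.split(",", 1)
    dates = DATE_RE.findall(rest)
    registered = "".join(dates[0]) if dates else None
    renewed = "".join(dates[-1]) if len(dates) > 1 else None
    return {"title": title.strip(), "registered": registered, "renewed": renewed}

print(parse_entry(ENTRY))
# {'title': 'ADAM AND EVIL', 'registered': '17Aug27', 'renewed': '10Jun55'}

The title and the two-digit registration year are then what is needed to build the IMDB search described below.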

Tracking down the equivalent IMDB title ID is probably going to be a manual task, but given the year it is fairly easy to search for the movie title using for example http://www.imdb.com/find?q=adam+and+evil+1927&s=all. Using this search, I find that the equivalent IMDB title ID for the first renewal entry from 1955 is http://www.imdb.com/title/tt0017588/.

I suspect the best way to do this would be to make a specialised web service to make it easy for contributors to transcribe entries and track down IMDB title IDs. In the web service, once an entry is transcribed, the title and year could be extracted from the text and a search in IMDB conducted so the user can pick the equivalent IMDB title ID right away. By spreading the work among volunteers, it would also be possible to have at least two people transcribe the same entries, to be able to discover any typos introduced. But I will need help to make this happen, as I lack the spare time to do all of this on my own. If you would like to help, please get in touch. Perhaps you can draft a web service for crowd sourcing the task?

Note, Project Gutenberg already has some transcribed copies of the US Copyright Office renewal protocols, but I have not been able to find any film renewals there, so I suspect they only have copies of renewals for written works. I have not been able to find any transcribed versions of movie renewals elsewhere so far. Perhaps they exist somewhere?

I would love to figure out methods for finding all the public domain works in other countries too, but that is a lot harder. At least for Norway and Great Britain, such work involves tracking down the people involved in making the movie and figuring out when they died. It is hard enough to figure out who was part of making a movie, but I do not know how to automate such a procedure without a registry of every person involved in making movies and their death years.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Petter Reinholdtsen http://people.skolelinux.org/pere/blog/ Petter Reinholdtsen - Entries tagged english

#metoo

Mër, 13/12/2017 - 9:48pd

I thought for a long time about whether I should post a/my #metoo. It wasn't rape. Nothing really happened. And a lot of these stories are very disturbing.

And yet it still bothers me every now and then. I was of school age, late elementary or lower school ... In my hometown there is a cinema. Young as we were, we weren't allowed to see Rambo/Rocky. Not that I was very interested in the movie ... But the door to the screening room stood open. And curious as we were, we looked through the door. The projectionist saw us and waved us in. It was exciting to see, from that perspective, a movie that was forbidden to us.

He explained to us how the machines worked, showed us how the film rolls were put in and showed us how to see the signals on the screen which are the sign to turn on the second projector with the new roll.

During these explanations he was standing very close to us. Really close. He put his arm around us. The hand moved towards the crotch. It was unpleasant and we knew that it wasn't all right. But screaming? We weren't allowed to be there ... So we thanked him nicely and retreated, disturbed. The movie wasn't that good anyway.

Nothing really happened, and we didn't say anything.


Rhonda http://rhonda.deb.at/blog/ Rhonda's Blog

Thinkpad X301

Mër, 13/12/2017 - 9:02pd
Another Broken Thinkpad

A few months ago I wrote a post about “Observing Reliability” [1] regarding my Thinkpad T420. I noted that the T420 had been running for almost 4 years which was a good run, and therefore the failed DVD drive didn’t convince me that Thinkpads have quality problems.

Since that time the plastic on the lid by the left hinge has broken, and every time I open or close the lid it breaks a bit more. That prevents the use of that Thinkpad by anyone who wants it as a serious laptop, as it can’t be expected to last long if opened and closed several times a day. It probably wouldn’t be difficult to fix the lid, but for an old laptop it doesn’t seem worth the effort and/or money. So my plan now is to give the Thinkpad to someone who wants a compact desktop system with a built-in UPS; a friend in Vietnam can probably find a worthy recipient.

My Thinkpad History

I bought the Thinkpad T420 in October 2013 [2], it lasted about 4 years and 2 months. It cost $306.

I bought my Thinkpad T61 in February 2010 [3], it lasted about 3 years and 8 months. It cost $796 [4].

Prior to the T61 I had a T41p that I received well before 2006 (maybe 2003) [5]. So the T41p lasted close to 7 years, as it was originally bought for me by a multinational corporation I’m sure it cost a lot of money. By the time I bought the T61 it had display problems, cooling problems, and compatibility issues with recent Linux distributions.

Before the T41p I had 3 Thinkpads in 5 years, all of which had the type of price that only made sense in the dot-com boom.

In terms of absolute lifetime the Thinkpad T420 did OK. In terms of cost it did very well, at only $6 per month. The T61 was $18 per month, and while the T41p lasted a long time it probably cost over $2000, giving it a cost of over $20 per month. $20 per month is still good value; I definitely get a lot more than $20 per month of benefit from having a laptop. While it’s nice that my most recent laptop could be said to have saved me $12 per month over the previous one, it doesn’t make much difference to my financial situation.

Thinkpad X301

My latest Thinkpad is an X301 that I found on an e-waste pile; it had a broken DVD drive, which is presumably the reason why someone decided to throw it out. It has the same power connector as my previous two Thinkpads, which was convenient as I didn’t find a PSU with it. I saw a review of the X301 dated 2008, which probably means it was new in 2009, but it has no obvious signs of wear so it probably hasn’t been used much.

My X301 has a 1440*900 screen, which isn’t as good as the T420 resolution of 1600*900. But a lower resolution is an expected trade-off for a smaller laptop. The X301 comes with a 64G SSD, which is a significant limitation.

I previously wrote about a “cloud lifestyle” [6]. I hadn’t implemented all the ideas from that post due to distractions and a lack of time. But now that I’ll have a primary PC with only 64G of storage I have more incentive to do that. The 100G disk in the T61 was a minor limitation at the time I got it, but since then everything has got bigger and 64G is going to be a big problem. The fact that the SSD is an unusual 1.8″ form factor means that I can’t cheaply upgrade it or use the SSD that I’ve used in the Thinkpad T420.

My current desktop PC is an i7-2600 system which builds the SE Linux policy packages for Debian (the thing I compile most frequently) in about 2 minutes, with about 5 minutes of CPU time used. The same compilation on the X301 takes just over 6.5 minutes, with almost 9 minutes of CPU time used. The i5 CPU in the Thinkpad T420 was somewhere between those times. While I can wait 6.5 minutes for a compile to test something, it is an annoyance. So I’ll probably use one of the i7 or i5 class servers I run to do builds.

On the T420 I had chroot environments running with systemd-nspawn for the last few releases of Debian in both AMD64 and i386 variants. Now I have to use a server somewhere for that.

I stored many TV shows, TED talks, and movies on the T420. Probably part of the problem with the hinge was due to adjusting the screen while watching TV in bed. Now I have a phone with 64G of storage and a tablet with 32G so I will use those for playing videos.

I’ve started to increase my use of Git recently. There are many programs I maintain that I really should have put under version control years ago. Now the desire to develop them on multiple systems gives me an incentive to do this.

Comparing to a Phone

My latest phone is a Huawei Mate 9 (I’ll blog about that shortly) which has a 1920*1080 screen and 64G of storage. So it has a higher resolution screen than my latest Thinkpad as well as equal storage. My phone has 4G of RAM while the Thinkpad only has 2G (I plan to add RAM soon).

I don’t know of a good way of comparing CPU power of phones and laptops (please comment if you have suggestions about this). The issues of GPU integration etc will make this complex. But I’m sure that the octa-core CPU in my phone doesn’t look too bad when compared to the dual-core CPU in my Thinkpad.

Conclusion

The X301 isn’t a laptop I would choose to buy today. Since using it I’ve appreciated how small and light it is, so I would definitely consider a recent X series. But being free, the value for money is NaN, which makes it more attractive. Maybe I won’t try to get 4+ years of use out of it; in 2 years’ time I might buy something newer and better in a similar form factor.

I can just occasionally poll an auction site and bid if there’s anything particularly tempting. If I was going to buy a new laptop now before the old one becomes totally unusable I would be rushed and wouldn’t get the best deal (particularly given that it’s almost Christmas).

Who knows, I might even find something newer and better on an e-waste pile. It’s amazing the type of stuff that gets thrown out nowadays.

etbe https://etbe.coker.com.au etbe – Russell Coker
