Planet Debian - https://planet.debian.org/

Kees Cook: security things in Linux v5.4

Wed, 19/02/2020 - 1:37am

Previously: v5.3.

Linux kernel v5.4 was released in late November. The holidays got the best of me, but better late than never! ;) Here are some security-related things I found interesting:

waitid() gains P_PIDFD
Christian Brauner has continued his pidfd work by adding a critical mode to waitid(): P_PIDFD. This makes it possible to reap child processes via a pidfd, and completes the interfaces needed for the bulk of programs performing process lifecycle management. (i.e. a pidfd can come from /proc or clone(), and can be waited on with waitid().)
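As a rough sketch of how the completed interface fits together (assuming Linux 5.4 or later; P_PIDFD may be missing from older libc headers, hence the fallback define):

/* Hypothetical example: reap a child through a pidfd with waitid().
 * Assumes Linux >= 5.4. P_PIDFD is 3 in the kernel UAPI headers; older
 * libcs may not define it yet, so provide a fallback. */
#define _GNU_SOURCE
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef P_PIDFD
#define P_PIDFD 3
#endif

int main(void)
{
    pid_t pid = fork();
    if (pid < 0)
        return 1;
    if (pid == 0) {          /* child */
        sleep(1);
        exit(42);
    }

    /* A pidfd can come from clone(CLONE_PIDFD) or pidfd_open(). */
    int pidfd = syscall(SYS_pidfd_open, pid, 0);
    if (pidfd < 0) {
        perror("pidfd_open");
        return 1;
    }

    siginfo_t info;
    /* New in v5.4: wait on (and reap) the process behind the pidfd. */
    if (waitid(P_PIDFD, pidfd, &info, WEXITED) < 0) {
        perror("waitid");
        return 1;
    }
    printf("child exited with status %d\n", info.si_status);
    return 0;
}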

kernel lockdown
After something on the order of 8 years, Linux can now draw a bright line between “ring 0” (kernel memory) and “uid 0” (highest privilege level in userspace). The “kernel lockdown” feature, which has been an out-of-tree patch series in most Linux distros for almost as many years, attempts to enumerate all the intentional ways (i.e. interfaces not flaws) userspace might be able to read or modify kernel memory (or execute in kernel space), and disable them. While Matthew Garrett made the internal details fine-grained controllable, the basic lockdown LSM can be set to either disabled, “integrity” (kernel memory can be read but not written), or “confidentiality” (no kernel memory reads or writes). Beyond closing the many holes between userspace and the kernel, if new interfaces are added to the kernel that might violate kernel integrity or confidentiality, now there is a place to put the access control to make everyone happy and there doesn’t need to be a rehashing of the age old fight between “but root has full kernel access” vs “not in some system configurations”.
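For illustration, a minimal sketch of checking which mode is active at runtime (assuming CONFIG_SECURITY_LOCKDOWN_LSM is enabled and securityfs is mounted at /sys/kernel/security; the active mode is the bracketed entry, e.g. "none [integrity] confidentiality"):

/* Sketch: print the current kernel lockdown mode.
 * Path and output format as documented for the lockdown LSM; adjust if
 * your securityfs is mounted elsewhere. */
#include <stdio.h>

int main(void)
{
    char buf[128];
    FILE *f = fopen("/sys/kernel/security/lockdown", "r");

    if (!f) {
        perror("/sys/kernel/security/lockdown");
        return 1;
    }
    if (fgets(buf, sizeof(buf), f))
        printf("lockdown: %s", buf);
    fclose(f);
    return 0;
}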

tagged memory relaxed syscall ABI
Andrey Konovalov (with Catalin Marinas and others) introduced a way to enable a “relaxed” tagged memory syscall ABI in the kernel. This means programs running on hardware that supports memory tags (or “versioning”, or “coloring”) in the upper (non-VMA) bits of a pointer address can use these addresses with the kernel without things going crazy. This is effectively teaching the kernel to ignore these high bits in places where they make no sense (i.e. mathematical comparisons) and keeping them in place where they have meaning (i.e. pointer dereferences).

As an example, if a userspace memory allocator had returned the address 0x0f00000010000000 (VMA address 0x10000000, with, say, a “high bits” tag of 0x0f), and a program used this range during a syscall that ultimately called copy_from_user() on it, the initial range check would fail if the tag bits were left in place: “that’s not a userspace address; it is greater than TASK_SIZE (0x0000800000000000)!”, so they are stripped for that check. During the actual copy into kernel memory, the tag is left in place so that when the hardware dereferences the pointer, the pointer tag can be checked against the expected tag assigned to the referenced memory region. If there is a mismatch, the hardware will trigger the memory tagging protection.
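A purely illustrative user-space sketch of that split (the mask and constants below simply mirror the example above; the kernel's real untagged_addr() handling is architecture-specific):

/* Sketch: strip the "color" tag for range checks, keep it for
 * dereferences. Illustrative only, not the kernel's implementation. */
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

#define TAG_SHIFT 56
#define TAG_MASK  (0xffULL << TAG_SHIFT)
#define TASK_SIZE 0x0000800000000000ULL     /* from the example above */

static uint64_t untag(uint64_t addr)
{
    return addr & ~TAG_MASK;    /* drop the high tag bits */
}

int main(void)
{
    uint64_t tagged = 0x0f00000010000000ULL; /* VMA 0x10000000, tag 0x0f */

    /* The kernel's range check compares the stripped address... */
    if (untag(tagged) < TASK_SIZE)
        printf("0x%016" PRIx64 " passes the TASK_SIZE check once untagged\n",
               tagged);

    /* ...while the copy itself keeps the tag so ADI/MTE hardware can
     * compare it against the tag assigned to the memory region. */
    return 0;
}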

Right now programs running on Sparc M7 CPUs with ADI (Application Data Integrity) can use this for hardware tagged memory, ARMv8 CPUs can use TBI (Top Byte Ignore) for software memory tagging, and eventually there will be ARMv8.5-A CPUs with MTE (Memory Tagging Extension).

boot entropy improvement
Thomas Gleixner got fed up with poor boot-time entropy and trolled Linus into coming up with a reasonable way to add entropy on modern CPUs, taking advantage of timing noise, cycle counter jitter, and perhaps even the variability of speculative execution. This means that there shouldn’t be mysterious multi-second (or multi-minute!) hangs at boot when some systems don’t have enough entropy to service getrandom() syscalls from systemd or the like.
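For context, the blocking behaviour in question is getrandom()'s default wait for the pool to initialize; here is a small sketch that probes it without blocking (assumes glibc 2.25 or later for <sys/random.h>):

/* Sketch: getrandom() normally blocks until the entropy pool is
 * initialized. GRND_NONBLOCK turns that into an EAGAIN error, which
 * shows whether a boot-time caller would have stalled here. */
#include <sys/types.h>
#include <sys/random.h>
#include <errno.h>
#include <stdio.h>

int main(void)
{
    unsigned char buf[16];
    ssize_t n = getrandom(buf, sizeof(buf), GRND_NONBLOCK);

    if (n < 0 && errno == EAGAIN)
        printf("pool not yet initialized; getrandom() would block\n");
    else if (n == (ssize_t)sizeof(buf))
        printf("got %zd random bytes without blocking\n", n);
    else
        perror("getrandom");
    return 0;
}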

userspace writes to swap files blocked
From the department of “how did this go unnoticed for so long?”, Darrick J. Wong fixed the kernel to not allow writes from userspace to active swap files. Without this, it was possible for a user (usually root) with write access to a swap file to modify its contents, thereby changing memory contents of a process once it got paged back in. While root normally could just use CAP_SYS_PTRACE to modify a running process directly, this was a loophole that allowed lesser-privileged users (e.g. anyone in the “disk” group) without the needed capabilities to still bypass ptrace restrictions.

limit strscpy() sizes to INT_MAX
Generally speaking, if a size variable ends up larger than INT_MAX, some calculation somewhere has overflowed. And even if not, it’s probably going to hit code somewhere nearby that won’t deal well with the result. As already done in the VFS core, and vsprintf(), I added a check to strscpy() to reject sizes larger than INT_MAX.
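The pattern is tiny; here is a hypothetical user-space analogue of the check (not the actual kernel diff, whose version returns -E2BIG):

/* Sketch: refuse sizes above INT_MAX on the assumption that an earlier
 * calculation overflowed. Illustrative strscpy()-like helper, not the
 * kernel's implementation. */
#include <limits.h>
#include <string.h>
#include <stdio.h>

static long copy_string(char *dst, const char *src, size_t count)
{
    if (count == 0 || count > INT_MAX)
        return -1;                          /* kernel returns -E2BIG */

    size_t len = strnlen(src, count - 1);
    memcpy(dst, src, len);
    dst[len] = '\0';
    return (long)len;
}

int main(void)
{
    char buf[16];

    printf("%ld\n", copy_string(buf, "hello", sizeof(buf)));          /* 5  */
    printf("%ld\n", copy_string(buf, "hello", (size_t)INT_MAX + 1));  /* -1 */
    return 0;
}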

ld.gold support removed
Thomas Gleixner removed support for the gold linker. While this isn’t providing a direct security benefit, ld.gold has been a constant source of weird bugs. Specifically where I’ve noticed, it had been a pain while developing KASLR, and has more recently been causing problems while stabilizing building the kernel with Clang. Having this linker support removed makes things much easier going forward. There are enough weird bugs to fix in Clang and ld.lld. ;)

Intel TSX disabled
Given the use of Intel’s Transactional Synchronization Extensions (TSX) CPU feature by attackers to exploit speculation flaws, Pawan Gupta disabled the feature by default on CPUs that support disabling TSX.

That’s all I have for this version. Let me know if I missed anything. :) Next up is Linux v5.5!

© 2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Daniel Silverstone: Subplot volunteers? (Acceptance testing tool)

Tue, 18/02/2020 - 9:24pm

Note: This is a repost from Lars' blog made to widen the reach and hopefully find the right interested parties.

Would you be willing to try Subplot for acceptance testing for one of your real projects, and give us feedback? We're looking for two volunteers.

given a project
when it uses Subplot
then it is successful

Subplot is a tool for capturing and automatically verifying the acceptance criteria for a software project or a system, in a way that's understood by all stakeholders.

In a software project there is always more than one stakeholder. Even in a project one writes for oneself, there are two stakeholders: oneself, and that malicious cretin oneself-in-the-future. More importantly, though, there are typically stakeholders such as end users, sysadmins, clients, software architects, developers, and testers. They all need to understand what the software should do, and when it's in an acceptable state to be put into use: in other words, what the acceptance criteria are.

Crucially, all stakeholders should understand the acceptance criteria the same way, and also how to verify they are met. In an ideal situation, all verification is automated, and happens very frequently.

There are various tools for this, from generic documentation tooling (word processors, text editors, markup languages, etc) to test automation (Cucumber, Selenium, etc). On the one hand, documenting acceptance criteria in a way that all stakeholders understand is crucial: otherwise the end users are at risk of getting something that's not useful to help them, and the project is a waste of everyone's time and money. On the other hand, automating the verification of how acceptance criteria are met is also crucial: otherwise it's done manually, which is slow, costly, and error prone, which increases the risk of project failure.

Subplot aims to solve this by an approach that combines documentation tooling with automated verification.

  • The stakeholders in a project jointly produce a document that captures all relevant acceptance criteria and also describes how they can be verified automatically, using scenarios. The document is written using Markdown.

  • The developer stakeholders produce code to implement the steps in the scenarios. The Subplot approach allows the step implementations to be done in a highly cohesive, de-coupled manner, which usually keeps such code quite simple. (Test code should be your best code.)

  • Subplot's "docgen" program produces a typeset version as PDF or HTML. This is meant to be easily comprehensible by all stakeholders.

  • Subplot's "codegen" program produces a test program in the language used by the developer stakeholders. This test program can be run to verify that acceptance criteria are met.

Subplot started in late 2018, and was initially called Fable. It is based on the yarn tool for the same purpose, from 2013. Yarn has been in active use all its life, if not popular outside a small circle. Subplot improves on yarn by improving document generation, markup, and decoupling of concerns. Subplot is not compatible with yarn.

Subplot is developed by Lars Wirzenius and Daniel Silverstone as a hobby project. It is free software, implemented in Rust, developed on Debian, and uses Pandoc and LaTeX for typesetting. The code is hosted on gitlab.com. Subplot verifies its own acceptance criteria. It is alpha level software.

We're looking for one or two volunteers to try Subplot on real projects of their own, and give us feedback. We want to make Subplot good for its purpose, also for people other than us. If you'd be willing to give it a try, start with the Subplot website, then tell us you're using Subplot. We're happy to respond to questions from the first two volunteers, and from others, time permitting. (The reality of life and time constraints is that we can't commit to supporting more people at this time.)

We'd love your feedback, whether you use Subplot or not.

Mike Gabriel: MATE 1.24 landed in Debian unstable

Tue, 18/02/2020 - 11:03am

Last week, Martin Wimpress (from Ubuntu MATE) and I did a 2.5-day packaging sprint and after that I bundle-uploaded all MATE 1.24 related components to Debian unstable. Thus, MATE 1.24 landed in Debian unstable only four days after the upstream release. I think this was the fastest version bump of MATE in Debian ever.

Packages should have been built by now for most of the 22 architectures supported by Debian. The current/latest build status can be viewed on the DDPO page of the Debian+Ubuntu MATE Packaging Team [1].

Please also refer to the MATE 1.24 upstream release notes for details on what's new and what's changed [2].

Credits

One big thanks goes to Martin Wimpress. Martin and I worked on all the related packages hand in hand. Only this teamwork made this very fast upload possible. Martin especially found the fix for a flaw in Python Caja that caused all Python3 based Caja extensions to fail in Caja 1.24 / Python Caja 1.24. Well done!

Another big thanks goes to the MATE upstream team. You again did an awesome job, folks. Much, much appreciated.

Last but not least, a big thanks goes to Svante Signell for providing Debian architecture specific patches for Debian's non-Linux distributions (GNU/Hurd, GNU/kFreeBSD). We will now wait until all MATE 1.24 packages have initially migrated to Debian testing and then follow up by uploading his fixes. As in the past, MATE shall be available on as many Debian architectures as possible (ideally: all of them). That said, all Debian porters are invited to send us patches if they see components of MATE Desktop fail on not-so-common architectures.

References

light+love,
Mike Gabriel (aka sunweaver)

Keith Packard: more-iterative-splines

Tue, 18/02/2020 - 8:41am
Slightly Better Iterative Spline Decomposition

My colleague Bart Massey (who is a CS professor at Portland State University) reviewed my iterative spline algorithm article and had an insightful comment — we don't just want any spline decomposition which is flat enough, what we really want is a decomposition for which every line segment is barely within the specified flatness value.

My initial approach was to keep halving the length of the spline segment until it was flat enough. This definitely generates a decomposition which is flat enough everywhere, but some of the segments will be shorter than they need to be, by as much as a factor of two.

As we'll be taking the resulting spline and doing a lot more computation with each segment, it makes sense to spend a bit more time finding a decomposition with fewer segments.

The Initial Search

Here's how the first post searched for a 'flat enough' spline section:

t = 1.0f;

/* Iterate until s1 is flat */
do {
    t = t/2.0f;
    _de_casteljau(s, s1, s2, t);
} while (!_is_flat(s1));

Bisection Method

What we want to do is find an approximate solution for the equation:

flatness(t) = tolerance

We'll use the Bisection method to find the value of t for which the flatness is no larger than our target tolerance, but is at least as large as tolerance - ε, for some reasonably small ε.

float hi = 1.0f;
float lo = 0.0f;

/* Search for an initial section of the spline which
 * is flat, but not too flat
 */
for (;;) {
    /* Average the lo and hi values for our
     * next estimate
     */
    float t = (hi + lo) / 2.0f;

    /* Split the spline at the target location */
    _de_casteljau(s, s1, s2, t);

    /* Compute the flatness and see if s1 is flat
     * enough
     */
    float flat = _flatness(s1);

    if (flat <= SCALE_FLAT(SNEK_DRAW_TOLERANCE)) {
        /* Stop looking when s1 is close
         * enough to the target tolerance
         */
        if (flat >= SCALE_FLAT(SNEK_DRAW_TOLERANCE - SNEK_FLAT_TOLERANCE))
            break;

        /* Flat: t is the new lower interval bound */
        lo = t;
    } else {
        /* Not flat: t is the new upper interval bound */
        hi = t;
    }
}

This searches for a place to split the spline where the initial portion is flat but not too flat. I set SNEK_FLAT_TOLERANCE to 0.01, so we'll pick segments which have flatness between 0.49 and 0.50.

The benefit from the search is pretty easy to understand by looking at the number of points generated compared with the number of _de_casteljau and _flatness calls:

Search    Calls    Points
Simple      150        33
Bisect      229        25

And here's an image comparing the two:

A Closed Form Approach?

Bart also suggests attempting to find an analytical solution to decompose the spline. What we need to do is take the flatness function and find the split which makes it equal to the desired flatness. If the spline control points are a, b, c, and d, then the flatness function is:

ux = (3×b.x - 2×a.x - d.x)²
uy = (3×b.y - 2×a.y - d.y)²
vx = (3×c.x - 2×d.x - a.x)²
vy = (3×c.y - 2×d.y - a.y)²

flat = max(ux, vx) + max(uy, vy)

When the spline is split into two pieces, all of the control points for the new splines are determined by the original control points and the 't' value which sets where the split happens. What we want is to find the 't' value which makes the flat value equal to the desired tolerance. Given that the binary search runs De Casteljau and the flatness function almost 10 times for each generated point, there's a lot of opportunity to go faster with a closed form solution.

Update: Fancier Method Found!

Bart points me at two papers:

  1. Flattening quadratic Béziers by Raph Levien
  2. Precise Flattening of Cubic Bézier Segments by Thomas F. Hain, Athar L. Ahmad, and David D. Langan

Levien's paper offers a great solution for quadratic Béziers by directly computing the minimum set of line segments necessary to approximate within a specified flatness. However, it doesn't generalize to cubic Béziers.

Hain, Ahmad and Langan do provide a directly computed decomposition of a cubic Bézier. This is done by constructing a parabolic approximation to the first portion of the spline and finding a 't' value which produces the desired flatness. There are a pile of special cases to deal with when there isn't a good enough parabolic approximation. But, overall computational cost is lower than a straightforward binary decomposition, plus there's no recursion required.

This second algorithm has the same characteristics as my Bisection method as the last segment may have any flatness from zero through the specified tolerance; Levien's solution is neater in that it generates line segments of similar flatness across the whole spline.

Current Implementation

/*
 * Copyright © 2020 Keith Packard <keithp@keithp.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA.
 */

#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <math.h>

typedef float point_t[2];
typedef point_t spline_t[4];

uint64_t num_flats;
uint64_t num_points;

#define SNEK_DRAW_TOLERANCE 0.5f
#define SNEK_FLAT_TOLERANCE 0.01f

/*
 * This actually returns flatness² * 16,
 * so we need to compare against scaled values
 * using the SCALE_FLAT macro
 */
static float
_flatness(spline_t spline)
{
    /*
     * This computes the maximum deviation of the spline from a
     * straight line between the end points.
     *
     * From https://hcklbrrfnn.files.wordpress.com/2012/08/bez.pdf
     */
    float ux = 3.0f * spline[1][0] - 2.0f * spline[0][0] - spline[3][0];
    float uy = 3.0f * spline[1][1] - 2.0f * spline[0][1] - spline[3][1];
    float vx = 3.0f * spline[2][0] - 2.0f * spline[3][0] - spline[0][0];
    float vy = 3.0f * spline[2][1] - 2.0f * spline[3][1] - spline[0][1];

    ux *= ux;
    uy *= uy;
    vx *= vx;
    vy *= vy;
    if (ux < vx)
        ux = vx;
    if (uy < vy)
        uy = vy;
    ++num_flats;

    /*
     * If we wanted to return the true flatness, we'd use:
     *
     * return sqrtf((ux + uy)/16.0f)
     */
    return ux + uy;
}

/* Convert constants to values usable with _flatness() */
#define SCALE_FLAT(f) ((f) * (f) * 16.0f)

/*
 * Linear interpolate from a to b using distance t (0 <= t <= 1)
 */
static void
_lerp (point_t a, point_t b, point_t r, float t)
{
    int i;

    for (i = 0; i < 2; i++)
        r[i] = a[i]*(1.0f - t) + b[i]*t;
}

/*
 * Split 's' into two splines at distance t (0 <= t <= 1)
 */
static void
_de_casteljau(spline_t s, spline_t s1, spline_t s2, float t)
{
    point_t first[3];
    point_t second[2];
    int i;

    for (i = 0; i < 3; i++)
        _lerp(s[i], s[i+1], first[i], t);

    for (i = 0; i < 2; i++)
        _lerp(first[i], first[i+1], second[i], t);

    _lerp(second[0], second[1], s1[3], t);

    for (i = 0; i < 2; i++) {
        s1[0][i] = s[0][i];
        s1[1][i] = first[0][i];
        s1[2][i] = second[0][i];

        s2[0][i] = s1[3][i];
        s2[1][i] = second[1][i];
        s2[2][i] = first[2][i];
        s2[3][i] = s[3][i];
    }
}

/*
 * Decompose 's' into straight lines which are
 * within SNEK_DRAW_TOLERANCE of the spline
 */
static void
_spline_decompose(void (*draw)(float x, float y), spline_t s)
{
    /* Start at the beginning of the spline. */
    (*draw)(s[0][0], s[0][1]);

    /* Split the spline until it is flat enough */
    while (_flatness(s) > SCALE_FLAT(SNEK_DRAW_TOLERANCE)) {
        spline_t s1, s2;
        float hi = 1.0f;
        float lo = 0.0f;

        /* Search for an initial section of the spline which
         * is flat, but not too flat
         */
        for (;;) {
            /* Average the lo and hi values for our
             * next estimate
             */
            float t = (hi + lo) / 2.0f;

            /* Split the spline at the target location */
            _de_casteljau(s, s1, s2, t);

            /* Compute the flatness and see if s1 is flat
             * enough
             */
            float flat = _flatness(s1);

            if (flat <= SCALE_FLAT(SNEK_DRAW_TOLERANCE)) {
                /* Stop looking when s1 is close
                 * enough to the target tolerance
                 */
                if (flat >= SCALE_FLAT(SNEK_DRAW_TOLERANCE - SNEK_FLAT_TOLERANCE))
                    break;

                /* Flat: t is the new lower interval bound */
                lo = t;
            } else {
                /* Not flat: t is the new upper interval bound */
                hi = t;
            }
        }

        /* Draw to the end of s1 */
        (*draw)(s1[3][0], s1[3][1]);

        /* Replace s with s2 */
        memcpy(&s[0], &s2[0], sizeof (spline_t));
    }

    /* S is now flat enough, so draw to the end */
    (*draw)(s[3][0], s[3][1]);
}

void
draw(float x, float y)
{
    ++num_points;
    printf("%8g, %8g\n", x, y);
}

int
main(int argc, char **argv)
{
    spline_t spline = {
        { 0.0f, 0.0f },
        { 0.0f, 256.0f },
        { 256.0f, -256.0f },
        { 256.0f, 0.0f }
    };

    _spline_decompose(draw, spline);
    fprintf(stderr, "flats %lu points %lu\n", num_flats, num_points);
    return 0;
}

Ulrike Uhlig: Reasons for job burnout and what motivates people in their job

Tue, 18/02/2020 - 12:00am

Burnout comes in many colours and flavours.

Often, burnout is conceived as a weakness of the person experiencing it: "they can't work under stress", "they lack organizational skills", "they are currently going through grief or a break up, that's why they can't keep up" — you've heard it all before, right?

But what if job burnout were actually an indicator of a toxic work environment? Or of a toxic work setup?

I had read quite a bit of literature trying to explain burnout before stumbling upon the work of Christina Maslach. She has researched burnout for thirty years and is best known for her research on occupational burnout. While she observed burnout in the 1990s mostly in caregiver professions, we can see an increase in burnout in many other fields in recent years, such as the tech industry. Maslach outlines in one of her talks what this might be due to.

More interesting to me is the question of why job burnout occurs at all. High workload is only one of six factors that increase the risk of burnout, according to Christina Maslach and her team.

Factors increasing job burnout
  1. Workload. This could be demand overload, lots of different tasks, lots of context switching, unclear expectations, having several part time jobs, lack of resources, lack of work force, etc.
  2. Lack of control. Absence of agency. Absence of the possibility to make decisions. Impossibility to act on one's own account.
  3. Insufficient reward. Here, we are not solely talking about financial reward, but also about gratitude, recognition, visibility, and celebration of accomplishments.
  4. Lack of community. Remote work, asynchronous communication, poor communication skills, isolation in working on tasks, few/no in-person meetings, lack of organizational caring.
  5. Absence of fairness. Invisible hierarchies, lack of (fair) decision making processes, back channel decision making, financial or other rewards unfairly distributed.
  6. Value conflicts. This could be over-emphasizing return on investment, making unethical requests, not respecting colleagues' boundaries, the lack of organizational vision, or poor leadership.

Interestingly, it is possible to improve one area of risk, and see improvements in all the other areas.

What motivates people?

So, what is it that motivates people, what makes them like their work?
Here, Maslach comes up with another interesting list:

  • Autonomy. This could mean for example to trust colleagues to work on tasks autonomously. To let colleagues make their own decisions on how to implement a feature as long as it corresponds to the code writing guidelines. The responsibility for the task should be transferred along with the task. People need to be allowed to make mistakes (and fix them). Autonomy also means to say goodbye to the expectation that colleagues do everything exactly like we would do it. Instead, we can learn to trust in collective intelligence for coming up with different solutions.
  • Feeling of belonging. This one could mean, for example, to use synchronous communication whenever possible. To privilege in-person meetings. To celebrate achievements. To make collective decisions whenever the outcome affects the collective (or part of it). To have lunch together. To have lunch together and not talk about work.
  • Competence. Having a working feedback process. Valuing each other's competences. Having the possibility to evolve in the workplace. Having the possibility to get training, to try new setups, new methods, or new tools. Having the possibility to increase one's competences, possibly with the financial backing of the workplace.
  • Positive emotions. Encouraging people to take breaks. Making sure work plans also include downtime. Encouraging people to take at least 5 weeks of vacation per year. Allowing people to have (paid) time off. Practicing gratitude. Acknowledging and celebrating achievements. Giving appreciation.
  • Psychological safety. Learning to communicate with kindness. Practicing active listening. Having meetings facilitated. Condemning harassment, personal insults, sexism, racism, fascism. Condemning the silencing of people. Having a way to report code of ethics/conduct abuses. Making sure that people who experience problems or need to share something are not isolated.
  • Fairness. How about exploring inclusive leadership models? Making invisible hierarchies visible (see the concept of rank). Being aware of rank. Having clear and transparent decision making processes. Rewarding people equally. Making sure there is no invisible unpaid work always done by the same people.
  • Meaning. Are the issues that we work on meaningful per se? Do they contribute anything to the world, or to the common good? Making sure that tasks or roles of other colleagues are not belittled. Meaning can also be given by putting tasks into perspective, for example by making developers attend conferences where they can meet users and get feedback on their work. Making sure we don't forget why we wanted to do a job in the first place. Getting familiar with the concept of bullshit jobs.

In this list, the words written in bold are what we could call "Needs". The descriptions behind them are what we could call "Strategies". There are always many different strategies to fulfill a need, I've only outlined some of them. I'm sure you can come up with others, please don't hesitate to share them with me.

Holger Levsen: 20200217-SnowCamp

Mon, 17/02/2020 - 8:56pm
SnowCamp 2020

This is just a late reminder that there are still some seats available for SnowCamp, taking place at the end of this week and during the whole weekend somewhere in the Italian mountains.

I believe it will be a really nice opportunity to hack on Debian things, and thus I hope that there won't be empty seats, though at the moment there are.

The venue is reachable by train and Debian will be covering the cost of accommodation, so you just have to cover transportation and meals.

The event starts in three days, so hurry up and, whatever your plans are, change them!

If you have any further questions, join #suncamp (yes!) on irc.debian.org.

Jonathan Dowland: Amiga floppy recovery project scope

Mon, 17/02/2020 - 5:05pm

This is the eighth part in a series of blog posts. The previous post was First successful Amiga disk-dumping session. The whole series is available here: Amiga.

The main goal of my Amiga project is to read the data from my old floppy disks. After a bit of a hiatus (and after some gentle encouragement from friends at FOSDEM) I'm nearly done, with 150/200 disks attempted so far. Ultimately I intend to get rid of the disks to free up space in my house, and probably the Amiga, too. In the meantime, what could I do with it?

Gotek floppy emulator balanced on the Amiga

The most immediately obvious thing is to improve the housing of the emulated floppy disk. My Gotek adaptor is unceremoniously balanced on top of the case. Housing it within the A500 would be much neater. I might try to follow this guide, which requires no case modifications and no 3D printed brackets, but instead of soldering new push-buttons, add a separate OLED display and rotary encoder (knob) in a separate housing, such as this 3D-printed wedge-shaped mount on Thingiverse. I do wonder if some kind of side-mounted solution might be better, so the top casing could be removed without having to re-route the wires each time.

3D printed OLED mount, from Amibay

Next would be improving the video output. My A520 video modulator developed problems that are most likely caused by leaking or blown capacitors. At the moment, I have a choice of B&W RF out, or using a 30 year old Philips CRT monitor. The latter is too big to comfortably fit on my main desk, and the blue channel has started to fail. Learning the skills to fix the A520 could be useful as the same could happen to the Amiga itself. Alternatively replacements are very cheap on the second hand market. Or I could look at a 3rd-party equivalent like the RGB4ALL. I have tried a direct, passive socket adaptor on the off-chance my LCD TV supported 15kHz, but alas, it appears it doesn't. This list of monitors known to support 15kHz is very short, so sourcing one is not likely to be easy or cheap. It's possible to buy sophisticated "Flicker Fixers/Scan Doublers" that enable the use of any external display, but they're neither cheap nor common.

My original "tank" Amiga mouse (pictured above) is developing problems with the left mouse button. Replacing the switch looks simple (in this Youtube video) but will require me to invest in a soldering iron, multimeter and related equipment (not necessarily a bad thing). It might be easier to buy a different, more comfortable old serial mouse.

Once those are out of the way, it might be interesting to explore aspects of the system that I didn't touch on as a child: how do you program the thing? I don't remember ever writing any Amiga BASIC, although I had several doomed attempts to use "game makers" like AMOS or SEUCK. What programming language were the commercial games written in? Pure assembly? The 68k is supposed to have a pleasant instruction set for this. Was there ever a practically useful C compiler for the Amiga? I never networked my Amiga. I never played around with music sampling or trackers.

There's something oddly satisfying about the idea of taking a 30 year old computer and making it into a useful machine in the modern era. I could consider more involved hardware upgrades. The Amiga enthusiast community is old and the fans are very passionate. I've discovered a lot of incredible enhancements that fans have built to enhance their machines, right up to FPGA-powered CPU replacements that can run several times faster than the fastest original m68ks, and also offer digital video out, hundreds of MB of RAM, modern storage options, etc. To give an idea, check out Epsilon's Amiga Blog, which outlines some of the improvements they've made to their fleet of machines.

This is a deep rabbit hole, and I'm not sure I can afford the time (or the money!) to explore it at the moment. It will certainly not rise above my more pressing responsibilities. But we'll see how things go.

Enrico Zini: AI and privacy links

Mon, 17/02/2020 - 12:00am
  • Norman by MIT Media Lab (ai; archive.org; 2020-02-17): World's first psychopath AI.
  • Machine Learning Captcha (ai, comics; archive.org; 2020-02-17)
  • Amazon's Rekognition shows its true colors (ai, consent, privacy; archive.org; 2020-02-17): Mix together a bit of freely accessible facial recognition software and a free live stream of the public space, and what do you get? A powerful stalker tool.
  • Self Driving (ai, comics; archive.org; 2020-02-17): So much of "AI" is just figuring out ways to offload work onto random strangers.
  • Information flow reveals prediction limits in online social activity (privacy; archive.org; 2020-02-17): Bagrow et al., arXiv 2017. If I know your friends, then I know a lot about you! Suppose you don’t personally use a given app/serv…
  • The NSA’s SKYNET program may be killing thousands of innocent people (ai, politics; archive.org; 2020-02-17): «In 2014, the former director of both the CIA and NSA proclaimed that "we kill people based on metadata." Now, a new examination of previously published Snowden documents suggests that many of those people may have been innocent.»
  • What reporter Will Ockenden's metadata reveals about his life (privacy; archive.org; 2020-02-17): We published ABC reporter Will Ockenden's metadata in full and asked you to analyse it. Here's what you got right - and wrong.
  • Behind the One-Way Mirror: A Deep Dive Into the Technology of Corporate Surveillance (privacy; archive.org; 2020-02-17): It's time to shed light on the technical methods and business practices behind third-party tracking. For journalists, policy makers, and concerned consumers, this paper will demystify the fundamentals of third-party tracking, explain the scope of the problem, and suggest ways for users and legislation to fight back against the status quo.

Ben Armstrong: Introducing Dronefly, a Discord bot for naturalists

Sun, 16/02/2020 - 5:51pm

In the past few years, since first leaving Debian as a free software developer in 2016, I’ve taken up some new hobbies, or more accurately, renewed my interest in some old ones.

Screenshot from Dronefly bot tutorial

During that hiatus, I also quietly un-retired from Debian, anticipating there would be some way to contribute to the project in these new areas of interest. That’s still an idea looking for the right opportunity to present itself, not to mention the available time to get involved again.

With age comes an increasing clamor of complaints from your body when you have a sedentary job in front of a screen, and hobbies that rarely take you away from it. You can’t just plunk down in front of a screen and do computer stuff non-stop & just bounce back again at the start of each new day. So in the past several years, getting outside more started to improve my well-being and address those complaints. That revived an old interest in me: nature photography. That, in turn, landed me at iNaturalist, re-ignited my childhood love of learning about the natural world, & hooked me on a regular habit of making observations & uploading them to iNat ever since.

Second, back in the late nineties, I wrote a little library loans renewal reminder project in Python. Python was a pleasure to work with, but that project never took off and soon was forgotten. Now once again, decades later, Python is a delight to be writing in, with its focus on writing readable code & backed by a strong culture of education.

Where Python came to bear on this new hobby was when the naturalists on the iNaturalist Discord server became a part of my life. Last spring, I stumbled upon this group & started hanging out. On this platform, we share what we are finding, we talk about those findings, and we challenge each other to get better at it. It wasn’t long before the idea to write some code to access the iNaturalist platform directly from our conversations started to take shape.

Now, ideally, what happened next would have been for an open platform, but this is where the community is. In many ways, too, other chat platforms (like irc) are not as capable as Discord of supporting the image-rich chat experience we enjoy. Thus, it seemed that's where the code had to be. Dronefly, an open source Python bot for naturalists built on the Red DiscordBot framework, was born in the summer of 2019.

Dronefly is still alpha stage software, but in the short space of six months, has grown to roughly 3k lines of code and is used by hundreds of users across 9 different Discord servers. It includes some innovative features requested by our users like the related command to discover the nearest common ancestor of one or more named taxa, and the map command to easily access a range map on the platform for all the named taxa. So far as I know, no equivalent features exist yet on the iNat website or apps for mobile. Commands like these put iNat data directly at users’ fingertips in chat, improving understanding of the material with minimal interruption to the flow of conversation.

This tutorial gives an overview of Dronefly’s features. If you’re intrigued, please look me up on the iNaturalist Discord server following the invite from the tutorial. You can try out the bot there, and I’d be happy to talk to you about our work. Even if this is not your thing, do have a look at iNaturalist itself. Perhaps, like me, you’ll find in this platform a fun, rewarding, & socially significant outlet that gets you outside more, with all the benefits that go along with that.

That’s what has been keeping me busy lately. I hope all my Debian friends are well & finding joy in what you’re doing. Keep up the good work!

Russell Coker: DisplayPort and 4K

Sun, 16/02/2020 - 12:00am
The Problem

Video playback looks better with a higher scan rate. A lot of content that was designed for TV (EG almost all historical documentaries) is going to be 25Hz interlaced (UK and Australia) or 30Hz interlaced (US). If you view that on a low refresh rate progressive scan display (EG a modern display at 30Hz) then my observation is that it looks a bit strange. Things that move seem to jump a bit and it’s distracting.

Getting HDMI to work with 4K resolution at a refresh rate higher than 30Hz seems difficult.

What HDMI Can Do

According to the HDMI Wikipedia page [1], HDMI 1.3–1.4b (introduced in June 2006) supports 30Hz refresh at 4K resolution, and if you use 4:2:0 Chroma Subsampling (see the Chroma Subsampling Wikipedia page [2]) you can do 60Hz or 75Hz on HDMI 1.3–1.4b. Basically for colour 4:2:0 means half the horizontal and half the vertical resolution while giving the same resolution for monochrome. For video that apparently works well (4:2:0 is standard for Blu-ray) and for games it might be OK, but for text (my primary use of computers) it would suck.

So I need support for HDMI 2.0 (introduced in September 2013) on the video card and monitor to do 4K at 60Hz. Apparently none of the combinations of video card and HDMI cable I use for Linux support that.
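To make the bandwidth arithmetic concrete, here is a rough back-of-the-envelope sketch (the payload figures are nominal, roughly 8.16 Gbit/s for HDMI 1.3/1.4 and 14.4 Gbit/s for HDMI 2.0 after 8b/10b coding, and blanking intervals are ignored, so treat it as an illustration rather than a cable spec):

/* Rough illustration: uncompressed data rates for 8-bit 4K video at
 * 60Hz with 4:4:4 vs 4:2:0 chroma subsampling. Blanking overhead is
 * ignored; the HDMI payload numbers are nominal. */
#include <stdio.h>

int main(void)
{
    const double width = 3840, height = 2160, fps = 60;
    const double bpp_444 = 24;  /* 8 bits x 3 full-resolution channels  */
    const double bpp_420 = 12;  /* full-res luma, chroma at 1/4 the res */

    double pixels_per_s = width * height * fps;

    printf("4:4:4: %.1f Gbit/s\n", pixels_per_s * bpp_444 / 1e9);  /* ~11.9 */
    printf("4:2:0: %.1f Gbit/s\n", pixels_per_s * bpp_420 / 1e9);  /* ~6.0  */
    printf("HDMI 1.3/1.4 payload ~8.2 Gbit/s, HDMI 2.0 ~14.4 Gbit/s\n");
    return 0;
}

So 4K at 60Hz with full 4:4:4 colour only fits within the HDMI 2.0 budget, while 4:2:0 squeezes under the older limit, which matches the Wikipedia figures above.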

HDMI Cables

The Wikipedia page alleges that you need either a “Premium High Speed HDMI Cable” or an “Ultra High Speed HDMI Cable” for 4K resolution at 60Hz refresh rate. My problems probably aren’t related to the cable as my testing has shown that a cheap “High Speed HDMI Cable” can work at 60Hz with 4K resolution with the right combination of video card, monitor, and drivers. A Windows 10 system I maintain has a Samsung 4K monitor and an NVidia GT630 video card running 4K resolution at 60Hz (according to Windows). The NVidia GT630 card is one that I tried on two Linux systems at 4K resolution and it caused random system crashes on both; it seems like a nice card for Windows but not for Linux.

Apparently the HDMI devices test the cable quality and use whatever speed seems to work (the cable isn’t identified to the devices). The prices at a local store are $3.98 for “high speed”, $19.88 for “premium high speed”, and $39.78 for “ultra high speed”. It seems that trying a “high speed” cable first before buying an expensive cable would make sense, especially for short cables which are likely to be less susceptible to noise.

What DisplayPort Can Do

According to the DisplayPort Wikipedia page [3] versions 1.2–1.2a (introduced in January 2010) support HBR2 which on a “Standard DisplayPort Cable” (which probably means almost all DisplayPort cables that are in use nowadays) allows 60Hz and 75Hz 4K resolution.

Comparing HDMI and DisplayPort

In summary to get 4K at 60Hz you need 2010 era DisplayPort or 2013 era HDMI. Apparently some video cards that I currently run for 4K (which were all bought new within the last 2 years) are somewhere between a 2010 and 2013 level of technology.

Also my testing (and reading review sites) shows that it’s common for video cards sold in the last 5 years or so to not support HDMI resolutions above FullHD, which means they would be at most HDMI version 1.1. HDMI 1.2 was introduced in August 2005 and supports 1440p at 30Hz. PCIe was introduced in 2003 so there really shouldn’t be many PCIe video cards that don’t support HDMI 1.2. I have about 8 different PCIe video cards in my spare parts pile that don’t support HDMI resolutions higher than FullHD, so it seems that such a limitation is common.

The End Result

For my own workstation I plugged a DisplayPort cable between the monitor and video card and a Linux window appeared (from KDE I think) offering me some choices about what to do. I chose to switch to the “new monitor” on DisplayPort, which defaulted to 60Hz. After that change TV shows on Netflix and Amazon Prime both look better. So it’s a good result.

As an aside DisplayPort cables are easier to scrounge as the HDMI cables get taken by non-computer people for use with their TV.
