Planet Debian


food, consumer experience and Joshi Wadewala

Thu, 29/03/2018 - 9:29am

For a while now, I have been looking at the various ways food quality is checked by various people. The only proper or official authority is FSSAI, but according to the CAG and Quartz's own web report, FSSAI still has a long way to go.

The reason I share this is that, over the years, I have mentioned how Joshi Wadewala managed to outdo what others could also have done. But lately, it seems the staff and the owners have grown lax and arrogant about the quality of food and service they provide. For instance, FSSAI's rules state the following under labelling –

Labelling

It is mandatory that every package of food intended for sale should carry a label that bears all the information required under the FSS (Packaging and Labelling) Regulation, 2011. A food package must carry a label with the following information:

Common name of the Product.
Name and address of the product’s Manufacturer
Date of Manufacture
Ingredient List with additives
Nutrition Facts
Best before/ Expires on
Net contents in terms of weight, measure or count.
Packing codes/Batch number
Declaration regarding vegetarian or non-vegetarian
Country of origin for imported food

Also, many a time their food is either not fresh or not cooked properly. This has been happening for a couple of weeks now. I should point out that they are not the only ones, although this is a proper shop, not a pavement stall per se.

I did file my concern with FSSAI, but I highly doubt any action will be taken. Although it is a public safety and health issue, the big players are never caught, so a smallish-time operator certainly won't be.

It is also a concern because my mother has no teeth, and I was diagnosed with convulsive seizures last year, which prevented me from attending DebConf. I was in hospital for a period of 3 months.

I have stopped going to the establishment, as there are others who are better at receiving feedback and strive to be better.

Disclaimer – All the photos shared are copyright zomato.com

I also have no idea whether GST is paid or not, as you do not get any receipt for your purchases, which is one of the most basic consumer rights. They just have one slip which you get when you make your purchase and have to hand over for either take-away or getting your food.

They do have a bill book but that is for bulk purchases only.

shirishag75 https://flossexperiences.wordpress.com #planet-debian – Experiences in the community

Limit personal data exposure with Firefox containers

Wed, 28/03/2018 - 8:44pm

There was some noise recently about the massive amount of data gathered by Cambridge Analytica from Facebook users. While I don't use Facebook myself, I do use Google and other services which are known to gather a massive amount of data, and I obviously know a lot of people using those services. I also saw some posts or tweet threads about the data collection those services do.

Mozilla recently released a Firefox extension to help users confine Facebook data collection. This addon is actually based on the containers technology Mozilla has been developing for a few years. It started as an experimental feature in Nightly, then became a Test Pilot experiment, and finally evolved into a fully featured extension called Multi-Account Containers. A somewhat restricted version of this is even included directly in Firefox, but without the extension you don't get the configuration window and you need to enable it manually through about:config.
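As an aside, the built-in (extension-less) variant mentioned above is usually enabled through a pair of about:config preferences. The pref names below are an assumption based on recent Firefox releases, so verify them against your own version:

```
privacy.userContext.enabled = true
privacy.userContext.ui.enabled = true
```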

Basically, containers separate storage (cookies, site preferences, login sessions etc.) and enable a user to isolate various aspects of their online life by only staying logged in to specific websites in their respective containers. In a way it looks like having a separate Firefox profile per website, but it's a lot more usable day to day.

I use this extension massively, in order to isolate each website. I have one container for Google, one for Twitter, one for banking etc. If I used Facebook, I would have a Facebook container; if I used Gmail, I would have a Gmail container. Then, my day-to-day browsing is done using the “default” container, where I'm not logged in to any website, so tracking is minimal (I also use uBlock Origin to reduce ads and tracking).

That way, my online life is compartmentalized/containerized and Google doesn't always associate my web searches to my account (I actually usually use DuckDuckGo but sometimes I do a Google search), Twitter only knows about the tweets I read and I don't expose all my cookies to every website.

The extension and support pages are really helpful to get started, but basically:

  • you install the extension from the extension page
  • you create new containers for the various websites you want using the menu
  • when you open a new tab you can opt to open it in a selected container by long pressing on the + button
  • the current container is shown in the URL bar and with a color underline on the current tab
  • it's also optionally possible to assign a website to a container (for example, always open facebook.com in the Facebook container), which can help restrict data exposure but might prevent you from browsing that site unidentified

When you're inside the container and you want to follow a link, you can get out of the container by right clicking on the link, select “Open link in new container tab” then select “no container”. That way Facebook won't follow you on that website and you'll start fresh (after the redirection).

As far as I can tell it's not yet possible to have disposable containers (which would be trashed after you close the tab) but a feature request is open and another extension seems to exist.

In the end, and while the isolation from that extension is not perfect, I really suggest Firefox users give it a try. In my opinion it's really easy to use and really helps maintain healthy barriers around one's online presence. I don't know of an equivalent system for Chromium (or Safari) users, but if you know about one, feel free to point me to it.

A French version of this post is also available here just in case.

Yves-Alexis corsac@debian.org Corsac.net - Debian

Reproducible Builds: Weekly report #152

Tue, 27/03/2018 - 11:33pm

Here's what happened in the Reproducible Builds effort between Sunday March 18 and Saturday March 24 2018:

Packages reviewed and fixed, and bugs filed

diffoscope development

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This week, version 92 was uploaded to unstable by Chris Lamb. It included contributions already covered by posts in previous weeks as well as new ones from:

reprotest development

reprotest is our tool to build software and check it for reproducibility.

trydiffoscope development

trydiffoscope is a lightweight command-line client for the try.diffoscope.org web-based version of diffoscope.

Reviews of unreproducible packages

88 package reviews have been added, 109 have been updated and 18 have been removed in this week, adding to our knowledge about identified issues.

A random_order_in_javahelper_manifest_files toolchain issue was added by Chris Lamb and the timestamps_in_pdf_generated_by_inkscape toolchain issue was also updated with a URI to the upstream discussion.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (66)
  • Jeremy Bicha (1)
  • Michael Olbrich (1)
  • Ole Streicher (1)
  • Sebastien KALT (1)
  • Thorsten Glaser (1)
Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb & Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible.alioth.debian.org/blog/ Reproducible builds blog

Replacing a lost Yubikey

Wed, 14/03/2018 - 7:05am

Some weeks ago I lost my purse with everything in it: residency card, driving license, credit cards, cash cards, all kinds of ID cards, and last but not least my Yubikey NEO. This being Japan, I expected the purse would show up in a few days, most probably with the money gone but all the cards intact. Unfortunately, not this time. So after having finally reissued most of the cards, I also took the necessary steps concerning the Yubikey, which contained my GnuPG subkeys and was used as a second factor for several services (see here and here).

Although the GnuPG keys on the Yubikey are considered safe from extraction, I still decided to revoke them and create new subkeys. This is one of the big advantages of subkeys: one does not start from zero, but simply creates new subkeys instead of running around trying to collect signatures again.

Another thing that has to be done is removing the old Yubikey from all the services where it was used as a second factor. In my case that was quite a lot (Google, GitHub, Dropbox, NextCloud, WordPress, …). BTW, you have a set of backup keys saved somewhere for all the services you are using, right? It helps a lot when getting back into the systems.

GnuPG keys renewal

To remind myself of what is necessary, here are the steps:

  • Get your master key from the backup USB stick
  • revoke the three subkeys that are on the Yubikey
  • create new subkeys
  • install the new subkeys onto a new Yubikey, update keyservers

All of that is quite straightforward: use gpg --expert --edit-key YOUR_KEY_ID, then select a subkey with key N, followed by revkey. You can select all three subkeys and revoke them at the same time: just type key N for each of the subkeys (where N is the zero-based index of the subkey).

Next, create new subkeys; here you can follow the steps laid out in the original blog. In the same way you can move them to a new Yubikey NEO (good that I bought three of them back then!).

Last but not least, you have to update the keyservers with your new public key, which is normally done with gpg --send-keys (again, see the original blog).

The trickiest part was setting up and distributing the keys on my various computers: the master key remains, as usual, on offline media only. On my main desktop at home I have the subkeys available, while on my laptop I only have stubs pointing at the Yubikey. This needs a bit of shuffling around, but should be obvious when looking at the previous blogs.

Full disk encryption

I had my Yubikey also registered as unlock device for the LUKS based full disk encryption. The status before the update was as follows:

$ cryptsetup luksDump /dev/sdaN
Version:       	1
Cipher name:   	aes
....
Key Slot 0: ENABLED
...
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: ENABLED
...

I suspected that the slot for the old Yubikey was slot 7, but I wasn't sure. So I first registered the new Yubikey in slot 6 with

yubikey-luks-enroll -s 6 -d /dev/sdaN

and checked that I can unlock during boot using the new Yubikey. Then I cleared the slot information in slot 7 with

cryptsetup luksKillSlot /dev/sdaN 7

and again made sure that I can boot using my passphrase (in slot 0) and the new Yubikey (in slot 6).

TOTP/U2F second factor authentication

The last step was re-registering the new Yubikey with all the favourite services as a second factor, removing the old key along the way. In my case the list comprises several WordPress sites, GitHub, Google, NextCloud, Dropbox and whatever else I have forgotten.

Although this is nearly the worst-case scenario (OK, the main key was not compromised!), everything went very smoothly and easily, to my big surprise. Even my Debian upload ability was not interrupted considerably. All in all, it shows that keeping subkeys on a Yubikey is a very useful and effective solution.

Norbert Preining https://www.preining.info/blog There and back again

Playing with water

Wed, 14/03/2018 - 5:00am

I'm currently taking a machine learning class and although it is an insane amount of work, I like it a lot. I had initially planned to use R to play around with the database I have, but the teacher recommended I use H2o, a FOSS machine learning framework.

I was a bit sceptical at first since I'm already pretty good with R, but then I found out you could simply import H2o as an R library. H2o replaces most R functions with its own parallelized ones to cut down on processing time (no more doParallel calls) and uses an "external" server you have to run on the side instead of running R calls directly.

I was pretty happy with this situation, that is until I actually started using H2o in R. With the huge database I'm playing with, the library felt clunky and I had a hard time doing anything useful. Most of the time, I just ended up with long Java traceback calls. Much love.

I'm sure in the right hands using H2o as a library could have been incredibly powerful, but sadly it seems I haven't earned my black belt in R-fu yet.

I was pissed for at least a whole day - not being able to achieve what I wanted to do - until I realised H2o comes with a WebUI called Flow. I'm normally not very fond of using web thingies to do important work like writing code, but Flow is simply incredible.

Automated graphing functions, an integrated ETA when running resource-intensive models, descriptions for each and every model parameter (the parameters are even divided into sections based on your familiarity with the statistical model in question): Flow seemingly has it all. In no time I was able to run 3 basic machine learning models and get actual interpretable results.

So yeah, if you've been itching to analyse very large databases using state of the art machine learning models, I would recommend using H2o. Try Flow at first instead of the Python or R hooks to see what it's capable of doing.

The only downside to all of this is that H2o is written in Java and depends on Java 1.7 to run... That, and be warned: it requires a metric fuckton of processing power and RAM. My poor server struggled quite a bit even with 10 available cores and 10 GB of RAM...

Louis-Philippe Véronneau https://veronneau.org/ Louis-Philippe Véronneau

Reproducible Builds: Weekly report #149

Wed, 07/03/2018 - 4:21am

Here's what happened in the Reproducible Builds effort between Sunday February 25 and Saturday March 3 2018:

diffoscope development

Version 91 was uploaded to unstable by Mattia Rizzolo. It included contributions already covered by posts of the previous weeks as well as new ones from:

In addition, Juliana — our Outreachy intern — continued her work on parallel processing; the above work is part of it.

reproducible-website development

Packages reviewed and fixed, and bugs filed

An issue with the pydoctor documentation generator was merged upstream.

Reviews of unreproducible packages

73 package reviews have been added, 37 have been updated and 26 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (46)
  • Jeremy Bicha (4)
Misc.

This week's edition was written by Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible.alioth.debian.org/blog/ Reproducible builds blog

Skellam distribution likelihood

Tue, 06/03/2018 - 10:37pm

I wondered if it was possible to make a ranking system based on the Skellam distribution, taking point spread as the only input; the first step is figuring out what the likelihood looks like, so here's an example for k=4 (i.e., one team beat the other by four goals):

It's pretty, but unfortunately, it shows that the most likely combination is µ1 = 0 and µ2 = 4, which isn't really that realistic. I don't know what I expected, though :-)

Perhaps it's different when we start summing many of them (more games, more teams), but then the dimensionality becomes too high to plot. If nothing else, it shows that the problem is hard to solve symbolically by looking for derivatives, as the extreme point is on an edge, not on a hill.
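Out of curiosity, the likelihood can also be checked numerically without plotting. Below is a small sketch (pure standard library; the function name is mine, not from the post) that evaluates the Skellam pmf P(X1 − X2 = k) for independent Poisson goal counts by direct summation, and confirms that for a single observation k = 4 the maximum over a grid of (µ1, µ2) lies on the edge, with one rate at zero and the other at 4 — matching the post's observation, up to which team is labelled first.

```python
import math

def skellam_pmf(k: int, mu1: float, mu2: float, terms: int = 60) -> float:
    """P(X1 - X2 = k) for independent X1 ~ Pois(mu1), X2 ~ Pois(mu2)."""
    if k < 0:                       # symmetry: swap the two teams
        k, mu1, mu2 = -k, mu2, mu1
    # n-th term of the convolution: e^{-(mu1+mu2)} mu1^{n+k}/(n+k)! * mu2^n/n!
    term = math.exp(-(mu1 + mu2)) * mu1 ** k / math.factorial(k)
    total = term
    for n in range(1, terms):
        term *= mu1 * mu2 / (n * (n + k))   # ratio of consecutive terms
        total += term
    return total

# Likelihood surface for a single observed margin k = 4
grid = [i * 0.5 for i in range(17)]         # mu values 0.0, 0.5, ..., 8.0
best = max(((m1, m2) for m1 in grid for m2 in grid),
           key=lambda p: skellam_pmf(4, p[0], p[1]))
print(best)  # -> (4.0, 0.0): the maximum sits on an edge, not on a hill
```

Changing k or the grid spacing is an easy way to explore the surface further.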

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

Debian Bug Squashing Party in Tirana

Tue, 06/03/2018 - 10:15pm

On 3 March I attended a Debian Bug Squashing Party in Tirana, organized by Anisa and friends at Open Labs Albania, together with Daniel. Debian is the second oldest GNU/Linux distribution still active and a launchpad for so many others.

A large number of participants from Kosovo took part, mostly female students. I chose to focus on adding Kosovo to country lists in Debian by verifying that Kosovo was missing and then filing bug reports or, even better, submitting pull requests.

apt-cache rdepends iso-codes returns a list of packages that include ISO codes. However, this proved hard to examine by simply looking at these applications in Debian; one would have to search through their code to find out how the ISO 3166 codes are used. So I left that for another time.

I moved on to something I thought I would be able to complete within the event. Coding is becoming quite popular with children in Kosovo. I looked into MIT’s Scratch and Google’s Blockly, the second one being freer software and targeting younger children. They both work by snapping together logical building blocks into a program.

Translation of Blockly into Albanian is now complete and hopefully will get much use. You can improve on my work at Translatewiki.

Thanks for all the fish, and see you at the next Debian BSP.

Arianit https://arianit2.wordpress.com debian – Arianit's Blog

Emacs #2: Introducing org-mode

Wed, 28/02/2018 - 11:09pm

In my first post in my series on Emacs, I described returning to Emacs after over a decade of vim, and org-mode being the reason why.

I really am astounded at the usefulness, and simplicity, of org-mode. It is really a killer app.

So what exactly is org-mode?

I wrote yesterday:

It’s an information organization platform. Its website says “Your life in plain text: Org mode is for keeping notes, maintaining TODO lists, planning projects, and authoring documents with a fast and effective plain-text system.”

That’s true, but doesn’t quite capture it. org-mode is a toolkit for you to organize things. It has reasonable out-of-the-box defaults, but it’s designed throughout for you to customize.

To highlight a few things:

  • Maintaining TODO lists: items can be scattered across org-mode files, contain attachments, have tags, deadlines, schedules. There is a convenient “agenda” view to show you what needs to be done. Items can repeat.
  • Authoring documents: org-mode has special features for generating HTML, LaTeX, slides (with LaTeX beamer), and all sorts of other formats. It also supports direct evaluation of code in-buffer and literate programming in virtually any Emacs-supported language. If you want to bend your mind on this stuff, read this article on literate devops. The entire Worg website
    is made with org-mode.
  • Keeping notes: yep, it can do that too. With full-text search, cross-referencing by file (as a wiki), by UUID, and even into other systems (into mu4e by Message-ID, into ERC logs, etc, etc.)

Getting started

I highly recommend watching Carsten Dominik’s excellent Google Talk on org-mode. It is an excellent introduction.

org-mode is included with Emacs, but you’ll often want a more recent version. Debian users can apt-get install org-mode, or it comes with the Emacs packaging system; M-x package-install RET org-mode RET may do it for you.

Now, you’ll probably want to start with the org-mode compact guide’s introduction section, noting in particular to set the keybindings mentioned in the activation section.

A good tutorial…

I’ve linked to a number of excellent tutorials and introductory items; this post is not going to serve as a tutorial. There are two good videos linked at the end of this post, in particular.

Some of my configuration

I’ll document some of my configuration here, and go into a bit of what it does. This isn’t necessarily because you’ll want to copy all of this verbatim — but just to give you a bit of an idea of some of what can be configured, an idea of what to look up in the manual, and maybe a reference for “now how do I do that?”

First, I set up Emacs to work in UTF-8 by default.

(prefer-coding-system 'utf-8)
(set-language-environment "UTF-8")

org-mode can follow URLs. By default, it opens in Firefox, but I use Chromium.

(setq browse-url-browser-function 'browse-url-chromium)

I set the basic key bindings as documented in the Guide, plus configure the M-RET behavior.

(global-set-key "\C-cl" 'org-store-link)
(global-set-key "\C-ca" 'org-agenda)
(global-set-key "\C-cc" 'org-capture)
(global-set-key "\C-cb" 'org-iswitchb)

(setq org-M-RET-may-split-line nil)

Configuration: Capturing

I can press C-c c from anywhere in Emacs. It will capture something for me, and include a link back to whatever I was working on.

You can define capture templates to set how this will work. I am going to keep two journal files for general notes about meetings, phone calls, etc. One for personal, one for work items. If I press C-c c j, then it will capture a personal item. The %a in all of these includes the link to where I was (or a link I had stored with C-c l).

(setq org-default-notes-file "~/org/tasks.org")
(setq org-capture-templates
      '(("t" "Todo" entry (file+headline "inbox.org" "Tasks")
         "* TODO %?\n %i\n %u\n %a")
        ("n" "Note/Data" entry (file+headline "inbox.org" "Notes/Data")
         "* %? \n %i\n %u\n %a")
        ("j" "Journal" entry (file+datetree "~/org/journal.org")
         "* %?\nEntered on %U\n %i\n %a")
        ("J" "Work-Journal" entry (file+datetree "~/org/wjournal.org")
         "* %?\nEntered on %U\n %i\n %a")))
(setq org-irc-link-to-logs t)

I like to link by UUIDs, which lets me move things between files without breaking locations. This helps generate UUIDs when I ask Org to store a link target for future insertion.


(require 'org-id)
(setq org-id-link-to-org-use-id 'create-if-interactive)

Configuration: agenda views

I like my week to start on a Sunday, and for org to note the time when I mark something as done.


(setq org-log-done 'time)
(setq org-agenda-start-on-weekday 0)

Configuration: files and refiling

Here I tell it what files to use in the agenda, and to add a few more to the plain text search. I like to keep a general inbox (from which I can move, or “refile”, content), and then separate tasks, journal, and knowledge base for personal and work items.

(setq org-agenda-files (list "~/org/inbox.org"
                             "~/org/email.org"
                             "~/org/tasks.org"
                             "~/org/wtasks.org"
                             "~/org/journal.org"
                             "~/org/wjournal.org"
                             "~/org/kb.org"
                             "~/org/wkb.org"))
(setq org-agenda-text-search-extra-files
      (list "~/org/someday.org"
            "~/org/config.org"))
(setq org-refile-targets '((nil :maxlevel . 2)
                           (org-agenda-files :maxlevel . 2)
                           ("~/org/someday.org" :maxlevel . 2)
                           ("~/org/templates.org" :maxlevel . 2)))
(setq org-outline-path-complete-in-steps nil) ; Refile in a single go
(setq org-refile-use-outline-path 'file)

Configuration: Appearance

I like a pretty screen. After you’ve gotten used to org a bit, you might try this.

(require 'org-bullets)
(add-hook 'org-mode-hook (lambda () (org-bullets-mode t)))
(setq org-ellipsis "⤵")

Coming up next…

This hopefully showed a few things that org-mode can do. Coming up next, I’ll cover how to customize TODO keywords and tags, archiving old tasks, forwarding emails to org-mode, and using git to synchronize between machines.

You can also see a list of all articles in this series.

Resources to accompany this article

John Goerzen http://changelog.complete.org The Changelog

#17: Dependencies.

Wed, 28/02/2018 - 10:45pm

Dependencies are invitations for other people to break your package.
-- Josh Ulrich, private communication

Welcome to the seventeenth post in the relentlessly random R ravings series of posts, or R4 for short.

Dependencies. A truly loaded topic.

As R users, we are spoiled. Early in the history of R, Kurt Hornik and Friedrich Leisch built support for packages right into R, and started the Comprehensive R Archive Network (CRAN). And R and CRAN have had a fantastic run. Roughly twenty years later, we are looking at over 12,000 packages which can (generally) be installed with absolute ease and no surprises. No other (relevant) open source language has anything of comparable rigour and quality. This is a big deal.

And coding practices evolved and changed to play to this advantage. Packages are a near-unanimous recommendation, use of the install.packages() and update.packages() tooling is nearly universal, and most R users learned to their advantage to group code into interdependent packages. Obvious advantages are versioning and snap-shotting, attached documentation in the form of help pages and vignettes, unit testing, and of course continuous integration as a side effect of the package build system.

But the notion of 'oh, let me just build another package and add it to the pool of packages' can get carried away. A recent example I had was the work on the prrd package for parallel recursive dependency testing --- coincidentally, created entirely to allow for easier voluntary tests I do on reverse dependencies of the packages I maintain. It uses a job queue, for which I relied on the liteq package by Gabor, which does the job: enqueue jobs, reliably dequeue them (also in a parallel fashion), and more. It looks light enough:

R> tools::package_dependencies(package="liteq", recursive=FALSE, db=AP)$liteq
[1] "assertthat" "DBI"        "rappdirs"   "RSQLite"
R>

Two dependencies because it uses an internal SQLite database, one for internal tooling and one for configuration.

All good then? Not so fast. The devil here is the very innocuous and versatile RSQLite package because when we look at fully recursive dependencies all hell breaks loose:

R> tools::package_dependencies(package="liteq", recursive=TRUE, db=AP)$liteq
 [1] "assertthat" "DBI"        "rappdirs"   "RSQLite"    "tools"
 [6] "methods"    "bit64"      "blob"       "memoise"    "pkgconfig"
[11] "Rcpp"       "BH"         "plogr"      "bit"        "utils"
[16] "stats"      "tibble"     "digest"     "cli"        "crayon"
[21] "pillar"     "rlang"      "grDevices"  "utf8"
R>
R> tools::package_dependencies(package="RSQLite", recursive=TRUE, db=AP)$RSQLite
 [1] "bit64"      "blob"       "DBI"        "memoise"    "methods"
 [6] "pkgconfig"  "Rcpp"       "BH"         "plogr"      "bit"
[11] "utils"      "stats"      "tibble"     "digest"     "cli"
[16] "crayon"     "pillar"     "rlang"      "assertthat" "grDevices"
[21] "utf8"       "tools"
R>

Now we went from four to twenty-four, due to the twenty-two dependencies pulled in by RSQLite.

There, my dear friend, lies madness. The moment one of these packages breaks, we get potential side effects. And this is no laughing matter. Here is a tweet from Kieran, posted days before a book deadline of his, when he was forced to roll a CRAN package back because an update broke his entire setup. (The original tweet has by now been deleted; why people do that to their entire tweet histories is something I fail to comprehend too; in any case, the screenshot is from a private discussion I had with a few like-minded folks over Slack.)

That illustrates the quote by Josh at the top. As I too have "production code" (well, CRANberries for one relies on it), I was interested to see if we could easily amend RSQLite. And yes, we can. A quick fork and a few commits later, we have something we could call 'RSQLighter', as it reduces the dependencies quite a bit:

R> IP <- installed.packages()   # using my installed mod'ed version
R> tools::package_dependencies(package="RSQLite", recursive=TRUE, db=IP)$RSQLite
 [1] "bit64"     "DBI"       "methods"   "Rcpp"      "BH"        "bit"
 [7] "utils"     "stats"     "grDevices" "graphics"
R>

That is less than half. I have not proceeded with the fork because I do not believe in needlessly splitting codebases. But this could be a viable candidate for an alternate or shadow repository with more minimal and hence more robust dependencies. Or, as Josh calls it, the tinyverse.

Another maddening aspect of dependencies is the ruthless application of what we could jokingly call Metcalfe's Law: the likelihood of breakage of course increases with the number of edges in the dependency graph. A nice illustration is this post by Jenny trying to rationalize why one of the 87 (as of today) tidyverse packages now has the state "ORPHANED" at CRAN:

An invitation for other people to break your code. Well put indeed. Or to put rocks up your path.

But things are not all that dire. Most folks appear to understand the issue, some even do something about it. The DBI and RMySQL packages have saner strict dependencies, maybe one day things will improve for RMariaDB and RSQLite too:

R> tools::package_dependencies(package=c("DBI", "RMySQL", "RMariaDB"), recursive=TRUE, db=AP)
$DBI
[1] "methods"

$RMySQL
[1] "DBI"     "methods"

$RMariaDB
 [1] "bit64"     "DBI"       "hms"       "methods"   "Rcpp"      "BH"
 [7] "plogr"     "bit"       "utils"     "stats"     "pkgconfig" "rlang"

R>

And to be clear, I do not believe in giving up and using everything via Docker, or virtualenvs, or packrat, or ... A well-honed dependency system is wonderful and the right resource for getting code deployed and updated. But it requires buy-in from everyone involved, and an understanding of the possible trade-offs. I think we can, and will, do better going forward.

Or else, there will always be the tinyverse ...

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Free software activities in February 2018

Wed, 28/02/2018 - 7:36pm

Here is my monthly update covering what I have been doing in the free software world in February 2018 (previous month):

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
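The promise of "identical results from a given source" is easy to state in code. Here is a minimal, self-contained sketch (the toy build functions are illustrative only, not part of any Reproducible Builds tooling): a deterministic build hashes identically on every rebuild, while one that embeds a timestamp cannot be independently verified.

```python
import hashlib
import time

SRC = b"int main(void) { return 0; }"

def build_deterministic(src: bytes) -> bytes:
    # Output depends only on the input: rebuilds are bit-for-bit identical.
    return b"compiled:" + src

def build_nondeterministic(src: bytes) -> bytes:
    # Embeds the build time, so independent rebuilds generally differ.
    return b"compiled:" + src + str(time.time_ns()).encode()

h1 = hashlib.sha256(build_deterministic(SRC)).hexdigest()
h2 = hashlib.sha256(build_deterministic(SRC)).hexdigest()
print(h1 == h2)  # True: multiple third parties can agree on the same checksum
```

Comparing such checksums across independent rebuilds is exactly how a consensus on whether a build was compromised becomes possible.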

This month I:



I also made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues:

  • Add support for comparing Berkeley DB files. (Unfortunately this is currently incomplete because the libraries do not report metadata reliably!) (#890528)
  • Add support for comparing "XMLBeans" binary schemas. [...]
  • Drop spurious debugging code in Android tests. [...]


Debian

My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list.

Patches contributed
  • debian-policy: Replace dh_systemd_install with dh_installsystemd. (#889167)
  • juce: Missing build-depends on graphviz. (#890035)
  • roffit: debian/rules does not override targets as intended. (#889975)
  • bugs.debian.org: Please add rel="canonical" to bug pages. (#890338)
Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:

Uploads
  • redis:
    • 4.0.8-1 — New upstream release and fix a potential hardlink vulnerability.
    • 4.0.8-2 — Also listen on ::1 (IPv6) by default. (#891432)
  • python-django:
    • 1.11.10-1 — New upstream security release.
    • 2.0.2-1 — New upstream security release.
  • redisearch:
    • 1.0.6-1 — New upstream release.
    • 1.0.7-1 — New upstream release & add Lintian overrides for package-does-not-install-examples.
    • 1.0.8-1 — New upstream release, which includes my reproducibility-related improvement.
  • adminer:
    • 4.6.1-1 — New upstream release and override debian-watch-does-not-check-gpg-signature as upstream do not release signatures.
    • 4.6.2-1 — New upstream release.
  • process-cpp:
    • 3.0.1-3 — Make the documentation reproducible.
    • 3.0.1-4 — Correct Vcs-Bzr to Vcs-Git.
  • sleekxmpp (1.3.3-3) — Make the build reproducible. (#890193)
  • python-redis (2.10.6-2) — Correct autopkgtest dependencies and misc packaging updates.
  • bfs (1.2.1-1) — New upstream release.

I also made misc packaging updates for docbook-to-man (1:2.0.0-41), gunicorn (19.7.1-4), installation-birthday (8) & python-daiquiri (1.3.0-3).

Finally, I performed the following sponsored uploads: check-manifest (0.36-2), django-ipware (2.0.1-1), nose2 (0.7.3-3) & python-keyczar (0.716+ds-2).

Debian bugs filed
  • zsh: Please make apt install completion work on "local" files. (#891140)
  • git-gui: Ignores git hooks. (#891552)
  • python-coverage:
    • Installs pyfile.html into wrong directory breaking HTML report generation. (#890560)
    • Document copyright information for bundled JavaScript source. (#890578)
FTP Team

As a Debian FTP assistant I ACCEPTed 123 packages: apticron, aseba, atf-allwinner, bart-view, binutils, browserpass, bulk-media-downloader, ceph-deploy, colmap, core-specs-alpha-clojure, ctdconverter, debos, designate, editorconfig-core-py, essays1743, fis-gtm, flameshot, flex, fontmake, fonts-league-spartan, fonts-ubuntu, gcc-8, getdns, glyphslib, gnome-keyring, gnome-themes-extra, gnome-usage, golang-github-containerd-cgroups, golang-github-go-debos-fakemachine, golang-github-mattn-go-zglob, haskell-regex-tdfa-text, https-everywhere, ibm-3270, ignition-fuel-tools, impass, inetsim, jboss-bridger, jboss-threads, jsonrpc-glib, knot-resolver, libctl, liblouisutdml, libopenraw, libosmo-sccp, libtest-postgresql-perl, libtickit, linux, live-tasks, minidb, mithril, mutter, neuron, node-acorn-object-spread, node-babel, node-call-limit, node-color, node-colormin, node-console-group, node-consolidate, node-cosmiconfig, node-css-color-names, node-date-time, node-err-code, node-gulp-load-plugins, node-html-comment-regex, node-icss-utils, node-is-directory, node-mdn-data, node-mississippi, node-mutate-fs, node-node-localstorage, node-normalize-range, node-postcss-filter-plugins, node-postcss-load-options, node-postcss-load-plugins, node-postcss-minify-font-values, node-promise-retry, node-promzard, node-require-from-string, node-rollup, node-rollup-plugin-buble, node-ssri, node-validate-npm-package-name, node-vue-resource, ntpsec, nvidia-cuda-toolkit, nyx, pipsi, plasma-discover, pokemmo, pokemmo-installer, polymake, privacybadger, proxy-switcher, psautohint, purple-discord, pytest-astropy, pytest-doctestplus, pytest-openfiles, python-aiomeasures, python-coverage, python-fitbit, python-molotov, python-networkmanager, python-os-service-types, python-pluggy, python-stringtemplate3, python3-antlr3, qpack, quintuple, r-cran-animation, r-cran-clustergeneration, r-cran-phytools, re2, sat-templates, sfnt2woff-zopfli, sndio, thunar, uhd, undertime, usbauth-notifier, vmdb2 & xymonq.

I additionally filed 15 RC bugs against packages that had incomplete debian/copyright files against: browserpass, designate, fis-gtm, flex, gnome-keyring, ibm-3270, knot-resolver, libopenraw, libtest-postgresql-perl, mithril, mutter, ntpsec, plasma-discover, pytest-arraydiff & r-cran-animation.

Chris Lamb https://chris-lamb.co.uk/blog/category/planet-debian lamby: Items or syndication on Planet Debian.

Things that really matter

Wed, 28/02/2018 - 5:34pm


gwolf http://gwolf.org Gunnar Wolf

Deploying a (simple) docker container system

Wed, 28/02/2018 - 9:54am

When you need a small platform for shipping containers — nothing like Kubernetes or similar, there are a couple of common things you might want to deploy first.

The usual things I have to roll out every time I deploy such a platform:

Bootstrapping docker and docker-compose

Most services are built from multiple containers. A useful tool for this is docker-compose, which lets you describe your whole 'application' in one file. So we need to deploy it alongside docker itself.
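As a minimal sketch of what such an 'application' description looks like — the service names and image tags below are placeholders of my own, not from any particular deployment:

```shell
# Write a minimal docker-compose.yml describing a two-container application:
# a web frontend and a cache. Service names and images are illustrative only.
mkdir -p /tmp/myapp && cd /tmp/myapp

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:stable
    ports:
      - "8080:80"
  cache:
    image: redis:alpine
EOF

# With docker and docker-compose installed, the whole stack would start with:
#   docker-compose up -d
```

One file then describes, starts, and tears down the whole service group instead of a series of hand-typed docker run invocations.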

Deploying Watchtower

An essential operational task is keeping your container images up to date.

Watchtower is an application that will monitor your running Docker containers and watch for changes to the images that those containers were originally started from. If watchtower detects that an image has changed, it will automatically restart the container using the new image.
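Since Watchtower talks to the Docker daemon, it needs the Docker socket mounted into its own container. A sketch of how it is typically started — the image name (containrrr/watchtower) is an assumption on my part, so check the project's documentation for the current one:

```shell
# Start Watchtower with access to the Docker socket so it can inspect and
# restart the other containers. Image name is an assumption, not verified here.
if command -v docker >/dev/null 2>&1; then
    docker run -d \
        --name watchtower \
        -v /var/run/docker.sock:/var/run/docker.sock \
        containrrr/watchtower || echo "docker run failed (daemon not running?)"
else
    echo "docker not installed; skipping"
fi
```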

Deploying http(s) reverse proxy Træfik

If you want to provide multiple (web) services on ports 80 and 443, you have to think about how this should be solved. Usually you would use an http(s) reverse proxy; many software implementations are available.
The challenging part in such an environment is that services may appear and disappear frequently. (Re)configuration of the proxy service is the gap that needs to be closed.

Træfik (pronounced like traffic) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease [...] to manage its configuration automatically and dynamically.

Træfik has many interesting features for example 'Let's Encrypt support (Automatic HTTPS with renewal)'.
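A sketch of running Træfik with its Docker backend, so it reconfigures itself as containers come and go. The flags and label below match the Træfik 1.x series current at the time of the post; treat the exact version tag and flag names as assumptions:

```shell
# Run Træfik 1.x with the Docker provider enabled (--docker); it watches the
# Docker socket and builds its routing table from container labels.
if command -v docker >/dev/null 2>&1; then
    docker run -d --name traefik \
        -p 80:80 -p 443:443 \
        -v /var/run/docker.sock:/var/run/docker.sock \
        traefik:1.5 --docker || echo "docker run failed (daemon not running?)"
    # A service then only needs a label to be routed, e.g.:
    #   docker run -d --label traefik.frontend.rule=Host:blog.example.org my-image
else
    echo "docker not installed; skipping"
fi
```

The point is that no central proxy configuration file needs editing when a service appears: the service itself declares its hostname via a label.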

Jan Wagner https://log.cyconet.org/ Planet - Cyconet Blog

Emacs #1: Ditching a bunch of stuff and moving to Emacs and org-mode

Tue, 27/02/2018 - 11:40pm

I’ll admit it. After over a decade of vim, I’m hooked on Emacs.

I’ve long had this frustration over how to organize things. I’ve followed approaches like GTD and ZTD, but things like email or large files are really hard to organize.

I had been using Asana for tasks, Evernote for notes, Thunderbird for email, a combination of ikiwiki and some other items for a personal knowledge base, and various files in an archive directory on my PC. When my new job added Slack to the mix, that was finally the last straw.

A lot of todo-management tools integrate with email — poorly. When you want to do something like “remind me to reply to this in a week”, a lot of times that’s impossible because the tool doesn’t store the email in a fashion you can easily reply to. And that problem is even worse with Slack.

It was right around then that I stumbled onto Carsten Dominik’s Google Talk on org-mode. Carsten was the author of org-mode, and although the talk is 10 years old, it is still highly relevant.

I’d stumbled across org-mode before, but each time I didn’t really dig in because I had the reaction of “an outliner? But I need a todo list.” Turns out I was missing out. org-mode is all that.

Just what IS Emacs? And org-mode?

Emacs grew up as a text editor. It still is, and that heritage is definitely present throughout. But to say Emacs is an editor would be rather unfair.

Emacs is something more like a platform or a toolkit. Not only do you have source code to it, but the very configuration is a program, and there are hooks all over the place. It’s as if it was super easy to write a Firefox plugin. A couple lines, and boom, behavior changed.

org-mode is very similar. Yes, it’s an outliner, but that’s not really what it is. It’s an information organization platform. Its website says “Your life in plain text: Org mode is for keeping notes, maintaining TODO lists, planning projects, and authoring documents with a fast and effective plain-text system.”

Capturing

If you’ve ever read productivity guides based on GTD, one of the things they stress is effortless capture of items. The idea is that when something pops into your head, get it down into a trusted system quickly so you can get on with what you were doing. org-mode has a capture system for just this. I can press C-c c from anywhere in Emacs, and up pops a spot to type my note. But, critically, automatically embedded in that note is a link back to what I was doing when I pressed C-c c. If I was editing a file, it’ll have a link back to that file and the line I was on. If I was viewing an email, it’ll link back to that email (by Message-Id, no less, so it finds it in any folder). Same for participating in a chat, or even viewing another org-mode entry.

So I can make a note that will remind me in a week to reply to a certain email, and when I click the link in that note, it’ll bring up the email in my mail reader — even if I subsequently archived it out of my inbox.

YES, this is what I was looking for!

The tool suite

Once you’re using org-mode, pretty soon you want to integrate everything with it. There are browser plugins for capturing things from the web. Multiple Emacs mail or news readers integrate with it. ERC (IRC client) does as well. So I found myself switching from Thunderbird and mairix+mutt (for the mail archives) to mu4e, and from xchat+slack to ERC.

And wouldn’t you know it, I liked each of those Emacs-based tools better than the standalone they replaced.

A small side tidbit: I’m using OfflineIMAP again! I even used it with GNUS way back when.

One Emacs process to rule them

I used to use Emacs extensively, way back. Back then, Emacs was a “large” program. (Now my battery status applet literally uses more RAM than Emacs). There was this problem of startup time back then, so there was a way to connect to a running Emacs process.

I like to spawn programs with Mod-p (an xmonad shortcut to a dzen menubar, but Alt-F2 in more traditional DEs would do the trick). It’s convenient to not run several emacsen with this setup, so you don’t run into issues with trying to capture to a file that’s open in another one. The solution is very simple: I created a script, named it em, and put it on my path. All it does is this:


#!/bin/bash
exec emacsclient -c -a "" "$@"

It creates a new emacs process if one doesn’t already exist; otherwise, it uses what you’ve got. A bonus here: parameters such as -nw work just fine, so it really acts just as if you’d typed emacs at the shell prompt. It’s a suitable setting for EDITOR.
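Installing the wrapper can be sketched like this — the ~/bin location is my assumption; any directory on your PATH works:

```shell
# Install the em wrapper script somewhere on PATH and make it executable.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/em" <<'EOF'
#!/bin/bash
exec emacsclient -c -a "" "$@"
EOF
chmod +x "$HOME/bin/em"

# Then, e.g. in ~/.bashrc:
#   export PATH="$HOME/bin:$PATH"
#   export EDITOR=em
```

The empty -a "" argument is what tells emacsclient to start a daemon itself if none is running, which is why a single script covers both the cold-start and the already-running cases.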

Up next…

I’ll be talking about my use of, and showing off configurations for:

  • org-mode, including syncing between computers, capturing, agenda and todos, files, linking, keywords and tags, various exporting (slideshows), etc.
  • mu4e for email, including multiple accounts, bbdb integration
  • ERC for IRC and IM

You can also see a list of all articles in this series.

John Goerzen http://changelog.complete.org The Changelog

Woman. Not in tech.

Tue, 27/02/2018 - 10:00pm

Thank you, Livia Gabos, for helping me to improve this article by giving me feedback on it.

Before I became an intern with Outreachy, my Twitter bio read: "Woman. Not in tech." Well, if you didn't get the picture, let me explain what that meant.

It all began with a simple request I received almost a year ago:

Hey, do you want to join our [company] event and give a talk about being a women in tech?

I don't have a job in the tech industry. So, yes, while society does put me in the 'woman' column, I have to admit it's a little hard to give a talk about being 'in tech' when I'm not 'in tech'.

What I can talk about, though, it's about all the women who are not in tech. The many, many friends I have who come to Women in Tech events and meetings, who reach out to me by e-mail, Twitter or even in person, who are struggling to get into tech.

I can talk about the only other girl in my class who, besides me, managed to get an internship. And how we both only got the position because we had passed a written exam about informatics, instead of going through usual channels such as referrals, CV analysis or interviews.

I can talk about the women who are seen as lazy, or as just not getting the lessons in tech courses, because they don't have the same background and the same amount of time available to study or do homework at home as their male peers do, since they have to take care of relatives, take care of children, and take care of the housework for their family, most of the time while working one or two jobs just to be able to study.

I can talk about the women and about the mothers who after many years being denied the possibility for a tech career are daring to change paths, but are denied junior positions in favor of younger men who "can be trained on the job" and have "so much more willingness to learn".

I can talk about the women who are seen as uninterested in one or more FLOSS technologies because they don't contribute to said technology, since the men in FLOSS projects have continuously failed to engage and - most importantly - keep them included (but maybe that's just because women lack role models).

Even though there are so many Women in Tech communities in Curitiba, as listed above, the all-male 'core team' of the local Debian community itself couldn't find a single woman to work with them for the DebConf proposal. Go figure.

I can talk about the many women I met not at tech conferences, but at teachers' conferences, who have way more experience with computers and programming than I do. Women who after years working in the field have given up IT to become teachers, not because it was their lifelong dream, but because they didn't feel comfortable and well-integrated in a male-dominated, full-of-misogynistic-ideals tech industry. Because it was - and is - almost impossible for them to break the glass ceiling.

I can even talk about all the women who are lesbians that a certain community of Women In Tech could not find when they wanted someone to write an article about 'being homosexual in tech' to be published right on Brazil's Lesbian Visibility Day, so they had to go and ask a gay man to talk about his own experience. Well, it seems like those women aren't "in tech" either.

Tokenization can be especially apparent when the lone person in a minority group is not only asked to speak for the group, but is consistently asked to speak about being a member of that group. (Geek Feminism, Tokenism)

The thing is, a lot of people don't want to hear any of those stories. Companies in particular only want token women from outside the company (because, let's face it, most tech companies can't find the talent within) who will come up on stage and inspire other women by saying what a great experience it is to be in tech - and that "everyone should try it too!".

I do believe all women should try and get knowledge about tech and that is what I work towards. We shouldn't have to rely only on the men in our life to get things done with our computers or our cell phones or our digital life.

But to tell other women they should get into the tech industry? I guess not.

After all, who am I to tell other women they should come to tech - and to stay in tech - when I know we are bound to face all this?

Addendum:

For Brazilian women not in tech, I'm organizing a crowdfunding campaign to get at least five of them the opportunity to attend MiniDebConf in Curitiba, Parana, in April. None of these girls can afford the trip and they don't have a company to sponsor them. If you are willing to help, please get in touch or check this link: Women in MiniDebConf.

More on the subject: Renata https://rsip22.github.io/blog/ Renata's blog

A Nice looking Blog

Thu, 22/02/2018 - 9:00pm

I stumbled across this rather nicely-formatted blog by Alex Beal and thought I'd share it. It's a particular kind of minimalist style that I like, because it puts the content first. It reminds me of Mark Pilgrim's old blog.

I can't remember which post in particular I came across first, but the one that I thought I would share was this remarkably detailed personal research project on tracking mood.

That would have been the end of it, but I then stumbled across this great review of "Type Driven Development with Idris", a book by Edwin Brady. I bought this book during the Christmas break but I haven't had much of a chance to deep dive into it yet.

jmtd http://jmtd.net/log/ Jonathan Dowland's Weblog

Dell PowerEdge T30

Thu, 22/02/2018 - 3:06pm

I just did a Debian install on a Dell PowerEdge T30 for a client. The Dell web site is a bit broken at the moment, it didn’t list the price of that server or give useful specs when I was ordering it. I was under the impression that the server was limited to 8G of RAM, that’s unusually small but it wouldn’t be the first time a vendor crippled a low end model to drive sales of more expensive systems. It turned out that the T30 model I got has 4*DDR4 sockets with only one used for an 8G DIMM. It apparently can handle up to 64G of RAM.

It has space for 4*3.5″ SATA disks but only has 4*SATA connectors on the motherboard. As I never use the DVD in a server this isn’t a problem for me, but if you want 4 disks and a DVD then you need to buy a PCI or PCIe SATA card.

Compared to the PowerEdge T130 I’m using at home the new T30 is slightly shorter and thinner while seeming to have more space inside. This is partly due to better design and partly due to having 2 hard drives in the top near the DVD drive which are a little inconvenient to get to. The T130 I have (which isn’t the latest model) has 4*3.5″ SATA drive bays at the bottom which are very convenient for swapping disks.

It has two PCIe*16 slots (one of which is apparently quad speed), one shorter PCIe slot, and a PCI slot. For a cheap server a PCI slot is a nice feature, it means I can use an old PCI Ethernet card instead of buying a PCIe Ethernet card. The T30 cost $1002 so using an old Ethernet card saved 1% of the overall cost.

The T30 seems designed to be more of a workstation or personal server than a straight server. The previous iterations of the low end tower servers from Dell didn’t have built in sound and had PCIe slots that were adequate for a RAID controller but vastly inadequate for video. This one has built in line in and out for audio and has two DisplayPort connectors on the motherboard (presumably for dual-head support). Apart from the CPU (an E3-1225 which is slower than some systems people are throwing out nowadays) the system would be a decent gaming system.

It has lots of USB ports which is handy for a file server, I can attach lots of backup devices. Also most of the ports support “super speed”, I haven’t yet tested out USB devices that support such speeds but I’m looking forward to it. It’s a pity that there are no USB-C ports.

One deficiency of the T30 is the lack of a VGA port. It has one HDMI and two DisplayPort sockets on the motherboard, this is really great for a system on or under your desk, any monitor you would want on your desk will support at least one of those interfaces. But in a server room you tend to have an old VGA monitor that’s there because no-one wants it on their desk. Not supporting VGA may force people to buy a $200 monitor for their server room. That increases the effective cost of the system by 20%. It has a PC serial port on the motherboard which is a nice server feature, but that doesn’t make up for the lack of VGA.

The BIOS configuration has an option displayed for enabling charging devices from USB sockets when a laptop is in sleep mode. It’s disappointing that they didn’t either make a BIOS build for a non-laptop or have the BIOS detect at run-time that it’s not on laptop hardware and hide that.

Conclusion

The PowerEdge T30 is a nice low-end workstation. If you want a system with ECC RAM because you need it to be reliable and you don’t need the greatest performance then it will do very well. It has Intel video on the motherboard with HDMI and DisplayPort connectors, this won’t be the fastest video but should do for most workstation tasks. It has a PCIe*16 quad speed slot in case you want to install a really fast video card. The CPU is slow by today’s standards, but Dell sells plenty of tower systems that support faster CPUs.

It’s nice that it has a serial port on the motherboard. That could be used for a serial console or could be used to talk to a UPS or other server-room equipment. But that doesn’t make up for the lack of VGA support IMHO.

One could say that a tower system is designed to be a desktop or desk-side system not run in any sort of server room. However it is cheaper than any rack mounted systems from Dell so it will be deployed in lots of small businesses that have one server for everything – I will probably install them in several other small businesses this year. Also tower servers do end up being deployed in server rooms, all it takes is a small business moving to a serviced office that has a proper server room and the old tower servers end up in a rack.

Rack vs Tower

One reason for small businesses to use tower servers when rack servers are more appropriate is the issue of noise. If your “server room” is the room that has your printer and fax then it typically won’t have a door and you just can’t have the noise of a rack mounted server in there. 1RU systems are inherently noisy because the small diameter of the fans means that they have to spin fast. 2RU systems can be made relatively quiet if you don’t have high-end CPUs but no-one seems to be trying to do that.

I think it would be nice if a company like Dell sold low-end servers in a rack mount form-factor (19 inches wide and 2RU high) that were designed to be relatively quiet. Then instead of starting with a tower server and ending up with tower systems in racks a small business could start with a 19 inch wide system on a shelf that gets bolted into a rack if they move into a better office. Any laptop CPU from the last 10 years is capable of running a file server with 8 disks in a ZFS array. Any modern laptop CPU is capable of running a file server with 8 SSDs in a ZFS array. This wouldn’t be difficult to design.

Related posts:

  1. CPL I’ve just bought an NVidia video card from Computers and...
  2. Flash Storage and Servers In the comments on my post about the Dell PowerEdge...
  3. Dell PowerEdge T105 Today I received a Dell PowerEDGE T105 for use by...
etbe https://etbe.coker.com.au etbe – Russell Coker

How to use the EventCalendar ical

Wed, 21/02/2018 - 11:49pm

Hello!

If you follow this blog, you should probably know by now that I have been working with my mentors to contribute to the MoinMoin EventCalendar macro, adding the possibility to export the events' data to an icalendar file.

The code (which can be found on this Github repository) isn't quite ready yet, because I'm still working to convert the recurrence rule to the icalendar format, but other than that, it should be working. Hopefully.

This guide assumes that you have the EventCalendar macro installed on the wiki and that the macro is called on a determined wikipage.

The icalendar file is now generated as an attachment the moment the macro is loaded. I created an "ical" link at the bottom of the calendar. When activated, this link prompts the download of the ical attachment of the page. Being an attachment, there is still the possibility to just view the ical file using the "attachment" menu if the user wishes to do so.

There are two ways of importing this calendar on Thunderbird. The first one is to download the file by clicking on the link and then proceeding to import it manually to Thunderbird.

The second option is to "Create a new calendar / On the network" and to use the URL address from the ical link as the "location", as it is shown below:

As usual, it's possible to customize the name for the calendar, the color for the events and such...

I noticed a few Wikis that use the EventCalendar, such as the Debian wiki itself and the FSFE wiki. The Python wiki also seems to be using MoinMoin and EventCalendar, but it seems that they use a Google service to export the event data to iCal.

If you read this and are willing to try the code in your wiki and give me feedback, I would really appreciate it. You can find the ways to contact me in my Debian Wiki profile.

Renata https://rsip22.github.io/blog/ Renata's blog

Getting Debian booting on a Lenovo Yoga 720

Wed, 21/02/2018 - 10:46pm

I recently got a new work laptop, a 13” Yoga 720. It proved difficult to install Debian on; pressing F12 would get a boot menu allowing me to select a USB stick I have EFI GRUB on, but after GRUB loaded the kernel and the initrd it would just sit there never outputting anything else that indicated the kernel was even starting. I found instructions about Ubuntu 17.10 which helped but weren’t the complete picture. What seems to be the situation is that the kernel won’t happily boot if “Legacy Support” is not enabled - enabling this (and still booting as EFI) results in a happier experience. However in order to be able to enable legacy boot you have to switch the SATA controller from RAID to AHCI, which can cause Windows to get unhappy about its boot device going away unless you warn it first.

  • Fire up an admin shell in Windows (right click on the start menu)
  • bcdedit /set safeboot minimal
  • Reboot into the BIOS
  • Change the SATA Controller mode from RAID to AHCI (dire warnings about “All data will be erased”. It’s not true, but you’ve backed up first, right?) Set “Boot Mode” to “Legacy Support”.
  • Save changes and let Windows boot to Safe Mode
  • Fire up an admin shell in Windows (right click on the start menu again)
  • bcdedit /deletevalue safeboot
  • Reboot again and Windows will load in normal mode with the AHCI drivers

Additionally I had problems getting the GRUB entry added to the BIOS; efibootmgr shows it fine but it never appears in the BIOS boot list. I ended up using Windows to add it as the primary boot option using the following (<guid> gets replaced with whatever the new “Debian” section guid is):

bcdedit /enum firmware
bcdedit /copy "{bootmgr}" /d "Debian"
bcdedit /set "{<guid>}" path \EFI\Debian\grubx64.efi
bcdedit /set "{fwbootmgr}" displayorder "{<guid>}" /addfirst

Even with that at one point the BIOS managed to “forget” about the GRUB entry and require me to re-do the final “displayorder” command.
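For reference, the Linux-side equivalent with efibootmgr (the tool that did show the entry fine) can be sketched like this — the disk, partition number, and loader path below are assumptions; adjust -d and -p to wherever your EFI system partition actually lives:

```shell
# Re-create the Debian boot entry from Linux with efibootmgr.
# -d/-p point at the disk and EFI system partition; -l is the loader path.
if command -v efibootmgr >/dev/null 2>&1; then
    efibootmgr -c -d /dev/nvme0n1 -p 1 \
        -L "Debian" -l '\EFI\Debian\grubx64.efi' \
        || echo "efibootmgr failed (not booted via EFI?)"
else
    echo "efibootmgr not available; skipping"
fi
```

Whether the firmware then honors the entry is, as the experience above shows, another matter entirely.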

Once you actually have the thing installed and booting it seems fine - I’m running Buster due to the fact it’s a Skylake machine with lots of bits that seem to want a newer kernel, but claimed battery life is impressive, the screen is very shiny (though sometimes a little too shiny and reflective) and the NVMe SSD seems pretty nippy as you’d expect.

Jonathan McDowell https://www.earth.li/~noodles/blog/ Noodles' Emptiness

How hard can typing æ, ø and å be?

Wed, 21/02/2018 - 5:14pm

Petter Reinholdtsen: How hard can æ, ø and å be? comments on the rubbish state of till printers and their mishandling of foreign characters.

Last week, I was trying to type an email, on a tablet, in Dutch. The tablet was running something close to Android and I was using a Bluetooth keyboard, which seemed to be configured correctly for my location in England.

Dutch doesn’t even have many accents. I wanted an e acute (é). If you use the on screen keyboard, this is actually pretty easy, just press and hold e and slide to choose the accented one… but holding e on a Bluetooth keyboard? eeeeeeeeeee!

Some guides suggest Alt and e, then e. Apparently that works, but not on keyboards set to Great British… because, I guess, we don’t want any of that foreign muck since the Brexit vote, or something(!)

Even once you figure out that madness and switch the keyboard back to international, which also enables alt i, u, n and so on to do other accents, I can’t find grave, check, breve or several other accents. I managed to send the emails in Dutch but I’d struggle with various other languages.

Have I missed a trick or what are the Android developers thinking? Why isn’t there a Compose key by default? Is there any way to get one?

mjr http://www.news.software.coop mjr – Software Cooperative News
