Bits from Debian

Planet Debian - https://planet.debian.org/
Updated: 15 hours 10 min ago

Erich Schubert: Chinese Citation Factory

19 hours 7 min ago

In February 2018, RetractionWatch published an article titled “A journal waited 13 months to reject a submission. Days later, it published a plagiarized version by different authors”, indicating that the editorial process of the journal Multimedia Tools and Applications (MTAP) may have been manipulated.

Now, more than a year later, Springer apparently has retracted additional articles from the journal, as mentioned in the blog For Better Science. On the downside, Elsevier has been publishing many of these in another journal now instead…

I am currently aware of 22 retractions associated with this incident. One would have expected to see a clear pattern in the author names, but they seem to have little in common except Chinese names and affiliations, and suspicious email addresses (also, usually only one author has an email at all). It almost appears as if the names may be made up. And these retracted papers clearly contained citation spam: they cite a particular author very often, usually in a single paragraph.

The retraction notices typically include the explanation “there is evidence suggesting authorship manipulation and an attempt to subvert the peer review process”, confirming the earlier claims by Retraction Watch.

So I used the CrossRef API to get the citations from all the articles (I tried SemanticScholar first, but for some of the retracted papers it only had the self-cite of the retraction notice), and counted the citations in these papers.

Essentially, I am counting how many citations authors lost by the retractions.
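The counting can be sketched in a few lines of Python. This is a sketch under the assumption that the DOIs of the retracted papers are known (`retracted_dois` is hypothetical); the `reference`/`author` field names follow the public CrossRef REST API, where each deposited reference usually carries only the first author:

```python
import collections
import json
import urllib.request

def fetch_references(doi):
    """Fetch the reference list deposited for one DOI via the CrossRef REST API."""
    url = "https://api.crossref.org/works/" + doi
    with urllib.request.urlopen(url) as resp:
        work = json.load(resp)["message"]
    # Not all publishers deposit references; fall back to an empty list.
    return work.get("reference", [])

def count_cited_authors(reference_lists):
    """Tally, per cited (first) author, total citations and number of citing papers."""
    citations = collections.Counter()  # how often the author is cited overall
    papers = collections.Counter()     # in how many papers the author is cited
    for refs in reference_lists:
        authors = [ref["author"] for ref in refs if "author" in ref]
        citations.update(authors)
        papers.update(set(authors))
    return citations, papers

# Usage (hypothetical DOI list):
# refs = [fetch_references(doi) for doi in retracted_dois]
# citations, papers = count_cited_authors(refs)
# print(citations.most_common(5))
```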

Here is the “high score” with the top 5 citation losers:

Author          Citations lost   Cited in papers   Citation share   Retractions
L. Zhang        385              20                60.6%            1
M. Song          68              20                10.9%            0
C. Chen          65              19                11.1%            0
X. Liu           65              19                11.0%            0
R. Zimmermann    60              18                10.8%            0

Now this is a surprisingly clear pattern. In 20 of the retracted papers, L. Zhang was cited on average 19.25 times. In these papers, also 60% of the references were co-authored by him. In one of the remaining two papers, he was an author. The next authors seem to be mostly in this list because of co-authoring with L. Zhang earlier. In fact, if we ignore all citations to papers co-authored by L. Zhang, no author receives more than 5 citations anymore.

So this very clearly suggests that L. Zhang manipulated the MTAP journal to boost his citation index. And it is quite disappointing how long it took until Springer retracted those articles! Judging by the For Better Science article, there may be even more affected papers.

Joey Hess: hacking water

Sat, 15/06/2019 - 6:02pm

From water insecurity to offgrid, solar pumped, gravity flow 1000 gallons of running water.

I enjoy hauling water by hand, which is why doing it for 8 years was not really a problem. But water insecurity is; the spring has been drying up for longer periods in the fall, and the cisterns have barely been large enough to get through.

And if I'm going to add storage, it ought to be above the house, so it can gravity flow. And I have these old 100 watts of solar panels sitting unused after my solar upgrade. And a couple of pumps for a pressure tank system that was not working when I moved in. And I stumbled across an odd little flat spot halfway up the hillside. And there's an exposed copper pipe next to the house's retaining wall; email to Africa establishes that it goes down and through the wall and connects into the plumbing.

So I have an old system that doesn't do what I want. Let's hack the system.

(This took a year to research and put together, including learning a lot about plumbing.)

Run a cable from the old solar panels 75 feet over to the spring. Repurpose an old cooler as a pumphouse, to keep the rain off the Shurflow pump, and with the opening facing so it directs noise away from living areas. Add a Shurflow 902-200 linear current booster to control the pump.

Run a temporary pipe up to the logging road, and verify that the pump can just manage to push the water up there.

Sidetrack into a week spent cleaning out and re-sealing the spring's settling tank. This was yak shaving, but it was going to fail. Build a custom ladder because regular ladders are too wide to fit into it. Flashback to my tightest squeezes from caving. Yuurgh.

Install water level sensors in the settling tank, cut a hole for pipe, connect to pumphouse.

Now how to bury 250 feet of PEX pipe a foot deep up a steep hillside covered in rock piles and trees that you don't want to cut down to make way for equipment? Research every possibility, and pick the one that involves a repurposed lineman's tool resembling a medieval axe.

Dig 100 feet of 1 inch wide trench in a single afternoon by hand. Zeno in on the rest of the 300 foot run. Gain ability to bury underground cables without raising a sweat as an accidental superpower. Arms ache for a full month afterwards.

Connect it all up with a temporary water barrel, and it works! Gravity flow yields 30 PSI!

Pressure-test the copper pipe going into the house to make sure it's not leaking behind the retaining wall. Fix all the old leaky plumbing and fixtures in the house.

Clear a 6 foot wide path through the woods up the hill and roll up two 550 gallon Norwesco water tanks. Haul 650 pounds of sand up the hill, by hand, one 5 gallon bucket at a time. Level and prepare two 6 foot diameter pads.

Build a buried manifold with valves turned by water meter key. Include a fire hose outlet just in case.

Begin filling the tanks, unsure how long it will take as the pump balances available sunlight and spring flow.

François Marier: OpenSUSE 15 LXC setup on Ubuntu Bionic 18.04

Sat, 15/06/2019 - 5:15am

Similarly to what I wrote for Fedora, here is how I was able to create an OpenSUSE 15 LXC container on an Ubuntu 18.04 (bionic) laptop.

Setting up LXC on Ubuntu

First of all, install lxc:

apt install lxc
echo "veth" >> /etc/modules
modprobe veth

turn on bridged networking by putting the following in /etc/sysctl.d/local.conf:

net.ipv4.ip_forward=1

and applying it using:

sysctl -p /etc/sysctl.d/local.conf

Then allow the right traffic in your firewall (/etc/network/iptables.up.rules in my case):

# LXC containers
-A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.0.3.0/24 -j ACCEPT
-A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 239.255.255.250 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT

and apply these changes:

iptables-apply

before restarting the lxc networking:

systemctl restart lxc-net.service

Creating the container

Once that's in place, you can finally create the OpenSUSE 15 container:

lxc-create -n opensuse15 -t download -- -d opensuse -r 15 -a amd64

To see a list of all distros available with the download template:

lxc-create -n foo --template=download -- --list

Logging in as root

Start up the container and get a login console:

lxc-start -n opensuse15 -F

In another terminal, set a password for the root user:

lxc-attach -n opensuse15 passwd

You can now use this password to log into the console you started earlier.

Logging in as an unprivileged user via ssh

As root, install a few packages:

zypper install vim openssh sudo man
systemctl start sshd
systemctl enable sshd

and then create an unprivileged user:

useradd francois
passwd francois
cd /home
mkdir francois
chown francois:100 francois/

and give that user sudo access:

visudo  # uncomment "wheel" line
groupadd wheel
usermod -aG wheel francois

Now login as that user from the console and add an ssh public key:

mkdir .ssh
chmod 700 .ssh
echo "<your public key>" > .ssh/authorized_keys
chmod 644 .ssh/authorized_keys

You can now login via ssh. The IP address to use can be seen in the output of:

lxc-ls --fancy

Eddy Petrișor: How to generate a usable map file for Rust code - and related (f)rustrations

Sat, 15/06/2019 - 2:24am

Intro

Cargo does not produce a .map file by default, and when it does, mangling makes it close to unusable. If you're searching for the TL;DR, read from "How to generate a map file" at the bottom of the article.

Motivation

As a person with experience in embedded programming I find it very useful to be able to look into the map file.

Scenarios where looking at the map file is important:
  • evaluate if the code changes you made had the desired size impact or no undesired impact - recently I saw a compiler optimize for speed an initialization with 0 of an array by putting long blocks of u8 arrays in .rodata section
  • check if a particular symbol has landed in the appropriate memory section or region
  • make an initial evaluation of which functions/code could be changed to optimize either for code size or for more readability (if the size cost is acceptable)
  • check particular symbols have expected sizes and/or alignments
Rustrations

Because these kinds of scenarios are quite frequent in my work and I am used to looking at the .map file, some "rustrations" I currently face are:
  1. No map file is generated by default via cargo and information on how to do it is sparse
  2. If generated, the symbols are mangled and it seems each symbol is in a section of its own, making per-section (e.g. .rodata, .text, .bss, .data) or per-file analysis more difficult than it should be
  3. I haven't found a way to disable mangling globally without editing the Rust sources. I remember there is some tool to un-mangle the output map file, but I forgot its name, and I find the need to post-process suboptimal
  4. No default map file filename or location - ideally it should be named after the crate or app, as specified in the .toml file
How to generate a map file

Generating a map file for Linux (and possibly other OSes)

Unfortunately, not all architectures/targets use the same linker, and on some the preferred linker could change for various reasons.

Here is how I managed to generate a map file for an AMD64/X86_64 linux target where it seems the linker is GLD:

Create a .cargo/config file with the following content:

.cargo/config:

[build]
rustflags = ["-Clink-args=-Wl,-Map=app.map"]
This should apply to all targets which use GLD as a linker, so I suspect this is not portable to Windows integrated with MSVC compiler.

Generating a map file for thumbv7m with rust-lld

On baremetal targets such as Cortex M7 (thumbv7m), where you might want to use the LLVM-based rust-lld, more linker options might be necessary to prevent linking with compiler-provided startup code or libraries, so the config would look something like this:
.cargo/config:

[build]
target = "thumbv7m-none-eabi"
rustflags = ["-Clink-args=-Map=app.map"]

The thing I dislike about this is that the target is forced to thumbv7m-none-eabi, so some unit tests or generic code which might run on the build computer would be harder to test.

Note: if using rustc directly, just pass the extra options
Map file generation with some readable symbols

After the changes above are done, you'll get an app.map file with a predefined name (even if the crate is a lib). If anyone knows how to keep the crate name, or at least use lib.map for libs and app.map for apps when the original project name can't be used, please comment.

The problems with the generated map file are that:
  1. all symbol names are mangled, so you can't easily connect back to the code; the alternative is to force the compiler to not mangle, by adding #[no_mangle] before the interesting symbols
  2. each symbol seems to be put in its own subsection (e.g. an initialized array in .data)
Dealing with mangling

For problem 1, the fix is to add #[no_mangle] in the source to symbols or functions, like this:

#[no_mangle]
pub fn sing(start: i32, end: i32) -> String {
    // code body follows
}

Dealing with mangling globally

I wasn't able to find a way to convince cargo to apply no_mangle to the entire project, so if you know how to, please comment. I was thinking using #![no_mangle] to apply the attribute globally in a file would work, but it doesn't seem to work as expected: the subsection still contains the mangled name, while the symbol seems to be "namespaced":

Here is a section from the #![no_mangle] (global) version:
.text._ZN9beer_song5verse17h0d94ba819eb8952aE
                0x000000000004fa00      0x61e /home/eddy/usr/src/rust/learn-rust/exercism/rust/beer-song/target/release/deps/libbeer_song-d80e2fdea1de9ada.rlib(beer_song-d80e2fdea1de9ada.beer_song.5vo42nek-cgu.3.rcgu.o)
                0x000000000004fa00                beer_song::verse
When the #[no_mangle] attribute is attached directly to the function, the subsection is not mangled and the symbol seems to be global:

.text.verse    0x000000000004f9c0      0x61e /home/eddy/usr/src/rust/learn-rust/exercism/rust/beer-song/target/release/deps/libbeer_song-d80e2fdea1de9ada.rlib(beer_song-d80e2fdea1de9ada.beer_song.5vo42nek-cgu.3.rcgu.o)
                0x000000000004f9c0                verse

I would prefer to have a cargo global option to switch this for the entire project, so code changes would not be needed; comments welcome.
Each symbol in its section

The second issue is quite annoying, even if the fact that each symbol is in its own section can be useful to control every symbol's placement via the linker script. I guess to fix this I need a custom linker file to redirect, say, all constant "subsections" into the ".rodata" section.

I haven't tried this, but it should work.

Utkarsh Gupta: GSoC Bi-Weekly Report - Week 1 and 2

Sat, 15/06/2019 - 2:04am

Hello there.
The last two weeks have been adventurous. Here’s what happened.
My GSoC project is to package a piece of software called Loomio. A little about Loomio:
Loomio is a decision-making software, designed to assist groups with the collaborative decision-making process.
It is a free software web-application, where users can initiate discussions and put up proposals.

Loomio is mostly written in Ruby, but also includes some CoffeeScript, Vue, JavaScript, with a little HTML, CSS.
The idea is to package all the dependencies of Loomio and get Loomio easily installable on the Debian machines.

Phase 1, that is, the first 4 weeks, was planned for packaging the Ruby and Node dependencies. When I started off, I hit an obstacle: little did we know about how to go about packaging complex applications like this.
I have been helping out with packages like gitlab, diaspora, et al. And towards the end of last week, we learned that loomio needs to be done like diaspora.
First goes the loomio-installer, then would come the main package, loomio.

Now, the steps that are to be followed for loomio-installer are as follows:
» Get the app source.
» Install gem dependencies.
» Create database.
» Create tables/run migrations.
» Precompile assets (scss -> css, et al).
» Configure nginx.
» Start service with systemd.
» In case of diaspora, JS front end is pulled via wrapper gems and in case of gitlab, it is pulled via npm/yarn.
» Loomio would be done the same way we're doing gitlab.

Thus, in the last two weeks, the following work has been done:
» Ruby gems’ test failures patched.
» 18 gems uploaded.
» Looked into loomio-installer’s setup.
» Basic scripts like nginx configuration, et al written.

My other activities in Debian last month:
» Updated and uploaded gitlab 11.10.4 to experimental (thanks to praveen).
» Uploaded gitaly, gitlab-workhorse.
» Sponsored a couple of packages (DM access).
» Learned Perl packaging and packaged 4 modules (thanks to gregoa and yadd).
» Learned basic Python packaging.
» Helping DC19 Bursary team (thanks to highvoltage).
» Helping DC19 Content team (thanks to terceiro).

Plans for the next 2 weeks:
» Get the app source via wget (script).
» Install gem and node dependencies via gem install and npm/yarn install (script).
» Create database for installer.
» Precompile assets (scss -> css, et al).

I hope the next time I write a report, I’ll have no twists and adventures to share.

Until next time.
:wq for today.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, May 2019

Fri, 14/06/2019 - 9:20am

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, 214 work hours have been dispatched among 14 paid contributors. Their reports are available:

  • Abhijith PA did 17 hours (out of 14 hours allocated plus 10 extra hours from April, thus carrying over 7h to June).
  • Adrian Bunk did 0 hours (out of 8 hours allocated, thus carrying over 8h to June).
  • Ben Hutchings did 18 hours (out of 18 hours allocated).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated plus 0.25 extra hours from April, thus carrying over 0.25h to June).
  • Emilio Pozuelo Monfort did 33 hours (out of 18 hours allocated + 15.25 extra hours from April, thus carrying over 0.25h to June).
  • Hugo Lefeuvre did 18 hours (out of 18 hours allocated).
  • Jonas Meurer did 15.25 hours (out of 17 hours allocated, thus carrying over 1.75h to June).
  • Markus Koschany did 18 hours (out of 18 hours allocated).
  • Mike Gabriel did 23.75 hours (out of 18 hours allocated + 5.75 extra hours from April).
  • Ola Lundqvist did 6 hours (out of 8 hours allocated + 4 extra hours from April, thus carrying over 6h to June).
  • Roberto C. Sanchez did 22.25 hours (out of 12 hours allocated + 10.25 extra hours from April).
  • Sylvain Beucler did 18 hours (out of 18 hours allocated).
  • Thorsten Alteholz did 18 hours (out of 18 hours allocated).
Evolution of the situation

May was a calm month; nothing really changed compared to April, and we are still at 214 hours funded per month. We continue to look for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The security tracker currently lists 34 packages with a known CVE and the dla-needed.txt file has 34 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Candy Tsai: Outreachy Week 4: Weekly Report

Fri, 14/06/2019 - 9:03am

Just a normal weekly report this week. Can’t believe I’ve been in the Outreachy program for a month!

Progress for this week

Week 5 tasks
  • Fix the self service section merge request
  • Enhance the concept UI for the history section
  • Outreachy blog post

Julian Andres Klode: Encrypted Email Storage, or DIY ProtonMail

Thu, 13/06/2019 - 10:47pm

In the previous post about setting up an email server, I explained how I setup a forwarder using Postfix. This post will look at setting up Dovecot to store emails (and provide IMAP and authentication) on the server using GPG encryption to make sure intruders can’t read our precious data!

Architecture

The basic architecture chosen for encrypted storage is that every incoming email is delivered from Postfix to Dovecot via LMTP, and then Dovecot runs a sieve script that invokes a filter that encrypts the email with PGP/MIME using a user-specific key before processing it further. Or, in short:

postfix --lmtp--> dovecot --sieve--> filter --> gpg --> inbox

Security analysis: This means that the message will be on the system unencrypted as long as it is in a Postfix queue. This further means that the message plain text should be recoverable for quite some time after Postfix deleted it, by investigating in the file system. However, given enough time, the probability of being able to recover the messages should reduce substantially. Not sure how to improve this much.

And yes, if the email is already encrypted we’re going to encrypt it a second time, because we can nest encryption and signature as much as we want! Makes the code easier.

Encrypting an email with PGP/MIME

PGP/MIME is a trivial way to encrypt an email. Basically, we take the entire email message, armor-encrypt it with GPG, and stuff it into a multipart MIME message with the same headers: the encrypted message becomes the second attachment, and the first attachment carries control information.

Technically, this means that we keep headers twice, once encrypted and once decrypted. But the advantage compared to doing it more like most normal clients is clear: The code is a lot easier, and we can reverse the encryption and get back the original!

And when I say easy, I mean easy - the function to encrypt the email is just a few lines long:

def encrypt(message: email.message.Message, recipients: typing.List[str]) -> str:
    """Encrypt given message"""
    encrypted_content = gnupg.GPG().encrypt(message.as_string(), recipients)
    if not encrypted_content:
        raise ValueError(encrypted_content.status)

    # Build the parts
    enc = email.mime.application.MIMEApplication(
        _data=str(encrypted_content).encode(),
        _subtype='octet-stream',
        _encoder=email.encoders.encode_7or8bit)
    control = email.mime.application.MIMEApplication(
        _data=b'Version: 1\n',
        _subtype='pgp-encrypted; name="msg.asc"',
        _encoder=email.encoders.encode_7or8bit)
    control['Content-Disposition'] = 'inline; filename="msg.asc"'

    # Put the parts together
    encmsg = email.mime.multipart.MIMEMultipart(
        'encrypted',
        protocol='application/pgp-encrypted')
    encmsg.attach(control)
    encmsg.attach(enc)

    # Copy headers
    headers_not_to_override = {key.lower() for key in encmsg.keys()}
    for key, value in message.items():
        if key.lower() not in headers_not_to_override:
            encmsg[key] = value

    return encmsg.as_string()

Decrypting the email is even easier: Just pass the entire thing to GPG, it will decrypt the encrypted part, which, as mentioned, contains the entire original email with all headers :)

def decrypt(message: email.message.Message) -> str:
    """Decrypt the given message"""
    return str(gnupg.GPG().decrypt(message.as_string()))

(now, not sure if it’s a feature that GPG.decrypt ignores any unencrypted data in the input, but well, that’s GPG for you).

Of course, if you don’t actually need IMAP access, you could drop PGP/MIME and just pipe emails through gpg --encrypt --armor before dropping them somewhere on the filesystem, and then sync them via ssh somehow (e.g. patching maildirsync to encrypt emails it uploads to the server, and decrypting emails it downloads).

Pretty Easy privacy (p≥p)

Now, we almost have a file conforming to draft-marques-pep-email-02, the Pretty Easy privacy (p≥p) format, version 2. That format allows us to encrypt headers, thus preventing people from snooping on our metadata!

Basically it relies on the fact that we have all the headers in the inner (encrypted) message. To mark an email as conforming to that format we just have to set the subject to p≥p and add a header describing the format version:

Subject: =?utf-8?Q?p=E2=89=A1p?= X-Pep-Version: 2.0

A client conforming to p≥p will, when seeing this email, read any headers from the inner (encrypted) message.

We also might want to change the code to only copy a limited set of headers, instead of basically every header, but I’m going to leave that as an exercise for the reader.
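One way the header-copying loop in encrypt() could be restricted is with a whitelist. This is a sketch; the particular header names and the helper function are my own choice, not part of the original script:

```python
# Assumption: this whitelist is an illustrative choice, not from the original script.
COPY_HEADERS = {'from', 'to', 'cc', 'date', 'message-id', 'mime-version'}

def copy_limited_headers(message, encmsg):
    """Copy only whitelisted headers from the original message to the wrapper."""
    present = {key.lower() for key in encmsg.keys()}
    for key, value in message.items():
        if key.lower() in COPY_HEADERS and key.lower() not in present:
            encmsg[key] = value
```

Everything not in the whitelist then survives only inside the encrypted part.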

Putting it together

Assume we have a Postfix and a Dovecot configured, and a script gpgmymail written using the function above, like this:

def main() -> None:
    """Program entry"""
    parser = argparse.ArgumentParser(
        description="Encrypt/Decrypt mail using GPG/MIME")
    parser.add_argument('-d', '--decrypt', action="store_true",
                        help="Decrypt rather than encrypt")
    parser.add_argument('recipient', nargs='*',
                        help="key id or email of keys to encrypt for")
    args = parser.parse_args()
    msg = email.message_from_file(sys.stdin)

    if args.decrypt:
        sys.stdout.write(decrypt(msg))
    else:
        sys.stdout.write(encrypt(msg, args.recipient))


if __name__ == '__main__':
    main()

(don’t forget to add missing imports, or see the end of the blog post for links to full source code)

Then, all we have to do is edit our .dovecot.sieve to add

filter "gpgmymail" "myemail@myserver.example";

and all incoming emails are automatically encrypted.

Outgoing emails

To handle outgoing emails, do not store them via IMAP, but instead configure your client to add a Bcc to yourself, and then filter that somehow in sieve. You probably want to set Bcc to something like myemail+sent@myserver.example, and then filter on the detail (the sent).
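For example, a Sieve rule matching the detail could look like this. This is a sketch under the assumption that your Dovecot has the envelope and subaddress Sieve extensions enabled and that a "Sent" folder exists:

```
require ["fileinto", "envelope", "subaddress"];

# File a Bcc copy of outgoing mail (myemail+sent@myserver.example) into Sent.
if envelope :detail "to" "sent" {
    fileinto "Sent";
    stop;
}
```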

Encrypt or not Encrypt?

Now do you actually want to encrypt? The disadvantages are clear:

  • Server-side search becomes useless, especially if you use p≥p with encrypted Subject.

    Such a shame, you could have built your own GMail by writing a notmuch FTS plugin for dovecot!

  • You can’t train your spam filter via IMAP, because the spam trainer won’t be able to decrypt the email it is supposed to learn from

There are probably other things I have not thought about, so let me know on mastodon, email, or IRC!

More source code

You can find the source code of the script, and the setup for dovecot in my git repository.

Bits from Debian: 100 Paper cuts kick-off

Thu, 13/06/2019 - 8:30pm
Introduction

Is there a thorny bug in Debian that ruins your user experience? Something just annoying enough to bother you but not serious enough to constitute an RC bug? Are grey panels and slightly broken icon themes making you depressed?

Then join the 100 papercuts project! A project to identify and fix the 100 most annoying bugs in Debian over the next stable release cycle. That also includes figuring out how to identify and categorize those bugs and make sure that they are actually fixable in Debian (or ideally upstream).

The idea of a papercuts project isn't new; Ubuntu did this some years ago, which added a good amount of polish to the system.

Kick-off Meeting and DebConf BoF

On the 17th of June at 19:00 UTC we're kicking off an initial brainstorming session on IRC to gather some initial ideas.

We'll use that to seed discussion at DebConf19 in Brazil during a BoF session where we'll solidify those plans into something actionable.

Meeting details

When: 2019-06-17, 19:00 UTC
Where: #debian-meeting channel on the OFTC IRC network

Your IRC nick needs to be registered in order to join the channel. Refer to the Register your account section on the OFTC website for more information on how to register your nick.

You can always refer to the debian-meeting wiki page for the latest information and up to date schedule.

Hope to see you there!

Steinar H. Gunderson: Nageru email list

Wed, 12/06/2019 - 2:45pm

The Nageru/Futatabi community is now large enough that I thought it would be a good idea to make a proper gathering place. So now, thanks to Tollef Fog Heen's hosting, there is a nageru-discuss list. It's expected to be low-volume, but if you're interested, feel free to join!

As for Nageru itself, there keeps being interesting development(s), but that's for another post. :-)

Dirk Eddelbuettel: RcppArmadillo 0.9.500.2.0

Wed, 12/06/2019 - 1:58pm

A new RcppArmadillo release based on a new Armadillo upstream release arrived on CRAN, and will get to Debian shortly. It brings a few upstream changes, including extended interfaces to LAPACK following the recent gcc/gfortran issue. See below for more details.

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 610 other packages on CRAN.

Changes in RcppArmadillo version 0.9.500.2.0 (2019-06-11)
  • Upgraded to Armadillo release 9.500.2 (Riot Compact)

    • Expanded solve() with solve_opts::likely_sympd to indicate that the given matrix is likely positive definite

    • more robust automatic detection of positive definite matrices by solve() and inv()

    • faster handling of sparse submatrices

    • expanded eigs_sym() to print a warning if the given matrix is not symmetric

    • extended LAPACK function prototypes to follow Fortran passing conventions for so-called "hidden arguments", in order to address GCC Bug 90329; to use previous LAPACK function prototypes without the "hidden arguments", #define ARMA_DONT_USE_FORTRAN_HIDDEN_ARGS before #include <armadillo>

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Martin Michlmayr: ledger2beancount 1.8 released

Wed, 12/06/2019 - 11:32am

I released version 1.8 of ledger2beancount, a ledger to beancount converter.

I ran ledger2beancount over the ledger test suite and made it much more robust. If ledger2beancount 1.8 can't parse your ledger file properly, I'd like to know about it.

Here are the changes in 1.8:

  • Add support for apply year
  • Fix incorrect account mapping of certain accounts
  • Handle fixated commodity and postings without amount
  • Improve behaviour for invalid end without apply
  • Improve error message when date can't be parsed
  • Deal with account names consisting of a single letter
  • Ensure account names don't end with a colon
  • Skip ledger directives eval, python, and value
  • Don't assume all filenames for include end in .ledger
  • Support price directives with commodity symbols
  • Support decimal commas in price directives
  • Don't misparse balance assignment as commodity
  • Ensure all beancount commodities have at least 2 characters
  • Ensure all beancount metadata keys have at least 2 characters
  • Don't misparse certain metadata as implicit conversion
  • Avoid duplicate commodity directives for commodities with name collisions
  • Recognise deferred postings
  • Recognise def directive

Thanks to Alen Siljak for reporting a bug.

You can get ledger2beancount from GitHub.

Markus Koschany: My Free Software Activities in May 2019

Tue, 11/06/2019 - 10:27pm

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games
  • Like in previous release cycles I published a new version of debian-games at the end to incorporate the latest archive changes. Unfortunately, Netbeans, the Java IDE, cuyo and holdingnuts didn’t make it and I demoted them to Suggests.
  • A longstanding graphical issue (#871223) was resolved in Neverball where stars in goal points were displayed as squares. As usual something (OpenGL-related?) must have changed somewhere, but in the end the installation of some missing png files made the difference. How it worked without them before remains a mystery.
  • I sponsored two uploads which were later unblocked for Buster. Bernat reported a crash in etw, a football simulation game ported from the AMIGA. Fortunately Steinar H. Gunderson could provide a patch quickly. (#928240)
  • A rebuild of marsshooter, a great looking space shooter with an awesome soundtrack, may have been the trigger for a segmentation fault. Jacob Nevins stumbled over it and Bernhard Übelacker provided a patch to fix missing return statements. (#929513)
Debian Java
  • I provided a security update for jackson-databind to fix CVE-2019-12086 (#929177) in Buster and prepared DSA-4452-1 to fix the remaining 11 CVE in Stretch.
  • Unfortunately Netbeans will not be in Buster. There were at least two issues why I could not recommend our Debian version, clear regressions in comparison to the version in Stretch. I found it odd that the severest one was fixed in Ubuntu shortly after the removal from testing; I surely would have appreciated the patch for Debian too. At the moment I don’t believe I will continue to work on Netbeans: it is very time consuming to get it in shape for Debian, there are too many dependencies where the slightest changes in r-deps may cause bugs in Netbeans, nobody else in the Java team is really interested, and most Java developers probably install the upstream version. A really bad combination.
Misc Debian LTS

This was my thirty-ninth month as a paid contributor and I have been paid to work 18 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • I investigated CVE-2019-0227 in axis and suggested marking it as unimportant. I triaged CVE-2019-0227 in ampache as no-dsa for Jessie.
  • DLA-1798-1. Issued a security update for jackson-databind fixing 1 CVE.
  • DLA-1804-1. Issued a security update for curl fixing 1 CVE.
  • DLA-1816-1. Issued a security update for otrs2 fixing 2 CVE.
  • DLA-1753-3. Issued a regression update for proftpd-dfsg. When the creation of a directory failed during sftp transfer, the sftp session would be terminated instead of failing gracefully due to a non-existing debug logging function.
  • DLA-xxxx-1. I’m currently testing the next security update of phpmyadmin. I triaged or fixed 19 CVE.
ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my twelfth month and I have been paid to work 8 hours on ELTS (15 hours were allocated). I intend to use the remaining hours in June.

  • I investigated three CVE in pacemaker, CVE-2018-16877, CVE-2018-16878, CVE-2019-3885 and found that none of them affected Wheezy.
  • ELA-127-1. Issued a security update for linux and linux-latest fixing 15 CVE.

Thanks for reading and see you next time.

Petter Reinholdtsen: More sales number for my Free Culture paper editions (2019-edition)

Tue, 11/06/2019 - 4:05pm

The first book I published, Free Culture by Lawrence Lessig, is still selling a few copies. Not a lot, but enough to have contributed slightly over $500 to the Creative Commons Corporation so far. All the profit is sent there. Most books are still sold via Amazon (83 copies), with Ingram second (49) and Lulu (12) and Machette (7) as minor channels. Buying directly from Lulu brings the largest cut to Creative Commons. The English edition sold 80 copies so far, the French 59 copies, and the Norwegian only 8 copies. Nothing impressive, but nice to see that the work we put in is still being appreciated. The ebook edition is available for free from GitHub.

Title / language         2016 jan-jun  2016 jul-dec  2017 jan-jun  2017 jul-dec  2018 jan-jun  2018 jul-dec  2019 jan-may
Culture Libre / French              3             6            19            11             7             6             7
Fri kultur / Norwegian              7             1             0             0             0             0             0
Free Culture / English             14            27            16             9             3             7             3
Total                              24            34            35            20            10            13            10

It is fun to see the French edition being more popular than the English one in recent periods.

If you would like to translate and publish the book in your native language, I would be happy to help make it happen. Please get in touch.

Bits from Debian: DebConf19 welcomes its sponsors!

Tue, 11/06/2019 - 2:20pm

DebConf19 is taking place in Curitiba, Brazil, from 21 July to 28 July 2019. It is the 20th edition of the Debian conference and organisers are working hard to create another interesting and fruitful event for attendees.

We would like to warmly welcome the first 29 sponsors of DebConf19, and introduce you to them.

So far we have three Platinum sponsors.

Our first Platinum sponsor is Infomaniak. Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware).

Next, as a Platinum sponsor, is Google. Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware. Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner.

Lenovo is our third Platinum sponsor. Lenovo is a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office solutions and data center solutions. This is their first year sponsoring DebConf.

Our Gold sponsor is Collabora, a global consultancy delivering Open Source software solutions to the commercial world. Their expertise spans all key areas of Open Source software development. In addition to offering solutions to clients, Collabora's engineers and developers actively contribute to many Open Source projects.

Our Silver sponsors are: credativ (a service-oriented company focusing on open-source software and also a Debian development partner), Cumulus Networks (a company building web-scale networks using innovative, open networking technology), Codethink (specialists in system-level software infrastructure supporting advanced technical applications), the Bern University of Applied Sciences (with over 6,800 students enrolled, located in the Swiss capital), the Civil Infrastructure Platform (a collaborative project hosted by the Linux Foundation, establishing an open source “base layer” of industrial grade software), \WIT (offering a secure cloud solution and complete data privacy via Kubernetes encrypted hardware virtualisation), Hudson-Trading (a company researching and developing automated trading algorithms using advanced mathematical techniques), Ubuntu (the operating system delivered by Canonical), NHS (with a broad product portfolio, they offer solutions for, amongst others, data centres, telecommunications, CCTV, and residential, commercial and industrial automation), rentcars.com (which helps customers find the best car rentals from over 100 rental companies at destinations in the Americas and around the world), and Roche (a major international pharmaceutical provider and research company dedicated to personalized healthcare).

Bronze sponsors: 4Linux, IBM, zpe, Univention, Policorp, Freexian, globo.com.

And finally, our Supporter level sponsors: Altus Metrum, Pengwin, ISG.EE, Jupter, novatec, Intnet, Linux Professional Institute.

Thanks to all our sponsors for their support! Their contributions make it possible for a large number of Debian contributors from all over the globe to work together, help and learn from each other at DebConf19.

Become a sponsor too!

DebConf19 is still accepting sponsors. Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf19 website at https://debconf19.debconf.org.

Keith Packard: snek-1.0

Mon, 10/06/2019 - 12:48am
Snek 1.0

I've released version 1.0 of Snek today.

Features
  • Python-inspired. Snek is a subset of Python: learning Snek is a great way to start learning Python.

  • Small. Snek runs on an original Arduino Duemilanove board with 32kB of ROM and 2kB of RAM. That's smaller than the Apollo Guidance Computer.

  • Free Software. Snek is licensed under the GNU General Public License (v3 or later). You will always be able to get full source code for the system.

Documentation

Read the Snek manual online or in PDF form.

Dirk Eddelbuettel: #22: Using Rocker and PPAs for Fun and Profit

Sun, 09/06/2019 - 8:18pm

Welcome to the 22nd post in the reasonably rational R recommendations series, or R4 for short.

This post premieres something new: a matching video in lightning talk style:

The topic is something we had mentioned a few times before in this r^4 blog series, for example in this post on finding deb packages as well as in this post on binary installations. Binaries rock, where available, and Michael Rutter’s PPAs should really be known and used more widely. Hence the video and supporting slides.

Dirk Eddelbuettel: littler 0.3.8: Several nice new features

Sun, 09/06/2019 - 6:49pm

The ninth release of littler as a CRAN package is now available, following in the thirteen-ish year history of the package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R and predates Rscript. And it is (in my very biased eyes) better, as it allows both for piping and for shebang scripting via #!, uses command-line arguments more consistently, and still starts faster. It also has always loaded the methods package, something Rscript only started doing more recently.

littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (whoever thought case-insensitive filesystems were a good idea as a default?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH.
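To illustrate the piping and shebang use mentioned above, here is a minimal sketch (it assumes the littler package is installed so that the r binary is on the PATH; file names are made up for the example):

```shell
# Piping: feed an R expression to r on stdin
echo 'cat(2 + 2, "\n")' | r

# Shebang scripting: a self-contained executable R script
cat > /tmp/hello.r <<'EOF'
#!/usr/bin/env r
cat("Hello from littler\n")
EOF
chmod +x /tmp/hello.r
/tmp/hello.r
```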

A few examples are highlighted at the Github repo, as well as in the examples vignette.

This release extends the support for options("Ncpus") to the scripts install.r and install2.r (which has docopt support) making installation of CRAN packages proceed in parallel and thus quite a bit faster. We also added a new script to run tests from the excellent tinytest package, made the rhub checking scripts more robust to the somewhat incomplete latex support there, and updated some documentation.
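A sketch of how the parallel installation feature can be used (the package names are merely illustrative; install.r and install2.r ship with littler, and the standard R option "Ncpus" is read by them per the release notes):

```shell
# Set the standard R option "Ncpus" in the user profile so that
# install.r / install2.r install CRAN packages in parallel:
echo 'options(Ncpus = 4)' >> ~/.Rprofile

install.r docopt tinytest   # installs the packages and their dependencies
install2.r --error docopt   # install2.r additionally offers docopt-style flags
```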

The NEWS file entry is below.

Changes in littler version 0.3.8 (2019-06-09)
  • Changes in examples

    • The install.r and install2.r scripts now use parallel installation using options("Ncpus") on remote packages.

    • The install.r script has an expanded help text mentioning the environment variables it considers.

    • A new script tt.r was added to support tinytest.

    • The rhub checking scripts now all suppress builds of manual and vignettes as asking for working latex appears to be too much.

  • Changes in package

    • On startup, r now checks whether it is in the PATH and, if not, references the new FAQ entry; the text from Makevars mentions it too.
  • Changes in documentation

    • The FAQ vignette now details how to add r to the PATH.

CRANberries provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page. The code is available via the GitHub repo, from tarballs and now of course all from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Giovanni Mascellani: DQIB, the Debian Quick Image Baker

Sun, 09/06/2019 - 3:00pm

Debian supports (either officially or unofficially) a lot of architectures, which is of course a nice thing. Sometimes you want to play with some exotic architecture you are not familiar with, or you want to debug a problem with that architecture, but you do not have a computer implementing that architecture. Fortunately QEMU is able to emulate most of the architectures supported by Debian (ia64 being an exception), however it can be difficult to install it or to find ready-to-use images on the Internet (there are some, but usually they are quite a few years old). Let's also say that for some reason you cannot or do not want to use the Debian porterboxes (maybe you are not a DD, or you want to mess up with the network, or you want to be root). What do you do?

Mostly for the fun of hacking on some exotic architectures, I tried to brew together a little script, the Debian Quick Image Baker (DQIB). It is basically a wrapper that calls qemu-debootstrap with the right options (where "right" means "those that I have experimentally found to work"), with some thin icing layer on top. qemu-debootstrap is basically another wrapper on top of debootstrap, which of course does the heavy lifting, and qemu-user-static, that allows debootstrap to run executables for foreign architectures.
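A hand-run equivalent of what such a wrapper does might look like the following (an illustrative sketch, not DQIB's actual invocation — the "right" options differ per architecture, which is exactly what the script encapsulates):

```shell
# Requires the debootstrap and qemu-user-static packages; run as root.
qemu-debootstrap --arch=arm64 buster /srv/chroot/buster-arm64 \
    http://deb.debian.org/debian

# qemu-user-static transparently executes the foreign-architecture
# binaries, so you can chroot straight into the new tree:
chroot /srv/chroot/buster-arm64 uname -m   # should report aarch64
```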

With DQIB you can quickly create working images for most Debian official architectures (i386, amd64, mips, mipsel, mips64el, armhf, arm64, ppc64el). s390x works, but requires a little workaround because of a little bug that was fixed in recent QEMU versions. Images for armel can be created, but the only Linux kernel offered by Debian for armel does not work on any QEMU machine; I don't know of a workaround here. I would also like to support non-official architectures, but this is work in progress. For all the non-official architectures, either qemu-debootstrap fails for some reason, or I cannot find the right options to make the Debian-distributed kernel run (except for riscv64, where I know how to make the kernel work, but it requires some non-trivial changes to the DQIB script; the riscv64 landscape is very dynamic, however, and things could change in very little time).

You can either clone the repository and run DQIB on your computer (check out the README), or download pre-baked images regenerated weekly by a CI process (which include the right command line to launch QEMU; see above for the definition of "right").

(You might ask why this is hosted on Gitlab.com instead of the Debian Developer's obvious choice. The reason is that the artifacts generated by the CI are rather large, and I am not sure DSA would be happy to have them on their servers.)

Have fun, and if you know how to support more architectures, please let me know!

Jonathan McDowell: NIDevConf 19 slides on Home Automation

Sun, 09/06/2019 - 12:50pm

The 3rd Northern Ireland Developer Conference was held yesterday, once again in Riddel Hall at QUB. It’s a good venue for a great conference and as usual it was a thoroughly enjoyable day, with talks from the usual NI suspects as well as some people who were new to me. I finally submitted a talk this year, and ended up speaking about my home automation setup - basically stringing together a bunch of the information I’ve blogged about here over the past year or so. It seemed to go well other than having a bit too much content for the allocated time, but I got the main arc covered and mostly just had to skim through the additional information. I’ve had a similar talk accepted for DebConf19 this Summer, with a longer time slot that will allow me to go into a bit more detail about how Debian has enabled each of the pieces.

Slides from yesterday’s presentation are below; if you’re a regular reader I doubt there’ll be anything new, and it’s a slide deck very much intended to be talked around rather than stand alone, so if you weren’t there they’re probably not that useful. I believe the talk was recorded, so I’ll update this post with a link once that’s available (or you can check the NIDevConf YouTube channel yourself).

Note that a lot of the slides have very small links at the bottom which will take you to either a blog post expanding on the details, or an external reference I think is useful.


Also available for direct download.