The recent news that openSUSE considers btrfs safe for users prompted me to consider using it. And indeed I did. I was already familiar with zfs, so considered this a good opportunity to experiment with btrfs.
btrfs makes an intriguing filesystem for all sorts of workloads. The benefits of btrfs and zfs are well-documented elsewhere. There are a number of features btrfs has that zfs lacks. For instance:
The ZFS features that btrfs lacks are likewise well-documented elsewhere, but there are also a few odd btrfs missteps:
btrfs would be fine if it worked reliably. I should say at the outset that I have never lost any data to it, but it has caused enough kernel panics that I’ve lost count. Several times I had a file that produced a panic whenever I tried to delete it; several times it took more than 12 hours to unmount a btrfs filesystem; hardlink-heavy workloads took days longer to complete than on zfs or ext4; and those are just the incidents I wrote about. I tried to use btrfs balance to change the metadata allocation on the filesystem, and never did get it to complete; it seemed to go into an endless I/O pattern after the first 1GB of metadata and never got past that. I didn’t bother trying the live migration of data from one disk to another on this filesystem.
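For reference, changing the metadata allocation is done with a convert filter on the balance command; a minimal sketch of that kind of invocation, assuming a filesystem mounted at the hypothetical /mnt/data:

    # rewrite all metadata block groups, converting them to the dup profile
    btrfs balance start -mconvert=dup /mnt/data

    # from another terminal, check on a long-running balance
    btrfs balance status /mnt/data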
I wanted btrfs to work. I really, really did. But I just can’t see it working. I tried it on my laptop, but had to turn off CoW on my virtual machine’s disk because of the rm bug. I tried it on my backup devices, but it was unusable there due to being so slow. (Also, the hardlink behavior is broken by default and requires btrfstune -r. Yipe.)
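For anyone in the same situation: the usual way to turn off CoW for a single file on btrfs is the C file attribute; a minimal sketch, using a hypothetical disk image path:

    # +C only affects newly written data, so set it on an empty file
    # (or on the containing directory) before filling it with data
    touch /var/lib/vm/disk.img
    chattr +C /var/lib/vm/disk.img

Mounting with -o nodatacow would disable CoW filesystem-wide instead.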
At this point, I don’t think it is really worth bothering with, and I think the SuSE decision is misguided and ill-informed. btrfs will be an awesome filesystem one day; I am quite sure of it, and in time it will probably displace zfs as the most advanced filesystem out there. But that time is not yet here.
In the meantime, I’m going to build a Debian Live Rescue CD with zfsonlinux on it. Because I don’t ever set up a system I can’t repair.
Lumicall is now offering free calls from browser to mobile.
The whole service is powered by free software using open standards.
Various open source projects have made this possible, in particular:
Please come and join us on the mailing list for any of the third-party projects that are involved. Please also join the Free real-time communications list sponsored by the FSF Europe for any general discussion about the future of free communications with free software.

WebRTC Conference this week
I'll be presenting some of my own work with WebRTC at the WebRTC Conference and Exhibition 2013 in Paris this week. Various other free software developers are also on the program, including Ludovic Dubost from XWiki and Emil Ivov from Jitsi.
Thanks to Steffen Ullrich, this bug is now fixed in the LWP::UserAgent and LWP::Protocol::https repositories.
In Debian, I’ve updated libwww-perl 6.05-2 and liblwp-protocol-https-perl 6.04-2 to include the same patches. This fix is now available in Debian unstable.
See my previous blog for more details on this story.
All the best
I just realised a lot of my projects are deployed in the same way:
This includes both Apache-based projects, and node.js projects.
I'm sure I could generalize this, and do clever things with git-hooks. Right now, for example, I have run-scripts which look like this:

    #!/bin/sh
    #
    # /etc/service/blogspam.js/run - Runs the blogspam.net API.
    #

    # update the repository.
    git pull --quiet

    # install dependencies, if appropriate.
    npm install

    # launch
    exec node server.js
It seems the only thing that differs is the name of the directory and the remote git clone URL.
With a bit of scripting magic I'm sure you could push applications to a virgin Debian installation and have it do the right thing.
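As a rough sketch of that generalization (everything here is hypothetical: the /srv layout, the runit service names, the script itself), a single deploy script driven by a name/URL pair might look like:

    #!/bin/sh
    # deploy-app NAME GIT_URL - clone or update an application, then restart it.
    name=$1
    url=$2

    # clone on first deployment, update thereafter
    if [ -d "/srv/$name" ]; then
        cd "/srv/$name" && git pull --quiet
    else
        git clone --quiet "$url" "/srv/$name"
    fi

    # restart the runit-supervised service of the same name
    sv restart "$name"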
I think the only obvious thing I'm missing is a list of Debian dependencies. Perhaps, adding something like the packages.json file, I could add an extra step:

    apt-get update -qq
    apt-get install --yes --force-yes $(cat packages.apt)
Making deployments easy is a good thing, and consistency helps.
It has been a while since I managed to publish the last interview, but the Debian Edu / Skolelinux community is still going strong, and yesterday we even had a new school administrator show up on #debian-edu to share his success story with installing Debian Edu at their school. This time I have been able to get some helpful comments from the creator of Knoppix, Klaus Knopper, who was involved in a Skolelinux project in Germany a few years ago.
Who are you, and how do you spend your days?
I am Klaus Knopper. I have a master's degree in electrical engineering, and am currently a professor in information management at the university of applied sciences Kaiserslautern / Germany, as well as a freelance Open Source software developer and consultant.
That is pretty much the work I spend my days on. Apart from teaching, I'm also conducting some more or less experimental projects like the Knoppix GNU/Linux live system (Debian-based, like Skolelinux), ADRIANE (a blind-friendly talking desktop system) and LINBO (Linux-based network boot console, a fast remote install and repair system supporting various operating systems).
How did you get in contact with the Skolelinux / Debian Edu project?
The credit for this has to go to Kurt Gramlich, who is the German coordinator for Skolelinux. We were looking for an all-in-one open source community-supported distribution for schools, and Kurt introduced us to Skolelinux for this purpose.
What do you see as the advantages of Skolelinux / Debian Edu?
What do you see as the disadvantages of Skolelinux / Debian Edu?
For these reasons, and from the experience gained in our project, I would now rather consider using plain Debian for schools next time, until Skolelinux is more closely integrated into Debian and becomes upgradeable without reinstallation.
Which free software do you use daily?
GNU/Linux with LXDE desktop, bash for interactive dialog and programming, texlive for documentation and correspondence, occasionally LibreOffice for document format conversion. Various programming languages for teaching.
Which strategy do you believe is the right one to use to get schools to use free software?
Strong arguments are
I've been using POV-Ray off and on for the past decade or so. I've never been extremely talented with graphical stuff, but I've always liked playing around with it; and POV-Ray, with its Turing-complete scene description language, appeals to me as a programmer. I've used it when I needed to do some animation; for instance, I created the FOSDEM 2013 and DebConf13 "wait screen" animations for the video team with it.
One particular downside of POV-Ray has always been the fact that their license was a custom non-free one. This was a historical accident (POV-Ray has existed for a long time, since before the popularization of FLOSS), and AIUI, the relicensing was impossible for various reasons. However, a rewrite of POV-Ray (as version 3.7) has been in the making for quite a while.
Today, I noticed two things: first, POV-Ray 3.7 was released (under the AGPLv3, thereby becoming Free Software); and second, as of the 3.7 release, POV-Ray is maintained in a git repository and available on github.
Also, apart from being free software now, POV-Ray 3.7 has a few new features as well. Most important among them (at least in my opinion): POV-Ray 3.7 is a multithreaded application, in contrast to POV-Ray 3.6 and earlier, which were not.
Building it seemed to have some issues with the versions of a few things that are in Debian unstable; but for one of these a fix has already been merged, and for the other a merge request is out.
Now to decide whether I should package it...
(Yes, yes... Maybe I should post in Spanish... But hey, gotta keep consistency in my blog!)

General, public, open invitation
Are you in Mexico City, or do you plan to be next Wednesday (December 11)?
Are you interested in video editing? In Free Software?
I will have the pleasure of hosting at home the great Chema Serralde, a good friend and a multifaceted guru in both the technical and musical areas. He will present a workshop: Video editing from the command line.
I asked Chema for an outline of his talk, but given that he is a busy guy, I will basically translate the introduction he prepared for this same material at FSL Vallarta, held two weeks ago.
With the help of the command line, you can become a multimedia guru. We will edit a video using just a terminal. This skill will surprise your friends, and your partner.
But most important of all, this knowledge is just an excuse to understand, step by step, what a video CODEC is, what a FORMAT is, and how video and audio editors work; with this knowledge, you will be able to set the basis for multimedia editing, without the promises and secrets of proprietary editors.
How much does my file weigh, and why? How can I improve a video file's quality? Why can't I read the information from my camera under GNU/Linux?
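To give a small taste of the terrain the workshop covers (my own illustration, not necessarily Chema's material; it assumes ffmpeg and a hypothetical input file):

    # inspect a file: container format, streams, codecs
    ffprobe input.avi

    # re-encode: H.264 video scaled to 640px wide, MP3 audio
    ffmpeg -i input.avi -c:v libx264 -vf scale=640:-1 -c:a libmp3lame output.mp4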
By the end of this workshop, we will see how some libraries help you develop your first audio and video application, and what their main APIs and uses are.

Logistics
Everybody is welcome to come for free, no questions asked, no fees collected. I can offer coffee for all, but if you want anything else to eat/drink, you are welcome to bring it.
We do require you to reserve and confirm your place (mail me to my usual mail address). We have limited space, and I must set an absolute quota of 10 participants.
Some people hide their address... Mine is quite publicly known: Av. Copilco 233, just by Parque Hugo Margain, on the Northern edge of UNAM (Metro Copilco).
So, that said... See you there! :-D
This is a report-back, since I know other people wanted to attend. I'm not a lawyer, but I develop software to improve communications security, I care about these questions, and I want other people to be aware of the discussion. I hope I did not misrepresent anything below. I'd be happy if anyone wants to offer corrections.

Background
Off-the-Record Messaging (OTR) is a way to secure instant messaging (e.g. jabber/XMPP, gChat, AIM).
The two most common characteristics people want from a secure instant messaging program are confidentiality (only the intended recipient can read the messages) and authentication (the recipient can verify who sent them).
As with many other modern networked encryption schemes, OTR relies on each user maintaining a long-lived "secret key", and publishing a corresponding "public key" for their peers to examine. These keys are critical for providing authentication (and by extension, for confidentiality).
But OTR offers several interesting characteristics beyond the common two. Its most commonly cited characteristics are "forward secrecy" and "deniability".
To be clear, this kind of deniability means Alice can correctly say "you have no cryptographic proof I said X", but it does not let her assert "here is cryptographic proof that I did not say X" (I can't think of any protocol that offers the latter assertion). The opposite of deniability is a cryptographic proof of origin, which usually runs something like "only someone with access to Alice's secret key could have said X."
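The mechanism behind this property is worth a concrete illustration: OTR authenticates messages with MACs keyed by material both parties share, rather than with signatures that only the sender could have produced, so a valid tag proves nothing about which party wrote the message. A toy demonstration using the openssl command line (the key and message are invented for the example):

    # Alice authenticates her message with the shared session key...
    echo -n "meet at noon" | openssl dgst -sha256 -hmac "shared-session-key"

    # ...but Bob, holding the same key, computes the identical tag, so the
    # tag cannot prove to a third party which of them wrote the message.
    echo -n "meet at noon" | openssl dgst -sha256 -hmac "shared-session-key"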
The general sense of the room was that we'd all heard this question many times, from many people. There are lots of problems with the ideas behind the question from many perspectives. But just from a legal perspective, there are at least two problems with the way this question is posed:
This denial could take place in two rather different contexts: during rulings over the admissibility of evidence, or (once admitted) in front of a jury.
In legal wrangling over admissibility, apparently a lot of horse-trading can go on -- each side concedes some things in exchange for the other side conceding other things. It appears that cryptographic proof of origin (that is, a lack of deniability) on the chat logs themselves might reduce the amount of leverage a defense lawyer can get from conceding or arguing strongly over that piece of evidence. For example, if the chain of custody of a chat transcript is fuzzy (i.e. the transcript could have been mishandled or modified somehow before reaching trial), then a cryptographic proof of origin would make it much harder for the defense to contest the chat transcript on the grounds of tampering. Deniability would give the defense more bargaining power.
In arguing about already-admitted evidence before a jury, deniability in this sense seems like a job for expert witnesses, who would need to convince the jury of their interpretation of the data. There was a lot of skepticism in the room over this, both around the possibility of most jurors really understanding what OTR's claim of deniability actually means, and on jurors' ability to distinguish this argument from a bogus argument presented by an opposing expert witness who is willing to lie about the nature of the protocol (or who misunderstands it and passes on their misunderstanding to the jury).
The complexity of the tech systems involved in a data-heavy prosecution or civil litigation are themselves opportunities for lawyers to argue (and experts to weigh in) on the general reliability of these systems. Sifting through the quantities of data available and ensuring that the appropriate evidence is actually findable, relevant, and suitably preserved for the jury's inspection is a hard and complicated job, with room for error. OTR's deniability might be one more element in a multi-pronged attack on these data systems.
These are the most compelling arguments for the legal utility of deniability that I took away from the discussion. I confess that they don't seem particularly strong to me, though some level of "avoiding a weaker position when horse-trading" resonates with me.
What about the arguments against its utility?

Limitations

The most basic argument against OTR's deniability is that courts don't care about cryptographic proof for digital evidence. People are convicted or lose civil cases based on unsigned electronic communications (e.g. normal e-mail, plain chat logs) all the time. OTR's deniability doesn't provide any legal cover stronger than trying to claim you didn't write a given e-mail that appears to have originated from your account. As someone who understands the forgeability of e-mail, I find this overall situation troubling, but it seems to be where we are.
Worse, OTR's deniability doesn't cover whether you had a conversation, just what you said in that conversation. That is, Bob can still cryptographically prove to an adversary (or before a judge or jury) that he had a communication with someone controlling Alice's secret key (which is probably Alice); he just can't prove that Alice herself said any particular part of the conversation he produces.
Additionally, there are runtime tradeoffs depending on how the protocol manages to achieve these features. For example, forward secrecy itself requires an additional round trip or two when compared to authenticated, encrypted communications without forward secrecy (a "round trip" is a message from Alice to Bob followed by a message back from Bob to Alice).
Getting proper deniability into the mpOTR spec might incur extra latency (imagine having to wait 60 seconds after everyone joins before starting a group chat, or a pause in the chat of 15 seconds when a new member joins) or extra computational power (meaning that they might not work well on slower/older devices) or an order of magnitude more bandwidth (meaning that chat might not work at all on a weak connection). There could also simply be complexity that makes it harder to correctly implement a protocol with deniability than an alternate protocol without deniability. Incorrectly-implemented software can put its users at risk.
I don't know enough about the current state of mpOTR to know what the specific tradeoffs are for the deniability feature, but it's clear there will be some. Who decides whether the tradeoffs are worth the feature?

Other kinds of deniability

Further weakening the case for the legal utility of OTR's deniability, there seem to be other ways to get deniability in a legal context over a chat transcript.
There are deniability arguments that can be made from outside the protocol. For example, you can always claim someone else took control of your computer while you were asleep or using the bathroom or eating dinner, or you can claim that your computer had a virus that exported your secret key and it must have been used by someone else.
If you're desperate enough to sacrifice your digital identity, you could arrange to have your secret key published, at which point anyone can make signed statements with it. Having forward secrecy makes it possible to expose your secret key without exposing the content of your past communications to any listener who happened to log them.

Conclusion

My takeaway from the discussion is that the legal utility of OTR's deniability is non-zero, but quite low; and that development energy focused on deniability is probably only justified if there are very few costs associated with it.
Several folks pointed out that most communications-security tools are too complicated or inconvenient to use for normal people. If we have limited development energy to spend on securing instant messaging, usability and ubiquity would be a better focus than this form of deniability.
Secure chat systems that take too long to make, that are too complex, or that are too cumbersome are not going to be adopted. But this doesn't mean people won't chat at all -- they'll just use cleartext chat, or maybe they'll use supposedly "secure" protocols with even worse properties: for example, without proper end-to-end authentication (permitting spoofing or impersonation by the server operator or potentially by anyone else); with encryption that is reversible by the chatroom operator or flawed enough to be reversed by any listener with a powerful computer; without forward secrecy; or so on.
As a demonstration of this, we heard some lawyers in the room admit to using Skype to talk with their clients even though they know it's not a safe communications channel because their clients' adversaries might have access to the skype messaging system itself.
My conclusion from the meeting is that there are a few particular situations where deniability could be useful legally, but that overall, it is not where we as a community should be spending our development energy. Perhaps in some future world where all communications are already authenticated, encrypted, and forward-secret by default, we can look into improving our protocols to provide this characteristic, but for now, we really need to work on usability, popularization, and wide deployment.

Thanks

Many thanks to Nick Merrill for organizing the discussion, to Shayana Kadidal and Stanley Cohen for providing a wealth of legal insight and legal experience, to Tom Ritter for an excellent presentation of the technical details, and to everyone in the group who participated in the interesting and lively discussion.
Today I should have been heading down to York, to attend the Bytemark Christmas party. Instead I'm here in Edinburgh, because wind/storms basically shutdown the rail network in Scotland for the morning.
Technically I could have probably made it, but only belatedly and only at a huge cost to my sanity. The train-station was insane with stranded people, and there seemed no guarantee the recently-revived service would continue.
So instead I'm sulking at home.
I had a lot of other things scheduled to do in York/London today/tomorrow, for reasons that will become apparent next week, so to say I'm annoyed is an understatement.
In happier news I'm not dead.
Walking to work this morning was horrific: there was so much wind, 70-100mph, that I couldn't actually cross a bridge on Ocean Drive, because I just kept getting blown into the road. (Yeah, that's a road that is very close to the coast. Driving wind. Horrible rain. Storming sea. Fun.)
I ended up retracing my steps, and taking a detour. (PS. My boots leaked.)
Not a good day. Enjoy some software instead - a trivial HTTP / XMPP bridge.
Releasing the shift key is hard.
I’m sure this isn’t an original thought of mine, but it just popped into my head and I think it’s something of a “fundamental truth” that all software developers need to keep in mind:
Writing software is easy. The hard part is writing software that works.
All too often, we get so caught up in the rush of building something that we forget that it has to work – and, all too often, we fail in some fundamental fashion, whether it’s “doesn’t satisfy the user’s needs” or “you just broke my $FEATURE!” (which is the context I was thinking of).
Whatever you do with translations, consider translation management issues. For example, say you are developing a multilingual web site. All kinds of labels and buttons and form fields are nicely translatable with the trans template tag and ugettext. You have po files that follow your code from dev to stage to production environments.
Now you add a CMS into the mix. And suddenly your translations are in more than one place, in more than one format, and follow different routes to production.
Now imagine that you need to add Chinese language to your entire site. The translator is an off-site contractor. What files would you send to him to translate? How would you generate them? How will you integrate them?
If someone adds or changes a page on production in English: how will your developers see that change? how will you know that an updated translation for Chinese is needed? how will you manage the update of the translation?
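For the code side, at least, Django's built-in management commands give the gettext half of the story a standard answer; a sketch of the round trip (assuming the zh_CN locale code):

    # 1. extract translatable strings from code and templates into a .po file
    django-admin.py makemessages -l zh_CN

    # 2. send locale/zh_CN/LC_MESSAGES/django.po out for translation,
    #    then merge the translated file back into the same path

    # 3. compile the .po files into the .mo files Django actually loads
    django-admin.py compilemessages

The trouble described above is that CMS-managed content has no equivalent round trip unless the CMS itself provides one.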
If you make a CMS and don't have at least the export_po_file and import_po_file management commands, then you are not really multilingual. It is either that or figuring out your own answers to the above questions.
I have finally found a Django-based CMS that has those - http://pythonhosted.org/django-page-cms/ . Have not really tried it yet, but I am hopeful.
My laptop is just over 3 years old, which is about the point I start to think about a replacement. At present there's nothing that's an obvious contender so I've been looking at an SSD to prolong it by another year or two.
One of the other thoughts I had is that I currently use dm-crypt under Linux to provide whole-disk encryption for everything except the boot partition - I have a bunch of personal financial and immigration documents stored that I'd prefer not to have disclosed if my laptop is stolen. Modern drives have started offering integral AES encryption options, so perhaps I could offload that to the drive (my i5 470UM lacks the hardware instructions for this).
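For context, the dm-crypt setup in question is just the stock LUKS arrangement; a sketch with a hypothetical partition:

    # one-time: format the partition as a LUKS container
    cryptsetup luksFormat /dev/sda2

    # at boot: unlock it, exposing a plaintext block device at
    # /dev/mapper/cryptroot that holds the real filesystem or LVM PV
    cryptsetup luksOpen /dev/sda2 cryptroot

All the AES work there happens on the CPU, which is exactly what makes drive-side encryption tempting on a machine without AES-NI.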
General consensus in the pub (where all the best security advice is to be found) is that no one present trusted SSD firmware authors to not use some badly chosen AES crypto mode, or leave the key lying around plain text in easily readable flash, or some other implementation mishap.
So how hard would it be to retrofit reliable (or at least source-verifiable and thus more trustworthy) crypto to an SSD? There was an impressive article recently about reverse engineering the firmware of an HDD, to the point of modifying data returned to the host and also running Linux on the controller. It seems that SSD firmware should be easier - NAND is simpler to talk to than motors and magnetic sensors, right? It's a case of gluing together a SATA interface, a NAND controller and an AES offload engine, yes?
Aside from the minor matter of finding a suitable drive with an available JTAG interface, a controller with docs (or more likely that can be reverse engineered) and enough time to produce a replacement open firmware, that is.
Alternatively can anyone provide some idea of how secure the available laptop SSDs on the market actually are? I'm fine with "the NSA can read your data if they want" because a determined attacker will be able to find other ways to get my data anyway, but I don't want "anyone who finds the drive can use this loophole in the firmware by wiggling some bits with jtag to dump the key and read all your data".
I'm aware of how not-dark our modern city night sky is, but sometimes it still heavily surprises me how full of light it actually is.
Yesterday I went about five kilometres out of Zürich; I thought at that distance, on a reasonably dark hill, it would be good enough for some night-sky shots.
So I set up my tripod, only to realise I can't expose over 30 seconds because (at low ISO and fast aperture, shooting not straight up) everything is bleached out:
I couldn't believe my eyes. Yes, I had heard that one needs to go 100Km out of big cities, but still… So I searched and found that since about 2004, all of Switzerland is light-polluted - even going on top of Jungfrau, for example, wouldn't completely eliminate it. I also learned about Merle Walker's equation - yeah, 200Km or more away from light sources would be good. Not here, in this quite small, quite populated country in the middle of Europe :/… I realised that as soon as my eyes got acclimated to the location I was at, I could easily see everything around me, due to the light pollution.
I miss a really dark sky - I remember a couple of years back, in a different country, stopping at night on the side of the road and being shocked at how the sky was filled with stars. I didn't have any camera with me at that time ☹ Heck, I remember seeing the Milky Way way back as a child, but nowadays when I get out of work, I can barely see 4-5 points of light in the sky.
Anyway, enough with the ranting ☺ Thanks to modern technology, one can recover lots of detail, even in a washed-out picture. So in the end, shooting straight up, I could get some semblance of structure (and this was only at 28mm, which is not that wide):
Or alternatively, one can use the glowing light for a bit of play/contrast:
I also played with stacking images via Deep Sky Stacker, but stacking - I learned to my surprise - only works to reduce noise in the final image, and not to make it "brighter" or more detailed. Live and learn ☺ The result was a bit better than not stacked, but not by much:
This was a 25×5s, ISO 800, f/4, same 28mm lens. What I found surprising is the "non-star object" (to call it so) near the centre of the image - I have no idea what it is, it definitely doesn't look like a star, it could be an elliptical galaxy or so. I tried navigating back in time via Stellarium, but I don't remember the orientation of the lens, so it will remain a mystery to me.
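(The noise-only benefit of stacking makes sense once you write down the arithmetic: averaging N frames leaves the mean signal level unchanged, while uncorrelated noise adds in quadrature, so roughly

    SNR_stacked = sqrt(N) × SNR_single

With my 25 frames that is about a 5× noise reduction, but the image stays exactly as bright, or as washed out, as any single frame.)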
Anyway, two more pictures and higher resolutions on a Smugmug album. Feel free to leave a comment or drop me an email if you have suggestions where to take nice night sky photos in Switzerland…
Procps version 3.3.9 was released today. There have been some API changes and fixes, which means the library has changed again. There is a fine balance between fixing or enhancing library functions and keeping the API stable, with the added problem that it wasn't a terribly good API to start with.
Besides the API change, the following changes were made:
Tar file is at sourceforge at https://sourceforge.net/projects/procps-ng/files/Production/
So, using the command:
root@new# ssh root@old dd if=/dev/vg/somedisk | dd of=/dev/vg/somedisk
appears to fail, getting a SIGTERM at some point for no discernible reason... however, using
root@old# dd if=/dev/vg/somedisk | ssh root@new dd of=/dev/vg/somedisk
works. The pull version fails at a fairly random point after a fairly undefined period of time; the push version works every time. This is most confusing and odd...
Dear lazyweb, please give me some new ideas as to what's going on, it's driving me nuts!
A different daemon wasn't limiting its killing habits in the case that a certain process wasn't running, and was killing the ssh process on the new server almost at random. I found the bug in the code and am now testing with it fixed.
Thanks for all the suggestions though, much appreciated.
The ski season has started again, I went down to Andermatt on Sunday to have some fun.
Here is a video made with my camera phone: