
Feed aggregator

Daniel Pocock: SFK 2019 and next Prishtina Toastmasters meeting

Planet Ubuntu - Yesterday, 07/04/2019 - 11:14am

I'm visiting Kosovo again for a few days.

Next Prishtina Toastmasters meeting

The next meeting of the Prishtina Toastmasters group will take place at the Innovation Centre Kosovo (ICK) on Monday, 8 April at 18:00. Location on OpenStreetMap.

There is no entrance fee; all are welcome.

The first event attracted a diverse group of people, including students, young professionals, entrepreneurs and ex-pats living in Kosovo.

Free as in lunch

SFK 2019 has been a great success for everybody. The main venue for talks was the Kino Armata. The SFK logo, a small bird, greeted us on the first day:

but by the second day it was gone. Evidence of a violent struggle and a few feathers are all that remain. Who could be responsible?

OSCAL, May 2019

The next major free software event in the region will be OSCAL on 18-19 May 2019 in Tirana, Albania.

Sam Thursfield: The Lesson Planalyzer

Planet GNOME - Fri, 05/04/2019 - 5:23pm

I’ve now been working as a teacher for 8 months. There are a lot of things I like about the job. One thing I like is that every day brings a new deadline. That sounds bad, right? It’s not: one day I prepare a class, the next day I deliver the class one or more times, and I get instant feedback on it right there and then from the students. I’ve seen enough of the software industry, and the music industry, to know that such a quick feedback loop is a real privilege!

Creating a lesson plan can be a slow and sometimes frustrating process, but the more plans I write the more I can draw on things I’ve done before. I’ve planned and delivered over 175 different lessons already. It’s sometimes hard to know if I’m repeating myself or not, or if I could be reusing an activity from a past lesson, so I’ve been looking for easy ways to look back at all my old lesson plans.

Search

GNOME’s Tracker search engine provides a good starting point for searching a set of lesson plans: I can put the plans in my ~/Documents folder, open the folder in Nautilus, and then type a term like "present perfect" into the search bar.

The results aren’t as helpful as they could be, though. I can only see a short snippet of the text in each document, when I really need to see the whole paragraph for the result to be directly useful. Also, the search returns anything where the words present and perfect appear, so we could be talking about tenses, or birthdays, or presentation skills. I wanted a better approach.

Reading .docx files

My lesson plans have a fairly regular structure. An all-purpose search tool doesn’t know anything about my personal approach to writing lesson plans, though. I decided to try writing my own tool to extract more structured information from the documents. The plans are in .docx format¹, which is remarkably easy to parse — you just need the Python ‘zipfile’ and ‘xml’ modules, and some guesswork to figure out what the XML elements mean. I was surprised not to find a Python library that already did this for me, but in the end I wrote a very basic .docx helper module, and I used this to create a tool that read my existing lesson plans and dumped the data as a JSON document.
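As a rough sketch of that recipe (this is not the author’s helper module, just the zipfile-plus-ElementTree approach the paragraph describes; w:p and w:t are the WordprocessingML elements for paragraphs and text runs):

import zipfile
import xml.etree.ElementTree as ET

# WordprocessingML namespace used inside word/document.xml.
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def docx_paragraphs(path):
    """Yield the plain text of each paragraph in a .docx file."""
    with zipfile.ZipFile(path) as archive:
        root = ET.fromstring(archive.read("word/document.xml"))
    for para in root.iter(W + "p"):                                # w:p = paragraph
        text = "".join(t.text or "" for t in para.iter(W + "t"))   # w:t = text run
        if text:
            yield text

# Dump a lesson plan as a JSON document, as described above.
if __name__ == "__main__":
    import json, sys
    print(json.dumps(list(docx_paragraphs(sys.argv[1])), indent=2))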

It works reliably! In a few cases I chose to update documents rather than add code to the tool to deal with formatting inconsistencies. Also, the tool currently throws away all formatting information, but I barely notice.

Web and desktop apps

From there, of course, things got out of control and I started writing a simple web application to display and search the lesson plans. Two months of sporadic effort later, and I just made a prototype release of The Lesson Planalyzer. It remains to be seen how useful it is for anyone, including me, but it’s very satisfying to have gone from an idea to a prototype application in such a short time. Here’s an ugly screenshot, which displays a couple of example lesson plans that I found online.

The user interface is HTML5, made using Bootstrap and a couple of other cool JavaScript libraries (which I might mention in a separate blog post). I’ve wrapped that up in a basic GTK application, which runs a tiny HTTP server and uses a WebKitWebView to display its output. The desktop application has a couple of features that can’t be implemented inside a browser: one is the ability to open plan documents directly in LibreOffice; the other is a dedicated entry in the alt+tab menu.
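For a flavour of that architecture, here is a minimal sketch (not the Planalyzer’s actual code; the window title and the serve-from-current-directory choice are invented for illustration):

import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

import gi
gi.require_version("Gtk", "3.0")
gi.require_version("WebKit2", "4.0")
from gi.repository import Gtk, WebKit2

# Serve the HTML5 UI from the current directory on an ephemeral local port.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Wrapping the web UI in a plain GTK window is what gives the app
# its own alt+tab entry, unlike a browser tab.
window = Gtk.Window(title="Planalyzer")
view = WebKit2.WebView()
view.load_uri("http://127.0.0.1:%d/" % server.server_address[1])
window.add(view)
window.set_default_size(1024, 768)
window.connect("destroy", Gtk.main_quit)
window.show_all()
Gtk.main()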

If you’re curious, you can see the source at https://gitlab.com/samthursfield/planalyzer/. Let me know if you think it might be useful for you!

1. I need to be able to print the documents on computers which don’t have LibreOffice available, so they are all in .docx format.

Sean Davis: Parole Media Player 1.0.2 Released

Planet Ubuntu - Fri, 05/04/2019 - 3:50am

A new (more) stable version of the Xfce media player is now available! Parole 1.0.2 fixes several bugs and improves packaged releases for distributions.

What’s New?

Bug fixes. So… many… fixes!

Build Fixes
  • Fixed compiler error -Wcast-function-type with GCC 8
  • Fixed Appstream validation by removing <em></em> tags from translations (Xfce #14260)
  • Resolved g_type_class_add_private warnings (Xfce #15014)
Playback
  • Fixed play button sensitivity when items are added to the playlist (Xfce #13724, LP #1705243)
  • Improved support for missing GStreamer plugin installers (Xfce #14529)
Plugins Manager
  • Fixed crash when opening files after disabling plugins (LP #1698540)
  • Fixed disabling plugins enabled by distributions (e.g. MPRIS2 in Xubuntu)
  • Fixed display of active/inactive plugins when reopening the Plugins Manager
Translation Updates

Albanian, Arabic, Asturian, Basque, Belarusian, Bulgarian, Catalan, Chinese (China), Chinese (Taiwan), Croatian, Czech, Danish, Dutch, English (Australia), Finnish, French, Galician, German, Greek, Hebrew, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kazakh, Korean, Lithuanian, Malay, Norwegian Bokmal, Occitan (post 1500), Polish, Portuguese, Portuguese (Brazil), Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Telugu, Thai, Turkish, Uighur, Ukrainian

Downloads

Parole Media Player 1.0.2 is included in Xubuntu 19.04, with other distributions likely adding it soon. If you can’t wait or want to install from source, download it below.

Source tarball (md5, sha1, sha256)

Jonathan Riddell: Add Appstream Release Data to your App Releases

Planet Ubuntu - Thu, 04/04/2019 - 3:50pm

AppStream is a metadata standard for your software releases which gets used by package managers and app stores, as well as web sites such as kde.org (one day at least).

If you are in charge of making releases of an application from KDE, mind and make sure it has an AppStream appdata file. You should also include a screenshot, preferably in the product-screenshots git repo.

You should also add release data to your AppStream files. See the docs for the full details. Not all the data will be very practical to add before release time, but it is useful to at least have a version number and maybe a release date added in.
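For illustration, the release data in an appdata/metainfo file is a small XML block like this (the version number and date here are invented):

<releases>
  <release version="1.2.3" date="2019-04-04"/>
</releases>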

I’ve added this to the Releasing Software wiki page now. And I’ve written a wee script appstream-metainfo-release-update to update the XML with a simple command which I’ve now added to the Plasma release process.


Ian Jackson: Planar graph layout, straight line drawing

Planet Debian - Thu, 04/04/2019 - 1:56pm
My project to make an alternative board for Pandemic Rising Tide needed a program to lay out a planar graph, choosing exact coordinates for the vertices.

(The vertices in question are the vertices of the graph which is the dual of the adjacency graph of the board "squares" - what Pandemic Rising Tide calls Regions. For gameplay reasons the layout wants to be a straight line drawing - that is, one where every boundary is a straight line.)
Existing software

I found that this problem was not well handled by existing Free Software. The leading contender, graphviz, generally produces non-planar layouts even for planar inputs; and it does not provide a way to specify the planar embedding. There are some implementations of "straight line drawing" algorithms from the literature, but while these produce layouts which meet the letter of the requirement that the drawing consist only of nonintersecting straight lines, they are very ugly and totally unsuitable for use as a game board layout.

My web searches for solutions to this problem yielded only forum postings etc. where people were asking roughly this question and not getting a satisfactory answer.

I have some experience with computer optimisation algorithms and I thought this should be a tractable problem, so I set out to solve it - well, at least well enough for my purposes.
My approach

My plan was to use one of the algorithms from the literature to generate a straight line drawing, and then use cost-driven nonlinear optimisation to shuffle the vertices about into something pretty and useable.

Helpfully Boost provides an implementation of Chrobak & Payne's straight line drawing algorithm. Unfortunately Boost's other planar graph functions were not suitable because they do not remember which face is the outer face. (In planar graph theory and algorithms the region outside the graph drawing is treated as a face, called the outer face.) So I also had to write my own implementations of various preparatory algorithms - yet more yak shaving before I could get to the really hard part.

Having been on a Rust jag recently, I decided on Rust as my implementation language. I don't regret this choice, although it did add a couple of yaks.
Cost function and constraints

My cost function has a number of components:
  • I wanted to minimise the edge lengths.
  • But there was a minimum edge length (for both gameplay and aesthetic reasons)
  • Also I wanted to avoid the faces having sharp corners (ie, small angles between edges at the same vertex)
  • And of course I needed the edges to still come out of each vertex in the right order.
You will notice that two of these are not costs, but constraints. Different optimisation algorithms handle this differently.

Also "the edges to still come out of each vertex in the right order" is hard to express as a continuous quantity. (Almost all of these algorithms demand that constraints take the form of a function which is to be nonnegative, or some such.) My solution is, at each vertex, to add up the angles between successive edges (in the intended order, and always treating each direction difference as a positive angle). Ie, to add up the face corner angles. They should sum to tau: if so, we have gone round once and the order is right. If the edges are out of order, we'll end up going round more than once. If the sum was only tau too much, I defined the violation quantity to be tau minus the largest corner angle; this is right because probably it's just that two edges next to each other are out of order and the face angle has become "negative"; this also means that for a non-violating vertex, the violation quantity is negative but still represents how close to violation we are. (For larger corner angle sums, I added half of the additional angle sum as an additional violation quantity. That seemed good enough in the end.)
Simulated annealing - and visual debug of the optimisation

My first attempt used GSL's simulated annealing functions. I have had reasonable success with siman in the past. The constraints are folded into the cost function. (An alternative approach is to somehow deal with them in the random step function, eg by adjusting violating layouts to similar non-violating ones, but that seemed quite tricky here.)

Siman did not seem to be working at all.

I was hampered by not knowing what was going on so I wrote a visual debug utility which would let me observe the candidate layouts being tried, in real time. (I should have taken my first instinct and done it in Tcl/Tk, but initially Qt seemed like it would be easier. But in the end I had to fight several of Qt's built-in behaviours.)

The visual debug showed me the graph randomly jiggling about without any sign of progress. It was clear that if this was going to work at all it would be far too slow.
More suitable optimisation algorithm

I felt that a gradient descent algorithm, or something like one, would work well for this problem. It didn't seem to me that there would be troublesome local minima. More web searching led me to Steven G. Johnson's very useful NLopt library. As well as having implementations of algorithms I thought would work well, it offered the ability to change algorithm without having to deal with a whole new API.

I quickly found that NLopt's Sbplx algorithm (T. Rowan's Subplex algorithm, reimplemented) did fairly well. That algorithm does not support constraints but the grandly-named Augmented Lagrangian Method can handle that: it adds the constraint violations to the cost. It then reruns the optimisation, cranking up the constraint violation cost factor until none of the constraints are violated by more than the tolerance.
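For the curious, the combination described above looks roughly like this in NLopt's Python bindings (a sketch only: the objective and constraint here are trivial placeholders, not the real layout cost):

import nlopt
import numpy as np

def cost(x, grad):
    # Placeholder cost; the real one scores edge lengths, corner angles, etc.
    return float(np.sum(x ** 2))

def min_edge_length(x, grad):
    # Inequality constraint, satisfied when the returned value is <= 0.
    return 1.0 - abs(x[1] - x[0])

n = 4
local = nlopt.opt(nlopt.LN_SBPLX, n)   # Sbplx: derivative-free local descent
local.set_xtol_rel(1e-6)
outer = nlopt.opt(nlopt.AUGLAG, n)     # folds constraint violations into the cost
outer.set_local_optimizer(local)
outer.set_min_objective(cost)
outer.add_inequality_constraint(min_edge_length, 1e-6)
outer.set_xtol_rel(1e-6)
best = outer.optimize(np.full(n, 2.0))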

Unfortunately the Augmented Lagrangian Method can convert a problem with a cost function without local minima, into one which does have bad local minima. The Sbplx algorithm is a kind of descent algorithm so it finds a local minimum and hopes it's what you wanted. But unfortunately for me it wasn't: during the initial optimisation, part of the graph "capsized", violating the edge order constraint and leaving a planar layout impossible. The subsequent cranking up of the constraint violation cost didn't help, I think maybe because my violation cost was not very helpful at guiding the algorithm when things were seriously wrong.

But I fixed this by the simple expedient of adding the edge order constraint with a high cost to my own cost function. The result worked pretty well for my simple tests and for my actual use case. The graph layout optimisation takes a couple of minutes. The results are nice, I think.

I made a screen capture video of the optimisation running. (First the debug build which is slower so captures the early shape better; then again with the release build.)
Software

The planar graph layout tool I wrote is plag-mangler.

It's really not very productised, but I think it will be useful to people who have similar problems. Many of the worst features (eg the bad command line syntax) would be easy to fix. OTOH if you have a graph it does badly on, please do file an issue on salsa, as it will guide me to help make the program more general.
References

See my first post about this project for some proper references to the academic literature etc.

(Edit 2019-04-04 12:55 +0100: Fixed typos and grammar.)


Andrew SB: Setting Up a Domain with SSL on DigitalOcean Kubernetes using ExternalDNS and Helm

Planet Ubuntu - Thu, 04/04/2019 - 6:00am

A little while back I added support for DigitalOcean to the ExternalDNS Helm chart, and I wanted to share my notes on how to use it. ExternalDNS is an extremely convenient tool that allows you to dynamically control DNS records for your Kubernetes resources just by adding an annotation. In this post, I’ll walk through how to install it with Helm and use it to point a domain at a Kubernetes service. I’ll also cover setting up SSL using a DigitalOcean managed SSL certificate on the load balancer.

First, a few assumptions:

Installing ExternalDNS with Helm

With all of that in place, the first thing to do is install ExternalDNS into the cluster. You will need to generate a DigitalOcean API token for it to use. It’s best to create a token specifically for this service rather than using one you may have in your local environment. Then run the following command replacing $DO_API_TOKEN with the token you generated:

helm install --name external-dns \
  --set digitalocean.apiToken=$DO_API_TOKEN,provider=digitalocean,rbac.create=true \
  stable/external-dns

You can verify that it has been successfully installed by running:

$ kubectl get pods -l "app=external-dns"

When ready, the output should look something like this:

NAME                           READY   STATUS    RESTARTS   AGE
external-dns-68bfc948b-jhhrq   1/1     Running   0          34s

Generating a DigitalOcean Managed SSL Certificate

Next, use doctl to generate an SSL certificate managed by DigitalOcean making use of their Let’s Encrypt integration. Giving it a name and replacing example.com with your domain, run:

doctl compute certificate create --name k8s-cert \
  --type lets_encrypt --dns-names example.com

The output will include an ID that looks something like 9r3e053d-da5e-4390-b7b8-0fs23486e41q. You’ll need that in the next step.

Deploying the Kubernetes Service

Now you are ready to deploy your service to the Kubernetes cluster. For this example we are using an NGINX container for the deployment, but that could be any application running in your cluster. The important part for this exercise is the LoadBalancer Service. Here is the full example:

kind: Service
apiVersion: v1
metadata:
  name: https-with-cert
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "example.com"
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "9r3e053d-da5e-4390-b7b8-0fs23486e41q"
spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-example
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-example
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              protocol: TCP

Let’s look a little closer at the annotations section:

annotations:
  external-dns.alpha.kubernetes.io/hostname: "example.com"
  service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
  service.beta.kubernetes.io/do-loadbalancer-certificate-id: "9r3e053d-da5e-4390-b7b8-0fs23486e41q"

Kubernetes annotations are just metadata attached to a Kubernetes object. They can be used for anything from specifying a maintainer for the service to a git commit hash or other release information. They can also be used to pass on information to controllers. In our case, both the DigitalOcean Cloud Controller Manager and the ExternalDNS controller are watching for services created with these annotations. Breaking down each one:

  • external-dns.alpha.kubernetes.io/hostname - Specifies the domain name to be assigned to the service
  • service.beta.kubernetes.io/do-loadbalancer-certificate-id - Specifies the ID of the DigitalOcean managed SSL cert
  • service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https - Configures the load balancer to automatically redirect clients from HTTP to HTTPS

After replacing the domain and certificate ID in the full example and saving it to a file, apply the configuration with:

kubectl apply -f path/to/https-with-domain.yaml

Now let’s take a quick look at the logs for ExternalDNS by running:

kubectl logs \
  `kubectl get pod -l app=external-dns -o jsonpath="{.items[0].metadata.name}"`

When the record has been successfully configured, you will see two lines like:

time="2019-04-04T01:19:11Z" level=info msg="Changing record." action=CREATE record=example.com ttl=300 type=A zone=example.com
time="2019-04-04T01:19:12Z" level=info msg="Changing record." action=CREATE record=example.com ttl=300 type=TXT zone=example.com

You might be wondering why it created two records. ExternalDNS uses TXT records to mark records that it manages. It will not modify any records without a corresponding TXT record.

Wrapping It Up

With the service successfully deployed, it is now available at the configured domain with SSL. If we redeploy the service later, the DNS record will persist even if the underlying IP address were to change. If you’re looking for more detail, here’s some further reading for you:

Iustin Pop: A small presentation on Linux namespaces

Planet Debian - Wed, 03/04/2019 - 9:41pm

Over the weekend I spent time putting together a few slides on Linux namespaces, mostly because I wanted to understand them better (and putting this together helped a lot!), but also because it will be useful to me later, and finally (and really) because I promised a few colleagues I’d explain how all this works :)

So the HTML slides are here, and the source is on github. I put the source up because I’m very sure this has lots of mistakes; not only in the intro where I mention FreeBSD jails and OpenVZ a bit (but I have zero experience with both), but also in the main content, so any corrections are more than welcome.

Writing this, and organising it, was actually much more entertaining than I originally thought. It also made me realise that the kernel-level implementation is very powerful, and—at least to the extent that e.g. Debian uses it by default—it’s basically wasted (a lot of lost opportunity). I know there are some tools to use this, but for example why Firefox is not by default namespaced… I don’t know. Food for later thought. Happy to receive information otherwise, of course.

Most of the information is gathered from man pages, Wikipedia (for the historic bits), blog posts, mailing list archives, etc., so I don’t claim a lot of deep original content; the main idea is just to put all this information together in a single place.

Hope this is useful to somebody else, and again, contributions and re-sharing welcome (CC-BY-SA-4.0).

Christian Schaller: Preparing for Fedora Workstation 30

Planet GNOME - Wed, 03/04/2019 - 9:20pm

I just installed the Fedora Workstation 30 Beta yesterday and so far things are looking great. As many others have reported too, with the GNOME 3.32 update things definitely feel faster and smoother. So I thought it was a good time to talk about what is coming in Fedora Workstation 30 and what we are currently working on.

Fractional Scaling: One of the big features that landed, although still considered experimental, is fractional scaling, which has been a collaboration between Jonas Ådahl here at Red Hat and Marco Trevisan at Canonical. It has taken quite some time since the initial hackfest as it is a complex task, but we are getting close. Fractional scaling is a critical feature for many HiDPI screen laptops to get a desktop size that perfectly fits their screen, being neither too small nor too large.

Screen sharing support for Chrome and Firefox under Wayland: The Wayland security model doesn’t allow any application to freely grab images or streams of the whole desktop like you could under X. This is of course a huge improvement in security, but it did cause some disruption for valid use cases like screen sharing with things like BlueJeans and Google Hangouts. We’ve been working on resolving that with the help of PipeWire. We’ve been at it for some time and things are now coming together. Chrome 73 ships with everything needed to make this work, although you have to turn it on manually (go to this URL to turn it on: chrome://flags/#enable-webrtc-pipewire-capturer). The reason it needs to be manually enabled is not that it is unreliable, it is because the UI is still a little fugly due to a combination of feature overlap between the browser and the desktop and also how the security feature of the desktop is done. We are trying to come up with ways for the UI to be smoother without sacrificing your privacy/security. For Firefox we will keep shipping with our downstream patch until we manage to get it landed upstream.

Firefox for Wayland: Martin Stransky has been hard at work making Firefox able to run natively on Wayland. That work is tantalizingly near, but in the end we decided to postpone it to Fedora Workstation 31 to make sure it is really well polished before releasing it upon the world. The advantage of Wayland-native Firefox is that, in addition to bringing us one step closer to not needing to run an X server (XWayland) all the time, it also enables things like the fractional scaling mentioned above to work for Firefox.

OpenH264 improved: As many of you know, Firefox relies on a library called OpenH264, provided by Cisco, for its H264 video codec support for WebRTC. This library is also provided to Fedora users from Cisco free of charge (you can install it through GNOME Software). However its usefulness has been somewhat limited due to only supporting the baseline profile used for video calling, but not the Main and High profiles used by most online video content. Well, what I can tell you is that Red Hat, Endless and Cisco partnered with Centricular some time ago to add support for decoding those profiles to OpenH264, and that work is now almost complete. The basic code enabling them is already merged, but Jan Schmidt at Centricular is working on fixing a few files that are still giving us problems. As soon as that is generally shipping we hope to get Firefox to be able to use OpenH264 also for things like YouTube playback and of course also use OpenH264 to play back any H264 in GStreamer applications like Totem. So a big thank you to Endless, Cisco and Centricular for working with us on this and thus enabling us to have a legal way to offer H264 support to our users.

NVidia binary driver support under Wayland: We’ve been putting quite a bit of effort into tying off the loose ends for using the NVidia binary driver with Wayland. We did manage to fix a long list of bugs dealing with various colorspace issues, multi-monitor setups and so on. For Intel and AMD graphics users things should actually be pretty good to go at this point. The last major item holding us back on the NVidia side is full support for using the binary driver with XWayland applications (native Wayland applications should work fine already). Adam Jackson worked diligently to get all the pieces in place and we do think we have a model now that will allow NVidia to provide an updated driver that should enable XWayland. As it stands though that driver update is likely to only come out towards the fall, so we will keep defaulting to X for NVidia binary driver users for some time more.

Gaming under Wayland: Olivier Fourdan and Jonas Ådahl have been trying to crush any major reported Wayland bug for quite some time now, and one area where we seem to have rounded the corner is games. Valve has been kind enough to give us the ability to install and run any Steam game for testing purposes, so whenever we found a game giving us trouble we have been able to let Olivier and Jonas reproduce it easily. So on my own gaming box I am now able to run all the Steam games I have under Wayland, including those using Proton, without a hitch. We haven’t tested the full Steam catalog of course, there are thousands, so if your favourite game is still giving you trouble under Wayland, please let us know. Talking about gaming, one area we will try to free up some cycles for going forward is Flatpaks and gaming. We have already done quite a bit of work in this area, with things like the NVidia binary driver extension and the Steam package on Flathub. But we know from leading Linux game devs that there are still some challenges to be resolved, like making host device access for gamepads simpler from within the Flatpak sandbox.

Flatpak Creation in Fedora: Owen Taylor has been in charge of getting Flatpaks building in Fedora, ensuring we can produce Flatpaks from Fedora packages. Owen set up a system to track the Fedora Flatpak status; we have about 10 applications so far, but hope to greatly grow that number over time as we polish up the system. This enables us to start planning for shipping some applications in Fedora Workstation as Flatpaks by default in a future release. This repository will be available by default in Fedora Workstation 30 and you can choose the Flatpak version of a package through the new drop-down box in the top right corner of GNOME Software. For now the RPM version of the package is still the default, but we expect to change that in later releases of Fedora Workstation.

Gedit in GNOME Software with Source drop down box

Fedora Toolbox – Debarshi Ray is leading the effort we call Fedora Toolbox, which is our starting point for our goal to revitalise and revolutionise development on Linux. Fedora Toolbox is trying to take the model of a pet container for development and make it seamless and natural. Our goal is to make it dead simple to create pet containers for your projects, so you can for instance have a Fedora pet container where you develop against the leading-edge libraries and tools in Fedora, a RHEL-based container where you develop against the library versions and tools shipping in RHEL (which makes updating and fixing applications in production a lot easier), and maybe a SteamOS container to work on your little game project. Currently the model is that you have one pet container per OS you’re targeting, but we are pondering whether having one pet container per project would be even better, if we can find good ways to avoid it being a lot of extra overhead (by for example having to re-install all your favourite command line tools in the container) or just outright confusing (which container got what tools and libraries again?). Our goal here though is to ensure Fedora becomes the premier container-native OS out there and thus a natural home for developers doing container development.
We are also working with the team inside Red Hat focusing on AI/ML and trying to ensure that we have a super smooth way for you to get a pet container with things like TensorFlow and CUDA up and running quickly.

Being an excellent platform for OpenShift and Kubernetes development: Together with the Red Hat developer tools organization, we are putting effort into bringing the OpenShift, CodeReady Studio and CodeReady Workspaces tools to Fedora. These tools have so far been very focused on RHEL support, but thanks to Flatpak for CodeReady Studio and web integration for CodeReady Workspaces we now have a path for making them easily available in Fedora too. In the world of Kubernetes, OpenShift is where you want to be, and we want Fedora Workstation to be the ultimate portal for OpenShift development.

Fleet Commander with Active Directory support – We are about to hit a very major milestone with Fleet Commander, our large-scale desktop management tool for Fedora and RHEL. Oliver Gutierrez has been hard at work making it work with Active Directory in addition to the existing FreeIPA support. We know that a majority of people interested in Fleet Commander are only using Active Directory currently, so being able to use Active Directory with Fleet Commander should make this great tool available to a huge number of new users. So if you are managing a university computer lab or a large number of Fedora or RHEL clients in your company, we should soon have a Fleet Commander release out that you can use. And if you are not using Fedora or RHEL today, well, Fleet Commander is a very big reason for switching over!
We will do a proper announcement with further details once the release with Active Directory support is out.

PipeWire – I don’t have a major development to report, just a lot of steady work being done to stabilize and improve PipeWire. As mentioned earlier, we now have Wayland screen sharing and recording working smoothly in the major browsers, which is the user-facing feature I think most of you will notice. Wim is still working on pushing the audio side of it forward, but that is also a huge task. We have started talking about organizing a new hackfest soon to see if we can accelerate the effort further. The likely scenario at this point in time is that we start enabling the JACK side of PipeWire first, maybe as early as Fedora Workstation 31, and then come back and do the PulseAudio replacement as a last stage.

Improved input handling – Another area we keep focusing on is improving input in Fedora. Peter Hutterer and Benjamin Tissoires are working hard on improving the stack. Peter just sent out an extensive RFC for how to deal with high-resolution mice under Linux, and Benjamin has been trying to get support for the Dell Totem landed. Unfortunately neither will make it for Fedora Workstation 30, but we expect to land both before Fedora Workstation 31.

Flicker-free boot
Hans de Goede has continued working on his effort to create a flicker-free boot experience with Fedora. The results of this work are on display in Fedora Workstation 30 and will for most of you now provide a seamless bootup experience. This effort is not so much about functionality as it is about ensuring you have an end-to-end polished experience with your Linux desktop. Things like the constant mode changes we’ve seen in the past contribute to giving Linux an image of being unpolished, and we want Fedora to be the vehicle that breaks down that image.

Ramping up Silverblue

For those of you following Fedora, you are probably aware of Silverblue, which is our effort to re-think the Linux desktop distribution from the ground up and help us take the Linux desktop to a new level. The distribution model hasn’t really changed much over the last 20 years and we have probably polished up the offering as far as we can within the scope of that model. For instance, I upgraded my system to the Fedora 30 beta yesterday and it was a long and tedious process of looking at about 6000 individual packages get updated from the Fedora 29 version to the Fedora 30 version one by one. I didn’t hit a lot of major snags despite this being a beta, but it is screamingly obvious that updating your operating system in this way is both slow and inherently fragile, as any one of those 6000 packages might hit a problem during the upgrade and leave the system in an unknown state, especially since it’s common for packages to run scripts and similar as part of their upgrade.

Silverblue provides a revolutionary replacement for that process. First of all since it ships as a unified image we make life a lot easier for our QE team who can then test and verify against a single image which is in a known state. This in turn ensures that you as a user can feel confident that the new OS version will not break something on your system. And since the new version is just an image stored on your system next to the old one, upgrading is just about rebooting your system. There is no waiting for individual packages to get upgraded, as everything is already there and ready. Compare it to booting into a different kernel version on Fedora, it is quick and trivial.
And this also means that in the unlikely case that there is a problem with the new OS version you can just as easily go back to the previous version, by rebooting again and choosing to boot into that version. So you basically have instant upgrades with instant rollback if needed.
We believe this will radically change the way you look at OS upgrades forever, in fact you might almost forget they are happening.

And since Silverblue will basically be a Flatpak (and other containers) only OS you will have a clean delimitation between OS and applications. This means that even if we do major updates to the host, your applications should remain unaffected by the host OS update.
In fact we have some very interesting developments underway for Flatpak, with some major new efforts that I would love to talk about, but they are tied to some major Red Hat announcements that will happen at this year’s Red Hat Summit, which takes place May 7th – May 9th, so I will leave it as a teaser and then let you all know once the Summit is underway and Red Hat’s related major announcements are done.

There is a lot of work happening around Silverblue and, as it happens, Matthias Clasen wrote a long blog entry about it today. That blog post goes into a lot more detail on some of the Silverblue work items we’ve been doing.

Anyway, I feel really excited about Silverblue, and as we continue to refine the experience and figure out how everything will look in this brave new world, I am sure everyone else will get excited too. Silverblue represents the single biggest evolution of the Linux desktop since the original GNOME and KDE releases back in the late nineties. It is not just about trying to tweak the existing experience, but an attempt at taking a big leap forward and providing an operating system that embodies all that we learned over these last 20 years and provides a natural home for developers and creators of all kinds in our container-centric computing future. Be sure to grab the Silverblue image of the Fedora 30 beta and give it a test run. I recommend activating the flathub.org repo to get started, in order to get a decent range of applications. As we move forward we are working hard to ensure that you have the world of applications available out of the box, so there will be no need to go and enable any 3rd-party repositories, but there is some more work that needs to happen before we can do that.

Summary
So Fedora Workstation 30 is going to be another exciting release, both of the traditional RPM-based Workstation version and of Silverblue, and I hope they will encourage even more people to join our rapidly growing Fedora community. Be sure to join us in #fedora-workstation on freenode IRC to talk!

Podcast Ubuntu Portugal: S01E52 – Querida mudei para Ubuntu!

Planet Ubuntu - Wed, 03/04/2019 - 8:50pm

In this episode, besides our usual banter and the news, we have a special guest! He is João Ribeiro, a journalist at Shifter and a recent convert to Ubuntu, who accepted our invitation for an interview to tell us about his experience migrating to our favourite distribution.
You know the drill: listen, subscribe and share!

  • https://musi.sh/
  • http://dtstyle.net/
  • https://pixelfed.org/
  • https://www.omgubuntu.co.uk/2019/03/firefox-66-out-whats-changed
  • https://ar.al/2019/03/10/indie-web-server/
  • https://www.anandtech.com/show/14060/lenovo-thinkstation-p520-p920-ai-workstations-xeon-plus-quadro-rtx-6000
  • https://www.omgubuntu.co.uk/2019/03/kde-connect-app-sms-features-removed
  • https://www.omgubuntu.co.uk/2019/03/hurrah-kde-connect-is-back-on-the-google-play-store
  • https://www.omgubuntu.co.uk/2019/03/nvidia-jetson-nano-99-computer-for-ai
  • https://www.osnews.com/story/129643/nvidia-announces-99-ai-computer-for-developers-makers-and-researchers/
  • https://www.omgubuntu.co.uk/2019/03/googles-new-game-service-is-based-on-linux-open-source-tech
  • https://www.phoronix.com/scan.php?page=news_item&px=WireGuard-V9-Maybe-Linux-5.2
  • https://www.omgubuntu.co.uk/2019/03/nexdock-2-kickstarter
  • https://discourse.ubuntu.com/t/ubuntu-19-04-disco-dingo-community-wallpaper-competition-vote-here/10224ik
Sponsors

This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound capture, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–arroba–gmail.com.

Attribution and licences

The cover image is Mating cuttlefish, licensed under CC BY 2.0.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License, whose full text can be read here.

This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International licence (CC BY-NC-ND 4.0), whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

Matthias Clasen: Silverblue at 1

Planet GNOME - Wed, 03/04/2019 - 8:07pm

It has been a bit more than a year since we set up the Atomic Workstation SIG. A little later, we settled on the name Silverblue, and did a preview release with Fedora 29.

The recent F30 beta release is a good opportunity to look back. What have we achieved?

When we set out to turn Atomic Workstation into an every-day-usable desktop, we had a list of items that we knew needed to be addressed. As it turns out, we have solved most of them, or are very close to that.

Here is an unsorted list.

Full Flatpak support

GNOME Software already had support for installing Flatpaks a year ago, so this is not 100% new. But the support has been greatly improved with the port to libflatpak – GNOME Software is now using the same code as the Flatpak command line. And more recently, it learned to display information about sandbox permissions, so that users can see what level of system access the installed applications have.

This information is now also available in the new Application Settings panel. The panel also offers some control over permissions and lets you clean up storage per application.

A Flatpak registry

Flathub is a great place to find desktop applications – there are over 500 now. But since we can’t enable Flathub by default, we have looked for an alternative, and started to provide Flatpak apps in the Fedora container registry. This takes advantage of Flatpak’s support for the OCI format, and uses the Fedora module-build-system.

GNOME Software support for rpm-ostree

GNOME Software was designed as an application installer, but it also provides the UI for OS updates and upgrades. On a Silverblue system, that means supporting rpm-ostree. GNOME Software has learned to do this.

Another bit of functionality for which GNOME Software was traditionally talking to PackageKit is Addons. These are things that could be classified as system extensions: fonts, language support, shell extensions, etc. On a Silverblue system, the direct replacement is to use the rpm-ostree layering capability to add such packages to the OS image. GNOME Software knows how to do this now. It is not ideal, since you probably don’t expect to have to reboot your system for installing a font. But it gets us the basic functionality back until we have better solutions for system extensions.

Nvidia driver support

One class of system extensions that I haven’t mentioned in the previous section is drivers. If you have an Nvidia graphics card, you may want the Nvidia driver to make best use of your hardware. The situation with the Nvidia drivers is a little more complicated than with plain rpms, since the rpm needs to match your kernel, and if you don’t have the right driver, your system may boot to a black screen.

These complications are not unique to Silverblue, and the traditional solution for this in Fedora is to use the akmod system to build drivers that match your kernel. With Fedora 30, we put the necessary changes in place in rpm-ostree and the OS image to make this work for Silverblue as well.

Third-party rpms

Fedora contains a lot of apps, but there’s always the odd one that you can’t find in the repositories. A popular app in this category is the Chrome browser. Thankfully, Google provides an rpm that works on Fedora. But, it installs its content into /opt. That is not technically wrong, but causes a problem on Silverblue, since rpm-ostree has so far insisted on keeping packaged content under its tight control in /usr.

Ultimately, we want to see apps shipped as Flatpaks, but for Fedora 30, we have managed to get rpm-ostree to handle this situation, so Chrome and similar 3rd-party rpms can now be installed via package layering on Silverblue.

A toolbox

An important target audience for Fedora Workstation is developers. Not being able to install toolchains and libraries (because the OS is immutable) is obviously not going to make this audience happy.

The short answer is: switch to container-based workflows. It’s the future!

But that doesn’t excuse us from making these workflows easy and convenient for people who are used to the power of the commandline. So, we had to come up with a better answer, and started to develop the toolbox. The toolbox is a commandline tool to take the pain out of working with ‘pet’ containers. With a single command,

toolbox enter

it gives you a ‘traditional’ Fedora environment with dnf,  where you can install the packages you need. The toolbox has the infrastructure to manage multiple named containers, so you can work on different projects in parallel without interference.

What’s missing?

There are many bigger and smaller things that can still be improved – software is never finished. To name just a few:

  • Make IDEs work well with containers on an immutable OS
  • Codec availability and installation
  • Handle “difficult” applications such as virtualbox well
  • Find better ways to handle system extensions

But we’ve come a long way in the one year since I’ve started using Atomic Workstation as my day-to-day OS.

If you want to see for yourself, download the F30 beta image and give it a try!

Debarshi Ray: Fedora Toolbox is now just Toolbox

Planet GNOME - Wed, 03/04/2019 - 7:52pm

Fedora Toolbox has been renamed to just Toolbox. Even though the project is obviously driven by the needs of Fedora Silverblue and uses technologies like Buildah and Podman that are driven by members of the wider Fedora project, it was felt that a toolbox container is a generic concept that appeals to many more communities than just Fedora. You can also think of it as a nod to coreos/toolbox, which served as the original inspiration for the project, and there are plans to use it in Fedora CoreOS too.

If you’re curious, here’s a subset of the discussion that drove the renaming.

There have already been two releases with the new name, so I assume that almost all users have been migrated.

Note that the name of the base OCI image for creating Fedora toolbox containers is still fedora-toolbox for obvious namespacing reasons, but the names of the client-side command line tool, and the overall project itself have changed. That way you could have a debian-toolbox, a centos-toolbox and so on.

It should be obvious, but the Toolbox logo was designed and created by Jakub Steiner.

Lennart Poettering: Walkthrough for Portable Services in Go

Planet GNOME - Wed, 03/04/2019 - 7:36pm
Portable Services Walkthrough (Go Edition)

A few months ago I posted a blog story with a walkthrough of systemd Portable Services. The example service given was written in C, and the image was built with mkosi. In this blog story I'd like to revisit the exercise, but this time focus on a different aspect: modern programming languages like Go and Rust push users a lot more towards static linking of libraries than the usual dynamic linking preferred by C (at least in the way C is used by traditional Linux distributions).

Static linking means we can greatly simplify image building: if we don't have to link against shared libraries during runtime we don't have to include them in the portable service image. And that means pretty much all need for building an image from a Linux distribution of some kind goes away as we'll have next to no dependencies that would require us to rely on a distribution package manager or distribution packages. In fact, as it turns out, we only need as few as three files in the portable service image to be fully functional.

So, let's have a closer look how such an image can be put together. All of the following is available in this git repository.

A Simple Go Service

Let's start with a simple Go service, an HTTP service that simply counts how often a page from it is requested. Here are the sources: main.go — note that I am not a seasoned Go programmer, hence please be gracious.

The service implements systemd's socket activation protocol, and thus can receive bound TCP listener sockets from systemd, using the $LISTEN_PID and $LISTEN_FDS environment variables.

The service will store the counter data in the directory indicated in the $STATE_DIRECTORY environment variable, which happens to be an environment variable current systemd versions set based on the StateDirectory= setting in service files.
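The protocol itself is tiny and language-agnostic; here is a minimal sketch of the receiving end in Python (the walkthrough's real service does the equivalent in Go):

import os
import socket

SD_LISTEN_FDS_START = 3   # systemd passes activated sockets starting at fd 3

def activated_sockets():
    # LISTEN_PID guards against the variables leaking to child processes.
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []
    count = int(os.environ.get("LISTEN_FDS", "0"))
    return [socket.socket(fileno=fd)
            for fd in range(SD_LISTEN_FDS_START, SD_LISTEN_FDS_START + count)]

state_dir = os.environ.get("STATE_DIRECTORY")   # set via StateDirectory=
for listener in activated_sockets():
    connection, _ = listener.accept()           # already bound and listening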

Two Simple Unit Files

When a service shall be managed by systemd a unit file is required. Since the service we are putting together shall be socket activatable, we even have two: portable-walkthrough-go.service (the description of the service binary itself) and portable-walkthrough-go.socket (the description of the sockets to listen on for the service).

These units are not particularly remarkable: the .service file primarily contains the command line to invoke and a StateDirectory= setting to make sure the service when invoked gets its own private state directory under /var/lib/ (and the $STATE_DIRECTORY environment variable is set to the resulting path). The .socket file simply lists 8080 as TCP/IP port to listen on.

An OS Description File

OS images (and that includes portable service images) generally should include an os-release file. Usually, that is provided by the distribution. Since we are building an image without any distribution let's write our own version of such a file. Later on we can use the portablectl inspect command to have a look at this metadata of our image.

Putting it All Together

The four files described above are already every file we need to build our image. Let's now put the portable service image together. For that I've written a Makefile. It contains two relevant rules: the first one builds the static binary from the Go program sources. The second one then puts together a squashfs file system combining the following:

  1. The compiled, statically linked service binary
  2. The two systemd unit files
  3. The os-release file
  4. A couple of empty directories such as /proc/, /sys/, /dev/ and so on that need to be over-mounted with the respective kernel API file system. We need to create them as empty directories here since Linux insists on directories to exist in order to over-mount them, and since the image we are building is going to be an immutable read-only image (squashfs) these directories cannot be created dynamically when the portable image is mounted.
  5. Two empty files /etc/resolv.conf and /etc/machine-id that can be over-mounted with the same files from the host.

And that's already it. After a quick make we'll have our portable service image portable-walkthrough-go.raw and are ready to go.

Trying it out

Let's now attach the portable service image to our host system:

# portablectl attach ./portable-walkthrough-go.raw
(Matching unit files with prefix 'portable-walkthrough-go'.)
Created directory /etc/systemd/system.attached.
Created directory /etc/systemd/system.attached/portable-walkthrough-go.socket.d.
Written /etc/systemd/system.attached/portable-walkthrough-go.socket.d/20-portable.conf.
Copied /etc/systemd/system.attached/portable-walkthrough-go.socket.
Created directory /etc/systemd/system.attached/portable-walkthrough-go.service.d.
Written /etc/systemd/system.attached/portable-walkthrough-go.service.d/20-portable.conf.
Created symlink /etc/systemd/system.attached/portable-walkthrough-go.service.d/10-profile.conf → /usr/lib/systemd/portable/profile/default/service.conf.
Copied /etc/systemd/system.attached/portable-walkthrough-go.service.
Created symlink /etc/portables/portable-walkthrough-go.raw → /home/lennart/projects/portable-walkthrough-go/portable-walkthrough-go.raw.

The portable service image is now attached to the host, which means we can now go and start it (or even enable it):

# systemctl start portable-walkthrough-go.socket

Let's see if our little web service works, by doing an HTTP request on port 8080:

# curl localhost:8080
Hello! You are visitor #1!

Let's try this again, to check if it counts correctly:

# curl localhost:8080
Hello! You are visitor #2!

Nice! It worked. Let's now stop the service again, and detach the image again:

# systemctl stop portable-walkthrough-go.service portable-walkthrough-go.socket
# portablectl detach portable-walkthrough-go
Removed /etc/systemd/system.attached/portable-walkthrough-go.service.
Removed /etc/systemd/system.attached/portable-walkthrough-go.service.d/10-profile.conf.
Removed /etc/systemd/system.attached/portable-walkthrough-go.service.d/20-portable.conf.
Removed /etc/systemd/system.attached/portable-walkthrough-go.service.d.
Removed /etc/systemd/system.attached/portable-walkthrough-go.socket.
Removed /etc/systemd/system.attached/portable-walkthrough-go.socket.d/20-portable.conf.
Removed /etc/systemd/system.attached/portable-walkthrough-go.socket.d.
Removed /etc/portables/portable-walkthrough-go.raw.
Removed /etc/systemd/system.attached.

And there we go, the portable image file is detached from the host again.

A Couple of Notes
  1. Of course, this is a simplistic example: in real life services will be more than one compiled file, even when statically linked. But you get the idea, and it's very easy to extend the example above to include any additional, auxiliary files in the portable service image.

  2. The service is very nicely sandboxed during runtime: while it runs as regular service on the host (and you thus can watch its logs or do resource management on it like you would do for all other systemd services), it runs in a very restricted environment under a dynamically assigned UID that ceases to exist when the service is stopped again.

  3. Originally I wanted to make the service not only socket activatable but also implement exit-on-idle, i.e. add logic so that the service terminates on its own when there's no ongoing HTTP connection for a while. I couldn't figure out how to do this race-freely in Go though, but I am sure an interested reader might want to add that? By combining socket activation with exit-on-idle we can turn this project into an exercise in putting together an extremely resource-friendly and robust service architecture: the service is started only when needed and terminates when no longer needed. This would allow packing services at a much higher density even on systems with few resources.

  4. While the basic concepts of portable services have been around since systemd 239, it's best to try the above with systemd 241 or newer since the portable service logic received a number of fixes since then.

Further Reading

A low-level document introducing Portable Services is shipped along with systemd.

Please have a look at the blog story from a few months ago that did something very similar with a service written in C.

There are also relevant manual pages: portablectl(1) and systemd-portabled(8).

Sergio Schvezov: Snapcraft 3.3

Planet Ubuntu - Wed, 03/04/2019 - 7:05pm
snapcraft 3.3 is now available on the stable channel of the Snap Store. This is a new minor release building on top of the foundations laid out by the snapcraft 3.0 release. If you are already on the stable channel for snapcraft then all you need to do is wait for the snap to be refreshed. The full release notes are replicated below.

Core base: core

In order to use the new features of snapcraft, introduced with 3.

Mike Gabriel: My Work on Debian LTS/ELTS (March 2019)

Planet Debian - Wed, 03/04/2019 - 3:23pm

In March 2019, I have worked on the Debian LTS project for 14 hours (of 10 hours planned plus 4 hours pulled over from February) and on the Debian ELTS project for another 2 hours (of originally planned 6 hours) as a paid contributor.

LTS Work
  • CVE triaging (ntp, glib2.0, libjpeg-turbo, cron, otrs2, poppler)
  • Sponsor upload to jessie-security (aka LTS): cron (DLA 1723-1 [1])
  • Upload to jessie-security (aka LTS): openssh (DLA 1728-1 [2])
  • Upload to jessie-security (aka LTS): libssh2 (DLA 1730-1 [3])
  • Upload to jessie-security (aka LTS): libav (DLA 1740-1 [4])
ELTS Work
  • Create .debdiff for cron src:pkg targeting wheezy (but I failed to build it due to two issues with Debian 10 as build machine)
  • Discover and document that kernel boot parameter "vsyscall=emulate" is required for building wheezy packages on Debian 10. (See #844350 and #845942 for details).
  • Bug hunt sbuild bug #926161 in sbuild 0.78.1-1 [5]
References

Tristan Van Berkom: FOSSASIA 2019 Report

Planet GNOME - Wed, 03/04/2019 - 12:36pm

Hi,

This post is a broad summary of my experience at FOSSASIA this year in Singapore.

Singapore

This was my first visit to Singapore, and I think it is a very nice and interesting place. The city is very clean (sometimes disturbingly so), the food I encountered was mostly Chinese and Indian, and while selling food out of carts on the street was outlawed some time ago, there is thankfully still a strong culture of street food available in the various “Hawker Centres” (food courts) where the previous street vendors have taken up shop instead.

From my very limited experience there, I would have to recommend roaming around the Chinatown food street and enjoying beer and food (be warned, beer in Singapore is astoundingly expensive!)… Here is a picture of what it looks like.

Many of us ate food here on Friday night

Since the majority of people living in Singapore can speak English, I think this is a great place for a westerner to enjoy their first taste of the Asian experience, without being too disoriented.

The Presentations

The conference took place at the Lifelong Learning Institute this year, and aside from its obvious focus on FOSS, it has a strong focus on education and open hardware. Many students attend the conference, many of whom participated in an exciting hackathon.

There were a lot of talks, so I’ll just summarize some of the talks which I attended and found particularly memorable.

Open Science, Open Mind

In the opening sessions, Lim Tit Meng, who was responsible for hosting FOSSASIA in previous years at Science Centre Singapore, gave an inspirational talk which I thought was quite moving. To quote from the summary of his talk:

Scientific information, discoveries, and inventions should be treated as an open source to benefit as many people and sectors as possible.

There are many reasons for people to get involved in FOSS these days. The ideals of software freedom have been a strong driver; the desire to create software that is superior in quality to software developed in silos has been a strong driver for me. What I took home from Lim Tit Meng’s talk is that FOSS also embodies the spirit of sharing knowledge simply for the good of humanity, that we shouldn’t limit this sharing to software but should extend it across all the sciences, and this is a very powerful idea.

Betrusted & the Case for Trusted I/O

Also on the first day, Bunnie Huang joined us to talk about his project Betrusted, an open hardware design consisting of a simple display, an input device and an FPGA. The idea is to have a hardware design which can be easily audited and checked for tampering, and which can be used to store your private matters separately from a complicated mobile device such as a hand phone or tablet.

I think Bunnie gave a very clear overview of the various attack surfaces we need to care about when considering modern personal computing devices.

The blockchain talks

I attended two talks focused on applications of blockchain technology; these were interesting for people like me who don’t really have any deep understanding of blockchain (or crypto) but would like a higher-level understanding of what kinds of applications blockchain can be used for.

First, Ong Khai Wei from IBM gave a talk entitled What would you build next with Blockchain?, where he shared some of the current applications of blockchain technology and introduced us to Hyperledger, a family of open source blockchain frameworks used for, among other things, managing supply chains.

The other blockchain talk I attended was presented by Jollen Chen, who introduced Flowchain and talked mostly about how we can store and transfer data in a network of IoT devices using Flowchain and IPFS.

Open Source Quantum Computing

Matthew Treinish gave an interesting talk for people like me who know basically nothing about quantum computing. As someone who got interested in quantum computing purely as a hobby, I thought he was perfectly placed to explain things in terms that are simple enough to understand.

Open Source Hardware and Education

This report would be incomplete without a mention of Mitch Altman, a charismatic fellow with an enthusiasm for teaching and inspiring youth to get interested in making things work.

He also gave a workshop in the afternoons where he was teaching people to solder using a selection of kits with neat little lights and speakers.

Open Source Firmware

This was another interesting talk, delivered by Daniel Maslowski and Philipp Deppenwiese; unfortunately I was not able to find a recording of it.

It included a history of open source firmware, Daniel’s story as an end user, and the hoops he needed to jump through in order to upgrade his proprietary firmware.

Finally there was a demo where Daniel successfully bricked a laptop for us using the manufacturer’s closed source BIOS updater, and upgraded the firmware on another laptop using Coreboot (I presume the bricked machine has come back to life by now).

My BuildStream Talk

Yes, I did attend my own talk, although I should say it was by far the worst presentation I have ever given.

The lesson to take home for me is: take the time to understand your target audience and adapt your talk to suit them. My biggest mistake was that I had adapted material from previous presentations, but those presentations had very technical audiences; I could tell as soon as I started that the people in the room had no idea what I was talking about (although I did ask for a show of hands a couple of times and stopped to explain some things which clearly needed explaining).

Instead of explaining how our tooling addresses various problems in existing tooling and how we aim to cleanly separate the “build” problem from the “deployment” problem, I really should have taken a step back and made a presentation about “Why people should care about how their software gets built and integrated” in general.

Closing Ceremonies

On the last day of the conference, we got to see the students who participated in the hackathon present the applications they developed.

The hackathon itself had some interesting guidelines. As UNESCO is one of the primary sponsors of the FOSSASIA event, it seemed fitting that the hackathon competition entries should be related to protecting endangered indigenous languages and culture, in observance of the Year of Indigenous Languages.

The result was truly splendid and this was probably my favorite part of the entire conference. You can watch the young coders presenting their projects here.

FOSSASIA 2019 Organizers and Volunteers

On the closing day there was also a professional photographer taking pictures of anyone who volunteered, so I took the opportunity to get a “GNOME + GitLab” photo, as I was wearing my GUADEC shirt and some of the GitLab development team were also present.

GNOME and GitLab join forces!

Thank you

I’d like to thank Hong Phuc for accepting my paper on such short notice, and all of the organizers and volunteers whose hard work helped to make this such a wonderful event.

And of course, thanks to Codethink for sponsoring my travel and allowing me to participate in this event!

Shirish Agarwal: ASAT and ISRO, DRDO merger rumor

Planet Debian - Mër, 03/04/2019 - 12:48pd

ASAT Test

For the last few days I was away from Pune, as I had gone to attend a workshop funded by Innovation for Change. Unfortunately, I was not able to take part in the workshop, as the travelling proved to be a bit too much in too short a time. While I will share more in another blog post, for the moment I would like to talk about the ASAT test that India conducted. While it’s a positive development, from my perspective there was no need for the Prime Minister to come on stage and declare that we can shoot down a satellite at 3k when China can do the same at 38k. So we have a long way to go as far as parity with China is concerned. While I’m not sharing the source of this information, it is there for anybody to see and figure out if you know how to use the web. There are a few things I should share: I didn’t use any private datasets to get this information, which means it’s easily available online. I did not use Tor or the dark web, otherwise I probably could have got far more material. Thirdly, and more interestingly, if you want to start your search from scratch, ORF could be a good starting point from an Indian point of view, although there are many other such think-tanks which could help you in your research.

The only question I have to ask is: if we are the weaker party, which is clearly the case here, then to whom are we trying to sell this idea if not the Indian public? Chinese military satellites orbit at altitudes ranging from 300 km to 36,000 km, so there is hardly a chance that we would be able to make any significant dent in their military usage. Also, using an ASAT on another country’s satellite would be an act of war. As far as communication satellites are concerned, they are at 36,000 km, in geostationary orbit, so they will not be harmed. There is also a pretty nice animation of this on Wikimedia.

International Politics

While we can understand that Mr. Modi did it for electioneering, it does have an impact internationally. Last year the Chinese did another ASAT test (which the Pentagon guesstimates reached 36k from sea level, based on their ground and space-based instruments). The Chinese statement was quite brief and to the point: they said that they did the test and that it met all the military objectives. This is a sort of perfect statement, which reveals neither what the Chinese military objectives of the test were nor what was accomplished. All other governments either have to rely on their own instrumentation (if they have any in space to spy and be on the lookout for such activities) or rely on the Pentagon’s guesstimates and whichever findings it chooses to make public. The Americans also do well not to show their hand, and may share some information or even misinformation, as this is and would be considered part of information warfare. This is also precisely the reason we have ambassadors, diplomats and others who sit together and engage in nuanced wording. There was no need for an announcement, and even if one were needed, it could have been made by some mid-level executive in DRDO saying something similar to what the Chinese said, probably adding that we have a long road ahead of us, or something like that.

Update, 04/04/2019: Somebody on Twitter shared a link to Dr. Saraswat’s latest interview, which took place a few days back.

The answers were designed in a way so as to show that the UPA govt. didn’t show interest in the ASAT test while the NDA govt. did. Even if we take Dr. Saraswat’s interpretation of how events happened, it still raises more questions than it answers.

  1. By Dr. Saraswat’s own admission, it was an informal presentation. While he didn’t go into the details of what he meant by ‘informal presentation’, it could be something akin to somebody asking me to give an informal presentation on Debian. For that, the most I have to do is collect my thoughts, read up a bit on what’s new and exciting if something catches my eye, and prepare at most 5-7 pages of slides; depending on what kind of organization it is, I would share what Debian is. If, however, somebody asked me to make a presentation on a possible Debian deployment, it would require knowing the details: how small or big is the network? What are the critical points in it (e.g. many shops or small businesses have a custom-designed billing system whose source code they don’t have and which has to run on MS-Windows), and on which other systems could you potentially do the deployment? Apart from the actual deployment, there would be time needed for training, documentation etc., all of which involves hard numbers and time which both parties would have to work through to get some understanding of how this different system works.

2. And this is where my question comes in. The interview also doesn’t mention the time or date when the presentation was given. We all know that 2014 was only a year away; if the presentation was done 6-9 months before the elections, it is very possible that there was no interest, because it would be time-consuming and there were no guarantees of a successful test. In fact, before this test, which was declared a success, there was another test conducted by DRDO which was a failure. This also raises the question of when Dr. Saraswat approached the NDA, or vice versa, and when he started actively working on the project. Did it take 5 years to come to this stage, or 2 years or less? That would give us some more guidance and a way to gauge the future success of the project.

Rumour of Merging DRDO and ISRO

There is also a worrying bit of news that the Government of India is thinking of merging DRDO and ISRO into a structure similar to what the Chinese have for their space program, which I think would be disastrous for the Indian space program, for taxpayers’ money, and for the two organizations themselves.

DRDO work culture

My mother had the honor of serving within a sub-unit of DRDO and was friends with a few scientists, and one of the major grouses of most scientists was the constant shifting of parameters or specifications. To take a very simple example, suppose you are given the specifications of a Maruti 800, a small city car; then a year or a year and a half later you are told that the design specification has changed to a station wagon or a hatchback, and when you start to design for those, the specs are changed again, a year or two later, to a sports car. Any car enthusiast would know that these are three completely different cars, each with its own needs: dimensions, center of gravity, steering, fuel consumption, the works. Extrapolate that to missiles, where more often than not these design changes were requested not by the armed forces, who would be the actual users, but by the bureaucracy, i.e. civil servants, many from the IAS, who instead of consulting and building consensus among the people on both sides simply pushed through whatever opinion they had. Of course interpersonal conflicts also occur, and in spite of all this DRDO manages to do what it does. Because of quite a few such interpersonal conflicts, many a brilliant scientist has been forced to leave DRDO and now serves either private Indian interests or foreign ones, regretting the best productive years they spent at DRDO or whichever sub-unit they were in.

ISRO Work culture

While I do not have relatives working in ISRO, I do and did have friends who work or have worked there. Due to the nature of the work itself, which is more exploratory and peaceful, they are able to collaborate with many educational institutions within India and worldwide, and even with organizations like NASA, ESA and others. The civilian bureaucracy has had a more hands-off approach, which has allowed ISRO to achieve the fantastic things it has. The one thing ISRO could learn from this Government is the ability to find money and do more promotion of the good work it is doing. Even if ISRO were to do 1% of the promotion and merchandising that NASA does, it would more than make its money back, while at the same time inspiring millions of young children to take up challenges in the space sciences.

So from the above, it is pretty clear a merger would be disastrous, as the two have very different mindsets and ways of working. I remember conversing with a military gentleman a couple of years ago, on a short train trip, about some similar topics. The gentleman remarked that it’s not often that things work right the first time in any of the military’s fields of endeavour. When something does, even some small part, they make sure not to disturb or change it, and instead make changes around it, fixing all the other things and processes until there is cohesion. He went on to share some real-life examples from his work which I have since forgotten, but the principle seems good and solid enough, at least to me.

Making Organizations Fun

At the very end, I would like to draw attention to Jonathan Carter’s blog post where he writes about Debian and fun. I found both art pieces most appropriate, not just for the organizations listed above; they should be the calling cards of any organization which believes in genuine stewardship of whatever it has or hopes to take forward.

I would invite everybody with more than a passing interest in the world of computer science to read Jonathan’s and the other potential DPL (Debian Project Leader) candidates’ platforms, as well as their rebuttals. The difference between the two statements or pictures above is that while the first one is an employer-employee model, the second is more of a volunteer, contributor-steward model. The only perks the DPL enjoys are speaking about Debian in sometimes exotic locations, though that is more than tempered by being part of Debian politics and free software politics, which comes with its own rewards and risks and can be pretty tricky, as has been observed over the years.

Kubuntu General News: Kubuntu Disco Dingo (19.04) Beta Released

Planet Ubuntu - Mar, 02/04/2019 - 4:33md

The beta of Disco Dingo (to become 19.04) has now been released, and is available for download at http://cdimage.ubuntu.com/kubuntu/releases/19.04/beta/

This milestone features images for Kubuntu and other Ubuntu flavours.

Pre-releases of the Disco Dingo are not encouraged for:

* Anyone needing a stable system
* Anyone who is not comfortable running into occasional, even frequent breakage.

They are, however, recommended for:

* Ubuntu flavor developers
* Those who want to help in testing, reporting, and fixing bugs as we work towards getting this release ready.

The Beta includes some software updates that are ready for broader testing. However, it is an early set of images, so you should expect some bugs.

You can:

Read more information about the Kubuntu 19.04 Beta: https://wiki.ubuntu.com/DiscoDingo/Beta/Kubuntu

Download the Kubuntu 19.04 Beta images

Read the full text of the main Ubuntu 19.04 Beta announcement:

https://lists.ubuntu.com/archives/ubuntu-release/2019-March/004743.html

The Ubuntu Disco Dingo release notes will give more details of changes to the Ubuntu base: https://wiki.ubuntu.com/DiscoDingo/ReleaseNotes

Reproducible builds folks: Reproducible Builds: Weekly report #205

Planet Debian - Mar, 02/04/2019 - 3:11md

Here’s what happened in the Reproducible Builds project between March 24th and March 30th 2019:

Don’t forget that Reproducible Builds is part of the May/August 2019 round of Outreachy, which offers paid internships to work on free software. Internships are open to applicants around the world and include a stipend for the three-month internship as well as an additional travel stipend to attend conferences. So far we have received more than ten initial enquiries from candidates; the closing date for applications is April 2nd. More information is available on the application page.

Packages reviewed and fixed, and bugs filed

Test framework development
  • We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. The following changes were done this week:

    • Mattia Rizzolo built a static list of SSH host keys [] so that the ssh_config file could be generated from it [], which made it possible to enable OpenSSH’s StrictHostKeyChecking option [][][] (a sketch of such a configuration follows after this list).
    • Holger Levsen added a number of links to pages, including Guix’s challenge command [], the F-Droid tests [] as well as NixOS and openSUSE tests [].
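
To illustrate what the resulting configuration enables, here is a minimal ssh_config sketch; the host pattern and known-hosts path are invented for the example, not taken from the actual jenkins.debian.net setup:

    # illustrative ssh_config stanza (assumed names, see note above)
    Host *.build-node.example.org
        # refuse to connect when the host key differs from the
        # pre-distributed static list of host keys
        StrictHostKeyChecking yes
        UserKnownHostsFile /etc/ssh/ssh_known_hosts_static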

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & Holger Levsen and was reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Simon Raffeiner: The state of the USB-C connector in 2019

Planet Ubuntu - Mar, 02/04/2019 - 2:13md

In episode 2x46 of the Bad Voltage podcast, Stuart Langridge predicts companies will finally embrace the USB-C connector in 2019. This prompts both Jono Bacon and Jeremy Garcia to ask: "What doesn't ship with USB-C today?" It turns out a lot of devices still don't...

The post The state of the USB-C connector in 2019 appeared first on LIEBERBIBER.

Ben Hutchings: Debian LTS work, March 2019

Planet Debian - Mar, 02/04/2019 - 12:12md

I was assigned 20 hours of work by Freexian's Debian LTS initiative and carried over 16.5 hours from February. I worked 22.5 hours and so will carry over 14 hours.

I merged changes from stretch's linux package into the linux-4.9 package, uploaded that, and issued DLA-1715. I made another stable update to Linux 3.16 (3.16.64). I then rebased Debian's linux package on that version, uploaded it, and issued DLA-1731. This unfortunately introduced a regression, which I fixed in a second update.

I also reviewed and merged Emilio Pozuelo Monfort's changes to the firmware-nonfree package to address CVE-2018-5383.
