
prrd 0.0.2: Many improvements

Planet Debian - Fri, 26/01/2018 - 12:30pm

The prrd package was introduced recently, and made it to CRAN shortly thereafter. The idea of prrd is simple, and described in some more detail on its webpage and its GitHub repo. Reverse dependency checks are an important part of package development and are easily done in a (serial) loop. But these checks are also generally embarrassingly parallel as there is little or no interdependency between them (besides maybe shared build dependencies). See the following screenshot (running six parallel workers, arranged in a split byobu session).

This note announces the second, and much improved, release. The package now runs on all operating systems supported by R and no longer has external system requirements. Several functions were improved, two new helper functions were added in a so-far still preliminary form, and everything is more robust now.

The release is summarised in the NEWS entry:

Changes in prrd version 0.0.2 (2018-01-24)
  • The package no longer requires wget.

  • Enhanced sanity checker function.

  • Expanded and improved dequeue function.

  • No longer use $HOME in xvfb-run-safe (#2).

  • The use of xvfb-run is now conditional on the OS (#3).

  • The set of available packages is no longer constrained to CRAN, but could be via the local setup script (#4).

  • The dequeue() function now uses system2().

  • The enqueue() function checks if no reverse dependencies are found and stops (#6).

  • The enqueue() function checks for repository information being set (#5).

CRANberries provides the usual summary of changes to the previous version. See the aforementioned webpage and its repo for details. For more questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

David Tomaschik: Psychological Issues in the Security Industry

Planet Ubuntu - Fri, 26/01/2018 - 9:00am

I’ve unfortunately had the experience of dealing with a number of psychological issues (either personally or through personal connections) during my tenure in the security fold. I hope to shed some light on them and encourage others to take them seriously.

If you are hoping this post will be some grand reveal of security engineers going psychotic and stabbing users who enter passwords into phishing pages with poor grammar and spelling, web site administrators who can’t be bothered to set up HTTPS, and ransomware authors, then I hate to disappoint you. If, on the other hand, you’re interested in observations of people who have experienced various psychological problems while in the security industry, then I’ll probably still disappoint, just not as much.

Impostor Syndrome

According to Wikipedia:

Impostor syndrome is a concept describing individuals who are marked by an inability to internalize their accomplishments and a persistent fear of being exposed as a “fraud”. Despite external evidence of their competence, those exhibiting the syndrome remain convinced that they are frauds and do not deserve the success they have achieved.

I know many, many people in this industry who suffer from this and do not have the ability to recognize their own successes. They may only credit themselves with having “helped out” or done the “non-technical” parts. Some do not take career opportunities or refuse to believe their work is interesting to others.

I, myself, sit on the border of impostor syndrome, and it took me years to be convinced that it was only impostor syndrome and that I am not actually incompetent. Even after promotions, performance reviews “exceeding expectations”, and other signs that a rational individual would take as signs of success, I still believed that I was not doing the right things. I still believe that I’m not as technically strong or effective as any of my coworkers, despite repeated statements by my manager, my skip-level manager, and my director.

I’m not sure if it is possible to “get over” impostor syndrome, but I think most are able to recognize that their self-doubt is a figment of their imagination. It doesn’t necessarily make it easier to swallow, but at a certain point, if you really believe you’re a failure, you will sabotage yourself into being a failure. If you’re concerned about your performance, you don’t have to admit to impostor syndrome, but ask your teammates and manager: am I performing up to your expectations for someone at my level? Am I on track to keep progressing?

Impostor syndrome, though not a diagnosable mental illness, is probably the most common psychological issue faced by those in the security industry. It’s a fast-paced, competitive field, and it’s hard not to compare yourself to others who are more visible and have achieved so much.

Depression

I don’t think I need to define depression. I think it’s important to acknowledge that depression is not the same as “feeling down” or occasionally having a bad day. Most people who have suffered from depression describe it as a feeling that things will never get better or a complete lack of desire to do anything.

Depression doesn’t seem to be quite as widespread as impostor syndrome, but it’s clearly still a big issue. I have known multiple people in the industry who have suffered from depression, and it’s not something that gets “cured” – you just learn how to live with it (and sometimes use medication to help with the worst of it).

Depression is obviously not unique to our field, but I’ve known several people who suffered in silence because of the social aversion/shyness so prevalent in it. I strongly encourage those who think they might have depression to seek professional help. It’s not an easy thing to deal with, and the consequences can be terrible.

Working as part of a team can help with depression by increasing exposure to other individuals. If you currently work remotely/from home, consider a change that gets you out more and spending time with coworkers or others. It’s clearly not a fix or a panacea, but social interactions can help.

Anxiety

Anxiety is an entire spectrum of issues that members of our field deal with. There are many “introverts” in this industry, so social anxiety is a common issue, especially in situations like conferences or other events with large crowds. I use the term introverts loosely, because it turns out many people who call themselves introverted actually like to be around others and enjoy social interactions, but find them hard to do for reasons of anxiety. Social anxiety and introversion, it turns out, are not the same thing. (I’ve heard that shyness is the bottom end of a spectrum that leads to social anxiety at the upper end.)

Beyond social anxiety, we have generalized anxiety disorder. Given that we work in a field where we spend all day long looking at the ways things can fail and the problems that can occur, it’s not surprising that we tend to have a somewhat negative and anxious view of things. This tends to present with anxiety about a variety of topics, and often has panic attacks associated with it.

There are, of course, many other forms of anxiety. I have long had anxiety in the form of so-called “Pure-O” OCD – that is, Obsessive-Compulsive Disorder with only the Obsessive Thoughts and not the Compulsions. This leads to worst-case scenario thinking and an inability to avoid “intrusive” thoughts. It also makes it incredibly hard to manage my work-life balance because I cannot separate my thoughts from my work. I have spent entire weekends unable to do anything because I’ve been thinking about projects I’m dreading or meetings I have the next week. I also tend to obsess about stupid mistakes I make or whether or not I have missed something. At the end of the day, I value certainty and hate the unknowns. (Security is a perfect field for discovering that you don’t handle uncertainty!) At times it can lead to depression as well.

Feeling Overwhelmed (Burn Out)

Obviously this isn’t a diagnosable issue either, but a lot of people in this industry get quite overwhelmed. Burn out is a big problem, and one I’m trying to cope with even as I write this. There’s a number of reasons I see for this:

  1. It’s hard to keep up with this industry. If you just work a 9-5 job in security and spend no time outside that keeping current, I don’t think you’ll have an easy time keeping up.
  2. In many companies, once someone has interacted with you on one project, you’re their permanent “security contact” – and they’ll ask you every question they possibly can.
  3. At least on my team, you’re never able to work on a single thing – I currently have at least a half-dozen projects in parallel. Context switching is very hard for me, and is the fastest way for me to burn out.
  4. A lot of being a security professional is not technical, even if that’s the part you love the most. You’ll spend a big part of your time explaining things to product managers, non-security engineers, and others to get your point across.

I wish I had an instant solution for burnout, but if I did, I probably wouldn’t be feeling burnt out. If you have a supportive manager, get them involved early if you’re feeling burnout coming on. I have a great management chain, but I was too “proud” to admit to approaching burnout (because I viewed it as a personal failure) until I was nearly at the point of rage quitting. I still haven’t really fixed it, but I’ve discussed some steps I’ll be taking over the next couple of months to see if I can get myself back to a sane and productive state.

Conclusion

I’m hoping this is a helpful tour of some of the mental issues I’ve dealt with in my life, my career, and 5 years in the security industry. It’s not easy, and by no means do I think I hold the answers (if I did, I would probably feel a lot better myself), but I think it’s important to recognize these issues exist. Most of them are not unique to our industry, but I feel that our industry tends to exacerbate them when they exist. I hope that we, as an industry and a community, can work to help those who are suffering or have issues to work past them and become more successful members of the community.

Bastian Ilsø Hougaard: GNOME at FOSDEM 2018 – with socks and more!

Planet GNOME - Fri, 26/01/2018 - 1:50am


Sunrise over Hobart seen from Mt Wellington, Tasmania (CC-BY-SA 4.0).

It’s been a while huh? The past six months kept me busy traveling and studying abroad in Australia, but I’m back! With renewed energy, and lots and lots of GNOME socks for everyone. Like previous years, I’m helping out at GNOME’s booth at the FOSDEM 2018 conference.


FOSDEM 2016. (CC-BY-SA 4.0)

I have arranged for a whopping 420 pairs of GNOME socks to be produced, hopefully arriving before my departure. Baby socks, ankle socks, regular socks and even knee socks – maybe I should order an extra suitcase to fill up. Even so, I estimate I can probably bring 150 pairs at max (last year my small luggage held 55 pairs..). Because of the large quantity I’ve designed them to be fairly neutral and “simple” (well, actually the pattern is rather complicated).


Sample sock made prior to production.


Breakdown of the horizontally repeatable sock pattern.

I plan to bring them to FOSDEM 2018, Open Source Days in Copenhagen, FOSS North and GUADEC. However, we have also talked about getting some socks shipped to the US or Asia, although a box of 100 socks weighs a lot, resulting in expensive shipping. So if anyone is going to any of the aforementioned conferences and can keep some pairs in their luggage, let me know!

Apart from GNOME Booth staffing I am also helping out with organizing small newcomer workshops at FOSDEM! If you are coming to FOSDEM and are interested in mentoring one or two newcomers with your project, let us know on the Newcomer workshop page (more details here too). Most of all, I look forward to meeting fellow GNOME people again as I feel I have been gone quite a long time. I miss you!

Matthew Helmke: Attacking Network Protocols

Planet Ubuntu - Thu, 25/01/2018 - 11:22pm

I am always trying to expand the boundaries of my knowledge. While I have a basic understanding of networking and a high-level understanding of security issues, I have never studied or read up on the specifics of packet sniffing or other network traffic security topics. This book changed that.

Attacking Network Protocols: A Hacker’s Guide to Capture, Analysis, and Exploitation takes a network attacker’s perspective while probing topics related to data and system vulnerability over a network. The author, James Forshaw, takes an approach similar to the perspective taken by penetration testers (pen testers), the so-called white hat security people who test a company’s security by trying to break through its defenses. The premise is that if you understand the vulnerabilities and attack vectors, you will be better equipped to protect against them. I agree with that premise.

Most of us in the Free and Open Source software world know about Wireshark and using it to capture network traffic information. This book mentions that tool, but focuses on using a different tool that was written by the author, called CANAPE.Core. Along the way, the author calls out multiple other resources for further study. I like and appreciate that very much! This is a complex topic and even a detailed and technically complex book like this one cannot possibly cover every aspect of the topic in 300 pages. What is covered is clearly expressed, technically deep, and valuable.

The book covers topics ranging from network basics to passive and active traffic capture all the way to the reverse engineering of applications. Along the way Forshaw covers network protocols and their structures, compilers and assemblers, operating system basics, CPU architectures, dissectors, cryptography, and the many causes of vulnerabilities.

Closing the book is an appendix (additional chapter? It isn’t precisely defined, but it is extra content dedicated to a specific topic) that describes a multitude of tools and libraries that the author finds useful, but may not have had an excuse to mention earlier in the book. This provides a set of signposts for the reader to follow for further research and is, again, much appreciated.

While I admit I am a novice in this domain, I found the book helpful, interesting, and of sufficient depth to be immediately useful, with enough high-level description and clarification to give me the context and ideas for further study.

Disclosure: I was given my copy of this book by the publisher as a review copy. See also: Are All Book Reviews Positive?

Adrien Plazas: GTK+ Apps on Phones

Planet GNOME - Thu, 25/01/2018 - 4:42pm

As some of you may already know, I recently joined Purism to help develop GTK+ apps for the upcoming Librem 5 phone.

Purism and GNOME share a lot of ideas and values, so the GNOME HIG and GNOME apps are what we will focus on primarily: we will do all we can not to fork nor reinvent the wheel, but to help existing GTK+ applications work on phones.

How Fit are Existing GTK+ Apps?

Phones are very different from laptops and even tablets: their screen is very small and their main input method is a single thumb on a touchscreen. Luckily, many GNOME applications are touch-friendly and fit for small screens. Many applications present you with a tree of information you can browse, and I see two main layouts used by GNOME applications to let you navigate it.

A first kind of layout is found in applications like Documents; I'll call it stack UI: it uses all the available space to display the collection of information sources (in that case, documents), and clicking an element from the collection will focus on it, displaying its content stacked on top of the collection and letting you go back to the collection with a Back button. Applications sporting this layout are the most phone-enabled ones, as phone apps typically follow a similar layout. Some polish may be needed to make them shine on a phone, but overall not much. Other applications using this layout are Music, Videos, Games, Boxes…
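
To make the stack layout concrete, here is a minimal PyGObject sketch of the pattern (purely illustrative code written for this post, not taken from Documents or any other application): a collection list and a detail view share a GtkStack, and a Back button in the header bar returns to the collection.

# Illustrative "stack UI": collection and detail views share a GtkStack.
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

class StackWindow(Gtk.ApplicationWindow):
    def __init__(self, app):
        super().__init__(application=app, title="Stack UI demo",
                         default_width=360, default_height=640)
        header = Gtk.HeaderBar(show_close_button=True)
        self.back = Gtk.Button.new_from_icon_name("go-previous-symbolic",
                                                  Gtk.IconSize.BUTTON)
        self.back.set_no_show_all(True)          # only shown on the detail page
        self.back.connect("clicked", self.show_collection)
        header.pack_start(self.back)
        self.set_titlebar(header)

        self.stack = Gtk.Stack(transition_type=Gtk.StackTransitionType.SLIDE_LEFT_RIGHT)
        self.add(self.stack)

        # Collection page: a simple list filling the whole window.
        collection = Gtk.ListBox()
        for name in ("Document A", "Document B", "Document C"):
            row = Gtk.ListBoxRow()
            row.add(Gtk.Label(label=name, xalign=0, margin=12))
            collection.add(row)
        collection.connect("row-activated", self.show_detail)
        self.stack.add_named(collection, "collection")

        # Detail page: stacked on top of the collection when an item is chosen.
        self.detail = Gtk.Label()
        self.stack.add_named(self.detail, "detail")

    def show_detail(self, listbox, row):
        self.detail.set_text("Content of " + row.get_child().get_text())
        self.stack.set_visible_child_name("detail")
        self.back.show()

    def show_collection(self, button):
        self.stack.set_visible_child_name("collection")
        self.back.hide()

app = Gtk.Application(application_id="org.example.StackDemo")
app.connect("activate", lambda a: StackWindow(a).show_all())
app.run(None)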

A second kind of layout is found in applications like Contacts; I'll call it panel UI: it displays all the levels of information side by side in panels: the closer the information is to the left, the closer it is to the root, with each selected node of the information tree being highlighted. This is nice if you have enough window space to display all this information and the user doesn't need to focus on the leaves, as it allows quickly jumping to other elements of the collection. Unfortunately, window space is scarce on phones, so these applications would need to be adjusted to fit their screens. Other applications using this layout are Settings, Geary, Polari, FeedReader…

Of course, other layouts exist and are used, but I won't cover these here.

Stack UIs respond to size changes by displaying more or less of the current level of information, but panel UIs tend to arbitrarily limit the minimum size of the window to keep displaying all the levels of information, even though some may not be needed. The responsibility of handling the layout and sizes to display more or less of some levels of information is sometimes offloaded to the user via the use of GtkPaned; the user then has to manually adjust which information to hide or display by changing the width of columns every time they need access to other information or when the window's size changes. A notable example of hurtful GtkPaned usage is Geary, which can be a bit of a pain to use half-maximized on a 1920×1080 screen.

Responsive GTK+ Apps

Panel UIs need to be smarter and should decide, depending on the window's size, which information is relevant to the user and should be displayed. As we don't want to replace the current experience but to extend it, the UIs need to respond to window size changes and explicit focus change requests.

One way of doing it would be to stack the panels one on top of the other to show only one at a time, adding extra Back buttons as needed, effectively switching the UI between panels and a stack.

Another one would be to have floating panels like on KDE Discover. I am not a fan of this method, but on the other hand I'm not a designer.

I expect that to make applications like Geary easier to use even on laptops.

Implementing GtkResponsiveBox

I will try to implement a widget I call GtkResponsiveBox. It contains two children displayed side by side when the box's size is above a given threshold and only one of them when the size is below it.

I expect this widget to look like a weird mix of GtkPaned and GtkStack, to be orientable and to have the following sizes:

  • minimal size = max (widget 1 minimal size, widget 2 minimal size)
  • natural size = widget 1 natural size + widget 2 natural size
  • threshold size = widget 1 minimal size + widget 2 minimal size
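
As a rough illustration of those rules, here is a minimal sketch in plain Python (GtkResponsiveBox does not exist yet, and the numbers below are made up) of how the box could derive its size requests and the number of visible children:

# Illustrative sketch of the proposed GtkResponsiveBox sizing rules.
# Sizes are plain integers along the box's orientation (e.g. widths).

def responsive_box_sizes(child1_min, child1_nat, child2_min, child2_nat):
    """Return the (minimal, natural, threshold) sizes proposed above."""
    minimal = max(child1_min, child2_min)    # only one child is shown at minimum
    natural = child1_nat + child2_nat        # both children at their natural size
    threshold = child1_min + child2_min      # below this, collapse to one child
    return minimal, natural, threshold

def children_to_show(allocated, child1_min, child2_min):
    """Show both children only when the allocation reaches the threshold."""
    return 2 if allocated >= child1_min + child2_min else 1

# Made-up example: a 200 px sidebar next to a 400 px content view.
print(responsive_box_sizes(200, 250, 400, 500))  # (400, 750, 600)
print(children_to_show(640, 200, 400))           # 2 -> shown side by side
print(children_to_show(360, 200, 400))           # 1 -> only one shown at a time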

I am not completely sure yet how to implement it nor if this widget is a good idea overall. Don't expect anything working soon as it's the first time I subclass GtkContainer. I'll let you know how implementing the widget goes, but in the meantime any comment is welcome!

Thanks Philip for your guide detailing how to implement a custom container, it helps me a lot!

Changes in Prometheus 2.0

Planet Debian - Thu, 25/01/2018 - 1:00am

This is one part of my coverage of KubeCon Austin 2017.

2017 was a big year for the Prometheus project, as it published its 2.0 release in November. The new release ships numerous bug fixes, new features and, notably, a new storage engine that brings major performance improvements. This comes at the cost of incompatible changes to the storage and configuration-file formats. An overview of Prometheus and its new release was presented to the Kubernetes community in a talk held during KubeCon + CloudNativeCon. This article covers what changed in this new release and what is brewing next in the Prometheus community; it is a companion to this article, which provided a general introduction to monitoring with Prometheus.

What changed

Orchestration systems like Kubernetes regularly replace entire fleets of containers for deployments, which means rapid changes in parameters (or "labels" in Prometheus-talk) like hostnames or IP addresses. This was creating significant performance problems in Prometheus 1.0, which wasn't designed for such changes. To correct this, Prometheus ships a new storage engine that was specifically designed to handle continuously changing labels. This was tested by monitoring a Kubernetes cluster where 50% of the pods would be swapped every 10 minutes; the new design was proven to be much more effective. The new engine boasts a hundred-fold I/O performance improvement, a three-fold improvement in CPU, five-fold in memory usage, and increased space efficiency. This impacts container deployments, but it also means improvements for any configuration as well. Anecdotally, there was no noticeable extra load on the servers where I deployed Prometheus, at least nothing that the previous monitoring tool (Munin) could detect.

Prometheus 2.0 also brings new features like snapshot backups. The project has a longstanding design wart regarding data volatility: backups are deemed to be unnecessary in Prometheus because metrics data is considered disposable. According to Goutham Veeramanchaneni, one of the presenters at KubeCon, "this approach apparently doesn't work for the enterprise". Backups were possible in 1.x, but they involved using filesystem snapshots and stopping the server to get a consistent view of the on-disk storage. This implied downtime, which was unacceptable for certain production deployments. Thanks again to the new storage engine, Prometheus can now perform fast and consistent backups, triggered through the web API.

Another improvement is a fix to the longstanding staleness handling bug where it would take up to five minutes for Prometheus to notice when a target disappeared. In that case, when polling for new values (or "scraping" as it's called in Prometheus jargon) a failure would make Prometheus reuse the older, stale value, which meant that downtime would go undetected for too long and fail to trigger alerts properly. This would also cause problems with double-counting of some metrics when labels vary in the same measurement.

Another limitation related to staleness is that Prometheus wouldn't work well with scrape intervals above two minutes (instead of the default 15 seconds). Unfortunately, that is still not fixed in Prometheus 2.0 as the problem is more complicated than originally thought, which means there's still a hard limit to how slowly you can fetch metrics from targets. This, in turn, means that Prometheus is not well suited for devices that cannot support sub-minute refresh rates, which, to be fair, is rather uncommon. For slower devices or statistics, a solution might be the node exporter "textfile support", which we mentioned in the previous article, and the pushgateway daemon, which allows pushing results from the targets instead of having the collector pull samples from targets.

The migration path

One downside of this new release is that the upgrade path from the previous version is bumpy: since the storage format changed, Prometheus 2.0 cannot use the previous 1.x data files directly. In his presentation, Veeramanchaneni justified this change by saying this was consistent with the project's API stability promises: the major release was the time to "break everything we wanted to break". For those who can't afford to discard historical data, a possible workaround is to replicate the older 1.8 server to a new 2.0 replica, as the network protocols are still compatible. The older server can then be decommissioned when the retention window (which defaults to fifteen days) closes. While there is some work in progress to provide a way to convert 1.8 data storage to 2.0, new deployments should probably use the 2.0 release directly to avoid this peculiar migration pain.

Another key point in the migration guide is a change in the rules-file format. While 1.x used a custom file format, 2.0 uses YAML, matching the other Prometheus configuration files. Thankfully the promtool command handles this migration automatically. The new format also introduces rule groups, which improve control over the rules execution order. In 1.x, alerting rules were run sequentially but, in 2.0, the groups are executed sequentially and each group can have its own interval. This fixes the longstanding race conditions between dependent rules that create inconsistent results when rules would reuse the same queries. The problem should be fixed between groups, but rule authors still need to be careful of that limitation within a rule group.

Remaining limitations and future

As we saw in the introductory article, Prometheus may not be suitable for all workflows because of its limited default dashboards and alerts, but also because of the lack of data-retention policies. There are, however, discussions about variable per-series retention in Prometheus and native down-sampling support in the storage engine, although this is a feature some developers are not really comfortable with. When asked on IRC, Brian Brazil, one of the lead Prometheus developers, stated that "downsampling is a very hard problem, I don't believe it should be handled in Prometheus".

Besides, it is already possible to selectively delete an old series using the new 2.0 API. But Veeramanchaneni warned that this approach "puts extra pressure on Prometheus and unless you know what you are doing, it's likely that you'll end up shooting yourself in the foot". A more common approach to native archival facilities is to use recording rules to aggregate samples and collect the results in a second server with a slower sampling rate and different retention policy. And of course, the new release features external storage engines that can better support archival features. Those solutions are obviously not suitable for smaller deployments, which therefore need to make hard choices about discarding older samples or getting more disk space.

As part of the staleness improvements, Brazil also started working on "isolation" (the "I" in the ACID acronym) so that queries wouldn't see "partial scrapes". This hasn't made the cut for the 2.0 release, and is still work in progress, with some performance impacts (about 5% CPU and 10% RAM). This work would also be useful when heavy contention occurs in certain scenarios where Prometheus gets stuck on locking. Some of the performance impact could therefore be offset under heavy load.

Another performance improvement mentioned during the talk is an eventual query-engine rewrite. The current query engine can sometimes cause excessive loads for certain expensive queries, according to the Prometheus security guide. The goal would be to optimize the current engine so that those expensive queries wouldn't harm performance.

Finally, another issue I discovered is that 32-bit support is limited in Prometheus 2.0. The Debian package maintainers found that the test suite fails on i386, which led Debian to remove the package from the i386 architecture. It is currently unclear if this is a bug in Prometheus: indeed, it is strange that Debian tests actually pass on other 32-bit architectures like armel. Brazil, in the bug report, argued that "Prometheus isn't going to be very useful on a 32bit machine". The position of the project is currently that "'if it runs, it runs' but no guarantees or effort beyond that from our side".

I had the privilege to meet the Prometheus team at the conference in Austin and was happy to see different consultants and organizations working together on the project. It reminded me of my golden days in the Drupal community: different companies cooperating on the same project in a harmonious environment. If Prometheus can keep that spirit together, it will be a welcome change from the drama that affected certain monitoring software. This new Prometheus release could light a bright path for the future of monitoring in the free software world.

This article first appeared in the Linux Weekly News.

Antoine Beaupré https://anarc.at/tag/debian-planet/ pages tagged debian-planet

Movit 1.6.0 released

Planet Debian - Thu, 25/01/2018 - 12:49am

I just released version 1.6.0 of Movit, my GPU-based video filter library.

The full changelog is below, but what's more interesting is maybe what isn't in it, namely the compute shader version of the high-quality resampling filter I blogged about earlier. It turned out that my benchmark setup was wrong in a sort-of subtle way, and unfortunately biased towards the compute shader. Fixing that negated the speed difference—it was actually usually a few percent slower than the fragment shader version, despite a fair amount of earlier tweaks. (It did use less CPU when setting up new parameters, which was nice for things like continuous zooms, but probably not enough to justify the GPU slowdown.)

Which means that after a month or so of testing and performance tuning, I had to scrap it—it's sad to notice so late (I only realized that something was wrong as I started writing up the final documentation, and figured I couldn't actually justify why I would let one of them chain with other effects and the other one not), but it's a sunk cost, and keeping it in based on known-bad benchmarks would have helped nobody. I've left it in a git branch in case the world should change.

I still believe there are useful gains from compute shaders—in particular, the deinterlacer shines—but it's increasingly clear to me that fragment shaders should remain the default go-to tool for graphics on the GPU. (I guess the next natural target would be the convolution/FFT operators, but they're not all that much used.)

The full changelog reads:

Movit 1.6.0, January 24th, 2018

  • Support for effects that work as compute shaders. Compute shaders are generally slower than fragment shaders for the same algorithm, but allow some forms of communication between shader invocations and have more flexible output, which can enable more efficient algorithms. See effect.h for more details. Note that the fastest rendering API on EffectChain is now to a texture if possible, not to an FBO. This will only matter if the last effect is a compute shader.

  • Movit now includes a compute shader implementation of DeinterlaceEffect, which is automatically used instead of the fragment shader implementation if your GPU and OpenGL driver supports it (in practice, this means on all platforms except on macOS). The compute shader version is typically 20–80% faster than the fragment shader version, depending on your GPU and other factors. A compute shader implementation of ResampleEffect was written but ultimately failed to be faster, and so is not included.

  • Support for microbenchmarks of effects through the Google microbenchmarking framework (optional). Currently, DeinterlaceEffect and ResampleEffect have benchmarks; enable them by running the unit test with --benchmark (also try --benchmark --help).

  • Effects can now explicitly request _not_ to have mipmaps, which means they can do so without needing to request bounce and fiddling with the sampler state. Note that this is an API change for effects.

  • Movit now requires C++11, both to build and to #include the header files. Support for SDL1 has been dropped; unit tests and the demo program now need SDL2.

  • Various smaller bugfixes and optimizations.

Debian packages are on their way up through the NEW queue (there's a soname bump).

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

The Pune Metro 1st anniversary celebrations

Planet Debian - Thu, 25/01/2018 - 12:13am

This would be long. First and foremost, a couple of days ago, I got the following direct message on my twitter handle –

Hi Shirish,

We are glad to inform you that we are celebrating the 1st anniversary of Pune Metro Rail Project & the incorporation of both Nagpur Metro and Pune Metro into Maharashtra Metro Rail Corporation Limited(MahaMetro) on 23rd January at 13:00 hrs followed by the lunch.

On this occasion we would like to invite you to accept a small token of appreciation for your immense support & continued valuable interaction on our social media channels for the project at the hands of Dr. Brijesh Dixit, Managing Director, MahaMetro.

Venue: Hotel Citrus, Opposite PCMC Building, Pimpri-Chinchwad.
Time: 13:00 Hrs
Lunch: 14:00 hrs

Kindly confirm your attendance. Looking forward to meet you.

Regards & Thanks, Pune Metro Team

I went and had an interaction with Mr. Dixit and was gifted a gift card which can be redeemed.

I shared it on facebook. Some people have asked me privately as to what I did.

First of all, let me be very clear. I did not enter into any competition or raise any queries with the aim of getting any sort of monetary benefit at all. I have been a user of public transport both out of necessity and choice and do feel the need for a fast, secure, reasonable mode of transport. I am also immensely passionate and curious about public transport as a whole.

Just to share a couple of facts, and I’m sure most of you will agree with me: it takes more than twice the time if you are taking public transport, at least in India. Part of it is due to the drivers not keeping the GPS on, and people/users not asking or pushing for GPS to be used so that location-based info can tell you when the next bus is going to come. So, for instance, my journey to PCMC was roughly 20 km and took about 45 minutes, but waiting took almost twice that time, and this was not rush hour, when it could easily have taken double the time. Hence people opt for private vehicles even though they know it’s harmful for the environment as well as for themselves and their loved ones.

PMC (Pune Municipal Corporation) has had a plan for over a decade to use GPS to give the traveling public a tentative idea of when the next bus would arrive, but due to a number of reasons (corruption, lack of training, awareness, discipline) it hasn’t happened; all of this hampers citizens’ productivity, people are forced to get private vehicles, and it becomes a zero-sum game. There is much more, but I don’t want to go into that here.

Now people have asked me what sort of suggestions I gave or am giving –

After seeing MahaMetro’s interaction with the press yesterday, it seems the press or media has a very poor understanding of the dynamics and is not really interested in enriching citizens’ understanding of either the Pune Metro or the idea of the Integrated Transport Initiative, which has been in the making for some time now. Part of the issue also seems to lie with Pune Metro not sharing knowledge as much as it could, given the opportunities that digital media/space provides, and at very low cost.

Suggestions and Queries –

1. One of the first things that Pune Metro could make is an animation of how a DPR (Detailed Project Report) is made. I don’t think any of the people from the press, especially the English-language press, have seen the DPR, or otherwise many of the questions would have been answered.

http://www.punemetrorail.org/download/PuneMetro-DPR.pdf

The simplest analogy I can give is: let’s say you want to build a hospital, but the land on which you have to build it belongs to 2-3 parties – so how will you build it? Also, you don’t have money. The DPR is different only in the sense of the scale of things, and construction of the routes is not done by a single contractor but by multiple contractors. A route, say A – B, is divided into 5 parts and people are asked to submit tenders for the packets a company/contractor is interested in.

The way I see it, the DPR has to figure out the right of way where construction of the spans has to be, where the stations have to be built, from where electricity and water have to come, where the maintenance depot will be (usually the depot is at the end), the casting yard for the spans/pillars, etc.

There is a pre-qualification round so that only eligible bidders, who have a history of doing similar-scale work, take part, followed by bidding as to who can do it at the lowest cost against a set reserve price. If no bidder comes forward, for instance because the reserve price is too high from the contractors’ point of view, then the reserve price is lowered. The idea is simply to have price discovery through a method that can be seen as just and fair.

The press seemed to be more interested in making a tiff between the Pune Metro/MahaMetro chief and Guardian Minister Shri Girish Bapat out of something which, to my mind, is a non-issue at this juncture.

Mr. Dixit was absolutely correct in saying that he can’t comment on when the extension to Nigdi will happen unless the DPR for the extension to Nigdi is made, land is found, the various cost heads and expenses are approved by the State and Central Governments, and funding from multiple sources is secured.

The other question which was being raised by the press was the razing of the BRTS in Pune, even though the press knew it is neither Mr. Dixit’s place nor his responsibility to comment upon whatever happens to the BRTS. He can’t even comment, as that would come under the Pune Urban Transport Ministry.

As far as I understand Mr. Dixit’s obligations, they are to build the Pune Metro as safely and as quickly as possible, using good materials, to provide good signage, and to give an efficient public transit service that we Puneites can be proud of.

2. The site http://www.punemetrorail.org/ really needs an update and upgrade. You should use something like WordPress, where you are able to change themes every now and then. Every 3-6 months the theme should be tweaked so the content remains, or at least looks, fresh.

3. There are no time-stamps on any of the videos. At the very least they should have time-stamps so some sort of contextual information is available.

4. There is no way to know if there is news. News should be highlighted and more information shared. For instance, there should have been more info about this particular item –

MoU signed between Dr. Brijesh Dixit, MD MahaMetro, Mrs. Prerna Deshbhratar, Addl Municipal Commissioner(Spl), PMC and Mr Kong Wy Mun, CEO, Singapore Cooperation Enterprise for the Urban Management(Multi-Modal Transport) during a program at Yashada, Pune.

from http://www.punemetrorail.org/projectupdate.aspx

It would have been interesting to know what it means and how the Singapore Government would help us in achieving a unified multi-modal transport model.

There were/are tons of questions that the press could have asked but didn’t, as above and below.

5. The DPR was made in November 2015 and it is now 2018. Prices probably need to be adjusted to take into consideration changes over the past 3 years, and will probably keep changing until 2021.

6. Similarly, there are time gaps between plans and the execution of those plans, and we Puneites don’t even know what the plan is.

I would urge Pune Metro to have a dynamic plan which shows, with blinking lights, the areas in which work is progressing, so it is easy to know which areas are active and which are not. They could be a source of inspiration and a trail-blazer on this.

7. Similarly, another idea which could be done, or might even already be done, is to have a single photograph taken every day at, say, 1200 hrs at all the sites, at 640×480 resolution. These could be uploaded to the site and in turn published onto a separate web page, which over days and weeks could be turned into a time-lapse video similar to what was achieved for the first viaduct video, shot over a day or two –

If you want to make it more interesting and challenging, you could invite students from Symbiosis to build it on something like a Raspberry Pi 2/3 or some other SBC (Single Board Computer), with a camera lens, a solar cell and a modem, with instructions to stagger the image uploads to the Pune Metro Rail portal in case some web traffic is already there. A specific port (not port 80) could be used.

Later on, making a time-lapse video would be as simple as stitching all those photographs together and adding some nice music as filler – something which has already been done once for the viaduct video mentioned above.

8. Tracking planned versus real-time progress – While Mr. Dixit has time and again assured us that things are progressing well, it would be far easier to trust if there were a web service which tells whether things are going according to schedule or are a bit off. It does overlap a bit with my earlier suggestion, but there are many development projects around the world which show tentative and actual progress.

9. Apart from traffic diversion news in major newspapers, it would be nice to also have a section, with blinkers or something, about road diversions which are in effect.

10. Another would be to have an RSS feed of all news found by various search-engine crawlers, with duplicate links removed, sharing the news and views for people to click through and read for themselves.

11. Statistics of jobs (both direct and indirect) created due to Pune Metro works, displayed prominently.

12. Have a glossary of terms, which could easily be put together by having an average 10th-12th standard student go through, say, the DPR and see which terms they have problems with.

The simplest example is the word ‘Reach’ which has been used in a different context in Pune Metro than what is usually understood.

13. Are there, and if so, how many Indian SMEs have been entrusted, either via joint venture or some other way, to ensure knowledge transfer for making and maintaining the rakes, cars/bogies, track, etc.?

14. Has any performance and load guarantee been asked of the various packet holders? If yes, what are the guarantees and for what duration?

These are all low-hanging fruits. Also, I’m no web developer, although I am a bit of a content producer (as can be seen) and a voracious consumer of the web. I do have a few friends, though, if there is a requirement, who understand the medium in a far better and more intimate way than the crude manner I shared above.

A student who believes democracy needs work, and needs effort to make democracy work. If citizens themselves do not ask these questions, who will?

shirishag75 https://flossexperiences.wordpress.com #planet-debian – Experiences in the community

Michael Catanzaro: Announcing Epiphany Technology Preview

Planet GNOME - Thu, 25/01/2018 - 12:09am

If you use macOS, the best way to use a recent development snapshot of WebKit is surely Safari Technology Preview. But until now, there’s been no good way to do so on Linux, short of running a development distribution like Fedora Rawhide.

Enter Epiphany Technology Preview. This is a nightly build of Epiphany, on top of the latest development release of WebKitGTK+, running on the GNOME master Flatpak runtime. The target audience is anyone who wants to assist with Epiphany development by testing the latest code and reporting bugs, so I’ve added the download link to Epiphany’s development page.

Since it uses Flatpak, there are no host dependencies aside from Flatpak itself, so it should work on any system that can run Flatpak. Thanks to the Flatpak sandbox, it’s far more secure than the version of Epiphany provided by your operating system. And of course, you enjoy automatic updates from GNOME Software or any software center that supports Flatpak.

Enjoy!

(P.S. If you want to use the latest stable version instead, with all the benefits provided by Flatpak, get that here.)

Ismael Olea: I'm going to FOSDEM 2018

Planet GNOME - Wed, 24/01/2018 - 8:51pm

Yeah. I finally decided I’m going to FOSDEM this year. 2018 is the year I’m taking back my life as I like it, and the right way to start it is meeting all those friends and colleagues I missed during those years of exile. I plan to attend the beer event as soon as I arrive in Brussels.

If you want to talk to me about GUADEC 2018, Fedora Flock 2018 or whatever please reach me by Twitter (@olea) or Telegram (@IsmaelOlea).

BTW, there are a couple of relevant FOSDEM-related Telegram groups:

General English Telegram group: https://t.me/fosdem

Spanish spoken one: https://t.me/fosdem_ES

PS: A funny thing about FOSDEM is… this is the place where Spanish (or indeed Madrileño) open source enthusiasts can meet once a year… in Brussels!

Michael Meeks: 2018-01-24 Wednesday

Planet GNOME - Wed, 24/01/2018 - 6:37pm
  • Poked mail, chat with Miklos, poked at partner mail and tasks. Sync with Noel & Eloy, customer call, out to see the dentist.
  • Pleased to discover from Pranav that although Linux' skype client inexplicably (and globally) eats the ctrl-alt-shift-D key-combination (without doing anything with it seemingly) - I can still select an online window and do map._docLayer.toggleTileDebugMode().

Ideas for the project architecture and short term goals

Planet Debian - Wed, 24/01/2018 - 5:49pm

There have been many discussions about planning for the FOSS calendar. In this post, I report on some of the ideas.

How I first thought of the FOSS Events calendar

Back in December, when I was just making sense of my surroundings and trying to find a way to start the internship, I drew this diagram to picture in my head how everything would work:

  1. There would be a "crawler.py" module, which would access each site on a determined list (it could be Facebook, Meetup or any other site such as another calendar) that has event information. This module would pull the event data from those sites.

  2. A validator.py would check if the data was good and if there was data. Once this module verified this, it would dump all info into a dirty_events database.

  3. The dirty_events database would be accessed by the module parser.py, which would clean and organize the data to be properly stored in the events database.

  4. An API.py module would query the events database and return the proper data, formatted into JSON, ical and/or some other formats.

  5. There would be a user interface to get data from API.py and to display this data. It should also be possible to add (properly formatted) events to the database using this interface. [If we were talking about a plugin to merely display the events in MoinMoin or Wordpress or some other system, this plugin would fall into this category.]

The ideas that Daniel put on paper

Then, when I shared with my mentors, Daniel came up with this:

Daniel proposed that modules or plugins could be developed or improved (there are some already, but they might not support iCalendar URLs) for MoinMoin, Drupal and WordPress that would allow the event data each of these systems has to be aggregated. Information from the Meetup and the Facebook APIs could be converted to iCal to be aggregated. This aggregation process could happen through a cron job – and I believe daily is enough, because people don't usually publish an event to happen the very next day (they need time for people to acknowledge it). If the time frame ends up not being ideal, this can be reviewed and changed later.

Once all this data is gathered, it would then be stored, inserting it or updating it in what could be a PostgreSQL or NoSQL solution.

Using the database with events information, it should be possible to do a data dump with all the information or to give "reports" of the event data, whether the user wants to access the data in iCalendar format (for Thunderbird or GNOME Evolution) or just HTML for viewing in the browser.
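
To make the aggregation step a bit more concrete, here is a minimal sketch of what one cron-driven pass could look like, assuming plain iCalendar URLs as sources (the URLs are placeholders and the "database" is just a list; a real implementation would use the plugins and storage discussed above):

# Illustrative aggregation pass: fetch iCalendar feeds and collect VEVENT blocks.
from urllib.request import urlopen

FEED_URLS = [
    "https://example.org/moinmoin/events.ics",    # placeholder MoinMoin export
    "https://example.org/wordpress/events.ics",   # placeholder WordPress export
]

def fetch_feed(url):
    """Download one iCalendar feed and return its text."""
    with urlopen(url, timeout=30) as response:
        return response.read().decode("utf-8", errors="replace")

def extract_vevents(ics_text):
    """Naively split an iCalendar file into its VEVENT blocks."""
    events, current, inside = [], [], False
    for line in ics_text.splitlines():
        if line.startswith("BEGIN:VEVENT"):
            inside, current = True, [line]
        elif line.startswith("END:VEVENT") and inside:
            current.append(line)
            events.append("\n".join(current))
            inside = False
        elif inside:
            current.append(line)
    return events

def aggregate():
    all_events = []
    for url in FEED_URLS:
        try:
            all_events.extend(extract_vevents(fetch_feed(url)))
        except OSError as error:
            print("skipping", url, "-", error)   # one broken feed should not stop the run
    # Here the events would be inserted into or updated in the real database.
    print("aggregated", len(all_events), "events")

if __name__ == "__main__":
    aggregate()   # run daily from cron, as discussed above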

Short term goals

Creating a FOSS events calendar is a big project that will most certainly continue beyond my Outreachy internship.

Therefore, along with my mentors, we have established that my short term goal will be to contribute a bit to it by working on the MoinMoin EventCalendar so the events can be exported to the iCalendar format.

I have been studying and playing around with the EventCalendar code and, so far, I've concluded that the best way to do this might be by writing a new function for it. Just like there are other functions in this plugin to change the display of the calendar, there could be a function that simply formats the data as iCalendar and allows downloading the file.
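
As a very rough sketch of the kind of function I have in mind (the event dictionaries and names below are made up for illustration; the real EventCalendar code stores its events differently), formatting event data as iCalendar could look roughly like this:

# Illustrative sketch of exporting event data as an iCalendar (RFC 5545) string.
from datetime import datetime

def events_to_ical(events):
    """Format a list of event dicts as an iCalendar (.ics) string."""
    lines = [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//MoinMoin EventCalendar//EN",
    ]
    for event in events:
        lines += [
            "BEGIN:VEVENT",
            "UID:" + event["id"],
            "DTSTAMP:" + datetime.utcnow().strftime("%Y%m%dT%H%M%SZ"),
            "DTSTART:" + event["start"].strftime("%Y%m%dT%H%M%SZ"),
            "DTEND:" + event["end"].strftime("%Y%m%dT%H%M%SZ"),
            "SUMMARY:" + event["title"],
        ]
        if event.get("description"):
            lines.append("DESCRIPTION:" + event["description"])
        lines.append("END:VEVENT")
    lines.append("END:VCALENDAR")
    return "\r\n".join(lines) + "\r\n"   # iCalendar requires CRLF line endings

# Example with a single made-up event:
sample = [{
    "id": "fosdem-2018@example.org",
    "title": "FOSDEM 2018",
    "start": datetime(2018, 2, 3, 9, 0),
    "end": datetime(2018, 2, 4, 18, 0),
}]
print(events_to_ical(sample))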

Renata https://rsip22.github.io/blog/ Renata's blog

Kubuntu General News: Plasma 5.12 LTS beta available in PPA for testing on Artful & Bionic

Planet Ubuntu - Wed, 24/01/2018 - 4:10pm

Adventurous users, testers and developers running Artful 17.10 or our development release Bionic 18.04 can now test the beta version of Plasma 5.12 LTS.

An upgrade to the required Frameworks 5.42 is also provided.

As with previous betas, this is experimental and is only suggested for people who are prepared for possible bugs and breakages.

In addition, please be prepared to use ppa-purge to revert changes, should the need arise at some point.

Read more about the beta release.

If you want to test then:

sudo add-apt-repository ppa:kubuntu-ppa/beta

and then update packages with

sudo apt update
sudo apt full-upgrade

A Wayland session can be made available at the SDDM login screen by installing the package plasma-workspace-wayland. Please note the information on Wayland sessions in the KDE announcement.

Note: Due to Launchpad builder downtime and maintenance due to Meltdown/Spectre fixes, limiting us to amd64/i386 architectures, these builds may be superseded with a rebuild once the builders are back to normal availability.

The primary purpose of this PPA is to assist testing for bugs and quality of the upcoming final Plasma 5.12 LTS release, due for release by KDE on 6th February.

It is anticipated that Kubuntu Bionic Beaver 18.04 LTS will ship with Plasma 5.12.4, the latest point release of 5.12 LTS available at release date.

Bug reports on the beta itself should be reported to bugs.kde.org.

Packaging bugs can be reported as normal to: Kubuntu PPA bugs: https://bugs.launchpad.net/kubuntu-ppa

Should any issues occur, please provide feedback on our mailing lists [1] or IRC [2]

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net

Stephen Michael Kellat: Damage Report

Planet Ubuntu - Wed, 24/01/2018 - 4:24am

In no particular order:

  • There was a "partial government shutdown" of the federal government of the United States of America. As a federal civil servant, I rated an "essential-excepted" designation this time, which required working without pay until the end of the crisis. Fortunately this did not change my tour of duty. Deroy Murdock has a good write-up of the sordid affair. Not all my co-workers at my bureau in the department were rated "essential-excepted", so I had to staff a specific appointment hotline to reschedule taxpayer appointments, as no specific outreach was made to tell taxpayers that the in-person offices were closed.
  • The federal government of the United States of America remains without full-year appropriations for Fiscal Year 2018. Appropriations are set to lapse again on February 8, 2018. I've talked to family about apportioning costs of the household bills and have told them that even though I am working now nothing is guaranteed. Donations are always accepted via PayPal although they are totally not tax-deductible. I'm open to considering proposed transitions from the federal civil service and the data on LinkedIn is probably a good starting point if anybody wants to talk.
  • Sporadic power outages are starting to erupt in Ashtabula. A sudden outage happened on Monday during my Regular Day Off that stressed the many UPS units littered around the house. Multiple outages happened today that also stressed the UPS units. Nothing too unusual has been going on other than snow has been melting.
  • Finances are balanced on a knife edge. Who ever said a government job was a life of luxury?

Positives:

  • I haven't broken any computers recently
  • I haven't run any cell phones or other electronics through the washer/dryer combo
  • My limited work with the church in mission outreach to one of the local nursing homes is still going on
  • I own my home

It isn't all bad. Tallying up the damage lately has just taken a bit of energy. There has been a lot of bad stuff going on.

Nuritzi Sanchez: Meet Shobha Tyagi from GNOME.Asia Summit 2016

Planet GNOME - Wed, 24/01/2018 - 2:09am

This month’s community spotlight is on Shobha Tyagi, one of the volunteer organizers of GNOME.Asia Summit 2016.

Courtesy of Shobha Tyagi

Shobha’s history with GNOME began when she participated in the Outreach Program for Women (OPW) internship in December 2013, with GNOME as her mentoring organization. She attended her first GUADEC in 2014 while she was an OPW intern, and met Emily Chen, who introduced her to the GNOME.Asia Summit.

Passionate about helping to spread GNOME throughout Asia, Shobha resolved to rise to the challenge of bringing GNOME.Asia Summit to her home in Delhi, India. Fast-forward two years, and Shobha is proudly leading the local organizing team of GNOME.Asia, which is ready to lift its curtain in Delhi on April 21, 2016.

We chatted with Shobha about GNOME and her experience organizing GNOME.Asia.

Why did you choose to work with GNOME for your OPW internship?

To be honest, I thought that since GNOME organizes OPW, I would receive the most productive mentoring from GNOME. Sure enough, that happened! I decided to make my initial contribution to Documentation, and after that I met my guru and mentor, Ekaterina Gerasimova.

Courtesy of Shobha Tyagi

Do you have a favorite thing about GNOME?

My favorite thing about GNOME is its people. The same people who create it, maintain it, and use it – they are what makes GNOME really great. I really enjoy committing my patches directly to the upstream repositories and meeting the contributors in person. I also get great satisfaction whenever I tell people about GNOME and let them know how they can also contribute.

You submitted the winning bid to host GNOME.Asia Summit 2016; do you have any tips for those who are interested to bid for upcoming GNOME conferences?

Sure! It does help if you have attended a GNOME conference in the past, but once you have made up your mind to bid, have faith in yourself and just write your proposal.

Can you describe a challenge you faced while organizing the GNOME.Asia Summit and how you overcame it?

There are many challenges, especially when you are the only one who knows the ins and outs of the event and has a limited amount of time. I’m surrounded by very supportive people. Even so, people expect more from the person who lays the initial groundwork. I thank the summit committee members for their tremendous help and persistence through countless IRC meetings and discussions, without which it would have been impossible to overcome all of the small obstacles throughout the entire planning experience.

What’s the most exciting part about being an organizer?

The most exciting part is learning new things! Writing sponsorship documents, calling for presentations, picking up basic web development skills, identifying keynote speakers, chief guests and sponsors, amongst other things. I learned first-hand what goes into designing logos, posters, and stickers. There were also other tasks that I wouldn’t have had to do in a normal situation, like arranging a day tour to the Taj Mahal for a big group.

Life after GNOME.Asia Summit Delhi; what is going to be your next project?

After the GNOME.Asia Summit, I would like to focus my efforts on establishing a GNOME user group in Delhi.

Advice for eager newcomers and first-time contributors?

My advice for them is to come and join GNOME! GNOME enables you and me to contribute, and when we contribute, we help each other improve our lives. If you are committed, you can commit patches too.

And now, some fun questions. What is your favorite color?

Yellow.

Favorite food?

All vegetarian Indian food.

What is your spirit animal?

Cow! They have a calm demeanor, and symbolize abundance and fertility since they represent both earth and sky.

Finally, and this one is important; what do you think cats dream about?

Cats dream about being loved, cared for and pampered by their master.

Shobha is helping to organize the 2016 GNOME.Asia Summit while working as an Assistant Professor at Manav Rachna International University, and pursuing a doctorate in Software Engineering. She has been a Foundation member since 2014, and has previously contributed to the Documentation team.

Thank you so much, Shobha, for sparing some of your time to talk to us! We wish you a successful Summit!

Interviewed by Adelia Rahim. 

Nuritzi Sanchez: Giving Spotlight | Meet Øyvind Kolås, GEGL maintainer extraordinaire

Planet GNOME - Mër, 24/01/2018 - 2:09pd

Last month, we had the pleasure of interviewing Øyvind Kolås, aka “pippin,” about his work on GEGL — a fundamental technology enabling GIMP and GNOME Photos.

GIMP Stickers, CC-BY-SA Michael Natterer

This interview is part of a “Giving Spotlight” series we are doing on some long-time GNOME contributors who have fundraising campaigns. The goal is to help GNOME users understand the importance of the technologies, get to know the maintainers, and learn how to support them.

Without further ado, we invite you to get to know Øyvind and his work on GEGL!

The following interview was conducted over email. 

Getting to know Øyvind

Where are you from and where are you based?

I’m from the town of Ørsta – at the end of a fjord in Norway – but since high school I’ve been quite migratory: studying fine art in Oslo and Spain, doing color science research at a color lab and lecturing on multimedia CD-ROM authoring in south-eastern Norway, and working on GNOME technologies like Clutter and cairo for OpenedHand and Intel in London, followed by half a decade of low-budget backpacking. At the moment I am based in Norway – and try to keep in touch with a few people and places – among others England, Germany, and Spain.

Øyvind “pippin” Kolås, CC BY-NC-ND Ross Burton

What do you do and how does it relate to GNOME?

I like tinkering with code – frequently code that involves graphics or UI. This results in sometimes useful, at other times odd, but perhaps interesting, tools, infrastructure, or other software artifacts. Through the years and combined with other interests, this has resulted in contributions to cairo and Clutter, as well as being the maintainer of babl and GEGL, which provide pixel handling and processing machinery for GIMP 2.9, 2.10 and beyond.

How did you first get involved in GNOME?

I attended the second GUADEC, which happened in Copenhagen in 2001. It was my first time meeting in person the people behind the nicknames in #gimp, as well as GIMP developers and power users, and the wider community around it, including the GNOME project.

Why is your fundraising campaign important?

I want GIMP to improve and continue being relevant in the future, as well as having a powerful graph-based framework for other imaging tasks. I hope that my continued maintainership of babl/GEGL will enable many new and significant workflows in GIMP and related software, as well as provide a foundation for implementing and distributing experimental image processing filters.

Wilber Week 2017, a hackathon for GEGL and GIMP, CC-BY-SA Debarshi Ray

Getting to know GEGL

How did your project originate?

GEGL’s history starts in 1997 with Rhythm and Hues Studios, a Californian visual effects and animation company. They were experimenting with a 16-bit/high bit depth fork of GIMP known as filmgimp/cinepaint. Rhythm and Hues succeeded in making GIMP work on high bit depth images, but the internal architecture was found to be lacking – and they started GEGL as a more solid future basis for high bit depth, non-destructive editing in GIMP. Their funding/management interest waned, and GEGL development went dormant. GIMP, however, continued considering GEGL to be its future core.

How did you start working on GEGL?

I’ve been making and using graphics-related software since the early ’90s. In 2003–2004 I made a video editor for my own use in a hobby music video/short film collaboration. That video editing project was discontinued and salvaged for spare parts – like babl and a large set of initial operations – when I took up maintainership and development of GEGL.

What are some of the greatest challenges that you’ve faced along the way?

When people learn that I am somehow involved in the development of the GIMP project, they expect me to be in control of, and responsible for, how the UI currently looks. I have removed some GIMP menu items in attempts to clean things up and reduce technical debt, but most improvements I can take credit for, now and in the future, are indirect – like how moving to GEGL enables higher bit depths and on-canvas previews instead of postage-stamp-sized previews in dialogs.

What are some of your greatest successes?

Bringing GEGL from a duke-nukem-forever state, where GIMP was waiting on GEGL for all future enhancements, to GEGL waiting for GIMP to adopt it. The current development series of GIMP (2.9.x) is close to being released as 2.10, which will be the new stable; it is a featureful version with feature parity with 2.8 but a new engine under the hood. I am looking forward to seeing where GIMP will take GEGL in the future.

What are you working on right now?

One of the things I am working on – and playing with – at the moment is experiments in color separation. I’m making algorithms that simulate the color mixing behavior of inks and paints. That might be useful in programs like GIMP for tasks ranging from soft-proofing spot-colors to preparing photos or designs for multi-color silk-screening, for instance for textiles.

Which projects depend on your project? What’s the impact so far?

There are GIMP and GNOME Photos, as well as imgflo, which is a visual front-end provided by the visual programming environment noflo. GEGL (and babl, a companion library), are designed to be generally useful and do not have any APIs that could only be considered useful for GIMP. GEGL itself also contains various example and experimental command line and graphical interfaces for image and video processing.

How can I get involved? 

GEGL continues needing, and thankfully getting, contributions, new filters, fixes to old filters, improvements to infrastructure, improved translations, and documentation. Making more projects use GEGL is also a good way of attracting more contributors. With funds raised through Liberapay and Patreon, I find it easier to allocate time and energy towards making the contribution experience of others smoother.

And now a few questions just for fun…

What is your favorite place on Earth?

Tricky – I have traveled a lot and not found a single place that is a definitive favorite. Places I’ve found to my liking are near the equator, have little seasonal variation, and are at a high enough altitude to cool down to a comfortable daytime high of roughly 25 degrees Celsius.

Favorite ice cream?

Could I have two scoops in a waffle cone, one mango sorbet, one coconut please?

Finally, our classic question: what do you think cats dream about?

Some cats probably dream about being able to sneak through walls.

Øyvind Kolås, CC BY-NC-ND Ross Burton

Thank you, Øyvind, for your answers. We look forward to seeing your upcoming work on GEGL this year and beyond!

Please consider supporting Øyvind through his GEGL Liberapay or GEGL Patreon campaigns. 

Michael Meeks: 2018-01-23 Tuesday

Planet GNOME - Mar, 23/01/2018 - 10:00md
  • Mail chew, admin, meeting prep, partner cal, lunch. Installed LibreOffice build tools on HP / Ryzen 5 laptop; upgraded Windows 10 endlessly. Build ESC stats, commercial call.
  • Went out with J. for the first time in some months; just lovely to spend time alone with her out of the house.

Ismael Olea: Opensource gratitude

Planet GNOME - Mar, 23/01/2018 - 8:11md

Some weeks ago I read somewhere on Twitter about how good it would be to adopt and share the practice of thanking the open-source developers of the tools you use and love. I don’t remember who or where, and I’m probably stealing the method s/he proposed. Personally, I’m getting used to visiting the project development site and, if no better method is available, opening an issue with a text like this:

I’m opening this issue just to thank you for the tool you wrote. It’s nice, useful and saves a lot of my time.

t*h*a*n*k*s

PS: please don’t close the issue, so other people can vote on it to express their gratitude too.

As an example, I’ve just written one for the CuteMarkEd editor: https://github.com/cloose/CuteMarkEd/issues/362

I hope this brings a little dose of endorphins to the people who, with their effort, are building the infrastructure of the digital society. Think about it.

Richard Hughes: GCab and CVE-2018-5345

Planet GNOME - Mar, 23/01/2018 - 2:35md

tl;dr: Update GCab from your distributor.

Longer version: Just before Christmas I found a likely exploitable bug in the libgcab library. Various security teams have been busy with slightly more important issues, and so it’s taken a lot longer than usual to be verified and assigned a CVE. The issue I found was that libgcab attempted to read a large chunk into a small buffer, overwriting lots of interesting things past the end of the buffer. ASLR and SELinux save us in nearly all cases, so it’s not the end of the world. Almost a textbook C buffer overflow (rust, yada, whatever), so it was easy to fix.
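The post doesn’t include the offending code, but the bug class is a familiar one. Purely as an illustration – this is not the actual libgcab code, and the function names, buffer size and main() below are invented for the sketch – here is the pattern described above (an attacker-controlled chunk length copied into a fixed-size buffer) next to the straightforward bounds-check fix:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BUF_SZ 256  /* small fixed-size destination buffer */

    /* Vulnerable pattern: chunk_len comes from the (untrusted) cabinet header
     * and is copied without being checked against the destination size. */
    static void read_chunk_unsafe(const uint8_t *data, size_t chunk_len,
                                  uint8_t out[BUF_SZ])
    {
        memcpy(out, data, chunk_len);   /* overflows 'out' when chunk_len > BUF_SZ */
    }

    /* Fixed pattern: validate the attacker-controlled length first and
     * reject the malformed archive instead of writing past the buffer. */
    static int read_chunk_safe(const uint8_t *data, size_t chunk_len,
                               uint8_t *out, size_t out_sz)
    {
        if (chunk_len > out_sz) {
            fprintf(stderr, "chunk length %zu exceeds buffer size %zu\n",
                    chunk_len, out_sz);
            return -1;
        }
        memcpy(out, data, chunk_len);
        return 0;
    }

    int main(void)
    {
        uint8_t header[16] = { 0 };
        uint8_t buf[BUF_SZ];

        /* A sane chunk length is accepted... */
        if (read_chunk_safe(header, sizeof(header), buf, sizeof(buf)) != 0)
            return 1;

        /* ...while a bogus length from a crafted cabinet is rejected. */
        if (read_chunk_safe(header, 4096, buf, sizeof(buf)) == 0)
            return 1;

        (void)read_chunk_unsafe;  /* unsafe variant shown only for contrast */
        return 0;
    }

With the check in place, a crafted cabinet that lies about its chunk size produces an error instead of scribbling past the end of the buffer, which is the kind of fix the paragraph above describes.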

Some key points:

  • This only affects libgcab, not cabarchive or libarchive
  • All gcab versions less than 0.8 are affected
  • Anything that links to gcab is affected, so gnome-software, appstream-glib and fwupd at least
  • Once you install the fixed gcab you need to restart anything that’s using it, e.g. fwupd
  • There is no silly branded name for this bug
  • The GCab project is incredibly well written, and I’ve been hugely impressed with the code quality
  • You can test whether your GCab has been fixed by attempting to decompress this file; if the program crashes, you need to update

With Marc-André’s blessing, I’ve released version v0.8 of gcab with this fix. I’ve also released v1.0 which has this fix (and many more nice API additions) which also switches the build system to Meson and cleans up a lot of leaks using g_autoptr(). If you’re choosing a version to update to, the answer is probably 1.0 unless you’re building for something more sedate like RHEL 5 or 6. You can get the Fedora 27 packages here or they’ll be on the mirrors tomorrow.

Didier Roche: Welcome To The (Ubuntu) Bionic Age: Nautilus, a LTS and desktop icons

Planet GNOME - Mar, 23/01/2018 - 10:54pd
Nautilus, Ubuntu 18.04 LTS and desktop icons: upstream and downstream views.

If you are following the news on various tech websites closely, one of the latest hot topics in the community has been Nautilus removing desktop icons. Let’s try to clarify some points to ensure the various discussions around it have enough background information, rather than reacting on emotion alone, as has been seen lately. You will have both the downstream (mine) and upstream (Carlos) perspectives here.

Why upstream Nautilus developers are removing the desktop icons

First, I wasn’t personally really surprised by the announcement. Let’s be clear: GNOME, since its 3.0 release, doesn’t have any icons on the desktop by default. There was an option in Tweaks to turn it back on, but let’s be honest: this wasn’t really supported.

The proof is that this code wasn’t really maintained for 7 years and didn’t transition to the newer view technologies Nautilus is migrating to. Having patched this code myself many years ago for Unity (moving desktop icons to the right depending on the icon size, in intellihide mode, which thus doesn’t set a workarea STRUT), I can testify that this code was getting old. Consequently, it became old and rotten for something not even used in the default upstream GNOME experience! It would be some irony to keep it that way.

I’m reading a lot of comments along the lines of “just keep it as an option, the answer is easy”. Let me disagree with this. As already stated during my Artful blog post series, and for the same reason that we keep Ubuntu Dock with a very small set of supported options, any added option has a cost:

  • It’s another code path to test (manually, most of the time, unfortunately), and the exploding combination of options which can interact badly with each other just produces an unfinished project, where you have to be careful not to enable this and that option together, or it crashes or causes side effects… People who have played enough with Compiz Config Settings Manager should know what I’m talking about.
  • Not only that, but more code means more bugs, and if you have to transition to a newer technology, you have to modify that code as well. Working on that is detrimental to other bug fixes, features, tests or documentation that could benefit the project. So this piece of code that you keep and don’t use has a very negative impact on your whole project. Worse, it indirectly impacts even users who stick with the defaults, as they don’t benefit from planned enhancements to other parts of the project, due to the maintainer’s time constraints.

So, yeah, there is never “just an option”.

In addition to that argument, which I took to defend upstream’s position (even in front of the French Ubuntu community), I also want to highlight that the plan to remove desktop icons was really well executed in terms of communication. However, seeing the feedback the upstream developers got when following this communication plan, which takes time, doesn’t motivate doing it again, whereas in my opinion it should be the standard for any important (or considered as such) decision:

  • Carlos blogged about it on Planet GNOME by the end of December. He not only explained the context – why this change, who is impacted by it, possible solutions – but also presented some proposals. So, it’s a complete what/why/who/how!
  • In addition to this, there is a very good abstract, more technically oriented, in an upstream bug report.
  • A GNOME Shell proof-of-concept extension was even built to show that the long-term solution for users who want icons on the desktop is feasible. It wasn’t just a throwaway experiment, as clear goals and targets were defined for what needs to be done to move from a PoC to a working extension. And yes, it was built by the exact same group that is removing desktop icons from Nautilus – I guess that means a lot!

Consequently, that is the detailed information users need to understand why this change is happening and what its long-term consequences will be. It is the foundation for good comments and exchanges on the various news posts covering the story. Very well done Carlos! I hope that more and more of these decisions, in any free software project, will be presented and explained as well as this one was. ;)

A word from Nautilus upstream maintainer

Now that I’ve said what I wanted to say about my view on the upstream changes, and before detailing what we are going to do for Ubuntu 18.04 LTS, let me introduce you to the Nautilus upstream maintainer already mentioned many times: Carlos Soriano.

Hello Ubuntu and GNOME community,

Thanks Didier for the detailed explanation and giving me a place in your blog!

I’m writing here because I wanted to clarify some details in the interaction with downstreams, in this case Ubuntu and Canonical developers. When I wrote the blog post with all the details, I explained only the part that purely refers to upstream Nautilus. And that was actually quite well received. However, two weeks after my blog post some websites explained the change in a not very good way (the clickbait magic).

Usually that is not very important; those who want factual information know that we are on IRC all the time and that we usually write blog posts about upcoming changes in our blog aggregation at Planet GNOME, or here in Didier’s blog for Ubuntu. This time, though, some people in our closer communities (both GNOME and Ubuntu) got splashed with misconceptions, and I wanted to address that – and what better way than to do it with Didier :)

One misconception was that Ubuntu and Canonical were ‘yet again’ using an older version of software just because. Well, maybe you will be surprised now: my recommendation to Ubuntu and Canonical was actually to stay on Nautilus 3.26. With an LTS version coming, that is by far the most reasonable option. While for a regular user the upstream recommendation is to try out nemo-desktop (by the way, this is another misconception – we said nemo-desktop, not Nemo the app; for a user those are in practice two different things), for a distribution that needs to support and maintain all kinds of requests and stability promises for years, staying with a single codebase they have already worked with is the best option.

Another misconception I saw these days is that it seems we take decisions in a rush. In short, I became the Nautilus maintainer 3 years and 4 months ago. Exactly 3 years and one month ago I realized that we needed to remove that part from Nautilus. It has been quite hard to reason with myself during these 3 years that an option upstream did not consider part of the experience we wanted to provide was holding back most of the major work on Nautilus, including driving away contributions from new contributors, given the poor state of the desktop part, which unfortunately impacted the code of the whole application. In all this time, downstreams like Ubuntu were a major reason for me to hold on to this code. Discussions about this decision happened throughout this time with the other developers of Nautilus.

And the last misconception was that it looks like GNOME devs and Ubuntu devs are in completely separate niches where no one communicates with each other. While we are usually focused on our personal tasks, when a change is going to happen we do communicate. In this case, I reached out to the desktop team at Canonical before taking the final decision, providing a draft of the blog post so they could check the impact, and giving possible options for the LTS release of Ubuntu and beyond.

In summary, the takeaway from here is that while we might have slightly different visions, at the end of the day we just want to provide the best experience to the users, and for that, believe me, we do the best we can.

In case you have any questions, you can always reach out to us (Nautilus upstream) in the #nautilus IRC channel at irc.gnome.org or on our mailing list.

Hope you enjoy this read, and hopefully the benefits of this work will be on show soon. Thanks again Didier!

What does this mean for Ubuntu?

We thought about this as the Ubuntu Desktop team. Our next release is a LTS (Long-Term Support) version, meaning that Ubuntu 18.04 (currently named Bionic Beaver during its development) will have 5 years of support in term of bug fixes and security updates.

It also means that most of our user audience will upgrade from our last Ubuntu 16.04 LTS to Ubuntu 18.04 LTS (or even 14.04 -> 16.04 -> 18.04!). The changes over those last 2 years are quite large in terms of software updates and new features. On top of this, those users will experience the Unity -> GNOME Shell transition for the first time, and we want to give them a feeling of comfort and familiar landmarks in our default experience, despite the huge changes underneath.

On the Ubuntu desktop, we ship a Dock, visible by default. Consequently, the desktop view itself, without any application on top, is more important in our user experience than it is in the default upstream GNOME Shell one. We think that shipping icons on the desktop is still relevant for our user base.

Where does this leave us regarding those changes? Thinking about the problem, we came to approximately the same conclusions as the upstream Nautilus developers:

  • Staying on Nautilus 3.26 for the LTS release: the pro is that it’s battle-tested code that we already know we can support (it shipped in 17.10). This matches the fact that the LTS is a very important and strong commitment for us. The con is that it won’t be the latest and greatest upstream Nautilus release by release date, and some integrations with other parts of the GNOME 3.28 code may require more downstream work from us.
  • Using an alternative file manager for the desktop, like Nemo. Shipping entirely new code in an LTS, having to support two file managers (Nautilus for normal file browsing and Nemo for the desktop), and ensuring the integration between those two and all other applications works well quickly ruled out that solution.
  • Upgrading to Nautilus 3.28 and shipping the PoC GNOME Shell extension, contributing to it as much as possible before release. The issue (despite this being the long-term solution) is that, as with the previous option, we would ship entirely new code, and the extension needs new APIs from Nautilus which aren’t fully shaped yet (and maybe won’t be ready for GNOME 3.28 at all). Also, we planned the features and work needed for the next release a long time in advance (end of September for 18.04 LTS), and we still have a lot to do for Ubuntu 18.04 LTS, some of it being GNOME upstream code. Consequently, rushing into coding this extension, with our Feature Freeze deadline approaching on March 1st, would mean that we either drop some initially planned features or fix fewer bugs and do less polish, which would be detrimental to our overall release quality.

As in every release, we decide on a component-by-component basis what to do (upgrade to the latest version or not), weighing the pros and cons and trying to take the best decision for our end-user audience. We think that most GNOME components will be upgraded to 3.28. However, in this particular instance, we decided to keep Nautilus at version 3.26 in Ubuntu 18.04 LTS. You can read the discussion that took place during our weekly Ubuntu Desktop meeting on IRC, leading to that decision.

Another pro of that decision is that it gives flavors shipping Nautilus by default, like Ubuntu Budgie and Edubuntu, a little more time to find a solution matching their needs, as they don’t run GNOME Shell and so can’t use that extension.

The experience will thus be: desktop icons (with Nautilus 3.26) in the default Ubuntu session. The vanilla GNOME session I have talked about many times will also be running Nautilus 3.26 (as we can only have one version of the software in the archive and installed on users’ machines with traditional deb packages), but with icons on the desktop disabled, as we did in Ubuntu Artful. I expect some motivated users will build Nautilus 3.28 in a PPA, but it won’t receive official security and bug fix support, of course.

Meanwhile, we will start contributing to a longer-term plan for this new GNOME Shell extension with the Nautilus developers, to shape a proper API, get good drag-and-drop support and so on, progressively… This will give better long-term code, and we hope that the following Ubuntu releases will be able to move to it once we reach the minimal set of features we want from it (and, consequently, update to the latest Nautilus version!).

I hope that sheds some light on both the GNOME upstream and Ubuntu decisions, showing the two perspectives and why those actions were taken, as well as the long-term plan. Hopefully, these posts explaining a bit of the context will lead to informed and constructive comments as well!
