
Planet Debian


prrd 0.0.2: Many improvements

Fri, 26/01/2018 - 12:30pm

The prrd package was introduced recently, and made it to CRAN shortly thereafter. The idea of prrd is simple, and described in some more detail on its webpage and its GitHub repo. Reverse dependency checks are an important part of package development and are easily done in a (serial) loop. But these checks are also generally embarrassingly parallel as there is little or no interdependency between them (besides maybe shared build dependencies). See the following screenshot (running six parallel workers, arranged in a split byobu session).

This note announces the second, and much improved, release. The package now runs on all operating systems supported by R and no longer has external system requirements. Several functions were improved, two new helper functions were added in a so-far still preliminary form, and everything is more robust now.

The release is summarised in the NEWS entry:

Changes in prrd version 0.0.2 (2018-01-24)
  • The package no longer requires wget.

  • Enhanced sanity checker function.

  • Expanded and improved dequeue function.

  • No longer use $HOME in xvfb-run-safe (#2).

  • The use of xvfb-run is now conditional on the OS (#3).

  • The set of available packages is no longer constrained to CRAN, but could be via the local setup script (#4).

  • The dequeue() function now uses system2().

  • The enqueue() function checks if no reverse dependencies are found and stops (#6).

  • The enqueue() function checks for repository information being set (#5).

CRANberries provides the usual summary of changes to the previous version. See the aforementioned webpage and its repo for details. For more questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Changes in Prometheus 2.0

Thu, 25/01/2018 - 1:00am

This is one part of my coverage of KubeCon Austin 2017. Other articles include:

2017 was a big year for the Prometheus project, as it published its 2.0 release in November. The new release ships numerous bug fixes, new features and, notably, a new storage engine that brings major performance improvements. This comes at the cost of incompatible changes to the storage and configuration-file formats. An overview of Prometheus and its new release was presented to the Kubernetes community in a talk held during KubeCon + CloudNativeCon. This article covers what changed in this new release and what is brewing next in the Prometheus community; it is a companion to this article, which provided a general introduction to monitoring with Prometheus.

What changed

Orchestration systems like Kubernetes regularly replace entire fleets of containers for deployments, which means rapid changes in parameters (or "labels" in Prometheus-talk) like hostnames or IP addresses. This was creating significant performance problems in Prometheus 1.0, which wasn't designed for such changes. To correct this, Prometheus ships a new storage engine that was specifically designed to handle continuously changing labels. This was tested by monitoring a Kubernetes cluster where 50% of the pods would be swapped every 10 minutes; the new design was proven to be much more effective. The new engine boasts a hundred-fold I/O performance improvement, a three-fold improvement in CPU, five-fold in memory usage, and increased space efficiency. This impacts container deployments, but it also means improvements for any configuration as well. Anecdotally, there was no noticeable extra load on the servers where I deployed Prometheus, at least nothing that the previous monitoring tool (Munin) could detect.

Prometheus 2.0 also brings new features like snapshot backups. The project has a longstanding design wart regarding data volatility: backups are deemed to be unnecessary in Prometheus because metrics data is considered disposable. According to Goutham Veeramanchaneni, one of the presenters at KubeCon, "this approach apparently doesn't work for the enterprise". Backups were possible in 1.x, but they involved using filesystem snapshots and stopping the server to get a consistent view of the on-disk storage. This implied downtime, which was unacceptable for certain production deployments. Thanks again to the new storage engine, Prometheus can now perform fast and consistent backups, triggered through the web API.
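As a rough illustration, a snapshot request is a single HTTP call once the admin API is enabled; the --web.enable-admin-api flag and the endpoint path below follow later 2.x documentation, so treat them as assumptions for any particular 2.0 build:

# Sketch: ask a local Prometheus (started with --web.enable-admin-api)
# to create a consistent snapshot of its TSDB.
import requests

resp = requests.post("http://localhost:9090/api/v1/admin/tsdb/snapshot")
resp.raise_for_status()
# The JSON response names a directory under <data-dir>/snapshots/ which can
# then be copied away as a backup while the server keeps running.
print(resp.json())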

Another improvement is a fix to the longstanding staleness handling bug where it would take up to five minutes for Prometheus to notice when a target disappeared. In that case, when polling for new values (or "scraping" as it's called in Prometheus jargon) a failure would make Prometheus reuse the older, stale value, which meant that downtime would go undetected for too long and fail to trigger alerts properly. This would also cause problems with double-counting of some metrics when labels vary in the same measurement.

Another limitation related to staleness is that Prometheus wouldn't work well with scrape intervals above two minutes (instead of the default 15 seconds). Unfortunately, that is still not fixed in Prometheus 2.0 as the problem is more complicated than originally thought, which means there's still a hard limit to how slowly you can fetch metrics from targets. This, in turn, means that Prometheus is not well suited for devices that cannot support sub-minute refresh rates, which, to be fair, is rather uncommon. For slower devices or statistics, a solution might be the node exporter "textfile support", which we mentioned in the previous article, and the pushgateway daemon, which allows pushing results from the targets instead of having the collector pull samples from targets.
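As an example of that push model, a batch job can hand its metrics to the pushgateway with the official Python client (the gateway address and job name below are placeholders):

# Sketch: push one metric to a pushgateway instead of being scraped.
# Requires the prometheus_client package; host and job name are placeholders.
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
g = Gauge("backup_last_success_unixtime",
          "Unix time of the last successful backup", registry=registry)
g.set_to_current_time()
push_to_gateway("pushgateway.example.org:9091", job="nightly_backup",
                registry=registry)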

The migration path

One downside of this new release is that the upgrade path from the previous version is bumpy: since the storage format changed, Prometheus 2.0 cannot use the previous 1.x data files directly. In his presentation, Veeramanchaneni justified this change by saying this was consistent with the project's API stability promises: the major release was the time to "break everything we wanted to break". For those who can't afford to discard historical data, a possible workaround is to replicate the older 1.8 server to a new 2.0 replica, as the network protocols are still compatible. The older server can then be decommissioned when the retention window (which defaults to fifteen days) closes. While there is some work in progress to provide a way to convert 1.8 data storage to 2.0, new deployments should probably use the 2.0 release directly to avoid this peculiar migration pain.

Another key point in the migration guide is a change in the rules-file format. While 1.x used a custom file format, 2.0 uses YAML, matching the other Prometheus configuration files. Thankfully the promtool command handles this migration automatically. The new format also introduces rule groups, which improve control over the rules execution order. In 1.x, alerting rules were run sequentially but, in 2.0, the groups are executed sequentially and each group can have its own interval. This fixes the longstanding race conditions between dependent rules that create inconsistent results when rules would reuse the same queries. The problem should be fixed between groups, but rule authors still need to be careful of that limitation within a rule group.

Remaining limitations and future

As we saw in the introductory article, Prometheus may not be suitable for all workflows because of its limited default dashboards and alerts, but also because of the lack of data-retention policies. There are, however, discussions about variable per-series retention in Prometheus and native down-sampling support in the storage engine, although this is a feature some developers are not really comfortable with. When asked on IRC, Brian Brazil, one of the lead Prometheus developers, stated that "downsampling is a very hard problem, I don't believe it should be handled in Prometheus".

Besides, it is already possible to selectively delete an old series using the new 2.0 API. But Veeramanchaneni warned that this approach "puts extra pressure on Prometheus and unless you know what you are doing, its likely that you'll end up shooting yourself in the foot". A more common approach to native archival facilities is to use recording rules to aggregate samples and collect the results in a second server with a slower sampling rate and different retention policy. And of course, the new release features external storage engines that can better support archival features. Those solutions are obviously not suitable for smaller deployments, which therefore need to make hard choices about discarding older samples or getting more disk space.
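For completeness, such a selective deletion is also a single admin-API call; as above, the endpoint and the example selector are assumptions rather than something shown in the talk:

# Sketch: drop one obsolete series via the TSDB admin API
# (requires --web.enable-admin-api; the selector is only an example).
import requests

resp = requests.post(
    "http://localhost:9090/api/v1/admin/tsdb/delete_series",
    params={"match[]": 'up{instance="decommissioned-host:9100"}'},
)
resp.raise_for_status()  # the endpoint replies with 204 No Content on success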

As part of the staleness improvements, Brazil also started working on "isolation" (the "I" in the ACID acronym) so that queries wouldn't see "partial scrapes". This hasn't made the cut for the 2.0 release, and is still work in progress, with some performance impacts (about 5% CPU and 10% RAM). This work would also be useful when heavy contention occurs in certain scenarios where Prometheus gets stuck on locking. Some of the performance impact could therefore be offset under heavy load.

Another performance improvement mentioned during the talk is an eventual query-engine rewrite. The current query engine can sometimes cause excessive loads for certain expensive queries, according to the Prometheus security guide. The goal would be to optimize the current engine so that those expensive queries wouldn't harm performance.

Finally, another issue I discovered is that 32-bit support is limited in Prometheus 2.0. The Debian package maintainers found that the test suite fails on i386, which led Debian to remove the package from the i386 architecture. It is currently unclear if this is a bug in Prometheus: indeed, it is strange that Debian tests actually pass on other 32-bit architectures like armel. Brazil, in the bug report, argued that "Prometheus isn't going to be very useful on a 32bit machine". The position of the project is currently that "'if it runs, it runs' but no guarantees or effort beyond that from our side".

I had the privilege to meet the Prometheus team at the conference in Austin and was happy to see different consultants and organizations working together on the project. It reminded me of my golden days in the Drupal community: different companies cooperating on the same project in a harmonious environment. If Prometheus can keep that spirit together, it will be a welcome change from the drama that affected certain monitoring software. This new Prometheus release could light a bright path for the future of monitoring in the free software world.

This article first appeared in the Linux Weekly News.

Antoine Beaupré https://anarc.at/tag/debian-planet/ pages tagged debian-planet

Movit 1.6.0 released

Thu, 25/01/2018 - 12:49am

I just released version 1.6.0 of Movit, my GPU-based video filter library.

The full changelog is below, but what's more interesting is maybe what isn't in it, namely the compute shader version of the high-quality resampling filter I blogged about earlier. It turned out that my benchmark setup was wrong in a sort-of subtle way, and unfortunately biased towards the compute shader. Fixing that negated the speed difference—it was actually usually a few percent slower than the fragment shader version, despite a fair amount of earlier tweaks. (It did use less CPU when setting up new parameters, which was nice for things like continuous zooms, but probably not enough to justify the GPU slowdown.)

Which means that after a month or so of testing and performance tuning, I had to scrap it—it's sad to notice so late (I only realized that something was wrong as I started writing up the final documentation, and figured I couldn't actually justify why I would let one of them chain with other effects and the other one not), but it's a sunk cost, and keeping it in based on known-bad benchmarks would have helped nobody. I've left it in a git branch in case the world should change.

I still believe there are useful gains from compute shaders—in particular, the deinterlacer shines—but it's increasingly clear to me that fragment shaders should remain the default go-to tool for graphics on the GPU. (I guess the next natural target would be the convolution/FFT operators, but they're not all that much used.)

The full changelog reads:

Movit 1.6.0, January 24th, 2018

- Support for effects that work as compute shaders. Compute shaders are generally slower than fragment shaders for the same algorithm, but allow some forms of communication between shader invocations and have more flexible output, which can enable more efficient algorithms. See effect.h for more details. Note that the fastest rendering API on EffectChain is now to a texture if possible, not to an FBO. This will only matter if the last effect is a compute shader.

- Movit now includes a compute shader implementation of DeinterlaceEffect, which is automatically used instead of the fragment shader implementation if your GPU and OpenGL driver supports it (in practice, this means on all platforms except on macOS). The compute shader version is typically 20–80% faster than the fragment shader version, depending on your GPU and other factors. A compute shader implementation of ResampleEffect was written but ultimately failed to be faster, and so is not included.

- Support for microbenchmarks of effects through the Google microbenchmarking framework (optional). Currently, DeinterlaceEffect and ResampleEffect have benchmarks; enable them by running the unit test with --benchmark (also try --benchmark --help).

- Effects can now explicitly request _not_ to have mipmaps, which means they can do so without needing to request bounce and fiddling with the sampler state. Note that this is an API change for effects.

- Movit now requires C++11, both to build and to #include the header files. Support for SDL1 has been dropped; unit tests and the demo program now need SDL2.

- Various smaller bugfixes and optimizations.

Debian packages are on their way up through the NEW queue (there's a soname bump).

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

The Pune Metro 1st anniversary celebrations

Thu, 25/01/2018 - 12:13am

This is going to be a long one. First and foremost, a couple of days ago I got the following direct message on my Twitter handle –

Hi Shirish,

We are glad to inform you that we are celebrating the 1st anniversary of Pune Metro Rail Project & the incorporation of both Nagpur Metro and Pune Metro into Maharashtra Metro Rail Corporation Limited(MahaMetro) on 23rd January at 13:00 hrs followed by the lunch.

On this occasion we would like to invite you to accept a small token of appreciation for your immense support & continued valuable interaction on our social media channels for the project at the hands of Dr. Brijesh Dixit, Managing Director, MahaMetro.

Venue: Hotel Citrus, Opposite PCMC Building, Pimpri-Chinchwad.
Time: 13:00 Hrs
Lunch: 14:00 hrs

Kindly confirm your attendance. Looking forward to meet you.

Regards & Thanks, Pune Metro Team

I went and had an interaction with Mr. Dixit and was gifted a gift card which can be redeemed.

I shared it on facebook. Some people have asked me privately as to what I did.

First of all, let me be very clear. I did not enter into any competition or raise queries in the hope of getting any sort of monetary benefit at all. I have been a user of public transport both out of necessity and choice, and I do feel the need for a fast, secure, reasonable mode of transport. I am also immensely passionate and curious about public transport as a whole.

Just to share a couple of facts, and I'm sure most of you will agree with me: taking public transport takes more than twice the time, at least in India. Part of it is due to drivers not keeping the GPS on, and to people/users not asking or pushing for GPS and for that location-based info to be used to work out when the next bus is going to come. So, for instance, my journey to PCMC was roughly 20 km and about 45 minutes, but waiting took almost twice that time, and this was not rush hour, when it could easily have taken double the time. Hence people opt for private vehicles even though they know it's harmful for the environment as well as for themselves and their loved ones.

There has been a plan of PMC (Pune Municipal Corporation) for over a decade to use GPS to tell the travelling public tentatively when the next bus would arrive, but for a number of reasons (corruption, lack of training, awareness, discipline) it has not happened. All of this hampers citizens' productivity, people are forced to get private vehicles, and it becomes a zero-sum game. There is much more but I don't want to go into it here.

Now people have asked me what sort of suggestions I gave or am giving –

Judging by yesterday's interaction, after seeing MahaMetro's exchange with the press, the press or media seems to have a very poor understanding of the dynamics and is not really interested in enriching citizens' understanding of either the Pune Metro or the idea of the Integrated Transport Initiative, which has been in the making for some time now. Part of the issue also seems to lie with Pune Metro in not sharing knowledge as much as it could, given the opportunities that digital media/space provides at very low cost.

Suggestions and Queries –

1. One of the first things that Pune Metro could make is an animation of how a DPR (Detailed Project Report) is made. I don't think any of the people from the press, especially the English-language press, have seen the DPR, or otherwise many of the questions would have been answered.

http://www.punemetrorail.org/download/PuneMetro-DPR.pdf

The simplest analogy I can give is: let's say you want to build a hospital, but the land on which you have to build it belongs to 2-3 parties, so how will you build it? Also, you don't have the money. The DPR is different only in the sense of the scale of things, and construction of the routes is not done by a single contractor but by multiple contractors. A route, say A – B, is divided into 5 parts and companies/contractors are asked to submit tenders for the packet they are interested in.

The way I see it, the DPR has to figure out the right of way where construction of the spans has to be, where the stations have to be built, where electricity and water will come from, where the maintenance depot will be (usually the depot is at the end), the casting yard for the spans/pillars, etc.

There is a pre-qualification round so that only eligible bidders with a history of doing work on a similar scale take part, and then bidding on who can do it at the lowest cost against a set reserve price. If no bidder comes forward, for instance because the reserve price is too high from a contractor's point of view, then the reserve price is lowered. The idea is simply to have price discovery through a method that can be seen as just and fair.

The press seemed to be more interested in making a tiff between the Pune Metro/MahaMetro chief and Guardian Minister Shri Girish Bapat out of something which to my mind is a non-issue at this juncture.

Mr. Dixit was absolutely correct in saying that he can’t comment on when the extension to Nigdi will happen unless the DPR for extension to Nigdi is made, land is found and the various cost heads, expenses and funding is approved in the State and Central Government and funding from multiple places is done.

The other question being raised by the press was the razing of the BRTS in Pune. The press knew it was neither Mr. Dixit's place nor his responsibility to comment on whatever happens to the BRTS; he can't even comment, as that would come under the Pune urban transport ministry.

As far as I understand Mr. Dixit's obligations, they are to build Pune Metro safely and quickly, using good materials, with good signage, and to deliver an efficient public transit service that we Puneites can be proud of.

2. The site http://www.punemetrorail.org/ really needs an update and upgrade. You should use something like wordpress or something where you are able to change themes every now and then. Every 3-6 months the themes should be tweaked so the content remains or at least looks fresh.

3. There are no time-stamps on any of the videos. At the very least they should have time-stamps so that some sort of contextual information is available.

4. There is no way to know if there is news. News should be highlighted and more information be shared. For instance, there should have been more info. e.g. about this particular item –

MoU signed between Dr. Brijesh Dixit, MD MahaMetro, Mrs. Prerna Deshbhratar, Addl Municipal Commissioner(Spl), PMC and Mr Kong Wy Mun, CEO, Singapore Cooperation Enterprise for the Urban Management(Multi-Modal Transport) during a program at Yashada, Pune.

from http://www.punemetrorail.org/projectupdate.aspx

It would have been interesting to know what it means and how would the Singapore Government help us in achieving a unified multi-modal transport model.

There were/are tons of questions that the press could have asked but didn’t as above and more below.

5. The DPR was made in November 2015 and it is now 2018. The prices probably need to be adjusted to take into consideration the changes over the last 3 years, and they will probably keep changing until 2021.

6. Similarly, there are time gaps between plans and the execution of the plans, and we Puneites don't even know what the plan is.

I would urge Pune Metro to have a dynamic plan which shows areas in which work is progressing in terms of blinking lights to know which are active areas and which are not. They could be a source of inspiration and trail-blazer on this.

7. Similarly, another idea which could be (or might even already be) done is to have a single photograph taken every day at, say, 1200 hrs at all the sites, at 640×480 resolution, which could be uploaded to the site and published onto a separate web page. Over days and weeks this could be turned into a time-lapse video, similar to what was achieved for the first viaduct video shot over a day or two –

If you want to make it more interesting and challenging, you could invite students from Symbiosis to build it on something like a Raspberry Pi 2/3 or some other SBC (Single Board Computer), with a camera lens, a solar cell, and a modem, with instructions to stagger the images sent to the Pune Metro rail portal in case some web traffic is already there. A specific port (not port 80) could be used.

Later on, making a time-lapse video would be as simple as stitching all those photographs together and adding some nice background music as filler, something which has already been done once for the viaduct video seen above.

8. Tracking planned versus real-time progress – While Mr. Dixit has time and again assured us that things are progressing well, it would be far easier to trust this if there were a web service which showed whether things are going according to schedule or are a bit off. It overlaps a bit with my earlier suggestion, but there are many development projects around the world which show tentative and actual progress.

9. Apart from traffic diversion news in major newspapers, it would be nice to also have a section about traffic diversion with blinkers or something about road diversions which are in effect.

10. Another would be to have a RSS feed about all news found out by various search-engine crawlers, delete duplicate links and share the news and views for people to click-through and know for themselves.

11. Statistics of jobs (both direct and indirect) created due to Pune Metro works, displayed prominently.

12. Have a glossary of terms, which could easily be put together by having an average 10th-12th standard student go through, say, the DPR and see which terms they have problems with.

The simplest example is the word ‘Reach’ which has been used in a different context in Pune Metro than what is usually understood.

13. Are there, and if there are, how many Indian SMEs that have been entrusted, either via joint venture or otherwise, to ensure knowledge transfer for making and maintaining the rakes, cars/bogies, track, etc.?

14. Has any performance and load guarantee been asked of the various packet holders? If yes, what are the guarantees and for what duration?

These are all low-hanging fruit. Also, I'm no web developer, although I am a bit of a content producer (as can be seen) and a voracious consumer of the web. I do have a few friends, though, if there is a requirement, who understand the medium in a far better, more intimate way than the crude manner I have shared above.

A student who believes democracy needs work, and needs effort to make democracy work. If citizens themselves would not ask these questions, who would?

shirishag75 https://flossexperiences.wordpress.com #planet-debian – Experiences in the community

Ideas for the project architecture and short term goals

Wed, 24/01/2018 - 5:49pm

There have been many discussions about planning for the FOSS calendar. In this post, I report on some of the ideas.

How I first thought the Foss Events calendar

Back in December, when I was just making sense of my surroundings and trying to find a way to start the internship, I drew this diagram to picture in my head how everything would work:

  1. There would be a "crawler.py" module, which would access each site on a determined list (It could be Facebook, Meetup or any other site such as another calendar) that have events information. This module would pull the event data from those sites.

  2. A validator.py would check if the data was good and if there was data. Once this module verified this, it would dump all info into a dirty_events database.

  3. The dirty_events database would be accessed by the module parser.py, which would clean and organize the data to be properly stored in the events database.

  4. An API.py module would query the events database and return the proper data, formatted into JSON, ical and/or some other formats (a rough sketch of how these modules could chain together follows this list).

  5. There would be an user interface to get data from API.py and to display this data. It should also be possible to add (properly formatted) events to the database using this interface. [If we were talking about a plugin to merely display the events in MoinMoin or Wordpress or some other system, this plugin would enter in this category.]
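As a rough, self-contained sketch of the flow described above (every function name and the in-memory "database" here are illustrative only, not existing project code):

# Hypothetical sketch of the pipeline above; all names are illustrative only.
import json

def crawl(site):
    """1. crawler.py: pull raw event data from one site (stubbed here)."""
    return site.get("events", [])

def is_valid(event):
    """2. validator.py: keep only events that have the fields we need."""
    return bool(event.get("title")) and bool(event.get("date"))

def normalize(event):
    """3. parser.py: clean a dirty event before storing it."""
    return {"title": event["title"].strip(), "date": event["date"]}

def api_report(events_db):
    """5. API.py: return the stored events as JSON for a UI or plugin."""
    return json.dumps(events_db, indent=2)

sources = [{"events": [{"title": " MiniDebConf ", "date": "2018-04-14"}]}]
dirty_events = [e for s in sources for e in crawl(s) if is_valid(e)]
events_db = [normalize(e) for e in dirty_events]   # 4. the events database
print(api_report(events_db))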

The ideas that Daniel put on paper

Then, when I shared with my mentors, Daniel came up with this:

Daniel proposed that modules or plugins could be developed or improved (there are some of them already, but they might not support iCalendar URLs) for MoinMoin, Drupal, and Wordpress that would allow the event data each of these systems has to be aggregated. Information from the Meetup and the Facebook APIs could be converted to ical to be aggregated. This aggregation process could happen through a cron job - and I believe daily is enough, because people don't usually publish an event to happen on the very next day (they need time for people to acknowledge it). If the time frame ends up not being ideal, this can be reviewed and changed later.

Once all this data is gathered, it would then be stored, inserting it or updating it in what could be a PostgreSQL or NoSQL solution.

Using the database with events information, it should be possible to do a data dump with all the information or to give "reports" of the event data, whether the user wants to access the data in iCalendar format (for Thunderbird or GNOME Evolution) or just HTML for viewing in the browser.

Short term goals

Creating a FOSS events calendar is a big project that will most certainly continue beyond my Outreachy internship.

Therefore, along with my mentors, we have established that my short term goal will be to contribute a bit to it by working on the MoinMoin EventCalendar so the events can be exported to the iCalendar format.

I have been studying and playing around with the EventCalendar code and, so far, I've concluded that the best way to do this might be by writing a new function for it. Just like there are other functions in this plugin to change the display of the calendar, there might be a function to sort the data into the iCalendar format and allow downloading the file.
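As a rough sketch of what such a function could produce, here is how event data could be serialized with the third-party icalendar Python package (an assumption on my part; the plugin might just as well build the file by hand, and the field names below are illustrative):

# Sketch: serialize simple event data into the iCalendar format.
# Uses the third-party "icalendar" package; the event fields are illustrative.
from datetime import date
from icalendar import Calendar, Event

def events_to_ical(events):
    cal = Calendar()
    cal.add("prodid", "-//MoinMoin EventCalendar export//EN")
    cal.add("version", "2.0")
    for ev in events:
        ical_ev = Event()
        ical_ev.add("summary", ev["title"])
        ical_ev.add("dtstart", ev["start"])
        ical_ev.add("dtend", ev["end"])
        cal.add_component(ical_ev)
    return cal.to_ical()  # bytes, ready to be offered as a .ics download

ics = events_to_ical([{"title": "DebConf18", "start": date(2018, 7, 29), "end": date(2018, 8, 5)}])
print(ics.decode())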

Renata https://rsip22.github.io/blog/ Renata's blog

Introducing Computational Methods to Social Media Scientists

Tue, 23/01/2018 - 1:38am

The ubiquity of large-scale data and improvements in computational hardware and algorithms have enabled researchers to apply computational approaches to the study of human behavior. One of the richest contexts for this kind of work is social media datasets like Facebook, Twitter, and Reddit.

We were invited by Jean Burgess, Alice Marwick, and Thomas Poell to write a chapter about computational methods for the Sage Handbook of Social Media. Rather than simply listing what sorts of computational research have been done with social media data, we decided to use the chapter both to introduce a few computational methods and to use those methods to analyze the field of social media research.

[Figure: A “hairball” diagram from the chapter illustrating how research on social media clusters into distinct citation network neighborhoods.]

Explanations and Examples

In the chapter, we start by describing the process of obtaining data from web APIs and use as a case study our process for obtaining bibliographic data about social media publications from Elsevier’s Scopus API.  We follow this same strategy in discussing social network analysis, topic modeling, and prediction. For each, we discuss some of the benefits and drawbacks of the approach and then provide an example analysis using the bibliographic data.
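As a generic illustration of that first step, the sketch below collects paginated records from a web API; the URL, parameters, and response fields are placeholders, not the actual Scopus API used in the chapter.

# Generic sketch of collecting paginated records from a web API.
import requests

def fetch_all(base_url, api_key, page_size=25):
    records, start = [], 0
    while True:
        resp = requests.get(base_url, params={"apiKey": api_key,
                                              "start": start,
                                              "count": page_size})
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        if not batch:
            break
        records.extend(batch)
        start += page_size
    return records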

We think that our analyses provide some interesting insight into the emerging field of social media research. For example, we found that social network analysis and computer science drove much of the early research, while recently consumer analysis and health research have become more prominent.

More importantly though, we hope that the chapter provides an accessible introduction to computational social science and encourages more social scientists to incorporate computational methods in their work, either by gaining computational skills themselves or by partnering with more technical colleagues. While there are dangers and downsides (some of which we discuss in the chapter), we see the use of computational tools as one of the most important and exciting developments in the social sciences.

Steal this paper!

One of the great benefits of computational methods is their transparency and their reproducibility. The entire process—from data collection to data processing to data analysis—can often be made accessible to others. This has both scientific benefits and pedagogical benefits.

To aid in the training of new computational social scientists, and as an example of the benefits of transparency, we worked to make our chapter pedagogically reproducible. We have created a permanent website for the chapter at https://communitydata.cc/social-media-chapter/ and uploaded all the code, data, and material we used to produce the paper itself to an archive in the Harvard Dataverse.

Through our website, you can download all of the raw data that we used to create the paper, together with code and instructions for how to obtain, clean, process, and analyze the data. Our website walks through what we have found to be an efficient and useful workflow for doing computational research on large datasets. This workflow even includes the paper itself, which is written using LaTeX + knitr. These tools let changes to data or code propagate through the entire workflow and be reflected automatically in the paper itself.

If you  use our chapter for teaching about computational methods—or if you find bugs or errors in our work—please let us know! We want this chapter to be a useful resource, will happily consider any changes, and have even created a git repository to help with managing these changes!

The book chapter and this blog post were written with Jeremy Foote and Aaron Shaw. You can read the book chapter here. This blog post was originally published on the Community Data Science Collective blog.

Benjamin Mako Hill https://mako.cc/copyrighteous copyrighteous

Mentors and co-mentors for Debian's Google Summer of Code 2018

Tue, 23/01/2018 - 12:50am

Debian is applying as a mentoring organization for the Google Summer of Code 2018, an internship program open to university students aged 18 and up.

Debian already has a wide range of projects listed but it is not too late to add more or to improve the existing proposals. Google will start reviewing the ideas page over the next two weeks and students will start looking at it in mid-February.

Please join us and help extend Debian! You can consider listing a potential project for interns or listing your name as a possible co-mentor for one of the existing projects on Debian's Google Summer of Code wiki page.

At this stage, mentors are not obliged to commit to accepting an intern but it is important for potential mentors to be listed to get the process started. You will have the opportunity to review student applications in March and April and give the administrators a definite decision if you wish to proceed in early April.

Mentors, co-mentors and other volunteers can follow an intern through the entire process or simply volunteer for one phase of the program, such as helping recruit students in a local university or helping test the work completed by a student at the end of the summer.

Participating in GSoC has many benefits for Debian and the wider free software community. If you have questions, please come and ask us on IRC #debian-outreach or the debian-outreachy mailing list.

Daniel Pocock and Laura Arjona Reina https://bits.debian.org/ Bits from Debian

Ick: a continuous integration system

Mon, 22/01/2018 - 7:30pm

TL;DR: Ick is a continuous integration or CI system. See http://ick.liw.fi/ for more information.

More verbose version follows.

First public version released

The world may not need yet another continuous integration system (CI), but I do. I've been unsatisfied with the ones I've tried or looked at. More importantly, I am interested in a few things that are more powerful than what I've ever even heard of. So I've started writing my own.

My new personal hobby project is called ick. It is a CI system, which means it can run automated steps for building and testing software. The home page is at http://ick.liw.fi/, and the download page has links to the source code and .deb packages and an Ansible playbook for installing it.

I have now made the first publicly advertised release, dubbed ALPHA-1, version number 0.23. It is of alpha quality, and that means it doesn't have all the intended features and if any of the features it does have work, you should consider yourself lucky.

Invitation to contribute

Ick has so far been my personal project. I am hoping to make it more than that, and invite contributions. See the governance page for the constitution, the getting started page for tips on how to start contributing, and the contact page for how to get in touch.

Architecture

Ick has an architecture consisting of several components that communicate over HTTPS using RESTful APIs and JSON for structured data. See the architecture page for details.

Manifesto

Continuous integration (CI) is a powerful tool for software development. It should not be tedious, fragile, or annoying. It should be quick and simple to set up, and work quietly in the background unless there's a problem in the code being built and tested.

A CI system should be simple, easy, clear, clean, scalable, fast, comprehensible, transparent, reliable, and boost your productivity to get things done. It should not take a lot of effort to set up, require a lot of hardware just for the CI, or need frequent attention to keep working, and developers should never have to wonder why something isn't working.

A CI system should be flexible to suit your build and test needs. It should support multiple types of workers, as far as CPU architecture and operating system version are concerned.

Also, like all software, CI should be fully and completely free software and your instance should be under your control.

(Ick is little of this yet, but it will try to become all of it. In the best possible taste.)

Dreams of the future

In the long run, I would like ick to have features like the ones described below. It may take a while to get all of them implemented.

  • A build may be triggered by a variety of events. Time is an obvious event, as is source code repository for the project changing. More powerfully, any build dependency changing, regardless of whether the dependency comes from another project built by ick, or a package from, say, Debian: ick should keep track of all the packages that get installed into the build environment of a project, and if any of their versions change, it should trigger the project build and tests again.

  • Ick should support building in (or against) any reasonable target, including any Linux distribution, any free operating system, and any non-free operating system that isn't brain-dead.

  • Ick should manage the build environment itself, and be able to do builds that are isolated from the build host or the network. This partially works: one can ask ick to build a container and run a build in the container. The container is implemented using systemd-nspawn. This can be improved upon, however. (If you think Docker is the only way to go, please contribute support for that.)

  • Ick should support any workers that it can control over ssh or a serial port or other such neutral communication channel, without having to install an agent of any kind on them. Ick won't assume that it can have, say, a full Java run time, so that the worker can be, say, a micro controller.

  • Ick should be able to effortlessly handle very large numbers of projects. I'm thinking here that it should be able to keep up with building everything in Debian, whenever a new Debian source package is uploaded. (Obviously whether that is feasible depends on whether there are enough resources to actually build things, but ick itself should not be the bottleneck.)

  • Ick should optionally provision workers as needed. If all workers of a certain type are busy, and ick's been configured to allow using more resources, it should do so. This seems like it would be easy to do with virtual machines, containers, cloud providers, etc.

  • Ick should be flexible in how it can notify interested parties, particularly about failures. It should allow an interested party to ask to be notified over IRC, Matrix, Mastodon, Twitter, email, SMS, or even by a phone call and speech synthesiser. "Hello, interested party. It is 04:00 and you wanted to be told when the hello package has been built for RISC-V."

Please give feedback

If you try ick, or even if you've just read this far, please share your thoughts on it. See the contact page for where to send it. Public feedback is preferred over private, but if you prefer private, that's OK too.

Lars Wirzenius' blog http://blog.liw.fi/englishfeed/ englishfeed

Improving communication

Mon, 22/01/2018 - 3:49pm

After my last post, a lot of things happened, but what I'm going to talk about now is the thing that I believe had the most impact in improving my experience with the Outreachy internship: the changes that were made in communication, especially between me and my mentors.

When I struggled with the tasks, with moving forward, it was somewhat a wish of mine to change the ways I communicate with my mentors. (Alright, Renata, so why didn't you start by just doing that? Well, I wasn't sure where to begin.)

I didn't know how to propose something like that to my mentors. I mean... maybe that was how Outreachy was supposed to be and I just might have set different expectations? The first step to figure this out was reaching out to Anna, an Outreachy intern with Wikimedia who I'd been talking to since the interns announcement had been made.

I asked her about how she interacted with her mentors and how often, so I knew what I could ask for. She told me about her weekly meetings with her mentors and how she could chat directly with them when she ran into some issues. And, indeed, things like that were what I wanted to happen.

Before I could reach out and discuss this with my mentors, though, Daniel himself read last week's post and brought up the idea of us speaking on the phone for the first time. That was indeed a good experience and I told him I would like to repeat or establish some sort of schedule to communicate with each other.

Yes, well, a schedule would be the best improvement, I think. It's not just about the means (phone call or IRC, for instance) that we communicate, but to know that, at some point, either one per week or bi-weekly, there would be someone to talk to at a determined time so I could untie any knots that were created during my internship (if that makes sense). I know I could just send an email at any time to my mentors (and sometimes I do) and they would reply, but that's not quite the point.

So, to make this short: I started to talk to one of my mentors daily and it's been really helpful. We are working on a schedule for bi-weekly calls. And we always have e-mails. I'm glad to say that now I talk not just with my mentors, but also with fellow Brazilian Outreachers and former participants, and everyone is willing to help out.

For all the ways to reach me, you can look up my Debian wiki profile.

Renata https://rsip22.github.io/blog/ Renata's blog

FAI.me build service now supports backports

Mon, 22/01/2018 - 2:00pm

The FAI.me build service now supports packages from the backports repository. When selecting the stable distribution, you can also enable backports packages. The customized installation image will then use the kernel from backports (currently 4.14) and you can add additional packages by appending /stretch-backports to the package name, e.g. notmuch/stretch-backports.

Currently, the FAI.me service offers images built with Debian stable, stable with backports, and Debian testing.

If you have any ideas for extensions or any feedback, send an email to FAI.me =at= fai-project.org

FAI.me

Thomas Lange http://blog.fai-project.org/ FAI (Fully Automatic Installation) / Plan your Installation and FAI installs your Plan

Rblpapi 0.3.8: Strictly maintenance

Mon, 22/01/2018 - 1:47pm

Another Rblpapi release, now at version 0.3.8, arrived on CRAN yesterday. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg Labs (but note that a valid Bloomberg license and installation is required).

This is the eighth release since the package first appeared on CRAN in 2016. This release wraps up a few smaller documentation and setup changes, but also includes an improvement to the (less frequently-used) subscription mode which Whit cooked up on the weekend. Details below:

Changes in Rblpapi version 0.3.8 (2018-01-20)
  • The 140 day limit for intra-day data histories is now mentioned in the getTicks help (Dirk in #226 addressing #215 and #225).

  • The Travis CI script was updated to use run.sh (Dirk in #226).

  • The install_name_tool invocation under macOS was corrected (@spennihana in #232)

  • The blpAuthenticate help page has additional examples (@randomee in #252).

  • The blpAuthenticate code was updated and improved (Whit in #258 addressing #257)

  • The jump in version number was an oversight; this should have been 0.3.7.

And only while typing up these notes do I realize that I fat-fingered the version number. This should have been 0.3.7. Oh well.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Keeping an Irish home warm and free in winter

Mon, 22/01/2018 - 10:20am

The Irish Government's Better Energy Homes Scheme gives people grants from public funds to replace their boiler and install a zoned heating control system.

Having grown up in Australia, I think it is always cold in Ireland and would be satisfied with a simple control switch with a key to make sure nobody ever turns it off but that isn't what they had in mind for these energy efficiency grants.

Having recently stripped everything out of the house, right down to the brickwork and floorboards in some places, I'm cautious about letting any technologies back in without checking whether they are free and trustworthy.

This issue would also appear to fall under the scope of FSFE's Public Money Public Code campaign.

Looking at the last set of heating controls in the house, they have been there for decades. Therefore, I can't help wondering, if I buy some proprietary black box today, will the company behind it still be around when it needs a software upgrade in future? How many of these black boxes have wireless transceivers inside them that will be compromised by security flaws within the next 5-10 years, making another replacement essential?

With free and open technologies, anybody who is using it can potentially make improvements whenever they want. Every time a better algorithm is developed, if all the homes in the country start using it immediately, we will always be at the cutting edge of energy efficiency.

Are you aware of free and open solutions that qualify for this grant funding? Can a solution built with devices like Raspberry Pi and Arduino qualify for the grant?

Please come and share any feedback you have on the FSFE discussion list (join, reply to the thread).

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Continuous integration testing of TeX Live sources

Mon, 22/01/2018 - 10:15am

The TeX Live sources consist in total of around 15000 files and 8.7M lines (see git stats). They integrate several upstream projects, including big libraries like FreeType, Cairo, and Poppler. Changes come in from a variety of sources: external libraries, TeX-specific projects (LuaTeX, pdfTeX etc.), as well as our own adaptations and changes/patches to upstream sources. For quite some time I have wanted to have continuous integration (CI) testing, but since our main repository is based on Subversion, the usual (easy, or the one I know) route via Github and one of the CI testing providers didn’t come to my mind – until last week.

Over the weekend I have set up CI testing for our TeX Live sources by using the following ingredients: git-svn for checkout, Github for hosting, Travis-CI for testing, and a cron job that does the connection. To be more specific:

  • git-svn I use git-svn to check out only the source part of the (otherwise far too big) subversion repository onto my server. This is similar to the git-svn checkout of the whole of TeX Live as I reported here, but contains only the source part.
  • Github The git-svn checkout is pushed to the project TeX-Live/texlive-source on Github.
  • Travis-CI The CI testing is done in the TeX-Live/texlive-source project on Travis-CI (who are offering free services for open source projects, thanks!)

Although this sounds easy, there are a few stumbling blocks: First of all, the .travis.yml file is not contained in the main subversion repository. So adding it to the master tree that is managed via git-svn is not working, because the history is rewritten (git svn rebase). My solution was to create a separate branch travis-ci which adds only the .travis.yml file and merge master.

Travis-CI by default tests all branches, and does not test those not containing a .travis.yml, but to be sure I added an except clause stating that the master branch should not be tested. This way other developers can try different branches, too. The full .travis.yml can be checked on Github, here is the current status:

# .travis.yml for texlive-source CI building
# Norbert Preining
# Public Domain
language: c
branches:
  except:
  - master
before_script:
  - find . -name \*.info -exec touch '{}' \;
before_install:
  - sudo apt-get -qq update
  - sudo apt-get install -y libfontconfig-dev libx11-dev libxmu-dev libxaw7-dev
script: ./Build

What remains is stitching these things together by adding a cron job that regularly does git svn rebase on the master branch, merges the master branch into travis-ci branch, and pushes everything to Github. The current cron job is here:

#!/bin/bash
# cron job for updating texlive-source and pushing it to github for ci
set -e
TLSOURCE=/home/norbert/texlive-source.git
GIT="git --no-pager"
quiet_git() {
    stdout=$(tempfile)
    stderr=$(tempfile)
    if ! $GIT "$@" >$stdout 2>$stderr; then
        echo "STDOUT of git command:"
        cat $stdout
        echo "************"
        cat $stderr >&2
        rm -f $stdout $stderr
        exit 1
    fi
    rm -f $stdout $stderr
}
cd $TLSOURCE
quiet_git checkout master
quiet_git svn rebase
quiet_git checkout travis-ci
# don't use [skip ci] here because we only built the
# last commit, which would stop building
quiet_git merge master -m "merging master"
quiet_git push --all

With this setup we can do CI testing of our changes in the TeX Live sources, and in the future maybe some developers will use separate branches to get testing there, too.

Enjoy.

Norbert Preining https://www.preining.info/blog There and back again

PrimeZ270-p, Intel i7400 review and Debian – 1

Mon, 22/01/2018 - 6:23am

This is going to be a biggish one as well.

This is a continuation from my last blog post .

Before diving into the installation, I had been reading Matthew Garrett’s work for quite a while. Thankfully most of his blog posts do get mirrored on planet.debian.org, hence it is easy to get some idea of what needs to be done, although I have told him (I think I even shared it here) that he should somehow make his site more easily navigable. Trying to find posts on either ‘GPT’ or ‘UEFI’ and to have those posts sorted by date, ascending or descending, is not possible; at least I couldn’t find a way to do it.

The closest I could come to it is using ‘$keyword’ site:https://mjg59.dreamwidth.org/ via a search engine and going through the entries shared therein. This doesn’t mean I don’t value his contribution. It is in fact the opposite. AFAIK he was one of the first people who drew the community’s attention when UEFI came in and only Microsoft Windows could be booted on it, nothing else.

I may be wrong but AFAIK he was the first one to talk about having a shim and was part of getting people to be part of the shim process.

While I’m sure Matthew’s understanding may have evolved significantly from what he had shared before, there were two specific blog posts that I had to re-read before trying to install MS-Windows and then a Debian GNU/Linux system on the machine.

I went over to a friend’s house who had Windows 7 running, used diskpart, and did the change to GPT using the Windows TechNet article.

I had to go the GPT way as I understood that MS-Windows takes all four primary partitions for itself, leaving nothing for any other operating system to use.

I did the conversion to GPT and went with MS-Windows 10, as my current motherboard, and all future Intel motherboards from Gen7/Gen8 onwards, do not support anything less than Windows 10. I did see an unofficial patch floating around on GitHub somewhere but have now lost the reference to it. I had read some of the bug reports of the repo, which seemed to suggest it was still a work in progress.

Now this is where it starts becoming a bit… let’s say interesting.

Now a friend/client of mine offered me a job to review MS-Windows 10, with his product keys of course. I was a bit hesitant as it had been a long time since I had worked with MS-Windows and I didn’t know if I could do it or not; the other concern was a suspicion that I might like it too much. While I did review it, I found –

a. It is one heck of a piece of bloatware – I had thought MS-Windows would have learned by now, but no, they still have to learn that adware and bloatware aren’t solutions. I still can’t get my head wrapped around how a 4.1 GB MS-Windows ISO gets extracted to 20 GB and you still have to install shit-loads of third-party tools to actually get anything done. Just amazed (and not in a good way).

Just to share as an example I still had to get something like Revo Uninstaller as MS-Windows even till date hasn’t learned to uninstall programs cleanly and needs a tool like that to clean the registry and other places to remove the titbits left along the way.

Edit/Update – It still doesn’t have the Fall Creators Update, which is supposed to be another 4 GB+ ISO, and god only knows how much space that will take.

b. It’s still not gold – With all the hoopla around MS-Windows 10 that I had been hearing and the ads I had been seeing, I was under the impression that MS-Windows had turned gold, i.e. it had a stable release, the way Debian will have ‘buster’ sometime around next year, probably around or after DebConf 2019. The next Windows 10 release from Microsoft is due around July 2018, so it’s still a few months off.

c. I had read an insightful article a few years ago by a junior Microsoft employee sharing/emphasizing why MS cannot do GNU/Linux-style volunteer/bazaar development. To put it in not so many words, it came down to the cultural differences in the way the two communities operate. In GNU/Linux one more patch, one more pull request is encouraged, and it may be integrated in that point release, or if it can’t be, it will be in the next point release (unless it changes something much more core/fundamental which needs more in-depth review); MS-Windows, on the other hand, actively discourages that sort of behavior as it means more time for integration and testing, and from the sound of it MS still doesn’t do Continuous Integration (CI), regression testing, etc. as is more and more common in many GNU/Linux projects.

I wish I could have shared the article but I don’t have the link anymore. @Lazyweb, if you would be so kind as to help find that article. The developer had shared some sort of ssh credentials or something to prove who he was, which he later removed, probably because the consequences to him of sharing that insight were not worth it, although the writing seemed valid.

There were many more quibbles but I shared the above ones. For example, copying files from the hard disk to USB disks doesn’t tell you how much time it will take, while in Debian I’ve come to see the time estimate shown for any operation as a given.

Before starting on the main issue, some info beforehand, although I don’t know how relevant that info might be –

Prime Z270-P uses EFI 2.60 by American Megatrends –

/home/shirish> sudo dmesg | grep -i efi
[sudo] password for shirish:
[ 0.000000] efi: EFI v2.60 by American Megatrends

I can share more info. if needed later.

Now, as I understood/interpreted the information found on the web, and from experience, Microsoft makes quite a few more partitions than necessary to get MS-Windows installed.

This is how it stacks up/shows up –

> sudo fdisk -l
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: xxxxxxxxxxxxxxxxxxxxxxxxxxx

Device Start End Sectors Size Type
/dev/sda1 34 262177 262144 128M Microsoft reserved
/dev/sda2 264192 1185791 921600 450M Windows recovery environment
/dev/sda3 1185792 1390591 204800 100M EFI System
/dev/sda4 1390592 3718037503 3716646912 1.7T Microsoft basic data
/dev/sda5 3718037504 3718232063 194560 95M Linux filesystem
/dev/sda6 3718232064 5280731135 1562499072 745.1G Linux filesystem
/dev/sda7 5280731136 7761199103 2480467968 1.2T Linux filesystem
/dev/sda8 7761199104 7814035455 52836352 25.2G Linux swap

I had allotted 2 GB for /boot in the MS-Windows installer, as I had thought it would take only some of that space and leave the rest for Debian GNU/Linux's /boot to put its kernel entries, memory-checking tools and whatever else I wanted to have on /boot/debian, but for some reason I have not yet understood, that didn't work out as I expected.

Device Start End Sectors Size Type
/dev/sda1 34 262177 262144 128M Microsoft reserved
/dev/sda2 264192 1185791 921600 450M Windows recovery environment
/dev/sda3 1185792 1390591 204800 100M EFI System
/dev/sda4 1390592 3718037503 3716646912 1.7T Microsoft basic data

As seen above, the first four partitions are taken by MS-Windows itself. I just wish I had understood how to use GPT disklabels properly so I could figure things out better, but for reasons I don't fully understand the EFI partition is a lowly 100 MB, and I suspect that is where /boot ended up even though I asked for it to be 2 GB. Whether that is UEFI's doing, Microsoft's doing or just some default, I dunno. Having the EFI partition this small hampers the way I want to do things, as will become clear in a short while.

After I installed MS-Windows, I installed Debian GNU/Linux using the net install method.

The following is what I had put on a piece of paper as the partitions I wanted for GNU/Linux –

/boot – 512 MB (should be enough to accommodate a couple of kernel versions, memory-checking and any other tools I might need in the future).

/ – 700 GB – well, admittedly that looks a bit insane, but I do like to play with new programs/binaries as and when possible and don't want to run out of space when I forget to clean things up.

[off-topic, wishlist] One tool I would like to have (and I dunno if it's there) is the ability to know when I installed a package, how many times and how frequently I have used it, and the ability to add small notes or a description to the package. Many a time I have seen that the package description is either too vague or doesn't focus on the practical usefulness of a package to me.

An easy example to share what I mean would be the apt package –

aptitude show apt
Package: apt
Version: 1.6~alpha6
Essential: yes
State: installed
Automatically installed: no
Priority: required
Section: admin
Maintainer: APT Development Team
Architecture: amd64
Uncompressed Size: 3,840 k
Depends: adduser, gpgv | gpgv2 | gpgv1, debian-archive-keyring, libapt-pkg5.0 (>= 1.6~alpha6), libc6 (>= 2.15), libgcc1 (>= 1:3.0), libgnutls30 (>= 3.5.6), libseccomp2 (>=1.0.1), libstdc++6 (>= 5.2)
Recommends: ca-certificates
Suggests: apt-doc, aptitude | synaptic | wajig, dpkg-dev (>= 1.17.2), gnupg | gnupg2 | gnupg1, powermgmt-base, python-apt
Breaks: apt-transport-https (< 1.5~alpha4~), apt-utils (< 1.3~exp2~), aptitude (< 0.8.10)
Replaces: apt-transport-https (< 1.5~alpha4~), apt-utils (< 1.3~exp2~)
Provides: apt-transport-https (= 1.6~alpha6)
Description: commandline package manager
This package provides commandline tools for searching and managing as well as querying information about packages as a low-level access to all features of the libapt-pkg library.

These include:
* apt-get for retrieval of packages and information about them from authenticated sources and for installation, upgrade and removal of packages together with their dependencies
* apt-cache for querying available information about installed as well as installable packages
* apt-cdrom to use removable media as a source for packages
* apt-config as an interface to the configuration settings
* apt-key as an interface to manage authentication keys

Now while I love all the various tools that the apt package has, I do have a special fondness for

$ apt-cache rdepends <package>

as it gives another view of a package, library or shared library that I may be interested in, namely which other packages are in its orbit.
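
As a small illustration of that kind of query (the package names below are just examples, not taken from the output above):

apt-cache rdepends apt        # which packages depend on apt
apt-cache rdepends zlib1g     # the same question for a shared-library package

The output is simply the package name followed by a 'Reverse Depends:' list, one reverse dependency per line.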

Over a period of time it becomes easy to forget packages that you don't use day-to-day, hence having such a tool where you can put personal notes about packages would be a god-send. Another use could be reminders of tickets posted upstream or something along those lines. I don't know of any tool/package which does something along those lines, though a crude partial workaround for the installation dates is sketched just after this aside. [/off-topic, wishlist]
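
Still on that aside: this is not the integrated tool wished for above, but the installation (and upgrade) dates at least can be dug out of dpkg's own log files. A minimal sketch, assuming the standard Debian log location and rotation:

grep " install " /var/log/dpkg.log          # install actions recorded in the current log
zgrep " install " /var/log/dpkg.log.*.gz    # the same for older, compressed rotated logs

Each matching line carries a timestamp, the package name and the versions involved; usage counts and personal notes would still need a tool of their own.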

/home – 1.2 TB

swap – 25.2 GB

I admit I went a bit overboard on swap space, but as and when I get more memory I should at least have swap at 1:1, right? I am not sure whether the old rules still apply or not.

Then I used Debian buster alpha 2 netinstall iso

https://cdimage.debian.org/cdimage/buster_di_alpha2/amd64/iso-cd/debian-buster-DI-alpha2-amd64-netinst.iso and put it on the USB stick. I used sha1sum to ensure that the netinstall ISO was the same as the original one, checking against https://cdimage.debian.org/cdimage/buster_di_alpha2/amd64/iso-cd/SHA1SUMS

After that, a simple dd with the appropriate if= and of= arguments was enough to copy the netinstall image to the USB stick.
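
For completeness, a minimal sketch of those two steps; here /dev/sdX is a placeholder, and the real device name must be checked first (e.g. with lsblk), since dd will happily overwrite whatever it is pointed at:

sha1sum debian-buster-DI-alpha2-amd64-netinst.iso    # compare against the matching line in SHA1SUMS
sudo dd if=debian-buster-DI-alpha2-amd64-netinst.iso of=/dev/sdX bs=4M status=progress
sync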

I did have some issues with the installation, which I'll share in the next post, but the most critical one was that I again had to make a /boot, and even though I made /boot a separate partition and gave 1 GB to it during the partitioning step, I ended up with only about 100 MB and I have no idea why.

/dev/sda5 3718037504 3718232063 194560 95M Linux filesystem

> df -h /boot
Filesystem Size Used Avail Use% Mounted on
/dev/sda5 88M 68M 14M 84% /boot

/home/shirish> ls -lh /boot
total 55M
-rw-r--r-- 1 root root 193K Dec 22 19:42 config-4.14.0-2-amd64
-rw-r--r-- 1 root root 193K Jan 15 01:15 config-4.14.0-3-amd64
drwx------ 3 root root 1.0K Jan 1 1970 efi
drwxr-xr-x 5 root root 1.0K Jan 20 10:40 grub
-rw-r--r-- 1 root root 19M Jan 17 10:40 initrd.img-4.14.0-2-amd64
-rw-r--r-- 1 root root 21M Jan 20 10:40 initrd.img-4.14.0-3-amd64
drwx------ 2 root root 12K Jan 1 17:49 lost+found
-rw-r--r-- 1 root root 2.9M Dec 22 19:42 System.map-4.14.0-2-amd64
-rw-r--r-- 1 root root 2.9M Jan 15 01:15 System.map-4.14.0-3-amd64
-rw-r--r-- 1 root root 4.4M Dec 22 19:42 vmlinuz-4.14.0-2-amd64
-rw-r--r-- 1 root root 4.7M Jan 15 01:15 vmlinuz-4.14.0-3-amd64

root@debian:/boot/efi/EFI# ls -lh
total 3.0K
drwx------ 2 root root 1.0K Dec 31 21:38 Boot
drwx------ 2 root root 1.0K Dec 31 19:23 debian
drwx------ 4 root root 1.0K Dec 31 21:32 Microsoft

I would be the first to say I don't really understand this EFI business.

The only thing I do understand is that it's good that, even without an OS, it becomes easier to see which components you have changed or added and whether they would work; with a legacy BIOS, getting info on components was iffy at best.

There have been other issues with EFI which I may take up in another blog post, but for now I would be happy if somebody can share –

how to have a big /boot so that Debian's boot files don't end up on such a small partition. I don't see any value in having a bigger /boot for MS-Windows unless there is a way to also get a grub2 pointer/header added to the MS-Windows bootloader. I will share the reasons for this in the next blog post.

I am open to reinstalling both MS-Windows and Debian from scratch, although that would happen when debian-buster-alpha3 arrives. Any answer to the above would give me something to try, and I will share whether I get the desired result.

Looking forward to answers.

shirishag75 https://flossexperiences.wordpress.com #planet-debian – Experiences in the community

French Gender-Neutral Translation for Roundcube

Mon, 22/01/2018 - 6:00am

Here's a quick blog post to tell the world I'm now doing a French gender-neutral translation for Roundcube.

A while ago, someone wrote on the Riseup translation list to complain about the current fr_FR translation. French is indeed a very gendered language and it is commonplace in radical spaces to use gender-neutral terminology.

So yeah, here it is: https://github.com/baldurmen/roundcube_fr_FEM

I haven't tested the UI integration yet, but I'll do that once the Riseup folks integrate it to their Roundcube instance.

Louis-Philippe Véronneau https://veronneau.org/ Louis-Philippe Véronneau

#15: Tidyverse and data.table, sitting side by side ... (Part 1)

Sun, 21/01/2018 - 11:40pm

Welcome to the fifteenth post in the rarely rational R rambling series, or R4 for short. There are two posts I have been meaning to get out for a bit, and hope to get to shortly---but in the meantime we are going to start something else.

Another longer-running idea I had was to present some simple application cases with (one or more) side-by-side code comparisons. Why? Well at times it feels like R, and the R community, are being split. You're either with one (increasingly "religious" in their defense of their deemed-superior approach) side, or the other. And that is of course utter nonsense. It's all R after all.

Programming, just like other fields using engineering methods and thinking, is about making choices, and trading off between certain aspects. A simple example is the fairly well-known trade-off between memory use and speed: think e.g. of a hash map allowing for faster lookup at the cost of some more memory. Generally speaking, solutions are rarely limited to just one way, or just one approach. So it pays off to know your tools, and to choose wisely among all available options. Having choices is having options, and those tend to have non-negative premiums to take advantage of. Locking yourself into one and just one paradigm can never be better.

In that spirit, I want to (eventually) show a few simple comparisons of code being done two distinct ways.

One obvious first candidate for this is the gunsales repository with some R code which backs an earlier NY Times article. I got involved for a similar reason, and updated the code from its initial form. Then again, this project also helped motivate what we did later with the x13binary package which permits automated installation of the X13-ARIMA-SEATS binary to support Christoph's excellent seasonal CRAN package (and website) for which we now have a forthcoming JSS paper. But the actual code example is not that interesting / a bit further off the mainstream because of the more specialised seasonal ARIMA modeling.

But then this week I found a much simpler and shorter example, and quickly converted its code. The code comes from the inaugural data science lesson at The Crosstab, a fabulous site by G. Elliot Morris (who may be the highest-energy undergrad I have come across lately) focussed on political polling, forecasts, and election outcomes. Lesson 1 is a simple introduction, and averages some polls of the 2016 US Presidential Election.

Complete Code using Approach "TV"

Elliot does a fine job walking the reader through his code so I will be brief and simply quote it in one piece:

## Getting the polls
library(readr)
polls_2016 <- read_tsv(url("http://elections.huffingtonpost.com/pollster/api/v2/questions/16-US-Pres-GE%20TrumpvClinton/poll-responses-clean.tsv"))

## Wrangling the polls
library(dplyr)
polls_2016 <- polls_2016 %>%
    filter(sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"))
library(lubridate)
polls_2016 <- polls_2016 %>%
    mutate(end_date = ymd(end_date))
polls_2016 <- polls_2016 %>%
    right_join(data.frame(end_date = seq.Date(min(polls_2016$end_date),
                                              max(polls_2016$end_date), by="days")))

## Average the polls
polls_2016 <- polls_2016 %>%
    group_by(end_date) %>%
    summarise(Clinton = mean(Clinton),
              Trump = mean(Trump))

library(zoo)
rolling_average <- polls_2016 %>%
    mutate(Clinton.Margin = Clinton-Trump,
           Clinton.Avg = rollapply(Clinton.Margin, width=14,
                                   FUN=function(x){mean(x, na.rm=TRUE)},
                                   by=1, partial=TRUE, fill=NA, align="right"))

library(ggplot2)
ggplot(rolling_average) +
    geom_line(aes(x=end_date, y=Clinton.Avg), col="blue") +
    geom_point(aes(x=end_date, y=Clinton.Margin))

It uses five packages to i) read some data off the interwebs, ii) filter / subset / modify it, iii) perform a right (outer) join with itself, iv) average the per-day polls and then create rolling averages over 14 days, and v) plot. Several standard verbs are used: filter(), mutate(), right_join(), group_by(), and summarise(). One non-(tidy)verse function is rollapply(), which comes from zoo, a popular package for time-series data.

Complete Code using Approach "DT"

As I will show below, we can do the same with fewer packages as data.table covers the reading, slicing/dicing and time conversion. We still need zoo for its rollapply() and of course the same plotting code:

## Getting the polls
library(data.table)
pollsDT <- fread("http://elections.huffingtonpost.com/pollster/api/v2/questions/16-US-Pres-GE%20TrumpvClinton/poll-responses-clean.tsv")

## Wrangling the polls
pollsDT <- pollsDT[sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"), ]
pollsDT[, end_date := as.IDate(end_date)]
pollsDT <- pollsDT[ data.table(end_date = seq(min(pollsDT[,end_date]),
                                              max(pollsDT[,end_date]), by="days")), on="end_date"]

## Average the polls
library(zoo)
pollsDT <- pollsDT[, .(Clinton=mean(Clinton), Trump=mean(Trump)), by=end_date]
pollsDT[, Clinton.Margin := Clinton-Trump]
pollsDT[, Clinton.Avg := rollapply(Clinton.Margin, width=14,
                                   FUN=function(x){mean(x, na.rm=TRUE)},
                                   by=1, partial=TRUE, fill=NA, align="right")]

library(ggplot2)
ggplot(pollsDT) +
    geom_line(aes(x=end_date, y=Clinton.Avg), col="blue") +
    geom_point(aes(x=end_date, y=Clinton.Margin))

This uses several of the components of data.table which are often called [i, j, by=...]. Rows are selected (i), columns are either modified (via := assignment) or summarised (via =), and grouping is undertaken via by=.... The outer join is done by having one data.table object indexed by another, and is pretty standard too. That allows us to do all transformations in three lines. We then create the per-day average by grouping by day, compute the margin and construct its rolling average as before. The resulting chart is, unsurprisingly, the same.

Benchmark Reading

We can look at how the two approaches do on getting data read into our session. For simplicity, we will read a local file to keep the (fixed) download aspect out of it:

R> url <- "http://elections.huffingtonpost.com/pollster/api/v2/questions/16-US-Pres-GE%20TrumpvClinton/poll-responses-clean.tsv"
R> file <- "/tmp/poll-responses-clean.tsv"
R> download.file(url, destfile=file, quiet=TRUE)
R> res <- microbenchmark(tidy=suppressMessages(readr::read_tsv(file)),
+                        dt=data.table::fread(file, showProgress=FALSE))
R> res
Unit: milliseconds
 expr     min      lq    mean  median      uq      max neval
 tidy 6.67777 6.83458 7.13434 6.98484 7.25831  9.27452   100
   dt 1.98890 2.04457 2.37916 2.08261 2.14040 28.86885   100
R>

That is a clear relative difference, though the absolute amount of time is not that relevant for such a small (demo) dataset.

Benchmark Processing

We can also look at the processing part:

R> rdin <- suppressMessages(readr::read_tsv(file))
R> dtin <- data.table::fread(file, showProgress=FALSE)
R>
R> library(dplyr)
R> library(lubridate)
R> library(zoo)
R>
R> transformTV <- function(polls_2016=rdin) {
+     polls_2016 <- polls_2016 %>%
+         filter(sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"))
+     polls_2016 <- polls_2016 %>%
+         mutate(end_date = ymd(end_date))
+     polls_2016 <- polls_2016 %>%
+         right_join(data.frame(end_date = seq.Date(min(polls_2016$end_date),
+                                                   max(polls_2016$end_date), by="days")))
+     polls_2016 <- polls_2016 %>%
+         group_by(end_date) %>%
+         summarise(Clinton = mean(Clinton),
+                   Trump = mean(Trump))
+
+     rolling_average <- polls_2016 %>%
+         mutate(Clinton.Margin = Clinton-Trump,
+                Clinton.Avg = rollapply(Clinton.Margin, width=14,
+                                        FUN=function(x){mean(x, na.rm=TRUE)},
+                                        by=1, partial=TRUE, fill=NA, align="right"))
+ }
R>
R> transformDT <- function(dtin) {
+     pollsDT <- copy(dtin) ## extra work to protect from reference semantics for benchmark
+     pollsDT <- pollsDT[sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"), ]
+     pollsDT[, end_date := as.IDate(end_date)]
+     pollsDT <- pollsDT[ data.table(end_date = seq(min(pollsDT[,end_date]),
+                                                   max(pollsDT[,end_date]), by="days")), on="end_date"]
+     pollsDT <- pollsDT[, .(Clinton=mean(Clinton), Trump=mean(Trump)),
+                        by=end_date][, Clinton.Margin := Clinton-Trump]
+     pollsDT[, Clinton.Avg := rollapply(Clinton.Margin, width=14,
+                                        FUN=function(x){mean(x, na.rm=TRUE)},
+                                        by=1, partial=TRUE, fill=NA, align="right")]
+ }
R>
R> res <- microbenchmark(tidy=suppressMessages(transformTV(rdin)),
+                        dt=transformDT(dtin))
R> res
Unit: milliseconds
 expr      min       lq     mean   median       uq      max neval
 tidy 12.54723 13.18643 15.29676 13.73418 14.71008 104.5754   100
   dt  7.66842  8.02404  8.60915  8.29984  8.72071  17.7818   100
R>

Not quite a factor of two on the small data set, but again a clear advantage. data.table has a reputation for doing really well for large datasets; here we see that it is also faster for small datasets.

Side-by-side

Stripping out the reading, as well as the plotting, both of which are about the same, we can compare the essential data operations side by side.

Summary

We found a simple task solved using code and packages from an increasingly popular sub-culture within R, and contrasted it with a second approach. We find the second approach to i) have fewer dependencies, ii) use less code, and iii) run faster.

Now, undoubtedly the former approach will have its staunch defenders (and that is all good and well; after all, choice is good, and even thirty years later some still debate vi versus emacs endlessly), but I thought it instructive to at least be able to make an informed comparison.

Acknowledgements

My thanks to G. Elliot Morris for a fine example, and of course a fine blog and (if somewhat hyperactive) Twitter account.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

New year haul

Sun, 21/01/2018 - 12:08am

Some newly acquired books. This is a pretty wide variety of impulse purchases, filled with the optimism of a new year with more reading time.

Libba Bray — Beauty Queens (sff)
Sarah Gailey — River of Teeth (sff)
Seanan McGuire — Down Among the Sticks and Bones (sff)
Alexandra Pierce & Mimi Mondal (ed.) — Luminescent Threads (nonfiction anthology)
Karen Marie Moning — Darkfever (sff)
Nnedi Okorafor — Binti (sff)
Malka Older — Infomocracy (sff)
Brett Slatkin — Effective Python (nonfiction)
Zeynep Tufekci — Twitter and Tear Gas (nonfiction)
Martha Wells — All Systems Red (sff)
Helen S. Wright — A Matter of Oaths (sff)
J.Y. Yang — Waiting on a Bright Moon (sff)

Several of these are novellas that were on sale over the holidays; the rest came from a combination of reviews and random on-line book discussions.

The year hasn't been great for reading time so far, but I do have a couple of things ready to review and a third that I'm nearly done with, which is not a horrible start.

Russ Allbery https://www.eyrie.org/~eagle/ Eagle's Path

PC desktop build, Intel, spectre issues etc.

Sat, 20/01/2018 - 11:05pm

This is going to be a longish one.

I have been using desktop computers for around a couple of decades now. My first two systems were an Intel Pentium III and then a Pentium Dual-core, the first one on a Kobian/Mercury motherboard. The motherboards were actually called Mercury, a brand which was later sold to Kobian, which kept the brand name. The motherboards and the CPUs/processors used to be cheap. One could set up a decentish low-end system with display for around INR 40k/-, which seemed reasonable given that as a country we had just come out of the non-alignment movement and had also chosen to come out of isolationist tendencies (technological and otherwise). Most middle-class income families got their first taste of computers after y2k. There were quite a few y2k incomes, which prompted the Government to lower duties further.

One of the highlights shown when satellite TV came in 1991, by CNN (probably CNN International), was the coming down of the Berlin Wall. There were many of us who were completely ignorant of world politics or of what was happening in other parts of the world.

Computer systems at those times were considered a luxury item and duties were sky-high (between 1992-2001). The launch of Mars Pathfinder and its subsequent successful landing on the Martian surface also catapulted people's imagination about PCs and micro-processors.

I can still recall the excitement among young people of my age on first seeing the liftoff from Cape Canaveral and then later the processed images from Spirit's cameras showing a desolate, desert-type landscape. We also witnessed the beginnings of the 'International Space Station' (ISS).

A few of my friends and I had drunk a lot of the Carl Sagan and many other sci-fi kool-aids/stories. Star Trek, the movies and the universal values held/shared by them were a major influence on all our lives.

People came to know about citizen-based/distributed science projects, the y2k fear appeared to be unfounded, and all these factors and probably a few more prompted the Government of India to reduce duties on motherboards, processors and components, as well as to take computers off the restricted list. This led to competition and finally to the common man being able to dream of a system sooner rather than later. Y2K also kick-started the beginnings of the Indian software industry, which is the bread and butter of many middle-class men and women who are in the service industry using technology directly or indirectly.

In 2002 I bought my first system, an Intel Pentium III on an i810 chipset (integrated graphics) with 256 MB of SDRAM, which was supposed to be sufficient for the tasks it was being used for: some light gaming, some web-mail, watching movies, etc., running on a Mercury board. I don't remember the code-name, partly because the code-names are/were really weird and partly because it is just too long ago. I remember using Windows '98 and trying to install one of the early GNU/Linux variants on that machine. If memory serves right, you had to flick a jumper (like a switch) to use the extended memory.

I do not know/remember what happened, but I think somewhere within a year or two of that time-frame Mercury India filed for bankruptcy and the name and manufacturing were sold to Kobian. After Kobian took over the ownership, it said it would honour neither the 3/5 year warranty nor even repairs on the motherboards Mercury had sold; that created a lot of ill will against the company and relegated it to the bottom of the pile for both experienced and new system-builders. Also, Mercury motherboards weren't reputed/known to have a long life, although the one I had gave me quite a decent run.

The next machine I purchased was a Pentium Dual-core (around 2009/2010), an LGA Willamette which had out-of-order execution; the Meltdown bug which is making news nowadays has history going this far back. I think I bought it in 45nm, which was a huge jump from the previous version, although still secure in the mATX package. Again the board was from Mercury. (Intel 845 chipset, 2 GB of DDR2 RAM, and SATA came to stay.)

So Meltdown has been in existence for 10-12-odd years and is in everything which uses either Intel or ARM processors.

As you can probably make out, most systems arrived here stretched out 2-3 years later than when they were launched in the American and/or European markets. Also, business or tourism travel was neither as easy, smooth nor transparent as it is today. All of which added to the delay in getting new products into India.

Sadly, the Indian market is similar to other countries in that Intel is used in more than 90% of machines. I know of a few institutions (though pretty rare) who insisted on and got AMD solutions.

That was the time when Gigabyte came onto the scene, which formed the basis of the Wolfdale-3M 45nm system; it was in the same price range as the earlier models, and offered a weeny tiny bit of additional graphics performance. To the best of my knowledge, it was perhaps the first budget motherboard to offer solid-state capacitors. The mobo-processor bundle used to be in the range of INR 7-8k excluding RAM, cabinet etc. I had a Philips 17″ CRT display which ran for a good decade or so, so I just had to get a new cabinet, motherboard, CPU and RAM and was good to go.

A few months later, at a hardware exhibition held in the city, I was invited to an Asus party; Asus was then just putting a toe-hold in the Indian market. I went to the do and enjoyed myself. They had a small competition where they asked some questions and asked if people had queries. To my surprise, I found that most people who were there were hardware vendors and, for one reason or another, they chose to remain silent. Hence I got an AMD Asus board. This is separate from another Gigabyte motherboard which I also won the same year in another competition in the same time-frame. Both were mid-range (ATX) motherboards.

As I had just bought a Gigabyte (mATX) motherboard and had made the build, I had to give both of those motherboards away, one to a friend and one to my uncle, and both were pleased with the AMD-based mobos, which they paired with AMD processors. At that time AMD had one-upped Intel in both graphics and even bare computing, especially at the middle level, and they were striving to push into new markets.

Apart from the initial system bought, most of my systems, when being changed, were in the INR 20-25k/- budget, including any and all accessories I bought later.

The only really expensive parts I have purchased were an external HDD (1 TB WD Passport) and then a ViewSonic 17″ LCD, which together set me back around INR 10k/-, but both have given me adequate performance (both have outlived their warranty years), with the monitor being used almost 24×7 over 6 years or so, of course under GNU/Linux, specifically Debian. Both have been extremely good value for the money.

As I had been exposed to both brands of motherboard, I had been following those and other motherboards as well. What was and has been interesting to observe is what Asus did later: focus more on the high-end gaming market, while Gigabyte continued to dilute its energy across both the mid- and high-end motherboards.

Cut to 2017, and I had seen quite a few reports –

http://www.pcstats.com/NewsView.cfm?NewsID=131618

http://www.digitimes.com/news/a20170904PD207.html

http://www.guru3d.com/news-story/asus-has-the-largest-high-end-intel-motherboard-share.html

All of which point to the fact that Asus had cornered a large percentage of the market, and specifically the gaming market. There are no formal numbers, as both Asus and Gigabyte choose to release only APAC numbers rather than a country-wide split, which would have made for some interesting reading.

Just so that people do not presume anything: there are about 4-5 motherboard vendors in the Indian market. There is Asus at the top (I believe), followed by Gigabyte, with Intel at a distant 3rd place (because it's too expensive). There are also pockets of ASRock and MSI, and I know of people who follow them religiously, although their mobos are supposed to be somewhat more expensive than the two above. Asus and Gigabyte do try to fight it out with each other, but each has its core competency, I believe, with Asus being used by heavy gamers and overclockers more than Gigabyte.

Anyway, come October 2017 my main desktop died and I was left, as they say, up the creek without a paddle. I didn't even have Net access for about 3 weeks due to BSNL's or PMC's foolishness, and then later small riots broke out due to the Koregaon Bhima conflict.

This led to a situation where I had to buy/build a system with oldish/half knowledge. I was open to having an AMD system, but both Datacare and Rashi Peripherals, Pune, both of whom used to deal in AMD systems, shared that they had stopped dealing in AMD stuff some time back. While Datacare had AMD mobos, getting processors was an issue. Both vendors are near my home, so if I buy from them, getting support becomes a non-issue. I could have gone out of my way to get an AMD processor, but then getting support could have been an issue, as I would have had to travel and I do not know the other vendors well enough. Hence I fell back to the Intel platform.

I asked around quite a few PC retailers and distributors and found that the Asus Prime Z270-P was the only mid-range motherboard available at that time. I did come to know a bit later of other motherboards in the Z270 series, but most vendors didn't/don't stock them, as there is capital, interest and stocking cost involved.

History – Historically, there has also been a huge time lag between worldwide announcements of motherboards, processors etc., the announcements of sale in India, and actually getting hands-on with the newest motherboards and processors, as seen above. This has led to quite a bit of frustration for many a user. I have known of many a soul visiting Lamington Road, Mumbai to get the latest motherboard or processor. Even today this system flourishes, as Mumbai has an international airport and there is always a demand and people willing to pay a premium for the newest processor/motherboard even before any reviews are in.

I was highly surprised to learn recently that Prime Z370-P motherboards are already selling (just 3 months late) with the Intel 8th generation processors, although these are still coming in as samples rather than the torrent that some of the other motherboard combos might be.

In the end I bought an Intel I7400 chip and an Asus Prime Z270-P motherboard with 8 GB of 2400 MHz Corsair RAM and a 4 TB WD Green (5400 rpm) HDD, with a Circle 545 cabinet and (the almost criminal) 400 Watt SMPS. I later came to know that it's not really even 400 Watts, but around 20-25% less. The whole package cost me north of INR 50k/-, and I still need to spend on a better SMPS (probably a Corsair or Cooler Master 80+ 600/650 SMPS) along with a few accessories to complete the system.

I will be changing the PSU most probably next week.

Disclosure – The neatness you see is not mine. I was unsure if I would be able to put the heatsink on the CPU properly, as that is the most sensitive part while building a system. A bent pin on the CPU could play havoc as well as void the warranty on the CPU, the motherboard or both. One new thing I saw were the knobs on the heatsink fan, something I hadn't seen before. The vendor did the fixing of the processor on the mobo for me, as well as tying up the remaining power cables without being asked, for which I am and was grateful, and I will definitely give him more business as and when I need components.

Future – While it's OK for now, I'm still using a pretty old 2-speaker setup which I hope to upgrade to a 2.1/3.1 speaker setup, get a full 64 GB of 2400 MHz Kingston/G.Skill/Corsair memory, and an M.2 512 GB SSD.

If I do get the Taiwan DebConf bursary, I hope to buy some or all of the above plus a Samsung or some other Android/Replicant/Librem smartphone. I have also been looking for a vastly simplified smartphone for my mum, with big letters and everything, but that has proven impossible to find in the Indian market. Of course this all depends on whether I get the bursary, and even then on whether global warranty and the currency exchange rate work out in my favour vis-a-vis what I would have to pay in India.

Apart from the above, Taiwan is supposed to be a pretty good source for graphic novels, manga comics and lots of RPG games at very cheap prices, with covers and hand-drawn material etc. All of this is based upon a few friends' anecdotal experiences, so I dunno if all of that would still hold true if I manage to be there.

There are also quite a few chip foundries, and maybe during DebConf we could have a visit to one of them if possible. It would be rewarding if the visit were to a 45nm or lower chip foundry, as India is still stuck at the 65nm range to date.

I will be sharing my experience with the board, the CPU, the expectations I had from the Intel chip, and the somewhat disappointing experience of using Debian on the new board in the next post; not necessarily Debian's fault, but rather the free software ecosystem being at fault here.

Feel free to point out any mistakes you find, grammatical or otherwise. The blog post has been in the works for over a couple of weeks, so it's possible for mistakes to creep in.

shirishag75 https://flossexperiences.wordpress.com #planet-debian – Experiences in the community

TLCockpit v0.8

Sat, 20/01/2018 - 3:32pm

Today I released v0.8 of TLCockpit, the GUI front-end for the TeX Live Manager tlmgr. I spent the winter holidays in updating and polishing, but also in helping me debug problems that users have reported. Hopefully the new version works better for all.

If you are looking for a general introduction to TLCockpit, please see the blog introducing it. Here I only want to introduce the changes made since the last release:

  • add debug facility: It is now possible to pass -d to tlcockpit to activate debugging output; there is also -dd for more verbose debugging (see the short invocation example after this list).
  • select mirror facility: The edit screen for the repository setting now allows selecting from the current list of mirrors, see the following screenshot:
  • initial loading speedup: Till now we used to parse the json output of tlmgr, which included everything the whole database contains. We now load the initial minimal information via info --data and load additional data on demand when details for a package are shown. This should especially make a difference on systems without a compiled json Perl library available.
  • fixed self update: In the previous version, updating the TeX Live Manager itself was not properly working – it was updated but the application itself became unresponsive afterwards. This is hopefully fixed (although this is really tricky).
  • status indicator: The status indicator has moved from the menu bar (where it was somehow a stranger) to below the package listing, and now also includes the currently running command, see screenshot after the next item.
  • nice spinner: Only an eye-candy, but I added a rotating spinner while loading the database, updates, backups, or doing postactions. See the attached screenshot, which also shows the new location of the status indicator and the additional information provided.
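
As a minimal invocation sketch for the new -d/-dd switches mentioned in the first item (assuming the tlcockpit script is on your PATH as shipped with TeX Live):

tlcockpit -d     # start with debugging output enabled
tlcockpit -dd    # start with more verbose debugging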

I hope that this version is more reliable, stable, and easier to use. As usual, please use the issue page of the github project to report problems.

TeX Live should contain the new version starting from tomorrow.

Enjoy.

Norbert Preining https://www.preining.info/blog There and back again

Suppressing color output of the Google Repo tool

Fri, 19/01/2018 - 7:51am
On Windows, in the cmd shell, the color control characters generated by the Google Repo tool (or its Windows port made by ESRLabs) or git appear as garbage. Unfortunately, the Google Repo tool, besides having a non-google-able name, lacks documentation regarding its options, so sometimes the only way to find out the option I want is to look in the code.
To avoid repeatedly digging through the code for this, future self, here is how you disable color output in the repo tool with the info subcommand:
repo --color=never info

Other options are 'auto' and 'always', but for some reason, auto does not do the right thing (tm) in Windows and garbage is shown with auto.

eddyp noreply@blogger.com Rambling around foo
