Feed aggregator

Tobias Mueller: Talking at OSDNConf in Kyiv, Ukraine

Planet GNOME - Tue, 02/10/2018 - 2:08pm

I was fortunate enough to be invited to Kyiv to keynote (video) the local Open Source Developer Network conference. Actually, I had two presentations. The opening keynote was on building a more secure operating system with fewer active security measures. I presented a few case studies on why I believe that GNOME is well positioned to deliver a nice and secure user experience. The second talk was on PrivacyScore and how I believe that it makes the world a little bit better by making the security and privacy properties of Web sites transparent.

The audience was super engaged, which made it very nice to be on stage. The questions, also in the hallway track, were surprisingly technical. In fact, most of the conference was about kernel stuff, at least in the English-speaking track. There is certainly a lot of potential for Free Software communities. I hope we can recruit these excellent people for writing Free Software.

Lennart eventually talked about casync and how you can use it to ship your images. I’m especially interested in the cryptography involved to defend against certain attacks. We also talked about how to protect the integrity of the files on the offline disk, e.g. when your machine is off and someone can access the (encrypted) drive. Currently, LUKS does not use authenticated encryption, which makes it possible for an attacker to flip some bits in the disk image you read.
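To see why the missing authentication matters, here is a toy Python sketch (a stand-in XOR stream cipher, not LUKS or any real disk cipher): with unauthenticated encryption, an attacker who can modify the ciphertext can flip chosen plaintext bits, and decryption succeeds without any sign of tampering.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # toy keystream derived from the key -- NOT a real disk cipher
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"disk-encryption-key"
plaintext = b"pay 100 EUR"
ciphertext = xor(plaintext, keystream(key, len(plaintext)))

# An attacker who can write to the drive flips one ciphertext bit,
# without knowing the key and without anything detecting it:
tampered = bytearray(ciphertext)
tampered[4] ^= 0x08  # '1' (0x31) becomes '9' (0x39) after decryption
decrypted = xor(bytes(tampered), keystream(key, len(plaintext)))
print(decrypted)  # b'pay 900 EUR'
```

Authenticated encryption (e.g. an AEAD mode, or dm-integrity under LUKS2) would make the tampered ciphertext fail verification instead of silently decrypting to altered data.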

Canonical’s Christian Brauner talked about mounting in user namespaces which, historically, seems to have been a contentious topic. I found that interesting, because I think we currently have a problem: Filesystem drivers are not meant for dealing with maliciously crafted images. Let that sink in for a moment. Your kernel cannot deal with arbitrary data on the pen drive you’ve found on the street and are now inserting into your system. So yeah, I think we should work on allowing random images to be inserted without risking a crash of the system. One approach might be libguestfs, but launching a full VM every time might be a bit too much. Also, you might somehow want to promote certain drives as trusted enough to get the benefit of higher bandwidth and lower latency. So yeah, much work left to be done. Ouf.

Then, Tycho Andersen talked about forwarding syscalls to userspace. Pretty exciting and potentially related to the disk image problem mentioned above. His opening example was the loading of a kernel module from within a container. This is scary, of course, and you shouldn’t be able to do it. But you may very well want that if you have to deal with (proprietary) legacy code like Cisco, his employer, does. Eventually, they provide a special seccomp filter which forwards all the syscall details back to userspace.

As I’ve already mentioned, the conference was highly technical and kernel focussed. That’s very good, because I could have enlightening discussions which hopefully move me forward in solving a few of my problems. Another one of those I was able to discuss with Jakob on the days around the conference; it involves the capabilities of USB keyboards. Eventually, you wouldn’t want your machine to be hijacked by a malicious device posing as a security token like the YubiKey. I have an idea there involving modifying the USB descriptor to remove the capability of sending funny keys. Stay tuned.

Anyway, we’ve visited the city and the country before and after the event and it’s certainly worth a visit. I was especially surprised by the coffee that was readily available in high quality and large quantities.

Hans de Goede: Announcing flickerfree boot for Fedora 29

Planet GNOME - Mon, 01/10/2018 - 2:11pm
A big project I've been working on recently for Fedora Workstation is what we call flickerfree boot. The idea here is that the firmware lights up the display in its native mode and no further modesets are done after that. Likewise there are also no unnecessary jarring graphical transitions.

Basically the machine boots up in UEFI mode, shows its vendor logo and then the screen keeps showing the vendor logo all the way to a smooth fade into the gdm screen. Here is a video of my main workstation booting this way.

Part of this effort is the hidden grub menu change for Fedora 29. I'm happy to announce that most of the other flickerfree changes have also landed for Fedora 29:

  1. There have been changes to shim and grub to not mess with the EFI framebuffer, leaving the vendor logo intact, when they don't have anything to display (so when grub is hidden)

  2. There have been changes to the kernel to properly inherit the EFI framebuffer when using Intel integrated graphics, and to delay switching the display to the framebuffer-console until the first kernel message is printed. Together with changes to make "quiet" really quiet (except for oopses/panics) this means that the kernel now also leaves the EFI framebuffer with the logo intact if quiet is used.

  3. There have been changes to plymouth to allow pressing ESC as soon as plymouth loads to get detailed boot messages.

With all these changes in place it is possible to get a fully flickerfree boot today, as the video of my workstation shows. This video is made with a stock Fedora 29 with 2 small kernel commandline tweaks:

  1. Add "i915.fastboot=1" to the kernel commandline, this removes the first and last modeset during the boot when using the i915 driver.

  2. Add "plymouth.splash-delay=20" to the kernel commandline. Normally plymouth waits 5 seconds before showing the charging Fedora logo so that on systems which boot in less then 5 seconds the system simply immediately transitions to gdm. On systems which take slightly longer to boot this makes the charging Fedora logo show up, which IMHO makes the boot less fluid. This option increases the time plymouth waits with showing the splash to 20 seconds.

So if you have a machine with Intel integrated graphics and booting in UEFI mode, you can give flickerfree boot support a spin with Fedora 29 by just adding these 2 commandline options. Note this requires the new grub hidden menu feature to be enabled, see the FAQ on this.

The need for these 2 commandline options shows that the work on this is not yet entirely complete; here is my current TODO list for finishing this feature:

  1. Work with the upstream i915 driver devs to make i915.fastboot the default. If you try i915.fastboot=1 and it causes problems for you please let me know.

  2. Write a new plymouth theme based on the spinner theme which uses the vendor logo as background and draws the spinner beneath it. Since this keeps the logo and black background as-is and just draws the spinner on top, it avoids the current visually jarring transition from the logo screen to plymouth, allowing us to set plymouth.splash-delay to 0. This also has the advantage that the spinner will provide visual feedback that something is actually happening as soon as plymouth loads.

  3. Look into making this work with AMD and NVIDIA graphics.

Please give the new flickerfree support a spin and let me know if you have any issues with it.

Free software activities in September 2018

Planet Debian - Sun, 30/09/2018 - 9:02pm

Here is my monthly update covering what I have been doing in the free software world during September 2018 (previous month):

More hacking on the Lintian static analysis tool for Debian packages:

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
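The consensus idea can be sketched as follows (a toy illustration, not the actual Reproducible Builds tooling): independent rebuilders each hash the artifact they produced from the same source, and the build is trusted only if every digest agrees.

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical artifacts from three independent rebuilders
builds = {
    "builder-a": b"binary contents",
    "builder-b": b"binary contents",
    "builder-c": b"binary contents with backdoor",
}

digests = {name: artifact_digest(blob) for name, blob in builds.items()}
reproducible = len(set(digests.values())) == 1
print(reproducible)  # False: builder-c produced a different artifact
```

With a reproducible build all three digests would match, so a single compromised build machine cannot silently inject code.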

This month I:

Debian
  • As a member of the Debian Python Module Team I pushed a large number of changes across 100s of repositories including removing empty debian/patches/series & debian/source/options files, correcting email addresses, dropping generated .debhelper dirs, removing trailing whitespaces, respecting the nocheck build profile via DEB_BUILD_OPTIONS and correcting spelling mistakes in debian/control files.

  • Added a missing dependency on golang-golang-x-tools for digraph(1) in dh-make-golang as part of the Debian Go Packaging Team.


Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project:

  • "Frontdesk" duties, triaging CVEs, responding to user questions, etc.

  • Issued DLA 1492-1 fixing a string injection vulnerability in the dojo Javascript library.

  • Issued DLA 1496-1 to correct an integer overflow vulnerability in the "Little CMS 2" colour management library. A specially-crafted input file could have led to a heap-based buffer overflow.

  • Issued DLA 1498-1 for the curl utility to fix an integer overflow vulnerability (background).

  • Issued DLA 1501-1 to fix an out-of-bounds read vulnerability in libextractor, a tool to extract meta-data from files of arbitrary type.

  • Issued DLA 1503-1 to prevent a potential denial of service and a potential arbitrary code execution vulnerability in the kamailio SIP (Session Initiation Protocol) server. A specially-crafted SIP message with an invalid Via header could cause a segmentation fault and crash the server due to missing input validation.

  • Issued ELA 34-1 for the Redis key-value database where the redis-cli tool could have allowed an attacker to achieve code execution and/or escalate to higher privileges via a specially-crafted command line.
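Several of the advisories above concern integer overflows in size calculations. A generic Python sketch of that bug class (purely illustrative, not the actual code from any of these CVEs), emulating 32-bit C arithmetic:

```python
# A 32-bit size calculation wraps around, so far too small a buffer
# gets allocated -- the write that follows then overflows the heap.
MASK32 = 0xFFFFFFFF

def alloc_size_32bit(count: int, elem_size: int) -> int:
    # what `count * elem_size` yields in 32-bit C arithmetic
    return (count * elem_size) & MASK32

print(alloc_size_32bit(1000, 4))        # 4000 bytes, as intended
print(alloc_size_32bit(0x40000001, 4))  # 4: the multiplication wrapped
```

The usual fix is to check `count > SIZE_MAX / elem_size` (or use a checked-multiply helper) before allocating.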


Uploads

I also uploaded the following packages as a member of the Debian Python Module Team: django-ipware (2.1.0-1), django-adminaudit (0.3.3-2), python-openid (2.2.5-7), python-social-auth (1:0.2.21+dfsg-3), python-vagrant (0.5.15-2) & python-validictory (0.8.3-3).

Finally, I sponsored the following uploads: bm-el (201808-1), elpy (1.24.0-1), mutt-alias-el (1.5-1) & android-platform-external-boringssl (8.1.0+r23-2).


Debian bugs filed


FTP Team

As a Debian FTP assistant I ACCEPTed 81 packages: adios, android-platform-system-core, aom, appmenu-registrar, astroid2, black, bm-el, colmap, cowpatty, devpi-common, equinox-bundles, fabulous, fasttracker2, folding-mode-el, fontpens, ganeti-2.15, geomet, golang-github-google-go-github, golang-github-gregjones-httpcache, hub, infnoise, intel-processor-trace, its-playback-time, jsonb-api, kitinerary, kpkpass, libclass-tiny-chained-perl, libmoox-traits-perl, librda, libtwitter-api-perl, liburl-encode-perl, libwww-oauth-perl, llvm-toolchain-7, lucy, markdown-toc-el, mmdebstrap, mozjs60, mutt-alias-el, nvidia-graphics-drivers-legacy-390xx, o-saft, pass-tomb, pass-tomb-basic, pgformatter, picocli, pikepdf, pipewire, poliastro, port-for, pyagentx, pylint2, pynwb, pytest-flask, python-argon2, python-asteval, python-caldav, python-djangosaml2, python-pcl, python-persist-queue, python-rfc3161ng, python-treetime, python-x2go, python-x3dh, python-xeddsa, rust-crossbeam-deque, rust-iovec, rust-phf-generator, rust-simd, rust-spin, rustc, sentinelsat, sesman, sphinx-autobuild, sphinxcontrib-restbuilder, tao-pegtl, trojan, ufolib2, ufonormalizer, unarr, vlc-plugin-bittorrent, xlunzip & xxhash.

I additionally filed 6 RC bugs against packages that had potentially-incomplete debian/copyright files against adios, pgformatter, picocli, python-argon2, python-pcl & python-treetime.

Chris Lamb https://chris-lamb.co.uk/blog/category/planet-debian lamby: Items or syndication on Planet Debian.

nanotime 0.2.3

Planet Debian - Sun, 30/09/2018 - 5:14pm

A minor maintenance release of the nanotime package for working with nanosecond timestamps just arrived on CRAN.

nanotime uses the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it now uses a more rigorous S4-based approach thanks to a rewrite by Leonardo Silvestri.

This release disables some tests on the Slowlaris platform, which we are asked to conform to (a good thing, as a wider variety of test platforms widens test coverage) yet have no real access to (a bad thing, obviously) beyond what the helpful rhub service offers. We also updated the Travis setup. No code changes.

Changes in version 0.2.3 (2018-09-30)
  • Skip some tests on Solaris which seems borked with timezones. As we have no real access, no fix is possible (Dirk in #42).

  • Update Travis setup

Once this update appears on the next hourly cron iteration, we will also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

All I wanted to do is check an error code

Planet Debian - Sun, 30/09/2018 - 2:03pm
I was feeling a little under the weather last week and did not have enough concentration to work on developing a new NetSurf feature as I had planned. Instead I decided to look at a random bug from our worryingly large collection.

This led me to consider the HTML form submission function, at which point it was "can open, worms everywhere". The code in question has a fairly simple job to explain:
  1. A user submits a form (by clicking a button or such) and the Document Object Model (DOM) is used to create a list of information in the web form.
  2. The list is then converted to the appropriate format for sending to the web site server.
  3. An HTTP request is made using the correctly formatted information to the web server.
However the code I was faced with, while generally functional, was impenetrable, having accreted over a long time.

At this point I was forced into a diversion to fix up the core URL library handling of query strings (this is used when the form data is submitted as part of the requested URL), which was necessary to simplify some complicated string handling and make the implementation more compliant with the specification.

My next step was to add some basic error reporting instead of warning the user the system was out of memory for every failure case, which was making debugging somewhat challenging. I was beginning to think I had discovered a series of very hairy yaks, although at least I was not trying to change a light bulb, which can get very complicated.

At this point I ran into the form_successful_controls_dom() function which performs step one of the process. This function had six hundred lines of code, hundreds of conditional branches, 26 local variables and five levels of indentation in places. These properties combined resulted in a cyclomatic complexity metric of 252. For reference, programmers generally try to keep a single function to no more than a hundred lines of code with as few local variables as possible, resulting in a CCM of around 20.
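For the curious, the metric itself is simple: the cyclomatic complexity of a function is roughly one plus the number of decision points. A rough keyword-counting approximation in Python (real tools such as pmccabe parse the code properly; this is only a sketch):

```python
import re

def approx_ccm(source: str) -> int:
    # one path through the function, plus one per decision point
    decisions = len(re.findall(r"\b(?:if|for|while|case)\b", source))
    decisions += source.count("&&") + source.count("||")
    return decisions + 1

snippet = "if (a && b) { for (i = 0; i < n; i++) { if (c) d(); } }"
print(approx_ccm(snippet))  # 5: two ifs, one for, one &&, plus one
```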

I now had a choice:

  • I could abandon investigating the bug, because even if I could find the issue, changing such a function without adequate testing is likely to introduce several more.
  • I could refactor the function into multiple simpler pieces.

I slept on this decision and decided to at least try to refactor the code in an attempt to pay back a little of the technical debt in the browser (and maybe let me fix the bug). After several hours of work the refactored source has the desirable properties of:

  • multiple straightforward functions
  • no function much more than a hundred lines long
  • resource lifetime that is now obvious and explicit
  • errors that are correctly handled and reported

I carefully examined the change in generated code and was pleased to see the compiler output had become more compact. This is an important point that less experienced programmers sometimes miss: if your source code is written such that a compiler can reason about it easily, you often get much better results than with the compact alternative. However, even if the resulting code had been larger the improved source would have been worth it.

After spending over ten hours working on this bug I have not resolved it yet; indeed one might suggest I have not even directly considered it yet! I wanted to use this to explain a little to users who have to wait a long time for their issues to get resolved (in any project, not just NetSurf) just how much effort is sometimes involved in a simple bug.

Vincent Sanders noreply@blogger.com Vincents Random Waffle

RcppAPT 0.0.5

Planet Debian - Sun, 30/09/2018 - 2:08am

A new version of RcppAPT – our interface from R to the C++ library behind the awesome apt, apt-get, apt-cache, … commands and their cache powering Debian, Ubuntu and the like – is now on CRAN.

This version is a bit of an experiment. I had asked on the r-package-devel and r-devel lists how I could suppress builds on macOS. As it does not have the required libapt-pkg-dev library to support apt, builds always failed. CRAN managed to not try on Solaris or Fedora, but somehow macOS would fail. Each. And. Every. Time. Sadly, nobody proposed a working solution.

So I got tired of this. Now we detect where we build, and if we can infer that it is not a Debian or Ubuntu (or derived) system and no libapt-pkg-dev is found, we no longer fail. Rather, we just set a #define and at compile time switch to essentially empty code. Et voilà: no more build errors.

And as before, if you want to use the package to query the system packaging information, build it on a system using apt and with its libapt-pkg-dev installed.

A few other cleanups were made too.

Changes in version 0.0.5 (2018-09-29)
  • NAMESPACE now sets symbol registration

  • configure checks for a suitable system; it no longer errors if none is found, but sets a good/bad define for the build

  • Existing C++ code is now conditional on having a 'good' build system, or else alternate code is used (which succeeds everywhere)

  • Added suitable() returning a boolean with the configure result

  • Tests are conditional on suitable() to test good builds

  • The Travis setup was updated

  • The vignette was updated and expanded

Courtesy of CRANberries, there is also a diffstat report for this release.

A bit more information about the package is available here as well as at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Valutakrambod - A python and bitcoin love story

Planet Debian - Sat, 29/09/2018 - 10:20pm

It would come as no surprise to anyone that I am interested in bitcoins and virtual currencies. I've been keeping an eye on virtual currencies for many years, and it is part of the reason that, a few months ago, I started writing a python library for collecting currency exchange rates and trading on virtual currency exchanges. I decided to name the end result valutakrambod, which perhaps can be translated to "small currency shop".

The library uses the tornado python library to handle HTTP and websocket connections, and provides an asynchronous system for connecting to and tracking several services. The code is available from github.

There are two example clients of the library. One is very simple and lists every updated buy/sell price received from the various services. This code is started by running bin/btc-rates and calls the client code in valutakrambod/client.py. The simple client looks like this:

import functools
import tornado.ioloop
import valutakrambod

class SimpleClient(object):
    def __init__(self):
        self.services = []
        self.streams = []
        pass
    def newdata(self, service, pair, changed):
        print("%-15s %s-%s: %8.3f %8.3f" % (
            service.servicename(),
            pair[0],
            pair[1],
            service.rates[pair]['ask'],
            service.rates[pair]['bid'])
        )
    async def refresh(self, service):
        await service.fetchRates(service.wantedpairs)
    def run(self):
        self.ioloop = tornado.ioloop.IOLoop.current()
        self.services = valutakrambod.service.knownServices()
        for e in self.services:
            service = e()
            service.subscribe(self.newdata)
            stream = service.websocket()
            if stream:
                self.streams.append(stream)
            else:
                # Fetch information from non-streaming services immediately
                self.ioloop.call_later(len(self.services),
                                       functools.partial(self.refresh, service))
                # as well as regularly
                service.periodicUpdate(60)
        for stream in self.streams:
            stream.connect()
        try:
            self.ioloop.start()
        except KeyboardInterrupt:
            print("Interrupted by keyboard, closing all connections.")
            pass
        for stream in self.streams:
            stream.close()

The library client loops over all known "public" services, initialises each, subscribes to any updates from the service, checks for and activates websocket streaming if the service provides it, and if no streaming is supported, fetches information from the service and sets up a periodic update every 60 seconds. The output from this client can look like this:

Bl3p            BTC-EUR: 5687.110 5653.690
Bl3p            BTC-EUR: 5687.110 5653.690
Bl3p            BTC-EUR: 5687.110 5653.690
Hitbtc          BTC-USD: 6594.560 6593.690
Hitbtc          BTC-USD: 6594.560 6593.690
Bl3p            BTC-EUR: 5687.110 5653.690
Hitbtc          BTC-USD: 6594.570 6593.690
Bitstamp        EUR-USD:    1.159    1.154
Hitbtc          BTC-USD: 6594.570 6593.690
Hitbtc          BTC-USD: 6594.580 6593.690
Hitbtc          BTC-USD: 6594.580 6593.690
Hitbtc          BTC-USD: 6594.580 6593.690
Bl3p            BTC-EUR: 5687.110 5653.690
Paymium         BTC-EUR: 5680.000 5620.240

The exchange order book is tracked in addition to the best buy/sell price, for those that need to know the details.

The other example client focuses on providing a curses view with updated buy/sell prices as soon as they are received from the services. This code is located in bin/btc-rates-curses and activated by using the '-c' argument. Without the argument the "curses" output is printed without using curses, which is useful for debugging. The curses view looks like this:

Name           Pair   Bid        Ask        Spr   Ftcd Age
BitcoinsNorway BTCEUR 5591.8400  5711.0800  2.1%  16   nan    60
Bitfinex       BTCEUR 5671.0000  5671.2000  0.0%  16   22     59
Bitmynt        BTCEUR 5580.8000  5807.5200  3.9%  16   41     60
Bitpay         BTCEUR 5663.2700  nan        nan%  15   nan    60
Bitstamp       BTCEUR 5664.8400  5676.5300  0.2%  0    1      1
Bl3p           BTCEUR 5653.6900  5684.9400  0.5%  0    nan    19
Coinbase       BTCEUR 5600.8200  5714.9000  2.0%  15   nan    nan
Kraken         BTCEUR 5670.1000  5670.2000  0.0%  14   17     60
Paymium        BTCEUR 5620.0600  5680.0000  1.1%  1    7515   nan
BitcoinsNorway BTCNOK 52898.9700 54034.6100 2.1%  16   nan    60
Bitmynt        BTCNOK 52960.3200 54031.1900 2.0%  16   41     60
Bitpay         BTCNOK 53477.7833 nan        nan%  16   nan    60
Coinbase       BTCNOK 52990.3500 54063.0600 2.0%  15   nan    nan
MiraiEx        BTCNOK 52856.5300 54100.6000 2.3%  16   nan    nan
BitcoinsNorway BTCUSD 6495.5300  6631.5400  2.1%  16   nan    60
Bitfinex       BTCUSD 6590.6000  6590.7000  0.0%  16   23     57
Bitpay         BTCUSD 6564.1300  nan        nan%  15   nan    60
Bitstamp       BTCUSD 6561.1400  6565.6200  0.1%  0    2      1
Coinbase       BTCUSD 6504.0600  6635.9700  2.0%  14   nan    117
Gemini         BTCUSD 6567.1300  6573.0700  0.1%  16   89     nan
Hitbtc+        BTCUSD 6592.6200  6594.2100  0.0%  0    0      0
Kraken         BTCUSD 6565.2000  6570.9000  0.1%  15   17     58
Exchangerates  EURNOK 9.4665     9.4665     0.0%  16   107789 nan
Norgesbank     EURNOK 9.4665     9.4665     0.0%  16   107789 nan
Bitstamp       EURUSD 1.1537     1.1593     0.5%  4    5      1
Exchangerates  EURUSD 1.1576     1.1576     0.0%  16   107789 nan
BitcoinsNorway LTCEUR 1.0000     49.0000    98.0% 16   nan    nan
BitcoinsNorway LTCNOK 492.4800   503.7500   2.2%  16   nan    60
BitcoinsNorway LTCUSD 1.0221     49.0000    97.9% 15   nan    nan
Norgesbank     USDNOK 8.1777     8.1777     0.0%  16   107789 nan

The code for this client is too complex for a simple blog post, so you will have to check out the git repository to figure out how it works. What I can tell you is how the last three numbers on each line should be interpreted. The first is how many seconds ago information was received from the service. The second is how long ago, according to the service, the provided information was updated. The last is an estimate of how often the buy/sell values change.

If you find this library useful, or would like to improve it, I would love to hear from you. Note that for some of the services I've implemented a trading API. It might be the topic of a future blog post.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Petter Reinholdtsen http://people.skolelinux.org/pere/blog/ Petter Reinholdtsen - Entries tagged english

Pulling back

Planet Debian - Sat, 29/09/2018 - 6:15pm

I've updated my fork of the monkey programming language to allow object-based method calls.

That's allowed me to move some of my "standard-library" code into Monkey, and out of Go, which is neat. This is a simple example:

//
// Reverse a string,
//
function string.reverse() {
  let r = "";
  let l = len(self);
  for( l > 0 ) {
    r += self[l-1];
    l--;
  }
  return r;
}

Usage is the obvious:

puts( "Steve".reverse() );

Or:

let s = "Input";
s = s.reverse();
puts( s + "\n" );

Most of the pain here was updating the parser to recognize that "." meant a method call was happening; once that was done it was otherwise only a matter of passing the implicit self object to the appropriate functions.

This work was done in a couple of 30-60 minute chunks. I find that I'm only really able to commit to that amount of work these days, so I've started to pull back from other projects.

Oiva is now 21 months old and he sucks up all my time & energy. I can't complain, but equally I can't really start concentrating on longer projects even when he's gone to sleep.

And that concludes my news for the day.

Goodnight dune..

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

Self-plotting output from feedgnuplot and python-gnuplotlib

Planet Debian - Sat, 29/09/2018 - 1:40pm

I just made a small update to feedgnuplot (version 1.51) and to python-gnuplotlib (version 0.23). Demo:

$ seq 5 | feedgnuplot --hardcopy showplot.gp

$ ./showplot.gp
[plot pops up]

$ cat showplot.gp
#!/usr/bin/gnuplot
set grid
set boxwidth 1
histbin(x) = 1 * floor(0.5 + x/1)
plot '-' notitle
1 1
2 2
3 3
4 4
5 5
e
pause mouse close

I.e. there's now support for a fake gp terminal that's not a gnuplot terminal at all, but rather a way to produce a self-executable gnuplot script. 99% of this was already implemented in --dump, but this way of accessing that functionality is much nicer. In fact, the machine running feedgnuplot doesn't even need to have gnuplot installed at all. I needed this because I was making complicated plots on a remote box, and X-forwarding was being way too slow. Now the remote box creates the self-plotting gnuplot scripts, I scp those, evaluate them locally, and then work with interactive visualizations.

The python frontend gnuplotlib has received an analogous update.

Dima Kogan http://notes.secretsauce.net Dima Kogan

MicroDebConf Brasília 2018

Planet Debian - Fri, 28/09/2018 - 11:20pm

After I came back to my home city (Brasília) I felt the need to promote Debian and to help people contribute to it. Some old friends from my former university (University of Brasília) and the local community (Debian Brasília) came up with the idea to run a Debian-related event, and I just thought: “That sounds amazing!”. We contacted the university to book a small auditorium there for an entire day. After that we started to think: how should we name the event? Debian Day had been more or less one month before, and someone suggested a MiniDebConf, but I thought that our event was going to be much smaller than regular MiniDebConfs. So we decided to reuse a term we had used some time ago here in Brasília: we called it MicroDebConf :)

MicroDebConf Brasília 2018 took place at the Gama campus of the University of Brasília on September 8th. It was amazing; we gathered a lot of students from the university and some high schools, and some free software enthusiasts too. We had 44 attendees in total; we did not expect all these people at the beginning! During the day we presented to them what the Debian Project is and the many different ways to contribute to it.

Since our focus was newcomers, we started from the beginning, explaining how to use Debian properly, how to interact with the community and how to contribute. We also introduced them to some other subjects such as management of PGP keys, network setup with Debian and some topics about Linux kernel contributions. As you probably know, students are never satisfied; sometimes the talks are too easy and basic, and other times too hard and complex to follow. So we decided to balance the level of the talks: we started from Debian basics and went all the way to details of the Linux kernel implementation. Their feedback was positive, so I think that we should do it again; attracting students is always a challenge.

At the end of the day we had some discussions regarding what we should do to grow our local community. We want more local people actually contributing to free software projects, and especially to Debian. A lot of people were interested, but some of them said that they need some guidance; the life of a newcomer is not so easy for now.

After some discussion we came up with the idea of a study group on Debian packaging. We will schedule meetings every week (or every two weeks, not decided yet), and during these meetings we will present on packaging (good practices, tooling and anything else people need) and do some hands-on work. My intention is to document everything we do, to make life easier for future newcomers who want to do Debian packaging. My main reference for this study group has been LKCamp; they are a more consolidated group and their focus is to help people start contributing to the Linux kernel.

In my opinion, this kind of initiative could help us bring new blood to the project and disseminate free software ideas/culture. Another idea we have is to promote Debian and free software in general to non-technical people. We realized that we need to reach these people if we want a broader community; we do not know exactly how yet, but it is on our radar.

After all these talks and discussions we needed some time to relax, and we did that together! We went to a bar and got some beer (except people under 18 years old :) and food. Of course our discussions about free software kept running all night long.

The following is an overview of this conference:

  • We probably coined this term and are the first to organize a MicroDebConf (we already did one in 2015). We should promote this kind of local event more

  • I guess we inspired a lot of young people to contribute to Debian (and free software in general)

  • We defined a way to help local people start contributing to Debian with packaging. I really like this idea of a study group; meeting people in person is always the best way to create bonds

  • Now we hopefully will have a stronger Debian community in Brasília - Brazil \o/

Last but not least, I would like to thank LAPPIS (a research lab I was part of during my undergrad), who helped us with all the logistics and bureaucracy, and Collabora for the coffee break sponsorship! Collabora, LAPPIS and us share the same goal: promote FLOSS to all these young people and make our community grow!

Lucas Kanashiro http://blog.kanashiro.xyz/ Lucas Kanashiro’s blog

Nathan Haines: Announcing the Ubuntu 18.10 Free Culture Showcase winners

Planet Ubuntu - Fri, 28/09/2018 - 9:00am

October approaches, and Ubuntu marches steadily along the road from one LTS to another. Ubuntu 18.10 is another step in Ubuntu’s future. And now it’s time to unveil a small part of that change: the community wallpapers to be included in Ubuntu 18.10!

Every cycle, talented artists around the world create media and release it under licenses that encourage sharing and adaptation. This cycle we had some amazing images submitted to the Ubuntu 18.10 Free Culture Showcase photo pool on Flickr, where all eligible submissions can be found. The competition was fierce; narrowing down the options to the final selections was painful!

But there can be only 12, and the final images that will be included in Ubuntu 18.10 are:

A big congratulations to the winners, and thanks to everyone who submitted a wallpaper. You can find these wallpapers (along with dozens of other stunning wallpapers) today at the links above, or in your desktop wallpaper list after you upgrade to or install Ubuntu 18.10 on October 18th.

    Ubuntu Studio: Ubuntu Studio 18.10 (Cosmic Cuttlefish) Beta released

    Planet Ubuntu - Pre, 28/09/2018 - 7:09pd
    The Ubuntu Studio team is pleased to announce the final beta release of Ubuntu Studio 18.10 Cosmic Cuttlefish. While this beta is reasonably free of any showstopper CD build or installer bugs, you may find some bugs within. This image is, however, reasonably representative of what you will find when Ubuntu Studio 18.10 is released […]

    Ubuntu MATE: Ubuntu MATE 18.10 Beta

    Planet Ubuntu - Pre, 28/09/2018 - 1:30pd

    Ubuntu MATE 18.10 is a modest, yet strategic, upgrade over our 18.04 release. If you want bug fixes and improved hardware support then 18.10 is for you. For those who prefer staying on the LTS then everything in this 18.10 release is also important for the upcoming 18.04.2 release. Read on to learn more...

    We are preparing Ubuntu MATE 18.10 (Cosmic Cuttlefish) for distribution on October 18th, 2018. With this Beta pre-release, you can see what we are trying out in preparation for our next (stable) version.


    Superposition on the Intel Core i7-8809G Radeon RX Vega M powered Hades Canyon NUC

    What works?

    People tell us that Ubuntu MATE is stable. You may, or may not, agree.

    Ubuntu MATE Beta Releases are NOT recommended for:

    • Regular users who are not aware of pre-release issues
    • Anyone who needs a stable system
    • Anyone uncomfortable running a possibly frequently broken system
    • Anyone in a production environment with data or workflows that need to be reliable

    Ubuntu MATE Beta Releases are recommended for:

    • Regular users who want to help us test by finding, reporting, and/or fixing bugs
    • Ubuntu MATE, MATE, and GTK+ developers

    What changed since the Ubuntu MATE 18.04 final release?

    Curiously, the work during this Ubuntu MATE 18.10 release has really been focused on what will become Ubuntu MATE 18.04.2. Let me explain.

    MATE Desktop

    The upstream MATE Desktop team has been working on many bug fixes for MATE Desktop 1.20.x, which has resulted in a lot of maintenance updates in the upstream releases of MATE Desktop. The Debian packaging team for MATE Desktop, of which I am a member, has been updating all the MATE packages to track these upstream bug fixes and new releases. Just about all MATE Desktop packages and associated components, such as AppMenu and the MATE Dock Applet, have been updated. Now that all these fixes exist in the 18.10 release, we will start the process of SRU'ing (backporting) them to 18.04 so that they will feature in the Ubuntu MATE 18.04.2 release due in February 2019. The fixes should start landing in Ubuntu MATE 18.04 very soon, well before the February deadline.

    Hardware Enablement

    Ubuntu MATE 18.04.2 will include a hardware enablement stack (HWE) based on what is shipped in Ubuntu 18.10. Ubuntu users are increasingly adopting the current generation of AMD RX Vega GPUs, both discrete and integrated solutions such as the Intel Core i7-8809G Radeon RX Vega M found in the Hades Canyon NUC and some laptops. I have been lobbying people within the Ubuntu project to upgrade to newer versions of the Linux kernel, firmware, Mesa and Vulkan that offer the best possible "out of box" support for AMD GPUs. Consequently, Ubuntu 18.10 (of any flavour) is great for owners of AMD graphics solutions and these improvements will soon be available in Ubuntu 18.04.2 too.

    Download Ubuntu MATE 18.10 Beta

    We've also redesigned the download page to make it even easier to get started.

    Download

    Known Issues

    Here are the known issues.

    Ubuntu MATE
    • The Software Boutique doesn't list any available software.
      • An update, due very soon, will re-stock the software library and add a few new applications too.
    Ubuntu family issues

    This is our known list of bugs that affect all flavours.

    You'll also want to check the Ubuntu MATE bug tracker to see what has already been reported. These issues will be addressed in due course.

    Feedback

    Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

    Aristotle and D&D alignments

    Planet Debian - Enj, 27/09/2018 - 10:12md

    Aristotle’s distinction in EN (the Nicomachean Ethics) between brutishness and vice might be comparable to the distinction in Dungeons & Dragons between chaotic evil and lawful evil, respectively.

    I’ve always thought that the forces of lawful evil are more deeply threatening than those of chaotic evil. In the Critical Hit podcast, lawful evil is equated with tyranny.

    Of course, at least how I run it, Aristotelian ethics involves no notion of evil, only mistakes about the good.

    Sean Whitton https://spwhitton.name//blog/ Notes from the Library

    Debian Policy call for participation -- September 2018

    Planet Debian - Enj, 27/09/2018 - 10:07md

    Here’s a summary of some of the bugs against the Debian Policy Manual that are thought to be easy to resolve.

    Please consider getting involved, whether or not you’re an existing contributor.

    For more information, see our README.

    #152955 force-reload should not start the daemon if it is not running

    #172436 BROWSER and sensible-browser standardization

    #188731 Also strip .comment and .note sections

    #212814 Clarify relationship between long and short description

    #273093 document interactions of multiple clashing package diversions

    #314808 Web applications should use /usr/share/package, not /usr/share/doc/package

    #348336 Clarify Policy around shared configuration files

    #425523 Describe error unwind when unpacking a package fails

    #491647 debian-policy: X font policy unclear around TTF fonts

    #495233 debian-policy: README.source content should be more detailed

    #649679 [copyright-format] Clarify what distinguishes files and stand-alone license paragraphs.

    #682347 mark ‘editor’ virtual package name as obsolete

    #685506 copyright-format: new Files-Excluded field

    #685746 debian-policy Consider clarifying the use of recommends

    #694883 copyright-format: please clarify the recommended form for public domain files

    #696185 [copyright-format] Use short names from SPDX.

    #697039 expand cron and init requirement to check binary existence to other scripts

    #722535 debian-policy: To document: the “Binary-Only” field in Debian changes files.

    #759316 Document the use of /etc/default for cron jobs

    #770440 debian-policy: policy should mention systemd timers

    #780725 PATH used for building is not specified

    #794653 Recommend use of dpkg-maintscript-helper where appropriate

    #809637 DEP-5 does not support filenames with blanks

    #824495 debian-policy: Source packages “can” declare relationships

    #833401 debian-policy: virtual packages: dbus-session-bus, dbus-default-session-bus

    #845715 debian-policy: Please document that packages are not allowed to write outside their source directories

    #850171 debian-policy: Addition of having an ‘EXAMPLES’ section in manual pages debian policy 12.1

    #853779 debian-policy: Clarify requirements about update-rc.d and invoke-rc.d usage in maintainer scripts

    #904248 Add netbase to build-essential

    Sean Whitton https://spwhitton.name//blog/ Notes from the Library

    My Work on Debian LTS (September 2018)

    Planet Debian - Enj, 27/09/2018 - 11:40pd

    In September 2018, I did 10 hours of work on the Debian LTS project as a paid contributor. Thanks to all LTS sponsors for making this possible.

    This is my list of work done in September 2018:

    • Upload of polarssl (DLA 1518-1) [1].
    • Work on CVE-2018-16831, discovered in the smarty3 package. Plan A was to backport the latest smarty3 release to Debian stretch and jessie, but runtime tests against GOsa² (one of the PHP applications that use smarty3) already failed on Debian stretch, so this plan was dropped. Plan B was to extract a patch [2] fixing this issue in Debian stretch's smarty3 package from a manifold of upstream code changes; in the end I realized that smarty3 in Debian jessie is very likely not affected. Upstream feedback is still pending; upload(s) will occur in the coming week (the first week of October).

    light+love
    Mike

    References

    [1] https://lists.debian.org/debian-lts-announce/2018/09/msg00029.html

    [2] https://salsa.debian.org/debian/smarty3/commit/8a1eb21b7c4d971149e76cd2b...

    sunweaver http://sunweavers.net/blog/blog/1 sunweaver's blog

    A nice oneliner

    Planet Debian - Mër, 26/09/2018 - 7:51md

    Pop quiz! Let's say I have a datafile describing some items (images and feature points in this example):

    # filename x y
    000.jpg 79.932824 35.609049
    000.jpg 95.174662 70.876506
    001.jpg 19.655072 52.475315
    002.jpg 19.515351 33.077847
    002.jpg 3.010392 80.198282
    003.jpg 84.183099 57.901647
    003.jpg 93.237358 75.984036
    004.jpg 99.102619 7.260851
    005.jpg 24.738357 80.490116
    005.jpg 53.424477 27.815635
    ....
    ....
    149.jpg 92.258132 99.284486

    How do I get a random subset of N images, using only the shell and standard commandline tools?

    Bam!

    $ N=5; ( echo '# filename'; seq 0 149 | shuf | head -n $N | xargs -n1 printf "%03d.jpg\n" | sort) | vnl-join -j filename input.vnl -
    # filename x y
    017.jpg 41.752204 96.753914
    017.jpg 86.232504 3.936258
    027.jpg 41.839110 89.148368
    027.jpg 82.772742 27.880592
    067.jpg 57.790706 46.153623
    067.jpg 87.804939 15.853087
    076.jpg 41.447477 42.844849
    076.jpg 93.399829 64.552090
    142.jpg 18.045497 35.381083
    142.jpg 83.037867 17.252172

    Dima Kogan http://notes.secretsauce.net Dima Kogan
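    The pipeline is easier to digest piece by piece. The sketch below is a hypothetical stand-in: it builds its own tiny sample file and substitutes plain join(1) for vnl-join (vnl-join comes from the vnlog toolkit and, unlike join, knows how to carry the `# filename` header through; here we just strip the header first).

```shell
# Tiny stand-in for input.vnl (made-up data, same layout as the real file)
cat > input.vnl <<'EOF'
# filename x y
000.jpg 79.932824 35.609049
000.jpg 95.174662 70.876506
001.jpg 19.655072 52.475315
002.jpg 19.515351 33.077847
003.jpg 84.183099 57.901647
004.jpg 99.102619 7.260851
EOF

N=2
# 1. seq 0 4            -> candidate image indices
# 2. shuf -n "$N"       -> pick N of them at random
# 3. xargs -n1 printf   -> zero-pad each index into the 000.jpg form
# 4. sort               -> join(1) requires both inputs sorted on the key
# 5. join               -> keep only data rows whose filename was picked
join <(seq 0 4 | shuf -n "$N" | xargs -n1 printf '%03d.jpg\n' | sort) \
     <(grep -v '^#' input.vnl | sort -k1,1)
```

    Because the filenames are the join key, an image with several feature points (like 000.jpg above) keeps all of its rows, which is exactly what the vnl-join version achieves.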

    Benjamin Mako Hill: Shannon’s Ghost

    Planet Ubuntu - Mër, 26/09/2018 - 4:34pd

    I’m spending the 2018-2019 academic year as a fellow at the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford.

    Claude Shannon on a bicycle.

    Every CASBS study is labeled with a list of  “ghosts” who previously occupied the study. This year, I’m spending the year in Study 50 where I’m haunted by an incredible cast that includes many people whose scholarship has influenced and inspired me.

    The top part of the list of ghosts in Study #50 at CASBS.

    Foremost among this group is Study 50’s third occupant: Claude Shannon

    At 21 years old, Shannon proved in his master’s thesis (sometimes cited as the most important master’s thesis in history) that electrical circuits could encode any relationship expressible in Boolean logic, opening the door to digital computing. Incredibly, this is almost never cited as Shannon’s most important contribution. That came in 1948 when he published a paper titled A Mathematical Theory of Communication, which effectively created the field of information theory. Less than a decade after its publication, Aleksandr Khinchin (the mathematician behind my favorite mathematical constant) described the paper saying:

    Rarely does it happen in mathematics that a new discipline achieves the character of a mature and developed scientific theory in the first investigation devoted to it…So it was with information theory after the work of Shannon.

    As someone whose own research is seeking to advance computation and mathematical study of communication, I find it incredibly propitious to be sharing a study with Shannon.

    Although I teach in a communication department, I know Shannon from my background in computing. I’ve always found it curious that, despite the fact that Shannon’s 1948 paper is almost certainly the most important single thing ever published with the word “communication” in its title, Shannon is rarely taught in communication curricula and is sometimes completely unknown to communication scholars.

    In this regard, I’ve thought a lot about this passage in Robert Craig’s influential article “Communication Theory as a Field,” which argued:

    In establishing itself under the banner of communication, the discipline staked an academic claim to the entire field of communication theory and research—a very big claim indeed, since communication had already been widely studied and theorized. Peters writes that communication research became “an intellectual Taiwan-claiming to be all of China when, in fact, it was isolated on a small island” (p. 545). Perhaps the most egregious case involved Shannon’s mathematical theory of information (Shannon & Weaver, 1948), which communication scholars touted as evidence of their field’s potential scientific status even though they had nothing whatever to do with creating it, often poorly understood it, and seldom found any real use for it in their research.

    In preparation for moving into Study 50, I read a new biography of Shannon by Jimmy Soni and Rob Goodman and was excited to find that Craig—although accurately describing many communication scholars’ lack of familiarity—almost certainly understated the importance of Shannon to communication scholarship.

    For example, the book form of Shannon’s 1948 article was published by the University of Illinois at the urging, and under the editorial supervision, of Wilbur Schramm (one of the founders of modern mass communication scholarship), who was a major proponent of Shannon’s work. Everett Rogers (another giant in communication) devotes a chapter of his “History of Communication Studies”² to Shannon and to tracing his impact in communication. Both Schramm and Rogers built on Shannon in parts of their own work. Shannon has had an enormous impact, it turns out, in several subareas of communication research (e.g., attempts to model communication processes).

    Although I find these connections exciting, my own research—like most of the rest of communication—is far from the substance of technical communication processes at the center of Shannon’s own work. In this sense, it can be a challenge to explain to my colleagues in communication—and to my fellow CASBS fellows—why I’m so excited to be sharing a space with Shannon this year.

    Upon reflection, I think it boils down to two reasons:

    1. Shannon’s work is both mathematically beautiful and incredibly useful. His seminal 1948 article points to concrete ways that his theory can be useful in communication engineering including in compression, error correcting codes, and cryptography. Shannon’s focus on research that pushes forward the most basic type of basic research while remaining dedicated to developing solutions to real problems is a rare trait that I want to feature in my own scholarship.
    2. Shannon was incredibly playful. Shannon played games, juggled constantly, and was always seeking to teach others to do so. He tinkered, rode unicycles, built a flame-throwing trumpet, and so on. With Marvin Minsky, he invented the “ultimate machine”—a machine whose only function is to turn itself off—which he kept on his desk.

    A version of Shannon’s “ultimate machine” that is sitting on my desk at CASBS.

    I have no misapprehension that I will accomplish anything like Shannon’s greatest intellectual achievements during my year at CASBS. I do hope to be inspired by Shannon’s creativity, focus on impact, and playfulness. In my own little ways, I hope to build something at CASBS that will advance mathematical and computational theory in communication in ways that Shannon might have appreciated.

    1. Incredibly, the year that Shannon was in Study 50, his neighbor in Study 51 was Milton Friedman. Two thoughts: (i) Can you imagine?! (ii) I definitely chose the right study!
    2. Rogers book was written, I found out, during his own stint at CASBS. Alas, it was not written in Study 50.

    Benjamin Mako Hill https://mako.cc/copyrighteous copyrighteous

    Stephen Michael Kellat: Work Items To Remember

    Planet Ubuntu - Mër, 26/09/2018 - 4:30pd

    Sometimes I truly cannot remember everything. There have been many, many things going on as of late. Being on medical leave has not been helpful, either.

    As we look to the last quarter of 2018, there are some matters I need to remind myself about keeping in the work plan:

    1. Finish the write-up on the research for Outernet/Othernet.

    2. Begin looking at what I need to do to set up a FidoNet node. I haven’t been involved in FidoNet since high school during President Bill Clinton’s second term in office.

    3. Consider the possibility that the folks behind DarkNetPlan have failed. After looking at this post, I honestly need to find a micrographics artist I can set up a working relationship with. Passing digital data via microfilm sounds old-fashioned but seems more durable these days.

    4. Construct a proper permanent HF antenna for operating. I am a ham radio operator with General class privileges in the United States that remain barely used even though I am only a few years away from joining the Quarter Century Wireless Association.

    5. Figure out what I’m doing wrong setting up multiple HDHomeRun receivers to be tapped by a PVR-styled computer.

    6. Pick up 18 graduate semester hours so I can teach as an adjunct somewhere. This would generally have to happen in a graduate certificate program in the US or at the halfway mark in a master’s degree program.

    With my day job being constantly in flux, I am sure I’ve missed something in the listing above.
