
Feed aggregator

Christian Hergert: Builder happenings for January

Planet GNOME - Sat, 20/01/2018 - 12:37pm

I’ve been very busy with Builder since returning from the holidays. As mentioned previously, we’ve moved to GitLab. I’m very happy about it. I can see how this is going to improve engagement and communication within our existing community and help us retain new contributors.

I made two releases of Builder so far this month. That included both a new stable build (which flatpak users are already using) and a new snapshot for those on developer operating systems like Fedora Rawhide.

The vast majority of my work this month has been on stabilization efforts. Builder is already a very large project. Every moving part we add makes this Rube Goldberg machine just a bit more difficult to maintain. I’ve tried to focus my time on things that are brittle and either improve or replace the designs. I’ve also fixed a good number of memory leaks and safety issues. However, the memory overhead of clang sort of casts a large shadow on all that work. We really need to get clang out of process one of these days.

Over the past couple years, our coding style evolved thanks to new features like g_autoptr() and friends. Every time I come across old-style code during my bug hunts, I clean it up.
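To give a flavour of what those cleanups look like, here is a before/after sketch with a hypothetical do_thing() call site (not code from Builder itself):

/* Old style: every exit path has to free the error by hand */
GError *error = NULL;
if (!do_thing (&error))
  {
    g_warning ("%s", error->message);
    g_clear_error (&error);
    return;
  }

/* New style: g_autoptr() frees the error when it goes out of scope */
g_autoptr(GError) error = NULL;
if (!do_thing (&error))
  {
    g_warning ("%s", error->message);
    return;
  }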

Builder learned how to automatically install Flatpak SDK Extensions. These can save you a bunch of time when building your application if you have a complex stack. Things like Rust and Mono can just be pulled in and copied into your app rather than compiled from source on your machine. In doing so, every app that uses the technology can share objects in the OSTree repository, saving disk space and network transfer.

That allowed me to create a new template, a GNOME C♯ template. It uses the Mono SDK extension and gtk-sharp for 3.x. If you want to help here, work on an omni-sharp language server plugin for us!

A new C++ template using Gtkmm was added. Given that I don’t have a lot of recent experience with Gtkmm, it’d be nice to have someone from that community come in and make sure things are in good shape.

I also did some cleanup on our code-indexer to avoid threading in our API. Creating plugins on threads turned out to be rather disastrous, so now we try extra hard to keep things on the main thread with the typical async/finish function pairs.

I created a new messages panel to elevate warnings to the user without them having to run Builder from a terminal. If you want an easy project to work on, we need to go find interesting calls to g_warning() and use ide_context_warning() instead.
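As a rough sketch of what such a conversion looks like (a hypothetical call site; check the exact ide_context_warning() signature in the Builder sources, and context here stands for the IdeContext available at that point):

/* Before: only visible when Builder is run from a terminal */
g_warning ("Failed to load the build manifest: %s", error->message);

/* After: surfaced to the user in the new messages panel */
ide_context_warning (context, "Failed to load the build manifest: %s", error->message);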

Our flatpak plugin now tries extra hard to avoid downloads. Those were really annoying for people when opening Builder. It took some troubleshooting in flatpak-builder, and that is fixed now.

In the process of fixing the extraneous downloading, I realized we could start bundling flatpak-builder with Builder. After a couple of fixes to flatpak-builder, Builder Nightly no longer requires flatpak-builder on the host. That’s one less thing to go wrong for people going through the newcomers workflow.

We just landed the beginning of a go-langserver plugin. It seems like the language server for Go is pretty new though. We only have a symbol resolver thus far.

I found a fun bug in Vala that caused const gchar * const * parameters to async functions to turn into gchar **, int. It was promptly fixed upstream for us (thanks Rico).

Some 350 commits have landed this month so far, most of them around stabilizing Builder. It’s a good time to start playing with the Nightly branch if you’re into that.

Oh, and after some 33 years on Earth, I finally needed glasses. So I look educated now.

Joe Barker: Configuring MSMTP On Ubuntu 16.04 (Again)

Planet Ubuntu - Fri, 19/01/2018 - 11:30am

This post exists as a copy of what I had on my previous blog about configuring MSMTP on Ubuntu 16.04; I’m posting it as-is for posterity, and have no idea if it’ll work on later versions. As I’m not hosting my own Ubuntu/MSMTP server anymore I can’t see any updates being made to this, but if I ever do have to set this up again I’ll create an updated post! Anyway, here’s what I had…

I previously wrote an article around configuring msmtp on Ubuntu 12.04, but as I hinted at in a previous post that sort of got lost when the upgrade of my host to Ubuntu 16.04 went somewhat awry. What follows is essentially the same post, with some slight updates for 16.04. As before, this assumes that you’re using Apache as the web server, but I’m sure it shouldn’t be too different if your web server of choice is something else.

I use msmtp for sending emails from this blog to notify me of comments and upgrades etc. Here I’m going to document how I configured it to send emails via a Google Apps account, although this should also work with a standard Gmail account too.

To begin, we need to install 3 packages:
sudo apt-get install msmtp msmtp-mta ca-certificates
Once these are installed, a default config is required. By default msmtp will look at /etc/msmtprc, so I created that using vim, though any text editor will do the trick. This file looked something like this:

# Set defaults.
defaults

# Enable or disable TLS/SSL encryption.
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt

# Setup WP account's settings.
account <MSMTP_ACCOUNT>
host smtp.gmail.com
port 587
auth login
user <USERNAME>
password <PASSWORD>
from <FROM_ADDRESS>
logfile /var/log/msmtp/msmtp.log

account default : <MSMTP_ACCOUNT>

Any of the uppercase items (i.e. <MSMTP_ACCOUNT>, <USERNAME>, <PASSWORD>, <FROM_ADDRESS>) are placeholders for values specific to your configuration. The exception to that is the log file, which can of course be placed wherever you wish to log any msmtp activity/warnings/errors to.

Once that file is saved, we’ll update the permissions on the above configuration file — msmtp won’t run if the permissions on that file are too open — and create the directory for the log file.

sudo mkdir /var/log/msmtp
sudo chown -R www-data:adm /var/log/msmtp
sudo chmod 0600 /etc/msmtprc

Next I chose to configure logrotate for the msmtp logs, to make sure that the log files don’t get too large as well as keeping the log directory a little tidier. To do this, we create /etc/logrotate.d/msmtp and configure it with the following contents. Note that this is optional; you may choose not to do this, or you may choose to configure the logs differently.

/var/log/msmtp/*.log {
	rotate 12
	monthly
	compress
	missingok
	notifempty
}

Now that the logging is configured, we need to tell PHP to use msmtp by editing /etc/php/7.0/apache2/php.ini and updating the sendmail path from
sendmail_path =
to
sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a <MSMTP_ACCOUNT> -t"
Here I ran into an issue where, even though I specified the account name, it wasn’t sending emails correctly when I tested it. This is why the line account default : <MSMTP_ACCOUNT> was placed at the end of the msmtp configuration file. To test the configuration, ensure that the PHP file has been saved and run sudo service apache2 restart, then run php -a and execute the following:

mail ('personal@email.com', 'Test Subject', 'Test body text');
exit();

Any errors that occur at this point will be displayed in the output, so diagnosing problems after the test should be relatively easy. If all is successful, you should now be able to use PHP’s sendmail (which at the very least WordPress uses) to send emails from your Ubuntu server using Gmail (or Google Apps).
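If you want to take PHP out of the equation while debugging, you can also drive msmtp by hand; a sketch, using the same placeholder as the configuration above:

printf 'To: personal@email.com\nSubject: Test Subject\n\nTest body text\n' | \
  msmtp -C /etc/msmtprc -a <MSMTP_ACCOUNT> personal@email.com

msmtp reads the message from standard input and takes the recipient as an argument, so any authentication or TLS errors show up directly in the terminal.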

I make no claims that this is the most secure configuration, so if you come across this and realise it’s grossly insecure or something is drastically wrong please let me know and I’ll update it accordingly.

Suppressing color output of the Google Repo tool

Planet Debian - Fri, 19/01/2018 - 7:51am
On Windows, in the cmd shell, the color control characters generated by the Google Repo tool (or its Windows port made by ESRLabs) or git appear as garbage. Unfortunately, the Google Repo tool, besides having a non-google-able name, lacks documentation regarding its options, so sometimes the only way to find the option I want is to look in the code.

To avoid repeatedly digging through the code for this, future self, here is how you disable color output in the repo tool with the info subcommand:

repo --color=never info

Other options are 'auto' and 'always', but for some reason auto does not do the right thing (tm) on Windows and garbage is shown with auto.

eddyp noreply@blogger.com Rambling around foo

Jim Hall: Programming with ncurses

Planet GNOME - Fri, 19/01/2018 - 12:33am
Over at Linux Journal, I am writing an article series about programming on Linux. While graphical user interfaces are very cool, not every program needs to run with a point-and-click interface. So in my "Getting started with ncurses" article series, I discuss how to write programs using the ncurses library functions.

Maybe you aren't familiar with curses or ncurses, but I guarantee you've run programs that use this library. Many programs that run in "terminal" mode, including the vi editor, use the curses set of functions to draw to the screen. The curses functions allow you to put text anywhere on the screen, or read from the keyboard.
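If you have never seen curses code, the canonical minimal program looks something like this (a generic sketch, not an excerpt from the article series):

#include <curses.h>

int main(void)
{
    initscr();                              /* start curses mode */
    mvaddstr(10, 30, "Hello, ncurses!");    /* text at row 10, column 30 */
    refresh();                              /* flush changes to the screen */
    getch();                                /* wait for a key press */
    endwin();                               /* leave curses mode */
    return 0;
}

Compile with the ncurses library linked in, e.g. cc hello.c -lncurses.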

My article series starts with a simple example that demonstrates how to put characters and text on the screen. My example program is a chaos game iteration of Sierpinski's Triangle, which is a very simple program (only 73 lines).

Follow-up articles in the series will include a "Quest" program to demonstrate how to query the screen and use the arrow keys, and how to add colors.
Update:

Linux Journal has posted the second part of my article series: Creating an adventure game in the terminal using ncurses. Soon to come: The same adventure game, using colors!

Building packages with Meson and Debhelper version level 11 for Debian stretch-backports

Planet Debian - Thu, 18/01/2018 - 8:30pm

More a reminder for myself than a blog post...

If you want to backport a project from unstable based on the meson build system and your package uses debhelper to invoke the meson build process, then you need to modify the backported package's debian/control file slightly:

diff --git a/debian/control b/debian/control
index 43e24a2..d33e76b 100644
--- a/debian/control
+++ b/debian/control
@@ -14,7 +14,7 @@ Build-Depends: debhelper (>= 11~),
                libmate-menu-dev (>= 1.16.0),
                libmate-panel-applet-dev (>= 1.16.0),
                libnotify-dev,
-               meson,
+               meson (>= 0.40.0),
                ninja-build,
                pkg-config,
 Standards-Version: 4.1.3

This forces the build to pull in meson from stretch-backports, i.e. a meson version that is at least 0.40.0.

Reasoning: if you want to build your package against debhelper (>= 11~) from stretch-backports, it will use the --wrap-mode option when invoking meson. However, this option only got added in meson 0.40.0. So you need to make sure that the meson version from stretch-backports gets pulled in, too, for your build. The build will fail when using the meson version that we find in Debian stretch.
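For context, a package that builds with Meson via debhelper typically has a debian/rules as small as this (a generic sketch, not the file from the package above); it is this dh sequence that passes --wrap-mode to meson under compat level 11:

#!/usr/bin/make -f
%:
	dh $@ --buildsystem=meson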

sunweaver http://sunweavers.net/blog/blog/1 sunweaver's blog

Morten Welinder: Security From Whom, Indeed

Planet GNOME - Thu, 18/01/2018 - 3:05pm

So Spectre and Meltdown happened.

That was completely predictable, so much so that I, in fact, did predict that side-channel attacks, including those coming via javascript run in a browser, were the thing to look out for. (This was in the context of pointing out that pushing Wayland as a security improvement over plain old X11 was misguided.)

I recall being told that such attacks were basically nation-state level due to cost, complexity, and required target information. How is that prediction working out for you?

Sean Davis: MenuLibre 2.1.4 Released

Planet Ubuntu - Thu, 18/01/2018 - 12:50pm

The wait is over. MenuLibre 2.1.4 is now available for public testing and translations! With well over 100 commits, numerous bug fixes, and a lot of polish, the best menu editing solution for Linux is ready for primetime.

What’s New?

New Features
  • New Test Launcher button to try out new changes without saving (LP: #1315875)
  • New Sort Alphabetically button to instantly sort subdirectories (LP: #1315536)
  • Added ability to create subdirectories in system-installed paths (LP: #1315872)
  • New Parsing Errors tool to review invalid launcher files
  • New layout preferences! Budgie, GNOME, and Pantheon users will find that MenuLibre uses client side decorations (CSD) by default, while other desktops will use the more traditional server side decorations with a toolbar. Users are able to set their preference via the command line (see the example below).
    • -b, --headerbar to use the headerbar layout
    • -t, --toolbar to use the toolbar layout
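For example, running menulibre --toolbar should force the traditional toolbar layout on any desktop, and menulibre -b the headerbar one (a usage sketch based on the flags above).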
General
  • The folder icon is now used in place of applications-other for directories (LP: #1605905)
  • DBusActivatable and Hidden keys are now represented by switches instead of text entries
  • Additional non-standard but commonly used categories have been added
  • Support for the Implements key has been added
  • Cinnamon, EDE, LXQt, and Pantheon have been added to the list of supported ShowIn environments
  • All file handling has been replaced with the better-maintained GLib KeyFile library
  • The Version key has been bumped to 1.1 to comply with the latest version of the specification
Bug Fixes
  • TypeError when adding a launcher and nothing is selected in the directory view (LP: #1556664)
  • Invalid categories added to first launcher in top-level directory under Xfce (LP: #1605973)
  • Categories created by Alacarte not respected, custom launchers deleted (LP: #1315880)
  • Exit application when Ctrl-C is pressed in the terminal (LP: #1702725)
  • Some categories cannot be removed from a launcher (LP: #1307002)
  • Catch exceptions when saving and display an error (LP: #1444668)
  • Automatically replace ~ with full home directory (LP: #1732099)
  • Make hidden items italic (LP: #1310261)
Translation Updates
  • This is the first release with complete documentation for every translated string in MenuLibre. This allows translators to better understand the context of each string when they adapt MenuLibre to their language, and should lead to more and better quality translations in the future.
  • The following languages were updated since MenuLibre 2.1.3:
    • Brazilian Portuguese, Catalan, Croatian, Danish, French, Galician, German, Italian, Kazakh, Lithuanian, Polish, Russian, Slovak, Spanish, Swedish, Ukrainian
Screenshots

Downloads

The latest version of MenuLibre can always be downloaded from the Launchpad archives. Grab version 2.1.4 from the link below.

https://launchpad.net/menulibre/2.1/2.1.4/+download/menulibre-2.1.4.tar.gz

  • SHA-256: 36a6350019e45fbd1219c19a9afce29281e806993d4911b45b371dac50064284
  • SHA-1: 498fdd0b6be671f4388b6fa77a14a7d1e127e7ce
  • MD5: 0e30f24f544f0929621046d17874ecf0
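To check a downloaded tarball against the sums above, run e.g.:

sha256sum menulibre-2.1.4.tar.gz

and compare the output with the SHA-256 value listed.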

cubietruck temperature sensor

Planet Debian - Thu, 18/01/2018 - 4:47am

I wanted to use 1-wire temperature sensors (DS18B20) with my Cubietruck board, running Debian. The only page I could find documenting this is for the sunxi kernel, not the mainline kernel Debian uses. After a couple of hours of research I got it working, so here goes.

wiring

First you need to pick a GPIO pin to use for the 1-wire signal. The Cubietruck's GPIO pins are documented here, and I chose to use pin PG8. Other pins should work as well, although I originally tried to use PB17 and could not get it to work for an unknown reason. I also tried to use PB18 but there was a conflict with something else trying to use that same pin. To find a free pin, cat /sys/kernel/debug/pinctrl/1c20800.pinctrl/pinmux-pins and look for a line like: "pin 200 (PG8): (MUX UNCLAIMED) (GPIO UNCLAIMED)"

Now wire the DS18B20 sensor up. With its flat side facing you, the left pin goes to ground, the center pin to PG8 (or whatever GPIO pin you selected), and the right pin goes to 3.3V. Don't forget to connect the necessary 4.7K ohm resistor between the center and right pins.

You can find plenty of videos showing how to wire up the DS18B20 on youtube, which typically also involve a quick config change to a Raspberry Pi running Raspbian to get it to see the sensor. With Debian it's unfortunately quite a lot more complicated, and so this blog post got kind of long.

configuration

We need to get the kernel to enable the GPIO pin. This seems like a really easy thing, but this is where it gets really annoying and painful.

You have to edit the Cubietruck's device tree. So apt-get source linux and in there edit arch/arm/boot/dts/sun7i-a20-cubietruck.dts

In the root section ('/'), near the top, add this:

onewire_device {
	compatible = "w1-gpio";
	gpios = <&pio 6 8 GPIO_ACTIVE_HIGH>; /* PG8 */
	pinctrl-names = "default";
	pinctrl-0 = <&my_w1_pin>;
};

In the `&pio` section, add this:

my_w1_pin: my_w1_pin@0 {
	allwinner,pins = "PG8";
	allwinner,function = "gpio_in";
};

Note that if you used a different pin than PG8 you'll need to change that. The "pio 6 8" means letter G, pin 8. The 6 is because G is the 7th letter of the alphabet, counting from zero (so PB17, for example, would be "pio 1 17"). I don't know where this is documented; I reverse engineered it from another example. Why this can't be hex, or octal, or symbolic names or anything sane, I don't know.

Now you'll need to compile the dts file into a dtb file. One way is to configure the kernel and use its Makefile; I avoided that by first sudo apt-get install device-tree-compiler and then running, in the top of the linux source tree:

cpp -nostdinc -I include -undef -x assembler-with-cpp \
	./arch/arm/boot/dts/sun7i-a20-cubietruck.dts | \
	dtc -O dtb -b 0 -o sun7i-a20-cubietruck.dtb -

You'll need to install that into /etc/flash-kernel/dtbs/sun7i-a20-cubietruck.dtb on the cubietruck. Then run flash-kernel to finish installing it.

use

Now reboot, and if all went well, it'll come up and the GPIO pin will finally be turned on:

# grep PG8 /sys/kernel/debug/pinctrl/1c20800.pinctrl/pinmux-pins
pin 200 (PG8): onewire_device 1c20800.pinctrl:200 function gpio_in group PG8

And if you picked a GPIO pin that works and got the sensor wired up correctly, in /sys/bus/w1/devices/ there should be a subdirectory for the sensor, using its unique ID. Here I have two sensors connected, which 1-wire makes easy to do: just hang them all off the same wire... er, wires.

root@honeybee:/sys/bus/w1/devices> ls
28-000008290227@  28-000008645973@  w1_bus_master1@
root@honeybee:/sys/bus/w1/devices> cat *-*/w1_slave
f6 00 4b 46 7f ff 0a 10 d6 : crc=d6 YES
f6 00 4b 46 7f ff 0a 10 d6 t=15375
f6 00 4b 46 7f ff 0a 10 d6 : crc=d6 YES
f6 00 4b 46 7f ff 0a 10 d6 t=15375

So, it's 15.37 Celsius in my house. I need to go feed the fire, this took too long to get set up.
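If you want the temperature as plain degrees in a script, note that the t= field is millidegrees Celsius; a one-liner sketch:

awk -F 't=' '/t=/ { printf "%.3f C\n", $2 / 1000 }' /sys/bus/w1/devices/28-*/w1_slave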

future work

Are you done at this point? I fear not entirely, because what happens when there's a kernel upgrade? If the device tree has changed in some way in the new kernel, you might need to update the modified device tree file. Or it might not boot properly or not work in some way.

With Raspbian, you don't need to modify the device tree. Instead it has support for device tree overlay files, which add some entries to the main device tree. The distribution includes a bunch of useful overlays, including one that enables GPIO pins. The Raspberry Pi's bootloader takes care of merging the main device tree and the selected overlays.

There are u-boot patches to do such merging, or the merging could be done before reboot (by flash-kernel perhaps), but apparently Debian's device tree files are built without phandle based referencing needed for that to work. (See http://elektranox.org/2017/05/0020-dt-overlays/)

There's also a kernel patch to let overlays be loaded on the fly using configfs. It seems to have been around for several years without being merged, for whatever reason, but would avoid this problem nicely if it ever did get merged.

Joey Hess http://joeyh.name/blog/ see shy jo

First steps with arm64

Planet Debian - Wed, 17/01/2018 - 10:52pm

As it was Christmas time recently, I wanted to treat myself to something special. So I ordered a Macchiatobin from SolidRun. Unfortunately their delivery times are no exaggeration, and I had to wait about two months for my device. I couldn’t celebrate Christmas with it, but fortunately New Year.

Anyway, first I tried to use the included U-Boot to start the Debian installer from a USB stick. Oh boy, that was a bad idea and in retrospect just a waste of time. But there is debian-arm@l.d.o, and Steve McIntyre was so kind to help me out of my vale of tears.

First I put the EDK2 flash image from Leif on an SD card and set the jumper on the board to boot from it (for SD card boot, the rightmost jumper has to be set!), and off we go. Afterwards I put the debian-testing-arm64-netinst.iso on a USB stick and tried to start this. Unfortunately I was hit by #887110 and had to use a mini installer from here. Installation went smoothly, and as a last step I had to start the rescue mode and install grub to the removable media path. It is an extra point in the installer, so no need to enter cryptic commands :-).

Voila, rebooted and my Macchiatobin is up and running.

alteholz http://blog.alteholz.eu blog.alteholz.eu » planetdebian

Privacy expectations and the connected home

Planet Debian - Wed, 17/01/2018 - 10:45pm
Traditionally, devices that were tied to logins tended to indicate that in some way - turn on someone's xbox and it'll show you their account name, run Netflix and it'll ask which profile you want to use. The increasing prevalence of smart devices in the home changes that, in ways that may not be immediately obvious to the majority of people. You can configure a Philips Hue with wall-mounted dimmers, meaning that someone unfamiliar with the system may not recognise that it's a smart lighting system at all. Without any actively malicious intent, you end up with a situation where the account holder is able to infer whether someone is home without that person necessarily having any idea that that's possible. A visitor who uses an Amazon Echo is not necessarily going to know that it's tied to somebody's Amazon account, and even if they do they may not know that the log (and recorded audio!) of all interactions is available to the account holder. And someone grabbing an egg out of your fridge is almost certainly not going to think that your smart egg tray will trigger an immediate notification on the account owner's phone that they need to buy new eggs.

Things get even more complicated when there's multiple account support. Google Home supports multiple users on a single device, using voice recognition to determine which queries should be associated with which account. But the account that was used to initially configure the device remains as the fallback, with unrecognised voices ending up logged to it. If a voice is misidentified, the query may end up being logged to an unexpected account.

There's some interesting questions about consent and expectations of privacy here. If someone sets up a smart device in their home then at some point they'll agree to the manufacturer's privacy policy. But if someone else makes use of the system (by pressing a lightswitch, making a spoken query or, uh, picking up an egg), have they consented? Who has the social obligation to explain to them that the information they're producing may be stored elsewhere and visible to someone else? If I use an Echo in a hotel room, who has access to the Amazon account it's associated with? How do you explain to a teenager that there's a chance that when they asked their Home for contact details for an abortion clinic, it ended up in their parent's activity log? Who's going to be the first person divorced for claiming that they were vegan but having been the only person home when an egg was taken out of the fridge?

To be clear, I'm not arguing against the design choices involved in the implementation of these devices. In many cases it's hard to see how the desired functionality could be implemented without this sort of issue arising. But we're gradually shifting to a place where the data we generate is not only available to corporations who probably don't care about us as individuals, it's also becoming available to people who own the more private spaces we inhabit. We have social norms against bugging our houseguests, but we have no social norms that require us to explain to them that there'll be a record of every light that they turn on or off. This feels like it's going to end badly.

(Thanks to Nikki Everett for conversations that inspired this post)

(Disclaimer: while I work for Google, I am not involved in any of the products or teams described in this post, and my opinions are my own rather than those of my employer)

Matthew Garrett https://mjg59.dreamwidth.org/ Matthew Garrett

Philip Chimento: Announcing Flapjack

Planet GNOME - Wed, 17/01/2018 - 9:54pm

Here’s a post about a tool that I’ve developed at work. You might find it useful if you contribute to any desktop platform libraries that are packaged as a Flatpak runtime, such as GNOME or KDE.

Flatpak is a system for delivering desktop applications that was pioneered by the GNOME community. At Endless, we have jumped aboard the Flatpak train. Our product Endless OS is a Linux distribution, but not a traditional one in the sense of being a collection of packages that you install with a package manager; it’s an immutable OS image, with atomic updates delivered through OSTree. Applications are sandboxed-only and Flatpak-only.

Flatpak makes life much easier for application developers who want to get their applications to users without having to care which Linux distribution those users use. It means that as an application developer, I don’t have to fire up three different virtual machines and email five packaging contributors whenever I make a release of my application. (Or, in theory it would work that way if I would stop using deprecated libraries in my application!)

This is what flapjacks are in the UK, Ireland, Isle of Man, and Newfoundland. Known as “granola bars” or “oat bars” elsewhere. By Alistair Young, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=5331306

On my work computer I took the leap and now develop everything on an immutable OSTree system just like it would be running in production. I now develop everything inside a Flatpak sandbox. However, while Flatpak works great when packaging some code that already exists, it is a bit lacking in the developer experience.

For app developers, Carlos Soriano has written a tool called flatpak-dev-cli based on a workflow designed by Thibault Saunier of the Pitivi development team. This has proven very useful for developing Flatpak apps.

But a lot of the work I do is not on apps, but on the library stack that is used by apps on Endless OS. In fact, my team’s main product is a Flatpak runtime. I wanted an analogue of flatpak-dev-cli for developing the libraries that live inside a Flatpak runtime.

Flapjack

…while this is what flapjacks are everywhere else in Canada, and in the US. Also known as “pancakes.” By Belathee Photography, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=15167594

Flapjack is that tool. It’s a wrapper around Flatpak-builder that is intended to replace JHBuild in the library developer’s toolbox.

For several months I’ve been using it in my day-to-day work, on a system running Endless OS (which has hardly any developer tools installed by default). It only requires Flatpak-builder, Git, and Python.

In Flapjack’s README I included a walkthrough for reproducing Tristan’s trick from his BuildStream talk at GUADEC 2017 where he built an environment with a modified copy of GTK that showed all the UI labels upside-down.

That walkthrough is pretty much what my day-to-day development workflow looks like now. As an example, a recent bug required me to patch eos-knowledge-lib and xapian-glib at the same time, which are both components of Endless’s Modular Framework runtime. I did approximately this:

flapjack open xapian-glib
flapjack open eos-knowledge-lib
cd checkout/xapian-glib
# ... make changes to code ...
flapjack test xapian-glib
# ... keep changing and repeating until the tests pass!
cd ../eos-knowledge-lib
# ... make more changes to code ...
flapjack test eos-knowledge-lib
# ... keep changing and repeating until the tests pass!
flapjack build
# ... keep changing and repeating until the whole runtime builds!
flapjack run com.endlessm.encyclopedia.en
# run Encyclopedia, which is an app that uses this runtime, to check
# that my fix worked
git checkout -b etc. etc.
# create branches for my work and push them

I also use Flapjack’s “devtools manifest” to conveniently provide developer tools that aren’t present in Endless OS’s base OSTree layer. In Flapjack’s readme I gave an example of adding the jq tool to the devtools manifest, but I also have cppcheck, RR, and a bunch of Python modules that I added with flatpak-pip-generator. Whenever I need to use any of these tools, I just open flapjack shell and they’re available!
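(For the Python modules, the flow with flatpak-pip-generator is roughly: point it at a package name and it emits a flatpak-builder module JSON that can be referenced from the devtools manifest. A hypothetical invocation; check the tool’s README for exact usage:

flatpak-pip-generator requests

which generates a python3-requests module file to add to the manifest’s module list.)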

Questions you might ask

Why is it called Flapjack?

The working title was jokingly chosen to mess up your muscle memory if you were used to typing flatpak, but it stuck and became the real name. If it does annoy you, you can alias it to fj or something.

Flatpak-builder is old news, why does Flapjack not use BuildStream?

I would like it if that were the case! I suspect that BuildStream would solve my main problem with Flapjack, which is that it is slow. In fact I started out writing Flapjack as a wrapper around BuildStream, instead of Flatpak-builder. But at the time BuildStream just didn’t have enough documentation for me to get my head around it quickly enough. I hear that this is changing and I would welcome a port to BuildStream!

As well, it was not possible to allow --socket=x11 during a build like you can with Flatpak-builder, so I couldn’t get it to run unit tests for modules that depended on GTK.

Why are builds with Flapjack so slow?

The slowest parts are caching each build step (I suspect here is where using BuildStream would help a lot) and exporting the runtime’s debug extension to the local Flatpak repository. For the latter, this used to be even slower, before my colleague Emmanuele Bassi suggested using a "bare-user" repository. I’m still looking for a way to speed this up. I suspect it should be possible, since for Flapjack builds we would probably never care about the Flatpak repository history.

Can I use Flapjack to develop system components like GNOME Shell?

No. There still isn’t a good developer story for working on system components on an immutable OS! At Endless, the people who work on those components will generally replace their OSTree file system with a mutable one. This isn’t a very good strategy because it means you’re developing on a system that is different from what users are running in production, but I haven’t found any better way so far.

Epilogue

Thanks to my employer Endless for allowing me to reserve some time to write this tool in a way that it would be useful for the wider Flatpak community, rather than just internally.

That’s about it! I hope Flapjack is useful for you. If you have any other questions, feel free to ask me.

Where to find it

Flapjack’s page on PyPI: https://pypi.python.org/pypi/flapjack
The code on GitHub: https://github.com/endlessm/flapjack
Report bugs and request features: https://github.com/endlessm/flapjack/issues

Not being perfect

Planet Debian - Wed, 17/01/2018 - 8:49pm

I know I am very late on this update (and also very late on emailing back my mentors). I am sorry. It took me a long time to figure out how to put into words everything that has been going on for the past few weeks.

Let's begin with this: yes, I am so very aware there is an evaluation coming up (in two days) and that it is important "to have at least one piece of work that is visible in the week of evaluation" to show what I have been doing since the beginning of the internship.

But the truth is: as of now, I don't have any code to show. And what that screams to me is that it means that I have failed. I didn't know what to say either to my mentors or in here to explain that I didn't meet everyone's expectations. That I had not been perfect.

So I had to ask: what could I learn from this, and how could I keep going and keep working on this project?

Coincidence or not, I was wondering that when I crossed paths (again) with one of the most amazing TED Talks there is:

Reshma Saujani's "Teach girls bravery, not perfection"

And yes, that could be me. Even though I had written down almost every step I had taken trying to solve the problem I got stuck on, I wasn't ready to share all that, not even with my mentors (yes, I can see now how that isn't very helpful). I would rather let them go on thinking I am lazy and didn't do anything all this time than send all those notes about my failure and have them realize I didn't know what they expected me to know or... well, that they'd picked the wrong intern.

What was I trying to do?

As I talked about in my previous post, the EventCalendar macro seemed like a good place to start doing some work. I wanted to add a piece of code to it that would allow exporting the events data to the iCalendar format. Because this is sort of what I did in my contribution for github-icalendar, and because the mentor Daniel had suggested something like that, I thought that it would be a good way of getting myself familiarized with how macro development is done for the MoinMoin wiki.

How far did I go?

As I had planned to do, I started by studying the EventMacro.py, to understand how it works, and taking notes.

EventMacro fetches events from MoinMoin pages and uses Python's Pickle module to serialize and to de-serialize the data. This should be okay if you can trust the people editing the wiki (and, therefore, creating the events) enough, but this might not be a good option if we start using external sources (such as third-party websites) for event data - at least, not directly on the data gathered. See the warning below, from the Pickle module docs:

Warning: The pickle module is not secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source.
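(As an aside, this is part of why JSON is attractive for data from external sources; a minimal sketch, where untrusted_bytes is a hypothetical input:

import json
# json.loads either returns plain data structures or raises ValueError;
# unlike pickle.loads, it cannot execute code embedded in the input.
events = json.loads(untrusted_bytes)
)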

From the code and from the inputs from the mentors, I understand that EventMacro is more about displaying the events, putting them on a wiki page. Indeed, this could be helpful later on, but not exactly for the purpose we want now, which is to have some standalone application to gather data about the events, model this data in the way that we want it to be organized, and maybe make it accessible via an API and/or export it as JSON. Then either MoinMoin or any other FOSS community project could choose how to display and make use of it.

What did go wrong?

But the thing is... even though I had studied the code, I couldn't see it running on my MoinMoin instance. I have tried and tried, but, generally speaking, I got stuck on trying to get macros to work. Standard macros, that come with MoinMoin, work perfectly. But macros from MacroMarket, I couldn't find a way to make them work.

For the EventCalendar macro, I tried my best to follow the instructions on the Installation Guide, but I simply couldn't find a way for it to be processed.

Things I did:

  • I downloaded the macro file and renamed it to EventCalendar.py
  • I put it in the local macro directory (yourwiki/data/plugins/macro) and proceeded with the rest of the instructions.
  • When that didn't work, I copied the file to the global macro directory (MoinMoin/macro), it wasn't enough.
  • I made sure to add the .css to all styles, both for common.css and screen.css, still didn't work.
  • I thought that maybe it was the arguments on the macro, so I tried to add it to the wiki page in the following ways:
<<EventCalendar>>
<<EventCalendar(category=CategoryEventCalendar)>>
<<EventCalendar(,category=CategoryEventCalendar)>>
<<EventCalendar(,,category=CategoryEventCalendar)>>

Still, the macro wasn't processed and appeared just like that on the page, even though I had already created pages with that category and added event info to them.

To investigate, I tried using other macros:

These all came with the MoinMoin core and they all worked.

I tried other ones:

That, just like EventCalendar, didn't work.

Going through these macros also made me realize how awfully documented most of them usually are, in particular about the installation and making them work with the whole system, even if the code is clear. (And to think that at the beginning of this whole thing I had to search and read up on what DocStrings are because the MoinMoin Coding Style says: "That does NOT mean that there should be no docstrings.". Now it seems like some developers didn't know what DocStrings were either.)

I checked permissions, but it couldn't be that, because the downloaded macros have the same permissions as the other macros and they all belong to the same user.

I thought that maybe it was a problem with Python versions or even with the way the MoinMoin installation was done. So I tried some alternatives. First, I tried to install it again on a new CodeAnywhere Ubuntu container, but I still had the same problem.

I tried with a local Debian installation... same problem. Even though Ubuntu is based on Debian, the fact that macros didn't work on either was telling me that the problem wasn't necessarily the distribution, that it didn't matter which packages or libraries each of them comes with. The problem seemed to be somewhere else.

Then, I proceeded to analyze the Apache error log to see if I could figure it out.

[Thu Jan 11 00:33:28.230387 2018] [wsgi:error] [pid 5845:tid 139862907651840] [remote ::1:43998] 2018-01-11 00:33:28,229 WARNING MoinMoin.log:112 /usr/local/lib/python2.7/dist-packages/MoinMoin/support/werkzeug/filesystem.py:63: BrokenFilesystemWarning: Detected a misconfigured UNIX filesystem: Will use UTF-8 as filesystem encoding instead of 'ANSI_X3.4-1968'

[Thu Jan 11 00:34:11.089031 2018] [wsgi:error] [pid 5840:tid 139862941255424] [remote ::1:44010] 2018-01-11 00:34:11,088 INFO MoinMoin.config.multiconfig:127 using wiki config: /usr/local/share/moin/wikiconfig.pyc

Alright, the wikiconfig.py wasn't actually set to utf-8, my bad. I fixed that and re-read the file again to make sure I hadn't missed anything this time. I restarted the server and... nope, macros still don't work.

So, misconfigured UNIX filesystem? Not quite sure what that was, but I searched for it and it seemed to be easily solved by generating an en_US.UTF-8 locale and/or setting it, right?

Well, these errors really did go away... but even after restarting the apache server, those macros still wouldn't work.

So this is how things went up until today. It ends up with me not having a clue where else to look to try and fix the macros and make them work so I could start coding and having some results... or does it?

This was a post about a failure, but...

Whoever wrote that "often times writing a blog post will help you find the solution you're working on" in the e-mail we received when we were accepted for Outreachy... damn, you were right.

I opened the command history to get my MoinMoin instance running again, so I could verify that the names of the macros that worked and which ones didn't were correct for this post, when...

I cannot believe I couldn't figure it out.

What had been happening all this time? Yes, the .py macro file should go to moin/data/plugin/macro, but not in the directories I was putting it. I didn't realize that, all this time, the wiki wasn't actually installed in the directory yourwiki/data/plugins/macro where the extracted source code is. It is installed under /usr/local/share/, so the files should be put in /usr/local/share/moin/data/plugin/macro. Of course I should've realized this sooner; after all, I was the one who installed it, but... it happens.
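(For anyone hitting the same wall, a quick sanity check is to locate the live wiki instance first and only then copy the macro; hypothetical commands, adjust the prefix to your setup:

find /usr/local/share -name 'wikiconfig.py*'
ls /usr/local/share/moin/data/plugin/macro

The Apache log above already held the hint: it loads /usr/local/share/moin/wikiconfig.pyc.)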

I copied the files there, set the appropriate owner and... IT-- WORKED!

Renata https://rsip22.github.io/blog/ Renata's blog

Announcing "Just TODO It"

Planet Debian - Wed, 17/01/2018 - 6:20pm

Recently, I wished to use a trivially-simple TODO-list application whilst working on a project. I had a look through what was available to me in the "GNOME Software" application and was surprised to find nothing suitable. In particular I just wanted to capture a list of actions that I could tick off; I didn't want anything more sophisticated than that (and indeed, more sophistication would mean a learning curve I couldn't afford at the time). I then remembered that I'd written one myself, twelve years ago. So I found the old code, dusted it off, made some small adjustments so it would work on modern systems and published it.

At the time that I wrote it, I found (at least) one other similar piece of software called "Tasks" which used Evolution's TODO-list as the back-end data store. I can no longer find any trace of this software, and the old web host (projects.o-hand.com) has disappeared.

My tool is called Just TODO It and it does very little. If that's what you want, great! You can reach the source via that prior link or jump straight to GitHub: https://github.com/jmtd/todo

jmtd http://jmtd.net/log/ Jonathan Dowland's Weblog

The Fridge: Ubuntu 17.04 (Zesty Zapus) reached End of Life on January 13, 2018

Planet Ubuntu - Wed, 17/01/2018 - 2:32pm
Ubuntu announced its 17.04 (Zesty Zapus) release almost 9 months ago, on April 13, 2017. As a non-LTS release, 17.04 has a 9-month support cycle and, as such, will reach end of life on Saturday, January 13th. At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 17.04.

The supported upgrade path from Ubuntu 17.04 is via Ubuntu 17.10. Instructions and caveats for the upgrade may be found at: https://help.ubuntu.com/community/Upgrades

Ubuntu 17.10 continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce

Development of a complete response to the highly-publicized Meltdown and Spectre vulnerabilities is ongoing, and due to the timing with respect to this End of Life, we will not be providing updated Linux kernel packages for Ubuntu 17.04. We advise users to upgrade to Ubuntu 17.10 and install the updated kernel packages for that release when they become available. For more information about Canonical’s response to the Meltdown and Spectre vulnerabilities, see: https://insights.ubuntu.com/2018/01/04/ubuntu-updates-for-the-meltdown-spectre-vulnerabilities/

Since its launch in October 2004 Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

https://lists.ubuntu.com/archives/ubuntu-announce/2018-January/000227.html

Originally posted to the ubuntu-announce mailing list on Fri Jan 5 22:23:25 UTC 2018 by Steve Langasek, on behalf of the Ubuntu Release Team

Andy Wingo: instruction explosion in guile

Planet GNOME - Wed, 17/01/2018 - 11:30am

Greetings, fellow Schemers and compiler nerds: I bring fresh nargery!

instruction explosion

A couple years ago I made a list of compiler tasks for Guile. Most of these are still open, but I've been chipping away at the one labeled "instruction explosion":

Now we get more to the compiler side of things. Currently in Guile's VM there are instructions like vector-ref. This is a little silly: there are also instructions to branch on the type of an object (br-if-tc7 in this case), to get the vector's length, and to do a branching integer comparison. Really we should replace vector-ref with a combination of these test-and-branches, with real control flow in the function, and then the actual ref should use some more primitive unchecked memory reference instruction. Optimization could end up hoisting everything but the primitive unchecked memory reference, while preserving safety, which would be a win. But probably in most cases optimization wouldn't manage to do this, which would be a lose overall because you have more instruction dispatch.

Well, this transformation is something we need for native compilation anyway. I would accept a patch to do this kind of transformation on the master branch, after version 2.2.0 has forked. In theory this would remove most all high level instructions from the VM, making the bytecode closer to a virtual CPU, and likewise making it easier for the compiler to emit native code as it's working at a lower level.

Now that I'm getting close to finished I wanted to share some thoughts. Previous progress reports on the mailing list.

a simple loop

As an example, consider this loop that sums the 32-bit floats in a bytevector. I've annotated the code with lines and columns so that you can correspond different pieces to the assembly.

 0       8   12     19
 +-v-------v---v------v-
1| (use-modules (rnrs bytevectors))
2| (define (f32v-sum bv)
3|   (let lp ((n 0) (sum 0.0))
4|     (if (< n (bytevector-length bv))
5|         (lp (+ n 4)
6|             (+ sum (bytevector-ieee-single-native-ref bv n)))
7|         sum)))

The assembly for the loop before instruction explosion went like this:

L1:
  17    (handle-interrupts)     at (unknown file):5:12
  18    (uadd/immediate 0 1 4)
  19    (bv-f32-ref 1 3 1)      at (unknown file):6:19
  20    (fadd 2 2 1)            at (unknown file):6:12
  21    (s64<? 0 4)             at (unknown file):4:8
  22    (jnl 8)                 ;; -> L4
  23    (mov 1 0)               at (unknown file):5:8
  24    (j -7)                  ;; -> L1

So, already Guile's compiler has hoisted the (bytevector-length bv) and unboxed the loop index n and accumulator sum. This work aims to simplify further by exploding bv-f32-ref.

exploding the loop

In practice, instruction explosion happens in CPS conversion, as we are converting the Scheme-like Tree-IL language down to the CPS soup language. When we see a Tree-IL primcall (a call to a known primitive), instead of lowering it to a corresponding CPS primcall, we inline a whole blob of code.

In the concrete case of bv-f32-ref, we'd inline it with something like the following:

(unless (and (heap-object? bv)
             (eq? (heap-type-tag bv) %bytevector-tag))
  (error "not a bytevector" bv))

(define len (word-ref bv 1))
(define ptr (word-ref bv 2))

(unless (and (<= 4 len)
             (<= idx (- len 4)))
  (error "out of range" idx))

(f32-ref ptr idx)

As you can see, there are four branches hidden in the bv-f32-ref: two to check that the object is a bytevector, and two to check that the index is within range. In this explanation we assume that the offset idx is already unboxed, but actually unboxing the index ends up being part of this work as well.

One of the goals of instruction explosion was that by breaking the operation into a number of smaller, more orthogonal parts, native code generation would be easier, because the compiler would only have to know about those small bits. However without an optimizing compiler, it would be better to reify a call out to a specialized bv-f32-ref runtime routine instead of inlining all of this code -- probably whatever language you write your runtime routine in (C, rust, whatever) will do a better job optimizing than your compiler will.

But with an optimizing compiler, there is the possibility of removing possibly everything but the f32-ref. Guile doesn't quite get there, but almost; here's the post-explosion optimized assembly of the inner loop of f32v-sum:

L1:
  27    (handle-interrupts)
  28    (tag-fixnum 1 2)
  29    (s64<? 2 4)             at (unknown file):4:8
  30    (jnl 15)                ;; -> L5
  31    (uadd/immediate 0 2 4)  at (unknown file):5:12
  32    (u64<? 2 7)             at (unknown file):6:19
  33    (jnl 5)                 ;; -> L2
  34    (f32-ref 2 5 2)
  35    (fadd 3 3 2)            at (unknown file):6:12
  36    (mov 2 0)               at (unknown file):5:8
  37    (j -10)                 ;; -> L1

good things

The first thing to note is that unlike the "before" code, there's no instruction in this loop that can throw an exception. Neat.

Next, note that there's no type check on the bytevector; the peeled iteration preceding the loop already proved that the bytevector is a bytevector.

And indeed there's no reference to the bytevector at all in the loop! The value being dereferenced in (f32-ref 2 5 2) is a raw pointer. (Read this instruction as, "sp[2] = *(float*)((byte*)sp[5] + (uptrdiff_t)sp[2])".) The compiler does something interesting; the f32-ref CPS primcall actually takes three arguments: the garbage-collected object protecting the pointer, the pointer itself, and the offset. The object itself doesn't appear in the residual code, but including it in the f32-ref primcall's inputs keeps it alive as long as the f32-ref itself is alive.
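
Guile's C API exposes the same keep-alive trick explicitly via scm_remember_upto_here_1. A minimal sketch, assuming the usual libguile bytevector accessors; the point is that the loop only touches the derived pointer, so we have to tell the GC the object stays live until the last raw access:

#include <libguile.h>

float
sum_f32 (SCM bv)
{
  float *data = (float *) SCM_BYTEVECTOR_CONTENTS (bv);
  size_t n = SCM_BYTEVECTOR_LENGTH (bv) / sizeof (float);
  float sum = 0.0f;

  for (size_t i = 0; i < n; i++)
    sum += data[i];

  /* Consider bv live up to here, even though the loop only used the
     derived pointer; without this the GC could collect bv mid-loop. */
  scm_remember_upto_here_1 (bv);
  return sum;
}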

bad things

Then there are the limitations. Firstly, instruction 28 tags the u64 loop index as a fixnum, but never uses the result. Why is this here? Sadly it's because the value is used in the bailout at L2. Recall this pseudocode:

(unless (and (<= 4 len)
             (<= idx (- len 4)))
  (error "out of range" idx))

Here the error ends up lowering to a throw CPS term that the compiler recognizes as a bailout and renders out-of-line; cool. But it uses idx as an argument, as a tagged SCM value. The compiler untags the loop index, but has to keep a tagged version around for the error cases.

The right fix is probably some kind of allocation sinking pass that sinks the tag-fixnum to the bailouts. Oh well.

Additionally, there are two tests in the loop. Are both necessary? Turns out, yes :( Imagine you have a bytevector of length 1025. The loop continues until the last ref at offset 1024, which passes the loop's length check (1024 < 1025), but only one byte is available at that offset when the ref needs four, so we have to throw an exception. The compiler did as good a job as we could expect it to do.

is it worth it? where to now?

On the one hand, instruction explosion is a step sideways. The code is closer to optimal, but there are more instructions. Because Guile currently has a bytecode VM, that means more total interpreter overhead. Testing on a 40-megabyte bytevector of 32-bit floats, the exploded f32v-sum completes in 115 milliseconds compared to around 97 for the earlier version.

On the other hand, it is very easy to imagine how to compile these instructions to native code, either ahead-of-time or via a simple template JIT. You practically just have to look up the instructions in the corresponding ISA reference, is all. The result should perform quite well.

I will probably take a whack at a simple template JIT first that does no register allocation, then ahead-of-time compilation with register allocation. Getting the AOT-compiled artifacts to dynamically link with runtime routines is a sufficient pain in my mind that I will put it off a bit until later. I also need to figure out a good strategy for truly polymorphic operations like general integer addition; probably involving inline caches.
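
To give an idea of what a template JIT means in practice, here is a toy sketch for a made-up two-instruction bytecode: compiling is just pasting a pre-assembled machine-code fragment per instruction and patching immediates. It assumes an x86-64 SysV ABI and POSIX mmap, keeps the single virtual register in %rax, and is purely illustrative; it is not Guile's design.

#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

enum { OP_ADD_IMM, OP_RET };                 /* opcode in the low 8 bits */

/* 48 05 imm32 : add $imm32, %rax (immediate patched per instruction) */
static const uint8_t tmpl_add_imm[] = { 0x48, 0x05, 0, 0, 0, 0 };
/* c3 : ret */
static const uint8_t tmpl_ret[] = { 0xc3 };
/* 48 89 f8 : mov %rdi, %rax (seed the accumulator from the argument) */
static const uint8_t prologue[] = { 0x48, 0x89, 0xf8 };

typedef long (*jit_fn) (long);

static jit_fn
jit_compile (const uint32_t *code, size_t len)
{
    uint8_t *buf = mmap (NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return NULL;
    uint8_t *p = buf;
    memcpy (p, prologue, sizeof prologue);
    p += sizeof prologue;
    for (size_t i = 0; i < len; i++) {
        switch (code[i] & 0xff) {
        case OP_ADD_IMM: {
            uint32_t imm = code[i] >> 8;     /* operand in the high bits */
            memcpy (p, tmpl_add_imm, sizeof tmpl_add_imm);
            memcpy (p + 2, &imm, sizeof imm);  /* patch the immediate */
            p += sizeof tmpl_add_imm;
            break;
        }
        case OP_RET:
            memcpy (p, tmpl_ret, sizeof tmpl_ret);
            p += sizeof tmpl_ret;
            break;
        }
    }
    mprotect (buf, 4096, PROT_READ | PROT_EXEC);
    return (jit_fn) buf;   /* data-to-function cast is fine on POSIX */
}

A program like { OP_ADD_IMM with operand 4, OP_RET } then compiles to a function that adds 4 to its argument; a real template JIT would load and store the VM's register file in memory rather than pinning everything to one hardware register.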

So that's where we're at :) Thanks for reading, and happy hacking in Guile in 2018!

Umang Jain: GNOME Photos: Happenings

Planet GNOME - Mar, 16/01/2018 - 11:10md
New image editing op: Shadows and highlights

Enjoy the shadows and highlights operation, fresh from the oven of GEGL’s workshop. The implementation is a port of darktable’s operation.


Image zooming and scrolling


Crop in landscape/portrait orientation modes

Alessandro implemented changing the orientation of the crop rectangle to portrait/landscape mode. Using libdazzle, I animated the crop preset and orientation changes.


Effects are now applied when you set an edited photo as a background


Other work done:
  • GeglProcessor processing-time logging (789977): gegl_processor_process is an idle handler in gnome-photos, so summing all of its invocations gives the total processing time. This will help track performance without having to write toy programs again and again (see the sketch after this list).

  • Port to g_auto* (788174) - almost there

  • Loading images that need EXIF rotation is now 15 times faster, and EXIF rotation is now completely supported (789196)
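
For the curious, the timing idea in the first bullet looks roughly like the following; the names and structure are hypothetical, not gnome-photos’ actual code:

#include <gegl.h>
#include <glib.h>

typedef struct {
    GeglProcessor *processor;
    gint64 total_us;   /* accumulated processing time */
} ProcessData;

static gboolean
process_idle (gpointer user_data)
{
    ProcessData *data = user_data;
    gdouble progress;
    gint64 t0 = g_get_monotonic_time ();
    gboolean more = gegl_processor_work (data->processor, &progress);

    data->total_us += g_get_monotonic_time () - t0;
    if (!more)
        g_debug ("total processing time: %" G_GINT64_FORMAT " us",
                 data->total_us);
    return more;   /* stay scheduled while there is work left */
}

/* elsewhere: g_idle_add (process_idle, data); */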

Work in progress:
  • Mipmap support (790224) - Mipmapping will enable a better interactive image-editing experience, since the image is processed at a lower resolution. Fewer pixels to process leads to faster processing.

  • Facebook upload (776098) - For a while now, I have been working on and off on Facebook upload, but now I think it’s very close. You are welcome to review the patches :)

  • Import from device (751212) - Debarshi is working on imports and a few other core things related to GEGL.

GEGL is the image processing library that powers GIMP, gnome-photos and many other projects. The upstream projects (GEGL and Babl) also received major updates, including faster RGB to CIE conversions, faster Gaussian blur, faster GeglBuffer accesses, and reduced lock contention. These will probably be covered in a separate post with specific details.

If you want to see more awesome gnome-photos editing features in future, you can support GEGL and Øyvind Kolås via patreon.

Check out the Photos roadmap. If you are interested in contributing, come say hi at #photos on GIMPNet IRC.

Federico Mena-Quintero: Help needed for librsvg 2.42.1

Planet GNOME - Mar, 16/01/2018 - 6:01md

Would you like to help fix a couple of bugs in librsvg, in preparation for the 2.42.1 release?

I have prepared a list of bugs which I'd like to be fixed in the 2.42.1 milestone. Two of them are assigned to myself, as I'm already working on them.

There are two other bugs which I'd love someone to look at. Neither of these requires deep knowledge of librsvg, just some debugging and code-writing:

  • Bug 141 - GNOME's thumbnailing machinery creates an icon which has the wrong fill: it's an image of a builder's trowel, and the inside is filled black instead of with a nice gradient. This is the only place in librsvg where a cairo_surface_t is converted to a GdkPixbuf; this involves unpremultiplying the alpha channel. Maybe the relevant function is buggy? (See the first sketch after this list.)

  • Bug 136: The stroke-dasharray attribute in SVG elements is parsed incorrectly. It is a list of CSS length values, separated by commas or spaces. Currently librsvg uses a shitty parser based on g_strsplit() only for commas; it doesn't allow just a space-separated list. Then, it uses g_ascii_strtod() to parse plain numbers; it doesn't support CSS lengths generically. This parser needs to be rewritten in Rust; we already have machinery there to parse CSS length values properly. (See the second sketch after this list.)
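
For bug 141, the conversion has to turn cairo's premultiplied ARGB into GdkPixbuf's straight alpha. A minimal sketch of unpremultiplying one channel, rounding to nearest (plain truncation is a classic source of exactly this kind of wrong-fill artifact); this is illustrative, not librsvg's actual function:

#include <stdint.h>

static inline uint8_t
unpremultiply (uint8_t channel, uint8_t alpha)
{
    if (alpha == 0)
        return 0;
    /* channel was stored as channel * alpha / 255; invert that,
       rounding to nearest instead of truncating */
    return (uint8_t) ((channel * 255u + alpha / 2) / alpha);
}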
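
For bug 136, the grammar is just numbers separated by commas or whitespace, plus CSS length units. Here is the tokenizing idea, sketched in C with a hypothetical helper (the real fix belongs in the Rust parser, which already knows how to parse full CSS lengths):

#include <stdlib.h>

/* Parse up to max comma- or space-separated numbers from s into out;
   returns how many were parsed, or 0 on a parse error. Real code must
   also handle CSS units (px, em, %, ...), which plain strtod does not. */
static size_t
parse_dasharray (const char *s, double *out, size_t max)
{
    size_t n = 0;
    while (*s != '\0' && n < max) {
        while (*s == ' ' || *s == '\t' || *s == ',')
            s++;   /* skip separators */
        if (*s == '\0')
            break;
        char *end;
        double v = strtod (s, &end);
        if (end == s)
            return 0;   /* not a number */
        out[n++] = v;
        s = end;
    }
    return n;
}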

Feel free to contact me by mail, or write something in the bugs themselves, if you would like to work on them. I'll happily guide you through the code :)

Procrastinating by tweaking my desktop with devilspie2

Planet Debian - Mar, 16/01/2018 - 3:51md

Tweaking my desktop seems to be my preferred form of procrastination. So, a blog post like this is a sure sign I have too much work on my plate.

I have a laptop. I carry it to work and plug it into a large monitor, where I like to keep all my instant or near-instant communications displayed at all times, while I switch between workspaces on my smaller laptop screen as I move from email (workspace one) to shell (workspace two) to web (workspace three), etc.

When I'm not at the office, I only have my laptop screen - which has to accommodate everything.

I soon got tired of dragging things around every time I plugged or unplugged the monitor, and started accumulating a mess of bash scripts running wmctrl and even calling my own python-wnck script. (At first I couldn't get wmctrl to pin a window, but I lived with it. But when gajim switched to gtk3 and my openbox window decorations disappeared, then I couldn't even pin my window manually.)

Now I have the following simpler setup.

Manage hot plugging of my monitor.

Symlink to my monitor status device:

0 jamie@turkey:~$ ls -l ~/.config/turkey/monitor.status
lrwxrwxrwx 1 jamie jamie 64 Jan 15 15:26 /home/jamie/.config/turkey/monitor.status -> /sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-DP-1/status
0 jamie@turkey:~$

Create a udev rule that adjusts my displays every time the monitor is plugged in and every time it is unplugged, placing the monitor to the right of my LCD when it is present:

0 jamie@turkey:~$ cat /etc/udev/rules.d/90-vga.rules
# When a monitor is plugged in, adjust my display to take advantage of it
ACTION=="change", SUBSYSTEM=="drm", ENV{HOTPLUG}=="1", RUN+="/etc/udev/scripts/vga-adjust"
0 jamie@turkey:~$

And here is the udev script:

0 jamie@turkey:~$ cat /etc/udev/scripts/vga-adjust
#!/bin/bash

logger -t "jamie-udev" "Monitor event detected, waiting 1 second for system to detect change."

# We don't know whether the VGA monitor is being plugged in or unplugged so we
# have to autodetect first. And, it takes a few seconds to assess whether the
# monitor is there or not, so sleep for 1 second.
sleep 1

monitor_status="/home/jamie/.config/turkey/monitor.status"
status=$(cat "$monitor_status")
XAUTHORITY=/home/jamie/.Xauthority

if [ "$status" = "disconnected" ]; then
  # The monitor is not plugged in
  logger -t "jamie-udev" "Monitor is being unplugged"
  xrandr --output DP-1 --off
else
  logger -t "jamie-udev" "Monitor is being plugged in"
  xrandr --output DP-1 --right-of eDP-1 --auto
fi
0 jamie@turkey:~$

Move windows into place.

So far, this ensures the monitor is activated and placed in the right position. But nothing has changed in my workspaces yet.

Here's where the devilspie2 configuration comes in:

==> /home/jamie/.config/devilspie2/00-globals.lua <==
-- Collect some global variables to be used throughout.
name = get_window_name();
app = get_application_name();
instance = get_class_instance_name();

-- See if the monitor is plugged in or not. If monitor is true, it is
-- plugged in, if it is false, it is not plugged in.
monitor = false;
device = "/home/jamie/.config/turkey/monitor.status"
f = io.open(device, "rb")
if f then
  -- Read the contents, remove the trailing line break.
  content = string.gsub(f:read "*all", "\n", "");
  if content == "connected" then
    monitor = true;
  end
end

==> /home/jamie/.config/devilspie2/gajim.lua <==
-- Look for my gajim message window. Pin it if we have the monitor.
if string.find(name, "Gajim: conversations.im") then
  if monitor then
    set_window_geometry(1931,31,590,1025);
    pin_window();
  else
    set_window_workspace(4);
    set_window_geometry(676,31,676,725);
    unpin_window();
  end
end

==> /home/jamie/.config/devilspie2/grunt.lua <==
-- grunt is the window I use to connect via irc. I typically connect to
-- grunt via a terminal called spade, which is opened using a-terminal-yoohoo
-- so that bell actions cause a notification. The window is called spade if I
-- just opened it but usually changes names to grunt after I connect via autossh
-- to grunt.
--
-- If no monitor, put spade in workspace 2, if monitor, then pin it to all
-- workspaces and maximize it vertically.
if instance == "urxvt" then
  -- When we launch, the terminal is called spade, after we connect it
  -- seems to get changed to jamie@grunt or something like that.
  if name == "spade" or string.find(name, "grunt:") then
    if monitor then
      set_window_geometry(1365,10,570,1025);
      set_window_workspace(3);
      -- maximize_vertically();
      pin_window();
    else
      set_window_geometry(677,10,676,375);
      set_window_workspace(2);
      unpin_window();
    end
  end
end

==> /home/jamie/.config/devilspie2/terminals.lua <==
-- Note - these will typically only work after I start the terminals
-- for the first time because their names seem to change.
if instance == "urxvt" then
  if name == "heart" then
    set_window_geometry(0,10,676,375);
  elseif name == "spade" then
    set_window_geometry(677,10,676,375);
  elseif name == "diamond" then
    set_window_geometry(0,376,676,375);
  elseif name == "clover" then
    set_window_geometry(677,376,676,375);
  end
end

==> /home/jamie/.config/devilspie2/zimbra.lua <==
-- Look for my zimbra firefox window. Shows support queue.
if string.find(name, "Zimbra") then
  if monitor then
    unmaximize();
    set_window_geometry(2520,10,760,1022);
    pin_window();
  else
    set_window_workspace(5);
    set_window_geometry(0,10,676,375);
    -- Zimbra can take up the whole window on this workspace.
    maximize();
    unpin_window();
  end
end

And lastly, it is started (and restarted) with:

0 jamie@turkey:~$ cat ~/.config/systemd/user/devilspie2.service
[Unit]
Description=Start devilspie2, program to place windows in the right locations.

[Service]
ExecStart=/usr/bin/devilspie2

[Install]
WantedBy=multi-user.target
0 jamie@turkey:~$

Restarting that service (for example, with systemctl --user restart devilspie2) is bound to a key combination that I hit every time I plug in or unplug my monitor.

Jamie McClelland http://current.workingdirectory.net/tags/debian/ pages tagged debian
