
Jeremy Bicha: GNOME Tweaks 3.28 Progress Report 1

Planet Ubuntu - Mon, 29/01/2018 - 10:07pm

A few days ago, I released GNOME Tweaks 3.27.4, a development snapshot on the way to the next stable version 3.28 which will be released alongside GNOME 3.28 in March. Here are some highlights of what’s changed since 3.26.

New Name (Part 2)

For 3.26, we renamed GNOME Tweak Tool to GNOME Tweaks. It was only a partial rename since many underlying parts still used the gnome-tweak-tool name. For 3.28, we have completed the rename. We have renamed the binary, the source tarball releases, the git repository, the .desktop, and app icons. For upgrade compatibility, the autostart file and helper script for the Suspend on Lid Close inhibitor keep the old name.

New Home

GNOME Tweaks has moved from the classic GNOME Git and Bugzilla to the new GNOME-hosted gitlab.gnome.org. The new hosting includes git hosting, a bug tracker and merge requests. Much of GNOME Core has moved this cycle, and I expect many more projects will move for the 3.30 cycle later this year.

Dark Theme Switch Removed

As promised, the Global Dark Theme switch has been removed. Read my previous post for more explanation of why it’s removed and a brief mention of how theme developers should adapt (provide a separate Dark theme!).

Improved Theme Handling

The theme chooser has been improved in several small ways. Now that it’s quite possible to have a GNOME desktop without any gtk2 apps, it doesn’t make sense to require that a theme provide a gtk2 version to show up in the theme chooser, so that requirement has been dropped.

The theme chooser will no longer show the same theme name multiple times if you have a system-wide installed theme and a theme in your user theme directory with the same name. Additionally, GNOME Tweaks does better at supporting the XDG_DATA_DIRS standard in case you use custom locations to store your themes or gsettings overrides.

GNOME Tweaks 3.27.4 with the HighContrastInverse theme

Finally, gtk3 still offers a HighContrastInverse theme but most people probably weren’t aware of that since it didn’t show up in Tweaks. It does now! It is much darker than Adwaita Dark.

Several of these theme improvements (including HighContrastInverse) have also been included in 3.26.4.

For more details about what’s changed and who’s done the changing, see the project NEWS file.

Sam Thursfield: How BuildStream uses OSTree

Planet GNOME - Mon, 29/01/2018 - 7:09pm

I’ve been asked a few times about the relationship between BuildStream and OSTree. The answer is a bit complicated so I decided to answer the question here.

OSTree is a content-addressed content store, inspired in many ways by Git but optimized for storing trees of binary files rather than trees of text files.
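The core idea of a content-addressed store is easy to sketch: every object is stored under the hash of its contents, so a file that appears in many trees is stored only once. A toy illustration in Python (hypothetical and in-memory; OSTree's actual on-disk object model is richer than this):

```python
import hashlib

class ContentStore:
    """Toy content-addressed blob store: objects are keyed by the
    SHA-256 of their contents, so duplicates are stored only once."""

    def __init__(self):
        self.objects = {}  # digest -> bytes

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.objects[digest] = data  # idempotent: same data, same key
        return digest

    def get(self, digest: str) -> bytes:
        return self.objects[digest]

store = ContentStore()
a = store.put(b"#!/bin/sh\necho hello\n")
b = store.put(b"#!/bin/sh\necho hello\n")  # identical file in another tree
assert a == b and len(store.objects) == 1  # deduplicated
```

A tree is then just another object whose contents list the digests of its children, which is what makes storing many similar trees of binaries cheap.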

BuildStream is an integration tool which deals with trees of binary files, and at present it uses OSTree to help with storing, identifying and transferring these trees of binary files.

I’m deliberately using the abstract term “trees of binary files” here because neither BuildStream nor OSTree limits itself to a particular use case. BuildStream itself uses the term “artifact” to describe the output of a build job and in practice this could be the set of development headers and documentation for a library, a package file such as a .deb or .rpm, a filesystem for a whole operating system, a bootable VM disk image, or whatever else.

Anyway let’s get to the point! There are actually four ways that BuildStream directly makes use of OSTree.

The `ostree` source plugin

The `ostree` source plugin allows pulling arbitrary data from a remote OSTree repository. It is normally used with an `import` element as a way of importing prebuilt binaries into a build pipeline. For example BuildStream’s integration tests currently run on top of the Freedesktop SDK binaries (which were originally intended for use with Flatpak applications but are equally useful as a generic platform runtime). The gnome-build-meta project uses this mechanism to import a prebuilt Debian base image, which is currently manually pushed to an OSTree repo (this is a temporary measure, in future we want to base gnome-build-meta on top of the upcoming Freedesktop SDK 1.8 instead).

It’s also possible to import binaries using the `tar` and `local` source types of course, and you can even use the `git` or `bzr` plugins for this if you really get off on using the wrong tools for the wrong job.

In future we will likely add other source plugins for importing binaries, for example from the Docker Registry and perhaps using casync.

Storing artifacts locally

Once a build has completed, BuildStream needs to store the results somewhere locally. The results go in the exciting-sounding “local artifact cache”, which is usually located inside your home directory at ~/.cache/buildstream/artifacts.

There are actually two implementations of the local artifact cache, one using OSTree and one using .tar files. There are several advantages to the OSTree implementation, a major one being that it deduplicates files that are present in multiple artifacts, which can save huge amounts of disk space if you do many builds of a large component. The biggest disadvantage to using OSTree is that it currently relies on a bunch of features that are specific to the Linux kernel and so it can only run on Linux OSes. BuildStream needs to support other UNIX-like operating systems and we found the simplest route for now to solve this was to implement a second type of local artifact cache which stores each artifact as a separate .tar file. This is less efficient in terms of disk space but much more portable.

So the fact that we use OSTree for caching artifacts locally should be considered an implementation detail of BuildStream. If a better tool for the job is found then we will switch to that. The precise structure of the artifacts should also be considered an internal detail — it’s possible to check artifacts out from the cache by poking around in the ~/.cache/buildstream/artifacts directory but there’s no stability guarantee in how you do this or what you might get out as a result. If you want to see the results of a build, use the `bst checkout` command.

It’s worth noting that we don’t yet support automated cleanups of the local artifact cache; that is issue #135.

Storing artifacts remotely

As a way of saving everyone from building the same things, BuildStream supports downloading prebuilt artifacts from a remote cache.

Currently the recommended way of setting up a remote artifact cache requires that you use OSTree. In theory, any storage mechanism could be used but that is currently not trivial because we also make use of OSTree’s transfer protocols, as described below.

We currently lack a way to do automated artifact expiry on remote caches.

Pushing and pulling artifacts

Of course there needs to be a way to push and pull artifacts between the local cache and the remote cache.

OSTree is designed to support downloading artifacts over HTTP or HTTPS and this is how `bst pull` works. The `bst push` command is more complex because officially OSTree does not support pushing; however, we have a rather intricate push mechanism based on Dan Nicholson’s ostree-push project which tunnels the data over SSH in order to get it onto the remote server.

Users of the tar cache cannot currently interact with remote artifact shares at all, which is an unfortunate issue that we aim to solve this year. The solution may be to switch away from using OSTree’s transfer protocols and instead marshal the data into some other format in order to transfer it. We are particularly keen to make use of the Bazel content-addressable store protocol, although there may be too much of an impedance mismatch there.

Indirect uses of OSTree

It may be that you also end up deploying stuff into an OSTree repository somewhere. BuildStream itself is only interested in building and integrating your project — once that is done you run `bst checkout` and are rewarded with a tree of files on your local machine. What if, let’s say, your project aims to build a Flatpak application?

Flatpak actually uses OSTree as well, so your deployment step may involve committing those files into yet another OSTree repo ready for Flatpak to run them. (This can be a bit long-winded at present, so there will likely be some better integration appearing here at some point.)

So, is anywhere safe from the rise of OSTree or is it going to take over completely? Something you might not know about me is that I grew up outside a town in north Shropshire called Oswestry. Is that a coincidence? I can’t say.

 

Oswestry, from Wikipedia.

Julita Inca: We are back! #LinuXatUNI on the stage

Planet GNOME - Mon, 29/01/2018 - 7:08pm

Yesterday, we had an ‘extreme’ workshop-day at Villa el Salvador to prepare ourselves for the LFCS Certification Exam in 2018. We had received year-long scholarships from the Linux Foundation as prizes for the winners of the programs the LinuXatUNI group organized last year, and we are more than happy to learn Linux administration in depth.

We differentiated the root shell from the user environment by using commands mixed with pipes and redirection. We also learned more about configuring the PATH, wildcards, ulimit, using ipcs to check semaphores, shared memory, and message queues, and the disk usage commands, along with regular expressions.

It was a journey of almost 5 hours round trip to Toto’s house; we enjoyed the sightseeing, though. Thanks to Fiorella Effio and Carlos Aznaran for your effort and kindness!

Simos Xenitellis: Checking the Ubuntu Linux kernel updates on Spectre and Meltdown

Planet Ubuntu - Mon, 29/01/2018 - 5:40pm
Here is the status page for the Ubuntu updates on Spectre and Meltdown. For a background on these vulnerabilities, see the Meltdown and Spectre Attacks website. In this post we are trying out the Spectre & Meltdown Checker on different versions of the stock Ubuntu Linux kernel. Trying the Spectre & Meltdown Checker before any …

Continue reading

Simos Xenitellis: How to make your LXD containers get IP addresses from your LAN using a bridge

Planet Ubuntu - Mon, 29/01/2018 - 5:11pm
Background: LXD is a hypervisor that manages machine containers on Linux distributions. You install LXD on your Linux distribution and then you can launch machine containers into your distribution running all sort of (other) Linux distributions. In the previous post, we saw how to get our LXD container to receive an IP address from the …

Continue reading

Debarshi Ray: GNOME Photos: an overview of thumbnailing

Planet GNOME - Mon, 29/01/2018 - 4:23pm

From time to time, I find myself being asked about various details about how content is thumbnailed in GNOME Photos, and the reasons behind various implementation decisions. I can never remember all the details, and always have to dig through Git history and bug reports across multiple modules to come up with an answer. I am hoping that this brain dump will be more persistent than my memory, and more holistic than random comments here and there.

Feel free to read and comment, or you can also happily ignore it.

Background

Having accurate and quality thumbnails is absolutely crucial for Photos. The main user interface is a grid of thumbnails. By design, it tries hard not to expose the filesystem, which means that the user doesn’t have the path or directory hierarchy to complement the contents of the grid. In comparison, thumbnails can be optional in a file manager. Note how Files has settings to disable thumbnailing, and defaults to not thumbnailing remote content, but users can still go about interacting with their files.

Thumbnailing in GNOME is spread across GIO, GVfs and GnomeDesktopThumbnailFactory, and together they implement the Thumbnail Managing Standard. Usually, one uses GIO to look up thumbnails from the cache and the state they are in, while GnomeDesktopThumbnailFactory is used to create and store the thumbnail files. These thumbnails are stored in the global thumbnail cache in $XDG_CACHE_HOME/thumbnails, and are often, but not necessarily, created by the thumbnailers listed under /usr/share/thumbnailers. This is how most components (e.g., GTK+’s GtkFileChooserWidget) and applications (e.g., Files and Videos) show thumbnails.
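For readers unfamiliar with the standard, its lookup scheme is simple: a thumbnail is a PNG named after the MD5 hash of the file's URI, stored under a size directory inside $XDG_CACHE_HOME/thumbnails. A rough sketch in Python (paths per the standard; the URI is just an example):

```python
import hashlib
import os

def thumbnail_path(uri: str, size: str = "large") -> str:
    """Where the Thumbnail Managing Standard says a cached thumbnail
    lives: $XDG_CACHE_HOME/thumbnails/<size>/<md5(uri)>.png."""
    cache = os.environ.get("XDG_CACHE_HOME",
                           os.path.expanduser("~/.cache"))
    name = hashlib.md5(uri.encode("utf-8")).hexdigest() + ".png"
    return os.path.join(cache, "thumbnails", size, name)

print(thumbnail_path("file:///home/user/photo.jpg"))
```

This is why any application can share the global cache: the key is derived purely from the URI, with metadata such as the source's modification time stored inside the PNG itself.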

Then there are those “odd” ones that have their own custom setup.

Prior to version 3.24, Photos entirely relied on the global cache and the aforementioned GNOME APIs for its thumbnails. That changed in 3.24 when it switched to its own custom thumbnailer and application specific cache.

Requirements

Ever since editing was added in 3.20, we felt the need to ensure that the thumbnail represents the current state of each item. Being a non-destructive editor, Photos never modifies the original file but separately serializes the edits to disk. The image is rendered by loading the original file, deserializing the edits into objects in memory and running the pixels through them [1]. Therefore, to have the thumbnails accurately represent the current state of the item, it would have to do something similar. However, the edits are application-specific [2], so it is not reasonable to expect the generic OS-wide thumbnailers to be able to handle them.

I believe this is a requirement that all non-destructive image editors have [3]. Notable examples are Darktable and Shotwell.

Secondly, it is important to be able to create and lookup thumbnails of a specific size, as opposed to enumerated constants with pre-determined presets.

The standard specifies two sizes – normal, which is 128×128, and large, which is 256×256. I think this was alright in a world without HiDPI, and is also fine if the thumbnails are either too small or are not an existential necessity for the application. For a HiDPI display with a scaling factor of N, we want to make the thumbnail grid as visually appealing as possible by pumping in NxN times more pixels. Since Photos wants the thumbnails to be 256×256 logical pixels, they should be 256Nx256N raw device pixels on HiDPI. To make things complicated, the cache might get used across different scaling factors – either display or disk got switched, multi-monitor with different resolutions, etc..

Upscaling the low-resolution counterpart of a thumbnail by N is still passable, but it looks much worse if the thumbnail is significantly smaller. Although, I must note that this was the easiest hurdle to surmount. It originates from GIO’s desire to fall back to 128×128 thumbnails, even if the application asked for 256×256. This is pretty straightforward to fix, if necessary.

Last but not the least, I find it important to version the cache to tide over bugs in the thumbnailer. If the cache isn’t versioned, then it is difficult to discard thumbnails that might have been generated by a broken thumbnailer. Hopefully, such bugs would be rare enough that it won’t be necessary to invalidate the cache very often, but when they do happen, it is very reassuring to be able to bump the version, and be guaranteed that users won’t be looking at a broken user interface.

Solution

Starting from version 3.24, Photos uses its own out-of-process thumbnailer and cache [4]. The cache is at $XDG_CACHE_HOME/gnome-photos/thumbnails/$SIZE-$GENERATION, where SIZE is the thumbnail size in raw device pixels and GENERATION is the cache’s version. The main application talks to the thumbnailer over peer-to-peer D-Bus and a simple, cancellable private D-Bus API.

The thumbnailer isn’t separately sandboxed, though. It might be an interesting thing to look at for those who don’t use Flatpak, or to restrict it even more than the main application when running inside Flatpak’s sandbox.

Known bugs

Photos’ thumbnailing code can be traced back to its origins in GNOME Documents. They don’t persistently track thumbnailing failures, and will attempt to re-thumbnail an item that had previously failed when any metadata change is detected. In short, they don’t use G_FILE_ATTRIBUTE_THUMBNAILING_FAILED. The current behaviour might help to overcome a temporary glitch in the network, or it can be simply wasteful.

They predate the addition of G_FILE_ATTRIBUTE_THUMBNAIL_IS_VALID and don’t update the thumbnail once an item gets updated. This could have still been done using GnomeDesktopThumbnailFactory, but that’s water under the bridge, and should possibly be fixed. Although, images don’t tend to get updated so often, which is probably why nobody notices it.

Related to the above point, currently the modification time of the original doesn’t get stored in the thumbnail. It slipped through the cracks while I was reading the sources of the various modules involved in creating thumbnails in GNOME. However, a versioned cache makes it possible to fix it.

[1] If you are reading between the lines, then you might be thinking that it is serializing and deserializing GeglOperations, and you’d be right.

[2] GEGL might be a generic image processing library with its set of built-in operations, but for various reasons, an application can end up carrying its own custom operations.

[3] The idea of an application storing its edits separately from the original can strike as unusual, but this is how most modern image editors work.

[4] Both Darktable and Shotwell have similar thumbnailing infrastructure. You can read about them here and here respectively.

Ted Gould: Jekyll and Mastodon

Planet Ubuntu - Mon, 29/01/2018 - 1:00am

A while back I moved my website to Jekyll for all the static-y goodness that it provides. Recently I was looking to add Mastodon to my domain as well. Doing so with Jekyll isn't hard, but searching for it seemed like something no one had written up. For your searchable pleasure I am writing it up.

I used Masto.host to put the Mastodon instance at social.gould.cx. But I wanted my Mastodon address to be @ted@gould.cx. To do that, you need the gould.cx domain to point at social.gould.cx, which requires a .well-known/host-meta file that redirects webfinger to the Mastodon instance:

<?xml version='1.0' encoding='UTF-8'?>
<XRD xmlns='http://docs.oasis-open.org/ns/xri/xrd-1.0'>
    <!-- Needed for Mastodon -->
    <Link rel='lrdd'
          type='application/xrd+xml'
          template='https://social.gould.cx/.well-known/webfinger?resource={uri}' />
</XRD>

The issue is that Jekyll doesn't copy static files that are in hidden directories. This is good if you have a Git repository, so that it doesn't copy the .git directory. We can get around this by using Jekyll's YAML front matter to set the location of the file.

---
layout: null
permalink: /.well-known/host-meta
---
<?xml version='1.0' encoding='UTF-8'?>
<XRD xmlns='http://docs.oasis-open.org/ns/xri/xrd-1.0'>
    <!-- Needed for Mastodon -->
    <Link rel='lrdd'
          type='application/xrd+xml'
          template='https://social.gould.cx/.well-known/webfinger?resource={uri}' />
</XRD>

This file can then be placed anywhere, and Jekyll will put it in the right location on the static site. And you can follow me as @ted@gould.cx even though my Mastodon instance is social.gould.cx.
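For the curious, the redirection works because a WebFinger client first fetches /.well-known/host-meta from gould.cx, finds the lrdd Link, and substitutes the account URI into its template. A sketch of that lookup in Python (string and XML handling only, no network; the XML matches the snippet above minus the front matter):

```python
import xml.etree.ElementTree as ET

HOST_META = """<?xml version='1.0' encoding='UTF-8'?>
<XRD xmlns='http://docs.oasis-open.org/ns/xri/xrd-1.0'>
  <Link rel='lrdd' type='application/xrd+xml'
        template='https://social.gould.cx/.well-known/webfinger?resource={uri}' />
</XRD>"""

def webfinger_url(host_meta_xml: str, account: str) -> str:
    """Find the lrdd template in host-meta and fill in the account URI."""
    ns = {"xrd": "http://docs.oasis-open.org/ns/xri/xrd-1.0"}
    root = ET.fromstring(host_meta_xml)
    for link in root.findall("xrd:Link", ns):
        if link.get("rel") == "lrdd":
            return link.get("template").replace("{uri}", account)
    raise ValueError("no lrdd link found")

print(webfinger_url(HOST_META, "acct:ted@gould.cx"))
# https://social.gould.cx/.well-known/webfinger?resource=acct:ted@gould.cx
```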

Jorge Castro: Updating your CNCF Developer Affiliation

Planet Ubuntu - Mon, 29/01/2018 - 1:00am

The Cloud Native Computing Foundation uses gitdm to figure out who is contributing and from where. This is used to generate reports and so forth.

There is a huge text file where they are mapping email addresses used and affiliation. It probably doesn’t hurt to check your entry, for example, here’s mine:

Jorge O. Castro*: jorge.castro!gmail.com
    Lemon Ice
    Lemon Location City until 2017-05-01
    Lemon Travel Smart Vacation Club until 2015-06-01

Whoa? What? This is what a corrected entry looks like; as you can see, it takes into account where you used to work, for correctness:

Jorge O. Castro*: jorge!heptio.com, jorge!ubuntu.com, jorge.castro!gmail.com
    Heptio
    Canonical until 2017-03-31

As an aside this also really makes a nice rolodex for looking up people. :D
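The file format itself is simple: a developer line with `!`-obfuscated email addresses, followed by indented affiliation lines, each optionally bounded by an `until` date. A rough parser sketch in Python (my own reading of the format from the entries above, not gitdm's actual code):

```python
def parse_entry(lines):
    """Parse one developer entry: 'Name*: email1, email2' followed by
    indented affiliations, each optionally ending in 'until YYYY-MM-DD'."""
    name, _, emails = lines[0].partition(":")
    entry = {
        "name": name.rstrip("*").strip(),
        # the file writes '!' in place of '@' to deter address scrapers
        "emails": [e.strip().replace("!", "@") for e in emails.split(",")],
        "affiliations": [],
    }
    for line in lines[1:]:
        org, _, until = line.strip().partition(" until ")
        entry["affiliations"].append((org, until or None))
    return entry

entry = parse_entry([
    "Jorge O. Castro*: jorge!heptio.com, jorge.castro!gmail.com",
    "    Heptio",
    "    Canonical until 2017-03-31",
])
assert entry["emails"][0] == "jorge@heptio.com"
assert entry["affiliations"] == [("Heptio", None), ("Canonical", "2017-03-31")]
```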

Sean Davis: Catfish 1.4.4 Released

Planet Ubuntu - Mon, 29/01/2018 - 12:35am

I’ve got some great news for fans of Catfish, the fast and powerful graphical search utility for Linux. The latest version, 1.4.4, has arrived with performance improvements and tons of localization updates!

What’s New

This update covers both versions 1.4.3 and 1.4.4.

General
  • Improved theming support
  • Improved error handling with thumbnails
  • Improved search performance by excluding .cache and .gvfs when not explicitly requested
  • Improved locate method performance with the addition of the --basename flag
  • Added keywords to the launcher for improved discoverability and Debian packaging improvements
  • Updated included AppData to latest standards
Bug Fixes
  • All search methods are stopped when the search activity is canceled. This results in a much faster response time when switching search terms.
  • Debian #798074: New upstream release available
  • Debian #794544: po/en_AU.po has Sinhalese (not English) translations for catfish.desktop
Translation Updates

Afrikaans, Brazilian Portuguese, Bulgarian, Catalan, Chinese (Traditional), Croatian, Czech, Danish, Dutch, French, Greek, Italian, Kurdish, Lithuanian, Portuguese, Serbian, Slovak, Spanish, Swedish, Turkish, Ukrainian

Downloads

Debian Unstable and Ubuntu Bionic users can install Catfish 1.4.4 from the repositories.

sudo apt update && sudo apt install catfish

The latest version of Catfish can always be downloaded from the Launchpad archives. Grab version 1.4.4 from the below link.

https://launchpad.net/catfish-search/1.4/1.4.4/+download/catfish-1.4.4.tar.gz

  • SHA-256: a2d452780bf51f80afe7621e040fe77725021c24a0fe4a9744c89ba88dbf87d7
  • SHA-1: b149b454fba75de6e6f9029cee8eec4adfb4be0e
  • MD5: 8fd7e8bb241f2396ebc3d9630b47a635
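If you grab the tarball, it's worth verifying it against the checksums above before building. A small sketch in Python (the filename matches the download link; run it next to the downloaded file):

```python
import hashlib
import os

EXPECTED_SHA256 = "a2d452780bf51f80afe7621e040fe77725021c24a0fe4a9744c89ba88dbf87d7"

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 in 64 KiB chunks to keep memory flat."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if os.path.exists("catfish-1.4.4.tar.gz"):
    digest = sha256_of("catfish-1.4.4.tar.gz")
    print("OK" if digest == EXPECTED_SHA256 else "MISMATCH: " + digest)
```

The same function works for the SHA-1 and MD5 sums by swapping the hashlib constructor, though SHA-256 is the one worth checking.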

Philip Chimento: Geek tip: g_object_new and constructors

Planet GNOME - Sun, 28/01/2018 - 9:09pm

tl;dr Don’t put any code in your foo_label_new() function other than g_object_new(), and watch out with Vala.

From this GJS bug report I realized there’s a trap that GObject library writers can fall into,

Avoid code at your construction site.

that I don’t think is documented anywhere. So I’m writing a blog post about it. I hope readers from Planet GNOME can help figure out where it needs to be documented.

For an object (let’s call it FooLabel) that’s part of the public API of a library (let’s call it libfoo), creating the object via its foo_label_new() constructor function should be equivalent to creating it via g_object_new().

If foo_label_new() takes no arguments then it should literally be only this:

FooLabel *
foo_label_new(void)
{
    return g_object_new(FOO_TYPE_LABEL, NULL);
}

If it does take arguments, then they should correspond to construct properties, and they should get set in the g_object_new() call. (It’s customary to at least put all construct-only properties as arguments to the constructor function.) For example:

FooLabel *
foo_label_new(const char *text)
{
    return g_object_new(FOO_TYPE_LABEL,
                        "text", text,
                        NULL);
}

Do not put any other code in foo_label_new(). That is, don’t do this:

FooLabel *
foo_label_new(void)
{
    FooLabel *retval = g_object_new(FOO_TYPE_LABEL, NULL);
    retval->priv->some_variable = 5;  /* Don't do this! */
    return retval;
}

The reason is that callers of your library will expect to be able to create FooLabels using g_object_new() in many situations. This is done when creating a FooLabel in JS and Python, but also when creating one from a Glade file, and also in plain old C when you need to set construct properties. In all those situations, the private field some_variable will not get initialized to 5!

Instead, put the code in foo_label_init(). That way, it will be executed regardless of how the object is constructed. And if you need to write code in the constructor that depends on construct properties that have been set, use the constructed virtual function. There’s a code example here.

If you want more details about what function is called when, Allison Lortie has a really useful blog post.

This trap can be easy to fall into in Vala. Using a construct block is the right way to do it:

namespace Foo {
    public class Label : GLib.Object {
        private int some_variable;

        construct {
            some_variable = 5;
        }
    }
}

This is the wrong way to do it:

namespace Foo {
    public class Label : GLib.Object {
        private int some_variable;

        public Label() {
            some_variable = 5;  // Don't do this!
        }
    }
}

This is tricky because the wrong way seems like the most obvious way to me!

This has been a public service announcement for the GNOME community, but here’s where you come in! Please help figure out where this should be documented, and whether it’s possible to enforce it through automated tools.

For example, the Writing Bindable APIs page seems like a good place to warn about it, and I’ve already added it there. But this should probably go into Vala documentation in the appropriate place. I have no idea if this is a problem with Rust’s gobject_gen! macro, but if it is then it should be documented as well.

Documented pitfalls are better than undocumented pitfalls, but removing the pitfall altogether is better. Is there a way we can check this automatically?

David Tomaschik: Playing with the Gigastone Media Streamer Plus

Planet Ubuntu - Sun, 28/01/2018 - 9:00am
Background

A few months ago, I was shopping on woot.com and discovered the Gigastone Media Streamer Plus for about $25. I figured this might be something occasionally useful, or at least fun to look at for security vulnerabilities. When it arrived, I didn’t get around to it for quite a while, and then when I finally did, I was terribly disappointed in it as a security research target – it was just too easy.

The Gigastone Media Streamer Plus is designed to provide streaming from an attached USB drive or SD card over a wireless network. It features a built-in battery that can be used to charge a device as well. In concept, it sounds pretty awesome (and there’s many such devices on the market) but it turns out there’s no security to speak of in this particular device.

Exploration

By default the device creates its own wireless network that you can connect to in order to configure and stream, but it can quickly be reconfigured as a client on another wireless network. I chose the latter and joined it to my lab network so I wouldn’t need to be connected to just the device during my research.

NMAP Scan

The first thing I do when something touches the network is perform an NMAP scan. I like to use the version scan as well, though it’s not nearly as accurate on embedded devices as it is on more common client/server setups. NMAP quickly returned some interesting findings:

# Nmap 7.40 scan initiated as: nmap -sV -T4 -p1-1024 -Pn -o gigastone.nmap 192.168.40.114
Nmap scan report for 192.168.40.114
Host is up (0.14s latency).
Not shown: 1020 closed ports
PORT   STATE SERVICE VERSION
21/tcp open  ftp     vsftpd 2.0.8 or later
23/tcp open  telnet  security DVR telnetd (many brands)
53/tcp open  domain  dnsmasq 2.52
80/tcp open  http    Boa httpd
MAC Address: C0:34:B4:80:29:EB (Gigastone)
Service Info: Host: use

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
# Nmap done -- 1 IP address (1 host up) scanned in 22.33 seconds

Hrrm, FTP and Telnet. I’m sure they’re for a good reason.

Web Interface

The web interface is functional, but not attractive. It provides functionality for uploading and downloading files as well as changing settings, such as wireless configuration, WAN/LAN settings, and storage usage.

I noticed that, when loading the Settings page, you would sometimes get the settings visible before authenticating to the admin interface.

Problems with Burp Suite

While playing with this device, I did notice a bug in Burp Suite. The Gigastone Media Streamer Plus does not adhere to the HTTP RFCs, and all of its cgi-bin scripts send only \r at the end of each line, instead of \r\n per the RFC. Browsers are forgiving, so they handled this gracefully. Unfortunately, when passing the traffic through Burp Suite, it transformed the \r\r at the end of the response headers to \n\r\n\r\n. This causes the browser to interpret an extra blank line at the beginning of the response. Still not a problem for the browser parsing things, but slightly more of a problem for the Gigastone Javascript parsing its own custom response format (newline-separated).

I reported the bug to PortSwigger and not only got a prompt confirmation of the bug, but a Python Burp extension to work around the issue until a proper fix lands in Burp Suite. That’s an incredible level of support from the authors of a quality tool.
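The incompatibility is easy to see if you split a header block yourself. A toy parser in Python (illustrative only; Burp and browsers are obviously far more involved):

```python
def split_headers(raw: bytes) -> list[bytes]:
    """Split an HTTP header block on any line ending seen in the wild:
    \r\n (per the RFC), bare \n, or, as the Gigastone device sends,
    bare \r. Normalize everything to \n first, then split."""
    normalized = raw.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
    return [line for line in normalized.split(b"\n") if line]

rfc = b"HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n"
gigastone = b"HTTP/1.0 200 OK\rContent-Type: text/plain\r"
assert split_headers(rfc) == split_headers(gigastone)
```

The order of the two replace calls matters: doing the bare-\r substitution first would split every RFC-compliant \r\n into two line breaks.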

Vulnerabilities

Telnet with Default Credentials

The device exposes telnet to the local network and accepts username ‘root’ and password ‘root’. This gives full control of the device to anyone on the local network.

Information Disclosure: Administrative PIN (and Other Settings)

The administrative PIN can be retrieved by an unauthenticated request to an API. In fact, the default admin interface uses this API to compare the entered PIN entirely on the client side.

% curl 'http://192.168.40.114/cgi-bin/gadmin'
get
1234

Indeed, all of the administrative settings can be retrieved by unauthenticated requests, such as the WiFi settings. (Though, on a shared network, this is of limited value.)

% curl 'http://192.168.40.114/cgi-bin/cgiNK'
AP_SSID=LabNet
AP_SECMODE=WPA2
PSK_KEY=ThisIsNotAGoodPassphrase
AP_PRIMARY_KEY=1
WEPKEY_1=
WEPKEY_2=
WEPKEY_3=
WEPKEY_4=

Authentication Bypass: Everything

None of the administrative APIs actually require any authentication. The admin PIN is never sent with requests, no session cookie is set, and there are no other authentication controls. For example, the admin PIN can be set via a GET request as follows:

% curl 'http://192.168.40.114/cgi-bin/gadmin?set=4444'
set
0
4444

Timeline
  • Discovered in ~May 2017
  • Reported Jan 28 2018
  • Response from Gigastone on Jan 28 2018:

Media Streamer Plus provides convenient functions for portable use. It is not to replace or to be comparable to normal networking devices. However, we do not recommend users to change internal setup to avoid unrecoverable errors.

Jo Shields: Hello PGO

Planet GNOME - Sun, 28/01/2018 - 12:58am

Assuming the Planet configuration change was correct, this should be my first post aggregated on Planet GNOME.

Hello!

I’m Jo.

I used to work on Free Software at Collabora, until I sold out, and now I work on Free Software at Microsoft. Specifically, I divide my time between administering various Xamarin engineering services (primarily the public Jenkins server and its build agents); developing and managing the release of the Mono framework on Windows/Linux and the MonoDevelop IDE on Linux; and occasionally working on internal proprietary projects which definitely don’t include Visual Studio Enterprise for Linux. I’m based in the Microsoft office in Cambridge, Mass, along with the Xamarin Release Engineering team, and most of the Xamarin engineering team.

Whilst it hasn’t had the highest profile in the GNOME community for a while, Mono is still out there, in its current niches – in 2018 that would primarily be on smartphones in a wider context, and for games (either via Unity3D or MonoGame/FNA) on the Linux desktop. But hey, it’s still there for desktop apps on Linux if you want it to be! I still use Smuxi as my IRC client. Totally still a thing. And there’s the MonoDevelop IDE, which nowadays I’m trying to release on Linux via Flatpak.

So, um, hi. You’ll see blog posts from me occasionally about Linux software releasing from an ISV perspective, packaging, etc. It’ll be fun for all concerned.

Exploring minimax polynomials with Sollya

Planet Debian - Sat, 27/01/2018 - 11:18am

Following Fabian Giesen's advice, I took a look at Sollya—I'm not really that much into numerics (and Sollya, like the other stuff that comes out of the same group, is really written by hardcore numerics nerds), but approximation is often useful.

A simple example: When converting linear light values to sRGB, you need to be able to compute the formula f(x) = ((x + ɑ - 1) / ɑ)^ɣ for a given (non-integer) ɑ and ɣ. (Movit frequently needs this. For the specific case of sRGB, GPUs often have hard-coded lookup tables, but they are not always applicable, for instance if the data comes from Y'CbCr.) However, even after simplifications, the exponentiation is rather expensive to run for every pixel, so we'd like some sort of approximation.

If you've done any calculus, you may have heard of Taylor series, which looks at the derivatives in a certain point and creates a polynomial from that. One of the perhaps most famous is arctan(x) = x - 1/3 x³ + 1/5 x⁵ - 1/7 x⁷ + ..., which gives rise to a simple formula for approximating pi if you set x=1 (since arctan(1) = pi/4). However, for practical approximation, Taylor series are fairly useless; they're accurate near the origin point of the expansion, but don't care at all about what happens far from it. Minimax polynomials are better; they minimize the maximum error over the range of interest.
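To make that contrast concrete, here is a small Python sketch (the helper name is mine, not from the post) evaluating partial sums of the arctan series: near the expansion point the truncation error is tiny, but at x = 1 even several terms leave a large error, and the pi formula converges only very slowly.

```python
import math

def arctan_taylor(x, terms):
    # Partial sum of x - x^3/3 + x^5/5 - x^7/7 + ...
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

# Three terms are excellent near the origin...
print(abs(arctan_taylor(0.1, 3) - math.atan(0.1)))
# ...but noticeably poor at the far end of the interval:
print(abs(arctan_taylor(1.0, 3) - math.atan(1.0)))
# Approximating pi via 4*arctan(1) needs many terms for few digits:
print(4 * arctan_taylor(1.0, 1000))
```
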

In the past, I've been using Maple for this (I never liked Mathematica much); it's non-free, but not particularly expensive for a personal license, and it can do pretty much everything I expect from a computer algebra system. However, it would be interesting to see if Sollya could do better. After toying around a bit, it seems there are pros and cons:

  • Sollya appears to be faster. I haven't made any formal benchmarks, but I just feel like I have to wait a lot less for it.
  • I find Sollya's syntax maybe a bit more obscure (e.g., [| to start a list), although this is probably partially personal preference. Its syntax error handling is also a lot less friendly.
  • Sollya appears to be a lot more robust towards actually terminating with a working result. E.g., Maple just fails on optimizing sqrt(x) over 0..1 (a surprisingly hard case), whereas I haven't really been able to make Sollya fail yet except in the case of malformed problems (e.g. asking for optimizing for relative error of an error which is zero at certain points). Granted, I haven't pushed it that hard.
  • Maple supports a much wider range of functions. This is a killer for me; I frequently need something as simple as piecewise functions, and Sollya simply doesn't appear to support them.
  • Maple supports rational expansions, i.e. two polynomials divided by each other (which can often increase performance dramatically—although the execution cost also balloons, of course). Sollya doesn't. On the other hand, Sollya supports expansion over given base functions, e.g. if you happen to have sin(x) computed for whatever obscure reason, you can get an expansion of the type f(x) = a + b·sin(x) + c·x + d·sin(x)² + e·x².
  • Maple supports arbitrary weighing of the error (e.g. if you care more about errors at the endpoints)—I find this super-useful, especially if you are dealing with transformed variables or piecewise approximations. Sollya only supports relative and absolute errors, which is more limiting.
  • Sollya can seemingly be embedded as a library. Useful for some, not really relevant for me.
  • And finally, Sollya doesn't optimize coefficients over arbitrary precision; you tell it what accuracy you have to deal with (number of bits in floating or fixed point) and it optimizes the coefficients with that round-off error in mind. (I don't know if it also deals with intermediate roundoff errors when evaluating the polynomial.) Fabian makes a big deal of this, but for fp32, it doesn't really seem to matter much; I did some tests relative to what I had already gotten out of Maple, and the difference in maximum error was microscopic.
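As a rough illustration of what a near-minimax polynomial buys you over a Taylor expansion, here is a numpy sketch (not Sollya; the standard sRGB constants ɑ = 1.055, ɣ = 2.4 are plugged in as an example). Chebyshev interpolation is not a true minimax fit, but for smooth functions it comes within a small factor of one, and the error is spread evenly over the whole interval instead of being concentrated far from an expansion point.

```python
import numpy as np

# The transfer function from the post, with the standard sRGB constants
# (alpha = 1.055, gamma = 2.4) filled in purely as an example.
def f(x):
    return ((x + 0.055) / 1.055) ** 2.4

# Chebyshev interpolation: a cheap stand-in for a true minimax polynomial.
p = np.polynomial.chebyshev.Chebyshev.interpolate(f, deg=9, domain=[0.0, 1.0])

# Measure the maximum absolute error on a dense grid over [0, 1].
xs = np.linspace(0.0, 1.0, 10001)
max_abs_err = float(np.max(np.abs(p(xs) - f(xs))))
print(max_abs_err)
```
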

So, the verdict? Sollya is certainly good, and I can see myself using it in the future, but for me, it's more of an augmentation than replacing Maple for this use.

Steinar H. Gunderson http://blog.sesse.net/

Umang Jain: No more hunched back

Planet GNOME - Sht, 27/01/2018 - 11:10pd

I have been looking into laptop stands, as I spend considerable hours sitting with my laptop, which has led to a hunched back. But when I opened Amazon, I wasn’t able to justify the cost of such a simple thing.

So, a little preface. I experimented with thick books and whatnot, so that the laptop base could be raised as an inclined plane, but they had some problems and I didn’t generally like it. So what did I do? Asked the DIY internet.

/me goes and watches videos on DIY laptop stands.

Now, the people in these videos have all sorts of fancy machines/tools/workshops to make stuff like this. Obviously, I didn’t have those, so the most I could do was find a carpenter and ask him to do it.

BUT even that sounds like a lot of work and, most importantly, will take time! :-/

I have some friends over at hardware labs with access to things like laser cutters, CNC machines, and 3-D printers; that could work, but again, I wanted something fast and at almost no effort.

All this time, I had a voice in my mind going, “It can’t be this tricky, it has to be simple, it has to be simple!!”

Finally, I got it!! PVC pipe parts. You can get them very easily in India; they are really, really cheap and give a sturdy construction. The total cost incurred was Rs. 120/- (1/6th of the Amazon thing, whatever that was) and I built it within half an hour. It’s very simple, so pictures will do the talking. So, no more hunched back problem.

It does a pretty good job given the cost incurred.

Sebastian Schauenburg: Local OsmAnd and Geo URL's

Planet Ubuntu - Pre, 26/01/2018 - 11:50md

Earlier this year I went on a long holiday to Japan and China. I have an Android phone and am a very big fan of OpenStreetMap, so I used OsmAnd (which uses OpenStreetMap data) to navigate through those countries. I made a spreadsheet with LibreOffice, which included a few links to certain locations which are hard to find or do not have an address. Then I exported that .ods to a .pdf and was able to click on the links, which then opened perfectly in OsmAnd.

The URL I was able to use in my PDF document was this one (of course you can substitute longitude and latitude):

http://osmand.net/go?lat=51.4404&lon=4.3294&z=16

Then I helped a friend of mine with something similar to use on a website. Of course the link above did not work there. After a short look on Wikipedia I found the page about the Geo URI scheme. Constructing a URL with the Geo URI scheme will trigger the default navigation application on a mobile device to open the location. And of course, here too you can substitute the longitude and latitude.

<a href="geo:51.4404,4.3294;u=15">Hoogerheide</a>

This results in a link usable on mobile devices, and of course you can still create a "normal one" like the osmand.net link above for non-mobile devices.
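Both URL shapes are easy to generate programmatically. Here is a small Python sketch (the helper names are mine, not from the post) that reproduces the two links shown above; per RFC 5870, the `u` parameter of a geo URI is the location uncertainty in meters:

```python
def osmand_url(lat, lon, zoom=16):
    # The osmand.net redirector link, handy for PDFs and desktop use.
    return f"http://osmand.net/go?lat={lat}&lon={lon}&z={zoom}"

def geo_uri(lat, lon, uncertainty=None):
    # RFC 5870 geo URI; opens the default navigation app on mobile.
    uri = f"geo:{lat},{lon}"
    if uncertainty is not None:
        uri += f";u={uncertainty}"
    return uri

print(osmand_url(51.4404, 4.3294))
print(geo_uri(51.4404, 4.3294, uncertainty=15))
```
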

Salih Emin: ucaresystem core 4.4.0 : Pkexec, check for reboot and minor fix

Planet Ubuntu - Pre, 26/01/2018 - 10:49md
The new release 4.4.0 of ucaresystem core introduces two internal but important features and a minor bug fix for Debian Jessie. Let’s check them out… Thanks to an idea from Mark Drone on Launchpad, I added to ucaresystem core a feature to recognize and inform the user in case they need to restart the system after… Continue reading "ucaresystem core 4.4.0 : Pkexec, check for reboot and minor fix"

Tobias Bernard: Introducing the CSD Initiative

Planet GNOME - Pre, 26/01/2018 - 4:24md

tl;dr: Let’s get rid of title bars. Join the revolution!

Unless you’re one of a very lucky few, you probably use apps with title bars. In case you’ve never come across that term, title bars are the largely empty bars at the top of some application windows. They contain only the window title and a close button, and are completely separate from the window’s content. This makes them very inflexible: they cannot contain any additional UI elements, or integrate with the application window’s content.

Blender, with its badly integrated and pretty much useless title bar

Luckily, the GNOME ecosystem has been moving away from title bars in favor of header bars. This is a newer, more flexible pattern that allows putting window controls and other UI elements in the same bar. Header bars are client-side decorations (CSD), which means they are drawn by the app rather than the display server. This allows for better integration between application and window chrome.

 

GNOME Builder, an app that makes heavy use of the header bar for UI elements

All GNOME apps (except for Terminal) have moved to header bars over the past few years, and so have many third-party apps. However, there are still a few holdouts. Sadly, these include some of the most important productivity apps people rely on every day (e.g. LibreOffice, Inkscape, and Blender).

There are ways to hide title bars on maximized and tiled windows, but these do not (and will never) work on Wayland (Note: I’m talking about GNOME Shell on Wayland here, not other desktops). All window decorations are client-side on Wayland (even when they look like title bars), so there is no way to hide them at a window manager level.

The CSD Initiative

The only way to solve this problem long-term is to patch applications upstream to not use title bars. So this is what we’ll have to do.

That is why I’m hereby announcing the CSD Initiative, an effort to get as many applications as possible to drop title bars in favor of client-side decorations. This won’t be quick or easy, and will require work on many levels. However, with Wayland already being shipped as the default session by some distros, it’s high time we got started on this.

For a glimpse at what this kind of transition will look like in practice, we can look to Firefox and Chromium. Chromium has recently shipped GNOME-style client-side decorations in v63, and Firefox has them in experimental CSD builds. These are great examples for other apps to follow, as they show that apps don’t have to be 100% native GTK in order to use CSD effectively.

Chromium 63 with CSD

Chromium 63 with window buttons on the left

What is the goal?

This initiative doesn’t aim to make all apps look and act exactly like native GNOME apps. If an app uses GTK, we do of course want it to respect the GNOME HIG. However, it isn’t realistic to assume that apps like Blender or Telegram will ever look like perfectly native GNOME apps. In these cases, we are aiming for functional, not visual consistency. For example, it’s fine if an Electron app has custom close/maximize/minimize icons, as long as they use the same metaphors as the native icons.

Thus, our goal is for as many apps as possible to have the following properties:

  • No title bar
  • Native-looking close/maximize/minimize icons
  • Respects the setting for showing/hiding minimize and maximize
  • Respects the setting for buttons to be on the left/right side of the window
Which apps are affected?

Basically, all applications not using GTK3 (and a few that do use GTK3). That includes GTK2, Qt, and Electron apps. There’s a list of some of the most popular affected apps on this initiative’s Wiki page.

The process will be different for each app, and the changes required will range from “can be done in a weekend” to “holy shit we have to redesign the entire app”. For example, GTK3 apps are relatively easy to port to header bars because they can just use the native GTK component. GTK2 apps first need to be ported to GTK3, which is a major undertaking in itself. Some apps will require major redesigns, because removing the title bar goes hand in hand with moving from old-style menu bars to more modern, contextual menus.

Many Electron apps might be low-hanging fruit, because they already use CSD on macOS. This means it should be possible to make this happen on GNU/Linux as well without major changes to the app. However, some underlying work in Electron to expose the necessary settings to apps might be required first.

Slack, like many Electron apps, uses CSD on macOS

The same Slack app on Ubuntu (with a title bar)

Apps with custom design languages will have to be evaluated on a case-by-case basis. For example, Telegram’s design should be easy to adapt to a header bar layout. Removing the title bar and adding window buttons in the toolbar would come very close to a native GNOME header bar functionally.

Telegram as it looks currently, with a title bar

Telegram mockup with no title bar

How can I help?

The first step will be making a list of all the apps affected by this initiative. You can add apps to the list on this Wiki page.

Then we’ll need to do the following things for each app:

  1. Talk to the maintainers and convince them that this is a good idea
  2. Do the design work of adapting the layout
  3. Figure out what is required at a technical level
  4. Implement the new layout and get it merged

In addition, we need to evaluate what we can do at the toolkit level to make it easier to implement CSD (e.g. in Electron or Qt apps). This will require lots of work from lots of people with different skills, and nothing will happen overnight. However, the sooner we start, the sooner we’ll live in an awesome CSD-only future.

And that’s where you come in! Are you a developer who needs help updating your app to a header bar layout? A designer who would like to help redesign apps? A web developer who’d like to help make CSD work seamlessly in Electron apps? Come to #gnome-design on IRC/Matrix and talk to us. We can do this!

Happy hacking!

 

Update:

There have been some misunderstandings about what I meant regarding server-side decorations on Wayland. As far as I know (and take this with a grain of salt), Wayland uses CSD by default, but it is possible to add SSD support via protocol extensions. KDE has proposed such a protocol, and support for this protocol has been contributed to GTK by the Sway developers. However, GNOME Shell does not support it and its developers have stated that they have no plans to support it at the moment.

This is what I was referring to by saying that “it will never work on Wayland”. I can see how this could be misinterpreted from the point of view of other desktop environments but that was not my intention, it was simply unfortunate wording. I have updated the relevant part of this post to clarify.

Also, some people seem to have taken from this that we plan on lobbying for removing title bar support from third-party apps in a way that affects other desktops. The goal of this initiative is for GNOME users to get a better experience by having fewer applications with badly integrated title bars on their systems. That doesn’t preclude applications from having title bars on different desktops, or having a preference for this (like Chromium does, for example).

Ruben Vermeersch: Go: debugging multiple response.WriteHeader calls

Planet GNOME - Pre, 26/01/2018 - 4:11md

Say you’re building an HTTP service in Go and suddenly it starts giving you these:

http: multiple response.WriteHeader calls

Horrible when that happens, right?

It’s not always very easy to figure out why you get them and where they come from. Here’s a hack to help you trace them back to their origin:

type debugLogger struct{}

func (d debugLogger) Write(p []byte) (n int, err error) {
	s := string(p)
	if strings.Contains(s, "multiple response.WriteHeader") {
		debug.PrintStack()
	}
	return os.Stderr.Write(p)
}

// Now use the logger with your http.Server:
logger := log.New(debugLogger{}, "", 0)
server := &http.Server{
	Addr:     ":3001",
	Handler:  s,
	ErrorLog: logger,
}
log.Fatal(server.ListenAndServe())

This will output a nice stack trace whenever it happens. Happy hacking!


Comments | More on rocketeer.be | @rubenv on Twitter

Detecting binary files in the history of a git repository

Planet Debian - Pre, 26/01/2018 - 3:57md
Git, VCSes and binary files

Git is famous and has become popular even in enterprise/commercial environments. But Git is also infamous regarding storage of large and/or binary files that change often, in spite of the fact that they can be stored efficiently. For large files there have been several attempts to fix the issue, with varying degrees of success, the most successful being git-lfs and git-annex.

My personal view is that, contrary to many practices, it is a bad idea to store binaries in any VCS. Still, this practice has been and still is in use in many projects, especially closed source ones. I won't go into the reasons and how legitimate they are; let's just say that we might finally convince people that binaries should be removed from the VCS, git in particular.

Since the purpose of a VCS is to make sure all versions of the stored objects are never lost, Linus designed git in such a way that knowing the exact hash of the tip/head of your git branch, it is guaranteed the whole history of that branch hasn't changed even if the repository was stored in a non-trusted location (I will ignore hash collisions, for practical reasons).

The consequence of this is that if the history is changed one bit, all commit hashes and history after that change will change also. This is what people refer to when they say they rewrite the (git) history, most often, in the context of a rebase.

But did you know that you could use git rebase to traverse the history of a branch and do all sorts of operations such as detecting all binary files that were ever stored in the branch?
Detecting any binary files, only in the current commit

As with everything on *nix, we start with some building blocks and construct our solution on top of them. Let's first find all files, except the ones in .git:

find . -type f -print | grep -v '^\.\/\.git\/'

Then we can use the 'file' utility to look for non-text files:

(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file) | egrep -v '(ASCII|Unicode) text'

And if there are any such files, it means the current git commit is one that needs our attention; otherwise, we're fine:

(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file) | egrep -v '(ASCII|Unicode) text' && (echo 'ERROR:' && git show --oneline -s) || echo OK

Of course, we assume here that the work tree is clean.
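To see the detection pipeline in action without touching a real repository, here is a throwaway sketch: the scratch directory, file names, and the fake binary are all made up, and since there is no actual repo, the 'git show' step is replaced by a plain echo.

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/.git"                         # simulate a repo layout so the grep filter has something to skip
echo "hello" > "$tmp/notes.txt"              # a text file: filtered out by the egrep
printf '\000\001\002\003' > "$tmp/blob.bin"  # a tiny fake binary: 'file' reports it as data
cd "$tmp"
(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file) \
  | egrep -v '(ASCII|Unicode) text' && echo 'ERROR: binary present' || echo OK
```
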
Checking all commits in a branch

Since we want this to be an efficient process, we only care whether the history contains binaries, and branches are cheap in git, we can use a temporary branch that can be thrown away after our processing is finalized. Making a new branch for some experiments is also a good idea to avoid losing the history, in case we make some stupid mistakes during our experiment.

Hence, we first create a new branch which points to the exact same tip as the branch to be checked, and move to it:

git checkout -b test_bins

Git has many commands that facilitate automation, and in my case I want to basically run the chain of commands on all commits. For this we can put our chain of commands in a script:

cat > ../check_file_text.sh
#!/bin/sh

(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file) | egrep -v '(ASCII|Unicode) text' && (echo 'ERROR:' && git show --oneline -s) || echo OK

then (ab)use 'git rebase' to execute it for us on all commits:

git rebase --exec="sh ../check_file_text.sh" -i $startcommit

After we execute this, the editor window will pop up; just save and exit. Assuming $startcommit is the hash of the first commit we know to be clean, or beyond which we don't care to search for binaries, this will look at all commits since then.

Here is an example output when checking the newest 5 commits:

$ git rebase --exec="sh ../check_file_text.sh" -i HEAD~5
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Successfully rebased and updated refs/heads/test_bins.

Please note this process can change the history on the test_bins branch, but that is why we used a throw-away branch anyway, right? After we're done, we can go back to another branch and delete the test branch.

$ git co master
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'

$ git branch -D test_bins
Deleted branch test_bins (was 6358b91).

Enjoy!

eddyp noreply@blogger.com Rambling around foo

Christian Schaller: An update on Pipewire – the multimedia revolution

Planet GNOME - Pre, 26/01/2018 - 3:35md

We launched PipeWire last September with this blog entry. I thought it would be interesting for people to hear about the latest progress on what I believe is going to be a gigantic step forward for the Linux desktop. So I caught up with Pipewire creator Wim Taymans during DevConf 2018 in Brno where Wim is doing a talk about Pipewire and we discussed the current state of the code and Wim demonstrated a few of the things that PipeWire now can do.

Christian Schaller and Wim Taymans testing PipeWire with Cheese

Priority number 1: video handling

So as we said when we launched, the top priority for PipeWire is to address our needs on the video side of multimedia. This is critical due to the more secure nature of Wayland, which makes the old methods for screen sharing not work anymore, and the emergence of desktop containers in the form of Flatpak. Thus we need PipeWire to help us provide application and desktop developers with a new method for doing screen sharing, and also to provide a secure way for applications inside a container to access audio and video devices on the system.

There are 3 major challenges PipeWire wants to solve for video. One is device sharing, meaning that multiple applications can share the same video hardware device; second, it wants to be able to do so in a secure manner, ensuring your video streams are not hijacked by a rogue process; and finally, it wants to provide an efficient method for sharing multimedia between applications, like for instance fullscreen capture from your compositor (like GNOME Shell) to your video conferencing application running in your browser, like Google Hangouts, Blue Jeans or Pexip.

So the first thing Wim showed me in action was the device sharing. We launched the GNOME photobooth application Cheese, which gets PipeWire support for free thanks to the PipeWire GStreamer plugin. And this is an important thing to remember: thanks to so many Linux applications using GStreamer these days, we don’t need to port each one of them to PipeWire; instead, the PipeWire GStreamer plugin does the ‘porting’ for us. We then launched a gst-launch command line pipeline in a terminal. The result is two applications sharing the same webcam input without one of them blocking access for the other.

As you can see from the screenshot above, it worked fine. This was actually done on my Fedora Workstation 27 system, and the only thing we had to do was start the ‘pipewire’ process in a terminal before starting Cheese and the gst-launch pipeline. GStreamer autoplugging took care of the rest. So feel free to try this out yourself if you are interested, but be aware that you will find bugs quickly if you try things like on-the-fly resolution changes or switching video devices. This is still tech-preview-level software in Fedora 27.

The plan is for Wim Taymans to sit down with the web browser maintainers at Red Hat early next week and see if we can make progress on supporting PipeWire in Firefox and Chrome, so that conferencing software like the ones mentioned above can start working fully under Wayland.

Since security was one of the drivers for the move to Wayland from X Windows, we of course also put a lot of emphasis on not recreating the security holes of X in the compositor. So the way PipeWire now works is that if an application wants to do full screen capture, it will check with the compositor through a D-Bus API (or a portal, in Flatpak and Wayland terminology), and only the permitted application is allowed to do the screen capture, so the stream can’t be hijacked by a random rogue application or process on your computer. This also works from within a sandboxed setting like Flatpaks.

Jack Support

Another important goal of PipeWire was to bring all Linux audio and video together, which means PipeWire needed to be as good as, or a better, replacement for Jack for the pro-audio usecase. This is a tough usecase to satisfy, so while the video part has been the top development priority, Wim has also worked on verifying that the design allows for the low latency and control needed for pro-audio. To do this, Wim has implemented the Jack protocol on top of PipeWire.

Carla, a Jack application running on top of PipeWire.


Through that work he has now verified that he is able to achieve the low latency needed for pro-audio with PipeWire and that he will be able to run Jack applications without changes on top of PipeWire. So above you see a screenshot of Carla, a Jack-based application running on top of PipeWire with no Jack server running on the system.

ALSA/Legacy applications

Another item Wim has written the first code for, and verified will work well, is the ALSA emulation. The goal of this piece of code is to allow applications using the ALSA userspace API to output to PipeWire without needing special porting or application developer effort. At Red Hat we have many customers with older bespoke applications using this API, so it has been of special interest for us to ensure this works just as well as the native ALSA output. It is also worth noting that PipeWire also does mixing, so sound being routed through ALSA will get seamlessly mixed with audio coming through the Jack layer.

Bluetooth support

The last item Wim has spent some time on since last September is making sure Bluetooth output works, and he demonstrated this to me while we were talking during DevConf. The PipeWire Bluetooth module plugs directly into the BlueZ Bluetooth framework, meaning that things like the GNOME Bluetooth control panel just work with it without any porting needed. And while the code is still quite young, Wim demonstrated pairing and playing music over Bluetooth using it.

What about PulseAudio?

So as you probably noticed, one thing we didn’t mention above is how to deal with PulseAudio applications. Handling this usecase is still on the todo list, and the plan is, at least initially, to just keep PulseAudio running on the system, outputting its sound through PipeWire. That said, we are a bit unsure how many applications would actually be using this path, because as mentioned above all GStreamer applications, for instance, would be PipeWire-native automatically through the PipeWire GStreamer plugins. And for legacy applications the PipeWire ALSA layer would replace the current PulseAudio ALSA layer as the default ALSA output, meaning that the only applications left are those outputting to PulseAudio directly themselves. The plan would also be to keep the PulseAudio ALSA device around, so if people want to use things like the PulseAudio networked audio functionality they can choose the PA ALSA device manually to keep doing so.
Over time the goal would of course be to not have to keep the PulseAudio daemon around, but dropping it completely is likely to be a multiyear process with current plans, so it is kinda like XWayland on top of Wayland.

Summary

So you might read this and think: hey, if all this works, we are almost done, right? Well, unfortunately no. The components mentioned here are good enough for us to verify the design and features, but they still need a lot of maturing and testing before they will be in a state where we can consider switching Fedora Workstation over to using them by default. So there are many warts that need to be cleaned up still, but a lot of things have become a lot more tangible now than when we last spoke about PipeWire in September. The video handling we hope to enable in Fedora Workstation 28 as mentioned, while the other pieces we will work towards enabling in later releases as the components mature.
Of course, the more people interested in joining the PipeWire community to help us out, the quicker we can mature these different pieces. So if you are interested, please join us in #pipewire on irc.freenode.net, or just clone the code from GitHub and start hacking. You can find the details for IRC and git here.

Faqet

Subscribe to AlbLinux agreguesi