
Feed aggregator

Uber Hit With $650 Million Employment Tax Bill In New Jersey

Slashdot - Fri, 15/11/2019 - 1:45 AM
New Jersey's labor department says Uber owes the state about $650 million in unemployment and disability insurance taxes because the rideshare company has been misclassifying drivers as independent contractors. Bloomberg Law News reports: Uber and subsidiary Rasier LLC were assessed $523 million in past-due taxes over the last four years, the state Department of Labor and Workforce Development said in a pair of letters to the companies. The rideshare businesses also are on the hook for as much as $119 million in interest and penalties on the unpaid amounts, according to other internal department documents. The New Jersey labor department has been after Uber for unpaid employment taxes for at least four years, according to the documents, which Bloomberg Law obtained through an open public records request. The state's determination is limited to unemployment and disability insurance, but it could also mean that Uber is required to pay drivers minimum wages and overtime under state law. Uber's costs per driver, and those of Lyft, could jump by more than 20% if they are forced to reclassify workers as employees, according to Bloomberg Intelligence. [...] New Jersey informed Uber in 2015 that it had obtained a court judgment ordering the company to pay about $54 million in overdue unemployment and temporary disability insurance contributions. It is not clear whether the company ever paid any of that bill. "We are challenging this preliminary but incorrect determination, because drivers are independent contractors in New Jersey and elsewhere," Uber spokeswoman Alix Anfang told Bloomberg Law.

Read more of this story at Slashdot.

Wikipedia's Co-Founder Takes On Facebook With Ad-Free Social Network

Slashdot - Fri, 15/11/2019 - 1:02 AM
Wikipedia co-founder Jimmy Wales has launched a social network called WT:Social. It has no financial association with Wikipedia and operates on donations, not advertising. The Next Web reports: WT:Social went live last month and is currently nearing 50,000 users. The company is rolling out access slowly; when I signed up, I was approximately number 28,000 on the waitlist. Alternatively, you can pay 13 bucks a month or 100 a year to get access right away. In comments to the Financial Times, Wales said "The business model of social media companies, of pure advertising, is problematic. It turns out the huge winner is low-quality content." You don't say. WT:Social's interface is rather sparse at the moment, featuring a simple feed made up of news stories and comments below them. News is a big part of the network; it's a spinoff of Wales' previous project, WikiTribune, which sought to be a global news site made up of professional journalists and citizen contributors. Both WikiTribune and WT:Social emphasize combating fake news, highlighting evidence-based coverage over the focus on "engagement" seen on other networks. Each story posted to the network prominently shows where the article comes from, as well as sources and references. You can also join various "SubWikis" that are essentially like Facebook groups or subreddits, which filter content to stories of a given topic. You can also add hashtags to a post or follow hashtags for more specific interests that might span more than one SubWiki. Posts are currently sorted chronologically, but the site plans to add an upvote system for users to promote quality stories.


SpaceX Successfully Tests Crewed Dragon Launch Abort Engines

Slashdot - Fri, 15/11/2019 - 12:20 AM
An anonymous reader quotes a report from ExtremeTech: SpaceX has cleared a major hurdle on the way to launching manned missions with its Dragon spacecraft. The company had to push back its launch plans after the stunning explosion of a Crew Dragon capsule during testing earlier this year. Now, SpaceX has successfully tested the engines without incident, paving the way for a test flight next year. The SpaceX Dragon is one of two commercial spacecraft NASA hopes to use to launch manned missions to the International Space Station, the other being Boeing's CST-100 Starliner. SpaceX was on track to beat Boeing to launch before its April testing failure, but picking through the pieces of the demolished capsule pushed back the timetable. After an investigation, SpaceX confirmed the craft's SuperDraco engines themselves were not at fault. These innovative launch abort engines use hydrazine and nitrogen tetroxide propellants, which mix together and ignite, but most launch abort systems use solid propellants. SpaceX went this way because it intends to do propulsive landings with the Dragon in the future, but NASA hasn't authorized that for crewed flights. Unfortunately, a leaky fuel valve in the abort propulsion system allowed nitrogen tetroxide to leak into the helium pressurization system. It was then driven back into the titanium check valve, which caused the explosion. The new and improved Dragon has a burst disk in the fuel lines that keeps propellant from leaking into the high-pressure lines before ignition. This week's test-firing demonstrates that the new system functions as intended, and SpaceX says it can now move forward with launch plans. The next step is to test the SuperDraco engines in-flight later this year. Then, once SpaceX can prove that its spacecraft can handle an in-flight abort, it'll prepare for the first crewed flight in early 2020.


Microsoft Adds Over 50 Games To xCloud Preview, Plans Launch For 2020

Slashdot - Thu, 14/11/2019 - 11:40 PM
Microsoft has added more than 50 new games to the preview of its Project xCloud game streaming service, including Devil May Cry 5, Tekken 7 and Madden 2020. Engadget reports: In a blog post today, Microsoft said it'll send out a new wave of xCloud preview invites to gamers in the US, UK and South Korea. Starting next year, it also plans to expand the preview to Canada, India, Japan and Western Europe. If you live in one of those countries, you can sign up for the preview here and hope you get selected. For now, the xCloud preview is only available for Android phones and tablets, but Microsoft says next year it'll also be headed to Windows PCs and other devices. I'm sure Roku owners would be pleased, but it'd be even more intriguing if Microsoft could eventually bring the xCloud preview to smart TVs and Apple devices. While testers need to use Xbox controllers with the service now, Microsoft also says it'll work with other Bluetooth controllers next year, including Sony's DualShock 4 and Razer's entries. Yes, you'll soon live in a world where you can play Halo with a PlayStation-branded gamepad. Among other tidbits, the xCloud preview will also let gamers stream titles they already own next year, as well as those made available through Xbox Game Pass for subscribers.


AMD Launches 16-Core Ryzen 9 3950X At $750, Beating Intel's $2K 18-Core Chip

Slashdot - Thu, 14/11/2019 - 11:00 PM
MojoKid writes: AMD officially launched its latest many-core Zen 2-based processor today, a 16-core/32-thread beast known as the Ryzen 9 3950X. The Ryzen 9 3950X goes head-to-head against Intel's HEDT flagship line-up, such as the 18-core Core i9-9980XE, but at a much more reasonable price point of $750 (versus over $2K for the Intel chip). The Ryzen 9 3950X has base and boost clocks of 3.5GHz and 4.7GHz, respectively. The CPU cores at the heart of the Ryzen 9 3950X are grouped into two 7nm 8-core chiplets, each with dual four-core core complexes (CCXes). Those chiplets link to an IO die that houses the memory controller, PCI Express lanes, and other off-chip IO. The new 16-core Zen 2 chips also use the same AM4 socket and are compatible with the same motherboards, memory, and coolers currently on the market for lower core-count AMD Ryzen CPUs. Throughout all of Hot Hardware's benchmark testing, the 16-core Ryzen 9 3950X consistently finished at or very near the top of the charts in every heavily-threaded workload, and handily took Intel's 18-core chip to task, beating it more often than not.


FCC Sued By Dozens of Cities After Voting To Kill Local Fees and Rules

Slashdot - Thu, 14/11/2019 - 10:20 PM
An anonymous reader quotes a report from Ars Technica: The Federal Communications Commission faces a legal battle against dozens of cities from across the United States, which sued the FCC to stop an order that preempts local fees and regulation of cable-broadband networks. The cities filed lawsuits in response to the FCC's August 1 vote that limits the fees municipalities can charge cable companies and prohibits cities and towns from regulating broadband services offered over cable networks. "At least 46 cities are asking federal appeals courts to undo an FCC order they argue will force them to raise taxes or cut spending on local media services, including channels that schools, governments, and the general public can use for programming," Bloomberg Law wrote Tuesday. Various lawsuits were filed against the FCC between August and the end of October, and Bloomberg's report said that most of the suits are being consolidated into a single case in the US Court of Appeals for the 9th Circuit. An FCC motion to transfer the case to the 6th Circuit, which has decided previous cases on the same topic, is pending. The 9th Circuit case was initially filed by Eugene, Oregon, which said the FCC order was arbitrary and capricious and that it violated the Administrative Procedure Act, the Constitution, and the Communications Act. The cities' arguments and the FCC's defense will be fleshed out more in future briefs. Big cities such as Los Angeles, Chicago, Philadelphia, San Antonio, San Francisco, Denver, and Boston are among those suing the FCC. Also suing are other municipalities from Maine, Pennsylvania, Delaware, Virginia, Maryland, Georgia, Indiana, Iowa, Minnesota, South Dakota, Nebraska, Oklahoma, Texas, Arizona, California, Oregon, and Washington, according to a Bloomberg graphic. The state of Hawaii is also suing the FCC, and New York City is supporting the lawsuit against the FCC as an intervening party.


Over Half of Fortune 500 Exposed To Remote Access Hacking

Slashdot - Thu, 14/11/2019 - 7:53 PM
Over a two-week period, more than half of the Fortune 500 left a remote access protocol on their computer networks dangerously exposed to the internet, something many experts warn should never happen, according to new research by the security firm Expanse and 451 Research. From a report: According to Coveware, more than 60% of ransomware is installed via a Windows remote access feature called Remote Desktop Protocol (RDP). It's a protocol that's fine in secure environments but, once exposed to the open internet, can at best allow attackers to disrupt access and at worst be vulnerable to hacking itself. RDP is a way of offering virtual access to a single computer. It allows, for example, an IT staffer in one office to provide tech support for a baffled user in a different office. But RDP is best used over a secured network rather than over the open internet. "We compare exposed RDP to leaving a computer attached to your network out on your lawn," Matt Kraning, co-founder and CTO of Expanse, told Axios.
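As an illustrative aside (not from the report): the first step of an internet-wide exposure scan is simply checking whether a host answers on RDP's default TCP port 3389. A minimal sketch in Python; the function name and defaults here are my own, and an open port alone does not prove the service is exploitable.

```python
import socket

def rdp_port_open(host, port=3389, timeout=1.0):
    """Return True if a TCP connection to the (default RDP) port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False
```

A real exposure assessment like Expanse's involves far more than a TCP handshake, but a probe of this shape is how a scanner finds candidate hosts in the first place.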


Google's Rollout of RCS Chat for all Android Users in the US Begins Today

Slashdot - Thu, 14/11/2019 - 7:15 PM
Google is announcing that today, a year and a half after it first unveiled RCS chat as Android's primary texting platform, it is actually making RCS chat Android's primary texting platform. That's because it is rolling out availability to any Android user in the US who wants to use it, starting today. From a report: RCS stands for "rich communication services," and it's the successor to SMS. Like other texting services, it supports read receipts, typing indicators, improved group chats, and high-quality images. Unlike several texting apps, like iMessage or Signal, it does not offer end-to-end encryption as an option. RCS is based on your phone number, so when you are texting with somebody who also has it, it should just turn on automatically in your chat. To get RCS, you simply need to use Android Messages as your default texting app on your Android phone. Many Android phones do that already by default, but Samsung users will need to head to the Google Play Store to download it and then switch to it as their default. Further reading: The Four Major Carriers Finally Agree To Replace SMS With a New RCS Standard.


The USPTO Wants To Know if Artificial Intelligence Can Own the Content it Creates

Slashdot - Thu, 14/11/2019 - 6:30 PM
The US office responsible for patents and trademarks is trying to figure out how AI might call for changes to copyright law, and it's asking the public for opinions on the topic. From a report: The United States Patent and Trademark Office (USPTO) published a notice in the Federal Register last month saying it's seeking comments, as spotted by TorrentFreak. The office is gathering information about the impact of artificial intelligence on copyright, trademark, and other intellectual property rights. It outlines thirteen specific questions, ranging from what happens if an AI creates a copyright-infringing work to whether it's legal to feed an AI copyrighted material. It starts off by asking if output made by AI without any creative involvement from a human should qualify as a work of authorship that's protectable by US copyright law. If not, then what degree of human involvement "would or should be sufficient so that the work qualifies for copyright protection?" Other questions ask whether the company that trains an AI should own the resulting work, and whether it's okay to use copyrighted material to train an AI in the first place. "Should authors be recognized for this type of use of their works?" asks the office. "If so, how?"


Windows and Linux Get Options To Disable Intel TSX To Prevent Zombieload v2 Attacks

Slashdot - Thu, 14/11/2019 - 5:54 PM
Both Microsoft and the Linux kernel teams have added ways to disable support for Intel Transactional Synchronization Extensions (TSX). From a report: TSX is the Intel technology that opens the company's CPUs to attacks via the Zombieload v2 vulnerability. Zombieload v2 is the codename of a vulnerability that allows malware or a malicious threat actor to extract information processed inside a CPU, information they normally shouldn't be able to access because of the security walls present inside modern-day CPUs. This new vulnerability was disclosed earlier this week. Intel said it would release microcode (CPU firmware) updates -- available on the company's Support & Downloads center. But the reality of a real-world production environment is that performance matters. Past microcode updates for other attacks, such as Meltdown, Spectre, Foreshadow, Fallout, and Zombieload v1, have been known to introduce performance hits of up to 40%. Since all the CPU attacks listed above are largely theoretical and hard to pull off in practice, some companies don't see taking this performance hit as an option.
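On the Linux side, kernels carrying the fix report the machine's state through sysfs. A small defensive sketch in Python (the path below is the sysfs file introduced alongside the TAA patches; on older kernels or non-x86 hardware it simply won't exist, so the function must tolerate its absence):

```python
from pathlib import Path

def tsx_async_abort_status(
    sysfs="/sys/devices/system/cpu/vulnerabilities/tsx_async_abort",
):
    """Return the kernel's reported TAA mitigation state, or None if absent."""
    try:
        return Path(sysfs).read_text().strip()
    except OSError:
        return None  # kernel without the patches, or non-x86 hardware
```

On a patched kernel this returns strings such as "Mitigation: TSX disabled", reflecting whatever was chosen via the kernel's TSX-related boot options.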


Instagram Tests Hiding Like Counts Globally

Slashdot - Thu, 14/11/2019 - 5:10 PM
Instagram is making Like counts private for some users everywhere. From a report: Instagram tells TechCrunch the hidden Likes test is expanding to a subset of users globally. Users will have to decide for themselves if something is worth Liking rather than judging by the herd. The change could make users more comfortable sharing what's important to them without the fear of people seeing them receive an embarrassingly small number of likes. Instagram began hiding Likes in April in Canada and then brought the test to Ireland, Italy, Japan, Brazil, Australia and New Zealand in July. Facebook started a similar experiment in Australia in September. Instagram said last week the test would expand to the US, but now it's running everywhere to a small percentage of users in each country.


PayPal Pulls Out of Pornhub, Hurting 'Hundreds of Thousands' of Performers

Slashdot - Thu, 14/11/2019 - 4:25 PM
Pornhub announced late Wednesday that PayPal is no longer supporting payments for Pornhub -- a decision that will impact thousands of performers using the site as a source of income. From a report: Most visitors to Pornhub likely think of it as a website that simply provides access to an endless supply of free porn, but Pornhub also allows performers to upload, sell, and otherwise monetize videos they make themselves. Performers who used PayPal to get paid for this work now have to switch to a different payment method. "We are all devastated by PayPal's decision to stop payouts to over a hundred thousand performers who rely on them for their livelihoods," the company said on its blog. It then directed models to set up a new payment method, with instructions on how PayPal users can transfer pending payments. "We sincerely apologize if this causes any delays and we will have staff working around the clock to make sure all payouts are processed as fast as possible on the new payment methods," the statement said.


China Completes Crucial Landing Test For First Mars Mission in 2020

Slashdot - Thu, 14/11/2019 - 3:44 PM
China on Thursday successfully completed a crucial landing test in northern Hebei province ahead of a historic unmanned exploration mission to Mars next year. From a report: China is on track to launch its Mars mission, Zhang Kejian, head of the China National Space Administration, said on Thursday, speaking to foreign diplomats and the media before the test. The Mars lander underwent a hovering-and-obstacle avoidance test at a sprawling site in Huailai, northwest of Beijing. The site was littered with small mounds of rocks to simulate the uneven terrain on Mars which the lander would have to navigate on its descent to the planet's surface. "In 2016, China officially began the Mars exploration mission work, and currently all of the different development work is progressing smoothly," Zhang said.


Apple Is Considering Bundling Digital Subscriptions as Soon as 2020

Slashdot - Thu, 14/11/2019 - 3:00 PM
Apple is considering bundling its paid internet services, including News+, Apple TV+ and Apple Music, as soon as 2020, in a bid to gain more subscribers, Bloomberg reported on Thursday, citing people familiar with the matter. From a report: The latest sign of this strategy is a provision that Apple included in deals with publishers that lets the iPhone maker bundle the News+ subscription service with other paid digital offerings, the people said. They asked not to be identified discussing private deals. Apple News+, which debuted in March, sells access to dozens of publications for $10 a month. It's often called the "Netflix of News." Apple keeps about half of the monthly subscription price, while magazines and newspapers pocket the other half. If Apple sold Apple News+ as part of a bundle with Apple TV+ and Apple Music, publishers would get less money because the cost of the news service would likely be reduced, the people said. As the smartphone market stagnates, Apple is seeking growth by selling online subscriptions to news, music, video and other content. Bundling these offerings could attract more subscribers, as Amazon's Prime service has done.


Public Cloud Providers' Network Performance Wildly Varies

Slashdot - Thu, 14/11/2019 - 2:00 PM
ThousandEyes, a cloud analysis company, in its second annual Cloud Performance Benchmark, has succeeded in measuring a major performance factor objectively: public cloud providers' global network performance. ZDNet reports: In this study, ThousandEyes looked at the five major public cloud providers: Alibaba Cloud, Amazon Web Services (AWS), Google Cloud Platform (GCP), IBM Cloud, and Microsoft Azure. It did so by analyzing over 320 million data points from 98 global metro locations over 30 days. This included measuring network performance from within the U.S. across multiple ISPs, taking global network measurements, and checking out speeds between availability zones (AZs) and connectivity patterns between the cloud providers. Besides measuring raw speed, the company also looked at latency, jitter, and data loss. First, ThousandEyes found some cloud providers rely heavily on the public internet to transport traffic instead of their backbones. This, needless to say, impacts performance predictability. During the evening Netflix internet traffic jam, if your cloud provider relies on the internet, you will see slowdowns in the evening. So, while Google Cloud and Azure rely heavily on their private backbone networks to transport their customer traffic, AWS and Alibaba Cloud rely heavily on the public internet for the majority of transport, while IBM takes a hybrid approach that varies regionally. What about AWS Global Accelerator? If you pay for this service, which puts your traffic on the AWS private backbone network, will you always see better performance? Surprisingly, the answer's no. AWS doesn't always out-perform the internet. ThousandEyes found several cases where the internet performs faster and more reliably than Global Accelerator -- or the results were negligible. For example, ThousandEyes discovered that from your headquarters in Seoul, you'd see a major latency improvement when accessing AWS US-East-1. That's great.
But your office in San Francisco wouldn't see any improvement, while your group in Bangalore, India, would see a performance decrease. Generally speaking, Latin America and Asia have the highest performance variations across all clouds, whereas in North America cloud performance is generally comparable. You need to look at ThousandEyes' detailed findings to pick out the best cloud provider on a per-region basis to ensure optimal performance. Regional performance differences can make a huge impact. Additionally, the ISP you use and whether or not you're moving traffic in or out of China also affect cloud performance. For more on the report, see ThousandEyes' website.
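To make the measured metrics concrete: latency is typically the mean round-trip time of a set of probes, jitter its variation, and loss the fraction of probes that never return. A toy summary over hypothetical RTT samples (illustrative only, not ThousandEyes data or methodology) could look like:

```python
from statistics import mean, pstdev

def summarize_rtts(rtts_ms):
    """Summarize probe RTTs: mean latency, jitter (std deviation), and loss."""
    delivered = [r for r in rtts_ms if r is not None]  # None marks a lost probe
    loss_pct = 100.0 * (len(rtts_ms) - len(delivered)) / len(rtts_ms)
    return {
        "latency_ms": mean(delivered),
        "jitter_ms": pstdev(delivered),  # variation between consecutive probes
        "loss_pct": loss_pct,
    }

stats = summarize_rtts([42.0, 44.0, None, 43.0])  # latency 43.0 ms, 25% loss
```

Real benchmarks report these per path (source metro, ISP, destination region), which is exactly why a provider can look fast from Seoul and slow from Bangalore at the same time.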


Christian Kellner: fwupd and bolt power struggles

Planet GNOME - Thu, 14/11/2019 - 1:52 PM

As readers of this blog might remember, there is a mode where the firmware (BIOS) is responsible for powering the Thunderbolt controller. This means that if no device is connected to the USB type C port the controller will be physically powered down. The obvious upside is battery savings. The downside is that, for a system in that state, we cannot tell if it has a Thunderbolt controller, nor determine any of its properties, like firmware version. Luckily, there is an interface to tell the firmware (BIOS) to "force-power" the controller. The interface is a write only sysfs attribute. The writes are not reference counted, i.e. two separate commands to enable the force-power state followed by a single disable, will indeed disable the controller. For some time boltd and the firmware update daemon both directly poked that interface. This led to some interference, leading in turn to strange timing bugs. The canonical example goes like this: fwupd force-powers the controller, uevents will be triggered and Thunderbolt entries appear in sysfs. The boltd daemon will be started via udev+systemd activation. The daemon initializes itself and starts enumerating and probing the Thunderbolt controller. Meanwhile fwupd is done with its thing and cuts the power to the controller. That makes boltd and the controller sad because they were still in the middle of getting to know each other.
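The failure mode of a write-only, non-reference-counted attribute can be sketched with a toy model (purely illustrative; the real interface is a kernel sysfs file, not a Python object):

```python
class ForcePowerSysfs:
    """Toy model of the force-power attribute: the last write always wins."""

    def __init__(self):
        self.powered = False

    def write(self, value):
        # No reference counting: a single "0" undoes any number of "1"s.
        self.powered = (value == "1")

attr = ForcePowerSysfs()
attr.write("1")  # fwupd powers up the controller
attr.write("1")  # boltd asks for power too
attr.write("0")  # fwupd finishes and cuts power, stranding boltd
assert attr.powered is False
```

A reference-counted design would keep the controller powered until every enabler had issued its matching disable, which is effectively what routing all requests through a single daemon achieves.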

boltctl power -q can be used to inspect the current force power settings

To fix this issue, boltd gained a force-power D-Bus API and fwupd in turn gained support for using that new API. No more fighting over the force-power sysfs interface. So far so good. But an unintended side-effect of that change was that now bolt was always being started, indirectly by fwupd via D-Bus activation, even if there was no Thunderbolt controller in the system to begin with. Since the daemon currently does not exit even if there is no Thunderbolt hardware [1], you have a system-daemon running, but not doing anything useful. This understandably made some people unhappy (rhbz#1650881, lp#1801796). I recently made a small change to the fwupd, which should do away with this issue: before making a call to boltd, fwupd now itself checks if the force-power facility is present. If not, don't bother asking boltd and starting it in the process. The change is included in fwupd 1.3.3. Now both machine and people should be happy, I hope.

  1. That is a complicated story that needs new systemd features. See #92 for the interesting technical details.

Daniel García Moreno: LAS 2019, Barcelona

Planet GNOME - Thu, 14/11/2019 - 12:00 AM
The event

The Linux App Summit (LAS) is a great event that brings together a lot of Linux application developers from the bigger communities. It's organized by GNOME and KDE in collaboration, and it's a good place to talk about the Linux desktop, application distribution and development.

This year the event was organized in Barcelona, which is not too far from my home town, Málaga, so I wanted to be there.

I sent a talk proposal and it was accepted, so I gave a talk about distributing services with flatpak and the problems related to service deployment in a flatpaked world.

By clicking on this image you can find my talk in the event stream. The sound is not too good and my accent doesn't help, but there it is :D

The event was really great, with really good talks on different topics: we had some technical talks, some talks about design, talks about languages, about distribution, about the market and economics, and at least two about "removing" the system tray 😋

The talk about the "future" inclusion of payments in Flathub was really interesting, because I think this will give people a new incentive to write and publish apps on Flathub, and it could be a great step towards getting donations to developers.

Another talk that I liked was the one about the maintenance of flatpak repositories; it's always interesting to know how things work, and this talk gave an easy introduction to ostree, flatpak, repositories and application distribution.

Besides the talks, this event is really interesting for the people it brings together. I've been talking with a lot of people (not too many, because I'm a shy person), but I had the opportunity to talk a bit with some Fractal developers, and during a coffee chat with Jordan Petridis we had time to share some ideas about a cool new functionality that maybe we can implement in the near future, thanks to the Outreachy program and maybe some help from the GStreamer people.

I'm also very happy to have been able to spend some time talking with Martín Abente about Sugar Labs, the Hack computer, and the different ways to teach kids with free software. Martín is a really nice person, and I really enjoyed meeting him and sharing some thoughts.

The city

This is not my first time in Barcelona; I was here at the beginning of this year. It is a great city, and I didn't have time to visit all the places on my first trip.

So I spent Thursday afternoon doing some sightseeing, visiting the "Sagrada Familia" and the "Montjuïc" fountain.

If you have never been to Barcelona and you have the opportunity to come, don't hesitate; it's a really good city, with great architecture to admire, really nice culture and people, and good food to enjoy.

Thank you all

I was sponsored by the GNOME Foundation, and I'm really thankful for this opportunity to come here, give a talk and share some time with the great people who make the awesome Linux and open source community possible.

I want to thank my employer Endless, because it's really a privilege to have a job that allows this kind of interaction with the community, and my team, Hack, because I missed some meetings this week and was not very responsive.

And I want to thank the LAS organizers, because this was a really good event. Good job, you can be very proud.

Sebastian Dröge: The GTK Rust bindings are not ready yet? Yes they are!

Planet GNOME - Wed, 13/11/2019 - 4:02 PM

When talking to various people at conferences over the last year, a recurring topic was that they believed that the GTK Rust bindings are not ready for use yet.

I don’t know where that perception comes from but if it was true, there wouldn’t have been applications like Fractal, Podcasts or Shortwave using GTK from Rust, or I wouldn’t be able to do a workshop about desktop application development in Rust with GTK and GStreamer at the Linux Application Summit in Barcelona this Friday (code can be found here already) or earlier this year at GUADEC.

One reason I sometimes hear is that there is no support for creating subclasses of GTK types in Rust yet. While that was once true, it is not anymore. But even more important: unless you want to create your own special widgets, you don't need that. Many examples and tutorials in other languages make use of inheritance/subclassing for the applications' architecture, but that's because it is the idiomatic pattern in those languages. However, in Rust other patterns are more idiomatic, and even for those examples and tutorials in other languages it wouldn't be the one and only option for designing applications.

Almost everything is included in the bindings at this point, so seriously consider writing your next GTK UI application in Rust. While some minor features are still missing from the bindings, none of those should prevent you from successfully writing your application.

And if something is actually missing for your use-case or something is not working as expected, please let us know. We’d be happy to make your life easier!


Some people are already experimenting with new UI development patterns on top of the GTK Rust bindings. So if you want to try developing a UI application but want something different from the usual signal/callback spaghetti code, also take a look at those.

Federico Mena-Quintero: CSS in librsvg is now in Rust, courtesy of Mozilla Servo

Planet GNOME - Tue, 12/11/2019 - 2:36 AM

Summary: after an epic amount of refactoring, librsvg now does all CSS parsing and matching in Rust, without using libcroco. In addition, the CSS engine comes from Mozilla Servo, so it should be able to handle much more complex CSS than librsvg ever could before.

This is the story of CSS support in librsvg.


The first commit to introduce CSS parsing in librsvg dates from 2002. It was as minimal as possible, written to support a small subset of what was then CSS2.

Librsvg handled CSS stylesheets more by "piecing them apart" than by "parsing them". You know, when g_strsplit() is your best friend. The basic parsing algorithm was to turn a stylesheet like this:

rect { fill: blue; } .classname { fill: green; stroke-width: 4; }

Into a hash table whose keys are strings like rect and .classname, and whose values are everything inside curly braces.

The selector matching phase was equally simple. The code only handled a few possible match types, as follows: to match a certain kind of CSS selector, it would ask "what would this selector look like in CSS syntax?", make up a string with that syntax, and compare it to the key strings it had stored in the hash table from above.

So, to match an element name selector, it would sprintf("%s", element->name), obtain something like rect and see if the hash table had such a key.

To match a class selector, it would sprintf(".%s", element->class), obtain something like .classname, and look it up in the hash table.

This scheme supported only a few combinations. It handled tag, .class, tag.class, and a few combinations with #id in them. This was enough to support very simple stylesheets.

The value corresponding to each key in the hash table was the stuff between curly braces in the stylesheet, so the second rule from the example above would contain fill: green; stroke-width: 4;. Once librsvg decided that an SVG element matched that CSS rule, it would re-parse the string with the CSS properties and apply them to the element's style.
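That scheme can be sketched in a few lines of std-only Rust (this is an illustration of the idea, not librsvg's actual C code): a hash table from selector strings to raw declaration text, with matching done by synthesizing the selector string an element would produce, and the declaration text re-parsed only after a match.

```rust
use std::collections::HashMap;

// Selector string ("rect", ".classname") -> raw declaration-block text.
type Stylesheet = HashMap<String, String>;

// Crude split of "fill: green; stroke-width: 4;" into (name, value) pairs,
// mimicking the g_strsplit()-style parsing described above.
fn parse_declarations(block: &str) -> Vec<(String, String)> {
    block
        .split(';')
        .filter_map(|decl| {
            let (name, value) = decl.split_once(':')?;
            Some((name.trim().to_string(), value.trim().to_string()))
        })
        .collect()
}

// Matching: make up the selector string an element would have, look it up.
fn lookup<'a>(sheet: &'a Stylesheet, element: &str, class: Option<&str>) -> Option<&'a String> {
    let key = match class {
        Some(c) => format!(".{}", c), // like sprintf(".%s", element->class)
        None => element.to_string(),  // like sprintf("%s", element->name)
    };
    sheet.get(&key)
}

fn main() {
    let mut sheet = Stylesheet::new();
    sheet.insert("rect".into(), "fill: blue;".into());
    sheet.insert(".classname".into(), "fill: green; stroke-width: 4;".into());

    // An element with class="classname" matches the second rule...
    let block = lookup(&sheet, "rect", Some("classname")).unwrap();
    // ...and only then does the declaration text get re-parsed.
    let decls = parse_declarations(block);
    assert_eq!(decls[1].0, "stroke-width");
}
```

The sketch also shows why the scheme was so limited: any selector shape you did not explicitly synthesize a string for simply never matched.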

I'm amazed that so little code was enough to deal with a good number of SVG files with stylesheets. I suspect that this was due to a few things:

  • While people were using complex CSS in HTML all the time, it was less common for SVG...

  • ... because CSS2 was somewhat new, and the SVG spec was still being written...

  • ... and SVGs created with illustration programs don't really use stylesheets; they include the full style information inside each element instead of symbolically referencing it from a stylesheet.

From the kinds of bugs librsvg has received along the lines of "CSS support is too limited", it feels like SVGs which use CSS features are either hand-written, or machine-generated by custom programs like data-plotting software. Illustration programs tend to list all style properties explicitly in each SVG element, and don't use CSS.

Libcroco appears

The first commit to introduce libcroco, from March 2003, used it for CSS parsing.

At the same time, libcroco was introducing code to do CSS matching. However, that code never got used in librsvg, which kept its simple string-based matcher. Maybe libcroco's API was not ready?

Libcroco fell out of maintainership around the first half of 2005, and volunteers have kept fixing it since then.

Problems with librsvg's string matcher for CSS

The C implementation of CSS matching in librsvg remained basically untouched until 2018, when Paolo Borelli and I started porting the surrounding code to Rust.

I had a lot of trouble figuring out the concepts from the code. I didn't know all the terminology of CSS implementations, and librsvg didn't use it, either.

I think that librsvg's code suffered from what the refactoring literature calls primitive obsession. Instead of having a parsed representation of CSS selectors, librsvg just stored a stringified version of them. So, a selector like rect#classname really was stored with a string like that, instead of an actual decomposition into structs.
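The fix for that kind of primitive obsession is to parse the selector once into a structured type. A std-only sketch of such a decomposition (the type and parser here are hypothetical, much simpler than what the selectors crate eventually provided):

```rust
// A hypothetical parsed form of simple selectors, instead of raw strings.
#[derive(Debug, PartialEq)]
struct SimpleSelector {
    element: Option<String>, // "rect" in rect.classname
    class: Option<String>,   // "classname" in rect.classname
    id: Option<String>,      // "foo" in #foo
}

// Decompose "tag", ".class", "#id", "tag.class", "tag#id" into fields.
// Deliberately naive: combinators, pseudo-classes, etc. are out of scope.
fn parse_selector(s: &str) -> SimpleSelector {
    let mut sel = SimpleSelector { element: None, class: None, id: None };
    let mut rest = s;
    if let Some(pos) = rest.find('#') {
        sel.id = Some(rest[pos + 1..].to_string());
        rest = &rest[..pos];
    }
    if let Some(pos) = rest.find('.') {
        sel.class = Some(rest[pos + 1..].to_string());
        rest = &rest[..pos];
    }
    if !rest.is_empty() {
        sel.element = Some(rest.to_string());
    }
    sel
}

fn main() {
    let sel = parse_selector("rect.classname");
    assert_eq!(sel.element.as_deref(), Some("rect"));
    assert_eq!(sel.class.as_deref(), Some("classname"));
    assert_eq!(sel.id, None);
}
```

Once selectors are structs rather than strings, matching becomes field comparisons instead of string synthesis and lookup.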

Moreover, things were misnamed. This is the field that stored stylesheet data inside an RsvgHandle:

GHashTable *css_props;

From just looking at the field declaration, this doesn't tell me anything about what kind of data is stored there. One has to grep the source code for where that field is used:

static void
rsvg_css_define_style (RsvgHandle * ctx,
                       const gchar * selector,
                       const gchar * style_name,
                       const gchar * style_value,
                       gboolean important)
{
    GHashTable *styles;

    styles = g_hash_table_lookup (ctx->priv->css_props, selector);

Okay, it looks up a selector by name in the css_props, and it gives back... another hash table styles? What's in there?

g_hash_table_insert (styles, g_strdup (style_name), style_value_data_new (style_value, important));

Another string key called style_name, whose value is a StyleValueData; what's in that?

typedef struct _StyleValueData {
    gchar *value;
    gboolean important;
} StyleValueData;

The value is another string. Strings all the way!

At the time, I didn't really figure out what each level of nested hash tables was supposed to mean. I didn't understand why we handled style properties in a completely different part of the code, and yet this part had a css_props field that didn't seem to store properties at all.

It took a while to realize that css_props was misnamed. It wasn't storing a mapping of selector names to properties; it was storing a mapping of selector names to declaration lists, which are lists of property/value pairs.

So, when I started porting the CSS parsing code to Rust, I created real types for each concept.

// Maps property_name -> Declaration
type DeclarationList = HashMap<String, Declaration>;

pub struct CssStyles {
    selectors_to_declarations: HashMap<String, DeclarationList>,
}

Even though the keys of those HashMaps are still strings, because librsvg didn't have a better way to represent their corresponding concepts, at least those declarations let one see what the hell is being stored without grepping the rest of the code. This is a part of the code that I didn't really touch very much, so it was nice to have that reminder.

The first port of the CSS matching code to Rust kept the same algorithm as the C code, the one that created strings with element.class and compared them to the stored selector names. Ugly, but it still worked in the same limited fashion.

Rustifying the CSS parsers

It turns out that CSS parsing is divided into two parts. One can have a style attribute inside an element, for example:

<rect x="0" y="0" width="100" height="100" style="fill: green; stroke: magenta; stroke-width: 4;"/>

This is a plain declaration list which is not associated with any selectors, and which is applied directly to just the element in which it appears.

Then, there is the <style> element itself, with a normal-looking CSS stylesheet

<style type="text/css">
  rect {
      fill: green;
      stroke: magenta;
      stroke-width: 4;
  }
</style>

This means that all <rect> elements will get that style applied.

I started to look for existing Rust crates to parse and handle CSS data. The cssparser and selectors crates come from Mozilla, so I thought they should do a pretty good job of things.

And they do! Except that they are not a drop-in replacement for anything. They are what gets used in Mozilla's Servo browser engine, so they are optimized to hell, and the code can be pretty intimidating.

Out of the box, cssparser provides a CSS tokenizer, but it does not know how to handle any properties/values in particular. One must use the tokenizer to implement a parser for each kind of CSS property one wants to support — Servo has mountains of code for all of HTML's style properties, and librsvg had to provide a smaller mountain of code for SVG style properties.

Thus started the big task of porting librsvg's string-based parsers for CSS properties into ones based on cssparser tokens. Cssparser provides a Parser struct, which extracts tokens out of a CSS stream. Out of this, librsvg defines a Parse trait for parsable things:

use cssparser::Parser;

pub trait Parse: Sized {
    type Err;

    fn parse(parser: &mut Parser<'_, '_>) -> Result<Self, Self::Err>;
}

What's with those two default lifetimes in Parser<'_, '_>? Cssparser tries very hard to be a zero-copy tokenizer. One of the lifetimes refers to the input string which is wrapped in a Tokenizer, which is wrapped in a ParserInput. The other lifetime is for the ParserInput itself.

In the actual implementation of that trait, the Err type also uses the lifetime that refers to the input string. For example, there is a BasicParseErrorKind::UnexpectedToken(Token<'i>), which one returns when there is an unexpected token. And to avoid copying the substring into the error, one returns a slice reference into the original string, thus the lifetime.
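The lifetime gymnastics are easier to see in a std-only analogue (this is not cssparser's API, just an illustration of its zero-copy error design): the error type borrows a slice of the original input rather than copying the offending token, which is why the lifetime parameter threads through both the parser and its errors.

```rust
// A std-only analogue of cssparser's zero-copy errors: the error borrows
// a slice of the original input instead of copying the offending token.
#[derive(Debug, PartialEq)]
enum ParseError<'i> {
    UnexpectedToken(&'i str),
}

// Accept only the keyword "blue"; on failure, return a slice of the input.
// The 'i lifetime ties the error's slice back to the caller's string.
fn parse_blue<'i>(input: &'i str) -> Result<(), ParseError<'i>> {
    let token = input.trim();
    if token == "blue" {
        Ok(())
    } else {
        Err(ParseError::UnexpectedToken(token))
    }
}

fn main() {
    assert!(parse_blue(" blue ").is_ok());
    // No allocation: the error just points back into "bleu".
    assert_eq!(parse_blue("bleu"), Err(ParseError::UnexpectedToken("bleu")));
}
```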

I was more of a Rust newbie back then, and it was very hard to make sense of how cssparser was meant to be used.

The process was more or less this:

  • Port the C parsers to Rust; implement types for each CSS property.

  • Port the &str-based parsers into ones that use cssparser.

  • Fix the error handling scheme to match what cssparser's high-level traits expect.

This last point was... hard. Again, I wasn't comfortable enough with Rust lifetimes and nested generics; in the end it was all right.

Moving declaration lists to Rust

With the individual parsers for CSS properties done, and with them already using a different type for each property, the next thing was to implement cssparser's traits to parse declaration lists.

Again, a declaration list looks like this:

fill: blue; stroke-width: 4;

It's essentially a key/value list.
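One wrinkle in that key/value list is the !important flag attached to individual declarations. A std-only sketch of parsing a single declaration into a typed value (string splitting here stands in for the token-based parsing librsvg actually does with cssparser):

```rust
#[derive(Debug, PartialEq)]
struct Declaration {
    name: String,
    value: String,
    important: bool,
}

// Parse one "name: value [!important]" declaration. Illustrative only;
// the real code consumes cssparser tokens instead of splitting strings.
fn parse_declaration(s: &str) -> Option<Declaration> {
    let (name, value) = s.split_once(':')?;
    let value = value.trim();
    let (value, important) = match value.strip_suffix("!important") {
        Some(v) => (v.trim_end(), true),
        None => (value, false),
    };
    Some(Declaration {
        name: name.trim().to_string(),
        value: value.to_string(),
        important,
    })
}

fn main() {
    let d = parse_declaration("stroke-width: 4 !important").unwrap();
    assert_eq!(d.name, "stroke-width");
    assert_eq!(d.value, "4");
    assert!(d.important);
}
```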

The trait that cssparser wants us to implement is this:

pub trait DeclarationParser<'i> {
    type Declaration;
    type Error: 'i;

    fn parse_value<'t>(
        &mut self,
        name: CowRcStr<'i>,
        input: &mut Parser<'i, 't>,
    ) -> Result<Self::Declaration, ParseError<'i, Self::Error>>;
}

That is, define a type for a Declaration, and implement a parse_value() method that takes a name and a Parser, and outputs a Declaration or an error.

What this really means is that the type you implement for Declaration needs to be able to represent all the CSS property types that you care about. Thus, a struct plus a big enum like this:

pub struct Declaration {
    pub prop_name: String,
    pub property: ParsedProperty,
    pub important: bool,
}

pub enum ParsedProperty {
    BaselineShift(SpecifiedValue<BaselineShift>),
    ClipPath(SpecifiedValue<ClipPath>),
    ClipRule(SpecifiedValue<ClipRule>),
    Color(SpecifiedValue<Color>),
    ColorInterpolationFilters(SpecifiedValue<ColorInterpolationFilters>),
    Direction(SpecifiedValue<Direction>),
    ...
}

This gives us declaration lists (the stuff inside curly braces in a CSS stylesheet), but it doesn't give us qualified rules, which are composed of selector names plus a declaration list.
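The missing piece, a qualified rule, is just a selector list paired with a declaration list. A minimal std-only model (the types and the toy parser are illustrative, not librsvg's real Rule/QualifiedRule types):

```rust
// A qualified rule = selector prelude + declaration block, the concept
// that plain declaration lists alone don't capture.
#[derive(Debug)]
struct QualifiedRule {
    selectors: Vec<String>,              // e.g. ["rect", ".classname"]
    declarations: Vec<(String, String)>, // e.g. [("fill", "green")]
}

// "rect, .classname { fill: green; stroke-width: 4 }" -> QualifiedRule
fn parse_rule(rule: &str) -> Option<QualifiedRule> {
    let (prelude, rest) = rule.split_once('{')?;
    let block = rest.strip_suffix('}')?.trim();
    Some(QualifiedRule {
        selectors: prelude.split(',').map(|s| s.trim().to_string()).collect(),
        declarations: block
            .split(';')
            .filter_map(|d| {
                let (n, v) = d.split_once(':')?;
                Some((n.trim().to_string(), v.trim().to_string()))
            })
            .collect(),
    })
}

fn main() {
    let r = parse_rule("rect, .classname { fill: green; stroke-width: 4 }").unwrap();
    assert_eq!(r.selectors, vec!["rect", ".classname"]);
    assert_eq!(r.declarations.len(), 2);
}
```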

Refactoring towards real CSS concepts

Paolo Borelli has been steadily refactoring librsvg and fixing things like the primitive obsession I mentioned above. We now have real concepts like a Document, Stylesheet, QualifiedRule, Rule, AtRule.

This refactoring took a long time, because it involved redoing the XML loading code and its interaction with the CSS parser a few times.

Implementing traits from the selectors crate

The selectors crate contains Servo's code for parsing CSS selectors and doing matching. However, it is extremely generic. Using it involves implementing a good number of concepts.

For example, this SelectorImpl trait has no methods, and is just a collection of types that refer to your implementation of an element tree. How do you represent an attribute/value? How do you represent an identifier? How do you represent a namespace and a local name?

pub trait SelectorImpl {
    type ExtraMatchingData: ...;
    type AttrValue: ...;
    type Identifier: ...;
    type ClassName: ...;
    type PartName: ...;
    type LocalName: ...;
    type NamespaceUrl: ...;
    type NamespacePrefix: ...;
    type BorrowedNamespaceUrl: ...;
    type BorrowedLocalName: ...;
    type NonTSPseudoClass: ...;
    type PseudoElement: ...;
}

A lot of those can be String, but Servo has smarter things in store. I ended up using the markup5ever crate, which provides a string interning framework for markup and XML concepts like a LocalName, a Namespace, etc. This reduces memory consumption, because instead of storing string copies of element names everywhere, one just stores tokens for interned strings.
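The interning idea itself is simple; here is a toy std-only interner (markup5ever's real one is global, precomputed, and far more clever), showing why comparisons get cheap: each distinct string is stored once and identified by a small integer.

```rust
use std::collections::HashMap;

// A toy string interner: each distinct name is stored once and identified
// by a small integer, so name comparisons become integer comparisons.
#[derive(Default)]
struct Interner {
    ids: HashMap<String, u32>,
    strings: Vec<String>,
}

impl Interner {
    fn intern(&mut self, s: &str) -> u32 {
        if let Some(&id) = self.ids.get(s) {
            return id; // already interned: no new allocation
        }
        let id = self.strings.len() as u32;
        self.strings.push(s.to_string());
        self.ids.insert(s.to_string(), id);
        id
    }

    fn resolve(&self, id: u32) -> &str {
        &self.strings[id as usize]
    }
}

fn main() {
    let mut interner = Interner::default();
    let a = interner.intern("rect");
    let b = interner.intern("rect");
    let c = interner.intern("circle");
    assert_eq!(a, b); // same string, same token
    assert_ne!(a, c);
    assert_eq!(interner.resolve(c), "circle");
}
```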

(In the meantime I had to implement support for XML namespaces, which the selectors code really wants, but which librsvg never supported.)

Then, the selectors crate wants you to say how your code implements an element tree. It has a monster trait Element:

pub trait Element {
    type Impl: SelectorImpl;

    fn opaque(&self) -> OpaqueElement;
    fn parent_element(&self) -> Option<Self>;
    fn parent_node_is_shadow_root(&self) -> bool;
    ...
    fn prev_sibling_element(&self) -> Option<Self>;
    fn next_sibling_element(&self) -> Option<Self>;

    fn has_local_name(
        &self,
        local_name: &<Self::Impl as SelectorImpl>::BorrowedLocalName
    ) -> bool;

    fn has_id(
        &self,
        id: &<Self::Impl as SelectorImpl>::Identifier,
        case_sensitivity: CaseSensitivity,
    ) -> bool;
    ...
}

That is, when you provide an implementation of Element and SelectorImpl, the selectors crate will know how to navigate your element tree and ask it questions like, "does this element have the id #foo?"; "does this element have the name rect?". It makes perfect sense in the end, but it is quite intimidating when you are not 100% comfortable with webs of traits and associated types and generics with a bunch of trait bounds!
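Stripped of the generics, the questions the trait asks are easy to answer on a concrete tree. A std-only sketch (the arena layout and method names here are made up for illustration, not the selectors crate's actual types):

```rust
// A minimal element tree answering the kinds of queries Element asks for.
struct Element {
    local_name: String,
    id: Option<String>,
    parent: Option<usize>, // index into the arena below
}

struct Tree {
    nodes: Vec<Element>,
}

impl Tree {
    // "does this element have the name rect?"
    fn has_local_name(&self, node: usize, name: &str) -> bool {
        self.nodes[node].local_name == name
    }

    // "does this element have the id #foo?"
    fn has_id(&self, node: usize, id: &str) -> bool {
        self.nodes[node].id.as_deref() == Some(id)
    }

    // The ancestor walk that descendant selectors like "g rect" need.
    fn ancestor_has_name(&self, node: usize, name: &str) -> bool {
        let mut current = self.nodes[node].parent;
        while let Some(idx) = current {
            if self.has_local_name(idx, name) {
                return true;
            }
            current = self.nodes[idx].parent;
        }
        false
    }
}

fn main() {
    let tree = Tree {
        nodes: vec![
            Element { local_name: "svg".into(), id: None, parent: None },
            Element { local_name: "g".into(), id: Some("layer".into()), parent: Some(0) },
            Element { local_name: "rect".into(), id: None, parent: Some(1) },
        ],
    };
    assert!(tree.has_local_name(2, "rect"));
    assert!(tree.has_id(1, "layer"));
    assert!(tree.ancestor_has_name(2, "g"));
}
```

The selectors crate asks exactly these questions, just through associated types and trait bounds instead of concrete strings and indices.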

I tried implementing that trait twice in the last year, and failed. It turns out that its API needed a key fix that landed last June, but I didn't notice until a couple of weeks ago.


Two days ago, Paolo and I committed the last code to be able to completely replace libcroco.

And, after implementing CSS specificity (which was easy now that we have real CSS concepts and a good pipeline for the CSS cascade), a bunch of very old bugs started falling down (1 2 3 4 5 6).
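Specificity is worth a quick sketch, because Rust makes the cascade's tie-breaking rule almost free: represent specificity as an (ids, classes, types) triple and tuple comparison does the lexicographic ordering for you. The counting below handles only a toy subset of selector syntax and is not librsvg's implementation:

```rust
// CSS specificity as an (ids, classes, types) triple. Tuple ordering in
// Rust is lexicographic, which is exactly the cascade's comparison rule.
// Toy counting: '#' marks an id, '.' a class, and any whitespace-separated
// token starting with a letter contributes a type selector.
fn specificity(selector: &str) -> (u32, u32, u32) {
    let ids = selector.matches('#').count() as u32;
    let classes = selector.matches('.').count() as u32;
    let types = selector
        .split_whitespace()
        .filter(|t| t.chars().next().map_or(false, |c| c.is_ascii_alphabetic()))
        .count() as u32;
    (ids, classes, types)
}

fn main() {
    // One id beats any number of classes, which beat any number of types.
    assert!(specificity("#id") > specificity(".a .b"));
    assert!(specificity(".a .b") > specificity("g rect"));
    assert_eq!(specificity("g rect.cls"), (0, 1, 2));
}
```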

Now it is going to be easy to implement things like letting the application specify a user stylesheet. In particular, this should let GTK remove the rather egregious hack it has to recolor SVG icons while using librsvg indirectly.


This will appear in librsvg 2.47.1 — that version will no longer require libcroco.

As far as I know, the only module that still depends on libcroco (in GNOME or otherwise) is gnome-shell. It uses libcroco to parse CSS and get the basic structure of selectors so it can implement matching by hand.

Gnome-shell has some code which looks awfully similar to what librsvg had when it was written in C:

  • StTheme has the high-level CSS stylesheet parser and the selector matching code.

  • StThemeNode has the low-level CSS property parsers.

... and it turns out that those files come all the way from HippoCanvas, the CSS-aware canvas that Mugshot used! Mugshot was a circa-2006 pre-Facebook aggregator for social media data like blogs, Flickr pictures, etc. HippoCanvas also got used in Sugar, the GUI for One Laptop Per Child. Yes, our code is that old.

Libcroco is unmaintained, and has outstanding CVEs. I would be very happy to assist someone in porting gnome-shell's CSS code to Rust :)

Lithium-Sulfur Battery Project Aims To Double the Range of Electric Airplanes

Slashdot - Mar, 12/11/2019 - 1:20pd
Oxis Energy, of Abingdon, UK, says it has a battery based on lithium-sulfur chemistry that can greatly increase the ratio of watt-hours per kilogram, and do so in a product that's safe enough for use even in an electric airplane. Specifically, a plane built by Bye Aerospace, in Englewood, Colo., whose founder, George Bye, described the project in this 2017 article for IEEE Spectrum. From a report: The two companies said in a statement that they were beginning a one-year joint project to demonstrate feasibility. They said the Oxis battery would provide "in excess" of 500 Wh/kg, a number which appears to apply to the individual cells, rather than the battery pack, with all its packaging, power electronics, and other paraphernalia. That per-cell figure may be compared directly to the "record-breaking energy density of 260 watt-hours per kilogram" that Bye cited for the batteries his planes were using in 2017. That per-cell improvement would cut the total system weight in half, enough to extend flying range by 50 to 100 percent, at least in the small planes Bye Aerospace has specialized in so far. If lithium-sulfur wins the day, bigger planes may well follow. [...] One reason why lithium-sulfur batteries have been on the sidelines for so long is their short life, due to degradation of the cathode during the charge-discharge cycle. Oxis expects its batteries will be able to last for 500 such cycles within the next two years. That's about par for the course for today's lithium-ion batteries. Another reason is safety: Lithium-sulfur batteries have been prone to overheating. Oxis says its design incorporates a ceramic lithium sulfide as a "passivation layer," which blocks the flow of electricity -- both to prevent sudden discharge and the more insidious leakage that can cause a lithium-ion battery to slowly lose capacity even while just sitting on a shelf. Oxis also uses a non-flammable electrolyte.

Read more of this story at Slashdot.

