Feed aggregator

Scientists Found Breathable Oxygen In Another Galaxy For the First Time

Slashdot - 11 hours 29 min ago
Astronomers have spotted molecular oxygen in a galaxy far far away, marking the first time that this important element has ever been detected outside of the Milky Way. Motherboard reports: This momentous "first detection of extragalactic molecular oxygen," as it is described in a recent study in The Astrophysical Journal, has big implications for understanding the crucial role of oxygen in the evolution of planets, stars, galaxies, and life. Oxygen is the third most abundant element in the universe, after hydrogen and helium, and is one of the key ingredients for life here on Earth. Molecular oxygen is the most common free form of the element and consists of two oxygen atoms with the designation O2. It is the version of the gas that we humans, among many other organisms, need to breathe in order to live. Now, a team led by Junzhi Wang, an astronomer at the Shanghai Astronomical Observatory, reports the discovery of molecular oxygen in a dazzling galaxy called Markarian 231, located 581 million light years from the Milky Way. The researchers were able to make this detection with ground-based radio observatories. "Deep observations" from the IRAM 30-meter telescope in Spain and the NOEMA interferometer in France revealed molecular oxygen emission "in an external galaxy for the first time," Wang and his co-authors wrote. Motherboard notes that you couldn't just inhale the molecular oxygen found in Markarian 231 like you would the oxygen on Earth. "This is because the oxygen is not mixed with the right abundances of nitrogen, carbon dioxide, methane, and all the other molecules that make Earth's air breathable to humans and other organisms." Still, the discovery "provides an ideal tool to study" molecular outflows from quasars and other AGNs, the team said in the study. [Markarian 231 has remained a curiosity to scientists for decades because it contains the closest known quasar, a type of hyper-energetic object. 
Quasars are active galactic nuclei (AGN), meaning that they inhabit the core regions of special galaxies, and they are among the most radiant and powerful objects in the universe.] "O2 may be a significant coolant for molecular gas in such regions affected by AGN-driven outflows," the researchers noted. "New astrochemical models are needed to explain the implied high molecular oxygen abundance in such regions several kiloparsecs away from the center of galaxies."

Read more of this story at Slashdot.

JP Morgan Economists Warn of 'Catastrophic' Climate Change

Slashdot - 14 hours 59 min ago
An anonymous reader quotes a report from the BBC: Human life "as we know it" could be threatened by climate change, economists at JP Morgan have warned. In a hard-hitting report to clients, the economists said that without action being taken there could be "catastrophic outcomes." The bank said the research came from a team that was "wholly independent from the company as a whole." Climate campaigners have previously criticized JP Morgan for its investments in fossil fuels. The firm's stark report was sent to clients and seen by BBC News. While JP Morgan economists have warned about unpredictability in climate change before, the language used in the new report was very forceful. "We cannot rule out catastrophic outcomes where human life as we know it is threatened," JP Morgan economists David Mackie and Jessica Murray said. Carbon emissions in the coming decades "will continue to affect the climate for centuries to come in a way that is likely to be irreversible," they said, adding that climate change action should be motivated "by the likelihood of extreme events." Climate change could affect economic growth, shares, health, and how long people live, they said. It could put stresses on water, cause famine, and cause people to be displaced or migrate. Climate change could also cause political stress, conflict, and it could hit biodiversity and species survival, the report warned. To mitigate climate change net carbon emissions need to be cut to zero by 2050. To do this, there needed to be a global tax on carbon, the report authors said. But they said that "this is not going to happen anytime soon."

Read more of this story at Slashdot.

Radical Hydrogen-Boron Reactor Leapfrogs Current Nuclear Fusion Tech

Slashdot - 16 hours 27 min ago
HB11 Energy, a spin-out company originating at the University of New South Wales, claims its hydrogen-boron fusion technology is already working a billion times better than expected. Alongside this claim, the company also announced a swag of patents through Japan, China and the USA protecting its unique approach to fusion energy generation. New Atlas reports: The result of decades of research by Emeritus Professor Heinrich Hora, HB11's approach to fusion does away with rare, radioactive and difficult fuels like tritium altogether -- as well as those incredibly high temperatures. Instead, it uses plentiful hydrogen and boron B-11, employing the precise application of some very special lasers to start the fusion reaction. Here's how HB11 describes its "deceptively simple" approach: the design is "a largely empty metal sphere, where a modestly sized HB11 fuel pellet is held in the center, with apertures on different sides for the two lasers. One laser establishes the magnetic containment field for the plasma and the second laser triggers the 'avalanche' fusion chain reaction. The alpha particles generated by the reaction would create an electrical flow that can be channeled almost directly into an existing power grid with no need for a heat exchanger or steam turbine generator." HB11's Managing Director Dr. Warren McKenzie clarifies over the phone: "A lot of fusion experiments are using the lasers to heat things up to crazy temperatures -- we're not. We're using the laser to massively accelerate the hydrogen through the boron sample using non-linear forces. You could say we're using the hydrogen as a dart, and hoping to hit a boron atom, and if we hit one, we can start a fusion reaction. That's the essence of it. If you've got a scientific appreciation of temperature, it's essentially the speed of atoms moving around.
Creating fusion using temperature is essentially randomly moving atoms around, and hoping they'll hit one another, our approach is much more precise." He continues: "The hydrogen/boron fusion creates a couple of helium atoms. They're naked heliums, they don't have electrons, so they have a positive charge. We just have to collect that charge. Essentially, the lack of electrons is a product of the reaction and it directly creates the current." The lasers themselves rely upon cutting-edge "Chirped Pulse Amplification" technology, the development of which won its inventors the 2018 Nobel prize in Physics. Much smaller and simpler than any of the high-temperature fusion generators, HB11 says its generators would be compact, clean and safe enough to build in urban environments. There's no nuclear waste involved, no superheated steam, and no chance of a meltdown. "This is brand new," Professor Hora tells us. "10-petawatt power laser pulses. It's been shown that you can create fusion conditions without hundreds of millions of degrees. This is completely new knowledge. I've been working on how to accomplish this for more than 40 years. It's a unique result. Now we have to convince the fusion people -- it works better than the present day hundred million degree thermal equilibrium generators. We have something new at hand to make a drastic change in the whole situation. A substitute for carbon as our energy source. A radical new situation and a new hope for energy and the climate."

Read more of this story at Slashdot.

Scientists Condemn Conspiracy Theories About Origin of Coronavirus Outbreak

Slashdot - 17 hours 4 min ago
hackingbear writes: A group of 27 prominent public health scientists from outside China, who have studied SARS-CoV-2 and "overwhelmingly conclude that this coronavirus originated in wildlife" just like many other viruses that have recently emerged in humans, is pushing back against a steady stream of stories and even a scientific paper suggesting a laboratory in Wuhan, China, may be the origin of the outbreak of COVID-19. "The rapid, open, and transparent sharing of data on this outbreak is now being threatened by rumors and misinformation around its origins," the scientists, from nine countries, write in a statement published online by The Lancet. Many posts on social media have singled out the Wuhan Institute of Virology for intense scrutiny because it has a laboratory at the highest security level -- biosafety level 4 -- and its researchers study coronaviruses from bats; speculations have included the possibility that the virus was bioengineered in the lab or that a lab worker was infected while handling a bat. Researchers from the institute have insisted there is no link between the outbreak and their laboratory. Peter Daszak, president of the EcoHealth Alliance and a cosignatory of the statement, has collaborated with researchers at the Wuhan institute who study bat coronaviruses. "We're in the midst of the social media misinformation age, and these rumors and conspiracy theories have real consequences, including threats of violence that have occurred to our colleagues in China."

Read more of this story at Slashdot.

US Defense Agency That Secures Trump's Communications Confirms Data Breach

Slashdot - 17 hours 44 min ago
An anonymous reader quotes a report from Forbes: The Department of Defense agency responsible for securing the communications of President Trump has suffered a data breach. Here's what is known so far. The U.S. Defense Information Systems Agency (DISA) describes itself as a combat support agency of the Department of Defense (DoD) and is tasked with the responsibility for supporting secure White House communications, including those of President Trump. As well as overseeing Trump's secure calls technology, DISA also establishes and supports communications networks in combat zones and takes care of military cyber-security issues. It has also confirmed a data breach of its network, which exposed data affecting as many as 200,000 users. First picked up by Reuters, disclosure letters dated February 11 have been sent out to those whose personal data may have been compromised. Although it is not clear which specific servers have been breached, nor the nature of the users to whom the letters have been sent, that an agency with a vision to "connect and protect the war-fighter in cyberspace" should suffer such an incident is concerning, to say the least. While many of the details surrounding this breach are likely to remain, understandably, confidential, given the nature of the DISA work, the letter itself has already been published on Twitter by one recipient. Signed by Roger S. Greenwell, the chief information officer at DISA, the letter revealed the breach took place between May and July last year, and information including social security numbers may have been compromised as a result. It also stated that there is no evidence that any personally identifiable information (PII) has been misused as a result. The letter does, however, confirm that DISA will be offering free credit monitoring services to those who want it.

Read more of this story at Slashdot.

Gopher's Rise and Fall Shows How Much We Lost When Monopolists Stole the Net

Slashdot - 18 hours 6 min ago
Science-fiction writer, journalist and longtime Slashdot reader, Cory Doctorow, a.k.a. mouthbeef, writes: The Electronic Frontier Foundation (EFF) just published the latest installment in my case histories of "adversarial interoperability" -- once the main force that kept tech competitive. Today, I tell the story of Gopher, the web's immediate predecessor, which burrowed under the mainframe systems' guardians and created a menu-driven interface to campus resources, then the whole internet. Gopher ruled until browser vendors swallowed Gopherspace whole, incorporating it by turning gopher:// into a way to access anything on any Gopher server. Gopher served as the booster rocket that helped the web attain a stable orbit. But the tools that Gopher used to crack open the silos, and the moves that the web pulled to crack open Gopher, are radioactively illegal today. If you wanted to do to Facebook what Gopher did to the mainframes, you would be pulverized by the relentless grinding of software patents, terms of service, anticircumvention law, and bullshit theories about APIs being copyrightable. Big Tech blames "network effects" for its monopolies -- but that's a counsel of despair. If impersonal forces (and not anticompetitive bullying) are what keeps tech big then there's no point in trying to make it small. Big Tech's critics swallow this line, demanding that Big Tech be given state-like duties to police user conduct -- duties that require billions and total control to perform, guaranteeing tech monopolists perpetual dominance. But the lesson of Gopher is that adversarial interoperability is judo for network effects.

Read more of this story at Slashdot.

Company Buying .Org Offers To Sign a Contract Banning Price Hikes

Slashdot - 18 hours 27 min ago
Ethos Capital, the company controversially buying the .org top-level domain, says it will sign legally binding agreements banning steep fee increases for nonprofit domain holders and establishing an independent "stewardship council" that could veto attempts at censorship or inappropriate data use. "The rules would kick in if Ethos successfully acquires Public Interest Registry (PIR), a nonprofit organization that manages .org," reports The Verge. From the report: ICANN, which oversees the internet's top-level domains, is currently scrutinizing the acquisition. President and CEO Goran Marby previously expressed discomfort with the deal, and PIR announced today that it's extending the review period until March 20th. ICANN hasn't yet taken a position on the latest proposal. "We are in the process [of] analyzing the information we have received and therefore have no comment beyond the fact that we welcome Ethos' efforts to engage with the Internet Society community and .org customers, and look forward to the outcome of those discussions," said Marby in a statement to The Verge. PIR said it would "continue to work collaboratively" to address any outstanding issues with ICANN. In addition to the details above, Ethos and PIR committed to creating a "Community Enablement Fund" to support .org initiatives, and PIR promised to publish an annual transparency report. The price restrictions, meanwhile, would forbid Ethos from raising domain registration and renewal fees by more than 10 percent per year (on average) for the next eight years. Ethos and PIR's press release quotes Internet Society CEO Andrew Sullivan praising the new agreements. "Ethos shows that it has been listening to the questions some have raised. Ethos has responded by embedding its commitments on pricing, censorship and data use policies in a legally-binding contract, and giving ICANN and the community the ability to hold Ethos to its commitments," says the statement.
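For a rough sense of what that 10-percent-per-year cap permits, the increases compound. A back-of-the-envelope sketch (the $9.93 baseline is an assumed illustrative figure, not taken from the article):

```python
# Illustrative arithmetic only: the cumulative effect of the proposed cap
# if fees rose by the full 10% every year for eight years. The $9.93
# baseline is a hypothetical example figure.
base_fee = 9.93
capped_fee = base_fee * 1.10 ** 8   # eight compounded 10% increases
print(f"Maximum fee after 8 years: ${capped_fee:.2f}")  # → $21.29
```

So even under the cap, the wholesale fee could slightly more than double over the eight-year window.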

Read more of this story at Slashdot.

AT&T Loses Key Ruling In Class Action Over Unlimited-Data Throttling

Slashdot - 19 hours 9 min ago
An anonymous reader quotes a report from Ars Technica: AT&T's mandatory-arbitration clause is unenforceable in a class-action case over AT&T's throttling of unlimited data, a panel of U.S. appeals court judges ruled this week. The nearly five-year-old case has gone through twists and turns, with AT&T's forced-arbitration clause initially being upheld in March 2016. If that decision had stood, the customers would have been forced to have any complaints heard individually in arbitration. But an April 2017 decision by the California Supreme Court in a different case effectively changed the state's arbitration law, causing a U.S. District Court judge to revive the class action in March 2018. AT&T appealed that ruling to the U.S. Court of Appeals for the Ninth Circuit, but a three-judge panel at that court rejected AT&T's appeal in a ruling issued Tuesday. Judges said they must follow the California Supreme Court decision -- known as the McGill rule -- which held that an agreement, like AT&T's, that waives public injunctive relief in any forum is "contrary to California public policy and unenforceable." AT&T claimed that the Federal Arbitration Act preempts the California law, but the appeals court had already ruled in Blair [another case involving the McGill rule] that this federal law doesn't preempt the McGill rule. The judges were also not persuaded by AT&T's argument that the court "abused its discretion in reconsidering its initial order compelling arbitration."

Read more of this story at Slashdot.

Slickwraps Data Breach Exposes Financial and Customer Info

Slashdot - 19 hours 29 min ago
Slickwraps, a mobile device case retailer, has suffered a major data breach exposing employee resumes, personal customer information, API credentials, and more. Bleeping Computer reports: In a post to Medium, a security researcher named Lynx states that in January 2020 he was able to gain full access to the Slickwraps web site using a path traversal vulnerability in an upload script used for case customizations. Using this access, Lynx stated that they were allegedly able to gain access to the resumes of employees, 9GB of personal customer photos, ZenDesk ticketing system, API credentials, and personal customer information such as hashed passwords, addresses, email addresses, phone numbers, and transactions. After trying to report these breaches to Slickwraps, Lynx stated they were blocked multiple times even when stating they did not want a bounty, but rather for Slickwraps to disclose the data breach. "They had no interest in accepting security advice from me. They simply blocked and ignored me," Lynx stated in the Medium post. This post has since been taken down by Medium, but is still available via archive.org. Since posting his Medium post, Lynx told BleepingComputer that another unauthorized user sent an email to 377,428 customers using Slickwraps' ZenDesk help desk system. These emails begin with "If you're reading this it's too late, we have your data" and then link to Lynx's Medium post. [...] In a statement posted to their Twitter account, Slickwraps CEO Jonathan Endicott has apologized for the data breach and promises to do better in the future. In the statement, though, Endicott says they first learned about this today, February 21st, while Lynx stated and showed screenshots of attempts to contact both Endicott via email and Slickwraps on Twitter prior to today.
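For readers unfamiliar with the vulnerability class involved: a path traversal bug lets a crafted filename (e.g. one containing "../" components) escape the directory an upload script intends to write into. A minimal sketch of the standard defense, with hypothetical names (this is not Slickwraps' actual code):

```python
import os

# Hypothetical upload root; production code should additionally resolve
# symlinks (os.path.realpath) before checking containment.
UPLOAD_ROOT = "/var/www/uploads"

def safe_upload_path(filename: str) -> str:
    # Normalize the joined path so ".." components are collapsed, then
    # verify the result still lies inside the upload root.
    candidate = os.path.normpath(os.path.join(UPLOAD_ROOT, filename))
    if os.path.commonpath([candidate, UPLOAD_ROOT]) != UPLOAD_ROOT:
        raise ValueError(f"path traversal rejected: {filename!r}")
    return candidate

print(safe_upload_path("case.png"))      # /var/www/uploads/case.png
# safe_upload_path("../../etc/passwd")   # would raise ValueError
```

An upload script that skips this containment check, as alleged here, hands an attacker read or write access to arbitrary files the web server can reach.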

Read more of this story at Slashdot.

FBI Recommends Passphrases Over Password Complexity

Slashdot - Fri, 21/02/2020 - 11:40pm
An anonymous reader shares a report: For more than a decade now, security experts have had discussions about what's the best way of choosing passwords for online accounts. There's one camp that argues for password complexity by adding numbers, uppercase letters, and special characters, and then there's the other camp, arguing for password length by making passwords longer. This week, in its weekly tech advice column known as Tech Tuesday, the FBI Portland office leaned on the side of longer passwords. "Instead of using a short, complex password that is hard to remember, consider using a longer passphrase," the FBI said. "This involves combining multiple words into a long string of at least 15 characters," it added. "The extra length of a passphrase makes it harder to crack while also making it easier for you to remember."
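The FBI's suggestion is easy to automate. A minimal sketch using Python's `secrets` module for cryptographically secure choices (the ten-word list is illustrative only; a real generator would draw from a large wordlist such as EFF's diceware lists):

```python
import secrets

# Illustrative tiny wordlist; real generators use thousands of words.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "velvet",
         "granite", "meadow", "copper", "lantern"]

def make_passphrase(n_words: int = 4, sep: str = "-") -> str:
    # Join randomly chosen words, then pad with extra words if the
    # result falls short of the 15-character minimum the FBI suggests.
    phrase = sep.join(secrets.choice(WORDS) for _ in range(n_words))
    while len(phrase) < 15:
        phrase += sep + secrets.choice(WORDS)
    return phrase

print(make_passphrase())
```

The resulting strings are long (hence slow to brute-force) but far easier to remember than an equivalent-strength string of random symbols.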

Read more of this story at Slashdot.

More Bosses Give 4-Day Workweek A Try

Slashdot - Fri, 21/02/2020 - 11:05pm
Companies around the world are embracing what might seem like a radical idea: a four-day workweek. From a report: The concept is gaining ground in places as varied as New Zealand and Russia, and it's making inroads among some American companies. Employers are seeing surprising benefits, including higher sales and profits. The idea of a four-day workweek might sound crazy, especially in America, where the number of hours worked has been climbing and where cellphones and email remind us of our jobs 24/7. But in some places, the four-day concept is taking off like a viral meme. Many employers aren't just moving to 10-hour shifts, four days a week, as companies like Shake Shack are doing; they're going to a 32-hour week -- without cutting pay. In exchange, employers are asking their workers to get their jobs done in a compressed amount of time. Last month, a Washington state senator introduced a bill to reduce the standard workweek to 32 hours. Russian Prime Minister Dmitry Medvedev is backing a parliamentary proposal to shift to a four-day week. Politicians in Britain and Finland are considering something similar. In the U.S., Shake Shack started testing the idea a year and a half ago. The burger chain shortened managers' workweeks to four days at some stores and found that recruitment spiked, especially among women. Shake Shack's president, Tara Comonte, says the staff loved the perk: "Being able to take their kids to school a day a week, or one day less of having to pay for day care, for example." So the company recently expanded its trial to a third of its 164 U.S. stores. Offering that benefit required Shake Shack to find time savings elsewhere, so it switched to computer software to track supplies of ground beef, for example.

Read more of this story at Slashdot.

Finnish City Espoo Pioneers Civic AI With Education and Explainability

Slashdot - Fri, 21/02/2020 - 10:25pm
While civic leaders believe AI could help reinvent government services, they are also aware of citizens' profound privacy concerns. To navigate this challenge, the Finnish city of Espoo is conducting experiments that mix consultations, transparency, and limited use cases to demonstrate the potential of civic AI. From a report: Espoo has already conducted AI trials that initially required overcoming technical hurdles but ultimately improved city services. Over the long-term, the city is crafting a model that places ethics at the center of its AI plans by ensuring citizens can understand how these systems work and participate in debates about their implementation. Though the plan is still very much in its early stages, the city hopes to blaze a trail that other governments can follow. "I think Finns trust the government and the public sector more than [citizens] in any country in Europe," said Tomas Lehtinen, data analyst consultant for Espoo. "We wanted to keep that trust in the future. And so we wanted to be transparent about this project for citizens, but also because many of our employees don't understand AI."

Read more of this story at Slashdot.

150K Nature Illustrations Spanning Hundreds of Years Are Now Free Online

Slashdot - Fri, 21/02/2020 - 9:45pm
The Biodiversity Heritage Library (BHL) has uploaded more than 150,000 images of biological sketches, some dating back to the 15th century, onto the internet. A report adds: They're all in the public domain, and free for anyone who wants them. The images are pulled from journals, research material, and libraries, altogether more than 55 million pages of literature. BHL is "the world's largest open access digital library for biodiversity literature and archives," according to its website. On top of public domain content, BHL also works with rights holders to get permission to make copyrighted materials available under a Creative Commons license.

Read more of this story at Slashdot.

Global Telcos Join Alphabet, SoftBank's Flying Cellphone Antenna Lobbying Effort

Slashdot - Fri, 21/02/2020 - 9:01pm
Alphabet and SoftBank's attempts to launch flying cellphone antennas high into the atmosphere have received backing from global telcos, energizing lobbying efforts aimed at driving regulatory approval for the emerging technology. From a report: Loon, which was spun out of Google parent Alphabet's business incubator, and HAPSMobile, a unit of SoftBank Group's domestic telco, plan to deliver high speed internet to remote areas by flying network equipment at high altitudes. Lobbying efforts by the two firms, which formed an alliance last year, are being joined by companies including aerospace firm Airbus, network vendors Nokia and Ericsson and telcos China Telecom, Deutsche Telekom, Telefonica and Bharti Airtel.

Read more of this story at Slashdot.

The CIA Won't Admit It Uses Slack

Slashdot - Fri, 21/02/2020 - 8:21pm
Given its traditional missions, which include subverting democracy around the world and providing U.S. leaders with unreliable intelligence analysis, it's understandable that the Central Intelligence Agency would be among our less transparent federal agencies. From a report: Now, though, it's gripping even more tightly to inconsequential information about what it gets up to than the ultra-secretive National Security Agency -- and for no evident reason. Last year, VICE filed a Freedom of Information Act request asking for any Slack domains in use by the CIA. The NSA, responding to a similar request, admitted that it had records responsive to the request -- that the agency uses the demonic chat app, in other words -- but said it couldn't release them because they were a state secret. Recently, the CIA replied to our request by saying this: "CIA can neither confirm nor deny the existence or nonexistence of records responsive to your request. The fact of the existence or nonexistence of such records is itself currently and properly classified." In its response to our request, the CIA cited broad provisions in federal law that allow it to keep all sorts of information from the public by claiming it has to do with "intelligence sources and methods," which can mean anything from the identity of a spy in a foreign leader's inner circle to the podcasts a random bureaucrat listens to while driving to work. The agency is within its rights to do this, but it's just another in a long list of examples of why federal classification laws should be changed to give more weight to the public's right to get answers to even stupid questions relative to the right of public employees to keep what they do and how they do it entirely secret.

Read more of this story at Slashdot.

next-20200221: linux-next

Kernel Linux - Fri, 21/02/2020 - 4:42am
Version: next-20200221 (linux-next). Released: 2020-02-21

Felipe Borges: Try the GNOME Nightly VM images with GNOME Boxes

Planet GNOME - Thu, 20/02/2020 - 4:52pm

It was a long time overdue but we now have bootable VM images for GNOME again. These VMs are good for testing and documenting new features before they reach distros.

To provide the best experience in terms of performance and host-guest integration, we landed in BoxesDevel (Nightly GNOME Boxes) an option to create GNOME VMs with the correct device drivers and configurations assigned to it. You know…the Boxes way.

Installing GNOME Boxes (Nightly)

1. Set up our nightlies Flatpak repository:

flatpak remote-add --if-not-exists gnome-nightly https://nightly.gnome.org/gnome-nightly.flatpakrepo

2. Install Boxes

flatpak install gnome-nightly org.gnome.BoxesDevel

Testing the GNOME VM image

1. Download a recent VM snapshot (linked on the unstable release announcements). It is a qcow2 file.

2. Open the new VM dialog in Boxes and click on the “GNOME Nightly” entry in the Featured Downloads section. It will open a file chooser.

3. After selecting the qcow2 file downloaded in step one, you can continue to Create a VM. Once the creation is over, you will be able to start the VM by clicking on it in the icon view.

Future developments

We haven’t reached a consensus yet on how we are going to distribute/store/host these VM images, which is why we have the extra step above, requiring you to pick the file in a file chooser.

In the near future, we will host the images and you will be able to download them directly from GNOME Boxes.

Also, the latest image as of today (3.35.91) doesn’t come with spice-vdagent. It should be included in the next builds, allowing for maximum host-guest integration, like dragging and dropping files from host to guest, automatic resolution switching, etc…

This is just the beginning. Stay tuned!

Peter Hutterer: A tale of missing touches

Planet GNOME - Thu, 20/02/2020 - 8:39am

libinput 1.15.1 had a new feature: it matched the expected touch count with the one actually seen as opposed to the one advertised by the kernel. That is good news for ALPS devices whose kernel driver lies about their capabilities because these days who doesn't. However, in some cases that feature had the side-effect of reducing the touch count to zero - meaning libinput would ignore any touch. This caused a slight UX degradation.

After a bit of debugging and/or cursing, the issue was identified as a libevdev issue, specifically - the way libevdev replays events after a SYN_DROPPED event. And after several days of fixing things, adding stuff to the CI and adding meson support for libevdev so the CI can actually run a few useful things, it's time for a blog post to brain-dump and possibly entertain the occasional reader such as you are. Congratulations, I guess.

The Linux kernel's evdev protocol is a serial protocol where all events have a type, a code and a value. Events are grouped by EV_SYN.SYN_REPORT events, so the event type is EV_SYN (0), the event code is SYN_REPORT (also 0). The value is usually (but not always), you guessed it, zero. A SYN_REPORT signals that the current event sequence (also called a "frame") is to be interpreted as one hardware event [0]. In the simplest case, two hardware events from a mouse could look like this:


EV_REL REL_X 1
EV_SYN SYN_REPORT 0
EV_REL REL_X 1
EV_REL REL_Y 1
EV_SYN SYN_REPORT 0
While we have five evdev events here, those represent one hardware event with an x movement of 1 and a second hardware event with a diagonal movement by 1/1. Glorious, we all understand evdev now (if not, read this and immediately afterwards this, although that second post will be rather reinforced by this post).
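The grouping just described is mechanical: a reader collects (type, code, value) tuples until it sees an EV_SYN/SYN_REPORT, which closes the current frame. A minimal sketch of that loop (event codes taken from linux/input-event-codes.h; the parsing function itself is a hypothetical illustration, not libevdev's API):

```python
# Constants from linux/input-event-codes.h
EV_SYN, EV_REL = 0x00, 0x02
SYN_REPORT = 0
REL_X, REL_Y = 0x00, 0x01

def split_frames(events):
    """Group a serial evdev stream into hardware event frames."""
    frame, frames = [], []
    for type_, code, value in events:
        if type_ == EV_SYN and code == SYN_REPORT:
            frames.append(frame)   # SYN_REPORT closes the frame
            frame = []
        else:
            frame.append((type_, code, value))
    return frames

# The five events from the mouse example above:
stream = [
    (EV_REL, REL_X, 1), (EV_SYN, SYN_REPORT, 0),
    (EV_REL, REL_X, 1), (EV_REL, REL_Y, 1), (EV_SYN, SYN_REPORT, 0),
]
print(split_frames(stream))  # two frames: [x+1] and [x+1, y+1]
```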

Life as a software developer would be quite trivial but our universe hates us and we need an extra event code called SYN_DROPPED. This event is used by the kernel when events from the device come in faster than you're reading them. This shouldn't happen given that most input devices scan out at the casual rate of every 7ms or slower and we're not exactly running on carrier pigeons here. But your compositor has been a busy bee rendering all these browser windows containing kitten videos and thus completely neglected to check whether you've moved the finger on the touchpad recently. So the kernel sort-of clears the current event buffer and positions a shiny steaming SYN_DROPPED in there to notify the compositor of its wrongdoing. [1]

Now, we could assume that every evdev client (libinput, every Xorg driver, ...) knows how to handle SYN_DROPPED events correctly but we're self-aware enough that we don't. So SYN_DROPPED handling is wrapped via libevdev, in a way that lets the clients use almost exactly the same processing paths they use for normal events. libevdev gives you a notification that a SYN_DROPPED occurred, then you fetch the events one-by-one until libevdev tells you you have the complete current state of the device, and back to kittens you go. In pseudo-code, your input stack's event loop works like this:


while (user_wants_kittens):
    event = libevdev_get_event()

    if event is a SYN_DROPPED:
        while (libevdev_is_still_synchronizing):
            event = libevdev_get_event()
            process_event(event)
    else:
        process_event(event)
Now, this works great for keys where you get the required events to release or press new keys. This works great for relative axes because meh, who cares [2]. This works great for absolute axes because you just get the current state of the device and done. This works great for touch because, no wait, that bit is awful.

You see, the multi-touch protocol is ... special. It uses the absolute axes, but it also multiplexes over those axes via the slot protocol. A normal two-touch event looks like this:


EV_ABS ABS_MT_SLOT 0
EV_ABS ABS_MT_POSITION_X 123
EV_ABS ABS_MT_SLOT 1
EV_ABS ABS_MT_POSITION_X 456
EV_ABS ABS_MT_POSITION_Y 789
EV_ABS ABS_X 123
EV_SYN SYN_REPORT 0

The first two evdev events are slot 0 (first touch [3]); the next three are slot 1 (second touch [3]). Both touches update their X position, but the second touch also updates its Y position. But for single-touch emulation we also get the normal absolute axis event [3], which is equivalent to the first touch [3] and can be ignored if you're handling the MT axes [3] (I'm getting a lot of mileage out of that footnote). And because things aren't confusing enough: events within an evdev frame are position-independent, except for the ABS_MT axes, which need to be processed in sequence. So the ABS_X event could be anywhere within that frame, but the ABS_MT axes need to be grouped by slot.
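To make the slot latching concrete, here is a minimal state tracker that can be fed the frame above. This is a hand-rolled sketch, not libinput code; the #define values are the real ones from linux/input-event-codes.h:

```c
#include <assert.h>

/* Real values from linux/input-event-codes.h */
#define EV_ABS            0x03
#define ABS_X             0x00
#define ABS_MT_SLOT       0x2f
#define ABS_MT_POSITION_X 0x35
#define ABS_MT_POSITION_Y 0x36

struct mt_state {
    int current_slot;  /* slots are stateful: latched until changed */
    int x[2], y[2];    /* per-slot positions; 2 slots for the example */
};

static void mt_process(struct mt_state *s, int type, int code, int value)
{
    if (type != EV_ABS)
        return;
    switch (code) {
    case ABS_MT_SLOT:
        s->current_slot = value; /* all MT axes below apply to this slot */
        break;
    case ABS_MT_POSITION_X:
        s->x[s->current_slot] = value;
        break;
    case ABS_MT_POSITION_Y:
        s->y[s->current_slot] = value;
        break;
    case ABS_X:
        /* single-touch emulation, mirrors the first touch: ignored
         * here because we handle the MT axes */
        break;
    }
}
```

Feeding it the six events of the example frame in order leaves slot 0 at X 123 and slot 1 at X 456 / Y 789, which is exactly the per-slot grouping the protocol requires.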

About that single-touch emulation... We also have a single-touch multi-touch protocol via EV_KEY. For devices that can only track N fingers but can detect N+M fingers, we have a set of BTN_TOOL defines. Two fingers down sets BTN_TOOL_DOUBLETAP, three fingers down sets BTN_TOOL_TRIPLETAP, etc. Those are just a bitfield though, so no position data is available. And it tops out at BTN_TOOL_QUINTTAP, but then again, that's a good maximum backed by a lot of statistical samples from users' hands. On many devices, we have to combine that single-touch MT protocol with the real MT protocol. Synaptics touchpads on PS/2 only support 2 finger positions but detect up to 5 touches otherwise [4]. And remember the ALPS devices? They say they have 4 slots but may only send data for two or three, so we have to detect this at runtime and switch to the BTN_TOOL bits for some touches.
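The finger-count side of that protocol is simple enough to write down. A sketch; the constants are the real values from linux/input-event-codes.h, including BTN_TOOL_QUINTTAP's out-of-sequence value:

```c
#include <assert.h>

/* Real values from linux/input-event-codes.h */
#define BTN_TOOL_FINGER    0x145
#define BTN_TOOL_QUINTTAP  0x148 /* out of sequence in the header */
#define BTN_TOOL_DOUBLETAP 0x14d
#define BTN_TOOL_TRIPLETAP 0x14e
#define BTN_TOOL_QUADTAP   0x14f

/* Map the currently-set BTN_TOOL bit to a finger count.
 * Only one of these is supposed to be down at a time; there is no
 * position data, just the count. */
static int btn_tool_finger_count(int code)
{
    switch (code) {
    case BTN_TOOL_FINGER:    return 1;
    case BTN_TOOL_DOUBLETAP: return 2;
    case BTN_TOOL_TRIPLETAP: return 3;
    case BTN_TOOL_QUADTAP:   return 4;
    case BTN_TOOL_QUINTTAP:  return 5;
    default:                 return 0;
    }
}
```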

So anyway, now that we unfortunately all understand the MT protocol(s), let's look at that libevdev bug. libevdev checks the slot states after SYN_DROPPED to detect whether any touch has stopped or started during SYN_DROPPED. It also detects whether a touch has changed, i.e. the user lifted the finger(s) and put the finger(s) down again while SYN_DROPPED was happening. For those touches it generates the events to stop the original touch, then events to start the new touch. This needs to be done over two event frames, i.e. with a SYN_REPORT in between [5]. But the implementation ended up splitting those changes: any touch that changed was terminated in the first event frame, any touch that outright stopped was terminated in the second event frame. That in itself wasn't the problem yet; the problem was that libevdev didn't emulate the single-touch multi-touch protocol with those emulated frames. So we ended up with event frames where slots would terminate but the single-touch protocol didn't update until a frame later.
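What a correct synchronization has to emit for a changed touch can be sketched like this. Hypothetical helper names, not the actual libevdev code, and for brevity it omits the matching BTN_TOOL updates that also belong in those frames; the constants are the real kernel values:

```c
#include <assert.h>

/* Real values from linux/input-event-codes.h */
#define EV_SYN             0x00
#define EV_ABS             0x03
#define SYN_REPORT         0
#define ABS_MT_SLOT        0x2f
#define ABS_MT_TRACKING_ID 0x39

struct ev { int type; int code; int value; };

/* Emit the two event frames needed to restart a touch that changed
 * during SYN_DROPPED: end the old touch in one frame, start the new
 * one in the next. A tracking id of -1 terminates a touch.
 * Returns the number of events written. */
static int restart_touch(struct ev *out, int slot, int new_tracking_id)
{
    int n = 0;
    /* frame 1: terminate the original touch */
    out[n++] = (struct ev){ EV_ABS, ABS_MT_SLOT, slot };
    out[n++] = (struct ev){ EV_ABS, ABS_MT_TRACKING_ID, -1 };
    out[n++] = (struct ev){ EV_SYN, SYN_REPORT, 0 };
    /* frame 2: start the new touch in the same slot */
    out[n++] = (struct ev){ EV_ABS, ABS_MT_SLOT, slot };
    out[n++] = (struct ev){ EV_ABS, ABS_MT_TRACKING_ID, new_tracking_id };
    out[n++] = (struct ev){ EV_SYN, SYN_REPORT, 0 };
    return n;
}
```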

This doesn't matter for most users. Both protocols were still correct enough in their own bubble; it's only once you start mixing protocols that things get wonky. libinput does this because it has to - too many devices out there only track two fingers. So if you want three-finger tapping and pinch gestures, you need to handle both protocols. Despite this, we didn't notice until we added the quirk for ALPS devices. Because now libinput sometimes noticed that after a SYN_DROPPED there were no fingers on the touchpad (because they all stopped/changed) but the BTN_TOOL bits were still on, so clearly we have a touchpad that cannot track all the fingers it detects - in this case zero. [6]

So to recap: libinput's auto-adjustment of the touch count for buggy touchpad devices failed thanks to libevdev's buggy workaround of the device sync. The device sync we need because we can't rely on userspace handling touches correctly across SYN_DROPPED. An event which only gets triggered because the compositor is too buggy to read input events in time. I don't know how to describe it exactly, but what I can see all the way down are definitely not turtles.

And the sad thing about it: if we didn't try to correct for the firmware and accepted that gestures are just broken on ALPS devices because the kernel driver is lying to us, none of the above would have mattered. Likewise, the old xorg synaptics driver won't be affected by this because it doesn't handle multitouch properly anyway, so it doesn't need to care about these discrepancies. Or, in other words and much like real life: the better you try to be, the worse it all gets.

And as the take-home lesson: do upgrade to libinput 1.15.2 and do upgrade to libevdev 1.9.0 when it's out. Your kittens won't care but at least that way it won't make me feel like I've done all this work in vain.

[0] Unless the SYN_REPORT value is nonzero but let's not confuse everyone more than necessary
[1] A SYN_DROPPED is per userspace client, so a debugging tool reading from the same event node may not see that event unless it too is busy with feline renderings.
[2] yes, you'll get pointer jumps because event data is missing but since you've been staring at those bloody cats anyway, you probably didn't even notice
[3] usually, but not always
[4] on those devices, identifying a 3-finger pinch gesture only works if you put the fingers down in the correct order
[5] historical reasons: in theory a touch could change directly but most userspace can't handle it and it's too much effort to add now
[6] libinput 1.15.2 leaves you with 1 finger in that case and that's good enough until libevdev is released

Matthew Garrett: What usage restrictions can we place in a free software license?

Planet GNOME - Enj, 20/02/2020 - 1:45pd
Growing awareness of the wider social and political impact of software development has led to efforts to write licenses that prevent software being used to engage in acts that are seen as socially harmful, with the Hippocratic License being perhaps the most discussed example (although the JSON license's requirement that the software be used for good, not evil, is arguably an earlier version of the theme). The problem with these licenses is that they're pretty much universally considered to fall outside the definition of free software or open source licenses due to their restrictions on use, and there's a whole bunch of people who have very strong feelings that this is a very important thing. There's also the more fundamental underlying point that it's hard to write a license like this where everyone agrees on whether a specific thing is bad or not (eg, while many people working on a project may feel that it's reasonable to prohibit the software being used to support drone strikes, others may feel that the project shouldn't have a position on the use of the software to support drone strikes and some may even feel that some people should be the victims of drone strikes). This is, it turns out, all quite complicated.

But there is something that many (but not all) people in the free software community agree on - certain restrictions are legitimate if they ultimately provide more freedom. Traditionally this was limited to restrictions on distribution (eg, the GPL requires that your recipient be able to obtain corresponding source code, and for GPLv3 must also be able to obtain the necessary signing keys to be able to replace it in covered devices), but more recently there's been some restrictions that don't require distribution. The best known is probably the clause in the Affero GPL (or AGPL) that requires that users interacting with covered code over a network be able to download the source code, but the Cryptographic Autonomy License (recently approved as an Open Source license) goes further and requires that users be able to obtain their data in order to self-host an equivalent instance.

We can construct examples of where these prevent certain fields of endeavour, but the tradeoff has been deemed worth it - the benefits to user freedom that these licenses provide are greater than the corresponding cost to what you can do. How far can that tradeoff be pushed? So, here's a thought experiment. What if we write a license that's something like the following:

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. All permissions granted by this license must be passed on to all recipients of modified or unmodified versions of this work
2. This work may not be used in any way that impairs any individual's ability to exercise the permissions granted by this license, whether or not they have received a copy of the covered work

This feels like the logical extreme of the argument. Any way you could use the covered work that would restrict someone else's ability to do the same is prohibited. This means that, for example, you couldn't use the software to implement a DRM mechanism that the user couldn't replace (along the lines of GPLv3's anti-Tivoisation clause), but it would also mean that you couldn't use the software to kill someone with a drone (doing so would impair their ability to make use of the software). The net effect is along the lines of the Hippocratic license, but it's framed in a way that is focused on user freedom.

To be clear, I don't think this is a good license - it has a bunch of unfortunate consequences like it being impossible to use covered code in self-defence if doing so would impair your attacker's ability to use the software. I'm not advocating this as a solution to anything. But I am interested in seeing whether the perception of the argument changes when we refocus it on user freedom as opposed to an independent ethical goal.

Thoughts?

Edit:

Rich Felker on Twitter had an interesting thought - if clause 2 above is replaced with:

2. Your rights under this license terminate if you impair any individual's ability to exercise the permissions granted by this license, even if the covered work is not used to do so

how does that change things? My gut feeling is that covering actions that are unrelated to the use of the software might be a reach too far, but it gets away from the idea that it's your use of the software that triggers the clause.

comments

5.5.5: stable

Kernel Linux - Mër, 19/02/2020 - 7:54md
Version: 5.5.5 (stable)
Released: 2020-02-19
Source: linux-5.5.5.tar.xz
PGP Signature: linux-5.5.5.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-5.5.5
