
Feed aggregator

Crypto-Driven GPU Crash Makes Nvidia Miss Q2 Projections By $1.4 Billion

Slashdot - Mon, 08/08/2022 - 11:30pm
In preliminary second-quarter financial results announced today, Nvidia's year-over-year growth is "down from a previously forecasted $8.1 billion, a miss of $1.4 billion," reports Ars Technica. "Nvidia blamed this shortfall on weaker-than-expected demand for its gaming products, including its GeForce graphics processors." The full results won't arrive until the end of the month. From the report: Nvidia pointed to "a reduction in channel partner sales," meaning that partners like Evga, MSI, Asus, Zotac, Gigabyte, and others were selling fewer new GPUs than anticipated. This drop can be attributed partly to a crash in the value of mining-based cryptocurrencies like Bitcoin and Ethereum -- fewer miners are buying these cards, and miners looking to unload their GPUs on the secondhand market are also giving gamers a cheaper source for graphics cards. "As we expect the macroeconomic conditions affecting sell-through to continue, we took actions with our Gaming partners to adjust channel prices and inventory," said Nvidia CEO Jensen Huang. That means we may see further price drops for existing GeForce GPUs, which have already been dropping in price throughout the year. Some cards still haven't reverted to their originally advertised prices, but they're getting closer all the time. In better news for Nvidia, the small overall increase in revenue [$6.7 billion] is driven almost exclusively by the company's data center business, including GPU-accelerated AI and machine learning applications and GPU acceleration for cloud-hosted virtual machines. Nvidia's data center revenue is projected to be up 61 percent from last year, from $2.37 billion to $3.81 billion. Nvidia will supposedly launch its next-generation RTX 4000 series GPUs later this year. 
Based on the new Lovelace architecture, these GPUs may appeal to some gamers who originally sat out the RTX 3000 series due to shortages and inflated prices and are now avoiding the GPUs because they know a replacement is around the corner.

Read more of this story at Slashdot.

Amazon's Roomba Deal Is Really About Mapping Your Home

Slashdot - Mon, 08/08/2022 - 10:50pm
An anonymous reader quotes a report from Bloomberg: Amazon hasn't just bought a maker of robot vacuum cleaners. It's acquired a mapping company. To be more precise: a company that can make maps of your home. The company announced a $1.7 billion deal on Friday for iRobot, the maker of the Roomba vacuum cleaner. And yes, Amazon will make money from selling those gadgets. But the real value resides in those robots' ability to map your house. As ever with Amazon, it's all about the data. A smart home, you see, isn't actually terribly smart. It only knows that your Philips Hue lightbulbs and connected television are in your sitting room because you've told it as much. It certainly doesn't know where exactly the devices are within that room. The more it knows about a given space, the more tightly it can choreograph the way they interact with you. The smart home is clearly a priority for Amazon. Its Echo smart speakers still outsell those from rivals Apple and Google, with an estimated 9.9 million units sold in the three months through March, according to the analysis firm Strategy Analytics. It's complemented that with a $1 billion deal for the video doorbell-maker Ring in 2018, and the wi-fi company Eero a year later. But you still can't readily buy the Astro, Amazon's household robot that was revealed with some fanfare last year; it's still only available in limited quantities. That, too, seemed at least partly an effort to map the inside of your property, a task that will now fall to iRobot. The Bedford, Mass.-based company's most recent products include a technology it calls Smart Maps, though customers can opt out of sharing the data. Amazon said in a statement that protecting customer data is "incredibly important." Slightly more terrifying, the maps also represent a wealth of data for marketers. The size of your house is a pretty good proxy for your wealth. A floor covered in toys means you likely have kids.
A household without much furniture is a household to which you can try to sell more furniture. This is all useful intel for a company such as Amazon which, you may have noticed, is in the business of selling stuff.

Read more of this story at Slashdot.

Debian Day 2022 - call for celebration

Bits from Debian - Mon, 08/08/2022 - 5:00pm

Every year on August 16th, the anniversary of the Debian Project takes place. And several communities around the world celebrate this date by organizing local meetings in an event called "Debian Day".

So, how about celebrating the 29th anniversary of the Debian Project in 2022 in your city?

We invite you and your local community to organize Debian Day by hosting an event with talks, workshops, bug squashing party, OpenPGP keysigning, etc. Or simply holding a meeting between people who like Debian in a bar/pizzeria/cafeteria/restaurant to celebrate. In other words, any type of meeting is valid!

But remember that the COVID-19 pandemic is not over yet, so take all necessary measures to protect attendees.

As the 16th of August falls on a Tuesday, if you think it's better to organize it during the weekend, no problem. The important thing is to celebrate the Debian Project.

Remember to add your city to the Debian Day wiki page

There is a list of Debian Local Groups around the world. If your city is listed, talk to them to organize Debian Day together.

Let's use hashtags #DebianDay #DebianDay2022 on social media.

Benefits & Drawbacks of Using a VPN on Linux - Mon, 08/08/2022 - 1:00pm
If you use Linux, whether it is just to browse the web, use it as a VPN server, or even if you use it to hack people (just kidding!), then it is pretty essential and worthwhile to understand the pros and cons of using a VPN for Linux. An effective VPN works by routing all your Internet traffic through another computer. This means that if you use the Internet with a VPN, the remote computer/server through which your traffic is routed becomes the apparent source of your data. In short, a VPN allows you to secure traffic between two locations, whether that be a VPN server you set up yourself at home, a location provided by a VPN provider, or even between your location and your work office! All your data traffic is routed through an encrypted virtual tunnel. With a VPN, not even your ISP or other third parties can see which websites you visit or the data you send and receive online. This article will explore the benefits and drawbacks of using a VPN on Linux.

next-20220808: linux-next

Kernel Linux - Mon, 08/08/2022 - 5:44am
Version: next-20220808 (linux-next) Released: 2022-08-08

Ownership of "" domain - Sun, 07/08/2022 - 12:00am
The World Intellectual Property Organization (WIPO), under its Uniform Domain-Name Dispute-Resolution Policy (UDRP), decided that ownership of the domain should be transferred to the Debian Project.

Windows Subsystem for Linux 0.65.1 is now live for all Insiders - Fri, 05/08/2022 - 1:54pm
Windows Subsystem for Linux (WSL) is a powerful piece of software wizardry that allows users to run GNU/Linux environments directly in Windows without requiring virtual machines (VMs) or dual-boot configurations. Available for both Windows 10 and Windows 11, it's a very handy utility, especially for cross-platform development and testing. Microsoft regularly updates WSL with new features and capabilities. Today, it has announced WSL version 0.65.1 for Insiders.

Tutanota Cries Antitrust Foul Over Microsoft Teams Blocking Sign-Ups For Its Email Users

Slashdot - Fri, 05/08/2022 - 1:20am
Microsoft is being called out for blocking users of the end-to-end encrypted email service Tutanota from registering an account with its cloud-based collaboration platform, Teams, if they try to do that using a Tutanota email address. TechCrunch reports: The problem, which has been going on unrectified for some time -- with an initial complaint raised with Microsoft support back in January 2021 -- appears to have arisen because it treats Tutanota as a corporate email, rather than what it actually is (and has always been), an email service. This misclassification means that when a Tutanota email user tries to use this email address to register an account with Teams they get a classic "computer says no" response -- with the interface blocking the registration and suggesting the person "contact your admin or try a different email." "When the first Tutanota user registered a Teams account, they were assigned the domain. That's why now everyone who logs in with a Tutanota address should report to their 'admin' (see screenshot)," explains a spokeswoman for Tutanota when asked why they think this is happening. To get past this denial -- and register a Teams account -- the Tutanota user has to enter a non-Tutanota email. (Such as, for example, a Microsoft email address.) In a blog post detailing the saga, Tutanota co-founder, Matthias Pfau, dubs Microsoft's behavior a "severe anti-competitive practice." "Politicians on both sides of the Atlantic are discussing stronger antitrust legislation to regulate Big Tech. These laws are badly needed as the example of Microsoft blocking Tutanota users from registering a Teams account demonstrates," he writes.
"The problem: Big Tech companies have the market power to harm smaller competitors with some very easy steps like refusing smaller companies' customers from using their own services." "This is just one example of how Microsoft can and does abuse its dominant market position to harm competitors, which in turn also harms consumers," he adds. [...] "As earlier discussed, we are unable to make your domain a public domain. The domain has already been used for Microsoft Teams. If teams have been used with a specific domain, it can't work as a vanity/public domain," runs another of Microsoft's support's shrugging-off responses. Tutanota kept on trying to press for a reason why Microsoft could not reclassify the domain for weeks -- but just hit the same brick wall denial. Hence it's going public with its complaint now. "The conversation went back and forth for at least six weeks until we finally gave up -- due to the repeated response that they would not change this," the spokeswoman added. In an update, a Microsoft spokesperson said: "We are currently looking into the issue raised by Tutanota."

Read more of this story at Slashdot.

Visa, Mastercard Suspend Payment For Ad Purchases On PornHub and MindGeek

Slashdot - Fri, 05/08/2022 - 12:40am
Visa and Mastercard said Thursday card payments for advertising on Pornhub and its parent company MindGeek would be suspended after a lawsuit stoked controversy over whether the payments giants could be facilitating child pornography. CNBC reports: A federal judge in California on Friday denied Visa's motion to dismiss a lawsuit by a woman who accuses the payment processor of knowingly facilitating the distribution of child pornography on Pornhub and other sites operated by parent company MindGeek. Visa CEO and Chairman Al Kelly said in a statement Thursday that he strongly disagrees with the court's decision and is confident in his position. "Visa condemns sex trafficking, sexual exploitation, and child sexual abuse," Kelly said. "It is illegal, and Visa does not permit the use of our network for illegal activity. Our rules explicitly and unequivocally prohibit the use of our products to pay for content that depicts nonconsensual sexual behavior or child sexual abuse. We are vigilant in our efforts to deter this, and other illegal activity on our network." Kelly said the court decision created uncertainty about the role of TrafficJunky, MindGeek's advertising arm, and accordingly, the company will suspend its Visa acceptance privileges until further notice. During this suspension, Visa cards will not be able to be used to purchase advertising on any sites, including Pornhub or other MindGeek-affiliated sites, Kelly said. "It is Visa's policy to follow the law of every country in which we do business. We do not make moral judgments on legal purchases made by consumers, and we respect the rightful role of lawmakers to make decisions about what is legal and what is not," Kelly said. "Visa can be used only at MindGeek studio sites that feature adult professional actors in legal adult entertainment." Separately, Mastercard told CNBC it's directing financial institutions to suspend acceptance of its products at TrafficJunky following the court ruling.
"New facts from last week's court ruling made us aware of advertising revenue outside of our view that appears to provide Pornhub with indirect funding," a statement from Mastercard said. "This step will further enforce our December 2020 decision to terminate the use of our products on that site." At that time, Visa also suspended sites that contained user-generated content and acceptance on those sites has not been reinstated.

Read more of this story at Slashdot.

The Founder of GeoCities On What Killed the 'Old Internet'

Slashdot - Fri, 05/08/2022 - 12:02am
An anonymous reader quotes a report from Gizmodo, written by Jody Serrano: In the early aughts, my wheezing dialup connection often operated as if it were perpetually out of breath. Thus, unlike my childhood friends, it was nearly impossible for me to watch videos, TV shows, or listen to music. Far from feeling limited, I felt like I was lucky, for I had access to an encyclopedia of lovingly curated pages about anything I wanted to know -- which in those days was anime -- the majority of which was conveniently located on GeoCities. For all the zoomers scrunching up their brows, here's a primer. Back in the 1990s, before the birth of modern web hosting household names like GoDaddy and WP Engine, it wasn't exactly easy or cheap to publish a personal website. This all changed when GeoCities came on the scene in 1994. The company gave anyone their own little space of the web if they wanted it, providing users with roughly 2 MB of space for free to create a website on any topic they wished. Millions took GeoCities up on its offer, creating their own homemade websites with web counters, flashing text, floating banners, auto-playing sound files, and Comic Sans. Unlike today's Wild Wild Internet, websites on GeoCities were organized into virtual neighborhoods, or communities, built around themes. "HotSprings" was dedicated to health and fitness, while "Area 51" was for sci-fi and fantasy nerds. There was a bottom-up focus on users and the content they created, a mirror of what the public internet was like in its infancy. Overall, at least 38 million webpages were built on GeoCities. At one point, it was the third most-visited domain online. Yahoo acquired GeoCities in 1999 for $3.6 billion. The company lived on for a decade more until Yahoo shut it down in 2009, deleting millions of sites. Nearly three decades have passed since GeoCities, founded by David Bohnett, made its debut, and there is no doubt that the internet is a very different place than it was then.
No longer filled with webpages on random subjects made by passionate folks, it now feels like we live in a cyberspace dominated by skyscrapers -- named Facebook, Google, Amazon, Twitter, and so on -- instead of neighborhoods. [...] We can, however, ask GeoCities' founder what he thinks of the internet of today, subsumed by social media networks, hate speech, and more corporate than ever. Bohnett now focuses on funding entrepreneurs through Baroda Ventures, an early-stage tech fund he founded, and on philanthropy with the David Bohnett Foundation, a nonprofit dedicated to social justice and social activism that he chairs. Right off the bat, Bohnett says something that strikes me. It may, in fact, be the sentence that summarizes the key distinction between the internet of the '90s-early 2000s and the internet we have today. "GeoCities was not about self-promotion," Bohnett told Gizmodo in an interview. "It was about sharing your interest and your knowledge." When asked to share his thoughts on the internet of today, Bohnett said: "... The heart of GeoCities was sharing your knowledge and passions about subjects with other people. It really wasn't about what you had to eat and where you've traveled. [...] It wasn't anything about your face." He added: "So, what has surprised me is how far away we've gotten from that original intent and how difficult it is [now]. It's so fractured these days for people to find individual communities. [...] I've been surprised at sort of the evolution away from self-generated content and more toward centralized programming and more toward sort of the self-promotion that we've seen on Facebook and Instagram and TikTok." Bohnett went on to say that he thinks it's important to remember that "the pace of innovation on the internet continues to accelerate, meaning we're not near done. In the early days when you had dial up and it was the desktop, how could you possibly envision an Uber?"
"We're still in that trajectory where there's going to be various technologies and ways of communicating with each other, [as well as] wearable devices, blockchain technology, virtual reality, that will be as astounding as Uber seemed in the early days of GeoCities," added Bohnett. "I'm very, very excited about the future, which is why I continue to invest in early-stage startups because as I say, the pace of innovation accelerates and builds on top of itself. It's so exciting to see where we might go."

Read more of this story at Slashdot.

Philippines Legislator Offers Up Bill That Would Criminalize 'Ghosting'

Slashdot - Thu, 04/08/2022 - 11:25pm
An anonymous reader shares a report: Real problems are what legislators are supposed to be solving. The Philippines has plenty of those, ranging from (government-endorsed) extrajudicial killings of drug dealers and drug users to abuses of state power to silence journalists to the actual murders of human rights activists. But legislators with their own axes to grind will always find ways to hone this edge, even if it means subjecting themselves to international ridicule. Enter Representative Arnolfo "Arnie" Teves, Jr. The rep has introduced a bill that would criminalize the act of "ghosting." For those unfamiliar with internet slang, it may appear Teves is trying to criminalize the act of being a ghost. (Webster's Ye Olde English Dictionary, perhaps.) But ghosts actually engage in "haunting," which is not the same thing as "ghosting." Ghosting is something else. Ghosting is disengaging from a relationship (short-term or long-term) by ignoring all calls, IMs, text messages, emails, etc. from a paramour until the problem ultimately solves itself. If one interested person can't get a response from a disinterested person, sooner or later the interested person stops trying.

Read more of this story at Slashdot.

Equifax Issued Wrong Credit Scores For Millions of Consumers

Slashdot - Thu, 04/08/2022 - 10:45pm
Credit giant Equifax sent lenders incorrect credit scores for millions of consumers this spring, in a technology snafu with major real-world impact. From a report: In certain cases the errors were significant enough -- the differential was at least 25 points for around 300,000 consumers -- that some would-be borrowers may have been wrongfully denied credit, the company said in a statement. The problem occurred because of a "coding issue" when making a change to one of Equifax's servers, according to the company, which said the issue "was in place over a period of a few weeks [and] resulted in the potential miscalculation" of credit scores. While Equifax did not specify dates or figures, a June 1 alert from housing agency Freddie Mac to its clients said Equifax told the agency that about 12% of all credit scores released from March 17 to April 6 may have been incorrect. Equifax wrote that "there was no shift in the vast majority of scores" and that "credit reports were not affected." But the company declined to comment to CNN Business about how people can learn whether they were among those whose credit scores were incorrectly reported -- and what recourse they may have if they were issued loans at a higher rate or denied a loan outright because of the snafu.

Read more of this story at Slashdot.

US Officials Declare Monkeypox a Public Health Emergency

Slashdot - Thu, 04/08/2022 - 10:05pm
The Biden administration declared monkeypox a public health emergency on Thursday as cases topped 6,600 nationwide. From a report: The declaration could facilitate access to emergency funds, allow health agencies to collect more data about cases and vaccinations, accelerate vaccine distribution and make it easier for doctors to prescribe treatment. "We're prepared to take our response to the next level in addressing this virus and we urge every American to take monkeypox seriously and to take responsibility to help us tackle this virus," Department of Health and Human Services Secretary Xavier Becerra said in a Thursday briefing about the emergency declaration. A quarter of U.S. cases are in New York state, which declared a state of emergency last week. California and Illinois followed suit with emergency declarations Monday.

Read more of this story at Slashdot.

GitLab Plans To Delete Dormant Projects in Free Accounts

Slashdot - Thu, 04/08/2022 - 9:19pm
GitLab plans to automatically delete projects if they've been inactive for a year and are owned by users of its free tier, The Register reported Thursday. From the report: The Register has learned that such projects account for up to a quarter of GitLab's hosting costs, and that the auto-deletion of projects could save the cloudy coding collaboration service up to $1 million a year. The policy has therefore been suggested to help GitLab's finances remain sustainable. People with knowledge of the situation, who requested anonymity as they are not authorized to discuss it with the media, told The Register the policy is scheduled to come into force in September 2022. GitLab is aware of the potential for angry opposition to the plan, and will therefore give users weeks or months of warning before deleting their work. A single comment, commit, or new issue posted to a project during a 12-month period will be sufficient to keep the project alive. The Register understands some in the wider GitLab community worry that the policy could see projects disappear before users have the chance to archive code on which they rely. As many open-source projects are widely used, it is feared that the decision could have considerable negative impact.

Read more of this story at Slashdot.

Record Amount of Seaweed Chokes Caribbean Beaches and Shoreline

Slashdot - Thu, 04/08/2022 - 8:42pm
Bruce66423 writes: A record amount of seaweed is smothering Caribbean coasts from Puerto Rico to Barbados as tons of brown algae kill wildlife, choke the tourism industry and release toxic gases. More than 24 million tons of sargassum blanketed the Atlantic in June, up from 18.8 million tons in May, according to a monthly report published by the University of South Florida's Optical Oceanography Lab, which noted it as "a new historical record." July saw no decrease of algae in the Caribbean Sea, said Chuanmin Hu, an optical oceanography professor who helps produce the reports. "I was scared," he recalled feeling when he saw the historic number for June. He noted that it was 20% higher than the previous record set in May 2018. Hu compiled additional data for the Associated Press that showed sargassum levels for the eastern Caribbean at a near record high this year, second only to those reported in July 2018. Levels in the northern Caribbean are at their third-highest, following July 2018 and July 2021, he said.

Read more of this story at Slashdot.

Solana Hack Blamed on Slope Mobile Wallet Exploit

Slashdot - Thu, 04/08/2022 - 8:03pm
Thousands of Solana users collectively lost about $4.5 million worth of SOL and other tokens from Tuesday night into early Wednesday, and now there's a likely explanation for why: it's being blamed on a private key exploit tied to mobile software wallet Slope. From a report: On Wednesday afternoon, the official Solana Status Twitter account shared preliminary findings through collaboration between developers and security auditors, and said that "it appears affected addresses were at one point created, imported, or used in Slope mobile wallet applications." "This exploit was isolated to one wallet on Solana, and hardware wallets used by Slope remain secure," the thread continues. "While the details of exactly how this occurred are still under investigation, private key information was inadvertently transmitted to an application monitoring service." "There is no evidence the Solana protocol or its cryptography was compromised," the account added. Some Phantom wallets were also drained of their SOL and tokens in the attack; however, it appears that those wallets' holders had previously interacted with a Slope wallet. "Phantom has reason to believe that the reported exploits are due to complications related to importing accounts to and from Slope," the Phantom team tweeted today.

Read more of this story at Slashdot.

Starbucks To Unveil Its Web3-Based Rewards Program Next Month

Slashdot - Thu, 04/08/2022 - 7:22pm
Starbucks will unveil its web3 initiative, which includes coffee-themed NFTs, at next month's Investor Day event. From a report: The company earlier this year announced its plans to enter the web3 space, noting its NFTs wouldn't just serve as digital collectibles, but would provide their owners with access to exclusive content and other perks. At the time, Starbucks was light on details as to what its debut set of NFTs would look like, specific features they'd provide or even what blockchain it was building on. It said the plan was likely to be multichain or chain-agnostic, hinting at plans that weren't yet finalized. Overall, the coffee retailer kept its web3 news fairly high level, explaining simply that it believed digital collectibles could create an accretive business adjacent to its stores and that more would be revealed later in 2022.

Read more of this story at Slashdot.

Philip Withnall: Looking at project resource use and CI pipelines in GitLab

Planet GNOME - Sat, 23/07/2022 - 2:39pm

While at GUADEC I finished a small script which uses the GitLab API to estimate the resource use of a project on GitLab. It looks at the CI pipeline job durations and artifact storage for the project and its forks over a given period, and totals things.

You might want to run it on your project!

It gives output something like the following:

Between 2022-06-23 00:00:00+00:00 and 2022-07-23 00:00:00+00:00, GNOME/glib and its 20 forks used:

  • 4592 CI jobs, totalling 17125 minutes (duration minimum 0.0, median 2.3, maximum 65.0)
  • Total energy use: 32.54kWh
  • Total artifact storage: 4426 MB (minimum 0.0, median 0.2, maximum 20.9)

This gives a rough look at the CI resources used by a project, which can help you spot low-hanging fruit for speeding things up or reducing resource waste.

What can I do with this information?

If total pipeline durations are long, either reduce the number of pipeline jobs or speed them up. Speeding them up almost always has no downsides. Reducing the number of jobs is a tradeoff between convenience of development and resource usage. Two ideas for reducing the number of jobs: make some jobs manual-only if they are very unlikely to find problems, or run them on a schedule rather than on every commit if it's OK for them to catch problems up to a week after they're introduced.

If total artifact storage use is high, store fewer artifacts, or expire them after a week (or so). They are likely not so useful after that point anyway.

If artifacts are being used to cache build dependencies, then consider moving those dependencies into a pre-built container image instead. It may be cached better between CI runners.

This script is rubbish, how do I improve it?

Merge requests welcome on, or perhaps you’d like to integrate it into so that the data could be visualised over time? The same query code should work for all GitLab instances, not just GNOME’s.

How does it work?

It queries the GitLab API in a few ways, and then applies a very simple model to the results.

It can take a while to run when querying for large projects or for periods of over a couple of weeks, as it needs to make a REST request for each CI job individually.
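To give a feel for the kind of queries involved, here is a minimal sketch against GitLab's REST API using only the Python standard library. It is illustrative, not the actual script: it fetches just one page of pipelines, ignores forks and date ranges, and the 114 W power-draw figure is merely an assumption back-derived from the 17125-minute / 32.54 kWh numbers above, not necessarily the model the real script uses.

```python
import json
import urllib.parse
import urllib.request

GITLAB = ""  # any GitLab instance should work the same way

def get_json(path):
    """Fetch one page of results from the GitLab REST API (unauthenticated)."""
    with urllib.request.urlopen(f"{GITLAB}/api/v4/{path}") as resp:
        return json.load(resp)

def ci_minutes(project_path, per_page=100):
    """Sum job durations, in minutes, over a project's most recent pipelines.

    One REST request is needed per pipeline, which is why this kind of
    query is slow for large projects or long periods.
    """
    project = urllib.parse.quote(project_path, safe="")
    total = 0.0
    for pipeline in get_json(f"projects/{project}/pipelines?per_page={per_page}"):
        for job in get_json(f"projects/{project}/pipelines/{pipeline['id']}/jobs"):
            if job.get("duration"):  # still-running jobs report no duration
                total += job["duration"] / 60.0
    return total

def estimate_kwh(minutes, watts=114):
    """Very simple energy model: a constant assumed power draw per CI minute."""
    return minutes / 60.0 * watts / 1000.0

# ci_minutes("GNOME/glib") would need network access; the model alone:
print(f"{estimate_kwh(17125):.2f} kWh")
```

Run against a real project path, `ci_minutes` plus `estimate_kwh` reproduces the shape of the summary output shown earlier, one API round trip per pipeline.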

Hans Petter Jansson: GNOME at 25: A Health Checkup

Planet GNOME - Sat, 23/07/2022 - 12:25am

Around the end of 2020, I looked at GNOME's commit history as a proxy for the project's overall health. It was fun to do and hopefully not too boring to read. A year and a half went by since then, and it's time for an update.

If you're seeing these cheerful-as-your-average-wiphala charts for the first time, the previous post does a better job of explaining things. Especially so, the methodology section. It's worth a quick skim.

What's new
  • Fornalder gained the ability to assign cohorts by file suffix, path prefix and repository.
  • It filters out more duplicate authors.
  • It also got better at filtering out duplicate and otherwise bogus commits.
  • I added the repositories suggested by Allan and Federico in this GitHub issue (diff).
  • Some time passed.
Active contributors, by generation

As expected, 2020 turned out interesting. First-time contributors were at the gates, numbering about 200 more than in previous years. What's also clear is that they mostly didn't stick around. The data doesn't say anything about why that is, but you could speculate that a work-from-home regime followed by a solid staycation is a state of affairs conducive to finally scratching some tangential — and limited — software-themed itch, and you'd sound pretty reasonable. Office workers had more time and workplace flexibility to ponder life's great questions, like "why is my bike shed the wrong shade of beige" or perhaps "how about those commits". As one does.

You could also argue that GNOME did better at merging pull requests, and that'd sound reasonable too. Whatever the cause, more people dipped their toes in, and that's unequivocally good. How to improve? Rope them into doing even more work! And never never let them go.

2021 brought more of the same. Above the 2019 baseline, another 200 new contributors showed up, dropped patches and bounced.

Active contributors, by affiliation

Unlike last time, I've included the Personal and Other affiliations for this one, since it puts corporate contributions in perspective; GNOME is a diverse, loosely coupled project with a particularly long and fuzzy tail. In terms of how spread out the contributor base is across the various domains, it stands above even other genuine community projects like GNU and the Linux kernel.

Commit count, by affiliation

To be fair, the volume of contributions matters. Paid developers punch way above their numbers, and as we've seen before, Red Hat throws more punches than anyone. Surely this will go on forever (nervous laugh).

Eazel barely made the top-15 cut the last time around. It's now off the list. That's what you get for pushing the cloud, a full decade ahead of its time.

Active contributors, by repository

Slicing the data per repository makes for some interesting observations:

  • Speaking of Eazel… Nautilus may be somewhat undermaintained for what it is, but it's seen worse. The 2005-2007 collapse was bad. In light of this, the drive to reduce complexity (cf. eventually removing the compact view etc) makes sense. I may have quietly gnashed my teeth at this at one point, but these days, Nautilus is very pleasant to use for small tasks. And for Big Work, you've got your terminal, a friendly shell and the GNU Coreutils. Now and forever.
  • Confirming what "everyone knows", the maintainership of Evolution dwindled throughout the 2010s to the point where only Milan Crha is heroically left standing. For those of us who drank long and deep of the kool-aid it's something to feel apprehensive (and somewhat guilty) about.
  • Vala played an interesting part in the GNOME infrastructure revolution of 2009-2011. Then it sort of… waned? Sure, Rust's the hot thing now, but I don't think it could eat its entire lunch.
  • GLib is seriously well maintained!
Commit count, by repository

With commit counts, a few things pop out that weren't visible before:

  • There's the not at all conspicuously named Tracker, another reminder of how transformative the 2009-2011 time frame really was.
  • The mid-2010s come off looking sort of uneventful and bland in most of the charts, but Builder bucked that trend bigly.
  • Notice the big drop in commits from 2020 to 2021? It's mostly just the GTK team unwinding (presumably) after the 4.0 release.
Active contributors, by file suffix

I devised this one mainly to address a comment from Smeagain. It's a valid concern:

There are a lot of people translating, with each getting a single commit for whatever has been translated. During the year you get larger chunks of text to translate, then shortly before the release you finish up smaller tasks, clean up translations, and you end up with lots of commits for a lot of work, but it's not code. Not to discount translations, but you have a lot of very small commits.

I view the content agnosticism as a feature: We can't tell the difference in work investment between two code commits (perhaps a one-liner with a week of analysis behind it vs. a big chunk of boilerplate being copied in from some other module/snippet database), so why would we make any assumptions about translations? Maybe the translator spent an hour reviewing their strings, found a few that looked suspicious, broke out their dictionary, called a friend for advice on best practice and finally landed a one-line diff.

Therefore we treat content type foo the same as content type bar, big commits the same as small commits, and when tallying authors, few commits the same as many — as long as you have at least one commit in the interval (year or month), you'll be counted.

However! If you look at the commit logs (and relevant infrastructure), it's pretty clear that hackers and translators operate as two distinct groups. And maybe there are more groups out there that we didn't know about, or the nature of the work changed over time. So we slice it by content type, or rather, file suffix (not quite as good, but much easier). For files with no suffix separator, the suffix is the entire filename (e.g. Makefile).

A subtlety: Since each commit can touch multiple file types, we must decide what to do about e.g. a commit touching 10 .c files and 2 .py files. Applying the above agnosticism principle, we identify it as doing something with these two file types and assign them equal weight, resulting in .5 c commits and .5 py commits. This propagates up to the authors, so if in 2021 you made the aforementioned commit plus another one that's entirely vala, you'll tally as .5 c + .5 py + 1.0 vala, and after normalization you'll be a ¼ c, ¼ py and ½ vala author that year. It's not perfect (sensitive to what's committed together), but there are enough commits that it evens out.
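The weighting scheme above is simple enough to sketch in a few lines of Python. This is my own illustrative reconstruction, not the actual analysis code; the function and variable names are made up:

```python
from collections import Counter

def commit_weights(commits):
    """Split each commit's unit weight evenly across the file
    suffixes it touches, then sum per suffix for one author.
    `commits` is a list of commits, each a list of file suffixes."""
    totals = Counter()
    for suffixes in commits:
        kinds = set(suffixes)            # distinct file types in this commit
        for kind in kinds:
            totals[kind] += 1.0 / len(kinds)
    return totals

def normalize(totals):
    """Turn per-suffix commit weights into author fractions."""
    grand = sum(totals.values())
    return {kind: weight / grand for kind, weight in totals.items()}

# The example from the text: one commit touching 10 .c files and
# 2 .py files, plus another commit that is entirely Vala.
commits = [["c"] * 10 + ["py"] * 2, ["vala"]]
weights = commit_weights(commits)   # {'c': 0.5, 'py': 0.5, 'vala': 1.0}
author = normalize(weights)         # {'c': 0.25, 'py': 0.25, 'vala': 0.5}
```

Note that the ten .c files count no more than the two .py files: only the set of distinct suffixes matters, which is exactly the agnosticism principle applied one level down.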

Anyway. What can we tell from the resulting chart?

  • Before Git, commit metadata used to be maintained in-band. This meant that you had to paste the log message twice (first to the ChangeLog and then as CVS commit metadata). With everyone committing to ChangeLogs all the time, it naturally (but falsely) reads as an erstwhile focal point for the project. I'm glad that's over.
  • GNOME was and is a C project. Despite all manner of rumblings, its position has barely budged in 25 years.
  • Autotools, however, was attacked and successfully dethroned. Between 2017 and 2021, ac and am gradually yielded to Meson's build.
  • Finally, translators (po) do indeed make up a big part of the community. There's a buried surprise here, though: Comparing 2010 to 2021, this group shrank a lot. Since translations are never "done" — in fact, for most languages they are in a perpetual state of being rather far from it — it's a bit concerning.
The bigger picture

I've warmed to Philip's astute observation:

Thinking about this some more, if you chop off the peak around 2010, all the metrics show a fairly steady number of contributors, commits, etc. from 2008 through to the present. Perhaps the interpretation should not be that GNOME has been in decline since 2010, but more that the peak around 2010 was an outlier.

Some of the big F/OSS projects have trajectories that fit the following pattern:

f(x) = ∑ [n = 1..x] R · (1 − a)^(n − 1)

That is, each year R new contributors are being recruited, while a fraction a of the existing contributors leave. R and a are both fairly constant, but since attrition increases with project size while recruitment depends on external factors, they tend to find an equilibrium where they cancel each other out.
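Since each term shrinks geometrically, the sum converges to the equilibrium R / a, where recruitment and attrition cancel out. A quick sketch of the model (defaults taken from the fit discussed below; nothing here is the author's actual tooling):

```python
def active_contributors(x, R=130.0, a=0.15):
    """Contributors active in year x of the model: each year R
    newcomers join, and a fraction a of the existing pool leaves.
    This is the partial sum of a geometric series."""
    return sum(R * (1 - a) ** (n - 1) for n in range(1, x + 1))

# Year 1 is just the first cohort of recruits.
year_one = active_contributors(1)        # 130.0

# In the long run the pool settles at the equilibrium R / a,
# about 867 contributors for these parameters.
equilibrium = 130.0 / 0.15
```

The closed form is f(x) = R · (1 − (1 − a)^x) / a, which makes the convergence to R / a obvious as x grows.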

For GNOME, you could pick e.g. R = 130 and a = .15, and you'd come close. Then all you'd need is some sharpie magic, and…


Not a bad fit. Happy 25th, GNOME.

Debarshi Ray: Toolbx — bypassing the immutability of OCI containers

Planet GNOME - Fri, 22/07/2022 - 6:48pm

This is a deep dive into some of the technical details of Toolbx. I find myself regularly explaining them to various people, so I thought that I should write them down. Feel free to read and comment, or you can also happily ignore it.

The problem

OCI containers are famous for being immutable. Once a container has been created with podman create, its attributes can't be changed anymore: the bind mounts, the environment variables, the namespaces being used, and all the other attributes that can be specified via options to the podman create command. This means that once there's a Toolbx, it wouldn't be possible to give it access to a new set of files from the host if the need arose. The Toolbx would have to be deleted and re-created with access to the new paths.

This is a problem, because a Toolbx is where the user sets up her development and troubleshooting environment. Re-creating a Toolbx might mean reinstalling a number of different packages, tweaking configuration files, redeploying various artifacts and so on. Having to repeat all that in the middle of a long hacking session, just because the container’s attributes need to be tweaked, can be annoying.

This is unlike Flatpak containers, where it’s possible to override the permissions of a Flatpak either persistently through flatpak override or temporarily during flatpak run.

Secondly, as the Toolbx code evolves, we want to be able to transparently update existing Toolbxes to enable new features and fix bugs. It would be a real drag if users had to consciously re-create their containers.

The solution

Toolbx bypasses this by using a special entry point for the container. Those inquisitive types who have run podman inspect on a Toolbx container might have noticed that the toolbox executable itself is the container’s entry point.

$ podman inspect --format "{{.Config.Cmd}}" --type container fedora-toolbox-36
toolbox --log-level debug init-container ...

This means that when Toolbx starts a container using podman start, the toolbox init-container command gets run as the first process inside the container. Only after this has run, does the user’s interactive shell get spawned.

Instead of setting up the container entirely through podman create, Toolbx tries to use this reflexive entry point as much as possible. For example, Toolbx doesn’t use podman create --volume /tmp:/tmp to give access to the host’s /tmp inside the container. It bind mounts the entire root filesystem from the host at /run/host in the container with podman create --volume /:/run/host. Then, later when the container is started, toolbox init-container recursively bind mounts the container’s /run/host/tmp to /tmp. Since the container has its own mount namespace, the /run/host and /tmp bind mounts are neatly hidden away from the host.
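The mechanism can be sketched schematically. The real Toolbx is written in Go, so this Python snippet is purely illustrative: it only builds the mount(8) invocations that toolbox init-container effectively performs inside the container's mount namespace, with the /run/host prefix taken from the description above:

```python
def bind_mount_commands(paths, host_root="/run/host"):
    """Illustrative sketch (not Toolbx's actual code): for each host
    path, produce the mount(8) call that recursively bind mounts it
    from the copy of the host filesystem rooted at /run/host into
    the same location inside the container."""
    return [["mount", "--rbind", host_root + path, path] for path in paths]

# Re-exposing the host's /tmp, as described above:
cmds = bind_mount_commands(["/tmp"])
# → [['mount', '--rbind', '/run/host/tmp', '/tmp']]
```

Because these mounts happen inside the container's own mount namespace at start time, nothing about the immutable podman create configuration needs to change when the list of exposed paths does.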

Therefore, if in future additional host locations need to be exposed within the Toolbx, those can be added to toolbox init-container, and once the user restarts the container after updating the toolbox executable, the new locations will show up inside the existing container. The same goes for tweaking the mount parameters of an existing location, or removing a host location from the container.

This is not restricted to just bind mounts from the host. The same approach with toolbox init-container is used to configure as many different aspects of the container as possible. For example, setting up users, keeping the timezone and DNS configuration synchronized with the host, and so on.

Further details

One might wonder how a Toolbx container manages to have a toolbox executable inside it, especially since the toolbox package is not installed within the container. It is achieved by bind mounting the toolbox executable invoked by the user on the host to /usr/bin/toolbox inside the container.

This has some advantages.

There is always only one version of the toolbox executable that’s involved — the one that’s on the host. This means that the exact invocation of toolbox init-container, which is baked into the Toolbx and shows up in podman inspect, is the only interface that needs to be kept stable as the Toolbx code evolves. As long as toolbox init-container can be invoked with that specific command line, everything else can be changed because it’s the same executable on both the host and inside the container.

If the container had a separate toolbox package in it, then the user might have to separately update another executable to get the expected results, and we would have to ensure that different mismatched versions of the executable can work with each other across the host and the container. With a growing number of containers, the former would be a nightmare for the user, while the latter would be almost impossible to test.

Finally, having only one version of the toolbox executable makes it a lot easier for users to file bug reports. There’s only one version to report, not several spread across different environments.

This leads to another problem

Once you let this sink in, you might realize that bind mounting the toolbox executable from the host into the Toolbx means that an executable from a newer or different operating system might be running against an older or different run-time environment inside the container. For example, an executable from a Fedora 36 host might be running inside a Fedora 35 Toolbx, or one from an Arch Linux host inside an Ubuntu container.

This is very unusual. We only expect executables from an older version of an OS to keep working on newer versions of the same OS, but never the other way round, and definitely not across different OSes.

I will leave you with that thought and let you puzzle over it, because it will be the topic of a future post.

