Planet GNOME
https://planet.gnome.org/

Pablo Correa Gomez: Analysis of GNOME Foundation’s public economy: concerns and thoughts

Mon, 20/05/2024 - 12:11 AM

Apart from software development, I also have an interest in governance and finances. Therefore, last July, I was quite happy to attend my first Annual General Meeting (AGM), which took place at GUADEC in Riga. I was a bit surprised by the format, as I was expecting something closer to an assembly than a presentation with a Q&A at the end. It was still interesting to witness, but I was even more shocked by the huge negative cash flow (the difference between revenue and expenditures). With the numbers presented, the foundation had lost approximately 650 000 USD in the 2021 fiscal year, and 300 000 USD in the 2022 fiscal year. And nobody seemed worried about it. I would have expected such a difference to be the consequence of a great investment aimed at improving the situation of the foundation long-term. However, nothing like that was part of the AGM. This left me thinking, and a bit worried about what was going on with the finances and organization of the foundation. After asking a member of the Board in private, and getting no satisfactory response, I started doing some investigation.

Public information research

The GNOME Foundation (legally GNOME Foundation Inc) has 501(c)(3) status, which means it is tax exempt. As part of that status, the tax payments, economic status and dealings of GNOME Foundation Inc are public. So I took a look at the tax filings of the last few years. These contain detailed information about income and expenses, net assets (e.g. money in bank accounts), compensation of the Board, Executive Director, and key employees, the amount of money spent on fulfilling the goals of the foundation, and lots of other things. Despite their broad scope, the tax filings are not very hard to read, and it's easy to learn how much money the foundation made or spent. Looking at the details, I found several worrying things, like the fact that revenue and expenses in the Annual Report presented at the AGM did not match those in the tax filings, that most expenses were aggregated in sections that required no explanation, or that some explanations for expenses were required but missing. So I went on to open a confidential issue in the Board team on GitLab expressing my concerns.

The answer mostly covered an explanation of the big deficits in the previous year (which would have been great to have in the Annual Report), but was otherwise generally disappointing. Most of my concerns (all of which are detailed below) were answered with nicely-written variations of: “that’s a small problem, we are aware of it and working on it”, or “this is not common practice and you can find unrelated information in X place”. Six months have passed, a new tax statement and annual report are available, but the problems persist. So I am sharing my concerns publicly, with several goals:

  • Make these concerns available to the general GNOME community. Even though everything I am presenting comes from public sources, it is burdensome to research, and requires some level of experience with bureaucracy.
  • Show my interest in the topic, as I plan to stand for the Board of Directors in the next elections. My goal is to become part of the Finance Committee and help improve the transparency and efficiency of accounting.
  • Make the Board aware of my concerns (and hopefully show that others share them too), so things can be improved regardless of whether or not I get elected to the Board.
Analysis of data and concerns

The first analysis I did some months ago was not very detailed, and quite manual. This time, I gathered information in more detail and compiled it in a spreadsheet that I made publicly available. All the numbers are taken from GNOME’s Annual Reports and from the tax declarations available on ProPublica. I am very happy to have those values reviewed, as there could always be mistakes. I am still fairly certain that small errors won’t change my concerns, since those are based on patterns and not on one-time problems. So, on to my concerns:

  • Non-matching values between reports and taxes: in the last three years, for revenue and expenses, only the revenue presented for Fiscal Year 2021/2022 matches what was actually declared. For the rest, the differences vary, but go up to close to 9%. I was told that some difference is expected (as these numbers are put together a bit earlier than the taxes), that the Board had worked on it, and that the last year (the only one with at least revenue matching) is certainly better. But there is still something like 18 000 USD of mismatch in expenses. For me, this is a clear sign that something is going wrong with the accounting of the foundation, even if it improved in the last year.
  • Non-matching values between reports from different years: each Annual Report contains not only the results for that year, but also those from the previous one. However, the numbers only match half of the time. This is still the case for the latest report in 2023, where 10 000 USD suddenly disappeared from 2022’s expenses, growing the difference from what was declared that year to 27 000 USD. This again shows accountability issues, as previous years’ numbers should certainly not diverge even further from the tax declarations than the initial numbers did.
  • Impossibility of matching tax declarations and Annual Reports: the way the Annual Reports are presented makes it impossible to get a more detailed picture of how expenses and revenue are split. For example, more than 99% of the revenue in 2023 is grouped under a single tax category, while the previous year at least three were used. However, the split in the Annual Reports remains roughly the same. So either the accounting is wrong in one of those years, or the split of expenses for the Annual Report was crafted from different data sources. Another example is how “Staff” makes up the greatest expense until it ceases to exist in the latest report. However, staff-related expenses in the taxes do not account for the “Staff” expense in the reports. The chances are that part of that is due to subcontracting, and thus counted under “Fees for services, Other” in the taxes. Unfortunately, that category has its own issues.
  • Missing information in the tax declarations: most remarkably, in the tax filings of fiscal years 2020/2021 and 2021/2022, the category “Fees for services, Other” represents more than 10% of the expenses, which, as clearly stated in the form, should be explained in a later part of the filing. However, it is not. I was told 6 months ago that this might have to do with some problem with ProPublica not getting the data, and that they would try to fix it. But I was not provided with the information, and 6 months later the public tax filings still have not been amended.
  • Lack of transparency on expenses:
    • First, in the last two tax filings, more than 50% of expenses fall under “Other salaries and wages” and “Fees for services, Other”. These fields do not provide enough transparency (maybe they would if the previous point were addressed), which means most of the expenses effectively go unexplained.
    • Second, in the Annual Reports. For the previous two years, the biggest expense was by far “Staff”. There is a website listing the staff and their roles, but there is no clear explanation of how much money goes to whom or why. This can be a big problem if some part of the community does not feel supported in its affairs by the foundation. Compare, for example, with Mastodon’s Annual Report, where everybody on the payroll or freelancing is listed along with how much they earn. This is made worse by the fact that the current year’s Annual Report has completely removed that category in favor of others. Tax filings (once available) will, however, provide more context if proper explanations regarding “Fees for services, Other” finally become available.
  • Different categories and reporting formats: the reporting format changed completely in 2021/2022 compared to previous years, and changed completely again this year. This is a severe issue for transparency, since continuously changing formats make it hard to compare between years (which, as noted above, is useful!). One can of course understand that things need to be updated in order to improve, but such drastic changes do not help with transparency.

There are certainly other small things that caught my attention. However, I hope these examples are enough to get my point across, and there is no need to make this blog post even longer!

Conclusions

My main conclusion from the analysis is that the foundation’s accounting and decision-making regarding expenses have been sub-par in recent years. It is also a big issue that there is a huge lack of transparency regarding the economic status and decision-making of the foundation. I learned more about the economic status of the foundation by reading tax filings than by reading Annual Reports. Unfortunately, opening an issue with the Board six months ago to share these concerns has not made it better. It could be that things are much better than they look from the outside, but the lack of transparency makes it appear otherwise. I hope that I can join the Finance Committee and help address these issues in the short term!

Sam Thursfield: Status update, 19/05/2024 – GNOME OS and more

Sun, 19/05/2024 - 6:36 PM

Seems this is another of those months where I did enough stuff to merit two posts. (See Thursday’s post on async Rust). Sometimes you just can’t get out of doing work, no matter how you try. So here is part 2.

A few weeks ago I went to the USA for a week to meet a client team who I’ve been working with since late 2022. This was actually the first time I left Europe since 2016*. It’s wild how a euro is now pretty much equal in value to a US dollar, yet everything costs about double compared to Europe. It was fun though, and good practice for another long trip to the Denver GUADEC in July.

* The UK is still part of Europe, it hasn’t physically moved, has it?

GNOME OS stuff

The GNOME OS project has at least 3 active maintainers and a busy Matrix room, which makes it fairly healthy as GNOME modules go. There’s no ongoing funding for maintenance though and everyone who contributes is doing so mostly as a volunteer — at least, as far as I’m aware. So there are plenty of plans and ideas for how it could develop, but many of them are incomplete and nobody has the free time to push them to completion.

We recently announced some exciting collaboration between Codethink, GNOME and the Sovereign Tech Fund. This stint of full time work will help complete several in-progress tasks. Particularly interesting to me is finishing the migration to systemd-sysupdate (issue 832), and creating a convenient developer workflow and supporting tooling (issue 819) so we can finally kill jhbuild. Plus, of course, making the openQA tests great again.

Getting to the point where the team could start work took a lot of effort, most of which isn’t visible to the outside world. Discussions go back at least to November 2023. Several people worked over months on scoping, estimates, contracts and resourcing the engineering team before any of the coding work started: Sonny Piers representing GNOME, and on the Codethink side, Jude Onyenegecha and Weyman Lo, along with Abderrahim Kitouni and Javier Jardón (who are really playing for both teams ;-).

I’m not working directly on the project, but I’m helping out where I can on the communications side. We have at least 3 IRC + Matrix channels where communication happens every day, each with a different subset of people, and documentation is scattered all over the place. Some of the Codethink team are seasoned GNOME contributors, others are not, and the collaborative nature of the GNOME OS project – there is no “BDFL” figure who takes all the decisions – means it’s hard to get clear answers around how things should be implemented. Hopefully my efforts will mean we make the most of the time available.

You can read more about the current work here on the Codethink blog: GNOME OS and systemd-sysupdate, the team will hopefully be posting regular progress updates to This Week In GNOME, and Martín Abente Lahaye (who very recently joined the team on the Codethink side \o/) is opening public discussions around the next generation developer experience for GNOME modules – see the discussion here.

Tiny SPARQL, Twinql, Sqlite-SPARQL, etc.

We’re excited to welcome Demigod and Rachel to the GNOME community, working on a SPARQL web IDE as part of Google Summer of Code 2024.

Since this is going to hopefully shine a new light on the SPARQL database project, it seems like a good opportunity to start referring to it by a better name than “Tracker SPARQL”, even while we aren’t going to actually rename the whole API and release 4.0 any time soon.

There are a few name ideas already, the front runners being Tiny SPARQL and Twinql, and I still can’t quite decide which I prefer. The former is unique but rather utilitarian, while the latter is a nicer name but is already used by a few other (mostly abandoned) projects. Which do you prefer? Let me know in the comments.


Minilogues and Minifreaks

I picked up a couple of hardware synthesizers, the Minilogue XD and the Minifreak. I was happy for years with my OP-1 synth, but after 6 years of use it has so many faults that it’s unplayable, replacing it would cost more than a second-hand car, and it’s a little too tiny for on-stage use.

The Minilogue XD is one of the only mainstream synths to have an open SDK for custom oscillators and effects; full respect to Korg for their forward thinking here … although their Linux tooling is a closed-source binary with a critical bug that they won’t fix, so there’s still some way to go before they get 10/10 for openness.

The Minifreak, by contrast, has a terrible Windows-only firmware update system, which works so poorly that I already had to return the synth to Arturia once after a firmware update caused it to brick itself. There’s a stark lesson here about having open protocols which hopefully Arturia can pick up on. This synth has absolutely incredible sound design capabilities though, so I decided to keep it and just avoid ever updating the firmware.

Here’s a shot of the Minifreak next to another mini freak:

Allan Day: GNOME maintainers: here’s how to keep your issue tracker in good shape

Fri, 17/05/2024 - 5:29 PM

One of the goals of the new GNOME project handbook is to provide effective guidelines for contributors. Most of the guidelines are based on recommendations that GNOME already had, which were then improved and updated. These improvements were based on input from others in the project, as well as by drawing on recommendations from elsewhere.

The best example of this effort was around issue management. Before the handbook, GNOME’s issue management guidelines were seriously out of date, and were incomplete in a number of areas. Now we have shiny new issue management guidelines which are full of good advice and wisdom!

The state of our issue trackers matters. An issue tracker with thousands of open issues is intimidating to a new contributor. Likewise, lots of issues without a clear status or resolution makes it difficult for potential contributors to know what to do. My hope is that, with effective issue management guidelines, GNOME can improve the overall state of its issue trackers.

So what magic sauce does the handbook recommend to turn an out of control and burdensome issue tracker into a source of calm and delight, I hear you ask? The formula is fairly simple:

  • Review all incoming issues, and regularly conduct reviews of old issues, in order to weed out reports which are ambiguous, obsolete, duplicates, and so on
  • Close issues which haven’t seen activity in over a year
  • Apply the “needs design” and “needs info” labels as needed
  • Close issues that have been labelled “needs info” for 6 weeks
  • Issues labelled “needs design” get closed after 1 year of inactivity, like any other
  • Recruit contributors to help with issue management

To some readers this is probably controversial advice, and likely conflicts with their existing practice. However, there’s nothing new about these issue management procedures. The current incarnation has been in place since 2009, and some aspects of them are even older. Also, personally speaking, I’m of the view that effective issue management requires taking a strong line (being strong doesn’t mean being impolite, I should add – quite the opposite). From a project perspective, it is more important to keep the issue tracker focused than it is to maintain a database of every single tiny flaw in its software.

The guidelines definitely need some more work. There will undoubtedly be some cases where an issue needs to be kept open despite it being untouched for a year, for example, and we should figure out how to reflect that in the guidelines. I also feel that the existing guidelines could be simplified, to make them easier to read and consume.

I’d be really interested to hear what changes people think are necessary. It is important for the guidelines to be something that maintainers feel that they can realistically implement. The guidelines are not set in stone.

That said, it would also be awesome if more maintainers were to put the current issue management guidelines into practice in their modules. I do think that they represent a good way to get control of an issue tracker, and this could be a really powerful way for us to make GNOME more approachable to new contributors.

Sam Thursfield: Status update, 16/05/2024 – Learning Async Rust

Thu, 16/05/2024 - 2:10 PM

This is another month where too many different things happened to stick them all in one post together. So here’s a ramble on Rust, and there’s more to come in a follow up post.

I first started learning Rust in late 2020. It took 3 attempts before I could start to make functional commandline apps, and the current outcome of this is the ssam_openqa tool, which I work on partly to develop my Rust skills. This month I worked on some intrusive changes to finally start using async Rust in the program.

How it started

Out of all the available modern languages I might have picked to learn, I picked Rust partly for the size and health of its community: every community has its issues, but Rust has no “BDFL” figure and no one corporation that employs all the core developers, both signs of a project that can last a long time. Look at GNOME, which is turning 27 this year.

Apart from the community, learning Rust improved the way I code in all languages, by forcing more risks and edge cases to the surface and making me deal with them explicitly in the design. The ecosystem of crates has most of what you could want (although there is quite a lot of experimentation and therefore “churn”, compared to older languages). It’s kind of addictive to know that when you’ve resolved all your compile time errors, you’ll have a program that reliably does what you want.

There are still some blockers to me adopting Rust everywhere I work (besides legacy codebases). The “cycle time” of the edit+compile+test workflow has a big effect on my happiness as a developer. The fastest incremental build of my simple CLI tool is 9 seconds, which is workable, and when there are compile errors (i.e. most of the time) it’s usually even faster. However, a release build might take 2 minutes. This is 3000 lines of code with 18 dependencies. I am wary of embarking on a larger project in Rust where the cycle time could be problematically slow.

Binary size is another thing, although I’ve learned several tricks to keep ssam_openqa at “only” 1.8MB. Use a minimal arg parser library instead of clap. Use minreq for HTTP. Follow the min-size-rust guidelines. It’s easy to pull in one convenient dependency that brings in a tree of 100 more things, unless you are careful. (This is a problem for C programmers too, but dependency handling in C is traditionally so horrible that we are already conditioned to avoid too many external helper libraries.)

The third thing I’ve been unsure about until now is async Rust. I never immediately liked the model used by Rust and Python of having a complex event loop hidden in the background, and a magic async keyword that completely changes how a function is executed and requires all its callers to be async as well, such that you effectively have two *different* languages: the async variant and the sync variant. When writing library code you might need to provide two completely different APIs to do the same thing, one async and one sync.

That said, I don’t have a better idea for how to do async.

Complicating matters in Rust are the error messages, which can be mystifying if you hit an edge case (see below for where this bit me). So until now I had learned to just use thread::spawn for background tasks, with a std::sync::mpsc channel to pass messages back to the main thread, and to use blocking IO everywhere. I see other projects doing the same.
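For reference, that pattern looks roughly like the following. This is just a minimal sketch of thread::spawn plus a std::sync::mpsc channel, not code lifted from ssam_openqa, and the worker function is a made-up placeholder:

use std::sync::mpsc;
use std::thread;

fn main() {
    let (sender, receiver) = mpsc::channel();

    // Background task: do the blocking work on its own OS thread.
    thread::spawn(move || {
        let result = expensive_blocking_work();
        // Ignore the error: the main thread may have hung up already.
        let _ = sender.send(result);
    });

    // Main thread: iterate over incoming messages until the sender is dropped.
    for message in receiver {
        println!("worker said: {message}");
    }
}

// Stand-in for blocking IO or heavy computation.
fn expensive_blocking_work() -> String {
    "done".to_string()
}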

How it’s going

My blissful ignorance came to an end due to changes in a dependency. I was using the websocket crate in ssam_openqa, which embeds its own async runtime so that callers can use a blocking interface in a thread. I guess this is seen as a failed experiment, as the library is now “sluggishly” maintained, the dependencies are old, and the developers recommend tungstenite instead.

Tungstenite seems unusable from sync code for anything more than toy examples; you need an async wrapper such as async-tungstenite (shout out to slomo for this excellent library, by the way). So, I thought, I would need to port my *whole codebase* to use an async runtime and an async main loop.

I tried, and spent a few days lost in a forest of compile errors, but it’s never the correct approach to try to port code “in one shot” and without a plan. To make matters worse, websocket-rs embeds an *old* version of Rust’s futures library. Nobody told me, but there is “futures 0.1” and “futures 0.3”. Only the latter works with the await keyword; if you await a future from futures 0.1, you’ll get an error about it not implementing the expected trait. The docs don’t give any clues about this; eventually I discovered the Compat01As03 wrapper which lets you convert types from futures 0.1 to futures 0.3. Hopefully you never have to deal with this, as you’ll only see futures 0.1 in libraries with outdated dependencies, but now you know.
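For the record, the conversion itself is small once you find it. Here is a minimal sketch, assuming futures 0.3 is built with its “compat” feature and the 0.1 crate is imported under the alias futures01 in Cargo.toml; the future being wrapped is just a placeholder:

use futures::compat::Compat01As03; // futures 0.3 with the "compat" feature
use futures01::future;             // futures 0.1, renamed to futures01 in Cargo.toml

async fn demo() {
    // A futures 0.1 future that immediately resolves to Ok(42).
    let old = future::ok::<u32, ()>(42);

    // Compat01As03 wraps it so it implements std::future::Future and can be
    // awaited; the 0.1 Item/Error pair becomes a Result.
    let result = Compat01As03::new(old).await;
    assert_eq!(result, Ok(42));
}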

Even better, I then realized I could keep the threads and blocking IO around, and just start an async runtime in the websocket processing thread. So I did that in its own MR, gaining an integration test and squashing a few bugs in the process.

The key piece is here:

use tokio::runtime;
use std::thread;

...

thread::spawn(move || {
    let runtime = runtime::Builder::new_current_thread()
        .enable_io()
        .build()
        .unwrap();

    runtime.block_on(async move {
        // Websocket event loop goes here

This code uses the tokio new_current_thread() function to create an async main loop out of the current thread, which can then use block_on() to run an async block and wait for it to exit. It’s a nice way to bring async “piece by piece” into a codebase that otherwise uses blocking IO, without having to rewrite everything up front.

I have some more work in progress to use async for the two main loops in ssam_openqa: these currently have manual polling loops that periodically check various message queues for events and then call thread::sleep(250), which works fine in practice for processing low-frequency control and status events, but it’s not the slickest nor most efficient way to write a main loop. The classy way to do it is using the tokio::select! macro.
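To give an idea of the shape of that, here is a minimal select!-based loop. This is not the actual ssam_openqa code: the channel names and the 250 ms tick are placeholders standing in for the existing message queues and polling interval:

use tokio::select;
use tokio::sync::mpsc;
use tokio::time::{interval, Duration};

async fn event_loop(mut control: mpsc::Receiver<String>, mut status: mpsc::Receiver<String>) {
    // A periodic tick replaces the old thread::sleep(250) polling delay.
    let mut tick = interval(Duration::from_millis(250));

    loop {
        select! {
            Some(event) = control.recv() => {
                println!("control event: {event}");
            }
            Some(event) = status.recv() => {
                println!("status event: {event}");
            }
            _ = tick.tick() => {
                // Periodic housekeeping goes here, if any.
            }
        }
    }
}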

When should you use async Rust?

I was hoping for a simple answer to this question, so I asked my colleagues at Codethink where we have a number of Rust experts.

The problem is, cooperative task scheduling is a very complicated topic. If I convert my main loop to async, but I use the std library blocking IO primitives to read from stdin rather than tokio’s async IO, can Rust detect that and tell me I did something wrong? Well no, it can’t – you’ll just find that event processing stops while you’re waiting for input. Which may or may not even matter.

There’s no way to automatically detect “a syscall which might wait for user input” vs “a syscall which might take a lot of CPU time to do something” vs “user-space code which might not defer to the main loop for 10 minutes”; and each of these has the same effect of causing your event loop to freeze.

The best advice I got was to use tokio console to monitor the event loop and see if any tasks are running longer than they should. This looks like a really helpful debugging tool and I’m definitely going to try it out.

So I emerge from the month a bit wiser about async Rust, no longer afraid to use it in practice, and best of all, wise enough to know that it’s not an “all or nothing” switch – it’s perfectly valid to mix sync and async in different places, depending on what performance characteristics you’re looking for.

Jussi Pakkanen: Generative non-AI

Tue, 14/05/2024 - 7:47 PM

In last week's episode of the Game Scoop podcast an idea was floated that modern computer game names are uninspiring and that better ones could be made by picking random words from existing NES titles. This felt like a fun programming challenge so I went and implemented it. Code and examples can be found in this GH repo.
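The core trick is small enough to sketch: pool every word from a list of NES titles and glue a few random ones back together. The following is an illustrative Rust sketch, not the code from the repo linked above (which remains the reference), and the title list file name is invented:

use rand::seq::SliceRandom;
use std::fs;

fn main() {
    // One NES title per line, e.g. "Super Mario Bros." or "Mega Man 2".
    let titles = fs::read_to_string("nes_titles.txt").expect("title list missing");

    // Split every title into words and pool them all together.
    let words: Vec<&str> = titles.lines().flat_map(|t| t.split_whitespace()).collect();

    let mut rng = rand::thread_rng();

    // Glue a few random words from the pool together to form a new "game name".
    for _ in 0..10 {
        let picks: Vec<&str> = words.choose_multiple(&mut rng, 3).copied().collect();
        println!("{}", picks.join(" "));
    }
}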

Most of the game names created in this way are word-salad gobbledygook or read like literally translated obscure anime titles (Prince Turtles Blaster Family). Running it a few times does give results that are actually quite interesting. They range from games that really should exist (Operation Metroid) to the surprisingly reasonable (Gumshoe Foreman's Marble Stadium), to ones that actually made me laugh out loud (Punch-Out! Kids). Here's a list of some of my favourites:

  • Ice Space Piano
  • Castelian Devil Rainbow Bros.
  • The Lost Dinosaur Icarus
  • Mighty Hoops, Mighty Rivals
  • Rad Yoshi G
  • Snake Hammerin'
  • MD Totally Heavy
  • Disney's Die! Connors
  • Monopoly Ransom Manta Caper!
  • Revenge Marble
  • Kung-Fu Hogan's F-15
  • Sinister P.O.W.
  • Duck Combat Baseball

I emailed my findings back to the podcast host and they actually discussed it in this week's show (video here starting at approximately 35 minutes). All in all this was an interesting exercise. However pretty quickly after finishing the project I realized that doing things yourself is no longer what the cool kids are doing. Instead this is the sort of thing that is seemingly tailor-made for AI. All you have to do is to type in a prompt like "create 10 new titles for video games by only taking words from existing NES games" and post that to tiktokstagram.

I tried that and the results were absolute garbage. Since the prompt has to have the words "video game" and "NES", and LLMs work solely on the basis of "what is the most common thing (i.e. popular)", the output consists almost entirely of the most well known NES titles with maybe some words swapped. I tried to guide it by telling it to use "more random" words. The end result was a list of ten games of which eight were alliterative. So much for randomness.

But more importantly every single one of the recommendations the LLM created was boring. Uninspired. Bland. Waste of electricity, basically.

Thus we find that creating a list of game names with an LLM is easy but the end result is worthless and unusable. Doing the same task by hand did take a bit more effort, but the end result was miles better, because it found new and interesting combinations that a "popularity first" estimator does not seem able to match. This matches the preconception I had about LLMs from prior tests and from seeing how other people have used them.

Sudhanshu Tiwari: GSoC Introductory Post

Sun, 12/05/2024 - 12:06 AM

My journey as a GNOME user started in 2020 when I first set up Ubuntu on my computer, dual-booting it with Windows. Although I wasn't aware of GNOME back then, what I found fascinating was that despite Ubuntu being open source, its performance and UI were comparable to Windows. I became a regular user of Ubuntu and loved the way the GNOME Desktop Environment seamlessly performed different tasks. I could run multiple instances of various applications at the same time without it lagging or crashing, which was often a problem on Windows.

A beginning in open source

The first time I came across the term "open source" was while installing the MinGW GCC compiler for C++ from SourceForge. I had a rough idea of what the term meant, but being a complete noob at the time, I didn't decide whether to start contributing. When I felt I had enough skills to contribute, I was introduced to p5.js, which is a JavaScript library for creative coding. With my familiarity with JavaScript, the codebase of p5.js was easy to understand, and thus began my journey as an open source contributor. Opening my first PR in p5.js gave me a feeling of accomplishment that reminded me of the time I compiled my first C++ program. I started contributing more, began to learn about the GNOME environment, and wanted to contribute to the desktop environment I had been a user of.
Contributing to GNOME

I learnt about the libraries GLib and GTK that empower programmers to build apps using modern programming techniques. I scrambled through documentation and watched some introductory videos about GLib, GObject, and GObject Introspection, and diving deeper into this repository of knowledge, I found myself wanting to learn more about how GNOME apps are built. The GNOME Preparatory Bootcamp for GSoC & Outreachy conducted by GNOME Africa prepared me to become a better contributor. Thanks to Pedro Sader Azevedo and Olosunde Ayooluwa for teaching us about setting up the development environment and getting started with the contribution process. It was around this time that I found out about the Vala programming language and a prospective GSoC project that piqued my interest. I have always been fascinated by the low-level implementation details of how compilers work, and this project was related to the Vala compiler.
Learning the Vala language

Vala is an object-oriented programming language built on top of the GObject type system. It contains many high-level abstractions which the native C ABI does not provide, making it an ideal language for building GNOME applications. Vala is not widely used, so there are few online resources to learn it; however, the Vala tutorial provides robust documentation and is a good starting point for beginners. The best way to learn something is by doing, so I decided to learn Vala by building apps using GTK and Libadwaita. However, being completely new to the GNOME environment, this approach brought me limited success. I haven't yet learnt GTK or Libadwaita, but I did manage to understand Vala language constructs by reading through the source code of some Vala applications. I worked on some issues in the Vala repository, and this gave me a sneak peek into the workings of the Vala compiler. I got to learn how it builds the Vala AST and compiles Vala code into GObject C, although I still have a lot to learn before I understand how it is all put together.
My GSoC Project

As part of my GSoC project, we have to add support for the latest GIR attributes to the Vala compiler and the Vala API generator. We can do this by including these attributes in the parsing and generation of GIR files, and linking them with Vala language constructs where needed. This also involves adding test cases for these attributes to the test suite to make sure that the .gir and .vapi files are generated correctly. Once this is done, we need to work on Valadoc. Valadoc parses documentation in the Gtkdoc format, but this project involves making it parse documentation in the GI-Docgen format too. Adding this support will require creating some new files and modifying the documentation parser in Valadoc. After implementing this support, the plan is to modernise the appearance of valadoc.org. The website was clearly built a while ago and needs a redesign to make it more interactive and user friendly. This will require changing some of the website's CSS styles and JavaScript code. With the completion of this project, the look of the website will be brought on par with the online documentation of any other programming language.

Thanks to my mentor Lorenz Wildberg, I now have a coherent idea about what needs to be done in the project and we have a workable plan to achieve it. I'm very optimistic about the project, and I'm sure that we will be able to meet all the project goals within the stipulated timeline. In the coming few days I plan to read the Vala documentation and understand the codebase so that I can get started with achieving project objectives in the coding period.





Marcus Lundblad: May Maps

Sat, 11/05/2024 - 3:23 PM

 

It's about time for the spring update of goings on in Maps!

There's been some changes going on since the release of 46.


Vector Map by Default

The vector map is now being used by default, and with it Maps supports dark mode (also, the old raster tiles have been retired, though there still exists the hidden feature of running with a local tile directory, which was never really intended for general use but more as a way to experiment with offline map support). The plan is to eventually support proper offline maps with a way to download areas in a more user-friendly and organized way than providing a raw path…

Dark Improvements

Following the introduction of dark map support, the default rendering of public transit routes and lines has been improved for dark mode to give better contrast (something that was trickier before, when the map view was always light even when the rest of the UI, such as the sidebar itinerary, was shown in dark mode).



More Transit Mode Icons

Jakub Steiner and Sam Hewitt have been working on designing icons for some additional modes of transit, such as trolley buses, taxis, and monorail.

Trolley bus routes

This screenshot was something I “mocked” by changing the icon for regular buses to temporarily use the newly designed trolley bus icon, as we don't currently have any supported transit route provider in Maps that exposes trolley bus routes. I originally made this for an excursion with a vintage trolley bus I was going to attend, but that was cancelled at the last minute because of technical issues.

Showing a taxi station

And above we have the new taxi icon (this could be used both for showing on-demand communal taxi transit and for taxi stations on the map).

These icons have not yet been merged into Maps, as there's still some work going on finalizing their design. But I thought I still wanted to show them here…

Brand Logos

For a long time we have shown a title image from Wikidata or Wikipedia for places when available. Now we show a logo image (using the Wikidata reference for the brand of a venue) when one is available and the place has no dedicated article.

Explaining Place Types

Sometimes it can be a bit hard to determine the exact place type from the icons shown on the map, especially for more generic types such as shops, where we have dedicated icons for some and a generic icon for the rest. We now also show the type in the place bubble (using the translations extracted from the iD OSM editor).


Places with a name show the icon and type description below the name, dimmed.


For unnamed places we show the icon and type instead of the name, in the same bold style as the name would normally use.

Additional Things

Another detail worth mentioning is that you can now clear the currently shown route from the context menu, so you won't have to open the sidebar again and manually erase the filled-in destinations.

 

Another improvement is that if you have already entered a starting point with ”Route from Here“, or entered an address in the sidebar, and then use the “Directions” button from a place bubble, that starting point will now be used instead of the current location.

Besides this, some old commented-out code was also removed… but there are no screenshots of that, I'm afraid ☺

Tanmay Patil: Acrostic Generator: Part one

Sat, 11/05/2024 - 7:15 AM

It’s been a while since my last blog post, which was about my Google Summer of Code project. Even though it has been months since I completed GSoC, I have continued working on the project, increasing acrostic support in Crosswords.

We’ve added support for loading Acrostic Puzzles in Crosswords, but now it’s time to create some acrostics.

Now that Crosswords has acrostic support, I can use screenshots to help explain what an acrostic is and how the puzzle works.

Let’s load an Acrostic in Crosswords first.

Acrostic Puzzle loaded in Crosswords

The main grid here represents the QUOTE: “CARNEGIE VISITED PRINCETON…”, and if we read out the first letter of each clue answer (displayed on the right), it forms the SOURCE. For example, in the image above, the name of the author is “DAVID ….”.
Now, the interesting part is how the answers to the clues fit in with the SOURCE.

Let’s consider another small example:
QUOTE: “To be yourself in a world that is constantly trying to make you something else is the greatest accomplishment.”
AUTHOR: “Ralph Waldo Emerson”

If you look closely, the letters of the SOURCE are all part of the QUOTE. One set of answers to the clues could be:

Solutions generated using AcrosticGenerator. Read the first letter of each Answer from top to bottom. It forms ‘Ralph Waldo Emerson’.

Coding the Acrostic Generator

As seen above, to create an acrostic, we need two things: the QUOTE and the SOURCE string. These will be our inputs from the user.

Additionally, we need to set some constraints on the generated word size. By default, we have set MIN_WORD_SIZE to 3 and MAX_WORD_SIZE to 20. However, users are allowed to change these settings.

Step 1: Check if we can create an acrostic from the given input

You must have already guessed it. We check if the characters in the SOURCE are available in the QUOTE string. To do this, we utilize the IPuzCharset data structure.
Without going into much detail, it simply stores characters and their frequencies.
For example, for the string “MAX MIN”, its charset looks like [{‘M’: 2}, {‘A’: 1}, {‘X’: 1}, {‘I’: 1}, {’N’: 1}].

First, we build a charset of the source string and then iterate through it. For the source string to be valid, the count of every character in the source charset should be less than or equal to the count of that character in the quote charset.

for (iter = ipuz_charset_iter_first (source_charset);
     iter;
     iter = ipuz_charset_iter_next (iter))
  {
    IPuzCharsetIterValue value;
    value = ipuz_charset_iter_get_value (iter);

    if (value.count > ipuz_charset_get_char_count (quote_charset, value.c))
      {
        // Source characters are missing in the provided quote
        return FALSE;
      }
  }

return TRUE;

Since we now have a word size constraint, we need to add one more check.
Let’s understand it through an example.

QUOTE: LIFE IS TOO SHORT
SOURCE: TOLSTOI
MIN_WORD_SIZE: 3
MAX_WORD_SIZE: 20

Since, MIN_WORD_SIZE is set to 3, the generated answers should have a minimum of three letters in them.

Possible solutions considering every solution
has a length equal to the minimum word size:
T _ _
O _ _
L _ _
S _ _
T _ _
O _ _
O _ _

If we take the sum of the number of letters in the above solutions, it’s 21, which is greater than the number of letters in the QUOTE string (14). So, we can’t create an acrostic from the above input.

if ((n_source_characters * min_word_size) > n_quote_characters)
  {
    // Quote text is too short to accommodate the specified minimum word size for each clue
    return FALSE;
  }

While writing this blog post, I found out we called this error “SOURCE_TOO_SHORT”. It should be “QUOTE_TOO_SHORT” / “SOURCE_TOO_LARGE”.

Stay tuned for further implementation in the next post!

Sudhanshu Tiwari: Being a beginner open source contributor

Sat, 11/05/2024 - 5:26 AM
In October 2023, with quite a bit of experience in web development and familiarity with programming concepts in general, I was in search of avenues where I could put this experience to good use. But where could a beginner programmer get the opportunity to work with experienced developers? And that too, in a real-world project with a user-base in millions...
Few people would hire a beginner! We all know the paradox of companies intent on hiring experienced people for entry-level roles. That's where it gets tricky, because we can't really gain experience without being hired. Well... maybe we can :)
What is open source software?


Open source software is source code made available to the public, allowing anyone to view, modify, and distribute the software. It is free and the people who work to improve it are most often not paid. The source code of open source software provides a good opportunity for a beginner to understand, work on, and modify a project to improve its usability.

Why contribute to open source?
Open source contribution has many benefits. It is especially beneficial for a beginner; working on open source projects hones one's skills as a developer and provides a good foundation for a future career in the software industry. Here are some of the major benefits that open source contribution provides:

  • Experience of working on large projects
     Open source projects often have large and complex codebases. Working on such a project requires one to understand the ins and outs of the codebase, how things are put together, and how they work together to result in fully functioning software.

  • Read code written by others
     The source code of open source software can be read and modified by anyone; this allows hundreds of people to make code contributions ranging from fixing bugs and adding new features to just updating the documentation. To do any of this, we need to read and understand the code written by others and implement our own changes. This allows the contributor to learn good programming practices like writing readable and well-documented code, using version control tools like Git and GitHub correctly, etc.

  • Ability to work in a team
     An open source project is a joint endeavour and requires collaboration. Any code that is written in the project must be readable and understandable by every other contributor, and this team effort results in efficient and correctly functioning software. Often, when enhancing the software by adding a new feature or updating legacy code, people need to reach a consensus on what features need to be implemented, how they should be implemented, and what platforms/libraries should be used. This requires discussion with contributors and users, and hones one's ability to work in a team, which is an invaluable skill to have in software development.

  • Opportunity to work with experienced developers
     Since many projects are quite old and have been around for a long time, they have many maintainers with years of experience who have been writing and fixing the codebase for years. This is a good opportunity for a beginner to learn the best programming practices from people with more experience. This helps them become employable and gain the "experience" that companies demand from prospective employees.

  • Using programming skills to benefit end users
     Large projects often have millions of dedicated users who use the software on a daily basis. Like VLC Media Player or Chromium, these projects are quite popular and have a loyal fanbase. If anyone contributes to make the software better, it improves the user experience for millions of people. This contribution might be a small optimization that makes the software load faster, or a new feature that users have been requesting - in any case, it ends up improving the experience for day-to-day users and makes a meaningful impact on the community.

  • A chance to network with others
     Contributing to open source is a fun and pleasant experience. It allows us to meet people from different backgrounds with different levels of experience. Contributors are often geographically distributed but have the same goal - to ensure the success of the project by benefiting its end users. This common goal allows us to connect and interact with people from diverse backgrounds and with different opinions. It ends up being an enriching learning journey that broadens our perspectives and makes us better developers.
Interested in contributing to open source? This article provides a step-by-step guide on how you can get started with open source contribution. In case of any doubts, please feel free to contact me by email at sudhanshu.98t@gmail.com or connect with me on LinkedIn!