
Feed aggregator

More than enough is too much.

Planet Debian - Mon, 22/10/2018 - 4:25am

*sigh*. CoC should NOT be a beating stick, of course...

子貢問、「師与商也孰賢乎。」子曰、「師也過。商也不及。」曰、「然則師愈与。」子曰、「過猶不及也。」

(From the Analects: Zigong asked, "Who is the worthier, Shi or Shang?" The Master said, "Shi goes too far; Shang falls short." "Then is Shi the better?" The Master said, "Going too far is as bad as falling short.")

Hideki Yamane noreply@blogger.com Henrich plays with Debian

BGP LLGR: robust and reactive BGP sessions

Planet Debian - Sun, 21/10/2018 - 9:44pm

On a BGP-routed network with multiple redundant paths, we seek to achieve two goals concerning reliability:

  1. A failure on a path should quickly bring down the related BGP sessions. A common expectation is to recover in less than a second by diverting the traffic to the remaining paths.

  2. As long as a path is operational, the related BGP sessions should stay up, even under duress.

Detecting failures fast: BFD

To quickly detect a failure, BGP can be associated with BFD, a protocol to detect faults in bidirectional paths [1], defined in RFC 5880 and RFC 5882. BFD can use very low timers, like 100 ms.
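
For illustration only (this snippet is not from the original article; the interface pattern and exact values are assumptions), a BIRD 1.6 BFD stanza with such aggressive timers might look like this:

protocol bfd {
    interface "eth*" {
        min rx interval 100 ms;   # expect a BFD packet from the peer every 100 ms
        min tx interval 100 ms;   # send our own BFD packets every 100 ms
        multiplier 5;             # declare the session down after 5 missed packets
    };
}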

However, when BFD runs in a process on top of a generic kernel [2], notably when running BGP on the host, it is not unexpected to lose a few BFD packets under adverse conditions: the daemon handling the BFD sessions may not get enough CPU to answer in a timely manner. In this scenario, it is not unlikely for all the BGP sessions to go down at the same time, creating an outage, as depicted in the last case in the diagram below.

Examples of failures on a network using BGP as the underlying routing protocol. A link failure is detected by BFD and the failed path is removed from the ECMP route. However, when high CPU usage on the bottom router prevents BFD packets from being processed in time, all paths are removed.

So far, we have two conflicting options:

  • lower the BFD timers to quickly detect a failure along the path, or
  • raise the BFD timers to ensure BGP sessions remain operational.
Fix false positives: BGP LLGR

Long-lived BGP Graceful Restart is a new BGP capability to retain stale routes for a longer period after a session failure, while treating them as least-preferred. It also defines a well-known community to share this information with other routers. It is defined in the Internet-Draft draft-uttaro-idr-bgp-persistence-04 and several implementations already exist:

  • Juniper JunOS (since 15.1, see the documentation),
  • Cisco IOS XR (unfortunately only for VPN and FlowSpec families),
  • BIRD (since 1.6.5 and 2.0.3, both yet to be released, sponsored by Exoscale), and
  • GoBGP (since 1.33).

The following illustration shows what happens during two failure scenarios. As in the case without LLGR, in ❷, a link failure is detected by BFD and the failed path is removed from the route, as two other paths remain with a higher preference. A couple of minutes later, the faulty path has its stale timer expired and will not be used anymore. Shortly after, in ❸, the bottom router experiences high CPU usage, preventing BFD packets from being processed in time. The BGP sessions are closed and the remaining paths become stale, but as there is no better path left, they are still used until the LLGR timer expires. In the meantime, we expect the BGP sessions to resume.

Examples of failures on a network using BGP as the underlying routing protocol with LLGR enabled.

From the point of view of the top router, the first failed path was considered as stale because the BGP session with R1 was down. However, during the second failure, the two remaining paths were considered as stale because they were tagged with the well-known community LLGR_STALE (65535:6) by R2 and R3.

Another interesting point of BGP LLGR is the ability to restart the BGP daemon without any impact—as long as all paths keep a steady state shortly before and during restart. This is quite interesting when running BGP on the host [3].

BIRD

Let’s see how to configure BIRD 1.6. As BGP LLGR is built on top of the regular BGP graceful restart (BGP GR) capability, we need to enable both. The timer for BGP LLGR starts after the timer for BGP GR. During a regular graceful restart, routes are kept with the same preference. Therefore it is important to set this timer to 0.

template bgp BGP_LLGR {
    bfd graceful;
    graceful restart yes;
    graceful restart time 0;
    long lived graceful restart yes;
    long lived stale time 120;
}

When a problem appears on the path, the BGP session goes down and the LLGR timer starts:

$ birdc show protocol R1_1 all
name     proto    table    state  since       info
R1_1     BGP      master   start  11:20:17    Connect
  Preference:     100
  Input filter:   ACCEPT
  Output filter:  ACCEPT
  Routes:         1 imported, 0 exported, 0 preferred
  Route change stats:     received   rejected   filtered    ignored   accepted
    Import updates:              2          0          0          0          4
    Import withdraws:            0          0        ---          0          0
    Export updates:             12         10          0        ---          2
    Export withdraws:            1        ---        ---        ---          0
  BGP state:          Connect
    Neighbor address: 2001:db8:104::1
    Neighbor AS:      65000
    Neighbor graceful restart active
    LL stale timer:   112/-

The related paths are marked as stale (as reported by the s in 100s) and tagged with the well-known community LLGR_STALE:

$ birdc show route 2001:db8:10::1/128 all
2001:db8:10::1/128 via 2001:db8:204::1 on eth0.204 [R1_2 10:35:01] * (100) [i]
        Type: BGP unicast univ
        BGP.origin: IGP
        BGP.as_path:
        BGP.next_hop: 2001:db8:204::1 fe80::5254:3300:cc00:5
        BGP.local_pref: 100
                   via 2001:db8:104::1 on eth0.104 [R1_1 11:22:51] (100s) [i]
        Type: BGP unicast univ
        BGP.origin: IGP
        BGP.as_path:
        BGP.next_hop: 2001:db8:104::1 fe80::5254:3300:6800:5
        BGP.local_pref: 100
        BGP.community: (65535,6)

We are left with only one path for the route in the kernel:

$ ip route show 2001:db8:10::1
2001:db8:10::1 via 2001:db8:204::1 dev eth0.204 proto bird metric 1024 pref medium

To upgrade BIRD without impact, it needs to run with the -R flag and the graceful restart yes directive should be present in the kernel protocols. Then, before upgrade, stop it using SIGKILL instead of SIGTERM to avoid a clean close of the BGP sessions.
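
As a sketch of what this could look like in the configuration (only the graceful restart yes directive and the -R flag come from the description above; the rest of the stanza is assumed):

protocol kernel {
    graceful restart yes;   # keep the kernel routes in place while BIRD restarts
    # ... usual kernel protocol options ...
}

BIRD is then started as bird -R and, just before the upgrade, stopped with SIGKILL rather than SIGTERM so the BGP sessions are not closed cleanly.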

Juniper JunOS

With JunOS, we only have to enable BGP LLGR for each family—assuming BFD is already configured:

# Enable BGP LLGR
edit protocols bgp group peers family inet6 unicast
set graceful-restart long-lived restarter stale-time 2m

Once a path is failing, the associated BGP session goes down and the BGP LLGR timer starts:

> show bgp neighbor 2001:db8:104::4
Peer: 2001:db8:104::4+179 AS 65000 Local: 2001:db8:104::1+57667 AS 65000
  Group: peers                 Routing-Instance: master
  Forwarding routing-instance: master
  Type: Internal    State: Connect        Flags: <>
  Last State: Active        Last Event: ConnectRetry
  Last Error: None
  Export: [ LOOPBACK NOTHING ]
  Options: <Preference HoldTime Ttl AddressFamily Multipath Refresh>
  Options: <BfdEnabled LLGR>
  Address families configured: inet6-unicast
  Holdtime: 6 Preference: 170
  NLRI inet6-unicast:
  Number of flaps: 2
  Last flap event: Restart
  Time until long-lived stale routes deleted: inet6-unicast 00:01:05
  Table inet6.0 Bit: 20000
    RIB State: BGP restart is complete
    Send state: not advertising
    Active prefixes:              0
    Received prefixes:            1
    Accepted prefixes:            1
    Suppressed due to damping:    0
    LLGR-stale prefixes:          1

The associated path is marked as stale and is therefore inactive as there are better paths available:

> show route 2001:db8:10::4 extensive
[…]
BGP    Preference: 170/-101
       Source: 2001:db8:104::4
       Next hop type: Router, Next hop index: 778
       Next hop: 2001:db8:104::4 via em1.104, selected
       Protocol next hop: 2001:db8:104::4
       Indirect next hop: 0xb1d27c0 1048578 INH Session ID: 0x15c
       State: <Int Ext>
       Inactive reason: LLGR stale
       Local AS: 65000 Peer AS: 65000
       Age: 4  Metric2: 0
       Communities: llgr-stale
       Accepted LongLivedStale
       Localpref: 100
       Router ID: 1.0.0.4
[…]

Have a look at the GitHub repository for the complete configurations as well as the expected outputs during normal operations. There is also a variant with the configurations of BIRD and JunOS when acting as a BGP route reflector. Now that FRR got BFD support, I hope it will get LLGR support as well.

  1. With point-to-point links, BGP can immediately detect a failure without BFD. However, with a pair of fibers, the failure may be unidirectional, leaving it undetected by the other end until the expiration of the hold timer.

  2. On a Juniper MX, BFD is usually handled directly by the real-time microkernel running on the packet forwarding engine. The BFD control packet contains a bit indicating if BFD is implemented by the forwarding plane or by the control plane. Therefore, you can check with tcpdump how a router implements BFD. Here is an example where 10.71.7.1, a Linux host running BIRD, implements BFD in the control plane, while 10.71.0.3, a Juniper MX, does not:

    $ sudo tcpdump -pni vlan181 port 3784
    IP 10.71.7.1 > 10.71.0.3: BFDv1, Control, State Up, Flags: [none]
    IP 10.71.0.3 > 10.71.7.1: BFDv1, Control, State Up, Flags: [Control Plane Independent]


  3. Such a feature is the selling point of BGP graceful restart. However, without LLGR, non-functional paths are kept with the same preference and are not removed from ECMP routes.

Vincent Bernat https://vincent.bernat.ch/en Vincent Bernat

Web browser integration of VLC with Bittorrent support

Planet Debian - Sun, 21/10/2018 - 9:50am

Bittorrent is, as far as I know, currently the most efficient way to distribute content on the Internet. It is used by all sorts of content providers, from national TV stations like NRK and Linux distributors like Debian and Ubuntu to, of course, the Internet Archive.

Almost a month ago a new package adding Bittorrent support to VLC became available in Debian testing and unstable. To test it, simply install it like this:

apt install vlc-plugin-bittorrent

Since the plugin was made available for the first time in Debian, several improvements have been made to it. In version 2.2-4, now available in both testing and unstable, a desktop file is provided to teach browsers to start VLC when the user clicks on torrent files or magnet links. The last part is thanks to me finally understanding what the strange x-scheme-handler style MIME types in desktop files are used for. By adding x-scheme-handler/magnet to the MimeType entry in the desktop file, at least the browsers Firefox and Chromium will offer to start VLC when selecting a magnet URI on a web page. The end result is that now, with the plugin installed in Buster and Sid, one can visit any Internet Archive page with movies using a web browser and click on the torrent link to start streaming the movie.
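
For illustration, the relevant part of such a desktop file could look like the fragment below; the file name, Name and Exec values are hypothetical, only the x-scheme-handler/magnet MIME type comes from the description above:

# hypothetical vlc-bittorrent.desktop fragment
[Desktop Entry]
Type=Application
Name=VLC media player (Bittorrent)
Exec=vlc %U
MimeType=application/x-bittorrent;x-scheme-handler/magnet;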

Note, there are still some misfeatures in the plugin. One is the fact that it will hang and block VLC from exiting until the torrent streaming starts. Another is the fact that it will pick and play a random file in a multi-file torrent. This is not always the video file you want. Combined with the first, it can be a bit hard to get the video streaming going. But when it works, it seems to do a good job.

For the Debian packaging, I would love to find a good way to test if the plugin works with VLC using autopkgtest. I tried, but do not know enough about the inner workings of VLC to get it working. For now the autopkgtest script only checks if the .so file was successfully loaded by VLC. If you have any suggestions, please submit a patch to the Debian bug tracking system.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Petter Reinholdtsen http://people.skolelinux.org/pere/blog/ Petter Reinholdtsen - Entries tagged english

Weblate 3.2.2

Planet Debian - Sat, 20/10/2018 - 1:00pm

Weblate 3.2.2 has been released today. It's the second bugfix release for 3.2, fixing several minor issues which appeared in the release.

Full list of changes:

  • Remove no longer needed Babel dependency.
  • Updated language definitions.
  • Improve documentation for addons, LDAP and Celery.
  • Fixed enabling new dos-eol and auto-java-messageformat flags.
  • Fixed running setup.py test from PyPI package.
  • Improved plurals handling.
  • Fixed translation upload API failure in some corner cases.
  • Fixed updating Git configuration in case it was changed manually.

If you are upgrading from an older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used on https://hosted.weblate.org/ as an official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared, and you can influence it by expressing support for individual issues, either by comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Michal Čihař https://blog.cihar.com/archives/debian/ Michal Čihař's Weblog, posts tagged by Debian

So I wrote a basic BASIC

Planet Debian - Sat, 20/10/2018 - 11:01am

So back in June I challenged myself to write a BASIC interpreter in a weekend. The next time I mentioned it was to admit defeat. I didn't really explain in any detail, because I thought I'd wait a few days and try again and I was distracted at the time I wrote my post.

As it happened, that was over four months ago, so clearly it didn't work out. The reason was that I was getting too bogged down in the wrong kind of details. I'd got my heart set on doing this the "modern" way:

  • Write a lexer to split the input into tokens
    • LINE-NUMBER:10, PRINT, "Hello, World"
  • Then I'd take those tokens and form an abstract syntax tree.
  • Finally I'd walk the tree evaluating as I went.

The problem is that almost immediately I ran into problems: my naive approach didn't have a good solution for identifying line-numbers. So I was too paralysed to proceed much further.

I sidestepped the initial problem and figured maybe I should just have a series of tokens, somehow, which would be keyed off the line-number. Obviously when you're interpreting "traditional" BASIC you need to care about lines, and treat them as important, because you need to handle fun things like this:

10 PRINT "STEVE ROCKS"
20 GOTO 10

Anyway, I'd parse each line; assuming only a single statement upon a line (ha!), you can divide it into:

  • Number - i.e. line-number.
  • Statement.
  • Newline to terminate.

Then you could have:

code{blah} ..
code[10] = "PRINT STEVE ROCKS"
code[20] = "GOTO 10"

Obviously you spot the problem there, if you think it through. Anyway. I've been thinking about it off and on since then, and the end result is that for the past two evenings I've been mostly writing a BASIC interpreter, in golang, in 20-30 minute chunks.

The way it works is as you'd expect (don't make me laugh, bitterly):

  • Parse the input into tokens.
  • Store those as an array.
  • Interpret each token.
    • No AST
    • No complicated structures.
    • Your program is literally an array of tokens.

I cheated, horribly, in parsing line-numbers which turned out to be exactly the right thing to do. The output of my naive lexer was:

INT:10, PRINT, STRING:"Hello World", NEWLINE, INT:20, GOTO, INT:10

Guess what? If you (secretly) prefix a newline to the program you're given, you can identify line-numbers just by keeping track of your previous token in the lexer. A line-number is any number that follows a newline. You don't even have to care if they are sequential. (Hrm. Bug-report?)
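
A minimal sketch of that trick, written here in Python rather than the Go of the real interpreter (so all the names are assumptions), could look like this:

# Classify an INT token as a line number when it follows a NEWLINE.
def mark_line_numbers(tokens):
    result = []
    previous = ("NEWLINE", None)   # pretend the program starts just after a newline
    for kind, value in tokens:
        if kind == "INT" and previous[0] == "NEWLINE":
            result.append(("LINE_NUMBER", value))
        else:
            result.append((kind, value))
        previous = (kind, value)
    return result

tokens = [("INT", 10), ("PRINT", None), ("STRING", "Hello World"), ("NEWLINE", None),
          ("INT", 20), ("GOTO", None), ("INT", 10)]
print(mark_line_numbers(tokens))
# The INTs 10 and 20 after newlines become LINE_NUMBERs; the GOTO target stays an INT.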

Once you have an array of tokens it becomes almost insanely easy to process the stream and run your interpreter:

program[] = { LINE_NUMBER:10, PRINT, "Hello", NEWLINE, LINE_NUMBER:20 .. }

let offset := 0
for( offset < len(program) ) {
    token = program[offset]
    if ( token == GOTO )  { handle_goto() ; }
    if ( token == PRINT ) { handle_print() ; }
    .. handlers for every other statement
    offset++
}

Make offset a global. And suddenly GOTO 10 becomes:

  • Scan the array, again, looking for "LINE_NUMBER:10".
  • Set offset to that index.

Magically it all just works. Add a stack, and GOSUB/RETURN are handled with ease too, by pushing/popping the offset onto and off it.
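
In the same hedged Python-style sketch (the real code is Go, so these helpers are illustrative only), GOTO and GOSUB/RETURN boil down to moving the offset and keeping a return stack:

def find_line(program, target):
    # Scan the token array for the wanted LINE_NUMBER token and return its index.
    for i, token in enumerate(program):
        if token == ("LINE_NUMBER", target):
            return i
    raise RuntimeError("no such line: %d" % target)

def handle_goto(program, target):
    return find_line(program, target)              # new value for the global offset

def handle_gosub(program, offset, target, return_stack):
    return_stack.append(offset)                    # remember where to come back to
    return find_line(program, target)

def handle_return(return_stack):
    return return_stack.pop()                      # resume just after the matching GOSUB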

In fact even the FOR-loop is handled in only a few lines of code - most of the magic happening in the handler for the "NEXT" statement (because that's the part that needs to decide if it needs to jump back to the body of the loop, or continue running).
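
A NEXT handler can be sketched the same way (again illustrative; the variable and loop-stack handling in the real interpreter may differ): the FOR statement records the loop variable, limit, step and the offset of the loop body, and NEXT either jumps back or falls through.

def handle_next(variables, loop_stack, offset):
    var, limit, step, body_offset = loop_stack[-1]
    variables[var] += step
    if variables[var] <= limit:      # assuming a positive STEP
        return body_offset           # loop again: jump back to the first token of the body
    loop_stack.pop()                 # loop finished: fall through to the token after NEXT
    return offset + 1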

OK, this is a basic BASIC, as it is missing primitives (CHR(), LEN, etc.) and it only cares about integers. But the code is wonderfully simple to understand, and the test-case coverage is pretty high.

I'll leave with an example:

 10 REM This is a program
 00 REM
 01 REM This program should produce 126 * 126 * 10
 02 REM   = 158760
 03 REM
 05 GOSUB 100
 10 FOR i = 0 TO 126
 20   FOR j = 0 TO 126 STEP 1
 30     FOR k = 0 TO 10
 40       LET a = i * j * k
 50     NEXT k
 60   NEXT j
 70 NEXT i
 75 PRINT a, "\n"
 80 END
100 PRINT "Hello, I'm multiplying your integers"
110 RETURN

Loops indented for clarity. Tokens in upper-case only for retro-nostalgia.

Find it here, if you care:

I had fun. Worth it.

I even "wrote" a "game":

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

GNOME Foundation Hackfest 2018

Planet Debian - Fri, 19/10/2018 - 5:38pm

This week, the GNOME Foundation Board of Directors met at the Collabora office in Cambridge, UK, for the second annual Foundation Hackfest. We were also joined by the Executive Director, Neil McGovern, and Director of Operations, Rosanna Yuen. This event was started by last year’s board and is a great opportunity for the newly-elected board to set out goals for the coming year and get some uninterrupted hacking done on policies, documents, etc. While it’s fresh in our mind, we wanted to tell you about some of the things we have been working on this week and what the community can hope to see in the coming months.

Wednesday: Goals

On Wednesday we set out to define the overall goals of the Foundation, so we could focus our activities for the coming years, ensuring that we were working on the right priorities. Neil helped to facilitate the discussion using the Charting Impact process. With that input, we went back to the purpose of the Foundation and mapped that to ten and five year goals, making sure that our current strategies and activities would be consistent with reaching those end points. This is turning out to be a very detailed and time-consuming process. We have made a great start, and hope to have something we can share for comments and input soon. The high level 10-year goals we identified boiled down to:

  • Sustainable project and foundation
  • Wider awareness and mindshare – being a thought leader
  • Increased user base

As we looked at the charter and bylaws, we identified a long-standing issue which we need to solve — there is currently no formal process to cover the “scope” of the Foundation in terms of which software we support with our resources. There is the release team, but that is only a subset of the software we support. We have some examples such as GIMP which “have always been here”, but at present there is no clear process to apply or be included in the Foundation. We need a clear list of projects that use resources such as CI, or have the right to use the GNOME trademark for the project. We have a couple of similar proposals from Allan Day and Carlos Soriano for how we could define and approve projects, and we are planning to work with them over the next couple of weeks to make one proposal for the board to review.

Thursday: Budget forecast

We started the second day with a review of the proposed forecast from Neil and Rosanna, because the Foundation’s financial year starts in October. We have policies in place to allow staff and committees to spend money against their budget without further approval being needed, which means that with no approved budget, it’s very hard for the Foundation to spend any money. The proposed budget was based off the previous year’s actual figures, with changes to reflect the increased staff headcount, increased spend on CI, increased staff travel costs, etc, and ensure after the year’s spending, we follow the reserves policy to keep enough cash to pay the foundation staff for a further year. We’re planning to go back and adjust a few things (internships, marketing, travel, etc) to make sure that we have the right resources for the goals we identified.

We had some “hacking time” in smaller groups to re-visit and clarify various policies, such as the conference and hackfest proposal/approval process, travel sponsorship process and look at ways to support internationalization (particularly to indigenous languages).

Friday: Foundation Planning

The Board started Friday with a board-only (no staff) meeting to make sure we were aligned on the goals that we were setting for the Executive Director during the coming year, informed by the Foundation goals we worked on earlier in the week. To avoid the “seven bosses” problem, there is one board member (myself) responsible for managing the ED’s priorities and performance. It’s important that I take advantage of the opportunity of the face to face meeting to check in with the Board about their feedback for the ED and things I should work together with Neil on over the coming months.

We also discussed a related topic, which is the length of the term that directors serve on the Foundation Board. With 7 staff members, the Foundation needs consistent goals and management from one year to the next, and the time demands on board members should be reduced from previous periods where the Foundation hasn’t had an Executive Director. We want to make sure that our “ten year goals” don’t change every year and undermine the strategies that we put in place and spend the Foundation resources on. We’re planning to change the Board election process so that each director has a two year term, so half of the board will be re-elected each year. This also prevents the situation where the majority of the Board is changed at the same election, losing continuity and institutional knowledge, and taking months for people to get back up to speed.

We finished the day with a formal board meeting to approve the budget, plus more hack time on various policies (and this blog!). Thanks to Collabora for use of their office space, food, and snacks – and thanks to my fellow Board members and the Foundation’s wonderful and growing staff team.

ramcq http://ramcq.net Robotic Tendencies

translation-finder 0.1

Planet Debian - Fri, 19/10/2018 - 4:30pm

Setting up translation components in Weblate can be tricky in some cases, especially if you lack knowledge of the translation format you are using. Also, this is something we wanted to automate from the very beginning, but there were always more pressing things to implement. Now the time has come, as I've just made the first beta release of translation-finder, a tool to help with this.

The translation-finder will look at a filesystem (e.g. a checked-out repository) and try to find translatable files. So far the heuristics are pretty simple, but they still detect most of the projects currently hosted on our hosted localization platform just fine. Still, if you find an issue with it, you're welcome to provide feedback in our issue tracker.

The integration into Weblate will come in the next weeks, and you will be able to enjoy this new feature in the 3.3 release.

Filed under: Debian English SUSE Weblate

Michal Čihař https://blog.cihar.com/archives/debian/ Michal Čihař's Weblog, posts tagged by Debian

Debian GSoC 2018 report

Planet Debian - Fri, 19/10/2018 - 10:26am

One of my major contributions to Debian in 2018 has been participation as a mentor and admin for Debian in Google Summer of Code (GSoC).

Here are a few observations about what happened this year, from my personal perspective in those roles.

Making a full report of everything that happens in GSoC is close to impossible. Here I consider issues that span multiple projects and the mentoring team. For details on individual projects completed by the students, please see their final reports posted in August on the mailing list.

Thanking our outgoing administrators

Nicolas Dandrimont and Sylvestre Ledru retired from the admin role after GSoC 2016, and Tom Marble has retired from the Outreachy administration role. We should be enormously grateful for the effort they have put in, as these are very demanding roles.

When the last remaining member of the admin team, Molly, asked for people to step in for 2018, knowing the huge effort involved, I offered to help out on a very temporary basis. We drafted a new delegation but didn't seek to have it ratified until the team evolves. We started 2018 with Molly, Jaminy, Alex and myself. The role needs at least one new volunteer with strong mentoring experience for 2019.

Project ideas

Google encourages organizations to put project ideas up for discussion and also encourages students to spontaneously propose their own ideas. This latter concept is a significant difference between GSoC and Outreachy that has caused unintended confusion for some mentors in the past. I have frequently put teasers on my blog, without full specifications, to see how students would try to respond. Some mentors are much more precise, telling students exactly what needs to be delivered and how to go about it. Both approaches are valid early in the program.

Student inquiries

Students start sending inquiries to some mentors well before GSoC starts. When Google publishes the list of organizations to participate (that was on 12 February this year), the number of inquiries increases dramatically, in the form of personal emails to the mentors, inquiries on the debian-outreach mailing list, the IRC channel and many project-specific mailing lists and IRC channels.

Over 300 students contacted me personally or through the mailing list during the application phase (between 12 February and 27 March). This is a huge number and makes it impossible to engage in a dialogue with every student. In the last years where I have mentored, 2016 and 2018, I've personally put a bigger effort into engaging other mentors during this phase and introducing them to some of the students who already made a good first impression.

As an example, Jacob Adams first inquired about my PKI/PGP Clean Room idea back in January. I was really excited about his proposals but I knew I simply didn't have the time to mentor him personally, so I added his blog to Planet Debian and suggested he put out a call for help. One mentor, Daniele Nicolodi replied to that and I also introduced him to Thomas Levine. They both generously volunteered and together with Jacob, ensured a successful project. While I originally started the clean room, they deserve all the credit for the enhancements in 2018 and this emphasizes the importance of those introductions made during the early stages of GSoC.

In fact, there were half a dozen similar cases this year where I have interacted with a really promising student and referred them to the mentor(s) who appeared optimal for their profile.

After my recent travels in the Balkans, a number of people from Albania and Kosovo expressed an interest in GSoC and Outreachy. The students from Kosovo found that their country was not listed in the application form but the Google team very promptly added it, allowing them to apply for GSoC for the first time. Kosovo still can't participate in the Olympics or the World Cup, but they can compete in GSoC now.

At this stage, I was still uncertain if I would mentor any project myself in 2018 or only help with the admin role, which I had only agreed to do on a very temporary basis until the team evolves. Nonetheless, the day before student applications formally opened (12 March) and after looking at the interest areas of students who had already made contact, I decided to go ahead mentoring a single project, the wizard for new students and contributors.

Student selections

The application deadline closed on 27 March. At this time, Debian had 102 applications, an increase over the 75 applications from 2016. Five applicants were female, including three from Kosovo.

One challenge we've started to see is that since Google reduced the stipend for GSoC, Outreachy appears to pay more in many countries. Some women put more effort into an Outreachy application or don't apply for GSoC at all, even though there are far more places available in GSoC each year. GSoC typically takes over 1,000 interns in each round while Outreachy can only accept approximately 50.

Applicants are not evenly distributed across all projects. Some mentors/projects only receive one applicant and then mentors simply have to decide if they will accept the applicant or cancel the project. Other mentors receive ten or more complete applications and have to spend time studying them, comparing them and deciding on the best way to rank them and make a decision.

Given the large number of project ideas in Debian, we found that the Google portal didn't allow us to use enough category names to distinguish them all. We contacted the Google team about this and they very quickly increased the number of categories we could use; this made it much easier to tag the large number of applications so that each mentor could filter the list and only see their own applicants.

The project I mentored personally, a wizard for helping new students get started, attracted interest from 3 other co-mentors and 10 student applications. To help us compare the applications and share data we gathered from the students, we set up a shared spreadsheet using Debian's Sandstorm instance and Ethercalc. Thanks to Asheesh and Laura for setting up and maintaining this great service.

Slot requests

Switching from the mentor hat to the admin hat, we had to coordinate the requests from each mentor to calculate the total number of slots we wanted Google to fund for Debian's mentors.

Once again, Debian's Sandstorm instance, running Ethercalc, came to the rescue.

All mentors were granted access, reducing the effort for the admins and allowing a distributed, collective process of decision making. This ensured mentors could see that their slot requests were being counted correctly but it means far more than that too. Mentors put in a lot of effort to bring their projects to this stage and it is important for them to understand any contention for funding and make a group decision about which projects to prioritize if Google doesn't agree to fund all the slots.

Management tools and processes

Various topics were discussed by the team at the beginning of GSoC.

One discussion was about the definition of "team". Should the new delegation follow the existing pattern, reserving the word "team" for the admins, or should we move to the convention followed by the DebConf team, where the word "team" encompasses a broader group of the volunteers? A draft delegation text was prepared but we haven't asked for it to be ratified; this is a pending task for the 2019 team (more on that later).

There was discussion about the choice of project management tools, keeping with Debian's philosophy of only using entirely free tools. We compared various options, including Redmine with the Agile (Kanban) plugin, Kanboard (as used by DebConf team), and more Sandstorm-hosted possibilities, such as Wekan and Scrumblr. Some people also suggested ideas for project management within their Git repository, for example, using Org-mode. There was discussion about whether it would be desirable for admins to run an instance of one of these tools to manage our own workflow and whether it would be useful to have all students use the same tool to ease admin supervision and reporting. Personally, I don't think all students need to use the same tool as long as they use tools that provide public read-only URLs, or even better, a machine-readable API allowing admins to aggregate data about progress.

Admins set up a Git repository for admin and mentor files on Debian's new GitLab instance, Salsa. We tried to put in place a process to synchronize the mentor list on the wiki, the list of users granted team access in Salsa and the list of mentors maintained in the GSoC portal. This could be taken further by asking mentors and students to put a Moin Category tag on the bottom of their personal pages on the wiki, allowing indexes to be built automatically.

Students accepted

On 23 April, the list of selected students was confirmed. Shortly afterward, a Debian blog appeared welcoming the students.

OSCAL 2018, Albania and Kosovo visit

I traveled to Tirana, Albania for OSCAL'18 where I was joined by two of the Kosovan students selected by Debian. They helped run the Debian booth, comprising a demonstration of software defined radio from Debian Hams.

Enkelena Haxhiu and I gave a talk together about communications technology. This was Enkelena's first talk. In the audience was Arjen Kamphuis; he was one of the last people to ask a question at the end. His recent disappearance is a disturbing mystery.

DebConf18

A GSoC session took place at DebConf18, the video is available here and includes talks from GSoC and Outreachy participants past and present.

Final results

Many of the students have already been added to Planet Debian where they have blogged about what they did and what they learned in GSoC. More will appear in the near future.

If you like their project, if you have ideas for an event where they could present it or if you simply live in the same region, please feel free to contact the students directly and help them continue their free software adventure with us.

Meeting more students

Google's application form for organizations like Debian asks us what we do to stay in contact with students after GSoC. Crossing multiple passes in the Swiss and Italian alps to find Sergio Alberti at Capo di Lago is probably one of the more exotic answers to that question.

Looking back at past internships

I first mentored students in GSoC 2013. Since then, I've been involved in mentoring a total of 12 students in GSoC and 3 interns in Outreachy as well as introducing many others to mentors and organizations. Several of them stay in touch and it's always interesting to hear about their successes as they progress in their careers and in their enjoyment of free software.

The Outreachy organizers have chosen a picture of two of my former interns, Urvika Gola (Outreachy 2016) and Pranav Jain (GSoC 2016) for the mentors page of their web site. This is quite fitting as both of them have remained engaged and become involved in the mentoring process.

Lessons from GSoC 2018, preparing for 2019

One of the big challenges we faced this year is that, as the new admin team was only coming together for the first time, we didn't have any policies in place before mentors and students started putting significant effort into their proposals.

Potential mentors start to put in significant effort from February, when the list of participating organizations is usually announced by Google. Therefore, it seems like a good idea to make any policies clear to potential mentors before the end of January.

We faced a similar challenge with selecting mentors to attend the GSoC mentor summit. While some ideas were discussed about the design of a selection process or algorithm, the admins fell back on the previous policy based on a random selection as mentors may have anticipated that policy was still in force when they signed up.

As I mentioned already, there are several areas where GSoC and Outreachy are diverging. This has already led to some unfortunate misunderstandings in both directions, for example, when people familiar with Outreachy rules have been unaware of GSoC differences and vice-versa; I'll confess to being one of several people who has been confused at least once. Mentors often focus on the projects and candidates and don't always notice the annual rule changes. Unfortunately, this requires involvement and patience from both the organizers and admins to guide the mentors through any differences at each step.

The umbrella organization question

One of the most contentious topics in Debian's GSoC 2018 program was the discussion of whether Debian can and should act as an umbrella organization for smaller projects who are unlikely to participate in GSoC in their own right.

As an example, in 2016, four students were mentored by Savoir Faire Linux (SFL), makers of the Ring project, under the Debian umbrella. In 2017, Ring joined the GNU Project and they mentored students under the GNU Project umbrella organization. DebConf17 coincidentally took place in Montreal, Canada, not far from the SFL headquarters and SFL participated as a platinum sponsor.

Google's Mentor Guide explicitly encourages organizations to consider this role, but does not oblige them to do so either:

Google’s program administrators actually look quite fondly on the umbrella organizations that participate each year.

For an organization like Debian, with our philosophy, independence from the cloud and distinct set of tools, such as the Salsa service mentioned earlier, being an umbrella organization gives us an opportunity to share the philosophy and working methods for mutual benefit while also giving encouragement to related projects that we use.

Some people expressed concern that this may cut into resources for Debian-centric projects, but it appears that Google has not limited the number of additional places in the program for this purpose. This is one of the significant differences with Outreachy, where the number of places is limited by funding constraints.

Therefore, if funding is not a constraint, I feel that the most important factor to evaluate when considering this issue is the size and capacity of the admin team. Google allows up to five people to be enrolled as admins and if enough experienced people volunteer, it can be easier for everybody whereas with only two admins, the minimum, it may not be feasible to act as an umbrella organization.

Within the team, we observed various differences of opinion: for example some people were keen on the umbrella role while others preferred to restrict participation to Debian-centric projects. We have the same situation with Outreachy: some mentors and admins only want to do GSoC, while others only do Outreachy and there are others, like myself, who have supported both programs equally. In situations like this, nobody is right or wrong.

Once that fundamental constraint, the size of the admin team, is considered, I personally feel that any related projects engaged on this basis can be evaluated for a wide range of synergies with the Debian community, including the people, their philosophy, the tools used and the extent to which their project will benefit Debian's developers and users. In other words, this doesn't mean any random project can ask to participate under the Debian umbrella but those who make the right moves may have a chance of doing so.

Financial

Google pays each organization an allowance of USD 500 for each slot awarded to the organization, plus some additional funds related to travel. This generally corresponds to the number of quality candidates identified by the organization during the selection process, regardless of whether the candidate accepts an internship or not. Where more than one organization requests funding (a slot) for the same student, both organizations receive a bounty, we had at least one case like this in 2018.

For 2018, Debian has received USD 17,200 from Google.

GSoC 2019 and beyond

Personally, as I indicated in January that I would only be able to do this on a temporary basis, I'm not going to participate as an admin in 2019 so it is a good time for other members of the community to think about the role. Each organization who wants to participate needs to propose a full list of admins to Google in January 2019, therefore, now is the time for potential admins to step forward, decide how they would like to work together as a team and work out the way to recruit mentors and projects.

Thanks to all the other admins, mentors, the GSoC team at Google, the Outreachy organizers and members of the wider free software community who supported this initiative in 2018. I'd particularly like to thank all the students though, it is really exciting to work with people who are so open minded, patient and remain committed even when faced with unanticipated challenges and adversity.

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Release 0.2 of free software archive system Nikita announced

Planet Debian - Thu, 18/10/2018 - 2:40pm

This morning, the new release of the Nikita Noark 5 core project was announced on the project mailing list. The free software solution is an implementation of the Norwegian archive standard Noark 5 used by government offices in Norway. These were the changes in version 0.2 since version 0.1.1 (from NEWS.md):

  • Fix typos in REL names
  • Tidy up error message reporting
  • Fix issue where we used Integer.valueOf(), not Integer.getInteger()
  • Change some String handling to StringBuffer
  • Fix error reporting
  • Code tidy-up
  • Fix issue using static non-synchronized SimpleDateFormat to avoid race conditions
  • Fix problem where deserialisers were treating integers as strings
  • Update methods to make them null-safe
  • Fix many issues reported by coverity
  • Improve equals(), compareTo() and hash() in domain model
  • Improvements to the domain model for metadata classes
  • Fix CORS issues when downloading document
  • Implementation of case-handling with registryEntry and document upload
  • Better support in Javascript for OPTIONS
  • Adding concept description of mail integration
  • Improve setting of default values for GET on ny-journalpost
  • Better handling of required values during deserialisation
  • Changed tilknyttetDato (M620) from date to dateTime
  • Corrected some opprettetDato (M600) (de)serialisation errors.
  • Improve parse error reporting.
  • Started on OData search and filtering.
  • Added Contributor Covenant Code of Conduct to project.
  • Moved repository and project from Github to Gitlab.
  • Restructured repository, moved code into src/ and web/.
  • Updated code to use Spring Boot version 2.
  • Added support for OAuth2 authentication.
  • Fixed several bugs discovered by Coverity.
  • Corrected handling of date/datetime fields.
  • Improved error reporting when rejecting during deserialization.
  • Adjusted default values provided for ny-arkivdel, ny-mappe, ny-saksmappe, ny-journalpost and ny-dokumentbeskrivelse.
  • Several fixes for korrespondansepart*.
  • Updated web GUI:
    • Now handle both file upload and download.
    • Uses new OAuth2 authentication for login.
    • Forms now fetches default values from API using GET.
    • Added RFC 822 (email), TIFF and JPEG to list of possible file formats.

The changes and improvements are extensive. Running diffstat on the changes between git tag 0.1.1 and 0.2 shows 1098 files changed, 108666 insertions(+), 54066 deletions(-).

If a free and open standardized archiving API sounds interesting to you, please contact us on IRC (#nikita on irc.freenode.net) or email (the nikita-noark mailing list).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Petter Reinholdtsen http://people.skolelinux.org/pere/blog/ Petter Reinholdtsen - Entries tagged english

Free software activities in September 2018

Planet Debian - Sun, 30/09/2018 - 9:02pm

Here is my monthly update covering what I have been doing in the free software world during September 2018 (previous month):

More hacking on the Lintian static analysis tool for Debian packages:

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month I:

Debian
  • As a member of the Debian Python Module Team I pushed a large number of changes across 100s of repositories including removing empty debian/patches/series & debian/source/options files, correcting email addresses, dropping generated .debhelper dirs, removing trailing whitespaces, respecting the nocheck build profile via DEB_BUILD_OPTIONS and correcting spelling mistakes in debian/control files.

  • Added a missing dependency on golang-golang-x-tools for digraph(1) in dh-make-golang as part of the Debian Go Packaging Team.


Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

  • "Frontdesk" duties, triaging CVEs, responding to user questions, etc.

  • Issued DLA 1492-1 fixing a string injection vulnerability in the dojo Javascript library.

  • Issued DLA 1496-1 to correct an integer overflow vulnerability in the "Little CMS 2" colour management library. A specially-crafted input file could have led to a heap-based buffer overflow.

  • Issued DLA 1498-1 for the curl utility to fix an integer overflow vulnerability (background).

  • Issued DLA 1501-1 to fix an out-of-bounds read vulnerability in libextractor, a tool to extract meta-data from files of arbitrary type.

  • Issued DLA 1503-1 to prevent a potential denial of service and a potential arbitrary code execution vulnerability in the kamailio SIP (Session Initiation Protocol) server. A specially-crafted SIP message with an invalid Via header could cause a segmentation fault and crash the server due to missing input validation.

  • Issued ELA 34-1 for the Redis key-value database where the redis-cli tool could have allowed an attacker to achieve code execution and/or escalate to higher privileges via a specially-crafted command line.


Uploads

I also uploaded the following packages as a member of the Debian Python Module Team: django-ipware (2.1.0-1), django-adminaudit (0.3.3-2), python-openid (2.2.5-7), python-social-auth (1:0.2.21+dfsg-3), python-vagrant (0.5.15-2) & python-validictory (0.8.3-3).

Finally, I sponsored the following uploads: bm-el (201808-1), elpy (1.24.0-1), mutt-alias-el (1.5-1) & android-platform-external-boringssl (8.1.0+r23-2).


Debian bugs filed


FTP Team

As a Debian FTP assistant I ACCEPTed 81 packages: adios, android-platform-system-core, aom, appmenu-registrar, astroid2, black, bm-el, colmap, cowpatty, devpi-common, equinox-bundles, fabulous, fasttracker2, folding-mode-el, fontpens, ganeti-2.15, geomet, golang-github-google-go-github, golang-github-gregjones-httpcache, hub, infnoise, intel-processor-trace, its-playback-time, jsonb-api, kitinerary, kpkpass, libclass-tiny-chained-perl, libmoox-traits-perl, librda, libtwitter-api-perl, liburl-encode-perl, libwww-oauth-perl, llvm-toolchain-7, lucy, markdown-toc-el, mmdebstrap, mozjs60, mutt-alias-el, nvidia-graphics-drivers-legacy-390xx, o-saft, pass-tomb, pass-tomb-basic, pgformatter, picocli, pikepdf, pipewire, poliastro, port-for, pyagentx, pylint2, pynwb, pytest-flask, python-argon2, python-asteval, python-caldav, python-djangosaml2, python-pcl, python-persist-queue, python-rfc3161ng, python-treetime, python-x2go, python-x3dh, python-xeddsa, rust-crossbeam-deque, rust-iovec, rust-phf-generator, rust-simd, rust-spin, rustc, sentinelsat, sesman, sphinx-autobuild, sphinxcontrib-restbuilder, tao-pegtl, trojan, ufolib2, ufonormalizer, unarr, vlc-plugin-bittorrent, xlunzip & xxhash.

I additionally filed 6 RC bugs against packages that had potentially-incomplete debian/copyright files against adios, pgformatter, picocli, python-argon2, python-pcl & python-treetime.

Chris Lamb https://chris-lamb.co.uk/blog/category/planet-debian lamby: Items or syndication on Planet Debian.

nanotime 0.2.3

Planet Debian - Sun, 30/09/2018 - 5:14pm

A minor maintenance release of the nanotime package for working with nanosecond timestamps just arrived on CRAN.

nanotime uses the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it now uses a more rigorous S4-based approach thanks to a rewrite by Leonardo Silvestri.

This release disables some tests on the Slowlaris platform we are asked to conform to (which is a good thing, as a wider variety of test platforms widens test coverage) yet have no real access to (which is a bad thing, obviously) beyond what the helpful rhub service offers. We also updated the Travis setup. No code changes.

Changes in version 0.2.3 (2018-09-30)
  • Skip some tests on Solaris which seems borked with timezones. As we have no real access, no fix is possible (Dirk in #42).

  • Update Travis setup

Once this updates on the next hourly cron iteration, we also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

All I wanted to do is check an error code

Planet Debian - Sun, 30/09/2018 - 2:03pm

I was feeling a little under the weather last week and did not have enough concentration to work on developing a new NetSurf feature as I had planned. Instead I decided to look at a random bug from our worryingly large collection.

This led me to consider the HTML form submission function, at which point it was "can open, worms everywhere". The code in question has a fairly simple job to explain:
  1. A user submits a form (by clicking a button or such) and the Document Object Model (DOM) is used to create a list of information in the web form.
  2. The list is then converted to the appropriate format for sending to the web site server.
  3. An HTTP request is made to the web server using the correctly formatted information.
However the code I was faced with, while generally functional, was impenetrable, having accreted over a long time.

At this point I was forced into a diversion to fix up the core URL library handling of query strings (this is used when the form data is submitted as part of the requested URL), which was necessary to simplify some complicated string handling and make the implementation more compliant with the specification.

My next step was to add some basic error reporting instead of warning the user the system was out of memory for every failure case, which was making debugging somewhat challenging. I was beginning to think I had discovered a series of very hairy yaks, although at least I was not trying to change a light bulb, which can get very complicated.

At this point I ran into the form_successful_controls_dom() function which performs step one of the process. This function had six hundred lines of code, hundreds of conditional branches, 26 local variables and five levels of indentation in places. These properties combined resulted in a cyclomatic complexity metric of 252. For reference, programmers generally try to keep a single function to no more than a hundred lines of code with as few local variables as possible, resulting in a CCM of 20.

I now had a choice:

  • I could abandon investigating the bug, because even if I could find the issue, changing such a function without adequate testing is likely to introduce several more.
  • I could refactor the function into multiple simpler pieces.

I slept on this decision and decided to at least try to refactor the code in an attempt to pay back a little of the technical debt in the browser (and maybe let me fix the bug). After several hours of work the refactored source has the desirable properties of:

  • multiple straightforward functions
  • no function much more than a hundred lines long
  • resource lifetime is now obvious and explicit
  • errors are correctly handled and reported

I carefully examined the change in generated code and was pleased to see the compiler output had become more compact. This is an important point that less experienced programmers sometimes miss: if your source code is written such that a compiler can reason about it easily, you often get much better results than the compact alternative. However, even if the resulting code had been larger, the improved source would have been worth it.

After spending over ten hours working on this bug I have not resolved it yet; indeed one might suggest I have not even directly considered it yet! I wanted to use this to explain a little to users who have to wait a long time for their issues to get resolved (in any project, not just NetSurf) just how much effort is sometimes involved in a simple bug.

Vincent Sanders noreply@blogger.com Vincents Random Waffle

RcppAPT 0.0.5

Planet Debian - Sun, 30/09/2018 - 2:08am

A new version of RcppAPT – our interface from R to the C++ library behind the awesome apt, apt-get, apt-cache, … commands and their cache powering Debian, Ubuntu and the like – is now on CRAN.

This version is a bit of an experiment. I had asked on the r-package-devel and r-devel lists how I could suppress builds on macOS. As it does not have the required libapt-pkg-dev library to support apt, builds always failed. CRAN managed to not try on Solaris or Fedora, but somehow macOS would fail. Each. And. Every. Time. Sadly, nobody proposed a working solution.

So I got tired of this. Now we detect where we build, and if we can infer that it is not a Debian or Ubuntu (or derived) system and no libapt-pkg-dev is found, we no longer fail. Rather, we just set a #define and at compile-time switch to essentially empty code. Et voilà: no more build errors.

And as before, if you want to use the package to query the system packaging information, build it on a system using apt and with its libapt-pkg-dev installed.

A few other cleanups were made too.

Changes in version 0.0.5 (2018-09-29)
  • NAMESPACE now sets symbol registration

  • configure checks for suitable system, no longer errors if none found, but sets good/bad define for the build

  • Existing C++ code is now conditional on having a 'good' build system, or else alternate code is used (which succeeds everywhere)

  • Added suitable() returning a boolean with configure result

  • Tests are conditional on suitable() to test good builds

  • The Travis setup was updated

  • The vignette was updated and expanded

Courtesy of CRANberries, there is also a diffstat report for this release.

A bit more information about the package is available here as well as at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Valutakrambod - A python and bitcoin love story

Planet Debian - Sat, 29/09/2018 - 10:20pm

It would come as no surprise to anyone that I am interested in bitcoins and virtual currencies. I've been keeping an eye on virtual currencies for many years, and it is part of the reason why, a few months ago, I started writing a python library for collecting currency exchange rates and trade on virtual currency exchanges. I decided to name the end result valutakrambod, which perhaps can be translated to small currency shop.

The library uses the tornado python library to handle HTTP and websocket connections, and provides an asynchronous system for connecting to and tracking several services. The code is available from github.

There are two example clients of the library. One is very simple and lists every updated buy/sell price received from the various services. This code is started by running bin/btc-rates and calls the client code in valutakrambod/client.py. The simple client looks like this:

import functools
import tornado.ioloop
import valutakrambod

class SimpleClient(object):
    def __init__(self):
        self.services = []
        self.streams = []
        pass

    def newdata(self, service, pair, changed):
        print("%-15s %s-%s: %8.3f %8.3f" % (
            service.servicename(),
            pair[0],
            pair[1],
            service.rates[pair]['ask'],
            service.rates[pair]['bid'])
        )

    async def refresh(self, service):
        await service.fetchRates(service.wantedpairs)

    def run(self):
        self.ioloop = tornado.ioloop.IOLoop.current()
        self.services = valutakrambod.service.knownServices()
        for e in self.services:
            service = e()
            service.subscribe(self.newdata)
            stream = service.websocket()
            if stream:
                self.streams.append(stream)
            else:
                # Fetch information from non-streaming services immediately
                self.ioloop.call_later(len(self.services),
                                       functools.partial(self.refresh, service))
                # as well as regularly
                service.periodicUpdate(60)
        for stream in self.streams:
            stream.connect()
        try:
            self.ioloop.start()
        except KeyboardInterrupt:
            print("Interrupted by keyboard, closing all connections.")
            pass
        for stream in self.streams:
            stream.close()

    The library client loops over all known "public" services, initialises each of them, subscribes to any updates from the service, and checks for and activates websocket streaming if the service provides it. If no streaming is supported, it fetches information from the service and sets up a periodic update every 60 seconds. The output from this client can look like this:

    Bl3p            BTC-EUR: 5687.110 5653.690
    Bl3p            BTC-EUR: 5687.110 5653.690
    Bl3p            BTC-EUR: 5687.110 5653.690
    Hitbtc          BTC-USD: 6594.560 6593.690
    Hitbtc          BTC-USD: 6594.560 6593.690
    Bl3p            BTC-EUR: 5687.110 5653.690
    Hitbtc          BTC-USD: 6594.570 6593.690
    Bitstamp        EUR-USD:    1.159    1.154
    Hitbtc          BTC-USD: 6594.570 6593.690
    Hitbtc          BTC-USD: 6594.580 6593.690
    Hitbtc          BTC-USD: 6594.580 6593.690
    Hitbtc          BTC-USD: 6594.580 6593.690
    Bl3p            BTC-EUR: 5687.110 5653.690
    Paymium         BTC-EUR: 5680.000 5620.240

    The exchange order book is tracked in addition to the best buy/sell price, for those who need to know the details.

    The other example client focuses on providing a curses view with updated buy/sell prices as soon as they are received from the services. This code is located in bin/btc-rates-curses and activated by using the '-c' argument. Without the argument, the "curses" output is printed without using curses, which is useful for debugging. The curses view looks like this:

    Name            Pair     Bid          Ask         Spr    Ftcd   Age
    BitcoinsNorway  BTCEUR    5591.8400    5711.0800  2.1%     16    nan      60
    Bitfinex        BTCEUR    5671.0000    5671.2000  0.0%     16     22      59
    Bitmynt         BTCEUR    5580.8000    5807.5200  3.9%     16     41      60
    Bitpay          BTCEUR    5663.2700          nan  nan%     15    nan      60
    Bitstamp        BTCEUR    5664.8400    5676.5300  0.2%      0      1       1
    Bl3p            BTCEUR    5653.6900    5684.9400  0.5%      0    nan      19
    Coinbase        BTCEUR    5600.8200    5714.9000  2.0%     15    nan     nan
    Kraken          BTCEUR    5670.1000    5670.2000  0.0%     14     17      60
    Paymium         BTCEUR    5620.0600    5680.0000  1.1%      1   7515     nan
    BitcoinsNorway  BTCNOK   52898.9700   54034.6100  2.1%     16    nan      60
    Bitmynt         BTCNOK   52960.3200   54031.1900  2.0%     16     41      60
    Bitpay          BTCNOK   53477.7833          nan  nan%     16    nan      60
    Coinbase        BTCNOK   52990.3500   54063.0600  2.0%     15    nan     nan
    MiraiEx         BTCNOK   52856.5300   54100.6000  2.3%     16    nan     nan
    BitcoinsNorway  BTCUSD    6495.5300    6631.5400  2.1%     16    nan      60
    Bitfinex        BTCUSD    6590.6000    6590.7000  0.0%     16     23      57
    Bitpay          BTCUSD    6564.1300          nan  nan%     15    nan      60
    Bitstamp        BTCUSD    6561.1400    6565.6200  0.1%      0      2       1
    Coinbase        BTCUSD    6504.0600    6635.9700  2.0%     14    nan     117
    Gemini          BTCUSD    6567.1300    6573.0700  0.1%     16     89     nan
    Hitbtc+         BTCUSD    6592.6200    6594.2100  0.0%      0      0       0
    Kraken          BTCUSD    6565.2000    6570.9000  0.1%     15     17      58
    Exchangerates   EURNOK        9.4665       9.4665  0.0%    16 107789     nan
    Norgesbank      EURNOK        9.4665       9.4665  0.0%    16 107789     nan
    Bitstamp        EURUSD        1.1537       1.1593  0.5%     4      5       1
    Exchangerates   EURUSD        1.1576       1.1576  0.0%    16 107789     nan
    BitcoinsNorway  LTCEUR        1.0000      49.0000 98.0%    16    nan     nan
    BitcoinsNorway  LTCNOK      492.4800     503.7500  2.2%    16    nan      60
    BitcoinsNorway  LTCUSD        1.0221      49.0000 97.9%    15    nan     nan
    Norgesbank      USDNOK        8.1777       8.1777  0.0%    16 107789     nan

    The code for this client is too complex for a simple blog post, so you will have to check out the git repository to figure out how it works. What I can explain is how the last three numbers on each line should be interpreted. The first is how many seconds ago information was received from the service. The second is how long ago, according to the service, the provided information was updated. The last is an estimate of how often the buy/sell values change.
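    As a rough illustration only (this is not the actual client code; the Freshness class and its bookkeeping are made up for this sketch), those three columns could be derived from per-service timestamps along these lines:

    import time

    class Freshness:
        def __init__(self):
            self.last_fetch = None    # when we last heard from the service
            self.last_change = None   # service-reported update time
            self.intervals = []       # gaps between observed updates

        def record(self, service_timestamp):
            now = time.time()
            if self.last_fetch is not None:
                self.intervals.append(now - self.last_fetch)
            self.last_fetch = now
            self.last_change = service_timestamp

        def columns(self):
            now = time.time()
            fetched_ago = now - self.last_fetch if self.last_fetch else float("nan")
            age = now - self.last_change if self.last_change else float("nan")
            # Average gap between updates as a crude change-frequency estimate.
            est = sum(self.intervals) / len(self.intervals) if self.intervals else float("nan")
            return fetched_ago, age, est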

    If you find this library useful, or would like to improve it, I would love to hear from you. Note that for some of the services I've implemented a trading API. It might be the topic of a future blog post.

    As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

    Petter Reinholdtsen http://people.skolelinux.org/pere/blog/ Petter Reinholdtsen - Entries tagged english

    Pulling back

    Planet Debian - Sht, 29/09/2018 - 6:15md

    I've updated my fork of the monkey programming language to allow object-based method calls.

    That's allowed me to move some of my "standard-library" code into Monkey, and out of Go, which is neat. This is a simple example:

    //
    // Reverse a string,
    //
    function string.reverse() {
      let r = "";
      let l = len(self);
      for( l > 0 ) {
         r += self[l-1];
         l--;
      }
      return r;
    }

    Usage is the obvious:

    puts( "Steve".reverse() );

    Or:

    let s = "Input";
    s = s.reverse();
    puts( s + "\n" );

    Most of the pain here was updating the parser to recognize that "." meant a method call was happening; once that was done, it was only a matter of passing the implicit self object to the appropriate functions.
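    For illustration, here is a minimal sketch in Python (not the actual Go implementation; the user_methods table and call_method helper are invented for this example) of how such a dispatch can prepend the receiver as the implicit self:

    # user_methods maps "<type>.<method>" to a plain function whose first
    # argument is the receiver, i.e. the implicit self.
    user_methods = {}

    def call_method(receiver, name, args):
        key = "%s.%s" % (type(receiver).__name__, name)
        fn = user_methods.get(key)
        if fn is None:
            raise NameError("no such method: " + key)
        # Prepend the receiver so the method body can refer to it as self.
        return fn(receiver, *args)

    # Register a reverse() method for str, mirroring the Monkey example above.
    user_methods["str.reverse"] = lambda self: self[::-1]

    print(call_method("Steve", "reverse", []))  # -> evetS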

    This work was done in a couple of 30-60 minute chunks. I find that I'm only really able to commit to that amount of work these days, so I've started to pull back from other projects.

    Oiva is now 21 months old and he sucks up all my time & energy. I can't complain, but equally I can't really start concentrating on longer-projects even when he's gone to sleep.

    And that concludes my news for the day.

    Goodnight dune..

    Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

    Self-plotting output from feedgnuplot and python-gnuplotlib

    Planet Debian - Sht, 29/09/2018 - 1:40md

    I just made a small update to feedgnuplot (version 1.51) and to python-gnuplotlib (version 0.23). Demo:

    $ seq 5 | feedgnuplot --hardcopy showplot.gp
    $ ./showplot.gp
    [plot pops up]
    $ cat showplot.gp
    #!/usr/bin/gnuplot
    set grid
    set boxwidth 1
    histbin(x) = 1 * floor(0.5 + x/1)
    plot '-' notitle
    1 1
    2 2
    3 3
    4 4
    5 5
    e
    pause mouse close

    I.e. there's now support for a fake gp terminal that's not a gnuplot terminal at all, but rather a way to produce a self-executable gnuplot script. 99% of this was already implemented in --dump, but this way to access that functionality is much nicer. In fact, the machine running feedgnuplot doesn't even need to have gnuplot installed at all. I needed this because I was making complicated plots on a remote box, and X-forwarding was being way too slow. Now the remote box creates the self-plotting gnuplot scripts, I scp those, evaluate them locally, and then work with interactive visualizations.
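    As a rough sketch of what such a self-plotting script amounts to (this is not feedgnuplot's implementation; the write_selfplot helper is made up for this example), something like the following Python produces an equivalent executable script:

    import os
    import stat

    def write_selfplot(path, points):
        # Emit a gnuplot script that carries its own data inline and plots it
        # when run directly, similar to the showplot.gp example above.
        with open(path, "w") as f:
            f.write("#!/usr/bin/gnuplot\n")
            f.write("set grid\n")
            f.write("plot '-' notitle\n")
            for x, y in points:
                f.write("%g %g\n" % (x, y))
            f.write("e\n")
            f.write("pause mouse close\n")
        # Mark the script executable so "./showplot.gp" works as in the demo.
        os.chmod(path, os.stat(path).st_mode | stat.S_IEXEC)

    write_selfplot("showplot.gp", [(i, i) for i in range(1, 6)])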

    The python frontend gnuplotlib has received an analogous update.

    Dima Kogan http://notes.secretsauce.net Dima Kogan

    MicroDebConf Brasília 2018

    Planet Debian - Pre, 28/09/2018 - 11:20md

    After I came back to my home city (Brasília), I felt the need to promote Debian and help people contribute to it. Some old friends from my former university (University of Brasília) and the local community (Debian Brasília) came up with the idea of running a Debian-related event, and I just thought: “That sounds amazing!”. We contacted the university to book a small auditorium there for an entire day. After that we started to think: how should we name the event? Debian Day had been more or less one month earlier; someone suggested a MiniDebConf, but I thought our event was going to be much smaller than regular MiniDebConfs. So we decided to use a term we had used some time ago here in Brasília and called it MicroDebConf :)

    MicroDebConf Brasília 2018 took place at the Gama campus of the University of Brasília on September 8th. It was amazing: we gathered a lot of students from the university and some high schools, plus some free software enthusiasts. We had 44 attendees in total; we did not expect that many people at the beginning! During the day we presented what the Debian Project is and the many different ways to contribute to it.

    Since our focus was newcomers, we started from the beginning, explaining how to use Debian properly, how to interact with the community and how to contribute. We also introduced some other subjects such as management of PGP keys, network setup with Debian and some topics about Linux kernel contributions. As you probably know, students are never satisfied: sometimes the talks are too easy and basic, other times too hard and complex to follow. So we decided to balance the level of the talks, starting from Debian basics and going all the way to details of the Linux kernel implementation. Their feedback was positive, so I think we should do it again; attracting students is always a challenge.

    At the end of the day we had some discussions about what we should do to grow our local community. We want more local people actually contributing to free software projects, especially Debian. A lot of people were interested, but some of them said that they need some guidance; the life of a newcomer is not so easy at the moment.

    After some discussion we came up with the idea of a study group about Debian packaging. We will schedule meetings every week (or every two weeks, not decided yet), and during these meetings we will present packaging topics (good practices, tooling and anything that people need) and do some hands-on work. My intention is to document everything we do in order to ease the life of future newcomers who want to do Debian packaging. My main reference for this study group has been LKCamp: they are a more consolidated group whose focus is to help people start contributing to the Linux kernel.

    In my opinion, this kind of initiative could help us bring new blood to the project and disseminate free software ideas and culture. Another idea we have is to promote Debian and free software in general to non-technical people. We realized that we need to reach these people if we want a broader community; we do not know exactly how yet, but it is on our radar.

    After all these talks and discussions we needed some time to relax, and we did that together! We went to a bar and got some beer (except for people under 18 :) and food. Of course, our discussions about free software kept running all night long.

    The following is an overview of this conference:

    • We probably coined this term and are the first to organize a MicroDebConf (we already did one in 2015). We should promote this kind of local event more

    • I guess we inspired a lot of young people to contribute to Debian (and free software in general)

    • We defined a way to help local people start contributing to Debian through packaging. I really like this idea of a study group; meeting people in person is always the best way to create bonds

    • Now we hopefully will have a stronger Debian community in Brasília - Brazil \o/

    Last but not least, I would like to thank LAPPIS (a research lab I was part of during my undergrad), who helped us with all the logistics and bureaucracy, and Collabora for sponsoring the coffee break! Collabora, LAPPIS and we share the same goal: promoting FLOSS to all these young people and making our community grow!

    Lucas Kanashiro http://blog.kanashiro.xyz/ Lucas Kanashiro’s blog

    Aristotle and D&D alignments

    Planet Debian - Enj, 27/09/2018 - 10:12md

    Aristotle’s distinction in EN between brutishness and vice might be comparable to the distinction in Dungeons & Dragons between chaotic evil and lawful evil, respectively.

    I’ve always thought that the forces of lawful evil are more deeply threatening than those of chaotic evil. In the Critical Hit podcast, lawful evil is equated with tyranny.

    Of course, at least how I run it, Aristotelian ethics involves no notion of evil, only mistakes about the good.

    Sean Whitton https://spwhitton.name//blog/ Notes from the Library

    Debian Policy call for participation -- September 2018

    Planet Debian - Enj, 27/09/2018 - 10:07md

    Here’s a summary of some of the bugs against the Debian Policy Manual that are thought to be easy to resolve.

    Please consider getting involved, whether or not you’re an existing contributor.

    For more information, see our README.

    #152955 force-reload should not start the daemon if it is not running

    #172436 BROWSER and sensible-browser standardization

    #188731 Also strip .comment and .note sections

    #212814 Clarify relationship between long and short description

    #273093 document interactions of multiple clashing package diversions

    #314808 Web applications should use /usr/share/package, not /usr/share/doc/package

    #348336 Clarify Policy around shared configuration files

    #425523 Describe error unwind when unpacking a package fails

    #491647 debian-policy: X font policy unclear around TTF fonts

    #495233 debian-policy: README.source content should be more detailed

    #649679 [copyright-format] Clarify what distinguishes files and stand-alone license paragraphs.

    #682347 mark ‘editor’ virtual package name as obsolete

    #685506 copyright-format: new Files-Excluded field

    #685746 debian-policy Consider clarifying the use of recommends

    #694883 copyright-format: please clarify the recommended form for public domain files

    #696185 [copyright-format] Use short names from SPDX.

    #697039 expand cron and init requirement to check binary existence to other scripts

    #722535 debian-policy: To document: the “Binary-Only” field in Debian changes files.

    #759316 Document the use of /etc/default for cron jobs

    #770440 debian-policy: policy should mention systemd timers

    #780725 PATH used for building is not specified

    #794653 Recommend use of dpkg-maintscript-helper where appropriate

    #809637 DEP-5 does not support filenames with blanks

    #824495 debian-policy: Source packages “can” declare relationships

    #833401 debian-policy: virtual packages: dbus-session-bus, dbus-default-session-bus

    #845715 debian-policy: Please document that packages are not allowed to write outside their source directories

    #850171 debian-policy: Addition of having an ‘EXAMPLES’ section in manual pages debian policy 12.1

    #853779 debian-policy: Clarify requirements about update-rc.d and invoke-rc.d usage in maintainer scripts

    #904248 Add netbase to build-essential

    Sean Whitton https://spwhitton.name//blog/ Notes from the Library

    My Work on Debian LTS (September 2018)

    Planet Debian - Enj, 27/09/2018 - 11:40pd

    In September 2018, I did 10 hours of work on the Debian LTS project as a paid contributor. Thanks to all LTS sponsors for making this possible.

    This is my list of work done in September 2018:

    • Upload of polarssl (DLA 1518-1) [1].
    • Work on CVE-2018-16831 discovered in the smarty3 package. Plan (A) was to backport the latest smarty3 release to Debian stretch and jessie, but runtime tests against GOsa² (one of the PHP applications that utilize smarty3) already failed for Debian stretch, so this plan was dropped. Plan (B) was to extract a patch [2] for fixing this issue in Debian stretch's smarty3 package version from a manifold of upstream code changes; I finally realized that smarty3 in Debian jessie is very likely not affected. Upstream feedback is still pending; upload(s) will occur in the coming week (first week of October).

    light+love
    Mike

    References

    [1] https://lists.debian.org/debian-lts-announce/2018/09/msg00029.html

    [2] https://salsa.debian.org/debian/smarty3/commit/8a1eb21b7c4d971149e76cd2b...

    sunweaver http://sunweavers.net/blog/blog/1 sunweaver's blog
