Feed aggregator

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, June 2019

Bits from Debian - Thu, 18/07/2019 - 2:08pm

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In June, 201 work hours have been dispatched among 14 paid contributors. Their reports are available:

  • Abhijith PA did 7 hours (out of 14 hours allocated plus 7 extra hours from May, thus carrying over 14h to July).
  • Adrian Bunk did 6 hours (out of 8 hours allocated plus 8 extra hours from May, thus carrying over 10h to July).
  • Ben Hutchings did 17 hours (out of 17 hours allocated).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 17 hours (out of 17 hours allocated plus 0.25 extra hours from May, thus carrying over 0.25h to July).
  • Emilio Pozuelo Monfort did not provide his June report yet. (He got 17 hours allocated and carried over 0.25h from May).
  • Hugo Lefeuvre did 4.25 hours (out of 17 hours allocated and he gave back 12.75 hours to the pool, thus he’s not carrying over any hours to July).
  • Jonas Meurer did 16.75 hours (out of 17 hours allocated plus 1.75h extra hours from May, thus he is carrying over 2h to July).
  • Markus Koschany did 17 hours (out of 17 hours allocated).
  • Mike Gabriel did 9.75 hours (out of 17 hours allocated, thus carrying over 7.25h to July).
  • Ola Lundqvist did 4.5 hours (out of 8 hours allocated plus 6h from May, then he gave back 1.5h to the pool, thus he is carrying over 8h to July).
  • Roberto C. Sanchez did 8 hours (out of 8 hours allocated).
  • Sylvain Beucler did 17 hours (out of 17 hours allocated).
  • Thorsten Alteholz did 17 hours (out of 17 hours allocated).
DebConf sponsorship

Thanks to the Extended LTS service, Freexian has been able to invest some money in DebConf sponsorship. This year, DebConf attendees should find Debian LTS stickers and a flyer in their welcome bag. And while we were thinking of marketing, we also opted to create a promotional video explaining LTS and Freexian’s offer. This video will premiere at DebConf19!

Evolution of the situation

We are still looking for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The security tracker (now for oldoldstable, as Buster has been released and thus Jessie became oldoldstable) currently lists 41 packages with a known CVE and the dla-needed.txt file has 43 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.

next-20190718: linux-next

Kernel Linux - Thu, 18/07/2019 - 5:34am
Version: next-20190718 (linux-next) Released: 2019-07-18

Ubuntu Studio: Ubuntu Studio 18.10 Reaches End-Of-Life (EOL)

Planet Ubuntu - Thu, 18/07/2019 - 3:00am
As of today, July 18, 2019, Ubuntu Studio 18.10 has reached the end of its support cycle. We strongly urge all users of 18.10 to upgrade to Ubuntu Studio 19.04 for support through January 2020 and then after the release of Ubuntu Studio 19.10, codenamed Eoan Ermine, in October 2019 which will also be supported […]

Kees Cook: security things in Linux v5.2

Bits from Debian - Thu, 18/07/2019 - 2:07am

Previously: v5.1.

Linux kernel v5.2 was released last week! Here are some security-related things I found interesting:

page allocator freelist randomization
While the SLUB and SLAB allocator freelists have been randomized for a while now, the overarching page allocator itself wasn’t. This meant that anything doing allocation outside of the kmem_cache/kmalloc() would have deterministic placement in memory. This is bad both for security and for some cache management cases. Dan Williams implemented this randomization under CONFIG_SHUFFLE_PAGE_ALLOCATOR now, which provides additional uncertainty to memory layouts, though at a rather low granularity of 4MB (see SHUFFLE_ORDER). Also note that this feature needs to be enabled at boot time with page_alloc.shuffle=1 unless you have direct-mapped memory-side-cache (you can check the state at /sys/module/page_alloc/parameters/shuffle).
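As a rough illustration (not from the original post; the bootloader step assumes a GRUB-based distribution), checking and enabling the shuffle parameter could look like this:

# see whether freelist shuffling is active on the running kernel
cat /sys/module/page_alloc/parameters/shuffle

# enable it on the next boot by adding page_alloc.shuffle=1 to
# GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the GRUB config
sudo update-grub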

stack variable initialization with Clang
Alexander Potapenko added support via CONFIG_INIT_STACK_ALL for Clang’s -ftrivial-auto-var-init=pattern option that enables automatic initialization of stack variables. This provides even greater coverage than the prior GCC plugin for stack variable initialization, as Clang’s implementation also covers variables not passed by reference. (In theory, the kernel build should still warn about these instances, but even if they exist, Clang will initialize them.) Another notable difference between the GCC plugins and Clang’s implementation is that Clang initializes with a repeating 0xAA byte pattern, rather than zero. (Though this changes under certain situations, like for 32-bit pointers which are initialized with 0x000000AA.) As with the GCC plugin, the benefit is that the entire class of uninitialized stack variable flaws goes away.
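For readers building their own kernels, a hedged sketch of turning this on (the scripts/config helper ships with the kernel source; the option name is as given above):

# from a kernel source tree new enough to offer CONFIG_INIT_STACK_ALL
scripts/config --enable INIT_STACK_ALL
make CC=clang olddefconfig
make CC=clang -j"$(nproc)"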

Kernel Userspace Access Prevention on powerpc
Like SMAP on x86 and PAN on ARM, Michael Ellerman and Russell Currey have landed support for disallowing access to userspace without explicit markings in the kernel (KUAP) on Power9 and later PPC CPUs under CONFIG_PPC_RADIX_MMU=y (which is the default). This is the continuation of the execute protection (KUEP) in v4.10. Now if an attacker tries to trick the kernel into any kind of unexpected access from userspace (not just executing code), the kernel will fault.

Microarchitectural Data Sampling mitigations on x86
Another set of cache memory side-channel attacks came to light, and were consolidated together under the name Microarchitectural Data Sampling (MDS). MDS is weaker than other cache side-channels (less control over target address), but memory contents can still be exposed. Much like L1TF, when one’s threat model includes untrusted code running under Symmetric Multi Threading (SMT: more logical cores than physical cores), the only full mitigation is to disable hyperthreading (boot with “nosmt“). For all the other variations of the MDS family, Andi Kleen (and others) implemented various flushing mechanisms to avoid cache leakage.
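To check where a given machine stands, a small sketch (the sysfs paths are standard; disabling SMT is only needed if your threat model calls for it):

# the kernel's verdict on MDS and the mitigation in use
cat /sys/devices/system/cpu/vulnerabilities/mds

# disable hyperthreading at runtime (or boot with "nosmt" for the same effect)
echo off | sudo tee /sys/devices/system/cpu/smt/control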

unprivileged userfaultfd sysctl knob
Both FUSE and userfaultfd provide attackers with a way to stall a kernel thread in the middle of memory accesses from userspace by initiating an access on an unmapped page. While FUSE is usually behind some kind of access controls, userfaultfd hadn’t been. To avoid things like Use-After-Free heap grooming, Peter Xu added the new “vm.unprivileged_userfaultfd” sysctl knob to disallow unprivileged access to the userfaultfd syscall.
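A minimal sketch of using the new knob (sysctl name as above; the drop-in file name is illustrative):

# current value: 1 allows unprivileged userfaultfd, 0 restricts it to privileged users
sysctl vm.unprivileged_userfaultfd

# restrict it now and keep the setting across reboots
sudo sysctl -w vm.unprivileged_userfaultfd=0
echo 'vm.unprivileged_userfaultfd = 0' | sudo tee /etc/sysctl.d/99-userfaultfd.conf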

temporary mm for text poking on x86
The kernel regularly performs self-modification with things like text_poke() (during stuff like alternatives, ftrace, etc). Before, this was done with fixed mappings (“fixmap”) where a specific fixed address at the high end of memory was used to map physical pages as needed. However, this resulted in some temporal risks: other CPUs could write to the fixmap, or there might be stale TLB entries on removal that other CPUs might still be able to write through to change the target contents. Instead, Nadav Amit has created a separate memory map for kernel text writes, as if the kernel is trying to make writes to userspace. This mapping ends up staying local to the current CPU, and the poking address is randomized, unlike the old fixmap.

ongoing: implicit fall-through removal
Gustavo A. R. Silva is nearly done with marking (and fixing) all the implicit fall-through cases in the kernel. Based on the pull request from Gustavo, it looks very much like v5.3 will see -Wimplicit-fallthrough added to the global build flags and then this class of bug should stay extinct in the kernel.
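If you want to preview these warnings on your own builds before v5.3 makes the flag global, a hedged one-liner using the kernel's standard KCFLAGS hook:

make KCFLAGS=-Wimplicit-fallthrough -j"$(nproc)"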

That’s it for now; let me know if you think I should add anything here. We’re almost to -rc1 for v5.3!

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Steve Kemp: Building a computer - part 2

Bits from Debian - Wed, 17/07/2019 - 8:45pm

My previous post on the subject of building a Z80-based computer briefly explained my motivation, and the approach I was going to take.

This post describes my progress so far:

  • On the hardware side, zero progress.
  • On the software-side, lots of fun.

To recap: I expect to wire a Z80 microprocessor to an Arduino (Mega). The Arduino will generate a clock-signal which will make the processor "tick". It will also react to the read/write attempts the processor makes to access RAM and I/O devices.

The Z80 has a neat system for requesting I/O, via the use of the IN and OUT instructions which allow the processor to read/write a single byte to one of 256 connected devices.

To experiment, and to refresh my memory, I found a Z80 assembler and a Z80 disassembler, both packaged for Debian. I also found a Z80 emulator, which I forked and lightly modified.

With the appropriate tools available I could write some simple code. I implemented two I/O routines in the emulator, one to read a character from STDIN, and one to write to STDOUT:

IN A, (1)    ; Read a character from STDIN, store in A-register.
OUT (1), A   ; Write the character in A-register to STDOUT

With those primitives implemented I wrote a simple script:

;
; Simple program to upper-case a string
;
org 0
        ; show a prompt.
        ld a, '>'
        out (1), a
start:
        ; read a character
        in a,(1)
        ; eof?
        cp -1
        jp z, quit
        ; is it lower-case? If not just output it
        cp 'a'
        jp c,output
        cp 'z'
        jp nc, output
        ; convert from lower-case to upper-case. yeah. math.
        sub a, 32
output:
        ; output the character
        out (1), a
        ; repeat forever.
        jr start
quit:
        ; terminate
        halt

With that written it could be compiled:

$ z80asm ./sample.z80 -o ./sample.bin

Then I could execute it:

$ echo "Hello, world" | ./z80emulator ./sample.bin Testing "./sample.bin"... >HELLO, WORLD 1150 cycle(s) emulated.

And that's where I'll leave it for now. When I have the real hardware I'll hook up some fake RAM containing this program, and code a similar I/O handler to allow reading/writing to the Arduino's serial console. That will allow the same code to run, unchanged. That'd be nice.

I've got a simple Z80-manager written, but since I don't have the chips yet I can only compile-test it. We'll see how well I did soon enough.

John Goerzen: Tips for Upgrading to, And Securing, Debian Buster

Bits from Debian - Wed, 17/07/2019 - 7:41pm

Wow.  Once again, a Debian release impresses me — a guy that’s been using Debian for more than 20 years.  For the first time I can ever recall, buster not only supported suspend-to-disk out of the box on my laptop, but it did so on an encrypted volume atop LVM.  Very impressive!

For those upgrading from previous releases, I have a few tips to enhance the experience with buster.

AppArmor

AppArmor is a new line of defense against malicious software.  The release notes indicate it’s now enabled by default in buster.  For desktops, I recommend installing apparmor-profiles-extra and apparmor-notify.  The latter will provide an immediate GUI indication when something is blocked by AppArmor, so you can diagnose strange behavior.  You may also need to add yourself to the adm group with adduser username adm.
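Putting that together, a small sketch of the commands (replace username with your own account; aa-status comes with the apparmor package):

sudo apt install apparmor-profiles-extra apparmor-notify
sudo adduser username adm     # log out and back in for the group change to take effect
sudo aa-status                # overview of loaded profiles and their enforcement modes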

Security

I recommend installing these packages and taking note of these items, some of which are different in buster:

  • unattended-upgrades will automatically install security updates for you.  New in buster, the default config file will also apply stable updates in addition to security updates.
  • needrestart will detect what processes need a restart after a library update and, optionally, restart them. Beginning in buster, it will not automatically restart them when in noninteractive (unattended-upgrades) mode. This can be changed by editing /etc/needrestart/needrestart.conf (or, better, putting a .conf file in /etc/needrestart/conf.d) and setting $nrconf{restart} = 'a' (see the sketch after this list). Edit: If you have an Intel CPU, installing iucode-tool and intel-microcode will let needrestart also check on your CPU microcode.
  • debian-security-support will warn you of gaps in security support for packages you are installing or running.
  • package-update-indicator is useful for desktops that won’t be running unattended-upgrades. I believe Gnome 3 has this built in, but for other desktops, this adds an icon when updates are available.
  • You can harden apt with seccomp.
  • You can enable UEFI secure boot.
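As mentioned in the needrestart item above, a hedged sketch of restoring automatic service restarts (the drop-in file name is illustrative; needrestart config snippets are Perl):

sudo tee /etc/needrestart/conf.d/50-autorestart.conf >/dev/null <<'EOF'
$nrconf{restart} = 'a';
EOF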

Tuning

If you hadn’t noticed, many of these items are links into the buster release notes. It’s a good document to read over, even for a new buster install.

Canonical Design Team: Issue #2019.07.22 – Kubeflow and Conferences, 2019

Planet Ubuntu - Wed, 17/07/2019 - 5:50pm
  • Kubeflow at OSCON 2019 – Over 10 sessions! Covering security, pipelines, productivity, ML ops and more. Some of the sessions are led by end-users, which means you’ll get the real deal about using Kubeflow in your production solution
  • Kubeflow at KubeCon Europe 2019 in Barcelona – The top Kubeflow events from Kubecon in Barcelona, 2019. Tutorials, Pipelines, and Kubeflow 1.0 ruminations. The discussion on when Kubeflow will reach 1.0 should be of interest to those waiting for that milestone.
  • Kubeflow Contributor Summit 2019 – Presentations and Slide decks, 22+ of them. Reviewing them will help you understand how the sausage is made. One of the interesting videos focuses on a panel discussion with machine learning practitioners and experts discussing the dynamics of machine learning at their workplace.
  • Kubeflow events calendar – Find a past or future event. This is a great resource for reviewing content from community leaders and leveling up on the current state of Kubeflow. If you are aware of something that is missing, feel free to add the content through github – become a community member! 
  • Use Case Spotlight: IBM’s photo-scraping scandal shows what a weird bubble AI researchers live in. This bubble is all about data – who owns it, who can monopolize it, who is monetizing it, and what the expectations around it are. Those expectations are the crux of the issue – people using the data may be at odds with the people supplying the data.

The post Issue #2019.07.22 – Kubeflow and Conferences, 2019 appeared first on Ubuntu Blog.

Jonathan Dowland: Nadine Shah

Bits from Debian - Wed, 17/07/2019 - 5:45pm

ticket and cuttings from gig

On July 8 I went to see Nadine Shah perform at the Whitley Bay Playhouse as part of the Mouth Of The Tyne Festival. It was a fantastic gig!

I first saw Nadine Shah — as a solo artist — supporting the Futureheads in the same venue, back in 2013. At that point, she had either just released her debut album, Love Your Dum and Mad, or was just about to (it came out sometime in the same month), but this was the first we heard of her. If memory serves, she played with a small backing band (possibly just a drummer, likely co-writer Ben Hillier) and she handled keyboards. It's a pretty small venue. My friends and I loved that show, and as we talked about how good it was and what it reminded us of (I think we said stuff like "that was nice and gothy, I haven't heard stuff like that for ages"), we hadn't realised that she was sat right behind us, with a grin on her face!

Since then she's put out two more albums: Fast Food, which got a huge amount of airplay on 6 Music (and was the point at which I bought into her), and the Mercury-nominated Holiday Destination, a really compelling evolution of her art and a strong political statement.

Kinevil 7 inch

It turns out, though, that I think we saw her before that, too: A local band called Kinevil (now disbanded) supported Ladytron at Digital in Newcastle in 2008. I happen to have their single "Everything's Gone Black" on vinyl (here it is on bandcamp) and noticed years later that the singer is credited as Nadine Shar.

This year's gig was my first gig of 2019, and it was a real blast. The sound mix was fantastic, and loud. The performance was very confident: Nadine now exclusively sings; all the instrument work is done by her band, which is now five-strong. The saxophonist made some incredible noises that reminded me of some synth stuff from mid-90s Nine Inch Nails records. I've never heard a saxophone played that way before. Apparently Shah has been on hiatus for a while for personal reasons and this was her comeback gig. Under those circumstances, it was very impressive. I hope the reception was what she hoped for.

Daniel Pocock: Google, Money and Censorship in Free Software communities

Planet Ubuntu - Wed, 17/07/2019 - 12:05am

On 30 June 2019, I sent the email below to the debian-project mailing list.

It never appeared.

Alexander Wirt (formorer) has tried to justify censoring the mailing list in various ways. Wirt has multiple roles, as both a Debian mailing list admin and one of Debian's GSoC administrators and mentors. Google money pays for interns to do work for him. It appears he has a massive conflict of interest when using the former role to censor posts about Google, which relates to the latter role and its benefits.

Wirt has also made public threats to censor other discussions, for example, the DebConf Israel debate. In that case he has wrongly accused people of antisemitism, leaving people afraid to speak up again. The challenges of holding a successful event in that particular region require a far more mature approach, not a monoculture.

Why are these donations and conflicts of interest hidden from the free software community who rely on, interact with and contribute to Debian in so many ways? Why doesn't Debian provide a level playing field, why does money from Google get this veil of secrecy?

Is it just coincidence that a number of Google employees who spoke up about harassment are forced to resign and simultaneously, Debian Developers who spoke up about abusive leadership are obstructed from competing in elections? Are these symptoms of corporate influence?

Is it coincidence that the three free software communities censoring my recent blog about human rights from their Planet sites (FSFE, Debian and Mozilla, evidence of censorship) are also the communities where Google money is a disproportionate part of the budget?

Could the reason for secrecy about certain types of donation be motivated by the knowledge that unpleasant parts of the donor's culture also come along for the ride?

The email the cabal didn't want you to see

Subject: Re: Realizing Good Ideas with Debian Money
Date: Sun, 30 Jun 2019 23:24:06 +0200
From: Daniel Pocock <daniel@pocock.pro>
To: debian-project@lists.debian.org, debian-devel@lists.debian.org

On 29/05/2019 13:49, Sam Hartman wrote:
>
> [moving a discussion from -devel to -project where it belongs]
>
>>>>>> "Mo" == Mo Zhou <lumin@debian.org> writes:
>
> Mo> Hi,
> Mo> On 2019-05-29 08:38, Raphael Hertzog wrote:
> >> Use the $300,000 on our bank accounts?
>
> So, there were two $300k donations in the last year.
> One of these was earmarked for a DSA equipment upgrade.

When you write that it was earmarked for a DSA equipment upgrade, do you mean that was a condition imposed by the donor or it was the intention of those on the Debian side of the transaction? I don't see an issue either way but the comment is ambiguous as it stands.

Debian announced[1] a $300k donation from Handshake foundation. I couldn't find any public disclosure about other large donations and the source of the other $300k.

In Bits from the DPL (December 2018), former Debian Project Leader (DPL) Chris Lamb opaquely refers[2] to a discussion with Cat Allman about a "significant donation". Although there is a link to Google later in Lamb's email, Lamb fails to disclose the following facts:

- Cat Allman is a Google employee (some people would already know that, others wouldn't)
- the size of the donation
- any conditions attached to the donation
- private emails from Chris Lamb indicated he felt some pressure, influence or threat from Google shortly before accepting their money

The Debian Social Contract[3] states that Debian does not hide our problems. Corporate influence is one of the most serious problems most people can imagine, why has nothing been disclosed?

Therefore, please tell us,

1. who did the other $300k come from?
2. if it was not Google, then what is the significant donation from Cat Allman / Google referred[2] to in Bits from the DPL (December 2018)?
3. if it was from Google, why was that hidden?
4. please disclose all conditions, pressure and influence relating to any of these donations and any other payments received

Regards,

Daniel

1. https://www.debian.org/News/2019/20190329
2. https://lists.debian.org/debian-devel-announce/2018/12/msg00006.html
3. https://www.debian.org/social_contract

Censorship on the Google Summer of Code Mentor's mailing list

Google also operates a mailing list for mentors in Google Summer of Code. It looks a lot like any other free software community mailing list except for one thing: censorship.

Look through the "Received" headers of messages on the mailing list and you can find examples of messages that were delayed for some hours waiting for approval. It is not clear how many messages were silently censored, never appearing at all.

Recent attempts to discuss the issue on Google's own mailing list produced an unsurprising result: more censorship.

However, a number of people have since contacted me personally about their negative experiences with Google Summer of Code. I'm publishing below the message that Google didn't want you to see.

Subject: [GSoC Mentors] discussions about GSoC interns/students medical status
Date: Sat, 6 Jul 2019 10:56:31 +0200
From: Daniel Pocock <daniel@pocock.pro>
To: Google Summer of Code Mentors List <google-summer-of-code-mentors-list@googlegroups.com>

Hi all,

Just a few months ago, I wrote a blog lamenting the way some mentors have disclosed details of their interns' medical situations on mailing lists like this one. I asked[1] the question:

"Regardless of what support the student received, would Google allow their own employees' medical histories to be debated by 1,000 random strangers like this?"

Yet it has happened again. If only my blog hadn't been censored.

If our interns have trusted us with this sensitive information, especially when it concerns something that may lead to discrimination or embarrassment, like mental health, then it highlights the enormous trust and respect they have for us.

Many of us are great at what we do as engineers, in many cases we are the experts on our subject area in the free software community. But we are not doctors.

If an intern goes to work at Google's nearby office in Zurich, then they are automatically protected by income protection insurance (UVG, KTG and BVG, available from all major Swiss insurers). If the intern sends a doctor's note to the line manager, the manager doesn't have to spend one second contemplating its legitimacy. They certainly don't put details on a public email list. They simply forward it to HR and the insurance company steps in to cover the intern's salary. The cost? Approximately 1.5% of the payroll.

Listening to what is said in these discussions, many mentors are obviously uncomfortable with the fact that "failing" an intern means they will not even be paid for hours worked prior to a genuine accident or illness. For 1.5% of the program budget, why doesn't Google simply take that burden off the mentors and give the interns peace of mind?

On numerous occasions Stephanie Taylor has tried to gloss over this injustice with her rhetoric about how we have to punish people to make them try harder next year. Many of our interns are from developing countries where they already suffer injustice and discrimination. You would have to be pretty heartless to leave these people without pay. Could that be why Googlespeak clings to words like "fail" and "student" instead of "not pay" and "employee"?

Many students from disadvantaged backgrounds, including women, have told me they don't apply at all because of the uncertainty about doing work that might never be paid. This is an even bigger tragedy than the time mentors lose on these situations.

Regards,

Daniel

1. https://danielpocock.com/google-influence-free-open-source-software-community-threats-sanctions-bullying/

--
Former Debian GSoC administrator
https://danielpocock.com

Ubucon Europe 2019: Our Diamond Sponsor – Ubuntu!

Planet Ubuntu - Tue, 16/07/2019 - 6:30pm

Our Diamond Sponsor of this event is Ubuntu, an open source software operating system that runs from the desktop, to the cloud, to all your internet connected things.

Linux was already established in 2004, but it was fragmented into proprietary and unsupported community editions, and free software was not a part of everyday life for most computer users. That’s when Mark Shuttleworth gathered a small team of Debian developers who together founded Canonical and set out to create an easy-to-use Linux desktop called Ubuntu.
However, the governance of Ubuntu is somewhat independent of Canonical, with volunteer leaders from around the world taking responsibility for many critical elements of the project. Mark Shuttleworth, as project founder, short-lists public nominees as candidates for the Community Council and Technical Board, and they in turn screen and nominate candidates for a wide range of boards, councils and teams that take responsibility for aspects of the project.

Thanks to them, we have received significant support to sustain our event and our journey to give you one of the best open source experiences in Sintra.

Want to jump on board as well?
Visit our Call for Sponsor post for more information.

Holger Levsen: 20190716-wanna-work-on-lts

Bits from Debian - Tue, 16/07/2019 - 5:56pm
Wanna work on Debian LTS (and get funded)?

If you are in Curitiba and are interested in working on Debian LTS (and getting paid for that work), please come and talk to me; Debian LTS is still looking for more contributors! Also, if you want a bigger challenge, Extended LTS also needs more contributors, though I'd suggest you start with regular LTS.

On Thursday, July 25th, there will also be a talk titled "Debian LTS, the good, the bad and the better" where we plan to present what we think works nicely and what doesn't work so nicely yet and where we also want to gather your wishes and requests.

If you cannot make it to Curitiba, there will be a video stream (and the possibility to ask questions via IRC), and you can always send me an email or ping me on IRC if you want to work on LTS.

Ubucon Europe 2019: Call for Sponsors

Planet Ubuntu - Tue, 16/07/2019 - 3:29pm
Corporate sponsorships

This event can only be possible thanks to our sponsors. Your investment helps us create a greater experience for the open source community, while you still benefit from a considerable amount of exposure.

If you are interested in sponsoring the event, please view the packages offered below and get in touch with us (the document describes how to do so).

CHECK OUR SPONSOR PACKAGES

Individual sponsorships

Individual sponsorships are donations made by individuals to help this Ubucon happen as well. Individual sponsors will not be provided with free tickets but will be highlighted on the website and during the event. Donate by clicking here

Balint Reczey: Introducing ubuntu-wsl, the package making Ubuntu better and better on WSL

Planet Ubuntu - Tue, 16/07/2019 - 9:31am

The Ubuntu apps for the Windows Subsystem for Linux provide the very same packages you can find on Ubuntu servers, desktops, cloud instances and containers, and this ensures maximal compatibility with other Ubuntu installations. Until recently there was little work done to integrate Ubuntu with the Windows system running the WSL environment, but now this is changing.

In Ubuntu, metapackages collect packages useful for a common purpose by depending on them, and ubuntu-wsl is the new metapackage collecting the integration packages to be installed on every Ubuntu WSL system. It pulls in wslu, “A collection of utilities for WSL”, to let you create shortcuts on the Windows desktop with wslusc, start the default Windows browser with wslview, and do a few other things:
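For example (a hedged sketch; the URL and command argument are only illustrations of wslu's documented usage):

wslview https://ubuntu.com     # open a URL with the default Windows browser
wslusc htop                    # create a Windows desktop shortcut that launches the given Linux command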

With updates to the ubuntu-wsl metapackage we will add new features to Ubuntu WSL installations to make them even more comfortable to use, thus if you have an older installation please install the package manually:

sudo apt update
sudo apt install ubuntu-wsl

Oh, and one more thing: you can even set up sound and run graphical apps if you follow a few manual steps. For details check out https://wiki.ubuntu.com/WSL!

next-20190716: linux-next

Kernel Linux - Tue, 16/07/2019 - 7:43am
Version: next-20190716 (linux-next) Released: 2019-07-16

The Fridge: Ubuntu Weekly Newsletter Issue 587

Planet Ubuntu - Tue, 16/07/2019 - 12:18am

Welcome to the Ubuntu Weekly Newsletter, Issue 587 for the week of July 7 – 13, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

Full Circle Magazine: Full Circle Weekly News #139

Planet Ubuntu - Mon, 15/07/2019 - 8:43pm
System 76’s Linux-powered Thelio desktop now available with 3rd gen AMD Ryzen Processors
https://betanews.com/2019/07/07/system76-linux-thelio-amd-ryzen3/

PyOxidizer Can Turn Python Code Into Apps for Windows, MacOS, Linux

https://fossbytes.com/pyoxidizer-can-turn-python-code-apps-for-windows-macos-linux/

Thousands of Android Apps Can Track Your Phone — Even if You Deny Permissions
https://www.theverge.com/2019/7/8/20686514/android-covert-channel-permissions-data-collection-imei-ssid-location

Anubis Android Banking Malware Returns with Extensive Financial App Hit List
https://www.zdnet.com/article/anubis-android-banking-malware-returns-with-a-bang/

Mozilla Firefox and the Nomination for Internet Villain Award
https://itsfoss.com/mozilla-internet-villain/

Ubuntu LTS Will Now Get the Latest Nvidia Driver Updates
https://itsfoss.com/ubuntu-lts-latest-nvidia-drivers/

Credits:
Ubuntu “Complete” sound: Canonical
  Theme Music: From The Dust – Stardust

https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

Thierry Carrez: Open source in 2019, Part 3/3

Planet Ubuntu - Mon, 15/07/2019 - 3:52pm

21 years in, the landscape around open source has evolved a lot. In part 1 and part 2 of this 3-part series, I explained why today, while open source is more necessary than ever, it appears to no longer be sufficient. In this part, I'll discuss what we, open source enthusiasts and advocates, can do about that.

This is not a call to change open source

First, let me clarify what we should not do.

As mentioned in part 2, since open source was coined in 1998, software companies have evolved ways to retain control while producing open source software, and in that process stripped users of some of the traditional benefits associated with F/OSS. But those companies were still abiding by the terms of the open source licenses, giving users a clear base set of freedoms and rights.

Over the past year, a number of those companies have decided that they wanted even more control, in particular control of any revenue associated with the open source software. They proposed new licenses, removing established freedoms and rights in order to be able to assert that level of control. The open source definition defines those minimal freedoms and rights that any open source software should have, so the Open Source Initiative (OSI), as steadfast guardians of that definition, rightfully resisted those attempts.

Those companies quickly switched to attacking OSI's legitimacy, pitching "Open Source" more as a broad category than a clear set of freedoms and rights. And they created new licenses, with deceptive naming ("community", "commons", "public"...) in an effort to blur the lines and retain some of the open source definition aura for their now-proprietary software.

The solution is not in redefining open source, or claiming it's no longer relevant. Open source is not a business model, or a constantly evolving way to produce software. It is a base set of user freedoms and rights expressed in the license the software is published under. Like all standards, its value resides in its permanence.

Yes, I'm of the opinion that today, "open source" is not enough. Yes, we need to go beyond open source. But in order to do that, we need to base that additional layer on a solid foundation: the open source definition.

That makes the work of the OSI more important than ever. Open source used to be attacked from the outside, proprietary software companies claiming open source software was inferior or dangerous. Those were clear attacks that were relatively easy to resist: it was mostly education and advocacy, and ultimately the quality of open source software could be used to prove our point. Now it's attacked from the inside, by companies traditionally producing open source software, claiming that it should change to better fit their business models. We need to go back to the basics and explain why those rights and freedoms matter, and why blurring the lines ultimately weakens everyone. We need a strong OSI to lead that new fight, because it is far from over.

A taxonomy of open source production models

As I argued in previous parts, how open source is built ultimately impacts the benefits users get. A lot of us know that, and we all came up with our own vocabulary to describe those various ways open source is produced today.

Even within a given model (say open collaboration between equals on a level playing field), we use different sets of principles: the OpenStack Foundation has the 4 Opens (open source, open development, open design, open community), the Eclipse Foundation has the Open Source Rules of Engagement (open, transparent, meritocracy), the Apache Foundation has the Apache Way... We all advocate for our own variant, focusing on differences rather than what we have in common: the key benefits those variants all enable.

This abundance of slightly-different vocabulary makes it difficult to rally around and communicate efficiently. If we have no clear way to differentiate good all-benefits-included open source from twisted some-benefits-withheld open source, the confusion (where all open source is considered equal) benefits the twisted production models. I think it is time for us to regroup, and converge around a clear, common classification of open source production models.

We need to classify those models based on which benefits they guarantee to the users of the produced software. Open-core does not guarantee availability, single-vendor does not provide sustainability nor does it allow you to efficiently engage with and influence the direction of the software, while open collaboration gives you all three.

Once we have this classification, we'll need to heavily communicate around it, with a single voice. As long as we use slightly different terms (or mean slightly different things when using common terms), we maintain confusion which ultimately benefits the most restrictive models.

Get together

Beyond that, I think we need to talk more. Open source conferences used to be all about education and advocacy: what is this weird way of producing software, and why you should probably be interested in it. Once open source became ubiquitous, that style of horizontal open source conference became less relevant, and was soon replaced by more vertical conferences around a specific stack or a specific use case.

This is a good evolution: this is what winning looks like. The issue is: the future of open source is not discussed anymore. We rest on our laurels, while the world continually evolves and adapts. Some open source conference islands may still exist, with high-level keynotes still raising the issues, but those are generally one-way conversations.

To do this important work of converging vocabulary and defining common standards on how open source is produced, Twitter won't cut it. To bootstrap the effort we'll need to meet, get around a table and take the time to discuss specific issues together. Ideally that would be done around some other event(s) to avoid extra travel.

And we need to do that soon. This work is becoming urgent. "Open source" as a standard has lots of value because of all the user benefits traditionally associated with free and open source software. That created an aura that all open source software still benefits from today. But that aura is weakening over time, thanks to twisted production models. How much more single-vendor open source can we afford until "open source" no longer means you can engage with the community and influence the direction of the software?

So here is my call to action, which concludes this series.

In 2019, open source is more important than ever. Open source has not "won", this is a continuous effort, and we are today at a critical juncture. I think open source advocates and enthusiasts need to get together, defining clear, standard terminology on how open source software is built, and start communicating heavily around it with a single voice. And beyond that, we need to create forums where those questions on the future of open source are discussed. Because whatever battles you win today, the world does not stop evolving and adapting.

Obviously I don't have all the answers. And there are lots of interesting questions. It's just time we have a place to ask those questions and discuss the answers. If you are interested and want to get involved, feel free to contact me.

Canonical Design Team: Deploying Kubernetes at the edge, part 1: building blocks

Planet Ubuntu - Mon, 15/07/2019 - 3:17pm

Edge computing continues to attract attention and is seeing strong growth, as it helps address the unique challenges of the telecom, multimedia, transportation, logistics, agriculture and other market segments. If you are new to these edge computing architectures, the diagram below is a simple abstraction of the emerging architecture.

In this diagram you can see that the edge cloud sits next to the field devices. In fact, there is a concept of extreme edge computing, which puts compute resources on the field devices themselves, i.e. the leftmost circle. A gateway device connecting your office, your appliances and all of your sensors is an example of extreme edge computing.

So what exactly is edge computing?

Edge computing is a variant of cloud computing in which your infrastructure services (compute, storage and networking) sit physically closer to the field devices that generate the data. This gives you the dual benefits of lower latency and lower network traffic. Lower latency improves the performance of field devices, allowing them not only to respond faster but also to respond to more events. Lower network traffic helps reduce costs and increase overall throughput, so your core data center can support more field devices. Whether an application or service lives in the edge cloud or in the core data center depends on the use case.

How do you build an edge cloud?

An edge cloud should have at least two layers. Both layers maximize operational efficiency and developer productivity, and each layer is built in a different way.

The first layer is Infrastructure-as-a-Service (IaaS). Besides providing compute and storage resources, the IaaS layer should meet the performance requirements of ultra-low latency and high bandwidth.

The second layer is the Kubernetes layer, which provides a common platform to run your applications and services. Using Kubernetes is of course optional, but today it has proven to be a platform that lets enterprises and organizations take full advantage of edge computing. You can deploy Kubernetes on field devices, in the edge cloud, in the core data center and on public clouds. This multi-cloud deployment capability gives you complete flexibility to deploy your workloads wherever you choose. Kubernetes gives your developers the ability to simplify their DevOps practices and minimize the time spent integrating with heterogeneous operating environments.

The next question is: how do you deploy these layers? At Canonical, we achieve this with well-defined, dedicated technologies. Let's start with the IaaS layer that Kubernetes needs.

Physical infrastructure lifecycle management

The first step is to think about the physical infrastructure: which technology can manage the infrastructure most effectively and turn the raw hardware into your IaaS layer. Here, Metal-as-a-Service (MAAS) has proven to be highly effective. MAAS provides the underlying system for hardware discovery, giving you the flexibility to allocate compute resources and dynamically repurpose them. These underlying systems expose the bare-metal servers to higher-level orchestration through open APIs, just as you would work with OpenStack and public clouds.

With the latest MAAS release you can automatically create an edge cloud based on KVM pods, which effectively enables operators to create virtual machines with predefined sets of resources (memory, processors, storage and over-subscription ratios). You can do this through the command line and the browser interface, as well as through the MAAS API. You can also use Juju, Canonical's advanced orchestration solution, to build your own automation framework.

As we demonstrated during the OpenStack Summit in Berlin, MAAS can also be deployed in an optimized way to run on top-of-rack switches.

Orchestration of edge applications

Once the discovery and provisioning of the edge cloud's physical infrastructure is complete, the second step is to choose an orchestration tool that makes it easy to install Kubernetes, or other software, on the edge infrastructure. With Juju you can easily install Charmed Kubernetes, a distribution that is fully compatible with upstream Kubernetes. With Kubernetes you can run containerized workloads and give them the highest possible performance. In the telecom space, workloads such as containerized network functions (CNFs) are very well suited to this architecture.
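As a rough sketch of that flow (assuming a Juju controller already bootstrapped against your MAAS cloud; the model name is illustrative):

juju add-model edge-k8s
juju deploy charmed-kubernetes
juju status --watch 5s     # wait for all units to become active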

Charmed Kubernetes has further advantages. It can run in virtualized environments or directly on bare metal, and fully automated Charmed Kubernetes deployments have high availability built in, allowing in-place, zero-downtime upgrades. The result is a proven, genuinely resilient edge infrastructure and solution. Another benefit of Charmed Kubernetes is its ability to automatically detect and configure GPGPU resources to accelerate AI model inference and containerized transcoding workloads.

Next steps

With the right technologies chosen, it is now time to deploy the environment and start the validation process. The next blog post in this series will include hands-on examples.

The post Deploying Kubernetes at the edge, part 1: building blocks appeared first on Ubuntu Blog.
