Planet Ubuntu - http://planet.ubuntu.com/
Updated: 6 months, 2 days ago

Rhonda D'Vine: Enigma

Fri, 18/01/2019 - 4:57pm

Just the other day a colleague at work asked me what kind of music I listen to, especially while working. It's true: music helps me focus better and work with more concentration. But it obviously depends on the kind of music, and there is one project I come back to every now and then. The name is Enigma. It's not disturbing, works well in the background, and has soothing, non-intrusive vocals. Here are the songs:

  • Return To Innocence: This is quite likely the song you know from them; it is also what got me hooked originally.
  • Push The Limits: A powerful song. The album version is even a few minutes longer.
  • Voyageur: Love the rhythm and theme in this song.

Like always, enjoy.


Stephen Michael Kellat: Following The Drum Beat

Thu, 17/01/2019 - 5:17am

If you've followed the news in the United States of America, you've probably seen something called a "government shutdown". In non-USA terms, the legislature is refusing to grant supply to the executive. In parliamentary systems of government, the government would fall and there would be an election. In the United States we just get this current weird situation where parts of the government are funded and other parts aren't. Especially disconcerting is that the majority of federal law enforcement agencies have their employees working without pay until the impasse is resolved.

You might be thinking that that is an academic exercise that you have no connection to at all. As a member of the community, though, I've been recalled to duty Thursday and I won't be paid at all until this impasse is resolved. Thursday will be the 27th day of this debacle. I've probably made it a bit more real for you now compared to any news reports you've seen on the BBC, CanBC, AusBC, or from your favorite wire service.

Currently the legal guidance offered is that civilian staff returning to duty without pay will be prosecuted under a choice of statutes if they set up "GoFundMe"-type efforts to seek funds to ease the lack of cashflow. One possibility is prosecution as a criminal ethics violation; another is under the prohibition on accepting bribes. Due to some perverse consequences of how various mandatory separation payments are structured, the current constitutional prohibition on payments without valid appropriations, and restrictions on what the HR staff can do right now, I technically cannot quit my job at the moment.

Remember, I am working to build up Erie Looking Productions and am still seeking clients as well as jobs to handle. A version of my resume is posted to LinkedIn, and my mobility is temporarily constrained while some tricky things get worked out. The problems are not insurmountable, but they certainly are big. Once the current unpleasantness is resolved, I am more than likely able to cross borders to places in the English-speaking Pacific to work if I don't find something based in the USA.

As to software-related matters, the fiddling with Greenstone that I had coming up is on hold. I really wanted to see if I could get it to compile on a Raspberry Pi 3B+ running Ubuntu, but it doesn't seem very portable across architectures. It does seem like an excellent candidate for snapcrafters to try working on, though.

Simon Raffeiner: Updating Micron 1100 Series SSD firmware on Linux

Wed, 16/01/2019 - 7:13pm

I've had quite a number of performance-related issues with Micron 1100 Series M.2 SATA SSDs in various configurations over the last 18 months. It turns out a firmware update to a rather "secret" version fixes that.

The post Updating Micron 1100 Series SSD firmware on Linux appeared first on LIEBERBIBER.

Kees Cook: security things in Linux v4.20

Tue, 25/12/2018 - 12:59am

Previously: v4.19.

Linux kernel v4.20 has been released today! Looking through the changes, here are some security-related things I found interesting:

stackleak plugin

Alexander Popov’s work to port the grsecurity STACKLEAK plugin to the upstream kernel came to fruition. While it had received Acks from x86 (and arm64) maintainers, it has been rejected a few times by Linus. With everything matching Linus’s expectations now, it and the x86 glue have landed. (The arch-specific portions for arm64 from Laura Abbott actually landed in v4.19.) The plugin tracks function calls (with a sufficiently large stack usage) to mark the maximum depth of the stack used during a syscall. With this information, at the end of a syscall, the stack can be efficiently poisoned (i.e. instead of clearing the entire stack, only the portion that was actually used during the syscall needs to be written). There are two main benefits from the stack getting wiped after every syscall. First, there are no longer “uninitialized” values left over on the stack that an attacker might be able to use in the next syscall. Next, the lifetime of any sensitive data on the stack is reduced to only being live during the syscall itself. This is mainly interesting because any information exposures or side-channel attacks from other kernel threads need to be much more carefully timed to catch the stack data before it gets wiped.

Enabling CONFIG_GCC_PLUGIN_STACKLEAK=y means almost all uninitialized variable flaws go away, with only a very minor performance hit (it appears to be under 1% for most workloads). It’s still possible that, within a single syscall, a later buggy function call could use “uninitialized” bytes from the stack from an earlier function. Fixing this will need compiler support for pre-initialization (this is under development already for Clang, for example), but that may have larger performance implications.
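
For anyone who wants to try this on their own kernel, here is a minimal sketch of turning the option on with the kernel's scripts/config helper; the source-tree path is a placeholder and it assumes a GCC with plugin headers installed (my illustration, not part of the post):

cd ~/src/linux                      # placeholder path to your kernel source tree
./scripts/config --enable GCC_PLUGINS \
                 --enable GCC_PLUGIN_STACKLEAK
make olddefconfig                   # let Kconfig resolve any new dependencies
make -j"$(nproc)"                   # rebuild with the plugin instrumentation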

raise faults for kernel addresses in copy_*_user()

Jann Horn reworked x86 memory exception handling to loudly notice when copy_{to,from}_user() tries to access unmapped kernel memory. Prior to this, those accesses would result in a silent error (usually visible to callers as EFAULT), making it indistinguishable from a “regular” userspace memory exception. The purpose of this is to catch cases where, for example, the unchecked __copy_to_user() is called against a kernel address. Fuzzers like syzkaller weren’t able to notice very nasty bugs because writes to kernel addresses would either corrupt memory (which may or may not get detected at a later time) or return an EFAULT that looked like things were operating normally. With this change, it’s now possible to much more easily notice missing access_ok() checks. This has already caught two other corner cases even during v4.20 in HID and Xen.

spectre v2 userspace mitigation

The support for Single Thread Indirect Branch Predictors (STIBP) has been merged. This allowed CPUs that support STIBP to effectively disable Hyper-Threading in order to avoid indirect branch prediction side channels exposing information between userspace threads on the same physical CPU. Since this was a very expensive solution, the protection was made opt-in (via an explicit prctl() or implicitly under seccomp()). LWN has a nice write-up of the details.
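
As a quick way to see how your own machine is affected (my commands, not from the post), the kernel exports the active Spectre v2 mitigation through sysfs, and the user-space policy can be chosen at boot with the spectre_v2_user= parameter:

# Show the currently active Spectre v2 mitigations, including the
# STIBP/user-space status on kernels that report it.
cat /sys/devices/system/cpu/vulnerabilities/spectre_v2

# The user-space policy can be set on the kernel command line, e.g. by adding
# spectre_v2_user=prctl to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub
# (other values include seccomp, on, off and auto), then regenerating grub:
sudo update-grub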

jump labels read-only after init

Ard Biesheuvel noticed that jump labels don’t need to be writable after initialization, so their data structures were made read-only. Since they point to kernel code, they might be used by attackers to manipulate the jump targets as a way to change kernel code that wasn’t intended to be changed. Better to just move everything into the read-only memory region to remove it from the possible kernel targets for attackers.

VLA removal finished

As detailed earlier for v4.17, v4.18, and v4.19, a whole bunch of people answered my call to remove Variable Length Arrays (VLAs) from the kernel. I count at least 153 commits having been added to the kernel since v4.16 to remove VLAs, with a big thanks to Gustavo A. R. Silva, Laura Abbott, Salvatore Mesoraca, Kyle Spiers, Tobin C. Harding, Stephen Kitt, Geert Uytterhoeven, Arnd Bergmann, Takashi Iwai, Suraj Jitindar Singh, Tycho Andersen, Thomas Gleixner, Stefan Wahren, Prashant Bhole, Nikolay Borisov, Nicolas Pitre, Martin Schwidefsky, Martin KaFai Lau, Lorenzo Bianconi, Himanshu Jha, Chris Wilson, Christian Lamparter, Boris Brezillon, Ard Biesheuvel, and Antoine Tenart. With all that done, “-Wvla” has been added to the top-level Makefile so we don’t get any more added back in the future.

Given the holidays, Linus opened the merge window before v4.20 was released, letting everyone send in pull requests in the week leading up to the release. v4.21 is in the making. :) Happy New Year everyone!

Edit: clarified stackleak details, thanks to Alexander Popov.

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Julian Andres Klode: An Introduction to Go

Mon, 24/12/2018 - 3:15pm

(What follows is an excerpt from my master’s thesis, almost all of section 2.1, quickly introducing Go to people familiar with CS)

Go is an imperative programming language for concurrent programming, created at and mainly developed by Google, initially mostly by Robert Griesemer, Rob Pike, and Ken Thompson. Design of the language started in 2007, an initial version was released in 2009, and the first stable version, 1.0, was released in 2012 [1].

Go has a C-like syntax (without a preprocessor), garbage collection, and, like its predecessors developed at Bell Labs – Newsqueak (Rob Pike), Alef (Phil Winterbottom), and Inferno (Pike, Ritchie, et al.) – provides built-in support for concurrency using so-called goroutines and channels, a form of co-routines, based on the idea of Hoare’s ‘Communicating Sequential Processes’ [2].

Go programs are organised in packages. A package is essentially a directory containing Go files. All files in a package share the same namespace, and there are two visibilities for symbols in a package: Symbols starting with an upper case character are visible to other packages, others are private to the package:

func PublicFunction() {
    fmt.Println("Hello world")
}

func privateFunction() {
    fmt.Println("Hello package")
}

Types

Go has a fairly simple type system: There is no subtyping (but there are conversions), no generics, no polymorphic functions, and there are only a few basic categories of types:

  1. base types: int, int64, int8, uint, float32, float64, etc.
  2. struct
  3. interface - a set of methods
  4. map[K]V - a map from a key type K to a value type V
  5. [number]Type - an array of some element type
  6. []Type - a slice (pointer to array with length and capacity) of some type
  7. chan Type - a thread-safe queue
  8. pointer *T to some other type
  9. functions
  10. named type - a new name for another type, which may have associated methods:

    type T struct { foo int }
    type T *T
    type T OtherNamedType

    Named types are mostly distinct from their underlying types, so you cannot assign them to each other, but some operators like + do work on objects of named types with an underlying numerical type (so you could add two T in the example above).

Maps, slices, and channels are reference-like types - they essentially are structs containing pointers. Other types are passed by value (copied), including arrays (which have a fixed length and are copied).

Conversions

Conversions are similar to casts in C and other languages. They are written like this:

TypeName(value)

Constants

Go has “untyped” literals and constants.

1                 // untyped integer literal
const foo = 1     // untyped integer constant
const foo int = 1 // int constant

Untyped values are classified into the following categories: UntypedBool, UntypedInt, UntypedRune, UntypedFloat, UntypedComplex, UntypedString, and UntypedNil (Go calls them basic kinds, other basic kinds are available for the concrete types like uint8). An untyped value can be assigned to a named type derived from a base type; for example:

type someType int

const untyped = 2            // UntypedInt
const bar someType = untyped // OK: untyped can be assigned to someType
const typed int = 2          // int
const bar2 someType = typed  // error: int cannot be assigned to someType

Interfaces and ‘objects’

As mentioned before, interfaces are a set of methods. Go is not an object-oriented language per se, but it has some support for associating methods with named types: when declaring a function, a receiver can be provided. A receiver is an additional function argument that is declared before the function name and is involved in the method lookup, like this:

type SomeType struct { ... }

func (s *SomeType) MyMethod() {
}

func main() {
    var s SomeType
    s.MyMethod()
}

An object implements an interface if it implements all methods; for example, the following interface MyMethoder is implemented by *SomeType (note the pointer), and values of *SomeType can thus be used as values of MyMethoder. The most basic interface is interface{}, that is an interface with an empty method set - any object satisfies that interface.

type MyMethoder interface { MyMethod() }

There are some restrictions on valid receiver types; for example, while a named type could be a pointer (for example, type MyIntPointer *int), such a type is not a valid receiver type.

Control flow

Go provides three primary statements for control flow: if, switch, and for. The statements are fairly similar to their equivalents in other C-like languages, with some exceptions:

  • There are no parentheses around conditions, so it is if a == b {}, not if (a == b) {}. The braces are mandatory.
  • All of them can have initialisers, like this

    if result, err := someFunction(); err == nil {
        // use result
    }

  • The switch statement can use arbitrary expressions in cases

  • The switch statement can switch over nothing (which is equivalent to switching over true)

  • Cases do not fall through by default (no break needed); use fallthrough at the end of a block to fall through.

  • The for loop can loop over ranges: for key, val := range map { do something }

Goroutines

The keyword go spawns a new goroutine, a concurrently executed function. It can be used with any function call, even a function literal:

func main() {
    ...
    go func() {
        ...
    }()

    go some_function(some_argument)
}

Channels

Goroutines are often combined with channels to provide an extended form of Communicating Sequential Processes [2]. A channel is a concurrent-safe queue, and can be buffered or unbuffered:

var unbuffered = make(chan int)  // sending blocks until value has been read
var buffered = make(chan int, 5) // may have up to 5 unread values queued

The <- operator is used to communicate with a single channel.

valueReadFromChannel := <- channel
otherChannel <- valueToSend

The select statement allows communication with multiple channels:

select {
case incoming := <- inboundChannel:
    // A new message for me
case outgoingChannel <- outgoing:
    // Could send a message, yay!
}

The defer statement

Go provides a defer statement that allows a function call to be scheduled for execution when the function exits. It can be used for resource clean-up, for example:

func myFunc(someFile io.ReadCloser) {
    defer someFile.Close()
    /* Do stuff with file */
}

It is of course possible to use function literals as the function to call, and any variables can be used as usual when writing the call.

Error handling

Go does not provide exceptions or structured error handling. Instead, it handles errors by returning them in a second or later return value:

func Read(p []byte) (n int, err error)

// Built-in type:
type error interface {
    Error() string
}

Errors have to be checked in the code, or can be explicitly ignored by assigning them to _:

n0, _ := Read(buffer) // ignore error

n, err := Read(buffer)
if err != nil {
    return err
}

There are two functions to quickly unwind and recover the call stack, though: panic() and recover(). When panic() is called, the call stack is unwound, and any deferred functions are run as usual. When a deferred function invokes recover(), the unwinding stops, and the value given to panic() is returned. If we are unwinding normally and not due to a panic, recover() simply returns nil. In the example below, a function is deferred and any error value that is given to panic() will be recovered and stored in an error return value. Libraries sometimes use that approach to make highly recursive code like parsers more readable, while still maintaining the usual error return value for public functions.

func Function() (err error) {
    defer func() {
        s := recover()
        switch s := s.(type) { // type switch
        case error:
            err = s // s has type error now
        default:
            panic(s)
        }
    }()
}

Arrays and slices

As mentioned before, an array is a value type and a slice is a pointer into an array, created either by slicing an existing array or by using make() to create a slice, which will create an anonymous array to hold the elements.

slice1 := make([]int, 2, 5) // 5 elements allocated, 2 initialized to 0
slice2 := array[:]          // sliced entire array
slice3 := array[1:]         // slice of array without first element

There are some more possible combinations for the slicing operator than mentioned above, but this should give a good first impression.

A slice can be used as a dynamically growing array, using the append() function.

slice = append(slice, value1, value2)
slice = append(slice, arrayOrSlice...)

Slices are also used internally to represent the variable parameters of variadic functions.

Maps

Maps are simple key-value stores and support indexing and assigning. They are not thread-safe.

someValue := someMap[someKey]
someValue, ok := someMap[someKey] // ok is false if key not in someMap
someMap[someKey] = someValue

  1. Frequently Asked Questions (FAQ) - The Go Programming Language. https://golang.org/doc/faq#history
  2. Hoare, C. A. R. Communicating Sequential Processes. Communications of the ACM, 1978, 21(8), 666-677.

Daniel Pocock: Merry Christmas from the Balkans

Sun, 23/12/2018 - 11:27pm

This Christmas I'm visiting the Balkans again. It is the seventh time in the last two years that I have been fortunate enough to visit this largely undiscovered but very exciting region of Europe.

A change of name

On Saturday I visited Skopje, the capital of Macedonia. Next month the country will finalize its change of name to the Republic of North Macedonia.

Prishtina

From Skopje, I travelled north to Prishtina, the capital of Kosovo.

I had dinner with four young women who have become outstanding leaders in the free software movement in the region, Albiona, Elena, Amire and Enkelena.

The population of Kosovo is over ninety percent Muslim, so not everybody observes Christmas as a religious festival, but the city of Prishtina is nonetheless decorated beautifully, with several large trees in the pedestrianised city centre.

Dougie Richardson: Passwordless SSH access on a Pi

Sun, 23/12/2018 - 12:25pm

Passwordless SSH access is convenient, especially as everything is on my local network. I only really access the Pi remotely, and you can configure it to use RSA keys. I'm on Ubuntu Linux, so open a terminal and create an RSA key (if you don't have one), then upload it to the Pi: […]
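
The excerpt is cut off before the commands themselves; a minimal sketch of the usual approach looks like the following (the hostname and username are placeholders, not taken from the post):

# Create an RSA key pair if you don't already have one
# (accept the default path; a passphrase is optional).
ssh-keygen -t rsa -b 4096

# Copy the public key to the Pi; afterwards 'ssh pi@raspberrypi.local'
# should log in without prompting for the account password.
ssh-copy-id pi@raspberrypi.local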

The post Passwordless SSH access on a Pi appeared first on The Midlife Geek.

Lubuntu Blog: Sunsetting i386

Fri, 21/12/2018 - 1:43am
Lubuntu has been and continues to be the go-to Ubuntu flavor for people who want the most from their computers, especially older hardware that cannot handle today’s workloads. However, the project and computing as a whole have drastically changed in many ways since its origin ten years ago. Computers have become faster, more secure, and […]

Eric Hammond: Using AWS SSM Parameter Store With Git SSH Keys

Fri, 21/12/2018 - 1:00am

and employing them securely

At Archer, we have been moving credentials into AWS Systems Manager (SSM) Parameter Store and AWS Secrets Manager. One of the more interesting credentials is an SSH key that is used to clone a GitHub repository into an environment that has IAM roles available (E.g., AWS Lambda, Fargate, EC2).

We’d like to treat this SSH private key as a secret that is stored securely in SSM Parameter Store, with access controlled by AWS IAM, and only retrieve it briefly when it is needed to be used. We don’t even want to store it on disk when it is used, no matter how temporarily.

After a number of design and test iterations with Buddy, here is one of the approaches we ended up with. This is one I like for how clean it is, but may not be what ends up going into the final code.

This solution assumes that you are using bash to run your Git commands, but could be converted to other languages if needed.

Using The Solution

Here is the bash function that retrieves the SSH private key from SSM Parameter Store, adds it to a temporary(!) ssh-agent process, and runs the desired git subcommand using the same temporary ssh-agent process:

git-with-ssm-key() {
  ssm_key="$1"; shift
  ssh-agent bash -o pipefail -c '
    if aws ssm get-parameter \
         --with-decryption \
         --name "'$ssm_key'" \
         --output text \
         --query Parameter.Value |
       ssh-add -q -
    then
      git "$@"
    else
      echo >&2 "ERROR: Failed to get or add key: '$ssm_key'"
      exit 1
    fi
  ' bash "$@"
}

Here is a sample of how the above bash function might be used to clone a repository using a Git SSH private key stored in SSM Parameter Store under the key “/githubsshkeys/gitreader”:

git-with-ssm-key /githubsshkeys/gitreader clone git@github.com:alestic/myprivaterepo.git

Other git subcommands can be run the same way. The SSH private key is only kept in memory and only during the execution of the git command.

How It Works

The main trick here is that ssh-agent can be run specifying a single command as an argument. That command in this case is a bash process that turns around and runs multiple commands.

It first gets the SSH private key from SSM Parameter Store, and adds the key to the ssh-agent process by passing it on stdin. Then it runs the requested git command, with the ssh-agent verifying identity to GitHub using the SSH private key.

When the git command has completed, the parent ssh-agent also disappears, cleaning up after itself.

Note: The current syntax doesn’t work with arguments that include spaces and other strange characters that might need quoting or escaping. I’d love to fix this, but note that this is only needed for commands that interact with the remote GitHub service.

Setting Up SSM Parameter Store

Now let’s go back and talk about how we might set up the AWS SSM Parameter Store and GitHub so that the above can access a repository.

Create a new SSH key with no passphrase (as it will be used by automated processes). This does go to disk, so do it somewhere safe.

keyname="gitreader" # Or something meaningful to you

ssh-keygen -t rsa -N "" -b 4096 -C "$keyname" -f "$keyname.pem"

Upload the SSH private key to SSM Parameter Store:

ssm_key="/githubsshkeys/$keyname"             # Your choice
description="SSH private key for reading Git" # Your choice

aws ssm put-parameter \
  --name "$ssm_key" \
  --type SecureString \
  --description "$description" \
  --value "$(cat $keyname.pem)"

Note: The above uses the default AWS SSM key in your account, but you can specify another with the --key-id option.
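
For example, a variant of the upload above that pins the parameter to a customer-managed KMS key might look like this (the key alias is a placeholder, not from the original article):

# Same upload as above, but encrypted with a specific KMS key
# instead of the account's default SSM key.
aws ssm put-parameter \
  --name "$ssm_key" \
  --type SecureString \
  --description "$description" \
  --key-id "alias/git-ssh-keys" \
  --value "$(cat $keyname.pem)"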

Once the SSH private key is safely in SSM Parameter Store, shred/wipe the copy on the local disk using something like (effectiveness may vary depending on file system type and underlying hardware):

shred -u "$keyname.pem" # or wipe, or your favorite data destroyer

Setting Up GitHub User

The SSH public key can be used to provide access with different Git repository hosting providers, but GitHub is currently the most popular.

Create a new GitHub user for automated use:

https://github.com/

Copy the SSH public key that we just created

cat "$keyname.pem.pub"

Add the new SSH key to the GitHub user, pasting in the SSH public key value:

https://github.com/settings/ssh/new

Do not upload the SSH private key to GitHub. Besides, you’ve already shredded it.

Setting Up GitHub Repo Access

How you perform this step depends on how you have set up GitHub.

If you want the new user to have read-only access (and not push access), then you probably want to use a GitHub organization to own the repository, and add the new user to a team that has read-only access to that repository.

Here’s more information about giving teams different levels of access in a GitHub organization:

https://help.github.com/articles/about-teams/

Alternatively, you can add the new GitHub user as a collaborator on a repository, but that will allow anybody with access to the SSH private key (which is now located in SSM Parameter Store) to push changes to that repository, instead of enforcing read-only.

Once GitHub is set up, you can go back and use the git-with-ssm-key command that was shown at the start of this article. For example:

git-with-ssm-key "$ssm_key" clone git@github.com:MYORG/MYREPO.git

If you have given your GitHub user write access to a repo, you can also use the push and related git subcommands.

Cleanup

Once you are done with testing this setup, you can clean up after yourself.

Remove the SSM Parameter Store key/value.

aws ssm delete-parameter \
  --name "$ssm_key"

If you created a GitHub user and no longer need it, you may delete it carefully. WARNING! Make sure you sign back in to the temporary GitHub user first! Do not delete your main GitHub user!

https://github.com/settings/admin

When the GitHub user is deleted, GitHub will take care of removing that user from team membership and repository collaborator lists.

GitHub vs. AWS CodeCommit

For now, we are using GitHub at our company, which is why we need to go through all of the above rigamarole.

If we were using AWS CodeCommit, this entire process would be easier, because we could just give the code permission to read the Git repository in CodeCommit using the IAM role in Lambda/Fargate/EC2.

Original article and comments: https://alestic.com/2018/12/aws-ssm-parameter-store-git-key/

Ubuntu Podcast from the UK LoCo: S11E41 – Forty-One Jane Doe’s

Thu, 20/12/2018 - 4:00pm

This week we have been playing Super Smash Bros Ultimate and upgrading home servers from Ubuntu 16.04 to 18.04. We discuss Discord Store confirming Linux support, MIPS going open source, Microsoft Edge switching to Chromium and the release of Collabora Online Developer Edition 4.0 RC1. We also round up community news and events.

It’s Season 11 Episode 41 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org, or tweet us, or comment on our Facebook page, our Google+ page, or our sub-Reddit.

Jonathan Riddell: Achievement of the Week

Thu, 13/12/2018 - 7:41pm

This week I gave KDE Frameworks a web page after only 4 years of us trying to promote it as the best thing ever since tobogganing without one.  I also updated the theme on the KDE Applications 18.12 announcement to this millennium and even made the images in it have a fancy popup effect using the latest in jQuery Bootstrap CSS.  But my proudest contribution is making the screenshot for the new release of Konsole showing how it can now display all the cat emojis plus one for a poodle.

So far no comments asking why I named my computer thus.


Alan Pope: Fixing Broken Dropbox Sync Support

Thu, 13/12/2018 - 12:15pm

Like many people, I've been using Dropbox to share files with friends and family for years. It's a super convenient and easy way to get files synchronised between machines you own and to work with others. This morning I was greeted with a lovely message on my Ubuntu desktop.

It says "Can't sync Dropbox until you sign in and move it to a supported file system" with options to "See requirements", "Quit Dropbox" and "Sign in".

Dropbox have reduced the number of file systems they support. We knew this was coming for a while, but it's a pain if you don't use one of the supported filesystems.

Recently I re-installed my Ubuntu 18.04 laptop and chose XFS rather than the default ext4 partition type when installing. That's the reason the error is appearing for me.

I do also use NextCloud and Syncthing for syncing files, but some of the people I work with only use Dropbox, and forcing them to change is tricky.

So I wanted a solution where I could continue to use Dropbox but not have to re-format the home partition on my laptop. The 'fix' is to create a file, format it ext4 and mount it where Dropbox expects your files to be. That's essentially it. Yay Linux. This may be useful to others, so I've detailed the steps below.

Note: I strongly recommend backing up your dropbox folder first, but I'm sure you already did that so let's assume you're good.

This is just a bunch of commands, which you could blindly paste en masse or, preferably, run one-by-one, checking each did what it should before moving on. It worked for me, but may not work for you. I am not to blame if this deletes your cat pictures. Before you begin, stop Dropbox completely. Close the client.

I've also put these in a github gist.

# Location of the image which will contain the new ext4 partition
DROPBOXFILE="$HOME"/.dropbox.img

# Current location of my Dropbox folder
DROPBOXHOME="$HOME"/Dropbox

# Where we will copy the folder to. If you have little space, you could make this
# a folder on a USB drive
DROPBOXBACKUP="$HOME"/old_Dropbox

# What size is the Dropbox image file going to be. It makes sense to set this
# to whatever the capacity of your Dropbox account is, or a little more.
DROPBOXSIZE="20G"

# Create a 'sparse' file which will start out small and grow to the maximum
# size defined above. So we don't eat all that space immediately.
dd if=/dev/zero of="$DROPBOXFILE" bs=1 count=0 seek="$DROPBOXSIZE"

# Format it ext4, because Dropbox Inc. says so
sudo mkfs.ext4 "$DROPBOXFILE"

# Move the current Dropbox folder to the backup location
mv "$DROPBOXHOME" "$DROPBOXBACKUP"

# Make a new Dropbox folder to replace the old one. This will be the mount point
# under which the sparse file will be mounted
mkdir "$DROPBOXHOME"

# Make sure the mount point can't be written to if for some reason the partition
# doesn't get mounted. We don't want dropbox to see an empty folder and think 'yay, let's delete
# all his files because this folder is empty, that must be what they want'
sudo chattr +i "$DROPBOXHOME"

# Mount the sparse file at the dropbox mount point
sudo mount -o loop "$DROPBOXFILE" "$DROPBOXHOME"

# Copy the files from the existing dropbox folder to the new one, which will put them
# inside the sparse file. You should see the file grow as this runs.
sudo rsync -a "$DROPBOXBACKUP"/ "$DROPBOXHOME"/

# Create a line in our /etc/fstab so this gets mounted on every boot up
echo "$DROPBOXFILE" "$DROPBOXHOME" ext4 loop,defaults,rw,relatime,exec,user_xattr 0 0 | sudo tee -a /etc/fstab

# Let's unmount it so we can make sure the above line worked
sudo umount "$DROPBOXHOME"

# This will mount as per the fstab
sudo mount -a

# Set ownership and permissions on the new folder so Dropbox has access
sudo chown $(id -un) "$DROPBOXHOME"
sudo chgrp $(id -gn) "$DROPBOXHOME"

Now start Dropbox. All things being equal, the error message will go away, and you can carry on with your life, syncing files happily.

Hope that helps. Leave a comment here or over on the github gist.

Benjamin Mako Hill: Awards and citations at computing conferences

Sun, 09/12/2018 - 9:20pm

I’ve heard a surprising “fact” repeated in the CHI and CSCW communities that receiving a best paper award at a conference is uncorrelated with future citations. Although it’s surprising and counterintuitive, it’s a nice thing to think about when you don’t get an award, and a nice thing to say to others when you do. I’ve thought it and said it myself.

It also seems to be untrue. When I tried to check the “fact” recently, I found a body of evidence that suggests that computing papers that receive best paper awards are, in fact, cited more often than papers that do not.

The source of the original “fact” seems to be a CHI 2009 study by Christoph Bartneck and Jun Hu titled “Scientometric Analysis of the CHI Proceedings.” Among many other things, the paper presents a null result for a test of a difference in the distribution of citations across best papers awardees, nominees, and a random sample of non-nominees.

Although the award analysis is only a small part of Bartneck and Hu’s paper, at least two papers have subsequently brought more attention, more data, and more sophisticated analyses to the question. In 2015, the question was asked by Jacques Wainer, Michael Eckmann, and Anderson Rocha in their paper “Peer-Selected ‘Best Papers’—Are They Really That ‘Good’?”

Wainer et al. build two datasets: one of papers from 12 computer science conferences with citation data from Scopus, and another of papers from 17 different conferences with citation data from Google Scholar. Because of concerns about parametric assumptions, Wainer et al. used a non-parametric rank-based technique to compare awardees to non-awardees. Wainer et al. summarize their results as follows:

The probability that a best paper will receive more citations than a non best paper is 0.72 (95% CI = 0.66, 0.77) for the Scopus data, and 0.78 (95% CI = 0.74, 0.81) for the Scholar data. There are no significant changes in the probabilities for different years. Also, 51% of the best papers are among the top 10% most cited papers in each conference/year, and 64% of them are among the top 20% most cited.

The question was also recently explored in a different way by Danielle H. Lee in her paper on “Predictive power of conference‐related factors on citation rates of conference papers” published in June 2018.

Lee looked at 43,000 papers from 81 conferences and built a regression model to predict citations. Taking into account a number of controls not considered in previous analyses, Lee finds that the marginal effect of receiving a best paper award on citations is positive, well-estimated, and large.

Why did Bartneck and Hu come to such different conclusions than later work?

Distribution of citations (received by 2009) of CHI papers published between 2004-2007 that were nominated for a best paper award (n=64), received one (n=12), or were part of a random sample of papers that did not (n=76).

My first thought was that perhaps CHI is different than the rest of computing. However, when I looked at the data from Bartneck and Hu’s 2009 study—conveniently included as a figure in their original study—you can see that they did find a higher mean among the award recipients compared to both nominees and non-nominees. The entire distribution of citations among award winners appears to be pushed upwards. Although Bartneck and Hu found an effect, they did not find a statistically significant effect.

Given the more recent work by Wainer et al. and Lee, I’d be willing to venture that the original null finding was a function of the fact that citations are a very noisy measure—especially over a 2-5 year post-publication period—and that the Bartneck and Hu dataset was small, with only 12 awardees out of 152 papers total. This might have caused problems because the statistical test the authors used was an omnibus test for differences in a three-group sample that was imbalanced heavily toward the two groups (nominees and non-nominees) in which there appears to be little difference. My bet is that the paper’s conclusion on awards is simply an example of how a null effect is not evidence of a non-effect—especially in an underpowered dataset.

Of course, none of this means that award winning papers are better. Despite Wainer et al.’s claim that they are showing that award winning papers are “good,” none of the analyses presented can disentangle the signalling value of an award from differences in underlying paper quality. The packed rooms one routinely finds at best paper sessions at conferences suggest that at least some of the additional citations received by award winners might be due to the extra exposure the awards themselves provide. In the future, perhaps people can say something along these lines instead of repeating the “fact” of the non-relationship.


Omer Akram: Introducing PySide2 (Qt for Python) Snap Runtime

Fri, 07/12/2018 - 6:11pm
Lately at Crossbar.io, we have been using PySide2 for an internal project. Last week it reached a milestone and I am now in the process of code cleanup and refactoring, as we had to rush quite a few things for that deadline. We also created a snap package for the project; our previous approach was to ship the whole PySide2 runtime (170MB+) with the snap, which worked but was a slow process, because each new snap build involved downloading PySide2 from PyPI and installing some deb dependencies.

So I decided to play with the content interface and cooked up a new snap that is now published to the Snap Store. This definitely resulted in an overall size reduction of the snap, but at the same time it opens up a lot of different opportunities for app development on the Linux desktop.

I created a 'Hello World' snap that is just 8 KB in size, since it doesn't include any dependencies; they are provided by the pyside2 snap. I am currently working on a very simple "sound recorder" app using PySide2 and will publish it to the Snap Store.

With the pyside2 snap installed, we can probably export a few environment variables to make the runtime available outside of the snap environment, for someone who is developing an app on their computer.
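
As a rough illustration of that idea, something like the following might work; every path below is a guess based on how content snaps are usually laid out, not something the pyside2 snap is documented to provide, so inspect /snap/pyside2/current first and adjust:

# Hypothetical sketch: point a system Python at the PySide2 runtime shipped
# inside the pyside2 snap (directory names below are assumptions).
PYSIDE2_SNAP=/snap/pyside2/current
export PYTHONPATH="$PYSIDE2_SNAP/lib/python3.6/site-packages:$PYTHONPATH"
export LD_LIBRARY_PATH="$PYSIDE2_SNAP/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH"
python3 -c "from PySide2 import QtCore; print(QtCore.qVersion())"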

Jonathan Riddell: www.kde.org

Thu, 06/12/2018 - 5:44pm

It’s not uncommon to come across some dusty corner of KDE which hasn’t been touched in ages and has only half implemented features. One of the joys of KDE is being able to plunge in and fix any such problem areas. But it’s quite a surprise when a high profile area of KDE ends up unmaintained. www.kde.org is one such area and it was getting embarrassing. In February 2016 we had a sprint where a new theme was rolled out on the main pages, making the website look fresh and act responsively on mobiles, but since then, for various failures of management, nothing has happened. So while the neon build servers were down for shuffling to a new machine, I looked into why Plasma release announcements were updated but not Frameworks or Applications announcements. I’d automated Plasma announcements a while ago, but it turns out the other announcements are still done manually, so I updated those and poked the people involved. Then of course I got stuck looking at all the other pages which hadn’t been ported to the new theme. On review there were not actually too many of them; if you ignore the announcements, the website is not very large.

Many of the pages could be just forwarded to more recent equivalents such as getting the history page (last update in 2003) to point to timeline.kde.org or the presentation slides page (last update for KDE 4 release) to point to a more up to date wiki page.

Others are worth reviving, such as the KDE screenshots page, press contacts, and support page. The contents could still do with some pondering on what is useful, but while they exist we shouldn’t pretend they don’t, so I updated those and added back links to them.

While many of these pages are hard to find or not linked at all from www.kde.org they are still the top hits in Google when you search for “KDE presentation” or “kde history” or “kde support” so it is worth not looking like we are a dead project.

There were also obvious bugs that needed fixing: for example, the cookie-opt-out banner didn’t let you opt out, the font didn’t get loaded, and the favicon was inconsistent.

All of these are easy enough fixes but the technical barrier is too high to get it done easily (you need special permission to have access to www.kde.org reasonably enough) and the social barrier is far too high (you will get complaints when changing something high profile like this, far easier to just let it rot). I’m not sure how to solve this but KDE should work out a way to allow project maintenance tasks like this be more open.

Anyway, yay: www.kde.org now has the new theme everywhere (except old announcements) and its pages have up-to-date content.

There is a TODO item to track website improvements if you’re interested in helping, although it misses the main one, which is the stalled port to WordPress; again, a place where someone just needs to plunge in and do the work. It’s satisfying because it’s a high profile improvement, but alas it highlights some failings in a mature community project like ours.


Colin Watson: Deploying Swift

Tue, 04/12/2018 - 2:37am

Sometimes I want to deploy Swift, the OpenStack object storage system.

Well, no, that’s not true. I basically never actually want to deploy Swift as such. What I generally want to do is to debug some bit of production service deployment machinery that relies on Swift for getting build artifacts into the right place, or maybe the parts of the Launchpad librarian (our blob storage service) that use Swift. I could find an existing private or public cloud that offers the right API and test with that, but sometimes I need to test with particular versions, and in any case I have a terribly slow internet connection and shuffling large build artifacts back and forward over the relevant bit of wet string makes it painfully slow to test things.

For a while I’ve had an Ubuntu 12.04 VM lying around with an Icehouse-based Swift deployment that I put together by hand. It works, but I didn’t keep good notes and have no real idea how to reproduce it, not that I really want to keep limping along with manually-constructed VMs for this kind of thing anyway; and I don’t want to be dependent on obsolete releases forever. For the sorts of things I’m doing I need to make sure that authentication works broadly the same way as it does in a real production deployment, so I want to have Keystone too. At the same time, I definitely don’t want to do anything close to a full OpenStack deployment of my own: it’s much too big a sledgehammer for this particular nut, and I don’t really have the hardware for it.

Here’s my solution to this, which is compact enough that I can run it on my laptop, and while it isn’t completely automatic it’s close enough that I can spin it up for a test and discard it when I’m finished (so I haven’t worried very much about producing something that runs efficiently). It relies on Juju and LXD. I’ve only tested it on Ubuntu 18.04, using Queens; for anything else you’re on your own. In general, I probably can’t help you if you run into trouble with the directions here: this is provided “as is”, without warranty of any kind, and all that kind of thing.

First, install Juju and LXD if necessary, following the instructions provided by those projects, and also install the python-openstackclient package as you’ll need it later. You’ll want to set Juju up to use LXD, and you should probably make sure that the shells you’re working in don’t have http_proxy set as it’s quite likely to confuse things unless you’ve arranged for your proxy to be able to cope with your local LXD containers. Then add a model:

juju add-model swift

At this point there’s a bit of complexity that you normally don’t have to worry about with Juju. The swift-storage charm wants to mount something to use for storage, which with the LXD provider in practice ends up being some kind of loopback mount. Unfortunately, being able to perform loopback mounts exposes too much kernel attack surface, so LXD doesn’t allow unprivileged containers to do it. (Ideally the swift-storage charm would just let you use directory storage instead.) To make the containers we’re about to create privileged enough for this to work, run:

lxc profile set juju-swift security.privileged true
lxc profile device add juju-swift loop-control unix-char \
    major=10 minor=237 path=/dev/loop-control
for i in $(seq 0 255); do
    lxc profile device add juju-swift loop$i unix-block \
        major=7 minor=$i path=/dev/loop$i
done

Now we can start deploying things! Save this to a file, e.g. swift.bundle:

series: bionic
description: "Swift in a box"
applications:
  mysql:
    charm: "cs:mysql-62"
    channel: candidate
    num_units: 1
    options:
      dataset-size: 512M
  keystone:
    charm: "cs:keystone"
    num_units: 1
  swift-storage:
    charm: "cs:swift-storage"
    num_units: 1
    options:
      block-device: "/etc/swift/storage.img|5G"
  swift-proxy:
    charm: "cs:swift-proxy"
    num_units: 1
    options:
      zone-assignment: auto
      replicas: 1
relations:
  - ["keystone:shared-db", "mysql:shared-db"]
  - ["swift-proxy:swift-storage", "swift-storage:swift-storage"]
  - ["swift-proxy:identity-service", "keystone:identity-service"]

And run:

juju deploy swift.bundle

This will take a while. You can run juju status to see how it’s going in general terms, or juju debug-log for detailed logs from the individual containers as they’re putting themselves together. When it’s all done, it should look something like this:

Model  Controller  Cloud/Region  Version  SLA
swift  lxd         localhost     2.3.1    unsupported

App            Version  Status  Scale  Charm          Store       Rev  OS      Notes
keystone       13.0.1   active      1  keystone       jujucharms  290  ubuntu
mysql          5.7.24   active      1  mysql          jujucharms   62  ubuntu
swift-proxy    2.17.0   active      1  swift-proxy    jujucharms   75  ubuntu
swift-storage  2.17.0   active      1  swift-storage  jujucharms  250  ubuntu

Unit              Workload  Agent  Machine  Public address  Ports     Message
keystone/0*       active    idle   0        10.36.63.133    5000/tcp  Unit is ready
mysql/0*          active    idle   1        10.36.63.44     3306/tcp  Ready
swift-proxy/0*    active    idle   2        10.36.63.75     8080/tcp  Unit is ready
swift-storage/0*  active    idle   3        10.36.63.115              Unit is ready

Machine  State    DNS           Inst id        Series  AZ  Message
0        started  10.36.63.133  juju-d3e703-0  bionic      Running
1        started  10.36.63.44   juju-d3e703-1  bionic      Running
2        started  10.36.63.75   juju-d3e703-2  bionic      Running
3        started  10.36.63.115  juju-d3e703-3  bionic      Running

At this point you have what should be a working installation, but with only administrative privileges set up. Normally you want to create at least one normal user. To do this, start by creating a configuration file granting administrator privileges (this one comes verbatim from the openstack-base bundle):

_OS_PARAMS=$(env | awk 'BEGIN {FS="="} /^OS_/ {print $1;}' | paste -sd ' ')
for param in $_OS_PARAMS; do
    if [ "$param" = "OS_AUTH_PROTOCOL" ]; then continue; fi
    if [ "$param" = "OS_CACERT" ]; then continue; fi
    unset $param
done
unset _OS_PARAMS

_keystone_unit=$(juju status keystone --format yaml | \
    awk '/units:$/ {getline; gsub(/:$/, ""); print $1}')
_keystone_ip=$(juju run --unit ${_keystone_unit} 'unit-get private-address')
_password=$(juju run --unit ${_keystone_unit} 'leader-get admin_passwd')

export OS_AUTH_URL=${OS_AUTH_PROTOCOL:-http}://${_keystone_ip}:5000/v3
export OS_USERNAME=admin
export OS_PASSWORD=${_password}
export OS_USER_DOMAIN_NAME=admin_domain
export OS_PROJECT_DOMAIN_NAME=admin_domain
export OS_PROJECT_NAME=admin
export OS_REGION_NAME=RegionOne
export OS_IDENTITY_API_VERSION=3
# Swift needs this:
export OS_AUTH_VERSION=3
# Gnocchi needs this
export OS_AUTH_TYPE=password

Source this into a shell: for instance, if you saved this to ~/.swiftrc.juju-admin, then run:

. ~/.swiftrc.juju-admin

You should now be able to run openstack endpoint list and see a table for the various services exposed by your deployment. Then you can create a dummy project and a user with enough privileges to use Swift:

USERNAME=your-username
PASSWORD=your-password
openstack domain create SwiftDomain
openstack project create --domain SwiftDomain --description Swift \
    SwiftProject
openstack user create --domain SwiftDomain --project-domain SwiftDomain \
    --project SwiftProject --password "$PASSWORD" "$USERNAME"
openstack role add --project SwiftProject --user-domain SwiftDomain \
    --user "$USERNAME" Member

(This is intended for testing rather than for doing anything particularly sensitive. If you cared about keeping the password secret then you’d use the --password-prompt option to openstack user create instead of supplying the password on the command line.)

Now create a configuration file granting privileges for the user you just created. I felt like automating this to at least some degree:

touch ~/.swiftrc.juju
chmod 600 ~/.swiftrc.juju
sed '/^_password=/d;
     s/\( OS_PROJECT_DOMAIN_NAME=\).*/\1SwiftDomain/;
     s/\( OS_PROJECT_NAME=\).*/\1SwiftProject/;
     s/\( OS_USER_DOMAIN_NAME=\).*/\1SwiftDomain/;
     s/\( OS_USERNAME=\).*/\1'"$USERNAME"'/;
     s/\( OS_PASSWORD=\).*/\1'"$PASSWORD"'/' \
    <~/.swiftrc.juju-admin >~/.swiftrc.juju

Source this into a shell. For example:

. ~/.swiftrc.juju

You should now find that swift list works. Success! Now you can swift upload files, or just start testing whatever it was that you were actually trying to test in the first place.
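
For example (the container and file names here are arbitrary, not from the post):

swift list
swift upload test-container ./some-build-artifact.tar.gz
swift list test-container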

This is not a setup I expect to leave running for a long time, so to tear it down again:

juju destroy-model swift

This will probably get stuck trying to remove the swift-storage unit, since nothing deals with detaching the loop device. If that happens, find the relevant device in losetup -a from another window and use losetup -d to detach it; juju destroy-model should then be able to proceed.
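
Something along these lines, run from another window, usually does the trick (the loop device number is whatever losetup reports on your machine):

# Find the loop device backing the Swift storage image...
losetup -a | grep storage.img
# ...then detach it (substitute the device printed above) so that
# 'juju destroy-model swift' can finish.
sudo losetup -d /dev/loop0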

Credit to the Juju and LXD teams and to the maintainers of the various charms used here, as well as of course to the OpenStack folks: their work made it very much easier to put this together.

Eric Hammond: Guest Post: Notable AWS re:invent Sessions, by Jennine Townsend

Mon, 03/12/2018 - 1:00am

A guest post authored by Jennine Townsend, expert sysadmin and AWS aficionado

There were so many sessions at re:Invent! Now that it’s over, I want to watch some sessions on video, but which ones?

Of course I’ll pick out those that are specific to my interests, but I also want to know the sessions that had good buzz, so I made a list that’s kind of mashed together from sessions that I heard good things about on Twitter, with those that had lots of repeats and overflow sessions, figuring those must have been popular.

But I confess I left out some whole categories! There aren’t sessions for Alexa or DeepRacer (not that I’m not interested, they’re just not part of my re:Invent followup), and I don’t administer any Windows systems so I leave out most of those sessions.

Some sessions have YouTube links; some don’t have YouTube videos (yet) and may never have them, since lots of (types of) sessions aren’t recorded. (But even there, if I search the topic and speakers, I bet I can often find an earlier talk.)

There’s not much of a ranking: keynotes at the top, sessions I heard good things about in the middle, then sessions that had lots of repeats. It’s only mildly specific to my interests, so I thought other people might find it helpful. It’s also not really finished, but I wanted to get started watching sessions this weekend!

Keynotes

Peter DeSantis Monday Night Live

Terry Wise Global Partner Keynote

Andy Jassy keynote

Werner Vogels keynote

Popular: Buzz during AWS re:Invent

DEV322 What’s New with the AWS CLI (Kyle Knapp, James Saryerwinnie)

SRV409 A Serverless Journey: AWS Lambda Under the Hood

CON362 Container Power Hour with Jess, Clare, and Abby

SRV325 Using DevOps, Microservices, and Serverless to Accelerate Innovation (David Richardson, Ken Exner, Deepak Singh)

SRV375 Lambda Layers and Runtime API (Danilo Poccia) - Chalk Talk

SRV338 Configuration Management and Service Discovery (mentions CloudMap) (Alex Casalboni, Ben Kehoe) - Chalk Talk

CON367 Introducing App Mesh (Kiran Meduri, Shubha Rao, James Straub)

SRV355 Best Practices for CI/CD with AWS Lambda and Amazon API Gateway (Chris Munns) (focuses on SAM, CodeStar, I believe) - Chalk Talk

DEV327 Advanced Infrastructure as Code Programming on AWS

SRV322 From Monolith to Modern Apps: Best Practices

Popular: Repeats During AWS re:Invent

CON301 Mastering Kubernetes on AWS

ARC202 Running Lean Architectures: How to Optimize for Cost Efficiency

DEV319 Continuous Integration Best Practices

AIM404 Build, Train, and Deploy ML Models Quickly and Easily with Amazon SageMaker

STG209 Amazon S3 Storage Management (Scott Hewitt) - Chalk Talk

ENT205 Executing a Large-Scale Migration to AWS (Joe Chung, Jonathan Allen, Mike Wittig)

DEV317 Advanced Continuous Delivery Best Practices

CON308 Building Microservices with Containers

ANT323 Build Your Own Log Analytics Solutions on AWS

ANT201 Big Data Analytics Architectural Patterns and Best Practices

DEV403 Automate Common Maintenance & Deployment Tasks Using AWS Systems Manager - Builders Session

DAT356 Which Database Should I Use? - Builders Session

DEV309 CI/CD for Serverless and Containerized Applications

ARC209 Architecture Patterns for Multi-Region Active-Active Applications

AIM401 Deep Learning Applications Using TensorFlow

SRV305 Inside AWS: Technology Choices for Modern Applications

SEC401 Mastering Identity at Every Layer of the Cake

SEC371 Incident Response in AWS - Builders Session

SEC322 Using AWS Lambda as a Security Team

NET404 Elastic Load Balancing: Deep Dive and Best Practices

DEV321 What’s New with AWS CloudFormation

DAT205 Databases on AWS: The Right Tool for the Right Job

Original article and comments: https://alestic.com/2018/12/aws-reinvent-jennine/

Balint Reczey: Migrating from Bazaar to Git on Launchpad just got easier!

Sat, 24/11/2018 - 11:48pm

Debian recently switched from Alioth to Salsa, offering only Git hosting from now on, which simplifies the work of existing contributors and also helps newcomers, who are most likely already familiar with Git if they know at least one version control system. (Thanks to everyone involved in the transition!)

On Ubuntu’s side, most Ubuntu-specific packages and a big part of Ubuntu’s infrastructure used to be maintained in Bazaar repositories. Since then Git has become the most widely used version control system, but the Bazaar repositories have not fully disappeared.

There are still hundreds of packages maintained in Bazaar in Ubuntu (packaging repositories in Bazaar by team) and Debian (lintian report) and maintaining them in Git instead could be easier in the long term.

Launchpad already supports Git and there are guidelines for converting Bazaar repositories to Git (1,2), but if you would like to make the switch I suggest taking a look at bzr-git-mass-convert, which is based on bzr fast-export (verifying the result with git-remote-bzr). It is a simple tool for merging multiple Bazaar branches into a single Git repository set up for pushing back to Launchpad.
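
If you only have a branch or two and want to see what the underlying plumbing does, a hand-rolled conversion looks roughly like this; the branch and Launchpad paths are placeholders, and it assumes the bzr fastimport plugin is installed:

# Convert a single Bazaar branch into a fresh Git repository.
git init my-project
cd my-project
bzr fast-export --plain ../my-bzr-branch | git fast-import
git reset --hard master   # materialise a working tree from the imported history

# Optionally push the result to a Git repository on Launchpad
# (placeholder owner/project/repository path).
git remote add origin git+ssh://git.launchpad.net/~my-team/my-project/+git/my-project
git push origin master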

We (at the Foundations Team) use it for migrating our repositories and there is also a wiki page for tracking the migration schedule of popular repositories.

LoCo Ubuntu PT: Ho Ho Ho! Father Christmas came early

Fri, 23/11/2018 - 1:35pm

Hello!

We bring great news for the Community! We have just ordered hoodies and t-shirts featuring Ubuntu and the Ubuntu Portugal Community, and we are selling them to raise funds for the group's activities. But it doesn't end there! We also have badges for sale.

Here is a sample of the material we have:

Perhaps a Christmas present? Or simply a highly stylish accessory?

T-shirts cost €10, hoodies €25, and badges €1.

To order, fill in the form at this link: https://tos.typeform.com/to/FU6int

We count on your contribution!

Cybernaut greetings

Sebastian Kügler: Different indentation styles per filetype

Fri, 23/11/2018 - 9:30am

For my hacking, I love to use the KDevelop IDE. Once in a while, I find myself working on a project that has different indentation styles depending on the filetype — in this case, C++ files, Makefiles, etc. use tabs, JavaScript and HTML files use 2 spaces. I haven’t found this to be straight-forward from KDevelop’s configuration dialog (though I just learnt that it does seem to be possible). I did find myself having to fix indentation before committing (annoying!) and even having to fix up the indentation of code committed by myself (embarrassing!). As that’s both stupid and repetitive work, it’s something I wanted to avoid. Here’s how it’s done using EditorConfig files:

  1. put a file called .editorconfig in the project’s root directory
  2. specify a default style and then a specialization for certain filetypes
  3. restart KDevelop

Here’s what my .editorconfig file looks like:

# EditorConfig is awesome: https://EditorConfig.org

# for the top-most EditorConfig file, set...
# root = true

# In general, tabs shown 2 spaces wide
[*]
indent_style = tab
indent_size = 2

# Matches multiple files with brace expansion notation
[*.{js,html}]
indent_style = space
indent_size = 2

This does the job nicely and has the following advantages:

  • It doesn’t affect my other projects, so I don’t have to run around in the configuration to switch when task-switching. (EditorConfig files cascade, so they will be looked up the filesystem tree for fallback styles.)
  • It works across different editors supporting the editorconfig standards, so not just KWrite, Kate, KDevelop, but also for entirely different products.
  • It allows me to spend less time on formalities and more time on actual coding (or diving).

(Thanks to Reddit.)
