Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/
Updated: 6 months 4 weeks ago

Lubuntu Blog: Sunsetting i386

Fri, 21/12/2018 - 1:43am
Lubuntu has been and continues to be the go-to Ubuntu flavor for people who want the most from their computers, especially older hardware that cannot handle today’s workloads. However, the project and computing as a whole has drastically changed in many ways since its origin ten years ago. Computers have become faster, more secure, and […]

Eric Hammond: Using AWS SSM Parameter Store With Git SSH Keys

Fri, 21/12/2018 - 1:00am

and employing them securely

At Archer, we have been moving credentials into AWS Systems Manager (SSM) Parameter Store and AWS Secrets Manager. One of the more interesting credentials is an SSH key that is used to clone a GitHub repository into an environment that has IAM roles available (E.g., AWS Lambda, Fargate, EC2).

We’d like to treat this SSH private key as a secret that is stored securely in SSM Parameter Store, with access controlled by AWS IAM, and only retrieve it briefly when it is needed to be used. We don’t even want to store it on disk when it is used, no matter how temporarily.

After a number of design and test iterations with Buddy, here is one of the approaches we ended up with. I like this one for how clean it is, but it may not be what ends up going into the final code.

This solution assumes that you are using bash to run your Git commands, but could be converted to other languages if needed.

Using The Solution

Here is the bash function that retrieves the SSH private key from SSM Parameter Store, adds it to a temporary(!) ssh-agent process, and runs the desired git subcommand using the same temporary ssh-agent process:

git-with-ssm-key() {
  ssm_key="$1"; shift
  ssh-agent bash -o pipefail -c '
    if aws ssm get-parameter \
         --with-decryption \
         --name "'$ssm_key'" \
         --output text \
         --query Parameter.Value |
       ssh-add -q -
    then
      git "$@"
    else
      echo >&2 "ERROR: Failed to get or add key: '$ssm_key'"
      exit 1
    fi
  ' bash "$@"
}

Here is a sample of how the above bash function might be used to clone a repository using a Git SSH private key stored in SSM Parameter Store under the key “/githubsshkeys/gitreader”:

git-with-ssm-key /githubsshkeys/gitreader clone git@github.com:alestic/myprivaterepo.git

Other git subcommands can be run the same way. The SSH private key is only kept in memory and only during the execution of the git command.
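
For instance, the same key can be used for later operations on the clone; a sketch, run from inside the working tree created above:

git-with-ssm-key /githubsshkeys/gitreader fetch
git-with-ssm-key /githubsshkeys/gitreader pull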

How It Works

The main trick here is that ssh-agent can be run specifying a single command as an argument. That command in this case is a bash process that turns around and runs multiple commands.

It first gets the SSH private key from SSM Parameter Store, and adds the key to the ssh-agent process by passing it on stdin. Then it runs the requested git command, with the ssh-agent verifying identity to GitHub using the SSH private key.
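
The single-command form of ssh-agent can be seen in isolation with a minimal sketch like this (the key path is a placeholder):

ssh-agent sh -c '
  ssh-add -q /path/to/some/key  # key is added to the temporary agent
  ssh-add -l                    # the agent is alive only for this command
'
ssh-add -l  # back outside: that agent and its keys are gone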

When the git command has completed, the parent ssh-agent also disappears, cleaning up after itself.

Note: The current syntax doesn’t work with arguments that include spaces and other strange characters that might need quoting or escaping. I’d love to fix this, but this is only needed for commands that interact with the remote GitHub service.

Setting Up SSM Parameter Store

Now let’s go back and talk about how we might set up the AWS SSM Parameter Store and GitHub so that the above can access a repository.

Create a new SSH key with no passphrase (as it will be used by automated processes). This does go to disk, so do it somewhere safe.

keyname="gitreader"  # Or something meaningful to you
ssh-keygen -t rsa -N "" -b 4096 -C "$keyname" -f "$keyname.pem"

Upload the SSH private key to SSM Parameter Store:

ssm_key="/githubsshkeys/$keyname"              # Your choice
description="SSH private key for reading Git"  # Your choice
aws ssm put-parameter \
  --name "$ssm_key" \
  --type SecureString \
  --description "$description" \
  --value "$(cat $keyname.pem)"

Note: The above uses the default AWS SSM key in your account, but you can specify another with the --key-id option.
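
For example, a sketch assuming a hypothetical customer-managed key alias of "alias/my-git-keys":

aws ssm put-parameter \
  --name "$ssm_key" \
  --type SecureString \
  --description "$description" \
  --key-id "alias/my-git-keys" \
  --value "$(cat $keyname.pem)"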

Once the SSH private key is safely in SSM Parameter Store, shred/wipe the copy on the local disk using something like (effectiveness may vary depending on file system type and underlying hardware):

shred -u "$keyname.pem" # or wipe, or your favorite data destroyer

Setting Up GitHub User

The SSH public key can be used to provide access with different Git repository hosting providers, but GitHub is currently the most popular.

Create a new GitHub user for automated use:

https://github.com/

Copy the SSH public key that we just created

cat "$keyname.pem.pub"

Add the new SSH key to the GitHub user, pasting in the SSH public key value:

https://github.com/settings/ssh/new

Do not upload the SSH private key to GitHub. Besides, you’ve already shredded it.

Setting Up GitHub Repo Access

How you perform this step depends on how you have set up GitHub.

If you want the new user to have read-only access (and not push access), then you probably want to use a GitHub organization to own the repository, and add the new user to a team that has read-only access to the repository.

Here’s more information about giving teams different levels of access in a GitHub organization:

https://help.github.com/articles/about-teams/

Alternatively, you can add the new GitHub user as a collaborator on a repository, but that will allow anybody with access to the SSH private key (which is now located in SSM Parameter Store) to push changes to that repository, instead of enforcing read-only.

Once GitHub is set up, you can go back and use the git-with-ssm-key command that was shown at the start of this article. For example:

git-with-ssm-key "$ssm_key" clone git@github.com:MYORG/MYREPO.git

If you have given your GitHub user write access to a repo, you can also use the push and related git subcommands.

Cleanup

Once you are done with testing this setup, you can clean up after yourself.

Remove the SSM Parameter Store key/value.

aws ssm delete-parameter \
  --name "$ssm_key"

If you created a GitHub user and no longer need it, you may delete it carefully. WARNING! Make sure you sign back in to the temporary GitHub user first! Do not delete your main GitHub user!

https://github.com/settings/admin

When the GitHub user is deleted, GitHub will take care of removing that user from team membership and repository collaborator lists.

GitHub vs. AWS CodeCommit

For now, we are using GitHub at our company, which is why we need to go through all of the above rigamarole.

If we were using AWS CodeCommit, this entire process would be easier, because we could just give the code permission to read the Git repository in CodeCommit using the IAM role in Lambda/Fargate/EC2.
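
For comparison, here is a rough sketch of the CodeCommit equivalent using the AWS credential helper over HTTPS (the region and repository name are placeholders); the IAM role's permissions do all the work, with no SSH key to manage:

git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MYREPO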

Original article and comments: https://alestic.com/2018/12/aws-ssm-parameter-store-git-key/

Ubuntu Podcast from the UK LoCo: S11E41 – Forty-One Jane Doe’s

Thu, 20/12/2018 - 4:00pm

This week we have been playing Super Smash Bros Ultimate and upgrading home servers from Ubuntu 16.04 to 18.04. We discuss Discord Store confirming Linux support, MIPS going open source, Microsoft Edge switching to Chromium and the release of Collabora Online Developer Edition 4.0 RC1. We also round up community news and events.

It’s Season 11 Episode 41 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Jonathan Riddell: Achievement of the Week

Thu, 13/12/2018 - 7:41pm

This week I gave KDE Frameworks a web page after only 4 years of us trying to promote it as the best thing ever since tobogganing without one. I also updated the theme on the KDE Applications 18.12 announcement to this millennium and even made the images in it have a fancy popup effect using the latest in JQuery Bootstrap CSS. But my proudest contribution is making the screenshot for the new release of Konsole showing how it can now display all the cat emojis plus one for a poodle.

So far no comments asking why I named my computer thus.


Alan Pope: Fixing Broken Dropbox Sync Support

Thu, 13/12/2018 - 12:15pm

Like many people, I've been using Dropbox to share files with friends and family for years. It's a super convenient and easy way to get files synchronised between machines you own, and to work with others. This morning I was greeted with a lovely message on my Ubuntu desktop.

It says "Can't sync Dropbox until you sign in and move it to a supported file system" with options to "See requirements", "Quit Dropbox" and "Sign in".

Dropbox have reduced the number of file systems they support. We knew this was coming for a while, but it's a pain if you don't use one of the supported filesystems.

Recently I re-installed my Ubuntu 18.04 laptop and chose XFS rather than the default ext4 partition type when installing. That's the reason the error is appearing for me.
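
If you're not sure which filesystem your home directory is on, either of these will tell you (findmnt is part of util-linux on Ubuntu):

df -T "$HOME"
findmnt -T "$HOME" -o TARGET,SOURCE,FSTYPE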

I do also use NextCloud and Syncthing for syncing files, but some of the people I work with only use Dropbox, and forcing them to change is tricky.

So I wanted a solution where I could continue to use Dropbox but not have to re-format the home partition on my laptop. The 'fix' is to create a file, format it ext4 and mount it where Dropbox expects your files to be. That's essentially it. Yay Linux. This may be useful to others, so I've detailed the steps below.

Note: I strongly recommend backing up your dropbox folder first, but I'm sure you already did that so let's assume you're good.

This is just a bunch of commands, which you could blindly paste en masse or, preferably, run one-by-one, checking each did what it says it should before moving on. It worked for me, but may not work for you. I am not to blame if this deletes your cat pictures. Before you begin, stop Dropbox completely. Close the client.
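
If you installed Dropbox via the usual Ubuntu packaging, the bundled CLI helper can stop it; a sketch, assuming the dropbox command is on your PATH:

dropbox stop    # ask the running client to exit
dropbox status  # should report that Dropbox isn't running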

I've also put these in a github gist.

# Location of the image which will contain the new ext4 partition
DROPBOXFILE="$HOME"/.dropbox.img

# Current location of my Dropbox folder
DROPBOXHOME="$HOME"/Dropbox

# Where we will copy the folder to. If you have little space, you could make this
# a folder on a USB drive
DROPBOXBACKUP="$HOME"/old_Dropbox

# What size is the Dropbox image file going to be. It makes sense to set this
# to whatever the capacity of your Dropbox account is, or a little more.
DROPBOXSIZE="20G"

# Create a 'sparse' file which will start out small and grow to the maximum
# size defined above. So we don't eat all that space immediately.
dd if=/dev/zero of="$DROPBOXFILE" bs=1 count=0 seek="$DROPBOXSIZE"

# Format it ext4, because Dropbox Inc. says so
sudo mkfs.ext4 "$DROPBOXFILE"

# Move the current Dropbox folder to the backup location
mv "$DROPBOXHOME" "$DROPBOXBACKUP"

# Make a new Dropbox folder to replace the old one. This will be the mount point
# under which the sparse file will be mounted
mkdir "$DROPBOXHOME"

# Make sure the mount point can't be written to if for some reason the partition
# doesn't get mounted. We don't want dropbox to see an empty folder and think 'yay,
# let's delete all his files because this folder is empty, that must be what they want'
sudo chattr +i "$DROPBOXHOME"

# Mount the sparse file at the dropbox mount point
sudo mount -o loop "$DROPBOXFILE" "$DROPBOXHOME"

# Copy the files from the existing dropbox folder to the new one, which will put them
# inside the sparse file. You should see the file grow as this runs.
sudo rsync -a "$DROPBOXBACKUP"/ "$DROPBOXHOME"/

# Create a line in our /etc/fstab so this gets mounted on every boot up
echo "$DROPBOXFILE" "$DROPBOXHOME" ext4 loop,defaults,rw,relatime,exec,user_xattr 0 0 | sudo tee -a /etc/fstab

# Let's unmount it so we can make sure the above line worked
sudo umount "$DROPBOXHOME"

# This will mount as per the fstab
sudo mount -a

# Set ownership and permissions on the new folder so Dropbox has access
sudo chown $(id -un) "$DROPBOXHOME"
sudo chgrp $(id -gn) "$DROPBOXHOME"

Now start Dropbox. All things being equal, the error message will go away, and you can carry on with your life, syncing files happily.

Hope that helps. Leave a comment here or over on the github gist.

Benjamin Mako Hill: Awards and citations at computing conferences

Sun, 09/12/2018 - 9:20pm

I’ve heard a surprising “fact” repeated in the CHI and CSCW communities that receiving a best paper award at a conference is uncorrelated with future citations. Although it’s surprising and counterintuitive, it’s a nice thing to think about when you don’t get an award and a nice thing to say to others when you do. I’ve thought it and said it myself.

It also seems to be untrue. When I tried to check the “fact” recently, I found a body of evidence that suggests that computing papers that receive best paper awards are, in fact, cited more often than papers that do not.

The source of the original “fact” seems to be a CHI 2009 study by Christoph Bartneck and Jun Hu titled “Scientometric Analysis of the CHI Proceedings.” Among many other things, the paper presents a null result for a test of a difference in the distribution of citations across best papers awardees, nominees, and a random sample of non-nominees.

Although the award analysis is only a small part of Bartneck and Hu’s paper, at least two papers have subsequently brought more attention, more data, and more sophisticated analyses to the question. In 2015, the question was asked by Jacques Wainer, Michael Eckmann, and Anderson Rocha in their paper “Peer-Selected ‘Best Papers’—Are They Really That ‘Good’?”

Wainer et al. build two datasets: one of papers from 12 computer science conferences with citation data from Scopus, and another of papers from 17 different conferences with citation data from Google Scholar. Because of parametric concerns, Wainer et al. used a non-parametric rank-based technique to compare awardees to non-awardees. Wainer et al. summarize their results as follows:

The probability that a best paper will receive more citations than a non best paper is 0.72 (95% CI = 0.66, 0.77) for the Scopus data, and 0.78 (95% CI = 0.74, 0.81) for the Scholar data. There are no significant changes in the probabilities for different years. Also, 51% of the best papers are among the top 10% most cited papers in each conference/year, and 64% of them are among the top 20% most cited.
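
A note for readers unfamiliar with this kind of statistic: "the probability that a best paper will receive more citations than a non best paper" is the common-language effect size. Assuming a standard Mann-Whitney style rank analysis (which the paper's description suggests), it is estimated from the rank statistic as

P(X > Y) = U / (n1 * n2)

where U is the Mann-Whitney U statistic and n1 and n2 are the numbers of awardees and non-awardees in the sample.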

The question was also recently explored in a different way by Danielle H. Lee in her paper on “Predictive power of conference‐related factors on citation rates of conference papers” published in June 2018.

Lee looked at 43,000 papers from 81 conferences and built a regression model to predict citations. Taking into account a number of controls not considered in previous analyses, Lee finds that the marginal effect of receiving a best paper award on citations is positive, well-estimated, and large.

Why did Bartneck and Hu come to such different conclusions than later work?

Figure: Distribution of citations (received by 2009) of CHI papers published between 2004-2007 that were nominated for a best paper award (n=64), received one (n=12), or were part of a random sample of papers that did not (n=76).

My first thought was that perhaps CHI is different than the rest of computing. However, when I looked at the data from Bartneck and Hu’s 2009 study—conveniently included as a figure in their original study—I saw that they did find a higher mean among the award recipients compared to both nominees and non-nominees. The entire distribution of citations among award winners appears to be pushed upwards. Although Bartneck and Hu found an effect, they did not find a statistically significant effect.

Given the more recent work by Wainer et al. and Lee, I’d be willing to venture that the original null finding was a function of the fact that citation counts are a very noisy measure—especially over a 2-5 year post-publication period—and that the Bartneck and Hu dataset was small, with only 12 awardees out of 152 papers total. This might have caused problems because the statistical test the authors used was an omnibus test for differences in a three-group sample that was imbalanced heavily toward the two groups (nominees and non-nominees) in which there appears to be little difference. My bet is that the paper’s conclusion on awards is simply an example of how a null effect is not evidence of a non-effect—especially in an underpowered dataset.

Of course, none of this means that award winning papers are better. Despite Wainer et al.’s claim that they are showing that award winning papers are “good,” none of the analyses presented can disentangle the signalling value of an award from differences in underlying paper quality. The packed rooms one routinely finds at best paper sessions at conferences suggest that at least some additional citations received by award winners might come from the extra exposure the awards themselves provide. In the future, perhaps people can say something along these lines instead of repeating the “fact” of the non-relationship.


Omer Akram: Introducing PySide2 (Qt for Python) Snap Runtime

Fri, 07/12/2018 - 6:11pm
Lately at Crossbar.io, we have been using PySide2 for an internal project. Last week it reached a milestone, and I am now in the process of code cleanup and refactoring, as we had to rush quite a few things for that deadline. We also created a snap package for the project. Our previous approach was to ship the whole PySide2 runtime (170 MB+) with the snap; it worked, but it was a slow process, because each new snap build involved downloading PySide2 from PyPI and installing some deb dependencies.

So I decided to play with the content interface and cooked up a new snap that is now published to the Snap Store. This definitely reduced the overall size of the snap, and at the same time it opens up a lot of different opportunities for app development on the Linux desktop.

I created a 'Hello World' snap that is just 8 KB in size, since it doesn't include any dependencies; they are provided by the pyside2 snap. I am currently working on a very simple "sound recorder" app using PySide2 and will publish it to the Snap Store.

With the pyside2 snap installed, we can probably export a few environment variables to make the runtime available outside of the snap environment, for someone who is developing an app on their computer.
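
A rough sketch of what that might look like; the paths here are assumptions about where the content snap exposes its files, not something verified against the published snap:

export PYTHONPATH="/snap/pyside2/current/lib/python3.6/site-packages:$PYTHONPATH"
export LD_LIBRARY_PATH="/snap/pyside2/current/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH"
python3 -c 'import PySide2; print(PySide2.__file__)'  # quick smoke test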

Jonathan Riddell: www.kde.org

Thu, 06/12/2018 - 5:44pm

It’s not uncommon to come across some dusty corner of KDE which hasn’t been touched in ages and has only half-implemented features. One of the joys of KDE is being able to plunge in and fix any such problem areas. But it’s quite a surprise when a high-profile area of KDE ends up unmaintained. www.kde.org is one such area, and it was getting embarrassing. In February 2016 we had a sprint where a new theme was rolled out on the main pages, making the website look fresh and act responsively on mobiles, but since then, for various failures of management, nothing has happened. So while the neon build servers were down for shuffling to a new machine, I looked into why Plasma release announcements were updated but not Frameworks or Applications announcements. I’d automated Plasma announcements a while ago, but it turns out the other announcements are still done manually, so I updated those and poked the people involved. Then of course I got stuck looking at all the other pages which hadn’t been ported to the new theme. On review there were not actually too many of them; if you ignore the announcements, the website is not very large.

Many of the pages could be just forwarded to more recent equivalents such as getting the history page (last update in 2003) to point to timeline.kde.org or the presentation slides page (last update for KDE 4 release) to point to a more up to date wiki page.

Others are worth reviving, such as the KDE screenshots page, press contacts, and the support page. The contents could still do with some pondering on what is useful, but while they exist we shouldn’t pretend they don’t, so I updated those and added back links to them.

While many of these pages are hard to find or not linked at all from www.kde.org they are still the top hits in Google when you search for “KDE presentation” or “kde history” or “kde support” so it is worth not looking like we are a dead project.

There were also obvious bugs that needed fixing: for example, the cookie-opt-out banner didn’t let you opt out, the font didn’t get loaded, and the favicon was inconsistent.

All of these are easy enough fixes, but the technical barrier is too high to get them done easily (you need special permission to have access to www.kde.org, reasonably enough) and the social barrier is far too high (you will get complaints when changing something high profile like this; far easier to just let it rot). I’m not sure how to solve this, but KDE should work out a way to allow project maintenance tasks like this to be more open.

Anyway, yay: www.kde.org now has the new theme everywhere (except old announcements) and its pages have up-to-date content.

There is a TODO item to track website improvements if you’re interested in helping, although it missed the main one, which is the stalled port to WordPress, again a place where it just needs someone to plunge in and do the work. It’s satisfying because it’s a high-profile improvement, but alas it highlights some failings in a mature community project like ours.


Colin Watson: Deploying Swift

Tue, 04/12/2018 - 2:37am

Sometimes I want to deploy Swift, the OpenStack object storage system.

Well, no, that’s not true. I basically never actually want to deploy Swift as such. What I generally want to do is to debug some bit of production service deployment machinery that relies on Swift for getting build artifacts into the right place, or maybe the parts of the Launchpad librarian (our blob storage service) that use Swift. I could find an existing private or public cloud that offers the right API and test with that, but sometimes I need to test with particular versions, and in any case I have a terribly slow internet connection and shuffling large build artifacts back and forward over the relevant bit of wet string makes it painfully slow to test things.

For a while I’ve had an Ubuntu 12.04 VM lying around with an Icehouse-based Swift deployment that I put together by hand. It works, but I didn’t keep good notes and have no real idea how to reproduce it, not that I really want to keep limping along with manually-constructed VMs for this kind of thing anyway; and I don’t want to be dependent on obsolete releases forever. For the sorts of things I’m doing I need to make sure that authentication works broadly the same way as it does in a real production deployment, so I want to have Keystone too. At the same time, I definitely don’t want to do anything close to a full OpenStack deployment of my own: it’s much too big a sledgehammer for this particular nut, and I don’t really have the hardware for it.

Here’s my solution to this, which is compact enough that I can run it on my laptop, and while it isn’t completely automatic it’s close enough that I can spin it up for a test and discard it when I’m finished (so I haven’t worried very much about producing something that runs efficiently). It relies on Juju and LXD. I’ve only tested it on Ubuntu 18.04, using Queens; for anything else you’re on your own. In general, I probably can’t help you if you run into trouble with the directions here: this is provided “as is”, without warranty of any kind, and all that kind of thing.

First, install Juju and LXD if necessary, following the instructions provided by those projects, and also install the python-openstackclient package as you’ll need it later. You’ll want to set Juju up to use LXD, and you should probably make sure that the shells you’re working in don’t have http_proxy set as it’s quite likely to confuse things unless you’ve arranged for your proxy to be able to cope with your local LXD containers. Then add a model:

juju add-model swift

At this point there’s a bit of complexity that you normally don’t have to worry about with Juju. The swift-storage charm wants to mount something to use for storage, which with the LXD provider in practice ends up being some kind of loopback mount. Unfortunately, being able to perform loopback mounts exposes too much kernel attack surface, so LXD doesn’t allow unprivileged containers to do it. (Ideally the swift-storage charm would just let you use directory storage instead.) To make the containers we’re about to create privileged enough for this to work, run:

lxc profile set juju-swift security.privileged true
lxc profile device add juju-swift loop-control unix-char \
    major=10 minor=237 path=/dev/loop-control
for i in $(seq 0 255); do
    lxc profile device add juju-swift loop$i unix-block \
        major=7 minor=$i path=/dev/loop$i
done

Now we can start deploying things! Save this to a file, e.g. swift.bundle:

series: bionic
description: "Swift in a box"
applications:
  mysql:
    charm: "cs:mysql-62"
    channel: candidate
    num_units: 1
    options:
      dataset-size: 512M
  keystone:
    charm: "cs:keystone"
    num_units: 1
  swift-storage:
    charm: "cs:swift-storage"
    num_units: 1
    options:
      block-device: "/etc/swift/storage.img|5G"
  swift-proxy:
    charm: "cs:swift-proxy"
    num_units: 1
    options:
      zone-assignment: auto
      replicas: 1
relations:
  - ["keystone:shared-db", "mysql:shared-db"]
  - ["swift-proxy:swift-storage", "swift-storage:swift-storage"]
  - ["swift-proxy:identity-service", "keystone:identity-service"]

And run:

juju deploy swift.bundle

This will take a while. You can run juju status to see how it’s going in general terms, or juju debug-log for detailed logs from the individual containers as they’re putting themselves together. When it’s all done, it should look something like this:

Model  Controller  Cloud/Region  Version  SLA
swift  lxd         localhost     2.3.1    unsupported

App            Version  Status  Scale  Charm          Store       Rev  OS      Notes
keystone       13.0.1   active      1  keystone       jujucharms  290  ubuntu
mysql          5.7.24   active      1  mysql          jujucharms   62  ubuntu
swift-proxy    2.17.0   active      1  swift-proxy    jujucharms   75  ubuntu
swift-storage  2.17.0   active      1  swift-storage  jujucharms  250  ubuntu

Unit              Workload  Agent  Machine  Public address  Ports     Message
keystone/0*       active    idle   0        10.36.63.133    5000/tcp  Unit is ready
mysql/0*          active    idle   1        10.36.63.44     3306/tcp  Ready
swift-proxy/0*    active    idle   2        10.36.63.75     8080/tcp  Unit is ready
swift-storage/0*  active    idle   3        10.36.63.115              Unit is ready

Machine  State    DNS           Inst id        Series  AZ  Message
0        started  10.36.63.133  juju-d3e703-0  bionic      Running
1        started  10.36.63.44   juju-d3e703-1  bionic      Running
2        started  10.36.63.75   juju-d3e703-2  bionic      Running
3        started  10.36.63.115  juju-d3e703-3  bionic      Running

At this point you have what should be a working installation, but with only administrative privileges set up. Normally you want to create at least one normal user. To do this, start by creating a configuration file granting administrator privileges (this one comes verbatim from the openstack-base bundle):

_OS_PARAMS=$(env | awk 'BEGIN {FS="="} /^OS_/ {print $1;}' | paste -sd ' ')
for param in $_OS_PARAMS; do
    if [ "$param" = "OS_AUTH_PROTOCOL" ]; then continue; fi
    if [ "$param" = "OS_CACERT" ]; then continue; fi
    unset $param
done
unset _OS_PARAMS

_keystone_unit=$(juju status keystone --format yaml | \
    awk '/units:$/ {getline; gsub(/:$/, ""); print $1}')
_keystone_ip=$(juju run --unit ${_keystone_unit} 'unit-get private-address')
_password=$(juju run --unit ${_keystone_unit} 'leader-get admin_passwd')

export OS_AUTH_URL=${OS_AUTH_PROTOCOL:-http}://${_keystone_ip}:5000/v3
export OS_USERNAME=admin
export OS_PASSWORD=${_password}
export OS_USER_DOMAIN_NAME=admin_domain
export OS_PROJECT_DOMAIN_NAME=admin_domain
export OS_PROJECT_NAME=admin
export OS_REGION_NAME=RegionOne
export OS_IDENTITY_API_VERSION=3
# Swift needs this:
export OS_AUTH_VERSION=3
# Gnocchi needs this
export OS_AUTH_TYPE=password

Source this into a shell: for instance, if you saved this to ~/.swiftrc.juju-admin, then run:

. ~/.swiftrc.juju-admin

You should now be able to run openstack endpoint list and see a table for the various services exposed by your deployment. Then you can create a dummy project and a user with enough privileges to use Swift:

USERNAME=your-username
PASSWORD=your-password
openstack domain create SwiftDomain
openstack project create --domain SwiftDomain --description Swift \
    SwiftProject
openstack user create --domain SwiftDomain --project-domain SwiftDomain \
    --project SwiftProject --password "$PASSWORD" "$USERNAME"
openstack role add --project SwiftProject --user-domain SwiftDomain \
    --user "$USERNAME" Member

(This is intended for testing rather than for doing anything particularly sensitive. If you cared about keeping the password secret then you’d use the --password-prompt option to openstack user create instead of supplying the password on the command line.)

Now create a configuration file granting privileges for the user you just created. I felt like automating this to at least some degree:

touch ~/.swiftrc.juju
chmod 600 ~/.swiftrc.juju
sed '/^_password=/d;
     s/\( OS_PROJECT_DOMAIN_NAME=\).*/\1SwiftDomain/;
     s/\( OS_PROJECT_NAME=\).*/\1SwiftProject/;
     s/\( OS_USER_DOMAIN_NAME=\).*/\1SwiftDomain/;
     s/\( OS_USERNAME=\).*/\1'"$USERNAME"'/;
     s/\( OS_PASSWORD=\).*/\1'"$PASSWORD"'/' \
    <~/.swiftrc.juju-admin >~/.swiftrc.juju

Source this into a shell. For example:

. ~/.swiftrc.juju

You should now find that swift list works. Success! Now you can swift upload files, or just start testing whatever it was that you were actually trying to test in the first place.
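
For example, a quick smoke test (the container and file names are arbitrary):

swift post test-container               # create a container
swift upload test-container ./somefile  # upload a file into it
swift list test-container               # the file should be listed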

This is not a setup I expect to leave running for a long time, so to tear it down again:

juju destroy-model swift

This will probably get stuck trying to remove the swift-storage unit, since nothing deals with detaching the loop device. If that happens, find the relevant device in losetup -a from another window and use losetup -d to detach it; juju destroy-model should then be able to proceed.
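
In concrete terms, something like this (the loop device number will vary):

losetup -a | grep storage.img  # find which /dev/loopN backs the Swift storage
sudo losetup -d /dev/loop3     # detach it, substituting the device found above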

Credit to the Juju and LXD teams and to the maintainers of the various charms used here, as well as of course to the OpenStack folks: their work made it very much easier to put this together.

Eric Hammond: Guest Post: Notable AWS re:Invent Sessions, by Jennine Townsend

Mon, 03/12/2018 - 1:00am

A guest post authored by Jennine Townsend, expert sysadmin and AWS aficionado

There were so many sessions at re:Invent! Now that it’s over, I want to watch some sessions on video, but which ones?

Of course I’ll pick out those that are specific to my interests, but I also want to know the sessions that had good buzz, so I made a list that’s kind of mashed together from sessions that I heard good things about on Twitter, with those that had lots of repeats and overflow sessions, figuring those must have been popular.

But I confess I left out some whole categories! There aren’t sessions for Alexa or DeepRacer (not that I’m not interested, they’re just not part of my re:Invent followup), and I don’t administer any Windows systems so I leave out most of those sessions.

Some sessions have YouTube links, some don’t (yet) have and may never have YouTube videos, since lots of (types of) sessions aren’t recorded. (But even there, if I search the topic and speakers, I bet I can often find an earlier talk.)

There’s not much of a ranking: keynotes at the top, sessions I heard good things about in the middle, then sessions that had lots of repeats. It’s only mildly specific to my interests, so I thought other people might find it helpful. It’s also not really finished, but I wanted to get started watching sessions this weekend!

Keynotes

Peter DeSantis Monday Night Live

Terry Wise Global Partner Keynote

Andy Jassy keynote

Werner Vogels keynote

Popular: Buzz during AWS re:Invent

DEV322 What’s New with the AWS CLI (Kyle Knapp, James Saryerwinnie)

SRV409 A Serverless Journey: AWS Lambda Under the Hood

CON362 Container Power Hour with Jess, Clare, and Abby

SRV325 Using DevOps, Microservices, and Serverless to Accelerate Innovation (David Richardson, Ken Exner, Deepak Singh)

SRV375 Lambda Layers and Runtime API (Danilo Poccia) - Chalk Talk

SRV338 Configuration Management and Service Discovery (mentions CloudMap) (Alex Casalboni, Ben Kehoe) - Chalk Talk

CON367 Introducing App Mesh (Kiran Meduri, Shubha Rao, James Straub)

SRV355 Best Practices for CI/CD with AWS Lambda and Amazon API Gateway (Chris Munns) (focuses on SAM, CodeStar, I believe) - Chalk Talk

DEV327 Advanced Infrastructure as Code Programming on AWS

SRV322 From Monolith to Modern Apps: Best Practices

Popular: Repeats During AWS re:Invent

CON301 Mastering Kubernetes on AWS

ARC202 Running Lean Architectures: How to Optimize for Cost Efficiency

DEV319 Continuous Integration Best Practices

AIM404 Build, Train, and Deploy ML Models Quickly and Easily with Amazon SageMaker

STG209 Amazon S3 Storage Management (Scott Hewitt) - Chalk Talk

ENT205 Executing a Large-Scale Migration to AWS (Joe Chung, Jonathan Allen, Mike Wittig)

DEV317 Advanced Continuous Delivery Best Practices

CON308 Building Microservices with Containers

ANT323 Build Your Own Log Analytics Solutions on AWS

ANT201 Big Data Analytics Architectural Patterns and Best Practices

DEV403 Automate Common Maintenance & Deployment Tasks Using AWS Systems Manager - Builders Session

DAT356 Which Database Should I Use? - Builders Session

DEV309 CI/CD for Serverless and Containerized Applications

ARC209 Architecture Patterns for Multi-Region Active-Active Applications

AIM401 Deep Learning Applications Using TensorFlow

SRV305 Inside AWS: Technology Choices for Modern Applications

SEC401 Mastering Identity at Every Layer of the Cake

SEC371 Incident Response in AWS - Builders Session

SEC322 Using AWS Lambda as a Security Team

NET404 Elastic Load Balancing: Deep Dive and Best Practices

DEV321 What’s New with AWS CloudFormation

DAT205 Databases on AWS: The Right Tool for the Right Job

Original article and comments: https://alestic.com/2018/12/aws-reinvent-jennine/

Balint Reczey: Migrating from Bazaar to Git on Launchpad just got easier!

Sat, 24/11/2018 - 11:48pm

Debian recently switched from Alioth to Salsa, offering only Git hosting from now on. That simplifies the work of existing contributors and also helps newcomers, who are most likely already familiar with Git if they know at least one version control system. (Thanks to everyone involved in the transition!)

On Ubuntu’s side, most Ubuntu-specific packages and a big part of Ubuntu’s infrastructure used to be maintained in Bazaar repositories. Since then, Git has become the most widely used version control system, but the Bazaar repositories did not fully disappear.

There are still hundreds of packages maintained in Bazaar in Ubuntu (packaging repositories in Bazaar by team) and Debian (lintian report) and maintaining them in Git instead could be easier in the long term.

Launchpad already supports Git and there are guidelines for converting Bazaar repositories to Git (1,2),  but if you would like to make the switch I suggest taking a look at bzr-git-mass-convert based on bzr fast-export (verifying the result with git-remote-bzr). It is a simple tool for merging multiple Bazaar branches to a single git repository set up for pushing it back to Launchpad.
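
As a spot check after a conversion, one possibility (assuming git-remote-bzr is installed; the branch and directory names here are hypothetical) is to clone the original Bazaar branch through git-remote-bzr and compare histories:

git clone bzr::lp:~my-team/my-project/trunk check-from-bzr
git -C my-converted-repo log --oneline | head
git -C check-from-bzr log --oneline | head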

We (at the Foundations Team) use it for migrating our repositories and there is also a wiki page for tracking the migration schedule of popular repositories.

LoCo Ubuntu PT: Ho Ho Ho! Father Christmas came early

Fri, 23/11/2018 - 1:35pm

Hello!

We bring great news for the Community! We have just ordered hoodies and t-shirts featuring Ubuntu and the Ubuntu Portugal Community, and we are selling them to raise funds for the group's activities. But that's not all! We also have badges for sale.

Here is a sample of the material we have:

Perhaps a Christmas present? Or simply a highly stylish accessory?

The t-shirts cost €10, the hoodies €25, and the badges €1.

To order, fill in the form you will find at this link: https://tos.typeform.com/to/FU6int

We count on your contribution!

Cybernautic greetings

Sebastian Kügler: Different indentation styles per filetype

Fri, 23/11/2018 - 9:30am

For my hacking, I love to use the KDevelop IDE. Once in a while, I find myself working on a project that has different indentation styles depending on the filetype — in this case, C++ files, Makefiles, etc. use tabs, JavaScript and HTML files use 2 spaces. I haven’t found this to be straight-forward from KDevelop’s configuration dialog (though I just learnt that it does seem to be possible). I did find myself having to fix indentation before committing (annoying!) and even having to fix up the indentation of code committed by myself (embarrassing!). As that’s both stupid and repetitive work, it’s something I wanted to avoid. Here’s how it’s done using EditorConfig files:

  1. put a file called .editorconfig in the project’s root directory
  2. specify a default style and then a specialization for certain filetypes
  3. restart KDevelop

Here’s what my .editorconfig file looks like:

# EditorConfig is awesome: https://EditorConfig.org

# for the top-most EditorConfig file, set...
# root = true

# In general, tabs shown 2 spaces wide
[*]
indent_style = tab
indent_size = 2

# Matches multiple files with brace expansion notation
[*.{js,html}]
indent_style = space
indent_size = 2

This does the job nicely and has the following advantages:

  • It doesn’t affect my other projects, so I don’t have to run around in the configuration to switch when task-switching. (EditorConfig files cascade, so they will be looked up in the filesystem tree for fallback styles.)
  • It works across different editors supporting the editorconfig standards, so not just KWrite, Kate, KDevelop, but also for entirely different products.
  • It allows me to spend less time on formalities and more time on actual coding (or diving).

(Thanks to Reddit.)

Ubuntu Podcast from the UK LoCo: S11E37 – Thirty Seven: Essays On Life, Wisdom, And Masculinity

Thu, 22/11/2018 - 4:00pm

This week we’ve been building a new home server using SnapRAID and upgrading a Thinkpad to Ubuntu 16.04. Samsung debut the beta of Linux on DeX, Wireframe Magazine is out, the Raspberry Pi 3 Model A+ is available, Ubuntu 18.04 will be supported for 10 years and we round up community news.

It’s Season 11 Episode 37 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Colin King: High-level tracing with bpftrace

Thu, 22/11/2018 - 1:37pm
Bpftrace is a new high-level tracing language for Linux using the extended Berkeley packet filter (eBPF).  It is a very powerful and flexible tracing front-end that enables systems to be analyzed much like DTrace.

The bpftrace tool is now installable as a snap. From the command line one can install it and enable it to use system tracing as follows:

sudo snap install bpftrace
sudo snap connect bpftrace:system-trace

To illustrate the power of bpftrace, here are some simple one-liners:

# trace openat() system calls
sudo bpftrace -e 'tracepoint:syscalls:sys_enter_openat { printf("%d %s %s\n", pid, comm, str(args->filename)); }'
Attaching 1 probe...
1080 irqbalance /proc/interrupts
1080 irqbalance /proc/stat
2255 dmesg /etc/ld.so.cache
2255 dmesg /lib/x86_64-linux-gnu/libtinfo.so.5
2255 dmesg /lib/x86_64-linux-gnu/librt.so.1
2255 dmesg /lib/x86_64-linux-gnu/libc.so.6
2255 dmesg /lib/x86_64-linux-gnu/libpthread.so.0
2255 dmesg /usr/lib/locale/locale-archive
2255 dmesg /lib/terminfo/l/linux
2255 dmesg /home/king/.config/terminal-colors.d
2255 dmesg /etc/terminal-colors.d
2255 dmesg /dev/kmsg
2255 dmesg /usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache

# count system calls using tracepoints:
sudo bpftrace -e 'tracepoint:syscalls:sys_enter_* { @[probe] = count(); }'
@[tracepoint:syscalls:sys_enter_getsockname]: 1
@[tracepoint:syscalls:sys_enter_kill]: 1
@[tracepoint:syscalls:sys_enter_prctl]: 1
@[tracepoint:syscalls:sys_enter_epoll_wait]: 1
@[tracepoint:syscalls:sys_enter_signalfd4]: 2
@[tracepoint:syscalls:sys_enter_utimensat]: 2
@[tracepoint:syscalls:sys_enter_set_robust_list]: 2
@[tracepoint:syscalls:sys_enter_poll]: 2
@[tracepoint:syscalls:sys_enter_socket]: 3
@[tracepoint:syscalls:sys_enter_getrandom]: 3
@[tracepoint:syscalls:sys_enter_setsockopt]: 3
...

Note that it is recommended to use bpftrace with Linux 4.9 or higher.

The bpftrace github project page has an excellent README guide with some worked examples and is a very good place to start. There is also a very useful reference guide and a one-liner tutorial.

If you have any useful bpftrace one-liners, it would be great to share them. This is an amazingly powerful tool, and it would be interesting to see how it will be used.
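
To start things off, here is one more, adapted from the one-liner tutorial, which counts page faults by process:

sudo bpftrace -e 'software:faults:1 { @[comm] = count(); }'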

Costales: Ubuntu Community Appreciation Day: Thanks Rudy (~cm-t)!

Tue, 20/11/2018 - 6:53pm
Today is Ubuntu Community Appreciation Day, on which we share our thanks to people in our community for making Ubuntu great.

This year, I want to thank Rudy (~cm-t)! Why? Because IMHO he is an incredible activist: helpful, funny, always with a smile. He puts passion into everything related to Ubuntu. A perfect example for everyone!



Thanks Rudy |o/

Rhonda D'Vine: TDOR 2018

Tue, 20/11/2018 - 11:11am

Today is Transgender Day Of Remembrance. Today is a black day for trans people around the globe. We mourn the trans folks that aren't amongst us anymore due to hate crime violence against them. Reach out to the trans folks that are part of your life, that you know, and ask them if they are in need of emotional support on this day. There are more trans folks getting killed for being trans than there are days in a year, foremost black trans women of color. If you feel strong enough, you can read about it in this article.

Also, we are facing huge threats to our mere existence all over the world these days. If you follow any social media, check the hashtag #WontBeErased. The US government follows a path of Erasing Gender left and right, which also affects intersex people likewise and manifests the gender binary and gender separation even further, also hurting cis people. Now gender identity is being erased in Ontario, Canada, too. And Brazil, where next year's DebConf will be held, and which already has the highest number of trans murders in the world, has elected Bolsonaro, a right-wing extremist who is outspokenly gay-antagonist and misogynist. And then there is Tanzania, which has started a hunt for LGBTIQ people. And those reports are only the tip of the iceberg. I have definitely missed some other countries' shit, like Ukraine (where next year's European Lesbian* Conference is taking place), or Austria's government being right-wing and cutting the social system left and right, so we are in need of Wieder Donnerstag (a weekly Thursday demonstration) again.

I'm currently drafting the announcement mail to send out about the creation of the Debian Diversity Team, which we finally formed. It is more important than ever to make it clear and visible that discrimination has no place within Debian, and that we in fact are a diverse community. I can understand the wish that the announcement should focus on the visibility and welcoming aspects of the team, and especially not make it look like a reaction to those world events. Which it isn't; this has been in the works for two years now. And I totally agree with that. I just have a hard time not adding a solidarity message alongside, mentioning that we are aware of the crap that's going on in the world, and that we see your pain and share it. So yes, the team has finally formed, but the announcement mail through debian-devel-announce is still pending. And we are in contact with the local team for next year's DebConf, following the news about Brazil, to figure out how to make it as safe as possible for attendees, so that fear shouldn't be the guiding factor in deciding whether to attend.

Stay strong, sending you hugs if wanted.


Stephen Kelly: Composing AST Matchers in clang-tidy

Tue, 20/11/2018 - 10:16am

When creating clang-tidy checks, it is common to extract parts of AST Matcher expressions to local variables. I expanded on this in a previous blog post.

auto nonAwesomeFunction = functionDecl(
  unless(matchesName("^::awesome_"))
);

Finder->addMatcher(
  nonAwesomeFunction.bind("addAwesomePrefix")
  , this);

Finder->addMatcher(
  callExpr(callee(nonAwesomeFunction)).bind("addAwesomePrefix")
  , this);

Use of such variables establishes an emergent extension API for re-use in the checks, or in multiple checks you create which share matcher requirements.

When attempting to match items inside a ForStmt for example, we might encounter the difference in the AST depending on whether braces are used or not.

#include <vector>

void foo()
{
  std::vector<int> vec;
  int c = 0;
  for (int i = 0; i < 100; ++i)
    vec.push_back(i);

  for (int i = 0; i < 100; ++i) {
    vec.push_back(i);
  }
}

In this case, we wish to match the push_back method inside a ForStmt body. The body item might be a CompoundStmt or the CallExpr we wish to match. We can match both cases with the anyOf matcher.

auto pushbackcall = callExpr(callee(functionDecl(hasName("push_back"))));

Finder->addMatcher(
  forStmt(
    hasBody(anyOf(
      pushbackcall.bind("port_call"),
      compoundStmt(has(pushbackcall.bind("port_call")))
    ))
  )
  , this);

Having to list the pushbackcall twice in the matcher is suboptimal. We can do better by defining a new API function which we can use in AST Matcher expressions:

auto hasIgnoringBraces = [](auto const& Matcher)
{
  return anyOf(
    Matcher,
    compoundStmt(has(Matcher))
  );
};

With this in hand, we can simplify the original expression:

auto pushbackcall = callExpr(callee(functionDecl(hasName("push_back"))));

Finder->addMatcher(
  forStmt(
    hasBody(hasIgnoringBraces(
      pushbackcall.bind("port_call")
    ))
  )
  , this);

This pattern of defining AST Matcher API using a lambda function finds use in other contexts. For example, sometimes we want to find and bind to an AST node if it is present, ignoring its absence if it is not present.

For example, consider wishing to match struct declarations and match a copy constructor if present:

struct A
{
};

struct B
{
  B(B const&);
};

We can match the AST with the anyOf() and anything() matchers.

Finder->addMatcher(
  cxxRecordDecl(anyOf(
    hasMethod(cxxConstructorDecl(isCopyConstructor()).bind("port_method")),
    anything()
  )).bind("port_record")
  , this);

This can be generalized into an optional() matcher:

auto optional = [](auto const& Matcher)
{
  return anyOf(
    Matcher,
    anything()
  );
};

The anything() matcher matches, well, anything. It can also match nothing because of the fact that a matcher written inside another matcher matches itself.

That is, matchers such as

functionDecl(decl())
functionDecl(namedDecl())
functionDecl(functionDecl())

match ‘trivially’.

If a functionDecl() in fact binds to a method, then the derived type can be used in the matcher:

functionDecl(cxxMethodDecl())

The optional matcher can be used as expected:

Finder->addMatcher(
  cxxRecordDecl(
    optional(
      hasMethod(cxxConstructorDecl(isCopyConstructor()).bind("port_method"))
    )
  ).bind("port_record")
  , this);

Yet another problem writers of clang-tidy checks will find is that AST nodes CallExpr and CXXConstructExpr do not share a common base representing the ability to take expressions as arguments. This means that separate matchers are required for calls and constructions.

Again, we can solve this problem generically by creating a composition function:

auto callOrConstruct = [](auto const& Matcher)
{
  return expr(anyOf(
    callExpr(Matcher),
    cxxConstructExpr(Matcher)
  ));
};

which reads as ‘an Expression which is any of a call expression or a construct expression’.

It can be used in place of either in matcher expressions:

Finder->addMatcher(
  callOrConstruct(
    hasArgument(0, integerLiteral().bind("port_literal"))
  )
  , this);

Creating composition functions like this is a very convenient way to simplify and create maintainable matchers in your clang-tidy checks. A recently published RFC on the topic of making clang-tidy checks easier to write proposes some other conveniences which can be implemented in this manner.

Stephen Michael Kellat: Hitting a Break Point

Tue, 20/11/2018 - 3:55am

Well, I had a weekend off sick. The time has come to put things in motion. Health concerns pushed up my timetable for what was discussed prior.

I am seeking support to be able to undertake freelance work. The first project would be to finally close out the Outernet/Othernet research work to get it submitted. Beyond that there would be technical writing as well as making creative works. Some of that would involve creating “digital library” collections but also helping others create print works instead.

Who could I help/serve? Unfortunately we have plenty of small, underfunded groups in my town. The American Red Cross no longer maintains a local office and the Salvation Army has no staff presence locally. Our county-owned airport verges on financial collapse and multiple units of government have difficulty staying solvent. There are plenty of needs to cover as long as someone has independent financial backing.

Besides, I owe some edits of Xubuntu documentation too.

It isn’t like “going on disability”, as it is called in American parlance, is immediate, let alone simple. One of two sets of paperwork has to eventually go into a cave in Pennsylvania for centralized processing. I wish I were kidding, but that cave is located near Slippery Rock. Both processes are backlogged only 12-18 months at last report. For making a change in the short term, that doesn’t even exist as an option on the table.

That’s why I’m asking for support. I’ve grown tired of spending multiple days at work depressed. Showing physical symptoms of depression in the workplace isn’t good either, especially when it results in me missing work. When you can’t help people who are in the throes of despair, frequently by their own fault, how much more futile can it get?

I set the goal on Liberapay lower than what I get now. While it would be a pay cut, I’d still be able to pay the bills. It is time to move to doing something constructive for society instead of merely fueling the machinery of government. For as often as I get asked how I sleep at night, I want to move past the answer being “terribly”.

The relevant Liberapay page is here. Folks like Pepper & Carrot use it. If the goal can be initially met by December 7th, I would be ready for the potential budget snafu at work like the three we already had at the start of the year.

I just look forward to some day being able to talk about doing good things instead of having to be cryptic due to security restrictions.

The Fridge: Ubuntu Weekly Newsletter Issue 554

Mon, 19/11/2018 - 11:18pm

Welcome to the Ubuntu Weekly Newsletter, Issue 554 for the week of November 11 – 17, 2018. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License
