Without a doubt, Kubernetes is the most important thing that has happened in enterprise computing in the past two decades, rivalling the transformation that swept over the datacenter with server virtualization, first in the early 2000s on RISC/Unix platforms and then during the Great Recession when commercial-grade server virtualization became available on X86 platforms at precisely the moment it was most needed.
All things being equal, the industry would have probably preferred to go straight to containers, which are lighter weight than server virtualization and which are designed explicitly for service-oriented architectures – now called microservices – but it is the same idea of chopping code into smaller chunks so it can be maintained, extended, or replaced piecemeal.
This is precisely why Google spent so much time in the middle 2000s creating what are now seen as relatively rudimentary Linux containers and the Borg cluster and container controllers. Seven years ago, it was unclear what the future platform might look like: OpenStack, which came out of NASA and Rackspace Hosting, was a contender, and so was Mesos, which came out of Twitter. But Kubernetes, inspired by Borg and adopting a universal container format derived from Docker, has won.
Kubernetes clusters are typically used by several teams in an organization. In other cases, Kubernetes may be used to deliver applications to end users, which requires segmentation and isolation of resources across users from different organizations. In both cases, securely sharing Kubernetes control plane and worker node resources maximizes productivity and saves costs.
The Kubernetes Multi-Tenancy Working Group is chartered with defining tenancy models for Kubernetes and making it easier to operationalize tenancy-related use cases. This blog post, from the working group members, describes three common tenancy models and introduces related working group projects.
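As a hypothetical illustration of the most common building block for soft multi-tenancy (not taken from the post itself), each tenant team gets its own namespace with a quota bounding its share of cluster resources:

```yaml
# Illustrative sketch: one namespace per tenant, with a ResourceQuota
# bounding that tenant's resource consumption. Names are invented.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "20"
```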
We will also be presenting on this content and discussing different use cases at our Kubecon EU 2021 panel session, Multi-tenancy vs. Multi-cluster: When Should you Use What?.
I decided to purchase a System76 Thelio Major desktop for use in the studio, in order to cut through my video renders and other workloads faster. After spending some time with this awesome desktop, I give it a full review in this video.
This week we have been deploying bitwarden_rs and getting the Stream Deck to work well on Ubuntu. We discuss how much we really use desktop environments, bring you some GUI love and go over all your wonderful feedback.
It’s Season 14 Episode 06 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.
New to Emacs and want to learn how to configure it? Install Emacs and follow along with me on the stream. Though I have been using a preconfigured Emacs distribution (Doom Emacs), in this livestream I will start with a fresh installation of GNU Emacs and write a config to suit my needs.
In our previous post, we announced that Android now supports the Rust programming language for developing the OS itself. Related to this, we are also participating in the effort to evaluate the use of Rust as a supported language for developing the Linux kernel. In this post, we discuss some technical aspects of this work using a few simple examples.
C has been the language of choice for writing kernels for almost half a century because it offers the level of control and predictable performance required by such a critical component. The density of memory safety bugs in the Linux kernel is generally quite low due to high code quality, high standards of code review, and carefully implemented safeguards. However, memory safety bugs do still regularly occur. On Android, vulnerabilities in the kernel are generally considered high-severity because they can result in a security model bypass due to the privileged mode that the kernel runs in.
The Google security blog has a detailed article on what a device driver written in Rust looks like. "That is, we use Rust's ownership discipline when interacting with C code by handing the C portion ownership of a Rust object, allowing it to call functions implemented in Rust, then eventually giving ownership back."
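As a loose sketch of that pattern (a plain userspace illustration, not the kernel bindings the article describes), ownership of a Rust object can be handed to C as an opaque pointer, used through Rust-implemented functions, and eventually returned to Rust to be freed:

```rust
// Minimal sketch of the ownership-across-FFI pattern. All names are
// invented for illustration.

pub struct Driver {
    opens: u64,
}

/// Called by C to create the object; C now owns the returned pointer.
#[no_mangle]
pub extern "C" fn driver_new() -> *mut Driver {
    Box::into_raw(Box::new(Driver { opens: 0 }))
}

/// Called by C while it owns the object.
#[no_mangle]
pub extern "C" fn driver_open(ptr: *mut Driver) {
    // SAFETY: C promises `ptr` came from driver_new() and is not aliased.
    let driver = unsafe { &mut *ptr };
    driver.opens += 1;
}

/// Called by C to hand ownership back; the Box drop frees the object.
#[no_mangle]
pub extern "C" fn driver_free(ptr: *mut Driver) {
    // SAFETY: ownership is transferred back to Rust here.
    unsafe { drop(Box::from_raw(ptr)) };
}
```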
After greenlighting plans to use the Rust programming language in Android’s low-level system-code, Google is now throwing its weight behind the move to allow Rust as a supported language for developing the Linux kernel.
Google looks at Rust as a memory-safe language that it hopes will help curb the growing number of memory-based security vulnerabilities in the mobile operating system. It believes the Linux kernel should use Rust for the same reasons.
“We feel that Rust is now ready to join C as a practical language for implementing the kernel. It can help us reduce the number of potential bugs and security vulnerabilities in privileged code while playing nicely with the core kernel and preserving its performance characteristics,” wrote Wedson Almeida Filho from Google's Android Team.
OpenZFS 2.1 is nearing release as the next feature update to this open-source ZFS file-system implementation currently supporting Linux and FreeBSD systems.
OpenZFS 2.1 is an exciting evolutionary update over last November's OpenZFS 2.0 release. The headline feature of OpenZFS 2.1 is distributed spare RAID ("dRAID") support. OpenZFS 2.1 also introduces a new "compatibility" property for zpool feature sets, a new zpool_influxdb command for feeding zpool statistics into InfluxDB time-series databases, and various other alterations.
One of the key tasks assigned to the memory-management subsystem is to optimize the system's use of the available memory; that means pushing out pages containing unused data so that they can be put to better use elsewhere. Predicting which pages will be accessed in the near future is a tricky task, and the kernel has evolved a number of mechanisms designed to improve its chances of guessing right. But the kernel not only often gets it wrong, it also can expend a lot of CPU time to make the incorrect choice. The multi-generational LRU patch set posted by Yu Zhao is an attempt to improve that situation. In general, the kernel cannot know which pages will be accessed in the near future, so it must rely on the next-best indicator: the set of pages that have been used recently. Chances are that pages that have been accessed in the recent past will be useful again in the future, but there are exceptions. Consider, for example, an application that is reading sequentially through a file. Each page of the file will be put into the page cache as it is read, but the application will never need it again; in this case, recent access is not a sign that the page will be used again soon.
The kernel tracks pages using a pair of least-recently-used (LRU) lists. Pages that have been recently accessed are kept on the "active" list, with just-accessed pages put at the head of the list. Pages are taken off the tail of the list if they have not been accessed recently and placed at the head of the "inactive" list. That list is a sort of purgatory; if some process accesses a page on the inactive list, it will be promoted back to the active list. Some pages, like those from the sequentially read file described above, start life on the inactive list, meaning they will be reclaimed relatively quickly if there is no further need for them.
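A toy model makes the mechanics concrete (illustrative C only, not kernel code; all names are invented):

```c
/* Toy model of the two-list scheme: pages start on the inactive list,
 * are promoted to the active list on access, demoted when they age,
 * and reclaimed from the inactive list. */
#include <stdio.h>

enum list_id { INACTIVE, ACTIVE };

struct page {
    int id;
    enum list_id list;
    int referenced;              /* set when the page is accessed */
};

static void access_page(struct page *p)
{
    if (p->list == INACTIVE)     /* promotion out of "purgatory" */
        printf("page %d promoted to active\n", p->id);
    p->list = ACTIVE;
    p->referenced = 1;
}

/* One aging pass: referenced pages get another round, unreferenced
 * active pages are demoted, unreferenced inactive pages are reclaimed
 * (a real kernel removes them from the list; this toy just reports). */
static void reclaim_pass(struct page *pages, int n)
{
    for (int i = 0; i < n; i++) {
        struct page *p = &pages[i];
        if (p->referenced)
            p->referenced = 0;
        else if (p->list == ACTIVE) {
            p->list = INACTIVE;
            printf("page %d demoted to inactive\n", p->id);
        } else
            printf("page %d reclaimed\n", p->id);
    }
}

int main(void)
{
    struct page pages[3] = { {0, INACTIVE, 0}, {1, INACTIVE, 0},
                             {2, INACTIVE, 0} };

    access_page(&pages[1]);   /* page 1 is hot */
    reclaim_pass(pages, 3);   /* cold pages 0 and 2 go; page 1 survives */
    reclaim_pass(pages, 3);   /* page 1, no longer referenced, is demoted */
    return 0;
}
```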
There are more details, of course. It's worth noting that there are actually two pairs of lists, one for anonymous pages and one for file-backed pages. If memory control groups are in use, there is a whole set of LRU lists for each active group.
Zhao's patch set identifies a number of problems with the current state of affairs. The active/inactive sorting is too coarse for accurate decision making, and pages often end up on the wrong lists anyway. The use of independent lists in control groups makes it hard for the kernel to compare the relative age of pages across groups. The kernel has a longstanding bias toward evicting file-backed pages for a number of reasons, which can cause useful file-backed pages to be tossed while idle anonymous pages remain in memory. This problem has gotten worse in cloud-computing environments, where clients have relatively little local storage and, thus, relatively few file-backed pages in the first place. Meanwhile, the scanning of anonymous pages is expensive, partly because it uses a complex reverse-mapping mechanism that does not perform well when a lot of scanning must be done.
The process of hardening the kernel can benefit in a number of ways from support by the compiler. In recent years, the Kernel Self Protection Project has brought this support from the grsecurity/PaX patch set into the kernel in the form of GCC plugins; LWN looked into that process back in 2017. A recent discussion has highlighted the fact that the use of GCC plugins brings disadvantages as well, and some developers would prefer to see those plugins replaced.
The discussion started when Josh Poimboeuf reported an issue he encountered when building out-of-tree modules with GCC plugins enabled. In his case, the compilation would fail when the GCC version used to compile the module was even slightly different from the one used to build the kernel. He included a patch to change the error he received into a warning and disable the affected plugin. Later in the thread, Justin Forbes explained how the problematic configuration came about; it happens within the Fedora continuous-integration system, which starts by building a current toolchain snapshot. Other jobs then compile out-of-tree modules with the new toolchain, without recompiling the kernel itself. Since GCC plugins were enabled, all jobs with out-of-tree modules have been failing.
The idea of changing the error into a warning was met with a negative response from the kernel build-system maintainer, Masahiro Yamada, who stated: "We are based on the assumption that we use the same compiler for in-tree and out-of-tree". Poimboeuf responded that what he sees in real-world configurations doesn't match that assumption.
The recent proposal from David Hildenbrand to remove support for the /dev/kmem special file has not sparked a lot of discussion. Perhaps that is because today's youngsters, lacking an understanding of history, may be wondering what that file is in the first place and, thus, be unclear on why it may matter. Chances are that /dev/kmem will not be missed, but in passing it takes away a venerable part of the Unix kernel interface. /dev/kmem provides access to the kernel's address space; it can be read from or written to like an ordinary file, or mapped into a process's address space. Needless to say, there are some mild security implications arising from providing that sort of access; even read access to this file is generally enough to expose credentials and allow an attacker to take over a system. As a result, protections on /dev/kmem have always tended to be restrictive, but it remains the sort of open back door into the kernel that makes anybody who worries about security worry even more.
It is a rare Linux system that enables /dev/kmem now. As of the 2.6.26 kernel release in July 2008, the kernel only implements this special file if the CONFIG_DEVKMEM configuration option is enabled. One will have to look long and hard for a distributor that enables this option in 2021; most of them disabled it many years ago. So its disappearance from the kernel is unlikely to create much discomfort.
It's worth noting that Linux systems still support /dev/mem (without the "k"), which once provided similar access to all of the memory in the system. It has long been restricted to I/O memory; system RAM is off limits. The occasional user-space device driver still needs /dev/mem to function, but it's otherwise unused.
Feature development for this quarter's Mesa 21.1 release is now over, with the code having been branched from main and the first release candidate issued.
Taking over Mesa 21.1 release management duties is Eric Engestrom, who issued a brief Mesa 21.1.0-rc1 announcement. Weekly release candidates of Mesa 21.1 are expected until Mesa 21.1.0 is ready to officially ship sometime in May. It should be early to mid-May, but with release delays being quite common for Mesa3D, we'll see how this cycle plays out.
Hello everyone!
Once again, a new release cycle has started. Please test this first release candidate, and report any issue here: https://gitlab.freedesktop.org/mesa/mesa/-/issues/new
Issues that should block the release of 21.1.0 should be added to the corresponding milestone: https://gitlab.freedesktop.org/mesa/mesa/-/milestones/25
Cheers, Eric
We recently received from NVIDIA the rest of the RTX 30 series line-up, covering cards we haven't previously been able to benchmark under Linux, so it's been a busy month of Ampere benchmarking for these additional cards and of re-testing the existing parts. Coming up next week is a large NVIDIA vs. AMD Radeon Linux gaming benchmark comparison, while today's article is an extensive look at GPU compute performance for the complete RTX 20 and RTX 30 series line-ups under Linux, with compute tests spanning OpenCL, Vulkan, CUDA, and OptiX RTX across a variety of compute and rendering workloads.
Now having access to the current RTX 30 series line-up, first up is a look at NVIDIA GPU compute performance across all these cards and the prior-generation RTX 20 parts. All of these new and existing graphics cards were freshly (re)tested on Ubuntu 20.04 with the Linux 5.8 kernel, using the NVIDIA 460.67 driver stack with CUDA 11.2, the latest software components at testing time.
In your never-ending quest to secure your Linux servers, you've probably found that, much of the time, breaches happen through SSH. No matter how secure it is, it can still be cracked. That's why you might consider setting up a tarpit for that service.
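To make the idea concrete, here is a minimal tarpit sketch in Python (an illustration of the technique, not the tool the article sets up; the port and timings are arbitrary):

```python
#!/usr/bin/env python3
"""Minimal SSH tarpit sketch (the idea behind tools like endlessh).

RFC 4253 lets a server send extra banner lines before its version
string, so we drip junk lines forever and the client keeps waiting."""
import random
import socket
import time

PORT = 2222  # point bots here; keep your real sshd elsewhere

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", PORT))
    srv.listen()
    while True:                      # one client at a time, for brevity
        conn, addr = srv.accept()
        print("trapped", addr)
        try:
            while True:
                # Lines must not start with "SSH-", or the client would
                # treat them as the version exchange and move on.
                conn.sendall(b"%x\r\n" % random.getrandbits(32))
                time.sleep(10)       # slow drip wastes the scanner's time
        except OSError:
            conn.close()
```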
In this article, we will create an SNS topic with an access policy that will allow our own account to perform all SNS actions on the topic. We will carry out this activity using Terraform. Before we proceed with the article, it is assumed that you have a basic understanding of SNS and Terraform. You can also check my article here if you want to learn to create an SNS topic using Cloudformation. Click here to see all arguments and parameters available for SNS in Terraform. You can then use them to customize the SNS.
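For a sense of what such a configuration can look like, here is a rough Terraform sketch (illustrative names and region, not the article's exact code):

```hcl
# Sketch: an SNS topic plus an access policy allowing the owning
# account to perform all SNS actions on it.
provider "aws" {
  region = "us-east-1"
}

data "aws_caller_identity" "current" {}

resource "aws_sns_topic" "example" {
  name = "example-topic"
}

resource "aws_sns_topic_policy" "example" {
  arn = aws_sns_topic.example.arn

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "AllowOwnAccount"
      Effect    = "Allow"
      Principal = { AWS = data.aws_caller_identity.current.account_id }
      Action    = "SNS:*"
      Resource  = aws_sns_topic.example.arn
    }]
  })
}
```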
Screen is a handy tool as it allows users to save and come back to terminal sessions without having to keep the terminal window open. While many Linux users use this software on Linux servers, it can also be useful to Ubuntu users who want to always come back to a terminal program without having to keep the terminal open at all times.
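The basic workflow is only a few commands:

```sh
screen -S build        # start a session named "build"
#   ... run a long task, then detach with Ctrl-a d ...
screen -ls             # list detached sessions
screen -r build        # reattach later, even from a new login
```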
In this video, we are looking at how to install Atom Text Editor on Deepin 20.2.
If you use Arduino frequently, the default interface can feel monotonous and boring. Against a white background, the text may be hard to read. Ever thought of adding more color and variety to your IoT development? Fortunately, you can customize your Arduino IDE with different background themes, colors, and font schemes.
As the following steps illustrate, it’s actually quite easy to personalize your Arduino IDE experience. Whether you prefer a Count Dracula dark theme or an ocean-green font style, we have you covered. There’s no need for any advanced programming tools, such as the command shell, Atom, or Notepad++.
Today we are looking at how to install a MUGEN GAME on a Chromebook. Please follow the video/audio guide as a tutorial where we explain the process step by step and use the commands below.
If you have any questions, please contact us via a YouTube comment and we would be happy to assist you!
This article explains how to use YouTube on TV (https://youtube.com/tv) on a Raspberry Pi, and control it using the YouTube app from your mobile device, almost as if you're using a Chromecast.
Once you set everything up, you'll be able to use the cast button in the YouTube app on your phone to connect to YouTube on TV running on your Raspberry Pi (using the Chromium web browser in kiosk mode), and use your phone as a YouTube remote. You'll be able to play videos, add videos to the queue, change the volume using the phone's volume keys, and so on. Also, multiple phones (so multiple users) can connect, play, and add videos to the queue at the same time.
Note that I've only tested this using Android phones, so I'm not sure if it also works with iOS. I guess it should, but I don't own any iOS devices.
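At its core, the setup boils down to running Chromium in kiosk mode pointed at the TV interface (a sketch of that piece only; the article's full guide also covers pairing the phone):

```sh
# Launch the YouTube TV UI full-screen on the Pi. Illustrative only;
# the complete setup also registers the Pi as a cast target.
chromium-browser --kiosk --noerrdialogs https://youtube.com/tv
```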
Learning Linux commands is getting easier day by day! If you know how to use man pages properly, you are halfway through your Linux command-line journey. There are also some good man-page alternatives that display cheatsheets for Linux commands. Unlike the man pages, these tools only display concise examples for most commands and leave out the theory. Today, let us discuss one more useful addition to this list. Say hello to eg, a command-line cheatsheet tool that displays useful examples for Linux commands.
Eg provides practical examples for many Linux and Unix commands. If you want to quickly find examples of a specific Linux command without going through the lengthy man pages, eg is your companion. Just run eg followed by the name of the command and get concise examples for it right in the terminal window. It is that simple!
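For example:

```sh
# Concise, example-first help instead of the full manual
# (assuming eg ships examples for these commands):
eg tar      # curated examples of common tar invocations
eg find     # same for find
man tar     # compare with the full man page
```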
In this tutorial, we will show you how to install Ruby on Rails on Debian 10. For those of you who didn’t know, Ruby on Rails (RoR) is a web application framework based on the Ruby programming language. It is a server-side MVC (Model-View-Controller) framework that provides default structures for a database, a web service, and web pages. It allows you to use Ruby in combination with HTML, CSS, and similar languages.
This article assumes you have at least basic knowledge of Linux, know how to use the shell, and, most importantly, host your site on your own VPS. The installation is quite simple and assumes you are running as the root account; if not, you may need to prepend ‘sudo‘ to the commands to get root privileges. I will walk you through the step-by-step installation of Ruby on Rails on Debian 10 (Buster).
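For orientation, the usual route condenses to a few commands (a sketch; the article walks through each step and version choice in detail):

```sh
# Ruby from Debian's repos, Rails from RubyGems.
sudo apt update
sudo apt install -y ruby-full build-essential zlib1g-dev nodejs
sudo gem install rails
rails --version
```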
Website loading speed, or response time, is very important for any webmaster because it impacts search-engine rankings and user experience. So if you are a system administrator or webmaster, it is important to test your website speed and take immediate action to speed it up. There are several web-based and command-line tools available for testing website speed.
Firstly a disclaimer, I’m not an expert on this and I’m not trying to instruct anyone who is aiming to become an expert. The aim of this blog post is to help someone who has a single kernel issue they want to debug as part of doing something that’s mostly not kernel coding. I welcome comments about the second step to kernel debugging for the benefit of people who need more than this (which might include me next week). Also suggestions for people who can’t use a kvm/qemu debugger would be good.
Below is a command to run qemu with GDB. It should be run from the Linux kernel source directory. You can add other qemu options for a block device and virtual networking if necessary, but the bug I encountered gave an oops from the initrd, so I didn’t need to go further. The “nokaslr” is to avoid address space randomisation, which deliberately makes debugging tasks harder (from a certain perspective, debugging a kernel and compromising a kernel are fairly similar). Loading the bzImage is fine; gdb can map that to the different file it looks at later on.
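The post then gives the full command; an illustrative invocation of that kind (not the author's exact command, details vary by setup) looks like this:

```sh
# Run from the kernel source directory. -s opens a GDB server on :1234,
# -S halts the CPU until the debugger connects. Paths are illustrative.
qemu-system-x86_64 -kernel arch/x86/boot/bzImage \
    -initrd /boot/initrd.img \
    -append "console=ttyS0 nokaslr" \
    -nographic -s -S

# In another terminal, from the same source tree:
gdb vmlinux -ex 'target remote :1234'
```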
When the personal computer was young, a household was likely to have one (or fewer) computers in it. Children played games on it during the day, and parents did accounting or programming or roamed through a BBS in the evening. Imagine a one-computer household today, though, and you can predict the conflict it would create. Everyone would want to use the computer at the same time, and there wouldn't be enough keyboard and mouse to go around.
This is, more or less, the same scenario that's been happening to the IT industry as computers have become more and more ubiquitous. Demand for services and servers has increased to the point that they could grind to a halt from overuse. Fortunately, we now have the concept of load balancing to help us handle the demand.
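The simplest balancing strategy, round-robin, fits in a few lines (a toy sketch with made-up backend names):

```python
# Each incoming request goes to the next backend in the rotation,
# so no single server absorbs all the demand.
import itertools

backends = ["app1.example.com", "app2.example.com", "app3.example.com"]
rotation = itertools.cycle(backends)

for request_id in range(6):
    print(f"request {request_id} -> {next(rotation)}")
```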
Hangover is the project that pairs Wine with a modified QEMU and other bits to allow x86 32-bit and 64-bit Windows programs to run on alternative architectures under Linux. But before getting too excited, at this stage it still supports a limited number of real-world software packages, and the architecture support is primarily focused on AArch64 and PPC64LE. While Linux is seemingly the primary focus, there is also some macOS support in Hangover.
The Darkside Detective: A Fumble in the Dark, what was originally called Season 2, is officially out now with full Linux support so you can crack some unique cases. Note: key provided by the publisher.
"The Darkside Detective: A Fumble in the Dark is a serial adventure game where you help a duo of investigators crack supernatural cases in the city of Twin Lakes. Whether it’s a noise complaint due to ritual-performing neighbours or Mothman loitering around porch lights, Detective McQueen and his sidekick Officer Patrick Dooley are just a text box away! Point, click, or tap your way through six new cases as you get to the bottom of each mystery."
Metro Exodus, a long-time fan favorite, is finally here on Linux. After a wait of over two years, Linux users can finally get their hands on the third installment of the Metro trilogy. Although a few unofficial ports of the game were available, this is an official release by 4A Games.
It is a first-person shooter with gorgeous ray-traced graphics, and the story is set in the vast Russian wilderness. The brilliant storyline spans an entire year, through spring, summer and autumn to the nuclear winter. The game combines fast-paced combat and stealth with exploration and survival, and is easily one of the most immersive games on Linux.
Probably the biggest update and expansion launch for Stellaris yet, Stellaris: Nemesis and the 3.0 'Dick' update are out now. Paradox actually released something of a double-patch with both 3.0 and 3.0.1 landing today to bring big new features and some needed fixes found while testing.
Are you a fan of OpenTTD or the Transport Fever series? You should look at Voxel Tycoon, a brand new Early Access release on Steam that comes with Linux support and it's looking very good. Note: personal purchase.
It's technically been available for a long time already, as it had a pre-alpha on itch.io which has now been removed; Steam is the main store for it now. This is also the first time I've jumped in to play, and it impresses instantly. It offers a Cities: Skylines level of beautiful simplicity in its presentation, which is good because these types of transport sims usually confuse the heck out of me. Here though, it's just great.
Available now with Linux support, Rain on Your Parade is a comedy puzzle-like game about being a cloud and raining all over everyone and not giving a hoot. Note: key provided by the developer.
Rain on Your Parade is like nothing else! You could say it's in the spirit of games like Untitled Goose Game: the idea is that you're just there to mess with people and have as much fun as you can while doing so, and it's a total joy. There's plenty more to it than just raining on people though; there's a certain strategy to it. The game also gradually gives you a few fun tools to cause more havoc, including thunder, lightning, tornadoes and more.
LXQt 0.17 is the first update to this lightweight Qt5 desktop environment for 2021. It is focused on being a classic desktop with a modern look and feel.
LXQt is a very light and simple desktop environment built using the Qt libraries. It was formed by the merger of the LXDE and Razor-qt projects. Currently, almost all major Linux distributions provide LXQt options, as it is very lightweight yet loaded with features.
The LXQt desktop provides its own set of components, designed specifically using the Qt frameworks, which gives you a stable yet super-fast desktop experience.
Last year I decided to add an improvement to my home office: a treadmill. I thought about it during the total lockdown we suffered in Spain in spring 2020. The reason for not trying one earlier was that I was skeptical about keeping my productivity level while walking on the treadmill. Finally I made the decision to try it out and asked The Three Wise Men for one. They were kind enough to bring it.
After a few weeks of using it, I would like to report on the experience, since several of my colleagues at MBition/Daimler have asked me about it. I hope that reading about my experience is useful to you.
[...]
I detected a productivity decrease in those activities that require high levels of concentration or creativity, like complex meetings with many people that I facilitate, times when I need to come up with new ideas, analysis of complex data, etc. The good news is that I detected this productivity reduction early. The longer I use the treadmill, though, the more types of tasks I can do without noticing a productivity reduction. So I plan to try again during the coming weeks some of the activities that I dropped early on.
Walking is helping me a lot to go through those days with back to back meetings. I pay better attention and get less bored compared to standing up. As I get more used to walking, my energy levels during the afternoons are increasing, as mentioned, which helps me to get through the last couple of hours during long working days. It was not like that at first though, so plan accordingly.
Here are studies for episode 35: a montage of roughly 30 sketches featuring Pepper at various ages, the dragon Arra, Carrot, and Torreya (a new character, Arra's pilot).
gThumb, the GNOME image viewer and organizer, released version 3.11.3 a few days ago. Here’s how to install it in Ubuntu 20.04, Ubuntu 18.04, and Ubuntu 20.10 via PPA.
gThumb 3.11.3 adds support for JPEG XL – the next generation image coding standard.
JPEG XL (.jxl) is based on ideas from Google’s Pik format and Cloudinary’s FUIF format. It is the next-generation, general-purpose image compression codec by the JPEG committee. Some popular apps, e.g., ImageMagick, XnView MP, have already added support for the image format.
Denver-based System76 has announced preliminary details of their upcoming COSMIC desktop environment, which will arrive in the June 2021 release of Pop!_OS 21.04.
According to the blog post, System76 will provide “a honed desktop user experience in Pop!_OS through our GNOME-based desktop environment: COSMIC.” The design has been refined through extensive testing and user feedback, resulting in greater ease of use and efficiency.
For a while, we have heard this justification among distributions that have refused systemd as init: that elogind was adopted as a necessity for running Wayland, which is the future of graphical desktops. Some future this is, but anyway. For Artix it was a day-one decision; for Void and Adelie it has been a year or more, for Slackware a few months. But it is only a handful of distros at this point that have totally refused the use of elogind and stick to ConsoleKit2 (which was recently updated upstream, contrary to the common myth that it is abandoned). ConsoleKit2 can handle logind functionality but can't provide seat management. So there is seatd, a daemon that does just this. So count the different functions, unrelated to each other, that systemd provides all in one huge blob.

Sway is the equivalent of i3 for Wayland. For i3 users, their setup transfers 100% to sway; all modified functionality will be there once you log in.
Wlroots is Wayland's modular library. It is native in Obarun and maintained to help Wayland function without systemd/elogind. It is currently kept back at 0.12, as in Arch, but will soon be upgraded to 0.13.
Greetd – wlgreet is a display manager compatible with wayland. This has a PKGBUILD at the link above so you can build it appropriately for Obarun.
None of these are officially supported as a desktop setup in Obarun yet, but the setup is a demonstration that it can be done. Can it be done with sysv, openrc, runit? We don’t know. Can it be done with s6? Since it is done within Obarun it can be done. Whether you can handle the complex service setup without 66 and in lack of any other service manager, who knows! Of course you can cheat and employ 66, do the setup, then remove 66 and leave it as is, as a pseudo custom s6 setup of services. Don’t cry if one day you decide to switch services, disable one, enable another, without 66. The procedure can make a tough man cry.
Gavin Falconer, or bbsg on the Obarun forum, has started this thread for the discussion of this project/solution: a how-to instructional thread. A few of us have tried it and made it work. I am sure it will receive plenty of attention and refinement in the near future, and possibly be adopted officially as an Obarun setup, with all the related packages added to the repositories. For now it is a community project, and it is proof that it can be done.
Promising to be the "most advanced release ever," Zorin OS 16 is based on Ubuntu 20.04 LTS (Focal Fossa) and features a revamped look and feel to make the transition from Windows to Linux easier and more enjoyable, as the main goal of Zorin OS remains to be the number one MS Windows alternative for Linux newcomers.
The new look of Zorin OS 16 consists of a new theme that's easier on the eyes and features beautiful animations, new artwork and wallpapers, as well as a revamped lock screen that shows a blurred version of the desktop background.
Zorin OS 16 was one of my picks for distributions to look out for in 2021. They always do something interesting with every major upgrade, and it looks like Zorin OS 16 is going to be an exciting release to talk about.
The Zorin team announced the availability of Zorin OS 16 (based on Ubuntu 20.04 LTS) beta along with all the new features that come with it.
Here, I will mention the highlights of the new release along with a video tour (with the download link at the bottom).
SUSECON is only a few weeks away, and the excitement is starting to build. While working on my keynote, I took some time to look back on how much has happened since SUSE completed its acquisition of Rancher Labs in December of last year. It’s just unreal how fast things are moving. It hasn’t even been six months, and more things have happened than many companies can accomplish in a year or more.
IBM is joining the Eclipse Adoptium Working Group as an enterprise member. IBM is a founding and active member of the AdoptOpenJDK community, which is moving under the stewardship of the Eclipse Foundation to form the Adoptium working group.
IBM has joined the Eclipse Adoptium working group as an enterprise member and committed to building and publishing Java SE TCK-certified JDK binaries with OpenJ9 free of charge.
As a solutions architect, I spend a lot of my time talking to my customers about Kubernetes. In every one of those conversations, it’s guaranteed that the topic of Kubernetes Operators will come up. Operators, and their relationship to Red Hat OpenShift, aren't always clear to those who are just starting out on their container adoption journey.
Kubernetes has allowed the deployment and management of distributed applications to be heavily automated. A lot of that automation comes out of the box, but Kubernetes wasn’t designed to know about all application types. So sometimes it’s necessary to extend Kubernetes' understanding of a specific type of application; otherwise, you have to manage a large part of these applications manually, which ultimately defeats the purpose of deploying on Kubernetes. Operators let you capture, in code, how to automate tasks beyond what Kubernetes itself provides.
This post assumes you know what Kubernetes is and how it works and have some knowledge of OpenShift. So what are Operators and why are they so important in explaining what Red Hat OpenShift is?
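As a hypothetical sketch of the pattern (not from the article): an operator typically pairs a controller with a custom resource definition like the one below, then reconciles each such object into real actions.

```yaml
# Illustrative CRD of the sort an operator watches; the operator's
# controller turns each "Backup" object into actual backup work.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string
```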
MontaVista Software has today joined the Rocky Enterprise Software Foundation as a Principal Sponsor, endorsing the Rocky Linux distribution as a Red Hat Enterprise Linux-compatible alternative to the CentOS Linux project. MontaVista will continue providing the MVShield program services for long-term support of both the CentOS Linux and Rocky Linux baselines.
Ubuntu Touch can run on dozens of smartphones that originally shipped with Android thanks to the Halium tool that allows the Linux distribution to communicate with the hardware in those phones using Android drivers.
But if you’re running Ubuntu Touch on a PinePhone, you don’t need Halium since the phone supports mainline Linux kernels or modified kernels like Megi’s kernel, which is usually way ahead of the pack when it comes to adding features to support the phone’s hardware.
Up until recently, Ubuntu Touch for the PinePhone had been using a version of Linux kernel 5.6, but the developers have now started to move to Megi’s 5.10 and 5.11 kernels, which bring improved hardware stability and reliability.
Canonical has announced the optimization of MicroK8s, its lightweight Kubernetes distribution.
Started in 2018, MicroK8s has matured into a robust tool favoured by developers for efficient workflows and delivering production-grade features for companies building Kubernetes edge and IoT production environments.
Canonical released MicroK8s 1.21, featuring a 32.5 percent smaller footprint to enable Kubernetes deployments of 540MB, thereby easing clustering on platforms such as the Raspberry Pi and Nvidia Jetson.
Canonical took another step toward expanding the use of MicroK8s containers on low-power edge devices with the release of version 1.21 of MicroK8s, the stripped down, single-node version of Kubernetes. Released in conjunction with Kubernetes 1.21, which also extends to the Charmed Kubernetes and kubeadm variants, MicroK8s 1.21 has a 32.5 percent smaller RAM footprint than v1.20, “as benchmarked against single node and multi-node deployments,” says Canonical.
[...]
For server implementations on x86-based devices, meanwhile, MicroK8s 1.21 joins Kubernetes 1.21 in adding “seamless integration” of MicroK8s with Nvidia’s latest version of its GPU Operator, which helps provision GPU worker nodes in a Kubernetes cluster. MicroK8s can now consume a GPU or even a Multi-instance GPU (MIG) using a single command and is compatible with specialized Nvidia hardware such as the DGX and EGX.
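Per Canonical's MicroK8s documentation, that single-command flow looks roughly like this (channel and addon names as currently documented):

```sh
sudo snap install microk8s --classic --channel=1.21/stable
microk8s enable gpu                    # deploys NVIDIA's GPU Operator
microk8s kubectl describe nodes | grep -i nvidia.com/gpu
```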
MicroK8s is typically used for offline development, prototyping, and testing of Kubernetes (k8s) applications on a desktop before deploying them to the cloud as appliances. However, in Oct. 2019, Canonical began expanding the applicability of MicroK8s on edge devices for applications such as clustering when it announced a new feature in Ubuntu 19.10 that enabled “strict confinement” support for MicroK8s. Strict confinement of the non-elastic, rails-based MicroK8s enabled complete isolation and a tightly secured production-grade Kubernetes environment within a small footprint that could run on the Raspberry Pi 4.
The French-speaking Ubuntu community really likes making Ubuntu t-shirts. Every six months, release after release, they make new ones. Here is the Hirsute Hippo. You can buy it before the 26th of April for €15 (+ shipping costs) and receive it at the end of May 2021. You can buy it later, but it will be more expensive and you will have no guarantee of stock.
The Ubuntu community is, and always will be, a major part of the Ubuntu project. It is one of the biggest reasons all of this (gestures around) even exists. Over the past month or so, the beginnings of a new Community team have been taking shape inside Canonical with the specific purpose of serving the community.
We try to do as much user testing as we can at Canonical, and one of the techniques that we employ is user interviews. Our UX team will talk to users regularly – usually there’s a user interview happening on every day of the week.
Aside: if you’re interested, you can sign up to join the Canonical user interview panel.
In response to the growing need for data protection, ATP Electronics, the global leader in specialized storage and memory solutions, has launched the SecurStor microSD cards – the latest in its line of secure NAND flash storage products for the Internet of Things (IoT), education, automotive, defense, aerospace and other applications requiring confidentiality and reliability.
“Removable storage media such as microSD cards provide great convenience and versatility for storing and transporting data. However, such convenience also exposes them to risks of unauthorized access,” said Chris Lien, ATP Embedded Memory Business Unit Head. “In many instances, the boot image may be compromised, corrupting the operating system or rendering the system unusable. Malware may be introduced, or private information may be disclosed and used for damaging intents. Amidst such dangerous scenarios, we have made security a key priority for all ATP products.”
Rockchip RK3566 & RK3568 processors were just officially announced at the end of the year, soon followed by announcements of related products such as the Core-3568J AI Core system-on-module, some Android 11 TV boxes, the Station P2 mini PC, and RK3566/RK3568 development boards.
But it did not take long, as RK3566/RK3568 are about to get support for mainline Linux, with engineers from Collabora and Rockchip having recently committed preliminary support for RK356x platforms, notably using the Pine64 Quartz64 SBC for testing.
GreenWaves Technologies introduced the GAP8 low-power RISC-V IoT processor optimized for artificial intelligence applications in 2018. The multi-core (8+1) RISC-V processor is especially suitable for image and audio algorithms including convolutional neural network (CNN) inference.
The same year, the company launched the GAPUINO development kit, which sold (and still sells) for $229 with a QVGA camera and a multi-sensor board with four microphones, an STMicro VL53 time-of-flight sensor, an IR sensor, a pressure sensor, a light sensor, a temperature & humidity sensor, and a 6-axis accelerometer/gyroscope. But there’s now a much more affordable solution to evaluate the GAP8 multi-core RISC-V MCU: the PerfXLab Perf-V Beetle board.
The board can be programmed with Python (MicroPython?), plot graphics with Matlab, control up to four relays, and integrate with Axon Cloud apps. As I understand it, the MCU – USB switch selects the source of the data: either USB or UART.
There are already many ESP8266 boards around, so Axon is probably mostly interesting when connected with a LoRa module as it offers an ultra-compact WiFi + LoRa IoT solution.
RAKWireless just had their “Big Tech Bloom” event, where they announced many new LPWAN products ranging from the WisDM fleet management system and the OpenWrt-based WisGate OS to new industrial LoRaWAN gateways like WisGate Edge Lite 2, their first STM32WL module, as well as 9 new modules for the WisBlock modular IoT platform with MIC, e-Paper display, GPS, an ESP32-based WisBlock core, etc.
But today, I’ll have a look at the new $99 WisGate Developer Base, a USB dongle that connects to a laptop for LoRaWAN networks evaluation, for example, to check the coverage before installing a new gateway. Alternatively, it could also be used to add LoRaWAN gateway capability to existing embedded hardware like routers or industrial PCs.
A few years ago, we wrote about the BASpi I/O Raspberry Pi HAT compatible with BACnet, a data communication protocol for building automation and control networks, also known as the ISO 16484-6 standard, and used for HVAC, lighting, elevators, fire safety, and other systems found in buildings.
A new single-board computer from Chinese semiconductor designer Allwinner will feature an SoC based on the 64-bit RISC-V architecture. However, the board lacks power due to its single-core CPU. Since it supports Linux and should cost less than US$15, the SBC should still be versatile enough for tinkering and projects.
Allwinner Technology today announced the launch of the “D1” processor, which is the world’s first mass-produced application processor equipped with the RISC-V-based T-Head XuanTie 906, providing an exciting new smart chipset for immediate use in today’s developing Artificial Intelligence of Things (AIoT) market.
Last year, we reported that Allwinner was working on an Alibaba XuanTie C906 based RISC-V processor that would be found in low-cost Linux capable single board computers selling for as low as $12.
The good news is that we won’t have to wait much longer as Allwinner D1 RISC-V processor is slated for an announcement next week, and a business card-sized SBC, also made by Allwinner, will become available in May. Some of the information is already available to developers in China, and CNX Software managed to obtain information about the Linux RISC-V SBC and Allwinner D1 processor.
Seeed unveiled a $195, Raspberry Pi CM4-based “ReTerminal” HMI device with a 5-inch, 1280 x 720 touchscreen, a crypto chip, WiFi/BT, GbE, micro-HDMI, CSI, 2x USB, and 40-pin and PCIe expansion.
Seeed has announced a modular human-machine interface (HMI) device based on the Raspberry Pi Compute Module 4. The ReTerminal, which will go on pre-order later this month starting at $195, features a 5-inch touchscreen.
In many respects we think of artificial intelligence as being all encompassing. One AI will do any task we ask of it. But in reality, even when AI reaches the advanced levels we envision, it won’t automatically be able to do everything. The Fraunhofer Institute for Microelectronic Circuits and Systems has been giving this a lot of thought.
[...]
As a test case, an Arduino Nano 33 BLE Sense was employed to build a demonstration device. Using only the onboard 9-axis motion sensor, the team built an untethered gesture-recognition controller. When a button is pressed, the user draws a number in the air, and corresponding commands are wirelessly sent to peripherals; in this case, a robotic arm.
Linux for Chromebooks has come a long way since Google introduced it in Chrome OS 69 a couple of years ago. On supported devices, it opened the door to an extensive library of desktop apps for users, like video editing tools and IDEs. GPU acceleration was an important milestone that made graphics-intensive Linux apps usable on Chrome OS. This is thanks to Virgil 3D, a component that allows the Linux container to tap into the hardware's GPU. In exciting news shared by Luke Short of VMware, Google is working on adding Vulkan passthrough to Virgil to improve app performance.
A round of commits spotted on GitLab shows that Google's Chia-I Wu has been working for around a year on adding Vulkan passthrough support to Virgil 3D from the QEMU hypervisor. Wu helped Valve in the past by submitting a set of patches for Mesa, an OpenGL library, to reduce load times in games. Work-in-progress code allows commercial and Proton-based Steam games to run on Chromebooks.
It's been no secret lately that Apache Hadoop, once the poster child of big data, is past its prime. But since April 1st, the Apache Software Foundation (ASF) has announced the retirement to its "Attic" of at least 19 open source projects, 13 of which are big data-related and ten of which are part of the Hadoop ecosystem.
My peers at Mozilla are running workshops on opportunity sizing. If you're unfamiliar, opportunity sizing is when you take some broad guesses at how impactful some new project might be before writing any code. This gives you a rough estimate of what the upside for this work might be.
The goal here is to discard projects that aren't worth the effort. We want to make sure the juice is worth the squeeze before we do any work.
If this sounds simple, it is. If it sounds less-than-scientific, it is! There's a lot of confusion around why we do opportunity sizing, so here's a blog post.
In the eight years since I wrote that blog post not much has changed in how we think about and use our various devices. Each device is still a world unto itself. Sure, there are cloud applications and services that provide support for coordinating some of “my stuff” among devices. Collaborative applications and sync services are more common and more powerful—particularly if you restrict yourself to using devices from a single company’s ecosystem. But my various devices and their idiosyncratic differences have not “faded into the background.”
Why haven’t we done better? A big reason is conceptual inertia. It’s relatively easy for software developers to imagine and implement incremental improvements to the status quo. But before developers can create a new innovative system (or users can ask for one), they have to be able to envision it and have a vocabulary for talking about it. So, I’m going to coin a term, Personal Digital Habitat, for an alternative conceptual model for how we could integrate our personal digital devices. For now, I’ll abbreviate it as PDH because each of the individual words is important. However, if it catches on, I suspect we will just say habitat, digihab, or just hab.
A Personal Digital Habitat is a federated multi-device information environment within which a person routinely dwells. It is associated with a personal identity and encompasses all the digital artifacts (information, data, applications, etc.) that the person owns or routinely accesses. A PDH overlays all of a person’s devices, and they will generally think about their digital artifacts in terms of common abstractions supported by the PDH rather than device- or silo-specific abstractions. But presentation and interaction techniques may vary to accommodate the physical characteristics of individual devices.
Well, that was the most short sighted and optimistic take ever, eh? It feels like a decade since I wrote that, and a world away from where we all stand today. I would normally write a post on this day to talk about some of the work that I’ve been doing at Mozilla over the past 12 months, but that seems kinda insignificant right now. The global pandemic has hit the world hard, and while we’re starting to slowly to recover, it’s going to be a long process. Many businesses world wide, including Mozilla, felt the direct impact of the pandemic. I count myself fortunate to still have a stable job, and to be able to look after my family during this time. We’re all still healthy, and that’s all that really matters right now.
Richard Stallman has offered a sort of apology for his badly received defence of Professor Minsky on an MIT mailing list and the Free Software Foundation (FSF) believes that should end the matter.
For those who came in late, a victim of billionaire Jeffrey Epstein testified that she was forced to have sex with MIT professor Marvin Minsky, who was Stallman's chum. Stallman quit the FSF after he made comments in support of Minsky.
Hi there! We’re excited to kick off the GNU Assembly and its web site! This place intends to be a collaboration platform for the developers of GNU packages who are all “hacking for user freedom” and who share a vision for the umbrella project.
Truth be told, this is an old story finally becoming a reality. Almost ten years ago, Andy Wingo (of GNU Guile) emailed GNU maintainers...
A new organization for maintainers and contributors to GNU tools, the GNU Assembly, has announced its existence.
In this article, we will give you a short introduction to a wonderful open-source tool called Twine that allows you to write your own interactive stories easily. We will cover what it is and why you should use it, recommend a tutorial, and more.
If you are running a kernel built with CONFIG_PREEMPT=y, RCU read-side critical sections can be preempted by higher-priority tasks, regardless of whether these tasks are executing kernel or userspace code. If there are enough higher-priority tasks, and especially if someone has foolishly disabled realtime throttling, these RCU read-side critical sections might remain preempted for a good long time. And as long as they remain preempted, RCU grace periods cannot complete. And if RCU grace periods cannot complete, your system has an OOM in its future.
This is where RCU priority boosting comes in, at least in kernels built with CONFIG_RCU_BOOST=y. If a given grace period is blocked only by preempted RCU read-side critical sections, and that grace period is at least 500 milliseconds old (this timeout can be adjusted using the RCU_BOOST_DELAY Kconfig option), then RCU starts boosting the priority of these RCU readers to the level specified by the rcutree.kthread_prio kernel boot parameter, which defaults to FIFO priority 2. RCU does this using one rcub kthread per rcu_node structure. Given a default Kconfig, this works out to one rcub kthread per 16 CPUs.
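Pulling the knobs named above into one place, a build and boot configuration enabling boosting might look like this (illustrative values; option names as given in the text):

```
# Kernel .config fragment:
CONFIG_PREEMPT=y
CONFIG_RCU_BOOST=y
CONFIG_RCU_BOOST_DELAY=500     # ms a grace period waits before boosting

# Kernel boot command line: boost readers to FIFO priority 3
# instead of the default 2:
rcutree.kthread_prio=3
```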
Sometimes it’s useful to establish a connection between a signal and a slot that should be activated only once. This is not how signal/slot connections normally behave.
[...]
The static_cast isn’t technically necessary in this case. But it becomes necessary should we want to also pass some other arguments (for instance, if we want the connection to be queued as well as single-shot). This closed a long-standing and highly voted feature request. Sometimes, by removing the pebble in your shoe, you make many other people happy.
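A minimal, self-contained example of the feature the post describes (assuming Qt 6.0's Qt::SingleShotConnection):

```cpp
// The lambda runs exactly once, even though the timer keeps firing;
// the connection is broken automatically after its first activation.
#include <QCoreApplication>
#include <QTimer>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QTimer timer;
    timer.setInterval(100);

    QObject::connect(&timer, &QTimer::timeout, &app,
                     [] { qDebug() << "fired exactly once"; },
                     static_cast<Qt::ConnectionType>(Qt::QueuedConnection
                                                     | Qt::SingleShotConnection));

    timer.start();
    QTimer::singleShot(500, &app, &QCoreApplication::quit);
    return app.exec();
}
```

Passing Qt::SingleShotConnection alone needs no cast; it is the combination with Qt::QueuedConnection here that requires the static_cast, which is the post's point.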
Agile software development is a methodology focused on an iterative process, in which cross-functional teams collaborate to produce better solutions. Agile frameworks are specific methods or techniques that follow Agile principles in the development process. Most companies use these frameworks to meet their particular needs, and different businesses utilize them according to their specific requirements. It is important for a product’s success to embrace a solid framework that aligns with the team’s needs. That’s where we come in: today we will help you choose an Agile framework that matches your team.
Oil Shell is a not-really drop-in replacement for Bash with its own programming language for "serious" shell programming. The latest release has some very minor changes to the Oil language and that's about it. It may be worth a look if you want a shell language where you can use variables without quotes.
[...]
You can take a look at the "Oil Language Idioms" for examples of how regular bash compares to oil and acquire the technology from http://www.oilshell.org/releases.html if you want to see how it works. None of the GNU/Linux distributions we looked at have OSH in their repositories, so you will have to compile it yourself. It is a quick and small compile with nearly no dependencies, so you'll have it up and running in less than a minute.
On Friday, April 9th, the Rust Compiler team had a planning meeting for the April steering cycle.
Every fourth Friday, the Rust compiler team decides how it is going to use its scheduled steering and design meeting time over the next three Fridays.
The Chrome team is delighted to announce the promotion of Chrome 90 to the stable channel for Windows, Mac and Linux. This will roll out over the coming days/weeks.
Chrome 90.0.4430.72 contains a number of fixes and improvements -- a list of changes is available in the log. Watch out for upcoming Chrome and Chromium blog posts about new features and big efforts delivered in 90.
Google officially promoted Chrome 90 to its stable channel today as the latest feature update to their cross-platform web browser.
What excites us most in Chrome 90 is the AV1 encode support now in place, with the main use case being WebRTC. Chrome uses the reference libaom encoder for CPU-based AV1 encoding, which, with powerful enough hardware, can be used for real-time video conferencing.
ProtonMail is one of the best secure email services out there. While alternatives like Tutanota already offer a calendar feature, ProtonMail did not offer it to all users.
The calendar feature (in beta) was limited to paid users. Recently, in an announcement, ProtonMail has made it accessible for all users for free.
It is worth noting that it is still in beta but accessible to more users.
The Linux Foundation has launched a new research division to look at the impact of open source. Linux Foundation Research aims to broaden the understanding of open source projects, ecosystems, and impact by looking at open source collaboration.
Data and storage technologies are evolving. The SODA Foundation is conducting a survey to identify the current challenges, gaps, and trends for data and storage in the era of cloud-native, edge, AI, and 5G. Through new insights generated from the data and storage community at large, end-users will be better equipped to make decisions, vendors can improve their products, and the SODA Foundation can establish new technical directions — and beyond!
The SODA Foundation is an open source project under Linux Foundation that aims to foster an ecosystem of open source data management and storage software for data autonomy. SODA Foundation offers a neutral forum for cross-project collaboration and integration and provides end-users quality end-to-end solutions. We intend to use this survey data to help guide the SODA Foundation and its surrounding ecosystem on important issues.
Projects, even of the open-source variety, sometimes have secrets that need to be maintained. They can range from things like signing keys, which are (or should be) securely stored away from the project's code, to credentials and tokens for access to various web-based services, such as cloud-hosting services or the Python Package Index (PyPI). These credentials are sometimes needed by instances of the running code, and some others benefit from being stored "near" the code, but these types of credentials are not meant to be distributed outside of the project. They can sometimes mistakenly be added to a public repository, however, which is a slip that attackers are most definitely on the lookout for. The big repository-hosting services like GitHub and GitLab are well-placed to scan for these kinds of secrets being committed to project repositories—and they do.
Source-code repositories represent something of an attractive nuisance for storing this kind of information; project developers need the information close to hand and, obviously, the Git repository qualifies. But there are a few problems with that, of course. Those secrets are only meant to be used by the project itself, so publicizing them may violate the terms of service for a web service (e.g. Twitter or Google Maps) or, far worse, allow using the project's cloud infrastructure to mine cryptocurrency or allow anyone to publish code as if it came from the project itself. Also, once secrets get committed and pushed to the public repository, they become part of the immutable history of the repository. Undoing that is difficult and doesn't actually put the toothpaste back in the tube; anyone who cloned or pulled from the repository before it gets scrubbed still has the secret information.
Once a project recognizes that it has inadvertently released a secret via its source-code repository, it needs to have the issuer revoke the credential and, presumably, issue a new one. But there may be a lengthy window of time before the mistake is noticed; even if it is noticed quickly, it may take some time to get the issuer to revoke the secret. All of that is best avoided, if possible.
Five years ago, we looked at an effort to assist in the assignment of Common Vulnerabilities and Exposures (CVE) IDs, especially for open-source projects. Developers in the free-software world have often found it difficult to obtain CVE IDs for the vulnerabilities that they find. The Distributed Weakness Filing (DWF) project was meant to reduce the friction in the CVE-assignment process, but it never really got off the ground. In a blog post, Josh Bressers said that DWF was hampered by trying to follow the rules for CVEs. That has led to a plan to restart DWF, but this time without the "yoke of legacy CVE".
Red Hat CodeReady Dependency Analytics, powered by Snyk Intel Vulnerability database, helps developers find, identify, and fix security vulnerabilities in their code. In the latest 0.3.2 release, we focused on supporting vulnerability analysis for Golang application dependencies, providing easier access to vulnerability details uniquely known to Snyk, and other user experience improvements.
Open source solutions can offer an accessible and powerful way to enhance your security-testing capabilities.
Security updates have been issued by Debian (xorg-server), Fedora (kernel), openSUSE (clamav, fluidsynth, python-bleach, spamassassin, and xorg-x11-server), Red Hat (gnutls and nettle, libldb, and thunderbird), Scientific Linux (thunderbird), SUSE (clamav, util-linux, and xorg-x11-server), and Ubuntu (network-manager and underscore).
Over the last few years, researchers have found a shocking number of vulnerabilities in seemingly basic code that underpins how devices communicate with the Internet. Now, a new set of nine such vulnerabilities are exposing an estimated 100 million devices worldwide, including an array of Internet-of-things products and IT management servers. The larger question researchers are scrambling to answer, though, is how to spur substantive changes—and implement effective defenses—as more and more of these types of vulnerabilities pile up.
Dubbed Name:Wreck, the newly disclosed flaws are in four ubiquitous TCP/IP stacks, code that integrates network communication protocols to establish connections between devices and the Internet. The vulnerabilities, present in operating systems like the open source project FreeBSD, as well as Nucleus NET from the industrial control firm Siemens, all relate to how these stacks implement the “Domain Name System” Internet phone book. They all would allow an attacker to either crash a device and take it offline or gain control of it remotely. Both of these attacks could potentially wreak havoc in a network, especially in critical infrastructure, health care, or manufacturing settings where infiltrating a connected device or IT server can disrupt a whole system or serve as a valuable jumping-off point for burrowing deeper into a victim's network.
This release fixes the security vulnerability described in our April 15th post.
Amazon.com Inc. last year told smart-thermostat maker Ecobee it had to give the tech giant data from its voice-enabled devices even when customers weren’t using them.
Amazon responded that if Ecobee didn’t serve up its data, the refusal could affect Ecobee’s ability to sell on Amazon’s retail platform...
The devastating coronavirus (COVID-19) global pandemic transformed the adoption of oral proceedings held by video conference, so as to sustain access to justice for parties. In 2019, about 900 oral proceedings before the Examining Divisions were held by video conference, increasing to more than 2,300 in 2020. Additionally, oral proceedings are now routinely held by video conference before the Opposition Divisions and the Boards of Appeal. In 2020, more than 300 oral proceedings before the Opposition Divisions were held by video conference, and the EPO plans to reach the same number monthly in 2021. Between May and October 2020, 120 oral proceedings before the Boards of Appeal were held by video conference.
The French government claims that patents are not “the issue”: in other words, no patent would hinder the manufacture of vaccines.
It is true that no patents have yet been granted directly for a covid-19 vaccine as such, since such a grant requires the filing of an application with a Patent Office, which will examine it during a procedure that will last approximately two years.
However, the lack of granted patents does not mean that there are no patents that could impede the manufacture of vaccines.
Firstly, it is the patent application, not the granting of the patent, that constitutes the origin of the patent right, from which point an infringement action is possible.
Second, vaccine manufacturing involves processes that may pre-date the pandemic and be the subject of patents. This is the case for messenger RNA and lipid nanoparticles, for example, for which BioNTech[2] and Moderna[3] hold numerous patents. Similarly, Oxford University holds patents on the recombinant DNA technology used in AstraZeneca’s vaccine, to which Oxford University has granted an exclusive license[4]. Moreover, Moderna’s announcement that it will not assert its patents during the pandemic does not mean that it is waiving its rights. Moderna is not giving up anything, but rather exercising its right by subjecting access to the patented technology to certain conditions[5].
Of course, patents could prove useless without the know-how required to implement their teachings. This know-how will be particularly necessary when it comes to compiling a file in order to obtain a marketing authorization. However, there is nothing, other than the lack of political will, to prevent the ex officio license from extending to the patent as well as to the know-how necessary for its exploitation.
It has been a banner quarter for those trying to avoid merits-based review of their patents at the USPTO. With 74 procedural denials in the first three months of 2021—a quarterly record—it is projected that denials will rise by nearly 30% for the year, from 228 to nearly 300. If this trend holds, the Board will deny more petitions in 2021 without reaching the merits than it decides on the merits. That’s in just over a year of Fintiv itself being precedential. If that projection holds, more than six hundred petitions will have been paid for and filed with the Board without ever getting a hearing on the merits.