Promising a comfortable, fully customizable, and more efficient workflow, the Launch keyboard features an aluminum chassis with rubber feet for stability and a detachable lift bar to adjust it to a 15-degree angle. It also features a high-speed USB hub with USB-C and USB-A ports offering transfers of up to 10 Gbps.
Since it’s a configurable keyboard, Launch comes with additional keycaps and a convenient keycap puller so you can easily swap keys to match your personal workflow. A novel split Space Bar can be swapped with a Shift, Backspace, or Function key to reduce hand fatigue while typing, and the layout uses only three keycap sizes to vastly expand configuration options.
System76 is best known for selling Linux laptops and developing the Ubuntu-based Pop!_OS distro, but keyboards?
Today (May 12) is launch date for the System76 ‘Launch’, a compact and highly configurable mechanical keyboard which — fact fans — is entirely open source. Schematics for the PCB and chassis, and firmware code are all freely available.
Naturally System76’s first keyboard offers tight integration with Pop!_OS and its novel tiling features. But you don’t need to run Pop!_OS to use it. This keyboard and its desktop companion app work on Linux (naturally) but also on macOS and Windows.
Per the press documentation, the Launch is “engineered to be comfortable, fully customizable” and make day-to-day life with a keyboard more efficient. Users of Pop!_OS will particularly benefit from this, but the Launch also works on other distros, macOS, and Windows. It’s handmade in Denver, Colorado, USA, consisting of an aluminum chassis, a custom PCB, and rubber feet.
The System76 Launch configurable keyboard is designed to provide a user-controlled keyboard experience, with an open source mechanical and electrical design, open source firmware and associated software, and a large number of user configuration options.
Linux PC company System76 has been selling laptop and desktop computers with Linux software for over a decade. The company also develops one of the most popular Linux distributions on the market, Pop!_OS. Now the PC manufacturer is becoming an accessory maker too.
System76 has announced today the launch of the Launch keyboard, a configurable keyboard designed and made in-house by System76. They say that it is “engineered to be comfortable, fully customizable, and make your workflow more efficient”. This mechanical keyboard has a lot of interesting features that I am very curious to try out, such as the split Spacebar offering a unique customization option, the ability to easily remap keys with the configuration app, and multiple-layer functionality providing many ways to personalize it.
You may be wondering “what is so special about a mechanical keyboard?” A few years ago I was wondering that too. I have been using a mechanical keyboard for a little while now, and I can say with confidence that I will never go back. For years, I thought it was just a hipster thing to want a fancy keyboard versus the $10 keyboards I had been using for most of my computing life. I never truly understood the value, but that’s really just because I had never taken the leap into getting one, since they tend to come with a relatively high price.
A decade ago, I said that Chromebooks would be Windows PC killers. I got that wrong. But I wasn't as wrong as you might think. Today, Microsoft is hard at work turning Windows from a standalone PC operating system into a cloud-based Desktop-as-a-Service (DaaS) with its Cloud PC model. Who had that idea first? Who proved that users would accept a cloud-based desktop? That would be Google with Chrome OS.
Recently Signal had an amazing idea for a marketing campaign on Instagram: rather than just running boring Instagram ads, they decided to show each user the sort of data that was being used to target them, demonstrating the importance of privacy.
Migrating from one platform to another is always a challenge, and when it comes to Linux, there are some ways you can make the process easier. In this video, I go over ten tips that can help make your transition to Linux smoother.
Linux is getting official support for Apple’s M1 Macs, and we could see a June release date for the upcoming Linux Kernel 5.13 release. The first Release Candidate build of Linux Kernel 5.13 was released this week, and Linus Torvalds has confirmed that it supports Apple’s M1 chip.
The release notes for the new 5.13 kernel note that it adds support for several chips based on the ARM architecture, including the M1. This will offer users the ability to run Linux natively on the new M1 MacBook Air, MacBook Pro, Mac mini, and iMac.
In addition to the initial batch of AMDGPU changes for Linux 5.14 that were mailed in on Thursday to DRM-Next, the initial DRM-Misc-Next pull also was sent off on its way to DRM-Next ahead of this next kernel cycle.
Years ago Phoronix readers may recall the SimpleDRM driver that was proposed as a very simple DRM/KMS driver for frame-buffer drivers. Coming with Linux 5.14 is now a new "SimpleDRM" driver that is a new take on the solution. The driver finally being mainlined is a Direct Rendering Manager / Kernel Mode-Setting driver for simple frame-buffer platform devices.
The Adreno 660 is the GPU found within the Snapdragon 888 SoC as a significantly improved graphics processor compared to the Adreno 650. Support for the Adreno 660 is now on the way to the open-source MSM DRM driver for the Linux kernel.
Jonathan Marek is again the one tackling this support. Enabling the Adreno 660 within the open-source MSM DRM driver though isn't too great of a burden over the existing Adreno 650 support. For this kernel driver it's just a little over one hundred lines of new kernel code involved.
Back in April we wrote about the AMDVLK 2021.Q2.2 Vulkan driver update for Radeon Linux systems, and in some driver déjà vu, this driver version has now been re-released with the same changes.
As noted last month, with the AMDVLK 2021.Q2.2 driver release the headers have been re-based against Vulkan API 1.2.174, and there are two listed new features: AMDVLK 2021.Q2.2 adds support for dynamically enabling color writes and partial nested command buffer support in the GPU debug layer.
The latest AMD ROCm compute stack has nothing new for Linux desktop users, and there is no mention of OpenCL in the release notes. It is still incapable of providing compute capabilities to desktop applications like Blender. Data center customers can enjoy new platform macros and several other improvements to the ROCm tools and libraries.
AMD has released new binary versions of their AMDVLK 2021.Q2.2 driver, which was originally released on April 28th. We have no idea what, if anything, is different in the new re-release; we can only speculate that the only change is that the "new" version is compiled with an updated version of AMD's LLVM compiler fork.
[...]
The release notes remain the same as they were when 2021.Q2.2 was first released on April 28th. The old release notes mention that the Vulkan apiVersion (from the Vulkan headers) was bumped to 1.2.174, that color writes can be enabled dynamically, and that there were three minor bug-fixes relating to DCC color compression, the AMD switchable graphics layer being ignored in some cases, and out-of-memory errors if AMDVLK is installed on a machine with no AMD GPU.
Szyszka is a new batch file renaming tool written in the Rust programming language with the GTK+ 3 toolkit. It works on Linux, Windows, and macOS.
The name Szyszka is a Polish word meaning pinecone. The tool has a very simple user interface: simply click “Add Entries”, then press and hold Shift or Ctrl to select your desired files. Adding folders is not supported in the first 1.0 release; it is, however, marked as a planned feature.
F2 is a command-line file and folder batch renaming tool written in Go. The tool is fast, safe (runs several validations before renaming, and allows undoing the batch rename), and runs on Linux, macOS and Microsoft Windows.
The mass rename command-line tool is fairly new, having had its first stable release back in February 2021, but it's already quite mature, with features like string replacement, inserting text as a prefix, suffix, or at another position in the file name, changing letter case, renaming using auto-incrementing numbers, and so on. Find and replace using regular expressions is also supported.
The tool can show a preview of the new file and folder names (simply omit the -x command line flag, which is used to apply the changes), and it also supports undoing the last batch renaming operation in case you change your mind and want to revert the changes.
To ensure that the rename operations are safe, F2 also runs several validations before carrying out a rename operation. In case the tool finds conflicts, like the target destination already existing, invalid characters in the target path, an empty filename, etc., it can automatically resolve them using the --fix-conflicts / -F flag.
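As a sketch of what a typical F2 session might look like (the file names here are hypothetical, and exact flags can vary between F2 versions, so treat this as illustrative rather than definitive):

```shell
# Preview a rename: replace "IMG" with "vacation" in matching file names.
# Without -x, F2 only prints what it would do.
f2 -f "IMG" -r "vacation" *.jpg

# Apply the same rename for real.
f2 -f "IMG" -r "vacation" -x *.jpg

# Changed your mind? Undo the last batch rename.
f2 -u
```

The dry-run-by-default design is what makes the tool safe to experiment with: nothing touches the disk until you explicitly pass -x.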
Files are one of the most important things that you interact with on a Linux PC. Some of the most common files you will encounter on a Linux system include configuration files, log files, and scripts.
The ability to easily view files from the command line is a powerful feature that Linux provides to its users. This guide will show you the different command-line utilities that you can use to view files in Linux.
[...]
This guide has shown you the different ways in which you can view files in Linux. Being able to view and work with files directly from the command line is key. While these utilities offer features that allow you to search for strings, there are various other commands like the grep utility that you can use for filtering output on your system.
In addition to the terminal, users can also manage and navigate through their file system graphically. Several file manager applications are available on Linux that you can try for free.
Want to translate a text string between multiple languages using the terminal? Maybe you came across a message written in a different language while browsing the internet and want to know what it means. Luckily, Linux has several command-line applications that you can use to convert words from one language to another.
In this article, we will discuss two utilities, DeepL Translator and Translate Shell, which allow a user to translate strings to another language directly from the system terminal.
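For a taste of how this works, Translate Shell installs a command named trans; a session might look like the following (a network connection is required, and the example strings are of course arbitrary):

```shell
# Translate a string into Spanish; -b (brief) prints only the translation.
trans -b :es "Good morning"

# Translate from French to English by giving both source and target.
trans -b fr:en "Bonjour le monde"
```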
Nautilus Terminal, a plugin that embeds a terminal into Nautilus file manager, was updated recently with support for the latest Nautilus (Files) 40.
Using the Nautilus Terminal plugin, you can embed a terminal into the Nautilus (Files) file manager window, which can be toggled using the F4 key (this is configurable). The terminal follows the file manager navigation, with cd being automatically executed when navigating through folders in Nautilus. You can also drag and drop files or folders onto Nautilus Terminal, and it will auto-complete their path.
To understand the concept of Docker containers, or containerization in general, it’s important to first understand the Docker container lifecycle. Maintaining a microservice application deployed through containers with complex requirements is not easy. On top of that, the primary way to maintain Docker containers is through the command line, so keeping track of each Docker container can become difficult.
Docker comes packed with tools and commands to manage our containers in the most efficient manner, and leveraging these commands will make your life a lot easier. If you know your Docker container commands, spinning up a Docker container is a piece of cake.
In this tutorial of the Docker tutorial series, we will discuss the Docker container lifecycle in detail. We will discuss all the possible states in the Docker lifecycle and see how to manage containers in all these states with the help of corresponding Docker commands.
You can use the Docker push command to push images to the Docker hub. Docker hub allows us to create repositories where we can store and manage Docker images. Repositories are a set of similar images identified by their tags. For example, the Docker hub contains several versions of Ubuntu images inside the Ubuntu repository. Each Ubuntu image is identified by a separate tag such as xenial, 18.04, 20.04, focal, etc.
Pushing images to the Docker hub is fairly simple. Once you have pushed images to the Docker hub, you can easily share them with your organization members. In fact, you can even use the Docker push command to push images to your private and locally hosted repositories. You can create local private registries using the registry image that the Docker hub provides.
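A minimal push workflow might look like this (the image name myapp and the account myusername are placeholders; a running Docker daemon and a prior docker login are assumed):

```shell
# Tag a local image with your Docker hub username and a version tag.
docker tag myapp:latest myusername/myapp:1.0

# Push it to the Docker hub.
docker push myusername/myapp:1.0

# Pushing to a private, locally hosted registry works the same way:
# the registry host simply becomes part of the tag.
docker tag myapp:latest localhost:5000/myapp:1.0
docker push localhost:5000/myapp:1.0
```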
In this article we will learn what NTP is, how to sync your server's time and date using the systemd-timesyncd network time service, and how to change the timezone in Linux.
You can easily keep your system’s date and time accurate by using NTP (Network Time Protocol). It lets you synchronize computer clocks through network connections and keep them accurate. Basically, a client requests the current time from a remote server and uses it to set its own clock.
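On a system that uses systemd-timesyncd, the basic commands look something like this (a sketch assuming a systemd-based distribution; the timezone is just an example):

```shell
# Check current time, timezone, and whether NTP synchronization is active.
timedatectl status

# Enable NTP synchronization via systemd-timesyncd.
sudo timedatectl set-ntp true

# Change the timezone (list valid names with: timedatectl list-timezones).
sudo timedatectl set-timezone Europe/Berlin
```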
Ubuntu 21.04 was made available last month, and it has been quite the hit with both end users and businesses. Although you won't find a massive amount of new features, what is there should be considered a significant step forward for enterprise and other business use cases.
One particular feature that network and security admins will greatly appreciate is the ability to easily connect Ubuntu Desktop to an Active Directory domain. With this newly added ability, Linux desktops have become a more viable option for companies. The added benefit of this is users will be working on a more reliable and secure platform.
We can use the Docker tag command to add metadata to Docker images. They convey essential information about the version of a specific image. Docker registries such as Docker hub store images in repositories. A repository is a set of similar images but different versions identified using tags. For example, the Ubuntu repository in the Docker hub has several Ubuntu images, but all of them have different tags such as 18.04, focal, xenial, bionic, etc.
Docker tag is just a way to refer to a particular version of an image. A fair analogy is how we use Git tags to refer to specific commits in history. We can use Docker tags to provide specific labels to an image. They can be considered an alias to image IDs.
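To make the alias idea concrete, here is a small sketch (it assumes the ubuntu:20.04 image has already been pulled, and myusername is a placeholder):

```shell
# Give the same image a second name: both tags point at one image ID.
docker tag ubuntu:20.04 myusername/ubuntu:focal

# Listing images shows both tags sharing the same IMAGE ID column.
docker images | grep ubuntu
```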
If you’ve been around for a while in the world of computers and, why not, even cybersecurity, you may have heard the term checksum thrown around here and there, even in casual “how have you been” conversations. That’s mainly because checksums are still a reliable way to assess whether a batch of data or a single item matches certain parameters or has been modified.
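As a quick illustration of the idea, here is a minimal shell session using sha256sum (the file names are placeholders):

```shell
# Work in a throwaway directory and create a sample file.
tmpdir=$(mktemp -d)
printf 'hello world\n' > "$tmpdir/sample.txt"
sum1=$(sha256sum "$tmpdir/sample.txt" | awk '{print $1}')

# An unmodified copy produces the identical checksum...
cp "$tmpdir/sample.txt" "$tmpdir/copy.txt"
sum2=$(sha256sum "$tmpdir/copy.txt" | awk '{print $1}')

# ...while even a one-byte change produces a completely different one.
printf '!' >> "$tmpdir/copy.txt"
sum3=$(sha256sum "$tmpdir/copy.txt" | awk '{print $1}')

[ "$sum1" = "$sum2" ] && echo "copy verified"
[ "$sum1" != "$sum3" ] && echo "modification detected"
```

This is exactly the mechanism behind the checksum files that distributions publish next to their ISO downloads.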
Docker provides us with various tools and utilities to create, manage, and share applications in isolated and packaged environments called containers. It uses multi-layered read-only templates called images to define the container environment which will run our application. We can use some important Docker image commands to maintain and manipulate Docker images easily.
After having worked with Docker for a considerable amount of time, you might have several Docker images already in your system. If you don’t know the basic image commands, it might be very difficult to manage such a huge number of images. To make this easier, Docker allows us to use simple image commands in the command-line to easily manage tons of images simultaneously.
When it comes to the personalization and customizability of Linux desktops, the KDE Plasma desktop environment takes the cake with an incredible amount of themes and tweaks. It offers a wide range of options to make your desktop look unique.
While customizing the icons or splash screen are straightforward in KDE Plasma, not many people know that you can change the login screen theme. This article is there to help you out.
Docker logs provide essential information about the commands and processes that are being executed inside the container. This is helpful in cases when your containers fail to work or crash. You can tail Docker logs to find the exact set of commands that were responsible for the failure. Docker logs also help you monitor the processes inside the container by live-streaming the process details.
Docker provides us with logging mechanisms that can be used to perform debugging at the daemon as well as container level. In this article, we will discuss how to display the container logs and tail Docker logs to get only the specific lines. You can check out our complete free Docker tutorial.
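For reference, the common invocations look like this (the container name web is a placeholder, and a running Docker daemon is assumed):

```shell
# Show everything the container named "web" has logged so far.
docker logs web

# Tail only the last 20 lines, with timestamps added.
docker logs --tail 20 --timestamps web

# Follow the log output live, like tail -f.
docker logs -f web
```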
You can use the Docker start command to start up containers that are stopped. You can use it to start one or more stopped containers simultaneously. The Docker container start command will start the container and run it in the background, starting all the processes inside the container. This is different from the Docker run command, which is used to create a new container: when we execute the run command on an image, it will pull the image, create a new container, and start it automatically. The Docker start command, however, can only be invoked on containers that have already been created.
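A quick sketch of the difference (container names are placeholders; a Docker daemon is assumed):

```shell
# List all containers, including stopped ones.
docker ps -a

# Start a single stopped container; it runs in the background by default.
docker start mycontainer

# Start several stopped containers at once.
docker start db cache web

# Or attach to the container's output while starting it.
docker start -a mycontainer
```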
Docker allows us to use the Docker rm and Docker stop commands to remove or stop one or more containers. However, if you want to stop and remove all the containers simultaneously, you can combine sub-commands to list all containers with the Docker stop and remove commands. Moreover, we can only remove those containers that are not actively running in our host machine.
Hence, it’s necessary to stop all the containers before we try to remove them. We can either use the force option along with the Docker rm command to remove all containers forcefully, or first stop all Docker containers and then remove them. We can also use the Docker kill command along with a sub-command to kill all the containers simultaneously.
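The sub-command combination described above looks like this in practice (destructive: this removes every container on the host, so it is shown purely as a sketch):

```shell
# Stop every running container ($(docker ps -q) expands to their IDs).
docker stop $(docker ps -q)

# Remove all containers; -a includes stopped ones in the listing.
docker rm $(docker ps -aq)

# Or force-remove everything, running or not, in a single step.
docker rm -f $(docker ps -aq)
```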
We can use the Docker commit command to commit changes to Docker containers. Consider the following situation. When you want to run an application inside Docker containers, you might have to install packages and dependencies inside the container. Initially, you can use Dockerfile instructions to install these packages directly. However, once you have created a container, it’s not possible to keep making changes inside the Dockerfile every time you want to install something inside the container.
Also, as soon as you exit the container, all the changes inside it are lost immediately. So, you will have to go through the same process again and again. Hence, if you want the changes to persist, you can use the Docker commit command. The commit command will save any changes you make to the container and create a new image layer on top of it.
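Putting that together, a commit workflow might look like the following sketch (container and image names are placeholders; a Debian/Ubuntu-based container and a running daemon are assumed):

```shell
# Install a package inside a running container.
docker exec my_ubuntu apt-get update
docker exec my_ubuntu apt-get install -y curl

# Save the modified container as a new image, with a commit message.
docker commit -m "add curl" my_ubuntu myusername/ubuntu-curl:1.0

# New containers started from that image keep the change.
docker run --rm myusername/ubuntu-curl:1.0 curl --version
```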
In Kubernetes, cluster capacity planning is critical to avoid overprovisioned or underprovisioned infrastructure. IT admins need a reliable and cost-effective way to maintain operational clusters and pods in high-load situations and to scale infrastructure automatically to meet resource requirements.
You can use the Docker container create command to create a container from an image. However, the container create command only creates a writable container layer over the image. Simply put, it creates a container instance but does not start the container. The container create command is similar to “docker run -d”, with the exception that it never starts the container. You can then use the “docker start” command to start the container whenever you want.
This command is useful when you want to set up the configuration of the container beforehand so that it is ready when you want to start it. After running the container create command, the container's status is “Created”.
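For example, the create-then-start split might look like this (the name web and the port mapping are arbitrary; a running Docker daemon is assumed):

```shell
# Create, but do not start, a container with its configuration set up front.
docker create --name web -p 8080:80 nginx

# The container now exists with the status "Created".
docker ps -a --filter name=web

# Start it later, whenever you are ready.
docker start web
```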
You can use the Docker exec command to execute commands inside running Docker containers. If you already have a Docker container running and you want to execute a command inside it, Docker exec is the tool for the job. The only constraint is that the target container’s primary process (PID 1) must be running.
Suppose you have an Ubuntu container running in the background. And you want to create a file inside the container but you don’t have access to the bash of the container. In such a case, you can use the Docker exec command to run a touch command inside the container. This will create your new file.
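That scenario translates to commands like these (the container name ubuntu_bg is a placeholder; a running Docker daemon is assumed):

```shell
# Run a single command inside the running container: create a file.
docker exec ubuntu_bg touch /tmp/hello.txt

# Verify that the file was created.
docker exec ubuntu_bg ls /tmp

# Or drop into an interactive shell inside the container.
docker exec -it ubuntu_bg bash
```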
You can use the Docker container rm or Docker rm command to remove or delete Docker containers. However, before you remove a container, you need to make sure that the container is not actively running. You can stop containers using the Docker stop command before removing them. Another workaround is to use the --force option to forcefully remove containers. If you want to delete or remove all containers together, you can use a sub-command to list all container IDs along with the Docker rm command.
In this tutorial, we will show you how to install the Vivaldi browser on Ubuntu 20.04 LTS. For those of you who didn’t know, Vivaldi is quite a fast web browser. Created and developed by a former Opera developer, it adds many particularly good modifications and options. For example, it uses the Blink engine, the same one as Google Chrome, so we have guaranteed compatibility and speed. On the other hand, it has quite a high customization capacity at almost all levels.
This article assumes you have at least basic knowledge of Linux, know how to use the shell, and most importantly, host your site on your own VPS. The installation is quite simple and assumes you are running as the root account; if not, you may need to add ‘sudo’ to the commands to get root privileges. I will show you the step-by-step installation of PufferPanel on Ubuntu 20.04 (Focal Fossa). You can follow the same instructions for Ubuntu 18.04, 16.04, and any other Debian-based distribution like Linux Mint.
Creating users in Ubuntu can be done with one of two commands: adduser and useradd. This can be a little confusing at first, because both of these commands do the same thing (in different ways) and are named very similarly. I’ll go over the useradd command first and then I’ll explain how adduser differs. You may even prefer the latter, but we’ll get to that in a moment.
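To preview the difference (the user name jane is a placeholder; both commands require root):

```shell
# useradd is the low-level tool: on its own it creates only the account,
# so flags are needed for a home directory, shell, and comment field.
sudo useradd -m -s /bin/bash -c "Jane Doe" jane
sudo passwd jane

# adduser, on Debian/Ubuntu, is a friendlier front end: it interactively
# prompts for the password and details, and creates the home directory.
sudo adduser jane
```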
If you’re using Ubuntu, you may need to know how to restart your network interface. Thankfully, Ubuntu makes it very easy to restart the network interface. In this guide, we’ll go over various ways you can restart the Ubuntu network interface.
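Which command applies depends on what manages your networking; as a rough sketch (the interface name enp0s3 is a placeholder, and all of these require root):

```shell
# On desktop installs managed by NetworkManager:
sudo systemctl restart NetworkManager

# On servers using systemd-networkd:
sudo systemctl restart systemd-networkd

# On netplan-based Ubuntu releases, re-apply the network configuration:
sudo netplan apply

# Or bounce a single interface directly with iproute2:
sudo ip link set enp0s3 down && sudo ip link set enp0s3 up
```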
When it comes to a server, users are essential: without users to serve, there’s no real need for a server in the first place. The subject of user management within the world of IT is in and of itself quite vast. Entire books have been written on particular authentication methods, and whole technologies (such as Lightweight Directory Access Protocol, or LDAP) exist around it. This article will look at managing users that exist locally on our server and the groups that help define what they can do.
Today we've seen the first official gameplay of the upcoming Total War: WARHAMMER III from Creative Assembly, SEGA and porting studio Feral Interactive.
Before it gets into the gameplay though, it also shows off the Trial By Fire trailer that was released by itself yesterday which gets you in the mood for some action. This is the final game in the Total War Warhammer trilogy, and it's pleasing to see that we will have all three officially on Linux.
The KDE team announced the release of KDE Plasma 5.22 Beta, and it is available to download and test. We take a look at what's coming in this new Plasma release.
KDE Plasma 5.22 brings big changes like the new Plasma System Monitor app introduced in the KDE Plasma 5.21 release as a replacement for KSysguard as the default system monitoring app, a new adaptive panel transparency feature to help you make both the panel and the panel widgets more transparent, support for activities on Wayland, as well as support for searching through menu items from the Global Menu applet on Wayland.
The Task Manager’s “Highlight Windows” feature has been improved as well, to only highlight windows when hovering over their thumbnail in the tooltip by default. It’s now possible to change the text size in sticky note widgets, and accessibility and keyboard navigability have been greatly improved in System Settings, as has overall Wayland support.
This is the Beta release of Plasma 5.22. To make sure that end-users have the best possible experience with Plasma 5.22, KDE is releasing today this test version of the software. We encourage the more adventurous to test-run it and report problems so that developers may iron out the wrinkles before the final release scheduled for the 8th of June.
Plasma 5.22 is gearing up to be a leap forward with regards to stability and usability. Developers have concentrated their efforts on ironing out hundreds of bugs and removing paper cuts, while also tweaking details. The aim of Plasma 5.22 is to allow users to become even more productive and enjoy a smoother experience when using KDE’s Plasma desktop.
GNOME 40 was released at the end of March, and yesterday I added the last bits of it to Gentoo. You may not think that's fast, and you'd be right, but it's a lot faster than any GNOME release has been added to Gentoo that I can recall. I wasn't looking to become Gentoo's GNOME maintainer when I joined the team 18 months ago. I only wanted to use a GNOME release that was a little less stale. So how did I get here?
Pretty cool! Seems I announced OpenIndiana 2020.4 exactly a year ago, on the same day :)
OpenIndiana is a Solaris OS desktop environment based on the Illumos project that continues the OpenSolaris development.
This new ISO contains some vital fixes, like dhcpcd not starting properly, Software Station random crashes, a fix to start VirtualBox properly on the live session, and some software updates.
As we trundle along and northern hemisphere spring interrupts my coding with activities like “you have rhubarb, bake something”, the KDE-on-FreeBSD team keeps chasing rainbows and software updates.
Data protection is essential, especially as you move from development to production. Technologies fail, people make mistakes, disasters happen, and malicious actors disrupt. Implementing effective data protection with the right tools is critical but can be a challenge in today’s hybrid- and multi-cloud IT landscapes.
A startup bringing personal workspaces in the cloud for students, workers, coders, and creators along with a Linux project for developers, system administrators and users are teaming up to extend the use of a secure desktop from any device, anywhere.
Shells and openSUSE Project have entered into a partnership to expand the use of Shells with the availability of openSUSE distributions on Shells’ private virtual desktop environment powered by cloud computing.
[...]
A key member of the Shells tech team involved with the collaboration is Debian developer and former Purism Chief Technical Officer Zlatan Todoric.
“The Shells and openSUSE collaboration is one of those that we all enjoy in the FLOSS community,” said Todoric, who is currently serving as Shells’ Vice President of Technology. “Sharing knowledge, ideas, and helping each other to benefit the entire community is obviously what it is all about. openSUSE is a well known integrator and has vast experience in desktop and cloud environments, and having them as an option for our Shells cloud computers is a win-win solution for everyone. The collaboration will continue to expand even further as time goes on and I can already tell you that the openSUSE experience on Shells is going to be loved.”
Quarkus is an exciting development in open source technologies. This Kubernetes-native Java application framework brings the familiar reliability and maturity of Java with container-ready capabilities and developer-friendly features such as live reload, fast boot time, and imperative and reactive coding styles.
As organizations take advantage of cloud-native microservices architectures, Quarkus allows developers to more quickly build, test, and deploy their applications, improving application time to market.
The Fedora Council has approved a change from the Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0) license to the Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0) license for material classified as “content”. This message is the official public announcement of that change, which is effective as of today, the 13th of May 2021.
IBM has announced Project CodeNet, a large-scale research dataset aimed at helping teach AI to code.
“Computer scientists have been long fascinated by the possibility of computers programming computers,” according to the announcement, but the problem is not easily solved. If, for example, programming language translation were easy, legacy languages like COBOL would have been converted to modern alternatives by now. But, programming languages have context and complexity that go beyond a straightforward rules-based translation approach.
High-speed network packet processing presents a challenging performance problem on servers. Modern network interface cards (NICs) can process packets at a much higher rate than the host can keep up with on a single CPU. So, to scale the processing on the host, the Linux kernel sends packets to multiple CPUs using a hardware feature named Receive Side Scaling (RSS). RSS relies on a flow hash to spread incoming traffic across the RX IRQ lines, which will be handled by different CPUs. Unfortunately, there can be a number of situations where the NIC hardware RSS features fail; for instance, if the received traffic is not supported by the NIC RSS engine. When RSS is not supported by the NIC, the card delivers all packets to the same RX IRQ line and thus the same CPU.
Previously, if hardware features did not match the deployment use case, there was no good way to fix it. But eXpress Data Path (XDP) offers a high-performance, programmable hook that makes routing to multiple CPUs possible, so the Linux kernel is no longer limited by the hardware. This article shows how to handle this situation in software, with a strong focus on how to solve the issue using XDP and a CPUMAP redirect.
[...]
XDP runs an eBPF program at the earliest possible point in the driver receive path, when the DMA RX ring is synced for the CPU. This eBPF program parses the received frames and returns an action or verdict, which is acted on by the networking stack.
As a software developer, your primary role is to deliver bits: pieces of executable ones and zeros that work as designed and expected. How do those bits make it into a container or virtual machine (VM)? Who cares?
You. You care. I know you do, because developers are wired like that. At some point, obviously, you’ve observed your code working properly (because all your code is bug-free, right?), so you’re very much interested in seeing that it runs the exact same way in testing, staging, and production.
Hi Fedora users, developers, and friends! It’s time to start thinking about Test Days for Fedora Linux 35.
For anyone who isn’t aware, a Test Day is an event to get a bunch of interested users and developers together to test a specific feature or area of the distribution. Test Days are usually focused around IRC for interaction and a wiki page for instructions and results. You can run a Test Day on just about anything for which it would be useful to do some fairly focused testing in ‘real time’ with a group of testers; it doesn’t have to be code. For instance, we often run Test Days for l10n/i18n topics. For more information on Test Days, see the wiki.
Anyone who wants to can host their own Test Day, or you can request that the QA group help you out with organization, or any combination of the two. To propose a Test Day, just file a ticket in the fedora-qa repo. See fedora-qa#624 as an example. For instructions on hosting a Test Day, see the wiki.
You can see the schedule by looking at the repo. There are many slots open right now. Consider the development schedule, though, in deciding when you want to run your Test Day. For some topics, you may want to avoid the time before the Beta release or the time after the feature freeze or the Final Freeze.
Site Reliability Engineering (SRE) continues to gain momentum among IT organizations. According to the Upskilling 2021: Enterprise DevOps Skills Report, 47 percent of survey respondents (up from 28 percent in 2020) say SRE is a must-have process and framework skill. As the demand for strong SRE skills rises, so does SRE hiring.
However, a challenge for business and hiring managers is determining which skills, traits, and competencies make a strong site reliability engineer. In light of the upcoming SRE-focused SKILup Day conference, I asked several DevOps Institute Ambassadors and SRE subject matter experts to weigh in on what makes a great SRE. Here’s what they had to say:
In the first article in this series reviewing The Age of Sustainable Development by Jeffrey Sachs, I discussed the impact of economic development on the environment, and I explained how open organization principles can help us begin building sustainable, global economic development plans for the future.
Linux containers have greatly simplified software distribution. The ability to package an application with everything it needs to run has helped increase stability and reproducibility of environments.
While there are many public registries where you can upload, manage, and distribute container images, there are many compelling arguments in favor of hosting your own container registry. Let's take a look at the reasons why self-hosting makes sense, and how Pulp, a free and open source project, can help you manage and distribute containers in an on-premises environment.
The following contributors got their Debian Developer accounts in the last two months:
Jeroen Ploemen (jcfp)
Mark Hindley (leepen)
Scarlett Moore (sgmoore)
Baptiste Beauplat (lyknode)

The following contributors were added as Debian Maintainers in the last two months:
Gunnar Ingemar Hjalmarsson
Stephan Lachnit
Congratulations!
The UBports community this week released Ubuntu Touch OTA-17 as the latest version of this Ubuntu smartphone/tablet spin that is currently supporting more than two dozen different devices.
Ubuntu Touch OTA-17 brings support for the Xiaomi Redmi Note 7 Pro and Xiaomi Redmi 3s/3x/3sp on top of the existing supported devices. It also adds support for near-field communication (NFC) hardware where it's supported by devices with the Android 9 hardware compatibility layer, along with various camera software fixes, a Macedonian keyboard layout, and an upgrade from Mir 1.2 to Mir 1.8.1.
The Ubuntu in the wild blog post rounds up the latest highlights about Ubuntu and Canonical around the world on a bi-weekly basis. It is a summary of all the things that made us feel proud to be part of this journey. What do you think of it?
Sailfish OS is a Linux-based operating system for mobile devices that made its debut on the Jolla Phone, which first shipped in 2013. Jolla shifted its focus from hardware to software years ago, and with Sailfish OS 4.1 rolling out now, the company says it’s ending support for the original Jolla Phone.
Sailfish OS 4.1 does, however, support a number of newer devices including the Jolla C, Jolla Tablet, and several Sailfish Xperia smartphones as well as the Planet Computers Gemini PDA.
The latest version of Sailfish OS also brings a number of bug fixes and a handful of new apps and features.
He added, “It’s a clear indication that the adaptive nature of FPGAs doesn’t need to be relegated to just the power user-programmable logic engineer anymore. And with Ubuntu support on the way, these dev kits could go mainstream in a hurry.”
IAR Systems has extended its build tools portfolio and now supports deployment in Linux-based frameworks for Renesas’ low-power RL78 MCUs, enabling organisations to streamline building and testing workflows.
Forecr’s compact, $905 “DSBox-NX2” edge AI system integrates the Jetson Xavier NX version of its $242 “DSBoard-NX2” carrier board, which also supports the Nano and TX2 NX. Features include 8GB LPDDR4, 16GB eMMC, GbE, HDMI, 2x USB, CAN, and 3x M.2.
Ankara, Turkey based Forecr has begun shipping a DSBox-NX2 embedded computer that runs Ubuntu 18.04 with Nvidia JetPack on Nvidia’s Jetson Xavier NX module. The DSBox-NX2 is based on a Jetson carrier board called the DSBoard-NX2, which, like the DSBox-NX2, appears to have been introduced earlier this year. Forecr is a brand and sub-business of eight-year-old, Ankara-based Mist Elektronik, which created the unit after becoming an Nvidia partner.
Becoming an astronaut is probably one of the top careers on any child’s list, but it’s not all that practical, especially when they’re still seven years old. That’s why Gordon Callison wanted to create a virtual shuttle mission control game that simulates a space shuttle launch with tons of different features for his kid to use.
The project he made is composed of many different panels that form a box with three main surfaces that display/control various aspects of the shuttle’s journey. These include pre-flight checks on the left, launching the shuttle in the middle, and telemetry displays on the right. The whole thing fits neatly into a briefcase, but don’t let that relatively small size mislead you: it’s packed with plenty of LEDs and buttons. To control all of these, Gordon went with an Arduino Mega, along with a couple of shift registers for toggling a bank of 32 LEDs on and off. Sound effects can also be played through an Uno and Adafruit Sound Board whenever the shuttle takes off or finishes orbiting.
There’s really no joy in saving money until it comes time to spend it, of course. But in an effort to gamify things a bit, YouTuber “Max 3D Design” has come up with a beautiful slot machine that surely puts a spin on traditional piggy banks.
The device itself was modeled in Fusion 360, and the fairly substantial design took a week of printing to produce. It features four LED matrices that rotate reel symbols, obscured by a thin film to make them appear as one display. Inside, a screw conveyor system is used to transport coins, which eventually pop out of an opening at the end. This screw is actuated by a small stepper motor, and the gaming process is started by dropping a coin past a pair of wires under the control of an Arduino Uno.
As one of the kernel DCO advocates, I’ve written many times about using the DCO instead of a CLA for copyright and patent contributions under open source licences. In spite of my obvious biases, I’ll try to give a factual overview of the cases for the DCO and CLA system. First, it should be noted that both the DCO and any CLA are types of Contribution Agreements (a set of terms by which contributors are agreeing to be bound). It should also be acknowledged that the DCO is a far more recent invention than CLAs. The DCO was first pioneered by the Linux kernel in 2004 (having been designed by Diane Peters, then of OSDL) and was subsequently adopted by a broad range of open source projects. However, in legal terms, the DCO is much less well understood than a standard CLA type agreement between the contributor and some entity, which is largely the reason you find a number of lawyers still advocating for the use of CLAs in various open source projects: because they’d like to stick with something that has more miles on it, or because they’re invested in the older model of community, largely pioneered by Apache. The biggest problem today is that the operation of most CLAs is asymmetrical: they take from the contributor more rights than the open source code actually needs. So let's begin with a summary of each type of Contribution Agreement.
Software Freedom Conservancy is pleased to announce that Daniel Pono Takamori has joined as Community Organizer and Nonprofit Problem Solver. Takamori brings a wealth of skills acquired in his previous positions at other prestigious FOSS organizations, including the Linux Foundation, the Apache Software Foundation and the Oregon State University Open Source Lab. Takamori has spoken on a variety of topics at FOSS events, including recently as a keynoter at SeaGL.
Enterprises have a deep appreciation for the value of open source software with 100% of the information technology (IT) decision-makers in a recent survey saying that “using open source provides benefits for their organization.” The survey of 200 IT decision-makers was conducted by Vanson Bourne.
[...]
Of the 200 respondents, 25% were from medium-size enterprises of 500-999 employees and 75% were from large enterprises with more than 1,000 employees. They came from a cross-section of industries and had knowledge of open source software.
Large enterprise respondents were most likely to have moved databases and applications to cloud services. Just 15% of large enterprises continue to have all their databases and applications running at their on-premises data center, compared with 29% of medium-size enterprises.
This is the 91st issue of syslog-ng Insider, a monthly newsletter that brings you syslog-ng-related news.
The online DockerCon 2021 will focus on building containerized applications and managing an application's delivery lifecycle. More on the DockerCon live event.
Last year Google introduced support for “back-forward cache” on Android, which enables instantaneous page loading when users navigate using the back or forward buttons. As per a new document spotted by us, the Google Chrome 92 update will also enable default support for back-forward cache on desktop platforms, such as Windows, Linux and macOS.
The Brazilian LibreOffice community is pleased to announce the immediate availability of the Portuguese Impress 7.0 Guide, the complete guidebook for creating high quality presentations in any environment, be it family, cultural or professional.
The book is 330 pages, and details the fundamentals of Impress, before covering the concepts of slide masters, styles, presentation templates, graphic objects, transition effects, object animations, export to other formats and much more. It’s rich in illustrations and examples – as well as scripts for the most important operations when editing and running presentations.
The documentation team in Brazil grew with the arrival of Luciana Mota, Diego Marques Pereira and Márcia Buffon Machado. Here are the newcomers’ messages to all!
Command Popup is a pop-up window that lets you search for commands that are present in the main menu and run them. This was requested in bug tdf#91874 and over time accumulated more than 14 duplicate bug reports, so it was a much-requested feature.
I'm intrigued by similar functionality in other programs, because it enables very quick access to commands (or programs) and at the same time doesn't require moving your hand off the keyboard. It also makes it easy to search for commands - especially in an application like LibreOffice with a humongous main menu. So I decided to try to implement it for LibreOffice.
The GNU project has released several new versions of libraries that are a part of a mysterious project of theirs called "GNUstep". GNUstep is not a desktop environment or a window manager or an application suite; it is, apparently, just a collection of libraries you could use to make those things. The latest GNUstep libraries may be worth exploring if you are a developer who wants to use a graphical toolkit that's not Qt or GTK to create applications that are suitable for those square computer monitors with a 800x600 pixel resolution that were trendy in the early 1990s.
Greetings, hackers of spaceship Earth! Today's missive is about cross-module inlining in Guile.
a bit of history
Back in the day... what am I saying? I always start these posts with loads of context. Probably you know it all already. 10 years ago, Guile's partial evaluation pass extended the macro-writer's bill of rights to Schemers of the Guile persuasion. This pass makes local function definitions free in many cases: if they should be inlined and constant-folded, you are confident that they will be. peval lets you write clear programs with well-factored code and still have good optimization.
The peval pass did have a limitation, though, which wasn't its fault. In Guile, modules have historically been a first-order concept: modules are a kind of object with a hash table inside, which you build by mutating. I speak crassly but that's how it is. In such a world, it's hard to reason about top-level bindings: what module do they belong to? Could they ever be overridden? When you have a free reference to a, and there's a top-level definition of a in the current compilation unit, is that the a that's being referenced, or could it be something else? Could the binding be mutated in the future?
Some projects use .C as a file extension for C++ source code. This is ill-advised, because it can't really be made to work automatically and reliably.
In this week's TPF Marketing Committee meeting I made an elevator pitch for a "Perl Community Dashboard". It was well received, so I have taken the action item to expound upon the idea here to gather more input. Understand this then as the minimum viable product to go from 0 to 1, something achievable that we can build upon.
This is the second in a series of articles about features that first appeared in a version of Python 3.x. Python 3.1 was first released in 2009, and even though it has been out for a long time, many of the features it introduced are underused and pretty cool. Here are three of them.
[...]
Python allows the -m flag to execute modules from the command line. Even some standard-library modules do something useful when they're executed; for example, python -m cgi is a CGI script that debugs the web server's CGI configuration.
However, until Python 3.1, it was impossible to execute packages like this. Starting with Python 3.1, python -m package will execute the __main__ module in the package. This is a good place to put debug scripts or commands that are executed mostly with tools and do not need to be short.
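The behaviour is easy to demonstrate: a package only needs a __main__.py for python -m to have something to run. A small self-contained sketch (the package name mypkg is made up for illustration):

```python
# Build a throwaway package containing a __main__.py, then execute it
# with "python -m <package>", which works since Python 3.1.
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    pkg = os.path.join(tmp, "mypkg")
    os.mkdir(pkg)
    open(os.path.join(pkg, "__init__.py"), "w").close()
    with open(os.path.join(pkg, "__main__.py"), "w") as f:
        f.write('print("running mypkg.__main__")\n')

    # "python -m mypkg" locates mypkg/__main__.py on sys.path and runs it;
    # with -m, the current working directory is prepended to sys.path.
    result = subprocess.run([sys.executable, "-m", "mypkg"],
                            cwd=tmp, capture_output=True, text=True)

print(result.stdout.strip())  # → running mypkg.__main__
```

Running `python -m mypkg` before Python 3.1 would instead fail with an error that mypkg is a package and cannot be directly executed.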
Python 3.0 was released over 11 years ago, but some of the features that first showed up in this release are cool—and underused. Add them to your toolkit if you haven't already.
The Rust 2021 Edition Working Group has scheduled the new version for release in October, with what it says are small changes that amount to a significant improvement.
This is the "third edition of the Rust language," said Mara Bos, founder and CTO of Fusion Engineering and a Rust Library Team member. The previous editions are Rust 2015 and Rust 2018.
"Edition" is a special concept in Rust, as explained here. Updates to Rust ship frequently, but the special feature of an edition is that it can include incompatible changes. A crate (Rust term for a library) has to be explicitly configured to support an edition so older code will continue to work correctly. The Rust compiler can link crates of any edition.
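In practice the edition is just a per-crate opt-in declared in Cargo.toml; a sketch (the crate name is hypothetical):

```toml
[package]
name = "my-crate"    # hypothetical crate name
version = "0.1.0"
edition = "2018"     # change to "2021" to opt in; crates on older editions still link
```

Because the setting is per crate, a dependency left on Rust 2015 or 2018 keeps compiling under its declared edition even when the rest of the build moves to 2021.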
It’s the most FUD-iest time of the year, these days leading up to the tax filing deadline. FUD is a sales and marketing acronym for “fear, uncertainty and doubt,” sensations that can motivate customers to spend money and act rashly in order to alleviate their anxieties. And no industry is better at turning FUD into gold than online tax preparation services. These companies offer to take wary and confused filers by the digital hand with such programs as TurboTax and lead them through the forms and schedules that must be accurately filled out and postmarked this year by midnight this coming Monday.
[...]
But lobbyists have successfully beaten back every effort to get the IRS into the tax-prep business, generally by throwing up a lot of FUD about big government, conflicts of interest and even creeping socialism. ProPublica noted that when Barack Obama was campaigning for president in 2007 he pledged to implement a simple return system at the IRS: “No more worry,” he said. “No more waste of time, no more extra expense for a tax preparer.” No way, said the lobbyists. No way, echoed the invertebrate U.S. Congress. Objections to the idea don’t pass the smell test.
The team behind the KrakenD stateless, distributed, high-performance API gateway that enables microservices adoption, announced this week that the Linux Foundation will host the continued development of the gateway under a new name: the Lura Project.
KrakenD was created five years ago as a library for engineers to create fast and reliable API gateways. It has been in production among a range of Internet companies since 2016.
WebAssembly, or Wasm for brevity, is a standardized binary format that allows software written in any language to run without customizations on any platform, inside sandboxes or runtimes – that is virtual machines – at near native speed. Since those runtimes are isolated from their host environment, a WebAssembly System Interface (WASI) gives developers – who adopt Wasm exactly to be free to write software once, but ignoring where it will run – a single, standard way to call the low-level functions that are present on any platform.
The previous article in this series describes the goals, design principles and architecture of WASI. This time, we present real-world, usable projects and services based on WASI that also clarify its role in the big picture: to facilitate the containerization of virtually any application, much more efficiently than bulkier containers like Docker can.
Threat actors are abusing the Microsoft Build Engine (MSBuild) to deploy remote access tools (RATs) and information-stealing malware filelessly as part of an ongoing campaign.
MSBuild (msbuild.exe) is a legitimate and open-source Microsoft development platform, similar to the Unix make utility, for building applications.
CloudLinux announced today that its KernelCare service for the Raspberry Pi platform adds support for the Raspberry Pi OS (previously called Raspbian), the most widely used operating system on the popular low-cost platform.
KernelCare for IoT already supports Ubuntu Focal Fossa for 64-bit ARM, and now adds support for Raspberry Pi OS, the operating system officially provided by the Raspberry Pi Foundation. Raspberry Pi OS is a free Debian-based operating system specifically optimized for Raspberry Pi hardware.
Yesterday, Mozilla and Google filed a joint submission to the public consultation on amending the Information and Communications Technology (ICT) Act organised by the Government of Mauritius. Our submission states that the proposed changes would disproportionately harm the security of Mauritian users on the internet and should be abandoned. Mozilla believes that individuals’ security and privacy on the internet are fundamental and must not be treated as optional. The proposals under these amendments are fundamentally incompatible with this principle and would fail to achieve their projected outcomes.
Under Section 18(m) of the proposed changes, the ICTA could deploy a “new technical toolset” to intercept, decrypt, archive and then inspect/block https traffic between a local user’s Internet device and internet services, including social media platforms.
Another couple of weeks passed. A lot of things are happening; there is much anger and depression in folks due to the handling of the pandemic, and many are willing to blame everybody else, including the population. Many of them want forced sterilization like what Sanjay Gandhi did during the Emergency (1975). I had to share ‘So Long, My Son‘, a very moving tale of two families and what happened to them during the one-child policy in China. I was so moved by it and couldn’t believe that the Chinese censors allowed it to be produced, shot, edited, and then shared worldwide. It also won a couple of awards at the 69th Berlin Film Festival: the Silver Bear for best actor and for best actress. But more than the awards, it was the theme, the concept, and the length of the movie that were astonishing. Over its 3-hour-something runtime, it paints a moving picture of love, loss, shame, relief, anger, and asking for forgiveness, all of which can be identified with by any rational person with feelings worldwide.
[...]
I had written about caste issues a few times on this blog. This again came to the fore as news came that a Hindu sect used forced labor from the Dalit community to make a temple. This was also covered by The Hill. In both reports, Mr. Joshi doesn’t explain why, if they were volunteers, their passports were taken forcibly. I also looked at both the minimum wage prevailing in New Jersey as a state and the wage given to those in the construction industry. Even against the minimum wage, they were being paid $1 when the prevailing minimum wage for unskilled work is $12.00, and since Mr. Joshi shared that they are specialized artisans, they should be paid between $23 – $30 per hour. If this isn’t exploitation, then I don’t know what is.
And this is not the first instance; the first instance was perhaps the case against Cisco, which was brought by a John Doe. While I had been busy with other things, it seems Cisco had put up both a demurrer petition and a petition to strike, which the Court stayed. This seemed all over again to be a type of apartheid practice, only this time applied to caste. The good thing is that the court stayed the petition.
Dr. Ambedkar’s statement “if Hindus migrate to other regions on earth, Indian caste would become a world problem”, given at Columbia University in 1916, seems to be proven right in today’s time and sadly has aged well. But this is not just something which exists only in the U.S.; it is there in India even today. Just a couple of days back, a popular actress, Munmun Dutta, used a casteist slur and then later apologized, giving the excuse that she didn’t know Hindi. This is patently false, as she has been in the Bollywood industry for almost 16-17 years now. This again was not an isolated incident. Seema Singh, a lecturer in IIT-Kharagpur, abused students from SC, ST backgrounds and was later suspended. There is an SC/ST Atrocities Act, but that has been diluted by this Govt. A bit on the background of Dr. Ambedkar can be found in a blog on the Columbia website. As I have shared and asked before, think about why the Age of Enlightenment or the Age of Reason happened. If I were a fat monk or a priest who was privileged, would I have let the Age of Enlightenment happen? It broke religion, or rather the Church, which was most powerful, into something not so powerful, and that power was distributed more among all sorts of thinkers, philosophers, tinkerers, inventors and so on.
Back in 2005, an ebullient Apple CEO Steven P. Jobs announced the integration of podcasting into Version 4.9 of its desktop iTunes software, calling podcasting “TiVo for radio.”
Sixteen years later, during its April 20, 2021, “Spring Loaded” event, Apple has once again signaled a long-term corporate commitment to podcasting. But this time, instead of introducing listeners to the medium, Apple is creating the technical infrastructure for paid subscriptions through its Apple Podcasts service.
Creators will now have the option to require a payment for audiences to access their content on Apple’s platform, with Apple taking a 30% cut of the revenue.