I like round-number milestones. Especially if they allow one to showcase nice things. For example, some time ago, I managed to revitalize my fairly ancient LG laptop by installing MX Linux on it. This restored a great deal of speed and nimbleness to the system, allowing it to remain modern and relevant for a bit longer.
Now that my HP machine has reached its double-digit age, I thought of upgrading its Linux system. At the moment, the machine dual-boots Windows 7 (indeed, relax) and Kubuntu 20.04. Things work reasonably well. Spec-wise, the 2010 laptop comes with a first-gen i5 processor, 4 GB of RAM, 7,200rpm hard disk, and Nvidia graphics. Technically, not bad at all, even today. Well, I decided to try some modern distro flavors, to see what gives.
[...]
Trawling through the online forums, I've found a few other mentions of similar problems. Of course, almost every legacy system issue is rather unique, so I can't draw any concrete conclusions here. But it does feel like Linux is leaving old stuff behind. 'Tis a paradox really. On one hand, Linux is well known for being able to run on ancient, low-end hardware (and prides itself on doing so). On the other hand, providing and maintaining support for an endless variety of ancient systems is difficult.
And if you do recall my older content, I had a somewhat similar problem on my T42 laptop. Back when it had its tenth birthday, I booted it up after a long pause and tried using Linux on it yet again. And I had problems finding Linux drivers for its ATI card - Windows drivers were easily and readily available. The problems aren't identical, but they are definitely indicative. Oh well. I may continue testing and playing with the old HP Pavilion, but I might not be able to really show you how well it carries into the modern age. Hopefully, you found something useful in this wee sad article.
Folks have been using inexpensive single-board computers like the Raspberry Pi to create DIY home servers for about as long as inexpensive SBCs have been a thing. But the ZimaBoard is one of the first I’ve seen that’s custom made to be used as a DIY, hackable server.
The ZimaBoard is a small, fanless computer powered by a 6-watt Intel Apollo Lake processor with support for hard drives and SSDs.
Unlike Discord, Matrix doesn't make you pay to use your own custom emotes or stickers; you just need to go and host them yourself. Luckily, doing so is surprisingly easy and can be done for free.
Etebase is a set of client libraries and a server for building end-to-end encrypted applications. Tom Hacohen, who previously appeared on FLOSS Weekly episode 524 to talk about securely syncing contacts, calendars, tasks and notes with his product EteSync, is back to talk about his new baby: Etebase. This is a great discussion as more and more consumers and users are interested in encryption and securing their private information across all platforms they use today.
ARM virtualization company Corellium has managed to get Ubuntu Linux running on the next-generation Apple M1.
The news comes from Corellium CTO Chris Wade, who mentioned on Twitter how "Linux is now completely usable on the Mac mini M1. Booting from USB a full Ubuntu desktop (rpi). Network works via a USB c dongle. Update includes support for USB, I2C, DART. We will push changes to our GitHub and a tutorial later today."
Impressive, speedy work, and a separate project from the recently revealed Asahi Linux, which is also aiming to do the same thing. Two heads are better than one, as they say. The Corellium team mentioned on Twitter that they fully back the Asahi project too, so it's wonderful to see true cooperation.
Right now this effort doesn't appear to have full GPU acceleration, so it's doing software rendering, making it less suitable for a daily driver, but work is ongoing towards that. Eventually everything will be in place, and it's taking far less time than I personally expected to see it running on such brand-new hardware from Apple.
The initial announcement came with a warning that the "very early" beta was for "advanced users only", and that USB support and a more complete release was on the way.
As Wade has now noted, users can now boot from USB to a full Ubuntu desktop.
Corellium’s CTO Chris Wade on Wednesday tweeted two photos of Ubuntu’s Groovy Gorilla running on the Mac Mini M1, adding that it was “completely usable” after booting from a ‘live’ USB drive.
The Linux Memory Management layer supports the very common technique of virtual memory. Linux splits blocks of virtual memory into areas specified by the C structure vm_area_struct. Each vm_area_struct contains information associated with mapped memory and is used to find the associated pages of memory which contain the actual information. Virtual memory areas (VMAs) could be the contents of a file on disk, the memory that contains the program, or even the memory the program uses during execution. Literally everything that is run on Linux uses vm_area_struct for memory mapping. This vital area of the kernel needs to be quick and avoid contention whenever possible.
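For a concrete feel of what these VMAs are, here is a minimal, Linux-only Python sketch (not from the article): every line of /proc/<pid>/maps is rendered by the kernel from one vm_area_struct of that process, so listing the file is an easy way to peek at a process's VMAs from user space.

```python
# Sketch: each line of /proc/<pid>/maps corresponds to one vm_area_struct (VMA).
from pathlib import Path

def list_vmas(pid="self"):
    """Yield (start, end, perms, backing) for every VMA of a process."""
    for line in Path(f"/proc/{pid}/maps").read_text().splitlines():
        fields = line.split()
        addr_range, perms = fields[0], fields[1]
        backing = fields[5] if len(fields) > 5 else "[anonymous]"
        start, end = (int(x, 16) for x in addr_range.split("-"))
        yield start, end, perms, backing

if __name__ == "__main__":
    for start, end, perms, backing in list_vmas():
        print(f"{start:#x}-{end:#x} {perms} {backing}")
```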
With the BUS1 in-kernel IPC not panning out and not seeing any major code work in nearly two years, the user-space based, D-Bus compatible DBus-Broker remains the performant and current option for those looking at something faster and more reliable than D-Bus itself.
One of the last pieces of the puzzle for supporting an entirely Vulkan-based Wayland compositor is coming together with a new extension that looks like it will be merged soon, and work is already pending against Sway/wlroots to make use of the Vulkan path.
The VK_EXT_physical_device_drm extension to Vulkan has been in the works for a number of months and allows mapping Vulkan physical devices to DRM nodes. VK_EXT_physical_device_drm allows for querying DRM properties for physical devices and in turn matching them with DRM nodes on Linux systems.
For those still making use of pre-GCN AMD graphics cards supported by the R600 Gallium3D driver (namely the Radeon HD 5000/6000 series), the open-source "R600g" Gallium3D driver now has nearly feature complete NIR support.
Gert Wollny has been nearly single-handedly working on NIR support for the R600g driver to make use of this modern graphics driver intermediate representation as an alternative to the long-standing Gallium3D TGSI IR.
According to Istio’s support policy, LTS releases like 1.7 are supported for three months after the next LTS release. Since 1.8 was released on November 19th, support for 1.7 will end on February 19th, 2021.
At that point we will stop back-porting fixes for security issues and critical bugs to 1.7, so we encourage you to upgrade to the latest version of Istio (1.8.2). If you don’t do this you may put yourself in the position of having to do a major upgrade on a short timeframe to pick up a critical fix.
The VideoLAN team announced the release of VLC 3.0.12 as the thirteenth version of the “Vetinari” branch.
The new release features native support for Apple Silicon hardware, the M1 processor in new versions of the MacBook Air, MacBook Pro, and Mac mini.
Deskreen is a new free and open source application that can turn any device with a web browser (on the same WiFi / LAN network) into a second screen for your computer. The tool runs on Linux, Windows and macOS.
With Deskreen you can use a phone, tablet (no matter if they use Android, iOS, etc.), smart TV and any other device that has a screen and a web browser (without needing any plugins; it needs JavaScript to be enabled), as a second screen via WiFi or LAN.
Docker is a combo of ‘platform as a service’ products and services which use OS virtualisation to provide software in packages called containers.
Containers contain everything an app, tool or service needs to run, including all libraries, dependencies, and configuration files. Containers are also isolated from each other (and the underlying host system), but can communicate through pre-defined channels.
Sometimes video and audio need to be separated into individual files (aka demuxed). This can be handy when some audio artifacts need to be removed (e.g. noise or buzz) from the audio track (aka stream). This can be done easily...
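One common way to do this is with ffmpeg's stream-copy options. Here is a hedged Python sketch that simply shells out to ffmpeg (the input/output filenames are placeholders; ffmpeg must be installed):

```python
# Sketch: demux a file into a video-only and an audio-only file with ffmpeg.
# "-c ... copy" copies the stream without re-encoding; "-y" overwrites outputs.
import subprocess

SRC = "input.mp4"  # placeholder input file

# Keep only the video stream (-an drops audio).
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-an", "-c:v", "copy", "video_only.mp4"], check=True)

# Keep only the audio stream (-vn drops video).
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-vn", "-c:a", "copy", "audio_only.m4a"], check=True)
```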
Container technology provides a means for developers and system administrators to build and package applications together with libraries, binaries, and configuration files so they can run independently from the host operating system and kernel version. You can run the same container application, unchanged, on laptops, data center virtual machines, and on a cloud environment.
WoofQ is the build system for EasyOS. It has scripts '0setup', '1download', '2createpackages' and '3buildeasydistro', that are run in that order. The script '2createpackages' splits each input package into _EXE, _DEV, _DOC and _NLS components.
Recently, when compiling LibreOffice in EasyOS on the Pi4, the configure step reported that the system boost libraries cannot be used, as some header files were missing. So, I had to use the internal boost, which does make the final LibreOffice PET bigger than it could have been.
Want to install packages on Arch Linux but do not know how? A lot of people face this problem when they first migrate from Debian-based distributions to Arch. However, you can easily manage packages on your Arch-based system using package managers.
Pacman is the default package manager that comes pre-installed in every Arch distribution. But still, there's a need for other package managers as Pacman doesn't support packages from the Arch User Repository.
Systemd is a standard for managing start-up services in Linux operating systems. It is used for controlling which programs run when the Linux system boots up. It is a system manager and has become the new standard for Linux operating systems. Systemd allows you to create a custom systemd service to run and manage any process. In this tutorial, we will explain how to manage services with systemd on Linux.
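As a rough illustration of what creating a custom service involves, here is a minimal Python sketch (not from the tutorial). The unit name "myapp", the ExecStart path and the description are invented placeholders, and the script has to run as root:

```python
# Sketch: install, enable and start a hypothetical "myapp" systemd service.
import subprocess
from pathlib import Path

UNIT = """\
[Unit]
Description=My example application

[Service]
ExecStart=/usr/local/bin/myapp --serve
Restart=on-failure

[Install]
WantedBy=multi-user.target
"""

# Write the unit file where systemd looks for administrator-defined units.
Path("/etc/systemd/system/myapp.service").write_text(UNIT)

subprocess.run(["systemctl", "daemon-reload"], check=True)                      # pick up the new unit
subprocess.run(["systemctl", "enable", "--now", "myapp.service"], check=True)   # start now and at boot
subprocess.run(["systemctl", "status", "myapp.service"])                        # inspect the result
```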
In this video, we are looking at how to install Synfig Studio on Linux Mint 20.1.
In this video, I am going to show how to install Ubuntu Unity Remix 20.10.
In Linux, the find command is used to search for files or folders from the command line. It is a complex command and has a large number of options, arguments, and modes.
The most common use of the find command is to search for files using either a regular expression or the complete filename(s) to be searched.
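To give a flavour of that most common use, here is a small Python sketch that simply shells out to find (the search root, name pattern and -mtime window are placeholders):

```python
# Sketch: run "find" from Python and collect the matching paths.
import subprocess

result = subprocess.run(
    ["find", ".", "-name", "*.py", "-mtime", "-7"],  # *.py files under the current directory modified in the last 7 days
    capture_output=True, text=True,
)
matches = result.stdout.splitlines()
print(f"{len(matches)} files found")
```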
In Linux, the command ‘cp‘, which stands for ‘copy‘, is used to copy files and folders to another folder. It is available by default in Linux as part of the GNU Coreutils set of tools.
The most basic use of the cp command is to specify the files to be copied as the arguments and to specify the target folder as the last argument.
We use the cp command in Linux to copy files and directories from one directory to another. It can be simply used to copy a few files or directories, or it can be used with the '-r' argument (which stands for ‘recursive‘) to copy a directory and the whole directory tree structure underneath it.
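The same two operations can be expressed with Python's shutil module, shown here as a hedged point of comparison (the file and directory names are placeholders):

```python
# Sketch: rough Python equivalents of "cp file target/" and "cp -r dir target/dir".
import shutil

shutil.copy2("notes.txt", "/tmp/backup/")          # like: cp notes.txt /tmp/backup/  (copy2 keeps metadata)
shutil.copytree("project", "/tmp/backup/project")  # like: cp -r project /tmp/backup/project
```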
The ‘/dev‘ directory in Linux and Unix based systems contains files corresponding to devices attached to the system. For example, as seen in the screenshot below, the CD drive is accessed using ‘cdrom‘, DVD drive with ‘dvd‘, hard drives are accessed using ‘sda1‘, ‘sda2‘, etc.
All these devices communicate with the Linux system through their respective files in ‘/dev‘. The input/output processing of the devices takes place through these files. This is due to an important feature of the filesystem in Linux: everything is either a file or a directory.
/dev/null is a pseudo-device file in Linux, which is used to discard output coming from programs, especially the ones executed on the command line. This file behaves like a sink, i.e. a target file which can be written to; however, as soon as any stream of data is written to this file, it is immediately discarded.
This is useful to get rid of the output that is not required by the user. Programs and processes can generate output logs of huge length, and it gets messy at times to analyze the log.
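Here is a short Python sketch of the same idea (the noisy command is just an example; os.devnull resolves to /dev/null on Linux):

```python
# Sketch: two ways to throw output away via /dev/null from Python.
import os
import subprocess

# Silence a noisy command entirely (the command here is just a convenient example).
subprocess.run(["ls", "-lR", "/etc"],
               stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

# Or write to the device file directly; the data is simply discarded.
with open(os.devnull, "w") as sink:
    sink.write("this text goes nowhere\n")
```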
Evolved from Unix, Linux provides users with a low-cost, secure way to manage their data center infrastructure. Due to its open source architecture, Linux can be tricky to learn; it requires command-line interface knowledge, and you should expect inconsistent documentation.
In short, Linux is an OS. But Linux has some features and licensing options that set it apart from Microsoft and Apple OSes. To understand what Linux can do, it helps to understand the different Linux OS components and associated lingo.
In Linux, programs are very commonly accessed using the command line and the output, as such, is displayed on the terminal screen. The output consists of two parts: STDOUT (Standard Output), which contains information logs and success messages, and STDERR (Standard Error), which contains error messages.
Many times, the output contains a lot of information that is not relevant, and which unnecessarily utilizes system resources. In the case of complex automation scripts especially, where there are a lot of programs being run one after the other, the displayed log is huge.
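As a quick illustration of keeping the two streams apart, here is a minimal Python sketch (the ls call on a missing path is only there to produce an error message):

```python
# Sketch: capture STDOUT and STDERR separately from a command.
import subprocess

result = subprocess.run(["ls", "/nonexistent", "/etc/hostname"],
                        capture_output=True, text=True)
print("STDOUT:", result.stdout.strip())   # success output
print("STDERR:", result.stderr.strip())   # error messages, kept apart from STDOUT
```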
To move files from one directory to another, the ‘mv‘ command is used in Linux. This command is available in Linux by default and can be used to move files as well as directories.
In this article, you will learn how to list file directory structure and limit the depth of recursive file display in Linux.
We will use the top command-line tool, a task manager for Unix and Linux systems that shows details about running processes, including memory usage.
In this article, you will learn how to extract Email addresses from a text file in Linux, using the handy command-line tool Grep.
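The same extraction can be mirrored in Python with the re module, roughly what a `grep -E -o` invocation does; the pattern below is a deliberately simplified placeholder and the filename is invented:

```python
# Sketch: pull email addresses out of a text file (simplified regex, placeholder filename).
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

with open("contacts.txt") as fh:
    emails = sorted(set(EMAIL_RE.findall(fh.read())))

print("\n".join(emails))
```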
You've heard it before: change your password regularly. That can sometimes seem like a pain, but fortunately, changing your Linux password is easy. Today we'll show you how to change the current user's password, other users' passwords, and the superuser password with a few simple commands.
It is very easy to generate random numbers in Unix. The easiest way is to use the variable $RANDOM.
Every time you echo $RANDOM, you get a new number between 0 and 32767.
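For comparison, here is a small Python sketch that asks bash for $RANDOM and also shows a pure-Python stand-in (assumes bash is installed):

```python
# Sketch: read bash's $RANDOM from Python, plus a native equivalent.
import random
import subprocess

out = subprocess.run(["bash", "-c", "echo $RANDOM"],
                     capture_output=True, text=True, check=True)
print("bash $RANDOM:", out.stdout.strip())          # a value between 0 and 32767
print("python equivalent:", random.randint(0, 32767))
```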
This guide will walk you through the steps to check or find the IP address in Linux using the ip and hostname commands from the command-line interface.
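For a quick taste, here is a hedged Python sketch that reports the machine's address in two ways (the 8.8.8.8 target is only used to let the kernel pick an outgoing interface; no traffic is sent for a UDP connect):

```python
# Sketch: two common ways to find the host's IP address from Python.
import socket
import subprocess

# "hostname -I" prints the host's configured addresses on Linux.
print(subprocess.run(["hostname", "-I"], capture_output=True, text=True).stdout.strip())

# Or ask the kernel which local address would be used to reach an outside host.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.connect(("8.8.8.8", 80))
    print(s.getsockname()[0])
```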
Node.js is an open-source and cross-platform JavaScript runtime environment used to run JavaScript code on the server-side. It is primarily used for non-blocking, event-driven servers, traditional web sites and back-end API services.
You already know how to install Node.js and NPM in three different ways. If your application is running on the Node.js server, then I would recommend updating your Node.js version regularly to improve security. There are several ways you can update your Node.js version on a Linux system.
Don’t use a certain application anymore? Remove it.
In fact, removing programs is one of the easiest ways to free up disk space on Ubuntu and keep your system clean.
In this beginner’s tutorial, I’ll show you various ways of uninstalling software from Ubuntu.
Apache server is one of the most popular open source web servers, developed and maintained by the Apache Software Foundation. Apache is by far the most commonly used web server application in Linux operating systems, but it can be used on nearly all OS platforms: Windows, macOS, OS/2, etc. It enables developers to publish their content over the internet.
In this article, we will explain how to install and configure the Apache webserver on Debian 10 OS.
Spotify is a free music streaming service that offers additional premium content at a minimal subscription fee. It's a widely successful music service with several million users and millions of songs at your fingertips. With Spotify, you can listen to your favorite artists, the latest hits, exclusives, and new discoveries on the go. Spotify is available on Windows, macOS, Linux (Debian), along with Android, iOS, and Windows Phone smartphones and tablets.
We will learn in this article how to install Spotify on the latest version of Ubuntu, Mint, and Fedora.
SOGo is a free and open-source collaborative software with a focus on simplicity and scalability. It provides an AJAX-based Web interface and supports multiple native clients through the use of standard protocols such as CalDAV, CardDAV, and GroupDAV, as well as Microsoft ActiveSync. It also offers address book management, calendaring, and Web-mail clients along with resource sharing and permission handling.
In this tutorial, we will show you how to install SOGo on an Ubuntu 20.04 based virtual private server.
Learn how to install LXD on an Ubuntu Linux system, including how to install and initialise LXD manually, use --preseed, and how to script the lxd install.
iTunes for Linux doesn’t sound realistic because officially it is available only for Windows and macOS. However, using Wine on Ubuntu and other Linux distributions, running it is absolutely possible, just like any other native Linux application.
Those who are using Apple devices can understand the value of the iTunes application on their systems. It lets you not only listen to music available on your iPhone, PC, and other devices but also access various other things such as Radio, the iTunes Store, and more. Once logged in with an Apple ID, in addition to managing, playing, and downloading music tracks, the iTunes app also enables direct access to Apple Music, Apple's music streaming service.
Enhance your system security with tlog, a terminal logging utility.
Upgrading from one Ubuntu version to the latest is one of the best features of Ubuntu. It is always recommended to upgrade your current Ubuntu version regularly in order to benefit from the latest security patches. You will get several benefits with a new version, including the latest software, new security patches and upgraded technology.
As of now, Ubuntu 20.04 LTS is the latest Ubuntu version and you will keep getting updates and support till April 2025.
Before starting any upgrade process, it is a good idea to back up any important files, system settings, and critical content as a precaution. Also remember, you cannot downgrade: you cannot go back to Ubuntu 18.04 without reinstalling it.
Do you script in bash? If so, you can provide your users with a more robust and simple TUI for entering information into scripts.
Krita is a free and open-source painting tool for artists, also known as a Photoshop alternative. Krita has been in development for 10+ years, and it has recently come into its own and is getting a good response.
This tutorial will be helpful for beginners to install Krita 4.4.2 in Ubuntu 20.10, Ubuntu 20.04, Ubuntu 18.04 and Linux Mint 20.1, and older versions.
The latest version of Krita is 4.4.2, announced with over 300 changes along with new features.
This tutorial will be helpful for beginners to install VLC in Ubuntu 20.04, Ubuntu 20.10, Ubuntu 18.04 and Linux Mint 20.1.
VLC is a free and open-source cross-platform multimedia player and one of the best media players for Linux, used by millions of people to play multimedia files such as DVD, VCD, MP4, MKV, MP3, and various other formats.
VLC has released 3.0.12, the thirteenth version of the “Vetinari” branch.
Wine, the popular compatibility layer for running Windows apps on Linux, has recently released version 6.0 with major improvements. This is undoubtedly the project's first major release of 2021, in keeping with Wine's schedule of making one major release every year with improvements and fresh updates.
Wine cannot be listed among emulators, contrary to how it has sometimes been described. It is a compatibility layer designed to allow games and apps to run in non-native environments like Linux, and it was originally aimed only at Microsoft Windows software.
With Wine, Linux users can easily access more than 27,000 Windows apps and games on Linux, including popular ones such as Adobe Photoshop and Microsoft Office. Wine 6.0 arrives after a year's worth of development that saw over 8,300 changes, as shared in the release announcement by Alexandre Julliard, the project's creator.
Gabe Newell of Valve Software (Steam) recently spoke to 1 NEWS in New Zealand about everything that has been going on and teased a few fun details. For those who didn't know, Newell has been staying in New Zealand since early 2020 and decided to stay after a holiday when COVID-19 got much worse.
Newell continues to talk very highly of New Zealand, even somewhat jokingly mentioning that some Valve staffers now appear to strongly want to move their work over there too. Newell mentioned that there's no reason other game companies couldn't move to New Zealand, and joked how they're a producer of "not-stupidium", seemingly referring to how well New Zealand has dealt with COVID-19.
[...]
Nice to see they continue to keep Linux in their sights for games too, with all their recent games (Artifact, Underlords and Half-Life: Alyx) having Linux builds; although Linux is not directly mentioned on the store page for Alyx, it is available.
Firaxis has confirmed the next DLC that forms part of the New Frontier Pass for Civilization VI will be releasing on January 28. Here's some highlights of what's to come.
While the full details are yet to be released, Firaxis did a developer update video to tease some of it. There's going to be a new civilization, with Vietnam joining the world, two new leaders for existing civilizations (China and Mongolia), and a new "Monopolies and Corporations" game mode with expanded economic options, which sounds really quite interesting.
Krita is one of the best open-source paint applications available for Linux. With their latest 4.4.2 release, it should get more exciting for all the users across multiple platforms.
In their official announcement, they mention it as a “bugfix release” but do not let that fool you. It is indeed a significant release with over 300 changes and some new key feature additions to let you make the most out of it.
Krita, the free, open-source painting program, released version 4.4.2 yesterday with some key new features, though it’s mainly a bug-fix release.
Krita 4.4.2 comes with over 300 changes.
My outreachy internship has definitely taught me a lot of things including writing blog posts, reporting tasks, expressing myself and of course improving as a developer. When we developed a project timeline before submitting the final application weeks back, my mentor and I underestimated some of the issues because there were some hidden difficulties we only found out later.
Initially, my timeline was set to use the first week to understand the inner workings of the debugger, weeks 2-4 on the backtrace full command, and weeks 5-7 to display the current line of the source code when displaying the current frame in the debugger; the tasks for weeks 8-13 were still to be decided upon by my mentor and me during the course of the internship.
In my previous post I discussed my most recent contributions to flexbox code in WebKit, mainly targeted at reducing the number of interoperability issues among the most popular browsers. The ultimate goal was of course to make the life of web developers easier. It got quite some attention (I loved Alan Stearns’ description of the post), so I decided to write another one, this time focused on the changes I recently landed in WebKit (Safari’s engine) to improve the handling of elements with an aspect ratio inside flexbox, a.k.a. make images work inside flexbox. Some of them have already been released in Safari Technology Preview 118, so it’s now possible to help test them and provide early feedback.
As many users have noticed, you cannot install all the software you want on your computer via gnome-software. This restriction has been imposed by the developers...
I found Leah through a fascinating tweet where she charted out her IRC activity over the past 10 years. Leah’s setup is just as interesting, mostly in that there’s no desktop environment. Leah also helps maintain Void Linux, which is a rolling release built from scratch. It’s a little too hardcore for me, but it seems pretty beloved on Reddit. So this setup is technical and intense, but also a lot of fun.
Slimjet is built on top of the Chromium open-source project on which Google Chrome is also based. It enjoys the same speed and reliability provided by the underlying Blink engine as Google Chrome. However, many additional features and options have been added in Slimjet to make it more powerful, intelligent and customizable than Chrome. In addition to that, Slimjet DOES NOT send any usage statistics back to Google’s server like Google Chrome, which is a growing concern for many Chrome users due to the ubiquitous presence and reach of the advertising empire.
SUSE provides public cloud customers with PAYG (Pay-As-You-Go) images on AWS, Azure, and GCP. Instances created from these images connect to a managed update infrastructure. So if you need to update your instances with the latest software updates or install that needed package using zypper, usually you can be assured that the underlying repositories are there with no further hassles. There are exceptions, though. Instances configured to utilize a proxy server or traverse firewalls, NAT gateways, proxies, security rules, Zscaler, or other security and network devices may run into problems. The purpose of this post is to address some of the more commonly occurring configuration issues seen with public cloud environments.
This is the fifth blog of a series that provides insight into SUSE Linux Enterprise product development. You will get a first-hand overview of SUSE, the SLE products, what the engineering team does to tackle the challenges coming from the increasing pace of open source projects, and the new requirements from our customers, partners and business-related constraints.
[...]
Based on our joint schedule, openSUSE Leap and SLE have a predictable release time frame: a release every 12 months and 6 months of support overlap between the former and the new release. Thus, when the time is ready, a snapshot of openSUSE Tumbleweed is made, and both openSUSE and SLE use this snapshot to create the next versions of our distributions. With this picture, we are not talking about our distribution per se yet; it’s only a pool of package sources that we will use to build our respective distributions. But before going into how it’s built, note that this is a simplified view because, of course, there is always some back and forth between, for instance, openSUSE Leap/SLE and openSUSE Tumbleweed; it’s not just a one-way sync, because during the development phase of our distributions bugs are found and, of course, fixes are submitted back to Factory, so openSUSE Tumbleweed also receives fixes from the process. For the sake of simplifying the picture, we did not add these contributions as arrows. Also, at SUSE, open source is in our genes, so we have always contributed to openSUSE, but since 2017 our SUSE Release Team has enforced a rule called the “Factory First Policy“, which forces code submissions for SLE to be pushed to Factory first, before they land in SLE. This is a continuation of the “Upstream First” principle at the distribution level. It reduces maintenance effort and leverages the community.
Red Hat rolled out updates to its CentOS Stream platform targeted at alleviating support issues tied to the new Linux platform that is set to supersede its long-standing CentOS Linux project.
The CentOS Stream platform will include “no- and low-cost” programs that will allow individual Red Hat Enterprise Linux (RHEL) subscriptions to run on up to 16 systems in a production environment. This includes the ability to run these RHEL systems on major public cloud environments like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). This option will be available by Feb. 1, 2021.
Red Hat is now also making it possible to add development teams to its Red Hat Developer program by using a team member’s existing RHEL subscription. This will allow RHEL to be deployed using Red Hat’s Cloud Access program on top of those major cloud providers.
Red Hat has announced two new programs for RHEL: no-cost RHEL for small production workloads and no-cost RHEL for customer development teams.
The terms of the no-cost RHEL program formerly limited its use to single-machine developers. Red Hat has now expanded the terms of the program so that the Individual Developer subscription for RHEL can be used in production for up to 16 systems.
On December 8, 2020, Red Hat announced a major change to the enterprise Linux ecosystem: Red Hat will begin shifting our work from CentOS Linux to CentOS Stream on December 31, 2021. We and the CentOS Project governing board believe that CentOS Stream represents the best way to further drive Linux innovation. It will give everyone in the broader ecosystem community, including open source developers, hardware and software creators, individual contributors, and systems administrators, a closer connection to the development of the world’s leading enterprise Linux platform.
When we announced our intent to transition to CentOS Stream, we did so with a plan to create new programs to address use cases traditionally served by CentOS Linux. Since then, we have gathered feedback from the broad, diverse, and vocal CentOS Linux user base and the CentOS Project community. Some had specific technical questions about deployment needs and components, while others wondered what their options were for already- or soon-to-be deployed systems. We’ve been listening. We know that CentOS Linux was fulfilling a wide variety of important roles.
We made this change because we felt that the Linux development models of the past 10+ years needed to keep pace with the evolving IT world. We recognize the disruption that this has caused for some of you. Making hard choices for the future isn’t new to Red Hat. The introduction of Red Hat Enterprise Linux and the deprecation of Red Hat Linux two decades ago caused similar reactions. Just as in the past, we’re committed to making the RHEL ecosystem work for as broad a community as we can, whether it’s individuals or organizations seeking to run a stable Linux backend; community projects maintaining large CI/Build systems; open source developers looking toward "what’s next;" educational institutions, hardware, and software vendors looking to bundle solutions; or enterprises needing a rock-solid production platform.
In January 2021, Red Hat announced that Red Hat Enterprise Linux can be used at no cost for up to 16 production servers. In this article, I want to provide step-by-step instructions on how to install RHEL 8.3 in a VM.
First off, download the official and updated QCOW2 image named rhel-8.3-x86_64-kvm.qcow2 (the name will likely change later as RHEL moves to higher versions). Creating an account on the Red Hat Portal is free, and there is integration with third-party authorization services like GitHub, Twitter or Facebook; however, for successful host registration, a username and password need to be created.
To use RHEL in a cloud environment like Amazon, Azure or OpenStack, simply upload the image and start it. It’s cloud-init ready; make sure to seed the instance with data like usernames, passwords and/or SSH keys. Note that the root account is locked; there is no way to log in without seeding initial information.
If Red Hat's new no-cost offering for up to 16 production systems for RHEL doesn't fit your requirements and you are evaluating alternatives to CentOS 8, which will be EOL'ed this year, Rocky Linux remains one of the leading contenders and is on track for its inaugural release in Q2 of this year.
Rocky Linux and CloudLinux's AlmaLinux appear to be the two main contenders (along with existing players like Oracle Linux) coming out of last month's announcement that CentOS 8 will be EOL'ed at the end of 2021.
Today I actually also attended the super low-key design team video chat, which involved a brainstorm session for Fedora 35 that was exciting!
This means that you claim that the problem has been dealt with. If this is not the case it is now your responsibility to reopen the Bug report if necessary, and/or fix the problem forthwith.
(NB: If you are a system administrator and have no idea what this message is talking about, this may indicate a serious mail system misconfiguration somewhere. Please contact owner@bugs.debian.org immediately.)
Back in October, LWN looked at a conversation within the Debian project regarding whether it was permissible to ship Kubernetes bundled with some 200 dependencies. The Debian technical committee has finally come to a conclusion on this matter: this bundling is acceptable and the maintainer will not be required to make changes.
Today, we are pleased to announce that fabre.debian.net has migrated to FOSSHOST.
FOSSHOST provides us a VPS instance which is located at the OSU Open Source Lab. It remedies our lack of sufficient server resources and especially improves service availability.
Like each month, have a look at the work funded by Freexian’s Debian LTS offering.
Debian project funding
In December, we put aside 2100 EUR to fund Debian projects. The first project proposal (a tracker.debian.org improvement for the security team) was received and quickly approved by the paid contributors; then we opened a request for bids, and the bid winner was announced today (it was easy, we had only one candidate). Hopefully this first project will be completed by our next report.
We’re looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article.
I'm glad Linux Mint exists. That's a strange statement, coming from someone who has never opted to make it their default desktop distribution. I've never been a fan of Cinnamon or Mate, and I've always thought Xfce was a solid desktop, but just not for me.
Even though I'm not terribly keen on the offered desktops for Linux Mint, I still believe it to be a fantastic distribution. Why is that? One reason is that its most ardent fans are almost Apple-like in their fanaticism. From my perspective, that's a good thing. Linux has long needed a desktop distribution which elicited that much excitement from the user base. Once upon a time, that title would have been bestowed upon Ubuntu. Alas, after a few bad choices along the way, the rabid fanbase isn't quite so rabid.
Renesas unveiled three low-end “RZ/G2L” members of its RZ/G2 family of Linux-driven IoT SoCs with single or dual Cortex-A55 cores plus a Mali-G31, Cortex-M33, and up to dual GbE support. There is also a SMARC module and dev kit.
Renesas’ RZ/G2 line of industrial-focused system-on-chips includes the hexa-core RZ/G2M and octa-core RZ/G2H, both with mixtures of Cortex-A57 and -A53 cores and 4K support, as well as two dual-core models: a Cortex-A53 based RZ/G2E with HD video and a Cortex-A57-equipped RZ/G2N with 4K. Instead of filling in the middle of the Linux-focused product line with some quad-core models, the Japanese chipmaker has instead come back with three new low-end models, featuring single or dual Cortex-A55 cores.
Renesas Electronics Corporation announced RZ/G2L MPUs, allowing enhanced processing for an extensive variety of AI applications. The RZ/G2L group of 64-bit MPUs includes three new MPU models featuring Arm Cortex-A55, and an optional Cortex-M33 core. These are RZ/G2L, RZ/G2LC, and RZ/G2UL MPUs. The Cortex-A55 CPU core typically delivers approximately 20 percent improved processing performance compared with the previous Cortex-A53 core, and according to Renesas, is around six times faster in “essential processing for AI applications”.
Avalue’s fanless, rugged “EMS-TGL” embedded PC runs Linux or Win 10 on embedded versions of Intel’s 11th Gen ULP3 Core CPUs with up to 64GB DDR4-3200, 3x M.2, 1GbE and 2.5GbE ports, and optional “IET” expansion.
Avalue, which recently launched a pair of NUC-APL mini-PCs based on Intel’s Apollo Lake, announced a larger, but similarly fanless embedded computer with Intel’s 10nm, 11th Gen “Tiger Lake” ULP3 processors. The rugged EMS-TGL runs Linux and Win 10 and supports applications including digital signage, smart retail, and computer vision.
Meanwhile, folks who are still interested in weird phones might have to look to smaller companies like F(x)Tec, Planet Computers, Pine64, and Purism, which have developed phones with features like built-in keyboards, support for GNU/Linux distributions and other free and open source operating systems, and physical kill switches for wireless, mic, and camera functions, among other things.
MicroMod is a modular interface ecosystem for quick embedded development and prototyping. MicroMod consists of two components: a microcontroller “processor board” and a carrier board. The PC industry’s M.2 connector is the interface between these two components. The carrier boards provide access to various peripherals, and the processor board acts as the brain of the application system.
Odroid continues to move beyond the simple realm of Single Board Computers (SBCs) to become a more and more credible player as a portable console manufacturer. After introducing the Odroid Go and the Odroid Go Advance (which both cow_killer and I reviewed), they announced at the end of December 2020 that they were going to release yet another version, the Odroid Go Super.
As usual, I suggest adding all needed hardware to your favourite e-commerce shopping cart now, so that at the end you will be able to evaluate overall costs and decide whether to continue with the project or remove the items from the cart. So, the hardware will be only:
- Raspberry PI Zero W (including proper power supply or using a smartphone micro usb charger with at least 3A) or newer Raspberry PI Board
If you’ve ever wanted to wind balls of yarn, then look no further than this automated machine from Mr Innovative. The YouTuber’s DIY device is powered by an Arduino Nano and an A4988 stepper driver, spinning up a round conglomeration of yarn via a NEMA17 motor and a timing belt.
The ball is wound on an offset spindle, which is mechanically controlled to pitch back and forth and spin itself as the overall assembly rotates, producing an interesting geometric pattern.
Little Bee is an affordable, open-source hardware, and high-performance current probe and magnetic field probe designed to debug and analyze electronic devices at a much lower cost than existing solutions such as Migsic CP2100B or I-prober 520. This type of tool is especially important for power electronics, which has become ever more important with electric vehicles, alternative energy solutions, and high-efficiency power supplies.
The Coanda effect, as you may or may not know, is what causes flowing air to follow a convex surface. In his latest video, James Bruton shows how the concept can be used as a sort of inverted ping pong ball waterfall or staircase.
His 3D-printed rig pushes balls up from one fan stage to another, employing curved ducts to guide the lightweight orbs on their journey.
The fan speeds are regulated with an Arduino Uno and motor driver, and the Arduino also dictates how fast a feeder mechanism inputs balls via a second driver module. While the setup doesn’t work every time, it’s still an interesting demonstration of this natural phenomenon, and could likely be perfected with a bit more tinkering.
Open source, a revolutionary idea for ICT innovations, also makes sense for business. The key is its adoption into an organisation’s culture and budget. If one were to make an internet search for the very active Information Technology and Communication (ICT) areas of innovation, the usual suspects likely to show up are intelligent machines like Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL); human-machine interactions like bots, augmented realities, voice and gesture-enabled interfaces; ubiquitous computing like resilient cloud and quantum computing; and autonomous machines that include the likes of drones and self-driving vehicles.
Compared to the pace of development a couple of decades ago, today all these areas continue to develop at extremely high velocities. A deep dive into any of the technical areas will show up a common thread: open source.
Once you’ve done the update the Xiaomi app will not work anymore, and you’d only access the robot vacuum cleaner via its web interface which, in most cases, comes with the same features as the mobile app minus cloud connectivity. However, if you change your mind, you can simply factory reset the device to remove Valetudo and continue with the Xiaomi app, at least on Roborock models.
Apache CloudStack (CS), the Apache Software Foundation’s cloud infrastructure project, has pushed out new long term support version 4.15, providing users with a new UI, various VMware-related improvements and a way to define role based users in projects.
The software was originally developed in 2008 at what soon became Cloud.com, a start-up that was bought by Citrix in 2011. The infrastructure as a service platform was accepted into the Apache Incubator in 2012 and graduated its process in 2013. Customers include Verizon, TomTom, SAP, Huawei, Disney, Cloudera, BT, Autodesk, and Apple.
When the Subversion project started in early 2000, I was there. I joined the project and participated in the early days of its development, as I really believed in creating an “improved CVS” and I thought I could contribute to it.
While I was involved with the project, I noticed the lack of a decent mailing list archive for the discussions and set one up under the name svn.haxx.se as a service for myself and for the entire community. I had the server and the means to do it, so why not?
After some years I drifted away from the project. It was doing excellently and I was never any significant contributor. Then git and some of the other distributed version control systems came along and in my mind they truly showed the world how version control should be done…
The mailing list archive however I left, and I had even added more subversion related lists to it over time. It kept chugging along without me having to do much. Mails flew in, got archived and were made available for the world to search for and link to. Today it has over 390,000 emails archived from over twenty years of rather active open source development on multiple mailing lists. It is fascinating that no less than 46 persons have written more than a thousand emails each on those lists during these two decades.
The online version of the curl book “everything curl” has been moved to the address shown in the title:
everything.curl.dev
This, after I did a very unscientific and highly self-selective poll on twitter on January 18 2020...
Even though the Brave browser was caught up in some controversies last year, it looks like they have managed to become the first major web browser to add support for the InterPlanetary File System (IPFS) protocol, with the help of Protocol Labs.
This support was introduced with v1.19.86 release.
In case you didn’t know, IPFS is a peer-to-peer protocol that lets you store and share files. You can think of it as something similar to the BitTorrent protocol, with some technical differences.
Precisely because it is a totally decentralized system for storing and sharing files, it can be quite effective in fighting censorship by big tech and governments.
What is the relevance I hear you ask. Well, I provide Chromium packages for Slackware, both 32bit and 64bit versions. These chromium packages are built on our native Slackware platform, as opposed to the official Google Chrome binaries which are compiled on an older Ubuntu probably, for maximum compatibility across Linux distros where these binaries are used. One unique quality of my Chromium packages for Slackware is that I provide them for 32bit Slackware. Google ceased providing official 32bit binaries long ago.
In my Slackware Chromium builds, I disable some of the more intrusive Google features. An example: listening all the time to someone saying “OK Google” and sending the follow-up voice clip to Google Search.
And I create a Chromium package which is actually usable enough that people prefer it over Google’s own Chrome binaries. The reason for this usefulness is the fact that I enable access to Google’s cloud sync platform through my personal so-called “Google API key“. In Chromium for Slackware, you can log on to your Google account, sync your preferences, bookmarks, history, passwords etc. to and from your cloud storage on Google’s platform. Your Chromium browser on Slackware is able to use Google’s location services and offer localized content; it uses Google’s translation engine, etcetera. All that is possible because I formally requested and was granted access to these Google services through their APIs within the context of providing them through a Chromium package for Slackware.
The API key, combined with my ID and passphrase that allow your Chromium browser to access all these Google services are embedded in the binary – they are added during compilation. They are my key, and they are distributed and used with written permission from the Chromium team.
These API keys are usually meant to be used by software developers when testing their programs which they base on Chromium code. Every time a Chromium browser I compiled talks to Google through their Cloud Service APIs, a counter increases on my API key. Usage of the API keys for developers is rate-limited, which means if an API key is used too frequently, you hit a limit and you’ll get an error response instead of a search result. So I made a deal with the Google Chromium team to be recognized as a real product with real users and an increased API usage frequency. Because I get billed for every access to the APIs which exceeds my allotted quota and I am generous but not crazy. I know that several derivative distributions re-use my Chromium binary packages (without giving credit) and hence tax the usage quota on my Google Cloud account, but I cover this through donations, thank you my friends, and no thanks to the leeches of those distros.
Starting with Firefox 85, which will be released January 25, 2021, Firefox for Android users will be able to install supported Recommended Extensions directly from addons.mozilla.org (AMO). Previously, extensions for mobile devices could only be installed from the Add-ons Manager, which caused some confusion for people accustomed to the desktop installation flow. We hope this update provides a smoother installation experience for mobile users.
As a quick note, we plan to enable the installation buttons on AMO during our regularly scheduled site update on Thursday, January 21. These buttons will only work if you are using a pre-release version of Firefox for Android until version 85 is released on Tuesday, January 25.
This wraps up our initial plans to enable extension support for Firefox for Android. In the upcoming months, we’ll continue to work on optimizing add-on performance on mobile. As a reminder, you can use an override setting to install other extensions listed on AMO on Firefox for Android Nightly.
The release of Apple Silicon-based Macs at the end of last year generated a flurry of news coverage and some surprises at the machine’s performance. This post details some background information on the experience of porting Firefox to run natively on these CPUs.
We’ll start with some background on the Mac transition and give an overview of Firefox internals that needed to know about the new architecture, before moving on to the concept of Universal Binaries.
We’ll then explain how DRM/EME works on the new platform, talk about our experience with macOS Big Sur, and discuss various updater problems we had to deal with. We’ll conclude with the release and an overview of various other improvements that are in the pipeline.
GIMP (GNU Image Manipulation Program) is a cross-platform tool for quality image creation and manipulation and advanced photo retouching. GIMP provides features to produce icons, graphical design elements, and art for user interface components and mockups. Price: Free.
As part of GNU, Guix aims to bring freedom to computer users all over the world, no matter the languages they (prefer to) speak. For example, Guix users asking for help can expect an answer even if they do so in languages other than English.
We also offer translated software for people more comfortable with a language other than English. Thanks to many people who contribute translations, GNU Guix and the packages it distributes can be used in various languages, which we value greatly. We are happy to announce that Guix’s website can now be translated in the same manner. If you want to get a glimpse of how the translation process works, first from a translator’s, then from a programmer’s perspective, read on.
The process for translators is kept simple. Like lots of other free software packages, Guix uses GNU Gettext for its translations, with which translatable strings are extracted from the source code to so-called PO files. If this is new to you, the magic behind the translation process is best understood by taking a look at one of them. Download a PO file for your language at the Fedora Weblate instance.
Even though PO files are text files, changes should not be made with a text editor but with PO editing software. Weblate integrates PO editing functionality. Alternatively, translators can use any of various free-software tools for filling in translations, of which Poedit is one example, and (after logging in) upload the changed file. There also is a special PO editing mode for users of GNU Emacs. Over time translators find out what software they are happy with and what features they need.
Help with translations is much appreciated. Since Guix integrates with the wider free software ecosystem, if you intend to become a translator, it is worth taking a look at the styleguides and the work of other translators. You will find some at your language’s team at the Translation Project (TP).
Shay Banon first announced that Elastic would move its Apache 2.0-licensed source code in Elasticsearch and Kibana to be dual licensed under Server Side Public License (SSPL) and the Elastic License. "To be clear, our distributions starting with 7.11 will be provided only under the Elastic License, which does not have any copyleft aspects. If you are building Elasticsearch and/or Kibana from source, you may choose between SSPL and the Elastic License to govern your use of the source code."
In another post Banon added some clarification. "SSPL, a copyleft license based on GPL, aims to provide many of the freedoms of open source, though it is not an OSI approved license and is not considered open source."
Software developers are highly sought-after tech professionals, and the demand for their skills is continually increasing. In this Life in Tech article, we’ll provide a general look at the various duties and requirements associated with the role of software developer.
Let’s start with a basic description before getting into the nuances and specifics. Briefly, then, software developers conceive, design, and build computer programs, says ComputerScience.org. To accomplish this, they identify user needs, write and test new software, and maintain and improve it as needed. Software developers occupy crucial roles in a variety of industries, including tech, entertainment, manufacturing, finance, and government.
How do others program? I realized today that I've never actually seen it; in more than 30 years of coding, I've never really watched someone else write nontrivial code over a long period of time. I only see people's finished patches—and I know that the patches I send out for review sure don't look much like the code I initially wrote. (There are exceptions for small bugfixes and the like, of course.)
Over the years, I have found myself using Gonum Plot multiple times. I find it a very good and easy-to-use plotting tool for Go.
The problem I found myself dealing with, over and over, is the ticker scale. If you know beforehand the values that can be expected to be created by the application, it is very straightforward, but the majority of the time this is not the case. I often find myself creating a plotting application for data that tracks events that have not yet happened, so I cannot predict their range.
To solve the issue, I created a package with a struct that implements the Ticker interface and provides tickers that are usually sensible. Since this struct only works for integer scales, I called it sit, which stands for “Sensible Int Ticks”.
It's pretty safe to say that most of the modern web would not exist without JavaScript. It's one of the three standard web technologies (along with HTML and CSS) and allows anyone to create much of the interactive, dynamic content we have come to expect in our experiences with the World Wide Web. From frameworks like React to data visualization libraries like D3, it's hard to imagine the web without it.
There's a lot to learn, and a great way to begin learning this popular language is by writing a simple application to become familiar with some concepts. Recently, some Opensource.com correspondents have written about how to learn their favorite language by writing a simple guessing game, so that's a great place to start!
As was previously discussed, since the 6.0.0 release of Qt, Qt 3D no longer ships as a pre-compiled module. If you need to use it in your projects, try out the new features, or just see whether your existing application is ready for the next chapter of Qt’s life, you need to compile Qt 3D from source.
You can do this the traditional way (cmake/qmake ...; make; make install) or use the Conan-based system that is being pioneered with the latest version of the MaintenanceTool.
Several readers have expressed concern that Qt open-source downloads have disappeared, but The Qt Company has now commented that it's only a temporary issue due to a "severe hardware failure" in the cloud.
Qt's open-source online installer and offline packages are not currently working for the open-source options but the commercial downloads are working. While that may raise concerns given Qt's increasing commercial focus, The Qt Company posted to their blog that this interruption around open-source package downloads is due to a reported major hardware problem at their cloud provider.
Fortunately, the Qt API provides multiple ways to implement custom shapes, which, depending on your needs, might be enough.
There is the Canvas API in QML, using the same API as the canvas element on the web. It’s easy to use but very slow, and I wouldn’t recommend it.
Instead of the Canvas API, on the QML side there is the QtQuick Shapes module. This module allows creating more complex shapes directly from QML with a straightforward declarative API. In many cases this is good enough for the application developer, but this module doesn’t offer a public C++ API.
If you need more control, you will need to use C++ and implement a custom QQuickItem. Unfortunately, drawing on the GPU using QQuickItem is more complex than the QPainter API. You can’t just use commands like drawRect, but will need to convert all your shapes into triangles first. This involves a lot of maths, as can be seen in the example from the official documentation or from the KDAB tutorial (Efficient custom shapes in Qt Quick).
A QPainter way is also available with QQuickPaintedItem, but it is slow because it renders your shape into a textured rectangle in the Scene Graph.
What is a role? Put simply, roles are a form of code reuse. Often, the term shared behavior is used. Roles are said to be consumed and the methods ( including attribute accessors ) are flattened into the consuming class.
One of the major benefits of roles is that they attempt to solve the diamond problem encountered in multi-inheritance by requiring developers to manually resolve the name collisions that arise. Don't be fooled, however: roles are a form of multi-inheritance.
I often see roles being used in ways they shouldn’t be. Let’s look at the misuse of roles, then see an example of shared behavior.
I’m using the word inheritance a lot for a reason: one of the two ways I see roles most often misused is to hide an inheritance nightmare.
"Look ma, no multi-inheritance support, no problem. I’ll just throw stuff in roles and glum them on wherever I really want to use inheritance. It all sounds fancy, but I am just lumping stuff into a class cause I don’t really understand OO principals."
Many tools for network analysis have existed for quite some time. Under Linux, for example, these are Wireshark, tcpdump, nload, iftop, iptraf, nethogs, bmon, tcptrack as well as speedometer and ettercap. For a detailed description of them, you may have a look at Silver Moon’s comparison [1].
So why write your own tool instead of using an existing one? Reasons I see are a better understanding of TCP/IP network protocols, learning how to code properly, or implementing just the specific feature you need for your use case because the existing tools do not give you what you actually need. Furthermore, speed and load improvements to your application or system can also motivate you to move in this direction.
In the wild, there exist quite a few Python libraries for network processing and analysis. For low-level programming, the socket library [2] is the key. High-level protocol-based libraries are httplib, ftplib, imaplib, and smtplib. In order to monitor network ports and the packet stream, competitive candidates are python-nmap [3], dpkt [4], and PyShark [5]. For both monitoring and changing the packet stream, the scapy library [6] is widely used.
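To show what the low-level end of that spectrum looks like, here is a minimal raw-socket capture using only the standard socket module. It is a sketch under a few assumptions: Linux only (AF_PACKET), an interface named eth0, and root privileges to open the raw socket.

```python
# Minimal raw-socket capture sketch (Linux only, requires root).
# The interface name "eth0" is an assumption for illustration.
import socket

ETH_P_ALL = 0x0003  # receive frames of every protocol

sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
sniffer.bind(("eth0", 0))

frame, _addr = sniffer.recvfrom(65535)
print(f"captured {len(frame)} bytes, destination MAC {frame[0:6].hex(':')}")
```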
In this article, we will have a look at the PyShark library and monitor which packets arrive at a specific network interface. As you will see below, working with PyShark is straightforward. The documentation on the project website will help you with the first steps — with it, you will achieve a usable result very quickly. However, when it comes to the nitty-gritty, more knowledge is necessary.
PyShark can do a lot more than it seems at first sight, but unfortunately, at the time of this writing, the existing documentation does not cover that in full. This makes things unnecessarily difficult and provides a good reason to look deeper under the bonnet.
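For a feel of the straightforward case, a minimal live-capture loop with PyShark might look like the sketch below. The interface name eth0 and the packet count are assumptions, and PyShark needs tshark installed, since it is a wrapper around it.

```python
# Minimal PyShark live-capture sketch; assumes interface "eth0" and tshark installed.
import pyshark

capture = pyshark.LiveCapture(interface="eth0")

# sniff_continuously() yields packets as they arrive; stop after 20 here.
for packet in capture.sniff_continuously(packet_count=20):
    summary = f"{packet.sniff_time} {packet.highest_layer} len={packet.length}"
    if hasattr(packet, "ip"):  # protocol layers are exposed as attributes
        summary += f" {packet.ip.src} -> {packet.ip.dst}"
    print(summary)
```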
In an earlier post I complained about spreadsheet programs: Excel, LibreOffice Calc and Gnumeric. All of them confuse non-dates with dates, and automatically interpret certain number strings with 2 colons as [h]:mm:ss. Grrr.
Recently, there have been a lot of improvements in rustdoc, made possible by our new contributors. In light of these recent contributions, a few changes were made in the rustdoc team.
@jyn514 noticed a while ago that most of the work in Rustdoc is duplicated: there are actually three different abstract syntax trees (ASTs)! One for doctree, one for clean, and one is the original HIR used by the compiler. Rustdoc was spending quite a lot of time converting between them. Most of the speed improvements have come from getting rid of parts of the AST altogether.
The Optional object type was introduced in Java 8. It is used when we want to express that a value might not be known (yet) or is not applicable at the moment. Before Java 8, developers might have been tempted to return a null value in such cases.
Oracle on Tuesday released GraalVM 21.0 as the latest version of their Java VM/JDK that also supports other languages and modes of execution.
One of the notable additions in GraalVM 21.0 is support for Java on Truffle, an example JVM implementation built on the Truffle interpreter. GraalVM's Truffle framework is an open-source library for writing programming language interpreters. With Java on Truffle, Java sits alongside the likes of JavaScript, Ruby, Python, and R within the GraalVM ecosystem. Java on Truffle allows for improved isolation from the host JVM, running Java bytecode in a separate context from the JVM, running in the context of a native image while still allowing dynamically loaded bytecode, and other Truffle framework features. More details about the Java on Truffle implementation are available in the GraalVM manual.
Standards are boring. Satisfied users may not want to migrate to other boards the market tries to sell them.
So the Arm market is flooded with piles of single-board computers (SBCs). Often they comply with standards only when it comes to connectors.
But our hardware is not standard
It is not a matter of ‘let’s produce UEFI-ready hardware’ but rather ‘let’s write EDK2 firmware for the boards we already have’.
Look at the Raspberry Pi, then. It is shitty hardware but it got popular. And a group of people wrote UEFI firmware for it, probably even without vendor support.
[...]
In the end you will have SBSA-compliant hardware running SBBR-compliant firmware.
Congratulations, your board is SystemReady SR compliant. Your marketing team may write that you are on the same list as Ampere with their Altra server.
Users buy your hardware and can install whatever BSD or Linux distribution they want. Some will experiment with Microsoft Windows. Others may work on porting Haiku or another exotic operating system.
But none of them will have to think “how do I get this shit running?”. And they will tell their friends that your device is as boring as it should be when it comes to running an OS on it == more sales.
In previous years, this annual series covered individual apps. This year, we are looking at all-in-one solutions in addition to strategies to help in 2021. Welcome to day 10 of 21 Days of Productivity in 2021.
When I was in primary school in the days before the commercial internet, teachers would often give my class an assignment to keep a journal. Sometimes it was targeted at something particular, like a specifically formatted list of bugs and descriptions or a weekly news article summary for a civics class.
It’s been a tough and stressful year for many of us, and we are all in need of some care and attention.
Microsoft Teams isn't just there to make employees' lives easier. It's also there to give bosses data about so many things.
The Linux Foundation and The Blacks In Technology Foundation have joined hands to launch a new scholarship program to help more Black individuals get started with an IT career.
Blacks in Technology will award 50 scholarships per quarter to promising individuals. The Linux Foundation will provide each of these recipients with a voucher to register for any Linux Foundation administered certification exam at no charge, such as the Linux Foundation Certified IT Associate, Certified Kubernetes Administrator, Linux Foundation Certified System Administrator and more.
The Linux Foundation has announced the availability of a new training program designed to introduce open source best practices.
The course, called Open Source Management & Strategy, includes seven modules designed to help executives, managers, software developers and engineers “understand and articulate the basic concepts for building effective open source practices within their organization,” according to the Foundation’s press release.
Security updates have been issued by Fedora (coturn, dovecot, glibc, and sudo), Mageia (openldap and resource-agents), openSUSE (dnsmasq, python-jupyter_notebook, viewvc, and vlc), Oracle (dnsmasq and xstream), SUSE (perl-Convert-ASN1, postgresql, postgresql13, and xstream), and Ubuntu (nvidia-graphics-drivers-418-server, nvidia-graphics-drivers-450-server, pillow, pyxdg, and thunderbird).
According to a report from Check Point Research (CPR), the malware variant, named FreakOut, specifically targets Linux devices that run unpatched versions of certain software.
Fileless malware is a growing concern for Linux administrators. Linux is considered a very secure OS by design - and rightfully so. With its robust privilege system and the “many eyes” of the open-source community scrutinizing the increasingly popular OS’s code for security vulnerabilities, Linux users are generally much safer than their Windows-using counterparts. That being said, sound administration and the implementation of security best practices can help prevent fileless malware attacks and other dangerous modern exploits that threaten Linux systems.
Amid all the conversation about Signal, and the debate over decentralization, one thing has often not been raised: all of these things require an Internet connection.
[...]
“Blogs” have a way to reblog (even a built-in RSS reader to facilitate that), but framed a different way, they are broadcast messages. They could, for instance, be useful for a “send help” message to everyone (assuming that people haven’t all shut off notifications of blogs due to others using them in different ways).
Briar’s how-it-works page has an illustration specifically of how blogs are distributed. I’m unclear on some of the details, and on to what extent this applies to other kinds of messages, but one thing you can notice from this is that person A could write a broadcast message without Internet access, person B could receive it via Bluetooth or whatever, and then, when person B gets Internet access again, the post could be distributed more widely. However, it doesn’t appear that Briar is really a full mesh, since only known contacts in the distribution path for the message would repeat it.
There are some downsides to Briar. One is that, since an account is fully localized to a device, one must have a separate account for each device. That can lead to contacts having to pick a specific device to send a message to. There is an online indicator, which may help, but it’s definitely not the kind of seamless experience you get from Internet-only messengers. Also, it doesn’t support migrating to a new phone, live voice/video calls, or attachments, but attachments are in the works.
How can I put four years into a pie? I’m thinking of Inauguration Day 2017 through to today, Inauguration Day 2021. In truth things started back in 2015, when Donald Trump announced his run for the United States’ presidency, and I don’t know how long things will continue past the moment when President-Elect Joe Biden becomes President Joe Biden.
For the United States, it’s been a hell of a time. For the world, it’s been even worse. Every generation thinks that they lived through more than anyone else, that they had it worse. I had a Boomer tell me that the existential stress of COVID is nothing compared to the Vietnam War. I’m sure when we are living through a global water crisis, I’ll tell the kids that we had it bad too. Every day I listen to the radio and read Twitter, aware that the current state of endless wars – wars against terrorism and drugs, organized crime and famine, climate change and racism – is global, and not limited to just what’s happening to and around me. That makes it feel worse and bigger, and I wonder if earlier generations can really grasp how big that is.
[...]
So I will put my hope into this pie. I put my pain and anger into the dough. I will put my tears and helplessness and bitterness into the filling. I will cover it with sweetness and the delicate hope I’ve spun out of sugar. Soon I will bake it and share it with the three other people I see, because the most important thing about surviving these past years, these past months and weeks and days, is that we did it together. We will commiserate on what we’ve overcome, and we will share our hope and the sweetness of the moment, as the spun sugar dissolves on our tongues. There is so much we have left to do, so much we must do. We will be angry in the future, we may be angry later today, but until then, we have pie.
WSOU Investments filed the most cases, Google was the most sued business, and Rabicoff Law and Fish & Richardson were the busiest firms, according to new data
Note, I’m calling these folks “ACTING ____” because it is simpler and makes sense. BUT, the “acting” title is a term of art defined within the US Code. To avoid some of the legal requirements associated with being an “acting director,” the temporary leadership is using the longer title of someone “Performing the functions and duties of ____”
Acting Director – Drew Hirshfeld.
On January 20, 2021, Unified Patents added a new PATROLL contest, with a $3,000 cash prize, seeking prior art on at least claim 1 of U.S. Patent 8,165,867. This patent is owned by Cedar Lane Technologies, Inc., an NPE. The '867 patent relates to wirelessly controlling an electronic device with another electronic device in real time. The '867 patent has been asserted against D-Link, Disney, Dish Network, Comcast, LG, iHeart Media, ViacomCBS, TCL Communication, and SiriusXM.