Gut Ding braucht Weile ("good things take time"). Almost three years ago, we added high-resolution wheel scrolling to the kernel (v5.0). The desktop stack however was first lagging and eventually left behind (except for an update a year ago or so, see here). However, I'm happy to announce that thanks to José Expósito's efforts, we have now pushed it across the line. So - in a socially distanced manner and masked up to your eyebrows - gather round children, for it is storytime.
The x86/IRQ changes for the Linux 5.15 kernel bring some unexpected improvements to old hardware.
In particular, some old Intel and ALi hardware is seeing some work done for this modern Linux kernel.
David Airlie submitted today the Direct Rendering Manager (DRM) graphics/display driver updates for the Linux 5.15 merge window.
Having missed the update for the 5.13 kernel entirely, I thought I'd just skip ahead to merge up with 5.14 and started looking at/working on it today. The size of the changes is depressingly large, and whilst it's mostly trivial changes and features I wouldn't implement in MuQSS, I'm once again left wondering if I should be bothering with maintaining this patch-set, as I've mentioned before on this blog.
The size of my user-base seems to be diminishing with time, and I'm getting further and further out of touch with what's happening in the linux kernel space at all, with countless other things to preoccupy me in my spare time.
As much as I still prefer running my own kernel on my hardware, I'm having trouble motivating myself after the last 18 months of world madness due to Covid19 and feel that I should really sadly bring this patch-set to a graceful end. My first linux kernel patches stretch back 20 years and with almost no passion for working on it any more, I feel it may be long overdue.
Con Kolivas has worked on many patches for the Linux kernel over the past two decades and particularly focused on innovations around desktop performance/interactivity. For over a decade now he's primarily been focused on maintaining his work out-of-tree and not catering to mainline acceptance but now he is thinking of bowing out once more and ending his kernel development effort.
Over the past decade he's been maintaining his "-ck" patches out-of-tree and updating them for each new kernel series with a variety of improvements to enhance the interactivity and performance of the kernel. He's also been maintaining his MuQSS scheduler, the successor to his former "BFS" Brain Fuck Scheduler.
It's been a while since I last ran benchmarks evaluating the performance of GCC's profile-guided optimizations (PGO). But stemming from the discussions around PGO'ing the Linux kernel (though that effort is stalled for now), several Phoronix readers inquired about seeing some fresh PGO figures with GCC 11. So here are such benchmarks of GCC 11 with the upcoming Ubuntu 21.10 running on an AMD Ryzen 9 5950X desktop.
Using the latest Ubuntu 21.10 daily image at the time with its GCC 11.2 compiler and other updated toolchain components, I ran some fresh benchmarks looking at the impact of PGO.
The benchmarks were first carried out without using any PGO / profile-based optimizations. After that, all of the open-source C/C++ benchmarks were re-built with the necessary support to enable profile collection, all of the benchmarks were repeated just to generate the necessary profile data (without making use of those benchmark results), and then each benchmark was rebuilt against its respective profile data. This is a rather best-case scenario for PGO performance evaluation, with the profiles matching the specific workloads / code paths being tested by the benchmarks. These tests are mainly being put out for reference and curiosity purposes, to help readers decide whether it's worthwhile looking closer at profile-guided optimizations for their particular workloads or performance-critical code-bases. All other CFLAGS/CXXFLAGS were kept the same throughout testing besides adjusting the PGO options for the given build.
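As a rough, hypothetical illustration of that three-step flow (and not the actual Phoronix test harness), here is a minimal sketch that builds a throwaway C hot loop with instrumentation, runs it to collect profile data, and rebuilds it against that profile. The tiny C program and file names are illustrative assumptions; only the -fprofile-generate / -fprofile-use options are the real GCC flags involved.

```python
# Minimal sketch of the GCC PGO workflow: 1) instrumented build,
# 2) run a representative workload to collect profiles, 3) rebuild
# with the collected profile data feeding the optimizer.
import subprocess
from pathlib import Path

SRC = Path("hotloop.c")
SRC.write_text(r"""
#include <stdio.h>
int main(void) {
    long sum = 0;
    for (long i = 0; i < 100000000; i++)
        sum += (i % 3 == 0) ? i : -i;   /* a branch GCC can tune with profile data */
    printf("%ld\n", sum);
    return 0;
}
""")

common = ["gcc", "-O2", str(SRC), "-o", "hotloop"]

# Step 1: instrumented build
subprocess.run(common + ["-fprofile-generate"], check=True)
# Step 2: run the workload so GCC writes .gcda profile files
subprocess.run(["./hotloop"], check=True)
# Step 3: rebuild with the profile guiding optimization
subprocess.run(common + ["-fprofile-use"], check=True)
```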
In this tutorial, we will show you how to install Nginx on Debian 11. For those of you who didn’t know, Nginx is a free, open-source webserver that provides HTTP, reverse proxy, caching, and load-balancing functionality. It’s a great alternative to Apache, and it’s easy to set up.
This article assumes you have at least basic knowledge of Linux, know how to use the shell, and, most importantly, host your site on your own VPS. The installation is quite simple and assumes you are running in the root account; if not, you may need to add 'sudo' to the commands to get root privileges. I will walk you through the step-by-step installation of the Nginx webserver on Debian 11 (Bullseye).
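As a rough preview of the flow the tutorial covers, here is a hedged sketch driven from Python purely for illustration; the apt-get and systemctl invocations are the standard Debian ones, and curl is assumed to be installed for the final sanity check.

```python
# Hypothetical sketch of the basic Nginx install flow on Debian 11,
# run as root (or with sudo), scripted here only for illustration.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["apt-get", "update"])
run(["apt-get", "install", "-y", "nginx"])
# Start nginx now and enable it on every boot
run(["systemctl", "enable", "--now", "nginx"])
# Quick check that the default vhost answers on localhost (assumes curl is present)
run(["curl", "-I", "http://127.0.0.1"])
```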
Changing or editing a WordPress admin password is a superuser-oriented activity. As a superuser, you can still log in to the WordPress website account, edit the profile information of other existing users, or further customize the site information.
However, for one reason or another, you might feel like the integrity or security of your admin passwords has been compromised. It could also be due to a site security policy put in place by the company represented by the WordPress site where admin user passwords are changed weekly or monthly.
With the freedom and open-source nature of the WordPress content management system, taking control of online content publishing is easy, flexible, and manageable.
It is important for WordPress database administrators to have a grip on all database user contributions and interactions within such platforms. There are several reasons why a database administrator might need to create users with different privileges via the MySQL client or shell.
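For illustration only, here is a hedged sketch of creating a restricted MySQL user for a WordPress database from Python; the connection details, the 'wordpress' schema name, the user name, and the chosen privilege set are all assumptions, and the same CREATE USER / GRANT statements can be issued verbatim from the mysql shell instead.

```python
# Hypothetical sketch: create a limited-privilege MySQL user for a
# WordPress database. All names and credentials are placeholders.
import mysql.connector  # pip install mysql-connector-python

cnx = mysql.connector.connect(host="localhost", user="root", password="root-password")
cur = cnx.cursor()

cur.execute("CREATE USER IF NOT EXISTS 'wp_editor'@'localhost' IDENTIFIED BY 'S0me-Str0ng-Pass!'")
# Grant only what a content-editing role needs on the WordPress schema
cur.execute("GRANT SELECT, INSERT, UPDATE, DELETE ON wordpress.* TO 'wp_editor'@'localhost'")
cur.execute("FLUSH PRIVILEGES")

cnx.commit()
cur.close()
cnx.close()
```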
Keeping your system up to date is important for anyone, from simple desktop users to developers and sysadmins; well, let's face it, anyone with a device connected to the Internet. Debian, by default, is not set up for automatic updates. However, by enabling and configuring the unattended-upgrades package, you can easily apply security, package, or even new feature upgrades in a simple, efficient way if you do not always have the time to check, or simply forget. It is highly recommended to enable this for the security benefit alone.
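As a hedged sketch of what "enabling and configuring" amounts to: the package name and the APT::Periodic keys below are the standard Debian ones, but writing 20auto-upgrades directly from a script is just one way to do it (dpkg-reconfigure unattended-upgrades achieves the same), and the finer policy of which origins get upgraded lives in /etc/apt/apt.conf.d/50unattended-upgrades, not shown here.

```python
# Hypothetical sketch: install unattended-upgrades and switch on the
# daily package-list refresh and the unattended upgrade run. Run as root.
import subprocess
from pathlib import Path

subprocess.run(["apt-get", "install", "-y", "unattended-upgrades"], check=True)

Path("/etc/apt/apt.conf.d/20auto-upgrades").write_text(
    'APT::Periodic::Update-Package-Lists "1";\n'
    'APT::Periodic::Unattended-Upgrade "1";\n'
)
```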
FTP, short for File Transfer Protocol, is a popular protocol for transferring files to and from an FTP server. However, it is fraught with security risks since it sends data and sensitive information such as usernames and passwords in plain text. VSFTPD (Very Secure FTP Daemon) is a fast, secure, and stable FTP server that uses encryption to secure data exchanged with the server.
Going from ~11 FPS to ~602 FPS for an open-source game marks the latest work on Zink for OpenGL atop Vulkan within Mesa.
Last week with my latest Zink OpenGL-on-Vulkan benchmarks, the promising Tesseract game was among the titles tested. While Tesseract hasn't seen a new release in more than half a decade, it remains open-source and benchmark-friendly, which is why it made the cut.
Zink continues to be a promising Mesa driver for Linux that runs OpenGL on top of Vulkan. It's not finished yet and so continues seeing some big performance fixes.
The latest comes from another blog post by developer Mike Blumenkrantz, who noted from a Phoronix benchmark that performance actually went down recently with Zink instead of up. The game in question was Tesseract, an open-source engine derived from Cube 2: Sauerbraten with more modern rendering features added in.
The food system is about to get a bit more depth in the Hearth & Home update for Valheim, with new items and ways to cook - and there's a way to puke it all up too.
Not only does the new system spread out foods into different categories based on what they will give you (like more health, more stamina), they've also split the meats from different animals now too. Inventory management was already a nuisance and this is probably going to amplify that problem unless they have some new tricks they've not shown yet.
You will also get onions to plant, and cooking has been extended with new steps too. Your cauldron now needs cooking extensions built like other crafting stations do, and bread / pies need to be baked, so everything takes that little bit longer. The highlight though is clearly the Bukeberries, enabling you to throw up all your food if you decide you want to devour a different type.
The first half of this article introduced Shipwars, a browser-based video game that’s similar to the classic Battleship tabletop game, but with a server-side AI opponent. We set up a development environment to analyze real-time gaming data and I explained some of the ways you might use game data analysis and telemetry data to improve a product.
KDE Plasma 5.22.5 is here as the last point release in the series. It improves the System Monitor utility to correctly display IPv4 address information when IPv6 is disabled and to make the “Export Page” function work as it’s supposed to, updates the Plasma panels to use the correct edge-specific theme graphics when available, and makes the window maximization and full-screen effects cross-fade again.
Also improved in this release is the Digital Clock widget, whose calendar popup’s header now displays correctly in right-to-left (RTL) text mode and whose list of timezones is now scrollable. Moreover, the KDE Plasma 5.22.5 update improves the Plasma Discover graphical package manager so that some of its UI elements display shortcut keys in their tooltips.
The latest version of Manjaro Linux, codenamed Pahvo, includes improvements associated with the official desktops along with several new features.
Manjaro is a powerful Arch-based Linux distribution that provides a coherent system out of the box. If you want to experience the power of Arch Linux without having to deal with the initial learning curve, try your hand at Manjaro.
The developers recently released Manjaro 21.1.0 Pahvo, the latest stable version of this distro. Check out what's new in this iteration below.
I read a lot of negativity about YaST on the webs, Reddit, YouTubes… other places… and I wanted to write a counter to all those negative statements. Why? YaST was the biggest selling point for me to go openSUSE when I departed the Mandrake / Mandriva world about 10 years ago (at the time of writing). I use YaST regularly and have grown to truly enjoy the tools for system administration. I am not good at remembering the various commands in the terminal to do a thing even though I do take a number of notes. YaST is just so quick to get to a solution, especially when there are lots of little steps involved. I originally was going to make it 8 reasons, then 10 but after getting to 16, I decided I had to pare it down and will probably have to do follow up blatherings on the various modules. Here are my reasons why YaST makes openSUSE Awesome.
Consolidated Control Center of Tools
This is my primary love for YaST. I know I can go to one place and get to any system-level function for my computer. I have this general requirement that I want all my tools in one spot; I do not want to have to hunt for the proper tool to accomplish a specific task. With YaST, I get that, and managing my openSUSE machines is super convenient. I don’t have to remember any esoteric commands in the terminal; as much as I love the terminal and the power it provides, I often cannot remember the commands to fix or alter a thing. This is especially true with functions I do not perform regularly.
openSUSE set the standard for me with YaST: for me to consider any Linux distribution, it must have a “Control Center” for all my system management tools. Basically, at this point, I am spoiled, and although I can get along fine with other distributions, I never feel fully comfortable with a system that doesn’t have this luxury item.
We’re early on in this brave, new world of hybrid work, but already there are some trends worth keeping an eye on, say management experts and IT leaders who are busy bridging the remote and in-office gap.
Consider these emerging issues that technology leaders should monitor and manage during this transition.
I've been working on portals recently and one of the issues for me was that the documentation just didn't quite hit the sweet spot. At least the bits I found were either too high-level or too implementation-specific. So here's a set of notes on how a portal works, in the hope that this is actually correct.
First, portals are supposed to be a way for sandboxed applications (flatpaks) to trigger functionality they don't have direct access to. The prime example: opening a file without the application having access to $HOME. This is done by the application talking to a portal instead of implementing the functionality itself.
There is really only one portal process: /usr/libexec/xdg-desktop-portal, started as a systemd user service. That process owns a DBus bus name (org.freedesktop.portal.Desktop) and an object on that name (/org/freedesktop/portal/desktop). You can see that bus name and object with D-Feet; from DBus' POV there's nothing special about it. What makes it the portal is simply that the application running inside the sandbox can talk to that DBus name and thus call the various methods. Obviously xdg-desktop-portal needs to run outside the sandbox to do its thing.
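To make "the application just talks to that DBus name" concrete, here is a hedged Python sketch that calls the OpenURI portal interface on that bus name using dbus-python; the empty parent-window handle and the URI are placeholder values, and the dependency on dbus-python is an assumption for the example.

```python
# Hedged sketch: calling a portal method like any other DBus service.
# From DBus' point of view nothing about this name or path is special.
import dbus

bus = dbus.SessionBus()
portal = bus.get_object("org.freedesktop.portal.Desktop",
                        "/org/freedesktop/portal/desktop")

openuri = dbus.Interface(portal, "org.freedesktop.portal.OpenURI")
# OpenURI(parent_window, uri, options) returns the object path of a Request
request_path = openuri.OpenURI("", "https://www.freedesktop.org",
                               dbus.Dictionary({}, signature="sv"))
print("portal request object:", request_path)
```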
We recently published the results of our survey from earlier this year where we asked more than 500 IT and Security practitioners about their container and Kubernetes adoption and security strategies. One of the key takeaways was that organizations need to build a bridge between DevOps and security to realize the benefits of tools like containers and Kubernetes. This is because responsibility for securing cloud-native development tools like these is highly decentralized.
This is the third and final part of a series I promised during my Nest With Fedora talk (also called “Exploring Our Bugs”). In this post, I’ll analyze the time it takes to resolve bug reports from Fedora Linux 19 to Fedora Linux 32. If you want to do your own analysis, the Jupyter notebook and source data are available on Pagure. These posts are not written to advocate any specific changes or policies. In fact, they may ask more questions than they answer.
An important consideration when looking at bugs is the time to resolution (TTR). How quickly are bugs resolved one way or another? The first thing I looked at is the TTR across all of our releases. As you might expect, it skews very heavily to the left. One surprising thing is how many bugs took multiple years to close. As a percentage, it was relatively small, but some bugs went almost 14 years before being closed. This is another time I wish it were easy to get a count of how many times a bug has been bumped to a later version.
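For readers who want a feel for the kind of calculation involved (this is a hypothetical sketch, not the actual notebook from Pagure), TTR can be computed roughly like this with pandas; the CSV file and the opened/closed/release column names are assumptions for illustration.

```python
# Hypothetical sketch of a time-to-resolution calculation over bug data.
import pandas as pd

bugs = pd.read_csv("fedora_bugs.csv", parse_dates=["opened", "closed"])

# Time to resolution in days for every closed bug
bugs["ttr_days"] = (bugs["closed"] - bugs["opened"]).dt.days
closed = bugs.dropna(subset=["ttr_days"])

print(closed["ttr_days"].describe())                    # overall skew at a glance
print(closed.groupby("release")["ttr_days"].median())   # per-release medians

# How many bugs stayed open for many years?
print((closed["ttr_days"] > 5 * 365).sum(), "bugs took longer than five years to close")
```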
After several months of development, Linux Lite 5.6 is here, based on the recently released Ubuntu 20.04.3 LTS point release in the Ubuntu 20.04 LTS (Focal Fossa) operating system series, but it ships with the Linux 5.4 LTS kernel by default instead of the newer Linux 5.11 HWE (Hardware Enablement) kernel borrowed from the Ubuntu 21.04 (Hirsute Hippo) release.
However, Linux Lite users will be able to install any kernel they want, from Linux 3.13 to the latest Linux 5.14, from the distro’s repositories. On top of that, the Linux Lite 5.6 release ships with Python3 as the default Python implementation, the ability to install Linux Lite directly from the Lite Welcome tool, an updated Help Manual and Papirus icon theme, new wallpapers, and various bug fixes.
When you click on the download button on the Ubuntu website, it gives you a few options. Two of them are Ubuntu Desktop and Ubuntu Server.
This could confuse new users. Why are there two (actually 4 of them)? Which one should be downloaded? Ubuntu desktop or server? Are they the same? What is the difference?
I am going to explain the difference between the desktop and server editions of Ubuntu. I’ll also explain which variant you should be using.
While focused on the openSUSE Innovator initiative as an openSUSE member and official Intel oneAPI innovator, I tested the Beast Canyon NUC 11 machine on openSUSE Leap 15.3 and Tumbleweed. With all this work, we have made available in the SDB an article on how to use the GNA technology on the openSUSE platform. More information can be found at https://en.opensuse.org/SDB:Install_GNA_in_NUC_Beast_Canyon.
Beast Canyon (still on pre-order) is the highest-performing Intel® NUC available today. Beast Canyon is the evolution of Intel’s modular gaming mini PC, a more compact gaming PC than most gamers could dream of building on their own. Some models come equipped with the Core i9-11900KB processor with the GNA feature: the Gaussian & Neural Accelerator library.
Arm China (安谋科技) has apparently split from Arm Holdings in an interesting saga. Last year we noted the Allwinner R329 processor featured an Arm China AIPU with 256 MOPS. But this AI accelerator was nowhere to be found on the official Arm website, which seemed odd.
But it appears there have been conflicts with and within Arm China for a while. Allen Wu, who was President of ARM Greater China and a member of the Executive Committee between 2014 and 2018, and has been Chairman and CEO of Arm Technology (China) since April 2018, set up an investment fund called Alphatecture Hong Kong Ltd in 2019 in order to invest in Bestechnic, an ARM licensee and developer of audio chips, reaping hundreds of dollars in profit for himself.
Arm Technology (China)’s board of directors was not impressed and voted 7 to 1 on June 4, 2020 to dismiss him, but Allen refused to leave since he still holds the company’s seal, and he remains the legal representative of the company according to the Chinese law. There’s a legal process to retrieve the company’s seal, but it can take years.
Website analytics is a crucial tool for website admins, content creators, and marketers. While Google Analytics is the primary choice for many site admins and content creators, it is not easy to use, manage, or learn.
Furthermore, Google Analytics is a web service hosted and managed by Google, so users neither own their data nor have much control over the analytics script.
So, here Plausible comes to the rescue as an open-source alternative that focuses on privacy and simplicity.
Plausible is completely free, self-hosted web analytics software that lets website admins, owners, and content creators keep track of visitor and reader activity.
Unlike several other self-hosted open-source web analytics tools, Plausible is made to follow and comply with the new EU user privacy standards.
It is the right choice for bloggers, freelancers, startups and companies.
A detailed agenda will be announced in a few weeks. Current thinking however is to center the agenda on Rust interest groups and domain working groups, those brave explorers who are trying to put Rust to use on all kinds of interesting domains, such as game development, cryptography, machine learning, formal verification, and embedded development. If you run an interest group and I didn’t list your group here, perhaps you want to get in touch! We’ll be talking about how these groups operate and how we can do a better job of connecting interest groups with the Rust org.
On 13 September, Mozilla will host the next installment of Mozilla Mornings – our regular event series that brings together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.
For this installment, we’re checking in on the Digital Markets Act. Our panel of experts will discuss the key outstanding questions as the debate in Parliament reaches its fever pitch.
At some point Firefox started remembering which window was open on which desktop, which means that if you're running KDE, GNOME, etc., and open several Firefox windows on different virtual desktops, when you restart Firefox, each window will be restored to the desktop it was open on. Apparently that started with Firefox 77.
The Foreman OpenSCAP plugin stores security scanner results in Foreman’s PostgreSQL database, providing an integrated UI, API, and CLI experience. To simplify our use case, let’s assume that each report has a many-to-many association to security rules, with some result (pass or fail) per rule. This gives us two main SQL tables, report and rule, plus a join table between them. For simplicity, let’s ignore the result, which would be an extra column in the join table.
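As a minimal sketch of the simplified schema just described (with the result column included for completeness even though the discussion ignores it), the tables look roughly like this; the real plugin uses PostgreSQL and its own table and column names, and sqlite3 is used here only to keep the example self-contained.

```python
# Hypothetical sketch of the simplified report/rule schema with a join table.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE reports (id INTEGER PRIMARY KEY);
CREATE TABLE rules   (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE report_rules (
    report_id INTEGER NOT NULL REFERENCES reports(id),
    rule_id   INTEGER NOT NULL REFERENCES rules(id),
    result    TEXT NOT NULL CHECK (result IN ('pass', 'fail')),
    PRIMARY KEY (report_id, rule_id)
);
""")

# Count failing rules per report -- the kind of aggregate a UI would need
rows = db.execute("""
SELECT report_id, COUNT(*) AS failures
FROM report_rules
WHERE result = 'fail'
GROUP BY report_id
""").fetchall()
print(rows)
```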
Part of the Free Software Foundation's (FSF) core mission is to advance policies that will promote the progress of free software and freedom. Because copyright handling has been a topic of concern lately, we are taking this opportunity to explain the four purposes behind FSF copyright handling, as well as examine the impact of potential alternatives.
For some GNU packages, the ones that are FSF-copyrighted, we ask contributors for two kinds of legal papers: copyright assignments, and employer copyright disclaimers. We drew up these policies working with lawyers in the 1980s, and they make possible our steady and continuing enforcement of the GNU General Public License (GPL).
These papers serve four different but related legal purposes, all of which help ensure that the GNU Project's goals of freedom for the community are met.
One purpose is to give explicit permission to include the material in that GNU package. That is the most basic need.
The second purpose is to empower the FSF to go to court and say, "That company is infringing our copyright when it tramples the freedom of users, denying them the freedom that our license gives them." The assignment does this by transferring the copyright to the FSF. (This form of support for GNU is one of the original purposes for founding the FSF.)
A third purpose is to make it possible to add additional permission to specific pieces of code. For example, to take code released under GNU GPL version-3-or-later and release it under GNU Lesser GPL version-3-or-later.
Today, we are launching IPFire on AWS ARM-based instances, making IPFire cheaper, more versatile and more secure for all your cloud-based projects.
ARM-based instances from AWS have been around for a little while now, and Lightning Wire Labs has ported IPFire to them with IPFire 2.25 - Core Update 160.
The cloud is here to stay, and Lightning Wire Labs is proud to have a large customer base with large cloud environments secured by IPFire.
Most people only know that I work in IT. Some even call me a hacker – which I really appreciate :-) However, by university degree I am an environmental engineer (and an English–Hungarian translator). Even though I never worked in my field, except for some student jobs, I still follow any news related to the environment closely. This is why I was very happy to learn that my home city, Budapest, has introduced bee pastures in the city.
[...]
Taking a side and seeing everything in black and white is easier: less thinking, a feeling of belonging. But that’s not how I work, in IT or in real life.