Usually, the operating system on your computer doesn't let you change how it works. But some operating systems give you full control over every aspect, and even let you distribute your own modified version freely. These are known as free and open-source software, made primarily to give home users more control over their software.
Now, there are many viable options out there, and Linux is undoubtedly the most prominent among them, having reached a leading position among non-proprietary operating systems. There are many Linux distributions to choose from, and the information below covers what you need to know about them.
This blog title should really be, “Why you always, always, always want conflict detection turned on on all the networks MAAS touches,” but that’s far too long for a title. Hear me out.
As promised, here is another DHCP blog, this time explaining how you can have multiple DHCP servers on the same subnet, serving overlapping IP addresses. There are a lot of network-savvy folks who will tell you that serving the same set of IP addresses from two different DHCP servers just won’t work. While that’s a really good rule to follow, it isn’t totally accurate under all conditions.
It’s possible to have more than one DHCP server on the same network and still have everything work right, with no conflicts and no dropped packets or IP requests. It’s really not that hard to pull together, either, but there are some things to know, and some things to consider before we investigate that situation. For this blog, we’ll put some of the overlooked facets of DHCP in bold text. Let’s take a look.
I recently started working for InfluxData as a Developer Advocate on Telegraf, an open source server agent for collecting metrics. Telegraf builds from source and ships as a single Go binary; the latest release, 1.19.1, went out just yesterday. Part of my job involves helping users by reproducing reported issues, and assisting developers by testing their pull requests. It’s fun stuff, I love it.

Telegraf has an extensive set of plugins which support gathering, aggregating and processing metrics, and sending the results on to other systems. Given that breadth, and the super-diverse ways our users deploy Telegraf, I sometimes have to stand up one-off environments to reproduce reported issues. So I thought I’d write up the basics of what I do, partly for me, and partly for my co-workers who also sometimes need to do this.

My personal and work computers both run Kubuntu 21.04, but issues are sometimes reported against Telegraf on other Linux distributions, or on LTS releases of Ubuntu. In the past I’d use either VirtualBox or QEMU to create entire virtual machines for each Linux distribution or product I was working with. Both can be slow to stand up clean machines, and take a fair chunk of disk space. These days I prefer LXD, a system container manager whose development is funded and led by Canonical, my previous employer. It’s super lightweight, easy to use and fast to set up, so it’s the tool I reach for most for these use cases. Note that LXD can also launch virtual machines, but I tend not to use that feature, preferring lightweight containers.
[...]
I’ve been a big fan of LXD for some years now. I’ve found it a super fast, reliable way for me to spin up lightweight machines running random Linux distributions, and throw them away when done. It helps keep all those random and unstable pieces of software I’m testing nicely compartmentalised, and easy to nuke.
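That spin-up-and-throw-away workflow is only a handful of commands. A minimal sketch of a typical LXD session (the image alias and container name here are just examples):

```shell
# Launch a fresh Ubuntu 20.04 container
lxc launch ubuntu:20.04 test-env

# Run commands inside it, e.g. open a shell
lxc exec test-env -- bash

# When finished, throw the whole environment away
lxc delete --force test-env
```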
A lot of Bitwarden clients and interfaces are missing absolutely basic features needed to make them even remotely usable, but today we're looking at one that integrates with Dmenu and actually does everything I would need.
This week we’ve been configuring new-ish HP Microservers and entering our first game jam. We discuss Project Kebe, an open source Snap Store implementation, and respond to all your wonderful feedback.
It’s Season 14 Episode 18 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.
AMD engineers and their partners continue work towards upstreaming Secure Encrypted Virtualization's Secure Nested Paging (SEV-SNP) support for the mainline Linux kernel.
AMD SEV-SNP debuted this year with EPYC 7003 "Milan" processors. SEV-SNP offers additional hardware features for EPYC's virtualization capabilities. With SEV-SNP there are additional memory integrity protections around replay protection, data corruption, memory aliasing, and memory re-mapping. There are also other hardware protections with SEV-SNP as outlined in the comparison below.
As many developers in the northern hemisphere start to dream about their quickly arriving summer vacations, the increasing pace of kernel development gives no sign of taking a break any time soon. In fact, 5.13 was a record breaking release in the number of developers: exactly 2,062 developers contributed to this release - 336 of them for the first time. This was also the first kernel release with over 2,000 unique contributors. Collabora is, of course, the proud employer of a small, but very active, fraction of these developers.
As usual, our team is working all around the kernel, fixing bugs and writing new features. In this release, Boris Brezillon fixed some hard-to-track bugs in the Panfrost DRM driver, improving the overall support for the platform. Ezequiel Garcia continued to improve the VP8 stateless codec and this time he moved part of the driver out of staging. Sebastian Reichel, who is on a quest to organize the device-tree description of drivers on the Power-Supply subsystem that he maintains, converted most of the DT descriptors to use the DT schema, such that they can be checked for compliance automatically. Dafna Hirschfeld and Enric Balletbo i Serra worked on MediaTek devices, fixing DRM/multimedia bugs and improving power management support, respectively.
Thomas Gleixner has announced the release of the real-time "RT" patches for Linux 5.13, the first update since the patches were re-based back during the 5.12 release candidates.
This morning's 5.13-rt1 release re-bases these real-time patches against the Linux 5.13 code-base, contains a rework of the locking core code, also reworks "large parts" of the memory management code, and other updates.
QNAP provides the QTS 5.0 operating system as a beta version for some of its NAS systems. The update comes with some fundamental changes, including an updated Linux kernel to version 5.10. Until now, the manufacturer used kernel versions 4.2.8 or 4.14.24, depending on the processor.
In practical use, the new Linux kernel is intended to increase the speed of PCI Express SSDs with the NVMe protocol, in particular when they function as a cache, for example when buffering data for hard drives. QNAP's announcement also mentions “improvements with AMD processors” without going into detail.
At the end of last year we reported on the possibility of an Intel Command Center / graphics driver control panel for Linux, though nothing was set in stone. The latest word on an Intel Linux graphics GUI is that it's still being evaluated by the company.
When recently inquiring about the state of IGC compiler usage by their Mesa drivers, I also asked whether there was anything new to report on the prospects of an Intel Linux graphics driver control panel akin to the Intel Command Center on Windows.
Today's benchmarking looks at how the GNU Compiler Collection has performed over the past few years, from the GCC 8 stable series introduced in 2018 through the recently released GCC 11.1 stable feature release, plus the current early development snapshot of GCC 12.
Benchmarked are GCC 8.5, 9.4, 10.3, 11.1, and 12.0 (20210701). All of these compiler releases were benchmarked using the same Intel Core i9 10980XE (Cascade Lake X) system running Ubuntu 21.04 with the Linux 5.11 kernel.
Each compiler was built from source in the same (release) manner. A variety of open-source C/C++ benchmarks were then carried out each time in looking at the resulting performance while keeping to the same CFLAGS/CXXFLAGS throughout the entire duration of benchmarks. This follows our other recent GCC 11 vs. LLVM Clang 12 compiler benchmarking, tuning flag comparisons, and more. If there is enough interest similar compiler comparison tests can also be done on AArch64.
The community's telemetry concerns were received and addressed two months ago.
If you want an enhanced level of privacy protection, transparency of the VPN service, and full-fledged Linux support, ProtonVPN is a fantastic choice.
However, the pricing plan may prove to be expensive if you want to use it on more than two devices compared to other VPN providers.
I think it is worth it if you regularly rely on a VPN connection to hide your IP address, use torrents, bypass geographic restrictions, and more. And if you rarely use a VPN, you could look at some of the other VPN options available for Linux.
What do you think about ProtonVPN? Have you tried it yet? Let me know your thoughts in the comments down below.
You often hear that disk space is cheap and plentiful. And it’s true that a 4TB mechanical hard disk drive currently retails for less than 100 dollars. But like many users we’ve migrated to running Linux on M.2 Solid State Drives (SSDs). They are NVMe drives reaching read and write speeds of over 5,000MB/s. That’s over 20 times faster than a 7,200 RPM traditional hard drive.
M.2 SSDs do functionally everything a hard drive does, but help to make a computer feel far more responsive. They are NVMe drives, which reduces I/O overhead and brings various performance improvements relative to previous logical-device interfaces, including multiple long command queues and reduced latency. M.2 drives are more expensive than mechanical hard drives in terms of dollars per gigabyte, and really large capacities are thin on the ground and pricey, so most users settle for lower-capacity drives.
cloud-init is an awesome technology for customizing Linux images for deployment. It lets you do all kinds of neat things such as automatically creating users, installing packages, resetting SSH keys, and more. However, it's often shrouded in mystery. In this video, I'll walk you through using it to create a user, set the hostname, and install some packages.
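Those three tasks map directly onto a short cloud-config file. A minimal sketch (all names and packages here are illustrative; the launcher shown in the comment is just one of several that accept user-data):

```shell
# Write a minimal cloud-init user-data file (all values are examples)
cat > user-data.yaml <<'EOF'
#cloud-config
hostname: demo-host

users:
  - name: demo
    groups: sudo
    shell: /bin/bash

packages:
  - htop
  - tmux
EOF

# You would then hand this file to your image launcher, e.g.:
#   multipass launch --cloud-init user-data.yaml
cat user-data.yaml
```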
In this example, we will show how to modify an existing unit file.
There are three main directories where unit files are stored on the system but the ‘/etc/systemd/system/’ directory is reserved for unit files created or customized by the system administrator.
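In practice there are two common ways to customize a unit from '/etc/systemd/system/': a drop-in override or a full copy. A hedged sketch, using nginx.service purely as an example unit:

```shell
# Create or edit a drop-in override for an existing unit; this writes
# /etc/systemd/system/nginx.service.d/override.conf
sudo systemctl edit nginx.service

# Or copy the full unit file and edit the copy, which takes precedence
# over the vendor version shipped under /lib/systemd/system/
sudo cp /lib/systemd/system/nginx.service /etc/systemd/system/
sudo nano /etc/systemd/system/nginx.service

# Reload systemd so it picks up the changes, then restart the unit
sudo systemctl daemon-reload
sudo systemctl restart nginx.service
```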
TeamViewer is a solid software solution for users seeking a reputable remote desktop and access solution. TeamViewer promotes itself with five useful mantras: Connect, Engage, Support, Enhance, and Manage. If your remote desktop and access solution ticks these five boxes, then you have found a haven.
With the TeamViewer suite, you can achieve customer-first engagement, augmented reality support, IT management, and remote connectivity. Connections from any device are possible, and users and processes can be supported regardless of time zone and location.
When you deploy a container on your network, if it cannot find a DNS server defined in /etc/resolv.conf, by default it will take on the DNS configured for the host machine. That may be fine and dandy for certain situations. But what if (maybe for security reasons), you do not want your containers using the same DNS as your hosts. Say, for example, your host servers use a specific DNS server to prevent users from visiting particular sites. Or maybe you have different DNS configurations for VPNs.
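With Docker, for example, DNS can be overridden per container or for the whole daemon. A sketch under the assumption that 10.0.0.53 is your alternative DNS server:

```shell
# Override DNS for a single container (server address is an example)
docker run --dns 10.0.0.53 --rm -it alpine cat /etc/resolv.conf

# Or set a default for all containers in /etc/docker/daemon.json:
#   { "dns": ["10.0.0.53"] }
# then restart the daemon so it takes effect:
sudo systemctl restart docker
```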
CentOS 8 will reach end-of-life on December 31, 2021. So if you are using the CentOS 8 operating system, it is recommended to migrate to a CentOS alternative distribution such as AlmaLinux.
In this guide, we will show you how to migrate CentOS 8 to the new AlmaLinux 8.3.
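At the time of writing, AlmaLinux publishes an official in-place migration script; the outline below is a sketch of that documented flow (check the AlmaLinux wiki for the current URL and caveats before running it on a production system):

```shell
# Make sure the CentOS 8 system is fully up to date first
sudo dnf update -y

# Fetch AlmaLinux's migration script (URL current as of writing)
curl -LO https://raw.githubusercontent.com/AlmaLinux/almalinux-deploy/master/almalinux-deploy.sh

# Run it, then reboot into AlmaLinux
sudo bash almalinux-deploy.sh
sudo reboot
```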
Zlib is an open source library used for data compression.

As an end user, you are likely to encounter the need to install Zlib (or the zlib devel package) as a dependency of another application.

But here comes the problem. If you try installing Zlib on Ubuntu, it will throw an “unable to locate package zlib” error.
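The error appears because Ubuntu doesn't ship a package literally named "zlib"; the library lives in zlib1g and its development headers in zlib1g-dev. A quick sketch:

```shell
# Install the zlib runtime library and development headers
sudo apt update
sudo apt install zlib1g zlib1g-dev

# Verify the development files are in place
pkg-config --modversion zlib
```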
Opening folders in Ubuntu is one of the basic tasks you will perform as a regular Ubuntu user. Although there are many ways to do so, we all have our preferences in which way to opt for when accessing folders on our system.
Encryption and security for protecting files and sensitive documents have long been a concern for users. Even as more and more of our data is housed on websites and cloud services, protected by user accounts with ever-more secure and challenging passwords, there's still great value in being able to store sensitive data on our own filesystems, especially when we can encrypt that data quickly and easily.
Age allows you to do this. It is a small, easy-to-use tool that allows you to encrypt a file with a single passphrase and decrypt it as required.
The Age tool allows you to encrypt and decrypt sensitive files with a single passphrase, says Sumantro Mukherjee.
This article provides step-by-step instructions to set up and use the tool.
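The day-to-day usage is brief enough to sketch here (file names are examples; the passphrase prompt is interactive):

```shell
# Encrypt a file with a passphrase (-p); age prompts for it
age -p -o secrets.txt.age secrets.txt

# Decrypt it again
age -d -o secrets.txt secrets.txt.age

# age also supports key pairs instead of passphrases:
age-keygen -o key.txt                           # prints the public key
age -r <public-key> -o secrets.txt.age secrets.txt
age -d -i key.txt secrets.txt.age > secrets.txt
```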
Pylint is a Python static code analysis tool which looks for programming errors, helps enforce a coding standard, sniffs for code smells, and offers simple refactoring suggestions.
It’s highly configurable, with special pragmas to control its errors and warnings from within your code, as well as an extensive configuration file. It is also possible to write your own plugins to add checks or to extend Pylint in one way or another.
One of the great advantages of Pylint is that it is open source and free, so you can include it in a wide variety of projects. It also integrates seamlessly with many popular IDEs, and you can run it as a standalone application for extra flexibility.
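A quick sketch of standalone usage, including an inline pragma (module and variable names are examples):

```shell
# Install and run Pylint against a module
pip install pylint
pylint my_module.py

# Inline pragmas control messages from within the code, e.g.:
#   x = 1  # pylint: disable=invalid-name
# and a project-wide configuration can be generated with:
pylint --generate-rcfile > pylintrc
```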
ODBC is an open specification for providing application developers with a predictable API with which to access Data Sources. Data Sources include SQL Servers and any Data Source with an ODBC Driver.
With the need for an open-source implementation and compatibility with other operating systems, unixODBC was born. This project also has a graphical interface that you can use but its potential is in the binaries that offer compatibility with this implementation.
You may have encountered commands which take a while to complete. There are some tricks that can help you regain control after launching such a command, so that you can do other things in the meantime.
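The basic job-control moves can be sketched like this (using `sleep` as a stand-in for any long-running command):

```shell
# Append '&' to run a command in the background
sleep 30 &
bg_pid=$!

# 'jobs' lists the shell's background jobs
jobs

# Ctrl+Z suspends a foreground job; 'bg %1' resumes it in the
# background, and 'fg %1' brings it back to the foreground.
# 'nohup some_command &' keeps it running after you log out.

# Clean up the example job
kill "$bg_pid"
```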
It is now possible to upgrade Linux Mint 20 and 20.1 to version 20.2.
If you’ve been waiting for this we’d like to thank you for your patience.
I made a video last week where I did a base installation of Gentoo inside a virtual machine. Viewers asked me if I could do a video on how to install Xorg and desktop environment or window manager. So...continuing along on the Gentoo journey, I'm going to show you how to install Xorg and Dwm.
Some new users have trouble understanding the Linux directory structure, so I thought I'd take a moment to demystify those strange folder names. It's not nearly as complicated as you might think. Once you understand what's what, it all starts to make sense.
That said, let's take a look at these strange directories.
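As a quick orientation, the standard top-level layout (per the Filesystem Hierarchy Standard) looks like this:

```shell
# A few of the top-level directories on a typical Linux system
ls -d /bin /etc /usr

# /bin, /sbin  - essential command binaries
# /etc         - system-wide configuration files
# /home        - users' personal directories
# /usr         - installed software and libraries
# /var         - variable data: logs, caches, spools
# /tmp         - temporary files, cleared on reboot
# /dev, /proc, /sys - device nodes and kernel interfaces
# /boot        - kernel and bootloader files
```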
Think of the interests of people in the primitive age and archery quickly comes to mind: people back then had to practice it to gather food. Nowadays there is little scope to practice this interesting activity. Some people still develop an interest in it as a hobby, but good resources for practicing it are scarce, and you may only get to try it in competition. However, if putting an arrow right on the mark appeals to you, the Play Store has good news: you can now try the best archery games for Android to practice archery.
Survival game Vintage Story is going to be expanding in a big way with the upcoming Homesteading update, which brings a whole decoration system and now has a Release Candidate out.
Vintage Story is an uncompromising wilderness survival sandbox game inspired by Lovecraftian horror themes. You're probably tempted to say "hey, it looks like Minecraft", but it's far from it. Having played it, it's clearly aiming at a different market. You find yourself in a ruined world reclaimed by nature and permeated by unnerving temporal disturbances. Everything about it takes time; it's a slow and thoughtful game with some pretty deep game mechanics.
A big focus for VKD3D-Proton 2.4 has been on the performance front. VKD3D-Proton 2.4 improves swapchain latency and frame pacing by up to one frame, avoids pipeline compilation stutter in some cases, rewrites the image layout handling code to improve GPU-bound performance, and more. Thanks to the improved image layout handling for color and depth-stencil targets, games like Horizon Zero Dawn can be 15~20% faster, Death Stranding is ~10% faster, and many other games are 5~10% faster.
VKD3D-Proton is the project that Valve funds to act as a translation layer from Direct3D 12 to Vulkan for use with Steam Play Proton, and a big new release is out. If you wish to know more about Steam Play and Proton, do check out our dedicated section.
Version 2.4 brings multiple performance enhancements, including improved swapchain latency and frame pacing by up to one frame, optimized lookup of format info, and avoidance of potential pipeline compilation stutter in certain scenarios. They also noted some significant GPU-bound performance improvements thanks to a rewrite of how it handles image layouts for colour and depth-stencil targets, with Horizon Zero Dawn seeing a noted ~15-20% GPU-bound uplift, Death Stranding ~10%, and around 5-10% in other titles too.
In need of some new games and don't know where to look? Here to help! Fanatical just recently launched the Killer Bundle 18 and it's a pretty good one. Note: we're not affiliated with Fanatical currently, just pointing out a good deal.
The bundle comes in two editions, giving you either 8 games/DLC at £4.29 or 11 for £6.89. It's an incredibly easy deal to recommend since it contains some really good games.
Following on from the Vulkan upgrade, Good Shepherd Entertainment and developer Urban Games have released the next major upgrade for Transport Fever 2. This Summer Update touches on all parts of the game, greatly improving many aspects.
Plasma Mobile just got a brand new selection of ringtones, alarms and system sounds thanks to the participants of the KDE/LMMS Plasma Mobile Sound Contest. Judges from both communities selected a variety of sounds made specifically for Plasma Mobile and now we are thrilled to announce the winners!
In addition to The Qt Company being busy at work on the Qt 6.2 toolkit, they have also been busy preparing Qt Creator 5.0 as their Qt/C++ focused integrated development environment.
Over 120 individual programs plus dozens of programmer libraries and feature plugins are released simultaneously as part of KDE Gear.
Today they all get new bugfix source releases.
The last maintenance release of the 21.04 series is out with improvements to same track transitions, improved Wayland support, as well as fixing issues with rotoscoping and the speech to text module. This version also adds support for the WebP image format. Due to technical issues there won’t be a Windows version for this release.
Pop!_OS is a modern-day take on Ubuntu by System76, an American company that specializes in building Linux-based notebooks and servers. It provides a clean and concise desktop experience for professionals working in the STEM field.
Some of Pop!_OS's biggest features include native support for AMD and Nvidia GPUs, out-of-the-box encryption support, and a highly customizable workspace.
The latest version of Pop!_OS just hit the market and offers some exciting new features, including a fresh take on the GNOME environment called COSMIC. It also adds support for an intuitive dock as well as tiling window managers.
In this video, I am going to show an overview of Nitrux 1.5.0 and some of the applications pre-installed.
The kernel team is working on final integration for kernel 5.13. This version was recently released and will arrive soon in Fedora Linux. As a result, the Fedora kernel and QA teams have organized a test week from Sunday, July 11, 2021 through Sunday, July 18, 2021. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.
With the release of Red Hat Enterprise Linux (RHEL) 8.3 last November, Red Hat added support for resilient edge computing architectures based on proven features within RHEL. RHEL for Edge creates operating system images that update "atomically" with automatic rollback on failure.
These operating system images provide efficient updates to RHEL systems by delivering only the deltas to an updated operating system image instead of the complete image. RHEL for Edge enables rapid application of security updates and patches with minimum downtime and continuity of operations if updates should fail.
Red Hat has received a "positive" rating from Gartner, Inc. in its 2021 Vendor Rating Report. Among Gartner's findings is that "Red Hat is still Red Hat," and retains its neutrality and strategy independent of IBM.
Gartner published its "2021 Vendor Rating: IBM report1" in early June, and we're pleased to report that Red Hat received an overall positive rating* from Gartner. We believe we received the rating because we remain focused on delivering the Red Hat portfolio of open hybrid cloud products and services with IBM as one of many strategic partners that help us deliver the offerings for digital transformation that our customers need.
During my career, I've had many different job titles. Sysadmin, IT consultant, IT project manager, and DevOp to name a few. But are these different jobs or just other names for the same thing? Decide for yourself while reading about my roles in three different kinds of organizations.
This is the first in a series of five articles showing how developers can use an extensive set of metrics offered by Red Hat OpenShift to diagnose and fix performance problems. I'll describe a real-life success story where I did performance testing on the Service Binding Operator to get it accepted into the Developer Sandbox for Red Hat OpenShift. I'll describe the Service Binding Operator's performance challenges, how I planned my troubleshooting, and how I created and viewed the metrics.
This first article lays out the motivation for the whole effort, the Service Binding Operator's environment and testing setup, the requirements I had to meet to get the Service Binding Operator accepted into the Developer Sandbox, and the tooling made available by Developer Sandbox for performance testing.
This month I accepted 105 and rejected 6 packages. The overall number of packages that got accepted was 111.
Ubuntu is by far the most popular Linux distro in terms of its userbase. And if people are not using Ubuntu directly, they are likely using one of the many popular Ubuntu-based distros like Linux Mint, Pop!_OS, MX Linux, and the like. However, today, we will look at elementary OS – a popular Ubuntu-based distribution geared towards Mac users – and see how it stacks up against its old man.
You see, despite its popularity, Ubuntu isn’t loved by all users. In fact, the distro faces its fair share of criticism. This is where Ubuntu-based distros swoop in, taking the positives of Ubuntu, dumping the negatives, and throwing in some additional tweaks of their own to create a unique spin.
As such, with elementary OS, you get the stable Ubuntu base along with access to Ubuntu’s large software repository. However, elementary OS ditches Ubuntu’s user interface and aesthetics in favor of their own custom Mac-inspired beginner-friendly UI – the Pantheon desktop. It also gives a lot more attention to user privacy and security compared to Ubuntu.
But these are just the Cliff's Notes! Down below, we have a much more comprehensive take on elementary OS vs. Ubuntu, giving you a detailed look at both OSes' pros and cons. So if you are stuck choosing which distro is right for you, this read should definitely help out.
The Linux Mint team announced the second point release for Mint 20 today. It features kernel 5.4, an Ubuntu 20.04 package base, and Cinnamon 5.0, Xfce 4.16, or MATE 1.24 depending on the desktop edition.
Linux Mint 20.2 will be supported until 2025. It comes with an improved Update Manager that supports installing updates for applets, desklets, themes, and extensions.

In addition, it now displays software update notifications when an update has been available for more than 7 logged-in days or is older than 15 calendar days. You can change these time periods or disable the notifications entirely.
Linux Mint 20.2 is now available to download. In this tutorial post, we will show you the easiest way to upgrade to Linux Mint 20.2. Linux Mint 20.2 “Uma” is a long-term support release and it is supported until 2025. It is based on Ubuntu 20.04 LTS.
Good news for Linux Mint users: Linux Mint 20.2 is now available to download. Linux Mint 20.2 is a long-term support release and it is supported until 2025.
Linux Mint 20.2 “Uma” is based on Ubuntu 20.04 LTS. The latest version of Linux Mint 20.2 comes with updated software and new features to make your desktop experience more comfortable.
Linux Mint 20.2 is a long term support release which will be supported until 2025. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.
Linux Mint 20.2 "Uma" is now available as the latest update to this popular desktop Linux distribution built off Ubuntu LTS releases.
Out today is Linux Mint 20.2 with the Cinnamon, MATE, and Xfce desktop options. The flagship Linux Mint 20.2 edition with their GNOME-forked Cinnamon desktop is now up to version 5.0. Cinnamon 5.0 has a new search feature for its Nemo file manager, memory leak fixes, better handling of updates around Spices, and a wide variety of other mostly small updates.
The Linux Mint 20.2 release is now available to download. We look at what's new in Linux Mint 20.2 (there's a fair bit) plus share a link to download it.
Linux Mint, one of the more popular Linux distributions, has released version 20.2, nicknamed “Uma.” It comes with new features, upgraded internals, and other changes. Today we’re taking a closer look at what’s new.
Changes and Upgrades in Mint 20.2
The Linux Mint operating system has long been known for its user-friendliness and stability. Uma maintains that legacy, bringing only a few changes to Mint’s update manager and the stock app collection, in addition to several improvements under the hood. Below are the highlights.
Linux Mint 20.2 beta arrived a few weeks ago. And now, the final stable release for Linux Mint 20.2 is here.
This release is an LTS upgrade based on Ubuntu 20.04 that is supported until 2025.
Let us take a quick look at what’s new with this release and how to get it.
Aldec announced a “TySOM-M-MPFS250” SBC that runs Linux on Microchip’s RISC-V based, FPGA equipped PolarFire SoC and offers 2x GbE, 2x FMC, 2x micro-USB, PCIe x4, CAN, HDMI, and PMOD.
EDA design verification firm Aldec has introduced a new member of its Linux-driven TySOM family of FPGA prototyping boards, built around Microchip’s hybrid RISC-V/FPGA PolarFire SoC. The TySOM-M-MPFS250 is the first of a series of TySOM-M boards based on the low-power, security-enhanced PolarFire SoC, which combines Microchip’s PolarFire FPGA with SiFive RISC-V CPU cores. The RISC-V cores, roughly comparable to a Cortex-A35, comprise 4x 1.5GHz U54-MC CPU cores plus a fifth monitor core.
The card uses the existing JODY-W2 module, built around the NXP (previously Marvell) 88W8987 chipset with data rates up to 433 Mbit/s. The module requires a host processor running a Linux or Android operating system, and QNX support is in the works.
Taiwan Commate Computer Inc, also known simply as Commell, has unveiled the LE-37O, a 3.5-inch single board computer based on Intel Tiger Lake UP3 embedded processors for industrial applications.
The SBC supports up to 32GB DDR4 single-channel memory, up to four displays through DisplayPort, HDMI, VGA, and LVDS interfaces, and offers both Gigabit Ethernet and 2.5 GbE ports, as well as PCIe Gen4 via an M.2 socket.
[...]
The company provides a Windows 10 64-bit driver for the board; Linux support is unclear. It’s not the first Tiger Lake SBC from Commell, as we previously covered the Commell LP-179 Pico-ITX motherboard with a more compact form factor.
NASA’s history is full of fascinating facts and trivia. For example, the Apollo Guidance Computer, which handled navigation and control for both the Apollo Command Module and the Apollo Lunar Module, ran with less RAM and processing power than a TI-83 graphing calculator. But reading that fact isn’t the same as actually experiencing the Apollo Guidance Computer for yourself. That’s why a maker used three Arduino boards to create a simulator.
The actual Apollo Guidance Computer sat inside a protective metal enclosure that is rather boring, but the DSKY (Display and Keyboard) interface, which acted like a terminal, had a very distinctive design. It had 19 buttons, including a numerical pad, situated below two displays. The left display had several indicator lights, similar to your car’s dashboard, to show statuses and warnings. The right display had numerical readouts for the program number, verb, and noun, as well as data and a computer activity status light.
Libre-SOC, which started out as Libre RISC-V aspiring to be an open-source software/hardware Vulkan accelerator and was renamed Libre-SOC after switching to the OpenPOWER architecture, is now seeing test fabrication using TSMC's 180nm process.
This test ASIC is being fabbed thanks to Imec's MPW Shuttle Service. Libre-SOC aims to be a hybrid CPU/GPU that's 100% open-source, though obviously not the fastest compared to today's graphics processors. For graphics acceleration, the plan is basically to use Mesa's software OpenGL/Vulkan implementations atop this Power-based chip.
Nixie tubes are fun little devices that act like seven-segment display modules in that they can be lined up together in order to form a larger number by showing digits 0 through 9. One maker, Marcin Saj, has created a unique project that uses a series of six Nixie tubes that can show the current time, temperature, and humidity all within a compact footprint. It is also able to receive commands via the Arduino Cloud service and an Alexa skill, thus enabling users to toggle various functions on or off with a smart speaker or phone.
Bouffalo Lab BL602, and its big brother BL604 with extra GPIOs, are RISC-V microcontrollers with WiFi and Bluetooth LE that offer an alternative to Espressif Systems' ESP32 WiSoC, although they have since been joined by Espressif's own RISC-V solution, the ESP32-C3.
Soon after the “announcement” in October 2020, we found out about the SDK and a relatively cheap BL602 board, but the SDK contained many closed-source binaries. Soon after, Sipeed and Pine64 expressed interest in developing an open-source toolchain and even an open-source WiFi (and BLE) stack. Time has passed; I even got a Pinecone board in January, but did not do anything with it, especially given the state of the software.
OpenUK has released the second of its three-part probe into the state of open source in Britain, finding that an overwhelming majority of businesses use the wares – but noticeably fewer are willing to contribute code back.
"The first of its kind, this report makes visible the current business adoption of open source software in the UK and provides a baseline of what will be an annual review – to capture the growth, shifts and changes of open source software use in the UK in the coming years," claimed Jennifer Barth, PhD, founder of consultancy firm Smoothmedia and the leader of the research which led to the report.
A couple of months ago, I finally left Opera as my default browser on Linux. That was a hard sell because the Opera Workspaces feature was something I didn't think I could leave behind. And yet, the load the browser placed on my machine (especially when using Google Docs) was too big an issue to ignore. I'd be working along, minding my own business, when all of a sudden Opera would bring the desktop to a grinding halt.
As the Perf-Tools team, we are responsible for the Firefox Profiler. This tool is built directly into Firefox to help developers understand a program's runtime behavior and analyze it to make it faster. If you are not familiar with it, I would recommend looking at our user documentation.
If you are curious about the profiler but not sure how to get to know it, I’ve also given a FOSDEM talk about using the Firefox Profiler for web performance analysis this year. If you are new to this tool, you can check it out there.
During our talks with the people who use the Firefox Profiler frequently, we realized that new features can be too subtle to notice or easily overlooked. So we’ve decided to prepare this newsletter to let you know about the new features and the improvements that we’ve made in the past 6 months. That way, you can continue to use it to its full potential!
As the Digital Markets Act (DMA) progresses through the legislative mark-up phase, we’re today publishing our policy recommendations on how lawmakers in the European Parliament and EU Council should amend it.
We welcomed the publication of the DMA in December 2020, and we believe that a vibrant and open internet depends on fair conditions, open standards, and opportunities for a diversity of market participants. With targeted improvements and effective enforcement, we believe the DMA could help restore the internet to be the universal platform where any company can advertise itself and offer its services, any developer can write code and collaborate with others to create new technologies on a fair playing field, and any consumer can navigate information, use critical online services, connect with others, find entertainment, and improve their livelihood.
In a few weeks, Firefox will start the by-default rollout of DNS over HTTPS (or DoH for short) to its Canadian users in partnership with local DoH provider CIRA, the Canadian Internet Registration Authority. DoH will first become a default for 1% of Canadian Firefox users on July 20 and will gradually reach 100% of Canadian Firefox users in late September 2021 – thereby further increasing their security and privacy online. This follows the by-default rollout of DoH to US users in February 2020.
As part of the rollout, CIRA joins Mozilla’s Trusted Recursive Resolver (TRR) Program and becomes the first internet registration authority and the first Canadian organization to provide Canadian Firefox users with private and secure encrypted Domain Name System (DNS) services.
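For readers who don't want to wait for the rollout, DoH can be enabled by hand today. The sketch below shows the relevant Firefox preferences as a user.js fragment (the resolver URL shown is Cloudflare's public DoH endpoint, used here purely as an example; any TRR-program resolver works):

```js
// user.js — enable DNS over HTTPS in Firefox ahead of the rollout
// network.trr.mode: 2 = try DoH first, fall back to native DNS; 3 = DoH only
user_pref("network.trr.mode", 2);
// Example resolver URL; substitute your preferred Trusted Recursive Resolver
user_pref("network.trr.uri", "https://mozilla.cloudflare-dns.com/dns-query");
```

The same settings are reachable interactively under Settings → Network Settings → Enable DNS over HTTPS, which is the route most users should prefer over editing user.js directly.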
Summer is synonymous with the opportunity to participate in beautiful projects. Let's look at the students working on improving LibreOffice during the Google Summer of Code. This year, four of the approved GSoC projects for the LibreOffice community are mentored by Collabora developers. Find out about the improvements they are currently implementing!
The Free Software Foundation (FSF) is the most uncompromising nonprofit leader working for software freedom. For years, people have described the FSF's community of staff, volunteers, and contributors as being the "lighthouse" others use to find their way to software freedom, and we take that responsibility seriously. Swapping out one set of programs you use for another set may not seem like that much of a challenge, but those who bend over backwards to avoid nonfree software even in the form of nonfree JavaScript can tell you how many roadblocks there are along the way to software freedom; a cursory examination of the programs the average person depends on can show how deeply nonfree software has seeped into daily life.
We will never stop aiming to be that "lighthouse." At the same time, we recognize that a stance like ours can sometimes be a deterrent to people making important incremental improvements in their practices. For years, we've been holding the principled finish line, and we'll continue to do so. Now, we're developing an actionable set of steps to help support individuals in making the step-by-step improvements that they can. By supporting them in taking a step at a time, we're confident that we can help bring more people to a fully free setup than ever before. We're calling this campaign the "freedom ladder," and we need your support to help others begin climbing it.
In the free software community, we sometimes use the term "throwing over the wall" to describe when a person or group releases a program as free software, but doesn't provide any insight into its development. While this is absolutely better than releasing the software as proprietary, it forgoes opportunities for engagement and collaboration. We don't want our advocacy to be this way, and want to involve you as much as we can. From the first day that we began formulating the concept, we knew that we were going to need the help of the community in getting it right. Each of the fourteen members on the FSF staff came to free software in a different way, and we are all still at different places on the freedom ladder. While comparing our experiences has been instructive, we know that it's nowhere near exhaustive. Maybe you've already "arrived" at the combination of a fully free operating system and BIOS, or maybe you're still on Windows but have started to use LibreOffice. Either way, we need your participation.
When I write about programming, I spend a lot of time trying to come up with good examples. I haven’t seen a lot written about how to make examples, so here’s a little bit about my approach to writing examples!
The basic idea here is to start with real code that you wrote and then remove irrelevant details to make it into a self-contained example instead of coming up with examples out of thin air.
I’ll talk about two kinds of examples: realistic examples and surprising examples.
[...]
The example I just gave of explaining how to use sort with lambda is pretty simple and it didn’t take me a long time to come up with, but turning real code into a standalone example can take a really long time!
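The sort-with-lambda example she refers to was elided above; a minimal illustration of that kind of standalone example (my own sketch, not her original code) might look like:

```python
# Sort a list of (name, age) pairs by age, using a lambda as the sort key.
people = [("alice", 31), ("bob", 25), ("carol", 28)]
people.sort(key=lambda person: person[1])
print(people)  # [('bob', 25), ('carol', 28), ('alice', 31)]
```

Note how the example carries just enough real-looking data to show why a key function is needed, which is exactly the "remove irrelevant details" process described above.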
For example, I was thinking of including an example of some weird CSS behaviour in this post to illustrate how it’s fun to create examples with weird or surprising behaviour. I spent 2 hours taking a real problem I had this week, making sure I understood what was actually happening with the CSS, and making it into a minimal example.
In the end it “just” took 5 lines of HTML and a tiny bit of CSS to demonstrate the problem and it doesn’t really look like it took hours to write. But originally it was hundreds of lines of HTML/CSS/JavaScript, and it takes time to untangle all that and come up with something small that gets at the heart of the issue!
My previous blog post was about some measurements I made of Refterm. It got talked about in certain places on the net, from where it got to Twitter. Then Twitter did the thing it always does, which is to make everything terrible. For example, I got dozens of comments saying that I was incompetent, an idiot, a troll and even a Microsoft employee. All comments on this blog are manually screened, but this was the only time when I just had to block them all. Going through those replies seemed to indicate that absolutely everyone involved [1] had bad communication and also misunderstood most other people involved. Thus I felt compelled to write a followup explaining what the blog post was and was not about. Hopefully this will bring the discussion down to a more civilized footing.
[...]
I pondered for a while whether I should mention the memcpy thing and now I really wish I hadn't. But not for the reasons you might think.
The big blunder I made was to mention SIMD by name, because the issue was not really about SIMD. The compiler does convert the loop to SIMD automatically. I don't have a good reference, but I have been told that Google devs have measured that 1% of all CPU usage over their entire fleet of computers is spent on memcpy. They have spent massive amounts of time and resources on improving its performance. At least as late as 2013, performance optimizing memcpy was still subject to fundamental research (or software patents at least). For reference, here is the code for the glibc version of memcpy, which seems to be doing some special tricks.
If this is the case and the VS stdlib provides a really fast memcpy then rolling your own does cause a performance hit (though in this particular case the difference is probably minimal, possibly even lost in the noise). On the other hand it might be that VS can already optimize the simple version to the optimal code in which case the outcome is the same for both approaches. I don't know what actually happens and finding out for sure would require its own set of tests and benchmarks.
Over the last year, I’ve been using fish as my shell on Linux. Before that, I have tried both zsh (with and without oh-my-zsh) and bash. For bash, I wrote my own configuration framework, which - let’s be real - everyone needs to do and probably has done at some point.
Last week, I decided to switch back from fish to bash. This blog post is the story of why I did that and what I'm using now. I might look into zsh again at some point, but not now.
No matter how long you work as an application developer and no matter what programming language you use, you probably still struggle to increase your development productivity. Additionally, new paradigms, including cloud computing, DevOps, and test-driven development, have significantly accelerated the development lifecycle for individual developers and multifunctional teams.
You might think open source tools could help fix this problem, but I'd say many open source development frameworks and tools for coding, building, and testing make these challenges worse. Also, it's not easy to find appropriate Kubernetes development tools to install on Linux distributions due to system dependencies and support restrictions.
Fortunately, you can increase development productivity on Linux with Quarkus, a Kubernetes-native Java stack. Quarkus 2.0 was released recently with useful new features for testing in the developer console.
Amazon's legal stall tactics seem to have paid off.
The Open 3D Foundation, recently formed by the Linux Foundation, aims to accelerate developer collaboration and support open source projects related to 3D graphics rendering and development.
The nonprofit Linux Foundation announced it is currently in the process of forming the Open 3D Foundation to accelerate developer collaboration on 3D game and simulation technology. The Open 3D Foundation is being created to support open source projects that advance capabilities related to 3D graphics, rendering, authoring, and development. Amazon Web Services, Inc. is also making available an updated version of the Amazon Lumberyard game engine as the Open 3D Engine (O3DE), under the permissive Apache 2.0 license. The Open 3D Foundation and Open 3D Engine Project will enable developers to collaborate on building games and simulations as well as the AWS engine.
In The Business Reinvention of Japan, Ulrike Schaede explores Japan's approach to economic development in the late 20th and early 21st century. Her thesis is that this approach—what she calls an "aggregate niche strategy"—offers important lessons for the West by balancing the pursuit of corporate profit with social stability, economic equality, and social responsibility and sustainability.
It's also a case study in the power of open organization principles, which come to life in Schaede's account. I would argue that Japan's "aggregate niche strategy" was successful, in part, because of them.
Security updates have been issued by CentOS (linuxptp), Fedora (kernel and php), Gentoo (bladeenc, blktrace, jinja, mechanize, privoxy, and rclone), Oracle (linuxptp, ruby:2.6, and ruby:2.7), Red Hat (kernel and kpatch-patch), SUSE (kubevirt), and Ubuntu (avahi).
Trend Micro's Erin Sindelar considers the Linux-focused cybersecurity strategy needed to tackle an increase in Linux-based malware.
Last week’s European Patent Office hearing into the legality of oral proceedings by video conference faced technical issues of its own, as Douglas Drysdale and Ellie Purnell of HGF discovered.
The second round of oral proceedings in case G1/21, held on Friday 2 July, has concluded, but European Patent Office (EPO) users and interested parties around the world will have to wait for the decision in writing to find out the result.
The chair of the EPO’s Enlarged Board of Appeal (EBA) announced in the early afternoon that the board had heard enough from both the appellant and the representatives for the president of the EPO to deliberate the matter and take a decision in due course.
Pharma rights holders in the UK will be relieved that the NHS has failed to overturn rules barring third parties from seeking compensation for losses resulting from the enforcement of patents later invalidated.
In a long-awaited decision, the Enlarged Board of Appeal ("EBA") of the European Patent Office ("EPO") confirmed that a European patent application can be refused on the basis of double patenting under Articles 97(2) and 125 EPC, if a patent with the same effective date has already been granted for the same subject matter. The decision revises the position previously taken by the Boards of Appeal that it is the application date, and not the effective date, that is relevant for the purpose of double patenting assessment.
Absent any direct legal provision in the European Patent Convention ("EPC"), it has been an established practice at the EPO to raise a double patenting objection in applications sharing the application date with patents granted to the applicant for identical subject matter. This situation would commonly arise in divisional applications. However, a patent granted from a priority application would not trigger double patenting because a longer protection afforded by the follow-up patent would justify an applicant's legitimate interest in having two patents. See our March 2019 Alert, "Extension of Protection for Up to One Year Possible?"
In the referral case T 318/14, the Board of Appeal asked the EBA to clarify the legal basis in the EPC for refusing an application on the ground of double patenting. The EBA was also asked whether the double patenting prohibition equally applies to all three cases, namely, (i) two separate applications having the same application date; (ii) parent and divisional applications; and (iii) a second application claiming priority from a first application, so-called "internal priority."
In its decision G 4/19, the EBA goes to great lengths to analyze the original legislative intent in view of the preparatory documents of the EPC as well as the national provisions. The EBA concludes that Article 125 EPC obliges the EPO to abide by generally recognized principles of procedural law, such as the prohibition of double patenting. According to the EBA, it was the legislator's intent to apply this principle to all three cases mentioned above, regardless of whether there is a legitimate interest in a second patent. Thus, contrary to previous case law, an applicant will no longer be able to pursue a follow-up application if the priority application has already matured into a patent directed to the same subject matter.