Video editing is a complex topic with a wide range of software to choose from. In this series, produced with Gardiner Bryant, we will cover some of the common video editing suites available for Linux. We will then jump into some of the other software that you can choose when producing multimedia content on Linux. Instead of just telling you about how awesome the Librem 14 actually is, we thought it would be useful to show the Librem 14 in action.
Say goodbye to proprietary music players filled with ads, tracking, and profiling.
In this video, we are looking at how to install OBS Studio on Zorin OS 16.
When Linux Easy Anti-Cheat support was first announced, it had some issues that made it much harder to implement for many games than it needed to be, but now the EAC process has been made much simpler.
A small SNAFU in Linux kernel land meant that a notification regarding the stable review cycle for the 5.16.3 release didn't reach everyone it should have.
For the first time in the 31-year history of the Linux kernel, there were over 999 commits to a stable version, which caused a very minor problem.
Greg Kroah-Hartman, lead maintainer of the -stable branch, has a set of scripts which CC various interested parties when there's been a new release.
"Usually I split big ones out in two releases over the week," he told The Reg. "This time, I did it all at once to see what it would stress. The 'bug' of not copying some people on an email is the only thing that broke that I noticed, so we did pretty well."
He told the kernel development mailing list: "Found the problem, this was the first set of -rc releases that we have over 999 commits and the script was adding the cc: to msg.000 not msg.0000. I'll fix this up."
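The mismatch is easy to picture. Here is a minimal, hypothetical Python sketch of the padding issue; the actual -stable scripts are not Python, and the naming rule below is only an assumption for illustration:

```python
# Hypothetical illustration of the padding mismatch: once a series has more
# than 999 patches, the mails get four-digit names (msg.0000 ...), while the
# cc-adding step still assumed three digits (msg.000 ...) and missed the file.
def mail_name(index: int, series_size: int) -> str:
    digits = max(3, len(str(series_size - 1)))   # assumed naming rule
    return f"msg.{index:0{digits}d}"

series_size = 1050                     # first -rc series with over 999 commits
actual = mail_name(0, series_size)     # 'msg.0000' -- the file that exists
assumed = "msg.%03d" % 0               # 'msg.000'  -- what the cc step looked for
print(actual, assumed, actual == assumed)
```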
The Linux framebuffer device (fbdev) subsystem has long languished in something of a purgatory; it was listed as "orphaned" in the MAINTAINERS file and saw fairly minimal maintenance, mostly driven by developers working elsewhere in the kernel graphics stack. That all changed, in an eye-opening way, on January 17, when Linus Torvalds merged a change to make Helge Deller the new maintainer of the subsystem. But it turns out that the problems in fbdev run deep, at least according to much of the rest of the kernel graphics community. By seeming to take on the maintainer role in order to revert the removal of some buggy features from fbdev, Deller has created something of a controversy.
Part of the concern within the graphics community is the accelerated timeline that these events played out on. Deller posted his intention to take over maintenance of the framebuffer on Friday, January 14, which received an ack from Geert Uytterhoeven later that day. Two days later, before any other responses had come in, Deller sent a pull request to Torvalds to add Deller as the fbdev maintainer, which was promptly picked up. On January 19, Deller posted reversions of two patch sets that removed scrolling acceleration from fbdev. In the meantime, those reversions had already been made in Deller's brand new fbdev Git tree.
As of this writing, just short of 7,000 non-merge commits have been pulled into the mainline kernel repository for the 5.17 release. The changes pulled thus far bring new features across the kernel; read on for a summary of what has been merged during the first half of the 5.17 merge window.
The page structure is at the core of the memory-management subsystem. One of these structures exists for every page of physical memory in the system; they are used to track the status of memory as it is used (and reused) during the lifetime of the system. Physical pages can adopt a number of different identities over time; they can hold user-space data, kernel data structures, DMA buffers, and so on. Regardless of how a page is used, struct page is the data structure that tracks its state. These structures are stored in a discontiguous array known as the system memory map.
There are a few problems that have arisen with this arrangement. The page structure was significantly reorganized for 4.18, but the definition of struct page is still a complicated mess of #ifdefs and unions with no mechanisms to ensure that the right fields are used at any given time. The unlucky developer who needs to find more space in this structure will be hard put to understand which bits might be safe to use. Subsystems are normally designed to hide their internal data structures, but struct page is heavily used throughout the kernel, making any memory-management changes more complicated. One possible change — reducing the amount of memory consumed by page structures by getting rid of the need for a structure for every page — is just a distant dream under the current organization.
So there are a lot of good reasons to remove information from struct page and hide what remains within the memory-management subsystem. One of the outcomes from the folio discussions has been a renewed desire to get a handle on struct page, but that is not a job for the faint of heart — or for the impatient. Many steps will be required to reach that goal. The merging of the initial folio patches for 5.16 was one such step; the advent of struct slab in 5.17 is another.
As expected, Intel's open-source "ANV" driver is ready to go with Vulkan 1.3 for Mesa 22.0.
On Tuesday, The Khronos Group announced the Vulkan 1.3 specification. Both Intel and Radeon (RADV) had launch-day driver patches ready, with the merge requests timed for the embargo lift. This was great timing and shows the success these days of the open-source Linux GPU drivers, compared to the OpenGL API support delays experienced years ago. RADV managed to mainline its patches that same day, while the Intel ANV patches were pending a bit longer as they were merging the Vulkan dynamic rendering support required by Vulkan 1.3.
After six months of reverse-engineering, the new Arm “Valhall” GPUs (Mali-G57, Mali-G78) are getting free and open source Panfrost drivers. With a new compiler, driver patches, and some kernel hacking, these new GPUs are almost ready for upstream.
In 2021, there were no Valhall devices running mainline Linux. While a lack of devices poses an obvious obstacle to device driver development, there is no better time to write drivers than before hardware reaches end-users. Developing and distributing production-quality drivers takes time, and we don’t want users to be reliant on closed source blobs. If development doesn’t start until a device hits shelves, that device could reach “end-of-life” by the time there are mature open drivers. But with a head start, we can have drivers ready by the time devices reach end users.
Let’s see how.
Here's a war story from Alyssa Rosenzweig on the process of writing a free driver for Arm's "Valhall" GPUs without having the hardware to test it on.
The first set of feature updates have been submitted to DRM-Next for staging until the Linux 5.18 kernel cycle begins around the end of March.
It was less than one week ago that Linux 5.17-rc1 was released, marking the end of the merge window for Linux 5.17. However, because the cut-off for new DRM-Next material happens prior to the merge window, there is already a lot of new code ready to be staged in DRM-Next for the follow-on kernel cycle (5.18).
Sent out today were the first of several drm-misc-next pull requests expected for Linux 5.18. The drm-misc-next area continues collecting the Direct Rendering Manager changes for the core subsystem code and smaller drivers. Expect more drm-misc-next pull requests along with the big Intel and AMD driver feature pull requests to continue coming over the next several weeks.
Shortly after I joined the Mesa team at Intel in the summer of 2014, I was sitting in the cube area asking Ken questions, trying to figure out how Mesa was put together, and I asked, “Why don’t you use LLVM?” Suddenly, all eyes turned towards Ken and myself and I realized I’d poked a bear. Ken calmly explained a bunch of the packaging/shipping issues around having your compiler in a different project as well as issues radeonsi had run into with apps bundling their own LLVM that didn’t work. But for the more technical question of whether or not it was a good idea, his answer was something about trade-offs and how it’s really not clear if LLVM would really gain them much.
That same summer, Connor Abbott showed up as our intern and started developing NIR. By the end of the summer, he had a bunch of data structures, a few mostly untested passes, and a validator. He also had most of a GLSL IR to NIR pass which mostly passed validation. Later that year, after Connor had gone off to school, I took over NIR, finished the Intel scalar back-end NIR consumer, fixed piles of bugs, and wrote out-of-SSA and a bunch of optimization passes to get it to the point where we could finally land it in the tree at the end of 2014. Initially, it was only a few Intel folks and Emma Anholt (Broadcom, at the time) who were all that interested in NIR. Today, it’s integral to the Mesa project and at the core of every driver that’s still seeing active development. Over the past seven years, we (the Mesa community) have poured thousands of man hours (probably millions of engineering dollars) into NIR, and it’s gone from something only capable of handling fragment shaders to supporting full Vulkan 1.2 plus ray-tracing (task and mesh are coming) along with OpenCL 1.2 compute.
Was it worth it? That’s the multi-million dollar (literally) question. 2014 was a simpler time. Compute shaders were still newish and people didn’t use them for all that much more than they would have used a fancy fragment shader for a couple of years earlier. More advanced features like Vulkan’s variable pointers weren’t even on the horizon. Had I known at the time how much work we’d have to put into NIR to keep up, I may have said, “Nah, this is too much effort; let’s just use LLVM.” If I had, I think it would have been the wrong call.
So, I recently upgraded to a dual-monitor setup (1080p + 1440p).
While I was excited about the productivity boost of getting things done faster without the need to constantly manage/minimize active windows, I came across a few nuances.
To my surprise, Flameshot refused to work. And, for the tutorials or articles I write, a screenshot tool that offers minor editing or annotation capabilities comes in handy.
If you have a similar requirement and are confused, the GNOME Screenshot tool is an option that works with multiple screens flawlessly.
However, it does not offer annotations. So, I will have to separately open the image using another image editor or Ksnip to make things work.
In this post, you will learn how to configure Pure-FTPD.
Pure-FTPd is a free FTP server which mainly focuses on security. It can be set up easily within five minutes with little effort. Pure-FTPd offers many features, such as limiting simultaneous users, limiting bandwidth per user to avoid saturating the network, hiding files through permissions, and moderating new uploads and content. In this tutorial we will see how to easily configure a Pure-FTPd server with a self-signed certificate.
File Transfer Protocol (FTP) is a way to transfer or receive data from one server to another. It is a standard communication protocol that enables sending and receiving data over a network. In our case, we could use the SFTP protocol on Linux servers to transfer files, but if we have to create an FTP server, we can use Pure-FTPd.
In this multi-part tutorial, we cover how to provision RHEL VMs to a vSphere environment from Red Hat Satellite. Learn how to prepare the Satellite environment in this post.
In this tutorial, we are going to learn how to install Redmine on Ubuntu 20.04.
Redmine is a free and open-source, web-based project management and issue tracking tool. It allows users to manage multiple projects and associated subprojects. It has project wikis and forums, time tracking, and role-based project controls.
Grafana is a free and open-source analytics and visualization tool. It's a multi-platform web-based application that provides customizable charts, graphs, and alerts for supported data sources.
By default, Grafana supports multiple data sources like Prometheus, Graphite, InfluxDB, Elasticsearch, MySQL, PostgreSQL, Zabbix, etc. It allows you to create interactive and beautiful dashboards for your application monitoring system.
This tutorial will show you how to install Grafana with Nginx as a Reverse Proxy on the Rocky Linux system.
In this post, you will learn how to install Lighttpd on CentOS 8.
Lighttpd is an open-source, secure, fast, flexible, and highly optimized web server designed for speed-critical environments, with lower memory utilization compared to other web servers. It can handle up to 10,000 parallel connections on one server with effective CPU-load management. It also comes with an advanced feature set including FastCGI, SCGI, Auth, output compression, URL rewriting, and many more. Lighttpd is an excellent solution for every Linux server, thanks to its high-speed I/O infrastructure, which allows several times better performance on the same hardware than alternative web servers.
In this article we will learn how to install the Lighttpd web server on CentOS 8.
In this post, you will learn how to install Flameshot on RHEL / CentOS.
Flameshot is a powerful open source screenshot and annotation tool for Linux. Flameshot has a varied set of markup tools available, including freehand drawing, lines, arrows, boxes, circles, highlighting, and blur. Additionally, you can customize the color, size, and/or thickness of many of these image annotation tools.
Snap is a software packaging and deployment system developed by Canonical for operating systems that use the Linux kernel. The packages, called snaps, and the tool for using them, snapd, work across a range of Linux distributions and allow upstream software developers to distribute their applications directly to users. Snaps are self-contained applications running in a sandbox with mediated access to the host system.
Greetings for the day! Today we are going to convert Ubuntu 20.04 into Zentyal. Zentyal Server is a very popular OS among Linux admins across the planet. Though the Zentyal community edition also comes as a dedicated OS, I wanted to test what happens if we convert a running Ubuntu machine into the server. The verdict was clear: the server gets ready much more quickly this way than by installing the dedicated OS. So I thought I would create a write-up on it. We have organized the article into three parts. First, a brief introduction to the server and its features. Second, how to convert Ubuntu into the server. The third part contains a conclusion and other views regarding the scenario.
Ansible is a powerful open source tool that helps you automate many of your IT infrastructure operations, from the smallest of tasks to the largest. Ansible has hundreds of modules to help you accomplish your configuration needs, both official and community-developed. When it comes to complex and lengthy workflows, though, you need to consider how to optimize the way you use these modules so you can speed up your playbooks.
Previously, I wrote about making your Ansible playbooks run faster. Here are five ways I make my Ansible modules work faster for me.
So, what is DNS? A DNS server is a service that helps resolve a fully qualified domain name (FQDN) into an IP address and performs a reverse translation of an IP address back to a user-friendly domain name.
Why is name resolution important? Computers locate services on servers using IP. However, IPs are not as user-friendly as domain names. It would be a big headache to remember each IP address associated with every domain name. So instead, a DNS server steps in and helps resolve these domain names to computer IP addresses.
The DNS system is a hierarchy of replicated database servers worldwide that begins with the “root servers” for the top-level domains (.com, .net, .org, etc.). The root servers point to the “authoritative” servers located in ISPs and large companies that turn the names into IP addresses. The process is known as “name resolution.” Using our www.business.com example, COM is the domain name, and WWW is the hostname. The domain name is the organization’s identity on the Web, and the hostname is the name of the Web server within that domain. A Debian DNS server setup guide can be found at the link.
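As a quick illustration of forward and reverse resolution from the client side, independent of which DNS server answers, Python's standard socket module can perform both lookups; the host name below is only an example:

```python
import socket

# Forward lookup: FQDN -> IP address(es)
addrs = {info[4][0] for info in socket.getaddrinfo("www.example.com", 80)}
print("www.example.com resolves to:", addrs)

# Reverse lookup: IP address -> host name (PTR record), if one is published
ip = next(iter(addrs))
try:
    host, _aliases, _ips = socket.gethostbyaddr(ip)
    print(ip, "reverse-resolves to:", host)
except socket.herror:
    print(ip, "has no PTR record")
```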
openSUSE Kubic is a certified Kubernetes distribution based on openSUSE MicroOS. Calico is an open-source project that can be used by Kubernetes to deploy a pod network to the cluster. In this blog, I will show you how to deploy a Kubernetes cluster based on Calico and openSUSE Kubic using virtual machines. We are going to deploy a cluster that has a master and a worker.
I had intended to use Oracle VM VirtualBox. However, it turned out that on my machine, when I tried to run kubeadm on openSUSE Kubic in VirtualBox, it always got stuck at watchdog: BUG: soft lockup - CPU#? stuck for xxs! with CPU usage around 100%. As a result, I switched to VMware Workstation Pro and the issue was solved. I guess it's caused by some bug in VirtualBox.
In my last article I showed how to use the new features included in Debian Bullseye to easily create backups of your libvirt-managed domains.
A few years ago, when this topic first came to my interest, I also implemented a rather small utility (a proof of concept) to create full and incremental backups from standalone qemu processes: qmpbackup
The workflow for this is a little bit different from the approach I have taken with virtnbdbackup.
While with libvirt managed virtual machines, the libvirt API provides all necessary API calls to create backups, a running qemu process only provides the QMP protocol socket to get things going.
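To give a feel for what "only the QMP protocol socket" means in practice: QMP is a line-oriented JSON protocol, so a client reads the greeting, negotiates capabilities, and then issues commands. This is only a rough sketch, not qmpbackup's actual code; the socket path is an assumption and depends on how qemu was started (e.g. -qmp unix:/tmp/qmp.sock,server,nowait):

```python
import json
import socket

QMP_SOCKET = "/tmp/qmp.sock"   # assumed path, set via qemu's -qmp option

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(QMP_SOCKET)
f = sock.makefile("rw")

print(json.loads(f.readline()))            # QMP greeting banner

def qmp(command, **arguments):
    """Send one QMP command and return the next reply line.

    A robust client would also handle asynchronous events that can be
    interleaved with replies; this sketch ignores that for brevity.
    """
    msg = {"execute": command}
    if arguments:
        msg["arguments"] = arguments
    f.write(json.dumps(msg) + "\n")
    f.flush()
    return json.loads(f.readline())

qmp("qmp_capabilities")                     # mandatory capability negotiation
print(qmp("query-block"))                   # e.g. list block devices to back up
```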
The developer of Vampire Survivors, an absolute smash-hit on Steam, has confirmed that a Linux version is in the works. Their latest update post mentioned it might be available by the end of the month, if all goes well.
Developed by poncle, it arrived on Steam in Early Access for Windows on December 17, 2021, and suddenly, on January 6, 2022, it started gathering thousands of players. More arrived each day, and this game of complete chaos managed to become a total hit, with an all-time peak player count of 37,075 reached only yesterday, so it's continuing to grow all the time. On Steam, it has managed to earn an Overwhelmingly Positive rating too.
Today Paradox Interactive and Double Eleven have done a surprise launch of the Prison Architect: Perfect Storm expansion. Plus, as always for Paradox, there's a free update out now too called The Tower. So not only do you have to worry about what the inmates have smuggled around but you now also need to look to the skies. No one wants to sit in a freezing cold cell, or have wild rats running across their feet.
Do you feel the need for some new games? Perhaps to continue building up a collection for the upcoming Steam Deck? Now is yet another chance for you with the Steam Lunar New Year Sale 2022. Not only is there a big sale but if you head over to the Points Shop, you'll also get a new sticker each day too.
Don't worry, gamers, there will be plenty of RGB lighting.
In addition to closing in on the Godot 4.0 release, another equally exciting effort in the open-source game engine space is the Open 3D Engine originally from the Amazon Lumberyard code and backed by the Linux Foundation and other organizations. Open 3D Engine 2111.2 is out today as the newest stable point release for this less than one year old open-source game engine effort.
Back in December came the release of O3DE 21.11 as the first major release of this open-source game engine under the Apache 2.0 license. O3DE 2111.2 is the latest in that stable lineage for this game engine.
Open 3D Engine release 2111.2 is a maintenance and quality of life improvement release based on 2111.1. This release is bugfix-only and contains no new features.
One of the brave but unsuccessful plays from Nokia during their glory years was the N-Gage, an attempt to merge a Symbian smartphone and a handheld game console. It may not have managed to dethrone the Game Boy Advance but it still has a band of enthusiasts, and among them is [Michael Fitzmayer] who has produced a CMake-based toolchain for the original Symbian SDK. This is intended to ease development on the devices by making them more accessible to the tools of the 2020s, and may serve to bring a new generation of applications to those old Nokias still lying forgotten in dusty drawers.
I usually learn something between semesters when I have holidays. During September and October 2021, I tried learning some Qt and looking around the codebase of KDE apps. But something just didn't work out. I suspect my learning style wasn't right.
It is available at the usual place https://community.kde.org/Schedules/KDE_Gear_22.04_Schedule
Dependency freeze is in six weeks (March 10) and Feature Freeze is a week after that, so make sure you start finishing your stuff!
Want some cool desktop animations? The ‘Burn My Windows’ extension has added some more animation effects for Ubuntu 20.04+, Fedora Workstation, and other Linux distributions with GNOME 3.36+.
Previously, when the user clicked to close an app window, the extension applied a burning-down effect to the window.
[...]
See the short videos for new effects when closing app windows:
The new effects include Energize A, Energize B, Matrix, T-Rex Attack, TV Close, and Wisps. There's also a new “Broken Glass” effect in the upcoming release to shatter your windows into a shower of sharp shards!
For each animation, there’s a setting page to change the animation speed, scale, color, etc.
The traditional package manager in Puppy Linux is the "Puppy Package Manager", often just known as the "PPM".
EasyOS has a derivative of the PPM, named "PETget". However, I have never been entirely happy with that name, as the package manager can install virtually any type of package -- .deb, .rpm, .tgz, .tar.zst, .tar.xz, etc., as well as .pet packages.
I am posting these thoughts while filling in time.
Right now, my main desktop PC is doing a complete recompile in OpenEmbedded. This is now "revision 7", and the binary packages created will have "-r7" in their names.
For more than 7 years now, OPNsense has been driving innovation through modularising and hardening the open source firewall, with simple and reliable firmware upgrades, multi-language support, fast adoption of upstream software updates, as well as clear and stable 2-Clause BSD licensing.
22.1, nicknamed "Observant Owl", features the upgrade to FreeBSD 13, a switch to logging that supports RFC 5424 with severity filtering, improved tunable sysctl value integration, a faster boot sequence and interface initiation, and dynamic IPv6 host alias support, amongst others.
On the flip side major operating system changes bear risk for regression and feature removal, e.g. no longer supporting insecure cryptography in the kernel for IPsec and switching the Realtek vendor driver back to its FreeBSD counterpart which does not yet support the newer 2.5G models. Circular logging support has also been removed. Make sure to read the known issues and limitations below before attempting to upgrade.
Download links, an installation guide[1] and the checksums for the images can be found below as well.
OPNsense, the FreeBSD-based firewall/router software stack forked from pfSense, is out with its first major release of 2022.
OPNsense has now been going strong for seven years, and OPNsense 22.1 is another big step forward for the BSD router/firewall OS project. OPNsense 22.1 shifts the base package set to the excellent FreeBSD 13.
OPNsense 22.1 also features logging improvements, better sysctl tuning integration, faster booting/start-up, and a range of other enhancements.
IBM posted strong results Monday for its fourth quarter, with its best sales growth in more than a decade. The results suggest that CEO Arvind Krishna’s strategy for returning the legacy tech giant to growth is beginning to pay off.
I recently chatted with Mark Cheshire, director of product, Red Hat, to discuss the nuances between API management and service mesh. According to Cheshire, API management and service mesh can work quite well side-by-side for particular use cases. For example, a large organization using service mesh could benefit from applying API management that wraps microservices in a usable contract for internal departments. Or, API management could help a company expose specific APIs from the mesh to outside partners.
States like Tennessee are modernizing their legacy, siloed and on-premises systems to a more integrated and agile infrastructure to keep pace with the digital demands of customers.
In an exclusive StateScoop interview, KPMG managing director, advisory Mark Calem, Red Hat chief technologist, North America public sector David Egts and Tennessee Department of Human Services chief information officer Wayne Glaus discuss how states can use open-source platforms to engage with constituents more fully and to improve the digital services they deliver.
For almost five years, Boston University and Red Hat, a leading provider of open-source computer software solutions, have collaborated to drive innovative research and education in open-source technology. Now that partnership has announced the first recipients of the Red Hat Collaboratory Research Incubation Awards. (Open source means that the original source code is made available for use or modification by users and developers.)
The awards are administered through BU’s Red Hat Collaboratory, housed within the Rafik B. Hariri Institute for Computing and Computational Science & Engineering, and Red Hat Research. “This collaborative model gives us the opportunity to increase the diversity and richness of open engineering and operations projects we undertake together, and also allows us to pursue fundamental research under one umbrella,” says Heidi Picher Dempsey, Red Hat research director, Northeast United States.
The purpose of this blog is to explain to system developers some of the new C++, C, Go or Rust features in Red Hat Enterprise Linux (RHEL) 9.
I’ve had many conversations recently that have me looking at a crucial question that impacts neurodivergent corporate employees and their managers: how do we understand, encourage, measure, nurture, and assess the career development of neurodivergent people? Development opportunities, and how managers assess performance, are critical aspects of career growth, financial compensation, morale, feelings of self worth, happiness, employee retention, and the ability of individuals and companies to achieve their goals. And yet, I believe it is something that can be subjective, underappreciated, and under-invested in. As a late-diagnosed autistic person who has had significant anxiety, social phobia, and other mental health conditions for my entire career, and as someone who has been in senior leadership roles, leading hundreds of employees, I’ve thought about this quite a bit.
Progvis finally made it into Debian! What is it, you ask? It is a great tool to teach about memory management and concurrency.
Open source MANO release ELEVEN is here with another set of exciting features for the telco world!!
Following on from our previous post about accessibility by design, we’d like to share our accessibility documentation process here in the Web & Design team. In the Vanilla squad, we work hard to make sure the Vanilla framework is as accessible as possible. We don’t claim to be perfect, but accessibility is a real priority to us and we’re continuously trying to improve. We’ve recently started writing some accessibility docs to go alongside our components. They outline how the components work, and any important accessibility considerations to note in implementation.
The team at Canonical, the provider of Ubuntu, today announced the release of the MLOps platform, Charmed Kubeflow 1.4. With this, data science teams are empowered to collaborate on AI/ML innovation from concept to production on any cloud.
The solution is free to use and can be deployed in any environment without constraints, paywall, or restrictive features. This release brings users a centralized, browser-based MLOps platform that runs effectively on any conformant Kubernetes.
Brian Benchoff’s “minimum viable computer” is a Linux handheld computer powered by an Allwinner F1C100s ARM9 processor that could fit into your pocket and should cost about $15 (BoM cost) to manufacture in quantity.
The open-source hardware Linux “computer” comes with 32MB or 64MB RAM, a 2.3-inch color display, a 48-key keyboard, a USB port, and is powered by two AAA batteries. Don’t expect a desktop environment, but it can run a terminal to execute scripts, or even run Doom.
India's minister of state for Electronics and IT Rajeev Chandrasekhar has revealed the nation's government intends to develop a policy that will encourage development of an "indigenous mobile operating system".
Speaking at the launch of a policy vision for Indian tech manufacturing, Chandrasekhar said India's Ministry of Electronics and Information Technology believes the market could benefit from an alternative to Android and iOS and could "even create a new handset operating system" to improve competition, according to the Press Trust of India.
"We are talking to people. We are looking at a policy for that," Chandrasekhar told local media, adding that start-ups and academia are being considered as likely sources of talent and expertise to build the OS.
"If there is some real capability then we will be very much interested in developing that area because that will create an alternative to iOS and Android which then an Indian brand can grow,” he added.
The minister offered no timeframe for a decision on whether to proceed with the policy, nor the level of assistance India's government might provide.
Nor did he say much to suggest he knows that past attempts to create alternative mobile operating systems, or national operating systems, have cratered.
Even Microsoft, famously, failed to make an impact with Windows Phone despite throwing billions at the OS and acquiring Nokia to ensure supply of handsets to run it. Mozilla's Firefox OS was discontinued after efforts to crack India's mobile market with low-cost devices failed. The Linux Foundation's Tizen hasn't found a lot of love.
Working with vintage computer technology can feel a bit like the digital equivalent of archeology. Documentation is often limited or altogether absent today — if it was ever even public in the first place. So you end up reverse engineering a device’s functionality through meticulous inspection and analysis. Spencer Nelson has a vintage NeXT keyboard from the ’80s and wanted to get it working with modern computers via USB. To make that happen, he reverse engineered the protocol and used an Arduino as an adapter.
NeXT was a computer company founded by Steve Jobs in the ’80s, in the period after he left Apple. A little over ten years later, Apple bought NeXT and Jobs rejoined the company. NeXT only released a few computers, but they are noteworthy and desirable to collectors. This particular keyboard is from 1988 and worked with the first generation NeXT Computer. Unlike modern keyboards that share the USB protocol, keyboards from this era utilized proprietary protocols. This particular model had an enigmatic protocol that Nelson became obsessed with deciphering.
The NeXT computer was introduced in 1988, with the high-end machine finding favor with universities and financial institutions during its short time in the marketplace. [Spencer Nelson] came across a keyboard from one of these machines, and with little experience, set about figuring out how it worked.
The keyboard features a type of DIN connector and speaks a non-ADB protocol to the machine, but [Spencer] wanted to get it speaking USB for use with modern computers. First attempts at using pre-baked software found online to get the keyboard working proved to be unreliable. [Spencer] suspected that the code, designed to read 50 microsecond pulses from the keyboard, was miscalibrated.
The EU Parliament, says FSFE, missed the chance to introduce strong requirements for interoperability based on Open Standards: “This is a lost chance to leverage competition with accessible and non-discriminatory technical specifications [that would allow] market actors to innovate on top of technical specification standards and build their own services”.
However, things look better than they did before for digital and consumer rights in the EU, and let's hope, as FSFE puts it, that getting Device Neutrality into European legislation does become the first step towards real digital interoperability of digital products and services. I mean, we have already endured too much idiocy like this around non-interoperable electronic components, haven't we?
The SHA badge used an ESP32 as its processor, and paired it with a touch keypad and an e-ink screen. Its then novel approach of having a firmware that could load MicroPython apps laid the groundwork for the successful open source badge.team firmware project, meaning that it remains versatile and useful to this day.
Microsoft Paint was one of the first creative outlets for many children when they first laid hands on a computer in the 1990s. Now, [Volos Projects] has brought the joy of this simple application to a more compact format on the ESP32!
Nexcom’s Linux-ready “AIEdge-X 100-VPU” edge AI mini-PC combines an Apollo Lake SoC with up to 2x Myriad X VPUs. Key specs include 2x GbE, 2x HDMI 2.0, 2x USB 3.0, and an M.2 M-key slot.
The fanless, 179.5 x 106 x 37mm AIEdge-X 100-VPU, which follows other AIEdge-X systems such as Nexcom’s larger, 9th Gen Coffee Lake powered AIEdge-X 300, is primarily designed for smart retail applications such as smart signage, automated checkout machines, QSRs (Quick Service Restaurants), drive-thru kiosks, and endless aisles, which refers to online shopping from a brick-and-mortar store. Other applications include license plate recognition, body temperature checking, transit kiosks, and other smart city and edge AI tasks.
As of yesterday Intel's contributed Programmable Services Engine "PSE" support has been merged into mainline Coreboot for supporting this Arm-based dedicated offload engine found within select Intel processors.
Sarcasm is notoriously difficult to distinguish in online communities. So much so, in fact, that a famous internet rule called Poe’s Law is named after the phenomenon. To adapt, users have adopted several methods for indicating implied sarcasm, such as the /s tag, but more recently a more obvious sarcasm indicator has appeared that involves random capitalization throughout the sarcastic phrase. While this looks much more satisfying than other methods, it is a little cumbersome to type unless you have this sarcasm converter for your keyboard.
The device, built by [Ben S], is based around two Raspberry Pi Pico development boards and sits between a computer and any standard USB keyboard. The first Pi accepts the USB connection from the keyboard and reads all of the inputs before sending what it reads to the second Pi over UART. If the “SaRcAsM” button is pressed, the input text stream is converted to sarcasm by toggling the caps lock key after every keystroke.
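The firmware itself runs on the Pico, but the text transformation it performs is easy to sketch. Here is a hypothetical Python version of the same idea (not the project's actual code), toggling case on every letter the way the device toggles caps lock after each keystroke:

```python
def sarcastify(text: str) -> str:
    """Alternate upper/lower case on letters, mimicking the caps-lock toggle."""
    out = []
    upper = False
    for ch in text:
        if ch.isalpha():
            out.append(ch.upper() if upper else ch.lower())
            upper = not upper          # toggle "caps lock" after each keystroke
        else:
            out.append(ch)             # non-letters pass through unchanged
    return "".join(out)

print(sarcastify("this is a great idea"))   # -> "tHiS iS a GrEaT iDeA"
```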
Browsing through the recent projects on Hackaday.io, we’ve found this entry by [NanoCodeBug]: a single-PCB low-power trinket reviving the “pocket pet” concept while having some fun in the process! Some serious thought was put into making this device be as low-power as possible – with a gorgeous Sharp memory LCD and a low-power-friendly SAMD21, it can run for two weeks on a pair of mere AAA batteries, and possibly more given a sufficiently polished firmware. The hardware has some serious potential, with the gadget’s platform lending itself equally well to Arduino or CircuitPython environments, the LCD being overclock-able to 30 FPS, mass storage support to enable pet transfer and other PC integrations, a buzzer for all of your sound needs, and an assortment of buttons to help you create mini-games never seen before. [NanoCodeBug] has been working on the hardware diligently for the past month, having gone through a fair few revisions – this is shaping up to be a very polished gadget!
[Voja Antonic] has been building digital computers since before many of us were born. He designed with the Z80 when it was new, and has decades of freelance embedded experience, so when he takes the time to present a talk for us, it’s worth paying attention.
For his Remoticon 2022 presentation, he will attempt to teach us how to become a hardware expert in under forty minutes. Well, mostly the digital stuff, but that’s enough for one session if you ask us. [Voja] takes us from the very basics of logic gates, through combinatorial circuits, sequential circuits, finally culminating in the description of a general-purpose microprocessor.
The MIDI format has long been used to create some banging electronic music, so it’s refreshing to see how [John P. Miller] applied the standard in his decidedly analog self-playing robotic xylophone.
Framed inside a fetching Red Oak enclosure, the 25-key instrument uses individual solenoids for each key, meaning that it has no problem striking multiple bars simultaneously. This extra fidelity really helps in reproducing familiar melodies via the MIDI format. The tracks themselves can be loaded onto the device via SD card and selected for playback with a character LCD and rotary knob.
The software transposes the full MIDI music spectrum of a particular track into a 25-note version compatible with the xylophone. Considering that a piano typically has 88 keys, some musical concessions are needed to produce a recognizable playback, but overall it’s an enjoyable musical experience.
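How such a reduction might work is simple to sketch. This is a hypothetical Python helper (not the project's actual code; the lowest bar and range are assumptions) that octave-shifts MIDI note numbers until they fall within a 25-key window:

```python
LOW_NOTE = 60          # assumed lowest bar (middle C); the real mapping may differ
RANGE = 25             # 25 keys -> two octaves plus one note

def fit_to_xylophone(note: int) -> int:
    """Shift a MIDI note by whole octaves until it fits the 25-note range."""
    while note < LOW_NOTE:
        note += 12
    while note >= LOW_NOTE + RANGE:
        note -= 12
    return note

melody = [48, 55, 64, 72, 91]                 # arbitrary example MIDI notes
print([fit_to_xylophone(n) for n in melody])  # all land between 60 and 84
```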
So basically I understand Oniro aims to provide a vendor-agnostic platform to develop software that runs on various operating systems and hardware in order to reduce fragmentation in the consumer and IoT device industry. I will not insert an xkcd meme here, but you know what I mean. Right now, Oniro relies on the Poky/Yocto Project build system and supports three operating systems with Linux, ZephyrOS, and FreeRTOS allowing it to be used in application processors and microcontrollers.
Highlights: new releases of Scribus, Flameshot, Surge, ZynAddSubFX, Zrythm, Giada; Audacity resurrects real-time effects, Ardour gets cue markers.
Once again, the COVID pandemic has forced linux.conf.au to go virtual, thus depriving your editor of a couple of 24-hour, economy-class, middle-seat experiences. This naturally leads to a set of mixed feelings. LCA has always put a priority on interesting keynote talks, and that has carried over into the online event; the opening keynote for LCA 2022 was given by Brian Kernighan. Despite being seen as a founder of our community, Kernighan is rarely seen at Linux events; he used his LCA keynote to reminisce for a while on where Unix came from and what its legacy is.
He began by introducing Bell Labs, which was formed by US telecommunications giant AT&T to carry out research on how to improve telephone services. A lot of inventions came out of Bell Labs, including the transistor, the laser, and fiber optics. Such was the concentration of talent there that, at one point, Claude Shannon and Richard Hamming shared an office. Kernighan joined Bell Labs in 1967, when there were about 25 people engaged in computer-science research.
Early on, Bell Labs joined up with MIT and General Electric to work on a time-sharing operating system known as Multics. As one might have predicted, the attempted collaboration between a research lab, a university, and a profit-making company did not work all that well; Multics slipped later and later, and Bell Labs eventually pulled out of the project. That left two researchers who had been working on Multics — Ken Thompson and Dennis Ritchie — without a project to work on.
After searching for a machine to work on, Thompson eventually found an old PDP-7, which was already obsolete at that time, to do some work on filesystem design. The first Unix-like system was, in essence, a test harness to measure filesystem throughput. But he and Ritchie later concluded that it was something close to the sort of timesharing system they had been trying to build before. This system helped them to convince the lab to buy them a PDP-11/20 for further development. The initial plan was to create a system for document processing, with an initial focus on, inevitably, preparing patent applications. The result was "recognizably Unix" and was used to get real work done.
MongoDB is a general-purpose, document-oriented NoSQL database that is free to use. It is a scalable, versatile NoSQL document database platform built to overcome the constraints of previous NoSQL solutions and the approach of relational databases. It helps the user store and deal with enormous amounts of data.
MongoDB’s horizontal scaling and load balancing capabilities have given application developers unprecedented flexibility and scalability. There are different MongoDB editions; however, we will focus on MongoDB Atlas in this article.
MongoDB Atlas is a multi-cloud database service created by the MongoDB team. Atlas makes it easy to deploy and manage databases while also giving users the flexibility they need to develop scalable, high-performance global applications on the cloud providers of their choice.
It is the world’s most popular cloud database for modern applications. Developers can use Atlas to deploy fully managed cloud databases on AWS, Azure, or Google Cloud. Developers can rest easy knowing that they have rapid access to the availability, scalability, and compliance they need for enterprise-level application development.
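For a feel of what "fully managed" looks like from the application side, here is a minimal sketch using the pymongo driver; the connection string, database, and collection names are placeholders you would copy from the Atlas UI:

```python
from pymongo import MongoClient

# Placeholder Atlas connection string (taken from the Atlas "Connect" dialog).
URI = "mongodb+srv://user:password@cluster0.example.mongodb.net/?retryWrites=true&w=majority"

client = MongoClient(URI)
collection = client["shop"]["orders"]          # hypothetical database/collection

# Insert a document and read it back.
collection.insert_one({"item": "laptop", "qty": 2})
for order in collection.find({"item": "laptop"}):
    print(order)
```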
Node-firebird-driver-native version 2.4.0 has been released with a few features added.
Rqlite 7.0 is now available as a lightweight, distributed relational database. This open-source database system for cluster setups is built atop SQLite while aiming to be easy-to-use and fault-tolerant.
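rqlite exposes its SQLite-backed store over a simple HTTP API. A minimal sketch, assuming a local node listening on the default port 4001 (endpoint paths as documented by the project), could look like this:

```python
import requests

BASE = "http://localhost:4001"   # assumes a local rqlite node on the default port

# Statements are POSTed as a JSON array to /db/execute.
requests.post(f"{BASE}/db/execute", json=[
    "CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, name TEXT)",
    "INSERT INTO events(name) VALUES('deploy')",
])

# Queries go through /db/query.
resp = requests.get(f"{BASE}/db/query", params={"q": "SELECT * FROM events"})
print(resp.json())
```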
The AgensGraph Development Team are pleased to announce the release of AgensGraph v2.5.
AgensGraph is a new-generation multi-model graph database for the modern complex data environment. AgensGraph is a multi-model database that supports the relational and graph data models at the same time, enabling developers to integrate the legacy relational data model and the flexible graph data model in one database. AgensGraph supports ANSI SQL and openCypher (http://www.opencypher.org). SQL queries and Cypher queries can be integrated into a single query in AgensGraph.
AgensGraph is based on the powerful PostgreSQL RDBMS, and is very robust, fully-featured and ready for enterprise use. AgensGraph is optimized for handling complex connected graph data and provides plenty of powerful database features essential to the enterprise database environment including ACID transactions, multi-version concurrency control, stored procedure, triggers, constraints, sophisticated monitoring and a flexible data model (JSON). Moreover, AgensGraph leverages the rich eco-systems of PostgreSQL and can be extended with many outstanding external modules, like PostGIS.
For more details please see the release notes.
Apache AGE (incubating) is a PostgreSQL extension that provides graph database functionality.
AGE is an acronym for A Graph Extension, and is inspired by Bitnine's fork of PostgreSQL 10, AgensGraph, which is a multi-model database. The goal of the project is to create single storage that can handle both relational and graph model data so that users can use standard ANSI SQL along with openCypher, the Graph query language.
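In practice that means an openCypher query is wrapped inside an ordinary SQL statement. Here is a rough sketch using psycopg2 against a PostgreSQL instance where the AGE extension is assumed to be installed and created; the connection details, graph name, and labels are made up for illustration:

```python
import psycopg2

conn = psycopg2.connect("dbname=demo user=postgres")   # placeholder credentials
cur = conn.cursor()

cur.execute("LOAD 'age';")
cur.execute('SET search_path = ag_catalog, "$user", public;')
cur.execute("SELECT create_graph('demo');")             # run once per graph

# openCypher embedded in a normal SQL statement.
cur.execute("""
    SELECT * FROM cypher('demo', $$
        CREATE (:Person {name: 'Ada'})-[:KNOWS]->(:Person {name: 'Grace'})
    $$) AS (v agtype);
""")
cur.execute("""
    SELECT * FROM cypher('demo', $$
        MATCH (a:Person)-[:KNOWS]->(b:Person) RETURN a.name, b.name
    $$) AS (a agtype, b agtype);
""")
print(cur.fetchall())
conn.commit()
```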
Apache Kafka is continuing to build out its event data streaming technology platform as the open source project moves forward.
Apache Kafka 3.1 became generally available on Jan. 24, providing users of the open source event streaming technology with a series of new features.
Organizations use Kafka to enable real-time data streams that can be used for operations, business intelligence and data analytics.
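A minimal sketch of that pattern with the kafka-python client; the broker address and topic name are placeholders:

```python
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"        # placeholder broker address
TOPIC = "page-views"             # hypothetical topic

# Produce a few events.
producer = KafkaProducer(bootstrap_servers=BROKER)
for page in ("/home", "/pricing", "/docs"):
    producer.send(TOPIC, page.encode("utf-8"))
producer.flush()

# Consume them as a real-time stream (Ctrl-C to stop).
consumer = KafkaConsumer(TOPIC, bootstrap_servers=BROKER,
                         auto_offset_reset="earliest")
for record in consumer:
    print(record.offset, record.value.decode("utf-8"))
```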
Kafka is developed by an open source community of developers that includes Confluent, an event streaming vendor that provides a commercial platform for Kafka, as well as Red Hat, which has a managed Kafka service.
Gartner analyst Merv Adrian said he looks at Kafka as a data source that feeds a database.
"More uses and users are moving upstream to engage with data in motion, before it comes to rest, and Kafka and its adjacent technologies are moving to capture share of that business," Adrian said.
In earlier posts (starting from this one) I ported LibreOffice's build system to Meson. The aim has not been to be complete, but to compile and link the main executables. On Linux this is fairly easy as you can use the package manager to install all dependencies (and there are quite a few of them).
[...]
It does on my machine. It probably won't do so on yours. Some of the deps I used could not be added to WrapDB yet or are missing review. If you want to try, the code is here.
The problematic (from a build system point of view) part of compiling an executable and then running it to generate source code for a different target works without problems. In theory you should be able to generate VS project files and build it with those, but I only used Ninja because it is much faster.
Interoperability is a very important aspect of LibreOffice. Today, LibreOffice can load and save various file formats from many different office applications from different companies across the world. But bugs (especially regression bugs) are an inevitable part of every piece of software. There are situations where the application does not behave as it should, and a developer should take action and fix it, so that it will behave according to the expectation of the user.
What if you encounter a bug in LibreOffice, and how does a developer fix the problem? Here we discuss the steps needed to fix a bug. In the end, we provide a test and make sure that the same problem does not happen in the future.
[...]
The bug reporter should carefully describe the “actual results” and why it is different from the “expected results”. This is also important because the desired software’s behavior is not always as obvious as it seems to be for the bug reporter.
Let’s talk about a recently fixed regression bug: The “NISZ LibreOffice Team” reported this bug. The National Infocommunications Service Company Ltd. (NISZ) has an active team in LibreOffice development and QA.
Chile is in the midst of governmental changes, and with these changes comes the opportunity for the people of Chile to make their voices heard for long-term benefits to their digital rights and freedoms. Chilean activists have submitted three constitutional proposals relating to free software and user freedom, but they need signatures in order to have these proposals submitted to the constitutional debate.
We encourage free software community members in Chile to have a look at these proposals, and sign those that uphold digital freedom and autonomy. The deadline for collecting signatures is February 1st.
Some further explanation and other information gathered by one of our community members, Felix Freeman, is included below. The English version of Felix's message is provided below.
I am happy to announce a new major release of GNU poke, version 2.0.
This release is the result of a full year of development. A lot of things have changed and improved with respect to the 1.x series; we have fixed many bugs and added quite a lot of new exciting and useful features.
See the complete release notes at https://jemarch.net/poke-2.0-relnotes.html for a detailed description of what is new in this release.
We have had lots of fun and learned quite a lot in the process; we really hope you will have at least half as much fun using this tool!
A Python "frozenset" is simply a set object that is immutable—the objects it contains are determined at initialization time and cannot be changed thereafter. Like sets, frozensets are built into the language, but unlike most of the other standard Python types, there is no way to create a literal frozenset object. Changing that, by providing a mechanism to do so, was the topic of a recent discussion on the python-ideas mailing list.
[...]
In the end, this "feature" would not be a big change, either in CPython itself or for the Python ecosystem, but it would remove a small wart that might be worth addressing. Consistency and avoiding needless work when creating a literal frozenset both seem like good reasons to consider making the change. Whether a Python Enhancement Proposal (PEP) emerges remains to be seen. If it does, no major opposition arises, and the inevitable bikeshed-o-rama over its spelling ever converges, it just might appear in an upcoming Python—perhaps even Python 3.11 in October.
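For concreteness, the only way to get a frozenset today is to call the constructor on some other iterable; a literal form, whatever spelling it ends up with, would let the compiler build the constant directly. A short illustration of the current situation:

```python
# Today: no literal form, so a frozenset is always built from another iterable.
colors = frozenset({"red", "green", "blue"})

# Immutable: frozenset has no add(), so modification attempts fail.
try:
    colors.add("purple")
except AttributeError as exc:
    print("cannot modify a frozenset:", exc)

# Because there is no literal, constants like this one are rebuilt each call
# (except where the optimizer helps, e.g. for "x in {...}" membership tests).
def is_primary(color: str) -> bool:
    return color in frozenset({"red", "green", "blue"})

print(is_primary("red"), "green" in colors)
```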
The Terraform for_each meta-argument allows you to use a map or a set of strings to deploy multiple similar objects (such as virtual machines) without having to define a separate resource block for each one. This is great for making our Terraform plans more efficient!
Note: for_each was added in Terraform 0.12.6, and support for using it with Terraform modules was added in 0.13. Let’s go straight into looking at some examples of how to use Terraform for each loops.
Why do we build radios or clocks when you can buy them? Why do we make LEDs blink for no apparent purpose? Why do we try to squeeze one extra frame out of our video cards? We don’t know why, but we do. That might be the same attitude most people would have when learning about esolangs — esoteric programming languages — we don’t know why people create them or use them, but they do.
We aren’t talking about mainstream languages that annoy people like Lisp, Forth, or VBA. We aren’t talking about older languages that seem cryptic today like APL or Prolog. We are talking about languages that are made to be… well… strange.
Perl possesses a rich and expressive set of operators. So rich, in fact, that other adjectives can come to mind, such as prolix, or even Byzantine.
Requests for help navigating Perl's operator space appear repeatedly on outlets such as PerlMonks. These seem to me to involve two sorts of confusion: precedence (discussed here) and functionality (string versus numeric -- maybe another blog post).
The precedence warnings category has some help here, though as of Perl 5.34 there are only two diagnostics under it:
One of the things people often complain about when doing Async Rust is cancellation. This has always been a bit confusing to me, because it seems to me that async cancellation should feel a lot like panics in practice, and people don’t complain about panics very often (though they do sometimes). This post is the start of a short series comparing panics and cancellation, seeking after the answer to the question “Why is async cancellation a pain point and what should we do about it?” This post focuses on explaining Rust’s panic philosophy and explaining why I see panics and cancellation as being quite analogous to one another.
A home-built railway is one of the greatest things you could possibly use to shift loads around your farm. [Tim] and [Sandra] of YouTube channel [Way Out West] have just such a setup, but they needed some switching points to help direct carriages from one set of rails to another. Fabrication ensued!
In standardized accountings of trade, money and materials flow in opposite directions. But when embodied resources are considered, the net flows of money and resources go in the same direction.
The overall result is that “Rich nations accomplish a net appropriation of materials, energy, land, and labor”.
And what is really interesting (not because it is new, but because of how it is backed and quantified by data) is the final implication:
[Regardless of moral and ethics issues] “we cannot all grow. Since this growth-based model of development requires the appropriation of resources from poorer regions, it seems illusory for all poorer nations to be able to catch-up”, and if those countries must develop, the richer ones have to give up something.
If you’ve ever been curious how old-school jukeboxes work, it’s all electromechanical, with no computers. In a pair of videos, [Technology Connections] takes us through a detailed dive into the operation of a 1970 Wurlitzer Statesman model 3400 that he bought with his allowance when he was in middle school. This box can play records at either 33-1/3 or 45 RPM from a carousel of 100 discs, therefore offering a selection of 200 songs. This would have been one of the later models, as Wurlitzer’s jukebox business was in decline and they sold the business in 1973.
[...]
External appearances aside, it’s the innards of this mechanical wonder that steal the show. The mechanism is known as the Wurlamatic, invented by Frank B. Lumney and Ronald P. Eberhardt in 1967. Check out the patent US3690680A document for some wonderful diagrams and schematics that are artwork unto themselves.
Today’s supply chain issues can make it hard to buy microcontrollers, or really any kind of semiconductor. But for those keeping retrocomputers alive, this problem has always existed: ancient components might have been out of production for decades, with a dwindling supply of second-hand parts or “new old stock” as the only option. If a rare CPU breaks, you might have no option but to replace the entire computer.
[Piotr Patek] ran into this issue when he obtained an Elektronika MK-85 programmable calculator with a broken CPU. Unable to find a replacement, he decided instead to build a pin-compatible CPU unit based on an STM32 microcontroller. Of course no modern CPU is pin-compatible with a Soviet design from the 1980s, so [Piotr] had to design a small interposer PCB to match the original pinout. This also gave him enough space to add an efficient DC/DC converter chip that generates the 2.5 V supply for the STM32.
Chinese researchers are reporting that applying an electric field to pea plants increased yields. This process — known as electroculture — has been tested multiple times, but in each case there are irregularities in the scientific process, so there is still an opportunity for controlled research to produce meaningful data.
This recent research used two plots of peas planted from the same pods. The plants were tended identically except one plot was stimulated by an electric field. The yield on the stimulated plot was about 20% more than the control plot.
Here is my own synthesis, as simple as possible, of a much geekier post about a very geeky concept that, in an age where so much depends on how software is used AROUND you, becomes more important for everybody every year.
A Software Bill of Materials (SBOM) is becoming an increasingly expected requirement for software releases. Reading through blog posts and social media, it seems that some confusion still persists about what an SBOM can or could do for your project.
The Linux Foundation summarises the progress made in 2021 towards its goal of ensuring anyone can start an open-source technology career.
The development and expansion of the EV charging software ecosystem is a critical component to the mainstream adoption of electric vehicles. However, the industry has become complex and fragmented, with multiple isolated solutions and inconsistent technology standards. This slows and threatens the adoption of EVs.
In response, PIONIX has developed a project called EVerest, an open-source software stack designed to establish a common base layer for a unified EV charging ecosystem.
EVerest has gained some serious cred in the developer world, with its biggest supporter being LF Energy (the Linux Foundation's open-source foundation for the power systems sector). I spoke to the brains behind the project, Dr. Marco Möller, managing director of PIONIX, to find out more.
Did you know that one of OSI’s members is leading the effort to take open source to infinity and beyond?! Libre Space Foundation (LSF) is a non-profit foundation registered in Greece whose vision is “an Open and Accessible Outer Space for all.” The organization works to promote, advance and develop free and open source technologies and knowledge for space.
Recently, Libre Space Foundation, on behalf of the OpenSatCom.org activity of the European Space Agency, partnered with Inno3 to investigate open source development models in the satellite communications industry and share their findings in a report. As the authors explain, “..the SATCOM industry has been traditionally multiple vertical ecosystems and moved towards some standardization (through efforts like CCSDS, ECSS, DVB, etc.) on various of its parts. Yet it is far from an Open Ecosystem and specific actions should be taken to explore this direction for the benefit of the SATCOM industry.”
An attacker could exploit some of these vulnerabilities to take control of an affected system.
The risks of a Kubernetes (K8s) deployment are actually the same as in traditional Linux servers.
Thanks to Somewhat Reticent for being always on alert and contributing:
Do you need pkexec and polkit on a WM? NO! CVE-2021-4034
Not unless you want some automated menu and icons to click on and use various user/root rights to execute a GUI! Otherwise you are “safe”.
Don’t think that because RH is reporting this, the only affected parties are RHEL users; anyone who uses their systemd, elogind, and polkit derivatives is equally affected.
But gksu/gksudo was insecure and had to be erased from nearly every distro that is an IBM “client”.
Free and open source software (FOSS) is about much more than driving costs down, in some cases even down to zero – it’s about giving control back to users, developers and even nations. With FOSS, everyone gains the freedom to study, improve and share the software – and to use it whenever and wherever they want, without being restricted by vendor lock-in strategies.
FOSS has been widely used amongst government bodies and public services, so thanks to the coordination of their recently formed Open Source Programme Office (OSPO), the European Commission has started a series of hackathon and “bug bounty” programmes to help selected projects find (and potentially fix) security issues.
Before we get into this, I have seen a lot of people on Twitter blaming systemd for this vulnerability. It should be clarified that systemd has basically nothing to do with polkit, and has nothing at all to do with this vulnerability; systemd and polkit are separate projects largely maintained by different people.
We should try to be empathetic toward software maintainers, including those from systemd and polkit, so writing inflammatory posts blaming systemd or its maintainers for polkit does not really help to fix the problems that made this a useful security vulnerability.
First, they came for Windows. Then, for Tux. As cool as Linux is, it's increasingly becoming a target for ransomware-friendly cyber criminals intent on ruining people's days.
The ioXT Alliance, which offers a certification program for IoT security, announced it has certified 195 products and grown to 580 members. Meanwhile, Timesys is seeking participants for a survey on IoT security.
Don’t look up. From catastrophic data breaches, to spyware attacks that haunt people with the specter of their own private communications, to the routine exploitation of our personal data for profit and political manipulation, privacy violations have become daily news. None of this will stop until we do something about it.
On Data Protection Day 2022, we are urging governments around the world to take action to prevent rampant data violations. To do so, they must enact and strongly enforce data protection laws.
Data protection laws are a critical tool to ensure minimum rules are in place to safeguard our personal information online and offline. The European Union was one of the first movers, establishing a data protection framework in the ‘90s, and working continuously to improve it. Other countries followed suit. Brazil and Ecuador are among the latest to pass strong, modern data protection laws.
Other countries are lagging behind, and despite constant privacy scandals, some have no comprehensive data protection laws at all. Others have passed promising laws, but ignore them. Even where strong laws exist, enforcing them is proving harder than expected. Here’s a look at countries with some of the best and worst data protection laws in 2022.
If you don’t, it will be used against you. Maybe already it is.
There is evidence that biometric mass surveillance in EU Member States and by EU agencies has already resulted in violations of EU data protection law, and unduly restricted people’s rights including their privacy, right to free speech, right to protest and not to be discriminated against. That is why you must “reclaim your face”!
Because innovation, of course.
According to the Washington Post and to this summary there is a startup throwing money and human ingenuity to tackle “a flaw in our waste management systems that many people probably aren’t aware of.”
Lasso Loop, the story says, is developing “a hefty home appliance machine that automatically sorts and breaks down the recyclables you toss inside it”.
… it’s their “Business as Usual” foundation.
[...]
If none of those issues existed, it would indeed become physically and geopolitically feasible to replace all the ” fossil fuel-burning machines [i.e. ALL] Power plants, cars, and trucks, HVAC systems, stoves, roofs, etc [of TODAY]” with the same number of the same things, just electric.
If none of those issues existed, every owner of a car or a single-family home with disposable income left could follow advice like “make your next car electric, turn your home into a big battery”.
As reality stands today, instead… first, most people who don’t fit that profile today may never become part of the mass market that certain strategies would need to function IF they were feasible.
Second, I have a strong feeling that in the next years a non-negligible number of people who do fit that profile today will find advice like “move to an apartment building properly served by public transit” much more interesting, if not the only alternative still affordable, than “keep owning a car and a single-family home”. What will happen of certain strategies then?
Today’s GDP report showed that the U.S. economy grew by 5.7% in 2021, the most robust economic growth since 1984.
The crucial role of digital technology in this rebound is not doubted. Digital services and tools empowered the recovery by giving Americans more choices than ever before to more safely get back to work, school, shopping, and leisure. New digital-enabled options like remote work, remote classes, and contactless shopping options like buy online, pickup in-store/curbside and home delivery sparked this positive growth, despite ongoing pandemic challenges.
Digital services and tools have been essential to carry small and medium-sized businesses (SMBs) through the pandemic, helping them connect with both workers and customers alike. By utilizing both new and existing digital tools, businesses were able to rapidly expand contactless shopping, dining, and entertainment options. For example, retailers rapidly expanded omnichannel offerings, allowing consumers to safely shop from home and choose whether to pick up their orders curbside in front of physical stores or have their orders delivered to their homes.