Let us say you want to support Linux and buy an actual Linux desktop OS, just as you would buy a Windows desktop operating system from the market. How much would it cost, and what would you get in return when you buy a yearly subscription?
We have some well-known Linux vendors that target enterprise Linux desktop users specifically. For example, a software developer working in a bank, government, or research facility will likely buy an enterprise Linux desktop subscription. In addition, these vendors have tie-ups with OEMs such as Dell or HP to offer pre-installed Linux desktop workstations or laptops.
The Librem 14 is our most secure laptop to date. We aim to make the Librem 14 as secure as possible out of the box for the widest range of customers while also taking ease-of-use and overall convenience into account. We also avoid security measures that take control away from you and give it to us. While we think you should trust us, you shouldn’t have to trust us to be secure.
While we always keep the average customer’s security in mind, we also have a number of customers who face more extreme threats and are willing to trade some convenience for extra security. Those customers have sometimes asked me which combination of options would make their Librem 14 order the most secure.
In this post I will provide what I think are the highest security options you can apply to a Librem 14 order, along with some additional steps to take once you receive your Librem 14. Before I get started though, I want to note that even with these recommendations, there are still additional, more extreme steps a person could take. While I’m providing high security recommendations, my goal here is still to strike a reasonable balance between high security and some level of convenience. For those of you facing even more extreme threats with a higher tolerance for inconvenience, treat these recommendations as a baseline to build on.
The kernel provides a number of macros internally to allow code to generate warnings when something goes wrong. It does not, however, provide a lot of guidance regarding what should happen when a warning is issued. Alexander Popov recently posted a patch series adding an option for the system's response to warnings; that series seems unlikely to be applied in anything close to its current form, but it did succeed in provoking a discussion on how warnings should be handled.
Warnings are emitted with macros like WARN() and WARN_ON_ONCE(). By default, the warning text is emitted to the kernel log and execution continues as if the warning had not happened. There is a sysctl knob (kernel/panic_on_warn) that will, instead, cause the system to panic whenever a warning is issued, but there is a lack of options for system administrators between ignoring the problem and bringing the system to a complete halt.
Popov's patch set adds another option in the form of the kernel/pkill_on_warn knob. If set to a non-zero value, this parameter instructs the kernel to kill all threads of whatever process is running whenever a warning happens. This behavior increases the safety and security of the system over doing nothing, Popov said, while not being as disruptive as killing the system outright. It may kill processes trying to exploit the system and, in general, prevent a process from running in a context where something is known to have gone wrong.
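For reference, the existing behavior can be inspected and toggled through sysctl; a minimal sketch is below. Note that the pkill_on_warn knob shown in the comments exists only in Popov's proposed series, not in mainline kernels.

```shell
# Inspect the current warning policy: 0 = log the warning and continue
# (the default), 1 = panic whenever a WARN() fires.
cat /proc/sys/kernel/panic_on_warn
# To make warnings fatal (useful for fail-fast fleets), as root:
#   sysctl -w kernel.panic_on_warn=1
# The proposed middle ground from the patch series (NOT in mainline):
#   sysctl -w kernel.pkill_on_warn=1
```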
There were a few objections to this option, starting with Linus Torvalds, who pointed out that the process that is running when a warning is issued may not have anything to do with the warning itself. The problem could have happened in an interrupt handler, for example, or in a number of other contexts. "Sending a signal to a random process is just voodoo programming, and as likely to cause other very odd failures as anything else", he said.
One does not normally expect a lot of disagreement over a 13-line patch that effectively tweaks a single line of code. Occasionally, though, such a patch can expose a disagreement over how the behavior of the kernel should be managed. This patch from Drew DeVault, who is evidently taking a break from stirring up the npm community, is a case in point. It brings to light the question of how the kernel community should pick default values for configurable parameters like resource limits.
The kernel implements a set of resource limits applied to each (unprivileged) running process; they regulate how much CPU time a process can use, how many files it can have open, and more. The setrlimit() man page documents the full set. Of interest here is RLIMIT_MEMLOCK, which places a limit on how much memory a process can lock into RAM. Its default value is 64KB; the system administrator can raise it, but unprivileged processes cannot.
Once upon a time, locking memory was a privileged operation. The ability to prevent memory from being swapped out can present resource-management problems for the kernel; if too much memory is locked, there will not be enough left for the rest of the system to function normally. The widespread use of cryptographic utilities like GnuPG eventually led to this feature being made available to all processes, though. By locking memory containing sensitive data (keys and passphrases, for example), GnuPG can prevent that data from being written to swap devices or core-dump files. To enable this extra security, the kernel community opened up the mlock() system call to all users, but set the limit for the number of pages that can be locked to a relatively low value.
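The limit described above is easy to check from a shell; the limits.conf line in the comments is illustrative only.

```shell
# Show this shell's RLIMIT_MEMLOCK soft limit, in KiB ("64" on many distros):
ulimit -l
# Programs hit this limit through mlock()/mlockall(); an administrator can
# raise it, e.g. with an /etc/security/limits.conf entry like (illustrative):
#   *  hard  memlock  1024
```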
One of the key features of the extended BPF virtual machine is the verifier built into the kernel that ensures that all BPF programs are safe to run. BPF developers often see the verifier as a bit of a mixed blessing, though; while it can catch a lot of problems before they happen, it can also be hard to please. Comparisons with a well-meaning but rule-bound and picky bureaucracy would not be entirely misplaced. The bpf_loop() proposal from Joanne Koong is an attempt to make pleasing the BPF bureaucrats a bit easier for one type of loop construct.
To do its job, the verifier must simulate the execution of each BPF program loaded into the kernel. It makes sure that the program does not reference memory that should not be available to it, that it doesn't leak kernel memory to user space, and many other things — including that the program will actually terminate and not lock the kernel into an infinite loop. Proving that a program will terminate is, as any survivor of an algorithms class can attest, a difficult problem; indeed, it is impossible in the general case. So the BPF verifier has had to find ways to simplify the problem.
Initially, "simplifying the problem" meant forbidding loops altogether; when a program can only execute in a straight-through manner, with no backward jumps, it's clear that the program must terminate in finite time. Needless to say, BPF developers found this rule to be a bit constraining. To an extent, loops can be simulated by manually unrolling them, but that is tiresome for short loops and impractical for longer ones. So work soon began on finding a way to allow BPF programs to contain loops. Various approaches to the loop problem were tried over the years; eventually bounded loop support was added to the 5.3 kernel in 2019.
Bootlin has been delivering embedded Linux training courses since its creation in 2004, including more than 430 courses to over 4,500 engineers in over 40 countries since 2009 alone. It does so with a high level of quality and full transparency, with fully open training materials and publicly available training evaluations.
Wayland 1.20.0 is released!
This release contains the following major changes:
- FreeBSD support has been entirely upstreamed and has been added to our continuous integration system.
- The autotools build system has been dropped; Meson has replaced it.
- A few protocol additions: wl_surface.offset allows clients to update a surface's buffer offset independently from the buffer; wl_output.name and wl_output.description allow clients to identify outputs without depending on xdg-output-unstable-v1.
- In protocol definitions, events have a new "type" attribute and can now be marked as destructors.
- A number of bug fixes, including a race condition when destroying proxies in multi-threaded clients.
Commit history since RC1 below.
Simon Ser (2):
      meson: override dependencies to ease use as subproject
      build: bump to version 1.20.0 for the official release
git tag: 1.20.0
Wayland 1.20 is out today as the latest version of the reference Wayland library/support code and core protocol.
While work on the core Wayland code itself has slowed down in recent years, Wayland 1.20 is a fairly notable update. In particular, this first Wayland release in nearly one year is bringing fully upstreamed FreeBSD support. All of the FreeBSD support patches have worked their way upstream into Wayland 1.20 and it's ready to be supported with this release. There is also now FreeBSD continuous integration (CI) test coverage to ensure the FreeBSD support remains in good shape and hopefully won't regress.
One of the big issues I have when working on Turnip driver development is that compiling either Mesa or VK-GL-CTS takes a lot of time to complete, no matter how powerful the embedded board is. There are reasons for that: typically those boards have a limited amount of RAM (8 GB in the best case), a slow storage disk (typically UFS 2.1 on-board storage), and CPUs that are not as powerful as x86_64 desktop alternatives.
[...]
Icecream is a distributed compilation system that is very useful when you have to compile big projects and/or on low-spec machines while having powerful machines on the local network that can do that job instead. However, it is not perfect: the linking stage is still done on the machine that submits the job, which, depending on the available RAM, could be too much for it (you can alleviate this a bit by using ZRAM, for example).
One of the features that icecream has over its alternatives is that there is no need to install the same toolchain in all the machines as it is able to share the toolchain among all of them. This is very useful as we will see below in this post.
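As a hedged sketch of how that toolchain sharing can be used from the machine that submits jobs: the wrapper path is Debian's, and the tarball name is a placeholder for whatever icecc-create-env actually prints.

```shell
# Illustrative icecream setup on the submitting machine (e.g. a slow ARM
# board). Package the local toolchain so remote machines can compile with it
# without having the same compiler installed themselves:
icecc-create-env --gcc "$(command -v gcc)" "$(command -v g++)"
# icecc-create-env prints the generated tarball name; point icecc at it
# (the filename below is a placeholder):
export ICECC_VERSION=$PWD/toolchain.tar.gz
# Build through the icecc compiler wrappers, with far more parallel jobs
# than the local board has cores:
PATH=/usr/lib/icecc/bin:$PATH make -j32
```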
It is not too often that we get to talk about performance optimizations for Mesa's Virgl code, which in conjunction with related "Virgil" components allows for hardware-accelerated 3D/OpenGL running within virtual machines. Hitting Mesa 22.0 this week, though, are some Virgl code improvements that allow for lower memory use within virtual machines.
PostgreSQL users and developers are generally aware that it is best to minimize the number of tasks performed as superuser, just as at the operating system level most Linux and UNIX users are aware that it's best not to do too many things as root. For that reason, PostgreSQL has over the last few years introduced a number of predefined roles that have special privileges and which in some cases can be used in place of the superuser role. For instance, the pg_read_all_data role, new in version 14, has the ability to read all data in every table in the database: not only the tables that currently exist, but any that are created in the future. In earlier versions, you could achieve this effect only by handing out superuser permissions, which is not great, because the superuser role can do much more than just read all the data in the database. The new predefined role allows for a very desirable application of the principle of least privilege.
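As a sketch of that least-privilege pattern (the role, database, and password names here are hypothetical, and PostgreSQL 14+ is required), granting pg_read_all_data avoids handing out superuser just for read access:

```shell
# Hypothetical example, run once by a superuser: create a reporting role
# that can read everything but modify nothing.
psql -d appdb -c "CREATE ROLE reporting LOGIN PASSWORD 'changeme';"
psql -d appdb -c "GRANT pg_read_all_data TO reporting;"
# The role can now SELECT from every table, present and future, but it
# cannot write data or change server settings the way a superuser could.
```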
Unfortunately, the predefined roles which exist in current releases of PostgreSQL do not, in my view, really come close to solving the problem. It's good that we have them, but there are still a large number of things which can't be done without superuser privileges, and even if we make as much progress in the next 3 years as we have in the past 10, we still won't be all that close to a full solution. We need to do better. Consider, for example, the case of a service provider who would like to support a database with multiple customers as tenants. The customers will naturally want to feel as if they have the powers of a true superuser, with the ability to do things like create new roles, drop old ones, change permissions on objects that they don't own, and generally enjoy the freedom to bypass permission checks at the SQL level which superusers enjoy. The service provider, who is the true superuser, also wants this, but does not want the customers to be able to do the really scary things that a superuser can do, like changing archive_command to rm -rf / or deleting the entire contents of pg_proc so that the system crashes and the database in which the operation was performed is permanently ruined.
Today we are looking at how to install Webull Desktop on a Chromebook. Please follow the video/audio guide as a tutorial where we explain the process step by step and use the commands below.
In this tutorial, we will show you how to install GlassFish on Debian 11. For those of you who didn’t know, the GlassFish server is a free, lightweight application server for the development and deployment of Java platforms and web technologies based on Java technology. It supports the latest Java platforms such as Enterprise JavaBeans, JavaServer Faces, JPA, JavaServer Pages, and many more. GlassFish comes with a simple and user-friendly administration console with an update tool for updates and add-on components.
This article assumes you have at least basic knowledge of Linux, know how to use the shell, and, most importantly, host your site on your own VPS. The installation is quite simple and assumes you are running as the root account; if not, you may need to add ‘sudo‘ to the commands to get root privileges. I will walk you through the step-by-step installation of GlassFish on Debian 11 (Bullseye).
Kubernetes is an open-source container orchestration system for automating computer application deployment, scaling, and management.
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation.
Starting with version 5.17, the kernel supports the privacy screens built into the LCD panels of some new laptop models.
This means that the drm drivers will now return -EPROBE_DEFER from their probe() method on models with a builtin privacy screen when the privacy screen provider driver has not been loaded yet.
To avoid any regressions, distros should modify their initrd generation tools to include privacy-screen provider drivers in the initrd (at least on systems with a privacy screen) before 5.17 kernels start showing up in their repos.
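As a sketch of what such an initrd tweak might look like with dracut: the module name below is just one example of a privacy-screen provider (Lenovo's thinkpad_acpi); other hardware would need its own provider driver included instead.

```shell
# /etc/dracut.conf.d/privacy-screen.conf (illustrative config fragment)
# Ensure a privacy-screen provider driver is present in the initrd, so the
# drm driver's probe() does not keep deferring during early boot:
add_drivers+=" thinkpad_acpi "
```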
Want to change the position of notifications in Ubuntu?
As you no doubt know, Ubuntu shows app and other notifications at the top of the screen, just beneath the clock (as in upstream GNOME Shell). This position makes sense within the default UX. The top of the screen in GNOME Shell is where status bar items sit, and notification toasts live in the calendar applet (which is accessed by clicking the clock).
But you’re not everyone.
Perhaps you want to move notifications to the top right of your display. This is where Ubuntu used to show notifications (and is where many other Linux distros and desktop environments still do).
Today, in this article, we will learn how to install aaPanel on Ubuntu 21.04. aaPanel is an alternative to web control panels such as cPanel. Even the free version of this panel can fulfill basic needs, and it comes with quick updates and rich documentation.
OpenGamePanel is an open-source server management panel based on PHP/MySQL. It is a server management tool that provides a lot of features. Most game servers and voice servers can be installed simply by selecting them from a list. The main features include: custom web FTP, auto updates, and easy installation of servers. There are several game server management panels available in the market, but the one that is fairly easy to use and install is OpenGamePanel. It also provides prebuilt plugins for a better and more advanced experience. You can rent out servers to clients using the panel itself, and multiple machines can be configured to be used and managed by a single web panel.
If ever you've been curious about making music, you'll be pleased to know that the open source digital audio workstation Ardour makes it easy and fun, regardless of your level of experience. Ardour is one of those unique applications that manages to span beginner-level hobbyists all the way to production-critical professionals and serves both equally well. Part of what makes it great is its flexibility in how you can accomplish any given task and how most common tasks have multiple levels of possible depth. This article introduces you to Ardour for making your own music, assuming that you have no musical experience and no knowledge of music production software. If you have musical experience, it's easy to build on what this article covers. If you're used to other music production applications, then this quick introduction to how the Ardour interface is structured ought to be plenty for you to explore it in depth at your own pace.
This is the continuation of the Kubernetes introduction guide. In this article, we are going to learn about important features of Kubernetes, which will help you understand the functional concepts of Kubernetes at a deeper level.
Using S3 replication, you can set up automatic replication of S3 objects from one bucket to another. The source and destination buckets can be within the same AWS account or in different accounts. You can also replicate objects from one source bucket to multiple destination buckets.
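As an illustrative sketch using the AWS CLI: the bucket names and IAM role ARN below are placeholders, and versioning must already be enabled on both buckets before replication can be configured.

```shell
# Hypothetical sketch: replicate objects from source-bucket to
# destination-bucket (names and the role ARN are placeholders).
aws s3api put-bucket-replication \
  --bucket source-bucket \
  --replication-configuration '{
    "Role": "arn:aws:iam::123456789012:role/replication-role",
    "Rules": [{
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": {"Status": "Disabled"},
      "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"}
    }]
  }'
```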
Use the LogMeIn Hamachi VPN service? Haguichi is a graphical app that makes it easy to join, create, and manage Hamachi networks on Linux.
Haguichi is a free and open-source app that provides a stylish GTK UI for the official Hamachi for Linux. It has both dark and light window modes and shows a searchable, sortable network list on the left, with details and actions on the right.
It’s well integrated with the GNOME desktop through notifications and a system tray indicator applet, makes it easy to back up and restore configuration, and lets you manage custom commands via the Preferences dialog. It also supports a list of keyboard shortcuts to make network and command actions more efficient.
Knowing how to transfer files securely between Linux hosts is a useful skill to have if you regularly work with Linux servers. You may just need to transfer a handful of files, or you may want to look at backing up files from one Linux host to another. Whatever the reason, there are a number of ways you can transfer files securely on Linux. Continue reading to find out about some of the more common and popular ways to transfer files.
Turns out that I have one of them, ARMA3, and I could test that it indeed works by filtering for BattlEye-only servers and trying to join them from Linux with Proton (note that you need to disable most of your mods if you intend to join online competition). It seems like Steam downloads a specific BattlEye package before running ARMA3 after this update, so you don’t really have to do anything on your end.
A fresh month has arrived and so it brings with it a new Humble Choice, the curated monthly bundle along with a few other game bundles to go over. Here's a roundup of how they all work on Linux either natively or with Steam Play Proton.
First up is Humble Choice for December. Here you pay for whatever tier you feel is the best value to get access to the Humble Trove (a ton of DRM-free games), a discount at the Humble Store and the ability to claim Steam keys (sometimes GOG keys) for multiple titles - the amount of which depends on what tier you buy into.
Looks like publisher Devolver Digital was right to back this one, as Loop Hero from developer Four Quarters has managed to hit a million sales on Steam.
A game all about repetition. Loop Hero sees you constantly run through a procedurally generated map, where your character automatically walks around and engages in battle with various creatures. It's also a deck-builder, although your deck is made of map tiles, so you build up the map from a blank slate with each loop. It's deliciously addictive to keep playing through while it reveals small bits of story.
Another developer is looking into native Linux builds for their game, this time it's Tomas Sala for The Falconeer in preparation for the upcoming Steam Deck handheld.
"Soar through the skies aboard a majestic warbird, explore a stunning oceanic world and engage in epic aerial dogfights, in this BAFTA nominated air combat game from solo developer, Tomas Sala.
An incredibly impressive double-episode total conversion for Doom 2, we have Ashes 2063 and Ashes: Afterglow. Our own BTRE talked a bit about the first episode back in 2018, and since then it's been remastered and a second episode released only recently. Now, they're both easily available from Mod DB.
"Explore and scavenge through dozens of intricate maps, and use your scratched together arsenal to fight hordes of dangerous raiders and mutants in this expansive GZDoom TC. Ashes is part Duke Nukem 3D, part Doom, thrown into a blender with Mad Max, Fallout and Stalker for that refreshing post-apocalyptic twist."
SCS has upgraded both Euro Truck Simulator 2 & American Truck Simulator with some major improvements, and it seems they may work even better on the upcoming Steam Deck now.
We'll go over some of the extra content for each below, but first, some tech changes have come to both games. For starters, gamepad support in both has been greatly improved. It's a big change considering all the different controls needed, with the primary aim of allowing navigation of the entire UI without an additional device. They said they plan to keep making improvements here too.
Another big change is the inclusion of SDF (Signed Distance Fields) fonts, which "allows texts and fonts to be displayed perfectly in any resolution, scale, or distance".
GNOME 41.2 is here five weeks after GNOME 41.1 to update the Orca screen reader accessibility tool with improved behavior when the focused back/forward button is pressed, improved presentation of subscript and superscript elements, the ability to identify and present custom-element images, improved speech generator for browser alerts, support for handling name/description change floods in the event manager, improved presentation of indeterminate progress bars (busy indicators), and better Python 3.10 compatibility.
If you have been following GNOME Shell development, you might have heard about this change before. Why did it take so long to get this merged?
The showstopper was probably what you would suspect the least: applications that are not handling events. If an application is not reading events in time (is temporarily blocking the main loop, frozen, slow, in a breakpoint, …), these events will queue up.
But this queue is not infinite; the client would eventually be shut down by the compositor. With these input devices, that could take a long… less than half a second. Clearly, there had to be a solution in place before we rolled this in.
There’s been some back and forth here, and several proposed solutions. The applied fix is robust but unfortunately still temporary; a better solution is being proposed at the Wayland library level, but it’s unlikely to be ready before GNOME 42. In the meantime, users can happily shake their input devices without thinking about how many times a second is enough.
Until the next adventure!
If you want to switch to Linux but don’t know which distro to choose for your aging PC, Zorin OS 16 Lite is probably the perfect choice.
Zorin OS is an Ubuntu-based Linux distribution designed especially for newcomers to Linux. There are several ways you can get started with Zorin OS.
There is Zorin OS Core, which is the free edition of the distro and comes with GNOME as the desktop environment. If you prefer Xfce, you can try Zorin OS Lite, which is targeted at basic use on low-spec PCs. On top of that, there is also a paid version called Zorin OS Pro.
Ultimately, the main difference between Zorin OS Lite and the Core or Pro versions is that the Lite version uses Xfce as the desktop environment, while the other versions use a heavily modified version of GNOME.
Seasoned Kali Linux users are already aware of this, but for the ones who are not, we do also produce weekly builds that you can use as well. If you cannot wait for our next release and you want the latest packages (or bug fixes) when you download the image, you can just use the weekly image instead. This way you’ll have fewer updates to do. Just know that these are automated builds that we do not QA like we do our standard release images. But we gladly take bug reports about those images because we want any issues to be fixed before our next release!
Coming three months after Kali Linux 2021.3, the Kali Linux 2021.4 release is here with Linux kernel 5.14, support for the recently launched Raspberry Pi Zero 2 W single-board computer (unfortunately without Nexmon support), improved support for Apple Silicon (M1) Macs, extended compatibility for the Samba client to support almost all Samba servers out there, and easier configuration of the package manager’s mirrors.
In this video, I am going to show an overview of EndeavourOS 21.4 and some of the applications pre-installed.
Today we are looking at the Zorin OS 16 Lite edition. It is based on Ubuntu 20.04, Linux kernel 5.11, and Xfce 4.16, and uses about 1.4 - 1.7 GB of RAM when idling. Enjoy!
In this video, we are looking at Zorin OS 16 Lite.
...some plugins are disabled due to missing dependencies. I don't know if any of those are important. I threw everything I could think of into the DEPENDS variable, but something is still missing. Well, if anyone reports that one of those missing plugins is required, I will have to hunt down the required dependencies. I have a particular interest in using Claws Mail to download everything from my Gmail account. I need to sort that out, so that the downloaded emails will be permanently stored.
Two more optional dependencies of Claws Mail are 'libetpan' and 'bogofilter', now also compiled in OE. Recipe for libetpan...
Who remembers the Sol-20? Us neither, but it was an important milestone on the path to where we, and our computers, are today. Without the Sol-20 the home computer world would be very different. This important point in home computer history is an excellent choice, then, for a retro computer reproduction project such as that carried out by Michael Gardi (and highlighted by Hackaday) using a Raspberry Pi in place of the Intel 8080 at the original computer’s heart.
The first fully assembled microcomputer with both a built-in keyboard and a TV output, the Sol-20 had the misfortune to be released in 1976, a year before Apple, Commodore and Tandy came and stomped all over the market with the Apple II, Pet and TRS-80. It was initially sold in three versions: a motherboard kit; the Sol-10, which added a case, keyboard and power supply but came with no expansion slots; and the Sol-20, which beefed up that power supply and added five S-100 bus slots (the Sol-20 would be by far the most popular model). The computer stayed in production until 1979 and would sell around 12,000 units, making them incredibly rare today. For contrast, total Apple II sales would hit around six million, including a million in 1983 alone.
For the 2021 version, having an authentic-looking case was a priority. The distinctive blue original was made of sheet metal with wooden sides, but Gardi reached for his 3D printer rather than his cutting torch to make the build more accessible to others. The sides are made from walnut, a material befitting the aesthetic of the time.
Gardi also made a matching display for the Sol-20. Again 3D printed and embellished with walnut, it utilises a 4:3 LCD panel and connects to the Pi via an HDMI cable.
First, if you’ve been testing RHEL 9.0 Beta, you’ll notice that the versions of Podman, Buildah, and Skopeo are identical. This is because the AppStream channels in RHEL 8 and RHEL 9 are meant to be quite similar. The idea is that you’ll be able to upgrade much more easily.
You’ll notice this synchronization as RHEL 9 GA releases, and it is planned to continue until RHEL 8.10 when the versions of Podman, Buildah, and Skopeo will freeze. At this point, the RHEL 9 Container Tools Application Stream will be the source for the latest Container Tools.
When firefighters arrive on the scene of a fire, they often have only seconds to decide where to focus their attention to save the most lives. Visibility may be low and they may not have enough information about who is in a building or where they are located. How could technology be applied to help these everyday heroes make better split-second decisions?
The Call for Code Honoring Everyday Heroes Challenge asked participants to develop new technology solutions to address challenges faced by first responders, delivery personnel, childcare workers, healthcare frontline workers, educators, and many more who have been invaluable to society during the COVID-19 pandemic. Technology solutions would need to run on a Samsung tablet, smartphone, and/or wearable device and use IBM open hybrid cloud technologies such as IBM Cloud and IBM Watson. Participants also had access to Samsung toolkits, as well as data from The Weather Company. Teams had four weeks to create promising, innovative new solutions that can be nurtured, improved, and put to work through the Call for Code incubation framework with IBM and Samsung Electronics.
Today, we are sharing that Werner Knoblich, Red Hat’s senior vice president and general manager for the Europe, Middle East, and Africa (EMEA) region has decided to retire from Red Hat at the end of 2021. IT industry leader and Red Hatter Hans Roth, who is currently senior vice president and general manager of Global Services and Technical Enablement, will succeed him in the role beginning in January.
Knoblich has been a strong and passionate advocate for our customers and Red Hatters throughout his tenure. His mantra, ‘know your culture first, then build your employee engagement into it,’ has consistently been at the heart of his leadership style in addition to a deep commitment to open source ways of working to create a highly engaged and results-driven team.
Red Hat Product Security is committed to providing tools and security data to help you better understand security threats. This data has been available on our Security Data page and is also available in a machine-consumable format with the Security Data API. By exposing a list of endpoints to query security data, this tool allows you to programmatically query the API for data that was previously exposed only through files on our Security Data page. To understand how we share our security data, take a look at this post.
This post will cover how the Security Data API can be used to address real-world security use cases and concerns programmatically.
These selected use cases are based on questions which were sent to the Red Hat Product Security team in recent months. Each of these examples can be easily modified to address your own needs.
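As a small illustration of the kind of programmatic query described above, the Security Data API can be hit with plain curl; the CVE number below is just an example, and the endpoints shown are the public, unauthenticated ones.

```shell
# Fetch Red Hat's machine-readable data for a single CVE (example CVE):
curl -s https://access.redhat.com/hydra/rest/securitydata/cve/CVE-2021-44228.json
# Other endpoints can be filtered by query parameters, for instance
# listing CVRF documents that mention a given package:
#   curl -s 'https://access.redhat.com/hydra/rest/securitydata/cvrf.json?package=kernel'
```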
From telecommunications networks to the manufacturing floor, through financial services to autonomous vehicles and beyond, computers are everywhere these days, generating a growing tsunami of data that needs to be captured, stored, processed and analyzed.
At Red Hat, we see edge computing as an opportunity to extend the open hybrid cloud all the way to data sources and end users. Where data has traditionally lived in the datacenter or cloud, there are benefits and innovations that can be realized by processing the data these devices generate closer to where it is produced.
This is where edge computing comes in.
Jyväskylä University of Applied Sciences (JAMK) offers its 8,500 students high-quality education, which is built to meet the needs of the labor market.
It is beneficial for both students and the job market in the region that student qualifications match the job requirements. JAMK has good relations with local companies and organizations, and 86% of JAMK computer science students are employed soon after studies. JAMK faculty and staff consider it important to listen with an attentive ear to the requirements set for experts in the future. Solutions based on open source are on the rise.
A seemingly straightforward question aimed at candidates for the in-progress Fedora elections led to a discussion on the Fedora devel mailing list that branched into a few different directions. The question was related to a struggle that the distribution has had before: whether using non-free Git forges is appropriate. One of the differences this time, though, is that the focus is on where source-git (or src-git) repositories will be hosted, which is a separate question from where the dist-git repository lives.
FOSS Force has learned that on Thursday the AlmaLinux Foundation, the nonprofit organization behind the eponymous freshman Linux distribution that’s positioning itself as a drop-in CentOS alternative, will announce that Codenotary has joined its governance board as its first top-tier Platinum member, and that AlmaLinux board member Jack Aboutboul has taken a job as VP of product at Codenotary.
In an email exchange with FOSS Force, Aboutboul verified Codenotary’s Platinum membership, his employment there, and that he will continue to hold his positions at AlmaLinux.
Houston-based startup Codenotary markets highly scalable open source software built around its immudb (for immutable database, a fast and cryptographically verifiable ledger database) for helping companies protect their software supply chain, which has become increasingly important in the wake of the SolarWinds software supply chain attack that surfaced late last year. The company’s software is available for enterprises to run on their own equipment or in cloud instances, or through Codenotary’s Software as a Service offering called Codenotary Cloud.
I have been curious about open source and how to get started. I wasn’t sure of the right way to start, so I did some research and spoke to a friend who introduced me to Outreachy as a good place to start. I submitted my initial application and made it to the contribution stage. It was exciting to see my first merged contribution to open source. I look forward to an exciting internship with the Debian community and Outreachy!
At Canonical, we love Flutter and we can’t stop talking about it. Our Flutter developers have been working on bringing support to desktop operating systems since July 2020.
This includes our new Ubuntu Desktop installer, built with Flutter, which will be the default user journey in our upcoming 22.04 LTS release. (If you want to see how it’s coming along you can test it out here.)
Continuing our Flutter journey, we recently partnered with Invertase to bring FlutterFire support to Desktop and Dart. In this blog post, we’ll go over what Flutter’s Firebase announcement means for desktop developers, how to get started with Flutter on Desktop, and where to go to keep an eye on this exciting project!
We at Canonical, the company behind Ubuntu, are pleased to join hands with the Magma Foundation. Magma connects the world to a faster network by providing operators an open, flexible, and extendable mobile core network solution. Its simplicity and low-cost structure empower innovators to build mobile networks that were never imagined before.
We decided to support this open source project because of our wider telco efforts. Our goal is to enable everyone to build an end-to-end private LTE or 5G network based on open source tools. This is also the reason Canonical is committing effort to projects such as OpenRAN, OSM, and OMEC.
The purpose of this tutorial is to show how to change the system hostname on Ubuntu 22.04 Jammy Jellyfish Linux. This can be done via command line or GUI, and will not require a reboot in order to take effect.
The hostname of a Linux system is important because it is used to identify the device on a network. The hostname is also shown in other prominent places, such as in the terminal prompt. This gives you a constant reminder of which system you are working with.
Hostnames give us a way to know which device we are interacting with either on the network or physically, without remembering a bunch of IP addresses that are subject to change. You should pick a descriptive hostname like “ubuntu-desktop” or “backup-server” rather than something ambiguous like “server2.”
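To make the renaming step concrete, here is a minimal sketch (the `hostnamectl` commands are the standard systemd tooling on Ubuntu 22.04, but the hostname value itself is a hypothetical example; the validation pattern encodes the usual rules of letters, digits, and interior hyphens, up to 63 characters):

```shell
# Hypothetical new hostname; pick something descriptive for your machine
NEW_HOSTNAME="backup-server"

# Sanity-check it: 1-63 chars, letters/digits/hyphens, no leading/trailing hyphen
if echo "$NEW_HOSTNAME" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?$'; then
    echo "ok: $NEW_HOSTNAME"
else
    echo "invalid hostname" >&2
fi

# Apply it on a running system (requires sudo; no reboot needed):
#   sudo hostnamectl set-hostname "$NEW_HOSTNAME"
# Confirm the change:
#   hostnamectl status
```

After setting the name, open a new terminal and the prompt should reflect the change immediately.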
Are you considering downloading Ubuntu 22.04 but need to know the system requirements? In this article, we’ll go over the minimum recommended system requirements for running Ubuntu 22.04 Jammy Jellyfish. Whether you want to upgrade to Ubuntu 22.04, or install the operating system on a PC or as a virtual machine, we’ll help you make sure you have the required hardware.
Ubuntu is an inherently lightweight operating system, capable of running on some pretty outdated hardware. Canonical (the developers of Ubuntu) even claims that, generally, a machine that can run Windows XP, Vista, Windows 7, or x86 OS X will be able to run Ubuntu 22.04 even faster. Let’s take a closer look at the hardware requirements below.
At the end of October came the pleasant surprise of the introduction of the Raspberry Pi Zero 2 W. This drop-in replacement for the original Raspberry Pi Zero features a more powerful 1.0GHz quad-core Cortex-A53 compared to the minuscule 1GHz single-core design of the original Pi Zero, while boasting 512MB of LPDDR2 RAM. Here are some initial benchmarks of the Raspberry Pi Zero 2 W for those curious about its performance.
Over the past month I've been playing with a Raspberry Pi Zero 2 W, kindly provided by the Raspberry Pi Foundation. This 65 x 30 mm single board computer has been working out well and offering nice performance potential for its form factor -- much more interesting for any modest workloads than the original single-core Raspberry Pi Zero.
You probably don’t think about aluminum a lot. We do – it’s one of our 14 focus materials. But we really dug deep when we decided the Fairphone 4 was going to have an aluminum case rather than plastic. And it turns out, aluminum is really interesting.
Start with this: It’s the most common metal in the Earth’s crust. It’s forged in stars when magnesium picks up an extra proton. So when it comes to supply, there’s a lot of it. But it’s hard to make usable. In 1825, Danish chemist Hans Christian Oersted managed to produce the first malleable aluminum, but it was an outrageously expensive process. For decades, aluminum was as expensive as gold. Napoleon III’s state dinners were proudly served on aluminum plates, and his son waved an aluminum rattle.
In 1886, two inventors simultaneously came up with a process in which aluminum oxide is melted in cryolite (sodium aluminum fluoride) and subjected to an electric current. And to this day, that’s how aluminum is made.
Ibase’s IP65-protected, 27-inch “OFP-W2700” panel PC runs Linux or Win 10 on a choice of Ryzen V1000, Whiskey Lake, or Apollo Lake along with SATA, 2x GbE, 4x USB 3.0, and up to 3x M.2.
Ibase announced a 27-inch, open frame panel PC series that comes in three x86 flavors that support Linux Kernel 4+ or Windows 10. The OFP-W2700 series provides both portrait and landscape display modes. The semi-rugged, IP65-protected systems support both indoor and semi-outdoor environments for infotainment terminal and self-service kiosk applications. One image suggests the product is just the thing for a gym treadmill display.
ECS offers support for Windows 10 and Linux operating systems, and both models share most of the same specifications, except that the thicker LIVA Z3E adds a 2.5-inch SATA bay and two RS232 DB9 ports. The new Jasper Lake model should be up to 35% faster than the “previous generation,” presumably the LIVA Z2 based on Gemini Lake processors.
As familiar as we all are with the UNO, there’s probably a lot you don’t know about the iconic Arduino microcontroller board. Put on your rose-tinted spectacles, and let’s wax poetic about the origins of this beloved maker board.
If you’ve ever used a real TeleType machine or seen a movie with a newsroom, you know that one TeleType makes a lot of noise and several make even more. [CuriousMarc] acquired the silent replacement, a real wonder of its day: the TI Silent 703. The $2,600 machine was portable, if you think hauling a 25-pound suitcase around is portable. In 1971, it was definitely a step up.
The machine used a thermal printer and could include a built-in acoustic coupler for talking over the phone. You could also get a dual tape drive that acted like a mostly silent paper tape reader and punch.
Of course, thermal printers require thermal paper, which has its own issues. [Marc] doesn’t just turn the machine on, but connects it through an RS232 analyzer and scope to get it working as a real I/O device. He also tears into it, something you probably couldn’t do back in the day, since most users leased these machines rather than paying the full price, which would be almost $18,000 today.
SiFive has been busy. Just a few days after the SiFive Performance P650 announcement, the company has announced the SiFive Essential 6-Series RISC-V processor family, starting with four 64-bit/32-bit real-time cores and two Linux-capable application cores, plus the SiFive 21G3 release with various improvements to existing families.
Intel's oneDNN Deep Neural Network Library that is part of their oneAPI toolkit is out with version 2.5 and brings RISC-V CPU support among other updates.
Intel's oneDNN library that helps developers build out deep learning applications continues to support more operating system platforms and hardware architectures. While obviously catering to Intel's own CPUs/GPUs, oneDNN has also built up support for AArch64, POWER, IBM Z, and NVIDIA GPUs, and now, with oneDNN 2.5, even the RISC-V processor ISA.
Taking place in San Francisco from Monday through yesterday evening was the RISC-V Summit for discussions around this dominant open-source processor ISA. For those that did not make it to the event, many of the slide decks are available.
The 2021 RISC-V Summit covered the XiangShan as an open-source high performance RISC-V processor out of China, various RISC-V demonstrations, various IoT / edge computing talks in the context of using RISC-V, various Linux kernel features for this ISA, different RISC-V extensions, the various wares of leading RISC-V designer SiFive, and much more.
SiFive and AB Open demoed a “SiFive RISC-V Rack Cluster” that runs Linux on four of SiFive’s RISC-V U74 based HiFive Unmatched boards. AB Open and Future Computing recently announced a PC design based on the SBC.
At the RISC-V Summit in San Francisco this week, RISC-V chipmaker and designer SiFive demonstrated a SiFive RISC-V Rack Cluster with 4x HiFive Unmatched SBCs. The four-way, rackmount cluster collaboration with AB Open appears to be the first cluster kit using RISC-V technology. Last month, AB Open joined with Future Computing to unveil a custom desktop PC based on the Unmatched and has posted some open source hardware and software files for the project (see farther below).
A year of virtual conferences that began with linux.conf.au will end on a high note next week as Collaborans will be presenting three talks at the Open Source Summit Japan + Automotive Linux Summit 2021, taking place entirely online December 14-15.
Open Source Summit Japan is "the leading conference connecting the Japanese open source ecosystem under one roof", while the Automotive Linux Summit "gathers the most innovative minds from automotive expertise and open source excellence, for discussions and learnings that propel the future of embedded devices in the automotive arena."
The Call For Participation is now open for the Distribution Devroom at the upcoming FOSDEM 2022, to be hosted virtually on February 6th.
Pocket has long been known as the go-to place to discover, save and spend time with great stories from around the web. As we look to the year ahead of us, we will continue to empower users to spend time with the stories that matter most to them and to help users discover the very best of the web.
Tor Browser 11.0.2, a specialized browser focused on ensuring anonymity, security, and privacy, has been released. When using Tor Browser, all traffic is redirected through the Tor network only, and direct connections through the system’s standard network stack are impossible, which prevents tracing the user’s real IP address (in the event of a browser compromise, attackers could still gain access to the system’s network parameters, so for complete protection against potential leaks, use products such as Whonix). Tor Browser builds are prepared for Linux, Windows, and macOS.
For additional protection, Tor Browser includes the HTTPS Everywhere add-on, which enables traffic encryption on all sites where possible, and the NoScript add-on, which reduces the threat from JavaScript-based attacks and blocks plugins by default. Alternative transports are used to combat blocking and traffic inspection. To protect against fingerprinting of visitor-specific features, APIs such as WebGL, WebGL2, WebAudio, Social, SpeechSynthesis, Touch, AudioContext, HTMLMediaElement, MediaStream, Canvas, SharedWorker, Permissions, MediaDevices.enumerateDevices, and screen.orientation are disabled or limited; telemetry, Pocket, Reader View, HTTP Alternative-Services, MozTCPSocket, and “link rel=preconnect” are also disabled, and libmdns is modified.
It’s that time of year again — when all of the mail carriers have overflowing trucks, malls are miraculously busy and budgets are tight. Yes, it is holiday shopping time. This year more than 84% of Americans plan to buy holiday gifts, with estimates that Americans will spend at least as much on gifts as last year — $789 billion on present purchases alone. And as much as we love our family and friends, buying gifts for them can be downright stressful. While Mozilla can’t make your impossible-to-shop-for dad any easier to shop for or fix the supply chain issues, we can help make the process of holiday shopping more enjoyable.
Here’s our summary of updates, events and activities in the LibreOffice project last month – click the links to learn more!
In our sixth birthday publication we are interviewing Vincent Lequertier about crucial aspects of artificial intelligence, such as its transparency, its connection to Open Science, and questions of copyright. Vincent also recommends further readings and responds to 20 Years FSFE.
A PhD candidate at the Claude Bernard university in Lyon who researches artificial intelligence for healthcare, Vincent supports software freedom and volunteers for the FSFE in his free time. He has been a part of the System Hackers, the team responsible for the technical infrastructure of the FSFE, for many years. His contribution was valuable in setting the foundation for the good state the FSFE's System Hackers team is in today. Vincent is also a member of the FSFE's General Assembly, and participates in the 'Public Money? Public Code!' campaign. In our interview, Vincent shares his thoughts answering questions about the current state of AI and its future implications.
As we reach the close of another year of fighting for free software, and in what is for many people the most turbulent of times, we have finalized another Free Software Foundation Bulletin. Our biannual magazine is printed as well as presented online – if you've received one in the mail, we encourage you to post a picture on social media with #fsfbulletin!
Advent of Code, for those not in the know, is a yearly Advent calendar (since 2015) of coding puzzles many people participate in for a plethora of reasons ranging from speed coding to code golf, with stops at learning a new language or practicing already known ones.
I usually write boring C++, but any language and then some can be used. There are reports of people implementing the puzzles in hardware, solving them by hand on paper, or using Microsoft Excel… so, after solving a puzzle the easy way yesterday, this time I thought: CHALLENGE ACCEPTED! I somehow remembered an old 2008 article about solving Sudoku with aptitude (by Daniel Burrows, via archive.org as the blog is long gone) and the good old retort that “a package management system that can solve [puzzles] based on package dependency rules is not something that I think would be useful or worth having” (Russell Coker).
Day 8 has a rather lengthy problem description and can reasonably be approached in a bunch of different ways. One unreasonable approach might be to massage the problem description into Debian packages and let apt help me solve the problem (specifically Part 2, which you unlock by solving Part 1. You can do that now, I will wait here.)
A new package of mine arrived on CRAN yesterday in its inaugural 0.0.1 upload: qlcal.
qlcal is based on the calendaring subset of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be challenging to build). The only build requirements are Rcpp for the seamless R/C++ integration, and BH for Boost headers.
qlcal covers over sixty country / market calendars and can compute holiday lists, their complement (i.e. business-day lists), and much more.
It is easy to over-complicate log management. Almost all departments in a company need to log messages for their daily activities. However, installing several different log management and analysis systems in parallel is a nightmare from both a security and an operations perspective, and wastes many resources. You cannot always reduce the number of log analysis systems, but you can reduce the complexity of log management. Let me show you how.
Engineers from Intel and Arm in cooperation with The Khronos Group feel ready now to begin landing their SPIR-V back-end within the upstream LLVM source tree! This SPIR-V back-end for LLVM would ultimately allow LLVM front-ends for different languages to more easily target this industry-standard shader representation so that it could be ingested by Vulkan / OpenCL drivers.
This "LLVM-SPIRV-Backend" has been in the works for a while as a means of generating SPIR-V binaries from LLVM and, unlike earlier SPIR-V + LLVM translation attempts, it is a true back-end for LLVM. Intel, for its part, has initially focused on the OpenCL compute portion of SPIR-V while acknowledging the possibility of extending it to support 3D shaders for Vulkan.
In addition to the LLVM SPIR-V back-end appearing ready for merging, Facebook's BOLT project for optimizing the performance of binaries is also working through the final steps for being mainlined in the LLVM compiler stack.
For the past several years, Facebook's BOLT has sped up Linux binaries by collecting an execution profile for large applications/binaries and then optimizing the code layout of the binary accordingly.
OK, Perl does not literally have a warning about a 1930s pulp fiction and radio serial character. But Perl 5.28 introduced shadow as a new warning category for cases where a variable is redeclared in the same scope. Previously, such warnings were under misc.
Although I love using Raku, the fact that it is still a relatively young language means that there is a fair amount that is lacking when it comes to tooling, etc. Until recently, this included a way to calculate code coverage: how much of the code in a library is exercised (=covered) by that library’s test suite.
Now, truth be told, this feature has been available for some time in the Comma IDE. But this (together with other arguably essential developer tools like profiling, etc) is only available in the “Complete” edition, which requires a paid subscription.
Still, I knew that the Raku compiler kept track of covered lines, so I always felt like this should be doable. It only needed someone to actually do it… and it looks like someone actually did.
While there are few rules on the names of variables, classes, functions, and so on (i.e. identifiers) in the Python language, there are some guidelines on how those things should be named. But, of course, those guidelines were not always followed in the standard library, especially in the early years of the project. A suggestion to add aliases to the standard library for identifiers that do not follow the guidelines seems highly unlikely to go anywhere, but it led to an interesting discussion on the python-ideas mailing list.
To a first approximation, a Python identifier can be any sequence of Unicode code points that correspond to characters, but they cannot start with a numeral nor be the same as one of the 35 reserved keywords. That leaves a lot of room for expressiveness (and some confusion) in those names. There is, however, PEP 8 ("Style Guide for Python Code") that has some naming conventions for identifiers, but the PEP contains a caveat: "The naming conventions of Python's library are a bit of a mess, so we'll never get this completely consistent".
But consistency is just what Matt del Valle was after when he proposed making aliases for identifiers in the standard library that do not conform to the PEP 8 conventions. The idea cropped up after reading the documentation for the threading module in the standard library, which has a note near the top about deprecating the camel-case function names in the module for others that are in keeping with the guidelines in PEP 8. The camel-case names are still present, but were deprecated in Python 3.10 in favor of names that are lower case, sometimes with underscores (e.g. threading.current_thread() instead of threading.currentThread()).
How do I use a bash for loop to iterate through array values under UNIX / Linux operating systems? How can I loop through an array of strings in Bash?
The Bash provides one-dimensional array variables. Any variable may be used as an array; the declare builtin will explicitly declare an array. There is no maximum limit on the size of an array, nor any requirement that members be indexed or assigned contiguously. Arrays are indexed using integers and are zero-based. This page explains how to declare a bash array and then use Bash for Loop to iterate through array values.
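A minimal sketch of what the page describes (the array name and values here are made-up examples):

```shell
#!/usr/bin/env bash
# Declare an indexed array of strings (indexing is zero-based)
distros=("ubuntu" "fedora" "debian")

# Iterate over every element; quoting "${distros[@]}" preserves
# elements that contain spaces
for d in "${distros[@]}"; do
    echo "distro: $d"
done

# Arrays carry their own length: ${#distros[@]}
echo "count: ${#distros[@]}"
```

The `declare -a distros` builtin achieves the same declaration explicitly, and individual elements remain addressable by index, e.g. `${distros[0]}`.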
Rust is destined for great things, to the extent that it has been proposed that Linux be rewritten, at least partially, in the language. Linus Torvalds did not close the door on this possibility, but the kernel's creator, never one to embrace disruption for its own sake, showed some skepticism about how the technology that originated at Mozilla would hold up when push comes to shove.
However, the strong interest in bringing Rust to Linux, together with the enormous potential that the official implementation of the language holds, suggested that its introduction was going to take place sooner rather than later, and as has been seen recently, that’s how it will be, given that some developers are taking important steps to make Rust the second language of the Linux kernel.
Before proceeding further, it is important to note that Linux, at least at the project level, has not been pure C for a long time. This means that Rust would not be the first outsider to “sneak” into one of the projects that, to this day, remains one of the main bastions of the C language, which has endured and continues to endure as one of the great references of low-level programming.
You may have noticed that the design of this site has changed a bit (mainly on desktop, though a few of the changes filter out to those of you who read from a narrower viewport). My main motivation is to make the site look a bit more punchy.
As I’ve mentioned a few times in the past: I am not a designer. I really don’t know what I’m doing, other than making stuff “look nice” to my eyes, turning it into CSS, and rolling it out and hoping for the best.
But I figured I would run a few things by you, the reader, since pleasing your eyes matters more than mine. Also: I very rarely ever mention design changes when we make them. This sometimes leads people to mail in reporting things as broken.
Traditional lensmaking is a grind — literally. One starts with a piece of glass, rubs it against an abrasive surface to wear away the excess bits, and eventually gets it to just the right shape and size for the job. Whether done by machine or by hand, it’s a time-consuming process, and it sure seems like there’s got to be a better way.
Thanks to [Moran Bercovici] at Technion: Israel Institute of Technology, there is. He leads a team that uses fluids to create complex optics quickly and cheaply, and the process looks remarkably simple. It’s something akin to the injection-molded lenses that are common in mass-produced optical equipment, but with a twist — there’s no mold per se. Instead, a UV-curable resin is injected into a 3D printed constraining ring that’s sitting inside a tank of fluid. The resin takes a shape determined by the geometry of the constraining ring and the gravitational, hydrostatic, and surface tension forces acting on it. Once the resin achieves the right shape, a blast of UV light cures it. Presto, instant lenses!
Oftentimes, the feature set for our typical fitness-focused wearables feels a bit empty. Push notifications on your wrist? OK, fine. Counting your steps? Sure, why not. But how useful are those capabilities anyway? Well, what if wearables could be used for a more dignified purpose like helping people in recovery from substance use disorder (SUD)? That’s what the researchers at the University of Massachusetts Medical School aimed to find out.
In their paper, they used a wrist-worn wearable to measure locomotion, heart rate, skin temperature, and electrodermal activity of 38 SUD patients during their everyday lives. They wanted to detect periods of stress and craving, as these parameters are possible triggers of substance use. Furthermore, they had patients self-report times during the day when they felt stressed or had cravings, and used those reports to calibrate their model.
Throughout history, the human body has been the subject of endless scrutiny and wonder. Many puzzled over the function of all these organs and fluids found inside. This included the purpose of blood, which was alternately dismissed as merely ‘cooling the body’ and credited with regulating the body’s humors, leading to the practice of bloodletting and other questionable remedies. As medical science progressed, however, we came to quite a different perspective.
Simply put, our circulatory system and the blood inside it, is what allows us large, multi-celled organisms to exist. It carries oxygen and nutrients to cells, while enabling the removal of waste products as well as an easy path for the cells that make up our immune system. Our blood and the tissues involved with it are crucial to a healthy existence. This is something which becomes painfully clear when we talk about injuries and surgeries that involve severe blood loss.
While the practice of blood transfusions from donated blood has made a tremendous difference here, it’s not always easy to keep every single type of blood stocked, especially not in remote hospitals, in an ambulance, or in the midst of a war zone. Here the use of artificial blood — free from complicated storage requirements and the need to balance blood types — could be revolutionary and save countless lives, including those whose religion forbids the transfusion of human blood.
Talking about this Chinese ham radio transceiver requires a veritable flurry of acronyms: HF, SSB, QRP, and SDR to start with. [Paul] does a nice job of unboxing the rig and checking it out. The radio is a clone of a German project and provides a low-power radio with a rechargeable battery. You can see his video about the gear below.
SSB is an odd choice for low-power operation, although we wonder if you couldn’t feed digital data in using a mode like PSK31 that has good performance at low power. There are several variations of the radio available, and they generally cost less than $200 — sometimes quite a bit less.
Trouble In Paradise (TIP) was a popular Windows-only tool for troubleshooting Iomega Jaz and Zip drives way back when. The drives have fallen out of favor with PC users, but they are still highly prized amongst classic Mac collectors, who use the SCSI versions as boot disks for the vintage machines. Thus, [Marcio Luis Teixeira] set about porting the TIP tool to the platform.
If you need a lens for a project, chances are pretty good that you pick up a catalog or look up an optics vendor online and just order something. Practical, no doubt, but pretty unsporting, especially when it’s possible to cast custom lenses at home using silicone molds and epoxy resins.
Possible, but not exactly easy, as [Zachary Tong] relates. His journey into custom DIY optics began while looking for ways to make copies of existing mirrors using carbon fiber and resin, using the technique of replication molding. While playing with that, he realized that an inexpensive glass or plastic lens could stand in for the precision-machined metal mandrel which is usually used in this technique. Pretty soon he was using silicone rubber to make two-piece, high-quality molds of lenses, good enough to try a few casting shots with epoxy resin. [Zach] ran into a few problems along the way, like proper resin selection, temperature control, mold release agent compatibility, and even dealing with shrinkage in both the mold material and the resin. But he’s had some pretty good results, which he shares in the video below.
The Laser Interferometer Gravitational-Wave Observatory (LIGO) is a huge installation measured in kilometers that is listening for wrinkles in space-time. Pulling this off is a true story of hardware and software hacking, and we were lucky to have Dr. Keith Thorne dive into those details with his newly published “Extreme Instruments for Extreme Astrophysics” keynote from the 2021 Hackaday Remoticon.
Gravity causes space-time to stretch — think back to the diagrams you’ve seen of a massive orb (a star or planet) sitting on a plane with grid lines drawn on it, the fabric of that plane being stretched downward from the mass of the orb. If you have two massive entities like black holes orbiting each other, they give off gravitational waves. When they collide and merge, they create a brief but very strong train of waves. Evidence of these events is what LIGO is looking for.
The Linux Foundation has announced that the Cyber-investigation Analysis Standard Expression (CASE) is becoming a community project as part of the Cyber Domain Ontology (CDO) project under the Linux Foundation. CASE is an ontology-based specification that supports automated combination and intelligent analysis of cyber-investigation information. CASE concentrates on advancing interoperability and analytics across a broad range of cyber-investigation domains, including digital forensics and incident response (DFIR).
Organizations involved in joint operations or intrusion investigations can efficiently and consistently exchange information in standard format with CASE, breaking down data silos and increasing visibility across all information sources. Tools that support CASE facilitate correlation of differing data sources and exploration of investigative questions, giving analysts a more comprehensive and cohesive view of available information, opening new opportunities for searching, pivoting, contextual analysis, pattern recognition, machine learning and visualization.
Our recently published Open Source Jobs Report examined the demand for open source talent and trends among open source professionals. What did we find?
Open source fuels the world’s innovation, yet building impactful, innovative, high-quality, and secure software at scale can be challenging when meeting the growing requirements of open source communities. Over the past two decades, we have learned that ecosystem building is complex. A solution was needed to help communities manage themselves with the proper toolsets in key functional domains.
From infrastructure to legal and compliance, from code security to marketing, our experience in project governance among communities within the Linux Foundation has accumulated years of expertise and proven best practices. As a result, we have spent the year productizing the LFX Platform, a suite of tools engineered to sustain and grow the communities of today and build the communities of tomorrow.
Elastic (NYSE: ESTC) (“Elastic”), the company behind Elasticsearch and the Elastic Stack, announced new integrations and enhancements across the Elastic Security solution in its 7.16 release, enabling users to accelerate detection and response, increase real-time visibility into their data, protect endpoints against advanced attacks, and streamline workflows.
Security updates have been issued by Fedora (firefox, libopenmpt, matrix-synapse, vim, and xen), Mageia (gmp, heimdal, libsndfile, nginx/vsftpd, openjdk, sharpziplib/mono-tools, and vim), Red Hat (java-1.8.0-ibm), Scientific Linux (firefox), SUSE (kernel-rt), and Ubuntu (bluez).
Google took steps to shut down the Glupteba botnet, at least for now. (The botnet uses the bitcoin blockchain as a backup command-and-control mechanism, making it hard to get rid of it permanently.) So Google is also suing the botnet’s operators.
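Glupteba’s blockchain fallback reportedly works by hiding its backup command-and-control domains in the OP_RETURN data field of bitcoin transactions made from wallets the operators control; since the blockchain can’t be taken down, the bots can always recover a fresh domain. Below is a minimal, stdlib-only sketch of that encoding/decoding round trip. It is illustrative, not the malware’s actual format: the real payload is reportedly encrypted (AES), and the domain name here is made up.

```python
# Sketch of stashing a fallback C2 domain in a bitcoin OP_RETURN script.
# Real-world caveats: Glupteba reportedly encrypts the payload before
# embedding it, and the domain below is hypothetical.
from binascii import hexlify, unhexlify
from typing import Optional

OP_RETURN = 0x6a  # opcode marking an unspendable, data-carrying output

def embed_domain(domain: str) -> str:
    """Build a hex-encoded OP_RETURN script carrying the domain."""
    data = domain.encode()
    # For payloads up to 75 bytes, the push opcode is simply the length byte.
    script = bytes([OP_RETURN, len(data)]) + data
    return hexlify(script).decode()

def extract_domain(script_hex: str) -> Optional[str]:
    """Recover the domain from an OP_RETURN script, or None if absent."""
    script = unhexlify(script_hex)
    if len(script) < 2 or script[0] != OP_RETURN:
        return None
    length = script[1]
    return script[2:2 + length].decode()

script_hex = embed_domain("example-c2.invalid")
print(extract_domain(script_hex))  # -> example-c2.invalid
```

The takedown difficulty follows directly from this design: scanning recent transactions from a known wallet and decoding OP_RETURN outputs gives the bot a new rendezvous point even after the primary C2 servers are seized.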
CISA has released Capacity Enhancement Guide (CEG): Social Media Account Protection, which details ways to protect the security of organization-run social media accounts. Malicious cyber actors that successfully compromise social media accounts—including accounts used by federal agencies—could spread false or sensitive information to a wide audience. The measures described in the CEG aim to reduce the risk of unauthorized access on platforms such as Twitter, Facebook, and Instagram.
Cisco has released a security advisory to address Cisco products affected by multiple vulnerabilities in Apache HTTP Server 2.4.48 and earlier releases. An unauthenticated remote attacker could exploit these vulnerabilities to take control of an affected system.
In keeping with our commitment to the security and privacy of individuals on the internet, Mozilla is increasing our oversight and adding automation to our compliance-checking of publicly trusted intermediate CA certificates (“intermediate certificates”). This improvement in automation is important because intermediate certificates play a critical part in the web PKI (Public-Key Infrastructure). Intermediate CA keys directly sign server certificates, and we currently recognize nearly 3,000 intermediate certificates, which chain up to approximately 150 root CA certificates embedded as trust anchors in NSS and Firefox. More specifically, we are updating the Mozilla Root Store Policy (MRSP) and associated guidance, improving the public review of third-party intermediate certificates on the Mozilla dev-security-policy list, and enhancing automation in the Common CA Database (CCADB).
[...]
With the CCADB, Mozilla has provided a variety of tools to examine the status of intermediate certificates where none existed before. These include improvements that allow us to automatically process CA audit reports using Audit Letter Validation (ALV), advise CAs on the status of their intermediate certificates, and provide CAs and root store operators with lists of tasks relevant to intermediate certificates listed in the CCADB.
The UK’s outgoing information commissioner Elizabeth Denham is set to join global law firm Baker McKenzie, which previously defended Facebook against privacy enforcement by her office.
Denham joined the Information Commissioner’s Office (ICO) in 2016, where she oversaw the introduction of the General Data Protection Regulation (GDPR) in May 2018, and is due to be replaced by current New Zealand privacy commissioner John Edwards.
The European Union is in the process of adopting its Digital Service Act (DSA), a law that will govern how content can be shared and viewed online. But this landmark legislation won’t help secure our rights without strong enforcement.
The enforcement mechanism of the DSA hasn’t received the same spotlight as other “hot topics” throughout the ongoing legislative negotiations, but it is an incredibly important one. Without effective and properly functioning enforcement, the future “content moderation rulebook” that should revolutionise the platform governance model will remain an empty shell.
This is not the first time that the European Union has set itself up to be the forerunner in internet regulation. Back in 2018, the internationally acclaimed General Data Protection Regulation (GDPR) was labelled as the new world standard for privacy and data protection. While the GDPR is a legislative success, it has been an enforcement failure. The blame is often placed at the feet of insufficiently funded and understaffed Data Protection Authorities (DPAs), whose slow action has left a huge number of complaints — from both individuals and NGOs — unaddressed. But, in reality, this is just one part of a much more complicated story.
So what can the EU learn from its experience with the GDPR to ensure a strong enforcement of the DSA?
In November 2021, Amnesty International, along with the Internet Freedom Foundation and Article 19, drew attention to Hyderabad, a city in the Indian state of Telangana, which has established a ‘Command and Control Centre’: a $107 million project meant to support the processing of over six hundred thousand surveillance cameras in Hyderabad at once. Combined with the Hyderabad police’s existing facial recognition software for identifying individuals, this will enable the police to track individuals across the city in real time.
[...]
Currently, the Indian government is deliberating the third draft of a proposed Personal Data Protection Bill. The law comes with some important milestones regarding regulating cross-border data flows and prior consent before use of personal data, although experts concur that some of the most worrying aspects of the proposed bill are unchecked biases, overbroad authority of the government to bypass the law, and built-in obstacles to informing data subjects if there has been a breach of their personal data.
The escalating use of FRT in India despite the absence of related legislation can be in part understood by examining how FRT has been used in the past, since this may help predict whether the future use of a large urban CCTV network along with FRT is likely to result in any effective oversight.
In October 2021, responding to questions about footage showing Hyderabad police forcing civilians walking on the street to remove their masks, photographing them, and in some cases also demanding their fingerprints, the local police stations stated that this was part of ‘the patrolling cops official duty’ and that the police were scanning ‘suspicious persons only’.
On December 6, 2021, a refugee who fled Myanmar when she was sixteen filed a class action lawsuit against Facebook in California’s Superior Court for alleged incitement to violence and facilitation of genocide in Myanmar (formerly Burma). The suit was on behalf of herself and all Rohingya who fled Myanmar on or after June 1, 2012, and who now reside in the USA as refugees or asylum seekers. A similar coordinated action is due in the United Kingdom representing Rohingya refugees in the UK and Bangladesh, and a letter of notice to this effect was submitted to Facebook’s London office on the same day. The case comes two years after Facebook, in a statement, officially admitted that it hadn’t done enough to prevent its platform from “being used to foment division and incite offline violence in Myanmar.”
The European Union continues to advance its digital strategy with open source software as one of its fundamental pillars. This time it was the European Commission, the EU’s executive body, that announced news concerning the distribution of software developed to meet the organization’s internal needs.
According to the published information, the European Commission has approved a new regulation that favors free access to the software it produces whenever there are potential benefits for ‘citizens, businesses or other public services’, a criterion that in practice may well encompass everything developed under its roof.
This new provision is supported in turn by a recent study also carried out by the Commission on the impact of open source software in areas such as technological independence, competitiveness and innovation in the economy of the European Union. The objective is to find solid evidence with which to shape European open source policies for the next few years.
In economic terms, in fact, the calculations are decidedly optimistic and point to a strong economic impact of billions of euros per year (by way of example, an estimated 65 to 95 billion euros in 2018 alone), and suggest that even a modest increase in investment could add around 100 billion euros to the EU’s GDP.
OSI welcomes the Decision of the European Commission on the open source licensing and reuse of Commission software. The December 8 Decision means that Commission services may choose to make Commission software available under open source licenses, something OSI has long advocated and which opens great opportunities both for individuals and companies.
OSI encourages every part of the Commission to make the most of this new Decision, both for economic and civil reasons. A recent report for the Commission by Open Forum Europe (an OSI Affiliate) estimates that “open source software contributes between €65 to €95 billion to the European Union’s GDP” and observed that “if open source contributions increased by 10% in the EU, they would generate an additional 0.4% to 0.6% (around €100 billion) to the bloc’s GDP.” But as observed in the 2018 UNESCO “Paris Call” report, a document that OSI contributed to, software is also an essential element of our cultural heritage and legislators need to “create an enabling legal, policy and institutional environment where software source code can flourish as an integral part of knowledge societies” and especially leverage open source licensing to “enable effective independent auditing of software source code used to make decisions that may affect fundamental rights of human beings”.
This actually makes the case against patents: they are what is making the corona pandemic worse. People are now effectively dying because the US government is not compelling Pfizer and BioNTech to make the COVID-19 vaccine “open source” or “public domain”, that is, open knowledge, as Tesla does with its patents.
One of the hottest topics in the world of scientific publishing over the last couple of decades has been the growing pressure to release the fruits of public-funded scientific research from the paywalled clutches of commercial publishers. This week comes news of a new front in this ongoing battle: a group of Indian researchers, with the help of the Indian Internet Freedom Foundation, have filed an intervention application in a case in which the publishers Elsevier, Wiley, and the American Chemical Society have brought a copyright infringement suit in the Delhi High Court against the LibGen and Sci-Hub shadow-library websites.
The researchers all come from the field of social sciences, and they hope to halt moves to block the websites by demonstrating their importance to research in India in light of pricing that is unsustainable for Indian researchers. Furthermore, they intend to demonstrate a right of access for researchers and teachers under Indian law, thus undermining the legal standing of the original claim.