We cover events and user groups that are running in Denmark. This article forms part of our Linux Around The World series.
Contributing to Open Source Beyond Software Development, bringing TLS 1.3 to the Internet of Old Things, How efficient can cat(1) be, boost the speed of Unix shell programs, Running FreeBSD VNET Jails on AWS EC2 with Bastille, and more
"Retbleed" is the name given to a class of speculative-execution vulnerabilities involving return instructions. Mitigations for Retbleed have found their way into the mainline kernel but, as of this writing, some remaining problems have kept them from the stable update releases. Mitigating Retbleed can impede performance severely, especially on some Intel processors. Thomas Gleixner and Peter Zijlstra think they have found a better way that bypasses the existing mitigations and misleads the processor's speculative-execution mechanisms instead.
If a CPU is to speculate past a return instruction, it must have some idea of where the code will return to. In recent Intel processors, there is a special hidden data structure called the "return stack buffer" (RSB) that caches return addresses for speculation. The RSB can hold 16 entries, so it must drop the oldest entries if a call chain goes deeper than that. As that deep call chain returns, the RSB can underflow. One might think that speculation would just stop at that point but, instead, the CPU resorts to other heuristics, including predicting from the branch history buffer. Alas, techniques for mistraining the branch history buffer are well understood at this point.
As a result, long call chains in the kernel are susceptible to speculative-execution attacks. On Intel processors starting with the Skylake generation, the only way to prevent such attacks is to turn on the indirect branch restricted speculation (IBRS) CPU "feature", which was added by Intel early in the Spectre era. IBRS works, but it has the unwelcome side effect of reducing performance by as much as 30%. For some reason, users lack enthusiasm for this solution.
A 64-bit pointer can address a lot of memory — far more than just about any application could ever need. As a result, there are bits within that pointer that are not really needed to address memory, and which might be put to other needs. Storing a few bits of metadata within a pointer is a common enough use case that multiple architectures are adding support for it at the hardware level. Intel is no exception; support for its "Linear Address Masking" (LAM) feature has been slowly making its way toward the mainline kernel.
CPUs can support this metadata by simply masking off the relevant bits before dereferencing a pointer. Naturally, every CPU vendor has managed to support this feature differently. Arm's top-byte ignore feature allows the most-significant byte of the address to be used for non-pointing purposes; it has been supported by the Linux kernel since 5.4 came out in 2019. AMD's "upper address ignore" feature, instead, only allows the seven topmost bits to be used in this way; support for this feature was proposed earlier this year but has not yet been accepted.
One of the roadblocks in the AMD case is that this feature would allow the creation of valid user-space pointers that have the most-significant bit set. In current kernels, only kernel-space addresses have that bit set, and an unknown amount of low-level code depends on that distinction. The consequences of confusing user-space and kernel-space addresses could be severe and contribute to the ongoing CVE-number shortage, so developers are nervous about any feature that could cause such confusion to happen. Quite a bit of code would likely have to be audited to create any level of confidence that allowing user-space addresses with that bit set would not open up a whole set of security holes.
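Conceptually, the masking these features perform is simple arithmetic. Here is an illustrative sketch in Python of an Arm-style top-byte-ignore scheme (an illustration only; the kernel work itself is in C, and real hardware layouts such as LAM's differ in which bits are maskable):

```python
# Illustrative sketch (not kernel code): storing metadata in the top
# byte of a 64-bit pointer, Arm top-byte-ignore style. The hardware
# masks the tag off before dereferencing; here we do it by hand.
TAG_SHIFT = 56
ADDR_MASK = (1 << TAG_SHIFT) - 1            # low 56 bits: the real address

def tag_pointer(ptr: int, tag: int) -> int:
    """Store an 8-bit tag in the pointer's most-significant byte."""
    assert 0 <= tag < 256
    return (tag << TAG_SHIFT) | (ptr & ADDR_MASK)

def untag_pointer(tagged: int) -> int:
    """What the CPU effectively does before dereferencing."""
    return tagged & ADDR_MASK

p = 0x00007f3a_12345678                      # a plausible user-space address
tp = tag_pointer(p, 0xA5)
assert untag_pointer(tp) == p
print(hex(tp))                               # 0xa5007f3a12345678
```

Note how the tagged value has its most-significant bit set; that is exactly the property that worries kernel developers in the AMD case above.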
A text-to-speech (TTS) system is one that can seamlessly convert an input text file into an output audio file with reasonable clarity. Such a solution makes it possible for users to engage with a computerized environment without having to manually read through a text file or documentation.
For instance, a text-to-speech tool is a priceless solution for users with reading or vision difficulties, making it a perfect inclusion in an e-learning project. It is also an alternative to hiring a voice-over artist, since it saves on hiring costs.
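As a minimal illustration of the text-file-in, audio-file-out idea, here is a sketch in Python assuming the third-party pyttsx3 package (an assumption; any TTS engine would do, and the file names are placeholders):

```python
# Minimal TTS sketch, assuming the third-party pyttsx3 package
# (pip install pyttsx3): read a text file, write an audio file.
import pyttsx3

engine = pyttsx3.init()                  # pick the platform's TTS engine

with open("lesson.txt") as f:            # hypothetical input file
    text = f.read()

# Output format depends on the platform backend (e.g. WAV/AIFF).
engine.save_to_file(text, "lesson.wav")  # queue file output
engine.runAndWait()                      # block until synthesis finishes
```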
Docker has transformed the way many people develop and deploy software. It wasn't the first implementation of containers on Linux, but Docker's ideas about how containers should be structured and managed were different from its predecessors. Those ideas matured into industry standards, and an ecosystem of software has grown around them. Docker continues to be a major player in the ecosystem, but it is no longer the only whale in the sea — Red Hat has also done a lot of work on container tools, and alternative implementations are now available for many of Docker's offerings.
Continuing on from my previous article, today I am going through the process of configuring our OpenBSD httpd instance to handle dynamic content such as PHP scripts. The way httpd accomplishes this is by using the FastCGI protocol. I’ve talked about FastCGI, as well as using it with Nginx, in this article, so you might want to check that out first.
Counter-Strike 1.6 is a first-person shooter game developed by Valve. Like other versions of Counter-Strike, it allows us to host game servers of our own, letting us modify the server according to our needs and giving us full control over the dedicated server. We can apply custom modifications, custom plugins, custom models, and so on, which can give users a newer and better experience of the server. We can also install custom modes such as 5v5 Automix, Zombie Mode, Deathrun, and Deathmatch.
Systemd famously has socket units, which cause systemd itself to listen on something and start a systemd unit when there's traffic. In other words, systemd can act like (x)inetd. When you do this for a stream-based service (such as a TCP-based one), there are two options for what systemd should do when there's a connection, controlled by the Accept= setting.
The default and normal behavior (in systemd) is 'Accept=no', which causes systemd to start the service unit associated with the socket and pass it the listening socket (or sockets) to interact with. If you specify 'Accept=yes', you get xinetd-like behavior where a new instance of the service is spawned for each new connection, and the instance is passed only the connected socket. As the documentation covers, Accept=yes requires that you have a template .service unit; if you have 'oidentd.socket' set with Accept=yes, you have to have an 'oidentd@.service' unit. This service unit will be instantiated with a specific instance name.
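For illustration, here is what a per-connection instance might look like in Python. This is a sketch assuming the sd_listen_fds(3) fd-passing convention, under which systemd hands the service its socket as file descriptor 3; with Accept=yes that fd is the already-connected socket:

```python
#!/usr/bin/env python3
# Sketch of a per-connection service for a socket unit with Accept=yes.
# systemd passes the *connected* socket as fd 3 (SD_LISTEN_FDS_START),
# spawning one instance of the template unit per connection.
import os
import socket

SD_LISTEN_FDS_START = 3

# Sanity-check the systemd handoff environment.
assert os.environ.get("LISTEN_PID") == str(os.getpid())
assert int(os.environ.get("LISTEN_FDS", "0")) == 1

# Adopt the already-accepted connection; family/type are read from the fd.
conn = socket.socket(fileno=SD_LISTEN_FDS_START)
conn.sendall(b"hello from a systemd-spawned instance\r\n")
conn.close()
```

With Accept=no, the service would instead receive the listening socket on fd 3 and be expected to call accept() itself.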
This article illustrates how, in about 15 minutes, you can install and configure the Amanda backup server.
Linux Commands in 60 Seconds is a YouTube shorts series that teaches you simple examples of common Linux commands. In this video, quick examples of the head and tail commands are shown.
SASL (Simple Authentication and Security Layer) is a framework for adding and implementing authentication and authorization support in network and communication protocols. The SASL design and architecture permit negotiation among various authentication mechanisms.
Notably, you can use SASL alongside other protocols such as HTTP, SMTP, IMAP, LDAP, XMPP, and BEEP. This framework features a range of commands, callback procedures, options, and mechanisms.
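As a concrete taste of one such mechanism, SASL's PLAIN mechanism (RFC 4616) sends an authorization identity, an authentication identity, and a password separated by NUL bytes, base64-encoded for the wire. A small sketch in Python (credentials are placeholders):

```python
# Sketch: building the initial response for SASL's PLAIN mechanism
# (RFC 4616), as used in e.g. SMTP's "AUTH PLAIN".
import base64

def sasl_plain(authcid: str, passwd: str, authzid: str = "") -> str:
    """Join authzid, authcid, and passwd with NUL bytes, then base64."""
    raw = ("%s\0%s\0%s" % (authzid, authcid, passwd)).encode("utf-8")
    return base64.b64encode(raw).decode("ascii")

print(sasl_plain("alice", "hunter2"))  # AGFsaWNlAGh1bnRlcjI=
```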
While learning to use the command terminal can make your life easier, managing all your commands and shell scripts in a single window can prove to be a hassle. Although Linux distributions allow you to open more than one terminal window on your system, they don’t provide you with additional features and customizability options.
This is where tmux comes in. tmux is a multiplexer for your terminal. It allows you to run and manage multiple terminal sessions on your device. tmux comes with a lot of shortcuts and features that make it one of the best alternatives to the default terminal on your Linux distributions.
Fast, convenient, and versatile, the Linux command terminal sets itself apart from those of other operating systems. The command terminal accepts lines of text and processes them into instructions for your computer. Simply put, it allows users to execute a complex set of instructions in just a few lines.
This is one of the many ways the command terminal can make time-consuming and tiresome tasks easy. That being said, there may be times when a single terminal screen is not enough for your tasks. Worry not, as we’ve got you covered.
Introducing tmux, a tool developed by Nicholas Marriott in 2007 that enables you to open and manage multiple command terminal sessions in a single instance, and to create, manage, and navigate through multiple terminal windows simultaneously.
One of the most prominent features of tmux is the customizability it offers. tmux allows you to change the themes to ensure that you’re working in an environment that fits your preference. This guide will help you learn how you can change your theme in tmux. Let’s take a look at the steps.
The command terminal is what gives Linux distributions a competitive edge over other operating systems. The ability to execute processes that require complex instructions with just a few commands gives Linux distributions an overwhelming advantage over their GUI-based competitors. Nevertheless, managing all your work in a single terminal window can be challenging. While most Linux distributions allow you to open multiple terminal windows, they don’t provide methods for managing and exchanging information between them. This is where tmux comes in.
tmux allows you to run and manage multiple instances of the terminal shell, either as multiple windows or panes in a single window.
While tmux works by creating a new session, there are ways to link it to a previously running session. This guide will help you learn how you can attach tmux to an existing session.
We’ll go over the basics of a tmux session, how to initialize it, and how you can attach your newly opened tmux window to a previously existing one.
Information lookup services and authentication protocols rely on secure passwords to remain credible—the Network Information Service is no exception. You can set up these passwords during configuration or when adding users. However, you can still change the user passwords from time to time.
Interestingly, users can effectively change their NIS passwords using various methods. But irrespective of your chosen method, you must use the NIS yppasswd command.
This article will take you through the various ways to change your NIS passwords. Notably, it will focus on how you can do this using yppasswd and its daemon.
NIS, an abbreviation for Network Information Service, is a distributed database that helps you maintain configuration files consistently across your network. It provides a server-client indexing service that stores and circulates server configuration information. Notably, it helps to manage host and client names between machines in a PC network environment.
Given that introduction, it is fair to conclude that NIS provides management and lookup services for the users within a network. But this is only possible once you add the user credentials to your database.
This article will provide a step-by-step guide on adding users to your NIS system. It will also discuss how you can check the users within your system or find a specific user within the network.
cpufetch is a command-line application that helps Linux users find their system’s CPU information, such as the processor name, technology, microarchitecture, core count, features, and performance. It’s helpful for Raspberry Pi users who don’t have much information about their CPU. The command runs on several Linux operating systems, such as Ubuntu, Raspberry Pi OS, and more.
This article will show you how to install cpufetch on Raspberry Pi to get the CPU information on your terminal window.
Anyone who has used a Linux system understands that the terminal is at the heart of the ecosystem. You may use it to manage your whole system, explore the filesystem, monitor your network, and create text files, among other things. So, in essence, you may do everything you want from the terminal. Switching between apps during crunch periods might have a negative impact on productivity. When dealing with text or configuration files, staying inside the terminal is your best bet.
Vim and Emacs are two of Ubuntu’s most capable command-line editors, but they have a high learning curve, and looking through the instructional materials might be overwhelming for new users. For such users there is nano, a simple command-line editor for Linux. So, how can you get the nano text editor? Let’s look at how to install it on Ubuntu 22.04.
Have you heard about the new and trending JavaScript library called React.js? It’s so popular that developers use it extensively to create interactive user interfaces and components in their applications. The React library is also a great choice when you want to build fast and scalable apps with a component-based approach. If you haven’t started using it yet, it’s time to start learning. In this blog post, we will show you how to set up a React development environment on your Mac computer. Read on!
While GUADEC, the GNOME community's annual conference, has always been held in Europe (or online-only) since it began in 2000, this year's edition was held in North America, specifically in Guadalajara, Mexico, July 20-25. Rob McQueen gave a talk on the first day of the conference about providing solutions that bring some level of digital safety and autonomy to users—and how GNOME can help make that happen. McQueen is the CEO of the Endless OS Foundation, which is an organization geared toward those goals; he was also recently reelected as the president of the GNOME Foundation board of directors.
His talk was meant to introduce and describe an objective that the GNOME board has been discussing and working on regarding the state of the internet today and how GNOME can make that experience better for its users. The cloud-focused computing environment that is prevalent today has a number of problems that could be addressed by such an effort. That topic is related to what he does for work, as well, since Endless OS is working on "bridging the digital divide" by helping those who are not able to access all of the information that is available on today's internet. Some of those efforts are aimed at bringing that data to those who cannot, or perhaps choose not to, directly connect to the internet itself—or only do so sporadically.
At GUADEC 2022 in Guadalajara I gave a talk, Paying technical debt in our accessibility infrastructure. This is a transcript for that talk.
Slax is one of the most interesting lightweight Linux distributions.
It was also a suitable option for 32-bit systems, considering it is based on Slackware. If you are curious, Slackware is the oldest active Linux distribution; it witnessed a major upgrade after six years with Slackware 15.
Slax also offered an alternative edition based on Debian, which is being actively maintained. Unfortunately, as mentioned by the creator in the blog post, the Slackware-based version (Slax 14) did not see an update for a long time (9 years).
Linux Mint 21, codenamed “Vanessa”, is now available for download. It was officially released on 31st July 2022 and is based on Ubuntu 22.04 LTS, codenamed “Jammy Jellyfish”.
Linux Mint 21 comes in three distinct editions: the flagship Cinnamon edition (which uses the Cinnamon desktop environment by default), Xfce edition (ships with Xfce desktop environment), and MATE edition (ships with MATE desktop).
The latest edition of Mint is an LTS (Long Term Support) release and will be supported until April 2027. It ships with updated software and several new features and improvements.
SwissArmyPi is a free, open-source project (MIT licensed) that transforms your Raspberry Pi, even the Zero edition, into a formidable hacking and pentesting tool.
The Raspberry Pi is a popular device for all sorts of projects, and it’s extremely useful for getting things done in minutes. It can replace your old desktop setup and is relatively cheaper than other desktop options. You can install an operating system on it with ease and perform your tasks at a quick pace. However, since it’s a machine, you can expect a failure at some stage, which may leave you with no other choice except to buy a new device.
If you are a Raspberry Pi user and are curious about the lifespan of the device, read this article, as we will provide you with information about the device’s lifespan and the ways you can increase it.
Laravel validation is a way to enforce rules on incoming data; we can check the file type, file size, and so on. File validation is typically used to prevent unwanted file uploads to a server or application.
Today, we will learn about file uploading and storage in Laravel 9.
The log is an integral part of a Laravel 9 application; basically, it is used for monitoring application activity. Laravel 9 has robust log services for writing log messages to a file. Today, we will demonstrate Laravel 9 logging. This tutorial helps you understand your application's status and what’s going on with it. If there is an error in your software, you will see the system error message in your log file. Laravel 9’s default logging is based on channels.
Laravel 9 has an excellent feature named Eloquent. It is an ORM (object-relational mapper) that makes it easy for an application to communicate with its database. In Laravel 9, Eloquent works through a “Model” that communicates with the database and helps you get data from its tables.
Last week I participated in the annual RStudio conference, which took place in Washington, DC.
I could feel the worry reading this. Maintainers of all stripes have to put up with angry people who demand they fix and update software, often with no financial or development assistance.
Jest is the most used JavaScript testing framework. In this post, you will learn how to use Jest toHaveBeenCalledWith for testing various scenarios like a partial array, partial object, multiple calls, etc. Let’s get started!
Fred Brooks observed in *The Mythical Man-Month* that adding more programmers to a project often slowed it down.
[...]
Graham was observing the early effects of SaaS and web programming. There was no longer any need to port applications to different operating systems or to cut physical releases (floppies, CDs, or software appliances). SaaS removed the dependency hell companies often found themselves in: old versions that customers refused to upgrade from but that still needed to be maintained (often with backward compatibility). The downside, he said, was that you still needed to manage servers and infrastructure. A single bug could crash all users. Hardware disks could become corrupted.
The history of modularization in JavaScript is a tedious one. ES Modules ("import") were introduced in 2015 and now seem to have broad support across different environments. But the precursor to ES Modules, CommonJS ("require"), is still widespread enough to require backward compatibility. And neither module system has an opinionated take on the actual package management (e.g., yarn and npm).
It seems that new languages are starting to converge on a first-class package manager as part of the spec. Rust has the Cargo build system with its crates and an explicit module system. Go converged on first-class Go modules. Both systems can be difficult for beginners to understand.
Jest has been the tool of choice for writing tests in JavaScript for years now. This guide will teach you how to run a single test using Jest. Let’s get going!
This is part 1 of a three-part series on interesting abstractions for zero-copy deserialization I’ve been working on over the last year. This part is about making zero-copy deserialization more pleasant to work with. Part 2 is about making it work for more types and can be found here; while Part 3 is about eliminating the deserialization step entirely and can be found here. The posts can be read in any order, though this post contains an explanation of what zero-copy deserialization is.
For the past year and a half I’ve been working full time on ICU4X, a new internationalization library in Rust being built under the Unicode Consortium as a collaboration between various companies.
There’s a lot I can say about ICU4X, but to focus on one core value proposition: we want it to be modular both in data and code. We want ICU4X to be usable on embedded platforms, where memory is at a premium. We want applications constrained by download size to be able to support all languages rather than pick a couple popular ones because they cannot afford to bundle in all that data. As a part of this, we want loading data to be fast and pluggable. Users should be able to design their own data loading strategies for their individual use cases.
See, a key part of performing correct internationalization is the data. Different locales do things differently, and all of the information on this needs to go somewhere, preferably not code. You need data on how a particular locale formats dates, or how plurals work in a particular language, or how to accurately segment languages like Thai, which are typically not written with spaces, so that you can insert line breaks in appropriate positions.
Given the focus on data, a very attractive option for us is zero-copy deserialization. In the process of trying to do zero-copy deserialization well, we’ve built some cool new libraries; this article is about one of them.
As mentioned in the previous posts, internationalization libraries like ICU4X need to be able to load and manage a lot of internationalization data. ICU4X in particular wants this part of the process to be as flexible and efficient as possible. The focus on efficiency is why we use zero-copy deserialization for basically everything, whereas the focus on flexibility has led to a robust and pluggable data loading infrastructure that allows you to mix and match data sources.
Deserialization is a great way to load data since it’s in and of itself quite flexible! You can put your data in a neat little package and load it off the filesystem! Or send it over the network! It’s even better when you have efficient techniques like zero-copy deserialization because the cost is low.
But the thing is, there is still a cost. Even with zero-copy deserialization, you have to validate the data you receive. It’s often a cost folks are happy to pay, but that’s not always the case.
For example, you might be, say, a web browser interested in using ICU4X, and you really care about startup times. Browsers typically need to set up a lot of stuff when being started up (and when opening a new tab!), and every millisecond counts when it comes to giving the user a smooth experience. Browsers also typically ship with most of the internationalization data they need already. Spending precious time deserializing data that you shipped with is suboptimal.
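The series itself is about Rust, but the core idea is language-agnostic. As a rough sketch in Python (an illustration, not ICU4X code): a parser can hand back a view that borrows from the input buffer instead of copying it, while the validation step remains as the residual cost described above.

```python
import struct

def parse_record_zero_copy(buf: memoryview) -> memoryview:
    """Parse a length-prefixed record without copying the payload.

    The returned memoryview borrows from `buf`; no payload bytes are
    copied. The length checks are the validation cost that remains
    even in a zero-copy design.
    """
    if len(buf) < 4:
        raise ValueError("truncated header")
    (length,) = struct.unpack_from("<I", buf)   # 4-byte little-endian length
    if len(buf) < 4 + length:
        raise ValueError("truncated payload")   # validation cost
    return buf[4:4 + length]                    # borrowed slice, no copy

data = memoryview(b"\x05\x00\x00\x00hello")
payload = parse_record_zero_copy(data)
print(bytes(payload))  # b'hello'; a copy happens only here, on demand
```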
Although this module is always available, not all functions are available on all platforms. Most of the functions defined in this module call platform C library functions with the same name. It may be helpful to consult the platform documentation because these functions’ semantics vary among platforms.
The Python time module represents time in code as objects, numbers, and strings. It also provides functionality beyond representing time, like waiting during code execution and measuring the efficiency of your code.
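A quick sketch of those three uses of the module (representing, waiting, and measuring):

```python
import time

# Represent the current time as a number, a struct, and a string.
now = time.time()                        # seconds since the epoch (float)
local = time.localtime(now)              # time.struct_time object
stamp = time.strftime("%Y-%m-%d %H:%M:%S", local)  # formatted string
print(stamp)

# Wait during execution and measure elapsed time.
start = time.perf_counter()              # high-resolution timer
time.sleep(1.5)                          # pause for 1.5 seconds
elapsed = time.perf_counter() - start
print(f"slept for {elapsed:.3f} s")
```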
Apache Kafka is an open source project that supports industrial-strength data streaming in real time. Kafka can process more than one hundred thousand messages per second. Some companies report the ability to process millions of messages per second.
Kafka is well suited for applications that coordinate rideshare activity, stream videos, and provide real-time fintech. If your application requires a continuous stream of time-sensitive data, Kafka will meet your needs and then some.
As with any complex technology, there is a learning curve: Kafka requires general knowledge of building containers and of Kafka configuration. In addition, it is essential to learn the specifics of your programming language, because you can use Kafka with a variety of languages.
The focus of this article is on building a Java client that can produce and consume data to and from an OpenShift Kafka stream.
This article is the third in a series that delves into the uses of Kafka. A developer's guide to using Kafka with Java, Part 1 covered basic Kafka concepts such as installing Kafka and using the command-line client. The second article, How to create Kafka consumers and producers in Java, describes how to create Java code that interacts directly with a Kafka broker hosted on your local machine.
Now let's go a bit deeper into programming Kafka at the enterprise level. We will adapt the code created in the second article to work with Red Hat OpenShift Streams for Apache Kafka. This technology enables enterprises to work productively with Kafka by avoiding the tedious, detailed labor that goes into supporting a Kafka broker running at web scale.
First, I will provide an overview of OpenShift Streams for Apache Kafka. Then I will describe the mechanics of setting up a stream and demonstrate Java/Maven code that binds to a Kafka instance. This Java code includes tests that produce and consume messages to and from a stream.
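The series’ code is Java/Maven; as a compact illustration of the same produce-and-consume flow, here is a rough Python sketch using the confluent-kafka package (an assumption; the articles themselves use Java). Broker address, topic, and group names are placeholders, and a real OpenShift Streams instance would additionally need SASL credentials in the config:

```python
# Rough analogue in Python of the article's Java produce/consume flow,
# assuming the confluent-kafka package (pip install confluent-kafka).
from confluent_kafka import Producer, Consumer

conf = {"bootstrap.servers": "my-kafka-bootstrap:9092"}  # placeholder

# Produce one message to the stream.
producer = Producer(conf)
producer.produce("test-topic", value=b"hello from a Kafka client")
producer.flush()  # block until delivery completes

# Consume it back.
consumer = Consumer({**conf,
                     "group.id": "demo-group",
                     "auto.offset.reset": "earliest"})
consumer.subscribe(["test-topic"])
msg = consumer.poll(timeout=10.0)  # wait up to 10 s for a record
if msg is not None and msg.error() is None:
    print(msg.value())             # b'hello from a Kafka client'
consumer.close()
```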
First, we will create a Maven project to develop the Spring application. We have already covered creating a Maven project in an earlier article; you can refer to that for an in-depth look if you are not already familiar with the process.
Let’s start by opening Eclipse and clicking on the File menu, then selecting the Maven project wizard: File->New->Maven Project
After selecting it, the wizard will ask for some details such as the project name, app name, version, and packaging type. The packaging specifies the final build bundle type of the project. If the application is a web app, it should be war (Web Archive).
In this article, we will learn to create a Spring application using the Spring Tool Suite IDE. Spring Tool Suite is an official IDE provided by Spring. You can use it to create Spring applications with minimal effort. This IDE will feel similar to your favourite IDE, whether that is Eclipse, IntelliJ IDEA, or another.
When you visit the Spring site, you will see a couple of versions of the IDE aimed at different kinds of developers. You can select and download any of them to your local machine.
The intersection of free software and trademark law has not always been smooth. Free-software licenses have little to say about trademarks but, sometimes, trademark licenses can appear to take away some of the freedoms that free-software licenses grant. The Firefox browser has often been the focal point for trademark-related controversy; happily, those problems appear to be in the past now. Instead, the increasing popularity of the Rust language is drawing attention to its trademark policies.
When a free-software project gets a trademark, it is an indication that the name of that project has come to have some sort of value, and somebody (hopefully the people in charge of the project) wants to control how it is used. They may want to prevent their project's name being used with versions that have been modified with rent-seeking or malicious features, for example. Other projects might want the exclusive right to market commercial versions of their code under the trademarked name. As a general rule, trademark licenses will restrict the changes to a project's code that can be distributed without changing the name.
As a result of those restrictions, trademark policies can appear to be a violation of free-software principles. But those restrictions apply to the trademarked name, not the code itself; any such restrictions can be avoided just by not using the name. Thus, for some time, Firefox as distributed by Debian was known as "Iceweasel" until 2016; it lacked no functionality and was entirely free software. It is worth noting that version 3 of the GNU General Public License explicitly allows the withholding of trademark rights.
The realization that first dawned on me when I signed up for Facebook in 2007 was that contrary to many previous online fora, this wasn't for making friends - it was for keeping them. I haven't used it since 2019, but I doubt it's changed much since then: you typically "friend" someone on Facebook after you've met them, not before. Quite the opposite of my early days online, spent on IRC. I wonder, how does that shape the life of someone born into our current web of Mammon, rather than stumbling, in their teens, onto a net largely constructed for fun away from profit?
My early years on IRC were before web communities as we know them, before the meme format was honed on the Something Awful forums (Stairs in my house, you ask? Yes, I was at one point in time protected.) and certainly before the time of budding megasites like MySpace. The Internet was local as much as it was global, and the idealist nature of the services offered and those running them meant that truly disorderly, decentralized communication was standard fare - not a fringe activity among a shrinking group of stubborn old-timers.
You could stumble in anywhere, say hello, and start yammering away. A surly channel operator might kick you for not understanding some particular online social code, but that was about the worst that could happen. Conversations were struck up left and right, without fear of repercussion or lasting negative consequences. A naïve sentiment, perhaps, but also true for the vast majority of all of those early online interactions.
To every idea there is a season, though some ideas seem to have purpose only under faulty assumptions. In April I decided to rethink how I went about my "informal" writing, which had previously been highly intermittent, rather formal, and interminably long. In When is a Blog a Blog? I renamed my old blog to "essays" and aimed for more frequent, less formal, shorter updates in this blog.
[...]
Suddenly my blog archive has grown from 18 entries to 74. At some point I intend to create a separate list of what I consider the posts most likely to be of longer-term interest, because there is now a definite divide between more substantial and ephemeral posts.
Protein structures from AlphaFold are already widely used by research teams around the world. They’re cited in research on things like a malaria vaccine candidate and honey bee health. “We believe that AlphaFold is the most significant contribution AI has made to advancing scientific knowledge to date,” Pushmeet Kohli, head of AI for science at DeepMind, said in a statement.
In 1970, robotics expert Masahiro Mori first described the effect of the "uncanny valley," a concept that has had a massive impact on the field of robotics. The uncanny valley (UV) effect describes the positive and negative responses that human beings exhibit when they see human-like objects, specifically robots.
The UV effect theorizes that our empathy towards a robot increases the more it looks and moves like a human. However, at some point, the robot or avatar becomes too lifelike, while still being unfamiliar. This confuses the brain's visual processing systems. As a result, our sentiment about the robot plummets deeply into negative emotional territory.
For an estimated one billion people around the world, drinking coffee is a daily ritual.
Yet what many coffee lovers might not know is that they are often drinking a brew made, at least in part, from Brazilian beans.
"Brazilian beans have popular characteristics, and are known for their body and sweetness," says Christiano Borges, boss of the country's largest grower, Ipanema Coffees.
Global shipments of tablets fell for the fourth quarter running, with the technology analyst firm Canalys reporting an 11% year-on-year fall to 34.8 million units in the second quarter of 2022.
Chromebooks followed a similar path in recording lower shipments for the fourth consecutive quarter. However, the fall for the current quarter was much bigger, at 57%, with shipments only reaching 5.1 million units as demand from the education sector continued to wane.
Apple's shipments for the quarter fell 15% year-on-year with 12.1 million iPads shipped, while Samsung shipped 7.0 million tablets, an annual decline of 13%.
A study from the National Institutes of Health describes the immune response triggered by COVID-19 infection that damages the brain’s blood vessels and may lead to short- and long-term neurological symptoms. In a study published in Brain, researchers from the National Institute of Neurological Disorders and Stroke (NINDS) examined brain changes in nine people who died suddenly after contracting the virus.
Cornell researchers have developed a wearable earphone device – or “earable” – that bounces sound off the cheeks and transforms the echoes into an avatar of a person’s entire moving face.
A team led by Cheng Zhang, assistant professor of information science, and François Guimbretière, professor of information science, both in the Cornell Ann S. Bowers College of Computing and Information Science, designed the system, named EarIO. It transmits facial movements to a smartphone in real time and is compatible with commercially available headsets for hands-free, cordless video conferencing.
The people’s resistance in Myanmar will continue regardless of the brutality shown by the junta in the latest executions of pro-democracy activists. People already risk their lives every day, especially those who continue to participate in legitimate protest. Now, the military is reportedly deploying China-made CCTV cameras with facial recognition capabilities, making it easier for the oppressive regime to locate anyone, anytime.
Aggressive surveillance is a lived reality in Myanmar. Many people are already being tracked, and then arrested or killed for resisting the regime. Telecommunications service providers have been ordered to install intercept surveillance technologies, and regulations tightening SIM card and IMEI registration will expand the junta’s power to collect personal data and track people whenever they wish. This includes an E-ID system that will collect biometric data. Myanmar CCTV cameras will only make things worse.
Reports say that Chinese firms Zhejiang Dahua Technology, Huawei Technologies Co Ltd, and Hikvision are supplying CCTV cameras to the junta. These are all companies the U.S. already added to its 2019 economic trade restriction Entity List because the Chinese government used their products extensively in Xinjiang, where there are repeated allegations of genocide and suppression of ethnic minorities. The two local companies that have won local tenders to implement the Myanmar CCTV camera project, Fisca Security & Communication and Naung Yoe Technologies, have clear links to the Myanmar military. Fisca’s Chairman is Soe Myint Tun, a retired Deputy Commissioner of the Myanmar Police Force. Naung Yoe Technologies regularly provides equipment for the military.
Governments often cite national security and public safety concerns to promote these surveillance projects. However, the risks far outweigh any claimed benefits, as the junta can exploit these technologies to further oppress the people of Myanmar.
Man-to-man marking will be paired with drone v drone security at this winter's Fifa World Cup in Qatar.
Unmanned aerial vehicles that shoot nets to bring down small "rogue" drones will help defend venues. Fortem Technologies will provide the interceptor drones, following an agreement with Qatar's interior ministry.
It says the agreement reflects growing fears about the threat potential drone attacks pose in general.
Rex Patrick is fighting for the release of documents which expose Australia’s spying on Timor-Leste to cheat the little country out of oil and gas reserves. He is in court in Melbourne this minute, but it’s a game of “We can neither confirm nor deny”. The former senator and unbowed transparency warrior kicks off his first column for Michael West Media today.
If you thought the Attorney-General dropping charges against Bernard Collaery was a sign that secret trials in Australian courts and tribunals were a thing of the past, you would be wrong.
Today, in Melbourne, a secret hearing is taking place where the government is trying to prevent the public disclosure of 22-year-old cabinet documents relating to the Howard government’s plans to defraud the newly independent and impoverished nation of Timor-Leste of their oil and gas resources.
January 1 is the biggest day of the year for the National Archives. It’s the day they unveil cabinet papers from 20 years prior. Last year, among the documents named but not released was a year 2000 cabinet submission entitled “Timor Gap Negotiations”.
Europe has extreme weather, but the Greens’ approach is radical. The connection is well within grasp, but his ideology prevents him from making it. Whether he sees it or not, this is exactly the same problem faced by his former colleagues, the ones he now scolds for costing him his seat.
Australians are getting a rude shock as they open their superannuation statements. For most, it was a rare year of losses. Only three funds ended in the black. Callum Foote and Michael West investigate super returns and the best performing fund of them all, Hostplus.
It’s an obvious thing, but one worth noting nonetheless. When interest rates rise, share markets tend to fall. And that is precisely what happened to the savings of millions of Australians in the year to June. Opening our superannuation fund letters, we discovered we are worth less than we were the year before.
A common argument conservatives often throw around is that the Founding Fathers would approve of a given behavior or political/societal practice. For example, they say the Founding Fathers would not approve of the separation of religion and state, which I previously wrote about in a different post.
Aside from how dumb such an argument is in any sense, the idea of morally justifying anything by saying someone 300 years ago would approve of it is pathetic.
Conservatives are very eager to say that their practices are humane and that what they do benefits humans, not themselves or their masters, which I believe are capitalists and money. However, their speech and behavior show otherwise.
Using the Founding Fathers argument, they should also believe in slavery or segregation. George Washington, Benjamin Franklin, Thomas Jefferson, James Madison, and Patrick Henry were all slave-owners.
Among American presidents, twelve owned slaves, and eight of them owned slaves while in office. Combined, they are believed to have owned more than 1,500 slaves; George Washington alone owned more than 600. George Washington’s slaves were not freed even as he signed the Northwest Ordinance, which banned slavery north of the Ohio River.
Jefferson fathered multiple slave children with the enslaved woman Sally Hemings, the likely half-sister of his late wife Martha Wayles Skelton.
Data shows that Rajasthan is not the best place to live in if you’re a Dalit. But what happens when a Dalit seeks justice? What are the perils and pitfalls, the long and often futile processes that he or she must face?
Very often this successfully plants the seed of a thriving interest in whatever the thing is in that person. If it's something with utility value, I'll often rely on their relative expertise in the future.
It’s a great talk and I endorse everything she says. If you have to choose between reading whatever I write here and watching the linked video, please choose the video.
The opening is dark, but it’s important to contextualize the situation in which we find ourselves. Climate change has been badly mishandled, fascism is on the rise, and the ultra-rich are becoming more powerful.
I had been a little daunted, as the person who lent me this was very enthusiastic about it. Rock isn't really my genre, and 1971 is a bit before my time. However, it was an engaging and entertaining read: part social history of UK society coming to terms with the end of the '60s, part history of the evolution of the music business (from singles to album sales), and part mini-biographies of many of the artists involved. Definitely recommend this book.
I feel that's very similar to what Rob Conley has been saying all along about how desktop publishing and liberal licenses allow the publication of material.
@lmorchard@hackers.town was talking about the difficulty of finding new friends and taking hobbies more seriously, and @craigmaloney@octodon.social was replying to say that most hobbies seem to have a half-life of 2–4 weeks for them, and I feel that so much. Painting miniatures? Making music? Painting pictures? Yesh, a bit… but not really.
Sometimes I feel that most stuff simply is boring. I am not diagnosed with anything, but I find most people boring, most hobbies boring, most work boring… Finding stuff that keeps me focused is super hard. Finding friends that have interesting things to say is hard. The question is: do I suffer from low density attention deficit, or is the world simply ruled by boring stuff: boring books, boring jobs, boring talks, boring hobbies being sold to us as interesting but … nope. It almost never is.
When my sister and I were young, we watched a television program called "Zoom", which was a remake of an earlier program that aired in the 1970s. The show was hosted by seven children and featured games, jokes, recipes, science experiments, and other content submitted by viewers. I discovered yesterday that someone had uploaded the majority of the program's early episodes to YouTube, leading to a night of nostalgia binging.
Aside from the jingle triggering strong memories from my childhood, a lyric in the show's theme song stuck out to me. The hosts invite viewers to contact the show and submit ideas, after which they say "If you like what you see, turn off the TV and do it!"
I was reading a blog post on David Graeber's book *Debt*, by Marcia B. It's a good post, and an excellent reminder that I should finish reading the damn book, and also read the new one that got published posthumously.
Betamax was superior, Minidisc bested CDs, Idiocracy is a documentary. The fact that any good idea survives is miraculous.
At home, I'm running my own router to manage the Internet connection, run DHCP, do filtering and caching, etc. I'm using an APU2 running OpenBSD; it works great so far, but I was curious to know if I could manage to run NixOS on it without having to deal with a serial console and installation.
It turned out it's possible! By configuring and creating a live NixOS USB image, one can plug the USB memory stick into the router and have an immutable NixOS.
Well... I use bash. I'm sufficiently well-versed in bash to sit through a full day of it at a Red Hat System Engineer course and not only not learn anything, but also know when the instructor is wrong.
But do I love bash?
Not really, I guess. I've used it for a very long time and know its strengths and weaknesses. I usually know how to get things done with it, and most of all when it is or isn't suitable for a specific situation.
The only problem that I have with that is that nobody sticks strictly to POSIX. Sure, they might keep the shell syntax POSIX compatible, but there are assumptions made about the base utilities present on your system.
I'll give as an example the `head` command. In some very early implementations the syntax for `head` was `head -x` where x was some number. The command evolved over time along with Unix and we now have `head -n`, but both BSD and GNU utilities still respect the old syntax.
StackSmith asked[1] people to comment on their favourite shells. My answer's a bit more wacko than most. I use and mostly love eshell, which is a pure emacs-lisp program that mimics the behaviour of more traditional shells like bash.
As a sort of conclusion: for some reason bash is one of the most, if not the most, popular interactive shell. Probably because it's the default in many Linux distros, especially the biggest ones. The default effect certainly worked with me because I'm still "stuck" with it. But bash isn't the only one out there, so if you don't like it, do yourself a favor and try a different one! Some shells I can think of: fish, zsh, ksh, ion (of RedoxOS), csh.
Previously, I wrote about Cybersecurity Sensationalism, and in a stroke of irony I came across a thread/tweet on my feed about a "massive widespread malware attack on GitHub". In the original tweet, the author claims that 35k repositories are infected with commits that contain malware. The evolution of the tweets is interesting.
In the previous article on survival analysis for customer retention, we learned how to compute a Kaplan–Meier estimation that tells us how long customers stay with us.
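For a flavor of what such an estimate looks like in code, here is a minimal sketch using Python's lifelines package (an assumption; the original series may use a different stack, and the column names below are placeholders):

```python
# Minimal Kaplan–Meier sketch, assuming the lifelines package
# (pip install lifelines). Toy data: tenure and whether churn was observed.
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.DataFrame({
    "tenure_months": [3, 12, 7, 24, 5, 18],  # how long each customer stayed
    "churned":       [1,  0, 1,  0, 1,  1],  # 1 = churn observed, 0 = censored
})

kmf = KaplanMeierFitter()
kmf.fit(durations=df["tenure_months"], event_observed=df["churned"])

print(kmf.survival_function_)     # estimated P(still a customer at time t)
print(kmf.median_survival_time_)  # median customer lifetime
```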
As brain implants become more commonplace and may eventually be used for non-medical purposes, some experts believe they must be regulated.
Regulations should be considered "a natural next step," says Rajesh P. N. Rao, a professor at the University of Washington in Seattle with a background in computer science, engineering, and computational neuroscience, who earned his Ph.D. in artificial intelligence (AI)/computer vision, and used a postdoctoral scholarship to train in neuroscience.
Eventually, there will be two-way communication between doctors and the devices, with AI as an intermediary, Rao says. "In the future, that kind of device embedded with AI can look at what's happening in other parts of the brain to treat depression or epilepsy and stopping seizures and bridging an injured area of the brain or shaping the brain to be less depressed."
Efforts are under way to further the use of these devices. For example, BrainGate is a U.S.-based multi-institutional research effort to develop and test novel neurotechnology aimed at restoring communication, mobility, and independence. It is geared at people who still have cognitive function, but have lost bodily connection due to paralysis, limb loss, or neurodegenerative disease. BrainGate's partner institutions include Brown, Emory, and Stanford universities, as well as the University of California at Davis, Massachusetts General Hospital, and the U.S. Department of Veterans Affairs.
Last night I woke up and couldn't get back to sleep.
For some reason, rather than putting on a podcast, I turned on my
transistor radio, switched through the shortwave bands, and actually
managed to tune in Radio Havana. I think the repeater is in San
Francisco, and I've heard AM stations from at least as far as
Sacramento, so it wasn't a technical marvel, but it was something I
haven't done much in the recent past. As a teenager, in the dark old
days before public access internet, I used to listen to Radio Moscow at
night. The Soviet Union seemed like such a far-off, strange world.
In any case, I ended up looking into shortwave radio a bit more this
morning (and yes, like every hobby, you could waste a LOT of money on
shortwave radios, antennas, and the like). Of course there's an
/r/shortwave on reddit, and when I was reading through one of the
threads, someone mentioned that 30 feet of wire would make a decent-ish
antenna.
Bots are absolutely crippling the Internet ecosystem.
The "future" in the film Terminator 2 is set in the 2020s. If you apply its predictions to the running of a website, it's honestly very accurate.
Modern bot traffic is virtually indistinguishable from human traffic, and can pummel any self-hosted service into the ground, flood any form with comment spam, and is a chronic headache for almost any small scale web service operator.
They're a major factor in killing off web forums, and a significant wet blanket on any sort of fun internet creativity or experimentation.
I really haven't followed much of what happens in mainstream social media land for a while now, but I heard news lately about Instagram users pushing back against a controversial redesign that more or less would have turned Instagram into a TikTok clone.
Instagram, to their credit, responded and stopped the redesign rollout (for now). But if you ask me, the writing's on the wall and has been for a couple years now. Pushing endless random content from strangers is more profitable and keeps users on the platform for longer, which you can obviously tell from the success of platforms like TikTok.
The way I consume gemini, feeds, and other internet material is sometimes all over the place. Particularly over the last year it has changed and evolved. For a while I was mainly using amfora on a couple of different computers to browse gemini. I've had the miniflux feed reader running on my server and have it connected to newsboat to browse the feeds in the terminal. Then I started using Offpunk for some of my gemini browsing and experimenting with its ability to sync all the things I follow in geminispace for offline reading. Then Offpunk added the ability to view traditional RSS feeds and HTML pages and sync those locally as well. Long story short, I'm still seeing where this leads me and how well I get along with it.
Currently I have my gemini subscriptions imported into Offpunk, so whenever I sync I get all the new gemini posts I'm following downloaded locally and added to my "tour". A tour is a feature that came from solderpunk's AV-98 client, which in turn took it from solderpunk's own VF-1 gopher client. Posts can get added to your tour either manually, by issuing the command, or automatically, if you are subscribed to a gemini feed; the tour can then be displayed back to you in consecutive order until you've read all the new posts it contained. It's a handy way to read through the new stuff all at once. I briefly tried adding all of my RSS feeds to Offpunk and subscribing to them, but quickly realized that having that many feeds, some of which have many new posts per day, all getting added to my tour along with gemini posts was just too much. The tour got too long and unwieldy, and the feed articles were interspersed with the gemini posts. I think I like keeping gemini and feeds/web separate, at least mentally, even though they are in the same application.
I have also started looking into RSS and subscribing to some favorite web blogs. I am definitely interested in expanding more as time goes on, but I cannot really find interesting reads out there.
Until now, I'd never "standardized" what the dates on my site mean, even though I've been consistent with them everywhere (not much to screw up anyway): both on the index and the posts themselves, the dates are always when I started writing the post (or created the file, at least). So there are unfinished (unstarted, even) posts that, were I to finish and publish, would get the date I first created them on the index. This isn't great. And something similarly ungreat happens with posts that I take a long time to write (actually actively writing), sometimes across a few days of a week -- the post has a certain date, but in reality it's published only some days later -- usually leading to interleaving of posts: start post A, start post B, publish post B, publish post A.
* Gemini (Primer) links can be opened using Gemini software. It's like the World Wide Web but a lot lighter.