The current and future potential for Linux-based systems is limitless. The system's flexibility allows the hardware that uses it to be endlessly updated. Functionality can therefore be maintained even as the technology around the devices changes. This flexibility also means that the function of the hardware can be modified to suit an ever-changing workplace.
For example, because the INSYS icom OS has been designed specifically for use in routers, it can be optimised to be lightweight and hardened to increase its security.
A multipurpose OS has large libraries of applications for a diverse range of purposes. These are great for designing new uses, but they can also be exploited by actors with malicious intent. Stripping these libraries down to just what is necessary through a hardening process can drastically improve security by reducing the attack surface.
Overall, Windows may have won the desktop OS battle, with only a minority of desktops running Linux. However, desktops are only a minute part of the computing world. The servers, mobile systems and embedded devices that make up the majority predominantly run Linux. Linux has gained this position by being more adaptable, lightweight and portable than its competitors.
It has Android Apps (Google Play) and Linux Apps (crostini) support and it will receive auto-updates until September 2021.
It has Android Apps (Google Play) and Linux Apps (crostini) support and it will receive auto-updates until June 2024.
Back in March, I reported on an effort that would enable resizing of the Linux partition for Crostini-supported Chromebooks. At that time, I expected the feature to land in Chrome OS 75. I’ve checked for the feature now that Chrome OS 75 is available (again) and it’s nowhere to be seen. That’s because it was recently pushed back to Chrome OS 78.
[...]
However, other aspects need to be considered: storage of large media files, for example, or enabling Google Drive synchronization with the Chrome OS Files app for offline file access. And then there are Android apps, some of which – particularly games – can require one or two gigabytes of space.
So far, I haven’t run into any storage issues on my Pixel Slate with 128 GB of data capacity. But it’s easy to see that the Linux container is using up the bulk of my tablet’s storage: As I understand it, /dev/vdb is the Crostini container with Linux, which is 88 GB in size with 58 GB free.
In what sounds surprising, a Linux kernel developer who has been working with Microsoft has revealed that Microsoft's Azure cloud platform runs more Linux-based operating system instances than Windows-based ones. The details came up on the Openwall open-source security list, in an application urging that Microsoft developers be allowed to join the list; the application argued that Microsoft plays a key role in Linux development.
SUSE CaaS Platform 4, our next major release is now in beta. It has major architectural improvements for our customers. In the process of planning and developing it, we took a close look at bootstrapping clusters and managing node membership, and we listened to our customers. One of the things we heard from many of them was that they wanted a way to deploy multiple clusters efficiently, by scripting the bootstrap process or by integrating it into other management tools they use. To address this, we committed even more strongly to our upstream participation in Kubernetes development. Instead of building SUSE-specific tools as we had in earlier versions, we contributed the efforts of SUSE engineers to the upstream kubeadm component, helping it bridge the gap between its current state and the abilities we had previously implemented in the Velum web interface. Our bootstrap and node management strategy in version 4 is built on kubeadm.
As the Kubernetes API evolves, APIs are periodically reorganized or upgraded. When APIs evolve, the old API is deprecated and eventually removed.
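As a rough illustration (a minimal sketch using the Python kubernetes client; a reachable cluster and kubeconfig are assumed, and this is not from the article itself), code written against a current API group such as apps/v1 keeps working as older groups are deprecated and eventually removed:

```python
# Minimal sketch, assuming a configured kubeconfig and the "kubernetes" Python client.
from kubernetes import client, config

config.load_kube_config()      # read credentials from ~/.kube/config
apps = client.AppsV1Api()      # the current apps/v1 API group for Deployments

# List Deployments via the stable API group rather than a deprecated one.
for d in apps.list_deployment_for_all_namespaces().items:
    print(d.metadata.namespace, d.metadata.name)
```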
We are pleased to announce that the Red Hat Learning Community has reached more than 10,000 members! Since its launch in September 2018, the community has shown itself to be a valuable hub for those seeking to share knowledge and build their open source skill set.
When we first started out, this was just an idea. We set out to support, enable, and motivate new and experienced open source learners as they learn how to work with Red Hat technologies, validate their technical skill sets, build careers and pursue Red Hat Certifications. We soft launched the community in July 2018 and invited 400 Red Hat Training instructors, students, curriculum developers and certifications team members to jump-start community discussion boards and earn a founding member badge.
In early May, right before the release of Red Hat Enterprise Linux 8.0, we saw the public beta of Oracle Linux 8; today, Oracle Linux 8.0 has been promoted to stable and production-ready.
Oracle Linux 8.0 is available today as Oracle's rebuild of Red Hat Enterprise Linux 8.0 and the features it brings, while adding some extras like the Unbreakable Enterprise Kernel option, DTrace integration and other bits.
The default kernel shipped by Oracle Linux 8.0 is a Linux 4.18-derived kernel that remains compatible with Red Hat's official RHEL8 kernel package.
Red Hat OpenShift 4.1 offers a developer preview of OpenShift Pipelines, which enable the creation of cloud-native, Kubernetes-style continuous integration and continuous delivery (CI/CD) pipelines based on the Tekton project. In a recent article on the Red Hat OpenShift blog, I provided an introduction to Tekton and pipeline concepts and described the benefits and features of OpenShift Pipelines. OpenShift Pipelines builds upon the Tekton project to enable teams to build Kubernetes-style delivery pipelines that they fully control, owning the complete lifecycle of their microservices without having to rely on central teams to maintain and manage a CI server, its plugins, and its configurations.
At OSCON, IBM unveiled a new open source platform that promises to make Kubernetes easier to manage for DevOps teams.
As a software developer, it’s often necessary to access a relational database—or any type of database, for that matter. If you’ve been held back by that situation where you need to have someone in operations provision a database for you, then this article will set you free. I’ll show you how to spin up (and wipe out) a MySQL database in seconds using Red Hat OpenShift.
Truth be told, there are several databases that can be hosted in OpenShift, including Microsoft SQL Server, Couchbase, MongoDB, and more. For this article, we’ll use MySQL. The concepts, however, will be the same for other databases. So, let’s get some knowledge and leverage it.
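Once such a database is provisioned, connecting to it from application code is the easy part. Here is a minimal Python sketch (the service hostname, credentials and the mysql-connector-python dependency are illustrative assumptions, not details from the article):

```python
# Minimal sketch, assuming a MySQL service reachable at the given host
# and the mysql-connector-python package installed.
import mysql.connector

conn = mysql.connector.connect(
    host="mysql.myproject.svc",   # hypothetical OpenShift service name
    user="appuser",
    password="secret",
    database="appdb",
)
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone()[0])
conn.close()
```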
The system administrator of yesteryear jockeyed users and wrangled servers all day, in between mornings and evenings spent running hundreds of meters of hundreds of cables. This is still true today, with the added complexity of cloud computing, containers, and virtual machines.
Looking in from the outside, it can be difficult to pinpoint what exactly a sysadmin does, because they play at least a small role in so many places. Nobody goes into a career already knowing everything they need for a job, but everyone needs a strong foundation. If you're looking to start down the path of system administration, here's what you should be concentrating on in your personal or formal training.
Debian 10, Linux Kernel 5.2, Pi 4 more Flaws, AMD News, System 76 Thelio and AMD, Nvidia Responds, Ubuntu Snaps, Red Hat & IBM Merge, Valve Rolls Out Steam Labs, Valve Early Access Dota Underlords
Katherine Druckman and Doc Searls talk to Linux Journal's Danna Vedder about the current state of advertising.
FreeBSD 11.3 has been released, OpenBSD workstation, write your own fuzzer for the NetBSD kernel, Exploiting FreeBSD-SA-19:02.fd, streaming to twitch using OpenBSD, 3 different ways of dumping hex contents of a file, and more.
Often, a kernel developer will try to reduce the size of an attack surface against Linux, even if it can't be closed entirely. It's generally a toss-up whether such a patch makes it into the kernel. Linus Torvalds always prefers security patches that really close a hole, rather than just give attackers a slightly harder time of it.
Matthew Garrett recognized that userspace applications might have secret data sitting in RAM at any given time, and that those applications might want to wipe that data clean so no one could look at it.
There were various ways to do this already in the kernel, as Matthew pointed out. An application could use mlock() to prevent its memory contents from being pushed into swap, where it might be read more easily by attackers. An application also could use atexit() to cause its memory to be thoroughly overwritten when the application exited, thus leaving no secret data in the general pool of available RAM.
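As a rough sketch of those two userspace techniques (illustrative Python reaching the C library's mlock() through ctypes; this is not from the original discussion):

```python
# Rough illustration only: pin a buffer holding secrets into RAM with mlock()
# and overwrite it at exit, matching the two techniques mentioned above.
import atexit
import ctypes

libc = ctypes.CDLL("libc.so.6", use_errno=True)
secret = ctypes.create_string_buffer(b"hunter2", 4096)

# Keep the pages out of swap; may require raising RLIMIT_MEMLOCK.
if libc.mlock(secret, ctypes.sizeof(secret)) != 0:
    raise OSError(ctypes.get_errno(), "mlock failed")

@atexit.register
def _wipe():
    # Zero the buffer before the process exits so freed pages hold no secrets.
    ctypes.memset(secret, 0, ctypes.sizeof(secret))
    libc.munlock(secret, ctypes.sizeof(secret))
```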
The problem, Matthew pointed out, came if an attacker was able to reboot the system at a critical moment—say, before the user's data could be safely overwritten. If attackers then booted into a different OS, they might be able to examine the data still stored in RAM, left over from the previously running Linux system.
As Matthew also noted, the existing way to prevent even that was to tell the UEFI firmware to wipe system memory before booting to another OS, but this would dramatically increase the amount of time it took to reboot. And if the good guys had won out over the attackers, forcing them to wait a long time for a reboot could be considered a denial of service attack—or at least downright annoying.
Hi Linus,
The following changes since commit 0ecfebd2b52404ae0c54a878c872bb93363ada36:
Linux 5.2 (2019-07-07 15:41:56 -0700)
are available in the Git repository at:
https://github.com/ceph/ceph-client.git tags/ceph-for-5.3-rc1
for you to fetch changes up to d31d07b97a5e76f41e00eb81dcca740e84aa7782:
ceph: fix end offset in truncate_inode_pages_range call (2019-07-08 14:01:45 +0200)
There is a trivial conflict caused by commit 9ffbe8ac05db ("locking/lockdep: Rename lockdep_assert_held_exclusive() -> lockdep_assert_held_write()"). I included the resolution in for-linus-merged.
Ceph for Linux 5.3 is bringing an addition to speed-up reads/discards/snap-diffs on sparse images, snapshot creation time is now exposed to support features like "restore previous versions", support for security xattrs (currently limited to SELinux), addressing a missing feature bit so the kernel client's Ceph features are now "luminous", better consistency with Ceph FUSE, and changing the time granularity from 1us to 1ns. There are also bug fixes and other work as part of the Ceph code for Linux 5.3. As maintainer Ilya Dryomov put it, "Lots of exciting things this time!"
At the start of the month we reported on out-of-tree kernel work to support Linux on the newer Macs. Those patches were focused on supporting Apple's NVMe drive behavior by the Linux kernel driver. That work has been evolving nicely and is now under review on the kernel mailing list.
Volleyed on Tuesday was a set of three patches to the Linux kernel's NVMe code for handling the Apple hardware of the past few years so that Linux can deal with these drives.
On Apple systems from 2018 and newer, the I/O queue sizing/handling is odd and in other areas does not properly follow the NVMe specification. These patches take care of that while hopefully not regressing existing NVMe controller support.
The Android system has shipped a couple of allocators for DMA buffers over the years; first came PMEM, then its replacement ION. The ION allocator has been in use since around 2012, but it remains stuck in the kernel's staging tree. The work to add ION to the mainline started in 2013; at that time, the allocator had multiple issues that made inclusion impossible. Recently, John Stultz posted a patch set introducing DMA-BUF heaps, an evolution of ION, that is designed to do exactly that — get the Android DMA-buffer allocator to the mainline Linux kernel.
Applications interacting with devices often require a memory buffer that is shared with the device driver. Ideally, it would be memory mapped and physically contiguous, allowing direct DMA access and minimal overhead when accessing the data from both sides at the same time. ION's main goal is to support that use case; it implements a unified way of defining and sharing such memory buffers, while taking into account the constraints imposed by the devices and the platform.
The kernel development community continues to propose new system calls at a high rate. Three ideas that are currently in circulation on the mailing lists are clone3(), fchmodat4(), and fsinfo(). In some cases, developers are just trying to make more flag bits available, but there is also some significant new functionality being discussed.

clone3()
The clone() system call creates a new process or thread; it is the actual machinery behind fork(). Unlike fork(), clone() accepts a flags argument to modify how it operates. Over time, quite a few flags have been added; most of these control what resources and namespaces are to be shared with the new child process. In fact, so many flags have been added that, when CLONE_PIDFD was merged for 5.2, the last available flag bit was taken. That puts an end to the extensibility of clone().
On NUMA systems with a lot of CPUs, it is common to assign parts of the workload to different subsets of the available processors. This partitioning can improve performance while reducing the ability of jobs to interfere with each other. The partitioning mechanisms available on current kernels might just do too good a job in some situations, though, leaving some CPUs idle while others are overutilized. The soft affinity patch set from Subhra Mazumdar is an attempt to improve performance by making that partitioning more porous.

In current kernels, a process can be restricted to a specific set of CPUs with either the sched_setaffinity() system call or the cpuset mechanism. Either way, any process so restricted will only be able to run on the specified CPUs regardless of the state of the system as a whole. Even if the other CPUs in the system are idle, they will be unavailable to any process that has been restricted not to run on them. That is normally the behavior that is wanted; a system administrator who has partitioned a system in this way probably has some other use in mind for those CPUs.
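For illustration, here is what such a restriction looks like from userspace (a minimal Python sketch using the os module's wrapper around sched_setaffinity(); the CPU numbers are arbitrary):

```python
# Minimal sketch: restrict the current process to CPUs 0-3 using the same
# sched_setaffinity() interface described above (Linux-only).
import os

os.sched_setaffinity(0, {0, 1, 2, 3})   # pid 0 means "the calling process"
print("now allowed on CPUs:", sorted(os.sched_getaffinity(0)))
# Even if every other CPU in the system is idle, this process will not run there.
```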
But what if the administrator would rather relax the partitioning in cases where the fenced-off CPUs are idle and going to waste? The only alternative currently is to not partition the system at all and let processes roam across all CPUs. One problem with that approach, beyond losing the isolation between jobs, is that NUMA locality can be lost, resulting in reduced performance even with more CPUs available. In theory the AutoNUMA balancing code in the kernel should address that problem by migrating processes and their memory to the same node, but Mazumdar notes that it doesn't seem to work properly when memory is spread out across the system. Its reaction time is also said to be too slow, and the cost of the page scanning required is high.
NVMe is a protocol used by Apple for PCIe solid state drives. It replaces the older Advanced Host Controller Interface (AHCI). On Tuesday, three NVMe patches were submitted to the Linux kernel to deal with Mac SSDs that use this protocol.
In addition to Linux 5.3 bringing a VirtIO-IOMMU driver, this next kernel version is bringing another new VirtIO virtual device implementation: PMEM for para-virtualized persistent memory support for the likes of Intel Optane DC persistent memory.
One of the first PCIe Gen 4 NVMe SSDs to market has been the Corsair Force MP600. AMD included the Corsair MP600 2TB NVMe PCIe4 SSD with their Ryzen 3000 reviewer's kit and for those interested in this speedy solid-state storage here are some benchmarks compared to various other storage devices on Ubuntu Linux.
The 2TB Force Series Gen 4 MP600 SSD is rated for sequential reads up to 4950MB/s and sequential writes up to 4250MB/s, with 600k IOPS random writes and 680k IOPS random reads. The MP600 relies upon 3D TLC NAND and a Phison PS5016-E16 controller. This 2TB PCIe 4.0 SSD will set you back $450 USD while a 1TB version is a modest $250 USD.
A file manager is among the most used software on any platform. With it, you can access, manage, and organize the files on your device. On a Linux system it is just as important to have an effective and simple file manager. In this curated article, we discuss a set of the best Linux file manager tools, which will help you operate the system effectively.
Maestral is a new open source Dropbox client for macOS and Linux that's currently in beta. It can be used both with and without a GUI, and it was created with the purpose of having a Dropbox client that supports folder syncing on drives that use filesystems like Btrfs, Ext3, ZFS, XFS or encrypted filesystems, which are no longer supported by Dropbox.
Over the past few months, I’ve written lots of reviews of open source audio software, focusing mainly on music players. Linux has a mouthwatering array of open source multimedia tools, so I’m going to turn my attention wider afield from music players. Let’s start with some multimedia candy.
GLava is an OpenGL audio spectrum visualizer for Linux. An audio visualizer works by extracting waveform and/or frequency information from the audio and feeding this information through some display rules, which produces what you see on the screen. The imagery is usually generated and rendered in real time and in a way synchronized with the music as it is played.
GLava makes a real-time audio visualizer appear as if it’s embedded in your desktop background, or in a window. When displayed as the background, it’ll display on top of your wallpaper, giving the appearance of a live, animated wallpaper.
GLava is a simple C program that sets up the necessary OpenGL and Xlib code for sets of 2D fragment shaders. The software uses PulseAudio to sync the desktop visualizer with any music source.
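As a rough sketch of the frequency-extraction step described above (GLava itself is written in C; this illustrative NumPy snippet is not its code):

```python
# Rough sketch: turn one block of mono samples into magnitudes per frequency bin.
import numpy as np

def spectrum(samples, sample_rate):
    window = np.hanning(len(samples))             # reduce spectral leakage
    mags = np.abs(np.fft.rfft(samples * window))  # magnitude per frequency bin
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs, mags

# Example: a 440 Hz sine sampled at 44.1 kHz peaks near the 440 Hz bin.
t = np.arange(2048) / 44100.0
freqs, mags = spectrum(np.sin(2 * np.pi * 440 * t), 44100)
print(freqs[np.argmax(mags)])
```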
Learn the concept of hard links in Linux and their association with inodes in this tutorial.
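As a quick taste of the idea (a minimal sketch, not the tutorial's own code), two hard-linked names share a single inode, which is easy to verify:

```python
# Minimal sketch: create a hard link and show that both names share one inode.
import os

with open("original.txt", "w") as f:
    f.write("hello\n")

os.link("original.txt", "hardlink.txt")       # second name, same inode

orig, link = os.stat("original.txt"), os.stat("hardlink.txt")
print(orig.st_ino == link.st_ino)             # True: same inode number
print(orig.st_nlink)                          # 2: two directory entries point at it
```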
Netherguild is a recent discovery that's currently in development by David Vinokurov. It's a turn-based rogue-lite strategy game about sending a team deep below ground.
Queen's Quest 5: Symphony of Death from Brave Giant LTD and Artifex Mundi was released today; it's another fantastic-looking hidden object game for a more casual experience.
Today Proxy Studios and Slitherine have released the latest DLC for the turn-based strategy game Warhammer 40,000: Gladius, with the Chaos Space Marines making their way across the planet.
Also available today is a big save-breaking patch. Update 1.3, which is actually a pretty huge patch for the game, adds new items, new achievements, new tips, new settings, performance improvements, fixes to the AI, save game format improvements to reduce UI lag with large saves, a mod management screen and quite a bit more. Good to see it so well supported a year after the original release.
Look out! Another sale is approaching! This time it's Valve's turn, with Steam having a space-themed sale for the 50th anniversary of the Apollo 11 moon landing.
Thanks to the help of nearly a thousand backers on Kickstarter, the very sweet-looking puzzle-platformer that mixes in some visual novel elements is fully funded.
This is quite exciting and very pleasing to see. Path of Titans from Alderon Games has hit the funding goal!
After writing about the IndieGoGo campaign starting only a few days ago, Alderon Games added a PayPal backing option to their official website. Their initial goal was only $25,234; with both campaigns together they've managed to pull in $32,704, and they have 28 days left to go, so hopefully they will get more than enough to bring us another great Linux game.
Three crowdfunding campaigns in one day? Yes! After Evan's Remains and Path of Titans got funded, we also have the action-platformer "GIGABUSTER", which has also been fully funded and so is coming to Linux.
Inspired by the likes of both Mega Man Zero and Mega Man X, the developer said they wanted a more modern and balanced game that was similar, so they decided to create their own, hoping it will scratch your itch as well as their own. After appearing on Kickstarter, GIGABUSTER managed to jump, dash and shoot its way to victory with $11,666 in funding.
If you're a game developer or you just like making good-looking retro art, you might want to take a look at SpriteStack.
I know you have waited for this a long time, but believe me, there were really good reasons. Check out the past articles concerning the Latte git version to get a picture of the major new features introduced for v0.9. Of course, this is an article for a beta release, and as such I will not provide any fancy videos or screenshots; that is a goal for the official stable release article.
It's been over a year since the release of Latte Dock 0.8, this KDE-aligned desktop dock, and now the v0.9 release isn't too far away.
Latte Dock 0.9 continues maturing its Wayland support; it is still deemed a technology preview for the v0.9 series, but should be in much better standing all around.
Pitivi is a video editor, free and open source. Targeted at newcomers and professional users, it is minimalist and powerful. This summer I am fortunate to collaborate in Pitivi development through Google Summer of Code.
My goal is to implement an interval time system, with the support of Mathieu Duponchell, my mentor, and other members of the Pitivi community.
An interval time system is a common tool in many video editors. It will introduce new features in Pitivi. The user will be able to set up a range of time in the timeline editor, playback specific parts of the timeline, export the selected parts of the timeline, cut or copy clips inside the interval and zoom in/out the interval.
My proposal also includes the design of a marker system to store information at a certain time position.
Today we are looking at the first stable release of Endeavour OS. It is a project that started to continue the spirit of the recently discontinued Antergos. The development team consists of Antergos developers and community members.
As you can see in this first stable release, it is far from just a continuation of Antergos as we know it. The stable release uses an offline Calamares installer and comes with just a customized XFCE desktop environment. They are planning to have an online installer again in the future, which will give a person the option to choose between 10 desktop environments, similar to Antergos.
It is based on Arch, Linux kernel 5.2 and XFCE 4.14 pre2, and it uses about 500 MB of RAM.
In this video, we look at Endeavour OS 2019.07.15.
I am pleased to announce that the July 2019 release of PCLinuxOS KDE Darkstar is ready for download.
Recently I gave a syslog-ng introductory workshop at Pass the SALT conference in Lille, France. I got a lot of positive feedback, so I decided to turn all that feedback into a blog post. Naturally, I shortened and simplified it, but still managed to get enough material for multiple blog posts.
Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, a perfect solution for such tests (for x86_64 only), and also as base packages.
RPMs of PHP version 7.3.7RC1 are available as SCL in the remi-test repository and as base packages in the remi-test repository for Fedora 30 or the remi-php73-test repository for Fedora 28-29 and Enterprise Linux.
RPMs of PHP version 7.2.20RC1 are available as SCL in the remi-test repository and as base packages in the remi-test repository for Fedora 28-29 or the remi-php72-test repository for Enterprise Linux.
RPMs of QElectroTech version 0.70, an application to design electric diagrams, are available in the remi repository for Fedora and Enterprise Linux 7.
A bit more than a year after the version 0.60 release, the project has just released a new major version of their electric diagram editor.
The kernel team is working on final integration for kernel 5.2. This version was just recently released, and will arrive soon in Fedora. This version has many security fixes included. As a result, the Fedora kernel and QA teams have organized a test week from Monday, July 22, 2019 through Monday, July 29, 2019. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.
Like each month, here comes a report about the work of paid contributors to Debian LTS.
This week the team behind Linux Mint announced the release of the Linux Mint 19.2 beta, a desktop Linux distribution aimed at producing a modern operating system. This release is codenamed Tina.
This release comes with updated software, refinements and new features to make the desktop more comfortable to use.
Continuing my previous Mem. Comparison 2018, here's my 2019 comparison with all editions of Ubuntu 19.04 "Disco Dingo". The operating system editions I use here are these eight: Ubuntu Desktop, Kubuntu, Lubuntu, Xubuntu, Ubuntu MATE, Ubuntu Studio, Ubuntu Kylin, and Ubuntu Budgie. I installed every one of them on my laptop and (immediately at first login) took a screenshot of the System Monitor (or Task Manager) without doing anything else. I present here the screenshots along with each variant's list of processes at the time I took them. You can also download the ODS file I used to create the chart below. Finally, I hope this comparison helps all of you, and next time somebody can make better comparisons.
This is a follow-up to the End of Life warning sent earlier this month to confirm that as of today (July 18, 2019), Ubuntu 18.10 is no longer supported. No more package updates will be accepted to 18.10, and it will be archived to old-releases.ubuntu.com in the coming weeks.
The original End of Life warning follows, with upgrade instructions:
Ubuntu announced its 18.10 (Cosmic Cuttlefish) release almost 9 months ago, on October 18, 2018. As a non-LTS release, 18.10 has a 9-month support cycle and, as such, the support period is now nearing its end and Ubuntu 18.10 will reach end of life on Thursday, July 18th.
At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 18.10.
The supported upgrade path from Ubuntu 18.10 is via Ubuntu 19.04. Instructions and caveats for the upgrade may be found at:
https://help.ubuntu.com/community/DiscoUpgrades
Ubuntu 19.04 continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:
https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce
Since its launch in October 2004 Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.
On behalf of the Ubuntu Release Team,
Adam Conrad
CMake is an open-source, cross-platform family of tools designed to build, test and package software. It is used to control the software compilation process and generate native makefiles and workspaces that can be used in any compiler environment.
While some users of CMake want to stay up to date with the latest release, others want to be able to stay with a known version and choose when to move forward to newer releases, picking up just the minor bug fixes for the feature release they are tracking. Users may also occasionally need to roll back to an earlier feature release, such as when a bug or a change introduced in a newer CMake version exposes problems within their project.
Craig Scott, one of the co-maintainers of CMake, sees snaps as an excellent solution to these needs. Snaps' ability to support separate tracks for each feature release, in addition to giving users the choice of following official releases, release candidates or bleeding-edge builds, is an ideal fit. When he received an invitation to the 2019 Snapcraft Summit, he was keen to work directly with those at the pointy end of developing and supporting the snap system.
Looking ahead to Ubuntu 19.10 as the cycle before Ubuntu 20.04 LTS, one of the areas of Canonical's work exciting us (besides the great upstream GNOME performance work) easily comes down to the work they are pursuing on better ZFS On Linux integration, with the aim of even offering ZFS as a file-system option from their desktop installer. A big role in their ZoL play also goes to the new "Zsys" component they have been developing.
Kali is based on Debian Linux (like Raspbian, the default Raspberry Pi OS) and includes specialist tools to support penetration testing by devices such as the Pi.
Kali Linux (version 2019.2a) for Raspberry Pi 2, 3 and 4 is available in a 32-bit image (893 MB), but the 64-bit version is promised soon.
The third edition of the Operating-System-Directed Power-Management (OSPM) summit was held May 20-22 at the ReTiS Lab of the Scuola Superiore Sant'Anna in Pisa, Italy. The summit is organized to collaborate on ways to reduce the energy consumption of Linux systems, while still meeting performance and other goals. It is attended by scheduler, power-management, and other kernel developers, as well as academics, industry representatives, and others interested in the topics.
The kernel's deadline scheduling class (SCHED_DEADLINE) enables realtime scheduling where every task is guaranteed to meet its deadlines. Unfortunately, SCHED_DEADLINE's current view of CPU capacity is far too simple. It doesn't take dynamic voltage and frequency scaling (DVFS), simultaneous multithreading (SMT), asymmetric CPU capacity, or any kind of performance capping (e.g. due to thermal constraints) into consideration.
In particular, if we consider running deadline tasks in a system with performance capping, the question is "what level of guarantee should SCHED_DEADLINE provide?". An interesting discussion about the pros and cons of different approaches (weak, hard, or mixed guarantees) developed during this presentation. There were many different views, but the discussion didn't really conclude and will have to be continued at the Linux Plumbers Conference later this year.
The topic of guaranteed performance will become more important for mobile systems in the future as performance capping is likely to become more common. Defining hard guarantees is almost impossible on real systems since silicon behavior very much depends on environmental conditions. The main pushback on the existing scheme is that the guaranteed bandwidth budget might be too conservative. Hence SCHED_DEADLINE might not allow enough bandwidth to be reserved for use cases with higher bandwidth requirements that can tolerate bandwidth reservations not being honored.
Validating scheduler behavior is a tricky affair, as multiple subsystems both compete and cooperate with each other to produce the task placement we observe. Valentin Schneider from Arm described the approach taken by his team (the folks behind energy-aware scheduling — EAS) to tackle this problem.
"One task per CPU" workloads, as emulated by multi-core Geekbench, can suffer on traditional two-cluster big.LITTLE systems due to the fact that tasks finish earlier on the big CPUs. Arm has introduced a more flexible DynamIQ architecture that can combine big and LITTLE CPUs into a single cluster; in this case, early products apply what's known as phantom scheduler domains (PDs). The concept of PDs is needed for DynamIQ so that the task scheduler can use the existing big.LITTLE extensions in the Completely Fair Scheduler (CFS) scheduler class.
Multi-core Geekbench consists of several tests during which N CFS tasks perform an equal amount of work. The synchronization mechanism pthread_barrier_wait() (i.e. a futex) is used to wait for all tasks to finish their work in test T before starting the tasks again for test T+1.
The problem for Geekbench on big.LITTLE is related to the grouping of big and LITTLE CPUs into separate scheduler (or CPU) groups of the so-called die-level scheduler domain. The two groups exist because the big CPUs share a last-level cache (LLC) and so do the LITTLE CPUs. This isn't true any more for DynamIQ, hence the use of the "phantom" notion here.
The tasks of test T finish earlier on big CPUs and go to sleep at the barrier B. Load balancing then makes sure that the tasks on the LITTLE CPUs migrate to the big CPUs where they continue to run the rest of their work in T before they also go to sleep at B. At this moment, all the tasks in the wake queue have a big CPU as their previous CPU (p->prev_cpu). After the last task has entered pthread_barrier_wait() on a big CPU, all tasks on the wake queue are woken up.
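The pattern itself is easy to emulate; here is a rough Python sketch of the "N tasks meet at a barrier between tests" behavior described above, with threading.Barrier standing in for pthread_barrier_wait():

```python
# Rough sketch of the Geekbench-style pattern: N workers do an equal amount of
# work per test, then all wait at a barrier before the next test starts.
import threading

N_TASKS, N_TESTS = 4, 3
barrier = threading.Barrier(N_TASKS)

def worker(task_id):
    for test in range(N_TESTS):
        sum(i * i for i in range(200_000))  # stand-in for the per-test work
        barrier.wait()                       # like pthread_barrier_wait():
        # tasks that finished early sleep here until the slowest one arrives

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_TASKS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```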
The typical systems used in industrial automation (e.g. for axis control) consist of a "black box" executing a commercial realtime operating system (RTOS) plus a set of control design tools meant to be run on a different desktop machine. This approach, besides imposing expensive royalties on the system integrator, often does not offer the desired degree of flexibility for testing/implementing novel solutions (e.g., running both control code and design tools on the same platform).
As is probably well known, a scheduler is the component of an operating system that decides which CPU the various tasks should run on and for how long they are allowed to do so. This happens when an OS runs on the bare hardware of a physical host and it is also the case when the OS runs inside a virtual machine. The only difference being that, in the latter case, the OS scheduler marshals tasks among virtual CPUs.
And what are virtual CPUs? Well, in most platforms they are also a kind of special task and they want to run on some CPUs ... therefore we need a scheduler for that! This is usually called the "double-scheduling" property of systems employing virtualization because, well, there literally are two schedulers: one — let us call it the host scheduler, or the hypervisor scheduler — that schedules the virtual CPUs on the host physical CPUs; and another one — let us call it the guest scheduler — that schedules the guest OS's tasks on the guest's virtual CPUs.
Now what are these two schedulers? That depends on the virtualization platform. They are always different, in the sense that it will never happen that, at runtime, a scheduler has to deal with scheduling virtual CPUs and also scheduling tasks that want to run on those same virtual CPUs (well, it can happen, but then you are not doing virtualization). They can be the same, in terms of code, or they can be completely different in that respect as well.
In the opening session of OSPM 2019, Rafael Wysocki from Intel gave a talk about potential problems faced by the designers of CPU idle-time-management governors, which was inspired by his own experience from the timer-events oriented (TEO) governor work done last year.
In the first place, he said, it should be noted that "CPU idleness" is defined at the level of logical CPUs, which may be CPU cores or simultaneous multithreading (SMT) threads, depending on the hardware configuration of the processor. In Linux, a logical CPU is idle when there are no runnable tasks in its queue, so it falls back to executing the idle task associated with it (there is one idle task for each logical CPU in the system, but they all share the same code, which is the idle loop). Therefore "CPU idleness" is an OS (not hardware) concept and if the idle loop is entered by a CPU, there is an opportunity to save some energy with a relatively small impact on performance (or even without any impact on performance at all) — if the hardware supports that.
The idle loop runs on each idle CPU and it only takes this particular CPU into consideration. As a rule, two code modules are invoked in every iteration of it. The first one, referred to as the CPU idle-time-management governor, is responsible for deciding whether or not to stop the scheduler tick and what to tell the hardware to do; the second one, called the CPU idle-time-management driver, passes the governor's decisions down to the hardware, usually in an architecture- or platform-specific way. Then, presumably, the processor enters a special state in which the CPU in question stops fetching instructions (that is, it does literally nothing at all); that may allow the processor's power draw to be reduced and some energy to be saved as a result. If that happens, the processor needs to be woken up from that state by a hardware event after spending some time, referred to as the idle duration, in it. At that point, the governor is called again so it can save the idle-duration value for future use.
If you’ve been following Apache Software Foundation (ASF) announcements for ApacheCon 2019, you must be aware of the conference in Las Vegas (ApacheCon North America) from September 9 to September 12.
And, recently, they announced their plans for ApacheCon Europe 2019, to be held on 22-24 October 2019 at the iconic Kulturbrauerei in Berlin, Germany. It is going to be one of the major events by the ASF this year. In this article, we take a look at the details revealed so far.
Aaron discussed various ways to record RTSP streams when used with playbin and brought up some of his pending merge requests around the closed captioning renderer and Active Format Description (AFD) support, with a discussion about redoing the renderer properly, and in Rust.
George discussed a major re-work of the gst-omx bufferpool code that he has been doing and then moved his focus to Qt/Android support. He mostly focused on the missing bits, discussing builds and infrastructure issues with Nirbheek and myself, and going through his old patches.
TenneT is the first European cross-border electricity transmission system operator (TSO), with activities in the Netherlands and in Germany, providing uninterrupted electricity to over 41 million people. The security of our supply is among the best in Europe, with 99.99% grid availability. With the energy transition, TenneT is contributing to a future in which wind and solar energy are the most important primary sources to produce electricity.
While the Linux 4.4 kernel is quite old (January 2016), DragonFlyBSD has now re-based its AMD Radeon kernel graphics driver against that release. It is at least a big improvement compared to its Radeon code having been derived previously from Linux 3.19.
DragonFlyBSD developer François Tigeot continues doing a good job herding the open-source Linux graphics driver support to this BSD. With the code that landed on Monday, DragonFlyBSD's Radeon DRM is based upon the state found in the Linux 4.4.180 LTS tree.
Python does not lack for web frameworks, from all-encompassing frameworks like Django to "nanoframeworks" such as WebCore. A recent "spare time" project caused me to look into options in the middle of this range of choices, which is where the Python "microframeworks" live. In particular, I tried out the Bottle and Flask microframeworks—and learned a lot in the process.
I have some experience working with Python for the web, starting with the Quixote framework that we use here at LWN. I have also done some playing with Django along the way. Neither of those seemed quite right for this latest toy web application. Plus I had heard some good things about Bottle and Flask at various PyCons over the last few years, so it seemed worth an investigation.
Web applications have lots of different parts: form handling, HTML template processing, session management, database access, authentication, internationalization, and so on. Frameworks provide solutions for some or all of those parts. The nano-to-micro-to-full-blown spectrum is defined (loosely, at least) based on how much of this functionality a given framework provides or has opinions about. Most frameworks at any level will allow plugging in different parts, based on the needs of the application and its developers, but nanoframeworks provide little beyond request and response handling, while full-blown frameworks provide an entire stack by default. That stack handles most or all of what a web application requires.
The list of web frameworks on the Python wiki is rather eye-opening. It gives a good idea of the diversity of frameworks, what they provide, what other packages they connect to or use, as well as some idea of how full-blown (or "full-stack" on the wiki page) they are. It seems clear that there is something for everyone out there—and that's just for Python. Other languages undoubtedly have their own sets of frameworks (e.g. Ruby on Rails).
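To give a sense of how little a microframework imposes, a complete Flask application fits in a few lines (a minimal sketch; Bottle's equivalent is nearly identical):

```python
# Minimal Flask application: request routing and response handling, nothing more.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from a Python microframework"

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8080)   # built-in development server
```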
Before there was Big Tech, there was "adversarial interoperability": when someone decides to compete with a dominant company by creating a product or service that "interoperates" (works with) its offerings.
In tech, "network effects" can be a powerful force to maintain market dominance: if everyone is using Facebook, then your Facebook replacement doesn't just have to be better than Facebook, it has to be so much better than Facebook that it's worth using, even though all the people you want to talk to are still on Facebook. That's a tall order.
Adversarial interoperability is judo for network effects, using incumbents' dominance against them. To see how that works, let's look at a historical example of adversarial interoperability's role in helping to unseat a monopolist's dominance.
The first skirmishes of the PC wars were fought with incompatible file formats and even data-storage formats: Apple users couldn't open files made by Microsoft users, and vice-versa. Even when file formats were (more or less) harmonized, there was still the problem of storage media: the SCSI drive you plugged into your Mac needed a special add-on and flaky driver software to work on your Windows machine; the ZIP cartridge you formatted for your PC wouldn't play nice with Macs.
But as office networking spread, the battle moved to a new front: networking compatibility. AppleTalk, Apple's proprietary protocol for connecting up Macs and networked devices like printers, pretty much Just Worked, providing you were using a Mac. If you were using a Windows PC, you had to install special, buggy, unreliable software.
And for Apple users hoping to fit in at Windows shops, the problems were even worse: Windows machines used the SMB protocol for file-sharing and printers, and Microsoft's support for MacOS was patchy at best, nonexistent at worst, and costly besides. Businesses sorted themselves into Mac-only and PC-only silos, and if a Mac shop needed a PC (for the accounting software, say), it was often cheaper and easier just to get the accountant their own printer and backup tape-drive, rather than try to get that PC to talk to the network. Likewise, at a PC shop with a single graphic designer on a Mac, that person would often live offline, disconnected from the office network, tethered to their own printer, with their own stack of Mac-formatted ZIP cartridges or CD-ROMs.
[...]
Someone attempting to replicate the SAMBA creation feat in 2019 would likely come up against an access control that needed to be bypassed in order to peer inside the protocol's encrypted outer layer and create a feature-compatible tool to use in competing products.
Another thing that's changed (for the worse) since 1993 is the proliferation of software patents. Software patenting went into high gear around 1994 and consistently gained speed until 2014, when Alice v. CLS Bank put the brakes on (today, Alice is under threat). After decades of low-quality patents issuing from the US Patent and Trademark Office, there are so many trivial, obvious and overlapping software patents in play that anyone trying to make a SAMBA-like product would run a real risk of being threatened with expensive litigation for patent infringement.
Drug prices are sky high. This is not news. A bunch of incredibly dumb policy decisions have been stacked up for decades and brought us to this place where drug prices -- especially for life-saving drugs -- would bankrupt most people. A huge part of the problem is our patent system and how we literally grant monopolies to companies over these drugs. Combine "life saving" with "monopoly" and, uh, you don't have to have a PhD in economics to know what happens to the price. Add into that our fucked up and convoluted hospital and insurance healthcare system, in which prices are hidden from patients, and you have a recipe for the most insanely exploitative "marketplace" ever.
[...]
Furthermore, while the Times is correct that this could be "done now," it seems like yet another way of treating the symptoms not the disease. Fix the fucking patent system. Fix our broken healthcare system. Do those two things and you don't have insane drug pricing any more. And, to be fair, at least the NY Times piece does acknowledge the idea that maybe we need to "blow up the patent system and start over" when it comes to pharmaceuticals. But it labels this idea as "fantastical." It may be "fantastical" to those with limited imaginations and focused on living under today's crappy, broken system. But if we want to deal with the real problems, that's one area to start.
As the 2020 election draws near, presidential candidates are putting forth numerous other solutions to the drug cost crisis. Those solutions range from the practical (tax drug companies on their price hikes) to the ambitious (let the federal government make its own drugs) to the fantastical (blow up the patent system and start over). If the plans get serious consideration, they would advance a long overdue dialogue about how the country wants to evaluate medications and what it is and isn’t willing to spend on them — a question that sits at the heart of America’s deeply flawed prescription drug system.
Having reported on this subject a few times, I will say that this is a seedier, deeper rabbit hole than you might think. While Russian news outlets do seem to be enjoying amplifying fear on this subject, there's plenty of home grown folks pushing 5G health risk claims as well. I've found a long line of academics happy to go on the record claiming 5G could pose a health risk. I've also found plenty of others proclaiming any health concerns are fluff and nonsense. But pretty uniformly you'll find one consensus buried under the mess: far, far more study is necessary before anybody engages in absolutism one way or the other.
Security updates have been issued by Arch Linux (chromium, firefox, and squid), CentOS (thunderbird and vim), Debian (libonig), SUSE (firefox, glibc, kernel, libxslt, and tomcat), and Ubuntu (libreoffice and thunderbird).
Dubbed EvilGnome by researchers, the malware was found masquerading as a GNOME shell extension targeting Linux desktop users.
They were written by a user named ruri12. These packages were removed by the PyPI team on July 9, 2019. However, they had been available since November 2017 and had been downloaded fairly regularly.
See the original article for more details.
As always, when using a package that you aren’t familiar with, be sure to do your own thorough vetting to be sure you are not installing malware accidentally.
We've noted a few times now how the protectionist assault against Huawei hasn't been supported by much in the way of public evidence. As in, despite widespread allegations that Huawei helps China spy on Americans wholesale, nobody has actually been able to provide any hard public evidence proving that claim. That's a bit of a problem when you're talking about a global blackballing effort. Especially when previous investigations as long as 18 months couldn't find evidence of said spying, and many US companies have a history of ginning up security fears simply because they don't want to compete with cheaper Chinese kit.
That said, a new report (you can find the full thing here) dug through the CVs of many Huawei executives and employees, and found that a small number of "key mid-level technical personnel employed by Huawei have strong backgrounds in work closely associated with intelligence gathering and military activities."
Unless you've been under a rock, you've noticed hardly a day goes by without another serious security foul-up. While there's plenty of blame to go around for these endless security problems, some of it goes to developers who write bad code.
That makes sense. But when GitLab, a DevOps company, surveyed over 4,000 developers and operators, they found 68% of the security professionals surveyed believe it's a programmer's job to write secure code, but they also think less than half of developers can spot security holes.
A report based on a survey of 4,071 software professionals published this week by GitLab, a provider of a continuous integration and continuous deployment (CI/CD) platform, found that while appreciation of the potential value of DevSecOps best practices is high, the ability to implement those practices is uneven at best.
In a survey conducted by GitLab, software professionals recognize the need for security to be baked into the development lifecycle, but the survey showed long-standing friction between security and development teams remains. While 69% of developers say they’re expected to write secure code, nearly half of security pros surveyed (49%) said they struggle to get developers to make remediation of vulnerabilities a priority. And 68% of security professionals feel fewer than half of developers are able to spot security vulnerabilities later in the lifecycle.
Over on his blog, Kees Cook runs through the security changes that came in Linux 5.2.
While most of Louisiana was spared Barry’s wrath last week, Isle de Jean Charles, a quickly eroding strip of land among coastal wetlands in the Gulf of Mexico, was not. A storm surge swept over the island, about 80 miles southwest of New Orleans, early in the morning on July 13 before Barry was upgraded from a tropical storm to a category 1 hurricane.
On July 15, I met with Albert Naquin, Chief of the Isle de Jean Charles Biloxi-Chitimacha-Choctaw Tribe (IDJC) and Wenceslaus Billiot Jr., the Tribe’s deputy chief, to travel to the island and assess the damages. That afternoon, we made our way through the receding waters that still covered Island Road, the only route connecting the island to the mainland. Days after the storm, some parts of the road on the island were still submerged in three feet of water.
Chester County officials Wednesday afternoon issued a Code Red health alert for extreme heat that is expected to continue into Monday.
On every front, academics, journalists and policymakers compare the fossil fuel industry to the tobacco industry. The two industries share the same playbook: strategies of delay, exculpating blame by making the consumer responsible, denying scientific consensus, publishing industry-funded science and fostering public confusion over the real impacts of their products.
A major difference between the two industries, however, is the timescale and scope of the harms caused. While public health professionals are executing coordinated efforts for a “tobacco endgame” to reduce smoking and tobacco prevalence to five percent of the population or less, with the possibility of ending the tobacco epidemic in certain areas within a couple decades — we’re far from making similar progress when it comes to climate change.
Even if all fossil fuel production and consumption ended today, the fallout from 50 years of delay caused by industry obfuscation will have ramifications for humans and other species for centuries or even millennia. If disruptive climate change continues unabated, the impacts on the planet may be essentially irreversible, at least as far as any humanly relevant scale.
Every few years this kind of thing pops up. Some ignorant organization or policymaker thinks "oh, hey, the easy way to 'solve' piracy is just to create a giant blacklist." This sounds like a simple solution... if you have no idea how any of this works. Remember, advertising giant GroupM tried just such an approach a decade ago, working with Universal Music to put together a list of "pirate sites" for which it would block all advertising. Of course, who ended up on that list? A bunch of hip hop news sites and blogs. And even the personal site of one of Universal Music's own stars was suddenly deemed an "infringing site."
These kinds of mistakes highlight just how fraught such a process is -- especially when it's done behind the scenes by organizations that face no penalty for overblocking. In such cases you always get widespread overblocking based on innuendo, speculation, and rumor, rather than any legitimate due process or court adjudication concerning infringement. Even worse, if there was actual infringement going on, one possible legal remedy would involve getting a site to take down that content. Under a "list" approach, it's just basically a death penalty for the entire site.
Biometric databases have a hunger for data. And they're getting fed. Government agencies are shoving every face they can find into facial recognition databases. Expanding the dataset means adding people who've never committed a crime and, importantly, who've never given their explicit consent to have their personal details handed over to federal agencies.
Thanks to unprecedented levels of cooperation across all levels of government, FBI and ICE are matching faces using data collected from millions of non-criminals. The agencies are apparently hoping this will all work out OK, rather than create a new national nightmare of shattered privacy and violated rights. Or maybe they just don't care.
Germany has banned its schools from using cloud-based productivity suites from Microsoft, Google, and Apple, because the companies weren't meeting the country's privacy requirements. Naked Security reports that the statement from the Hessische Beauftragte für Datenschutz und Informationsfreiheit (Hesse Commissioner for Data Protection and Freedom of Information, or HBDI) said, "The digital sovereignty of state data processing must be guaranteed. With the use of the Windows 10 operating system, a wealth of telemetry data is transmitted to Microsoft, whose content has not been finally clarified despite repeated inquiries to Microsoft. Such data is also transmitted when using Office 365." The HBDI also stressed that "What is true for Microsoft is also true for the Google and Apple cloud solutions. The cloud solutions of these providers have so far not been transparently and comprehensibly set out. Therefore, it is also true that for schools, privacy-compliant use is currently not possible."
Germany just banned its schools from using cloud-based productivity suites from Microsoft, Google, and Apple. The tech giants aren’t satisfying its privacy requirements with their cloud offerings, it warned.
The Hessische Beauftragte für Datenschutz und Informationsfreiheit (Hesse Commissioner for Data Protection and Freedom of Information, or HBDI) made the statement following a review of Microsoft Office 365’s suitability for schools.
Our nation's immigration agencies wield a considerable amount of power. So much power, in fact, that they're free to dump incoming immigrants off the space-time continuum at will. If a CBP officer decides a person isn't the age they say they are, they can alter the person's age so it matches the officer's beliefs.
How does the CBP accomplish this neat little trick? Well, oddly, it involves X-rays. A recent episode of This American Life details the surreal nature of this CBP-induced time warp -- one it inflicted (repeatedly!) on a 19-year-old Hmong woman coming to the United States to reunite with her fiance.
Yong Xiong was questioned by Customs officers at the Chicago airport. The CBP officer thought she was being trafficked and didn't believe the birth date on her passport. After a round of questioning meant to determine whether or not Yong was being trafficked, the CBP officer arrived at the conclusion she was, despite the officer marking "No" on ten of the eleven trafficking indicators.
So, how does the CBP try to determine someone's age when officers don't believe the person or the documents in front of them? They call in a dentist. Yong's teeth were x-rayed to determine her age. This may involve science on the front end, but the back end is mainly educated guesswork.
When the City of Baltimore agreed to settle with a victim of police brutality, it inserted the usual clauses that come with every settlement. There was the standard non-admission of wrongdoing, along with a "non-disparagement" clause the city's attorney told courts was used "in 95% of settlements" to prevent those being settled with from badmouthing the entity they sued.
Ashley Overbey received a $63,000 settlement from the city for allegations she was beaten, tased, verbally abused, and arrested after calling officers to her home to report a burglary. When a local newspaper published a story about the settlement, the City Solicitor chose to disparage Overbey by saying she was "hostile" when the police arrived at her home. As the comments filled up with invective against Overbey, she showed up in the comments herself to fire back at her detractors, claiming the police had been in the wrong and detailing some of the injuries she suffered.
The City -- which had chosen to skew public perception against Overbey by commenting on the settlement -- decided Overbey's defense of herself violated the non-disparagement clause. So, it clawed back half of her settlement -- $31,500 -- for violating its STFU clause.
To some extent we've had this discussion before, as parts of other discussions about the regulation of content online, but it's worth calling it out explicitly: regulating internet infrastructure services the same as internet edge service providers is a really bad idea. And yet, here we are. So few people seem to even care enough to make a distinction. So, let's start with the basics: "edge providers" are the companies that provide internet services that you, as an end user, interact with. Google, YouTube, Facebook, Twitter, Twitch, Reddit, Wikipedia, Amazon's e-commerce site. These are all edge providers as currently built. Infrastructure providers, however, sit a layer (or more) down from those edge providers. They're the services that make the edge services possible. This can include domain registrars and registries, CDNs, internet security companies and more. So, companies like Cloudflare, GoDaddy, Amazon's AWS, among others are examples there.
While tons of people interact with infrastructure players all the time, your average person will never even realize they're doing so -- as the interactions tend to be mediated entirely by the edge providers. For a few years now we've been seeing attempts to move the liability questions up (or, depending on your viewpoint, down) the stack from edge providers to infrastructure players. This raises a lot of significant concerns.
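To make that layering concrete, here is a minimal sketch (Python 3, standard library only) of how infrastructure providers quietly surface during an ordinary page load. The URL and the response-header names checked below are illustrative assumptions, not an authoritative way to identify intermediaries; real deployments vary.

    # Sketch: surface some of the infrastructure behind an ordinary page load.
    import socket
    import urllib.request

    URL = "https://example.com/"   # swap in any edge-provider URL to inspect
    HOST = "example.com"

    # DNS resolution: a registry, a registrar and a DNS operator all sit behind this call.
    addresses = {info[4][0] for info in socket.getaddrinfo(HOST, 443)}
    print("Resolved addresses:", sorted(addresses))

    # Response headers often hint at a CDN or security intermediary in the path.
    with urllib.request.urlopen(URL, timeout=10) as response:
        headers = {name.lower(): value for name, value in response.getheaders()}

    for name in ("server", "via", "cf-ray", "x-served-by", "x-amz-cf-id"):
        if name in headers:
            print(f"{name}: {headers[name]}")

On many production sites the "server" or "via" header points at a CDN rather than the site operator's own machine, which is exactly the invisibility described above: the person clicking the link never chose, and rarely notices, the intermediary answering the request.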
Salima has a problem: her Boulangism toaster is locked down with software that ensures that it will only toast bread sold to her by the Boulangism company… and as Boulangism has gone out of business, there's no way to buy authorized bread. Thus, Salima can no longer have toast.
This sneakily familiar scenario sends our resourceful heroine down a rabbit hole into the world of hacking appliances, but it also puts her in danger of losing her home -- and prosecution under the draconian terms of the Digital Millennium Copyright Act (DMCA). Her story, told in the novella “Unauthorized Bread,” which opens Cory Doctorow’s recent book Radicalized, guides readers through a process of discovering what Digital Restrictions Management (DRM) is, and how the future can look mightily grim if we don’t join forces to stop DRM now.
“Unauthorized Bread” takes place in the near future, maybe five or ten years at most, and the steady creep of technology that takes away more than it gives has simply advanced a few degrees. Salima and her friends and neighbors are refugees, and they live precariously in low-income housing equipped with high-tech, networked appliances. These gizmos and gadgets may seem nifty on the surface, but immediately begin to exact an unacceptable price, since they require residents to purchase the expensive approved bread for the toaster, the expensive approved dishes for the dishwasher, and so on. And just as Microsoft can whisk away ebooks that people “own” by closing down its ebook service, the vagaries of the business world cause Boulangism to whisk away Salima’s ability to use her own toaster.
Traditional conceptions of university-industry technology transfer typically focus on patenting and licensing of academic inventions. However, effective technology transfer often requires significant knowledge exchange between academic and commercial entities in parallel to patent licensing. Although patents on university technologies nominally disclose those inventions, a significant amount of knowledge related to practicing and commercializing them remains tacit or uncodified, residing in the mind of the faculty inventor. This chapter explores the nature of tacit knowledge and mechanisms for transferring it. It notes that the “tacit dimension” of university inventions can be quite high given the embryonic nature of such technologies. It further reveals that human and institutional connections play a critical role in transferring tacit knowledge between universities and commercial firms. In particular, networks, consulting engagements, sponsored research, proof of concept centers and incubators, and university spinoffs facilitate direct interactions between academic and commercial entities, thus promoting tacit knowledge exchange.
Industrializing emerging economies like Nigeria and Sudan through the protection of patent right holders is not an easy process, though it becomes possible where government shows a strong political will to discharge its responsibilities. In this regard, one modern technique which can catalyze massive industrialization in these countries is reverse engineering. The success stories of most advanced nations in the world today are partly attributed to it. This paper explores the state of industrialization in countries like Nigeria and Sudan to assess their current status, finding a dire need for change in both nations. It also briefly highlights the extant patent regimes in both countries, the importance of such regimes, and the challenges faced in their everyday implementation. Emphasis is also placed on the correlation between industrialization and the protection of patents in both countries, with a view to acknowledging that one can hardly exist without the other. The tripartite relationship among reverse engineering, patent protection, and trade secrets is cursorily discussed. Further, the paper discusses the importance of employing reverse engineering as a contemporary technique for national development and industrialization, along with its advantages for national development. It concludes that Nigeria and Sudan need to reconsider their policies on patent protection so as to foster economic development.
Vestas Wind Systems A/S (Vestas) and General Electric Company (GE), acting through its Renewable Energy Business, have reached an amicable settlement of all disputes related to multiple patent infringement claims in the U.S., resulting in the discontinuation of the case pending in the U.S. District Court for the Central District of California as well as all other pending proceedings related to the patents-in-suit.
[...]
Today’s announcement resolves the initial lawsuit GE filed against Vestas and Vestas-American Wind Technology Inc. on 31 July 2017, claiming infringement of its U.S. Patents No. 7,629,705 and No. 6,921,985; Vestas’ two counterclaims against GE, filed on 15 December 2017, claiming infringement of its U.S. Patents No. 7,102,247 and No. 7,859,125; and all pending inter partes review proceedings with respect to the patents-in-suit.
It's been a while since I've posted, as I've taken on Vice Dean duties at my law school that have kept me busy. I hope to blog more regularly as I get my legs under me. But I did see a paper worth posting mid-summer.
Wasserman & Frakes have published several papers showing that as examiners gain more seniority, their time spent examining patents decreases and their allowances come more quickly. They (and many others) have taken this to mean a decrease in patent quality.
As the longest-standing and staunchest Donald Trump supporter among IP bloggers, I must admit I'm more than a little bit disappointed at three very recent events, two of which are related to the mobile industry.
[...]
Antitrust Assistant Attorney General Makan Delrahim's subordinates made a bizarre filing in early May when they asked Judge Lucy H. Koh of the United States District Court for the Northern District of California to hold a special remedies hearing. I said "bizarre" because of the substance of the brief, the timing (more than three months after the San Jose bench trial), and the way the DOJ antagonized the FTC. That was the first time they were in the tank for Qualcomm (not counting public comments by Mr. Delrahim, a former Qualcomm outside counsel). The second time, in connection with Qualcomm's appeal of Judge Koh's certification of a consumer class, their intervention was infinitely more reasonable. But yesterday's Statement of Interest (of the United States, as the DOJ is authorized to speak on behalf of the federal government regardless of whether an independent government agency like the FTC agrees) is closer in (un)reasonableness to the DOJ's first pro-Qualcomm filing than to the second.
The district court's well-reasoned ruling is the FTC's biggest success in a long time. The DOJ should have more respect for the independent Federal Trade Commission and for the independent judiciary. Instead, the brief, filed yesterday with the United States Court of Appeals for the Ninth Circuit, arrogantly asserts that the FTC and Judge Koh failed to figure out the law.
The DOJ attacks Judge Koh's decision from three angles: merits (liability), remedies, and the public interest. As for the public-interest part, the DOJ mostly relies on the aforementioned declarations by two other departments, and Reuters' Stephen Nellis accurately described the gist of those statements as follows...
Earlier this month, we discussed how Gibson Guitar CEO James Curleigh had recently announced a shift in the company's IP enforcement strategy to try to be more permissive. That has since calcified into an actual formal plan, but we'll get into that more in a separate post because there is enough good and bad in it to be worth discussing. What kicked off Curleigh's reveal, however, was backlash from a recent lawsuit filed by Gibson against Armadillo Distribution Enterprises, the parent company of Dean Guitars. Dean sells several guitars that Gibson claims are trademark violations of its famed "flying v" and "explorer" body shapes. There are differences in the designs, to be clear, but there are also similarities. Even as Curleigh's plans for a more permissive IP attitude for Gibson go into effect, this lawsuit continues.
But not without Armadillo punching back, it seems. In response to the suit, Armadillo has decided to counter-sue with claims that Gibson's designs are not only too generic to be worthy of trademark protection, but also that Gibson's actions constitute interference with its legitimate business. We'll start with the trademarks.
The Senate Judiciary Committee voted on the Copyright Alternative in Small-Claims Enforcement Act, aka the CASE Act. This was without any hearings for experts to explain the huge flaws in the bill as it’s currently written. And flaws there are.
We’ve seen some version of the CASE Act pop up for years now, and the problems with the bill have never been addressed satisfactorily. This is still a bill that puts people in danger of huge, unappealable money judgments from a quasi-judicial system—not an actual court—for the kind of Internet behavior that most people engage in without thinking.
During the vote in the Senate Judiciary Committee, it was once again stressed that the CASE Act—which would turn the Copyright Office into a copyright traffic court—created a “voluntary” system.
“Voluntary” does not accurately describe the regime of the CASE Act. The CASE Act does allow people who receive notices from the Copyright Office to “opt-out” of the system. The average person is not really going to understand what is going on, other than that they’ve received what looks like a legal summons.
Between crowdsourcing and the explosion of indie video game developers, many of which are far more permissive in IP realms and far better at actually connecting with their fans, we are perhaps entering a golden age for fan involvement in the video games they love. And it's not just the indie developers getting into this game either; the AAA publishers are, too. One example of this came up last year, when Ubisoft worked with HitRECord to allow fans of the Beyond Good and Evil franchise to submit potential in-game music creations. On HitRECord, other fans would be able to vote and even remix those works. At the end of it all, any music Ubisoft used for Beyond Good and Evil 2 would be paid for out of a pool of money the company had set aside. Cool, right?
Not for some in the gaming industry itself. Many who work in the industry decried Ubisoft's program as denying professional musicians income for the creation of the game's music. Others called Ubisoft's potential payment to fans for their creations "on-spec" solicitations, in which companies only pay for work that actually makes it into the game, a practice that is generally seen as unethical in the industry. Except neither of those criticisms was accurate. Ubisoft specifically carved out a few places for fans to put music into the game, not the entire game. And the "on-spec" accusation would only make sense if these fans were in the gaming music industry, which they weren't. Instead, Ubisoft was actually just trying to connect with its own fans and create a cool program in which those fans could contribute artistically to the game they love, and even make a little money doing so.