This should come as no surprise, as System76 already had a firmware updater (available in the Pop Shop) for its Thelio and Oryx Pro systems. However, that software was a completely different beast (originally written in Python, whereas the new software was written in Rust). Had you been running a distribution other than Pop!_OS, your only option for updating firmware was the command-line tool fwupd. In a world where more and more users are adopting Linux, that's not smart business. Why? As more new (non-admin) users adopt Linux, the command line will be used less and less. Without a GUI for upgrading a system's firmware, that would leave a large number of machines running out-of-date firmware. It doesn't take a PhD to handle that math.
Now, however, System76 has integrated that firmware updater into the GNOME System Settings tool, although the tool can also be integrated into distributions that use a non-GNOME desktop. This shift should clearly delineate the updating of firmware from standard system updates. That is not to say standard system updates aren't crucial—they are. Without regular updates, your system wouldn't receive security patches, software improvements, and new features. However, without firmware patches, your systems could be vulnerable to seriously damaging malware that hijacks the firmware itself, which is why this move should be seen as so critical for Linux.
Switching to a less resource-intensive OS such as Linux or Chrome OS is likely to be less taxing on your hardware, therefore yielding better performance. Chrome OS might not be the best option, however, as it's based around cloud storage, which isn't cheap.
Linux, on the other hand, offers the best of both worlds. Windows users can easily get used to Linux, and the wide variety of distributions, or distros (different releases of the Linux OS), makes using this OS quite a treat.
Anyone looking to make the switch to Linux can easily accomplish the task using only a bootable USB drive and a laptop. Just make sure the laptop's Wi-Fi adapter is compatible with your choice of Linux distro.
Additionally, there are some things to note when shifting to Linux. You will lose out on some applications, such as Photoshop and Premiere Pro, but since you're going to be installing it on an old system, it's unlikely you'd be using any of that software anyway.
YouTube is going to be essential in your journey to Open Source greatness, and Chris Titus Tech’s ‘First time Linux installation’ series and Switched To Linux’s ‘Distro Reviews’ will provide you with a lot of info when getting started.
A power outage fried hardware within one of Amazon Web Services' data centers during America's Labor Day weekend, causing some customer data to be lost.
When the power went out, and backup generators subsequently failed, some virtual server instances evaporated – and some cloud-hosted volumes were destroyed and had to be restored from backups, where possible, we're told.
A Register reader today tipped us off that on Saturday morning, Amazon's cloud biz started suffering a breakdown within its US-East-1 region.
Our tipster told us they had more than 1TB of data in Amazon's cloud-hosted Elastic Block Store (EBS), which disappeared during the outage: they were told "the underlying hardware related to your EBS volume has failed, and the data associated with the volume is unrecoverable."
[...]
Unlucky customers who had data on the zapped storage systems were told by AWS staff that, despite attempts to revive the missing bits and bytes, some of the ones and zeroes were permanently scrambled: "A small number of volumes were hosted on hardware which was adversely affected by the loss of power. However, due to the damage from the power event, the EBS servers underlying these volumes have not recovered.
The Python Package Index (PyPI) is a repository of software for the Python programming language. PyPI helps you find and install software developed and shared by the Python community.
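Beyond the web interface, PyPI also exposes package metadata programmatically through its JSON API. Here is a minimal Python sketch querying it; "requests" is just an example package name, not anything special to the API:

```python
import json
import urllib.request

# Query PyPI's JSON API for a package's metadata; the endpoint
# https://pypi.org/pypi/<name>/json returns the latest release info.
with urllib.request.urlopen("https://pypi.org/pypi/requests/json") as resp:
    info = json.load(resp)["info"]

print(info["name"], info["version"])  # package name and latest version
print(info["summary"])                # one-line package description
```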
Unix virtual memory when you have no swap space, Dsynth details on Dragonfly, Instant Workstation on FreeBSD, new servers, new tech, Experimenting with streaming setups on NetBSD, NetBSD's progress towards Steam support thanks to GSoC, and more.
What's it like building a startup with Python and going through a tech accelerator? You're about to find out. On this episode, you'll meet Elissa Shevinsky from Faster Than Light. They are building a static-code-analysis-as-a-service business for Python and other code bases. We touch on a bunch of fun topics including static code analysis, entrepreneurship, and tech accelerators.
Intel's open-source team continues to work on Software Guard Extensions (SGX) support for the Linux kernel and its memory enclaves; the patches are now up to their twenty-second revision, but it's not clear that this code will be ready for the upcoming Linux 5.4 cycle.
Intel has worked on these Linux patches for an excruciatingly long time, with the v21 patches having come out in mid-July. Now, at the start of September, comes v22 of these patches that provide support for hardware-protected/encrypted memory regions via SGX enclaves.
The encryption of data at rest is increasingly mandatory in a wide range of settings from mobile devices to data centers. Linux has supported encryption at both the filesystem and block-storage layers for some time, but that support comes with a cost: either the CPU must encrypt and decrypt vast amounts of data moving to and from persistent storage or it must orchestrate offloading that work to a separate device. It was thus only a matter of time before ways were found to offload that overhead to the storage hardware itself. Satya Tangirala's inline encryption patch set is intended to enable the kernel to take advantage of this hardware in a general manner.
The Linux storage stack consists of numerous layers, so it is unsurprising that an inline encryption implementation will require changes at a number of those layers. Hardware-offloaded encryption will clearly require support from the device driver to work, but the knowledge of which encryption keys to use typically comes from the filesystem running at the top of the stack. Communicating that information from the top to the bottom requires a certain amount of plumbing.
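To make that plumbing concrete, here is a toy model of the idea. Every name in this Python sketch is invented for illustration (the real implementation is C code in the kernel's block layer), but it shows the shape of the mechanism: the filesystem attaches an encryption context to each I/O request, the layers in between pass it through untouched, and the driver at the bottom programs the hardware with the key.

```python
from dataclasses import dataclass
from typing import Optional

# Toy model of inline-encryption plumbing. All names are invented
# for illustration; this is not the kernel's actual API.

@dataclass
class CryptContext:
    key_slot: int        # hardware keyslot holding the encryption key
    data_unit_num: int   # IV-like counter, e.g. the file block number

@dataclass
class IORequest:
    sector: int
    data: bytes
    crypt: Optional[CryptContext] = None   # attached by the filesystem

class InlineCryptoDisk:
    """A driver for hardware that encrypts data as it is written."""
    def submit(self, req: IORequest) -> None:
        if req.crypt is not None:
            # The hardware encrypts using the programmed keyslot;
            # the CPU never performs the encryption itself.
            print(f"sector {req.sector}: hardware-encrypt, "
                  f"keyslot {req.crypt.key_slot}")
        else:
            print(f"sector {req.sector}: written unencrypted")

# The filesystem, at the top of the stack, knows which key applies
# (e.g. a per-file key) and tags the request; the intermediate
# layers only need to pass the context through unchanged.
disk = InlineCryptoDisk()
disk.submit(IORequest(sector=2048, data=b"...",
                      crypt=CryptContext(key_slot=3, data_unit_num=17)))
```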
Looking up a file given a path name seems like a straightforward task, but it turns out to be one of the more complex things the kernel does. Things get more complicated if one is trying to write robust (user-space) code that can do the right thing with paths that are controlled by a potentially hostile user. Attempts to make the open() and openat() system calls safer date back at least to an attempt to add O_BENEATH in 2014, but numerous problems remain. Aleksa Sarai, who has been working in this area for a while, has now concluded that a new version of openat(), naturally called openat2(), is required to truly solve this problem.
The immediate purpose behind openat2() is to allow a program to safely open a path that is possibly under the control of an attacker; in practice, that means placing restrictions on how the lookup process will be carried out. Past attempts have centered around adding new flags to openat(), but there are a couple of problems with that approach: openat() doesn't check for unknown flags, and the number of available bits for new flags is not large. The failure to check for unknown flags is a well-known antipattern. A program using a path-restricting flag needs to know whether the requested behavior is understood by the kernel or not; the alternative is to accept security vulnerabilities on kernels that do not implement those flags.
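For a sense of what this looks like in practice, here is a sketch of invoking openat2() from Python via ctypes. The syscall number (437 on x86_64) and the constants match the interface that eventually landed in Linux 5.6; treat them as assumptions to check against your own kernel headers.

```python
import ctypes
import os

# Assumptions to verify against <linux/openat2.h> on your system:
SYS_OPENAT2 = 437        # x86_64 syscall number (Linux 5.6+)
RESOLVE_BENEATH = 0x08   # reject any resolution escaping the dirfd tree

class OpenHow(ctypes.Structure):
    # struct open_how: three 64-bit fields
    _fields_ = [("flags", ctypes.c_uint64),
                ("mode", ctypes.c_uint64),
                ("resolve", ctypes.c_uint64)]

libc = ctypes.CDLL(None, use_errno=True)
libc.syscall.restype = ctypes.c_long

def openat2(dirfd, path, flags, resolve):
    how = OpenHow(flags=flags, mode=0, resolve=resolve)
    fd = libc.syscall(SYS_OPENAT2, dirfd, path.encode(),
                      ctypes.byref(how), ctypes.sizeof(how))
    if fd < 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    return fd

# Open a file strictly beneath an untrusted directory: a hostile
# path such as "../../etc/passwd" now fails instead of escaping.
dirfd = os.open("/srv/untrusted", os.O_RDONLY | os.O_DIRECTORY)
fd = openat2(dirfd, "logs/app.log", os.O_RDONLY, RESOLVE_BENEATH)
```

Because the open_how structure is passed along with its size, the kernel can reject requests containing bits it does not understand -- precisely the unknown-flags check that openat() has never performed.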
The Linux Foundation (LF) Technical Advisory Board (TAB) is meant to give the kernel community some representation within the foundation. In a "birds of a feather" (BoF) session at the 2019 Open Source Summit North America, four TAB members participated in an "Ask the TAB" session. Laura Abbott organized the BoF and Tim Bird, Greg Kroah-Hartman, and Steven Rostedt joined in as well. In the session, the history behind the TAB, its role, and some of its activities over the years were described.
Abbott started things off by noting that she is one of the newest members of the TAB, so she asked Kroah-Hartman, who is the longest-serving member, to give some of the history. At the time the Open Source Development Labs (OSDL) merged with the Free Standards Group in 2007 (which he characterized as "when we overthrew OSDL") to form the LF, the kernel community was quite unhappy with how OSDL had been run. The kernel developers made a list of six or eight demands and the LF met five of them. One of those was to form an advisory board to help the organization with various technical problems it might encounter.
Writing is not an easy task, and therefore any assistance provided by a useful app can be very much appreciated, and even totally relied upon. The apps included here needed to satisfy only three criteria to make it to this list: they had to be compatible with Linux, they had to be a writing tool but not a word-processing app, and they had to be great.
Linux command-line users will be familiar with the "man" command. It stands for manual pages: every Linux command or utility comes with a set of instructions describing its possible usage. Man pages are of great help while working on the command line, but the documentation they provide is often too lengthy or too confusing to learn from, and it does not include any real-life examples. All it includes is the details of what that particular command does and what its available switches (also called options) are.
TLDR (Too Long; Didn't Read) is a community-driven effort to improve the default Linux man pages. It provides easy-to-understand documentation for every command or utility, and it demonstrates each command's usage with simple examples. In this article, we will learn how to install TLDR and how to use it to work better on the Linux terminal.
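As a taste of how lightweight the mechanism is: the pages that tldr clients display are plain Markdown files in the community's GitHub repository, one file per command, so a client mostly fetches, caches, and pretty-prints them. A minimal Python sketch, assuming the repository keeps the page for tar at pages/common/tar.md on its main branch:

```python
import urllib.request

# Fetch the community-maintained tldr page for "tar" directly from
# the tldr-pages repository. Real clients add caching, colorized
# output, and platform-specific (linux/osx/windows) page lookup.
URL = ("https://raw.githubusercontent.com/tldr-pages/tldr/"
       "main/pages/common/tar.md")

with urllib.request.urlopen(URL) as resp:
    print(resp.read().decode("utf-8"))
```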
Valve have officially announced that the beta for the overhaul of the Steam Library is coming on September 17th. Valve gave a demo to some press outlets recently in a closed-door session and now they've formally announced it.
This is something Valve have been working on for a long time and it is absolutely needed. The current Library feature of Steam is incredibly simplistic, and when you've built up a bigger collection of games it becomes a bit useless really.
Due to the US Labor Day holiday, Valve was slow in updating the monthly figures for their controversial Steam Survey of hardware/software data from polled users. At least in their initial batch of August numbers, they are reporting a small increase in the Linux gaming population.
By now most Linux gamers either trust or hate the Steam Survey, with some arguing that it is inaccurate, biased, or simply uses a broken methodology for polling enough users. But most cross-platform game developers report it to be fairly accurate, with their Linux sales generally aligning with the Steam Survey metrics, or at least not wildly different from them. If you are interested, the magic number for August is 0.8%.
At the recent GUADEC in Thessaloniki, I gave a talk about some strands of work that I’ve been doing around UX strategy and design/development process. I ended up skipping over some points that I’d wanted to make, and I also had some great conversations with people about the talk afterwards, so I wanted to share an updated version of the talk in blog form.
I’ll be splitting the talk into multiple posts. This first post is about creating a UX strategy for GNOME. As you might expect, this is a plan for how to improve GNOME’s user experience! In particular, it tries to answer the question of which areas and features need to be prioritised.
The approach I’ve taken in creating this strategy follows a fairly standard format: analyse the market, research user needs, identify and analyse competitors, then use that data to design a product which will succeed in the current desktop market. The main goal is to offer a product which meets user needs better than the alternatives.
In later posts in the series, I’m going to show off a set of updated designs for GNOME, which I think are a good place to start implementing the strategy that I’m laying out. For many readers, those later posts will probably be more interesting! However, I do think it’s useful to provide the strategy, in order to provide background and put that work in context.
This year we had seven people from the Canonical Ubuntu desktop team in attendance. Many other companies and projects had representatives (including Collabora, Elementary OS, Endless, Igalia, Purism, Red Hat, SUSE and System76). I think this was the most positive GUADEC I've attended, with people from all these organizations actively leading discussions and a general consideration of each other as we try and maximise where we can collaborate.
Of course, the community is much bigger than a group of companies. In particular, it was great to meet Carlo and Frederik from the Yaru theme project. They've been doing amazing work on a new theme for Ubuntu and it will be great to see it land in a future release.
In the annual report there was a nice surprise; I made the most merge requests this year! I think this is a reflection on the step change in productivity in GNOME since switching to GitLab. So now I have a challenge to maintain that for next year...
This is the official release announcement for IPFire 2.23 - Core Update 135, which is packed with a new kernel and various bug fixes. We recommend installing it as soon as possible.
Today's tutorial is about JupyterLab and Fedora 30. You can see an older tutorial with Fedora 29 here. JupyterLab is the next-generation web-based user interface for Project Jupyter. It can be installed using conda, pip, or pipenv.
You can help Tails by testing the second beta for the upcoming version 4.0!
This release fixes many security vulnerabilities. You should upgrade as soon as possible.
Data Modul’s rugged, 10.1-inch “Slim Panel PC” runs Yocto 2.4 with Linux 4.9 on an i.MX6 with a capacitive touchscreen, dual CAN ports, Ethernet, mini-PCIe, and micro-HDMI ports.
Data Modul announced an “ultra-flat” Slim Panel PC with a 10.1-inch capacitive touchscreen that runs Linux on an 800MHz NXP i.MX6 Solo, DualLite, or Quad. The name derives from the 185 x 267mm system’s relatively svelte 31.7mm thickness.
To open-source fans, the lure of open-source voting systems is surely strong. So a talk at 2019 Open Source Summit North America on a project for open-source voting in San Francisco sounded promising; it is a city with lots of technical know-how among its inhabitants. While progress has definitely been made—though at an almost glacially slow speed—there is no likelihood that the city will be voting using open-source software in the near future. The talk by Tony Wasserman was certainly interesting, however, and provided a look at the intricacies of elections and voting that make it clear the problem is not as easy as it might at first appear.
Wasserman is a professor of software management practice at Carnegie Mellon Silicon Valley and a San Francisco resident; he was asked to serve on an advisory committee on open-source voting for the city. San Francisco is about 11x11km, with around 800,000 people; roughly 500,000 of those are registered voters and nearly 350,000 turned out for the November 2018 election. He said that 70% participation by registered voters is a pretty good turnout for the US.
There are two different organizations within the city government that handle elections: the elections commission and the elections department. The commission is tasked with making the policies and plans for elections, while the department actually implements them, runs the elections, and reports the results. The elections department also handles "problem" ballots and registrations; as part of that, it stores 20 years of paper ballots underneath city hall, which he found astonishing.
The goal of the project is to develop the country's first open-source voting system for political elections, which could potentially have a broad impact if it is successful, both locally and nationally. There are other justifications for it as well, including providing transparency for voters and the expectation of saving money. There are only three (down from four due to a merger) companies that sell election systems in the US; they are not cheap and moving to open-source would provide freedom from being locked into those vendors.
In my open source courses, I spend a lot of time working with new developers who are trying to make sense of issues on GitHub and figure out how to begin. When it comes to how people write their issues, I see all kinds of styles. Some people write for themselves, using issues like a TODO list: "I need to fix X and Y." Other people log notes from a call or meeting, relying on the collective memory of those who attended: "We agreed that so-and-so is going to do such-and-such." Still others write issues that come from outside the project, recording a bug or some other problem: "Here is what is happening to me..."
Because I'm getting ready to take another cohort of students into the wilds of GitHub, I've been thinking once more about ways to make this process better. Recently I spent a number of days assembling furniture from IKEA with my wife. Spending that much time with Allen keys got me thinking about what we could learn from IKEA's work to enable contribution from customers.
Workarea, the enterprise commerce platform built to unify commerce, content management, merchant insights and search, is releasing its software to the open source community. Built upon open source technologies from inception, including Ruby on Rails, MongoDB, and Elasticsearch, Workarea touts unparalleled flexibility and scale in modern cloud environments. The platform source code and demo instructions are now available on GitHub.
The waiting list for this year’s Linux Plumbers Conference is now closed. All of the spots available have been allocated, so anyone who is not registered at this point will have to wait for next year. There will be no on-site registration. We regret that we could not accommodate everyone. The good news is that all of the microconferences, refereed talks, Kernel summit track, and Networking track will be recorded on video and made available as soon as possible after the conference. Anyone who could not make it to Lisbon this year will at least be able to catch up with what went on. Hopefully those who wanted to come will make it to a future LPC.
The Linux Plumbers Conference has filled up and has closed its waiting list.
Several years ago we started a geolocation experiment called the Mozilla Location Service (MLS) to create a location service built on open-source software and powered through crowdsourced location data. MLS provides geolocation lookups based on publicly observable cell tower and WiFi access point information. MLS has served the public interest by providing location information to open-source operating systems, research projects, and developers.
Today Mozilla is announcing a policy change regarding MLS. Our new policy will impose limits on commercial use of MLS. Mozilla has not made this change by choice. Skyhook Holdings, Inc. contacted Mozilla some time ago and alleged that MLS infringed a number of its patents. We subsequently reached an agreement with Skyhook that avoids litigation. While the terms of the agreement are confidential, we can tell you that the agreement exists and that our MLS policy change relates to it. We can also confirm that this agreement does not change the privacy properties of our service: Skyhook does not receive location data from Mozilla or our users.
Our new policy preserves the public interest heart of the MLS project. Mozilla has never offered any commercial plans for MLS and had no intention to do so. Only a handful of entities have made use of MLS API Query keys for commercial ventures. Nevertheless, we regret having to impose new limits on MLS. Sometimes companies have to make difficult choices that balance the massive cost and uncertainty of patent litigation against other priorities.
Mozilla has long argued that patents can work to inhibit, rather than promote, innovation. We continue to believe that software development, and especially open-source software, is ill-served by the patent system. Mozilla endeavors to be a good citizen with respect to patents. We offer a free license to our own patents under the Mozilla Open Software Patent License Agreement. We will also continue our advocacy for a better patent system.
If one were to ask a group of free-software developers whether the community needs more software licenses, the majority of the group would almost certainly answer "no". We have the licenses we need to express a range of views of software freedom, and adding to the list just tends to create confusion and compatibility issues. That does not stop people from writing new licenses, though. While much of the "innovation" in software licenses in recent times is focused on giving copyright holders more control over how others use their code (while still being able to brand it "open source"), there are exceptions. The proposed "Cryptographic Autonomy License" (CAL) is one of those; its purpose is to give users of CAL-licensed code control over the data that is processed with that code.
Van Lindberg first went to the Open Source Initiative's license-review list with the CAL in April. At that point, it ran into a number of objections and was rejected by the OSI's license review committee. The license has since been revised to address (most of) the objections that were raised; a third version with minor tweaks was posted on August 22.
At its core, the CAL is a copyleft license; distribution of derived products is only allowed if the corresponding source is made available under the same (or a compatible) license. The third version added one exception: release of source can be delayed for up to 90 days if required to comply with a security embargo.
New business models and technological advances have fundamentally disrupted the open-source industry. Unlike on-prem solutions, which are installed in a user environment, cloud-based software remains hosted on the vendor's servers and is accessed by users through a web browser. Because cloud-based offerings do not involve software distribution, the copyleft effect of open source licences is not triggered.
Large cloud providers use their market power and infrastructure to generate significant revenues by offering proprietary services around successful open source projects, thus depriving such projects of an opportunity to commercialise similar services.
[...]
Whether the benefits of employing the Commons Clause outweigh the potential risks is likely to require a case-by-case analysis. The community consensus on the four software freedoms [the freedom to run the program for any purpose, the freedom to modify it for private or public use, the freedom to make copies and distribute the program and its derivatives] is under continuous pressure for modification. Indeed, reshaping the portfolio of freedoms may not necessarily be a threat to open source as we know it, but rather an evolution thereof.
As Heather Meeker, the drafter of the Commons Clause, has noted, the choice is often between the full proprietary route and source-available licensing. By choosing the latter, we may preserve at least some of the freedoms.
Before a program can be run, it needs to be built. It's a well-known fact that modern software, in general, consumes more runtime resources than before, sometimes to the point of forcing users to upgrade their computers. But it also consumes more resources at build time, forcing operators of the distributions' build farms to invest in new hardware, with faster CPUs and more memory. For 32-bit architectures, however, there exists a fundamental limit on the amount of virtual memory, which is never going to disappear. That is leading to some problems for distributions trying to build packages for those architectures.
Indeed, with only 32-bit addresses, there is no way for a process to refer to more than 4GB of memory. For some architectures, the limit is even less — for example, MIPS hardware without the Enhanced Virtual Addressing (EVA) extensions is hardwired to make the upper 2GB of the virtual address space accessible from the kernel or supervisor mode only. When linking large programs or libraries from object files, ld sometimes needs more than 2GB and therefore fails.
Object detection is a technology that falls under the broader domain of Computer Vision. It deals with identifying and tracking objects present in images and videos. Object detection has multiple applications such as face detection, vehicle detection, pedestrian counting, self-driving cars, security systems, etc.
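As a concrete starting point, face detection -- one of the applications mentioned above -- takes only a few lines with OpenCV's bundled Haar-cascade model. This is a minimal sketch assuming the opencv-python package is installed; people.jpg is a placeholder for your own image.

```python
import cv2

# Load OpenCV's pretrained frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("people.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale scans the image at several scales and returns
# one bounding box (x, y, width, height) per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("people_detected.jpg", img)
print(f"Detected {len(faces)} face(s)")
```

Modern detectors (SSD, YOLO, and friends) replace the hand-crafted cascade with a neural network, but the shape of the task -- an image in, labelled bounding boxes out -- stays the same.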
I’ve been following the status of Lisp on ppc64le lately.
I’m running ppc64le Debian sid. Just after I had set up my system, I did some experimentation with what Debian packages had to offer. ECL was the only Lisp that worked, so I started using it for various projects. (I’ve since learned on #sbcl that CLISP built from source is also a good option.)
Ideally I wanted to be able to use SBCL, so I wondered how far into an SBCL bootstrap I could get with ECL as the host compiler. A few months ago, I found the answer was not very far.
This month in 2019 marks the 60th anniversary of COBOL. That’s right: In a world in which something can be outdated almost as fast as it moves off the shelf, COBOL (common business-oriented language), a code technology that predates Microsoft Windows, UNIX, Java and Linux, is 60 years old.
COBOL is a high-level programming language for business applications. It was the first popular language designed to be operating system-agnostic and is still in use in many financial and business applications today.
COBOL was designed for business computer programs in industries such as finance and human resources. Unlike some high-level computer programming languages, COBOL uses English words and phrases to make it easier for ordinary business users to understand. The language was based on Rear Admiral Grace Hopper's 1940s work on the FLOW-MATIC programming language, which was also largely text-based. Hopper, who served as a technical consultant to the committee that defined COBOL, is sometimes referred to as the "grandmother of COBOL."
The wonderful folks at Paleotronic (previously) have rounded up scans of articles from 1980s-era computer magazines that advised new computer users on navigating the burgeoning world of dial-up BBSes.
Dial-ups were my introduction to networked computing. We had an acoustic coupler and teletype connected to a PDP at the University of Toronto in 1977 when I was 6, but it wasn't until we got an Apple ][+ and a Hayes modem card in 1979 that the world opened up for me. That system didn't have enough expansion slots to accommodate all the cards we had for it, so installing the modem meant swapping out the 80 column card, which meant that we lost access to lower-case characters when we were online. My modem days started out in ALL CAPS.
Intel announced at IFA 2019 in Berlin that their Core i9 9900KS processor will be released next month.
For those losing track, the Core i9 9900KS is Intel's all-core 5GHz processor as a step above the existing Core i9 9900K. On the downside, it's still a 14nm-derived Coffeelake part.
The 5GHz all-core turbo frequency with the i9-9900KS is said to be possible with normal air cooling. The base frequency of this eight-core / sixteen thread processor will be 4.0GHz, a 400MHz increase over the 9900K. Pricing and TDP figures have yet to be announced.
[...]
Intel hasn't yet indicated whether we'll be sampled with the 9900KS or Cascadelake-X for Linux benchmarking, but hopefully we will be, as many Linux users are certainly interested in the performance potential with real-world workloads -- which was also a common theme of Intel's IFA 2019 talk.
For almost a year, threat actors could exploit a vulnerability in Samba software that allowed them to bypass file-sharing permissions and escape outside the share root directory.
The security flaw was introduced in Samba 4.9.0, released on September 13, 2018, and can be leveraged under certain conditions.
Authentication is the first step to a solid defense, as most breaches start with poor authentication practices. A bad actor will procure a legitimate login credential (a password) from an unsuspecting user via phishing, social engineering, or even plain theft. When the password is obtained, it allows the bad actor to log in to systems as someone they are not. The network doesn’t recognize that it is a malicious actor entering the credentials, allowing them to access anything as a "legitimate" user who has permission to access. Fortunately, several tactics and technologies can help further strengthen authentication tactics.
[...]
Single sign-on (SSO) technologies and implementing multi-factor authentication can eliminate the problem of too many passwords and the lack of security as it relates to the password itself. SSO enables a user to utilize a single, strong password across the entire range of systems they need to access. Another advantage of SSO is that it can apply stronger authentication methods to systems that don’t natively support them. For example, natively many UNIX and Linux systems transmit passwords in clear-text -- an obvious risk; but an SSO solution that enables an Active Directory (AD) log-on to work for Unix/Linux will automatically extend AD’s password encryption and stronger authorization to those non-Windows systems.
WordPress 5.2.3 is now available!
This security and maintenance release features 29 fixes and enhancements. Plus, it adds a number of security fixes—see the list below.
These bugs affect WordPress versions 5.2.2 and earlier; version 5.2.3 fixes them, so you’ll want to upgrade.
If you haven’t yet updated to 5.2, there are also updated versions of 5.0 and earlier that fix the bugs for you.
Caroline said: “We’re used to official greenwash from government, but what we saw today was a mere slapdash swipe, a few drops of paint, on a canvas that otherwise entirely overlooked our climate emergency.
“Our environment was totally ignored in the overview of the UK economy, and the Chancellor only got around to a specific climate announcement two-thirds of the way into the speech.
“Any Chancellor fit for office would have announced a Green New Deal as an economic cure for the triple crisis of inequality, climate breakdown and failed finance.
“This spending review doubles down on a failed economic model that is trashing our environment, and trashing the prospects for young people.
Senator John Barrasso and the Wall Street Journal (WSJ) editorial board are once again attacking the federal electric vehicle tax credit, and are once again relying on easily debunked talking points born of the Koch network’s influence machine.
Senator Barrasso has reportedly sent a letter to Republican colleagues in the Senate, advising them not to extend the electric vehicle (EV) tax credit.
The Wall Street Journal's editorial board cheered Senator Barrasso's act in an editorial published Tuesday. The deception and falsehoods are so rife in the WSJ editorial that it begs for rebuttal. So here goes.
In-house counsel in both the UK and the US say they are still struggling to get to grips with data protection laws and that strict rules around revealing details of potential infringers have meant some threats go unchecked.
Cases in which FRAND licences are discussed, and where an injunction is requested if no licence is taken, more closely resemble unpaid-debt claims than IP-related cases and are thus less suitable for preliminary proceedings. Further, the fact that the case was complex, not only in relation to the patented subject-matter but also because of the international implications, led the judge to the conclusion that the case was not suitable for preliminary proceedings.
Today, three years after its inception on 1 September 2016, and with more data now available, it is time to follow up on our 2017 blog post regarding the efficiency and reliability of patent cases adjudicated by the Swedish Patent and Market Court.
The Berkheimer case has been pending for a while. Steven Berkheimer first filed his infringement lawsuit against HP back in 2012. (N.D. Ill.) In 2016, the district court dismissed the case on summary judgment — finding the claims ineligible. That decision included two key legal findings (1) eligibility is purely a question of law; and (2) the clear and convincing evidence standard does not apply to questions of eligibility. The appeal then took two years and in 2018 the Federal Circuit vacated that holding — finding that underlying issues of fact may be relevant here to the question of patent eligibility.
[...]
As I mentioned previously, the petition creates a false dichotomy. As we know from claim construction, not all questions of fact are given to a jury to decide. And the Federal Circuit did not hold that eligibility is "a question of fact for the jury." Despite the intentional misdirection by HP's counsel Mark Perry (Gibson Dunn) and David Salmons (Morgan Lewis), the case is interesting and important, and I look for the court to take it.
When I wrote about this case back in May 2019, I headlined the post with the court's statement that "The Doctrine of Equivalents Applies ONLY in Exceptional Cases." I noted that the court's statement was "a major step without precedential backing." It is possible that the court was simply intending to state that DOE is rare. The decision was problematic, though, because "exceptional case" is a term of art used elsewhere in patent law, and the statement suggests the creation of an additional test prior to allowing a patentee to rely upon DOE. Citing my post, Amgen petitioned for rehearing on the issue.
[...]
The decision as it reads now recognizes that DOE winners will be rare — and that rarity stems from the nature of the DOE test. In particular, DOE only applies when the accused device or method is different from what is claimed but may not be “substantially different” on an element-by-element basis.
Prior to filing its IPR petition, Cisco had filed a declaratory judgment (DJ) action in district court seeking to invalidate Chrimar's U.S. Patent 8,902,760. However, rather than pursuing the case to judgment or settlement, Cisco dismissed its lawsuit without prejudice. Under typical principles of preclusion, this type of dismissal would not prohibit or estop Cisco from challenging the patent again at a later date. However, the question here was how those rules mesh with the Patent Act.
[...]
The petitioner Cisco easily checks the boxes of the statute — it filed a civil action challenging validity before it filed the IPR petition. However, Cisco argued that the voluntary dismissal should reset the case. The voluntary-dismissal argument had already been foreclosed in the 315(b) situation.
Cisco argued that 315(a) (unlike 315(b)) parallels estoppel provisions and thus should follow the same rules. The PTAB panel rejected that argument — following the Federal Circuit’s lead here to strictly apply the statute as written and without judicial exception. The statute “does not include an exception.” Thus, the holding: Cisco’s IPR Petition is barred by its prior DJ action even though voluntarily dismissed.
On Monday, the Patent Trial and Appeal Board (PTAB) issued an Order deciding which of the parties' proposed motions it will deign to consider in the first (appropriately called "motions") phase of the newly declared interference regarding priority of invention to CRISPR technology. (The Junior Party is the University of California/Berkeley, the University of Vienna, and Emmanuelle Charpentier, abbreviated to "CVC" throughout; the Senior Party is The Broad Institute, Massachusetts Institute of Technology, and Harvard University.) The Board also, separately, redeclared the interference, and by doing so avoided considering one of CVC's motions (while nevertheless arriving at Berkeley's desired outcome).
[...]
Separately, the Board redeclared the interference to add four of CVC's pending applications to the interference. There are no other changes to the declaration; the Broad and its co-owners remain Senior Party and CVC remains Junior Party (although, as the Board noted in its accompanying Order, this could change depending on the outcome of the motions the parties are authorized to file). Nor were there any other changes in the patents and applications in interference, the accorded priority benefits, or the claims for each party corresponding to the count (i.e., substantially all of them). As the interference has been set out, the Board seems ready to address all remaining issues between the parties to produce (subject to appeal) a final determination of who owns CRISPR.
This Kat instantly liked the facepalm emoji when it was first released in 2016. It felt like some hidden inner voice was speaking out, echoing in her own ears. It was something that had been missing for some time – an expression of frustration or embarrassment at the difficulties of a tough situation:
‘The iPhone is finally getting a facepalm emoji’ -- Telegraph
‘Apple gives the world what it needs, a facepalm emoji’ -- mashable
‘The One Emoji You've All Been Waiting For Is FINALLY Coming To Your Phone...’ -- popbuzz
‘HOORAY! A FACEPALM EMOJI IS FINALLY COMING’ -- metronieuws.nl
[...]
Jin disagreed with the criticism. In an interview, he said that, as the owner of a clothing-trading company, he has always sought trade-mark protection for the patterns printed on the clothes by his company. As to the facepalm pattern, he had been using it on his products for almost two years, and therefore thought he should protect it as a trade mark to prevent future trade-mark squatting on rivals’ clothing products. Pressed by the reporter on whether he had known Tencent was in possession of the said emoji, Jin replied, ‘This emoji was not so popular when I started using it’. In addition, since he had only applied for the goods in class 25, he believed it ‘would not be precluding Tencent (the owner of WeChat), or the netizens and WeChat users, from using the facepalm emoji’.