For some of you, it is time to return to your educational institution and continue the important process of learning about the world around you. Maybe it is your first time being part of higher education, while some of you might be long-time academic researchers and associates. For those who are sick of thick laptops weighing down their backpacks and who also want something designed with security in mind, what better way to start the school year than with a Purism laptop?!
Have you been patiently waiting for the ability to run Linux apps on your Chromebook since word of Crostini first surfaced?
If so, your patience is about to be well rewarded.
Google is preparing to roll out this exciting Chrome OS feature as part of its next OS update, giving more users the opportunity to install and run Linux apps on their Chromebook.
The “Crostini Project” that brought Linux apps to Chromebooks has seemingly accelerated in development as of late. What appeared to be a developer-centric experiment has quickly spread to a large number of Chrome devices and has already moved into the Beta Channel of Chrome OS.
You can now install Linux apps on dozens of Chromebook models by flipping a switch in the Beta channel and executing a few simple commands. Even more exciting is the fact that support for Debian (.deb) packages is here, meaning you can simply download the application file you want and double-click to install it, just like you would on any other OS.
If that’s not enough, you can even install the GNOME Software Center and get apps from the “store.” All of these combined will surely bring Linux apps to the forefront of Chrome OS’s usability and versatility.
Chrome OS is Google's Linux-based operating system for Chromebook devices. The tech giant is currently testing support for installing and running Linux apps on Chrome OS, a feature that will be introduced to the masses with the next stable release of the operating system, Chrome OS 69, though it'll still be available in beta form.
"Linux (Beta) for Chromebooks allows developers to use editors and command-line tools by adding support for Linux on a Chrome device," said Google in the release notes. "After developers complete the set up, they’ll see a terminal in the Chrome launcher. Developers can use the terminal to install apps or packages, and the apps will be securely sandboxed inside a virtual machine."
For the 25th anniversary of the Linux kernel, I gave a 25 years of Linux in 5 minutes lightning talk at All Things Open in Raleigh. As we approach the kernel's 27th anniversary, I'd like to take a stroll down memory lane and look back at the three releases that have been most significant to me.
Google announced earlier this year that Linux apps would eventually be supported on Chrome OS. The feature has been available for months in the Canary and Dev channels, and now works on a variety of Chromebooks from multiple manufacturers. A merged pull request on the Chromium Gerrit now confirms that any device running the Linux kernel 3.14 (or older) will never get Linux app support.
For context, Linux apps on Chrome OS run in a protected container to prevent malicious software from interfering with the main system. This container requires features only found in recent versions of the Linux kernel, like vsock (which was added in Linux 4.8). Chromebooks usually stick with whatever kernel version they shipped with, and many popular models are running kernels too old for these containers.
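For a concrete sense of that kernel dependency, here is a minimal, hypothetical C sketch of opening the kind of vsock channel such a VM setup relies on; the port number is invented, and the socket() call itself fails on kernels predating AF_VSOCK:

```c
/* Hedged sketch: open a vsock stream socket to the host.
 * Requires AF_VSOCK support (Linux 4.8+); the port is illustrative. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket(AF_VSOCK)"); /* expected on pre-4.8 kernels */
        return 1;
    }

    struct sockaddr_vm addr;
    memset(&addr, 0, sizeof(addr));
    addr.svm_family = AF_VSOCK;
    addr.svm_cid = VMADDR_CID_HOST; /* CID 2: the hypervisor host */
    addr.svm_port = 5000;           /* made-up port for this example */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        perror("connect");

    close(fd);
    return 0;
}
```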
While Linux 4.19 is slated to bring a lot of new features, as we have been covering over the past week and a half, Linus Torvalds is upset with some of these big pull requests being far from perfect -- to the extent of rejecting them.
"So this merge window has been horrible," began Torvalds' latest kernel mailing list post. He went on to explain how he is not going to pull XArray support for Linux 4.19. He got turned off when he was going to look at the code because the XArray pull request was based upon the libnvdimm tree, which were changes Torvalds decided against pulling this cycle anyhow due to code quality concerns. And it was not communicated in the pull request why the XArray pull request was based against the libnvdimm changes, which led to another one of Torvalds' famous email blasts.
The x86 platform driver work was merged today for the Linux 4.19 kernel merge window.
Unless you were affected by one of the quirky devices now fixed up by the platform-drivers-x86 work, it mostly comes down to a random collection of hardware fixes and improvements. The changes range from the ThinkPad ACPI driver enabling support for the calculator key on at least some Lenovo laptops to the ASUS WMI drivers recognizing the lid flip event on the UX360 ZenBook Flip.
Jaegeuk Kim, the creator and lead developer of the Flash-Friendly File-System (F2FS), has finally submitted the big feature updates slated for the Linux 4.19 kernel merge window.
Hundreds (at least) of kernel bugs are fixed every month. Given the kernel's privileged position within the system, a relatively large portion of those bugs have security implications. Many bugs are relatively easily noticed once they are triggered; that leads to them being fixed. Some bugs, though, can be hard to detect, a result that can be worsened by the design of in-kernel APIs. A proposed change to how user-space accessors work will, hopefully, help to shine a light on one class of stealthy bugs.
Many system calls involve addresses passed from user space into the kernel; the kernel is then expected to read from or write to those addresses. As long as the calling process can legitimately access the addressed memory, all is well. Should user space pass an address pointing to data it should not be able to access — a pointer into kernel space, for example — bad things can happen.
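To make the pattern concrete, here is a hedged sketch of a kernel-side handler pulling a structure in from a user-supplied pointer; the handler and request type are invented for illustration, while copy_from_user() is the real accessor that performs the validation:

```c
/* Hypothetical kernel-side handler; only copy_from_user() is real API. */
#include <linux/errno.h>
#include <linux/uaccess.h>

struct demo_request {        /* invented request layout */
    int op;
    unsigned long arg;
};

static long demo_handle(const void __user *uptr)
{
    struct demo_request req;

    /* copy_from_user() checks that uptr points into the caller's
     * address space and is readable; it returns the number of bytes
     * it could NOT copy, so any nonzero result means a bad pointer
     * (e.g. one aimed at kernel memory). */
    if (copy_from_user(&req, uptr, sizeof(req)))
        return -EFAULT;

    /* ... operate only on the private kernel copy, never on uptr ... */
    return 0;
}
```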
"Mounting" a filesystem is the act of making it available somewhere in the system's directory hierarchy. But a mount operation doesn't just glue a device full of files into a specific spot in the tree; there is a whole set of parameters controlling how that filesystem is accessed that can be specified at mount time. The handling of these mount parameters is the latest obstacle to getting the proposed new mounting API into the mainline; should the new API reproduce what is arguably one of the biggest misfeatures of the current mount() system call?
The list of possible mount options is quite long. Some of them, like relatime, control details of how the filesystem metadata is managed internally. The dos1xfloppy option can be used with the FAT filesystem for that all-important compatibility with DOS 1.x systems. The ext4 bsddf option tweaks how free space is reported in the statfs() system call. But some options can have significant security implications. For example, the acl and noacl options control whether access control lists (ACLs) are used on the filesystem; turning off ACLs by accident on the wrong filesystem risks exposing files that should not be accessible.
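For reference, this is roughly how those options are passed today: the mount(2) system call takes a bag of flag bits plus a free-form string that the filesystem parses itself. A minimal sketch, with device, mount point and options chosen purely for illustration:

```c
/* Sketch of the current string-based mount interface. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* MS_RELATIME travels as a flag bit; "acl" and "bsddf" go in the
     * opaque data string that ext4 parses on its own -- the interface
     * whose quirks the proposed new mount API must decide whether
     * to reproduce. */
    if (mount("/dev/sda2", "/mnt/data", "ext4",
              MS_RELATIME, "acl,bsddf") < 0) {
        perror("mount");
        return 1;
    }
    return 0;
}
```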
Reinette Chatre of Intel posted a patch for a new chip feature called Cache Allocation Technology (CAT), which "enables a user to specify the amount of cache space into which an application can fill". Among other things, Reinette offered the disclaimer, "The cache pseudo-locking approach relies on generation-specific behavior of processors. It may provide benefits on certain processor generations, but is not guaranteed to be supported in the future."
Thomas Gleixner thought Intel's work looked very interesting and in general very useful, but he asked, "are you saying that the CAT mechanism might change radically in the future [that is, in future CPU chip designs] so that access to cached data in an allocated area which does not belong to the current executing context wont work anymore?"
Reinette replied, "Cache Pseudo-Locking is a model-specific feature so there may be some variation in if, or to what extent, current and future devices can support Cache Pseudo-Locking. CAT remains architectural."
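CAT itself is driven from user space through the kernel's resctrl filesystem. The sketch below is a rough, hypothetical illustration of constraining a task to part of the L3 cache; it assumes /sys/fs/resctrl is already mounted on CAT-capable hardware, and the group name and capacity bitmask are invented:

```c
/* Hedged sketch of the resctrl interface; group name and mask invented. */
#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>

static int write_str(const char *path, const char *s)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    int rc = (fputs(s, f) >= 0) ? 0 : -1;
    fclose(f);
    return rc;
}

int main(void)
{
    char pid[32];

    /* Creating a directory under /sys/fs/resctrl makes a new
     * resource control group. */
    mkdir("/sys/fs/resctrl/demo", 0755);

    /* Limit the group to the low four ways of L3 cache 0. */
    if (write_str("/sys/fs/resctrl/demo/schemata", "L3:0=f\n") < 0)
        perror("schemata");

    /* Move the current task into the group. */
    snprintf(pid, sizeof(pid), "%d\n", getpid());
    if (write_str("/sys/fs/resctrl/demo/tasks", pid) < 0)
        perror("tasks");

    return 0;
}
```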
We are pleased to announce that the RT Microconference has been accepted into the 2018 Linux Plumbers Conference! The Real-Time patch (also known as PREEMPT_RT) has been developed out of tree since 2004. Although it hasn’t yet been fully merged, several enhancements came to the Linux kernel directly as a result of the RT patch. These include mutexes, high-resolution timers, lockdep, ftrace, RT scheduling, SCHED_DEADLINE, RCU_PREEMPT, cross-arch generic interrupt logic, priority-inheritance futexes, and threaded interrupt handlers, to name a few. All that is left is the conversion of the kernel's spinning locks into mutexes, and the transformation will be complete. There’s talk about that happening by the end of this year or early next year.
LF Networking (LFN), launched on January 1st of this year, has already made a significant impact in the open source networking ecosystem, gaining over 100 members in just the first 100 days. Critically, LFN has also continued to attract support and participation from many of the world’s top network operators, including six new members: KT, KDDI, SK Telecom, Sprint, and Swisscom, announced in May, and Deutsche Telekom, announced just last month. In fact, member companies of LFN now represent more than 60% of the world’s mobile subscribers. Open source is becoming the de facto way to develop software, and it’s the technical collaboration at the project level that makes it so powerful.
Similar to the demos in the LFN Booth at ONS North America, the LFN Booth at ONS Europe will once again showcase the top community-led technical demos from the LFN family of projects. We have increased the number of demo stations from 8 to 10 and, for the first time, are showcasing demos from the big data analytics project PNDA, as well as demos that include the newly added LFN project Tungsten Fabric (formerly OpenContrail). Technology from founding LFN projects FD.io, ONAP, OPNFV, and OpenDaylight will also be represented, along with adjacent projects like Acumos, Kubernetes, OpenCI, Open Compute Project, and OpenStack.
Building on the Virtual Central Office demo shown at the OPNFV Summit last year, a team from Red Hat and 10+ participating companies, including China Mobile, has expanded it to show a mobile access network configuration using vRAN for the LTE RAN and a vEPC built in open source. In another demo showcasing collaboration among 10+ companies, Orange will present its Orange OpenLab, which is based on several LFN projects. OpenLab allows for the management of CI/CD pipelines and provides a stable environment for developers. Other operator-led demos include CCVPN (Cross Domain and Cross Layer VPN) from China Mobile and Vodafone, which demonstrates ONAP orchestration capability, and a demo from AT&T showcasing the design, configuration, and deployment of a closed-loop instance acting on a VNF (vCPE).
Programmers may love hot newer languages like Kotlin and Rust, but according to a recent Cloud Foundry Foundation (CFF) survey of global enterprise developers and IT decision makers, Java and JavaScript are the top-dog enterprise languages.
[...]
This is coming hand-in-glove with the growth of cloud-native development. Multi-cloud users, for example, report using more developer languages, but the majority uses Java and JavaScript, followed by 50 percent saying they use C++.
The CFF's results are confirmed by RedMonk's recent language rankings. RedMonk also placed Java and JavaScript at the top tier of development languages. Java is alive and well.
In contrast to CFF's findings, however, RedMonk found Python and PHP used more frequently than C# and C++, but only marginally. As RedMonk's Stephen O'Grady wrote, "the numerical ranking is substantially less relevant than the language's tier or grouping." All four of these languages are alive and well.
Windmill Enterprise, developer of the Cognida network and platform with a focus on enterprise blockchain innovation, joined the Linux Foundation this week, as well as two of its projects – the LF Networking community and EdgeX Foundry.
Windmill joins existing Linux Foundation members like AT&T, Google, IBM and DellEMC, and companies including Samsung and Analog Devices who are working collaboratively with the EdgeX Foundry community to address complex issues at the edge of IoT and Industrial IoT networks.
When mobile blockchain meets edge computing, IoT and IIoT developers gain a decentralized data management framework. Despite there being thousands of projects using blockchain in service today in finance, healthcare and logistics, its application in mobile services, including IoT, remains nascent.
The Cloud Native Computing Foundation (CNCF) is expanding its roster, announcing that it has accepted the OpenMetrics project as a Sandbox effort.
The CNCF Sandbox is a place for early-stage projects, and it was first announced in March. The Sandbox replaces what had originally been called the Inception project level.
With OpenMetrics, Richard Hartmann (technical architect at SpaceNet, Prometheus team member, and founder of OpenMetrics) aims to bring useful metrics to cloud-native deployments. At its core, OpenMetrics is an effort to develop a neutral metrics exposition format.
"OpenMetrics does not limit or define what metrics to send, on purpose," Hartmann told ServerWatch. "What it does do is define an efficient way to transport those metrics over the wire, and a flexible and powerful way to attach information to them: label sets."
As covered earlier this month, Emil Velikov at Collabora has been working on EGLDevice support for Mesa. These EGL extensions originally developed by NVIDIA are being pursued by Mesa developers for better dealing with the enumeration and querying of multiple GPUs on a system.
Right now the DRI_PRIME environment variable allows toggling between GPUs on systems with two of them (Optimus notebooks have been the main use case), but with EGLDevice support in the Mesa drivers, the choice of GPU for OpenGL rendering can be made by the application/toolkit developer, and it also covers other scenarios like multi-GPU systems running without a display server.
One day after announcing the GeForce RTX 2070/2080 series, NVIDIA has released a new Linux driver. It's not a major new driver branch with the Turing GPU support (that's presumably coming closer to the 20 September launch date), but rather a point release delivering a practical bug fix.
The sole change listed in today's NVIDIA 396.54 driver update is, "Fixed a resource leak introduced in the 390 series of drivers that could lead to reduced performance after starting and stopping several OpenGL and/or Vulkan applications."
Daniel Vetter of Intel's Open-Source Technology Center team has written his first blog post in a while on Linux graphics. In this latest post he is answering why there isn't a 2D user-space API in the Direct Rendering Manager (DRM) code.
While Linux DRM has advanced on many fronts in the past few years, it doesn't offer any generic 2D acceleration API. The reasons for that come down to there being no 2D acceleration standard akin to OpenGL/Vulkan for 3D (granted, there's OpenVG for vector graphics and some other limited alternatives, but nothing as dominant), each hardware blitter engine being different, and other complexities that make 2D acceleration harder than one might otherwise think.
On his blog, Daniel Vetter answers an often-asked question about why the direct rendering manager (DRM) does not have a 2D API (and won't in the future)...
The DRM (direct rendering manager, not the content protection stuff) graphics subsystem in the Linux kernel does not have a generic 2D acceleration API, despite an awful lot of GPUs having more or less featureful blitter units. And many systems need them for a lot of use-cases, because the 3D engine is a bit too slow or too power hungry for just rendering desktops.
It’s a FAQ why this doesn’t exist and why it won’t get added, so I figured I’ll answer this once and for all.
The fourth release candidate for Mesa 18.2.0 is now available.
As per the issue tracker [1] we still have a number of outstanding bugs blocking the release.
The fourth release candidate of Mesa 18.2 is out today, rather than the final release, due to open blocker bugs still persisting.
Mesa 18.2-RC4 ships with 18 fixes: GLSL compiler fixes, RADV Vulkan driver fixes, some Intel i965 work, EGL-on-Android work, and various other not-too-notable bug fixes.
On Monday NVIDIA introduced the GeForce RTX 20 series, while today they have begun sharing some more performance details of these Turing-powered GPUs succeeding the GeForce GTX 1000 "Pascal" series.
NVIDIA has posted about how, with the RTX 2080 graphics card, it's now possible to game at 60 FPS at 4K with HDR capabilities. They have also shared some relative performance metrics of the GTX 1080 vs. the RTX 2080 vs. the RTX 2080 with DLSS, in select games where their deep-learning DLSS technology is supported.
Given Monday's press conference by NVIDIA where they launched the RTX 20 series and much of the two-hour-long event was focused on ray-tracing for games, you may be wondering about the state of Linux affairs...
While the GeForce RTX 20 series should work fine with NVIDIA's proprietary Linux driver come 20 September, NVIDIA's RTX ray-tracing technology is still largely tied to Windows and Direct3D 12. But they are working on bringing RTX support to the Vulkan API, which frees it up to be supported on Linux.
While just yesterday NVIDIA released their 396.54 Linux driver update, which some may overlook, it's actually a significant performance update for Linux gamers – so definitely do not miss out on this update if you’re a Linux gamer using an NVIDIA card. NVIDIA released the 396.54 update specifically to address a resource leak that had been plaguing the drivers going back to the 390 series; the leak was lowering performance after Vulkan and OpenGL applications had stopped and started on the system, though NVIDIA hasn’t gone into specific detail regarding exactly why this was happening.
Yesterday NVIDIA released the 396.54 Linux driver update, and while another point release might feel like a mundane update hot on the heels of the GeForce RTX 2070/2080 series debut, it's actually a significant driver update for Linux gamers. Here are some benchmarks showcasing the performance fix that warranted this new driver release.
As mentioned in yesterday's article, 396.54 was released to fix a resource leak that had existed going back to the 390 series driver. This resource leak could lead to lower performance after several OpenGL or Vulkan applications have started/stopped on the system... That's about all of the details they've made public. But knowing that it was performance-related, and that NVIDIA began investigating this issue after seeing some differences in Phoronix benchmark results compared to past articles and spent several weeks analyzing it, I fired up the 396.54 Linux driver right away for some game benchmarking.
Windows 10 is much better at dealing with multithreaded tasks than its predecessors, but Linux has been optimized for both high core counts and NUMA for quite a while, so looking at the performance difference is quite interesting. Phoronix tested a variety of Linux flavours as well as Windows 10 Pro, and the performance differences are striking; in some cases we see results twice as fast on Linux as on Windows 10. That does not hold true for all tests, as there are some benchmarks at which Windows excels. Take a look at this full review as well as those under the fold for a fuller picture.
Flowcharts are a great way to formalize the methodology for a new project. My team at work uses them as a tool in our brainstorming sessions and—once the ideation event wraps up—the flowchart becomes the project methodology (at least until someone changes it). My project methodology flowcharts are high-level and pretty straightforward—typically they contain just process, decision, and terminator objects—though they can be composed of many tens of these objects.
I work primarily in my Linux desktop environment, and most of my office colleagues use Windows. However, we're increasing our use of G Suite in part because it minimizes distractions related to our various desktop environments. Even so, I would prefer to find an open source tool—preferably a standalone app, rather than one that's part of another suite—that offers great support for flowcharts and is available on all the desktops our team uses.
Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 176.
The SDL2 library, which offers a cross-platform hardware abstraction layer and is primarily used by Linux/Windows/macOS/iOS/Android games, now has a sensor API.
Initial work by Sam Lantinga landed in SDL2 on Tuesday, offering a hardware sensor API as the latest major addition to the library. The API is quite generic, being able to query the number of supported sensors, sensor names, types of sensors, read the sensor data, etc.
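Here is a rough C sketch of what exercising the new API looks like, based on the function names as merged (targeting SDL 2.0.9); since the code has only just landed, details could still shift before release, and the three-axis read assumes an accelerometer-style sensor:

```c
/* Hedged sketch of SDL2's just-merged sensor API (SDL 2.0.9 era names). */
#include <stdio.h>
#include <SDL.h>

int main(int argc, char *argv[])
{
    (void)argc; (void)argv;

    if (SDL_Init(SDL_INIT_SENSOR) != 0) {
        fprintf(stderr, "SDL_Init: %s\n", SDL_GetError());
        return 1;
    }

    int n = SDL_NumSensors();
    printf("%d sensor(s) detected\n", n);

    for (int i = 0; i < n; i++) {
        const char *name = SDL_SensorGetDeviceName(i);
        printf("sensor %d: %s\n", i, name ? name : "(unnamed)");

        SDL_Sensor *s = SDL_SensorOpen(i);
        if (s) {
            float data[3]; /* e.g. accelerometer axes */
            if (SDL_SensorGetData(s, data, 3) == 0)
                printf("  data: %f %f %f\n", data[0], data[1], data[2]);
            SDL_SensorClose(s);
        }
    }

    SDL_Quit();
    return 0;
}
```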
The support we talked about earlier this month for offering VKD3D-based Direct3D 12 support on macOS for Windows games/applications running under Wine is now merged.
Crazy Justice [Steam], the third-person shooter with Battle Royale modes, is supposed to be entering Early Access tomorrow, so they've finally put out a proper gameplay video.
The Battle Royale modes are supposed to be free to play; however, we've been getting some conflicting reports from their Discord channel. It's not entirely clear if they will be free to play while the game is in Early Access or not. It would be strange if they weren't, considering their Steam page clearly says so.
In a move that's both hilarious and quite important, GOG have launched a new website named 'FCK DRM' to help people understand what DRM is and how it can be harmful.
I'm sure most of you know by now how much of a nuisance DRM can be; it's in games, movies and more, and its purpose is supposedly to deter piracy. However, a fair amount of the time it ends up hurting people purchasing games from legitimate sources.
Molecats is a puzzle game that gives you indirect control, as you steer your little friends through the environment by changing it. They've announced that it's officially releasing on August 28th.
With an interesting top-down visual style, The Other Half [Official Site] looks really quite compelling and it's releasing with Linux support on November 2nd.
After watching the trailer, I instantly felt like I needed to know more and see more.
PULSAR: Lost Colony is a co-op starship simulation game where you and some friends can assume specific roles on your very own ship. It's a very cool idea, and their latest content update is out. I've played it on and off and it very much speaks to my inner Star Trek nerd; being the captain of my own ship and being able to order friends about is pretty amusing.
Are you itching to play some Battle Royale on Linux? Now is your chance as the indie FPS game War Brokers [Steam, Official Site] just added a Battle Royale mode.
It probably won't obviate the need for Linux gamers to beg for ports of their favorite games. But Steam's update to Steam Play, its buy-once, play-on-any-platform system, intends to improve its ability to deliver a no-compromise gaming experience when playing Windows games on a Linux box. And it'll make it easier for developers who use the popular Vulkan API to create compatible games.
In an announcement on its Steam for Linux group on Tuesday, Valve rolled out a beta of the new version, with an initial list of more than 25 games that have been checked for compatibility. It includes a couple of Warhammer 40,000: Dawn of War games (Dark Crusade and Soulstorm), Star Wars: Battlefront 2, Tekken 7 and Nier: Automata. Steam seems to be going through the entire catalog checking existing games, creating a whitelist of titles that deliver an "identical (save for an expected moderate performance impact) experience."
Valve wants as many gamers as possible able to play titles from the Steam store, which is why for the past eight years buying a Steam game meant getting Windows, Mac, and Linux versions if they were available. This is called Steam Play, and it just received a major overhaul to open up thousands more games to Linux.
Linux users now have access to many more Windows titles on Steam with a new version of Steam Play, according to a blog post from Valve on Tuesday.
While those who use the Linux operating system currently have access to “more than 3000” Linux-compatible games in the Steam Store, this new update will eventually give them access to the entire back catalog of games previously only playable by Windows (and sometimes Mac) users.
Steam Play, released in 2010, allows users to purchase games once on Steam and then have access to the title on all available computer platforms for the game, whether Windows, Linux, or Mac. This new update to Steam Play is a huge upgrade for Linux users. Valve also wrote that there is an added benefit for developers, who can “easily leverage their work from other platforms to target Linux.”
In brief: Valve has always cared about its Linux fanbase, but its latest announcement makes that clearer than ever - the company has rolled out a new version of Steam Play that integrates several third-party compatibility tools directly into the Steam Client, making it easier than ever for Linux gamers to play otherwise-incompatible titles.
Last week, we reported that Valve appeared to be working on a set of compatibility tools that would streamline the process of playing Steam games on Linux machines.
The information came from a series of database files obtained by the Steam Tracker team, but details were vague, and much was left to the imagination - all we really knew at the time was that the toolset would be an upgraded version of the aging Steam Play.
Valve is working to bring more games to Linux users by rolling out a new beta version of Steam Play, which includes a modified version of Wine, called Proton, to provide compatibility with Windows game titles, the company announced. This essentially allows Linux users to run native Windows games without a Linux port or major hoops to jump through.
MORE GAMES ARE COMING TO LINUX thanks to an update to Valve's Steam Play service that enables Windows-only games to run on the open source operating system.
Steam has been available on Linux for some time but a lot of games on the platform were limited to only running on Windows 10 machines. Now, however, thanks to an update to Steam Play that allows users to access their Steam library on Mac and Linux machines, gaming on Linux looks to be getting a shot in the arm.
For years now, Valve has been making strides with Steam to get away from the closed-off environments of the Windows and App Stores that Microsoft and Apple provide. Valve is all about user freedom and allowing people to play games the way they want. One of the ways Valve did this was by creating a Linux distribution dedicated to getting more game support on the operating system. While somewhat successful, this newest update may be the biggest push yet.
Valve has announced that its "Steam Play" program just received an update that will, essentially, allow Windows games to run on Linux. Called "Proton," this new compatibility tool is based on WINE, a popular piece of compatibility software for the Linux OS. Since the biggest hurdle to getting games running on Linux is a lack of DirectX support, Proton shifts DirectX 11 and 12 functionality to the newer Vulkan API and even natively supports all Steam-supported gamepads.
Steam for Linux launched in February 2013, bringing Valve's popular digital distribution platform off Windows for the first time. Prior to its release, users had been working around the lack of Linux support by installing Steam via the Wine Windows compatibility wrapper; the native Steam client did away with the need for this workaround, but at the cost of only supporting titles that developers had marked as natively supporting Linux. For those with a hefty library of Windows-exclusive titles, the only solution was to keep two copies of Steam installed: One copy of Steam for Linux to install native Linux titles, and one copy of Steam for Windows running via Wine to install native Windows titles.
Valve is best-known these days for its monstrously popular online storefront Steam, but has also spent much of the last six years quietly developing tools and technology to enhance Linux as an ecosystem for PC gaming, making the open-source operating system a more viable alternative to Windows PCs for some gamers.
Valve has announced that the Linux version of its Steam client can now play Windows PC games complete with native Steamworks and OpenVR support. This is a result of the company's efforts to improve quality and performance of Windows compatibility solutions for Steam games by supporting Linux compatibility layers like Wine and eventually integrating these tools directly into Steam so games would run as if they were made for Linux to begin with. The end result is a modified, open source distribution of Wine known as Proton for the Linux version of Steam.
Valve has launched a beta for a new project in support of its Linux development. Starting today, Valve is officially developing and supporting a fork of the WINE compatibility layer called Proton, which will run in conjunction with Steam. Proton will allow games that have not seen an official Linux release to run on Linux through this compatibility layer. WINE, which originally stood for Wine Is Not an Emulator, is a reverse-engineered implementation of the Windows API, allowing Windows applications to be run on Linux and other Unix-like operating systems.
Proton itself is released under a permissive license similar to the MIT license, while the code forked from WINE remains under the GNU Lesser General Public License (LGPL), which requires derivative works to also be under the LGPL. It will allow Linux users to play many games, as well as providing developers a target without making a full port. Proton uses the Vulkan graphics API to translate DirectX 11 and 12, leading to better compatibility and performance.
Having removed SteamOS from their platform a few months back, Valve have now revealed that the beta version of the company's new and improved Steam Play service goes online today, which is great news for all Steam users on Linux and MacOS.
Or should I say gamers, since that was Valve's original goal for Steam Play - gaming. The service launched back in 2010 as "a way for Steam users to access Windows, Mac and Linux versions of Steam games", and even though it made significant progress, proper implementation required a bit more.
Nantucket, the rather interesting seafaring strategy game from Picaresque Studio, now has a Linux version on GOG that was released today. The Linux version was officially released earlier this month after a few months in beta, so it's good to see a GOG release with a reasonably quick turnaround.
If you're after a strategy game that brings things back to basics and has no moving units, take a look at Radiis. Developed by Urban Goose Games and released last month, Radiis will have you conquer a map using only buildings, and it strangely works.
The Humble Spooky Horror Bundle 2018 just launched and while it doesn't have all titles on Linux, what it does have for us is good.
Valve announced today a new version of its Steam Play feature, which lets Linux, Mac, and Windows users play their games anywhere on any platform, with better compatibility for Windows game titles on Linux systems.
Even if there are already more than 3,000 games on Steam that offer support for the Linux platform, Valve wants to let Linux Steam users access even more games, especially Windows games that never received Linux support and probably never will, though the company hopes it'll also encourage developers to port their titles to the Linux platform in the near future.
Valve has today announced a new version of Steam Play that allows Linux gamers to enjoy Windows games on Linux via their new Wine-based Proton project.
As we speculated previously, Valve have now officially announced their new version of 'Steam Play' for Linux gaming using a modified distribution of Wine called Proton, which is available on GitHub.
What many people suspected turned out to be true: DXVK development was actually funded by Valve, which has employed the DXVK developer since February 2018. On top of that, they also helped to fund vkd3d (a Direct3D 12 implementation based on Vulkan), OpenVR and Steamworks native API bridges, wined3d performance and functionality fixes for Direct3D 9 and Direct3D 11, and more.
The amount of work that has gone into this—it's ridiculous.
There were whispers about it just last week but now it’s totally official. Steam Play, which was originally intended as a single-purchase system for buying games that run on Windows, Mac, and Linux, is taking cross-platform compatibility to the next level. Yes, Valve is now testing running Windows games on Steam on Linux. And, much to the satisfaction of Linux and open source advocates, it’s doing it the right way by building on and supporting initiatives that will benefit not just Steam but the entire Linux ecosystem as well.
Valve announced today a beta of Steam Play, a new compatibility layer for Linux to provide compatibility with a wide range of Windows-only games.
We've been tracking Valve's efforts to boost Linux gaming for a number of years. As of a few months ago, things seemed to have gone very quiet, with Valve removing SteamOS systems from its store. Last week, however, it became clear that something was afoot for Linux gaming.
The announcement today spells out in full what the company has developed. At its heart is a customized, modified version of the WINE Windows-on-Linux compatibility layer named Proton. Compatibility with Direct3D graphics is provided by vkd3d, an implementation of Direct3D 12 that uses Vulkan for high performance, and DXVK, a Vulkan implementation of Direct3D 11.
Valve’s Steam game platform supports Windows, Mac, and Linux. But up until recently it was up to developers to decide which operating systems to support… and the vast majority of games are Windows-only, followed by a smaller number that support macOS and around 3,000 that support Linux.
But now the list of Steam games available to Linux users is a little longer… not because developers have ported their games to support the operating system, but because Valve has launched a new version of Steam Play that makes it possible to play some Windows games on Linux computers.
Valve released an update for Steam on Linux that should allow some of the most popular VR games to run on VR-ready computers without Microsoft Windows installed.
The new feature could hold enormous potential for Valve to support next generation standalone VR headsets based on Linux or SteamOS. In the near-term, the feature could also lower the cost for some early adopters who want to enjoy top tier games like Doom VFR, Google Earth VR and Beat Saber but don’t feel like shelling out the cost for a Windows 10 license alongside their shiny new VR-ready PC. It might also have an effect on VR arcades which could bypass the cost of Microsoft’s operating system.
The new feature is described as follows: “Windows games with no Linux version currently available can now be installed and run directly from the Linux Steam client, complete with native Steamworks and OpenVR support.”
Heads up, developers: if your players have been asking for a Linux-compatible version of your game, Valve's announced that delivering that version should be much easier going forward.
In a post on the Steam community forums, Valve representative Pierre-Loup Griffais announced that Valve is releasing a new version of Steam Play that includes a new feature for Linux users. Griffais states that, using an improved version of the compatibility software Wine known as Proton, Linux users can now play games on Steam that are meant to run on Windows.
Steam Play – Valve’s name for its cross-platform initiative – is getting a major update, with built-in tools allowing you to run Windows games on Linux. We saw the first hints of the feature last week, and today Valve has confirmed it. It’s available right now in beta, so if you want to test the compatibility features on your own Linux install you don’t have to wait.
Last week we wrote about Valve potentially folding support for a WINE-style compatibility wrapper into Steam, allowing Linux machines to play Windows games with minimal hiccups. Now it’s a reality. Valve’s Pierre-Loup Griffais made the announcement on the “Steam for Linux” group today.
The forum post is long and very detailed, and if you’re personally invested in Linux gaming it’s probably worth a read.
Attending Akademy - the annual KDE contributors summit - is always a quite intense experience. This year it happened from 11th to 17th August in the lovely city of Vienna, Austria. It was a quite special edition. We got a higher number of attendees, including some people who have been doing KDE things for more than a decade but only now had the chance to show up and talk to people in person. In addition, we changed the conference program a bit, moving the reports from the Working Groups out of the KDE e.V. General Assembly (restricted to KDE e.V. members) and into the general Akademy schedule. Also, this year we introduced four training sessions covering topics not exactly technical but of paramount importance for a community like KDE: Non-violent Communication, Online Fundraising and Campaigning, Documentation Writing for Non-writers, and Public Speaking Training.
How often do you meet your laptop vendor in person? Last year, I picked up a KDE Slimbook, and the machine has been great, acting as my development-box-on-the-go for lots of KDE travels. It has a few stickers, and some scratches, and the screen had gotten a bit wobbly by now... so, at this year’s Akademy I stopped by the Slimbook stand, admired the newer Slimbook II (alas, the old one isn’t written off yet), and mentioned the wobbly screen.
Kirigami used to have a Telegram channel as its main communication medium. This is of course not optimal, Telegram being a closed service and many potential contributors not having an account on it.
In the last few years, smartphone hardware has become powerful enough to drive conventional desktop software. A developing trend is to create laptops using hardware initially designed for smartphones and embedded systems. There are distinct advantages to this approach: those devices are usually very energy efficient, so they can yield a long runtime on a single battery charge; they're also rather inexpensive and lighter than conventional laptops.
The KDE neon team has been working with the Blue Systems hardware enablement team and the Pinebook developers to create the KDE neon Pinebook Remix. It uses our Bionic images built for arm64 to create a full-featured, slick desktop that runs on the best-value hardware. The Pinebook comes at a low price, but it’s a full laptop useful for watching videos, browsing the web or coding on KDE software. This could open up whole new markets for KDE software: a school which previously could only afford a couple of computers could now afford enough for a classroom, and a family which previously had to share one computer could now afford a laptop for the children to learn how to code on. It’s quite exciting. And with the KDE Slimbook, neon now covers all ends of the market.
One of the things to come out of Akademy is the first community release of the KDE neon Pinebook Remix image. I’ve been carrying around the Pinebook for some time — since FOSDEM, really, where I first met some of the Pine folks. At Akademy, TL was back and we (that’s a kind of royal “we”, because TL and Rohan and Bhushan and other people did all the hard work) got around to putting the finishing touches on the Pinebook image.
GNOME Boxes is an application which makes virtualization super simple. Targeted at entry-level users, GNOME Boxes has managed to eliminate the many configuration and settings changes needed to connect to a remote or virtual machine. There are other virtual machine clients available in the Linux universe, but they are complex and sometimes dedicated to advanced users.
The new Yaru/Communitheme theme might be the talk of the Ubuntu town right now, but it’s not the only decent desktop theme out there.
If you want to give your Linux desktop a striking new look ahead of the autumn then the following quad-pack of quality GTK themes might help you out.
Don’t be put off by the fact you will need to manually install these skins; it’s pretty easy to install GTK themes on Ubuntu 18.04 LTS and above, providing you set hidden folders to show (Ctrl + H) in Nautilus first.
When I last visited the question of to-do lists, I settled on a command-line utility, todo.txt. It's reasonably versatile...but I've found that I don't use it.
The first reason is that I'd really prefer a graphical user interface, not a flat text display. But also, I've found that I want a hierarchical organizer. I tend to group tasks into categories, and I plan by dividing major tasks into subtasks.
So I was intrigued when I noticed, quite by chance, that my time-tracker software (Hamster) will integrate with two task managers: Evolution and Getting Things GNOME! (GTG). I've always thought of Evolution as massive overkill, but I'd never heard of GTG, so I thought I'd give that a try.
Earlier this year, the GNOME devs decided to remove the ability of the Nautilus (Files) file manager to handle desktop icons, starting with the GNOME 3.28 release, promising to bring the functionality back as soon as possible through a new implementation in the form of a GNOME Shell extension.
As expected, users were skeptical about whether the new implementation would offer them the same level of convenience that the previous method provided via the Nautilus file manager. We said it before and we'll say it again: desktop icons are here to stay for many years, and they are not going to disappear.
Today I have good news for “classic mode” users and those used to desktop icons.
Anyone hesitant to upgrade to GNOME 3.28 because of its decision to remove desktop icons need worry no more.
A new extension for GNOME Shell brings desktop icons support back to the GNOME desktop.
It works almost exactly as you’d expect: you can see icons on your desktop and rearrange them; double-click on files/folders/apps to open them; right-click on an empty part of the desktop to create a new folder or open a folder in the terminal; and perform basic file operations like copy and paste.
Flatpak 1.0 has been released, which is a great milestone for the Linux desktop. I was asked at GUADEC whether a release video might be in order. In response, I spontaneously arranged to produce a voice-over with Sam during the GUADEC Video Editing BoF. Since then, I have been storyboarding, animating and editing the project in Blender. The music and soundscape have been produced by Simon-Claudius, who has done an amazing job. Britt edited the voice-over and has lent me a great load of rendering power (thanks Britt!).
Outreachy is a great organization that helps women and other minorities get involved in open source software. (Outreachy was formerly the GNOME Outreach Program for Women.) I've mentored several cycles in Outreachy, doing usability testing with GNOME. I had a wonderful time, and enjoyed working with all the talented individuals who did usability testing with us.
I haven't been part of Outreachy for a few years, since I changed jobs. I have a really hectic work schedule, and the timing hasn't really worked out for me. Outreachy recently posted their call for participation in the December-March cycle of Outreachy. December to March should be a relatively stable time on my calendar, so this is looking like a great time to get involved again.
I don't know if GNOME plans to hire interns for the upcoming cycle of Outreachy, at least for usability testing. But I am interested in mentoring if they do.
Following conversations with Allan Day and Jakub Steiner, from GNOME Design, I'm thinking about changing the schedule we would use in usability testing. In previous cycles, I set up the schedule like a course on usability. That was a great learning experience for the interns, as they had a ramp-up in learning about usability testing before we did a big usability project.
TL;DR: there’s now an rsync server at rsync://images-dl.endlessm.com/public from which mirror operators can pull Endless OS images, along with an instance of Mirrorbits to redirect downloaders to their nearest—and hopefully fastest!—mirror. Our installer for Windows and the eos-download-image tool baked into Endless OS both now fetch images via this redirector, and from the next release of Endless OS our mirrors will be used as BitTorrent web seeds too. This should improve the download experience for users who are near our mirrors.
If you’re interested in mirroring Endless OS, check out these instructions and get in touch. We’re particularly interested in mirrors in Southeast Asia, Latin America and Africa, since our mission is to improve access to technology for people in these areas.
Today I am very pleased to share the hard work of the Bodhi Team, which has resulted in our fifth major release. It has been quite the journey since our first stable release a little over seven years ago, and I am happy with the progress this project has made in that time.
For those looking for a lengthy change log between the 4.5.0 release and 5.0.0, you will not find one. We have been happy with what the Moksha desktop has provided for some time now. This new major release simply serves to bring a modern look and updated Ubuntu core (18.04) to the lightning fast desktop you have come to expect from Bodhi Linux.
It has been a few years of good progress for Bodhi Linux. It is always interesting to see what a lightweight Linux distribution has to offer.
Bodhi Linux developer Jeff Hoogland announced today the release and general availability of the final Bodhi Linux 5.0 operating system series for 32-bit and 64-bit platforms.
Based on Canonical's long-term supported Ubuntu 18.04 LTS (Bionic Beaver) operating system series, Bodhi Linux 5.0 promises to offer users a rock-solid, Enlightenment-based Moksha Desktop experience, improvements to the networking stack, and a fresh new look based on the popular Arc GTK Dark theme, but colorized in Bodhi Green colors.
The latest version of the lightweight Linux distribution includes a modest set of changes mainly concerned with aesthetics. The main lure for users will be the foundational upgrade to Ubuntu 18.04 LTS ‘Bionic Beaver’.
“We have been happy with what the Moksha desktop has provided for some time now. This new major release simply serves to bring a modern look and updated Ubuntu core (18.04) to the lightning fast desktop you have come to expect from Bodhi Linux,” Bodhi developer Jeff Hoogland writes in his release announcement.
One of the best things about there being so many Linux distributions, is it can be fun to try them all. Believe it or not, "distro-hopping" is a legit hobby, where the user enjoys installing and testing various Linux-based operating systems and desktop environments. While Fedora is my reliable go-to distro, I am quite happy to try alternatives too. Hell, truth be told, I have more fun trying distributions than playing video games these days, but I digress.
A unique distribution I recommend trying is the Ubuntu-based Bodhi Linux. The operating system is lightweight, meaning it should run decently on fairly meager hardware. It uses a desktop environment called "Moksha" which is very straightforward. The Enlightenment 17 fork is a no-nonsense DE that both beginners and power users will appreciate. Today, version 5.0.0 finally becomes available. This follows a July release candidate.
Two important conferences are coming up:
* the Nextcloud conference in Berlin, Germany, from August 23 to 30, and
* the MyData.org conference in Helsinki, Finland, August 29-31.
We’ll be at both, and just in time, we are proud to release UBOS beta 15!
Here are some highlights:
* Boot your Raspberry Pi from USB, not just an SD card
* The UBOS Staff has learned a very convenient new trick
* UBOS now drives the LEDs on Intel NUCs and the Desktop Pi enclosure for the Raspberry Pi
* Access your device from the public internet through Pagekite integration
For more info, read the detailed release notes here: https://ubos.net/docs/releases/beta15/release-notes/
Freespire 4.0 has been released. This release brings a migration of the Ubuntu 16.04 LTS codebase to the 18.04 LTS codebase, which adds many usability improvements and more hardware support. Other updates include intuitive dark mode, "night light", Geary 0.12, Chromium browser 68 and much more.
The hybrid cloud requires a consistent foundation and today, we are pleased to refine and innovate that foundation with the availability of Red Hat Enterprise Linux 7.6 beta. The latest update to Red Hat Enterprise Linux 7 is designed to deliver control, confidence, and freedom to demanding business environments, keeping pace with cloud-native innovation while supporting new and existing production operations across the many footprints of enterprise IT.
As Red Hat’s Paul Cormier states, the hybrid cloud is becoming a default technology choice. Enterprises want the best answers to meet their specific needs, regardless of whether that’s through the public cloud or on bare metal in their own datacenter. Red Hat Enterprise Linux provides an answer to a wide variety of IT challenges, providing a stable, enterprise-grade backbone across all of IT’s footprints - physical, virtual, private cloud, and public cloud. As the future of IT turns towards workloads running across heterogeneous environments, Red Hat Enterprise Linux has focused on evolving to meet these changing needs.
Red Hat announced today the availability of the Red Hat Enterprise Linux 7.6 operating system as a beta for Red Hat Enterprise Linux customers.
Red Hat Enterprise Linux 7.6 is the sixth maintenance update in the Red Hat Enterprise Linux 7 operating system series, promising innovative technologies for Linux containers and enterprise-class hybrid cloud environments, new security and compliance features, as well as improvements in the management and automation areas.
"The latest update to Red Hat Enterprise Linux 7 is designed to deliver control, confidence, and freedom to demanding business environments, keeping pace with cloud-native innovation while supporting new and existing production operations across the many footprints of enterprise IT," said Red Hat in today's announcement.
There’s no pause button for agencies as they modernize systems — they must maintain critical legacy services while developing new platforms, which can make modernization a doubly tough proposition.
Open source technologies, however, can help to lighten that load, says Adam Clater, chief architect of Red Hat’s North American public sector business.
“Open source in the current climate is very much on the tip of everyone’s tongue. As the federal government looks to dig themselves out of the technical debt and focus on modernization, as well as delivering new services to their end users, at the end of the day they do have to continue the business of the government,” said Clater. “There’s a very natural affinity toward open source technologies as they do that because open source technologies are really at the forefront of the innovation we’re seeing.”
Because of this, Clater says he’s seen a surge in adoption of open source technology in the federal government in recent years.
“I think the government is ratcheting up their participation in open source communities,” he told FedScoop. “They’ve long been participants and contributors, but with Code.gov and the memorandum around open source and open sourcing of government code, I think they’re really leaning in as both a contributor and a consumer of open source while partnering with industry in a lot of that adoption.”
It's a bit surprising that no one else seems to be following Red Hat's lead. For a company that pulled in a very profitable $3 billion in its last fiscal year, and is on track to top $5 billion, Red Hat does a lot of things right. Perhaps most interesting, however, is how it does product development.
As Red Hat CEO Jim Whitehurst has said: "Five years ago we didn't know the technologies we'd be using today, and we don't know what will be big in five years time." That's true of all companies. What's different for Red Hat, however, is how the company works with open source communities to invent the future.
Red Hat Enterprise Linux 7.6 beta is now available. According to the Red Hat blog, "Red Hat Enterprise Linux 7.6 beta adds new and enhanced capabilities emphasizing innovations in security and compliance features, management and automation, and Linux containers." See the Release Notes for more information.
Hyperconverged storage software maker Maxta on Aug. 22 introduced a new appliance with a specific function: to run its software on Red Hat's virtualization framework.
This is a pre-configured system—called a Hyperconverged (Un)Appliance—consisting of Red Hat and Maxta software bundled together on Intel Data Center Blocks hardware. The joint package provides appliance-based hyperconvergence benefits without the disadvantages conventional systems have to endure, such as costs for refreshing, upgrading, VMware licensing and proprietary virtualization.
Hyperconverged (Un)Appliances collapse servers, storage and networking into a single server tier that is used to run virtual machines and containers, Maxta said. Storage is configured automatically when VMs or containers are created, allowing administrators to focus on managing applications rather than storage.
Maxta Inc., a leading provider of hyperconvergence software, today introduced a Hyperconverged “(Un)Appliance” for Red Hat Virtualization, a pre-configured system of Red Hat Virtualization software and Maxta Hyperconvergence software bundled together on Intel® Data Center Blocks hardware. This joint solution provides all the advantages of appliance-based hyperconvergence without any of the disadvantages – there’s no refresh tax, no upgrade tax, no VMware tax, and no proprietary virtualization.
The automobile industry is undergoing the biggest transformation in its 100-plus year history – and automotive trade is changing just as dramatically. Digitization has become at once a major competitive factor and a catalyst, influencing every company in the industry, while simultaneously proving to be a resource to be taken advantage of. Companies wishing to benefit from it should prepare to adapt organizationally, culturally, and technically while being able to manage the resulting changes.
In many ways, digitization means that companies must orient themselves to the needs of the customers economically, strategically, and technically. This customer-centric focus runs through all value chains company-wide as well as the respective individual divisions of every company, from development and production to sales and service.
Red Hat Product Security has transitioned from using its old 1024-bit DSA OpenPGP key to a new 4096-bit RSA OpenPGP key. This was done to improve the long-term security of our communications with our customers and also to meet current key recommendations from NIST (NIST SP 800-57 Pt. 1 Rev. 4 and NIST SP 800-131A Rev. 1).
The old key will continue to be valid for some time, but it is preferred that all future correspondence use the new key. Replies and new messages either signed or encrypted by Product Security will use this new key.
Managing data reconciliation through a specific process is a common necessity for projects that require Digital Process Automation (formerly known as Business Process Management), and Red Hat Process Automation Manager helps to address such a requirement. This article provides good practices and a technique for satisfying data reconciliation in a structured and clean way.
Red Hat Process Automation Manager was formerly known as Red Hat JBoss BPM Suite, so it’s worth mentioning that jBPM is the upstream project that fuels Process Automation Manager. The blog post From BPM and business automation to digital automation platforms explains the reasons behind the new name and shares exciting news for this major release.
The Flatpak framework for distributing Linux desktop applications is now in production release, after three years in beta. The framework, originally called XDG-App, is intended to make Linux more attractive to desktop app developers. Applications built as a Flatpak can be installed on just about any Linux distribution.
The open source Flatpak can be used by different types of desktop applications and is intended to be as agnostic as possible when it comes to building applications. There are no requirements for languages, build tools, or frameworks. Users can control app updates. Flatpak uses familiar technologies such as the Bubblewrap utility for setting up containers and systemd for setting up Linux cgroups (control groups) for sandboxes.
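In practice, installing a Flatpak app takes only a couple of commands once a remote such as Flathub is configured. A minimal sketch; GIMP is used purely as an example application ID:

```bash
# Add the Flathub remote, the most widely used Flatpak app source
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

# Install and launch an application by its ID (GIMP as an example)
flatpak install flathub org.gimp.GIMP
flatpak run org.gimp.GIMP

# Update all installed Flatpak apps and runtimes
flatpak update
```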
The members of the Fedora Engineering and Steering Committee have not only approved the Fedora 30 release schedule proposal; they have also just approved a handful of Fedora 29 features.
Fedora 29 won’t be shipping until the end of October, but the Fedora 30 release date was confirmed as around April 30th to May 7th of next year. The developers are planning a massive and lengthy mass rebuild around the end of January, the change checkpoint completion deadline by the middle of February, the beta freeze in early March, the beta release towards the end of March, and the final freeze around the middle of April.
While Fedora 29 isn't shipping until the end of October, the release schedule for Fedora 30 was firmed up this week at the Fedora Engineering and Steering Committee meeting.
The approved schedule is aiming for the Fedora 30 Linux release to happen on 30 April but with a pre-planned fallback date of 7 May.
Version 7.3.0beta2 has been released. It now enters the stabilisation phase for the developers, and the test phase for the users.
RPMs are available in the remi-php73 repository for Fedora ≥ 27 and Enterprise Linux ≥ 6 (RHEL, CentOS), and as a Software Collection in the remi-safe repository (or remi for Fedora).
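On Fedora, enabling the repository and pulling in the beta looks roughly like the sketch below; the remi-release URL is version-specific and shown for illustration only, so check the Remi configuration wizard for your exact release:

```bash
# Install the Remi release package (URL shown for Fedora 28; adjust
# to your release -- this is an illustrative example)
sudo dnf install https://rpms.remirepo.net/fedora/remi-release-28.rpm

# Enable the PHP 7.3 repository and update PHP packages
sudo dnf config-manager --set-enabled remi-php73
sudo dnf update 'php*'

# Confirm the new version
php -v
```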
In addition to approving the Fedora 30 release schedule proposal, the members of the Fedora Engineering and Steering Committee have approved this week a number of Fedora 29 features.
A presentation from Jim Perrin and Matt Miller revealed that Fedora and CentOS dist-git will be tied together. This change will likely provide an opportunity to do crazy, awesome and beautiful stuff. But the key thing is to have a single dist-git deployment instead of 2 at start. Once that’s done, we may start thinking about what to do with it.
Brian Stinson also described the CI effort to validate all Fedora packages using CentOS CI infrastructure. Good updates; we seem to be getting really close to a system where everyone can easily write tests for their packages and run them on builds. Brian promised that in the short term we should be getting notifications from the pipeline and documentation. Can’t wait!
While Debian has tens of thousands of packages in its archive and users often tend to cite the size of a package archive as one of the useful metrics for evaluating an OS/distribution or package manager's potential, not all packages are maintained the same. In acknowledging that not all packages are maintained to the same standard and some ultimately slip through the cracks, Debian developers are discussing a salvaging process.
Like other distributions, Debian already has processes in place for orphaning packages when a maintainer disappears or voluntarily gives up maintaining a particular package. But this proposed package salvaging process is for poorly maintained or completely unmaintained packages that aren't in an orphaned state -- the process to salvage a package to improve its quality would be "a weaker and faster procedure than orphaning." The package maintainers could simply have been preoccupied for a number of months, or have lost interest in the particular package without pursuing orphaning, and so on.
On August 16, 1993, a young Ian Murdock announced on Usenet "the imminent completion of a new version of Linux which I will call Debian Linux Release." Murdock, of course, had no idea that Debian would end up becoming an institution in the Linux world. This distribution, mother of many others (Ubuntu included), has completed 25 splendid years that have confirmed it as a crucial development in the world of Linux and Open Source.
On Friday, I will be attending LVEE (Linux Vacation Eastern Europe) once again after a few years of missing it for various reasons. I will be presenting a talk on my experience of working with LAVA; the talk is based on a talk given by my colleague Guillaume Tucker, who helped me a lot when I was ramping up on LAVA.
Since the conference is not well known outside, well, a part of Eastern Europe, I decided I needed to write a bit about it. According to the organisers, they had the idea of holding a Linux conference after the newly reborn Minsk Linux User Group organised quite a successful celebration of the tenth anniversary of Debian, and they wanted to have an even bigger event. The first LVEE took place in 2005 in the middle of a forest near Hrodna.
For personal reasons, I didn't make it to DebConf18 in Taiwan this year; but that didn't mean I wasn't interested in what was happening. Additionally, I remotely configured SReview, the video review and transcoding system which I originally wrote for FOSDEM.
The Linux-based OS Debian is 25 years old, and during its lifetime this child of the 90s has spawned its own family of operating systems.
Debian derivatives come in all shapes and sizes, from user-friendly Linux Mint to the macOS replacement Elementary OS to the privacy-centric Tails.
This gallery rounds up some of the most notable and popular Debian derivatives, as highlighted by The Debian Project and DistroWatch.
Devuan is a fork of the popular Debian Operating System upon which Ubuntu is based. It was first released in November 2014 with the aim of providing Linux users with a distro that doesn’t have the systemd daemon installed by default.
Devuan started when Debian adopted systemd, but it didn’t have a stable release until last year, 2017, in line with the release of Debian 9.
Because Devuan is virtually a replica of Debian except that it doesn’t use systemd, this article will highlight the differences between the two OSes (starting with the most important) so that you can see why you may prefer one over the other.
The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list.
Like its bigger brother Ubuntu and other official flavors, Lubuntu is still using the old X.Org Server by default, though nothing stops users from switching to Wayland if they want a more secure and capable display server for their computers. That's about to change in the coming years, as Lubuntu will adopt Wayland by default.
Ubuntu already tried to move to Wayland by default with the now deprecated Ubuntu 17.10 (Artful Aardvark) release, but with the Ubuntu 18.04 LTS (Bionic Beaver) release it had to switch back to X.Org Server and put Wayland in the back seat as an alternative session, which users can select from the login manager.
Tranquil PC has opened pre-orders on a fanless, barebones “Mini Multi Display PC” mini-PC with AMD’s Ryzen Embedded V1000 SoC, 4x simultaneous 4K DisplayPort displays, 2x GbE, and up to 32GB DDR4 and 1TB storage.
Manchester, UK based Tranquil PC has launched the first mini-PC based on the AMD Ryzen Embedded V1000. The Mini Multi Display PC is named for the Ryzen V1000’s ability to simultaneously drive four 4K displays, a feature supported here with 4x DisplayPorts. The NUC-like, aluminum frame system is moderately rugged, with 0 to 40°C support and IP50 protection.
Aaeon’s Apollo Lake powered “PICO-APL4” SBC offers a pair each of GbE, USB 3.0, and M.2 connections plus HDMI, SATA III, and up to 64GB eMMC.
Aaeon has spun another Pico-ITX form-factor SBC featuring Intel Apollo Lake processors, following the PICO-APL3 and earlier PICO-APL1. Unlike those SBCs, the new PICO-APL4 has dual Gigabit Ethernet ports, among other minor changes.
IEI’s rugged, “TANK-860-QGW” IPC computer for M2M and IoT runs a Qnap-derived QTS Gateway Linux distro on a 4th Gen Core CPU with dual SATA bays and up to 6x PCIe slots.
IEI Technology has spun a rather singular embedded PC that aims to replace barebones IPC (industrial PC) systems with something a bit more modern and IoT savvy. We say “a bit more” since the rugged, industrial-focused TANK-860-QGW system runs on Intel’s old-school, 4th Gen “Haswell” processor. Otherwise, however, this “cloud-based IPC solution” offers up-to-date features.
The TANK-860-QGW runs on a homegrown QTS Gateway Linux distribution based on Qnap’s Linux-based QTS platform for its NAS (network-attached storage) systems. The system can monitor IPMI equipment, servers, PCs, and production line equipment, and can be set up as a LoRaWAN server, says IEI.
If you type Mastodon into Google around now you’ll probably happen upon a hairy chap called Brent Hinds who is apparently selling off his huge collection of guitars and amplifiers. For as well as being a prehistoric elephant, Mastodon is a beat combo and, latterly, a newish social network being promoted as “Twitter without the Nazis” or, less hysterically, “Twitter minus its bad bits”.
Mastodon was launched in August 2016 and received a guarded welcome. People got the idea: Mastodon was a community-owned, open source, decentralised, no-advertising, no-tracking, and (probably) no-hate-speech sort of outfit.
Unlike Twitter, Mastodon comprises software ‘instances’, so it’s a federation of little sites which self-administer. If you live mostly in one instance, that doesn’t stop you from following and being followed by members of other instances.
Open source Business Process Management (BPM) software company Bonitasoft has introduced its Bonita 7.7 release.
This is BPM software with Intelligent Continuous Improvement (ICI) and Continuous Delivery (CD) capabilities.
The company says that its ICI play here is a route to building what it has called adaptable ‘living’ applications.
A living application, then, is one that can deliver changes in terms of continuous improvement, continuous integration, continuous deployment and continuous connectivity.
A new open-source tool designed to make DNS rebinding attacks easier has been released.
The kit, dubbed ‘singularity of origin’, was launched last week by a team from NCC Group.
It simplifies the process of performing a DNS rebinding attack, where an attacker is able to take over a victim's browser and break the same-origin policy. This effectively allows an attacker to masquerade as the victim's IP address and potentially abuse their privileges to access sensitive information.
The tool was created with pentesters in mind, and to raise awareness among developers and security teams of how to prevent DNS rebinding, the tool’s creators said.
NCC Group’s Gerald Doussot and Roger Meyer, who wrote the tool, told The Daily Swig: “Many developers think it's safe to write software that has debug services listening only locally, but we've had several engagements where we were able to remotely compromise applications using DNS rebinding.
One of the most fascinating open networking projects to emerge earlier this year is the AT&T-initiated Akraino Edge Stack, which is being managed by the Linux Foundation. The objective of the Akraino project is to create an open source software stack that supports high-availability cloud services optimised for edge computing systems and applications.
The project has now moved into its execution phase to begin technical documentation and is already backed and supported by a strong group of telecoms operators and vendors. They include Arm, AT&T, Dell EMC, Ericsson, Huawei, Intel, Juniper Networks, Nokia, Qualcomm, Radisys, Red Hat and Wind River.
Progress, a provider of application development and digital experience technologies, has released the Progress Spark Toolkit, a set of open source ABL code and recommended best practices to enable organizations to evolve existing applications and extend their capabilities to meet market demands.
Previously only available from Progress Services, the Spark Toolkit was created in collaboration with the Progress Common Component Specification (CCS) project, a group of Progress OpenEdge customers and partners defining a standard set of specifications for the common components for building modern business applications. By engaging the community, Progress says it has leveraged best practices in the development of these standards-based components and tools to enable new levels of interoperability, flexibility, efficiencies and effectiveness.
Progress has announced the release of Progress Spark Toolkit, a set of open source Advanced Business Language (ABL) code and recommended best-practices to enable organizations to evolve existing applications and extend their capabilities to meet market demands.
IoT devices currently lack a standard way of applying security, leaving consumers, whether businesses or individuals, to wonder if their devices are secure and up-to-date. Foundries.io, a company that launched today, wants to change that by offering a standard way to secure devices and deliver updates over the air.
“Our mission is solving the problem of IoT and embedded space where there is no standardized core platform like Android for phones,” Foundries.io CEO George Grey explained.
Emerging from two years in stealth mode, Foundries.io™ today announced the world's first commercially available, continuously updated Linux® and Zephyr™ microPlatform™ distributions for the embedded, IoT, edge and automotive markets. Supported by a newly announced partner program, these microPlatforms™ enable devices from light bulbs to connected cars to always be secure and updated to the latest available firmware, operating system and application(s).
A Linaro spinoff called Foundries.io unveiled a continuously updated “microPlatforms” IoT service with managed Linux and Zephyr distros. The Linux platform is based on OE/Yocto and Docker container code.
A Cambridge, UK based startup called Foundries.io, which is funded by Linaro and led by former Linaro exec George Grey, has launched a microPlatforms service with managed, subscription-based Linux and Zephyr distributions. The microPlatforms offering will target IoT, edge, and automotive applications, and provide continuous over-the-air (OTA) updates to improve security.
The distributions are designed to work with any private or public cloud platform, with the microPlatform cloud service acting as an intermediary. The microPlatforms packages include firmware, kernel, services, and applications, “delivered continuously from initial product design to end-of-life,” says Foundries.io.
Foundries.io emerged from stealth with the notion that tight integration and instant software updates are the best security for edge, embedded, and IoT devices.
That philosophy is behind the company’s “microPlatforms” software that target devices running Linux or Zephyr distributions for the embedded, IoT, connected device, and edge markets. The Foundries.io platform allows for security and bug fix updates to be immediately sent to those devices. The software includes firmware, kernel, services, and application support, with Foundries.io handling the engineering, testing, and deployment of those updates.
A startup formed by members of Linaro wants to be the Red Hat of the Internet of Things, delivering configurations of Linux and the Zephyr RTOS for end nodes, gateways and cars. Foundries.io aims to provide processor-agnostic code with regular updates at a time when IoT developers have a wide variety of increasingly vendor-specific choices.
“Today every IoT product is effectively a custom design that has to be tested and maintained, and we believe that causes huge fragmentation. Our concept is to make it as easy to update an embedded product as to update a smartphone, so you don’t need a security expert,” said George Grey, chief executive of Foundries.io.
Los Angeles County’s open-source vote tally system was certified by the secretary of state Tuesday, clearing the way for redesigned vote-by-mail ballots to be used in the November election.
“With security on the minds of elections officials and the public, open-source technology has the potential to further modernize election administration, security and transparency,” Secretary of State Alex Padilla said. “Los Angeles County’s VSAP vote tally system is now California’s first certified election system to use open-source technology. This publicly-owned technology represents a significant step in the future of elections in California and across the country.”
The system — dubbed Voting Solutions for All People (VSAP) Tally Version 1.0 — went through rigorous security testing by staffers working with the secretary of state as well as an independent test lab, according to county and state officials.
California Secretary of State Alex Padilla’s office has certified the first open-source, publicly owned election technology for use in Los Angeles County — “a significant step in the future of elections in California and across the country.”
The system is known as Voting Solutions for All People (VSAP) Tally Version 1.0. Its certification will allow Los Angeles County to use its newly designed Vote By Mail (VBM) ballots in the November election.
County Registrar-Recorder/County Clerk Dean Logan, in the news release from Padilla's office, said the new system will ensure accurate and secure counting of ballots.
Logan’s office will begin distributing the new ballots on Oct. 9. Each voter’s packet will include a ballot, a postage-paid return envelope, a secrecy sleeve and an “I Voted” sticker.
“As part of the certification process, the system went through rigorous functional and security testing conducted by the Secretary of State’s staff and a certified voting system test lab,” Padilla’s office said. “The testing ensured the system’s compliance with California and federal laws, including the California Voting System Standards (CVSS).”
As containers become an almost ubiquitous method of packaging and deploying applications, the instances of malware have increased. Securing containers is now a top priority for DevOps engineers. Fortunately, a number of open source programs are available that scan containers and container images. Let’s look at five such tools.
It’s increasingly clear that when it comes to artificial intelligence (AI), many organizations will be able to leverage investments made by IT vendors that are now being released as open source code. The latest example of that trend is a decision by Salesforce to make TransmogrifAI, a machine learning library that makes it simpler to consume large amounts of structured data, available as open source code on GitHub.
Shubha Nabar, senior director of data science for Salesforce Einstein, the AI platform developed by Salesforce, says the decision to make TransmogrifAI open source is driven primarily by a desire to make AI technologies readily available and easily understandable.
When we talk about DevOps, we typically mean managing software deliverables, not infrastructure. But the overall system sanctity is deeply coupled with infrastructure integrity. How many times have you heard “But it works on my system”? Or perhaps a misguided admin changes the configuration of the production server and things don’t work anymore. Hence, it is essential to bring infrastructure into the proven DevOps practices of consistency, traceability, and automation.
This article builds on my previous one, Continuous infrastructure: The other CI. While that article introduced infrastructure automation and infrastructure as a first-class citizen of the CI pipeline using the principles of infrastructure as code and immutable infrastructure, this article will explore the tools to achieve a CIi (continuous integration of infrastructure) pipeline through automation.
The free Hybrid Analysis malware research site used for investigating and detecting unknown malware threats now includes an accelerated search feature that roots out matches or correlations in minutes, rather than hours.
CrowdStrike donated its Falcon MalQuery new rapid-search feature to the Hybrid Analysis community platform, which has some 100,000 active users worldwide. Hybrid Analysis was acquired in fall 2017 by CrowdStrike, and also employs CrowdStrike's sandbox technology.
BlazeMeter launched an open source plugin for continuous mainframe testing.
The RTE plugin works with Apache JMeter, an open source Java application designed to load-test functional behavior and measure performance.
"Supporting IBM mainframe protocols TN5250 and TN3270, the JMeter RTE plugin simulates a mainframe terminal sending actions and keystrokes to the mainframe server," the company said in a statement. "By using the plugin, developers and testers can simulate filling forms or calling processes, specify the position of fields on the screen and the text to set on them, and simulate the keyboard attention keys."
When your job is to provide the cloud infrastructure to run analytics and workloads across three datacenters that are more than 100 miles apart, sucking 100-plus petabytes from each daily, it’s no longer an even remotely credible option to buy it from Megavendor X. These days, the only place to find such software is on an open source repository somewhere.
Which is exactly what Didi Chuxing, the Uber of China, did.
[...]
Five years ago, Cloudera cofounder Mike Olson wrote, “No dominant platform-level software infrastructure has emerged in the last ten years in closed-source, proprietary form.” In significant measure, this stems from the realities of operating at web-scale: The financial costs, never mind the technical costs, of trying to scale proprietary hardware and software systems are simply too high. Companies like Google and Facebook keep gifting genius creations to the open source community, driving innovation faster, well beyond the realm of proprietary firms’ ability to compete in data infrastructure.
VOLTTRON is an innovative open source software platform that helps users rapidly develop and deploy new control solutions for a myriad of applications in buildings, renewable energy systems and electricity grid systems. Developed by Pacific Northwest National Laboratory with funding from the Department of Energy, VOLTTRON can be downloaded from the not-for-profit Eclipse Foundation that will steward it as an open source software platform. As part of this move, PNNL has joined the Eclipse Foundation, a global organization with more than 275 members.
Flexible, scalable and cyber-secure, VOLTTRON offers paradigm-shifting capabilities for development of new analysis and management solutions for energy consumption optimization and integration of building assets with the electric grid. VOLTTRON provides the ability to shift energy demand to off-peak hours and manage a facility's load shape to reduce stress on the grid.
If you're a business that uses a monolithic architecture, the adoption of microservices might cause some anxiety on your team. After all, there isn't one comprehensive place to find answers to all the challenges that arise from managing today's cloud-native apps, and there isn't one single vendor that has all the answers.
Fortunately, the open source community can offer some help. Trends in open source software point toward a future with a completely different approach to application management. If you're willing to delve into and invest in today's leading open source microservices projects, it's possible to find everything you need to manage modern microservices applications in the cloud.
Today we shipped Notes by Firefox 1.1 for Android; all existing users will get the updated version via Google Play.
After our initial testing in version 1.0, we identified several issues with Android’s “Custom Tab” login features. To fix those problems, the new version has switched to using the newly developed Firefox Accounts Android component. This component should resolve the issues that users experienced while signing in to Notes.
We work on Beaker because publishing and sharing is core to the Web’s ethos, yet to publish your own website or even just share a document, you need to know how to run a server, or be able to pay someone to do it for you.
So we asked ourselves, “What if you could share a website directly from your browser?”
Peer-to-peer protocols like dat:// make it possible for regular user devices to host content, so we use dat:// in Beaker to enable publishing from the browser, where instead of using a server, a website’s author and its visitors help host its files. It’s kind of like BitTorrent, but for websites!
[...]
Beaker uses a distributed peer-to-peer network to publish websites and datasets (sometimes we call them “dats”).
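Beaker bakes publishing into the browser itself, but the same dat:// network can also be reached from a terminal with the Dat CLI. A minimal sketch, assuming Node.js is installed and using a placeholder archive key:

```bash
# Install the Dat command-line tool
npm install -g dat

# Serve the current directory as a dat:// archive (peers help host it)
dat share

# Fetch someone else's archive by its key (placeholder shown)
dat clone dat://<archive-key>
```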
A few months ago, we announced an early preview release of Hubs by Mozilla, an experiment to bring Social Mixed Reality to the browser. Since then, we’ve made major strides in improving usability, performance, and support for standalone devices like the Oculus Go. Today, we’re excited to share our first big feature update to Hubs: the ability to bring your videos, images, documents, and even 3D models into Hubs by simply pasting a link.
Lawmakers in the EU have proposed a new legal framework that will make it easier for police in one country to get access to user data in another country (so-called ‘e-evidence’) when investigating crimes. While the law seeks to address some important issues, there is a risk that it will inadvertently undermine due process and the rule of law in Europe. Over the coming months, we’ll be working with lawmakers in Europe to find a policy solution that effectively addresses the legitimate interests of law enforcement, without compromising the rights of our users or the security of our communications infrastructure.
Mozilla’s Internet Health Report 2018 explored concentration of power and centralization online through a spotlight article, “Too big tech?” Five U.S. technology companies often hold the five largest market capitalizations of any industry and any country in the world. Their software and services are entangled with virtually every part of our lives. These companies reached their market positions in part through massive innovation and investment, and they created extremely popular (and lucrative) user experiences. As a consequence of their success, though, the product and business decisions made by these companies move socioeconomic mountains.
And, like everyone, tech companies make mistakes, as well as some unpopular decisions. For many years, the negative consequences of their actions seemed dwarfed by the benefits. A little loss of privacy seemed easy to accept (for an American audience in particular) in exchange for a new crop of emojis. But from late 2016 through 2017, things changed. The levels of disinformation, abuse, tracking, and control crossed a threshold, sowing distrust in the public and catalyzing governments around the world to start asking difficult questions.
Since our “Too big tech?” piece was published, this trajectory of government concern has continued. The Facebook / Cambridge Analytica scandal generated testimony from Facebook CEO Mark Zuckerberg on both sides of the Atlantic. The European Commission levied a $5 billion fine on Google for practices associated with the Android mobile operating system. Meanwhile Republican Treasury Secretary Steve Mnuchin called for a serious look at the power of tech companies, and Democratic Senator Mark Warner outlined a 20 point regulatory proposal for social media and technology firms.
Presently TenFourFox uses Mozilla Addons as a repository for "legacy" (I prefer "classic" or "can actually do stuff" or "doesn't suck") add-ons that remain compatible with Firefox 45, of which TenFourFox is a forked descendant. Mozilla has now announced these legacy addons will no longer be accessible in October. I don't know if this means that legacy-only addons will no longer be visible, or no longer searchable, or whether older compatible versions of current addons will also be no longer visible, or whatever, or whether everything is going to be deleted and HTH, HAND. The blog post doesn't say. Just assume you may not be able to access them anymore.
This end-of-support is obviously to correlate with the end-of-life of Firefox 52ESR, the last version to support legacy add-ons. That's logical, but it sucks, particularly for people who are stuck on 52ESR (Windows XP and Vista come to mind). Naturally, this also sucks for alternative branches such as Waterfox which split off before WebExtensions became mandatory, and the poor beleaguered remnants of SeaMonkey.
Mozilla will stop supporting Firefox Extended Support Release (ESR) 52, the final release that is compatible with legacy add-ons, on September 5, 2018.
As no supported versions of Firefox will be compatible with legacy add-ons after this date, we will start the process of disabling legacy add-on versions on addons.mozilla.org (AMO) in September. On September 6, 2018, submissions for new legacy add-on versions will be disabled. All legacy add-on versions will be disabled in early October, 2018. Once this happens, users will no longer be able to find your extension on AMO.
After legacy add-ons are disabled, developers will still be able to port their extensions to the WebExtensions APIs. Once a new version is submitted to AMO, users who have installed the legacy version will automatically receive the update and the add-on’s listing will appear in the gallery.
By default, SUSE Linux Enterprise Server 15 instances on Azure will run on this custom-tuned kernel, although it can be easily switched back to the standard kernel using the package manager, Zypper.
SUSE has had a long history with Microsoft, and it would seem that their relationship with the software giant continues with the Linux distribution's updates to their kernel to boost performance on Azure.
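Switching back is a routine package swap with Zypper. A minimal sketch, assuming SUSE's usual kernel-&lt;flavor&gt; package naming, with kernel-azure as the tuned flavor and kernel-default as the standard one:

```bash
# List the kernel flavors currently installed
zypper se -i 'kernel-*'

# Install the standard kernel and drop the Azure-tuned flavor
# (flavor names assume SUSE's kernel-default / kernel-azure convention)
sudo zypper install kernel-default
sudo zypper remove kernel-azure
sudo reboot
```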
The Eclipse Foundation, the platform for open collaboration and innovation, today announced that it is joining the Call for Code initiative with Founding Partner IBM to use the power of open source software and a global collaborative community of developers to help people around the world better prevent, respond to, and recover from natural disasters.
The Call for Code Global Challenge, created by David Clark Cause and powered by IBM, has more than 35 organizations asking developers to create solutions that significantly improve natural disaster preparedness and relief. This competition is the first of its kind at this global scale, encouraging developers worldwide who want to give back to their communities to build open software solutions that alleviate human suffering.
No, Redis is not proprietary after Redis Labs introduced a tweak to its licensing strategy. Yes, some modules from Redis Labs will now be under a weird new license hack that says, in essence, "Clouds, you're not allowed to make money from this code unless you pay us money." And yes, this hack was completely unnecessary in terms of open source evolution.
You see, we already have ways to accomplish this. Not everyone likes strategies like Open Core, but they're well-established, well-understood, and could have saved Redis Labs some headaches.
[...]
Let's be clear: Redis Labs' desire is rational and common to open source vendors. While Redis Labs didn't touch the license for Redis Core (it remains under the highly permissive BSD), the company has slapped a "Commons Clause" onto otherwise open source software to make it...not open source. The rationale?
Social networks are typically walled gardens; users of a service can interact with other users and their content, but cannot see or interact with data stored in competing services. Beyond that, though, these walled gardens have generally made it difficult or impossible to decide to switch to a competitor—all of the user's data is locked into a particular site. Over time, that has been changing to some extent, but a new project has the potential to make it straightforward to switch to a new service without losing everything. The Data Transfer Project (DTP) is a collaborative project between several internet heavyweights that wants to "create an open-source, service-to-service data portability platform".
[...]
Users will obviously need to authenticate to both sides of any transfer; that will be handled by authentication adapters at both ends. Most services are likely to use OAuth, but that is not a requirement. In addition, the paper describes the security and privacy responsibilities for all participants (service providers, users, and the DTP system) at some length. These are aimed at ensuring that users' data is protected in-flight, that the system minimizes the risks of malicious transfers, and that users are notified when transfers are taking place. In addition, a data transfer does not imply removing the data from the exporting provider; there is no provision in DTP for automated data deletion.
One of the advantages for users, beyond simply being able to get their hands on their own data, is the reduction in bandwidth use that will come because the service providers will directly make the transfer. That is especially important in places where bandwidth is limited or metered—a Google+ user could, for example, export their photos to Facebook without paying the cost of multi-megabyte (or gigabyte) transfers. The same goes for backups made to online cloud-storage services, though that is not really new since some service providers already have ways to directly store user data backups elsewhere in the cloud. For local backup, though, the bandwidth cost will have to be paid, of course.
The use cases cited in the paper paint a rosy picture of what DTP can help enable for users. A user may discover a photo-printing service that they want to use, but have their photos stored in some social-media platform; the printing service could offer DTP import functionality. Or a service that received requests from customers to find a way to get their data out of another service that was going out of business could implement an export adapter using the failing service's API. A user who found that they didn't like the update to their music service's privacy policy could export their playlists to some other platform. And so on.
KOGER® Inc., a global financial services technology company, has announced the availability of an open-source client portal for financial institutions, asset managers, and fund administrators that works in tandem with the systems they already have in place.
Handshake has recently awarded funds to many critical free and open source software projects. In particular Conservancy has been gifted $200K for our ongoing work to support software freedom by providing a fiscal home for smaller projects, enforcing the GPL and undertaking strategic efforts to grow and improve free software. Outreachy, the organization offering biannual, paid internships for under-represented people to work in free software (itself a member project of Conservancy) has also been awarded $100,000 from these funds.
"We are grateful for this donation that will allow us to continue supporting people from underrepresented backgrounds in gaining focused experience as free software contributors and shaping the future of technology," said Marina Zhurakhinskaya, Outreachy Organizer. Donations to the Outreachy general fund support program operations and increasing awareness of opportunities in free software among people from underrepresented groups in tech.
[...]
As a small organization, we are always working to do the most with what we have. The Handshake grant allows us to tackle some of the work that we would have otherwise had to put off to a later date. Unfettered donations give us the freedom to say yes to hiring contractors to help with tasks that we don't have expertise for in house, they help us move up our timetables for critical infrastructure and they enable us to spend less time fundraising. These kinds of gifts are absolutely critical for Conservancy and for our frugal sister organizations in the free software community.
Open Collective has come up with a new initiative that makes it easy for companies to identify the open source projects they depend on that also need funding, and to make a financial contribution. BackYourStack provides a new way for open source communities to get paid for the work they do and become financially sustainable.
[...]
Open Collective lets its users set up pages to collect donations and membership fees, where the funds required and the funds raised are explicitly shown, and sponsors and the extent of their support are acknowledged. This page also gives access to an ongoing record of a project's expenses, where members can submit new expenses for reimbursement, and its Budget facility allows income and expenditure to be tracked.
According to its FAQs, so far Open Collective has raised $2,815,000 in funds for its members. It takes 10% plus credit card fees to cover the costs of running the platform and managing bookkeeping, taxes and the admin of reimbursing expenses and shares this commission with the host organizations that hold the money on behalf of member collectives.
Last week I carried out some tests of BSD vs. Linux on the new 32-core / 64-thread Threadripper 2990WX. I tested FreeBSD 11, FreeBSD 12, and TrueOS -- those benchmarks will be published in the next few days. I tried DragonFlyBSD, but at the time it wouldn't boot with this AMD HEDT processor. But now the latest DragonFlyBSD development kernel can handle the 2990WX and the lead DragonFly developer calls this new processor "a real beast" and is stunned by its performance potential.
When I tried last week, neither the DragonFlyBSD 5.2.2 stable release nor the DragonFlyBSD 5.3 daily snapshot would boot on the 2990WX. But it turns out Matthew Dillon, the lead developer of DragonFlyBSD, picked up a rig and has it running now. So in time for the next 5.4 stable release, those using the daily snapshots can have this 32-core / 64-thread Zen+ CPU running on this operating system, which long ago forked from FreeBSD.
Proprietary software has always been about a power relationship. Copyright and other legal systems give authors the power to decide what license to choose, and usually, they choose a license that favors themselves and takes rights and permissions away from others.
The so-called “Commons Clause” purposely confuses and conflates many issues. The initiative is backed by FOSSA, a company that sells materiel in the proprietary compliance industrial complex. This clause recently made news again since other parties have now adopted this same license.
This proprietary software license, which is not Open Source and does not respect the four freedoms of Free Software, seeks to hide a power imbalance ironically behind the guise “Open Source sustainability”. Their argument, once you look past their assertion that "the only way to save Open Source is to not do open source", is quite plain: "If we can't make money as quickly and as easily as we'd like with this software, then we have to make sure no one else can as well".
These observations are not new. Software freedom advocates have always admitted that if your primary goal is to make money, proprietary software is a better option. It's not that you can't earn a living writing only Free Software; it's that proprietary software makes it easier because you have monopolistic power, granted to you by a legal system ill-equipped to deal with modern technology. In my view, it's a power which you don't deserve — that allows you to restrict others.
Of course, we all want software freedom to exist and survive sustainably. But the environmental movement has already taught us that unbridled commerce and conspicuous consumption is not sustainable. Yet, companies still adopt strategies like this Commons Clause to prioritize rapid growth and revenue that the proprietary software industry expects, claiming these strategies bolster the Commons (even if it is a “partial commons in name only”). The two goals are often just incompatible.
There appears to be no rest for Wilber as the GIMP team has updated the venerable image editor to version 2.10.6.
We were delighted to see the arrival of the Straighten button in version 2.10.4, mainly due to our inability to hold a camera straight. Version 2.10.6 extends this handy feature to include vertical straightening, so the Leaning Tower of Pisa need lean no more. As before, the user must wield the Measure tool and either let GIMP automatically work out if straightening should be vertical or horizontal, or override the application.
In a nod to East Asian writing systems, or just to those who feel the need for vertical text, GIMP has also gained a variety of vertical text options, including mixed orientation or the more Western style upright.
GNU Parallel 20180822 ('Genova') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/
Quote of the month:
GNU parallel is a thing of magic.
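For readers who haven't used it, the basic idiom is to fan a command out over a list of arguments, running one job per CPU core by default. Two illustrative one-liners:

```bash
# Compress every log file in the directory, one gzip job per core
parallel gzip ::: *.log

# Fetch a list of URLs, eight downloads at a time
cat urls.txt | parallel -j8 wget -q
```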
AMD developers have open-sourced rocprofiler for profiling the AMD GPU hardware performance counters under compute/OpenCL workloads.
Rocprofiler consists of a library and tool for accessing the AMD graphics processor hardware performance counters. They anticipate that this profiler will be bundled as part of their upcoming ROCm 1.9 release, but it can be built today and used with their existing ROCm 1.8 releases.
Fujitsu has revealed details about its new high performance CPU, destined for the Post-K supercomputer. The A64FX is a Fujitsu designed Arm processor and is of particular note as it is the first to implement the Arm v8-A SVE architecture (SVE = Scalable Vector Extensions). Architectural details of the A64FX were shared at the Hot Chips 30 symposium yesterday evening in Cupertino, California. Fujitsu today emailed HEXUS a press release concerning further Post-K CPU specifications, yet to be shared on its website.
Fujitsu today announced publication of specifications for the A64FX CPU to be featured in the post-K computer, a supercomputer being developed by Fujitsu and RIKEN as a successor to the K computer, which achieved the world’s highest performance in 2011. The organizations are striving to achieve post-K application execution performance up to 100 times that of the K computer.
Today Fujitsu published specifications for the A64FX CPU to be featured in the post-K computer, a future machine designed to be 100 times faster than the legendary K computer that dominated the TOP500 for years.
Fujitsu has announced the specifications for A64FX, an Arm CPU that will power Japan’s first exascale supercomputer. The system, known as Post-K, is scheduled to begin operation in 2021.
At the third annual PyBay Conference in San Francisco over the weekend, Python aficionados gathered to learn new tricks and touch base with old friends.
Only a month earlier, Python creator Guido van Rossum said he would step down as BDFL – benevolent dictator for life – following a draining debate over the addition of a new way to assign variables within an expression (PEP 572).
But if any bitterness about the proposal politics lingered, it wasn't evident among attendees.
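For reference, the contested PEP 572 syntax (the "walrus operator") assigns a name inside an expression; the proposal was accepted and shipped later in Python 3.8. A small illustration, runnable once an implementing interpreter is available:

```bash
# PEP 572 assignment expression: bind and test in one step
# (requires an interpreter implementing the accepted PEP, i.e. 3.8+)
python3 -c '
data = [1, 2, 3, 4]
if (n := len(data)) > 3:
    print(f"list is long: {n} elements")
'
```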
Raymond Hettinger, a Python core developer, consultant and speaker, told The Register that the retirement of Python creator Guido van Rossum hasn't really changed things.
"It has not changed the tenor of development yet," he said. "Essentially, [Guido] presented us with a challenge for self-government. And at this point we don't have any active challenges or something controversial to resolve."
A major focus of recent developments in Firefox CI has been putting control of the CI process in the hands of the engineers working on the project. For the most part, that means putting configuration in the source tree. However, some kinds of configuration don’t fit well in the tree. Notably, configuration of the trees themselves must reside somewhere else.
This week's crate is wasm-bindgen-futures, a crate to make JavaScript promises and Rust futures interoperate. Thanks to Vikrant for the suggestion!
Some time ago we released CafeOBJ 1.5.8 with some new features and bugfixes for the inductive theorem prover CITP. We are still struggling with SBCL builds on Windows, which suddenly started to produce corrupt images, something that doesn’t happen on Linux or Mac.
digest version 0.6.16 arrived on CRAN earlier today, and was just prepared for Debian as well.
digest creates hash digests of arbitrary R objects (using the 'md5', 'sha-1', 'sha-256', 'sha-512', 'crc32', 'xxhash32', 'xxhash64' and 'murmur32' algorithms) permitting easy comparison of R language objects.
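As a quick illustration of the API, hashing an arbitrary object is a single call. The snippets below invoke R from the shell and assume the package is installed:

```bash
# Hash a character vector with SHA-256
Rscript -e 'library(digest); cat(digest(letters, algo = "sha256"), "\n")'

# Hash a data frame with the fast xxhash64 algorithm
Rscript -e 'library(digest); cat(digest(mtcars, algo = "xxhash64"), "\n")'
```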
The Broadband Forum today announced the first code release and documentation of its new Open Broadband project – Broadband Access Abstraction (OB-BAA) to enable standardized, automated and accelerated deployment of new cloud-based access infrastructure and services.
The Broadband Forum has announced the release of code and supporting documentation for Broadband Access Abstraction (OB-BAA), the first code release for the Open Broadband project.
The code and documentation offer an alternative approach for telcos looking to upgrade networks ahead of the anticipated stress caused by the introduction of more accessible and faster connectivity. The aim is to facilitate coexistence, seamless migration and the agility to adapt to an increasingly wide variety of software defined access models.
“OB-BAA enables operators to optimize their decision-making process for introducing new infrastructure based on user demand and acceptance instead of being forced into a total replacement strategy,” said Robin Mersh, Broadband Forum CEO. “By reducing planning, risks and execution time, investment in new systems and services can be incremental.”
The Forum’s Open Broadband initiative has been designed to provide an open community for the integration and testing of new open source, standards-based and vendor provided implementations. The group already counts support from the likes of BT, China Telecom, CenturyLink and Telecom Italia, as well as companies such as Broadcom and Nokia on the vendor side.
As a freely-published, open source project, BAA specifies northbound interfaces, core components, and southbound interfaces for functions associated with network access devices that have been virtualized.
Robin Mersh, CEO of the Broadband Forum, said the BAA project is an Apache 2.0 licensed open source project. The code from the project resides on GitHub and contributors develop the work on BitBucket.
Operators and equipment manufacturers involved in the project include Broadcom, BT, Calix, CenturyLink, China Telecom, Huawei, Nokia, Telecom Italia, Tibit Communications, the University of New Hampshire InterOperability Lab, and ZTE.
The BAA code will immediately be integrated into another Broadband Forum initiative — its Cloud Central Office (CloudCO) project. CloudCO is a regular standards project. It’s developing a framework for transformation of the network from fixed-function boxes to software-defined networking.
At 6pm on Sunday, hundreds of men arrived at Union Square in Manhattan for what Aponte told them would be a one-on-one date where they would watch her friend DJ. Once they had formed an audience, each thinking the rest of the men were just there for a show, Aponte took to the stage, explained what was going on and started whittling down the guys with questions and challenges, saying that the winner would actually go on a date with her.
The culture wars are coming for the best utopian project of the early [I]nternet. Can it survive the informational anarchy that’s disrupted the rest of media?
Advancements in computer technology over the past decades have meant that the collection of electronic data has become more commonplace in most fields of human endeavor. Many organizations now find themselves holding large amounts of data spanning many prior years. This data can relate to people, financial transactions, biological information, and much, much more.
Simultaneously, data scientists have been developing iterative computer programs called algorithms that can look at this large amount of data, analyse it and identify patterns and relationships that cannot be identified by humans. Analyzing past phenomena can provide extremely valuable information about what to expect in the future from the same, or closely related, phenomena. In this sense, these algorithms can learn from the past and use this learning to make valuable predictions about the future.
The company demoed the service with a pair of sample clips (link very much not safe for work). One blends the faces of two actresses and another swaps the background of a scene from a bedroom to a beach. It’s not the most advanced use of the technology, but the face-blending is relatively seamless, and it shows how accessible this sort of AI-powered video manipulation has become.
This week, the company is launching a new service that allows customers to commission their own deepfake clips, which can include superimposing their own faces onto the bodies of porn performers, or incorporating porn stars into different environments. “We see customization and personalization as the future,” said the company’s CEO Andreas Hronopoulos in an interview with Variety.
The court denied plaintiff's motion for a preliminary injunction but granted plaintiff's motion for a TRO precluding the launch of defendants' generic transdermal estrogen product.
Not a day passes in America without news of a drug company raising prices on prescription drugs. Americans pay two to six times more for prescription drugs than people living in other developed countries who earn the same income.
People with chronic or life-threatening diseases, for whom drug costs are unaffordable, often skip treatment altogether. One quarter of all cancer patients chose not to fill a prescription due to cost, according to a 2013 study in the journal Oncologist. This comes as drug prices for these conditions have skyrocketed. Humira, for example, a widely used best-selling drug for rheumatoid arthritis, is now $2,700 per course of treatment, nearly three times what it costs in Switzerland.
The vast majority of Americans support a wide range of measures to make drugs more affordable: 92% of Americans support laws allowing the federal government to negotiate lower prices for people on Medicare, the public welfare benefit scheme targeted at senior citizens. However, with two lobbyists per member of Congress and a lobbying services’ bill that outstrips every other industry, including defence, the odds are stacked against citizens in their fight against ‘big pharma’ over drug prices.
The US and UK patent offices have granted a number of patents relating to the therapeutic use of cannabis derived products. Paradoxically, both the US and UK governments currently define cannabis and cannabis-derived products as having no medicinal benefit. Recent developments suggest that both governments may soon soften the legal definition of cannabis. This Kat takes the opportunity to ask, in view of the US and UK governments current position that cannabis has no medicinal use, how strong are the patents claiming the very same?
[...]
Recreational cannabis is subject to varying restrictions around the world. In the UK, the Misuse of Drugs Act 1971 categorises cannabis and cannabinol as Class B drugs, meaning that unlicensed supply carries a maximum penalty of 5 years in prison and/or an unlimited fine. Cannabis has been fully legalized in certain US states (e.g. Colorado), and it will soon become fully legal to grow, possess and sell cannabis in Canada.
The legality of medicinal cannabis is distinct from that of recreational cannabis. The legislation governing whether licences can be awarded to supply a controlled substance for medical purposes is dependent on whether that substance is considered to have a proven medicinal effect. Cannabis was categorized by the UN Convention on Narcotic Drugs as a drug having "no medicinal benefit" (Schedule 1). Both the US and UK currently follow this classification.
It's been a while since the last big security bulletins for the X.Org Server, even though some of the code-base dates back decades, security researchers have said the security is even worse than it looks, and numerous advisories have come up in recent years. But it's not because X11 is bug-free: today three more security bulletins were made public affecting libX11.
Today's security advisory pertains to three different functions in libX11 that are affected by different issues. The security issues come down to off-by-one writes, a potential out-of-bounds write, and a crash on an invalid reply.
Back at the 2014 Black Hat conference, crypto specialists Karsten Nohl and Jakob Lell introduced the concept of BadUSB — a USB security flaw which allows attackers to turn a USB device into a keyboard that can be used to type in commands.
Now, a researcher from SYON Security has managed to build a modified USB charging cable that will enable hackers to transfer malware to your PC without you even noticing. Under the hood is the BadUSB vulnerability.
[...]
While BadUSB is gradually climbing the ladder towards the mainstream cyber attacks, people are also coming up with the corresponding firewalls to tackle the new age attacks.
Aqua Security released the open source kube-hunter tool for penetration testing of Kubernetes clusters, used for container orchestration.
"You give it the IP or DNS name of your Kubernetes cluster, and kube-hunter probes for security issues -- it's like automated penetration testing," the company said in an Aug. 15 blog post.
The tool -- with source code available on GitHub -- is also packaged by the company in a containerized version, which works with the company's kube-hunter Web site where test results can be seen and shared.
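Getting started follows the familiar clone-and-run pattern. The sketch below is based on the project's README at the time, with a placeholder hostname, so treat the exact flags as an assumption and check the current docs:

```bash
# Run kube-hunter from source against a remote cluster node
# (hostname is a placeholder; flags per the project's README)
git clone https://github.com/aquasecurity/kube-hunter.git
cd kube-hunter
pip install -r requirements.txt
python kube-hunter.py --remote some.node.example.com

# Or use the containerized build published by Aqua
docker run -it --rm aquasec/kube-hunter --remote some.node.example.com
```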
Open-source solutions offer numerous advantages to development-savvy teams ready to take ownership of their security challenges. Teams can implement them to provide foundational capabilities, like “process logs” or “access machine state,” swiftly; no need to wait for purchasing approval. They can build custom components on top of open-source code to fit their company’s needs perfectly. Furthermore, open-source solutions are transparent, ‘return’ great value for dollars spent (since investment makes the tool better rather than paying for a license), and receive maintenance from a community of fellow users.
A vulnerability affects all versions of the OpenSSH client released in the past two decades, ever since the application was released in 1999.
The security bug received a patch this week, but since the OpenSSH client is embedded in a multitude of software applications and hardware devices, it will take months, if not years, for the fix to trickle down to all affected systems.
[...]
This bug allows a remote attacker to guess the usernames registered on an OpenSSH server. Since OpenSSH is used with a range of technologies, from cloud hosting servers to IoT equipment, billions of devices are affected.
As researchers explain, the attack scenario relies on an attacker trying to authenticate on an OpenSSH endpoint via a malformed authentication request (for example, via a truncated packet).
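Until patched builds trickle down, the practical first step is simply knowing which version you run, locally and on your servers. A short sketch with a placeholder hostname:

```bash
# Report the local OpenSSH client version
ssh -V

# An SSH server announces its version in its banner on connect
nc -w3 server.example.com 22

# On Debian/Ubuntu-style systems, pull in patched packages once published
sudo apt update && sudo apt upgrade openssh-client openssh-server
```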
A kernel bug that allows a remote denial of service via crafted packets was fixed recently and the resulting patch was merged on July 23. But an announcement of the flaw (which is CVE-2018-5390) was not released until August 6—a two-week window where users were left in the dark. It was not just the patch that might have alerted attackers; the flaw was publicized in other ways, as well, before the announcement, which has led to some discussion of embargo policies on the oss-security mailing list. Within free-software circles, embargoes are generally seen as a necessary evil, but delaying the disclosure of an already-public bug does not sit well.
The bug itself, which Red Hat calls SegmentSmack, gives a way for a remote attacker to cause the CPU to spend all of its time reassembling packets from out-of-order segments. Sending tiny crafted TCP segments with random offsets in an ongoing session would cause the out-of-order queue to fill; processing that queue could saturate the CPU. According to Red Hat, a small amount of traffic (e.g. 2kbps) could cause the condition but, importantly, it cannot be done using spoofed IP addresses, so filtering may be effective, which may blunt the impact somewhat.
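To make that traffic pattern concrete, here is a conceptual scapy sketch. It is not a working exploit — it never establishes a real TCP session, and the target address, ports, and sequence base are placeholders — it only illustrates the shape of the described attack: tiny segments at random offsets that pile up in the out-of-order queue.

# Conceptual sketch (scapy) of the SegmentSmack traffic *shape* only:
# one-byte TCP segments at randomized sequence offsets. Without an
# established session, these will simply be dropped by the target.
import random
from scapy.all import IP, TCP, Raw, send

target = "192.0.2.1"   # placeholder address
base_seq = 1000000     # stands in for the session's real sequence number

for _ in range(200):
    offset = random.randrange(2, 60000, 2)  # random gap => out-of-order
    pkt = (IP(dst=target)
           / TCP(sport=40000, dport=80, flags="A", seq=base_seq + offset)
           / Raw(b"x"))                     # 1-byte payload: a tiny segment
    send(pkt, verbose=False)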
The Meltdown CPU vulnerability, first disclosed in early January, was frightening because it allowed unprivileged attackers to easily read arbitrary memory in the system. Spectre, disclosed at the same time, was harder to exploit but made it possible for guests running in virtual machines to attack the host system and other guests. Both vulnerabilities have been mitigated to some extent (though it will take a long time to even find all of the Spectre vulnerabilities, much less protect against them). But now the newly disclosed "L1 terminal fault" (L1TF) vulnerability (also going by the name Foreshadow) brings back both threats: relatively easy attacks against host memory from inside a guest. Mitigations are available (and have been merged into the mainline kernel), but they will be expensive for some users.
Airmail has just released an update which patches a known security vulnerability in the email client. Security analysts recently discovered that the client was vulnerable to malicious exploits that could allow foreign and unauthorized persons to access and read sent and received emails in the context of a victim user. The patch fixes the vulnerable channels that could have been exploited to gain such unwarranted access.
A vulnerability in the Ghostscript interpreter used to decipher Adobe PostScript and PDF documents online has come to light following a report by Google security researcher Tavis Ormandy and a concerned statement by Steve Giguere, an EMEA engineer at Synopsys. As the Ghostscript page description language interpreter is embedded in numerous programs and databases, this vulnerability has a broad range of exploitability and impact if manipulated.
[...]
According to Giguere, this causes a second tier of delay: mitigation depends first on authors resolving the issue at its core as soon as it arises, but that alone is no use if the fixed components are not then deployed to the web servers and applications that use them. The issue must be resolved at the core and then updated wherever it is directly in use for mitigation to be effective. Because this is a two-step process, it could give malicious attackers all the time they need to exploit this type of vulnerability.
Security researcher Stefan Kanthak claims the Microsoft Visual C++ Redistributable for Visual Studio 2017 executable installers (x86 and x64) were built with insecure tools from several years ago, creating a vulnerability that could allow privilege escalation.
In other words, Redmond is distributing to developers executables that install its Visual C++ runtime, and these installer programs are insecure due to being created by outdated tools. They can be exploited by malicious software to execute arbitrary code. It's not the end of the world – it's more embarrassment than anything else, due to the reliance on out-of-date tools.
Four years after the massacre, Montaser still can’t play the game that brought him and his brothers joy. The sound of a football being kicked revives memories of bombs, shrieks and bloodshed, as well as a scene that he wants to shut out forever.
“I still cannot forget. I was running quickly to flee the area. I survived, but I lost my brother and my cousins,” the 17-year-old recalls of a massacre that occurred just yards from the sparkling waters of the Mediterranean Sea.
Montaser Bakr is the sole remaining survivor of the Bakr children who the Israeli military struck on July 16, 2014, while they played football on a Gaza beach at the height of the enclave’s last war, killing four children aged between nine and 11 years old.
MSNBC is often described as the liberal version of Fox News, delivering unabashed left-leaning content for vociferously partisan viewers. But if you looked at MSNBC’s lineup of guests for August 15, you’d be hard pressed to find a more odious group of right-wing liars, warmongers and racists on Fox News or any other outlet.
MSNBC kicked it off with Andrea Mitchell interviewing mercenary Erik Prince, the billionaire founder of private military contractor Blackwater USA and the brother of Trump administration Education Secretary Betsy DeVos.
Firstly, Mitchell didn’t even get Prince’s credentials right, saying that his company Blackwater no longer exists. This is exactly what its marketing department wants you to believe: Blackwater rebranded as Xe Services following the massacre of 17 Iraqi civilians by Blackwater contractors in Nisour Square in 2007. In 2010, Prince sold Xe to a private equity firm run by a family friend, who changed the name to Academi, which later merged with rival private military contractor Triple Canopy in 2014 to form Constellis Holdings, which was in turn purchased by the private equity giant Apollo Global Management in 2016. Under the name Constellis, Blackwater is still going strong; earlier this year, Apollo was looking to sell it for between $2 billion and $2.5 billion.
Julian Assange’s mother caused excitement on Twitter, saying an ex-DNC worker leaked the Clinton emails. Christine Assange deleted her post after followers concluded that she meant Seth Rich, who was killed in 2016.
The story unfolded after Christine responded to a tweet claiming Julian Assange had given the then presidential candidate Donald Trump the “upper hand” by leaking the Clinton emails.
In a dispute between states’ rights and the congressional power to tax, you would expect conservatives to line up with the states and liberals with Congress. As the battle lines are drawn in State of New York v. Mnuchin, a lawsuit filed last month by the states of Connecticut, Maryland, New Jersey and New York, it will be Republicans defending the power of Congress and Democrats rallying to the cause of the states.
While well off most people’s radar, the case has the potential to disrupt President Donald Trump’s signature legislative achievement: last year’s massive tax cut. What remains to be seen — and will largely determine the outcome — is whether judicial conservatives align with Republicans (as they usually do) or defend the states’ rights doctrine at the heart of their legal thinking.
The lawsuit attacks the tax cut passed at the end of last year by the Republican-controlled Congress, specifically its limits on the deductibility of state and local taxes. The law resulted in much higher federal taxes for many residents of high-tax states, most of which are governed by Democrats. Last month, the states brought suit in federal court in Manhattan challenging the constitutionality of this provision of the new law. The legal consensus is that the lawsuit is unlikely to prevail. But the strange bedfellows of this issue may be causing legal analysts to underestimate its chances.
The jury in the financial fraud trial of former Trump campaign chairman Paul Manafort sent a second note to the court late Tuesday afternoon, informing the judge that there are 10 counts the jury cannot reach a verdict on. Judge T.S. Ellis III has decided there is "manifest necessity" to proceed and a verdict will be reached shortly on 8 of the counts. Judge Ellis will accept a partial verdict.
The jury resumed deliberations Tuesday morning after finishing its third day of deliberations without reaching a verdict. Jurors deliberated until 6:15 p.m. Monday, later than usual, before being dismissed for the day.
The Trump administration is reportedly planning to announce this week that it has reached an agreement with Mexico in its renegotiation of the North American Free Trade Agreement (NAFTA).
The function of “regime” is to construct the ideological scaffolding for the United States and its partners to attack whatever country has a government described in this manner...
Senator Elizabeth Warren at the National Press Club in Washington on Tuesday launched into a blistering attack on unfettered corporate power in America but waffled when asked about military spending and Israel’s recent brutal reaction to Palestinian resistance.
Warren outlined with great specificity a host of proposals for eliminating financial conflicts, closing revolving doors between business and government and reforming corporate structures.
She pilloried former Congressman Billy Tauzin for having done the pharmaceutical lobby’s bidding by preventing a bill for expanded Medicare coverage from including a program to negotiate lower drug prices. “In December of 2003, the very same month the bill was signed into law, PhRMA — the drug companies’ biggest lobbying group — dangled the possibility that Billy could be their next CEO,” Warren said.
“In February of 2004, Congressman Tauzin announced that he wouldn’t seek re-election. Ten months later, he became CEO of PhRMA — at an annual salary of $2 million,” Warren said. “Big Pharma certainly knows how to say ‘thank you for your service.'”
In April, we published an investigation into Michael Cohen’s past. The “Trump, Inc.” episode, reported by our partners at WNYC, traced how so many of Cohen’s associates over the years have been convicted of crimes, disbarred or faced other legal troubles.
But — at the time of the episode — the president’s former lawyer had himself never been convicted, or even accused of a crime.
Well, it’s time for an update. Cohen pleaded guilty Tuesday to eight felony counts, including tax fraud, lying to a bank and campaign finance violations. The same hour he was pleading guilty, a federal jury found another former Trump aide guilty: Paul Manafort, the erstwhile campaign chairman. Also eight counts. Also bank and tax fraud.
Lanny Davis, attorney for Michael Cohen, told ABC News’ George Stephanopoulos Wednesday that his client has information that would be "of interest" to special counsel Robert Mueller.
"I can tell you that it's my observation that what he knows that he witnessed will be of interest to the special counsel," Davis told Stephanopoulos.
Davis also named President Trump as the 'candidate' tied to Cohen's campaign finance case. He said his client was "directed" by Trump "to do a criminal act" — the crime being what he was told to do involving two women, Stormy Daniels and Karen McDougal. Davis said there is evidence that Russians were complicit with WikiLeaks and that members of the Trump campaign "facilitated that conspiracy."
In the unceasing fight against fake news, Facebook has started to assign a reputation score to its users based on their “trustworthiness,” reports the Washington Post.
The new rating tool, revealed by Tessa Lyons, the product manager leading Facebook's fight against misinformation, is among the many behavioral clues that Facebook continuously takes into consideration “as it seeks to understand risk.”
When you ask locals why Dirk Denkhaus, a young firefighter trainee who had been considered neither dangerous nor political, broke into the attic of a refugee group house and tried to set it on fire, they will list the familiar issues.
This small riverside town is shrinking and its economy declining, they say, leaving young people bored and disillusioned. Though most here supported the mayor’s decision to accept an extra allotment of refugees, some found the influx disorienting. Fringe politics are on the rise.
But they’ll often mention another factor not typically associated with Germany’s spate of anti-refugee violence: Facebook.
Everyone here has seen Facebook rumors portraying refugees as a threat. They’ve encountered racist vitriol on local pages, a jarring contrast with Altena’s public spaces, where people wave warmly to refugee families.
Two researchers from the University of Warwick, Karsten Müller and Carlo Schwarz, have conducted a study analyzing anti-refugee attacks in Germany. Among the factors considered were wealth, demographics, political support, newspaper sales, number of refugees, past crimes against refugees and the number of protests.
The pattern that emerged suggested that towns where Facebook usage was higher than average saw more anti-refugee attacks.
Turkey's president Recep Erdogan is the pettiest of tyrants, ruling with an iron fist and an easily-bruised ego. In addition to snuffing out dissent in his own country with a combination of arrests and intimidation, Erdogan and his government scour the planet for non-Turkish citizens who have offended Lord Gollum.
This doesn't just take the form of content removal requests and site blocking. It also means actual arrests of foreign citizens residing in other countries. Germany's government was shocked to find an old law on its books -- one that forbade insulting foreign states -- being used against one of its own, a German comedian who wrote an immensely unflattering poem about the Turkish dictator. The government gave in at first before swiftly excising the law.
The same can't be said about the Netherlands, another country with bad laws Erdogan is more than happy to exploit to silence criticism. This makes things a little easier for the Turkish government. The last time it punished a Dutch citizen for criticizing the Turkish president, it had to wait for the journalist to visit the country before arresting her.
This time the Dutch government is going to be doing the punishing. Erdogan has spoken and, rather than being greeted with laughter followed by a dial tone, the Dutch government appears to be moving forward with a local prosecution.
On August 6, a number of giant online media companies, including Facebook, YouTube, Apple, Spotify and Pinterest, took the seemingly coordinated decision to remove all content from Alex Jones and his media outlet Infowars from their platforms.
Jones, perhaps the internet’s most notorious far-right conspiracy theorist, has claimed that the Sandy Hook shooting was a hoax, the Democratic Party is running a child sex ring inside a DC pizzeria and that the Las Vegas shooting was perpetrated by Antifa. Despite or perhaps because of such claims, his website Infowars has built up an enormous following: 3 million Americans, almost 1 percent of the population, visited the site in July 2018, according to Alexa.
[...]
Unfortunately, Facebook immediately used this new precedent to turn its sights on the left, temporarily shutting down the Occupy London page and deleting the anti-fascist No Unite the Right account (Tech Crunch, 8/1/18). Furthermore, on August 9, the independent, reader-supported news website Venezuelanalysis had its page suspended without warning.
The site does not feign neutrality, offering news and views about Venezuela from a strongly left-wing perspective. But it’s not uncritical of the Venezuelan government, either, and provides a crucial English-language resource for academics and interested parties on all sides wishing to understand events inside Venezuela from a leftist perspective, something almost completely absent in corporate media, which has been actively undermining elections (FAIR.org, 5/23/18) and openly calling for military intervention or a coup in the country (FAIR.org, 5/16/18).
For the past few years we've been covering a whole series of cases, most of them filed by (I'm not making this up) a silly law firm by the name of 1-800-Law-Firm, trying to argue that various big internet companies provided material support to ISIS or other terrorists, and therefore owe tons of money to surviving relatives of people killed by ISIS or other terrorist organizations. There have been lawsuits against Twitter, Facebook and Google/YouTube. So far, all of these lawsuits have failed miserably -- as they should.
Even if the plaintiffs could show that these platforms actively enabled terrorists to use their platform (which they do not, as all of them proactively look to remove terrorist related content), none of the cases makes an even half-hearted attempt to connect the (very unfortunate) deaths of their relatives to any actual content on these platforms. The lawsuits are basically "these bad people use Twitter/Facebook/YouTube, these people killed my relative, thus, those platforms owe me millions of dollars." That, of course, is not how the law works.
You may have noticed that an awful lot of news broke yesterday concerning a wide variety of legal cases all touching on the President. Most of the coverage, of course, went to the two big cases: the guilty verdict against former campaign chair Paul Manafort and the guilty plea by former Trump personal lawyer Michael Cohen. There were some other cases with breaking news as well, including a judge in New York rejecting Trump's attempt to dump a lawsuit filed against his private security team for apparently beating up some protesters. Also, in a (frankly, very weak) defamation lawsuit filed by former Apprentice contestant Summer Zervos, apparently Trump has refused to submit to discovery requests, leading Zervos' legal team to file a motion to compel him to respond.
Most of those cases don't cover the kinds of things we usually talk about (the defamation case being the exception -- but at this stage, there really isn't that much worth commenting on). However, there was yet another case loosely involving the President that is something we'd talk about and which concluded late Monday (though, the news broke on Tuesday as well). And that involved a defamation case filed by three Russians against Christopher Steele, author of the so-called "Steele Dossier." Back in October of last year, three Russians, Mikhail Fridman, German Khan and Peter Aven, who are all involved with Alfa-Bank, sued Fusion GPS and its founder Glenn Simpson in federal court for defamation. That case is still waiting for a ruling on both a Motion to Dismiss and an Anti-SLAPP Motion.
However, while all of that was going on, the same three Russians filed a very similar case in the DC Superior Court (the equivalent of a state court, rather than federal court). That case was filed in April of this year, and while the federal court is still dilly-dallying around on it, the state court dismissed the case on anti-SLAPP grounds (which rendered a related Motion to Dismiss moot).
One of those is this trust rating. Facebook didn’t tell the Post everything that went into the score, but it is partly related to a user’s track record with reporting stories as false. If someone regularly reports stories as false, and a fact-checking team later finds them to be false, their trust score will go up; if a person regularly reports stories as false that later are found to be true, it’ll go down.
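Facebook has not published the formula, so any concrete version is guesswork; the toy sketch below (all names and numbers hypothetical) merely encodes the reported behavior: confirmed reports of false stories nudge a user's score up, debunked reports nudge it down.

# Toy model only -- Facebook's actual scoring formula is unpublished.
def update_trust(score, reported_false, fact_check_agrees, step=0.05):
    if reported_false:
        score += step if fact_check_agrees else -step
    return max(0.0, min(1.0, score))  # keep the score in [0, 1]

score = 0.5                               # neutral starting point
score = update_trust(score, True, True)   # confirmed report -> ~0.55
score = update_trust(score, True, False)  # debunked report  -> back to ~0.50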
John Calder, an independent British publisher who built a prestigious list of authors like Samuel Beckett and Heinrich Böll and spiritedly defended writers like Henry Miller against censorship, died on Aug. 13 in Edinburgh. He was 91.
Alessandro Gallenzi, who bought Mr. Calder’s publishing company in 2007 and continues to sell books under his name, confirmed the death.
Mr. Calder’s refined literary palate — sometimes at odds with his admittedly uneven commercial acumen — led him to bring out books in Britain by Eugène Ionesco, Marguerite Duras, Alain Robbe-Grillet, Claude Simon, William S. Burroughs and Nathalie Sarraute.
A few days ago, about a dozen articles and campaign sites criticizing EU plans for copyright censorship machines silently vanished from the world’s most popular search engine. Proving their point in the most blatant possible way, the sites were removed by exactly what they were warning of: Copyright censorship machines.
Among the websites that were made impossible to find: A blog post of mine in which I inform Europeans about where their governments stand on online censorship in the name of copyright and a campaign site warning of copyright law that favors corporations over free speech.
[...]
After the EFF uncovered further fraudulent removals by Topple Track and TorrentFreak covered the story, Google reportedly terminated its trusted partnership with the company. But still, as of this writing, my blog post remains unlisted on Google Search. Incredibly, not even when a company is exposed for issuing abusive takedowns are the websites they’ve previously ordered removed reinstated. Each individual author must actively put up a fight to restore the findability of their free speech. [Update: The page seems to be back in the Google index now.]
Although a lot of people use 'blockchain' as a synonym for bitcoin, the possibilities this tech offers go far beyond cryptocurrencies.
At its core, a blockchain is a decentralised database where nothing can be added or modified without the consent of all the participants.
Publiq, which describes itself as a non-profit foundation, uses blockchain technology to create a new, decentralised environment for content publishing. Their aim is to bypass centralised management of the media sector and give authors the freedom to publish their content without any external intervention. As a bonus, blockchain technology helps authors retain copyright and monetise their work.
Publiq is founded on blockchain, which means no one can modify content at any stage of its publishing and sharing. Dr. Christian de Vartavan, adviser and global ambassador at Publiq, compares the principle of blockchain technology to an old-fashioned bill spike: you pile up the bills by sticking them on the spike one by one, and you can’t remove or modify any of the previous bills unless you take everything off, which is simply impossible with blockchain.
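The bill-spike analogy maps directly onto hash chaining. Below is a minimal, generic sketch of that idea — not Publiq's actual implementation — where each block commits to the hash of its predecessor, so silently editing an earlier entry breaks every later link.

# Generic hash chain, for illustration only (not Publiq's code).
import hashlib, json

def make_block(content, prev_hash):
    block = {"content": content, "prev": prev_hash}
    serialized = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

def valid(chain):
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev"] != prev["hash"]:     # each block must point at its parent
            return False
    for block in chain:
        payload = {"content": block["content"], "prev": block["prev"]}
        serialized = json.dumps(payload, sort_keys=True).encode()
        if hashlib.sha256(serialized).hexdigest() != block["hash"]:
            return False                    # contents no longer match the hash
    return True

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("article v1", chain[-1]["hash"]))
chain.append(make_block("correction", chain[-1]["hash"]))
print(valid(chain))                 # True: untouched chain checks out
chain[1]["content"] = "tampered"    # silently edit an old entry...
print(valid(chain))                 # False: the edit is detectable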
Without mental narrative, nothing is experienced but sensory impressions appearing to a subject with no clear shape or boundaries. The visual and auditory fields, the sensation of air going in and out of the respiratory system, the feeling of the feet on the ground or the bum in the chair. That’s it. That’s more or less the totality of life minus narrative.
When you add in the mental chatter, however, none of those things tend to occupy a significant amount of interest or attention. Appearances in the visual and auditory field are suddenly divided up and labeled with language, with attention to them determined by whichever threatens or satisfies the various agendas, fears and desires of the conceptual identity construct known as “you”. You can go days, weeks, months or years without really noticing the feeling of your respiratory system or your feet on the ground as your interest and attention gets sucked up into a relationship with society that exists solely as narrative.
“Am I good enough? Am I doing the right thing? Oh man, I hope what I’m trying to do works out. I need to make sure I get all my projects done. If I do that one thing first it might save me some time in the long run. Oh there’s Ashley, I hate that bitch. God I’m so fat and ugly. If I can just get the things that I want and accomplish my important goals I’ll feel okay. Taxes are due soon. What’s on TV? Oh it’s that idiot. How the hell did he get elected anyway? Everyone who made that happen is a Nazi. God I can’t wait for the weekend. I hope everything goes as planned between now and then.”
On and on and on and on. Almost all of our mental energy goes into those mental narratives. They dominate our lives. And, for that reason, people who are able to control those narratives are able to control us.
Julia Reda is the Member of the European Parliament who has led the fight against Article 13, a proposal to force all online services to create automatic filters that block anything claimed as a copyrighted work.
Reda has written copiously on the risks of such a system, with an emphasis on the fact that these filters are error-prone and likely to block material that doesn't infringe copyright.
Last week, Tim Cushing had a post about yet another out-of-control automated DMCA notifier, which sent a ton of bogus notices to Google (most of which Google complied with, removing the targeted pages from its search engine index, since the sender, "Topple Track" from Symphonic Distribution, was part of Google's "Trusted Copyright Program," giving those notices more weight). The post listed many examples of perfectly legitimate content that got removed from Google's index because of that rogue automated filter, including an EFF page about a lawsuit, the official (authorized) pages of Beyonce and Bruno Mars, and a blog post about a lawsuit by Professor Eric Goldman.
On August 13, Facebook shut down the English-language page of Telesur, blocking access for roughly half a million followers of the leftist media network until it was abruptly reinstated two days later. Facebook has provided three different explanations for the temporary disappearing, all contradicting one another, and not a single one making sense.
Telesur was created by Venezuela’s then-President Hugo Chávez in 2005 and co-funded by hemispheric neighbors Cuba, Bolivia, Nicaragua, and Uruguay — Argentina pulled support for the web and cable property in 2016. As a state-owned media property, it exists somewhere on the same continuum as RT and Al Jazeera, though like the former, Telesur has been criticized as a nakedly partisan governmental mouthpiece, and like the latter, it does engage in real news reporting. But putting aside questions of bias and agenda, Telesur does seem to exist on a separate plane from, say, Infowars, which exists primarily to peddle its particular, patently false genre of right-wing paranoia fan fiction packaged as news (and brain pills), as opposed to some garden-variety political agenda. Unlike RT, Telesur hasn’t been singled out for a role in laundering disinformation for military intelligence purposes, nor is it a hoax factory, à la Alex Jones.
For quite some time now, we've been trying to demonstrate just how impossible it is to expect internet platforms to do a consistent or error-free job of moderating content. Especially at the scale they're at, it's an impossible request, not least because so much of what goes into content moderation decisions is entirely subjective about what's good and what's bad, and not everyone agrees on that. It's why I've been advocating for moving controls out to the end users, rather than expecting platforms to be the final arbiters. It's also part of the reason why we ran that content moderation game at a conference a few months ago, in which no one could fully agree on what to do about the content examples we presented (for every single one there were at least some people who argued for keeping the content up or taking it down).
On Twitter, I recently joked that anyone with opinions on content moderation should first have to read Professor Kate Klonick's recent Harvard Law Review paper on The New Governors: The People, Rules and Processes Governing Online Speech, as it's one of the most thorough and comprehensive explanations of the realities and history of content moderation. But, if reading a 73 page law review article isn't your cup of tea, my next recommendation is to spend an hour listening to the new Radiolab podcast, entitled Post No Evil.
I think it provides the best representation of just how impossible it is to moderate this kind of content at scale. It discusses the history of content moderation, but also deftly shows how impossible it is to do it at scale with any sort of consistency without creating new problems. I won't ruin it for you entirely, but it does a brilliant job highlighting how as the scale increases, the only reasonable way to deal with things is to create a set of rules that everyone can follow. And then you suddenly realize that the rules don't work. You have thousands of people who need to follow those rules, and they each have a few seconds to decide before moving on. And as such, there's not only no time for understanding context, but there's little time to recognize that (1) content has a funny way of not falling within the rules nicely and (2) no matter what you do, you'll end up with horrible results (one of the examples in the podcast is one we talked about last year, explaining the ridiculous results, but logical reasons, for why Facebook had a rule that you couldn't say mean things about white men, but could about black boys).
On Friday, August 17th, a group of people gathered at Twitter’s headquarters in San Francisco to raise awareness about censorship at big tech companies.
They gathered at the corner of Market and 10th streets in San Francisco, where onlookers could see the volunteers in neon vests holding hand-written signs.
[...]
Once shadowbanned, the user would be limited in certain abilities, making it harder to gain new followers.
Millions of Americans use social media to get their news, and that number is growing rapidly by the year. But when they log on, they don’t always get the full story.
Powerful social media companies are filtering the information that users receive on their platforms. As a result, the picture we get of politics is partial and distorted, like a carnival mirror.
Last month, Vice reported that Twitter was limiting the visibility of conservative accounts.
Some tweets from these accounts did not appear in searches, and the accounts themselves were made more difficult to find through the search feature. This “shadow ban” made it harder for users to get information about certain public officials — or even to learn that those accounts existed.
The Indian state has an expansive legal toolkit when it comes to censorship of content, encompassing cinema, broadcast media, books, and newspapers and news magazines. Even live dramatic performances do not escape the possibility of censorship, thanks to the truly anachronistic Dramatic Performances Act and its various state government avatars. Essentially, if the government believes you are up to no good, there are laws on the books which they can use to stop you regardless of whether your chosen vehicle is a prurient pantomime, a blasphemous book, or a mischievous movie.
Confining ourselves to moving images (which are more heavily regulated than any other type of content), we are all familiar with the Censor Board – officially known as the Central Board of Film Certification – and the delicate dance that Indian filmmakers must perform when it comes to obtaining the ubiquitous CBFC certificate we see before every film. Some of us are even familiar with the content code that all television channels in India need to comply with.
[...] Twitter rant about the online “censorship” of “conservatives” might be that he’s the dumbest person ever elected to Congress.
Information Minister Fawad Ahmed on Tuesday announced that the Pakistan Tehreek-i-Insaf-led government had lifted political censorship on state-run news organisations.
In a statement posted on Twitter, the minister said that both Pakistan Television (PTV) and Radio Pakistan would now enjoy complete editorial independence over the content they produce.
The European Commission is reportedly planning to bring in new laws that will punish social media companies if they don’t remove terrorist content within an hour of it being flagged.
The news comes courtesy of the FT, which spoke to the EU commissioner for security, Julian King, on the matter of terrorists spreading their message over social media. “We cannot afford to relax or become complacent in the face of such a shadowy and destructive phenomenon,” he said, after reflecting that he doesn’t think enough progress had been made in this area.
Earlier this year the EU took the somewhat self-contradictory step of imposing some voluntary guidelines on social media companies to take down material that promotes terrorism within an hour of it being flagged. In hindsight that move seems to have been made in order to lay the ground for full legislation, with Europe now being able to claim its hand has been reluctantly forced by the failure of social media companies to do the job themselves.
So long as the legal stipulation is for content to be taken down when explicitly flagged as terrorist by police authorities, it should be pretty easy to enforce – indeed it could probably be automated. But legislation such as this does pose broader questions around censorship. How is ‘terrorist’ defined? Will there be a right of appeal? Will other organisations be given the power to demand content be taken down? Will this law be extended to other types of contentious content?
A rift between the Liberty University president and an on-campus newspaper indicates that campus free speech battles are not solely an issue for liberal colleges. Jerry Falwell, Jr., the president of one of the largest Christian universities in America, is a very vocal supporter of Republicans and conservatives, and that support has crossed over into his college's identity. Earlier in the month, Falwell invoked his students in criticizing Attorney General Jeff Sessions for not supporting President Trump enough, citing their low attendance at a 2016 event as proof that they did not back Sessions. Now World Magazine alleges that Falwell played a direct role in censoring the political views of Liberty's Champion, the on-campus paper. The alleged censorship mostly applied to criticisms of Trump.
In one allegation, Falwell reportedly directed staffers in 2016 to state the presidential candidate for which they were voting. At another point, Falwell told another editor to not run former Sports Editor Joel Schmieg's column disavowing Trump's "locker room talk" controversy. Schmieg then attempted to share his thoughts on Facebook, but later resigned when a faculty adviser communicated to him that he should refrain from repeating the action in the future. According to World Magazine, Schmieg said, "I didn't feel comfortable being told what I couldn't write about by President Falwell."
China’s internet has always been heavily censored by its government. The heavy censorship, also known as the Great Firewall, restricts users from searching for or sharing certain phrases, words, and images online — like pictures of Winnie the Pooh — to ‘protect’ Chinese citizens, or so the government says.
Understandably, not everybody is happy with this ridiculously outdated policy. That’s why activists at GreatFire created FreeWeibo — a search engine that collects censored and deleted posts originally published on Sina Weibo (China’s answer to Twitter).
On this Project Censored show Mickey, Chase and their guests discuss how Internet titans like Facebook and Youtube are censoring what users can post, and what the response to such censorship might be. David Pakman is the host of the David Pakman Show, available on Free Speech TV, Youtube, and radio. Andrew Austin is a Professor of Democracy and Justice Studies at the University of Wisconsin, Green Bay. Nolan Higdon is a professor of communications and history at multiple campuses in the San Francisco Bay area, is a long time contributor to Project Censored, and is an occasional co-host of this program.
Many of us take the benefits of the Internet for granted, and it’s hard to imagine life without the connectivity it provides. And yet, for some people, living with a heavily censored and restricted Internet connection is their routine, and there’s pretty much nothing they can do about it that can’t land them in trouble with their governments. Let’s take a look at how the Internet works in some parts of the world.
[...]
China is another country notorious for the way it treats Internet access, and the situation is quite challenging for anyone who wants to visit a large number of popular websites. Many Western sites are prohibited, or tightly regulated, and various specific types of content are not allowed to be viewed by anyone. As can be expected, the government keeps a close eye on the activities of all its citizens, and you can often hear about someone getting punished because they’ve decided to speak out against the government openly.
The Australian Government have released a draft Bill [The Assistance and Access Bill 2018] designed to compel device manufacturers and service providers to assist law enforcement in accessing encrypted information. Although apparently developed to allow government agencies access to criminals’ encrypted communications, the Bill also grants broad, sweeping powers to government agencies that will harm the security and stability of our communications and the internet at large.
President Duterte’s statement alleging that the Central Intelligence Agency (CIA) plans to assassinate him is not as crazy as it sounds.
[...]
Here are just a few examples of the CIA’s covert ops against world leaders, as cited by the Guardian: “Earlier well-documented episodes include Congo’s first prime minister, Patrice Lumumba, judged by the US to be too close to Russia. In 1960, the CIA sent a scientist to kill him with a lethal virus, though this became unnecessary when he was removed from office in 1960 by other means.”
“Other leaders targeted for assassination in the 1960s included the Dominican dictator Rafael Trujillo, president Sukarno of Indonesia, and president Ngo Dinh Diem of South Vietnam. In 1973, the CIA helped organise the overthrow of Chile’s president, Salvador Allende, deemed to be too left wing: he died on the day of the coup.”
The CIA was reportedly involved not only in the killings of political leaders (usually carried out by military or opposition forces the spy agency was assisting), but also in the many coups d’état and rebellions in Latin American countries, including Chile, Bolivia, Haiti, Panama, Peru, Argentina, El Salvador, Brazil, Guatemala, Uruguay, and Venezuela.
Philippines President Rodrigo Duterte is thinking of dumping his smartphone over fears that the CIA is constantly eavesdropping on his conversations and might use his private information to eventually assassinate him.
“I know, the US is listening. I’m sure it’s the CIA, it’s also the one who will kill me,” Duterte said in Cebu City on Tuesday, rejuvenating fears that Washington may seek his demise over his independent foreign policy and willingness to obtain weapons from other global suppliers.
To avert possible smartphone intrusion by outside powers, which Duterte said could include "Russia, China, Israel, and maybe Indonesia,” the 73-year-old leader is considering going back to using a basic cellphone, with which eavesdropping and interception are harder.
The Seventh Circuit just handed down a landmark opinion, ruling 3-0 that the Fourth Amendment protects energy-consumption data collected by smart meters. Smart meters collect energy usage data at high frequencies—typically every 5, 15, or 30 minutes—and therefore know exactly how much electricity is being used, and when, in any given household. The court recognized that data from these devices reveals intimate details about what’s going on inside the home that would otherwise be unavailable to the government without a physical search. The court held that residents have a reasonable expectation of privacy in this data and that the government’s access of it constitutes a “search.”
This case, Naperville Smart Meter Awareness v. City of Naperville, is the first case addressing whether the Fourth Amendment protects smart meter data. Courts have in the past held that the Fourth Amendment does not protect monthly energy usage readings from traditional, analog energy meters, the predecessors to smart meters. The lower court in this case applied that precedent to conclude that smart meter data, too, was unprotected as a matter of law. On appeal, EFF and Privacy International filed an amicus brief urging the Seventh Circuit to reconsider this dangerous ruling.
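To see why interval data is so much more revealing than a monthly read, a bit of plain arithmetic on the intervals cited above:

# Readings per day at typical smart meter intervals, versus the single
# monthly reading of a traditional analog meter.
for minutes in (5, 15, 30):
    per_day = 24 * 60 // minutes
    print(f"{minutes}-min interval: {per_day} readings/day, "
          f"~{per_day * 30} per month (vs. 1 analog read)")
# 5-min: 288/day; 15-min: 96/day; 30-min: 48/day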
President Duterte has again claimed the CIA wants to kill him and has accused the US spy agency of bugging his telephone conversations.
Speaking at a government conference in Cebu today (Tuesday, August 21), he said: “I know, the US is listening. I’m sure it’s the CIA. It’s also the one who will kill me.”
President Rodrigo Duterte on Tuesday said the United States’ Central Intelligence Agency (CIA) could be listening to his phone conversations, as he revived his allegations that the agency was out to kill him.
General Paul Nakasone, head of the National Security Agency (NSA) and U.S. Cyber Command (CyberCom), recommended keeping the agencies under the same leader for the next two years, according to a report by the Washington Post. The Post’s sources noted that Nakasone believes CyberCom still needs intelligence support from NSA. When asked for comment by MeriTalk, NSA media relations officer Chris Augustine responded: “As NSA Director General Paul M. Nakasone has acknowledged publicly, NSA confirms that General Nakasone has completed his 90 Day assessment on the status of the dual hat arrangement. He provided this to the Secretary of Defense and the Chairman of the Joint Chiefs of Staff for their review.”
This is the sort of rummaging the Constitution is supposed to prevent. It's understandable the FBI needed some assistance tracking down robbery suspects, but this grab for a wealth of information covering 45 hectares of people milling about minding their own business isn't. And this sort of thing isn't limited to the FBI. As was covered here earlier this year, the Raleigh PD did the same thing at least four times during criminal investigations in 2017.
In this case, hundreds of people would have been swept up in the dragnet. Certainly, some post-acquisition data sifting would have occurred to narrow it down to people/devices near the location of robberies when they occurred. But whatever happens after info is obtained cannot be used to justify the original acquisition. This warrant never should have been signed.
If there's any good news coming out of this, it's that Google either didn't hand over the info requested or didn't have the info requested on hand.
In response to an EFF investigation that uncovered deeply troubling research practices by the National Institute of Standards and Technology (NIST), a senior federal scientist stripped off his clothes, had another scientist draw all over his skin with washable markers, and then posed for the camera. Those images—obtained by EFF through a Freedom of Information Act lawsuit—illustrate federal officials’ absurd reaction to an EFF investigation that showed the research exploited prisoners while bypassing ethical oversight measures.
As EFF revealed in 2016, NIST researchers partnered with the FBI on a multi-year program to advance the state of the art of tattoo recognition technology—computer algorithms that automatically identify someone by their tattoos and even identify the meaning of those tattoos. NIST documentation explicitly stated that one goal was to use this automated technology to identify a subject’s “affiliation to gangs, sub-cultures, religious or ritualistic beliefs, or political ideology”—raising major First Amendment concerns. In addition, EFF’s research discovered that NIST researchers had used—and distributed to corporate and institutional researchers—images of thousands of prisoners’ tattoos without their consent and without going through the ethical oversight process that protects prisoners from being unwitting research subjects. Following EFF’s report, NIST scrambled to retroactively change the nature of the research by removing all references to religion from its already published materials and redacting tattoo images previously available on its website.
"Despite users' attempts to protect their location privacy, Google collects and stores users' location data, thereby invading users' reasonable expectations of privacy, counter to Google's own representations about how users can configure Google's products to prevent such egregious privacy violations."
The whole shebang kicked off last week when a report from the Associated Press (AP) uncovered evidence that Google continues to collect location data through other telemetry. When asked to explain itself, Google said that it was possible to turn off location tracking more fully using a setting confusingly labelled 'Web and App Activity'.
In March, we started unveiling what surrounds the Orwellian "Smart City™" project in Marseille. But, as it turns out, Marseille is but a tree hiding the forest, as predictive policing and police surveillance centers boosted by Big Data tools are proliferating all over France. Nice is a good illustration: the city's mayor, security-obsessed Christian Estrosi, has partnered with Engie Inéo and Thales -- two companies competing in this thriving market -- on two projects meant to give birth to the "Safe City™" in Nice. Yet, in the face of the unhindered development of these technologies meant for social control, the president of the CNIL (France's data protection agency) seems to find it urgent to... follow the situation. Which amounts to laissez-faire.
As they proliferate, police body cameras have courted controversy because of the contentious nature of the footage they capture and questions about how accessible those recordings should be.
But when it comes to the devices themselves, the most crucial function they need to perform—beyond recording footage in the first place—is protecting the integrity of that footage so it can be trusted as a record of events. At the DefCon security conference in Las Vegas on Saturday, though, one researcher will present findings that many body cameras on the market today are vulnerable to remote digital attacks, including some that could result in the manipulation of footage.
Josh Mitchell, a consultant at the security firm Nuix, analyzed five body camera models from five different companies: Vievu, Patrol Eyes, Fire Cam, Digital Ally, and CeeSc. The companies all market their devices to law enforcement groups around the US. Mitchell's presentation does not include market leader Axon—although the company did acquire Vievu in May.
In all but the Digital Ally device, the vulnerabilities would allow an attacker to download footage off a camera, edit things out or potentially make more intricate modifications, and then re-upload it, leaving no indication of the change. Or an attacker could simply delete footage they don't want law enforcement to have.
The promise of transparency and accountability police body cameras represent hasn't materialized. Far too often, camera footage goes missing or is withheld from the public for extended periods of time.
So far, body cameras have proven most useful to prosecutors. With captured footage being evidence in criminal cases, it's imperative that footage is as secure as any other form of evidence. Unfortunately, security appears to be the last thing on body cam manufacturers' minds.
An upcoming federal appeals case could restore crucial privacy protections for millions of Americans who use the internet to communicate overseas.
A federal court will be scrutinizing one of the National Security Agency’s worst spying programs on Monday. The case has the potential to restore crucial privacy protections for the millions of Americans who use the internet to communicate with family, friends, and others overseas.
The unconstitutional surveillance program at issue is called PRISM, under which the NSA, FBI, and CIA gather and search through Americans’ international emails, internet calls, and chats without obtaining a warrant. When Edward Snowden blew the whistle on PRISM in 2013, the program included at least nine major internet companies, including Facebook, Google, Apple, and Skype. Today, it very likely includes an even broader set of companies.
When new users try Privacy Badger, they often get confused about why Privacy Badger isn’t blocking anything right away. But that’s because Privacy Badger learns about trackers as you browse; up until now, it hasn’t been able to block trackers on the first few sites it sees after being installed.
With today’s update, however, new users won't have to wait to see Privacy Badger in action. Thanks to a new training regimen, your Badger will block many third party trackers out of the box.
[...]
Using Selenium for automation, our new training regimen has Privacy Badger visit a few thousand of the most popular websites on the Web, and saves what Privacy Badger learns. Then, when you install a fresh version of Privacy Badger, it will be as if your Badger has already visited and learned from all of those sites. As you continue browsing, your Badger will continue to learn and build a better understanding of which third parties are tracking you and how to block them.
Every time we update Privacy Badger, we’ll update the pre-trained list as well. If you already use the extension, these updates won’t affect you. After you install Privacy Badger, it’s on its own: your Badger uses the information it had at install time combined with what it learns from your browsing. Future updates to the pre-trained list won't affect your Badger unless you choose to reset the tracking domains it's learned about. And as always, this learning is exclusive to your browser, and EFF never sees any of your personal information.
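As an illustration of the crawl-and-learn idea — a sketch of the shape of such a crawl, not EFF's actual training pipeline — a Selenium loop like the following would let a browser with the extension installed observe trackers across a list of sites; the driver setup, site list, and timing are placeholders.

# Illustrative only: drive a browser across a site list so an installed
# extension can learn from the third parties each page loads.
import time
from selenium import webdriver

SITES = ["https://example.com", "https://example.org"]  # stand-ins for
                                                        # thousands of sites

driver = webdriver.Firefox()  # assumes geckodriver and a profile with
                              # the extension already installed
for url in SITES:
    driver.get(url)   # extension heuristics run as the page loads
    time.sleep(2)     # crude pause so trackers have time to fire
driver.quit()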
She told BBC Radio Kent she had “never expected to get all of this”, with a bomb threat even made to her home.
Love will be held personally responsible for violating the rights of an immigrant seeking naturalization. The record shows Lanuza was exactly the kind of person we want to welcome to the US -- a person who was useful, productive, and by all accounts a model citizen. The only thing he was missing was the citizenship. And an ICE lawyer tried to take it all away and separate Lanuza from his family by submitting a forged document into evidence. The brazen dishonesty is shocking. The capricious cruelty of this move -- completely unwarranted by Lanuza's behavior during his decade in the US -- is what really sticks in your throat.
In the wake of another apparently victimised whistleblower emerging from the US intelligence community, here is an interview on the subject on RT...
In late July, a court in Ufa, capital of Bashkortostan, reached a final ruling in one of the largest cases concerning the Islamist party Hizb ut-Tahrir in recent years. Alleged and real members of the organisation, which is banned in Russia, have been targeted consistently over the past 15 years: since 2003, there were at least 50 trials concerning Hizb ut-Tahrir – and no less than 300 people have been convicted (mostly in Tatarstan and Bashkortostan) as a result.
On this occasion, some 21 people were sentenced to between five and 24 years imprisonment. According to the investigation, the crimes of these men included reading certain books, as well as holding meetings and discussions about Islam. The defendants were charged under two articles of Russia’s Criminal Code: on terrorist organisations and on attempts to overthrow the constitutional order.
While the American Left was plumbing the depths of its ideologically induced ignorance by conflating John Brennan’s constitutional rights with his revoked security clearance, a shining example of Deep State rot has remained largely below the radar. On June 26, Reality Winner, a 26-year-old NSA contractor arrested for leaking classified information to a news outlet, pleaded guilty as charged. Last Thursday, it was revealed the virulently anti-Trump Georgia woman faces sentencing Aug. 23. According to the prosecutors’ court filings, Winner will receive the “longest sentence served by a federal defendant for an unauthorized disclosure to the media.”
A woman in Georgia is facing what some observers are calling “the longest sentence” ever imposed on someone convicted of leaking sensitive federal data to news outlets.
Reality Winner — an ex-NSA contract employee — is the young woman looking at spending 10 years in federal prison, should the judge impose the harshest possible penalty at her sentencing hearing scheduled for August 23.
Winner has been incarcerated since June after being charged with passing to The Intercept a classified NSA document detailing Russian attempts to meddle in the 2016 presidential election. Winner was eventually identified as the source of the document, apprehended, and convicted.
It’s been nearly one month since a federal court ordered the Trump administration to reunite separated families, but hundreds of children are still waiting. In fact, as of 12:00 pm on August 16, 565 immigrant children remained in government custody.
For 366 of those children, including six who are under the age of five, reunion is made all the more complicated by the fact that the government already deported their parents — without a plan for how they would ever be located.
After forcefully rejecting the government’s assertion that the ACLU is solely responsible for finding deported parents — rather than say, the administration who deported them — the court has ordered both us and the administration to create a plan to locate and reunite deported parents with their children.
Mendez, said to be a proficient user of the Jobs Access With Speech (JAWS) screen reading program, visited the Apple website earlier this month but encountered "multiple access barriers" that denied "full and equal access to the facilities, goods, and services offered to the public," such as being able to browse and purchase products, make service appointments, or learn of the facilities available in Apple Stores in New York, the city where Mendez is resident.
The filing provides a long list of issues with the website that it believes need fixing, in order to comply with the ADA, in relation to screen readers. The list includes the lack of alternative text for graphics, empty links containing no text, redundant links, and linked images missing alternative text.
Further into the lawsuit they note: ..."simple compliance with the WCAG 2.0 Guidelines would provide Plaintiff and other visually-impaired consumers with equal access to the Website, Plaintiff alleges that Defendant has engaged in acts of intentional discrimination."
The Attorneys General of New York, California, Connecticut, Delaware, Hawaii, Illinois, Iowa, Kentucky, Maine, Maryland, Massachusetts, Minnesota, Mississippi, New Mexico, North Carolina, Oregon, Pennsylvania, Rhode Island, Vermont, Virginia, Washington, and the District of Columbia have filed suit in the U.S. Court of Appeals for the D.C. Circuit, asking it to reinstate the Network Neutrality rules killed by Trump FCC Chairman Ajit Pai.
The states argue that the FCC broke the rules that require administrative agencies to act on the basis of evidence, rather than whim or ideology. The Net Neutrality rule that Pai destroyed was passed after extensive consultation and an open, rigorous comment process, with hearings and other fact-finding activities.
As expected, Mozilla, 22 State attorneys general, INCOMPAS, and numerous consumer groups this week asked a U.S. appeals court to reinstate FCC net neutrality rules. The state AGs, led by New York Attorney General Barbara Underwood, filed a lawsuit back in January attempting to overturn the repeal, arguing that the decision will ultimately be a "disaster for New York consumers and businesses." Mozilla and a few other companies also filed suit, as well as consumer groups including Free Press and Public Knowledge.
We've long discussed how Verizon (like most U.S. cellular carriers) has a terribly-difficult time understanding what the word "unlimited" means. Way back in 2007 Verizon was forced to settle with the New York Attorney General after a nine-month investigation found the company was throttling its "unlimited" mobile data plans after just 5GB of data usage, without those limits being clearly explained to the end user. Of course Verizon tried for a while to eliminate unlimited data plans completely, but a little something called competition finally forced the company to bring the idea back from the dead a few years ago.
But the company's new "unlimited" data plans still suffer from all manner of fine print, limits, and caveats. That includes throttling all video by default (something you can avoid if you're willing to pay significantly more), restrictions on tethering and usage of your phone as a hotspot or modem, and a 25 GB cap that results in said "unlimited" plans suddenly being throttled back to last-generation speeds as slow as 128 kbps. In short, Verizon still pretty clearly has no damn idea what the word unlimited actually means, nor does it much care if this entire mess confuses you.
The Federal Trade Commission (FTC) is wondering whether it might be time to change how the U.S. approaches competition and consumer protection. EFF has been thinking the same thing and come to the conclusion that yes, it is. On August 20, we filed six comments with the FTC on a variety of related topics to tell them some of the history, current problems, and thoughtful recommendations that EFF has come up with in our 28 years working in this space.
Back in June 2018, the FTC announced it was going to hold hearings on “competition and consumer protection in the 21st century” and invited comment on 11 topics. As part of our continuing work looking at these areas as they intersect with the future of technology, EFF submitted comments on six of the topics listed by the FTC: competition and consumer protection issues in communication, information, and media technology networks; the identification and measurement of market power and entry barriers, and the evaluation of collusive, exclusionary, or predatory conduct or conduct that violates the consumer protection statutes enforced by the FTC, in markets featuring “platform” businesses; the intersection between privacy, big data, and competition; evaluating the competitive effects of corporate acquisitions and mergers; the role of intellectual property and competition policy in promoting innovation; and the consumer welfare implications associated with the use of algorithmic decision tools, artificial intelligence, and predictive analytics.
Our goal in submitting these comments was to provide information and recommendations to the FTC about these complicated areas of Internet and technology policy. The danger is always that reactionary policies created in response to a high-profile incident may result in rules that restrict the rights of users and are so onerous that only established, big companies can afford to comply.
Scientific Games Corp was hit with a $305 million judgment in a recent antitrust case in the professional gambling industry, which stemmed from patent misuse in an earlier lawsuit over an automatic card shuffler. This monopoly-beating jackpot will be divided among Shuffle Tech LLC, DigiDeal Corp, Aces Up Gaming, Inc. and Poydras-Talrick Holdings LLC, which had claimed that Scientific Games' patent infringement lawsuit against them was based on patents that Scientific Games knew were unenforceable.
Nokia holds a swathe of patents that will be essential to the rollout of next-generation mobile networks.
On July 27th, a jury verdict entered in the District of Delaware awarded $4.8 million in lost profits and reasonable royalty damages to Marlboro, MA-based medical technology company Hologic Inc. after the jury determined that two of its patents were infringed by Redwood City, CA-based medical device company Minerva Surgical. At issue in the case was a technology marketed by Minerva to treat women dealing with abnormal uterine bleeding (AUB).
Last April, I had the good fortune to participate in a symposium at Penn Law School. The symposium gathered a variety of IP scholars to focus on the "historic" kinship between copyright and patent law. That kinship, first identified in Sony v. Universal City Studios, supposedly shows parallels between the two legal regimes. I use scare quotes because it is unclear that the kinship is either historic or real. Even so, there are some parallels, and a collection of papers about those parallels will be published in the inaugural issue of Penn's new Law & Innovation Journal.
The court awarded defendant only $100,000 of its claimed $1.3 million in attorney fees under 35 U.S.C. § 285 because defendant failed to present sufficient evidence to support its fee claim.
Upon reading the title of this blog entry, readers may be wondering what the "ex re ipsa" doctrine involves. It may therefore be worth clarifying that it is a legal doctrine applied, for example, in cases dealing with damages, under which the damage is presumed to have been caused ("causality") when it is inherent to the activity that is the object of the complaint.
Hold was the Dean of External Relations, a member of the president's board, and a lecturer on international trade at the University of St Gallen.
The AmeriKat has been noticeably whiskers down in her day job over the past few months. But now, with the frenzy of a new Court term still several weeks away, she has taken the relatively quiet opportunity to review the much-awaited publication of the EU IPO's report entitled "The Baseline of Trade Secrets Litigation in the EU". This report was commissioned by the EU IPO in preparation for the future report that will assess what impact the EU Trade Secrets Directive has had (see previous Kat posts here). That report is to be published before 9 June 2021 (just think, 2021...what might be in store for us then?).
On 1 July 2018, the IP5 Offices (EPO, KIPO, USPTO, JPO and SIPO) launched a pilot project to test a collaborative approach to international searches under the PCT, particularly with a view to assessing user interest in such a new PCT product and to gauging the expected efficiency gains for the participating offices.
In short, a PCT application filed in English can be entered in the pilot with, for example, the EPO as ISA. If the application is accepted, the EPO will conduct its normal search and examination. Before issuing its report, the EPO will send the application and its provisional search and examination results to colleagues in each of the other four offices, who will review the report, comment on it, and possibly update the search using their own resources.
Boston-based computer software business Nuance Communications has the most grants and the highest quality patents related to speech recognition technologies, a new analysis examining the IP landscape of the field has revealed. In a report released earlier this month, IP analytics platform Relecura looked at more than 100,000 published patent applications (over half of which are granted) related to speech recognition technologies. Of these, over 33,500 have been filed in the US, compared to approximately 25,000 in China and 15,000 in Japan.
Fashion brands may find it difficult to protect their designs under traditional methods of IP protection (for more information, please see "Dressing up a brand against lookalikes: part one"). Part two of this update looks at the more unconventional method of trade dress protection and highlights previous key trade dress cases in Russia.
Attorneys for a well-known Kentucky bourbon maker are knock, knock, knockin' on Bob Dylan’s door.
Heaven Hill Distillery has filed a trademark infringement lawsuit against Heaven’s Door Spirits, a whiskey line co-owned by Dylan that was released earlier this year.
The company's name is a reference to Dylan’s 1973 song Knockin' on Heaven’s Door.
The lawsuit, filed Friday in U.S. District Court in Louisville, argues that the Bardstown-based company was founded by the Shapira family shortly after Prohibition ended in the 1930s and has used the trademark for more than 80 years.
A Heaven Hill attorney sent a cease-and-desist letter to Chicago-based Heaven’s Door in April, saying the start-up distillery’s use of its trademark “will create a likelihood of confusion” with the Kentucky bourbon brand's products.
Trademark disputes in the alcohol industries are oftentimes absurd enough to make the comments section question whether everyone involved was simply drunk. While I'm sure the lawyers on all sides tend to be sober, every once in a while you read a claim in a big-boy legal document that makes you pause and wonder. And then, sometimes, the dispute centers on a public figure punning off his own notoriety, making the trademark claims extra ludicrous.
Meet Bob Dylan. Bob used to be a counterculture folksinger hero who eschewed the trappings of materialism and sang as one of the original social justice warriors. Present-day Bob sings songs in car commercials and owns a whiskey brand. And, hey, Bob's allowed to make money, no matter how jarring this might be to those born decades ago. His Heaven's Door Whiskey is, sigh, allowed to exist. It's also allowed to fight back against the absurd trademark lawsuit brought by Heaven Hill Distillery over its logo and trade dress.
This referral from Estonia was made in the context of proceedings that a collecting society, SNB-REACT, had initiated against an individual, Deepak Mehta, concerning the latter's alleged liability for infringement of the IP rights of 10 trade mark owners.
According to SNB-REACT, Mehta had allegedly registered a number of IP addresses and internet domain names, which unlawfully used signs identical to the trade marks owned by SNB-REACT members, together with websites unlawfully offering for sale goods bearing such signs.
Mehta, however: (1) denied that he had registered the IP addresses and domain names challenged by the claimant; (2) submitted that, even if he owned 38,000 IP addresses, he had merely rented them to third-party companies; and (3) argued that this activity should be regarded as akin to that of a service providing access to an electronic communications network, together with an information transmission service, making him - as a result - eligible for the safe harbour protection under the Estonian provisions corresponding to Articles 12 to 14 of the E-Commerce Directive.
The report, “Creative Markets and Copyright in the Fourth Industrial Era: Reconfiguring the Public Benefit for a Digital Trade Economy,” was authored by Prof. Ruth L. Okediji, the Jeremiah Smith, Jr. professor of law at Harvard Law School.
The report suggests that the rise of emerging technologies such as “big data, robotics, machine learning, and artificial intelligence (AI)” calls for “a more radical conception of global copyright norms” in order to “preserve, and even advance, public benefit in an era of digital trade.”
But what if there might be a middle ground that could thread the needle between the legality of original cartridges and the convenience of emulated ROMs? What if an online lending library, temporarily loaning out copies of ROMs tied to individual original cartridges, could satisfy the letter of the law and the interests of game preservation at the same time?
What if such a library already exists? In fact, it has for 17 years.