1) A good reason to use Linux is the pleasure of saying you do not use Windows when someone asks you to come to their home to fix a computer with some unknown problem. You can simply say that you do not know Windows and cannot fix it.
Not infrequently we find an unrecoverable pirated Windows install full of malware and everything else, with important data that the user does not want to lose but, of course, never bothered to back up. After this grim picture, we can only say: "Sorry, my friend, I do not know Windows, I only use Linux."
Longtime Linux PC vendor System76 has begun teasing a "new open-source computer" they will release in the coming weeks.
System76 [Official Site], the hardware vendor that focuses on putting out well-supported Linux laptops, desktops and servers, is teasing something new.
System76 is launching a new open-source computer, which will be available for pre-order next month. Before announcing the finalized hardware, the company will be releasing a four-part animation each week with "design updates hidden within a game portion of the story". That story will contain "different worlds, each representing an antithesis to open source ideals. These themes are utilized to draw attention to the importance of open source in the evolution of technology". If you're interested, you can sign up here to follow the saga and receive updates leading up to the pre-order.
When you buy a System76 computer today, you aren't buying a machine manufactured by the company. Instead, the company works with other makers to obtain laptops, which it then loads with a Linux-based operating system -- Ubuntu or its own Pop!_OS. There's nothing really wrong with this practice, but still, System76 wants to do better. The company is currently working to manufacture its own computers ("handcrafted") right here in the USA! By doing this, System76 controls the entire customer experience -- software, service, and hardware.
Today, the company announces that the fruits of its labor -- an "open-source computer" -- will be available to pre-order in October. Now, keep in mind, this does not mean the desktop will be available next month. Hell, it may not even be sold in 2018. With that said, pre-ordering will essentially allow you to reserve your spot. To celebrate the upcoming computer, System76 is launching a clever animated video marketing campaign.
Today, the Kubernetes Project released version 1.12. The big updates in this version are the general availability of TLS bootstrapping, a maturing story around scaling, and better multitenancy. Head on over to the CoreOS Blog to check out the full details of this release.
Today, we celebrate this week’s release of Kubernetes 1.12, which brings a lot of incremental feature enhancements and bug fixes across the release that help close issues encountered by enterprises adopting modern containerized systems. Each release cycle, we’re frequently asked about the theme of the release. There are always exciting enhancements to highlight, but an important theme to note is trust and stability.
The Kubernetes project has grown immensely over the last few years and has come to be respected as a leader in container orchestration and management solutions. With that stature comes the responsibility to build APIs and tools that are well-tested, easy to maintain, highly performant, and scalable; qualities that are trusted and stable. In each of the upcoming release cycles, we expect to continue to see a community effort around prioritizing the maturation and stabilization of existing functionality over the delivery of new features.
Two high-profile open-source collaborations are putting their heads together to work out how to take Kubernetes, more familiar in hyperscale environments, out to Internet of Things edge computing projects.
The Kubernetes IoT Edge Working Group is the brainchild of the Cloud Native Computing Foundation (CNCF) and the Eclipse Foundation.
Speaking to The Register, CNCF's Chris Aniszczyk said the idea of using Kubernetes as a control plane for IoT is "very attractive".
That sums up the brief of the working group, he said, "to take the concept of running containers, and expand that to the edge".
Open Networking Summit Europe -- The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced further collaboration between telecom and cloud industry leaders enabled by the Cloud Native Computing Foundation (CNCF) and LF Networking (LFN), fueling migrations of Virtual Network Functions (VNFs) to Cloud-native Network Functions (CNFs).
Three years ago, Mark Russinovich, CTO of Azure, Microsoft's cloud program, said, "One in four [Azure] instances are Linux." Then, in 2017, 40 percent of Azure virtual machines (VMs) were Linux. Today, Scott Guthrie, Microsoft's executive vice president of the cloud and enterprise group, said in an interview: "It's about half now, but it varies on the day because a lot of these workloads are elastic, but sometimes slightly over half of Azure VMs are Linux." Microsoft later clarified, "about half Azure VMs are Linux."
This week we’ve been networking and playing with Virtual Machines. Linux gets a code of conduct, Twitter has been leaking direct messages, Google Chrome 69 signs in to all the things and we round up community goings on and events.
It’s Season 11 Episode 29 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.
He has always had a reputation as someone who provides blunt feedback to engineers, with expletive-laden emails, once describing an Intel fix as "complete and utter garbage".
A couple of surprising things happened in the kernel community on September 16: Linus Torvalds announced that he was taking a break from kernel development to focus on improving his own behavior, and the longstanding "code of conflict" was replaced with a code of conduct based on the Contributor Covenant. Those two things did not quite come packaged as a set, but they are clearly not unrelated. It is a time of change for the kernel project; there will be challenges to overcome but, in the end, less may change than many expect or fear.
Codes of conduct are designed to make open source projects more inviting to everyone, and the idea is catching on. Today, more than 40,000 projects have adopted the Contributor Covenant, including Google's artificial intelligence platform TensorFlow and the increasingly popular programming framework Vue. Even Linux is finally on board: Earlier this month the project adopted the Contributor Covenant, and Torvalds apologized for his past behavior.
André Arko, lead maintainer of the popular Ruby tool Bundler, says the Contributor Covenant has changed the project for the better. Before the project adopted the Covenant, the team struggled to find enough contributors to maintain it. That changed quickly. "We've had dramatically more participation," he adds. That has meant more participation from women, minorities, and other underrepresented groups, but also more contributions from white men.
It’s similar to how a few days ago 3D gun pioneer Cody Wilson also became wanted for “sexual assault”; it seems the US government, through entrapment, finally found a more effective way to attack him and stop his efforts of getting 3D printed guns to people via the Internet. And of course, this is very similar to what happened to Jacob Applebaum and Julian Assange. You would hope that activists and open-source leaders would have learned by now to avoid such traps, where sexuality and women are (ab)used to damage people’s reputations and gain power over them.
A small group of programmers are calling for the rescission of code contributed to Linux, the most popular open source operating system in the world, following changes made to the group’s code of conduct. These programmers, many of whom don’t contribute to the Linux kernel, see the new Code of Conduct as an attack on meritocracy—the belief that people should mainly be judged by their abilities rather than their beliefs—which is one of the core pillars of open source software development. Other developers describe these attacks on the Code of Conduct as thinly veiled misogyny.
It’s a familiar aspect of the culture war that many online and IRL communities are already dealing with, but it has been simmering in the Linux community for years. The controversy came to the surface less than two weeks after Linus Torvalds, the creator of Linux, announced he would temporarily be stepping away from the project to work on “understanding emotions.” Torvalds was heavily involved with day to day decisions about Linux development, so his departure effectively left the community as a body without a head. In Torvalds’ absence, certain developers seem committed to tearing the limbs from this body for what they perceive as an attack on the core values of Linux development.
[...]
Over the last three years, however, the verbal abuse among Linux developers, a lot of it coming from Torvalds himself, hardly abated. In fact, Elon University computer science professor Megan Squire even used machine learning to recognize Torvalds’ insults, which numbered in the thousands during a four year period. According to Squire’s analysis, most of this abusive language wasn’t gendered.
Linux kernel developers tend to take a dim view of the C++ language; it is seen, rightly or wrongly, as a sort of combination of the worst (from a system-programming point of view) features of higher-level languages and the worst aspects of C. So it takes a relatively brave person to dare to discuss that language on the kernel mailing lists. David Howells must certainly be one of those; he not only brought up the subject, but is working to make the kernel's user-space API (UAPI) header files compatible with C++.
If somebody were to ask why this goal is desirable, they would not be the first to do so. The question has not actually gotten a complete answer, but some possible motivations come to mind. The most obvious one is that some developers might actually want to write programs in C++ that need access to the kernel's API; there is no accounting for taste, after all. For most system calls, the details of the real kernel API (as opposed to the POSIX-like API exposed by the C library) tend to be hidden, but there are exceptions; the most widespread of those is almost certainly the ioctl() system call. There is a large set of structures used with ioctl(); their definition is a big part of the kernel's UAPI. If a C++ compiler cannot compile those UAPI definitions, then those ioctl() calls cannot be invoked from C++.
There was no mention of anyone having yet done so.
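For illustration only, here is a minimal sketch (not from the article) of what such a C++ consumer of the kernel UAPI looks like: a small program that includes a UAPI header and issues an ioctl(). The specific BLKGETSIZE64 ioctl and the /dev/sda default path are illustrative assumptions; the point is that the #include only works if the UAPI headers parse as valid C++.

```cpp
// Minimal sketch: invoking an ioctl() from C++ using kernel UAPI definitions.
// BLKGETSIZE64 comes from <linux/fs.h>; if a UAPI header used C++ keywords
// such as "class", "new" or "private" as identifiers, an include like this
// would fail to compile with g++, which is the incompatibility being removed.
#include <cstdint>
#include <cstdio>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>   // kernel UAPI header providing block-device ioctls

int main(int argc, char **argv)
{
    const char *dev = (argc > 1) ? argv[1] : "/dev/sda";  // illustrative default
    int fd = open(dev, O_RDONLY);
    if (fd < 0) { std::perror("open"); return 1; }

    std::uint64_t bytes = 0;
    if (ioctl(fd, BLKGETSIZE64, &bytes) < 0) {   // UAPI ioctl invoked from C++
        std::perror("ioctl");
        close(fd);
        return 1;
    }
    std::printf("%s: %llu bytes\n", dev, static_cast<unsigned long long>(bytes));
    close(fd);
    return 0;
}
```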
There are a couple of additional points to be borne in mind: one, when corporate contributions are made to the kernel, the developer has to assign copyright to the corporation. Ninety percent of code contributed to Linux fits in this bracket.
And two, soon after the SCO Group announced its decision in 2003 to sue IBM for copyright over UNIX code that it (SCO) claimed to own, the Linux kernel project decided to ask developers to provide a standard, signed form in which they assigned copyright for code changes they submitted to the project to the people running said project.
These two factors may not get in the way of some upstart wanting his or her code back, but they definitely will not make it any easier.
The second source for this article is a man of the past, Eric Raymond, once a luminary of the open source community, but now only a fringe player. Raymond wrote a blog post about the Torvalds episode, and the throwaway line "let me confirm that this threat (ie. developers asking for their code back) has teeth" seems to have got the author of the article in question a little excited.
The Linux multi-queue block I/O layer (blk-mq) has been working out well for delivering very fast performance, particularly for modern NVMe solid-state storage and SCSI drives. But it turns out run-time power management hasn't been in use when blk-mq is active.
The multi-queue block code brings per-CPU software queues, and these software queues can map to hardware issue queues. These multiple queues can reduce locking contention, and the overall blk-mq design fits well with the characteristics of current high-performance solid-state drives. The key drivers have been ported over to blk-mq for a while now (since the end of the Linux 3.x / early 4.x kernels), and on Linux systems where it is not used by default it can be activated easily via the scsi_mod.use_blk_mq=1 boot option.
Separate from the recent FUSE performance work on making FUSE faster with the eBPF in-kernel JIT, which hasn't been staged for mainlining, "File-Systems in User-Space" are set to see better performance in the next kernel (Linux 4.20~5.0) thanks to other changes.
Already having been queued for this next kernel cycle is copy_file_range support for FUSE to yield more efficient copy operations.
More than anything, open source programs are responsible for fostering “open source culture,” according to a survey The New Stack conducted with The Linux Foundation’s TODO Group. By creating an open source culture, companies with open source programs see the benefits we’ve previously reported, including increased speed and agility in the development cycle, better licence compliance and more awareness of which open source projects a company’s products depend on.
In an effort to identify early edge applications, we recently partnered with IHS Markit to interview edge thought leaders representing major telcos, manufacturers, MSOs, equipment vendors, and chip vendors that hail from open source, startups, and large corporations from all over the globe. The survey revealed that edge application deployments are still young, but they will require new innovation and investment, much of it built on open source.
The research investigated not only which applications will run on the edge, but also deployment timing, revenue potential and existing and expected barriers and difficulties of deployment. Presented onsite at ONS Europe by IHS Markit analyst Michael Howard, the results represent an early look at where organizations are headed in their edge application journeys.
American courier delivery services giant FedEx has joined Hyperledger, an open-source project established to improve cross-industry blockchain technologies, according to a press release published September 26.
Hyperledger, which is hosted by the Linux Foundation, enables organizations to build blockchain-based industry-grade applications, platforms and hardware systems in the context of their individual business transactions.
Global shipping company FedEx has joined Hyperledger, an open-source blockchain venture that now has more than 270 members, according to a press release.
FedEx is taking part in the collaborative project “to advance cross-industry blockchain technologies,” which already includes members such as American Express, Deutsche Bank, IBM, Intel and JPMorgan.
Hyperledger, an open source collaborative effort created to advance cross-industry blockchain technologies, has announced 14 members, including FedEx, have joined its growing global community.
More than 270 organisations are now contributing to the growth of Hyperledger's open source distributed ledger frameworks and tools.
Hyperledger, an umbrella project of open source blockchains, announced on Wednesday that it will be collaborating with 14 new members who have joined its global community.
As of now, more than 270 members are contributing to the growth of Hyperledger's open source distributed ledger frameworks and tools.
Global shipping giant FedEx has just become one of the 14 newest members to join the Hyperledger consortium.
Hyperledger announced that FedEx, Honeywell International, as well as a number of crypto startups, have become the newest participants in its mission to build blockchain platforms and applications for enterprises, according to a press release on Wednesday.
The Linux Foundation’s Hyperledger, launched in 2016, has attracted many members to its singular technology, the latest being FedEx, Honeywell International Inc., and Constellation Labs. Wanchain (WAN) also announced today that it has officially joined the Hyperledger community, where it will focus on “blockchain interoperability”.
Hyperledger is an open source project focused on getting blockchains from different cryptocurrencies and industries to work together and share value. Members of the Hyperledger community come from different sectors of the world economy. In a press release made available by the company, Hyperledger announced 14 new members cutting across different fields of endeavour, one of which is Wanchain.
FedEx, the giant US courier company, a proactive adopter of blockchain technology and a BiTA member, has joined the Linux Foundation-hosted open-source project Hyperledger to further advance the use of distributed ledgers in logistics and transportation.
We are very pleased to announce that invite-only testing for Bitcoin Integration (Wanchain 3.0) is now live: see below for registration details. This is the Alpha testnet for Wanchain’s 3.0 launch that has been planned to go live by the end of 2018. We have been making remarkable progress on our technology and are excited to deliver this Alpha testnet ahead of schedule.
Hyperledger, an open source collaborative effort created to advance cross-industry blockchain technologies, today announced 14 members have joined its growing global community. More than 270 organizations are now contributing to the growth of Hyperledger's open source distributed ledger frameworks and tools.
The Linux Foundation announced further collaboration between the telecom and cloud industries through its Cloud Native Computing Foundation (CNCF) and LF Networking (LFN) in order to fuel migrations of virtual network functions (VNFs) to cloud-native network functions (CNFs).
Two of the fastest-growing Linux Foundation projects – ONAP (part of LF Networking) and Kubernetes (part of CNCF) – are coming together in next-generation telecom architecture as operators evolve their VNFs into CNFs running on Kubernetes. Compared to traditional VNFs (network functions encapsulated in a virtual machine running in a virtualized environment on OpenStack or VMware, for example), CNFs (network functions running on Kubernetes on public, private, or hybrid cloud environments) are lighter weight and faster to instantiate, the foundation said. Container-based processes are also easier to scale, chain, heal, move and back up.
Cloud Native Computing Foundation (CNCF), chiefly responsible for Kubernetes, and the recently established Linux Foundation Networking (LF Networking) group are collaborating on a new class of software tools called Cloud-native Network Functions (CNFs).
CNFs are the next generation Virtual Network Functions (VNFs) designed specifically for private, public and hybrid cloud environments, packaged inside application containers based on Kubernetes.
Our ‘Blockchain development made easy’ series continues with Hyperledger Iroha, a simple blockchain platform you can use to make trusted, secure, and fast applications. What are the advantages and how can developers get started with it? We talked to Makoto Takemiya, co-founder and co-CEO of Soramitsu about what’s under this project’s hood.
2018 marks the year that open source disrupts yet another industry, and this time it’s financial services. The first-ever Open FinTech Forum, happening October 10-11 in New York City, focuses on the intersection of financial services and open source. It promises to provide attendees with guidance on building internal open source programs along with an in-depth look at cutting-edge technologies being deployed in the financial sector, such as AI, blockchain/distributed ledger, and Kubernetes.
Several factors make Open FinTech Forum special, but the in-depth sessions on day 1 especially stand out. The first day offers five technical tutorials, as well as four working discussions covering open source in an enterprise environment, setting up an open source program office, ensuring license compliance, and best practices for contributing to open source projects.
Last month we noted a new Gallium3D driver in-development by Intel dubbed "Iris" and potentially replacing their existing "classic i965" Mesa driver for recent generations of Intel HD/UHD/Iris graphics hardware. Intel developers have begun talking about this new open-source Linux GPU driver today at the XDC 2018 conference in A Coruña, Spain.
Support for the Hygon Dhyana, a Chinese x86 server CPU based on AMD Zen/EPYC, will find its way into the next Linux kernel cycle.
The partnership between AMD and Haiguang IT Co was announced earlier this year for creating x86 CPUs targeting the Chinese server market. Hygon "Dhyana" is the first family of these new x86 CPUs licensed from AMD and based upon their Zen / Family 17h architecture. For the past several months there have been rounds of kernel patches sent out for review adding this Hygon Dhyana support to the Linux kernel.
It was a bit nerve-racking seeing Mesa 18.1 still in use by the Ubuntu 18.10 "Cosmic Cuttlefish" in recent days, but fortunately it looks like the feature freeze exception is secured and Mesa 18.2 is on its way to landing.
Since yesterday, Mesa 18.2.1 is now queued in cosmic-proposed. It's not in the official "Cosmic" archive yet, but should soon be -- well in time for the Ubuntu 18.10 release expected on 18 October.
Intel VT-d revision 3.0 adds a "Scalable Mode" translation mode for enabling Scalable I/O virtualization and the patches have been in the works for supporting this within the Linux kernel.
Intel open-source developer Ashok Raj has written a detailed blog post covering this Intel virtualization enhancement for directed I/O, its benefit to performance, and how it overcomes existing I/O virtualization shortcomings.
For developers interested in delivering cross-platform Vulkan games/applications and using MoltenVK for delivering macOS/iOS support, a new release is available that has a number of feature additions.
At XDC2018 in Spain this morning the talks were focused on testing of Mesa / continuous integration. During the talk by Mark Janes, the Intel open-source crew announced the public availability of all their CI data.
While it is still a ways from release, the codename of the successor to the AMD Radeon "Navi" GPUs might be Arcturus.
Navi is the codename of the next-gen AMD GPUs due out in 2019 and is the nickname of the star Gamma Cassiopeiae. The current-generation Vega also ties into the astronomical theme, as it is the brightest star in the Lyra constellation. It was "Polaris" that kicked off this theme with the Radeon RX 480 series launch. Now it looks like the AMD Navi successor might be Arcturus, a large red star and the brightest in the constellation Boötes.
This year Intel HDCP support was merged into the mainline Linux kernel for those wanting to utilize this copy protection system in combination with a supported Linux user-space application, which for now appears to be limited to Chrome OS. HDCP 2.2 support is the latest revision now being worked on for the open-source Intel Direct Rendering Manager driver.
We've known Red Hat was working on converting the VirtualBox "vboxvideo" DRM/KMS driver to using the atomic APIs for atomic mode-setting to replace the legacy APIs and now those patches are out there.
Red Hat's Hans de Goede sent out the 15 patches on Wednesday for wiring up the atomic mode-setting interfaces to replace the legacy APIs. Red Hat developers have been doing this as they were the ones pushing for getting the VirtualBox guest drivers into the mainline kernel itself with Oracle's developers working on VirtualBox sadly lacking that initiative.
With macOS Mojave having been released earlier this week, I've been benchmarking this latest Apple operating system release on a MacBook Pro against Ubuntu 18.04.1 LTS with the latest updates, as well as Intel's high-performance Clear Linux rolling-release operating system, to see how the performance compares.
macOS Mojave is focused more on delivering the new "dark mode" and various app improvements than on performance, but from our side it's always interesting to see how Apple's latest macOS releases compare to the performance of Linux distributions on Apple's own hardware. For comparison, macOS 10.13.6 High Sierra was benchmarked alongside macOS 10.14.0 Mojave.
In the days when Linux was a fledgling operating system, font handling was often identified as a major weakness. It was true that Linux then had problems with dealing with TrueType fonts, its font subsystem was prehistoric compared to its competitors, there was a dearth of decent fonts, difficulties in adding and configuring fonts made it almost impossible for beginners to improve matters for themselves, and jagged fonts with no anti-aliasing just added to a rather amateurish looking desktop.
Fortunately, the situation is considerably better these days, with higher-quality user interface typography. With the continually improving FreeType font engine producing high-quality output and natively supporting scalable font formats like TrueType, Linux is making great strides, although there’s still some way to go. Dealing with fonts under Linux can sometimes be tricky.
After some pretty quick updates following the initial release of Steam Play, things have quietened down somewhat. However, work on the next version of Steam Play is in progress.
It was expected it would slow down after the initial release period, since they were rapidly pushing out fixes to get it into a somewhat stable state. Stable enough for them to put it into the main Steam client that is, since you no longer need to opt into any beta to access Steam Play.
Blade Symphony: Harmonious Prelude, a big update to this Source Engine-powered "tactical slash-em-up" sword-fighting video game, is now available, complete with Linux support.
Blade Symphony: Harmonious Prelude is a unique sword fighting action game with support for 1 vs. 1, 2 vs. 2, and sandbox free-for-all fighting, along with other game modes. This is the sword fighting game we talked about earlier this month.
Blade Symphony, a sword-fighting game from Puny Human powered by the Source Engine, has officially been released for Linux today.
The Linux release comes at the same time as the game receiving a massive update called Harmonious Prelude. There's so many things that have changed, it would be completely silly for me to list them all here. It's quite a different game, but you don't need to take my word for it as it's also free to try for a few days!
Mixing in some pretty good XCOM combat mechanics with a dash of FTL-style travelling through nodes, Depth of Extinction is now out.
Developed by HOF Studios, Depth of Extinction takes you on a bit of an epic journey through an unforgiving underwater world. I'm a big fan of both XCOM and FTL and I can see the inspiration clearly, although the way everything fits together makes it really quite compelling and unique in its own right.
The Linux version of BATTLETECH [Official Site] has seen some delays since the game’s launch earlier in the year. The communication from the developer has been spotty at times so it’s good to see that they’ve finally gotten around to delivering on a Linux version for the game. It’s not quite bug-free yet so it’s available to download by opting into the “public_beta_linux” branch on Steam.
I must admit, I was a little slow on the uptake with the last article about Geneshift getting a Battle Royale mode as it's now already out.
Pig Eat Ball from Mommy's Best Games is frankly one of the most utterly bonkers games I think I've ever played.
I don't quite know how to describe it, that's just how completely crazy the game is! To be clear though, I've had a ridiculous amount of fun with this one. It's quite a surprise really, as I was a bit nervous about diving into Pig Eat Ball thinking the weird factor might end up working against it. Fear not, while it dives in at the deep end of the outlandish pool, it does come out smiling at you.
The Total War team along with Feral Interactive have confirmed that Total War: THREE KINGDOMS is officially coming to Linux.
Honestly, I don't think this was supposed to be confirmed quite so soon. Especially given that Feral Interactive only teased something big for us next week. This has replaced the "Working with fire and steel" teaser that was on Feral's port radar, with one teaser still not named yet.
Feral Interactive announced today the latest Total War game they are porting to Linux and macOS.
I'm going to assume this is either a game about to be released or they're going to reveal the actual games they've been teasing lately. We know they're porting Total War: WARHAMMER II but there's two other Linux ports they've teased. They seem to be moving a bit quicker with things lately, especially since they only released Life is Strange: Before the Storm for Linux two weeks ago.
The impressive sci-fi action-platformer MegaSphere has neon lighting coming out of all ends and it looks incredible. Last night, the "TURMS Update" went live which included new enemies to face, entirely new areas to explore as well as some new game mechanics.
Gather your party together as Heroes of Hammerwatch has a rather big update out and there's some fun stuff.
The first big change is that it now has full modding support along with Steam Workshop support, so it's going to be really interesting to see what people come up with for such a game. I'll be honest, I still haven't gotten around to playing this one. Sounds like I will have to with goodies like this coming to it!
The Linux desktop ecosystem offers multiple window managers (WMs). Some are developed as part of a desktop environment. Others are meant to be used as standalone applications. This is the case with tiling WMs, which offer a more lightweight, customized environment. This article presents five such tiling WMs for you to try out.
The wait is over: the KDE Ubuntu 18.04 release is finally here.
Developers behind the KDE-centric Linux distro have announced that they’ve successfully rebased KDE Neon on Ubuntu 18.04 LTS ‘Bionic Beaver’, which was released earlier this year.
With the bump to Bionic, KDE Neon users unlock access to newer packages, third-party tools, and hardware drivers. They also benefit from a more recent Linux kernel.
The idea is not to hack on complex applications for now, but to turn wannabe KDE hackers into actual KDE hackers, so I’ll focus on small tasks at first until we have a solid base here, the same way I did when I joined KDE and had those sessions with Sandro Andrade at the Ruy Barbosa University. Also, my German language skills are really weak; I’m trying to learn some German here and I believe this is a good way to meet people.
Today we’re releasing the latest version of Krita! In the middle of our 2018 fundraiser campaign, we’ve found the time to prepare Krita 4.1.3. There are about a hundred fixes, so it’s a pretty important release and we urge everyone to update! Please join the 2018 fundraiser as well, so we can continue to fix bugs!
A few days ago Jupiter Broadcasting’s Chris Fisher approached me about doing an interview for his Linux Unplugged podcast, so I said sure! I talked about the Usability & Productivity initiative, Kubuntu and KDE Neon, my history at Apple, and sustainable funding models for open-source development.
We chat with Nate Graham who’s pushing to make Plasma the best desktop on the planet. We discuss his contributions to this effort, and others.
Vince is a beautiful modern GTK theme and it is compatible with all GTK3 and GTK2-based Desktop Environments including Xfce, Mate, Gnome, etc.
It has 3 colour variants which are Materia, Materia-dark, and Materia-light and they all feature a minimalist UI with clean design elements and neat animation effects.
It is based on the nana-4 Material Design theme (formerly Flat-Plat) which is based on GNOME’s Adwaita theme.
This is not the first time a theme is what could be called a third-generation fork of another theme. Sometimes the “generation” count goes as high as six. But this is open source, so more power to the developer.
BlankOn 11 Uluwatu is the latest version of the BlankOn Linux distribution. This release ships with a custom desktop environment called Manokwari, based on GNOME Shell 3.26.2, is powered by the Linux 4.14 kernel series, and is based on Debian Sid. The BlankOn installer has been improved; developed using HTML5 technology, Java, and Vala, it now supports UEFI partitioning.
It includes LibreOffice 6.0.1.1 as the default office suite, Firefox Quantum 58 as the default browser, GIMP 2.8.20, Inkscape 0.92, Audacious 3.9, Corebird as the default Twitter app, VLC media player 3.0, and GNOME apps 3.26.
Bodhi Linux 5.0, the latest release of Bodhi Linux, has been released by Jeff Hoogland. This release ships with the latest Moksha Desktop 0.3, is powered by the Linux 4.15 kernel series, and is based on Canonical’s long-term supported Ubuntu 18.04 LTS (Bionic Beaver).
Bodhi Linux 5.0 promises to offer users a rock-solid, Enlightenment-based Moksha Desktop experience, improvements to the networking stack, and a fresh new look based on the popular Arc GTK Dark theme but colorized in Bodhi green. It also comes with a new default wallpaper, new login and boot splash screen themes, as well as an AppPack version for those who want a complete application suite installed by default on their new Bodhi Linux installations.
Since LinDoz is now officially available for download, I will wrap up with a focus on what makes MakuluLinux LinDoz a compelling computing option. I no doubt will follow the Flash and the Core edition releases when those two distros are available in final form.
One of the more compelling attributes that LinDoz offers is its beautiful form. It is appealing to see. Its themes and wallpapers are stunning.
For the first time, you will be able to install the new LinDoz once and forget about it. LinDoz is now a semi-rolling release. It receives patches directly from Debian Testing and MakuluLinux.
Essential patches are pushed to the system as needed.
Caution: The LinDoz ISO is not optimized for virtual machines. I tried it and was disappointed. It loads but is extremely slow and mostly nonresponsive. Hopefully, the developer will optimize the ISO soon to provide an additional option for testing or using this distro.
However, I burned the ISO to a DVD and had no issues with the performance in live session. I installed LinDoz to a hard drive with very satisfying results.
Initially launched this summer on July 27, 2018, the SparkyLinux 5.5 "Nibiru" Rolling operating system series brought all the latest updates and security fixes from the Debian Testing repositories a.k.a. Debian GNU/Linux 10 "Buster," and was available as MinimalGUI (Openbox), MinimalCLI, and LXQt editions.
New ISOs were made available last week with even more recent updates from the Debian Testing repositories, and today the special editions were released too as SparkyLinux 5.5 GameOver, SparkyLinux 5.5 Multimedia, and SparkyLinux 5.5 Rescue, synced with the Debian Buster repositories as of September 24, 2018.
The policy aims to cover all copyright-related aspects, bringing Gentoo in line with the practices used in many other large open source projects. Most notably, it introduces a concept of Gentoo Certificate of Origin that requires all contributors to confirm that they are entitled to submit their contributions to Gentoo, and corrects the copyright attribution policy to be viable under more jurisdictions.
Indeni, provider of the crowd-sourced network automation platform, today announced its sponsorship of AnsibleFest 2018 to showcase the collaboration between Indeni and Red Hat Ansible Automation around initiatives designed to benefit IT operations and help advance network automation solutions.
Red Hat is introducing an offering to help provide an open pathway to digital transformation.
Designed to help enterprises cut costs and speed innovation through cloud-native and container-based technologies, the Red Hat infrastructure migration solution enables enterprises to break down closed technology silos centered on proprietary virtualisation.
The number one complaint we hear from customers is their struggle to run tomorrow’s workloads on yesterday’s infrastructure. With a lot of new technologies coming to the forefront—containers, microservices, and so on—modern workloads are significantly different than even three or four years ago. They’re now distributed across multiple footprints, and organizations are struggling to keep pace.
Open source software provider Red Hat announced that it’s been selected as a core technology partner by ‘X By Orange’, the new subsidiary of Orange Spain focused on business-to-business (B2B) digital services. Launched earlier this month, X by Orange is building a greenfield, cloud-native platform, enabling the service provider to embrace DevOps and agile development and more rapidly create and deliver digital services to business customers.
Orange Spain subsidiary X by Orange is embracing a cloud-native platform to deliver digital services to its business customers.
X by Orange is notable because it eschews traditional network infrastructure and legacy hardware by instead creating a separate platform that is software-based. Using Red Hat's OpenShift Container Platform, along with its consulting team, X by Orange is able to put services in a public cloud by using the greenfield, cloud-native platform.
Red Hat, Inc. (NYSE: RHT), the world's leading provider of open source solutions, today announced that X By Orange, a subsidiary of Orange Spain focused on business-to-business (B2B) digital services, selected Red Hat as a core technology partner to help create its software-defined strategy with Red Hat OpenShift Container Platform in collaboration with Red Hat Consulting. With the industry’s most comprehensive enterprise Kubernetes platform, X by Orange is building a greenfield, cloud-native platform, enabling the service provider to embrace DevOps and agile development and more rapidly create and deliver digital services to business customers.
Red Hat OpenShift supports two workflows for building container images for applications: the source and the binary workflows. The binary workflow is the primary focus of the Red Hat OpenShift Application Runtimes and Red Hat Fuse product documentation and training, while the source workflow is the focus of most of the Red Hat OpenShift Container Platform product documentation and training. All of the standard OpenShift Quick Application Templates are based on the source workflow.
A developer might ask, “Can I use both workflows on the same project?” or, “Is there a reason to prefer one workflow over the other?” As a member of the team that developed Red Hat certification training for OpenShift and Red Hat Fuse, I had these questions myself and I hope that this article helps you find your own answers to these questions.
In many ways, age brings refinement. Wine, cheese, and, in some cases, people all improve as they grow older. But in the world of enterprise IT, age has a different connotation. Aged systems and software can bring irrelevance and technical debt and, at worst, increased security risks. With the rise of Linux containers as a functional underpinning to the digitally transforming enterprise, the ill effects of technological age are front and center.
To think of it more simply: containers age like milk, not like wine. Think of it in terms of food: milk is a key component in cooking, from baking to sauces. If the milk sours or goes bad, so too does the recipe. The same thing happens to containers, especially as they are being looked to as key components for production systems. A stale or “soured” container could ruin an otherwise promising deployment.
Many modern developers have learned that ‘sticking with HEAD’ (the most recent stable release) can be the best way to keep their application more secure. In this new ‘devops’ world there’s a fine line between using the latest and greatest, and breaking changes introduced by an upgrade. In this post we’ll explore some configuration options in Red Hat OpenShift which can make keeping up with the latest release easier, while reducing the impact of breaking changes. For more information on image streams I encourage you to read the source-to-image FAQ by Maciej Szulik.
[...]
Using scheduled source-to-image base image streams, along with a build configuration that disables ImageChange triggers, we can strike a nice balance between “sticking with HEAD” and avoiding breaking changes. Consider updating the pre-installed image streams in the ‘openshift’ project to allow your developers to get the latest security updates in language runtimes and build tools.
While I used CentOS images for demonstration purposes in this post, I’d recommend using RHEL images for your production applications. The Red Hat Container Catalogue contains regularly updated and certified container images, fully supported by Red Hat.
FORTIFY_SOURCE provides lightweight compile and runtime protection to some memory and string functions (original patch to gcc was submitted by Red Hat). It is supposed to have no or a very small runtime overhead and can be enabled for all applications and libraries in an operating system. The concept is basically universal meaning it can be applied to any operating system, but there are glibc specific patches available in gcc-4 onwards. In gcc, FORTIFY_SOURCE normally works by replacing some string and memory functions with their *_chk counterparts (builtins). These functions do the necessary calculations to determine an overflow. If an overflow is found, the program is aborted; otherwise control is passed to the corresponding string or memory operation functions. Again all this is normally done in assembly so the overhead is really minimal.
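As a rough illustration of that mechanism (the program and build flags below are our own sketch, not taken from the post), a buffer whose size the compiler can see gets its strcpy() rewritten into __strcpy_chk(), and an oversized copy aborts at run time instead of silently overflowing:

```cpp
// Sketch of what FORTIFY_SOURCE catches; build with:
//   g++ -O2 -D_FORTIFY_SOURCE=2 fortify_demo.cpp -o fortify_demo
// At -O1 and above with _FORTIFY_SOURCE defined, glibc/gcc replace the
// strcpy() below with __strcpy_chk(), which knows sizeof(buf) at compile
// time. Running the program with an argument longer than 7 characters
// triggers "*** buffer overflow detected ***" and the process is aborted.
#include <cstdio>
#include <cstring>

int main(int argc, char **argv)
{
    char buf[8] = "";              // destination whose size is known statically
    if (argc > 1)
        std::strcpy(buf, argv[1]); // fortified into __strcpy_chk(buf, src, 8)
    std::printf("copied: %s\n", buf);
    return 0;
}
```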
Six years ago, when Red Hat sponsored the Grace Hopper Celebration of Women in Computing (GHC) event for the first time, we had a small presence. There were just five Red Hatters in attendance! Being new to the event, few people knew who we were, and even fewer were familiar with open source. It was an exciting time to join this event, because across the industry, the topic of women in tech was beginning to gain momentum.
Today the idea of diversity and inclusion isn’t a new topic, but it’s still a crucial one. The role that women play in tech and the importance of creating a strong pipeline of talent will be something the industry will need to continue to address.
Behavioral changes can make desktop users grumpy; that is doubly true for changes that arrive without notice and possibly risk data loss. Such a situation recently arose in the Fedora 29 development branch in the form of a new "suspend-then-hibernate" feature. This feature will almost certainly be turned off before Fedora 29 reaches an official release, but the discussion and finger-pointing it inspired reveal some significant differences of opinion about how this kind of change should be managed.
As is my habit, I upgraded my laptop at Beta time. dnf system-upgrade didn’t work for me because of some dependency issues. In the process of working through a dnf upgrade, I discovered that it was due to some odd homegrown Python RPMs I’d made and forgotten about, and gource, which was still FTBFS (failing to build from source). After working those out, it was uneventful.
Here’s a summary of some of the bugs against the Debian Policy Manual that are thought to be easy to resolve.
In September 2018, I did 10 hours of work on the Debian LTS project as a paid contributor. Thanks to all LTS sponsors for making this possible.
Twice a year, a new version of Ubuntu is released -- in April and October. We are currently in September, meaning a new release is just around the corner. As per the normal naming guidelines (YY.MM), it will be version 18.10. In addition to a number, Canonical assigns a fun name too -- based on an animal, alphabetically, preceded by a word that starts with the same letter. In this case, Ubuntu 18.10 is using the letter "C." What is it called? Cosmic Cuttlefish.
The name and version number are only part of the tradition, however. In addition, Canonical releases a special wallpaper based on the name. The animal is often a line drawing with the background using the classic Ubuntu magenta/orange gradient color. Today, on Twitter, Canonical unveils the official Cosmic Cuttlefish wallpaper.
The Ubuntu team is pleased to announce the final beta release of the Ubuntu 18.10 Desktop, Server, and Cloud products.
Codenamed "Cosmic Cuttlefish", 18.10 continues Ubuntu's proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, introducing new features and fixing bugs.
This beta release includes images from not only the Ubuntu Desktop, Server, and Cloud products, but also the Kubuntu, Lubuntu, Ubuntu Budgie, UbuntuKylin, Ubuntu MATE, Ubuntu Studio, and Xubuntu flavours.
The beta images are known to be reasonably free of showstopper CD build or installer bugs, while representing a very recent snapshot of 18.10 that should be representative of the features intended to ship with the final release expected on October 18th, 2018.
Ubuntu, Ubuntu Server, Cloud Images: Cosmic Final Beta includes updated versions of most of our core set of packages, including a current 4.18 kernel, and much more.
The Ubuntu 18.10 Beta was released today for the official desktop, server, and cloud products. As well, 18.10 betas are out today for Kubuntu, Lubuntu, Ubuntu Budgie, Ubuntu Kylin, Ubuntu MATE, Ubuntu Studio, and Xubuntu.
It's been a busy Ubuntu 18.10 cycle, and for desktop users the most evident change is the new default theme for the GNOME Shell session. Ubuntu 18.10 brings many "under the hood" upgrades, from the GCC 8 compiler and the big X.Org Server 1.20 release to the new Linux 4.18 kernel and a lot of other package upgrades.
Developers, bug battlers, and enthusiastic fans alike are invited to download Ubuntu 18.10 beta to help test the release ahead of its stable release next month.
This is the only beta build that Ubuntu or its community cohorts have released this cycle. The opt-in beta that flavors like Kubuntu and Ubuntu MATE usually make use of? Well, that was retired from the 18.10 release schedule.
Anyway, keep reading for a condensed overview of the highlights of Ubuntu 18.10 beta, or scroll on down to the download section to acquire an ISO ripe for throwing on the nearest suitably-sized USB drive.
Ubuntu MATE 18.10 is a modest, yet strategic, upgrade over our 18.04 release. If you want bug fixes and improved hardware support then 18.10 is for you. For those who prefer staying on the LTS then everything in this 18.10 release is also important for the upcoming 18.04.2 release. Read on to learn more…
We are preparing Ubuntu MATE 18.10 (Cosmic Cuttlefish) for distribution on October 18th, 2018. With this Beta pre-release, you can see what we are trying out in preparation for our next (stable) version.
Thanks to all the hard work from our contributors, we are pleased to announce that the Lubuntu Cosmic Cuttlefish Beta (soon to be 18.10) has been released!
Octavo unveiled a 27 x 27mm OSD335x C-SiP package that builds on its previous Sitara AM335x based SiP by adding up to 16GB eMMC and an oscillator along with 1GB DDR3, PMIC, LDO, and EEPROM.
Octavo Systems unveiled the OSD335x C-SiP, which is its most highly integrated System-In-Package (SiP) module yet. Like its earlier models, the OSD335x C-SiP integrates the Cortex-A8 based Texas Instruments Sitara AM3358 SoC with RAM and other core features, and for the first time adds eMMC and an oscillator.
Written in C90, Wasmjit is a small embeddable WebAssembly runtime. It is portable to most environments but it primarily targets a Linux kernel module that can host Emscripten-generated WebAssembly modules.
Low-tech Magazine was born in 2007 and has seen minimal changes ever since. Because a website redesign was long overdue — and because we try to practice what we preach — we decided to build a low-tech, self-hosted, and solar-powered version of Low-tech Magazine. The new blog is designed to radically reduce the energy use associated with accessing our content.
I finally had my own house. It was a repossession, and I bought it for a song. What was supposed to be a quick remodel quickly turned into the removal of most of the drywall in the house. There was a silver lining on this cloud of drywall dust and loose insulation. Rather than constantly retro-fitting cabling and gadgets in as needed, I could install everything ahead of time. A blank canvas, when the size of a house, can overwhelm a hacker. I’ve spent hours thinking through the infrastructure of my house, and many times I’ve wished for a guide written from a hacker’s perspective. This is that guide, or at least the start of it.
IEI announced a “KINO-DH310” Mini-ITX board and a rugged “FLEX-BX200-Q370” box PC, both with 8th Gen “Coffee Lake” CPUs. The box PC ships with optional PCIe cards for AI acceleration.
We’ve already seen a lot of COM Express modules based on Intel’s 8th Gen “Coffee Lake” CPUs, and now we’re starting to see boards and systems. IEI just launched three Coffee Lake products — the KINO-DH310 Mini-ITX board and “FLEX-BX200-Q370” embedded PC, covered below, as well as an IMBA-Q370 ATX board you can check out on your own. The FLEX-BX200-Q370, which provides optional AI accelerator boards, uses neither of the other boards as a mainboard.
Android's Project Treble is meant as a way to reduce the fragmentation in the Android ecosystem. It also makes porting Android 8 ("Oreo"—the first version to mandate Treble) more difficult, according to Fedor Tcymbal. He described the project and what it means for silicon and device vendors in a talk at Open Source Summit North America 2018 in Vancouver, Canada.
Tcymbal works for Mera, which is a software services company. It has worked with various system-on-chip (SoC) and device vendors to get Android running on their hardware. The company has done so for several Android versions along the way, but Android 8 "felt very different" because of Treble. It was "much more difficult to upgrade" to Oreo than to previous Android versions.
He put up several quotes from Google employees describing Treble (slides [PDF]). Those boil down to an acknowledgment that the changes made for Treble were extensive—and expensive. More than 300 developers worked for more than a year on Treble. It makes one wonder what could justify that amount of effort, Tcymbal said.
Facebook runs a lot of programs and it tries to pack as many as it can onto each machine. That means running close to—and sometimes beyond—the resource limits on any given machine. How the system reacts when, for example, memory is exhausted, makes a big difference in Facebook getting its work done. Tejun Heo came to 2018 Open Source Summit North America to describe the resource control work that has been done by the team he works on at Facebook.
[...]
It is difficult to tell whether a process is slow because of some inherent limitation in the program or whether it is waiting for some resource; the team realized it needed some visibility into that. Johannes Weiner has been working on the "pressure stall information" (PSI) metric for the last two years. It can help determine that "if I had more of this resource, I might have been able to run this percentage faster". It looks at memory, I/O, and CPU resource usage for the system and for individual cgroups to derive information that helps in "determining what's going on in the system".
PSI is used for allocating resources to cgroups, but is also used by oomd, which is the user-space OOM killer that has been developed by the team. Oomd looks at the PSI values to check the health of the system; if those values are too bad, it will remediate the problem before the kernel OOM killer gets involved.
The configuration of oomd can be workload-dependent; if the web server is being slowed down more than 10%, that is a big problem, Heo said. On the other hand, if Chef or YUM are running 40% slower, "we don't really care". Oomd can act in the first case and not in the second because it provides a way to specify context-specific actions. There are still some priority inversions that can occur and oomd can also help ameliorate those.
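As a hypothetical sketch of how a user-space tool such as oomd consumes this data (the /proc/pressure paths and line format below follow the PSI interface that later landed upstream, an assumption on our part rather than a detail from the talk), polling the per-resource pressure files looks roughly like this:

```cpp
// Hypothetical sketch: reading PSI (pressure stall information) the way a
// user-space daemon might, assuming the /proc/pressure/{cpu,io,memory}
// interface that eventually shipped in mainline kernels. Each file exposes
// "some" (and, for io/memory, "full") lines with avg10/avg60/avg300 averages.
#include <fstream>
#include <initializer_list>
#include <iostream>
#include <string>

int main()
{
    for (const std::string res : {"cpu", "io", "memory"}) {
        std::ifstream in("/proc/pressure/" + res);
        if (!in) {
            std::cerr << res << ": PSI not available on this kernel\n";
            continue;
        }
        std::string line;
        while (std::getline(in, line))
            std::cout << res << ": " << line << '\n';  // e.g. "some avg10=0.12 ..."
    }
    return 0;
}
```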
Those who are enthusiastic about open source languages have been contributing to open source language projects and building versions of languages including Perl, JavaScript, Go, Tcl, Ruby and Python. There has been a massive shift in the adoption of open source languages and genesis of new ones in the last 20 years. Even large corporations such as Microsoft, Google and IBM contribute to open source projects that are hosted on GitHub, and Spotify, Dropbox and Reddit are among the big names that use Python.
A different model emerged in the '80s: open source software. From the start, the idea was to create software that anyone could download and use for free. On top of that, anyone could modify and use the source code and submit modifications and bug fixes back to the original project. Over time, several different types of open source licenses evolved, and a number of software products we use every day were created.
[...]
In the early days of the web, a group known initially as the Apache Group, now the Apache Software Foundation, was developing the first free and open source web server. The organization has since expanded into many other projects. Because many of the projects are assigned names of animals — from Ant to Zookeeper — they are often collectively known as the "Apache Zoo" as I wrote back in 2015.
Remember Snort? Or Asterisk? Or Jaspersoft or Zimbra? Heck, you might still be using them. All of these open source champions—InfoWorld Best of Open Source Software Award winners 10 years ago—are still going strong. And why not? They’re still perfectly useful.
A recent Forbes article indicates that corporate engagement with open source communities has grown to become a strategic imperative over the past couple of decades. An increasing number of companies are paying their employees to contribute to such communities. This is one manifestation of a broader growing trend toward closer collaboration between companies and open source communities. Well-recognised companies such as Google, Uber, Facebook, and Twitter have open sourced their projects and encouraged their employees to contribute to open source communities. Among software developers who contribute to such communities, estimates suggest that up to 40% of them are paid by their company to do so. Some companies see this as an opportunity to enhance their employees’ skills while others aim to influence open source product development to support their own complementary products and services. Regardless of the motives, managers should consider the impact of such arrangements on the employees involved.
Vember Audio tells us that, as of 21st September 2018, Surge stopped being a commercial product and became an open-source project released under the GNU GPL v3 license. They say that, for existing users, this will allow the community to make sure it remains compatible as plug-in standards and operating systems evolve and, for everyone else, it is an exciting new free synth to use, hack, port, improve or do whatever you want with.
Reviewing Vember Audio’s Surge synth over a decade ago, we said: “This is a big, beautiful-sounding instrument. It's not cheap, but few plugins of this quality are.” Well, the sound hasn’t changed, but the price has; in fact, Surge has just been made free and open-source.
Thanks to its wavetable oscillators and FM-style algorithms, Surge is capable of creating some pretty sparkling sounds, but it also has analogue-style functions that make it suitable for producing vintage keyboard tones.
Vember Audio says that it’s been set free so that it can continue to be developed by the community and remain compatible with current standards and operating systems.
In Montreal at ApacheCon, the Apache Software Foundation (ASF) announced that Pulsar had graduated to being an Apache top-level project. This pub-sub messaging system boasts a flexible messaging model and an intuitive client application programming interface (API).
Pulsar is a highly scalable, low-latency messaging platform running on commodity hardware. It provides simple pub-sub and queue semantics over topics, lightweight compute framework, automatic cursor management for subscribers, and cross-datacenter replication. It was designed from day one to address gaps in other open-source messaging systems.
The webhint project provides an open source linting tool to check for issues with accessibility, performance, and security. The creation of websites and web apps has an increasing number of details to perfect, and webhint strives to help developers remember these details.
webhint is available as either a CLI tool or an online scanner. The quickest way to get started with webhint is with the online scanner, which requires a public-facing URL to run a report and get insights about an application.
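For those who prefer the CLI route, the following is a rough sketch of how a run might look with the npm-packaged CLI; the URL is a placeholder, and exact flags and defaults may differ between webhint versions.

```
npm install -g hint        # install the webhint CLI from npm
hint https://example.com   # run the default hints against a site
```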
Sauce Labs Inc., based in San Francisco, provides automated mobile app testing tools. The company this week announced support for Google Android and Apple iOS native test automation frameworks such as XCUITest and Espresso. In this Q&A, Steven Hazel, co-founder and CTO of Sauce Labs, discusses best practices and trends around mobile app testing tools.
KDAB is the main sponsor at Qt World Summit Boston and, in addition to the Introductory and Advanced one-day training courses on Day 1, will present two talks on Day 2: Creating compelling blended 2D/3D applications – a solution for artists and developers, and KDAB’s Opensource Tools for Qt.
The 2018 Linux Security Summit North America (LSS-NA) was held last month in Vancouver, BC.
[...]
Once again, as is typical, the conference was focused around development, somewhat uniquely in the world of security conferences. It’s interesting to see more attention seemingly being paid to the lower parts of the stack: secure booting, firmware, and hardware roots of trust, as well as the continued efforts in hardening the kernel.
We recently made a change to simplify the way Chrome handles sign-in. Now, when you sign into any Google website, you’re also signed into Chrome with the same account. You’ll see your Google Account picture right in the Chrome UI, so you can easily see your sign-in status. When you sign out, either directly from Chrome or from any Google website, you’re completely signed out of your Google Account.
Following a major backlash due to questionable privacy settings in Google Chrome 69, Google today announced that it will make the new features optional in the upcoming Chrome 70 release.
In the blog post, Google said that Chrome 70, which is scheduled for a mid-October release, would add sign-in controls in the “Privacy and Security” settings. This will allow users to delink the mandatory web-based sign-in from the browser sign-in. In simple words, users will now have the choice to avoid logging into the Chrome browser while logging into Google websites like Gmail, YouTube, etc.
Earlier this week, Mozilla visited Venmo’s headquarters in New York City and delivered a petition signed by more than 25,000 Americans. The petition urges the payment app to put users’ privacy first and make Venmo transactions private by default.
Also this week: A new poll from Mozilla and Ipsos reveals that 77% of respondents believe payment apps should not make transaction details public by default. (More on our poll results below.)
Millions of Venmo users’ spending habits are available for anyone to see. That’s because Venmo transactions are currently public by default — unless users manually update their settings, anyone, anywhere can see whom they’re sending money to, and why.
Mozilla’s petition urges Venmo to change these settings. By making privacy the default, Venmo can better protect its seven million users — and send a powerful message about the importance of privacy. But so far, Venmo hasn’t formally responded to our petition and to the 25,000 Americans who signed their names.
The Mozilla team has announced a new recovery key option for Firefox accounts that can be used to access Firefox data if users forget their passwords.
Starting today, users will be able to generate a one-time recovery key associated with their account. Once the key is used to access the account, it becomes invalid, and the user needs to create another one.
[...]
Sync encrypts the user’s browser data on the local computer using the Firefox account password. It then sends this encrypted data to Mozilla’s servers for storage, ensuring that no one can access it without the user’s password (which acts as the decryption key here).
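To make the idea of password-based client-side encryption concrete, here is a small illustrative Python sketch. It is not Mozilla's actual Sync protocol; it only shows the general pattern of deriving a key from the account password locally and uploading nothing but ciphertext.

```python
# Illustrative only: derive a key from the account password and encrypt
# data locally, so the server never sees plaintext. (Not the real
# Firefox Sync scheme; all parameters here are placeholders.)
import base64, os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.fernet import Fernet

password = b"account-password"   # hypothetical user password
salt = os.urandom(16)            # stored alongside the ciphertext

kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                 salt=salt, iterations=600_000)
key = base64.urlsafe_b64encode(kdf.derive(password))

ciphertext = Fernet(key).encrypt(b"bookmarks, history, saved logins")
# Only `salt` and `ciphertext` would be uploaded; without the password
# the server cannot decrypt anything.
```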
Bonjour everyone! Here comes the twenty-third installment of WebRender’s very best newsletter. This time I’m trying something a bit different. Instead of going through each pull request and Bugzilla entry that landed since the last post, I’m only sourcing information from the team’s weekly meeting. As a result, only the most important items make it to the list and not all items have links to their bug or pull request. Doing this allows me to spend considerably less time preparing the newsletter and will hopefully help with publishing it more often.
Last time I mentioned WebRender being enabled on nightly by default for a small subset of the users, focusing on nVidia desktop GPUs on Windows 10. I’m happy to report that we didn’t set our nightly user population on fire and that WebRender is still enabled in these configurations (as expected, sure, but with a project as large and ambitious as WebRender it isn’t something that could be taken for granted). The choice of this particular configuration of hardware and driver led to a lot of speculation online, so I just want to clarify a few things. We did not strike any deal with nVidia. nVidia didn’t send engineers to help us get WebRender to work on their hardware first. No politics, I promise. We learnt from past mistakes and chose to target a small population of Firefox users at first specifically because it is small. Each combination of OS/Vendor/driver exposes its own set of bugs, and a progressive and targeted rollout means we’ll be better equipped to react in a timely manner to incoming bugs than we have been with past projects. Worry not, the end game is for WebRender to be Firefox’s rendering engine for everyone. Until then, you are welcome to enable WebRender manually if your OS, hardware or driver isn’t in the initial target.
Coming only two weeks after the release of the first maintenance update, LibreOffice 6.1.1, the LibreOffice 6.1.2 point release is here to address 70 bugs discovered by the development team or reported by users across several components of the office suite. The release was made during the LibreOffice Conference 2018, which is taking place this week in Tirana, Albania, and the full changelog is available here.
"The Document Foundation announces LibreOffice 6.1.2, the second minor release of the LibreOffice 6.1 family, targeted at early adopters, technology enthusiasts, and power users," said Italo Vignoli in today's announcement. "The new release was launched during the LibreOffice Conference 2018, in Tirana, the capital city of Albania. LibreOffice 6.1.2 provides around 70 bug and regression fixes over the previous version."
The numbers and analysis come from Sonatype’s 2018 State of the Software Supply Chain Report, released on Tuesday. The report revealed that, of the more than 300 billion open source components downloaded in the past year, one in eight had known security vulnerabilities.
GNU was publicly announced on September 27, 1983, and today has a strong following.
GNU is...
an operating system; an extensive collection of computer software; free software licensed under the GNU Project's own General Public License (GPL).
This week 20 years ago Google was born in a garage, so fitting in with the Silicon Valley creation story; 35 years ago the GNU open source project was announced. Two great, but very different, events. Time to look back and ask why?
The GNU movement was started to create an open source version of Unix. At the time its rationale seemed obvious and desirable. In the academic world there was a real problem in, for example, teaching operating systems. Windows was closed and proprietary and Unix was just going through some copyright upheavals that made it a risky choice for teaching. The only real alternative was Minix, which also had copyright problems.
The GNU movement would give academics what they wanted - software they could use without worrying about commercial concerns. The GNU project was, and is, a great success - even if it didn't, and still hasn't, delivered an open source version of Unix; that was achieved by Linus Torvalds and his Linux project. The GNU project did, however, deliver GCC - the GNU Compiler Collection - and many other tools that were needed to create Linux and are still needed today to make use of Linux. It is why the GNU people still insist that we call Linux "GNU/Linux".
GNU Shepherd, formerly known as GNU dmd, is a service manager written in Guile and looks after the herd of system services. It provides a replacement for the service-managing capabilities of SysV-init (or any other init) with both a powerful and beautiful dependency-based system and a convenient interface.
Linux developers who contribute code to the kernel cannot rescind those contributions, according to the software programmer who devised the GNU General Public Licence version 2.0, the licence under which the kernel is released.
Richard Stallman, the head of the Free Software Foundation and founder of the GNU Project, told iTWire in response to queries that contributors to a GPLv2-covered program could not ask for their code to be removed.
"That's because they are bound by the GPLv2 themselves. I checked this with a lawyer," said Stallman, who started the free software movement in 1984.
There have been claims made by many people, including journalists, that if any kernel developers are penalised under the new code of conduct for the kernel project — which was put in place when Linux creator Linus Torvalds decided to take a break to fix his behavioural issues — then they would ask for their code to be removed from the kernel.
It seems a simple enough concept for anyone who’s spent some time hacking on open source code: once you release something as open source, it’s open for good. Sure the developer might decide that future versions of the project close up the source, it’s been known to happen occasionally, but what’s already out there publicly can never be recalled. The Internet doesn’t have a “Delete” button, and once you’ve published your source code and let potentially millions of people download it, there’s no putting the Genie back in the bottle.
But what happens if there are extenuating circumstances? What if the project turns into something you no longer want to be a part of? Perhaps you submitted your code to a project with a specific understanding of how it was to be used, and then the rules changed. Or maybe you’ve been personally banned from a project, and yet the maintainers of said project have no problem letting your sizable code contributions stick around even after you’ve been kicked to the curb?
Open data formats and open-source libraries are the lingua franca of open platforms. Take Hadoop as an example: developed as an open-source alternative to Google’s proprietary MapReduce and GFS systems (thankfully Google published research papers describing them in much detail), the Hadoop ecosystem today covers effectively 100% of the “big data” market in terms of data storage systems like HDFS and S3, data formats like Parquet, and compute systems like Apache Spark. The relationship between HDFS and S3 makes for an interesting case study: both are distributed storage systems, one available at no cost for on-prem deployments and the other available as a paid service from Amazon. Critically, both implement the same Hadoop FileSystem API and are thus interchangeable as far as downstream applications like Spark are concerned. Really a perfect example of the open platform idea! Foundry directly inherits this flexibility: we are happy to work with and write data to HDFS and in S3 interchangeably.
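A small sketch of what that interchangeability looks like in practice: the same Spark job can read Parquet from HDFS or from S3 by changing only the URI scheme. The paths and bucket name below are hypothetical, and the S3 read assumes the hadoop-aws connector and credentials are configured.

```python
# Same code path for on-prem HDFS and Amazon S3, thanks to the shared
# Hadoop FileSystem API; only the URI scheme changes.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("open-formats-demo").getOrCreate()

df_onprem = spark.read.parquet("hdfs:///warehouse/events/2018/")
df_cloud = spark.read.parquet("s3a://example-bucket/warehouse/events/2018/")

print(df_onprem.count(), df_cloud.count())
```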
When Oracle released Java 10 earlier this year in March, it marked the beginning of a new era, with Java development moving to a six-month release cycle. With the recent release of Java 11, we’ve now dived deeper into that cycle.
It’s worth noting that Java Development Kit (JDK) 11 is the first version to be shipped as a Long Term Support (LTS) release of the Java SE platform. This means that Java 11 will be supported by Oracle for another eight years, and users will be able to enjoy fixes and updates throughout that period.
Oracle on Tuesday delivered Java 11, in keeping with the six-month release cadence adopted a year ago with Java 9. It is the first "Long Term Support" (LTS) release, intended for Java users who prioritize stability over Zuckerbergian fast movement and breakage.
Oracle said it will offer commercial support for Java 11 for at least eight more years. The next LTS release, Java 17, is planned for September 2021, assuming civilization is still functioning at that point.
After January 2019, Oracle will no longer provide free updates to Java 8, which means shifting to a supported version of Java, relying on OS vendors to provide Java patches, paying a third-party for support, building the OpenJDK on your own, or getting builds from AdoptOpenJDK.
One can argue that containers and DevOps were made for one another. Certainly, the container ecosystem benefits from the skyrocketing popularity of DevOps practices, both in design choices and in DevOps’ use by teams developing container technologies. Because of this parallel evolution, the use of containers in production can teach teams the fundamentals of DevOps and its three pillars: The Three Ways.
In the first four articles in this series comparing Perl 5 to Perl 6, we looked into some of the issues you might encounter when migrating code, how garbage collection works, why containers replaced references, and using (subroutine) signatures in Perl 6 and how these things differ from Perl 5.
This took me back as it was my first ever computer and I had no games so I had to program it. I would recommend that David buys a RAMPACK.
As tweeted three days ago, our still-new binb package with crisper Beamer themes for RMarkdown now contains presento. Version 0.0.2 with this addition just arrived on CRAN.
The literal meaning of Moore’s Law is that CMOS transistor densities double every 18 to 24 months. While not a statement about processor performance per se, in practice performance and density have tracked each other fairly well. Historically, additional transistors were mostly put in service of running at higher clock speeds. More recently, microprocessors have mostly gotten more cores instead.
The practical effect has been that all the transistors delivered by process shrinks, together with design enhancements, meant that we could count on devices getting some combination of faster, cheaper, smaller, or more integrated, at an almost boringly predictable rate.
At a macro level, we’d simply live in a very different world had the successors to Intel’s first microprocessor, the 4004 released in 1971, improved at a rate akin to automobile fuel efficiency rather than their constant doubling.
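A back-of-the-envelope calculation makes the doubling point concrete; the two-year doubling period and the 4004's roughly 2,300 transistors are the commonly cited figures, used here only for scale.

```python
# Rough illustration of exponential doubling versus incremental gains.
transistors_1971 = 2_300        # Intel 4004, commonly cited figure
years = 2018 - 1971
doublings = years / 2           # assume one doubling every two years

print(f"{transistors_1971 * 2 ** doublings:.1e}")  # ~2.7e+10 transistors
# Fuel-efficiency-style improvement of a few percent per year over the
# same span would give well under a 10x gain, not a ten-million-fold one.
```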
The Trump administration persists in banning access to abortion for young immigrants in government custody.
We were in a Washington, D.C., appeals court on Wednesday facing off yet again with the Trump administration over its patently unconstitutional policy of obstructing young immigrant women from accessing abortion.
Last September, a 17-year-old woman known as Jane Doe arrived in the United States and discovered she was pregnant. Even though she repeatedly made it clear that she wanted an abortion, had received a decision from a state court judge waiving Texas’s requirement that she first obtain parental consent, and had access to private funding, the government refused to allow her to leave the shelter where she was staying to attend any abortion-related appointments.
We took the administration to court and won. Jane successfully obtained emergency relief from a Washington D.C. district court and was able to get her abortion. The government challenged that decision and it wound up before a three-judge panel of the Court of Appeals for the District of Columbia Circuit that included Judge Brett Kavanaugh, who issued a decision allowing the Trump administration to continue to block Jane’s access to abortion. Fortunately, his decision was later overturned by the full panel of the appeals court.
At the start of the meeting, the General Assembly adopted the NCD political declaration by acclamation, with no member state objecting. The political declaration includes commitments to reduce NCD mortality by one-third by 2030, and to scale-up funding and multi-stakeholder responses to treat and prevent NCDs.
María Fernanda Espinosa Garcés of Ecuador, the president of the UN General Assembly, then explained that the high-level meeting today will make a “comprehensive review on the overall theme of scaling up multi-stakeholder responses and prevention of NCDS.”
“What we need now is political will,” she said, because “ambitious goals require far-reaching measures.”
Luiz Otávio Pimentel is president of the National Institute of Industrial Property (INPI) of Brazil. In Geneva this week for the annual World Intellectual Property Organization General Assemblies he took time to sit down with Intellectual Property Watch’s William New. INPI is part of the Ministry of Industry, Foreign Trade and Services.
On a breaking issue, Pimentel, speaking through a translator, talked about the case in Brazil involving sofosbuvir, marketed as Sovaldi, Gilead’s effective medicine against hepatitis C that has been known for its exorbitant prices.
On appeal, the Federal Circuit affirmed in a decision that I originally noted had “a few substantial problems — most notably is the fact that unclean-hands traditionally only applies to block a party from seeking equitable relief (as opposed to legal relief).” In its new petition for writ of certiorari, the patentee here seeks to piggy-back on the recent laches decisions that limited laches to issues in equity.
The pharma giant’s basic argument is that its unclean hands cannot bar the company from asserting its legal rights. As Dan Dobbs explains in his book on remedies: “If judges had the power to deny damages and other legal remedies because a plaintiff came into court with unclean hands, citizens would not have rights, only privileges.”
World leaders and senior representatives came together today for the first-ever High-Level Meeting on the Fight to End Tuberculosis at United Nations headquarters in New York. At the meeting, heads of state adopted a political declaration with commitments to accelerate action and funding to end the tuberculosis epidemic by 2030.
Multiple Linux distributions including all current versions of Red Hat Enterprise Linux and CentOS contain a newly discovered bug that gives attackers a way to obtain full root access on vulnerable systems.
The integer overflow flaw (CVE-2018-14634) exists in a critical Linux kernel function for memory management and allows attackers with unprivileged local access to a system to escalate their privileges. Researchers from security vendor Qualys discovered the issue and have developed a proof-of-concept exploit.
Jann Horn, the Google Project Zero researcher who discovered the Meltdown and Spectre CPU flaws, has a few words for maintainers of Ubuntu and Debian: raise your game on merging kernel security fixes, you're leaving users exposed for weeks.
Canonical has entered the security certifications space by achieving a few important security certifications for the first time on Ubuntu.
Canonical has achieved FIPS 140-2 Level 1 certification for several cryptographic modules on Ubuntu 16.04. Canonical has also achieved Common Criteria EAL2 certification for Ubuntu 16.04. In addition, the Defense Information Systems Agency (DISA) has published an Ubuntu 16.04 Security Technical Implementation Guide (STIG), allowing Ubuntu to be used by federal agencies. The Center for Internet Security (CIS) has also been publishing benchmarks for Ubuntu which harden the configuration of Ubuntu systems to make them more secure.
Canonical has made its security certification offerings available to all Ubuntu Advantage “Server Advanced” customers.
I don't think the protocol is "provably secure," meaning that it cannot have any vulnerabilities. What this paper demonstrates is that there are no vulnerabilities under the model of the proof. And, more importantly, that PKCS #1 v1.5 is as secure as any of its successors like RSA-PSS and RSA Full-Domain.
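For context on what the two schemes under discussion look like in practice, here is a short sketch using Python's cryptography library to sign the same message with PKCS#1 v1.5 padding and with RSA-PSS; the key size and message are arbitrary placeholders.

```python
# Sign one message with PKCS#1 v1.5 padding and again with RSA-PSS.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"attack at dawn"

sig_v15 = key.sign(message, padding.PKCS1v15(), hashes.SHA256())

sig_pss = key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verification uses the matching padding scheme on the public key.
key.public_key().verify(sig_v15, message, padding.PKCS1v15(), hashes.SHA256())
```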
The money will be disbursed among all 50 US states as well as Washington, DC.
UEFI rootkits are widely viewed as extremely dangerous tools for implementing cyberattacks, as they are hard to detect and able to survive security measures such as operating system reinstallation and even a hard disk replacement. Some UEFI rootkits have been presented as proofs of concept; some are known to be at the disposal of (at least some) governmental agencies. However, no UEFI rootkit has ever been detected in the wild – until we discovered a campaign by the Sednit APT group that successfully deployed a malicious UEFI module on a victim’s system.
A 16-year-old Australian teenager who repeatedly hacked Apple servers over a period of two years has evaded jail. He is set to serve a probation period of 8 months.
Researchers have found a security flaw in Apple’s Device Enrollment Program (DEP) that can allow an attacker to gain complete access to a corporate or school network.
The Device Enrollment Program (DEP) is a service provided by Apple for bootstrapping Mobile Device Management (MDM) enrollment of iOS, macOS, and tvOS devices. DEP hosts an internet-facing API at https://iprofiles.apple.com, which - among other things - is used by the cloudconfigurationd daemon on macOS systems to request DEP Activation Records and query whether a given device is registered in DEP.
In our research, we found that in order to retrieve the DEP profile for an Apple device, the DEP service only requires the device serial number to be supplied to an undocumented DEP API. Additionally, we developed a method to instrument the cloudconfigurationd daemon to inject Apple device serial numbers of our choosing into the request sent to the DEP API. This allowed us to retrieve data specific to the device associated with the supplied serial number.
Earlier this month Arm began publishing details of the ARMv8.5-A instruction set update, which is expected to be officially documented and released by the end of Q1'2019, while the LLVM compiler stack has already received initial support for the interesting additions.
Landing yesterday in LLVM Git/SVN is the new ARMv8.5-A target while hitting the tree today is the more interesting work.
A local-privilege escalation vulnerability in the Linux kernel affects all current versions of Red Hat Enterprise Linux and CentOS, even in their default/minimal installations. It would allow an attacker to obtain full administrator privileges over the targeted system, and from there potentially pivot to other areas of the network.
The evidence mounts that Russia is not telling the truth about “Boshirov” and “Petrov”. If those were real identities, they would have been substantiated in depth by now. As we know of Yulia Skripal’s boyfriend, cat, cousin and grandmother, real depth on the lives and milieu of “Boshirov” and “Petrov” would be got out. It is plainly in the interests of Russia’s state and its oligarchy to establish that they truly exist, and concern for the privacy of individuals would be outweighed by that. The rights of the individual are not prioritised over the state interest in Russia.
But equally the identification of “Boshirov” with “Colonel Chepiga” is a nonsense.
The problem is with Bellingcat’s methodology. They did not start with any prior intelligence that “Chepiga” is “Boshirov”. They rather allegedly searched databases of GRU operatives of about the right age, then trawled photos in yearbooks of them until they found one that looked a bit like “Boshirov”. And guess what? It looks a bit like “Boshirov”. If you ignore the substantially different skull shape and nose.
[...]
Yet Higgins now claims his facial identification of Chepiga as Boshirov as “definitive” and “conclusive”, despite the absence of moles, scars and blemishes. Higgins stands exposed as a quite disgusting hypocrite. Let me go further. I do not believe that Higgins did not take the elementary step of running facial recognition technology over the photos, and I believe he is hiding the results from you. Is it not also astonishing that the mainstream media have not done this simple test?
The bulk of the Bellingcat article is just trying to prove the reality of the existence of Chepiga. This is hard to evaluate, but as the evidence to link him to “Boshirov” is non-existent, it is a different argument. Having set out to find a GRU officer of the same age who looks a bit like “Boshirov”, they trumpet repeatedly the fact that Chepiga is about the same age as evidence, in a crass display of circular argument.
Randy Credico, a New York comedian whom Trump ally Roger Stone says served as a backchannel for him and WikiLeaks founder Julian Assange, told Mother Jones that Stone offered last year to help pay his legal fees.
Credico told Mother Jones that Stone made the offer after telling the House Intelligence Committee that Credico served as the backchannel, which Credico has denied. Credico said he suspected that Stone didn't want him to contradict his account of interactions between himself and Assange.
“He knew that I was upset,” Credico said. “He wanted me to be quiet. He wanted me to go along with his narrative. He didn’t want me talking to the press and saying what I was saying.”
Corsi has been the Washington, D.C. bureau chief for the controversial far-right news outlet Infowars. He is one of 11 people associated with Stone who have been contacted by the special counsel, many of whom have given sworn testimony to the grand jury.
Corsi’s appearance means that Robert Mueller is still focusing on Stone, a longtime Trump ally and political supporter.
And now a Corsi email recently obtained by the media shows that Stone tried to contact WikiLeaks publisher Julian Assange during a very critical time in Trump’s 2016 presidential campaign.
This email is one of at least two between Stone and Corsi that make reference to conservative author Ted Malloch. The emails may provide evidence that Stone committed a crime by lying to Congress when he testified last spring.
Mr. Stone told ABC News the email, “proves I had no advance knowledge of contents of WikiLeaks’ DNC material” but wanted to know about it like everyone else.
“I condemn the treatment of Julian Assange that leads to my new role,” Hrafnsson said, as The Daily Dot reported, “but I welcome the opportunity to secure the continuation of the important work based on WikiLeaks ideals.”
The organization was founded and has been led for more than a decade by Assange, but the silver-haired Australian has been isolated in legal limbo at the Ecuadorean Embassy in London since 2012.
Icelandic investigative journalist and former WikiLeaks spokesperson Kristinn Hrafnsson is set to become the whistleblowing organization’s new editor-in-chief, replacing isolated founder Julian Assange in the role.
The announcement was made late Wednesday in a statement issued through the WikiLeaks’ official Twitter account.
Julian Assange, who remains in the Ecuadorian Embassy in London fearing extradition to the US, hasn’t been allowed to communicate with anybody but his lawyers since this March. Although the whistleblower remains the official head of WikiLeaks, an Icelandic investigative journalist has taken over his job as editor-in-chief.
WikiLeaks founder Julian Assange has appointed its former spokesman, Icelandic journalist Kristinn Hrafnsson, to replace him as editor-in-chief.
WikiLeaks on Wednesday named one-time spokesman Kristinn Hrafnsson as its new editor-in-chief. The ramifications of the move are unclear.
The organization was founded and has been led for more than a decade by Julian Assange, the 47-year-old ex-hacker, but the silver-haired Australian has been isolated for years at the Ecuadorean Embassy in London.
WikiLeaks tweeted that Assange will stay on as the group's "publisher." Assange had his communications cut in March by Ecuador's new president, and Wednesday's statement said he remained "incommunicado."
Ecuador’s president says that he is trying to work with Britain on providing a legal solution that would see Julian Assange leave the Ecuadorian Embassy in London.
Speaking on the sidelines of the UN General Assembly, President Lenin Moreno said his country will work for the WikiLeaks founder’s safety and the preservation of his human rights.
Ecuador's previous left-leaning administration gave Assange asylum in 2012, saying it feared his life was in danger for publishing thousands of diplomatic cables that put US officials in a difficult position.
Julian Assange has named Kristinn Hrafnsson as WikiLeaks' new editor-in-chief. Assange remains in the Ecuadorian embassy in London where for the last six months he has only been able to communicate with his lawyers.
In a statement released Wednesday afternoon, WikiLeaks announced Kristinn Hrafnsson, an Icelandic reporter and one-time spokesman for the organization, will replace Assange at the helm on the non-profit group.
“Due to the extraordinary circumstances where Julian Assange, the founder of WikiLeaks, has been held incommunicado (except for visits by his lawyers) for six months while arbitrarily detained in the Ecuadorian embassy, Mr. Assange has appointed Kristinn Hrafnsson Editor in Chief of WikiLeaks,” the statement reads.
Assange has long argued that he is simply monitoring the actions of some of the world's most powerful politicians and exercising his right to free speech.
WikiLeaks' job titles have proven fluid over the years. Assange has variously described himself as the group's spokesman, publisher and editor.
On September 21, the Guardian newspaper published claims, based on unnamed sources, that Ecuador, Russia and WikiLeaks had conspired to smuggle Julian Assange out of the Ecuadorian embassy in London and transport him to “another country”—most likely Russia.
According to the article, the plan was set for Christmas Eve 2017. Ecuador had granted Assange “diplomatic” status to represent its government in Russia. He was to be picked up by consular vehicles. However, the supposed plot was abandoned as “too risky,” because British authorities outright rejected any recognition of diplomatic status for Assange and vowed to arrest him as soon as he set foot outside the embassy.
Lawyers for WikiLeaks publisher Julian Assange, who has been holed up in the Ecuadorian embassy in London for the last six years, are examining....
As lawmakers consider disaster relief in the wake of Hurricane Florence, projects to rebuild North Carolina’s shrunken shorelines are likely to get a healthy chunk of government money.
To their advocates, these so-called beach nourishment initiatives are crucial steps in buffering valuable oceanfront properties from storm damage and boosting local economies that rely on tourism.
But such projects replenish the same vulnerable areas again and again, and disproportionately benefit wealthy owners of seaside lots.
Moreover, pumping millions of cubic yards of sand onto beaches can cause environmental damage, according to decades of studies. It kills wildlife scooped up from the ocean floor and smothers mole crabs and other creatures where sand is dumped, said Robert Young, a geology professor at Western Carolina University.
When Donald Trump awarded himself top marks for his administration’s disaster response in Puerto Rico, media had little trouble looking askance, contrasting Trump’s assessment with empirical data and presenting him as, at least potentially, an unreliable narrator.
That critical posture is not much in evidence, though, as Ben Bernanke, Timothy Geithner and Henry Paulson offer their assessment of the country’s financial crisis, the ten-year anniversary of which was marked this week. In an op-ed in the New York Times, the trio of economic decision-makers discuss how, though they “did not foresee the crisis,” they “moved aggressively to stop it,” and now we’re enjoying the effects: banks that are “financially stronger” and regulators “more attuned to system-wide risks.”
When thinking about Brexit and Europe, we should remember the words of Hans Magnus Enzensberger: short term hopes are futile - long term resignation is suicidal.
Over two years on from the vote, and now heading fast for the Brexit door, progressives are still in a mess when it comes to Europe and are in danger of turning a crisis into a terminal democratic and political catastrophe. How did we get here – and what do we need to consider before we make any future moves, in particular a second referendum?
Amazon, the country’s second-largest employer, has so far remained immune to any attempts by U.S. workers to form a union. With rumblings of employee organization at Whole Foods—which Amazon bought for $13.7 billion last year—a 45-minute union-busting training video produced by the company was sent to Team Leaders of the grocery chain last week, according to sources with knowledge of the store’s activities. Recordings of that video, obtained by Gizmodo, provide valuable insight into the company’s thinking and tactics.
Hello, and welcome back to another edition of A User’s Guide to Democracy! If you’re new here, you can check out our previous pieces on what you need to know about political advertising and our round up of the deadlines, rules, and links you need to vote in this year’s midterms.
Today, let’s talk about who you’re actually voting for in the midterm election: members of Congress. Made up of the House of Representatives and the Senate, Congress is tasked with making laws on our behalf. Since senators keep their jobs for six years at a time, a lot of places don’t have a Senate race this year. But no matter where you live, your congressional district is voting for a House representative in this election. So today I’m going to focus on how you can keep tabs on your representative.
[...]
One reason for the gridlock is that, these days, bills on big, national issues are written under the supervision of the Senate majority leader and the House speaker (currently Sen. Mitch McConnell and Rep. Paul Ryan). They receive guidance from only a small group of other congressional power brokers, rather than the rank-and-file lawmakers who used to contribute to the process.
The open letter to US Attorney General Jeff Sessions from ITIF published in USA Today on 25 September exemplified the lie that permeates America and other parts of the West. The social media censorship debate is not about right versus left as portrayed in the letter but something else entirely.
True conservatism and liberalism died long ago and no longer exist. Today, the real fight is between freedom of expression and global corporatism.
The censorship that is taking place on platforms such as Google, Facebook and Twitter, is the silencing and de-platforming of news and opinion sources outside of the corporate media.
For many years, we've discussed all the different ways that putting liability on intermediaries and internet platforms leads to greater censorship. The liability alone creates strong incentive to shut down speech rather than risk the potential of lawsuits and huge payments. The most obvious example of this for years has been the DMCA process, where the takedown process is quite frequently used for censorship purposes. Indeed, there are many cases where people seem to assume that they can (and should) use the DMCA to take down any content they dislike, whether or not it has anything to do with copyright at all.
This is a big part of the reason why we were so concerned with FOSTA. While the law is officially supposed to be about "sex trafficking" and "prostitution" the bill actually does absolutely nothing to help victims or go after actual traffickers. Instead, it pins massive liability (including criminal liability) on platforms if they're used for trafficking or prostitution. Given that, it now becomes much easier to take down certain content or close certain accounts by merely suggesting that they are involved in trafficking or prostitution.
Case in point: Engadget recently had a story talking about how PayPal (and to a lesser extent, Patreon) appeared to be cutting off the accounts of various ASMR YouTubers. Autonomous Sensory Meridian Response (ASMR) is a condition in which people who hear certain noises -- often whispering or soft scratching -- tend to experience a sort of "tingling" sensation. It's been talked about for years, and a bunch of YouTubers have built up followings making ASMR recordings. Earlier this year, we wrote about China banning some ASMR videos as "pornography." However, most ASMR videos are not sexual or pornographic in any way.
I was afraid that this was going to happen. If you don't recall, the official "reason" for why we needed FOSTA (originally SESTA) was that it was necessary to "take down Backpage." In the original announcement about the bill by Senator Portman, his press release quoted 20 Senators, and 11 of them mentioned Backpage.com as the reason for the bill. Not one of them seemed to mention that Backpage had already shut down its adult section months earlier. And, over the months of debate concerning FOSTA/SESTA, we noted that there was nothing in the existing law preventing federal law enforcement officials from taking down Backpage if it were actually violating the law.
And, indeed, before FOSTA was even signed into law, the DOJ seized the website and arrested its founders. Incredibly, even though Backpage was shut down before FOSTA was law, some of the bill's backers tried to credit the bill with taking down the site. The worst was Rep. Mimi Walters, who directly tried to take credit for FOSTA taking down Backpage (even though FOSTA wasn't even signed into law at the time she took credit for it).
A former Google employee has warned of the firm's "disturbing" plans in China, in a letter to US lawmakers.
Jack Poulson, who had been a senior researcher at the company until resigning in August, wrote that he was fearful of Google's ambitions.
His letter alleges Google's work on a Chinese product - codenamed Dragonfly - would aid Beijing's efforts to censor and monitor its citizens online.
Google has said its work in China to date has been "exploratory".
Ben Gomes, Google's head of search, told the BBC earlier this week: "Right now all we've done is some exploration, but since we don't have any plans to launch something there's nothing much I can say about it."
If history is any indication, some words will be exchanged (in letter form) and then not much else will happen. Dave Maass notes the EFF brought the DEA's use of fake profiles to the company's attention four years ago. Some letter writing ensued then, but there's nothing on the record indicating the DEA has ceased setting up fake profiles or that Facebook is proactively monitoring accounts for signs of fakery. Since neither side seems to be taking the fake profile issue seriously, fake accounts set up by law enforcement will continue to proliferate.
On the plus side, law enforcement can no longer pretend it's unaware setting up fake profiles violates the terms of service. The company's "Information for Law Enforcement Authorities" has been updated to make it clear there's no law enforcement exception to the Facebook rules. But it's likely the use of fake profiles will continue unabated. After all, you can't catch scofflaws without breaking a few policies, right?
Add “a phone number I never gave Facebook for targeted advertising” to the list of deceptive and invasive ways Facebook makes money off your personal information. Contrary to user expectations and Facebook representatives’ own previous statements, the company has been using contact information that users explicitly provided for security purposes—or that users never provided at all—for targeted advertising.
A group of academic researchers from Northeastern University and Princeton University, along with Gizmodo reporters, have used real-world tests to demonstrate how Facebook’s latest deceptive practice works. They found that Facebook harvests user phone numbers for targeted advertising in two disturbing ways: two-factor authentication (2FA) phone numbers, and “shadow” contact information.
Data brokers intrude on the privacy of millions of people by harvesting and monetizing their personal information without their knowledge or consent. Worse, many data brokers fail to securely store this sensitive information, predictably leading to data breaches (like Equifax) that put millions of people at risk of identity theft, stalking, and other harms for years to come.
Earlier this year, Vermont responded with a new law that begins the process of regulating data brokers. It demonstrates the many opportunities for state legislators to take the lead in protecting data privacy. It also shows why Congress must not enact a weak data privacy law that preempts stronger state data privacy laws.
A little backstory here — Brian Acton, the co-founder of Whatsapp, sold his company to Facebook for about $22 billion, back in 2014, and turned from a “poor guy” into a multi-billionaire.
For those who don’t know, this is the same guy who was one of the first to support the “#DeleteFacebook” campaign back in March, when the whole Cambridge Analytica fiasco was at its peak.
The UK's domestic-facing intelligence agency, MI5, today admitted that it captured and read Privacy International's private data as part of its Bulk Communications Data (BCD) and Bulk Personal Datasets (BPD) programmes, which hoover up massive amounts of the public's data. In further startling legal disclosures, all three of the UK's primary intelligence agencies - GCHQ, MI5, and MI6 - also admitted that they unlawfully gathered data about Privacy International or its staff.
DNA is supposed to be the gold standard of evidence, supposedly so distinct that it would be impossible to convict the wrong person. Yet DNA evidence has been given far more credit than it has earned.
Part of the problem is that it's indecipherable to laypeople. That has allowed crime lab technicians to testify to a level of certainty that's not backed by the data. Another, much larger problem is the testing itself. It searches for DNA matches in samples covered with unrelated DNA. Contamination is all but assured. In one stunning example of DNA testing's flaws, European law enforcement spent years chasing a nonexistent serial killer whose DNA was scattered across several crime scenes before coming to the realization the DNA officers kept finding belonged to the person packaging the testing swabs used by investigators.
The reputation of DNA testing remains mostly untainted, rose-tinted by the mental imagery of white-coated techs working in spotless labs to deliver justice, surrounded by all sorts of science stuff and high-powered computers. In reality, testing methods vary greatly from crime lab to crime lab, as do the standards for declaring a match. People lose their freedom thanks to inexact science and careless handling of samples. And it happens far more frequently than anyone involved in crime lab testing would like you to believe.
Welcome to "Shut-up-land," where nothing about anything of substance can be said; where debate is no longer permitted.
Much of the news coverage about the Supreme Court nomination fight has focused on Christine Blasey Ford, Brett Kavanaugh, Senate Majority Leader Mitch McConnell...
For all the morning’s madness, there may have been an underlying logic. Over the weekend, as Brett Kavanaugh’s prospects appeared increasingly imperiled, Trump faced two tactical options, both of them fraught. One was to cut Kavanaugh loose. But he was also looking for ways to dramatically shift the news cycle away from his embattled Supreme Court nominee. According to a source briefed on Trump’s thinking, Trump decided that firing Rosenstein would knock Kavanaugh out of the news, potentially saving his nomination and Republicans’ chances for keeping the Senate. “The strategy was to try and do something really big,” the source said. The leak about Rosenstein’s resignation could have been the result, and it certainly had the desired effect of driving Kavanaugh out of the news for a few hours.
A new report shows the North Carolina government’s complicity with the CIA torture program and urges a state investigation.
Sheer stubbornness is required of us when our government violates the law and refuses to recognize it.
Seeking justice for the U.S. torture program of the post-9/11 period has required a lot of stubbornness. In North Carolina, a 12-year quest has led to a new report, “Torture Flights: North Carolina’s Role in the CIA Rendition and Torture Program.”
The report was released Thursday by the nongovernmental North Carolina Commission of Inquiry on Torture, a blue-ribbon panel of 10 commissioners established in 2017 after years of official inaction.
It examines the part that our state played in the CIA rendition, detention, and interrogation (RDI) program. To write it, the commission gathered all available evidence, sought public records from North Carolina government agencies, and heard testimony from torture survivors, former government officials, and legal, medical, and human rights experts.
After 9/11, the CIA created a global “gulag” of secret “black site” prisons where it systematically and secretly tortured. It also relied on foreign governments to torture prisoners.
Crime rates continue to remain at historic lows. We're safer than we've been since the mid-1960s. We should be celebrating this. Law enforcement should be celebrating this. But there's no celebration. Certainly not at the federal level. Attorney General Jeff Sessions has made remarks at a number of law enforcement events in recent months. And they've all been loaded with doom, gloom, and questionable citations.
[...]
The messages AG Sessions delivers won't change. As the head of the DOJ, he has something to sell. It isn't justice, despite the name over the door. It's prosecution, which is only part of the justice equation. All crime news is bad news, even as crime rates continue to decline. Welcome to America, where the crime rates are at historic lows but everyone thinks each successive year is the worst it's ever been.
The change prompted Raghavendran to branch out into politics and advocacy: he's joined with Environment Michigan and US PIRG to advocate for a Right to Repair bill (previously) in Michigan. Raghavendran meets with state lawmakers and has circulated a petition and compiled personal stories about the need to protect independent repair.
Repair services account for 4% of US GDP, and they create community jobs that let neighbors help each other get more use out of their own property, while diverting electronics from landfills.
When Surya Raghavendran dropped his iPhone, he learned to repair it himself. Now he wants to protect that right for everyone in his home state of Michigan.
After two days of general statements, World Intellectual Property Organization delegates delved into more substantial subjects, and convened in small closed informal discussions to try to solve issues left open during the year. Among them is the composition of the WIPO Coordination Committee and the Program and Budget Committee, both WIPO governing bodies. Others include potential treaties on harmonising international applications by industrial design creators, and on the protection of broadcasting organisations against signal theft.
The Alliance for Creativity and Entertainment (ACE), the global anti-piracy coalition that counts the major Hollywood studios, Netflix, Amazon, and the BBC among its 30 members, has claimed yet another scalp. The Illuminati Kodi addon repository says that its entire team got hit with ACE letters yesterday so they have shut down with immediate effect.
The brave new path to a gatekeeper-manned, non-open internet the EU recently cut with its plainly atrocious new copyright directive was, were you to believe the general media coverage, cheered on by EU artists as a blow to Google and a boon to art because... well, nobody can actually explain that last part. And that's likely because the proposed new legislation, Article 11 and Article 13, essentially forces internet platforms to play total copyright cops or be liable for infringement while gutting the fair use type allowances that had previously been in place. Much of the European legislation that existed on the national level, and which served as the basis for this continental legislation, has done absolutely zero to provide artists or journalists any additional income. Instead, it's re-entrenched legacy gatekeepers and essentially created a legal prohibition on innovation. As the directive goes through its final stages for adoption by EU member states, the general coverage has repeated the line that artists and creators are cheering this on.
But, despite the media coverage, it isn't true that all of the artistic world is blind to exactly what was just done to the internet and the wider culture. Destructive Creation -- a collection of artists most famous for taking a monument in Europe to Soviet soldiers and painting them all as western superheroes and cultural icons -- has made its latest work an addition to a statue of Johannes Gutenberg.
Marc Ribot is a guitarist, who has released 25 albums that span more than 40 years. His work fuses genres from soul to punk to jazz to roots music.
With his latest project, “Songs Of Resistance 1942-2018,” Ribot attempts to connect current resistance against President Donald Trump’s administration to musical traditions of protest.
The album was released on September 14. It reworks songs popularized by the civil rights movement in the United States as well as songs of the anti-fascist resistance in Italy during World War II. Several original songs are featured as well.
The Supreme Court has granted a writ of certiorari in the copyright case Rimini Street Inc. v. Oracle USA Inc. following a Ninth Circuit decision in the case. See Oracle USA, Inc. v. Rimini St., Inc., 879 F.3d 948 (9th Cir. 2018). In the case, the district court sided with Oracle in its copyright suit against the DB service provider Rimini and awarded $50 million in damages, plus an additional $70 million in interest, costs, and fees. The Supreme Court case here focuses on the meaning of “full costs” as used in the Copyright Act: “In any civil action under this title, the court in its discretion may allow the recovery of full costs by or against any party. . . the court may also award a reasonable attorney’s fee to the prevailing party as part of the costs.” 17 U.S.C. § 505.
The question in Rimini Street v Oracle is whether the Copyright Act's allowance of "full costs" to a prevailing party is limited to taxable costs or also authorises non-taxable costs.
ESPN has personified the cable and broadcast industry's tone deafness to cord cutting and TV market evolution. Executives not only spent years downplaying the trend as something only poor people do; the company also sued firms that attempted to offer consumers greater flexibility in how video content was consumed. ESPN execs clearly believed cord cutting was little more than a fad that would simply stop once Millennials started procreating, and ignored surveys showing that 56% of consumers would ditch ESPN in a heartbeat if it meant saving the $8 per month subscribers pay for the channel.
As the data began to indicate the cord cutting trend was very real, ESPN's first impulse was often to try and shoot the messenger. Meanwhile, execs doubled down on bloated sports licensing deals and SportsCenter set redesigns, pretty clearly unaware that the entire TV landscape was shifting beneath their feet.
By the time ESPN had lost 10 million viewers in just a few years, the company was busy pretending they saw cord cutting coming all the while. ESPN subsequently decided the only solution was to fire hundreds of longstanding sports journalists and support personnel, but not the executives like John Skipper (since resigned for other reasons) whose myopia made ESPN's problems that much worse.
Have you ever found yourself clicking-- ‘Yes I agree to these terms & conditions’, without actually reading them? Probably yes [everyone does it…even lawyers]. Did that include your registration with Twitter? If so, you may not have realized that you agreed to a licence allowing Twitter (and its partners) to use at will any of the copyright-protected content you created and uploaded on their site. But not to worry, the Paris Tribunal, in a 236-page-long decision, "righted wrongs" last month by going over Twitter’s terms and conditions with a [very] fine-tooth comb (see the decision in French: Tribunal de Grande Instance, Décision du 07 août 2018, 1/4 social N° RG 14/07300). The tribunal’s review declared ‘null and void’ most of the clauses challenged by the claimant, including the contract’s copyright licensing provisions for user-generated content.
Users are consumers, Twitter is not ‘free’
The case was brought before the Paris Tribunal by the French Consumers’ Association-- ‘Union Fédérale des Consommateurs - QUE CHOISIR’ (UFC), on behalf of the (claimed) collective interest of Twitter’s users. This type of legal action is the closest thing to a class action that exists in France. In this case, UFC’s eligibility to act on behalf of Twitter’s users relied on Article L 621 of the French Consumer Law Code, on the basis of which Twitter users were deemed consumers.
File-sharing traffic, BitTorrent in particular, is making a comeback. New data from Sandvine, shared exclusively with TorrentFreak, reveals that BitTorrent is still a dominant source of upstream traffic worldwide. According to Sandvine, increased fragmentation in the legal streaming market may play a role in this resurgence.
There are lots of calls for the platforms to police the bad speech on their platform -- disinformation and fake news; hate speech and harassment, extremist content and so on -- and while that would represent a major shift in how Big Tech relates to the materials generated and shared by its users, it's not without precedent.