If you want a computer with a Linux-based operating system pre-installed, you can never go wrong with System76 or Dell. Of course, those two companies are hardly the only ones selling Linux-powered computers. For instance, the UK-based Star Labs also sells machines with Ubuntu and Linux Mint -- two very good operating systems.
Well, Star Labs has seemingly gotten the memo on how great Zorin OS is, as the computer seller is now offering laptops with that operating system pre-installed. Zorin OS is an operating system that is ideal for those that want to switch from Windows, so having it pre-installed gives a new option for those not prepared to install a Linux-based OS on their own.
If you are interested in seeing how the Ubuntu Linux operating system runs on the new One Mix Yoga 3 mini laptop, you are sure to be interested in the new video created by Brad Linder over at Liliputing. “I posted some notes about what happened when I took Ubuntu 19.04 for a spin on the One Mix 3 Yoga in my first-look article, but plenty of folks who watched my first look video on YouTube asked for a video… so I made one of those too.”
The creators of the One Mix Yoga 3 have made it fairly easy to boot an alternative operating system simply by plugging in a bootable flash drive or USB storage device. As the mini laptop powers up, hit the Delete key and you will be presented with the BIOS/UEFI menu. Change the boot priority order so that the computer boots from a USB device and you are in business.
I've been at this business of putting Linux-powered computers into the homes of financially disadvantaged kids since 2005, one way or the other. That's 14 years and north of 1670 computers placed. Throughout those years, I've shared with you some of our successes, and spotlighted the indomitable spirit of the Free Open Source Community and The Linux Community as a whole. I've also shared with you the lowest of the low times for us, and me personally.
But through it all, Reglue has maintained our mission of placing first-time computers into the homes of financially disadvantaged students. Mostly by onesies and twosies, with a multi-machine learning center here and there; by far the greatest of those is the Bruno Knaapen Technology Learning Center. And as much of a challenge as that was, we have another project of even greater measure.
If you don't know who Bruno Knaapen is, I suggest you follow the link. Bruno will go down in history as a person who helped more people adapt to Linux than anyone, at any time. Bruno's online contributions are still a treasure trove of Linux knowledge. So much so that individuals pay out of their own pockets to make sure that information remains available. Going down that list, you will come to understand the tenacity and knowledge that man shared with his community. I was one of those who learned at his elbow.
Bill continues his distro hopping. We discuss the history of Linux and a wall-mountable timeline. Troy gives feedback on Grub. Grubb gives feedback on finding the right distribution. Highlander talks communication security and hidden files. Ro's Alienware computer won't boot. David provides links to articles.
Ubuntu sets the Internet on fire, new Linux and FreeBSD vulnerabilities raise concern, while Mattermost raises $50M to compete with Slack.
Plus we react to Facebook’s Libra confirmation and the end of Google tablets.
A new vulnerability may be the next ‘Ping of Death’; we explore the details of SACK Panic and break down what you need to know.
Plus Firefox zero days targeting Coinbase, the latest update on Rowhammer, and a few more reasons it’s a great time to be a ZFS user.
Hello and welcome to Episode #289 of Linux in the Ham Shack. In this episode, LHS gets a visit from Jon "maddog" Hall, a legend in the open source and Linux communities. He discusses--well--Linux. Everything you ever wanted to know about Linux from its early macro computing roots all the way up to the present. If there's something you didn't know about Linux, you're going to find it here. Make sure to listen to the outtake after the outro for 30 more minutes on Linux you probably didn't know anything about. Thanks to Jon for an illuminating and fascinating episode.
One of the secrets of the success of Python the language is the tireless efforts of the people who work with and for the Python Software Foundation. They have made it their mission to ensure the continued growth and success of the language and its community. In this episode Ewa Jodlowska, the executive director of the PSF, discusses the history of the foundation, the services and support that they provide to the community and language, and how you can help them succeed in their mission.
On this episode of This Week in Linux, we have a BIG announcement from Ubuntu to talk about that is bound to be polarizing. We’re also going to cover some other Distro News from OpenMandriva, Alpine Linux, openSUSE, EndeavourOS, and Regolith Linux. Then we’re going to check out some Hardware News from Pine64 for the…
Josh and Kurt talk to David Brumley, the CEO of ForAllSecure and a professor at CMU. We discuss when David's team won the Cyber Grand Challenge, what the future of automated security looks like, and what ForAllSecure is doing. It's a fascinating window into the future of the industry.
Coming in on the newly scheduled day of Monday, the weekly round-up podcast Linux Gaming News Punch Episode 18 is now here.
But the Finnish-born Linux creator essentially told the Australian not to come the raw prawn.
"You've made that claim before, and it's been complete bullshit before too, and I've called you out on it then too," wrote Torvalds.
"Why do you continue to make this obviously garbage argument?"
According to Torvalds, the page cache serves its correct purpose as a cache.
"The key word in the 'page cache' name is 'cache'," wrote Torvalds.
Zhaoxin is the company producing Chinese x86 CPUs, created as a joint venture between VIA and the Shanghai government. The current Zhaoxin ZX CPUs are based on VIA's Isaiah design and make use of VIA's x86 license. The Linux 5.3 kernel will bring better support for these Chinese desktop x86 CPUs.
Future designs of the Zhaoxin processors call for 7nm manufacturing, PCI Express 4.0, DDR5, and other features to put them on parity with modern Intel and AMD CPUs. It remains to be seen how well that will work out, but things certainly seem to be moving along in the desktop/consumer space for Chinese-built x86 CPUs, while in the server space the Hygon Dhyana EPYC-based processors fill that role for Chinese servers.
The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today is announcing the availability of the Certified Kubernetes Administrator (CKA) exam and the corresponding Kubernetes Fundamentals course as in-country, instructor-led programs taught in Chinese.
According to a Cloud Native Computing Foundation survey, 44 percent of Mandarin respondents are deploying Kubernetes. There is great demand in China and the overall Asia/Pac region for training courses that will help developers accelerate their work with Kubernetes and associated technologies.
Since launching in 2017, the CKA exam has been taken by nearly 10,000 professionals around the world. Now it will be easier for Chinese users to take advantage of this offering with in-person instructors and in their local language. To register for the exam and courses, please visit: http://training.linuxfoundation.cn/
“The Kubernetes administrator courses and certified exam are among the most popular training courses we offer,” said Clyde Seepersad, general manager, Linux Foundation training. “We’re now able to make the courses and exam available in Chinese with in-country exam delivery and instructors, which we hope will increase access and opportunity to learn and apply one of today’s most relevant and pervasive open source technologies.”
Weston 6.0.1 is released with build system fixes to smooth the transition to Meson. Other miscellaneous bugfixes are also included.
Note that the PGP signing key has changed to 0FDE7BE0E88F5E48.
- (1):
      zunitc: Fix undeclared identifier 'NULL'

Alexandros Frantzis (1):
      clients/simple-dmabuf-egl: Properly check for error in gbm_bo_get_handle_for_plane

Antonio Borneo (2):
      clients: close unused keymap fd
      log: remove "%m" from format strings by using strerror(errno)

Daniel Stone (2):
      weston: Properly test for output-creation failure
      compositor: Don't ignore --use-pixman for Wayland backend

Fabrice Fontaine (1):
      Fix build with kernel < 4.4

Harish Krupo (4):
      meson.build: Fix warning for configure_file
      window.c: Don't assume registry advertisement order
      data-device: send INVALID_FINISH when operation != dnd
      Fix: clients/window: Premature finish request when copy-pasting

Kamal Pandey (1):
      FIX: weston: clients: typo in simple-dmabuf-egl.c

Luca Weiss (1):
      Fix incorrect include

Marius Vlad (3):
      meson.build/libweston: Fix clang warning for export-dynamic
      compositor: Fix invalid view numbering in scene-graph
      compositor: Fix missing new line when displaying buffer type for EGL buffer

Pekka Paalanen (7):
      meson: link editor with gobject-2.0
      meson: link cms-colord with glib and gobject
      meson: link remoting with glib and gobject
      meson: DRM-backend demands GBM
      meson: dep fix for compositor.h needing xkbcommon.h
      build: add missing dep to x11 backend
      libweston: fix protocol install path

Scott Anderson (1):
      compositor: Fix incorrect use of bool options

Sebastian Wick (1):
      weston-terminal: Fix weston-terminal crash on mutter

Silva Alejandro Ismael (1):
      compositor: fix segfaults if wl_display_create fails

Simon Ser (1):
      build: bump to version 6.0.1 for the point release

Tomohito Esaki (1):
      cairo-util: Don't set title string to Pango layout if the title is NULL
git tag: 6.0.1
Weston 6.0 was released back in March with a remote/streaming plug-in and Meson becoming the preferred build system among other improvements. Weston 6.0.1 was released today by Simon Ser with various fixes to this reference Wayland compositor.
Weston 6.0.1 is mostly made up of Meson build system fixes/improvements to ensure a good Meson experience. There is also a fix for building with pre-4.4 kernels and a variety of other smaller fixes.
Red Hat is working on the next release of the supported enterprise distribution of OpenStack, Red Hat OpenStack Platform 15, based on the Stein community release. In this multi-part blog series, we’ll be examining some of the features that Red Hat and the open source community have collaborated on–starting with a look to future workloads, such as artificial intelligence.
"How does OpenStack enable next generation workloads?" you ask. When it comes to computer-driven decision making, machine learning algorithms can provide adaptable services that can get better over time. Some of these workloads, such as facial recognition, require GPUs to ingest and process graphical data in real time. But the more powerful GPUs often used for machine learning and such are expensive, power-hungry, and can take up a lot of room in the servers' chassis. When working with GPUs at scale, optimized utilization is key to more cost effective machine learning.
Just a few days ago I wrote how the Panfrost Gallium3D driver continues making incredible progress for this community-driven, open-source graphics driver targeting Arm Bifrost/Midgard graphics. There's yet another batch of new features and improvements to talk about.
Most of this feature work continues to be done by Panfrost lead developer Alyssa Rosenzweig who is interning at Collabora this summer and appears to be spending most of her time working on this reverse-engineered Arm graphics driver supporting their recent generations of IP.
Vulkan 1.1.112 was outed this morning as the newest documentation update to this high performance graphics and compute API.
Vulkan 1.1.112 is quite a mundane update, with just documentation corrections and clarifications this go around and no new extensions. But at least the clarifications should help out some, along with the other maintenance items addressed by this Vulkan 1.1.112 release. It's not a surprise the release is so small considering Vulkan 1.1.111 was issued just two weeks ago.
As covered last week, the Linux kernel is finally about to see FSGSBASE support, a feature supported by Intel CPUs going back to Ivybridge that can help performance. Since that earlier article the FS/GS BASE patches have been moved to the x86/cpu branch, meaning that unless any last-minute problems arise the functionality will be merged for the Linux 5.3 cycle. I've also begun running some benchmarks to see how this will change Linux performance on Intel hardware.
See the aforelinked article for more background information on this functionality, which has been available in patch form for the Linux kernel going back years but hasn't been mainlined -- well, until hopefully next month. FSGSBASE should help context switching performance, which is particularly good news following the various CPU vulnerabilities like Meltdown and Zombieload that have really hurt context switching performance.
Version 2.1.11 is now out. In addition to bug fixes, this release adds one long awaited feature: the ability to detach the chat box into a separate window.
Another important change is to the server. IP bans now only apply to guest users. When a user with a registered account is banned, the ban is applied to the account only. This is to combat false positives caused by many unrelated people sharing the same IP address because of NAT.
Free collaborative drawing program Drawpile 2.1.11 was released today. This release features the ability to detach the chat box into a separate window.
Webmail is a great way to access your emails from different devices and when you are away from home. Most web hosting companies now include email with their server plans, and nearly all of them offer the same three webmail clients: RoundCube, Horde, and SquirrelMail. They are part of cPanel, the most popular hosting control panel.
Package managers provide a way of packaging, distributing, installing, and maintaining apps in an operating system. With modern desktop, server and IoT applications of the Linux operating system and the hundreds of different distros that exist, it becomes necessary to move away from platform specific packaging methods to platform agnostic ones. This post explores 3 such tools, namely AppImage, Snap and Flatpak, that each aim to be the future of software deployment and management in Linux. At the end we summarize a few key findings.
If you are looking for free email clients for Linux and Windows, here are five you can try and consider for casual or professional use.
Web-based email, accessed via a browser or mobile apps, is popular today. However, large and medium enterprises, as well as many everyday users, still prefer native desktop email clients for heavy and office use. Microsoft Outlook is the most popular desktop email client, but it is of course not free and you have to pay a hefty licence fee to use it.
There are multiple free desktop email clients available. Here are the best five free and open source email clients, which you can go ahead and try and then deploy for your needs.
OnlyOffice Desktop Editors is definitely an interesting office suite. Unique, fairly stylish, with reasonably good Microsoft format compatibility - I'm not sure about the background image transparency, whether it's a glitch, a bug or a PEBKAC. I also like the UI - minimalistic yet useful. Plugins are another nice feature, and you will find lots of small, elegant touches everywhere. With a free price tag, this is a rather solid contender for home use.
But there were some problems, too. The initial startup, that's a big one for newbies. Styles can be better sorted out, document loading is too slow, the UI suffers from over-simplification here and there, and the fonts need to be sharper and with more contrast, the whole new-age gray-on-gray is bad. Maybe some of these missing options are actually there in the business editions, and I'm inclined to take those for a spin, too. So far, I wouldn't call this an outright replacement for Microsoft Office, but I'm definitely intrigued, and do intend to continue and expand my testing of OnlyOffice. Very neat. I suggest you grab the program for a spin, I think you'll be pleasantly surprised.
Total War: THREE KINGDOMS was released in its all-caps glory about a month ago and saw a same-day Linux release thanks to porters Feral Interactive. The action this time around is centered in China during its fractious Three Kingdoms period of history, which saw the end of the Han dynasty as warlords and coalitions battled it out for supremacy. More specifically, this Total War title also takes inspiration from the Romance of the Three Kingdoms novel and its larger-than-life heroes and villains. Developer Creative Assembly has put in plenty of time and effort to capture the feeling of both the novel and the historical conflict.
At the heart of this design philosophy is the option to play the turn-based campaign in Romance mode. Veteran players that have played other Total War titles such as the Warhammer entries may be familiar with the prominence that hero units and leaders have come to take in the series. Romance mode continues this trend by making it so the commanders of retinues are key to warfare. They lead troops, use abilities to buff allies and hamper enemies, can stand up to dozens of regular troops and fight duels with enemy commanders. A more classic mode, where regular troops feature more prominently, is also available but I spent the majority of my time with the game playing in Romance mode.
Oh goodie, more space action goodness! Underspace from Pastaspace Interactive is on Kickstarter looking for funding and it seems like quite a promising game.
This is as a result of this article on Wccftech, which highlights a number of other interesting statements made by Sweeney recently. The funny thing is, Valve themselves are helping to improve Wine (which Sweeney touches on below) with Steam Play (which is all open source, remember), and a lot of the changes make it back into vanilla Wine.
We're in for a sadly longer wait than expected for the first-person shooter Insurgency: Sandstorm [Steam], as it's not coming until next year for Linux.
During a recent Twitch broadcast over the free weekend, someone asked in chat "Linux will be released along with consoles or after?", to which the Lead Game Designer, Michael Tsarouhas, said (here) "We haven't really announced our Linux or Mac release either, but we will just have to update you later, right now we can say we are focused on the PC post-release content and the console releases.".
Tense Reflection will ask you to think, solve and shoot, as you need to solve puzzles to reload your ammo, making it a rather unique hybrid of game genres.
Developed by Kommie since sometime in 2016, the gameplay is split across three different panels you will need to switch between: a colour panel to pick the colour of your shots, the puzzle panel you need to solve to apply the colour, and then the shooter to keep it all going.
SCUM, a survival game from Gamepires, Croteam and Devolver Digital that was previously confirmed to eventually come to Linux is still planned.
They never gave a date for the Linux release and they still aren't, but the good news is that it still seems to be in their minds. Writing on Steam, a developer kept it short and sweet by saying "Its not to far" in reply to my comment about hoping the Linux version isn't far off. Not exactly much to go by, but it's fantastic to know it's coming as I love survival games like this.
I absolutely love real-time strategy games, so Moduwar was quite a catch to find. It seems rather unique too, especially how you control everything.
Instead of building a traditional base and units, you control an alien organism that can split and change depending on what you need to do. It sounds seriously brilliant! Even better is that it will support Linux. I asked on the Steam forum after finding it using the Steam Discovery Queue, to which the developer replied with "Yes, there will be a Linux version, that's the plan. Thanks :)".
While the number of gaming titles being made available on Linux is increasing each month, it’s widely accepted that gaming remains one of the weakest points of all Linux based operating systems. There are options like Pop!_OS, Manjaro, etc., that deliver considerably better performance, but they’re also heavily dependent on the Valve-owned Steam game distribution platform.
Just recently, we reported Ubuntu’s plans to completely drop support for 32-bit packages. Earlier, during the Ubuntu 17.10 development cycle, Canonical announced its plans to ditch the 32-bit installation images. Along similar lines, Canonical had also disabled the upgrades from Ubuntu 18.04 LTS to Ubuntu 18.10.
Canonical's decision to effectively ditch official support for 32-bit x86 in Ubuntu 19.10 means the Steam gaming runtime is likely to run aground on the Linux operating system – and devs say the Wine compatibility layer for running Windows apps will be of little use.
As a result of the changes, Valve developer Pierre-Loup Griffais confirmed on Twitter: "Ubuntu 19.10 and future releases will not be officially supported by Steam or recommended to our users." This is because Steam relies on 32-bit x86, aka i386, support for running older games that are 32-bit-only. Without official 32-bit x86 support in Ubuntu, Valve is walking away from the Linux distro.
Canonical, the developer of Ubuntu, has backtracked on an earlier announcement that Ubuntu 19.10 will no longer update 32-bit packages and applications, announcing today that Ubuntu 19.10 and 20.04 will support select 32-bit apps.
The news follows Valve and the developers of Wine, an open source compatibility layer for running Windows apps on other operating systems, saying they would stop supporting Ubuntu completely.
Canonical's decision to stop updating 32-bit libraries has seen Valve react by stating Ubuntu will no longer be officially supported from Ubuntu 19.10 onwards.
Valve has announced it is pulling support for its Steam digital distribution platform on Canonical's Ubuntu Linux operating system, after the company revealed plans to drop 32-bit support.
Valve's interest in Linux is storied and ongoing: while the company focused, naturally enough, on Windows for the launch of its now-ubiquitous Steam digital distribution platform, it announced Linux support in 2012 and launched it in 2013 via Canonical's Ubuntu Software Centre. When Valve announced its own Linux-based gaming operating system in 2013, it extended its efforts with the operating system; Steam Play, launched late last year, allowed Steam for Linux to play Windows games through a compatibility shim dubbed Proton.
Now, though, Valve has indicated that it is to drop support for the popular Ubuntu Linux distribution - though not Linux in general - over maintainer Canonical's decision to stop developing 32-bit libraries.
Valve's video game megastore Steam will no longer support the Ubuntu Linux operating system after October of this year.
Announced by Valve’s Pierre-Loup Griffais on Twitter, the company will officially be dropping support for future versions of the Linux distro.
Those who frequently use Ubuntu will have until the OS’ next update, 19.10, to move to another OS before Valve’s services become unsupported. The update is planned to release on October 17th, just four months away.
Apple, Canonical and Microsoft may have switched to distributing 64-Bit OSs a few years ago, but Mac OS X, Ubuntu and Windows all still support the 32-bit architecture. Microsoft integrates Windows 32-bit on Windows 64-bit (WoW64) for example, while the current version of Ubuntu still supports 32-bit. That is, until now.
Starting with 19.10 Ubuntu will contain only 64-bit code, with Canonical removing all 32-bit support. In short, no 32-bit applications will run on future versions of Ubuntu. This may not seem noteworthy considering that developers have had years to upgrade their software to 64-bit. However, Ubuntu 19.10 and newer will prevent even 64-bit applications from executing any 32-bit libraries or packages.
GAMING PORTAL Steam has announced it will ditch official support for Linux Ubuntu starting with the next release.
Last week, Canonical announced it would not be offering 32-bit builds of its software in future, and Steam responded with an almighty "erm - no".
Steam has pledged to do everything it can to avoid leaving anyone in the lurch but will be moving its attention to a yet-to-be-determined alternative Linux flavour soon.
The big problem for Steam is that so many of its classic games were only ever made available in 32-bit. By dropping support for the ageing architecture, it is essentially putting Steam in a position of borking half of the games in its library, whether that be by hiding them in the GUI or having them throw up an error code. Either way, it's not a good look.
Although there are lots of options, alternative operating systems, custom builds, emulators and so on, it's not the same as having an out-of-the-box experience.
Last week, Canonical announced that they will completely deprecate support for 32-bit (i386) hardware architectures in future Ubuntu Linux releases, starting with the upcoming Ubuntu 19.10 (Eoan Ermine) operating system, due for release later this fall on October 17th. However, the company mentioned the fact that while 32-bit support is going away, there will still be ways to run 32-bit apps on a 64-bit OS.
As Canonical didn't give more details on the matter at the time of the announcement, many users started asking how they would be able to run certain 32-bit apps and games on upcoming Ubuntu releases. Valve was also quick to announce that their Steam for Linux client won't be officially supported on Ubuntu 19.10 and future releases, so now Canonical has clarified the situation a bit, saying only updates to 32-bit libraries are dropped.
Steam’s Linux version will not support future versions of popular and newbie-friendly distribution Ubuntu, Valve have said. The news came after Ubuntu’s makers said they’d drop 32-bit as of the next big release in October, which sounded like it would leave the great many 32-bit Steam games unplayable. Valve said they were now planning to “switch our focus” to another Linux distro. Ubuntu have since pivoted to say they’re not dropping 32-bit, they’re just going to stop updating it, which is better but still a bit of a dead end.
If you use Linux-based operating system Ubuntu to play games on Steam, you might want to think about an alternative solution. As per a tweet from Valve coder Pierre-Loup Griffais over the weekend, future versions of Ubuntu will not be officially supported by Steam, nor will it be recommended to users as a compatible OS.
The first version of Ubuntu to not be supported will be 19.10, which is scheduled to release on 17th October. That means Ubuntu users have just shy of four months to enjoy official Steam support before it's a thing of the past.
The majority of the time that Linux gets dragged into the spotlight is when there are high-profile security bugs that remind people how Linux practically runs the world behind the scenes. This time, however, the controversy is ironically around one of the operating system’s weakest points: gaming. A Valve developer just “announced” on Twitter that the company will be dropping support for future releases of Ubuntu and, as expected, it has driven Linux users into a slightly frenzied panic.
Canonical's decision to cease development of 32-bit libraries in Ubuntu 19.10 "eoan" means it won't support the Steam gaming runtime, and devs say the Wine compatibility layer for running Windows apps will be of little use.
The Steam news was reported on Twitter by Valve developer Pierre-Loup Griffais, who said "Ubuntu 19.10 and future releases will not be officially supported by Steam or recommended to our users."
Ubuntu has caused anxiety with its announcement that "the i386 architecture will be dropped" in the next release. Some presumed this meant i386 libraries would not be shipped at all, meaning that no 32-bit applications would run.
Valve's harsh announcement comes just a few days after Canonical's announcement that they will drop support for 32-bit (i386) architectures in Ubuntu 19.10 (Eoan Ermine). Pierre-Loup Griffais said on Twitter that Steam for Linux won't be officially supported on Ubuntu 19.10, nor any future releases.
The Steam developer also added that Valve will focus their efforts on supporting other Linux-based operating systems for Steam for Linux. They will be looking for a GNU/Linux distribution that still offers support for 32-bit apps, and they will try to minimize the breakage for Ubuntu users.
I like to take care of my desktop Linux and I do so by not installing 32-bit libraries. If there are any old 32-bit applications, I prefer to install them in a LXD container. Because in a LXD container you can install anything, and once you are done with it, you delete it and poof it is gone forever!
In the following I will show the actual commands to set up a LXD container for a system with an NVIDIA GPU so that we can run graphical programs. Someone could take these and make some sort of easy-to-use GUI utility. Note that you can write a GUI utility that uses the LXD API to interface with the system container.
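To make that concrete, here is a minimal sketch of the kind of setup described, driving the lxc client from Python rather than quoting the article's own commands. The container name steam32, the ubuntu:18.04 image, the device names, and the shared X11 socket path are illustrative assumptions, and the nvidia.runtime option assumes the NVIDIA container runtime libraries are installed on the host.

#!/usr/bin/env python3
# Illustrative sketch (not the article's exact commands): prepare a LXD
# container for running 32-bit GUI apps on a host with an NVIDIA GPU.
import subprocess

def lxc(*args):
    """Run an lxc client command and fail loudly if it errors."""
    cmd = ["lxc", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Launch an Ubuntu 18.04 container, a release that still ships 32-bit libraries.
lxc("launch", "ubuntu:18.04", "steam32")

# Pass the host GPU through and enable the NVIDIA runtime so the container
# gets driver libraries that match the host driver.
lxc("config", "device", "add", "steam32", "gpu", "gpu")
lxc("config", "set", "steam32", "nvidia.runtime", "true")

# Share the host X11 socket so graphical programs inside the container can
# draw on the desktop (DISPLAY inside the container must match the host's).
lxc("config", "device", "add", "steam32", "x11", "disk",
    "source=/tmp/.X11-unix", "path=/tmp/.X11-unix")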
The all-powerful folks over at Valve have announced that Steam will soon not support the Linux-based operating system Ubuntu. Starting from the upcoming version 19.10, all future versions of Ubuntu will not be supported and also won’t be recommended to users as a compatible OS.
Last week, Ubuntu announced it would end support for 32-bit applications, starting with its next release.
Canonical has issued a statement on Ubuntu’s 32-bit future — and gamers, among others, are sure to be relieved!
The company says Ubuntu WILL now continue to build and maintain a 32-bit archive going forward — albeit, not a full one.
In a response emailed to me (but presumably posted online somewhere) the company cite “the huge amount of feedback this weekend from gamers, Ubuntu Studio, and the WINE community” for persuading them to change track.
That outcry, almost unparalleled in Ubuntu’s history, resulted in Valve, makers of the hugely popular games distribution service Steam, announcing that it would not support future Ubuntu releases.
This, combined with worries from users relying on legacy applications or Windows-only software run through WINE, has resulted in a change of plans.
Accordingly, Canonical says it “…will build selected 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS.”
Notice the word “selected” there. It seems the full 32-bit archive we enjoy now won't stick around, but a curated collection of libraries, tooling and other packages will be made available.
A Valve developer announces that Steam for Linux will drop support for the upcoming Ubuntu 19.10 release and future Ubuntu releases. Softpedia News reports that "Valve's harsh announcement comes just a few days after Canonical's announcement that they will drop support for 32-bit (i386) architectures in Ubuntu 19.10 (Eoan Ermine). Pierre-Loup Griffais said on Twitter that Steam for Linux won't be officially supported on Ubuntu 19.10, nor any future releases. The Steam developer also added that Valve will focus their efforts on supporting other Linux-based operating systems for Steam for Linux. They will be looking for a GNU/Linux distribution that still offers support for 32-bit apps, and that they will try to minimize the breakage for Ubuntu users."
Having an open mind and admitting when you are wrong is a noble quality. Those that are stubborn and continue with bad ideas just to save face are very foolish. With all of that said, sometimes you have to stick with your decisions despite negative feedback because you know they are right. After all, detractors can often be very loud, but not necessarily large in numbers. Not to mention, you can't please everyone, so being indecisive or "wishy-washy" in an effort to quash negativity can make you look weak. And Canonical looks very weak today.
When the company announced it was planning to essentially stop supporting 32-bit packages beginning with the upcoming Ubuntu 19.10, I was quite impressed. Look, folks, it is 2019 -- 64-bit processors have been commonplace for a long time. It's time to pull the damn 32-bit band-aid off and get on with things. Of course, there was some negativity surrounding the decision -- as is common with everything in the world today. In particular, developers of WINE were upset, since their Windows compatibility layer depends on 32-bit, apparently. True Linux users would never bother with WINE, but I digress.
Canonical has issued a statement on Ubuntu's 32-bit future, saying it will continue to build and maintain a 32-bit archive going forward.
It seems Canonical have done a bit of a U-turn on dropping 32-bit support for Ubuntu, as many expected they would do. Their official statement is now out for those interested.
Ubuntu issued a press release about its stance on 32-bit i386 packages. It will be building selected 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS, but not a full archive.
Last week, Canonical announced that they will be completely dropping support for 32-bit (i386) hardware architectures in future Ubuntu Linux releases, starting with the upcoming Ubuntu 19.10 (Eoan Ermine) operating system.
After this announcement, many users started asking how they would be able to run 32-bit apps and games on upcoming Ubuntu releases.
Then, three days later, Valve announced that Ubuntu 19.10 and future releases will not be officially supported by Steam or recommended to their users.
They will evaluate ways to minimize breakage for existing users, but they will switch to a different distribution, currently TBD.
Ubuntu has reversed their decision based on the community response.
Thanks to the huge amount of feedback this weekend from gamers, Ubuntu Studio, and the WINE community, we will change our plan and build selected 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS.
We will put in place a community process to determine which 32-bit packages are needed to support legacy software, and can add to that list post-release if we miss something that is needed.
Community discussions can sometimes take unexpected turns, and this is one of those. The question of support for 32-bit x86 has been raised and seriously discussed in Ubuntu developer and community forums since 2014. That’s how we make decisions.
It looks like my info from this weekend was accurate, "I'm hearing that Canonical may revert course and provide limited 32-bit support." Canonical issued a statement today that they indeed will provide "selected" 32-bit packages for the upcoming Ubuntu 19.10 as well as Ubuntu 20.04 LTS.
Canonical announced that as a result of feedback, they "changed our plan and build selected 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS...We will put in place a community process to determine which 32-bit packages are needed to support legacy software, and can add to that list post-release if we miss something that is needed."
At first glance, Canonical dropping support for 32-bit Ubuntu Linux libraries looked to be interesting -- the end of an era -- but of no real importance. Then, Canonical announced that, beginning with October's Ubuntu 19.10 release, 32-bit computer support would be dropped. And both developers and users screamed their objections.
Canonical listened and has changed course. "Thanks to the huge amount of feedback this weekend from gamers, Ubuntu Studio, and the WINE community, we will change our plan and build selected 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS."
Elisa is a music player developed by the KDE community that strives to be simple and nice to use. We also recognize that we need a flexible product to account for the different workflows and use-cases of our users.
We focus on a very good integration with the Plasma desktop of the KDE community without compromising the support for other platforms (other Linux desktop environments, Windows and Android).
We are creating a reliable product that is a joy to use and respects our users' privacy. As such, we will prefer to support online services where users are in control of their data.
My name is Chris Tallerås and I’m a 23-year-old dude from the Olympic city of Lillehammer in Norway. I do political activism, traveling the country to fight the climate crisis and to advocate free culture and free, libre & open source software in our kingdom.
[...]
Maybe later in 2017. I was getting tired of Windows and wanted to get into Linux...
Clear Linux is a rolling release distro that places a strong emphasis on performance. The distribution focuses on providing optimizations for Intel (and compatible) CPU platforms and often scores well in benchmark tests.
I previously experimented briefly with Clear Linux in 2017 and found it to be very minimal in its features. The distribution presented users with a command line interface by default and, while it was possible to install a desktop environment from the project's repositories, it was not focused on desktop computing. These days Clear Linux is available in several editions. There are separate builds for command line and desktop editions, along with cloud and specially tailored virtual machine builds.
I downloaded the distribution's live desktop edition which was a 2.2GB compressed file. Expanding the download unpacks a 2.3GB ISO. It actually took longer for me to decompress the file than it would have to download the extra 100MB so the compression used on the archive is probably not practical.
Trying to boot from the live desktop media quickly resulted in Clear Linux running into a kernel panic and refusing to start. This was done trying version 29410 of the distribution and, since new versions come along almost every day, I waited a while and then downloaded another version: Clear Linux 29590. The new version had an ISO approximately the same size and, after it passed its checksum, it too failed to boot due to a kernel panic.
I have used Clear Linux on this system before and, though it technically utilizes an AMD CPU, that was not an issue during my previous trial. The current situation does make me wonder if Clear Linux might have optimized itself so much that it is no longer capable of running on previous generation processors.
The Linux kernel was vulnerable to two DoS attacks against its TCP stack. The first made it possible for a remote attacker to panic the kernel, and the second could trick the system into transmitting very small packets, so that a data transfer would consume the whole bandwidth while consisting mainly of packet overhead.
The IPFire kernel is now based on Linux 4.14.129, which fixes these vulnerabilities along with various other bugs.
In this video, I am going to show an overview of OpenMandriva Lx 4.0 and some of the applications pre-installed.
In this video, we look at Enso OS 0.3.1. Enjoy!
SUSE CaaS Platform 4.0 is built on top of SLE 15 SP1 and requires either the JeOS version shipped from the product repositories or a regular SLE 15 SP1 installation. Please note that SLE 15 SP1 is now officially out! Check out the official announcement for more information. You should therefore no longer use a SLES 15 SP1 environment with the SLE Beta Registration Code, because that code has now expired; instead, you can either use your regular SLE Registration Code or use a Trial.
As businesses are transforming their IT landscapes to support present and future demands, SUSE® is providing the foundation for both their traditional and growing containerized workloads with the release of SUSE Linux Enterprise 15 Service Pack 1.
With the current increase in data creation, increased costs and flat to lower budgets, IT organizations are looking for ways to deploy highly scalable and resilient storage solutions that manage data growth and complexity, reduce costs and seamlessly adapt to changing demands. Today we are pleased to announce the general availability of SUSE Enterprise Storage 6, the latest release of the award-winning SUSE software-defined storage solution designed to meet the demands of the data explosion.
Happy Birthday! It’s been 1 year since we introduced the world’s first multimodal OS supporting 64-bit Arm systems (AArch64 architecture), SUSE Linux Enterprise Server for Arm 15. Enterprise early adopters and developers of Ceph-based storage and industrial automation systems can gain faster time to market for innovative Arm-based server and Internet of Things (IoT) solutions. SUSE Linux Enterprise Server for Arm is tested with a broad set of Arm System-on-a-Chip (SoC) processors, enabling enterprise-class security and greater reliability. And with your choice of Standard or Premium Support subscriptions you can get the latest security patches and fixes, and spend less time on problem resolution as compared to maintaining your own Linux distribution.
Today, SUSE releases SUSE Linux Enterprise 15 Service Pack 1, marking the one-year anniversary since we launched the world’s first multimodal OS. SUSE Linux Enterprise 15 SP1 advances the multimodal OS model by enhancing the core tenets of common code base, modularity and community development while hardening business-critical attributes such as data security, reduced downtime and optimized workloads.
Before we can answer these questions, let’s take a look at its past to give some context. Since its original release in 2010 as a joint venture by Rackspace and NASA, and its subsequent spin-off into a separate open source foundation in 2012, OpenStack has seen growth and hype that was almost unparalleled. I was fortunate enough to attend the Paris OpenStack Summit in 2014, where Mark Collier was famously driven onto stage for a keynote in one of the BMW electric sports cars. The event was huge and was packed with attendees and sponsors – almost every large technology company you can think of was there. Marketing budget had clearly been splurged in a big way on this event with lots of pizazz and fancy swag to be had from the various vendor booths. Cycle forward 4 years to the next OpenStack Summit I attended – Vancouver in May 2018. This was a very different affair – most of the tech behemoths were no longer sponsoring, and while there were some nice pieces of swag for attendees to take home, it was clear that marketing budgets had been reduced as the hype had decreased. There were less attendees, less expensive giveaways, but that ever-present buzz of open source collaboration that has always been a part of OpenStack was still there. Users were still sharing their stories, and developers and engineers were sharing their learnings with each other, just on a slightly smaller scale.
Engaging with the community has always been important for SUSE and this is no different for our Academic Program. That is why next week, the SUSE Academic Program is excited to attend and participate in a three day event hosted by one of the most respected networks in UK education.
As a grizzled veteran of the IT industry, I have been involved in many high performance computing (HPC) projects over the years, both from a hardware and software perspective. I have always found them to be intensely interesting mainly because the projects were deeply scientific in nature, whether it be decoding the human genome, designing better, more efficient vehicles or even deep space research. What’s different now is the emergence of HPC into the mainstream. Instead of it just being the preserve of academics, scientists and other boffins, normal commercial organisations are trying to harness the power of HPC to solve their business issues, notably through its application to AI and Machine Learning. As today’s technology creates vast hordes of unstructured data, unlocking the business value therein has become a key competitive advantage and almost the Holy Grail of Digital Transformation for many organisations. HPC has a key part to play in this as deriving insight from large data sets has been a major component of scientific research for many years.
The traditional way we think of data is as something that’s stored and then used later, like electricity in batteries. But today, data is always flowing, and constantly in use, much more like the electricity you pull from a grid than the energy you store in a battery. In the old days, you could wait a day, even a week, to get ahold of data. Today, it needs to be there at the flip of a switch.
So I hope everyone is enjoying Fedora Workstation 30, but we don’t rest on our laurels here, so I thought I’d share some of the things we are working on for Fedora Workstation 31. This is not an exhaustive list, but it covers some of the more major items we are working on.
Wayland – Our primary focus is still on finishing the Wayland transition and we feel we are getting close now; thank you to the community for their help in testing and verifying Wayland over the last few years. The single biggest goal currently is fully removing our X Windowing System dependency, meaning that GNOME Shell should be able to run without needing XWayland. For those wondering why that has taken so much time, well, it is simple: for 20 years developers could safely assume we were running atop X. So refactoring everything to remove any code that assumes it is running on top of X.org has been a major effort. The work is mostly done now for the shell itself, but there are a few items left in regards to the GNOME Settings daemon where we need to expel the X dependency. Olivier Fourdan is working on removing those settings daemon bits as part of his work to improve the Wayland accessibility support. We are optimistic that we can declare this work done within a GNOME release or two, so GNOME 3.34 or maybe 3.36. Once that work is complete, an X server (XWayland) would only be started if you actually run an X application, and when you shut that application down the X server will be shut down too.
This release is an emergency release to fix a critical security vulnerability in Tor Browser.
You should upgrade as soon as possible.
Affecting the Ubuntu 19.04 (Disco Dingo), Ubuntu 18.10 (Cosmic Cuttlefish), and Ubuntu 18.04 LTS (Bionic Beaver) operating systems, the new Linux kernel security patch fixes a vulnerability (CVE-2019-12817) on 64-bit PowerPC (ppc64el) systems, which could allow a local attacker to access memory contents or corrupt the memory of other processes.
"It was discovered that the Linux kernel did not properly separate certain memory mappings when creating new userspace processes on 64-bit Power (ppc64el) systems. A local attacker could use this to access memory contents or cause memory corruption of other processes on the system," reads the security advisory.
Canonical has let it be known that minds have been changed about removing all 32-bit x86 support from the Ubuntu distribution.
Armbian provides lightweight Debian or Ubuntu images for various Arm Linux SBC...
Welcome to the Ubuntu Weekly Newsletter, Issue 584 for the week of June 16 – 22, 2019. The full version of this issue is available here.
The Raspberry Pi 4 has arrived, albeit much sooner than anyone (even Raspberry Pi themselves) expected!
But what an update it is.
Sporting several major upgrades, the new Raspberry Pi 4 can claim to be the fastest and most versatile version of the single board computer yet. Better still, the price continues to start at the same low $35 entry point touted by earlier models.
Read on to learn more about the Raspberry Pi 4 specs and new features, and to find out where you can buy a Raspberry Pi 4 for yourself.
Raspberry Pi 4 is here with upgraded technical specifications. You get up to 4 GB of RAM and you can connect two 4K displays to it. With the new hardware, you should be more comfortable using it as a desktop. The starting price remains $35, like the previous models.
The Raspberry Pi Foundation has just launched the new Raspberry Pi 4 Model B.
It comes with some impressive upgrades which make it one of the most powerful single board computers under $40.
The Raspberry Pi 4 has launched with a 1.5GHz quad-core, Cortex-A72 Broadcom SoC, up to 4GB RAM, native GbE, USB 3.0 and Type-C ports, and a second micro-HDMI for dual 4K displays.
Eben Upton announced the Raspberry Pi 4 Model B as a “surprise,” which is generally true of any new Pi launch, but even more so here. In February, the RPi Trading CEO hinted that the next Pi would not arrive until 2020. But here it is, checking off multiple wishlist items, including the biggest one: more RAM. Although you can buy a 1GB version for the same $35 price, there’s also a 2GB version for $45 and a 4GB model for $55, all with speedier LPDDR4 RAM.
The Raspberry Pi is a super-affordable single board computer that can be used for a variety of different projects. Some of the most popular uses of the Raspberry Pi are to turn it into a dedicated media player with OSMC or a videogame emulation machine with RetroPie or Recalbox. Given the versatility of the Raspberry Pi, some have wondered if it could replace a traditional desktop computer. While the Raspberry Pi has significant hardware limitations, the following lightweight operating systems certainly think so. Note: The Raspberry Pi has a number of different models on the market.
Pi-top announced a “Pi-top [4]” mini-PC based on the new Raspberry Pi 4 with an integrated OLED display, a battery, and a dozen component modules ranging from sensors to potentiometers.
Pi-top has preannounced its first mini-PC form factor Pi-top and the first Pi accessory to feature the new Raspberry Pi 4 Model B. Like its earlier, laptop style Pi-top models, the Pi-top [4] offers a hacking bay for connecting component modules to the Pi’s GPIO.
The Raspberry Pi 4 has just been announced by the Raspberry Pi Foundation. The Raspberry Pi boards have been evolving quite nicely over the last few years and becoming more powerful along the way. This latest version has up to 4GB RAM, dual 4K display support, a 1.5GHz processor and USB 3.0. Instead of a Micro USB port for power like its predecessor, this model uses a USB Type-C port.
We have the base Raspberry Pi 4 model for $35 that has 1GB LPDDR4 RAM, $45 gets you 2GB RAM, and $55 gets you 4GB onboard RAM. You have a faster CPU and GPU, faster Ethernet and dual-band Wi-Fi. It also has twice as many HDMI outputs as the previous model and two USB 3 ports. The HDMI ports can power two 4K monitors at 30 frames per second.
The Raspberry Pi series is arguably responsible for accelerating the single-board computer sector, giving people a capable computing platform for a cheap price. Now, the Raspberry Pi Foundation has announced the Raspberry Pi 4, and it’s a big upgrade over the Raspberry Pi 3 Model B+.
The biggest Raspberry Pi 4 upgrade is arguably in the chipset (BCM2711), as we’ve now got four Cortex-A72 CPU cores. This is a massive step up from the Model B+, which offered power-sipping Cortex-A53 CPU cores. In fact, the foundation says you can expect a 3x performance boost over the old chipset. We also see a RAM upgrade from LPDDR2 to LPDDR4 with the new model.
The latest version of the Raspberry Pi—Raspberry Pi 4—was released today, earlier than anticipated, featuring a new 1.5GHz Arm chip and VideoCore GPU with some brand new additions: dual-HDMI 4K display output; USB3 ports; Gigabit Ethernet; and multiple RAM options up to 4GB.
The Raspberry Pi 4 is a very powerful single-board computer and starts at the usual price of $35. That gets you the standard 1GB RAM, or you can pay $45 for the 2GB model or $55 for the 4GB model—premium-priced models are a first for Raspberry Pi.
Today, Raspberry Pi is introducing a new version of its popular line of single-board computers. The Raspberry Pi 4 Model B is the fastest Raspberry Pi ever, with the company promising "desktop performance comparable to entry-level x86 PC systems."
The new model is built around a Broadcom BCM2711 SoC, which, with four 1.5GHz Cortex A72 CPU cores, should be a big upgrade over the quad core Cortex A53 CPU in the Raspberry Pi 3. The RAM options are the even bigger upgrade though, with options for 1GB, 2GB, and even 4GB of DDR4. The Pi 3 was limited to 1GB of RAM, which really stung for desktop-class use cases.
The Raspberry Pi was originally designed to provide an ultra-cheap way to encourage kids to code, but the uncased credit card sized computer has found an appreciative audience well outside of the education system, going on to sell over a million Pis in its first year alone. Each new iteration of the Pi has added something new, including a 64-bit processor, dual-band 802.11ac Wi-Fi, and Power over Ethernet (PoE) via a HAT.
Today, the Raspberry Pi Foundation announces the Raspberry Pi 4, and it’s a game changer, offering three times the processing power and four times the multimedia performance of its predecessor, the Raspberry Pi 3+. And that’s not all.
Not something we usually cover here, but it's a fun bit of hardware news. The Raspberry Pi 4 is now official and it's out and ready to pick up.
Interestingly, they also overhauled their home-grown Raspbian Linux OS, as it's now based on Debian 10 Buster. To go along with this, their original graphics stack is being retired in favour of using the Mesa "V3D" driver developed by Eric Anholt at Broadcom. They say it has allowed them to remove "roughly half of the lines of closed-source code in the platform" which is a nice win.
Managing to make it out today as a surprise is the Raspberry Pi 4. The Raspberry Pi 4 is a major overhaul and their most radical update yet while base pricing still starts out at $35 USD.
First of all, the Raspberry Pi SoC now features a quad-core Cortex-A72 CPU that can clock up to 1.5GHz for offering around three times faster performance. This SoC is the Broadcom BCM2711. There are also three variants of the Raspberry Pi 4 to offer 1GB, 2GB or 4GB of system memory -- the 1GB version will be $35, 2GB at $45, and 4GB at $55. With the new Raspberry Pi 4 SoC they are also now making use of the modern Broadcom V3D open-source driver to offer much better OpenGL support and even Vulkan eventually. The Raspberry Pi 4 with its much better graphics/display setup can even drive two displays via micro-HDMI connections.
If you're overclocking your Raspberry Pi, you might run into overheating problems. Fortunately, you can prevent this by adding sufficient cooling to your Pi.
The Raspberry Pi Foundation has unveiled the Raspberry Pi 4 which is touted to be the fastest Raspberry Pi ever. The foundation promises a “complete desktop experience” with the upgrades it has included in the Raspberry Pi 4.
Physically, you wouldn’t notice any difference as the Raspberry Pi 4 Model B looks similar to the Raspberry Pi 3 Model B+. However, you’ll feel the upgrades under the hood, as the latest Raspberry Pi edition comes packed with a faster SoC. The Raspberry Pi 4 now comes with a Cortex-A72 (ARM v8) 64-bit SoC running at 1.5GHz.
A new version, the Raspberry Pi 4, has been released, and it is an incredible update over the older model. A few years ago, I got a Raspberry Pi 3; it was my first 64-bit ARM board and came with a 64-bit CPU. Here are the complete specs for the updated 64-bit, credit-card-sized Raspberry Pi 4, which offers desktop-computer levels of performance.
Your Raspberry Pi 4 will be shipped with a new Debian 10-based distro, giving you cutting-edge applications, new features, and a modern theme.
We will release an image to support the device shortly. Our initial release will support software decoding only, although we will test the feasibility of backporting 4K (albeit SDR) decoding with Kodi v18.
We plan to support 4K and HDR with Kodi Matrix (v19) builds which are expected to be released as a stable version by the end of 2020 / early 2021. Kodi is removing support for proprietary hardware decoding interfaces and for Linux the new method of hardware accelerated playback will be via V4L2/GBM. Development for this is still in early stages, and a number of important changes are expected to land in Linux kernel 5.2 at a minimum. Presently, the Pi 4 is supported on a 4.19 kernel, so it will be some time before all of the necessary support is there for hardware accelerated playback and features such as HD audio passthrough.
The work on V4L2/GBM will be beneficial to other devices such as Vero 4K +, and will allow us to begin developing support for x86 devices in the near future as well.
OSMC would like to thank the Raspberry Pi foundation for making test units available in advance of the release and for their continued support.
We have a surprise for you today: Raspberry Pi 4 is now on sale, starting at $35. This is a comprehensive upgrade, touching almost every element of the platform. For the first time we provide a PC-like level of performance for most users, while retaining the interfacing capabilities and hackability of the classic Raspberry Pi line.
With the launch of the Raspberry Pi 4 SBC series, the Raspberry Pi Foundation released a new version of Raspbian OS, the official Raspberry Pi operating system based on the popular Debian GNU/Linux distribution. This release adds numerous new features and improvements, but the biggest change is that it supports the Raspberry Pi 4 Model B single-board computer.
Another major change in the new Raspbian OS release is that the entire operating system has been rebased on the soon-to-be-released Debian GNU/Linux 10 "Buster" operating system series, due for release next month on July 6th. This means that there are numerous updated components included in Raspbian 2019-06-20, along with lots of bug fixes and improvements.
Digital camera startup Octopus Cinema has been designing the "OCTOPUSCAMERA", a professional-grade digital cinema camera that is nonetheless an open platform with removable/upgradeable parts, and the camera platform itself runs Linux.
The OCTOPUSCAMERA supports up to 5K full frame recording, weighs less than 1kg, and is powered by Linux. It's a rather ambitious device and they aim to be shipping in 2020.
From May 21-25, Red Hat OpenShift Container Storage rolled into KubeCon Europe 2019 in Barcelona, Spain, a rare chance to bring different parts of the Red Hat community together from across Europe and the U.S. While there, we took the opportunity to sit down with members of the teams that are shaping the next evolution of container native storage in Red Hat OpenShift and throughout the Kubernetes ecosystem.
We’ve put together highlights from Barcelona, where you’ll see what happens when you gather 7,700 people from the Kubernetes ecosystem in one place. You’ll also hear from members of Red Hat’s team in Barcelona—Distinguished Engineer Ju Lim, Senior Architect Annette Clewett, Rook Senior Maintainer Travis Nielsen and others—about what’s exciting them now, and what’s ahead.
During KubeCon + CloudNativeCon Barcelona, we sat down with Bassam Tabbara – CEO and founder of Upbound – to talk about the company he is building to make the next decade about the Open / Open Source Cloud, breaking away from the proprietary cloud. Tabbara shared his insights into how AWS, Azure and the rest leverage open source technologies to create their proprietary clouds. He wants to change that.
Kiwi TCMS is the winner of the OpenAwards'19 Best Tech Community category! Big thanks to the jury, our contributors and core team, and the larger open source and quality assurance communities who voted for us and supported the project during all of those years.
As this week was my last week of exams, I worked on many minor points to finish it and complete all the missing parts for phase 1.
Dear FreeBSD community:
As I have a highly-visible role within the community, I want to share some news. I have decided the time has come to move on from my role with the FreeBSD Foundation, this Friday being my last day. I have accepted a position within a prominent company that uses and produces products based on FreeBSD.
My new employer has included provisions within my job description that allow me to continue supporting the FreeBSD Project in my current roles, including Release Engineering.
There are no planned immediate changes with how this pertains to my roles within the Project and the various teams of which I am a member.
FreeBSD 11.3 and 12.1 will continue as previously scheduled, with no impact as a result of this change.
I want to thank everyone at the FreeBSD Foundation for providing the opportunity to serve the FreeBSD Project in my various roles, and their support for my decision.
I look forward to continuing to support the FreeBSD Project in my various roles moving forward.
Glen
Well-known FreeBSD developer and leader of their release engineering team, Glen Barber, has left the FreeBSD Foundation but will continue working on FreeBSD as well as coordinating its releases.
Glen Barber has decided to take up a position at BSD-focused Netgate. Serving as an engineer at Netgate, Glen will continue to be engaged with upstream FreeBSD as well as working on pfSense. Netgate is the provider of various secure network offerings, including pfSense and their premium TNSR firewall/router/VPN platform.
Yesterday, the GNU APL version 1.8 was released with bug fixes, FFT, GTK, RE, user defined APL commands and more.
GNU APL is a free interpreter for the programming language APL.
A wide range of OSS licenses exists, but there are common traits among most of them. Two of the principal shared characteristics are that: (1) recipients can freely use, modify and distribute the software; and (2) the source code (i.e., the human-readable code) is made available to enable the exercise of these rights. This distinguishes OSS from proprietary software. With proprietary software licenses, copying, modifying or redistributing is typically prohibited and only the object code (i.e., the machine-readable code or 'compiled form') is distributed. The significance of this is that to effectively modify the software, a developer typically needs access to the source code.
The fourth release of the binb package just arrived on CRAN. binb regroups four rather nice themes for writing LaTeX Beamer presentations much more easily in (R)Markdown. As a teaser, a quick demo combining all four themes follows; documentation and examples are in the package.
The Python Imaging Library (PIL), although not in the standard library, has long been Python’s best-known 2-D image processing library. It predated installers such as pip, so a “friendly fork” called Pillow was created. Although the package is called Pillow, you import it as PIL to make it compatible with the older PIL.
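For anyone confused by the naming, here is a minimal sketch of that import convention (assuming Pillow has been installed with pip; the input file photo.jpg is hypothetical):

    # Pillow is installed as "Pillow" but imported under the legacy "PIL" name.
    # "photo.jpg" is a hypothetical input file, used purely for illustration.
    from PIL import Image

    with Image.open("photo.jpg") as img:
        print(img.format, img.size, img.mode)   # e.g. JPEG (640, 480) RGB
        thumbnail = img.resize((128, 128))      # a simple 2-D operation
        thumbnail.save("photo_thumb.jpg")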
Please make sure you book your ticket in the coming days. We will switch to late bird rates next week. If you want to attend the training sessions, please buy a training pass in addition to your conference ticket, or get a combined ticket. We only have very few training seats left.
This week we welcome Geir Arne Hjelle (@gahjelle) as our PyDev of the Week! Geir is a regular contributor to Real Python. You can also find some of his work over on Github. Let’s take a few moments to get to know Geir now!
In my last article, I introduced Mypy, a package that enforces type checking in Python programs. Python itself is, and always will remain, a dynamically typed language. However, Python 3 supports "annotations", a feature that allows you to attach an object to variables, function parameters and function return values. These annotations are ignored by Python itself, but they can be used by external tools.
Mypy is one such tool, and it's an increasingly popular one. The idea is that you run Mypy on your code before running it. Mypy looks at your code and makes sure that your annotations correspond with actual usage. In that sense, it's far stricter than Python itself, but that's the whole point.
In my last article, I covered some basic uses for Mypy. Here, I want to expand upon those basics and show how Mypy really digs deeply into type definitions, allowing you to describe your code in a way that lets you be more confident of its stability.
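As a rough illustration of the idea (not code from the article): the annotations below do nothing at runtime, but running mypy over the file catches the mismatched call before the program is ever executed.

    # greet.py -- a hypothetical example of how Mypy uses annotations.
    def greet(name: str, times: int = 1) -> str:
        return ("Hello, " + name + "! ") * times

    print(greet("world", 3))      # fine at runtime and for Mypy
    # print(greet("world", "3"))  # Python itself would only fail here at runtime;
    #                             # "mypy greet.py" reports the bad argument type
    #                             # without running the program at all.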
AMD has lost one of their leading LLVM compiler developers, who also served as a Vulkan/SPIR-V expert involved in those Khronos specifications.
Neil Henning has parted ways with AMD and is now joining Unity Technologies. Neil was brought to AMD to improve the performance of their LLVM compiler, in particular their LLVM Pipeline Compiler (LLPC) used by the likes of their official AMD Vulkan driver in order to make it competitive with their long-standing, proprietary shader compiler currently used by their binary graphics drivers. While at AMD, he was able to increase the performance of their LLVM shader compiler stack by about 2x over the past year. He also implemented various Vulkan driver extensions into their stack.
Intel has been working on its OneAPI project for quite some time. The company has now shared more details of the software project — including the launch of a new programming language called “Data Parallel C++ (DPC++).”
Many programmers are moving towards data science and machine learning hoping for better pay and career opportunities -- and there is a reason for it. Data scientist has been ranked the number one job on Glassdoor for the last couple of years, and the average salary of a data scientist is over $120,000 in the United States, according to Indeed.
Data science is not only a rewarding career in terms of money but it also provides the opportunity for you to solve some of the world's most interesting problems. IMHO, that's the main motivation many good programmers are moving towards data science, machine learning and artificial intelligence.
In this example, we will create a Python function which takes in a list of numbers and returns the smallest value. The solution is first to use the first number in the list as a placeholder, then compare that number with the other numbers in the list in a loop. If the program finds a number that is smaller than the one in the placeholder, the smaller number is assigned to the placeholder.
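A minimal version of the function described above might look like this (a sketch, not the tutorial's exact code):

    def find_smallest(numbers):
        # Use the first number in the list as the initial placeholder.
        smallest = numbers[0]
        # Compare the placeholder against every other number in the list.
        for value in numbers[1:]:
            if value < smallest:
                smallest = value   # a smaller number replaces the placeholder
        return smallest

    print(find_smallest([7, 3, 9, 1, 4]))   # prints 1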
To be useful, a program usually needs to communicate with the outside world by obtaining input data from the user and displaying result data back to the user. This tutorial will introduce you to Python input and output.
Input may come directly from the user via the keyboard, or from some external source like a file or database. Output can be displayed directly to the console or IDE, to the screen via a Graphical User Interface (GUI), or again to an external source.
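Before files, databases, or GUIs enter the picture, the simplest form of both is the built-in input() and print() pair -- a minimal sketch:

    # Read a line of text from the keyboard and write results to the console.
    name = input("What is your name? ")    # input() always returns a string
    print("Hello,", name)
    print(f"Your name is {len(name)} characters long.")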
Let’s face it: Stack Overflow has made developers’ lives easier. Almost every time I have a question, I find that someone on Stack Overflow has asked it, and that people have answered it, often in great detail.
I’m thus not against Stack Overflow, not by a long shot. But I have found that many Python developers visit there 10 or even 20 times a day, to find answers (and even code) that they can use to solve their problems.
A few years back I wrote a post about how I connected Python-based tests to an ELK setup - “ELK is fun”. It worked by taking an xUnit XML file, parsing it and sending the results via Logstash.
Over time I’ve learned a lot about Elasticsearch and its friend Kibana, using them as tools to handle logs, and also as a backend for a search component at my previous job.
So now I know Logstash isn’t needed for reporting test results; posting straight into Elasticsearch is easier and gives you better control, and ES does most things “automagically” nowadays anyhow.
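As a rough sketch of the "post straight into Elasticsearch" approach -- the index name, field names and the local ES URL here are all assumptions made for illustration, not the author's actual setup:

    import datetime
    import requests   # third-party HTTP client

    # A hypothetical test result; in practice this comes from the test run itself.
    doc = {
        "test_name": "test_login",
        "outcome": "passed",
        "duration_sec": 1.42,
        "timestamp": datetime.datetime.utcnow().isoformat(),
    }

    # Index the document into a hypothetical "test-results" index on a local node.
    resp = requests.post("http://localhost:9200/test-results/_doc",
                         json=doc, timeout=10)
    resp.raise_for_status()
    print(resp.json()["_id"])   # Elasticsearch assigns and returns a document id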
Keeping your company's data safe can be tricky when your competitors are begging you to put all your conversations, projects, and hard work right into the palms of their hands.
To make sure its competitors aren't able to look behind its tightly drawn curtains, Microsoft has a list of online services that it forbids its workforce to use, according to a report from GeekWire. They're familiar names for most modern professionals: Slack, Google Docs, and Amazon Web Services (among others).
Despite the popularity of some of these services that allow for easy communication between employees and data storing and sharing, Microsoft wants to make sure everybody is keeping all their information in-house with its own programs. Actually, not even all of its own programs are safe, as the Microsoft-owned GitHub is also off limits.
Tell us about your summer project by taking our poll. Plus, read what our writers are working on.
Last year, I wrote about some of the aspirations which motivated my move from Mozilla Research to the CloudOps team. At the recent Mozilla All Hands in Whistler, I had the “how’s the new team going?” conversation with many old and new friends, and that repetition helped me reify some ideas about what I really meant by “I’d like better mentorship”.
Asteroid Day is a global awareness campaign where people from around the world come together to learn about asteroids, the impact hazard they may pose, and what we can do to protect our planet, families, communities and future generations from future asteroid impacts. Asteroid Day takes place on June 30, the anniversary of the largest impact in recent history, the 1908 Tunguska event in Siberia. That asteroid decimated about 800 square miles (to put that in perspective, greater London is about 600 square miles). It’s estimated that a Tunguska-level “city-killer” asteroid hits the Earth every 500 years. So, while there is nothing to lose sleep over, it’s imperative that we are aware and have a plan.
This flaw is known as CVE-2019-5443.
If you downloaded and installed a curl executable for Windows from the curl project before June 21st 2019, go get an updated one. Now.
In addition to disabling root password-based SSH log-ins by default, another change being made to Fedora 31 in the name of greater security is adding some additional GRUB2 boot-loader modules to be built-in for their EFI boot-loader.
GRUB2 security modules for verification, Cryptodisk, and LUKS will now be part of the default GRUB2 EFI build. They are now built in, since those using the likes of UEFI SecureBoot aren't able to dynamically load these modules due to restrictions in place under SecureBoot. Until now, using SecureBoot has meant users couldn't enjoy encryption of the boot partition or the "verify" module, which ensures better integrity of the early boot-loader code.
Fedora 31 will harden up its default configuration by finally disabling password-based OpenSSH root log-ins, matching the upstream default of the past four years and behavior generally enforced by other Linux distributions.
The default OpenSSH daemon configuration file will now respect upstream's default of prohibiting passwords for root log-ins. Those wishing to restore the old behavior of allowing root log-ins with a password can adjust their SSHD configuration file with the PermitRootLogin option, but users are encouraged to instead use public-key authentication for root log-ins, which is more secure and will still be permitted by default.
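For reference, the directive in question looks roughly like this in sshd_config (the value shown is the upstream default; the exact comments Fedora ships may differ):

    # /etc/ssh/sshd_config
    # Upstream default since OpenSSH 7.0: root may log in with a key, not a password.
    PermitRootLogin prohibit-password

    # Restoring the old Fedora behaviour would mean explicitly setting:
    #PermitRootLogin yes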
Picked up by Gizmodo, acclaimed Californian security company SafeBreach has revealed that software pre-installed on PCs has left “millions” of users exposed to hackers. Moreover, that estimate is conservative with the number realistically set to be hundreds of millions.
The flaw lies in PC-Doctor Toolbox, systems analysis software which is rebadged and pre-installed on PCs made by some of the world’s biggest computer retailers, including Dell, its Alienware gaming brand, Staples and Corsair. Dell alone shipped almost 60M PCs last year and the company states PC-Doctor Toolbox (which it rebrands as part of ‘SupportAssist’) was pre-installed on “most” of them.
What SafeBreach has discovered is a high-severity flaw which allows attackers to swap out harmless DLL files loaded during Toolbox diagnostic scans with DLLs containing a malicious payload. The injection of this code impacts both Windows 10 business and home PCs and enables hackers to gain complete control of your computer.
What makes it so dangerous is PC-makers give Toolbox high-permission level access to all your computer’s hardware and software so it can be monitored. The software can even give itself new, higher permission levels as it deems necessary. So once malicious code is injected via Toolbox, it can do just about anything to your PC.
SafeBreach Labs said it targeted SupportAssist, software pre-installed on most Dell PCs designed to check the health of the system’s hardware, based on the assumption that “such a critical service would have high permission level access to the PC hardware as well as the capability to induce privilege escalation.”
What the researchers found is that the application loads DLL files from a folder accessible to users, meaning the files can be replaced and used to load and execute a malicious payload.
There are concerns the flaw may affect non-Dell PCs, as well.
The affected module within SupportAssist is a version of PC-Doctor Toolbox found in a number of other applications, including: Corsair ONE Diagnostics, Corsair Diagnostics, Staples EasyTech Diagnostics, Tobii I-Series Diagnostic Tool, and Tobii Dynavox Diagnostic Tool.
The most effective way to prevent DLL hijacking is to quickly apply patches from the vendor. To fix this bug, either allow automatic updates to do their job, or download the latest version of Dell SupportAssist for Business PCs (x86 or x64) or Home PCs (here).
You can read a full version of the SafeBreach Labs report here.
On June 17th, researchers at Netflix identified several TCP networking vulnerabilities in the FreeBSD and Linux kernels.
This paper addresses the privacy implications of two new Domain Name System (DNS) encryption protocols: DNS-over-TLS (DoT) and DNS-over-HTTPS (DoH). Each of these protocols provides a means to secure the transfer of data during Internet domain name lookup, and they prevent monitoring and abuse of user data in this process.
DoT and DoH provide valuable new protection for users online. They add protection to one of the last remaining unencrypted ‘core’ technologies of the modern Internet, strengthen resistance to censorship and can be coupled with additional protections to provide full user anonymity.
Whilst DoT and DoH appear to be a win for Internet users, they raise issues for network operators concerned with Internet security and operational efficiency. DoH in particular makes it extremely difficult for network operators to implement domain-specific filters or blocks, which may have a negative impact on UK government strategies for the Internet which rely on these. We hope that a shift to encrypted DNS will lead to decreased reliance on network-level filtering for censorship.
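To make the mechanism concrete, here is a small sketch of a DoH lookup using Cloudflare's public JSON endpoint (the URL, parameters and response fields follow Cloudflare's published API and are an illustration chosen here, not something taken from the paper):

    import requests

    # Resolve an A record over HTTPS instead of plain-text UDP port 53.
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "example.com", "type": "A"},
        headers={"accept": "application/dns-json"},
        timeout=10,
    )
    resp.raise_for_status()
    for answer in resp.json().get("Answer", []):
        print(answer["name"], answer["data"])   # name lookups now travel encrypted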
OpenSSH, a widely used suite of programs for secure (SSH protocol-based) remote login, has been equipped with protection against side-channel attacks that could allow attackers to extract private keys from memory.
Patching can be manually intensive and time-consuming, requiring large amounts of coordination and processes. Tony Green gives the best tips.
When the Meltdown and Spectre attacks were published at the beginning of January 2018, several mitigations were planned and implemented for Spectre Variant 2.
Red Hat provides the Go programming language to Red Hat Enterprise Linux customers via the go-toolset package. If this package is new to you, and you want to learn more, check out some of the previous articles that have been written for some background.
The go-toolset package is currently shipping Go version 1.11.x, with Red Hat planning to ship 1.12.x in Fall 2019. Currently, the go-toolset package only provides the Go toolchain (e.g., the compiler and associated tools like gofmt); however, we are looking into adding other tools to provide a more complete and full-featured Go development environment.
In this article, I will talk about some of the improvements, changes, and exciting new features for go-toolset that we have been working on. These changes bring many upstream improvements and CVE fixes, as well as new features that we have been developing internally alongside upstream.
Password security involves a broad set of practices, and not all of them are appropriate or possible for everyone. Therefore, the best strategy is to develop a threat model by thinking through your most significant risks—who and what you are protecting against—then model your security approach on the activities that are most effective against those specific threats. The Electronic Frontier Foundation (EFF) has a great series on threat modeling that I encourage everyone to read.
In my threat model, I am very concerned about the security of my passwords against (among other things) dictionary attacks, in which an attacker uses a list of likely or known passwords to try to break into a system. One way to stop dictionary attacks is to have your service provider rate-limit or deny login attempts after a certain number of failures. Another way is not to use passwords in the "known passwords" dataset.
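A low-tech way to act on that last point is to check a candidate password against a local copy of a breached-password list before adopting it -- a minimal sketch, where the wordlist path is hypothetical:

    def in_known_passwords(candidate, wordlist_path="known-passwords.txt"):
        # Return True if the candidate appears in a newline-separated wordlist,
        # e.g. a locally downloaded breached-password dataset (hypothetical file).
        with open(wordlist_path, encoding="utf-8", errors="ignore") as wordlist:
            return any(candidate == line.rstrip("\n") for line in wordlist)

    if in_known_passwords("hunter2"):
        print("Pick something else -- this password is in the known-passwords dataset.")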
Last week, Damien Miller, a Google security researcher, and one of the popular OpenSSH and OpenBSD developers announced an update to the existing OpenSSH code that can help protect against the side-channel attacks that leak sensitive data from computer’s memory. This protection, Miller says, will protect the private keys residing in the RAM against Spectre, Meltdown, Rowhammer, and the latest RAMBleed attack.
SSH private keys can be used by malicious threat actors to connect to remote servers without the need of a password. According to CSO, “The approach used by OpenSSH could be copied by other software projects to protect their own keys and secrets in memory”.
However, if the attacker is successful in extracting the data from a computer or server’s RAM, they will only obtain an encrypted version of an SSH private key, rather than the cleartext version.
A new cryptominer, dubbed Bird Miner, has been spotted in the wild targeting Mac devices and running via Linux emulation under the guise of a production software tool.
Your Linux boxes may be vulnerable to TCP networking vulnerabilities that can lead to a remote DoS attack.
Was the U.S. spy drone that Iran shot down on Thursday, June 20th, in international airspace, or was it over Iranian airspace as Iran insists? Flight coordinates in strategic locations are easy to access, but that misses the point altogether. Here is the point:
The U.S. has acted belligerently and violated the most basic tenets of the Geneva Conventions and the UN Charter by abrogating its responsibilities under the Iran nuclear deal, then punishing the Iranian people collectively with economic sanctions that affect their ability to live their lives in a nation not at war. The U.S. has sent masses of troops to the Middle East to ready for war against Iran and has made charges that are unproven that Iran attacked oil tankers. The U.S. has waged a propaganda war against Iran through the words of both Pompeo and Bolton. Donald Trump has added to the verbal assault by his mercurial threats of war (illegal under the UN Charter) and the reality of the economic sanctions that his administration has put in place against Iran.
Further, in order for war to be declared, there must be an actual threat against a nation and that threat has not been substantiated. That premise, the basis for accepted rules or laws of war, is thousands of years old. The people and government of the U.S. do not stand in harm’s way by the alleged actions of Iran against the oil tankers and spy drone and no ally of the U.S. suffers any of those same consequences. Neither the governments of Great Britain, France, Saudi Arabia, and others are in any danger from the actions that Iran has taken in the face of grave threats of force and real economic threats from the U.S. and others in the West.
Every now and then, I feel myself compelled to write something I know that the majority of my readers will not agree with. That is because I do not go along with left-wing groupthink any more than I go along with the line of the Establishment. I do not subscribe to a set of opinions, but attempt to consider every question afresh.
Wikileaks is much criticised for having published the leaked Hillary and Podesta emails, thus having “caused” Trump. At its extreme, this involves the entire evidence free “Russiagate” paranoia. I find myself criticised for my association with Julian Assange on these same grounds.
The major answer to this is that it would have been morally wrong to conceal the evidence of Hillary’s wrongdoing, her associations with the Saudis and the Bankers, and particularly the rigging of the primary elections against Bernie Sanders by Hillary and the DNC. If I was accused of association with concealing all that, I would not be able to defend Wikileaks. Another part of the answer is that I am not sure any of this much affected the actual votes cast. But the most important bit of the answer is that I am not sorry that Clinton lost and Trump won.
I say that with apologies to all my American friends who are suffering from Trump’s harsh domestic policies and his version of the “hostile climate for immigrants” which we have long suffered in the UK. I do not underestimate the harm done by Trump’s penchant for trade wars, or his blindly pro-Israel policies and gestures, nor the continuation of the Saudi anti-Shia alliance.
“From Syria to Yemen in the Middle East, from Libya to Somalia in Africa, from Afghanistan to Pakistan in South Asia, all forming a U.S. air curtain descending on a huge swath of the planet with the declared goal of fighting terrorism. Its main method is summed up in surveillance, bombardments and more constant bombardments. Its political benefit is to minimize the number of “United States boots on the ground” and, therefore, American casualties in the never-ending war on terrorism, as well as public protests over Washington’s many conflicts. It’s economic benefit: plenty of high-performance business for arms manufacturers for whom the president can now declare a national security emergency whenever he wants and sell his warplanes and ammunition to preferred dictatorships in the Middle East (no congressional approval required). Its reality for several foreign peoples: a sustained diet of bombs and missiles “Made in the USA” that explode here, there and everywhere.
This is how William J. Astore, a retired US Air Force lieutenant colonel and now a history professor, interprets the cult of bombing on a global scale that he views in his country, as well as the fact that U.S. wars are being fought more and more from the air, not on the ground, a reality that makes the prospect of ending them increasingly daunting, and finally asks: What is driving this process?
“For many of America’s decision-makers,” Astore says, “air power has clearly become a sort of abstraction. “After all, with the exception of the September 11 [2001] attacks by four hijacked commercial airliners, Americans have not been the target of such attacks since World War II. On the battlefields of Washington, the Greater Middle East and North Africa, air power is almost literally always a one-way street. There are no enemy air forces or significant air defenses. The skies are the exclusive property of the U.S. Air Force and its allies, so we are no longer talking about “war” in the normal sense. No wonder Washington’s politicians and military see it as our strength, our asymmetric advantage, our way of settling accounts with wrongdoers, real and imaginary.
When Donald Trump entered the Oval Office in January 2017, Americans took to the streets all across the country to protest their instantly endangered rights. Conspicuously absent from the newfound civic engagement, despite more than a decade and a half of this country’s fruitless, destructive wars across the Greater Middle East and northern Africa, was antiwar sentiment, much less an actual movement.
Those like me working against America’s seemingly endless wars wondered why the subject merited so little discussion, attention, or protest. Was it because the still-spreading war on terror remained shrouded in government secrecy? Was the lack of media coverage about what America was doing overseas to blame? Or was it simply that most Americans didn’t care about what was happening past the water’s edge? If you had asked me two years ago, I would have chosen “all of the above.” Now, I’m not so sure.
After the enormous demonstrations against the invasion of Iraq in 2003, the antiwar movement disappeared almost as suddenly as it began, with some even openly declaring it dead. Critics noted the long-term absence of significant protests against those wars, a lack of political will in Congress to deal with them, and ultimately, apathy on matters of war and peace when compared to issues like health care, gun control, or recently even climate change.
The pessimists have been right to point out that none of the plethora of marches on Washington since Donald Trump was elected have had even a secondary focus on America’s fruitless wars. They’re certainly right to question why Congress, with the constitutional duty to declare war, has until recently allowed both presidents Barack Obama and Donald Trump to wage war as they wished without even consulting them. They’re right to feel nervous when a national poll shows that more Americans think we’re fighting a war in Iran (we’re not) than a war in Somalia (we are).
The official US narrative on Iran is that it is an escalating threat to “peace and security” in the Middle East and must be stopped. Step by step, with Mike Pompeo and John Bolton—two war maniacs—taking the lead, the Trump administration has sought to destabilize Iran with sanctions, if possible bring about regime change, and if necessary provoke actions by Iran that will provide a pretext for war. If this sounds similar to the nonexistent Gulf of Tonkin “incident” in 1964 and the false pretenses behind the post-9/11 invasion of Afghanistan and Iraq, it should. Only this time around is even more dangerous and more preposterous.
Journalists and Congress members have been pestering Trump and his aides with questions about their determination to go to war with Iran. Trump, typically, tells reporters to wait and see, stymieing them. They should be asking different questions, such as: What threat does Iran pose to US interests? Why shouldn’t Iran’s actions be considered responses to the US policy of “maximum pressure”? The answers to these two questions are clear: Iran is doing nothing that constitutes a new threat to US or any other country’s interests, and Iran’s latest actions—even if Tehran is responsible for the attacks on oil tankers in the Gulf and the downing of a US drone—are best understood as responses to US provocations.
What I believe we are now witnessing is the result of the ascendance of the hardliners on both sides. US policy since the appointments of Pompeo and Bolton and withdrawal from the nuclear deal has energized their counterparts in Iran—the Revolutionary Guards, certain military leaders, and others long opposed to the nuclear deal and now able to show that the Americans are completely untrustworthy.
Ye Gods, how brave was our response to the outrageous death-in-a-cage of Mohamed Morsi. It is perhaps a little tiresome to repeat all the words of regret and mourning, of revulsion and horror, of eardrum-busting condemnation pouring forth about the death of Egypt’s only elected president in his Cairo courtroom this week. From Downing Street and from the White House, from the German Chancellery to the Elysee – and let us not forget the Berlaymont – our statesmen and women did us proud. Wearying it would be indeed to dwell upon their remorse and protests at Morsi’s death.
For it was absolutely non-existent: zilch; silence; not a mutter; not a bird’s twitter – or a mad president’s Twitter, for that matter – or even the most casual, offhand word of regret. Those who claim to represent us were mute, speechless, as sound-proofed as Morsi was in his courtroom cage and as silent as he is now in his Cairo grave.
It was as if Morsi never lived, as if his few months in power never existed – which is pretty much what Abdul Fattah al-Sisi, his nemesis and ex-gaoler, wants the history books to say.
So three cheers again for our parliamentary democracies, which always speak with one voice about tyranny. Save for the old UN donkey and a few well-known bastions of freedom – Turkey, Malaysia, Qatar, Hamas, the Muslim Brotherhood-in-exile and all the usual suspects – Morsi’s memory and his final moments were as if they had never been. Crispin Blunt alone has tried to keep Britain’s conscience alive. So has brave little Tunisia. Much good will it do.
Like everyone else who can say “Gulf of Tonkin,” “Remember the Maine,” and “Iraqi WMDs,” my instinctive reaction to the attacks on two tankers, a month after explosions hit four oil tankers in the UAE port of Fujairah, was: “Oh, come on now!” We know the United States, egged on by Israel and Saudi Arabia, has been itching to launch some kind of military attack on Iran, and we are positively jaded by the formula that’s always used to produce a justification for such aggression.
It seemed beyond credibility that Iran would attack a Japanese tanker, the Kokuka Courageous, at the moment the Prime Minister of Japan was sitting down with Ayatollah Khamenei in Tehran. After all, Iran is eager to keep its oil exports flowing, so it would hardly want to so flagrantly insult one of its top oil customers.
Nor did it seem to make sense that Iran would target a Norwegian vessel, Front Altair. That tanker is owned by the shipping company Frontline, which belongs to Norway’s richest man (before he moved to Cyprus), John Fredriksen. Fredriksen made his fortune moving Iranian oil during the Iran-Iraq war, where his tankers came under constant fire from Iraq, and were hit by missiles three times. He became known as “the Ayatollah’s lifeline.” Furthermore, as the Wall Street Journal reports, Fredriksen’s Frontline company has continued to help Iran move its oil in a way that evades sanctions. A friendlier resource Iran has not. This is the guy Iran chose to target, in another gratuitous insult?
Last April marked a special anniversary for Cuba but one that we should all reflect upon given the current events in Latin America, particularly in Venezuela. In mid-April 1961 three cities in Cuba were bombed at the same time from the air. Immediately the US government claimed that Cuban defectors carried out the action with Cuban planes and pilots. The media quickly “confirmed the actions”.
These were false flag attacks organized by the US.
In a large mass rally in Havana the next day Fidel Castro pronounced a very important speech where he called John Kennedy and the media liars. That was the speech where Fidel declared the “socialist character” of the Cuban revolution. [1]
US interventions, military and parliamentary coups have been relentless before and since in Latin America. Often they are preceded by outright disinformation in order to misrepresent events and demonize the target government as a prelude to legitimize a more aggressive intervention.
Fast-forward to the 21st Century, pan quickly over the Middle East, and zoom into our Western hemisphere today and you will see Venezuela. Not the country that most Venezuelans want you to see, but the country that the US government and its allies – Canada at the forefront – want you to see. Reportedly, one that needs a regime change.
The level of disinformation about Venezuela has been widely exposed by political analysts like Dan Kovalik [2] and media groups like Fairness and Accuracy in Reporting (FAIR), which indicated that corporate media in the United States has undertaken “a full-scale marketing campaign for regime change in Venezuela”. [3]
In an article last April Time magazine said, “Venezuelans are starving for information”. To which VenezuelaAnalysis.com responded that, “Creative reporting about Venezuela is ‘the world’s most lucrative fictional genre’ “, and it goes on to show how there are three private TV channels, a satellite provider that covers FOX News, CNN and BBC. Anti-government print media is also widely accessible as well as online outlets. [4]
Resolving the climate crisis demands radical political change, a British author argues: the end of free market capitalism.
You could turn the entire United Kingdom into a giant wind farm and it still wouldn’t generate all of the UK’s current energy demand. That is because only 2% of the solar energy that slams into and powers the whole planet on a daily basis is converted into wind, and most of that is either high in the jet stream or far out to sea.
Hydropower could in theory supply most of or perhaps even all the energy needs of 7 billion humans, but only if every drop that falls as rain was saved to power the most perfectly efficient turbines.
And that too is wildly unrealistic, says Mike Berners-Lee in his thoughtful and stimulating new paperback There Is No Planet B. He adds: “Thank goodness, as it would mean totally doing away with mountain streams and even, if you really think about it, hillsides.”
The majority of Americans say fossil fuel companies should pay for damage caused by climate change, according to a recent poll released by Yale University on Wednesday.
Researchers asked 5,131 Americans how much they think global warming is harming their local communities, who they think should be responsible for paying for the damages, and whether they support lawsuits to hold fossil fuel companies accountable for those costs.
By now it’s no secret that plastic waste in our oceans is a global epidemic. When some of it washes ashore — plastic bottles, plastic bags, food wrappers — we get a stark reminder. And lately one part of this problem has been most glaring to volunteers who comb beaches picking up trash: cigarette butts.
Last year the nonprofit Ocean Conservancy reported that cigarette butts, which contain plastic and toxic chemicals, were the most-littered item at their global beach cleanups.
Trillions of butts are tossed each year. So what’s being done about it?
Mountain biking is a significant threat to our wildlands—not only in designated preserves like national parks, wilderness areas, and the like, but also in Wilderness Study Areas (WSAs) and roadless lands that may potentially be given Congressional protection under the 1964 Wilderness Act.
Wilderness designation is one of the best ways to protect biodiversity, watersheds, wildlife habitat, and natural ecological processes. And in this day of climate change, protecting forests, shrublands, deserts, and grasslands in our national wilderness system is also one of the best ways to store carbon.
Lest we forget, there is a finite amount of public land that can qualify for wilderness designation. If we must err on one side or the other, we ought to err on the side of protecting our wildlands heritage.
It is important to note that recreation is not the same as conservation. In any dispute about whether to increase recreational use/access or place limits on recreation, protection of wildlife and wildlands should always receive top priority.
One of the philosophical values of wilderness is the idea of restraint. When we designate a wilderness area, we as a society are asserting that nature and natural processes have priority, and we accept limits on ourselves. It is a lesson that is increasingly important for all to learn in an age of climate change, population growth, biodiversity loss, and other major environmental issues.
In a world filled with such vexing and overwhelming issues, worrying about bicycles on trails can seem trivial and inconsequential. But it’s important to note that bicycles and other mechanical conveyances, and the lack of commitment to personal restraint that it can foster is indicative of the broader challenges facing society. Namely, how do we live on this planet without destroying it? Self-control and restraint will be critical to our future.
We all have egos that we erect to circumscribe and reify the “self,” or so I have been told. Some scholars have gone so far as to argue that our lives consist of little more than obsessively hitting the world’s feeder bar to obtain emotional and material rewards for the ego, not unlike rats in a gigantic elaborate maze. Even so, this fundamental premise obscures important distinctions between motivations that literally arise from different portions of our brains, as well as from our narratives about who we are, what’s important, and why. And such distinctions matter when it comes to the consequences of our actions for both ourselves and others.
[...]
For the last four-plus years the vast 3.1-million acre Custer-Gallatin National Forest of Montana has been in the process of revising its Forest Plan. This process will eventually produce an authoritative Record of Decision that determines what people will or will not be able to do with specific parcels of the Forest; essentially, a land use plan with legal teeth. In certain places people will be able to ride mountain bikes, in other places, they won’t; likewise for Off-Highway-Vehicles, or OHVs. And so on, for timber harvest, livestock grazing, camping, hang-gliding, or generally running amuck.
These planning processes are invariably contentious as stakeholders strive to have their interests codified in the Forest Plan. Some of this struggle plays out in public, in the press or through formal processes eliciting public input. Some plays out in a more overtly political way, often behind closed doors or in the muddy waters of far-off Washington, DC. In the aftermath, litigation is not uncommon. The Forest Service is invariably caught in the middle in ways that deplete morale, amplify anxiety, and jeopardize careers—at least for the proximally-involved personnel.
The official government pickin’ and choosin’ of the winners and losers of climate change already has begun.
If you’re still in the tribe that thinks climate change is maybe not a thing, or a thing that might happen a hundred years from now, perhaps you might like to consider this: everyone else is already moving beyond that debate and starting to pick and choose who’s gonna be saved and who’s not, who’s gonna be a winner and who’s gonna be a loser. If you want to have anything to say about which of those two camps you get slotted into, maybe it’s time to reconsider your position.
The New York Times reported (June 19) that two federal government agencies are now developing rules for giving out billions of dollars in grants to help cities and states defend themselves from the destructive effects of climate change.
Since we live in nonsense times, of course this is happening while the Republican President and his minions are denying that climate change is even happening, and the leadership of the Democratic Party is refusing to allow its leading presidential candidates to engage in public debate about it so it won’t become a major issue in the 2020 campaign. Tra lala lala lala
And now the bad news. The experts say that there won’t possibly be enough money to defend all the places that need defending. That’s where the pickin’ and choosin’ comes in.
A theme often repeated in the media is that Japan is suffering terribly because of its low birth rates and shrinking population. This has meant slow growth, labor shortages, and an enormous government debt.
Like many items that are now popular wisdom, the story is pretty much nonsense. Let’s start at the most basic measure, per capita GDP growth. Yeah, I said per capita GDP growth because insofar as we care about growth it is on a per person basis, not total growth. After all, Bangladesh has a GDP that is more than twice as large as Denmark’s, but would anyone in their right mind say that the people of Bangladesh enjoy a higher standard of living? (Denmark’s GDP is more than twelve times as high on a per capita basis.)
On a per capita basis, Japan’s economy has grown at an average annual rate of 1.4 percent since the collapse of its stock and real estate bubbles in 1990. That’s somewhat less than the 2.3 percent rate of the U.S. economy, but hardly seems like a disaster. By comparison, per capita growth has averaged just 0.8 percent annually since the collapse of the housing bubble in 2007 in the United States.
But per capita income is just the beginning of any story of comparative well-being. There are many other factors that are as important in determining people’s living standards. To take an obvious one that gets far too little attention, the length of the average work year has declined far more over this period in Japan than in the United States.
"This is truly a revolutionary proposal," Sanders told the Washington Post, which reported the details of the Vermont senator's bill late Sunday. "In a generation hard hit by the Wall Street crash of 2008, it forgives all student debt and ends the absurdity of sentencing an entire generation to a lifetime of debt for the 'crime' of getting a college education."
Sanders, a 2020 Democratic presidential candidate, will introduce his proposal alongside Reps. Ilhan Omar (D-Minn.) and Pramila Jayapal (D-Wash.), who plan to unveil the Student Debt Cancellation Act and the College for All Act in the House.
"I am one of the 45 million people with student debt—45 million people who are held back from pursuing their dreams because of the student debt crisis," Omar tweeted Sunday. "It's why I'm proud to stand with Sen. Sanders and Rep. Jayapal to pass college for all and cancel student debt."
The three lawmakers will introduce their proposals in a press conference Monday morning outside the U.S. Capitol building.
In mid-June, Facebook — in cahoots with 28 partners in the financial and tech sectors — announced plans to introduce Libra, a blockchain-based virtual currency.
The world’s governments and central banks reacted quickly with calls for investigation and regulation. Their concerns are quite understandable, but unfortunately already addressed in Libra’s planned structure.
The problem for governments and central banks:
A new currency with no built-in respect for political borders, and with a preexisting global user base of 2.4 billion Facebook users in nearly every country on Earth, could seriously disrupt the control those institutions exercise over our finances and our lives.
The accommodation Facebook is already making to those concerns:
Libra is envisaged as a “stablecoin,” backed by the currencies and debt instruments of those governments and central banks themselves and administered through a “permissioned” blockchain ledger by equally centralized institutions (Facebook itself, Visa, Mastercard, et al.).
To put it a different way, Libra will not be a true cryptocurrency like Bitcoin or Ether. Neither its creation nor its transactions will be decentralized and distributed, let alone easily made anonymous. A “blockchain” is just a particular kind of ledger for keeping track of transactions. It does not, in and of itself, a cryptocurrency make.
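To underline that point, a toy hash-chained ledger takes only a few lines -- purely an illustration of the data structure, nothing resembling Libra's or Bitcoin's actual implementations:

    import hashlib
    import json

    def add_block(chain, transactions):
        # Each block stores some transactions plus the hash of the previous block;
        # that back-reference is what makes this ledger a "chain".
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        block = {"prev_hash": prev_hash, "transactions": transactions}
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        chain.append(block)

    ledger = []
    add_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
    add_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
    print(len(ledger), ledger[-1]["hash"][:16])   # 2 blocks, tamper-evident hashes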
In simple terms, Libra is just a new brand for old products: Digital gift cards and pre-paid debit cards.
The hulk of Grenfell Tower, its charred sides covered by sheets of white plastic, stands as a mute and ominous testament to the disposability of the poor and the primacy of corporate profit. On June 14, 2017, a fire leaped up the sides of the 24-story building, clad in highly flammable siding, leaving 72 dead and 70 injured. Almost 100 families were left homeless. It was Britain’s worst residential fire since World War II. Those burned to death, including children, would not have died if builders had used costlier cladding that was incombustible and if the British government had protected the public from corporate predators. Grenfell is the face of the new order. It is an order in which you and I do not count.
I walked the streets around the tower on the two-year anniversary of the fire with Kareem Dennis, better known by his rapper name, Lowkey (watch his music video about Grenfell)—and Karim Mussilhy, who lost his uncle, Hesham Rahman, in the blaze and who has been abruptly terminated from two jobs since the disaster apparently because of his fierce public denunciations of officials responsible for the deaths. Families, some wearing T-shirts with photos of loved ones who died in the conflagration, solemnly entered a building for a private memorial. A stage was being prepared for a rally that night a block from the tower. It would draw over 10,000 people. Flowers and balloons lay at the foot of the wall that surrounds the tower. Handwritten messages of pain, loss and love, plus photos of the dead, covered the wall. The demolition of Grenfell Tower will take 18 more months as each floor is methodically dismantled.
Occasionally I am subject to the hysterical stupidity of cable news, as when, for example, my father and I spend time together. He is 80, and likes to watch drivel such as CNN, Bloomberg Business and the like. It is always a stunning experience to see the performing monkeys who masquerade on camera as journalists, since I get to see it so rarely.
First off, on this miserable morning of June 20, is the perfumed cheerleading squad at Bloomberg, who are prattling on and on about an IPO by a tech company called Slack. “We are on tenterhooks,” the little minstrels tell us. Me too – I’m waiting for the moment when someone blows up Bloomberg Business with a Molotov cocktail. But alas, it doesn’t happen, and onward they race, in rapt attention to issues of total insignificance (this Slack being a company that…well, I don’t know what it does, but it probably doesn’t feed, clothe, or house the poor and the needy, or protect wild creatures and wild places, or educate our children – it probably does something totally useless, which is the reason the IPO is doing so well and the executives will be multimillionaires by day’s end).
Click – now we are at CNN, where the bloviators are delving into the question of whether we should attack Iran over its downing of one of the United States’ pieces of flying junk (a drone, apparently – or so the government and media inform us). The matter at hand – the only matter for the folks at CNN – is one of expedience, the utilitarian value of a military response, the “cost” to the United States in treasure and manpower, the “value” of the diplomatic and political “gain” of such an attack. We are talking about war here; we are talking about murder, though nothing of that is said.
There had apparently been reports of a loud argument, banging and yelling etc. The police talked to both the people present and then left. No one was detained, charged, cautioned. Legally speaking, there was no incident. It would not warrant a mention most of the time.
However, the same neighbours who called the police also recorded the argument and sent the recording to The Guardian.
The Guardian, who have all the class of the Daily Mail, but none of the honesty, duly published it. Red banner. Shrieking tabloid headline. At least The Sun admits what it is.
And so we have Borisgate, or rather #BorisGate.
Boris refused to answer questions about the incident the next day, and Jeremy Hunt has been piling on the pressure to produce “an explanation”. Apparently “we had a row, it’s none of your business” isn’t enough of an explanation.
Hunt’s obsession with this topic is understandable: he’s massively trailing in the leadership contest and needs all the ammunition he can get his hands on.
If Donald Trump actually follows through on his recently tweeted promise that Immigrations and Customs Enforcement (ICE) “will begin deporting the millions of illegal aliens who have illicitly found their way into the United States … as fast as they come in,” what will you do?
According to the faith I was raised with I hope I would act according to the lessons found in the parable of the Good Samaritan. In the Gospel of Luke, Jesus told of a traveler who was beaten, stripped, and left naked waiting for death. People who claimed to be great believers avoided this victim, but it was the Samaritan who stopped and freely rendered aid—selfless altruism. Charity, compassion, and forgiveness are the highest values I was raised with. I do my best to dedicate myself to their service, and I’m sure I’m not the only one left in a bind: what will I do?
Recent stories tell of modern-day Samaritans rendering aid to travelers (some seeking asylum, some trying to immigrate legally, some illegally…) at great risk. The case of Scott Warren in Arizona presents offering humanitarian aid as a crime punishable by up to 20 years in prison; but there is no verdict, the jury is hung. His specific crimes are putting out food and water, and pointing directions (actions consistent with No More Deaths, a part of the Unitarian Universalist Church of Tucson), which appear to reflect values just like the ones I was raised with. Do I have the strength to follow my religious convictions, even in the face of criminal prosecution like Warren has?
Our current context should make us struggle no matter how much we think we’ve figured out. The case against Food Not Bombs taught us–after some alarming incidents to the contrary–that feeding the homeless is an act of protected expression, but with migrants the acts of feeding and pointing direction could invoke serious punishment. Do you love your Mexican or Central American neighbor enough to risk prosecution?
With regard to the PTAB guidance, the court noted that such guidance was “non binding” [upon whom?] and that the Board had allowed several petition corrections without changing the filing date.
Why Do It? – Privileged: The gaping hole in the analysis is any discussion of why MSD did not name its parent company who was being sued for infringement as a real party in interest.
In the IPR, Merck’s attorneys (who represented both companies) indicated that they had intentionally omitted MCI as a real-party-in-interest, but did not explain their actions other than: “privileged legal strategy immune from discovery” (although this is in quotes, it is my paraphrasing). The key patent-related reason here that comes to mind is that – at the time – Merck thought it might get around IPR estoppel by having its subsidiary file the petition. It was not until more than a year later that Merck agreed that both companies would be bound by any resulting estoppel.
Appealable: The patent challenger also argued that the issue here is not appealable because it is tied to institution. On appeal, the Federal Circuit ducked that issue and instead held that the case is affirmed whether or not it is appealable. (Interesting jurisprudence dance on this one).
Power Integrations owns U.S. Patent Nos. 6,212,079 and 8,115,457, among others. Fairchild Semiconductor filed two ex parte reexamination proceedings challenging claims of the '079 patent in 2005 and 2006; the U.S. Patent and Trademark Office confirmed the patentability of the claims in September 2009. Two months later, in November 2009, Power Integrations sued Fairchild for infringement of the '079 patent and the '457 patent (and one other patent not at issue here). In March 2014, a jury found that Fairchild infringed claims of the '079 patent (but no claims of the '457 patent), and that the infringed claims were not invalid. The jury awarded $105 million in damages. Fairchild moved post-trial for a new trial on damages, the District Court granted the motion, and the jury on re-trial awarded damages of $139.8 million.[1] Fairchild appealed on both liability and damages; the Federal Circuit affirmed the finding of infringement but vacated the damages award because the entire market value rule should not have been applied and remanded the case for further proceedings.
Just prior to the damages re-trial, in November 2015, Semiconductor Components (which did business as ON Semiconductor) agreed to merge with Fairchild. The Fairchild-ON merger did not close quickly. Instead, in March 2016, ON filed a petition for inter partes review of claims of the '079 patent (including all of the claims that had been found infringed by Fairchild). Five months later, ON filed an IPR petition related to the '457 patent. The Fairchild-ON merger then closed just four days before the IPR related to the '079 patent was instituted on September 23, 2016.
[...]
Notably, while the Federal Circuit panel (Chief Judge Prost and Judges Reyna and Stoll) held that privity can be determined at least as late as the time of institution, it left open the possibility that later events could be relevant to the determination of privity.[6] Of course, the reasoning of the opinion -- based on the express language of the statute -- would be contrary to an extension of the time bar to any time after institution of review. But given the extreme circumstances of the Power Integrations case, the Federal Circuit did not want to cut off the possibility that privity would not have applied if the Fairchild-ON merger had closed five days later. Thus, it is possible that post-institution facts could still be relevant to determining whether an IPR is time-barred, in addition to the certainty that all pre-institution circumstances are to be considered.