I still rate Ubuntu very highly, and I have great respect for Canonical. Over in the corporate world, no one comes close to the success Red Hat has had with promoting Linux as a serious enterprise infrastructure tool. You could make the same argument for Canonical, and its success with making Linux accessible for newcomers to the Linux desktop.
A lot of people who use Linux for the first time stick a toe in the water with Ubuntu. Once they’ve found their feet and gained a bit of experience, some people move on to other distributions. I’ve heard the same story many times, both in-person and online. People tell me they’re on a particular distribution—Fedora, Debian, you name it, I’ve heard it—but they started on Ubuntu. If their current distribution had been their first foray into Linux, they doubt they would have stuck with it. That’s a massively important role for Ubuntu to play.
It's called the Adder WS, and it can be configured with a ridiculous amount of storage, RAM, CPU and GPU processing power.
On the CPU side, System76 will load it up with either a six-core Intel Core i7-9750H (which is what drives my Oryx Pro laptop) or the eight-core i7-9980HK, which represents the best mobile CPU Intel has to offer at the moment.
Handling graphics is Nvidia's GeForce RTX 2070, which can kick out decent frames for AAA 4K gaming at lower quality settings, or 1080p gaming with all the dials maxed out. It's also a solid choice for content creators. Crucial to performance, of course, will be the laptop's thermal solution and cooling capabilities, which System76 says should enable the system's full potential.
The Adder WS can also handle up to 64GB of system memory and a spacious 8TB of total storage.
Some flagship devices like the Samsung Galaxy Book2 and Microsoft Surface Go come pre-installed with Windows 10 in S mode (formerly known as Windows 10 S). Windows 10 in S mode restricts app installation to the Microsoft Store, so users cannot download or install .exe apps.
Fortunately, Microsoft allows users to switch out of Windows 10 in S mode from the Microsoft Store, but users are reporting that this Store feature is broken and they cannot switch out of Windows 10 in S Mode.
Brooklyn, N.Y.-based Capsule8 today announced new "full endpoint detection and response (EDR)-like investigations functionality for cloud workloads"...
As you can see, processing a picture of this size, which contains only one short mathematical question, takes around 11 minutes. It is very likely that the time consumed for the entire process could be cut by 50 percent if the code were changed to send the text detection job to the cloud instead of running it natively on the Raspberry Pi 3, or if you used a Raspberry Pi 3 with Neural Compute Stick(s) to accelerate the inference. But this assumption still has to be proven :-).
In the beginning there were virtual machines running with 8 vCPUs and 60GB of RAM. They started to serve around 30 containers per VM. Later on we managed to squeeze around 50 containers per VM.
Initial orchestration was done with swarm, later on we moved to nomad. Access was initially fronted by nginx with consul-template generating the config. When it did not scale anymore nginx was replaced by Traefik. Service discovery is managed by consul. Log shipping was initially handled by logspout in a container, later on we switched to filebeat. Log transformation is handled by logstash. All of this is running on Debian GNU/Linux with docker-ce.
At some point it did not make sense anymore to use VMs. We've no state inside the containerized applications anyway. So we decided to move to dedicated hardware for our production setup. We settled with HPe DL360G10 with 24 physical cores and 128GB of RAM.
There are many reasons for IBM’s recent purchase of Red Hat, but one of them became apparent today: Big Blue has announced that it has packed more than 100 products across its software portfolio into containers, designed for Red Hat’s OpenShift.
Linux-based application containers package apps and all of their dependencies into individual virtual environments that can be easily moved between a variety of public and private clouds. This means that potentially IBM’s apps can run as easily on AWS, Azure or Alibaba as they would on the company’s own public infrastructure.
“IBM is unleashing its software from the data center to fuel the enterprise workload race to the cloud,” said Arvind Krishna, senior veep for cloud and cognitive stuff at IBM.
We're very excited to be once again attending, and sponsoring, Linux Developer Conference Brazil, taking place this weekend in São Paulo, Brazil! Already in its third year, Linux Developer Conference Brazil aims to take the Brazilian Linux development community to the international level. Whether you are just curious and want to understand the Linux ecosystem, or are someone seeking to contribute to FOSS projects, or even a seasoned collaborator, this conference is for you.
Collaborans will be giving three workshops and six presentations, and will also take part in, and moderate, a panel discussion. You can find the complete details below.
Greg Kroah-Hartman recently announced that Linux kernel 5.1 has reached end of life...
As part of its unwavering commitment to open source and open standards, Collabora is proud to be part of bringing the recently-released OpenXR 1.0 to life. We are pioneering the Monado open source runtime for OpenXR to ensure the future of XR is truly open and accessible to all hardware vendors. As the OpenXR specification editor, I am grateful for the diligent efforts of the working group, as well as the community feedback that shaped this release.
There have been a lot of changes since the last post about OpenXR and Monado. In the working group, we've brought forward the concerns of the open source and Linux communities. Together with the Monado community, we have worked to improve the loader and provided API layers in both cross-platform and Linux-specific ways. As specification editor, I developed or enhanced a variety of specification-related tooling to ensure consistency and high quality in the specification text and registry.
For example, xml_consistency uses specification-specific "business logic" to check the internal consistency of the XML registry. Among other things, it compares the return codes listed for a function with those inferred from parameter types, and raises an error if an expected code is missing or an existing code seems unnecessary. The comprehensive check_spec_links tool processes the AsciiDoctor source of the specification, ensuring that the spec-specific markup macros are used correctly, that all members and parameters are documented, that all entities referred to actually exist and are spelled correctly, and more.
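The source of those tools isn't shown here, but the core idea of the return-code check can be illustrated with a toy sketch. Everything below (the implied-code table, the type names used as triggers, and the function names) is invented for illustration; the real xml_consistency logic lives in the OpenXR registry scripts:

```python
# Toy sketch of a registry consistency check: compare the return codes a
# function declares against those implied by its parameter types.
# The inference table and all names below are invented for illustration.

# Parameter types that imply an error code must be declared.
IMPLIED_CODES = {
    "XrInstance": "XR_ERROR_HANDLE_INVALID",
    "XrSession": "XR_ERROR_HANDLE_INVALID",
}

def missing_codes(param_types, declared_codes):
    """Return codes implied by the parameters but absent from the declaration."""
    expected = {IMPLIED_CODES[t] for t in param_types if t in IMPLIED_CODES}
    return sorted(expected - set(declared_codes))

# A function taking an XrSession should declare XR_ERROR_HANDLE_INVALID:
print(missing_codes(["XrSession", "uint32_t"], ["XR_SUCCESS"]))
# → ['XR_ERROR_HANDLE_INVALID']

# A correctly declared function reports nothing:
print(missing_codes(["XrSession"], ["XR_SUCCESS", "XR_ERROR_HANDLE_INVALID"]))
# → []
```

The real tool does much more (unnecessary-code detection, registry-wide cross-references), but the pattern is the same: derive an expectation from one part of the registry and diff it against another.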
AMD has published their instruction set architecture documentation for their new RDNA 1.0 architecture found on their new Radeon RX 5700 series GPUs and other forthcoming products.
AMD quietly released their RDNA 1.0 ISA documentation on Thursday. The 240-page PDF details the RDNA shader ISA and is aimed at driver writers, game engine developers, and others wanting to know the fine details of the new RDNA instruction set.
While we've already seen the RADV Vulkan driver land their slated support for Navi 12 GPUs on top of the recently launched Radeon RX 5700 "Navi 10" graphics cards, today is the first time we're seeing patches from AMD to wire in the support to the AMDGPU DRM Linux kernel driver for this next iteration of Navi.
A total of 36 patches were sent out a short time ago that add Navi 12 support to this DRM driver. The Navi 12 support comes in at just 1,388 lines of new code over the existing Navi 10 support, but some 1.1k lines of that are just auto-generated new header files.
Intel's open-source driver team has sent in their initial batch of kernel graphics driver changes to DRM-Next for material that will be targeting the Linux 5.4 cycle later this year.
This is just the first of several pull requests expected of the Intel "i915" DRM driver material to DRM-Next for queuing ahead of the Linux 5.4 merge window opening in September. In the few weeks since ending Linux 5.3 feature development and its resulting merge window, a number of patches have been queuing for this Direct Rendering Manager driver.
While this year's GCC 9 compiler release brought initial support for AMD Zen 2 processors with the Znver2 target, the support was sadly incomplete. The GCC 9 support added some of the new instructions but not all of them (RDPRU support, for instance, remains missing), and the cost tables and scheduler model were not updated from Znver1 to account for the microarchitectural changes. Thankfully, SUSE's compiler experts recently fixed up this support for the GCC 10 compiler and more recently were able to get it back-ported for the upcoming GCC 9.2 for the Linux distributions that will upgrade to that point release. Here are some benchmarks looking at the performance impact of that updated AMD Zen 2 compiler code.
I've released man-pages-5.02. The release tarball is available on kernel.org. The browsable online pages can be found on man7.org. The Git repository for man-pages is available on kernel.org.
This release resulted from patches, bug reports, reviews, and comments from 28 contributors. The release includes around 120 commits that change more than 50 pages.
This is a series highlighting best-of-breed utilities. We are covering a wide range of utilities including tools that boost your productivity, help you manage your workflow, and lots more besides. The other utilities in this series are listed here.
LanguageTool is open source proofreading software for English, French, German, Polish, Russian, and many other languages, although some are not actively maintained.
What makes this software special? LanguageTool offers a variety of different ways to access its functionality. There’s a cross-platform Java desktop application for offline use. You can also use its grammar, style and spell checker in a web browser with both Firefox and Chrome add-ons. There’s also support for LanguageTool in Google Docs, LibreOffice, and community support has added other applications including Emacs, LyX, and vim.
And there’s even an add-on for Microsoft Word if you still live on the dark side. Or use the software from the project’s website.
LanguageTool comes with its own embedded HTTP/HTTPS server so you can send a text to LanguageTool via HTTP and get the detected errors back as JSON.
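As a rough sketch of that workflow, here is how you might build such a request and pull the error messages out of the JSON response. This assumes a LanguageTool server listening locally on port 8081 (a port commonly used in its documentation); the helper function names are mine, not part of LanguageTool:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def build_check_request(text, language="en-US",
                        server="http://localhost:8081"):
    """Build the URL and form body for LanguageTool's /v2/check endpoint."""
    body = urlencode({"text": text, "language": language}).encode()
    return server + "/v2/check", body

def error_messages(response_json):
    """Extract the human-readable messages from a /v2/check JSON response."""
    return [m["message"] for m in response_json["matches"]]

url, body = build_check_request("This are a test.")
print(url)  # → http://localhost:8081/v2/check

# With a server actually running, you would then do:
#   matches = json.load(urlopen(url, data=body))["matches"]
# and feed the parsed response to error_messages().
```

The detected errors come back in the `matches` array of the JSON response, each with a message, the offending offset, and suggested replacements.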
We are happy to announce a new release for Icinga Web 2, version 2.7.0. Official packages are available on packages.icinga.com. You can find all issues related to this release on our Roadmap.
We're happy to announce Kiwi TCMS version 6.11! This is a security and improvement release which updates many internal dependencies, adds 2 new Telemetry reports, updates TestPlan and TestCase cloning pages and provides several other improvements and bug fixes. You can explore everything at https://public.tenant.kiwitcms.org!
I sometimes need to tidy up data tables containing pseudo-duplicate data items. The example below is from a real-world dataset and is part of a tally of a certain field. The tally function ignores the header and generates a sorted list of data items and their frequencies.
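The tally function itself isn't reproduced in this excerpt, but its described behaviour — skip the header, then count and sort the remaining items — can be sketched in a few lines (the function and field names here are mine):

```python
from collections import Counter

def tally(lines):
    """Count the items in a single-field data table, ignoring the header
    row, and return (item, frequency) pairs sorted by item."""
    counts = Counter(line.strip() for line in lines[1:])
    return sorted(counts.items())

data = ["species", "fox", "fox", "owl", "fox", "owl"]
print(tally(data))  # → [('fox', 3), ('owl', 2)]
```

With a tally like this, pseudo-duplicates (e.g. trailing-space or spelling variants of the same item) show up as separate entries with suspiciously similar names, which is what makes the tidy-up work visible.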
Logic World from Mouse Hat Games (previously called The Ultimate Nerd Game) is delayed. Originally due this Summer, it has been pushed back until October.
Speaking about the delay in this post, they said "we just aren't ready" and they "don't want to make sacrifices to the quality of the game" along with not having to deal with any crunch. All fair enough, I would rather have a healthy developer put out a good game after a delay.
Abandon Ship caught my eye some time ago, thanks to the incredible style inspired by classic Naval Oil Paintings. The developer said it would eventually come to Linux and that time is fast approaching with a Beta now up.
Valve are reacting quickly to feedback along with implementing some needed features for their auto-battler strategy game Dota Underlords. The latest major update is out now, with some big changes to the gameplay.
Previously, all the battles in Underlords took place separately. So while you might have been facing player X, they at the same time would be fighting player Y. Not any more! Players now get paired up to fight directly against each other, both taking part in the same shared combat. If there's an odd number of players, one of them might fight a clone of a player.
The Horn update for Sunless Skies went live on July 30th adding in a highly requested feature, the ability to toot. There's, uh, other things as well of course.
Sometimes we just want simple things and tooting your horn in Sunless Skies was apparently the "second most requested feature since launch". So, they added it in with a note that "The horn has no gameplay effects whatsoever, but we think it sounds quite nice. You're, umm, welcome."—hah.
Most action games give you some sort of weapon, dump you in front of lots of enemies and have you go at it. Decoy is different: your only tool is your vehicle, and your job is to distract the enemy.
An infiltration team is searching for information, so to keep them out of harm's way you will need to drive around like an insane person to distract, evade and survive. You have nothing to defend yourself with, other than your awesome driving skills.
A recent announcement from Russian developer GameTrek and publisher 1C Entertainment is the game Secret Government. It's planned to enter Early Access in October this year with Linux support.
The MATE desktop environment is becoming usable on Wayland thanks to its support being provided by the Mir display stack.
The MATE desktop, which continues to be developed as an active fork of GNOME 2, is seeing Wayland support thanks to Mir doing the heavy lifting. This is also becoming one of the leading examples of Mir's use-case following Canonical engineers re-tooling their display server with Wayland support after pulling back from their original design goals around Ubuntu Touch and mobile/convergence.
We found that Krita 4.2.4 still had a bug handling shortcuts when certain tools were active. We’ve worked hard to fix that bug as quickly as possible, and as a consequence, we’re releasing Krita 4.2.5 today. Everyone is urged to update to this new release.
Until 2005, all Krita development was done by volunteers, in their spare time. That year, Google started the Google Summer of Code program. Then we had students working full-time on Krita for three months; mentored by the existing volunteer team.
For me, Krita maintainer since 2003, there was nothing more satisfying than working on Krita. In contrast with my day jobs, we actually started releasing in 2004!
But it was clear that there was only so much that could be done by a purely volunteer, unsponsored team. Time is money, money buys development time, so since 2009, we’ve tried to sponsor development beyond Google Summer of Code. Some argued that this would kill the volunteer element in the community, but we’ve never seen a trace of that.
So, these days, there are four people working full-time on Krita. There is Dmitry, since 2012, sponsored by the Krita Foundation through donations and fund raisers.
I, Boudewijn, have been funded to work on Krita through a combination of donations, special projects for third-party organizations and, since 2017, the income from the Windows Store. I don’t just code; I also do pretty much all of the project management.
Agata and Ivan started working full-time on Krita this year, and are funded through the income from the Steam Store. Agata is well-known as Tiar and has been supporting other Krita users for ages. Ivan has been around in the Krita community for more than ten years, first producing things like shortcut cheat sheets, and completing a Google Summer of Code project successfully in 2018.
There are many Linux users out there and despite that, Linux desktops have failed to break into the mainstream when compared to Microsoft’s Windows. One of the main reasons behind it, as described by Linus Torvalds, is “the fragmentation of different [Linux] vendors.” There are multiple Linux vendors, unlike the Windows ecosystem, which creates a lack of a unified approach.
However, now two of the most popular Linux desktop competitors – GNOME Foundation and KDE – are coming together to work on a Linux desktop. Both open-source biggies are set to sponsor the Linux App Summit (LAS) 2019, which is scheduled for November 12th to 15th, 2019.
The Inclusion and Diversity team at GNOME was created to encourage and empower staff and volunteers, and to create an environment within GNOME where people from all backgrounds can thrive.
We welcome and encourage participation by everyone. To us, it doesn’t matter how you identify yourself or how others perceive you: we welcome you.
Communication is key. Your computer might be trying to tell you something important. But if your mail transport agent (MTA) isn’t properly configured, you might not be getting the notifications. Postfix is an MTA that’s easy to configure and known for a strong security record. Follow these steps to ensure that email notifications sent from local services will get routed to your internet email account through the Postfix MTA.
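As a minimal illustration of that kind of setup, relaying local mail through an external account usually comes down to a few lines in `/etc/postfix/main.cf`. The hostname and port below are placeholders, not values from the article, and a real setup also needs the SASL password file created and hashed:

```
# /etc/postfix/main.cf -- relay local notifications through an external
# SMTP account (the example.com values are placeholders).
relayhost = [smtp.example.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
```

After editing, reload Postfix and send a test message to confirm delivery.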
RPM of PHP version 7.3.8 are available in remi repository for Fedora 30 and in remi-php73 repository for Fedora 28-29 and Enterprise Linux ≥ 6 (RHEL, CentOS).
RPM of PHP version 7.2.21 are available in remi repository for Fedora 28-29 and in remi-php72 repository for Enterprise Linux ≥ 6 (RHEL, CentOS).
RPM of PHP version 7.1.31 are available in remi-php71 repository for Enterprise Linux (RHEL, CentOS).
RPM of PHPUnit version 8.3 are available in remi repository for Fedora ≥ 28 and for Enterprise Linux (CentOS, RHEL...).
In July 2019, I have worked on the Debian LTS project for 15.75 hours (of 18.5 hours planned) and on the Debian ELTS project for another 12 hours (as planned) as a paid contributor.
Debian Buster uses GNOME as its default desktop environment, but you can change it to another desktop environment if you don't like the default. And if your computer has a 32-bit architecture, you can still install Debian Buster, because the distribution continues to support 32-bit machines.
It's distribution release day! At least for Linux Mint anyway, with Linux Mint 19.2 now officially available across multiple desktop flavours.
A pretty good choice for those new to Linux and wanting to dip their toes into some Linux gaming, this brand new distribution release comes with numerous new features and enhancements. Their main and most supported desktop is Cinnamon, with both MATE and Xfce spins also available for Mint 19.2.
In this video, we look at what's new in Linux Mint 19.2.
The Linux Mint project released today the Linux Mint 19.2 "Tina" operating system, which is now available for download as Cinnamon, MATE, and Xfce editions.
Coming seven months after the Linux Mint 19.1 "Tessa" release, Linux Mint 19.2 "Tina" is the second major release in the Linux Mint 19 operating system series, based on Canonical's long-term supported Ubuntu 18.04 LTS (Bionic Beaver) operating system series, which will be supported for five years until 2023.
"Linux Mint 19.2 is a long term support release which will be supported until 2023. It comes with updated software and brings refinements and many new features to make your desktop experience more comfortable," said Clement Lefebvre, Linux Mint project leader and lead developer.
Linux Mint 19.2 "Tina" is now officially available in its Cinnamon, MATE, and Xfce flavors while continuing to be powered off the Ubuntu 18.04 LTS base.
Linux Mint 19.2 provides the latest stable release updates on Ubuntu 18.04 LTS plus offers a number of updates to the distribution's own utilities and other packages. One of the big improvements is Mint's update manager now showing supported kernel options and all-around a better experience for managing the installed kernel(s) on the system.
The Linux Mint project today released Linux Mint 19.2 "Tina", which is now available for download as Cinnamon, MATE, and Xfce editions.
As usual, three desktop environments are available -- Cinnamon (4.2), MATE (1.22), and Xfce (4.12). If your computer is fairly modern, take my advice and opt for the excellent Cinnamon. MATE and Xfce are solid choices too, although they are more appropriate for computers with meager hardware. For new users, choosing amongst three interfaces can be confusing -- thankfully, the Mint developers stopped using KDE almost two years ago.
Linux Mint 19.2 "Tina" is based on the wildly popular Ubuntu operating system, but on 18.04 rather than the new 19.04. Why use an older version of Ubuntu as a base? Because 18.04 is an LTS or "Long Term Support" variant. While version 19.04 will be supported for less than a year, 18.04 gets five years of standard support, and up to 10 years with Canonical's Extended Security Maintenance!
Announcements will be made shortly with instructions on how to upgrade from Linux Mint 19 or Linux Mint 19.1.
If you are running the BETA use the Update Manager to apply available updates.
Linux Mint 19.2 is a long term support release which will be supported until 2023. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.
This new version of Linux Mint contains many improvements.
This was a fairly light iteration for the Web & design team at Canonical, as we had a fair few people on holiday as well as a group who went to Toronto for our mid-cycle roadmap sprint. Here are some of the highlights of our completed work.
Advantech’s MIC-720AI and MIC-710IVA edge-AI computers run Ubuntu on Nvidia Jetson TX2 and Nano modules, respectively. The compact, rugged MIC-720AI has a single PoE port while the MIC-710IVA NVR system has 8x PoE ports.
At the 2019 Nvidia GPU Technology Conference in late May, Advantech previewed three Nvidia Jetson-based, “MIC” branded edge AI solutions for smart city, transportation, and manufacturing applications. More recently, product pages have appeared for two of these Linux-driven computers: the Jetson TX2 powered MIC-720AI and the Jetson Nano based MIC-710IVA AI Network Video Recorder. The promised MIC-730AI has yet to be documented, but we have an image — it’s in the middle of the group shot below.
In addition to the Tegra X1 SoC, the Nano developer kit comes configured with 4GB of LPDDR4 memory and plenty of I/O options, including a MIPI CSI connector, four USB 3.0 Type-A ports, one USB 2.0 Micro-B port, one gigabit ethernet port, and 40 GPIO pins. The Nano is capable of driving dual displays through its single DisplayPort and HDMI ports, has a microSD card slot for storage, and includes a somewhat hidden M.2 Key E connection for expansion modules/daughter cards for optional functions like wireless connectivity. The Jetson Nano developer kit comes with a sizable heatsink for passive cooling, but has holes drilled for add-on fans. For our evaluation, we used a Noctua NF-A4x20 5V PWM fan and a Raspberry Pi MIPI Camera Module v2 from RS Components and Allied Electronics.
For development software, the Nano runs an Ubuntu Linux OS and uses the Jetpack SDK, which supports Nvidia’s CUDA developer environment, as well as other common AI frameworks, such as TensorRT, VisionWorks, and OpenCV.
Android Auto users should see a new look on their infotainment system in a few weeks, with a new navigation bar, notification center and launcher, as well as a dark theme, and improved screen optimization.
Now that we are in the home stretch for the Librem 5 launch, it’s a good time to start discussing some visions for the future. While the Librem 5 can operate as a traditional cellular phone today, in this post we are going to discuss its potential as a “no-carrier phone.”
The term “no-carrier phone” is used for a mobile phone that does not get its phone number from a carrier. This can take a couple of forms: a WiFi connection-only phone, or a Cellular Data connection-only phone.
In other industries, for instance in media distribution, this is called “Over-The-Top” (OTT); the underlying idea is that Internet Service Providers (ISPs) should be, and are, just “dumb pipes”. Why? Because they provide internet data only, and all the services ride over-the-top of the internet connection. Netflix paved the way for OTT in media when it moved from DVD to streaming (the “Net” part of their name) and offered television and movie content to any internet-connected device. This was done against the wishes of many entrenched media groups and ISPs, of course, but the majority of us have now adopted the OTT model: we call them streaming services.
Aitech Defense Systems, Inc., a part of the Aitech Group, has ported the cost-effective, open source Linux operating system onto its intelligent Ai-RIO remote I/O interface unit (RIU). This modular small form factor (SFF) RIU internally networks up to eight expansion modules – or ‘slices’ – for extremely high density and low power in a compact physical space.
George Romaniuk, director of space products, for Aitech Group noted, “By increasing the available OS options on the Ai-RIO, we’re providing customers with technology advantages to ensure their systems are developed on-time and on-budget, while incorporating the needed processing speeds and real-time functionality of critical embedded systems.”
Sometime last year, Facebook challenged a law enforcement request for access to encrypted communications through Facebook Messenger, and a federal judge denied the government’s demand. At least, that is what has been reported by the press. Troublingly, the details of this case are still not available to the public, as the opinion was issued “under seal.” We are trying to change that.
Mozilla, with Atlassian, has filed a friend of the court brief in a Ninth Circuit appeal arguing for unsealing portions of the opinion that don’t reveal sensitive or proprietary information or, alternatively, for releasing a summary of the court’s legal analysis. Our common law legal system is built on precedent, which depends on the public availability of court opinions for potential litigants and defendants to understand the direction of the law. This opinion would have been only the third since 2003 offering substantive precedent on compelled access—thus especially relevant input on an especially serious issue.
We’ve introduced new features that make it easier to moderate and share your Hubs experience. July was a busy month for the team, and we’re excited to share some updates! As the community around Hubs has grown, we’ve had the chance to see different ways that groups meet in Hubs and are excited to explore new ways that groups can choose what types of experience they want to have. Different communities have different needs for how they’re meeting in Hubs, and we think that these features are a step towards helping people get co-present together in virtual spaces in the way that works best for them.
Katherine Druckman: Hey, Linux Journal readers, I am Katherine Druckman, joining you again for our awesome, cool podcast. As always, joining us is Doc Searls, our editor-in-chief. Our special guest this time is Yiftach Shoolman of Redis Labs. He is the CTO and co-founder, and he was kind enough to join us. We’ve talked a bit, in preparation for the podcast, about Redis Labs, but I wondered if you could just give us sort of an overview for the tiny fraction of the people listening that don’t know all about Redis Labs and Redis. If you could just give us a little brief intro, that’d be great.
[prMac.com] Prague, Czech Republic - 24U Software has released a new version of the popular open-source PHP library designed for PHP developers to easily integrate their code with the RESTful FileMaker Data API without having to learn the FileMaker Data API itself.
The new version brings support for all new features added to the FileMaker Data API with the recent release of FileMaker Server 18, while maintaining full compatibility with the FileMaker Server 17 Data API as well.
Yesterday, the team behind iTerm2, the GPL-licensed terminal emulator for macOS, announced the release of iTerm2 3.3.0. It is a major release with many new features such as the new Python scripting API, a new scriptable status bar, two new themes, and more.
While science is supposed to be about building on each other's findings to improve our understanding of the world around us, reproducing and reusing previously published results remains challenging, even in the age of the internet. The basic format of the scientific paper—the primary means through which scientists communicate their findings—has more or less remained the same since the first papers were published in the 17th century.
This is particularly problematic because, thanks to the technological advancements in research over the last two decades, the richness and sophistication of the methods used by researchers have far outstripped the publishing industry's ability to publish them in full. Indeed, the Methods section in research articles remains primarily a huge section of text that does not reflect the complexity or facilitate the reuse of the methods used to obtain the published results.
While GCC and Clang are now competing neck-and-neck on Linux x86_64 when it comes to the performance of generated binaries, when it comes to each of their initiatives to transition to Git it looks like LLVM will take the cake.
Both LLVM (and its sub-projects) and GCC have been working on transitioning from Subversion (SVN) to Git. In the case of LLVM, they plan to centralize around GitHub for their Git hosting though not making use of any extra GitHub features at this stage. In the case of GCC, making use of GNU's Git hosting infrastructure.
In the curl project we make an effort to ship security fixes as soon as possible after we’ve learned about a problem. We also “prenotify” vendors of open source OSes ahead of the release (that is, inform them about a problem before it becomes known to the public) to alert them about what is about to happen and to make it possible for them to be ready and prepared when we publish the security advisory of the particular problems we’ve found.
These distributors ship curl to their customers and users. They build curl from the sources they host and they apply (our and their own) security patches to the code over time to fix vulnerabilities. Usually they start out with the clean and unmodified version we released and then over time the curl version they maintain and ship gets old (by my standards) and the number of patches they apply grow, sometimes to several hundred.
The distros@openwall mailing list allows no more than 14 days of embargo, so they can never be told any further in advance than that.
Whether you are a maker, a teacher, or someone looking to expand your Python skillset, the BBC micro:bit has something for you. It was designed by the British Broadcasting Corporation to support computer education in the United Kingdom.
The open hardware board is half the size of a credit card and packed with an ARM processor, a three-axis accelerometer, a three-axis magnetometer, a Micro USB port, a 25-pin edge connector, and 25 LEDs in a 5x5 array.
I purchased my micro:bit online for $19.99. It came in a small box and included a battery pack and a USB-to-Micro USB cable. It connects to my Linux laptop very easily and shows up as a USB drive.
We are happy to announce the release of Qt Creator 4.10 RC!
The prebuilt binaries for this release are based on a Qt 5.13.1 snapshot, which should take care of regular crashes that some of you experienced with the earlier Beta releases. For more details on the 4.10 release, please have a look at the blog post for Beta1 and our change log.
This tutorial provides multiple methods to check whether an integer lies in a given range, with several examples for clarity. Let’s first define the problem: we want to verify whether an integer value lies between two other numbers, for example, 1000 and 7000. So we need a simple method that can tell us whether any numeric value belongs to a given range.
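The tutorial's own snippets aren't reproduced in this excerpt; assuming Python, and borrowing the example bounds 1000 and 7000 from the text, here is a minimal sketch of two standard approaches:

```python
# Two common ways to test whether an integer lies in a range; the
# bounds 1000 and 7000 come from the tutorial's example.
def in_range(value, low=1000, high=7000):
    """Return True if value lies in the inclusive range [low, high]."""
    return low <= value <= high  # chained comparison, the idiomatic form

def in_range_alt(value, low=1000, high=7000):
    """Same check via range(); only meaningful for integers."""
    return value in range(low, high + 1)

print(in_range(2500))  # True
print(in_range(999))   # False
```

The chained comparison works for any comparable values; the `range()` membership test is integer-only but reads naturally when the bounds really are a discrete range.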
Security updates have been issued by Debian (firefox-esr and thunderbird), openSUSE (openexr and rmt-server), Oracle (bind, container-tools:rhel8, cyrus-imapd, dotnet, edk2, firefox, flatpak, freeradius:3.0, ghostscript, gvfs, httpd:2.4, java-1.8.0-openjdk, java-11-openjdk, kernel, mod_auth_mellon, pacemaker, pki-deps:10.6, python-jinja2, python27:2.7, python3, python36:3.6, systemd, thunderbird, vim, virt:rhel, WALinuxAgent, and wget), Slackware (mariadb), SUSE (java-1_8_0-openjdk, polkit, and python-Django1), and Ubuntu (Sigil and sox).
An increasingly popular design for a data-center network is BGP on the host: each host ships with a BGP daemon to advertise the IPs it handles and receives the routes to its fellow servers. Compared to a L2-based design, it is very scalable, resilient, cross-vendor and safe to operate. Take a look at “L3 routing to the hypervisor with BGP” for a usage example.
[...]
On the Internet, BGP relies mostly on trust. This contributes to various incidents caused by operator errors, like the one that affected Cloudflare a few months ago, or by malicious attackers, like the hijack of Amazon DNS to steal cryptocurrency wallets. RFC 7454 explains the best practices to avoid such issues.
People often use AS sets, like AS-APPLE in this example, as they are convenient if you have multiple AS numbers or customers. However, there is currently nothing preventing a rogue actor from adding arbitrary AS numbers to their AS set. IP addresses are allocated by five Regional Internet Registries (RIRs). Each of them maintains a database of the assigned Internet resources, notably the IP addresses and the associated AS numbers. These databases may not be totally reliable but are widely used to build ACLs to ensure peers only announce the prefixes they are expected to announce. Here is an example of ACLs generated by bgpq3 when peering directly with Apple:
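The bgpq3 output itself is not reproduced in this excerpt. To illustrate the idea behind such ACLs, here is a minimal Python sketch of prefix filtering against an allow list; the prefixes are documentation ranges, not Apple's real allocations, and real filters would also enforce maximum prefix lengths:

```python
import ipaddress

# A hypothetical allow list of the kind bgpq3 derives from the
# RIR/IRR databases for a peer (illustrative prefixes only).
ALLOWED_PREFIXES = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/22"),
]

def announcement_allowed(prefix: str) -> bool:
    """Accept an announced prefix only if it equals, or is contained
    within, one of the allowed prefixes."""
    net = ipaddress.ip_network(prefix)
    return any(
        net.version == allowed.version and net.subnet_of(allowed)
        for allowed in ALLOWED_PREFIXES
    )

print(announcement_allowed("198.51.100.128/25"))  # True: inside 198.51.100.0/22
print(announcement_allowed("203.0.113.0/24"))     # False: not in the allow list
```

In practice the router, not a script, evaluates these filters; the point is that a peer announcing a prefix outside its registered resources is rejected at the session.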
Fernando “Corby” Corbató lived long enough to curse his most famous invention: the computer password. In 1961 he adapted the ancient system of secret codes almost as an afterthought for his truly groundbreaking invention: the ability for several people to simultaneously use the same computer — in those days room-sized elephants — remotely. But five years ago he admitted that passwords had become “a nightmare”. For a while he carried round three sheets of closely typed paper with his own collection of 150 codes. He eventually entrusted them to an electronic file.
In a clear defense of the First Amendment, a federal judge ruled the Democratic National Committee cannot hold WikiLeaks or its founder, Julian Assange, liable for publishing information that Russian agents were accused of stealing.
The DNC sued President Donald Trump’s campaign, the Russian Federation, Assange, and WikiLeaks on April 20, 2018, alleging the dissemination of materials “furthered the prospects” of the Trump campaign. They argued officials “welcomed” the assistance of agents allegedly working for the Russian Federation.
At the time, DNC chair Tom Perez accused WikiLeaks of helping to perpetrate a “brazen attack” on democracy. However, Judge John Koeltl in the Southern District of New York saw through the DNC lawsuit and recognized the impact it would have on press freedom.
Koeltl highlighted the case of the Pentagon Papers, where the Supreme Court held there was a “heavy presumption” against the “constitutional validity of prior restraints” (that is, the suppression of) the publication of information.
Whether or not WikiLeaks knew the materials were obtained illegally, they were protected by the First Amendment.
“The First Amendment prevents such liability in the same way it would preclude liability for press outlets that publish materials of public interest despite defects in the way the materials were obtained so long as the disseminator did not participate in any wrongdoing in obtaining the materials in the first place,” Koeltl asserted.
German scientists have proposed a startling new way of slowing sea level rise and saving New York, Shanghai, Amsterdam and Miami from 3.3 metres of ocean flooding: artificial snow.
They suggest the rising seas could be halted by turning West Antarctica, one of the last undisturbed places on Earth, into an industrial snow complex, complete with a sophisticated distribution system.
An estimated 12,000 high-performance wind turbines could be used to generate the 145 gigawatts of power (one gigawatt supplies the energy for about 750,000 US homes) needed to lift Antarctic ocean water to heights of, on average, 640 metres, heat it, desalinate it and then spray it over 52,000 square kilometres of the West Antarctic ice sheet in the form of artificial snow, at the rate of several hundred billion tonnes a year, for decades.
Such action could slow or halt the apparently inevitable collapse of the ice sheet: were this to melt entirely – and right now it is melting at the rate of 361 billion tonnes a year – the world’s oceans would rise by 3.3 metres.
The harms of climate disruption are already terrible and will only get worse, and despite what appears to be some people’s magical thinking, no one will be unaffected. What’s more, the drivers of climate disruption are known, and it isn’t people who leave the lights on. What’s missing is a public and media dialog that features fossil fuel industries and their leaders accurately, as roadblocks to the climate justice solutions we desperately need. While federal inaction and even regression is distressing, some state attorneys general are pushing forward for accountability. We’ll talk about that movement with Sriram Madhusoodanan, deputy campaigns director with the group Corporate Accountability.
Thanks to Leonie’s information, a plane now circles overhead. When they spot the mother rhino or her calf, they will call in the helicopter team to dart the animal with a sedative. Darting from the air allows the animal to fall forward onto its sternum, reducing risk of injury, and then a ground team has three minutes to remove the horn before health complications arise.
In the distance, we see the helicopter swoop in. When its tail end tilts up, I realize the animal has been darted and we’re too far away to be useful. The other ground teams will take care of the dehorning.
We drive through the bush to the rendezvous place, and we’re only there a few minutes when, down the hill, I finally see my first-ever rhino in the wild. It’s the calf of the now-dehorned mother, trying to escape the loud helicopter overhead. It charges awkwardly out of the bush and across the road before plunging back into its protective cover.
But the calf can’t hide from the eyes above it. The helicopter team quickly darts it and we give chase, driving madly off road, careening between trees and axle-crunching boulders. Three ground teams pull up and we grab chainsaws and vet kits and run to the downed calf. The veterinarians apply a blindfold: The calf now looks like a big baby with a toothache. A chainsaw roars into life and the nub of horn, no more than three inches long, is sawed off. A grinder carves away the stub and applies resin, so that it looks somewhat natural.
CNN painfully demonstrated this week why we need independently run presidential debates. With its ESPN-like introductions to the candidates, and its insistence on questions that pit candidates against each other, CNN took an approach to the debates more befitting a football game than an exercise in democracy.
The CNN hosts moderated as if they weren’t even listening to what candidates were saying, inflexibly cutting them off after the inevitably too-short 30-to-60-second time limit—in order to offer another, often seemingly randomly selected, candidate the generic prompt, “Your response?” At times, these followed one another so many times it was unclear what the candidate was even supposed to respond to, or why.
A few years ago, the Parma (OH) Police Department decided to turn its hypersensitivity into a criminal investigation. A local man, Anthony Novak, created a Facebook page parodying the PD's social media front. It wasn't particularly subtle satire. Most readers would have immediately realized this wasn't the Parma PD's official page -- not when it was announcing the arrival of the PD's mobile abortion clinic or the institution of a ban on feeding the homeless. Not only that, but the official logo had been altered to read "We No Crime."
The Parma PD decided to treat this parody as a dangerous threat to itself and the general public. It abused an Ohio state law forbidding the use of computers to "disrupt" police services to go after Novak. Not that there was any disruption other than the rerouting of PD resources to investigate a non-criminal act.
The end result was the arrest of Novak, the seizure of his electronic devices, and a four-day stay in jail for the parodist before he was acquitted of all charges. Novak sued the police department, but the district court decided to award immunity across the board to everyone involved. The Sixth Circuit Appeals Court has rolled back some of that ruling, allowing Novak's civil rights lawsuit to proceed.
Following an investigation by a German data protection agency, Google has suspended Assistant for a three-month period. Johannes Caspar, the head of the Hamburg data protection agency, found Google was recording and transcribing private conversations for examination by Google contractors. Caspar said there are “significant doubts” as to whether Google Assistant complies with EU data-protection law. Caspar previously uncovered the fact that Google Street View vehicles were intercepting and recording private wifi communications, a charge that Google denied until the hard drives in the Google vehicles were examined.
While lots of people are angling to break up the big internet companies in the belief that will lead to more competition, we've long argued that such a plan is unlikely to work. If you truly want more competition, you need to end the ability of these companies to lock up your data: allow third parties access so that the data is not stuck in silos, and so that users themselves both have control and alternative options that they can easily move to.
That's why we were quite interested a year ago when Google, Facebook, Microsoft and Twitter officially announced the Data Transfer Project (which initially began as a Google project, but expanded to those other providers a year ago). The idea was that the companies would make it ridiculously easy to let users automatically transfer their own data (via their own control) to a different platform. While some of the platforms had previously allowed users to "download" all their data, this project was designed to be much more: to make switching from one platform to another much, much easier -- effectively ending the siloing of data and (worse) the lock-in effects that help create barriers to competition.
The internet showed up in our house in 1995. When that happened, I mansplained to my wife that it was a global drawstring through all the phone and cable companies of the world, pulling everybody and everything together—and that this was going to be good for the world.
My wife, who ran a global business, already knew plenty of things about the internet and expected good things to happen as well. But she pushed back on the global thing, saying "the sweet spot of the internet is local." Her reason: "Local is where the internet gets real." By which she meant the internet wasn't real in the physical sense anywhere, and we still live and work in the physical world, and that was a huge advantage.
Later I made a big thing about how the internet was absent of distance, an observation I owe to Craig Burton.
[...]
Because I know some geology, and not much was being said in any media about how a mountain face could slop across a town, I published a long blog post titled "Making sense of what happened in Montecito". In it, I explained why these kinds of events are called debris flows (rather than mudslides or landslides), and listed all the addresses of all the structures (mostly homes) that local officials said were destroyed. (The county produced an excellent map, but the addresses were under mouse-overs.) That way, owners, friends and relatives could find those addresses in a search engine.
Visits to my blog jumped from dozens per day to dozens of thousands. As far as I could tell, nearly all those visits were by local residents or people who cared personally about what happened to Montecito.
My point here is that I did what I could, as did all the other locals posting their own forms of help on the net. Together we scaffolded up a shared understanding of the event and progress toward full recovery.
As it happens, I started writing this column in Santa Barbara, continued writing it in New York, and am finishing it now in Córdoba, a beautiful city in southern Spain. I was brought here to give a talk on exactly this subject, titled "The Future of the Internet Is Local". In the audience were local officials, businesses and organizations. I framed the talk with a historical perspective: the internet we know—the one with e-commerce, ISPs and graphical browsers—is about 1/1000th the age of Córdoba. We are still at the dawn of life in a non-place that is absent of distance and gravity, but which we still use and experience in the physical world.
The first rule of every new technology is what can be done will be done—until we realize what shouldn't be done. This has been true for everything from stone tools through nuclear power. And, now it's true of digital technology and the internet. We'll never rid the net of lies or facile façades, any more than we'll rid hammers of their ability to kill somebody with a whack on the head. But we can and will get more civilized about it. And my wife is right: local is where that will start.
You may recall that, recently, I posted on WIPO's bizarre decision to host a database of "pirate" sites that it would share with advertisers, encouraging them to block ads from appearing on any of the sites in the "Building Respect for Intellectual Property" (BRIP) database. As we noted in our original post, previous attempts at such databases showed how problematic they could be, as they almost always swept up perfectly legal sites, and they provided no due process, no checks and balances or anything of the like. I also had a list of questions about this for WIPO, which I noted were unanswered at the time of posting. WIPO actually did get back to me, but we'll get to that.
First, I wanted to point to a Twitter thread by New Zealand internet lawyer Rick Shera, who, in response to the news of the BRIP database, gave a real world example of how such databases create real harms for internet services through false accusations with no due process.
Back in the early days of filesharing clients and bittorrent being the focus of industry anti-piracy efforts, it was rare but not unheard of for end users to be targeted with lawsuits and criminal prosecution for copyright infringement. With the piracy ecosystem largely moving off of those kinds of filesharing platforms and into a realm in which end users instead simply stream infringing material over the wire, rather than downloading it directly to their own machines, the focus on the consumer of pirated material has fallen by the wayside. Instead, the focus is now on the infringing sites that offer those streaming materials to the public. This makes a great deal of sense, actually: the average user can plausibly claim ignorance as to the illicit nature of streamed material, and, unlike bittorrent technology, streaming doesn't simultaneously offer the material up to others as well.
Again, this makes sense.
Well, someone should reach out to the Malaysian government, because its new plans to fight piracy occurring with the aid of in-house Android boxes include a strategy to prosecute any homeowner where such a device used for infringement exists.