Kubernetes storage is useful to storage administrators because it enables different forms of persistent, stateful data retention within Kubernetes cluster deployments, which are increasingly popular.
Kubernetes itself is a rapidly developing technology that has been embraced by cloud vendors and enterprises alike to enable a more agile and scalable form of application delivery. With a properly implemented Kubernetes storage configuration, databases and application data can be created and accessed by many applications – allowing greater speed and efficiency.
Docker storage enables storage administrators to configure and support application data storage within Docker container deployments.
Docker is one of the most transformative and disruptive technologies to appear in recent years. It impacts multiple facets of IT, including storage. The technology represents a different approach than either traditional bare metal or virtual machine (VM) application delivery, providing organizations with the opportunity to benefit from a more agile and cloud-native approach.
In this episode, we have several questions about accessibility in Linux applications, we discuss a couple of cross-platform office suites that provide a bit better compatibility with Microsoft Office file formats, and we discuss problems and solutions for Ubuntu, Barrier, video and privacy and security. Lastly, we comment on Linux Journal's goodbye.
Cloud-native experts share tips and practical learnings for Kubernetes in the enterprise, Kubernetes on bare metal or with stateful MySQL databases, and optimizing the cost and performance of Serverless applications.
For the last decade, The Linux Foundation’s Open Source Summit has proven to be invaluable for attendees. A 2018 participant recently wrote an article on OpenSource.com stating “Last August, I arrived at the Vancouver Convention Centre to give a lightning talk and speak on a panel at Open Source Summit North America 2018. It’s no exaggeration to say that this conference—and applying to speak at it—transformed my career.” We encourage you to read the article and discover why attending Open Source Summit can be a game changer for you as well.
The goal of this 2019 Google Summer of Code project is to develop a tool with which to transparently proxy applications that use the Wayland protocol to be displayed by compositors. Unlike the original X protocol, only part of the data needed to display an application is transferred over the application's connection to the compositor; instead, large information transfers are made by sharing file descriptors over the (Unix socket) connection, and updating the resources associated with the file descriptors. Converting this side channel information to something that can be sent over a single data stream is the core of this work.
The proxy program I have developed for the project is called Waypipe. It can currently be found at gitlab.freedesktop.org/mstoeckl/waypipe. (I am currently looking for a better stable path at which to place the project; the preceding URL will be updated once this is done.) A few distributions have already packaged the program; see here; alternatively, to build and run the project, follow the instructions in the README and the man page. My work is clearly identified by the commit logs, and amounts to roughly ten thousand lines of C code, and a few hundred lines of Python.
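The file-descriptor side channel described above — resources shared by passing fds over the Unix socket rather than sending bytes in-band — can be illustrated with a short Python sketch (a toy illustration only, not Waypipe's actual C implementation; it assumes Python 3.9+ for `socket.send_fds`):

```python
import os
import socket
import tempfile

# Wayland clients share large resources (e.g. pixel buffers) by passing
# file descriptors over the Unix socket instead of copying bytes in-band.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# "Client side": create a resource backed by a file descriptor.
tmp = tempfile.TemporaryFile()
tmp.write(b"pixel data")
tmp.flush()

# Pass the fd itself across the connection, alongside a one-byte message.
socket.send_fds(parent, [b"x"], [tmp.fileno()])

# "Compositor side": receive the fd and read the shared resource directly.
# A proxy like Waypipe must notice this fd and forward its contents too.
msg, fds, flags, addr = socket.recv_fds(child, 1, 1)
with os.fdopen(fds[0], "rb") as shared:
    shared.seek(0)  # the duplicated fd shares the original file offset
    print(shared.read())  # b'pixel data'
```

Converting such out-of-band fd transfers into data that fits on a single stream is exactly the work the proxy has to do.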
Vulkan 1.1.120 is out as the newest weekly update to the Vulkan graphics API.
Among many different Linux/open-source benchmarks being worked on for the AMD EPYC "Rome" processors now that our initial launch benchmarks are out of the way are Linux distribution comparisons, checking out the BSD compatibility, and more. Some tests I wrapped up this weekend were seeing how recent Linux kernel releases perform on the AMD EPYC 7742 64-core / 128-thread processors.
For some weekend analysis, here are benchmarks of Linux 4.18 through Linux 5.3 in its current development form. All tests were done on the same AMD EPYC 7742 2P server running Ubuntu 19.04 and using the latest kernels in each series via the Ubuntu Mainline Kernel PPA.
The new development release Wine 4.14 was released today. The official apt repository provides packages for Ubuntu 18.04 and Ubuntu 19.04.
Portable consoles are hardly new, and thanks to the Switch, they’re basically the most popular gaming devices in the world. But ClockworkPi’s GameShell is something totally unique, and entirely refreshing when it comes to gaming on the go. This clever DIY console kit provides everything you need to assemble your own pocket gaming machine at home, running Linux-based open-source software and using an open-source hardware design that welcomes future customization.
The GameShell is the result of a successful Kickstarter campaign, which began shipping to its backers last year and is now available to buy either direct from the company or from Amazon. The $159.99 kit (on sale for $139.99 as of this writing) includes everything you need to build the console, like the ClockworkPi quad-core Cortex A7 motherboard with integrated Wi-Fi, Bluetooth and 1GB of DDR3 RAM — but it comes unassembled.
The studio behind the popular survival game Rust, Facepunch Studios, has announced that it is removing the title's Linux client altogether.
Khara, Inc. is known as Hideaki Anno’s motion picture planning and production company. They are currently working on “EVANGELION:3.0+1.0”, a film to be released in June 2020.
Xfce 4.14 comes 4 years and 5 months after Xfce 4.12, a release that it is probably included in the software repositories of almost all Linux-based operating systems. The goal for Xfce 4.14, as the developers explain, was to port all of the core components to the latest GTK3 and GDBus open-source technologies, instead of the old GTK2 and D-Bus GLib.
"In this 4.14 cycle the main goal was to port all core components to Gtk3 (over Gtk2) and GDBus (over D-Bus GLib). Most components also received GObject Introspection support. Along the way we ended up polishing our user experience, introducing quite a few new features and improvements and fixing a boatload of bugs," reads the release announcement.
The Xfce team is pleased to announce the release of Xfce desktop 4.14, a new stable version of the 4.x series, on 12th Aug, 2019.
It arrives after 4 years and 5 months of continuous development; at last we have this long-awaited release.
In this release, all core components were ported to Gtk3 (over Gtk2) and GDBus (over D-Bus GLib).
Most components also received GObject Introspection support.
Along the way, the team added quite a few new features and improvements and fixed a number of bugs.
Get ready for week 84 in KDE’s Usability & Productivity initiative! 84 weeks is a lot of weeks, and in fact the end is in sight for the U&P initiative. I’d say it’s been a huge success, but all good things must come to an end to make room for new growth! In fact, KDE community members have submitted many new goals, which the community will be able to vote on soon, with the three winners being unveiled at Akademy next month.
But fear not, for the spirit of the Usability & Productivity initiative has suffused the KDE community, and I expect a lot of really cool U&P related stuff to happen even after the initiative has formally ended–including the long-awaited projects of PolicyKit support and mounted Samba and NFS shares in KIO and Dolphin! These projects are making steady progress and I hope to have them done in the next few months, plugging some longstanding holes in our software.
It is a great idea to encrypt files on the client side before uploading them to an ownCloud server if that server is not running in a controlled environment, or if one just wants to act defensively and minimize risk.
Some people think it is a great idea to include the functionality in the sync client.
I don’t agree because it combines two very complex topics into one code base and makes the code difficult to maintain. The risk is high to end up with a kind of code base which nobody is able to maintain properly any more. So let’s better avoid that for ownCloud and look for alternatives.
A good way is to use a so-called encrypted overlay filesystem and let ownCloud sync the encrypted files. The downside is that you cannot use the encrypted files in the web interface, because it cannot easily decrypt them. To me, that is not overly important, because I want to sync files between different clients, which is probably the most common use case.
[...]
My personal conclusion: CryFS is an interesting project. It has a nice integration in the KDE desktop with Plasma Vault. Splitting files into equal sized blocks is good because it does not allow to guess data based on names and sizes. However, for syncing with ownCloud, it is not the best partner.
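The benefit of equal-sized blocks mentioned above can be sketched in a few lines of Python (a toy model of the splitting idea only; real CryFS also encrypts every block and its metadata):

```python
import hashlib
import os

BLOCK = 4096  # fixed block size, in the spirit of CryFS (illustrative)

def split_into_blocks(data: bytes, block: int = BLOCK) -> list:
    """Pad and split a file into equal-sized blocks so the stored
    objects leak neither file boundaries nor exact file sizes."""
    blocks = []
    for i in range(0, len(data), block):
        chunk = data[i:i + block]
        chunk += b"\0" * (block - len(chunk))  # pad the final block
        blocks.append(chunk)
    return blocks

# Every stored object ends up the same size, with an opaque name,
# so an attacker watching the synced directory cannot guess contents
# from names or sizes:
data = os.urandom(10_000)
blocks = split_into_blocks(data)
names = [hashlib.sha256(b).hexdigest()[:16] for b in blocks]
print(len(blocks), {len(b) for b in blocks})  # 3 {4096}
```

The flip side, as noted, is that a one-byte change can touch a whole block, which is what makes syncing such stores less efficient.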
Krita is a robust, fast and flexible painting application that makes creating art from scratch or existing resources a fun and productive experience. With many powerful brush engines and unique features such as multi-brush and mirrored painting, Krita explicitly supports creating comics, concept art, storyboards, textures, matte paintings and illustrations.
Krita has several features that are unique or a first among free software painting applications: support for colorspaces other than RGB, like CMYK, support for HDR painting, painting assistants, a perspective grid. Pop-up Palette: Quickly pick your color and brush by right-clicking on the canvas. You can also use Krita’s tagging system to swap out the available brushes that are displayed. The ring outside of the color selector contains the most recently used colors. These settings can be configured through the preferences.
Kata Containers is an open source container runtime that is crafted to seamlessly plug into the containers ecosystem.
We are now excited to announce that the Kata Containers packages are finally available in the official openSUSE Tumbleweed repository.
It is worthwhile to spend a few words explaining why this is great news, considering the role of Kata Containers (a.k.a. Kata) in fulfilling the need for security in the containers ecosystem, and given its importance for openSUSE and Kubic.
One of the reasons Linux is great is because of how flexible it is. For example, it can run on everything from servers to your old laptop to a Raspberry Pi. For this reason, it’s also a fantastic platform for developers.
Whether you’re a seasoned developer or just using Linux to learn to program, you still have to choose a distribution. The reality is that you can pretty much be a developer with most Linux distros, but some have those little conveniences that make them head-and-shoulders above the crowd.
Here are the best Linux distros for developers.
Leszek is pleased to announce the new stable release of Neptune 6.0 on 1st Aug, 2019.
It’s the first stable release of Neptune 6.0, based on Debian 10 “Buster”, featuring the KDE Plasma desktop with the typical Neptune tweaks and configurations.
The base of the system is Linux kernel 4.19.37, which provides the necessary hardware support.
Plasma 5.14.5 delivers the stable and flexible KDE-made desktop that is loved by millions.
This page provides information about the distributions that are no longer supported or developed starting from 2019 with details.
This table contains the Linux distribution name, initial release date, latest release date, the reason the distribution became inactive, and the distribution's age.
The Emmabuntüs Team is pleased to announce the release of the new Emmabuntüs Debian Edition 2 1.05 (32 and 64 bit) on 2nd Aug, 2019.
It’s based on Debian 9.9 stretch distribution and featuring the XFCE desktop environment.
This is a lightweight distribution, which was designed to run on older computers.
This distribution was originally designed to facilitate the reconditioning of computers donated to humanitarian organizations, starting with the Emmaüs communities.
This ISO contains:
- Calamares 3.2.11 (the latest version of our installer)
- Kernel 5.2.8
- mesa 19.1.4-1
- systemd 242.84-1
- xf86-video-nouveau 1.0.16-1
- XFCE 4.14
- bash-completion
- broadcom-wl-dkms

We also took care of some bug fixes:

- Autologin is working now (if chosen inside Calamares)
- Virtualbox detection is working
- Powersaving/screen-locking issues are resolved
- Added Leafpad as an option to use the editor as admin (not working with mousepad anymore)
- A general cleanup
- Removed light-locker (was causing issues)
RaspArch Build 190809 is now available to download and it is especially made for the recently released Raspberry Pi 4 Model B computer, which features a Quad-Core 1.5GHz 64-bit ARM Cortex-A72 CPU, up to 4GB RAM, and on-board dual-band 802.11 b/g/n/ac Wi-Fi and Bluetooth 5.0 (BLE).
The best thing about the new Raspberry Pi 4 model is that it supports up to 4K video resolutions via two micro HDMI ports. The tiny computer also comes with two USB 3.0 and USB 2.0 ports, an extended 40-pin GPIO header, MIPI Camera and Display ports, and true Gigabit Ethernet.
Alas, my Fedora 30 experience started strong with the first review and soured since. The test on the old laptop with Nvidia graphics highlighted numerous problems, including almost ending up in an unbootable state due to the wrong driver version being selected by the software center. With the in-vivo upgrade, I almost ended up in a similar state due to some incompatibility with extensions. I wasn't pleased by other glitches and errors, and the performance improvement margin isn't as stellar as the clean install test.
All in all, Fedora 30 feels like a rather buggy release, with tons of problems. I think versions 27 to 29 were quite robust overall, at least the Gnome version, but the latest edition is quite rough. That would mean I'd advise people upgrading to take care of their data, remember the possible snags like extensions, and triple check their hardware is up to the task, because apparently QA isn't cool anymore, and no one else will do this for you. All in all, Fedora 30 is very bleeding edge, finicky, definitely not for everyday use by ordinary desktop folks. It's a dev tool for devs, so if you want something stable and boring, search elsewhere.
I measured how long the most popular Linux distributions' package managers take to install small and large packages (the ack(1p) source code search Perl script and qemu, respectively).
Where required, my measurements include metadata updates such as transferring an up-to-date package list. For me, requiring a metadata update is the more common case, particularly on live systems or within Docker containers.
All measurements were taken on an Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz running Docker 1.13.1 on Linux 4.19, backed by a Samsung 970 Pro NVMe drive boasting many hundreds of MB/s write performance.
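A measurement like this can be reproduced with a small timing harness; the Docker invocation shown in the comment is a hypothetical example of how one might keep caches cold by using a fresh container per run:

```python
import subprocess
import time

def time_install(cmd: str, runs: int = 3) -> float:
    """Time a package-install command end to end, metadata update
    included, returning the best of several runs (in seconds)."""
    best = float("inf")
    for _ in range(runs):
        start = time.monotonic()
        subprocess.run(cmd, shell=True, check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        best = min(best, time.monotonic() - start)
    return best

# Hypothetical usage — a fresh container each run keeps package caches cold:
#   time_install("docker run --rm debian:stable sh -c "
#                "'apt-get update && apt-get -y install ack'")
print(f"{time_install('true'):.3f}s")  # timing a trivial command
```

Taking the best of several runs reduces noise from the network and the I/O scheduler, which matters when the drive can sustain hundreds of MB/s.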
Finally the new public version of Knoppix 8.6 is out!
Version 8.6 of KNOPPIX is based on Debian/stable (buster), with some packages from Debian/testing and unstable (sid) for newer graphics drivers or desktop software packages. It uses Linux kernel 5.2.5 and Xorg 7.7 (core 1.20.4) for supporting current computer hardware.
Knoppix 8.6 marks the re-basing to Debian 10.0 Buster with select packages from Debian Testing and Unstable/Sid for newer graphics support. Knoppix 8.6 ships with the Linux 5.2 kernel. Knoppix 8.6 also ships with the latest desktop environment bits and other updated software.
The CutiePi is hardly the first tablet built around one of Raspberry Pi’s tiny, low-cost computers. But it’s a pretty nifty looking addition to the category that combines an 8 inch touchscreen display with a Raspberry Pi Compute Module 3 Lite, a custom carrier board, and software to make the Linux-based Raspbian operating system touch-friendly.
CutiePi’s developers have a working prototype and hope to begin selling the tablet later this year. But the whole project is open source, so anyone who wants to build their own can check out the code and hardware design files and give it a try.
Unfortunately, no information on pricing or worldwide availability has been released as yet for the CutiePi, but as soon as information comes to light, we will keep you updated as always.
A new piece of hardware will soon be launching via the Crowd Supply website called HealthyPi v4, offering a fourth-generation built on the technology and feedback from previous versions. The open source, wireless, wearable has been specifically designed to monitor human vital signs and is powered by an ESP32.
Electronic developers and enthusiasts may be interested in a new piece of kit which will soon be available via the Crowd Supply website in the form of a modular, open source test and measurement chassis called the EEZ Bench Box 3.
Designed to provide a complete hardware and software framework that bridges the gap between – and combines the best features of – DIY hobbyist tools and professional benchtop equipment. The EEZ Bench Box 3 will soon be available to purchase directly from the Crowd Supply site although final pricing has not yet been confirmed.
Want to set up a remote DSLR for shooting a time-lapse? The Intervalometerator (AKA ‘intvlm8r’) is an open-source intervalometer that can help you do so at minimal hardware cost (as long as you’re comfortable tinkering with hardware and software).
Created by Sydney-based coder Greig Sheridan and his photographer partner Rocky over the course of a year, the Intervalometerator is designed to be both cheap and easy to build with familiar tools and using Raspberry Pi and Arduino microcontrollers.
“My partner and I have been working for over twelve months now on an intervalometer in order to shoot a DSLR-based time-lapse of the construction of our friends’ home in NZ,” Sheridan tells PetaPixel. “It was at the time a seemingly clever idea for a house-warming present, but it grew like tribbles to consume an incredible amount of effort.”
Despite the growing popularity of both Agile development and open-source practices, it’s not often that they come up in the same conversation. When these two concepts do intersect, it’s often to highlight the contradicting viewpoints that these two models supposedly represent.
While there are core differences, Agile doesn’t have to be the enemy of open source—in fact, I would argue the opposite.
In an effort to help its developers be more productive, Twilio has announced the beta version of Twilio CLI. It is an open-source command line interface that enables developers to access Twilio through their command prompt.
“It’s hard to beat the flexibility and power that a CLI provides at development time. Until now, there was no CLI designed for typical communications requirements,” Ashley Roach, the product manager for developer interfaces at Twilio, wrote in a post.
According to Statista, the open source market was valued at $11.4 billion in 2017 and is estimated to grow to $32.95 billion by 2022, showing it has no intention of slowing down anytime soon.
Founded on the belief that collaboration and cooperation build better software, open source sounds closer to a utopian dream than to the cold digital world of programming. Research showed that open source code now accounts for 57% of the code in applications, overtaking proprietary code. This has numerous benefits, such as speeding up the software development process or creating more effective and innovative software.
For example, open source frontend development frameworks, such as Angular, are often found in custom web apps, which allows companies to get their products to market at ever-increasing rates. In addition, companies tend to engage open source when at the cusp of technological innovation, especially when it comes to AR, blockchain, IoT, and AI.
To understand how open source works, it is important to appreciate where it all began. The very idea behind its inception isn’t exactly a new one. It’s been adopted by scientists for decades. Let’s imagine a scientist working on a project to develop a cure for an illness. If this scientist only published the results and kept the methods a secret, this would undoubtedly inhibit scientific discovery and further research in this area. On the other hand, teaming up with other researchers and making results and methodologies visible allows for greater and faster innovation.
This is the premise from which open source was originally born. Open source refers to software that has an open source code so it can be viewed, modified for a particular need, and importantly, shared (under license). One of the first well known open source initiatives was developed in 1998 by Netscape, which released its Navigator browser as free software and demonstrated the benefits of taking an open source approach. Since then, there have been a number of pivotal moments in open source history that have shaped the technology industry as we know it today. Nowadays, some of the latest technology you use on a daily basis, like your smartphone or laptop, will have been built using open source software.
[...]
Recent research found that 60 percent of organizations are already using open source software. Many businesses are realizing the benefits that the technology can bring in relation to driving innovation and reducing costs. This in turn is seeing a growing number of organizations integrate open source into their IT operations or even building entire businesses around it. With emerging technologies such as cloud, AI and machine learning only driving this adoption further, open source will continue to play a central and growing role throughout the technology landscape.
Whether or not you expect anyone to contribute to your project, you should be prepared for the possibility of others wanting to help your cause. And when that happens, your contributing guide will show those helpers exactly how they can get involved. This guide, usually in the form of a CONTRIBUTING.md file, should include information on how one should submit a pull request or open an issue for your project and what kinds of help you’re looking for (bug fixes, design direction, feature requests, etc.).
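A minimal CONTRIBUTING.md along the lines described might look like this (a generic sketch; the project structure and labels are placeholders, not a prescription):

```markdown
# Contributing

Thanks for your interest in helping out! Here's how to get involved.

## Reporting bugs

Open an issue describing what you expected, what actually happened, and
how to reproduce it. Please include your OS and software versions.

## Submitting changes

1. Fork the repository and create a feature branch.
2. Make your changes, with tests where practical.
3. Open a pull request that references any related issue.

## What we're looking for

Bug fixes, documentation improvements, design direction, and feature
requests are all welcome. For large changes, open an issue first so we
can discuss the approach before you invest significant time.
```

Keeping the guide short and concrete lowers the barrier for a first-time contributor far more than an exhaustive policy document would.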
According to a recent announcement, ForgeRock, a platform provider of digital identity management solutions, has launched its IoT Edge Controller, which is designed to provide consumer and industrial manufacturers the ability to deliver trusted identity at the device level.
The Apache® Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today the availability of the annual report for its 2019 fiscal year, which ended 30 April 2019.
332 active projects, 71 million lines of code changed, 7,000+ committers…
The Apache Software Foundation has published its annual report for fiscal 2019. The hub of a sprawling, influential open source community, the ASF remains in rude good health, despite challenges this year including the need for “an outsized amount of effort” dealing with trademark infringements, and “some in the tech industry trying to exploit the goodwill earned by the larger Open Source community.”
[...]
The ASF names 10 “platinum” sponsors: AWS, Cloudera, Comcast, Facebook, Google, LeaseWeb, Microsoft, the Pineapple Fund, Tencent Cloud, and Verizon Media
Yes, Apache is worth $20 billion by its own valuation of the software it offers for free. But what price can you realistically put on open source code?
If you only know the name Apache in connection with the web server then you are missing out on some interesting software. The Apache Software Foundation (ASF) grew out of the Apache HTTP Server project in 1999 with the aim of furthering open source software. It provides a licence (the Apache licence) and a decentralized governance model, and requires projects to license their code to the ASF so that it can protect the intellectual property rights.
Researchers have pinpointed errors in two dozen Apache Struts security advisories, which warn users of vulnerabilities in the popular open-source web app development framework. They say that the security advisories listed incorrect versions impacted by the vulnerabilities.
The concern from this research is that security administrators in companies using the actual impacted versions would incorrectly think that their versions weren’t affected – and would thus refrain from applying patches, said researchers with Synopsys who made the discovery, Thursday.
“The real question here from this research is whether there remain unpatched versions of the newly disclosed versions in production scenarios,” Tim Mackey, principal security strategist for the Cybersecurity Research Center at Synopsys, told Threatpost. “In all cases, the Struts community had already issued patches for the vulnerabilities so the patches exist, it’s just a question of applying them.”
The Google I/O companion app for Android often takes advantage of the latest design stylings and OS features. It demoed Android Q’s gesture navigation and dark theme this year, with the company today releasing the I/O 2019 source code.
Yesterday, Colin White, a Senior Android Engineer at Instacart, introduced Coroutine Image Loader (Coil). It is a fast, lightweight, and modern image loading library for Android backed by Kotlin.
Google today open-sourced the speech engine that powers its Android speech recognition transcription tool Live Transcribe. The company hopes doing so will let any developer deliver captions for long-form conversations. The source code is available now on GitHub.
Google released Live Transcribe in February. The tool uses machine learning algorithms to turn audio into real-time captions. Unlike Android’s upcoming Live Caption feature, Live Transcribe is a full-screen experience, uses your smartphone’s microphone (or an external microphone), and relies on the Google Cloud Speech API. Live Transcribe can caption real-time spoken words in over 70 languages and dialects. You can also type back into it — Live Transcribe is really a communication tool. The other main difference: Live Transcribe is available on 1.8 billion Android devices. (When Live Caption arrives later this year, it will only work on select Android Q devices.)
Firefox SVP David Camp doesn't want internet users wasting time 'understanding how the internet is watching you.'
Crypto trading bots have become an increasingly popular tool for experienced bitcoin traders who want to deploy automated bitcoin trading strategies. As a result, there are now over a dozen trading bots (with ranging subscription prices) that digital currency traders can use.
Fortunately, for traders who want to test out algorithmic trading before committing funds toward a specific bot, there are several free trading bots from which to choose. Here’s an introduction to the most popular free, open-source bitcoin trading bots available in 2019.
Audius, a blockchain startup that aims to disrupt the music streaming industry, has uploaded its public beta version.
A new streaming service with its sights set on making the middlemen of the music biz obsolete is inching closer toward its goal of disrupting the Spotifys and SoundClouds of the world.
After a year of development, and armed with $5 million in investment capital from VC firms General Catalyst, Lightspeed, and Pantera Capital, blockchain startup Audius is finally ready to show the world what it's been working on.
The least committed contributors were the first to leave as cryptocurrency market caps went south.
That’s the main finding from Electric Capital’s second “Developer Report,” which was published Monday. The report analyzes code activity in all the open-source repositories in crypto and follows the venture capital firm’s first such report from March.
While there’s a sense that protocols and projects have been losing code contributors, the majority of developers that left crypto during the market correction in the first half of 2019 (77 percent of them) were the least committed contributors to the least promising projects.
The world's largest and most innovative businesses are turning to enterprise open source databases for mission-critical applications, with the most popular open source relational databases being MariaDB, MySQL, and Postgres.
However, while all three of these databases are open source, mature, and available in enterprise editions, there are significant differences between them — both in terms of application development as well as database administration and operations.
DBTA recently held a webinar featuring Thomas Boyd, director of technical marketing, MariaDB Corporation, who discussed the differences between MariaDB, MySQL, and Postgres.
[...]
EnterpriseDB's Postgres is heap-only, while MySQL and MariaDB offer pluggable storage engines such as InnoDB, Columnar, Aria, MyRocks, and more.
Coming five weeks after the release of LibreOffice 6.2.5, the LibreOffice 6.2.6 maintenance update is here with months of back-ported fixes and all the latest security patches to make your LibreOffice experience more stable and reliable. That's why The Document Foundation now recommends the LibreOffice 6.2 series to users in production environments. LibreOffice 6.2.6 includes a total of 44 changes.
"The Document Foundation announces LibreOffice 6.2.6, the sixth minor release of the LibreOffice 6.2 family, targeted at users in production environments. All users of LibreOffice 6.1.x and LibreOffice 6.2.x versions should upgrade immediately for enhanced security, as the software includes both security fixes and some months of back-ported fixes," said Italo Vignoli.
The Document Foundation has received two different proposals for the organization of LibOCon 2020 from the Turkish and German communities. When this has happened in the past, in 2012 (Berlin vs Zaragoza) and 2013 (Milan vs Montreal), TDF Members have been asked to decide by casting their vote.
This document provides an outline of the two proposals, which are attached in their original format.
It’s been a long and winding road for Tumblr, the blogging site that launched a thousand writing careers. It sold to Yahoo for $1.1 billion in 2013, then withered as Yahoo sold itself to AOL, AOL sold itself to Verizon, and Verizon realized it was a phone company after all. Through all that, the site’s fierce community hung on: it’s still Taylor Swift’s go-to social media platform, and fandoms of all kinds have homes there.
Verizon sold Tumblr for a reported $3 million this week, a far cry from the billion-dollar valuation it once had. But to Verizon’s credit, it chose to sell Tumblr to Automattic, the company behind WordPress, the publishing platform that runs some 34 percent of the world’s websites. Automattic CEO Matt Mullenweg thinks the future of Tumblr is bright. He wants the platform to bring back the best of old-school blogging, reinvented for mobile and connected to Tumblr’s still-vibrant community, and he’s retaining all 200 Tumblr employees to build that future. It’s the most exciting vision for Tumblr in years.
Matt joined Verge reporter Julia Alexander and me on a special Vergecast interview episode to chat about the deal, how it came together, what Automattic’s plans for Tumblr look like, and whether Tumblr might become an open-source project, like WordPress itself. (“That would be pretty cool,” said Matt.)
Oh, and that porn ban.
ASD (Australian Signals Directorate) has open sourced its in-house data visualization and analysis app on the code repository, GitHub.
Dubbed Constellation, the software provides powerful analytics to enable data access, identify patterns in massive and complicated datasets, and handle billions of inputs – all in a simple and intuitive way. With this, users can import data in multiple formats and present the information in many different graphic views for deep analysis.
ASD touts Constellation as a data analysis application enabling data access, federation, and manipulation activities across large and complex datasets.
Bioengineers at Rice University created entangled cardiovascular networks similar to the body's natural passageways.
Strategic management experts say greater collaboration between the insurance industry and state policy makers, including investment in open-source risk models, could improve society's ability to recover from disasters linked to climate change.
Australia's #1 bad boy of EDM, Flume, made a surprise project announcement yesterday: Flume Sounds. He uploaded a nearly 8-minute video of samples to all his socials for fans and creators to manipulate.
Hot off the release of his new EP, 'Quits', Australian producer Flume has revealed Flume Sounds, an open-source audio loop series for producers.
Experts from the University of Alberta and two University of California campuses are teaming up to launch the world’s first open-source database for spinal cord injury research.
The Open Data Commons for preclinical Spinal Cord Injury research (ODC-SCI) will improve research and treatment worldwide by making data more accessible, according to researchers and patients.
“The database has the potential to improve treatment for up to half a million people suffering from spinal cord injuries worldwide, and also enhance research in other areas of health, science and rehabilitation,” said Randy Goebel, associate vice-president of research at the U of A.
While California students began taking a new statewide science test this past spring, school districts were still struggling to get teaching materials aligned to the state’s new science standards into classrooms.
A new nationwide effort is trying to speed up that process by offering free, open source science materials to teachers and schools.
In 2017, philanthropists, state leaders and curriculum writers formed OpenSciEd to get materials to teachers implementing the Next Generation Science Standards, new academic standards that emphasize hands-on projects and integrate several scientific disciplines.
California adopted the new standards in 2013 and this past spring began administering a new state science test. But it wasn’t until last November that the State Board of Education approved a list of recommended textbooks and materials aligned to the new standards for kindergarten through 8th grade.
[Fossa Systems], a non-profit youth association based out of Madrid, is developing an open-source satellite set to launch in October 2019. The FossaSat-1 measures 5x5x5 cm, weighs 250 g, and will provide free IoT connectivity by communicating LoRa and RTTY signals through low-power RF-based LoRa modules. The satellite is powered by 28%-efficient gallium arsenide TrisolX triple-junction solar cells.
The satellite’s development and launch cost under EUR 30,000, which is pretty remarkable for a cubesat — or a picosatellite, as the project is being dubbed. It has been working in the UHF Amateur Satellite band (435–438 MHz) and recently received an IARU frequency spectrum allocation of 125 kHz for LoRa.
The OpenHAK is an open-source fitness tracker in a 3D-printed wristwatch case that measures your heart rate and counts your steps, offering the resulting data for you to collect via Bluetooth. At its heart is a SparkFun Simblee module, with heart rate sensing through a Maxim MAX30101 and step counting by a Bosch BMI160. It’s designed for expandability from the start, with a header bringing out useful interface lines. In the prototype, they’ve used this to support a small OLED display. The result is a fitness tracker watch that may not match some of the well-known proprietary devices, but which remains completely open and probably costs a lot less too.
You might question whether we need another fitness wearable, but OpenHAK ($100) differentiates itself by being, well, open. Whereas other wearables often place barriers between you and your data, this device wants you to own and control everything. It’ll record step counts and heart rates, and send data to your phone – and only to your phone – in an easily accessible format, so you can later do whatever you want with it. This open philosophy extends to the hardware: the wearable cleverly integrates 18mm watch band support directly into the PCB; and breakout pins enable customisation, for example to add a display or vibration motor. If you like those ideas, but don’t fancy sourcing components yourself, grab one of the higher tiers in the crowdfunding campaign, and get everything at once – including a 3D-printed case to house everything.
First deployed in December 2018, the Codefresh Marketplace makes it easier for code developers to find commands without having to learn a proprietary API – every step, browsable in the pipeline builder, is a simple Docker image. The Marketplace contains a more robust set of pipeline steps provided both by Codefresh and partners, such as Blue-Green and Canary deployment steps for Kubernetes, Aqua security scanning, and Helm package and deployment. All plugins are open source and users can contribute to the collection by creating a new plugin.
Codefresh is the first Kubernetes-native CI/CD technology, with CI denoting Continuous Integration and CD denoting Continuous Delivery, obviously.
The organisation has this month worked to improve its open source marketplace with features that focus on faster code deployment.
First deployed in December 2018, the Codefresh Marketplace [kind of like an app store] allows developers to find commands without having to learn a proprietary API — this is because every step, which is browsable in the pipeline builder, is a simple Docker image.
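Since every pipeline step is just a Docker image, defining a step comes down to naming an image and the commands to run inside it. A minimal sketch of what such a step might look like in a codefresh.yml file — the image name and commands here are illustrative assumptions, not taken from the article:

```yaml
# Hypothetical codefresh.yml fragment: each step runs inside a Docker image.
version: '1.0'
steps:
  run_tests:
    title: Run unit tests inside a container
    image: node:12          # any public or private Docker image works here
    commands:
      - npm install
      - npm test
```

Because the step is only an image plus commands, swapping in a marketplace-provided step (say, a security scan or a Helm deployment) means referencing a different image rather than learning a new proprietary API.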
DevOps and Jenkins are on full display this week at CloudBees’ DevOps World | Jenkins World, taking place in San Francisco. In addition to the DevOps thought leaders and community members coming together to learn, explore and help shape the next generation of Jenkins and DevOps, a number of organizations took the opportunity to reveal new products.
[...]
SmartBear revealed TestEngine, a new solution designed to automate test execution in CI/CD environments. In addition, the company announced ReadyAPI 2.8 to accelerate functional, security and load testing of RESTful, SOAP, GraphQL and other web services. The new tools are aimed at accelerating API delivery.
Users can now execute ReadyAPI, SoapUI Pro and SoapUI Open Source tests simultaneously on a central source that’s integrated into their development processes. This tackles the challenges that Agile and DevOps teams face, such as complex deployments, large regression suites, and global development teams, according to a SmartBear post.
Matthew Broberg, Advocate and Editor at opensource.com says that in practice the implementation of DevRel has been far from consistent. "DevRel, in theory, is the intersection of three disciplines: engineering, marketing, and community management," he says. "In practice, DevRel applies to a wildly popular set of job titles with wildly different expectations across different organizations."
[...]
Rebecca Fitzhugh, Principal Technologist at Rubrik agrees. "While there is certainly a marketing component when representing the company to the customer and community, it's equally about representing the customer to the company," she says. "Our DevRel team brings feedback from our customers to the product and engineering team in order to drive a better developer experience against our product's APIs."
Respective leaders in DevOps and cloud computing are partnering to provide end-to-end application development automation from source to production...
SDM coordinates software delivery in an organization, serving as a sort of CRM for software delivery. The idea for SDM arose out of the notion that once companies use CI/CD, they realize they have created silos of data, processes, and teams. SDM is intended to capture signals from all the tools in use to show what is taking place.
For the majority of Defcon, hackers couldn't crack the $10 million secure voting machine prototypes that DARPA had set up at the Voting Village. But it wasn't because of the machines' security features, which the team had been working on for four months. The reason: technical difficulties during the machines' setup.
Eager hackers couldn't find vulnerabilities in the DARPA-funded project during the security conference in Las Vegas because a bug in the machines didn't allow hackers to access their systems over the first two days. (DARPA is the Defense Advanced Research Projects Agency.) Galois brought five machines, and each one had difficulties during the setup, said Joe Kiniry, a principal research scientist at the government contractor.
"They seemed to have had a myriad of different kinds of problems," the Voting Village's co-founder Harri Hursti said. "Unfortunately, when you're pushing the envelope on technology, these kinds of things happen."
It wasn't until the Voting Village opened on Sunday morning that hackers could finally get a chance to look for vulnerabilities on the machine. Kiniry said his team was able to solve the problem on three of them and was working to fix the last two before Defcon ended.
At the country's biggest election security bonanza, the US government is happy to let hackers try to break into its equipment. The private companies that make the machines America votes on, not so much.
The Def Con Voting Village, a now-annual event at the US's largest hacking conference, gives hackers free rein to try to break into a wide variety of decommissioned election equipment, some of which is still in use today. As in the previous two years, they found a host of new flaws. The hunt for vulnerabilities in US election systems has underscored tensions between the Voting Village organizers, who argue that it's a valuable exercise, and the manufacturers of voting equipment, who didn't have a formal presence at the convention.
Carbon Black, the cybersecurity and endpoint protection software provider, has unveiled the Binee open-source binary emulator for real-time malware analysis. The company announced Binee at last week’s DEF CON 27 hacker conference in Las Vegas, Nevada.
[...]
Carbon Black also has been gaining momentum with MSPs and MSSPs over the past few months. In fact, Carbon Black recorded revenue of $60.9 million and a net loss of $14.6 million in the second quarter of 2019; both of these figures generally beat Wall Street’s expectations.
The call for collaborative projects in the area of information communication technologies led to the genesis of the Open-Source Cyber Fusion Centre, a project that will provide companies with a wide array of tools and methodologies for cybersecurity.
The project is a joint initiative with Carleton University and two industrial partners, eGloo and AvanTech, all of which have recognized expertise in open-source software application programming interfaces (APIs) and technology stacks.
[...]
The Open-Source Cyber Fusion Centre’s ongoing research will help strengthen and democratize the Canadian economy. By mitigating cyberthreats, projects of this kind promote entrepreneurship and help nurture a more diverse economy.
In addition, the centre provides students with unique opportunities to participate in an ever-changing, complex cybersecurity industry that is becoming increasingly prevalent in Canada.
SMEs can get in touch with the centre and its partners to receive support on their security operations. They can install advanced technologies in their corporate network as a free service to monitor the security of their operations.
Open Source Security Podcast helps listeners better understand security topics of the day. Hosted by Kurt Seifried and Josh Bressers, the pair covers a wide range of topics including IoT, application security, operational security, cloud, devops, and security news of the day.
McAfee researchers have uncovered a remote code execution (RCE) vulnerability in open-source software from a popular line of Avaya VoIP phones.
McAfee is warning organizations that use Avaya VoIP phones to check that the firmware on the devices has been updated. Avaya’s install base covers 90% of the Fortune 100, with products targeting customers from small businesses and the midmarket to large corporations.
Hundreds of thousands of Brazilians took to the streets of 211 cities on August 13 to protest far-right Brazilian President Jair Bolsonaro’s austerity cuts and privatization plans for the public university system. It was the third in a series of national education strikes, dubbed “the Education Tsunamis,” organized by national student unions together with teacher unions affiliated with the Central Única dos Trabalhadores (Unified Workers Central/CUT)—the second-largest labor union confederation in the Americas.
Organized from the bottom up by teachers, high school and university students, through thousands of democratic assemblies across the country, communication between activists in the different towns and cities ensured that the August 13 street protests were staggered throughout the day to achieve maximum impact. Starting in smaller cities during the morning rush hour, with protests numbering in the low thousands, they increased in size as the day progressed, with crowds of 30,000–50,000 in larger cities like Recife, culminating during the evening rush hour in Brazil’s three largest cities, with an estimated crowd of 100,000 shutting down Avenida Paulista in the heart of São Paulo’s financial district.
There, instead of the usual honking cars, groups of teenagers danced and sang things like, “I want education, to be intelligent, because for stupid we already have our president.” Thousands of older people came out in solidarity with the teachers and students, and the atmosphere was one of hope against Bolsonaro’s sub-fascist project, and its attempt to purge the education system of critical thinking through a revival of the old Nazi trope of “Cultural Marxism.”
In short, it seemed like the perfect feel-good event for newspapers like the Guardian and the New York Times to share with their liberal readers. After all, after the US, Brazil is the most populous, largest in area and wealthiest nation in the Americas. After all, both newspapers have taken editorial positions against Bolsonaro, and regularly criticize his environmental and human rights abuses. After all, both papers have run numerous articles celebrating the spirit of the young protesters in Hong Kong and Venezuela in recent months, complete with inspiring quotes and photographs from the ground.
Platforms like Facebook, YouTube, and Twitter are banking on developing artificial intelligence technology to help stop the spread of hateful speech on their networks. The idea is that complex algorithms that use natural language processing will flag racist or violent speech faster and better than human beings possibly can. Doing this effectively is more urgent than ever in light of recent mass shootings and violence linked to hate speech online.
But two new studies show that AI trained to identify hate speech may actually end up amplifying racial bias. In one study, researchers found that leading AI models for processing hate speech were one-and-a-half times more likely to flag tweets as offensive or hateful when they were written by African Americans, and 2.2 times more likely to flag tweets written in African American English (which is commonly spoken by black people in the US). Another study found similar widespread evidence of racial bias against black speech in five widely used academic data sets for studying hate speech that totaled around 155,800 Twitter posts.
This is in large part because what is considered offensive depends on social context. Terms that are slurs when used in some settings — like the “n-word” or “queer” — may not be in others. But algorithms — and content moderators who grade the test data that teaches these algorithms how to do their job — don’t usually know the context of the comments they’re reviewing.
Both papers, presented at a recent prestigious annual conference for computational linguistics, show how natural language processing AI — often proposed as a tool to objectively identify offensive language — can amplify the same biases that human beings have. They also show how the data sets that feed these algorithms have bias baked in from the start.
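The disparity the first study reports is, at bottom, a ratio of flag rates between dialect groups. A minimal sketch of that measurement — the 0/1 labels below are synthetic placeholders, not data from the studies, which used real annotated Twitter corpora:

```python
# Sketch: measuring flag-rate disparity between two dialect groups.
# Labels are synthetic (1 = classifier flagged the tweet as offensive).

def flag_rate(predictions):
    """Fraction of items a classifier flagged as offensive."""
    return sum(predictions) / len(predictions)

aae_flags = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]  # tweets in African American English
other_flags = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]  # tweets in other varieties

disparity = flag_rate(aae_flags) / flag_rate(other_flags)
print(f"AAE tweets flagged {disparity:.1f}x as often")  # prints "AAE tweets flagged 2.3x as often"
```

A real audit would compare rates on tweets with matched ground-truth labels, so the ratio reflects false positives rather than genuine differences in content.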
The platform is the centrepiece of the new London Medical Imaging & AI Centre for Value-Based Healthcare at King's College London (KCL), where algorithms are being trained on an enormous trove of NHS medical images and patient pathway data to create new healthcare tools. The centre is focused on improving the experience for patients and their clinical outcomes across 12 pathways in oncology, cardiology and neurology.
Motherboard says contractors earning just $12–$14 an hour are expected to transcribe and classify Cortana voice commands into more than two dozen topic areas, including gaming, email, communication, events, home automation, and media control. These transcribed recordings are used to help teach the Cortana assistant to better understand speech. Contractors are expected to work through a grueling 200 classification tasks an hour — more than three a minute, or one every 18 seconds on average. They do have the potential to earn a bonus of an additional $1 an hour, according to contracts shared with Motherboard.
Trackers are technologies that are invisible to the average web user yet designed to keep tabs on where they go and what they look at online — typically for ad targeting. But web user profiling can have much broader implications than just creepy ads, potentially affecting the services people can access or the prices they see. Trackers can also be a conduit for hackers to inject actual malware, not just adtech.