Misplacing Blame for Security Problems, Sometimes With LLM Slop That Blames "Linux" for Microsoft's Failures
Broken telephones and stochastic parrots beget plenty of Fear, Uncertainty, and Doubt (FUD)
Moments ago I responded to a misinformation pattern that became very prevalent this past week: GitHub (Microsoft) and/or NPM (Microsoft) sends malware to computers - sometimes servers - and then the whole thing gets blamed on Go(Lang), Linux, etc. There are 3 new examples of this in "Microsoft Transmits Malware to Computers, Media Blames the Victims", but I saw about a dozen this past week. Replying to this misinformation can be tiring, and I saw slopfarms parroting this same FUD, so one might essentially be combating bots, not human authors. It does not scale. It's pointless. Speaking of bots or chatbots, this is a fake new 'article' promoting Windows under the guise of Linux.
This is not only a fake article from a slopfarm. It also promotes a falsehood, or misinformation, by re-announcing something that happened many years ago (albeit not the same version).
Way to distract Windows users from the real alternatives, discouraging adoption of the "real thing".
Here's that same slopfarm spewing out more LLM slop or misinformation:
From the sister slopfarm too (they do this in tandem, Google News picks up both):
Same slopfarm, same day:
Google News links to all the above as "Linux" news. What a disgrace. At the end of last month I showed that nowadays Google fancies spreading lies, and the following day we saw Akira Urushibata reporting the same thing. Here's what he wrote this past Wednesday (the 7th of May) to libreplanet-discuss:
There is much news on the accomplishments of AI, adoption thereof by leading companies, and the great amount of money invested in LLM model research and the construction of data centers. As such it is not easy to convince people that AI has limitations and should be used with great care. An effective way to get people interested in the problem is to show glaring examples of hallucinations. If you ever encounter one you should record it and show it around.
Thinking machines have defeated the best human chess and go players of the world. This makes people think that AI is highly intelligent. I believe the process computers use to analyze chess positions is different from the large language models used to generate text responses to questions. Unfortunately both are loosely referred to as "AI" and few people bother to look into the difference.
---
"You Can't Lick a Badger Twice": Google's AI Is Making Up Explanations for Nonexistent Folksy Sayings - Futurism https://futurism.com/google-ai-overviews-fake-idioms
This is getting ridiculous.
Apr 23, 2:17 PM EDT by Victor Tangermann
Have you heard of the idiom "You Can't Lick a Badger Twice?"
We haven't, either, because it doesn't exist - but Google's AI seemingly has. As netizens discovered this week, adding the word "meaning" to nonexistent folksy sayings causes the AI to cook up invented explanations for them.
"The idiom 'you can't lick a badger twice' means you can't trick or deceive someone a second time after they've been tricked once," Google's AI Overviews feature happily suggests. "It's a warning that if someone has already been deceived, they are unlikely to fall for the same trick again."
Author Meaghan Wilson-Anastasios, who first noticed the bizarre bug in a Threads post over the weekend, found that when she asked for the "meaning" of the phrase "peanut butter platform heels," the AI feature suggested it was a "reference to a scientific experiment" in which "peanut butter was used to demonstrate the creation of diamonds under high pressure."
There are countless other examples. ...
---
A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse - New York Times https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html
A new wave of "reasoning" systems from companies like OpenAI is producing incorrect information more often. Even the companies don't know why.
By Cade Metz and Karen Weise
Cade Metz reported from San Francisco, and Karen Weise from Seattle. Published May 5, 2025; updated May 6, 2025, 4:26 p.m. ET
Last month, an A.I. bot that handles tech support for Cursor, an up-and-coming tool for computer programmers, alerted several customers about a change in company policy. It said they were no longer allowed to use Cursor on more than one computer.
In angry posts to internet message boards, the customers complained. Some canceled their Cursor accounts. And some got even angrier when they realized what had happened: The A.I. bot had announced a policy change that did not exist.
"We have no such policy. You're of course free to use Cursor on multiple machines," the company's chief executive and co-founder, Michael Truell, wrote in a Reddit post. "Unfortunately, this is an incorrect response from a front-line A.I. support bot."
---
"When you know something say 'I know this.' When you don't know say 'I don't know.' Such is 'to know'." - "The Analects of Confucius"
Sometimes the deceit involves weak journalism, not LLM slop. Consider the news article "Botnet Made Up Of 7,000 End-Of-Life Routers Taken Down", which was just published on a legitimate site.
An associate explains that "this is a cost caused by the OSI, which allows licensing to be violated with impunity. If the routers were license compliant this would be less of an issue or maybe none at all, especially if vendors complied with either the Linux license or OpenWRT's license. A great many proprietary routers run stolen variants of OpenWRT, nearly all the rest run some other embedded Linux. tldr; the OSI is complicit."
We'll resume the OSI series later this weekend. We just waited for the controversy to calm down some more before we open up more scandals, or cans of worms. OSI coverage in Techrights might go on and on till August; we're in no particular hurry and we have plenty more to show.
█