Bonum Certa Men Certa

Slopwatch: Stigma-Baiting by the Serial Sloppers and Latest Garbage From the Slopfarm LinuxSecurity.com (Also Slopping Away at "OpenBSD" With SEO SPAM Made by LLMs)

posted by Roy Schestowitz on May 02, 2025

A few hours ago I saw this 'article':

Ditch Microsoft Windows for ALT Workstation 11: A Russian Linux distro with a modern GNOME desktop

Upon some analysis, the first few paragraphs appear to be mildly edited LLM slop. The remainder looks like nothing but a word salad spewed out by LLMs:

While this distro is clearly made with Russian users in mind, it’s surprisingly accessible for anyone looking to escape the limitations of Windows 11. There’s no lock-in, no account nagging, and no artificial hardware restrictions. ALT Workstation 11 simply works. Download an ISO here.

Typical Fagioli. BetaNoise still tolerates this. Slop images, slop text. Anything goes...

But then there's the slopfarm LinuxSecurity.com, which doesn't target only "Linux" with its SEO SPAM produced by LLMs. Now there's this:

When Security Hinges on a Single Key: A Wake-Up Call for Admins

This is basically slop about what happened in Kali Linux:

Let's examine how this incident occurred, its impact on Kali Linux users, and what we can learn from this distressing event.

If that's not bad enough, they also targeted the OpenBSD 7.7 release on the same day:

OpenBSD 7.7 Released

Being a slopfarm (which is what LinuxSecurity.com is), of course these fakers didn't bother writing:

In this article, we’ll walk through the highlights of the OpenBSD 7.7 release and explore why its features should command our attention.

These slopfarms, and people who sling around fake images that fuse together other people's work without attribution (a form of plagiarism), don't just spread misinformation about Linux and the BSDs. There are slopfarms which target other themes; we leave them aside because they are less relevant to us.

Microsoft et al are trying to profit from blurring away information. They call it "AI", but it has nothing to do with intelligence. The other day we quoted a message by Akira Urushibata, who complained about Google misinforming him. "Obviously the above text was generated by AI," he said, but calling it AI is also wrong. Jean Louis responded to him yesterday at libreplanet-discuss:

> Other items of the list discussed ticket prices, access to the
> grounds.  Obviously the above text was generated by AI, for it is
> unlikely for a human being to err in this manner, but Google does not
> clarify.  (Confirmed April 28 Japan time)  So, one part of Google
> (probably AI) treats the exposition as canceled while another part
> understands that it is in progress.  There is a failure to notice
> an obvious contradiction.

It is their organization. You can complain to them. Realistically, you can't do much about organizations that are not under your control.
Don't expect "corporate responsibility": they are money-driven, with a responsibility to income, not to random people.
> Can AI distinguish between "free" as in "free drinks" and "free" as in
> "freedom"?
Run the so-called "AI" or the Large Language Model (LLM) on your own computer and find out!
See https://www.gnu.org/philosophy/words-to-avoid.html#ArtificialIntelligence
The word AI as such is not wrong, but when used in the wrong context, for sensationalism or the anthropomorphism of computer programs, it leads people to wrong conclusions. The point is that it doesn't think.
Your message proves that it doesn't think.
Back to the question of whether a Large Language Model (LLM) can understand "free" as in freedom, related to free software:
- Ask your favorite LLM what free software is. A sample answer by the GLM-4-9B-0414-Q8_0.gguf Large Language Model (LLM) shows that it can understand it: https://gnu.support/files/tmp/clipboard-2025-05-01-08-35-21.html
If you start talking to the LLM in the context of price, it will probably answer in that way. Let's try:
> I can't afford Microsoft proprietary software, should I instead use free
> software?
Answer: https://gnu.support/files/tmp/clipboard-2025-05-01-08-37-51.html
The answer is quite aligned with the meaning of free software.
I have another 100 models which I could test, but there is no need to exaggerate. My experience is that they do understand the meaning of free software, as the majority of models have been trained on information which provides those meanings.
> We should watch out for this.
You can watch, but it is a waste of time.
If you wish to make things right, take datasets and fine-tune some models. But I think that is also a waste of time.
The effort to correct society on the importance of free software, especially when common knowledge lacks sufficient references, would be like trying to teach a new language to a community that still speaks an old one.
The reality is that there is sufficient information on free software, and that information is part of each Large Language Model (LLM) I have tried out so far, which is more than 100 different models.
> The above makes me think that we should not expect much.
Expecting is the passive attitude of an observer. You can be causal and influence society.
The first step is to understand that you can run the Large Language Model (LLM) locally.
THUDM/GLM-4: GLM-4 series: Open Multilingual Multimodal Chat LMs | 开源多语言多模态对话模型 https://github.com/THUDM/GLM-4
Get your hardware and run it locally.
There is no point in staying behind computer technology developments. Run it locally, do the research, understand that there are many different Large Language Models (LLM), and then start benchmarking and testing.
> Even if we observe AI getting the distinction right in a number of
> cases we should not expect it to be always correct.
Good conclusion!
I consider all information generated by the Large Language Model (LLM) as a draft of a possibly correct answer.
> For one thing many people are confused and much has been written
> based on the mistake. It is unreasonable to expect that none of the
> erroneous material has been fed to the large language model (LLM)
> neural networks for training. In addition the above example
> indicates that AI is not good at coping with lack of integrity.
> Perhaps a contradiction which is obvious to a human being is not
> so for AI.
What is erroneous material or not is not a subject for the Large Language Model (LLM). That is a subject of human discussions and opinions.
There is no right or wrong for the Large Language Model (LLM).
Datasets used in training are, as a rule, biased by the people who created the information.
It is like watching TV stations from all over the world. You cannot get the single objective truth, but you get a variety of human information, and you have to keep to the principle of deciding for yourself what is right or wrong.
Anybody who thinks “let me teach Large Language Model [LLM] what is right or wrong” is automatically creating a new biased and censored version that won’t represent information to the user so they can decide about it, as what was considered wrong by the author may not be wrong for the user.
Your attempt to prove any Large Language Model (LLM) wrong is futile. It is a text generation tool. To blame anyone, a precondition is to know who was responsible for the "training" of the Large Language Model (LLM). And why do that in the first place? Anybody can make their own Large Language Model (LLM), similarly to how people create websites individually.
> Related article: Researchers have found methods to investigate what is
> going on inside the neural networks that power AI. They have
> discovered that the process differs greatly from human reasoning.
There are sciences of the mind, though so far there is no consensus on exactly how the human mind works. I don't think we have stable information about human reasoning that could be compared to computer-based neural networks.
But when I hear that researchers have found methods to investigate what is going on inside the neural networks, it only means that those were beginners; there were previous researchers who created neural networks knowing exactly what is going on.
So the article is in that sense contradictory and sensationalist.
> We Now Know How AI 'Thinks'--and It's Barely Thinking at All
> https://www.yahoo.com/finance/news/now-know-ai-thinks-barely-010000603.html
It is not even worth reading. It is made for Yahoo readers, as part of the entertainment.
It is well established that a Large Language Model (LLM) doesn't think. There is no "barely thinking"; there is ZERO thinking. It is data and a computer program giving you output of that data, giving the impression of mimicry due to its usage of natural language.
I think it is the greatest tool of the 21st century so far, as it empowers humans to be 10 or 100 times faster in various tasks.
Of course, it doesn't think.
> Though I don't follow developments in this field closely, what is
> written in this article confirms my suspicions.
Good cognition!
> Another way to see this is that the AI developers now have the
> equivalent of a debugger which allows them to probe the internal
> process. This is likely to affect development.
Of course! All tokens can be seen and analysed; layers of the Large Language Model (LLM) can be visually inspected and analysed in every detail when necessary.
Jean Louis
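The "run it locally" advice in the message above can be sketched in a few lines of Python. This is a minimal sketch, not anything the message prescribes: it assumes the llama-cpp-python package (GGUF, as in the file name cited above, is the llama.cpp model format) and a model file you have already downloaded; the path and helper name are illustrative.

```python
from pathlib import Path

def ask_local_llm(model_path, prompt, max_tokens=256):
    """Send a prompt to a locally stored GGUF model, if one is present."""
    if not Path(model_path).exists():
        return None  # no model downloaded yet
    # Imported lazily so the sketch runs even before the package is installed.
    from llama_cpp import Llama  # pip install llama-cpp-python
    llm = Llama(model_path=model_path)
    reply = llm(prompt, max_tokens=max_tokens)
    return reply["choices"][0]["text"]

# The file name matches the model cited in the message; adjust to your download.
answer = ask_local_llm("./GLM-4-9B-0414-Q8_0.gguf", "What is free software?")
print(answer if answer else "Model file not found; download a GGUF model first.")
```

Everything stays on your own machine: no account, no API key, no data sent anywhere, which is the point being made.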

J Leslie Turriff also said: "Far too many people believe the statements that we see about AI that imply that the software can think, when in reality it just strings words together in plausible sequences." "The very term "Artificial Intelligence" is a misnomer; the software is incapable of thinking, only responding to the text of its prompts, and sometimes the response is completely wrong."

"The issue is more fundamental than whether it can distinguish between "libre" and "gratis;" it doesn't think, so doesn't distinguish."
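The "strings words together in plausible sequences" description can be illustrated with a toy example: a bigram generator (a hypothetical, vastly simplified stand-in for an LLM) that picks each next word purely from observed word-pair frequencies, with no notion of meaning at all.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which words were seen following which in the training text."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, length, seed=0):
    """String words together by sampling from observed successors."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        successors = table.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

table = train_bigrams("free software respects freedom free software respects users")
print(generate(table, "free", 3))  # e.g. "free software respects users"
```

The output is locally plausible, yet nothing in the program "knows" what freedom or software is; scaled up by many orders of magnitude, that is the gist of the criticism quoted above.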

So the starting point is, quit calling it "Artificial Intelligence". You help the companies and charlatans like Scam Altman who lie about the capabilities each time you relay those terms, which are typically misapplied by intention.
