Bonum Certa Men Certa

Slopwatch: Stigma-Baiting by the Serial Sloppers and Latest Garbage From the Slopfarm LinuxSecurity.com (Also Slopping Away at "OpenBSD" With SEO SPAM Made by LLMs)

posted by Roy Schestowitz on May 02, 2025

A few hours ago I saw this 'article':

Ditch Microsoft Windows for ALT Workstation 11: A Russian Linux distro with a modern GNOME desktop

Upon some analysis, the first few paragraphs are mildly edited LLM slop. The remainder looks like nothing but a word salad spewed out by LLMs:

While this distro is clearly made with Russian users in mind, it’s surprisingly accessible for anyone looking to escape the limitations of Windows 11. There’s no lock-in, no account nagging, and no artificial hardware restrictions. ALT Workstation 11 simply works. Download an ISO here.

Typical Fagioli. BetaNoise still tolerates this. Slop images, slop text. Anything goes...

But then there's the slopfarm LinuxSecurity.com, which targets not just "Linux" with SEO SPAM produced by LLMs. Now there's this:

When Security Hinges on a Single Key: A Wake-Up Call for Admins

This is basically slop about what happened in Kali Linux:

Let's examine how this incident occurred, its impact on Kali Linux users, and what we can learn from this distressing event.

If that's not bad enough, they also targeted the OpenBSD 7.7 release on the same day:

OpenBSD 7.7 Released

Being a slopfarm (which is what LinuxSecurity.com is), of course these fakers didn't bother writing:

In this article, we’ll walk through the highlights of the OpenBSD 7.7 release and explore why its features should command our attention.

These slopfarms, and the people who sling around fake images that fuse together other people's work without attribution (a form of plagiarism), don't just spread misinformation about Linux and the BSDs. There are slopfarms which target other themes; we leave them aside because they are less relevant to us.

Microsoft et al are trying to profit from blurring away information. They call it "AI", but it has nothing to do with intelligence. The other day we quoted a message by Akira Urushibata, who complained about Google misinforming him. "Obviously the above text was generated by AI," he said, but calling it AI is also wrong. Jean Louis responded to him yesterday at libreplanet-discuss:

> Other items of the list discussed ticket prices, access to the
> grounds.  Obviously the above text was generated by AI, for it is
> unlikely for a human being to err in this manner, but Google does not
> clarify.  (Confirmed April 28 Japan time)  So, one part of Google
> (probably AI) treats the exposition as canceled while another part
> understands that it is in progress.  There is a failure to notice
> an obvious contradiction.

It is their organization. You can complain to them. Realistically, you can't do much about organizations that are not under your control.
Don't expect "corporate responsibility"; they are money-driven, and their responsibility is to making income, not to random people.
> Can AI distinguish between "free" as in "free drinks" and "free"
> as in "freedom"?
Run the so-called "AI", or Large Language Model (LLM), on your own computer and find out!
See https://www.gnu.org/philosophy/words-to-avoid.html#ArtificialIntelligence
The word AI as such is not wrong, but when used in the wrong context, for sensationalism or anthropomorphism of computer programs, it leads people to wrong conclusions. The point is that it doesn't think.
Your message proves that it doesn't think.
Back to the question of whether a Large Language Model (LLM) can understand "free" as in freedom, as related to free software:
- ask your favorite LLM what free software is. A sample answer by the GLM-4-9B-0414-Q8_0.gguf Large Language Model (LLM) shows that it can understand it: https://gnu.support/files/tmp/clipboard-2025-05-01-08-35-21.html
If you start talking to the LLM in the context of price, it may well answer in that context. Let's try:
> I can't afford Microsoft proprietary software, should I instead use
> free software?
Answer: https://gnu.support/files/tmp/clipboard-2025-05-01-08-37-51.html
The answer is quite aligned with the meanings of free software.
I have 100 other models which I could test, but there is no need to exaggerate. I can tell you that my experience is that it does understand the meaning of free software, as the majority of models have been trained on information which provides those meanings.
> We should watch out for this.
You can watch, but it is a waste of time.
If you wish to make things right, take datasets and fine-tune some models. But I think that is also a waste of time.
The effort to correct society on the importance of free software, especially when common knowledge lacks sufficient references, would be like trying to teach a new language to a community that still speaks an old one.
The reality is that there is sufficient information on free software, and that information is part of every Large Language Model (LLM) I have tried out so far, which is more than 100 different Large Language Models (LLMs).
> The above makes me think that we should not expect much.
Expecting is the passive attitude of an observer. You can be causal and influence society.
First is to understand that you can run the Large Language Model (LLM) locally.
THUDM/GLM-4: GLM-4 series: Open Multilingual Multimodal Chat LMs | 开源多语言多模态对话模型 https://github.com/THUDM/GLM-4
Get your hardware and run it locally.
There is no point in staying behind computer technology developments. Run it locally, do the research, understand that there are many different Large Language Models (LLMs), and then start benchmarking and testing.
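As a concrete starting point, a local run might look like the sketch below. This is a hedged how-to fragment, not the thread's exact setup: it assumes llama.cpp as the runtime, and the model path and file name (the GLM-4-9B quantisation mentioned above, saved under ~/models/) are assumptions for illustration.

```shell
# Sketch only: assumes a working compiler/CMake and a GGUF model file
# already downloaded to ~/models/ (e.g. from Hugging Face).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# Ask the local model the question from the thread; nothing leaves the machine.
./build/bin/llama-cli \
  -m ~/models/GLM-4-9B-0414-Q8_0.gguf \
  -p "What is free software?"
```

Any other GGUF-format model can be substituted for the `-m` argument; benchmarking many models, as described above, is then just a loop over model files.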
> Even if we observe AI getting the distinction right in a number of
> cases we should not expect it to be always correct.
Good conclusion!
I consider all information generated by the Large Language Model (LLM) as a draft of a possibly correct answer.
> For one thing many people are confused and much has been written
> based on the mistake. It is unreasonable to expect that none of the
> erroneous material has been fed to the large language model (LLM)
> neural networks for training. In addition the above example
> indicates that AI is not good at coping with lack of integrity.
> Perhaps a contradiction which is obvious to a human being is not
> so for AI.
What is erroneous material or not is not a subject for the Large Language Model (LLM). That is a subject of human discussion and opinion.
There is no right or wrong for the Large Language Model (LLM).
Datasets used in training are, as a rule, biased by the people who created the information.
It is like watching TV stations all over the world. You cannot get the single objective truth, but you get a variety of human information, where you need to keep to the principle of deciding what is right or wrong for yourself.
Anybody who thinks “let me teach Large Language Model [LLM] what is right or wrong” is automatically creating a new biased and censored version that won’t represent information to the user so they can decide about it, as what was considered wrong by the author may not be wrong for the user.
Your attempt at faulting any Large Language Model (LLM) is futile. It is a text generation tool. To blame anyone, a precondition is knowing who was responsible for the "training" of the Large Language Model (LLM). And why do that in the first place? Anybody can make their own Large Language Model (LLM), similarly to how people create websites individually.
> Related article: Researchers have found methods to investigate what
> is going on inside the neural networks that power AI. They have
> discovered that the process differs greatly from human reasoning.
There are sciences of mind, though so far there is no consensus on how the human mind works exactly. I don't think we have stable information about human reasoning that could be compared to computer-based neural networks.
But when I hear that researchers have found methods to investigate what is going on inside the neural networks, it only means that those were beginners, and that there were previous researchers who created neural networks knowing exactly what is going on.
So the article is in that sense contradictory and sensationalist.
> We Now Know How AI 'Thinks'--and It's Barely Thinking at All
> https://www.yahoo.com/finance/news/now-know-ai-thinks-barely-010000603.html
It is not even worth reading. It is made for Yahoo readers, part of the entertainment.
It is well established that a Large Language Model (LLM) doesn't think. There is no "barely thinking"; there is ZERO thinking. It is data and a computer program giving you output of that data, giving the impression of mimicking, due to the use of natural language.
I think it is the greatest tool of the 21st century so far, as it empowers humans to be 10 or 100 times faster at various tasks.
Of course, it doesn't think.
> Though I don't follow developments in this field closely, what is
> written in this article confirms my suspicions.
Good cognition!
> Another way to see this is that the AI developers now have the
> equivalent of a debugger which allows them to probe the internal
> process. This is likely to affect development.
Of course! All tokens can be seen and analysed; the layers of the Large Language Model (LLM) can be visually inspected and analysed in every detail when necessary.
Jean Louis

J Leslie Turriff also said: "Far too many people believe the statements that we see about AI that imply that the software can think, when in reality it just strings words together in plausible sequences." "The very term "Artificial Intelligence" is a misnomer; the software is incapable of thinking, only responding to the text of its prompts, and sometimes the response is completely wrong."

"The issue is more fundamental than whether it can distinguish between "libre" and "gratis"; it doesn't think, so it doesn't distinguish."
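The claim that such software "just strings words together in plausible sequences" can be illustrated with a deliberately tiny sketch. This is a toy bigram sampler, nothing like a real LLM in scale or architecture, but it shows the same principle: data plus a program yields plausible-looking text, with zero thinking involved.

```python
# Toy illustration (NOT a real LLM): a bigram model that "strings words
# together in plausible sequences" drawn from its training data.
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words that followed it in the data."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=8, seed=0):
    """Emit a sequence by repeatedly sampling a recorded continuation."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Hypothetical miniature "training data" for the demonstration.
corpus = ("free software means software that respects freedom "
          "free software is not about price")
model = train_bigrams(corpus)
print(generate(model, "free"))
```

The output reads vaguely like the corpus, yet the program holds no concept of "free", "freedom", or "price"; it only replays statistics of the data it was fed. Scaled up by many orders of magnitude, that is still generation, not thinking.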

So the starting point is: quit calling it "Artificial Intelligence". You help the companies, and charlatans like Scam Altman who lie about the capabilities, each time you relay those terms, which are typically misapplied by intention.
