LLM Slop Does Not Know People (It Knows Nothing) and Cannot Distinguish Between People. It's a Recipe for Disaster.

LLMs have no real understanding of anything. They rely on scanning many texts and assigning probabilities accordingly. That's it.
This means that, when it comes to people, they have no way of knowing who's who, especially when so many people have overlapping names, not to mention businesses and institutions that share the same name (words/strings/tokens).
Set aside how easily LLMs can be gamed both internally (by their owner/s) and externally (by manipulators via inputs or I/O feedback loops).
That people - some people at least! - are willing to blindly place trust in LLMs (because some sleazy companies call LLMs "intelligence") is worrying. People get described as something they're not, history is being rewritten with glaring errors, and people who want to know facts about conflicts are being fed lies, reinforced by disinformation disseminated en masse (it's "fair play" because "in war, truth is the first casualty").

A conduit for the above-mentioned "disinformation disseminated en masse" is typically social control media, where "farming" or "seeding" of falsehoods is easy; slopfarms, by contrast, require many domains and recognition by scanners like LLMs (for visibility), unlike 'troll farms' in social control media.
As we said this morning, "There Has Never Been a Better Time to Quit Social Control Media". And yes, indeed, it is full of slop. It is military-grade propaganda. █
