LLM Slop Versus Richard Stallman

Published Less Than a Day Ago:
That reads a bit like LLM slop, and it was published only hours ago. So I checked:
As Stallman put it only days ago: "The point is that when people hear about these large language models and they suppose that they are intelligent and then they see text generated by one, they believe it. They assume that this program understands the text that it generated, but they don't ever understand. If you assume that it understands, then you say, 'how did it make this stupid mistake?' Because they make mistakes all the time. They make obvious mistakes rather often, but even worse is when they make unobvious mistakes, like the lawyer who asked one of these chatbots, 'give me a list of pertinent legal decisions, that are pertinent to deciding this case'. And the chatbot invented plausible-looking references to non-existent cases. And the lawyer said, 'ah, this is an artificial intelligence, it must know'. So he put those in his brief, in his filing, and he was laughed at by the judge once it was discovered that those cases did not exist. Those cases were fictitious because they sounded right. All that an LLM knows how to do is make text that sounds plausible. Whether it's true or not, that's something beyond the understanding of a language model. They're not designed to do that. They have no idea of semantics."
For the hard facts see stallmansupport.org. For a real interview, also published some hours ago, see mundodeportivo.com:
This article is real, but it's not in English. █