LLMs Are Not a Form of Intelligence (They Never Will Be)
Butterflies are smarter than "chatGPT"; they actually have smarts, not oversized tables correlating strings
THE LLM 'fatigue' (growing frustration with the false promises of LLMs) isn't something that will go away. Many people assume that LLMs just need more time (and datacentres) and that somehow some "magic" will come along.
People who naively assume this typically don't understand how LLMs work, or worse, they are constantly being lied to with the assistance of media hacks (paid parrots of companies like Microsoft). They hear new buzzwords like "uber" and "super" and "hyper" intelligence (pure marketing from a hyped-up Sam Altman, a creation of PR cults) while Microsoft shuffles brands, increments version numbers, adds pluses to them, and so on.
LLMs are not intelligent*. They're not even "AI". They're a sophisticated form of plagiarism that stretches the concept of "fair use" to make their misuse seem acceptable, fair, even legal.
Thinking a bit more deeply about LLM-generated online slop and how to cope with it (even social control media is targeted by it, according to Facebook's founder), it's a bit like a virus that kills its host and may eventually go extinct by destroying all its carriers. So far, it seems that news sites which resorted to LLM 'experiments' discredit themselves and eventually vanish, giving way to honest ones.
Over the past few months we have repeatedly called out some "Linux" sites for using LLMs to produce fake content. Most of them have since stopped, probably owing to this backlash. The response was gradual. Sites that make it known they're just junk churned out by LLMs will lose almost all their audience, in due course...
"There are trolls posting slop everywhere," an associate told me last week, "like they perceive it to be the best thing since sliced bread. I suspect the main goal is to pump and dump some LLM-related stock investments, and they are indifferent to the harm done to the various forums; it is outside the scope of their interests." (We've witnessed some of those impacted; they're still recovering and catching up with the problem.)
"There were some recent articles about how LLMs cannot scale. They don't get better with more stolen material, and on top of that there is no more material to steal. They've more or less gotten it all."
"LLMs don't work for what people really want to use them for. They are not AI. Throwing more electricity at the problem won't help either."
But hype keeps the bubble from imploding. It keeps the vapourware going. 'Open'AI [sic] now admits it'll have racked up somewhere around 30 billion dollars in cumulative losses within a couple of years.
That's not sustainable at all, but hype profiteers and opportunists like NVIDIA and Microsoft keep throwing money at 'Open'AI [sic]; if 'Open'AI [sic] files for bankruptcy (as it should), it'll badly hurt sales at NVIDIA and Microsoft and cast a shadow on every other company that claims to offer "AI". █
____
* People who manually adjust the output of LLM prompts/queries - like the workers in Kenya who got paid only 2 bucks an hour - are a source of intelligence, but they are human operators with brains, mostly there to prevent racist output (a PR headache) or chatbots telling users to kill themselves (without actually comprehending that they do so).

