Computer Science Can Outlive This Wave of "AI" Hype
Computer scientists, and scientists in general, don't need this hype
THE likely last RMS talk in Peru (this coming Thursday) will cover the Free software movement and the GNU operating system. If there's one thing that RMS (Dr. Stallman) and Torvalds agree on these days (there's a lot more than one), it is that "AI" is nonsense and hype. In the context of text, it's just grammatically correct nonsense. In the case of images, it's just computer-generated (CG) assembly of lifted ("stolen") art, passed off as "Fair Use". They're not really "AI"; they're just LLMs (an old concept) and CG art based on prior work, without the necessary attribution or monetary compensation.
"AI" - as in machine learning - is older than GNU itself. In some form or another, classifiers derived from data have been devised for decades, but they have severe limitations. I've worked in this area for over 20 years, so I know the limitations. The undermined corporate media is intentionally downplaying the limitations and, to make matters worse, it starts calling "AI" just about anything that a computer does, even if there's no training done and no data upon which classifiers are built. To them, anything with if/else statements is "logic" and thus "AI".
Today I looked for about 20 Halloween-themed images, and this year I'm more disturbed than last year to find that almost 80% of the images can be categorised as CG (I won't say "AI") and that they use more or less the same objects, e.g. the exact same pumpkin art. That's not even original; they just use some stochastic process to determine the location of objects and then mix-and-match some objects to make a supposedly original scene. It's as boring as LLMs, which spew out the same lies, albeit with permutations and a plausible-sounding tone. My wife says they'll make society dumber and dumber (and then ingest the stupidity they generated themselves).
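The "mix-and-match" process described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual pipeline; the object names, canvas size, and function are all made up for the sake of the example:

```python
import random

# A handful of reused stock assets - the "exact same pumpkin art" problem.
STOCK_OBJECTS = ["pumpkin", "bat", "ghost", "tombstone", "black_cat"]

def assemble_scene(n_objects=4, width=800, height=600, seed=None):
    """Scatter a few stock objects at random positions on a canvas."""
    rng = random.Random(seed)
    scene = []
    for _ in range(n_objects):
        scene.append({
            "object": rng.choice(STOCK_OBJECTS),  # reuse, not originality
            "x": rng.randrange(width),            # stochastic placement
            "y": rng.randrange(height),
        })
    return scene

# Two "different" scenes are just different samples of the same parts:
print(assemble_scene(seed=1))
print(assemble_scene(seed=2))
```

Change the seed and you get a "new" image, but every piece of it is a permutation of the same borrowed parts.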
But there's also some big news beneath the surface. Good news. I've noticed that I'm not the only one to boycott or blacklist publications that got caught using LLMs. They typically perish very fast once they do this. The word spreads, people take action, and then the publishers perish. In effect, they struggle, they try to fool their readers, and in the end it all gets much worse, because readers won't bother reading bots. In the case of LinuxSecurity.com, I've recently noticed that they've reverted to mostly original articles, but it's too late. The damage was done. We'll never link to them (ever!) again. The trust is gone.
The bottom line is that reputations will be damaged if people fake their work or pass off some bot "spew" as their own. Don't do it. It's just not worth it. █