The Term "AI" is Not New and What Today's Media Calls "AI" Isn't Even AI
Rumelhart et al. (1986, almost 40 years ago)
Earlier today we reproduced an automated translation of a very recent interview with Richard Stallman.
The text is very long and we'd like to draw attention to how Richard Stallman explained the LLM hype only days ago:
Stéphane:
And so, to finish, the last topic I would like to discuss with you is what the mass media calls artificial intelligence. Because there, too, there is software behind it. I wanted to know what the FSF's position is on this subject today. And do you think there is a definition of an ethical LLM model?
Richard Stallman:
I must first distinguish my opinion from the assumptions contained in your question.
Stéphane:
Okay.
Richard Stallman:
I differentiate between what I call artificial intelligence and what I call crap generators. Programs like ChatGPT are not intelligence.
Intelligence means having the ability to know or understand something, at least in a narrow domain, but more than nothing at all.

ChatGPT, on the other hand, doesn't understand anything. It has no intelligence. It manipulates sentences without understanding them. It has no semantic idea of the meaning of the words it produces. That's why I say it's not intelligence.
On the other hand, there are programs that really understand in a narrow domain.
For example, some can analyze an image and say if it shows cancer cells, or identify an insect: is it a wasp attacking bees? This is a real problem in some countries. These are really dangerous immigrants.
These programs, in their small field, understand as well as a human does. So I call them artificial intelligence.

But LLMs, the large language models, don't understand anything. We must insist on not calling them "artificial intelligence." It's just a marketing campaign designed to sell products, and unfortunately, almost everyone accepts it. This confusion is already causing damage to society.
Other than that, if you want to use an LLM, you have to have the four essential freedoms. You have to be able to run it on your own computer, not use it on someone else's server, because in that case they choose the program, and if the program is free, they are the ones with the right to change it, not you. And if you run it at home but don't have the right to modify it, or to use it freely as you wish, that is obviously unfair. So I'm not saying that LLMs are inherently unfair, but normally they don't respect the freedom of users, and that is unfair.
And we must also recognize what they are not capable of doing: they do not understand, they do not know.
Stéphane:
Yes, it's true that it's marketing to call them "artificial intelligence." But many people find useful applications for them. I'm thinking, for example, of translation, which sometimes produces decent results. So if people want to use LLMs, what you recommend is to favor models under an open license, is that right?
Richard Stallman:
Yes. We are currently writing up how to adapt the free software criteria so that they also apply to machine learning programs.
Stéphane:
So this is something that the FSF will publish in the future?
Richard Stallman:
Yes, but it isn't finished yet.
Remember, this is an automated translation of what he said in French. He was writing the GNU operating system while innovations in real Machine Learning were being demonstrated. Machine Learning is not a new thing that came about a few years ago. Only the hype was new... and totally artificial. A lot of the corporate media helps sell a giant Ponzi scheme. █
1983:


