
Transcript of Richard Stallman's Interview With Manuel Cuda News in Italy (Debunking Fake "AI")

posted by Roy Schestowitz on Mar 04, 2025,
updated Mar 04, 2025

[A rough draft, but checked by two people]

Source: Embed | Invidious | YouTube | Commentary (by Techrights)

This was published 20 hours ago:

Manuel Cuda (MC hereon): Before asking you some specific questions, I would like to ask some more general ones on what artificial intelligence is. What is your opinion on the rapid development of artificial intelligence and the widespread use of technology?

Richard M Stallman (RMS hereon): Of what is not "artificial intelligence", because that term is a marketing propaganda term whose basic meaning is confusion, because it mixes up two different kinds of things which, for our understanding, are totally different and must never be mixed up together. I define intelligence as the ability to know and understand things, perhaps within some limited domain, and there are artificial systems which can know things and understand things, and I think it's proper to call them artificial intelligence. But they don't include the programs people are usually talking about. People are usually talking about large language models, and those are not intelligence, because they can't know anything and they can't understand anything. They have no means to understand anything. We know how they work. They do not understand the semantics of the text that they play with. All they can do is play with text. They look at lots of text and they say, "what word would probably come next? I'll try that one." That's how much they understand the text that they generate. So they are not artificial intelligence. And this is a very important point; it's the most important point about them. Because businesses constantly use the marketing term "artificial intelligence" to confuse together the systems with some intelligence and the systems with no intelligence, they have led most people to assume that chatbots understand the text that they output, and they understand nothing.
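To make that point concrete, here is a toy Python sketch of "pick the word that would probably come next". It is only an illustration: the tiny hand-written probability table below stands in for the statistics a real large language model learns from enormous amounts of text, and nothing in it checks whether the output is true.

    import random

    # Toy sketch of next-word prediction. The probability table is invented
    # purely for illustration; a real large language model learns such
    # statistics from huge amounts of text, but it is still only modelling
    # which words tend to follow which.
    NEXT_WORD_PROBS = {
        "the":   {"cat": 0.5, "court": 0.5},
        "cat":   {"sat": 0.7, "ruled": 0.3},
        "court": {"ruled": 0.8, "sat": 0.2},
        "sat":   {"quietly": 1.0},
        "ruled": {"quietly": 1.0},
    }

    def generate(start, length=4):
        """Repeatedly sample a plausible next word. Nothing here checks
        whether the resulting sentence is true, only whether each word is
        statistically likely to follow the previous one."""
        words = [start]
        for _ in range(length):
            options = NEXT_WORD_PROBS.get(words[-1])
            if not options:
                break
            choices, weights = zip(*options.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the court ruled quietly"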

Richard Stallman: 'L’AI di OpenAI non è AI' ('OpenAI's AI is not AI') – Il rapporto tra AI e GNU (The Relationship Between AI and GNU) | AI Music Generation

MC: So you would like to use the word large language models?

RMS: If we're talking about large language models, let's call them that. And we should take care that we never use the term artificial intelligence supposing it to include large language models. Because there are artificial intelligence systems that do understand some things and learn some things and know some things within a domain.

In 1975, when I worked at the Artificial Intelligence Lab at MIT, I worked with Professor Sussman and we developed an artificial intelligence program that could figure out the behavior of electrical circuits. But not by making a large number of equations and solving them. No, it did it the way a human engineer would do it. There are certain rules about circuits, like if you have a bunch of wires going to one point, the currents coming in must equal the currents coming out. That's a physical requirement. And there was Ohm's Law: the voltage across a resistor equals the resistance times the current going through it. Well, this program knew how to apply those laws and it would deduce the voltages and currents, or whatever was not specified in advance. It could deduce them by applying these laws to one spot after another until it understood the circuit. But the interesting thing was how to deal with transistors, because a transistor is not a simple function of inputs to outputs. It has multiple possible states, but each state is possible only in certain conditions. So here's how the program worked. It knew what the possible states were for a transistor. So it would guess one, and then make deductions about the circuit, and if it reached a contradiction then it knew the guess was wrong. So it would go and guess a different state for that same transistor and deduce conditions. And it would do this for circuits with many transistors. And when guesses were wrong, it would realize: those guesses can't all be true; we've got to guess differently. And this technique is called dependency-directed backtracking. And so, in an intelligent way, a lot like a human engineer, this program could understand a circuit, and in fact I learned how to understand circuits by imitating what the program did.
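For readers who want a concrete picture of the guess-and-deduce technique RMS describes, here is a minimal Python sketch. It is not the 1975 MIT program: the transistor names (Q1, Q2, Q3), the candidate states, and the "circuit rules" are invented purely for illustration, and it uses plain chronological backtracking rather than full dependency-directed backtracking, which additionally records which guesses contributed to each contradiction so it can skip alternatives that cannot help.

    # Toy sketch of guess-a-state, deduce, and backtrack on contradiction.
    # The transistors, states, and rules below are invented for illustration.

    CANDIDATE_STATES = ["cutoff", "active", "saturated"]

    def contradicts(assignment):
        """Apply made-up circuit rules to the guessed states and report
        a contradiction if they cannot all hold at once."""
        # Invented rule 1: Q1 and Q2 share a bias network, so they
        # cannot both be saturated.
        if assignment.get("Q1") == "saturated" and assignment.get("Q2") == "saturated":
            return True
        # Invented rule 2: if Q1 is cut off, no base current reaches Q3,
        # so Q3 must also be cut off.
        if assignment.get("Q1") == "cutoff" and assignment.get("Q3") not in (None, "cutoff"):
            return True
        return False

    def solve(transistors, assignment=None):
        """Guess a state for each transistor in turn, deduce, and back up
        whenever the partial assignment reaches a contradiction."""
        assignment = assignment or {}
        if contradicts(assignment):
            return None                      # a guess was wrong: back up
        if len(assignment) == len(transistors):
            return assignment                # every transistor has a consistent state
        name = transistors[len(assignment)]  # next transistor to guess
        for state in CANDIDATE_STATES:
            result = solve(transistors, {**assignment, name: state})
            if result is not None:
                return result
        return None                          # no state works: fail upward

    print(solve(["Q1", "Q2", "Q3"]))
    # e.g. {'Q1': 'cutoff', 'Q2': 'cutoff', 'Q3': 'cutoff'}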

MC: Yes, there is no term in Italian for LLM.

RMS: The model is large. It's not that the language is large. Yes, the English language is very large; Italian is pretty large too. The point is, it's not the language that is large, it's the model that is large. The point is that when people hear about these large language models, they suppose that they are intelligent, and then when they see text generated by one, they believe it. They assume that this program understands the text that it generated, but it doesn't ever understand. If you assume that it understands, then you say, "how did it make this stupid mistake?" Because they make mistakes all the time. They make obvious mistakes rather often, but even worse is when they make unobvious mistakes, like the lawyer who asked one of these chatbots, "give me a list of legal decisions that are pertinent to deciding this case". And the chatbot invented plausible-looking references to non-existent cases. And the lawyer said, "ah, this is an artificial intelligence, it must know". So he put those in his brief, in his filing, and he was laughed at by the judge once it was discovered that those cases did not exist. Those cases were fictitious; the chatbot invented them because they sounded right. All that an LLM knows how to do is make text that sounds plausible. Whether it's true or not, that's something beyond the understanding of a language model. They're not designed to do that. They have no idea of semantics.

MC: And from your point of view, how does it hold up ethically?

RMS: Oh, well, it may be tolerable to have large language models in existence provided that nobody believes what they say.

MC: OK.

RMS: So, the first step is we must refuse to refer to them as intelligence. That's why every time somebody says that term, I immediately object. That's the first point, because that's the most important point to make. Everyone needs to learn this. Now, I would suggest that there should be laws requiring that, when you publish anything that was partly generated by a large language model, it must say, "this text was generated by a large language model, so don't assume this is true!"

MC: OK. How has the way of life changed since this model was introduced?

RMS: I have no idea. It didn't change my lifestyle, except I now pay attention to informing people that the term "artificial intelligence" as now used is misleading marketing. Well, the program that Sussman and I wrote in 1975 was Free Software and I suppose we could dig it up and publish it now if someone wanted to use it.

Artificial intelligence is a description of kinds of jobs to be done and methods to use. It's not a particular program. It's impossible to write one program and say, "this program is artificial intelligence." You could say this program is an instance of artificial intelligence; it is one example. But there are many people developing programs that do artificial intelligence. They do specific jobs. I recently read in a newspaper that there is a program that can recognize from images, I believe, breast cancer, and it can do a better job than human pathologists. It's being tried, I believe, in Britain's National Health Service. Well, these are very good, and because they really do understand something, it's correct to call them intelligence. As long as you only use them for the thing they can do. You should only use one of these programs for the job it can do. You shouldn't try to use it to do something else which it wasn't designed to do.

MC: Lately have you thought about partnering with a company to make a contribution to AI systems?

RMS: No, because that's not what I do these days. I founded the Free Software movement, "movimiento del software libero". What does Free Software, software libero, mean? It means a program that respects the freedom of its users. And what is the freedom that users need to have? Well, with a program there are two possibilities: either the users control the program, or the program controls the users. It's always one or the other. There are certain essential freedoms that users deserve and need. If the program respects those freedoms, then it is Free Software, because those freedoms allow the users to control the program. There are four essential freedoms, and they are the most important point about software libero. Freedom 0 is the freedom to run the program however you wish. Well, that's the general point, but specifically you should be allowed to run the program regardless of what you are doing with it. If you're using the program, there should be no conditions such as: you're allowed to use this program for your private results but not tell anybody else; you can do it for a hobby but you can't do it as a business; you can do it to promote the fascist party but not to promote democracy, or the reverse. There must be no restrictions placed on the program about how you are allowed to run it. I've been thinking about this for 40 years, you know.

MC: [laughs]

RMS: So, Freedom 1 is the freedom to study the program's source code and change it as you wish, so it will function the way you want it to function.

MC: [laughs] You don't like Apple.

RMS: Not at all, I hate Apple. Yeah. Apple is the most vicious tyrant among computer companies.

MC: I think Apple has taken a lot of your ideas and twisted them.

RMS: Well, they used some of our software, with permission, because he was using it under its Free license the way everyone could use it. But then he decided, he wanted to ... I'll get to that later because I have to tell you about Freedom 2. Freedom 2 is to make exact copies and then distribute them to others. You're free to give them away, and you're free to sell copies. Anyone can take copies of the programs I've written and give copies away or sell copies.

And Freedom 3 is the freedom to distribute copies of your modified versions. So if you start with my program, because all my programs when I release them are free, you are free to make changes so it does something different, and then you can distribute copies of that. You can give them away or you can sell them. Those are the four essential freedoms. If a program carries those four freedoms and you get a copy with those freedoms, then your copy is Free Software, because you control it. But it's not just one user; the users collectively also control the program. Now, are you a programmer? Do you know how to program?

MC: Yes, I am a programmer.

RMS: OK, so you personally can change some programs to make them do what you want. But there are other programs that you might not understand; maybe they're big, and written in different languages. And there are people who don't know programming at all. They don't know how to change a program to make it do what they want. But a program has a user community, and some of the people in the community are programmers. So if you wish the program were different but you don't know how to make that change, there are probably a bunch of programmers in the community, and there are other users who would like the same change, so you can get together, raise funds, and hire a programmer to make the change you want. So this is what I mean when I say the users control the program. But if any of these freedoms is missing, is denied, then the users do not control the program; instead, the program controls the users. Because the program says, "I'll do this, but I won't do that. Tough on you if you want that."

MC: And what do you think about OpenAI and its API?

RMS: It's not really a program, except in some indirect sense. What users can get to use is not a program. You can't get it and run it yourself. Any time a program is supposedly made available by putting it on a server and saying, "here is the interface, send your commands to our server, and we will do the computing and send you back a result", this subjugates users. Because you can never change a program that someone else is running on his server. That's his copy. You have no right to change his copy of the program, even if it were Free Software. Well, if it were Free Software you could get a copy and you could change your copy, but you can't change his copy. That's his. He's the one who has the right to change it. But that means that if you have to use his copy, you don't control that copy. In order to have control, you need your own copy. So any time someone says, "oh, you could use this program, it's running on this server, send your requests to our server, and we will send you back a response", that does not respect users' freedom. And I never do that. If I'm invited to use computing that way, I say, "no". Because I insist on doing my computing on my computer with a free program that I and other users can control. Aside from that, the name OpenAI is deceptive because it's not artificial intelligence. It's an LLM. They should change their name to OpenLLM. And then they should change their name to SecretLLM, because they don't publish the code at all. So the name OpenAI is two lies.

MC: Now I am going to ask you something you don't like.

RMS: That happens often.

MC: In the last year, I have become very passionate about music generated with AI.

RMS: Well, why do you call it AI music? What did you actually use? Let's see if it's actually AI.

MC: Two online platforms, one is Suno.

RMS: Suno ... I've never heard of it, sorry.

MC: And the other is Udio.

RMS: Udio? I've never heard of them. Are these things like OpenLLM? You can't have a copy? You can only send a request to a server and it will answer it?

MC: No. I write the lyrics of the song and the prompt, and the portal generates the song, the voice, and the music.

RMS: I challenge whether those systems constitute intelligence. But it's a difficult case, because generating music is peculiar: the words and the music don't have to be true. They could be made up to sound nice, and yet still not be good music. And, likewise, the melody isn't true or false. You can't judge melodies that way. You can't judge the notes by asking whether they are true or whether they are false and deceptive. They're neither one. That judgment doesn't apply. So I'm not sure whether it makes sense to call those artificial intelligence. What I can say is that you're not actually communicating with anything that understands the words. So it would be like a lot of pop songs today that are written by humans, although maybe they are also designed by computer systems secretly; we don't know. The point is, I'm told that pop music nowadays is much less personal than it used to be.

MC: Thank you.

We like the part where RMS says "the name OpenAI is deceptive because it's not artificial intelligence. It's an LLM. They should change their name to OpenLLM. And then they should change their name to SecretLLM because they don't publish the code at all. So the name OpenAI is two lies."

Earlier he said, "Apple is the most vicious tyrant among computer companies."
