The Great LLM Delusion - Part VI: 'Hype Agent' Geoff Hinton Would Need to Disclose Conflict of Interest (Regarding LLMs)
THE bubble is already bursting, but let's remember who contributed to the expansion of this bubble in the first place. Or, more importantly, why?
This sixth part will likely be the last in the series (unless we come up with an important enough update) and it comes from a contributor who wishes to remain anonymous.
The following text concerns Geoff Hinton:
I recall another curious bit of what I think of as a pack of insanely misleading and insincere statements.
There was this talk (it is a video, just for reference), and there is an article with direct quotations from that talk. Hinton is an expert; I cannot believe that he did this accidentally.
Moreover, he clearly states that he still holds shares in Google. He is definitely inclined to spread this nonsense, basically selling himself in such a nasty way for Google's (and his own) profit.
He talks about "something more" stuff, and to back up his speculation he gives the viewers the following example (he interacted with GPT-4):
“I told it I want all the rooms in my house to be white in two years and at present I have some white rooms, some blue rooms and some yellow rooms and yellow paint fades to white within a year. So, what should I do? And it said, ‘you should paint the blue rooms yellow.’”
Impressive? Fuck, no! It is also impossible to replicate. Publicly available models (I tried GPT-3) spew nonsense in response. Other chatbots that have been updated or have access to the Internet must be considered "contaminated," because they definitely have an answer to this exact question from this exact transcript of Hinton. When the question is modified, chatbots that are claimed to be based on GPT-4 (such as the one in Bing search) also generate nonsense. There is no consistency in the output. Nothing!
But Hinton proceeds with making an extraordinary claim:
"That's pretty impressive common-sense reasoning of the kind that it's been very hard to get AI to do,” he continued, noting that the model understood what ‘fades’ meant in that context and understood the time dimension."
It is especially "funny" because of this.
Regarding this page from Baldur Bjarnason ("The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con"), an associate has told us that "there were some articles posted in Daily Links during recent months about Eliza [ELIZA Chatbot] and the psychology of people projecting onto it. Chatbots are about the same except that they write whole paragraphs rather than vague, mostly canned, sentences."
The person behind ELIZA Chatbot (or ELIZAbot) died not long ago and he is still frequently recalled in the press because of chatbot hype and the "Eliza effect". As the following article notes, chatbots go back to the 1960s and predate the successful moon landing mission of 1969:
ELIZA is possibly the first chatbot ever created, dating back to 1966. It was created by Joseph Weizenbaum as an early experiment in natural language processing (NLP). ELIZA is able to hold a conversation in English with a human, and is programmed through a set of pattern matching rules to respond to the user in ways that are similar to how a psychotherapist would.
The OpenAI Chat Completions API is a widely used API to chat with Large Language Models (LLMs) such as ChatGPT, and has become a sort of standard for turn-based conversational services.
Could 1960s ELIZA be adapted to work as a web service that any application designed as an OpenAI client can use? The answer is Yes! Why would you do that? Keep reading to find out...
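The linked article explains its own approach; purely as a rough illustration of the idea (and not the article's actual code), here is a minimal sketch in Python, using only the standard library, of ELIZA-style canned pattern matching served through an endpoint that speaks the Chat Completions wire format. The rules, port, and response fields below are simplifying assumptions.

```python
# Minimal sketch (assumptions, not the article's code): ELIZA-style pattern
# matching exposed through an OpenAI-compatible /v1/chat/completions endpoint.
import json
import re
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# A tiny subset of ELIZA-like rules: regex pattern -> reply template.
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.*)", re.I), "Is that the real reason?"),
]
DEFAULT_REPLY = "Please tell me more."


def eliza_reply(text: str) -> str:
    """Return a canned, pattern-matched response in the spirit of ELIZA."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return DEFAULT_REPLY


class ChatCompletionsHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/v1/chat/completions":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length) or b"{}")
        # Reply to the last user message, as a turn-based client expects.
        user_turns = [m["content"] for m in request.get("messages", [])
                      if m.get("role") == "user"]
        answer = eliza_reply(user_turns[-1] if user_turns else "")
        response = {
            "id": "chatcmpl-eliza",
            "object": "chat.completion",
            "created": int(time.time()),
            "model": request.get("model", "eliza-1966"),
            "choices": [{
                "index": 0,
                "message": {"role": "assistant", "content": answer},
                "finish_reason": "stop",
            }],
        }
        body = json.dumps(response).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), ChatCompletionsHandler).serve_forever()
```

Point any Chat Completions client at http://127.0.0.1:8000/v1/chat/completions and it gets back the same vague, mostly canned sentences ELIZA produced in 1966, only wrapped in a modern JSON envelope; that is rather the point of the article, and of Bjarnason's comparison above.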
We posted a lot more links about ELIZA last month in Daily Links and this talk by Trevor Paglen (December at CCC) covered the subject as well.
That is pretty much it for this series, at least for the time being. █