After a long winter the phrase "artificial intelligence" is back in vogue with a vengeance, following leaps in machine learning with large language models. While the popular press bandies the term around, I swim against the tide, still cautioning my students to avoid flippant and inappropriate terms. There are no such things as Artificial Intelligences. Yet. But public opinion is set, and what do I or other mere computer scientists know?
AI does exist. That is to say, in the sense a hard-nosed pragmatist once put it: a deity exists when you are surrounded by devout believers with swords. Whether something exists in reality is less important than its existence in the minds of men alone, when they will kill you for disagreeing.
Microsoft just invested $10Bn in OpenAI, a nominally "non-profit" (but very much for-profit) company that betrayed its founding values to become a seller of proprietary closed-source software 1. The media push has been astonishing, frightening, and has moved even Google to react. AI now exists because the press, boosted by big technology corporations, has deemed it so. There is demand for it. We have conjured "AI" into the realms of reality and common discourse. Of course the demand does not come from you or me. The streets are not filled with protestors shouting for "AI or death!". The public are merely bemused and a little uneasy. It comes from professional obscurantists and tech-occultists giddy at the prospect of hiding their mischief behind arcane machinery. AI is the mask. Real businesses are responsible for the harms their machinery causes, as they would be for a dog that bites. Not so in computing. In case you hadn't noticed, the companies running so-called digital "infrastructure" are in the process of physically disappearing, leaving nothing but a spooky disused funfair and a hidden projector to scare off nosey kids.
Already talk has turned to "stopping" it, to detecting AI-generated content or proving content AI-free. What reasons do we have for wishing to avoid AI when so much good can come from it? What's relevant is the effect machine learning will have on labour relations and the future of personal technologies. But the sanctity and dignity of human affairs also feel under general attack.
Predictably the public debate has drifted into distractions about whether ChatGPT is "sentient", can "feel" or "reason". Dabblers in the philosophy of Turing, Dennett, Churchland, Searle, Hofstadter, or Penrose will immediately recognise the "other-minds problem" as an intractable, unfalsifiable tar-pit Searle80. Strong AI is the favoured side-show of "concerned scientists" and "effective altruists" alike. What is it distracting from?
The real problem with "AI" is not with AI, it's with us. The likelihood of actual AI suddenly evolving into a malevolent power is negligible. The chances of humans, through our quasi-religious belief in AI, acting so as to destroy ourselves in far more pedestrian and time-honoured fashion, are more or less certain.
Like Fox Mulder, we want to believe. AI gives hope that all the other failed promises of computing to make life easier and simpler might finally come true. They won't. Instead, the ways that digital technology complicates and frustrates our lives will be amplified by AI. Not because there's anything wrong with digital technology, or with AI, but because AI is a multiplier of the already obscene power imbalances that mar it and the other technologies that have turned from enabling tools into chains and bars.
In some depictions of the Land of Cockaigne, birds fall from the sky already cooked, into the open mouths of those lazing beneath the tree of plenty. Wine springs from the ground. It is a parody of Utopia at the expense of infantile visions of convenience. In the digital realm, passive, domesticated consumers are already reduced to "intuitive" finger swipes, and pleas of "Don't make me think!". A threat from AI is that it makes us even more lazy, docile and ready to be herded into pens. AI is not a new problem, it simply makes existing ones, like rights to privacy, choice and truth, and the threats from over-dependency and monopolies, all the more urgent.
So rather than the pastures of milk and honey let's look to industrial farming as a model for our future, as we bleat and babble within the walls of Big Tech, ripe for harvesting by "AI" and its new and clever forms of extraction.
In the 1980s, following the great tradition of efficiency, British farmers began rendering down dead cows to use as feed for living ones. Some cows began dying of a strange new neurological disease. Nonetheless, they were ground into the pot and fed to their offspring. A few years later scientists identified Bovine Spongiform Encephalopathy (BSE), dubbed "Mad Cow Disease". The entire national herd had to be slaughtered and burned in giant pits that filled the sky with smoke for months 2.
Positive feedback is regarded by systems theorists as a grave danger Wiener48. It is one we have already experienced on a small scale with "echo chambers". What is set to come as generative large models are pushed into human affairs, first as customer support, then journalism, search, teaching, nursing, legal judgements, and design, will make the echo chambers that led to the United States Capitol Riots look quaint.
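The danger the systems theorists describe can be sketched in a few lines. The following is a hypothetical toy model of my own, not anything from this article: a quantity whose rate of change is proportional to itself. The sign of the gain decides whether a small disturbance is amplified without limit (positive feedback) or damped back to equilibrium (negative feedback).

```python
def simulate(gain, x0=1.0, dt=0.1, steps=50):
    """Euler-integrate dx/dt = gain * x from an initial disturbance x0."""
    x = x0
    for _ in range(steps):
        x += gain * x * dt
    return x

# Positive feedback: the disturbance feeds on itself and grows.
runaway = simulate(gain=0.5)
# Negative feedback: the disturbance is progressively corrected away.
damped = simulate(gain=-0.5)

print(f"positive feedback after 50 steps: {runaway:.2f}")
print(f"negative feedback after 50 steps: {damped:.4f}")
```

The same unit disturbance ends up roughly an order of magnitude larger under positive feedback and vanishingly small under negative feedback, which is why engineers treat unchecked positive loops as a fault condition rather than a feature.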
Since capitalism loves to invoke the economic idea of "consumption" we shall start there to understand the problem. It is in fact a poor analogy. Information cannot be consumed. Unlike food which has value when we ingest it and becomes unpleasant waste when excreted, media gains value through "consumption". If I listen to a song or watch a movie I make it more valuable because it obtains greater social capital. Exchange of information between humans tends to refine and improve it.
A healthy person excretes approximately as much as they eat, but information only increases by copying as it moves through human systems. Security scientists like Bruce Schneier have already warned us that data must be considered a waste management problem. AI, which can write thousands of misleading articles in a second, will greatly accelerate this problem. As a former AI researcher and Techrights reader put it: "AI is not like a puppy that wishes to please, but more like an industrial substance like dioxane or hexavalent chromium which can be contained, controlled and used for good, but only with great effort and planning".
Nonetheless, let's continue our allegory of AI through the selection, preparation and proper cooking of ingredients.
AI tech is not the Haute Cuisine restaurant business, selecting only the finest cuts and freshest herbs. Large language models (LLMs) are trained by pulling an enormous drag-net over the entire human output of written materials. Anything goes in; it's not fussy: ears, eyelids, hooves and bones. Like a giant dog-food factory it boils down whatever can be scraped and tagged.
Cooking is a long and expensive process. As the pot boils it needs as much energy as the manufacture of an aircraft. Once prepared our AI is ready to try. We make a wish, stir the bowl, and dunk in our lucky spoon! Whatever comes up is a Tasty Chicken approximation of our desire. Despite careful filtering and straining by Big Tech Michelin no-star chefs, the serving is not always a delight. Sometimes when consuming AI a mechanical eyeball floats to the top of the broth. Its unblinking reddish stare, like a Poundland (variety-store, a concession to the international readership) version of 2001's HAL, is a reminder of what else might lurk beneath.
If only we could side-step the whole messy, time-consuming business of eating and just take a pill or Soylent Green "Nutrition Bar", right? Psychoanalytical writer Adam Phillips said "Capitalism is for children", meaning that the relations it engenders are simplistic. Just as technology is a way of not experiencing the world, transactional relations are a way of avoiding the complexities of fully human experience. We order drinks by swiping a QR code instead of speaking to the bartender, not for convenience, but because avoiding public responsibility for our consumption feels more comfortable alone, left to our own devices.
The American Dream always contained fantasies of escape, of living in new ways. From the Robots of 1920s futurists to the Star Trek replicator, the metaphor for progress is inaction, a word that today we call "convenience". One may, at some risk, criticise progress but never convenience. Under capital relations we have bracketed action aside, including speaking to other human beings, as "labour". Labour, whether it brings us any intrinsic value or pleasure, must always be "saved", that is, eliminated.
A fairy-tale "cake shop model of humanity", of automatic products and services anticipating our needs is, like Bruegel's depiction of Cockaigne, really a mythological picture of an obsolete and now dead Internet - a plentiful playground of knowledge and entertainment. For some time we've been in a race to the bottom to find the minimum viable substitute for experience, plus ways of forcing that experience upon the unwilling.
The problem is that these "experiences", whether in the form of writing, answers, pictures or music will start to dominate and then pollute our info-space. New and hungry AIs will feed on them, recycling twice and thrice digested proteins, along with memetic prions, viruses and bacteria. As the nutritional value of this goo falls and info-space runs out of original human material, predation on creative individuals will become intense.
A provocative and insightful Hacker News comment responding to the idea of "Certified 100% AI-free organic content" 3 portrayed LLMs as anti-Semitic, in that they debase the sanctity of The Word 4. I think there's something in the idea: laziness and lack of data hygiene around AI will engender intellectual disease. AI becomes a public health issue that may require some Kosher wisdom to manage.