Dr. Andy Farnell on Brutality and (or of) Brute-Force Computing
Dr. Andy Farnell published a new essay yesterday afternoon about Digital Brutalism. To quote some portions:
No doubt the mainstream press tell a painfully simplified story of how the "Godfathers of AI" (there are so many) won the "Nobel Prize of computing". A very understandable and widespread public sentiment against "AI" will therefore yield a negative chorus, highlighting along the way that it is Google who sponsor the million-dollar ACM Turing Prize. At a time when science and US BigTech are under attack by different groups, each for different reasons, I don't expect the mainstream press or journals to do a good job of unpacking the complexity of RL, doing it justice or exploring its human complications. The most obvious is that it is possible, indeed likely, to reinforce mistakes.

So let's bring up two obvious and topical criticisms of a philosophy that will no doubt be attributed to Sutton and Barto's reinforcement learning - if large language models become too dominant and are all that "AI research" really yields - but probably shouldn't be; namely the environment and dehumanisation.
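(An illustration from us, not part of Farnell's post.) His aside that reinforcement can entrench mistakes is easy to make concrete. Below is a minimal Python sketch of a purely greedy two-armed bandit: it samples each arm once, then forever exploits whichever looked better. If the early, noisy rewards happen to flatter the worse arm, the agent keeps reinforcing its own mistake and never finds the better option. All names and numbers are illustrative.

```python
import random

TRUE_MEANS = [0.3, 0.7]   # arm 1 is genuinely better

def run_once(steps=200):
    """Purely greedy agent: sample each arm once, then always exploit."""
    q, counts = [0.0, 0.0], [0, 0]
    for arm in (0, 1):                          # one initial sample per arm
        q[arm] = 1.0 if random.random() < TRUE_MEANS[arm] else 0.0
        counts[arm] = 1
    for _ in range(steps):
        arm = 0 if q[0] >= q[1] else 1          # greedy choice, no exploration
        reward = 1.0 if random.random() < TRUE_MEANS[arm] else 0.0
        counts[arm] += 1
        q[arm] += (reward - q[arm]) / counts[arm]   # incremental mean update
    return counts[0] > counts[1]                # True if stuck on the worse arm

trials = 10_000
stuck = sum(run_once() for _ in range(trials))
# With these toy numbers, roughly a third of runs lock onto the worse arm:
# an early lucky reward (or an unlucky zero from the good arm) is reinforced
# forever, because pure exploitation never revisits the alternative.
print(f"runs that mostly pulled the worse arm: {stuck / trials:.1%}")
```

Adding even a little exploration (an epsilon-greedy policy) fixes this toy case, but the underlying lesson stands: a system that only reinforces what already worked will happily entrench an early error. Farnell continues: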
Understandably, the ecological cost of compute was never really on the minds of pure computer scientists. Moore's Law promises ever-increasing efficiency (a dubious word at the best of times). But efficiency is not the only way to get more compute. Pure brute-force exertion (of the paperclip-maximising kind) works too: just keep building more energy-guzzling data centres. We produce tens of millions of chips (matrix multipliers) that are obsolete within 12 months. Industrial inertia, a flywheel effect, preserves the mentality of "growth as progress". The reality is that we live with finite resources and growth potential, as well as complex geopolitics around tech manufacture. We cannot just build ad infinitum in the hope of unknown fruits to harvest, no matter how bountiful and juicy that harvest may seem. Where computer science touches reality, we need to pick a specific telos: aims informed by human values.
This is exacerbated by another blind spot: the tendency for brute-force systems to dehumanise. If we build an "AI" to play a game so well it can beat any human opponent, what was the point? Mere victory is empty unless we learned something about how and why humans play that game. Does that knowledge help us play games better and enjoy them more? Approaches that "win at the cost of all other values" are great for building machines of war and destruction, but not so good at mapping out socially useful knowledge. Ironically, brute-force compute, while a great strategy for machines, may not be in anyone's long-term interest, as it works against human learning and the enlargement of human knowledge and experience.
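To make the "empty victory" point concrete (again our sketch, not Farnell's): exhaustive negamax search plays noughts and crosses (tic-tac-toe) perfectly by enumerating every reachable future position. It will never lose to a human, yet it encodes nothing a person could learn from, no notion of forks, centre control, or tempo, only search.

```python
# Brute-force perfect play: enumerate the whole game tree, memoised.
from functools import lru_cache

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def negamax(board, player):
    """Value of `board` for `player` to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == player else -1
    if "." not in board:
        return 0                      # board full, no winner: draw
    other = "O" if player == "X" else "X"
    return max(-negamax(board[:i] + player + board[i+1:], other)
               for i, cell in enumerate(board) if cell == ".")

def best_move(board, player):
    """Pick the move with the best exhaustively-computed value."""
    other = "O" if player == "X" else "X"
    moves = [i for i, c in enumerate(board) if c == "."]
    return max(moves, key=lambda i:
               -negamax(board[:i] + player + board[i+1:], other))

# The "insight" the search yields is just a move index (0 = top-left):
print(best_move("." * 9, "X"))
```

Scale the board up and the same recipe simply demands exponentially more compute, which is exactly the growth treadmill described above: more search, more chips, more data centres, and no additional human understanding at the end of it.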
"We speak to a lot of people on the Cybershow who have big hopes for "AI" to revolutionise cybersecurity," he adds. "We want clarity around those sorts of hopes, discussions and claims."
A lot of that is pure hype, or not new at all. Companies call plain heuristics "hey hi" (AI), hoping to ride the current hype wave (bubble). █