Dr. Andy Farnell on How GAFAM, NVIDIA and Others Lie to People Via the Sponsored Media to Prop Up Lies Under the Guise of "AI"

Yesterday: Too Focused on Buzzwords the Media is Paid to Saturate the Collective Mind With
Authored by Dr. Andy Farnell about an hour ago (or last updated just over an hour ago), "The New Digital Literacy" explains what he, as an educator (a university lecturer), thinks about the current notion of "literacy":
Once upon a time winning arguments was considered important. It's why rich and powerful people sent their children to good schools to learn rhetoric and debate. The ability not only to understand the world but to formulate and present a good account, a good argument, or spot obvious bullshit, are all empowering life-skills.
That world is disappearing. We call it "post truth" or the "epistemic crisis". Reasoned arguments are being displaced by emotional assaults, partly due to modern politics, plummeting education and now the effects of "AI" which undermine reasoning.
It is devastating because few of us have a naturally high emotional intelligence. It takes a long time and lots of human interaction to build emotional intelligence, yet children experience ever less exposure to reality.
On the Internet and in the mass media it's rare to find arguments based on evidence and reason, since we've created a soundbite discourse. We don't have the attention span any more. I have to write in short sentences. Even for intelligent readers. I accept that few people will read this essay to its end.
Our journey to trash discourse passed through several phases in my memory. In the 80s the wittiest rejoinder won the argument regardless of truth content. In the 90s it was the snarkiest, most sarcastic and ironic interlocutor who claimed the cup. After 2000 the person claiming the moral high ground triumphed. After 2010 it was whoever painted themselves the greater victim or identified with the least privileged "intersection". Since 2020, in the Trump era, it's simply whoever can be most openly fucking rude to their opponent. Rarely has truth or fact played much of a part.
Most of the values I was taught about truth, with regard to science and law - not even the moral content but basic engineering safety and common good - have vanished in this century. As we go into the next decade I notice discourse being dominated by those who can act the most sinister, creepy and scary.
[...]
Positively claiming an identity like "Luddite", or campaigning on a single issue like "smartphones in schools" or "limiting social media" is missing the bigger picture. Of course it's a start, and for many people their first foray into critical thinking about technology feels frightening, forbidden. How dare you - mere peon - have an opinion about technical things?!
That courage is a step on the road to a wider understanding of technofascism and its many faces.
[...]
The term "AI", and its use in the mass media, is a perfect example of conflation.
Sorry to question a sacred idol: I love Prof. Hannah Fry too, as our popular, acceptable "science influencer" in the UK, but I'm not impressed by the tone of the recent BBC series. Of course the BBC write the scripts, and the BBC isn't without an agenda. Fry can't personally be held responsible for the shortcomings. That's something we need to remember in popular science journalism.
Carelessly throwing together a dozen different unrelated things from mathematics and computer science into the same pot and calling it "AI" is a dangerous thing.
I'm pleased the BBC made the effort to showcase some benefits alongside the hype and risks, but if we mix up valuable aspects of technology with awful ones under the same words we throw out the baby with the bathwater. We confuse people and lead them into learned helplessness in the face of complexity.
Maybe that's the aim, who knows?
Not to single out the BBC; the problem exists across all media, in print, online and on the air. People should know that the pattern recognition and signal processing revolutionising medicine has barely anything to do with generative language chatbots or the predictive spatio-temporal algorithms used for collision avoidance in self-driving cars. Nonetheless they're all diced up and thrown into the same marketing-speak stew, along with a big cavalier spoonful of sugar and a dollop of wishy-washy post-modern relativism that "technology is neutral".
[...]
Constantly using the vacuous term "AI" is now very, very unhelpful. Seriously, it's time for thoughtful people to stop saying it. Stop colluding in obfuscation and helping malevolent marketeers turn reason into magic. It's anti-scientific!
[...]
However, from a counterintelligence view, once a pattern is exposed and named, it is disarmed. If you know the trick, the magic is broken. [...]
Something like the Online Safety Act is a start. Many countries are moving that way. But we can't succeed by allowing the state or private entities to become the Internet Police. We have to do the enforcement ourselves, which means supporting parents and teachers against the sources of harms - not punishing the victims (our children) by denying them internet access.
This requires that government take a stand, an actual position. Governments must make a now-urgent choice. It's time to choose citizens over foreign businessmen. In my work on civic cybersecurity we've realised we need security from Big Tech and "AI" billionaires. If government won't help with that, we'll have to rise to the challenge ourselves.
Now head over there to read the rest. Lots of key aspects are covered. █
