And this is the company that's buying Red Hat...
IBM spent a fortune 'googlebombing' the Web/Internet (for weeks) to hide this article from view.
Summary: Freedom is under attack (or under a tank), and a contributor writes to explain the role played by the AI hype (outsourcing decisions to algorithms which lack tact, emotion, and oversight, and which are difficult to analyse or audit because of their resultant fuzzy classifiers)
July 4th is a day off for the USPTO (a "hey hi" (AI) booster for patenting purposes) and for much of the American media, but we'll be posting as usual. We've just updated this database of threats to software freedom (explained in depth in a recent post). This is "for your consideration," said the author, on "AI project disruption" (the author goes by the pseudonym Ted MacReilly and is a highly technical person, who uses this pseudonym to avert retaliation/reprisal against his GNU/Linux project).
"I try to keep most of these less speculative," he said, "more immediate. I am still a futurist, I think this is worth serious consideration. I believe the tools either could, or even do exist. AI is not general purpose yet. It is very flexible, it can do a lot of interesting things. I believe it can do this today, but certainly in the near future."
Here's the explanation from Ted:
Every government and security researcher has a job to assess threats. It's how they do it and how they respond that matters. Often security is treated as a blank check to do things that are unethical or dangerous-- the "cure" is not always better than the disease.
Here, the cure being proposed above others is careful consideration-- not hysteria, not some draconian measure, not paranoia. Just consideration.
Science fiction often talks about the future. It is typically based on problems that exist in the present. Some of the ideas are novel-- before we had cell phones and iPads, Star Trek communicators and PADDs existed only in fiction. Real functioning jet packs, though still impractical, now exist and can be watched in brief flights on YouTube. And before Amazon ever sold e-books, Richard Stallman's "The Right to Read" was just a story about a dystopian future.
Often we get the future wrong, and sometimes that's a good thing. But that doesn't stop us from thinking about it.
AI-based project planning is likely to increase. You don't hear about it much on Techrights, because "AI" is a term widely abused to write bogus patents, and Techrights reports on that with well-earned derision for corporate buzzwords and patent application trickery.
Still, AI is real and it's here-- it's not everything you might think, but it's far more than nothing. It has cultural, philosophical and practical (not to mention countless ethical) implications.
I believe we need to consider those. What I hope you will do today, is entertain the slightest possibility that AI can be used to undermine free software development. It is not as important whether or not that is already happening.
Could it? And importantly-- how?
I have some thoughts about that, but I don't believe that I thought of this first.
We know that corporations want to undermine free software. We have good reason to think that AI is used (or will soon be used) to assist corporate decision making. It is already moving into use for reviewing resumes. As a result, SEO tactics and techniques will be part of resume writing in the future.
Most people are not following the spread of AI very closely. A lot of AI should be called "artificial stupidity" because it sometimes amplifies "garbage in, garbage out" or biased, bad decision making, when we expect it to reduce those.
They say don't attribute to malice what can be explained by stupidity, but every techie with a pointy-haired boss knows that the line between the two is often a fine one. Some of the things being done with AI by corporations already are best explained by malice and stupidity combined.
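The resume-screening point above is easy to make concrete. Here is a minimal, hypothetical sketch -- not any real vendor's system -- of the kind of keyword-matching screener being described, and of how SEO-style keyword stuffing games it. The function name and weights are invented for illustration.

```python
# Hypothetical keyword-based resume screener (illustrative only).
# It rewards keyword mentions, not substance -- "garbage in, garbage out".

def score_resume(text, keyword_weights):
    """Score a resume by summing the weights of keywords it mentions."""
    words = set(text.lower().split())
    return sum(weight for keyword, weight in keyword_weights.items()
               if keyword in words)

# Invented weights a company might assign to buzzwords.
weights = {"python": 3, "kubernetes": 2, "leadership": 1}

honest = "Ten years maintaining a GNU/Linux distribution"
stuffed = "python kubernetes leadership"  # SEO-style keyword stuffing

print(score_resume(honest, weights))   # substantive resume scores 0
print(score_resume(stuffed, weights))  # keyword-stuffed resume scores 6
```

A screener like this would rank the keyword-stuffed text above the substantive one, which is exactly why SEO tactics migrate into resume writing once such filters are deployed.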
Would you entertain the possibility that AI may assist corporations in figuring out how to compete and undermine competition, or that AI is capable of doing so? If you wouldn't, this entry will just be something to laugh at.
That's alright. Sometimes parody and humor reach more people than serious philosophy.
Sometimes you have to wait, to be sure what the future really holds. I have no major complaints about that one. It's nice to still have the option.
This could also be an "aim for the moon" type of strategy. In trying to think of how AI could pose new threats to software freedom, you may come up with a more plausible or more obvious way that a corporation could pose new threats. There's no request here to use your imagination for purely idle reasons. The point of threat assessment is to come up with solutions that bolster everyone's freedom. Everyone can participate; it is not better to leave this entirely up to other people who may not care about your own needs or threat model.
Happy hacking,
Ted MacReilly