Two Years of Nothing (and Microsoft's Nothing to Show For It)
Two years have passed since the vapourware from charlatans. Now there is epic debt and sagging demand. Microsoft and OpenAI are in effect bribing publishers to become fake "clients", hoping that fake clients can somehow lead to a self-fulfilling "demand" prophecy (which hasn't come to fruition after 24 months).
Back in January: After 'ChatGPT' Hype Microsoft's Bing Continues to Lay Off Staff and Lose Market Share (Layoffs Will Continue Until Morale Improves)
This month's snapshot:
These things are not likely to improve over time because the Web is deteriorating (partly the fault of LLMs, though other factors exist as well). Another 36 or 48 months won't make a difference.
Expect these to be thrown away in a few years. They're far too expensive to maintain and to prompt anyway. Just a week ago:
There are also legal issues/obstacles, financials aside (but lawyers/lawsuits cost too):
-
AI Industry Right Now Is 10% Reality And 90% Marketing: Linux Creator Linus Torvalds
Most technologists can’t stop talking about the latest developments in AI, but some people aren’t as impressed.
Linux founder Linus Torvalds has said that the current AI tech industry is 10 percent reality and 90 percent marketing. “I think AI is really interesting and I think it is going to change the world,” he said. “And at the same time, I hate the hype cycle so much that I really don’t want to go there. So my approach to AI right now is I will basically ignore it. I think the tech industry around AI is in a very bad position — it’s 90 percent marketing and 10 percent reality,” he added.
-
New York Times ☛ Former OpenAI Researcher Says Company Broke Copyright Law
But after the release of ChatGPT in late 2022, he thought harder about what the company was doing. He came to the conclusion that OpenAI’s use of copyrighted data violated the law and that technologies like ChatGPT were damaging the internet.
In August, he left OpenAI because he no longer wanted to contribute to technologies that he believed would bring society more harm than benefit.
-
OpenAI Whistleblower Disgusted That His Job Was to Vacuum Up Copyrighted Data to Train Its Models
The ex-staffer, a 25-year-old named Suchir Balaji, worked at OpenAI for four years before deciding to leave the AI firm due to ethical concerns. As Balaji sees it, because ChatGPT and other OpenAI products have become so heavily commercialized, OpenAI's practice of scraping online material en masse to feed its data-hungry AI models no longer satisfies the criteria of the fair use doctrine. OpenAI — which is currently facing several copyright lawsuits, including a high-profile case brought last year by the NYT — has argued the opposite.
"If you believe what I believe," Balaji told the NYT, "you have to just leave the company."
-
When does generative AI qualify for fair use?
While generative models rarely produce outputs that are substantially similar to any of their training inputs, the process of training a generative model involves making copies of copyrighted data. If those copies are unauthorized, this could potentially constitute copyright infringement, depending on whether or not the specific use of the model qualifies as "fair use". Because fair use is determined on a case-by-case basis, no broad statement can be made about when generative AI qualifies for fair use. Instead, I'll provide a specific analysis for ChatGPT's use of its training data, but the same basic template will also apply to many other generative AI products.