Mandrake's Gaël Duval Debunks Clickbait Nonsense From a ZDNet Non-Coder Pushing Bot-Made 'Code' (Plagiarism Done Poorly)
Gaël Duval has published "Why AI won't 'Kill Open Source'", in which he says he hears "people say that open source will not survive the rise of generative AI. I understand the angle, but it leaves me puzzled."
LLMs for code are a bad joke (for many reasons, not only legal ones), and Duval responds to the buffoon from ZDNet, David Gewirtz (we have debunked his nonsense many times before; he's a clickbait artist).
Duval has written a lot of code in his lifetime (across many past projects), unlike Gewirtz. Duval adds: "If you make the code proprietary, you lose these essential benefits: transparency, collaboration, sharing, freedom, whatever tool you used to create it."
Here is the key part, mostly about the licensing aspects:
Yes, AI can now produce working code in seconds. But I see that like a student who learned to code by reading open source examples. Models like OpenAI Codex have learned programming rules, syntax, and best practices by reading code and programming rules written by humans. They do not copy-paste code, they generate new code by following patterns and logic that already exist. It’s learning, not theft.
Attribution also doesn’t change. The authorship of code remains with humans: the ones who drive the AI system, decide what to generate, what to keep, and what to publish. AI is a tool, not an author. The person who commits and shares the code is still the responsible contributor, just like before.
So what does AI really change? Not much. The principles of open source remain the same. The tools evolve, but the model stays. Humans are still responsible for what they publish. The code is still open to review, improvement, and redistribution.
LLMs are terrible at code quality, and people don't understand what's being generated, so there is additional overhead. That's why the so-called pioneer or 'inventor' of so-called 'vibe coding' does not engage in any 'vibe coding' himself. He understands how much it sucks.
Duval ends with this: "When machines can write code, we still need a way to check what they do, to trust the result, and to improve it together. That is open source. And that doesn’t change."
A lot more time is spent testing, debugging, and properly understanding code (or edge cases) than typing "code" as in words that define behaviour. Those who have actually done coding understand that spewing out more and more code isn't the objective; more code means more overhead, more of a burden so to speak. Concise, simple, and easy to read... those are the real goals. LLMs lack an actual understanding of the code they spew out. They're just good at producing lots of it. █

