IBM Did Not Fall Because of COBOL Vapourware, IBM Still Collapses Because It's Worthless, Way Overvalued, and Very Likely Cooks the Books

I've been working on coding-related tasks for over 30 years, and I can call "BS" on things that are patently BS. You would not be crazy to say out loud or confidently assert that language-to-language conversion (in the context of programming) is nothing new; it's actually an old thing, but now they paint it with a "HEY HI" (AI) brush. Heck, Ladybird reckons that some of its code can be converted into Rust; no need for "HEY HI" for that, as we did C conversions even over 20 years ago!
-
Ladybird adopts Rust, with help from AI
Really interesting case-study from Andreas Kling on advanced, sophisticated use of coding agents for ambitious coding projects with critical code. After a few years hoping Swift's platform support outside of the Apple ecosystem would mature, they switched tracks to Rust, their memory-safe language of choice, starting with an AI-assisted port of a critical library: [...]
-
Ladybird adopts Rust, with help from AI
But after another year of treading water, it’s time to make the pragmatic choice. Rust has the ecosystem and the safety guarantees we need. Both Firefox and Chromium have already begun introducing Rust into their codebases, and we think it’s the right choice for Ladybird too.
-
Ladybird indie web browser flutters toward Rust
How the choice was made may raise some eyebrows, though. He chose to use LLM-powered coding assistants to translate the C++ code into Rust, and then closely check that the structure of the resulting code matched the original and that it produced identical output. He chose to start with Ladybird's JavaScript interpreter because it's fairly self-contained, its stages and output are clearly defined, and it has good test coverage thanks to the ECMAScript Test Suite.
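The verification approach described above, closely checking that the port produces identical output on a good test suite, is essentially differential testing. A minimal sketch of the idea in Python (the `./js-cpp` and `./js-rust` command names in the usage comment are hypothetical stand-ins, not Ladybird's actual binaries):

```python
import subprocess

def differs(cmd_a, cmd_b):
    """Run two commands and report whether their stdout or exit code diverge."""
    a = subprocess.run(cmd_a, capture_output=True, text=True)
    b = subprocess.run(cmd_b, capture_output=True, text=True)
    return (a.stdout, a.returncode) != (b.stdout, b.returncode)

# Hypothetical usage: feed every conformance test to both the original
# interpreter and the port, and flag any divergence.
# for test in test262_files:
#     if differs(["./js-cpp", test], ["./js-rust", test]):
#         print("divergence on", test)
```

This only catches divergence the test suite exercises; it says nothing about legibility or maintainability of the translated code, which is the next problem.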
As an associate points out, "compilers are one thing," whereas "attempting to translate from one to another is completely different."
The reason you would not want to convert old COBOL programs to something else using an automated/autonomous backend (of some kind, never mind buzzwords) is:
1. The translated code can become less legible/comprehensible (there's more to it than neat indentation).
2. Nobody knows this new code, so maintenance can get a lot harder, increasingly unreliable or "touch and go".
3. There are translation errors that may be hidden and catastrophic, e.g. overpaying or underpaying millions of workers.
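The third point is easy to demonstrate. COBOL's fixed-point PIC fields do exact decimal arithmetic; a careless translation to binary floating point drifts silently. A minimal Python sketch (the ten-cent figure and the million iterations are illustrative, not from any real payroll system):

```python
from decimal import Decimal

# Accumulate a 0.10 (ten-cent) amount one million times:
# once in exact decimal, which is what COBOL fixed-point fields give you,
# and once in binary floating point, which a naive translation might use.
decimal_total = sum(Decimal("0.10") for _ in range(1_000_000))
float_total = sum(0.10 for _ in range(1_000_000))

print(decimal_total)          # exactly 100000.00
print(float_total)            # drifts away from 100000.0
print(float_total == 100000)  # False: the error is silent
```

The float version is off by a fraction of a cent here; in a real payroll run the error compounds across accounts and pay periods, and nothing crashes to warn you.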
As Junichi Uekawa (Debian) put it 2 days ago, based on direct experience: "AI generated code and its quality. It's hard to get larger tasks done and smaller tasks I am faster myself. I suspect this will change soon, but as of today things are challenging. Large chunks of code that's generated by AI is hard to review and generally of not great quality. Possibly two layers that cause quality issues. One is that the instructions aren't clear for the AI, and the misunderstanding shows; I could sometimes reverse engineer the misunderstanding, and that could be resolved in the future. The other is that probably what the AI have learnt from is from a corpus that is not fit for the purpose. Which I suspect can be improved in the future with methodology and improvements in how they obtain the corpus, or redirect the learnings, or how it distills the learnings. I'm noting down what I think today, as the world is changing rapidly, and I am bound to see a very different scene soon."
He says: "Large chunks of code that's generated by AI is hard to review and generally of not great quality."
Is that correct? Is that a universal issue? Can it ever be overcome? Or, as with chatbots (plain-text LLMs), have we already gone as far as feasible, such that any further work would only erode accuracy? There is a false assumption that things can only improve over time, but on an overpopulated planet air and water purity will only worsen over time. LLMs are like that too; moreover, they pollute the planet and cause price (hyper)inflation in the hardware sector, which also impacts the price of hosting.
What we're seeing in the news about COBOL this week (links omitted intentionally) is partly marketing spam/slop for a particular company which sells slop as "code".
Is this "HEY HI?"
Not really.
And the same is true for many things, including text translation, synthesis of voice, CG images/sounds, voice dictation etc.
It certainly seems that the "accomplishments" slop bros claim amount to taking existing things and repainting them as an "AI revolution".
The reason that IBM is collapsing is that more people realise that the upward inertia was based on nothing (Cramerism) and that the CFO is tangled up in the Kyndryl scandal (alleged accounting fraud).
IBM apologists are looking for excuses this week.
On the flip side, maybe IBM could use code that gets its accounting wrong. "COBOL is used in finance in part because of its precision. LLMs have no understanding. Period. So they cannot track what the translation needs to be precise with," an associate concludes. █
