EPO Presentation Bemoans Misuse of Slop in Decision-Making on Patents and in Classification (Which is Likely Illegal Too)
We already saw quite a lot published about SLOP (or slop) at the EPO. We reproduced what we could get our hands on.
The staff voted against that, protesting what it considered illegal (measures adopted without resistance from the overseeing body, which is always complicit).
EPO management, which lacks a background in (or grasp of) science, loves to exaggerate the potential of slop, falsely asserting that it is "intelligent" (false marketing). The EPO's examiners know better.
Slop isn't just inadequate for patents. It's of little benefit in any field. There is hardly any viable use case for it, except replacing staff with some automaton which only pretends to be capable of carrying out the same tasks. Sooner or later companies come to realise, and sometimes even openly admit, that this was wrong.
We habitually mention failed use cases of LLMs on the Web, e.g. sites that used to produce news but now publish fake news made with LLM slop. Here is the latest example from Camila Nogueira (BetaNews), if that's a real person at all:
It is, as usual, pure garbage. Even a quick read shows it has no actual comprehension of the topic; it's bland filler, a "word soup" at best.
Would you trust such nonsense to handle patents and grant monopolies to companies (patents give them power over the European market)? Well, the EPO's staff says nobody should.
Here is the message circulated today among EPO staff. It's from the Central Staff Committee:
Dear Colleagues,

Staff of the European Patent Office gathered on 5 June 2025 in General Assemblies attended by 906 participants for Munich and The Hague, and 53 in Berlin.
In the General Assembly, the staff representation made a presentation on “AI and Quality” analysing whether the EPO approach to AI is “human-centric”.
The presentation explains that the EPO definition limiting “human-centric” AI to final decisions taken by humans is insufficient, and notes the lack of a clear boundary between “assistance” and “decision-making”.
Further, the presentation addresses the ongoing implementation of AI-automated patent classification supplanting authorised classifiers and causing quality concerns among managers.
Finally, the presentation details the challenges and risks of AI on environmental sustainability, on the appropriateness of AI models vs deterministic algorithms, on the necessity to encourage critical thinking for reviewing AI results and on the increasing gap between power users of chatbots vs newbies.
Here are all the slides:
They would like to think that eventually they can grant monopolies without dissent (using black boxes with no actual comprehension), or without even paying salaries. That won't work, and even stakeholders will get upset once they see that the "quality" they pay for is subpar and likely in violation of many rules. █