
This weekend, the NOS reported that three solicitors received a warning for using AI (such as ChatGPT) in legal arguments, with references to rulings that turned out not to exist or to be about something else.
(The article: https://lnkd.in/eGAMGxmw)
Two of them have been required by the regulator to take an AI course.
What is evident here is not purely a technological problem that can be solved with training.
AI output is quickly treated as authoritative — a classic case of automation bias, reinforced by AI: ‘The computer must know better.’
The result:
● Outcomes that are not critically verified
● Errors that go unnoticed
We saw the same pattern recently in algorithmic decision-making (which, incidentally, was not exclusively attributable to AI): blind trust without effective controls leads to incorrect outcomes.
Professional responsibility remains with humans, even as the use of these systems keeps growing.
Training is, of course, important in this regard.
In fact, under the EU AI Act, organisations that deploy AI must ensure a sufficient level of AI literacy among their staff.
But training alone is not enough.
Responsible AI use requires clear frameworks, verification processes and supervision. It is a governance and control issue as much as a training issue.
In these cases, the errors were caught by judges in Arnhem, Rotterdam and Groningen.
But AI is now used far more widely: in legal, administrative, financial and operational work.
How many of these AI-generated errors are actually noticed? And how many are not?
Organisations that use AI structurally need frameworks that are risk-driven, verifiable and anchored in their governance.