
Two weeks ago, we wrote about solicitors who had to attend training courses after AI errors.
The discussion focused mainly on training. But as mentioned in the original post (https://www.linkedin.com/posts/15529881irm_aigovernance-euaiact-responsibleai-activity-7432420348341657602-AuHt?utm_source=share&utm_medium=member_desktop&rcm=ACoAACNKg50Bg3BX383TAaRK76249S7oywGC77U), the EU AI Act requires organisations to ensure that employees who use AI have sufficient AI literacy.
This is a legal obligation.
However, training is only part of the story.
AI compliance goes beyond a training course. In the previous post, we already pointed to the need for structural assurance.
The EU AI Act also requires:
- Risk assessment
- Documentation
- Human oversight
- Monitoring
- Demonstrable control measures
To embed all of this structurally, ISO 42001 offers practical tools for managing AI within a PDCA (Plan-Do-Check-Act) cycle in a controlled manner, rather than using it ad hoc.
In concrete terms, this means, for example:
- An AI algorithm register
- An AI Impact Assessment (AIIA)
- Integration with the risk register
- A structured library of AI risks
- Continuous monitoring and reassessment
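To make the first three items above tangible, here is a minimal sketch of what one entry in an AI algorithm register could look like, with links into a risk register and a simple reassessment check. The field names and the one-year reassessment interval are illustrative assumptions, not a prescribed ISO 42001 schema and not IRM360's actual data model.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRegisterEntry:
    """Hypothetical AI algorithm register entry (illustrative fields only)."""
    name: str                 # system or model name
    purpose: str              # intended use, in plain language
    risk_category: str        # e.g. an EU AI Act tier: "minimal", "limited", "high"
    owner: str                # accountable role, not an individual
    human_oversight: str      # how a human can review, intervene or override
    last_assessed: date       # date of the most recent AI Impact Assessment (AIIA)
    linked_risks: list[str] = field(default_factory=list)  # IDs in the risk register

    def reassessment_due(self, today: date, interval_days: int = 365) -> bool:
        """Flag entries whose periodic reassessment is overdue."""
        return (today - self.last_assessed).days > interval_days

# Example usage
entry = AIRegisterEntry(
    name="CV screening model",
    purpose="Pre-rank incoming job applications",
    risk_category="high",
    owner="HR process owner",
    human_oversight="Recruiter reviews every rejection before it is sent",
    last_assessed=date(2024, 1, 15),
    linked_risks=["R-041", "R-077"],
)
print(entry.reassessment_due(date(2025, 6, 1)))  # True: more than a year has passed
```

Even a lightweight structure like this makes oversight demonstrable: each entry names an owner, a risk tier, linked risks and a reassessment date that monitoring can check automatically.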
And yes, training too.
But then as part of integrated AI risk awareness, linked to concrete risks and responsibilities.
Within IRM360, we have developed this in our AI Management System (AIMS), fully aligned with ISO 42001 and supporting EU AI Act compliance.
So the real question is not just:
‘Are our employees trained?’
But above all:
‘Can we demonstrably show that our use of AI is controlled, monitored and compliant?’
👉 For organisations that want to get started with this in concrete terms, we are happy to discuss how you can set this up in a pragmatic and demonstrable way.
Request your demo here: https://www.irm360.nl/boek-hier-uw-demo/