AI Regulatory Sandboxes
Which rules and regulations apply to AI systems? How do you test the compliance of an AI system? Can you temporarily ignore certain rules in order to test the consequences of an AI system?
In its proposal for the AI Act, the European Commission included the possibility for Member States to launch Regulatory Sandboxes: projects in which AI systems can be developed and tested in a regulated and controlled environment, while certain requirements on the systems under test are temporarily suspended. These projects run under the supervision of the supervisory authorities of the Member State, which can stop a project if needed. Organisations that want to participate in a sandbox must submit a proposal for the project and a plan for their system, including an exit plan. They also need to explain how they will respect the privacy of the data subjects whose personal data will be used in the project.
The value of a Regulatory Sandbox lies primarily in the fact that the developer of an AI system does not immediately need to comply with all the legal requirements that the AI Act will introduce. This allows the developer to experiment with how to achieve compliance and to explore how to respect the intent behind the requirements. During this process, the supervisory authorities involved can give input and detect possible compliance problems caused by (outdated) wording of the law.
- What could be a legal framework for an AI Regulatory Sandbox?
- How can we design this so that SMEs can also make use of it?
- How do the supervisory authorities see these initiatives?
In the Netherlands, there is no experience with this yet. However, Norway and the UK have already organised successful Sandboxes.
On the 31st of March 2022, there will be a hybrid event to exchange (international) experiences and insights. We aim for this to be the starting point for the design of a legal framework for Dutch AI Regulatory Sandboxes.
Speakers:
1. Sofia Ranchordas; University of Groningen
2. Florina Pop; European Institute of Public Administration (EIPA)
3. Eirik Gulbrandsen; Norwegian Data Protection Authority
4. Huub Janssen; Planner Supervision on Artificial Intelligence, Dutch Telecom Supervisory Authority
5. Martijn van Grieken; Gimix, Developer of AI systems
This meeting is organised by the Netherlands AI Coalition, the Dutch Association for AI and Robot Law and AI knowledge platform LegalAIR.