Key requirements for high-risk AI systems, according to ML6

April 3rd marked another successful edition of our AI Meet-Up and Open Office Day. During the Meet-Up, we were joined by ML6’s Michiel Van Lerbeirghe and Pauline Nissen, who guided us through an exploration of the AI Act, and especially what it means for high-risk AI systems.

The AI Act defines an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” The Act takes a ‘risk-based approach’: the (potential) risk of an AI system determines which rules will apply.

The speakers also gave an overview of high-risk AI systems under the AI Act, using real-life use cases to show how to classify an AI system. 💡 How do you make sure you are compliant once your system is classified as high-risk? In summary, these are the key requirements for high-risk AI systems:

1. Risk-management system, to establish a continuous, iterative process for identifying and mitigating risks (security, ethics, etc.).

2. Data governance, to make sure the system performs as intended and avoids discrimination. Ensure high-quality data (representative, relevant, and, to the best extent possible, complete and free of bias and errors) and proper documentation.

3. Technical documentation, so that the development and functionality of your AI system can be understood.

4. Record-keeping (logging), to allow for automatic recording of events and ensure traceability of the system (a minimal logging sketch follows this list).

5. Transparency, so that users can understand how the system works and grasp its strengths and limitations.

6. Human oversight, meaning the system must be designed and developed so that natural persons can oversee it and intervene where necessary (e.g. for safety reasons).

7. Accuracy & cyber security, to be resilient against attempts to alter the high-risk system’s use. An example of an AI vulnerability is prompt injection, which allows hackers to change the behavior of an application through regular application input.

Reach out to ML6 or our speakers if you’re interested in further exploring the topic. Thanks to Michiel and Pauline for their insightful and interactive talk, and to everyone who joined!
