Key requirements for high-risk AI systems, according to ML6

April 3rd marked another successful edition of our AI Meet-Up and Open Office Day. During the Meet-Up, we were joined by ML6’s Michiel Van Lerbeirghe and Pauline Nissen, who guided us through an exploration of the AI Act, and especially what it means for high-risk AI systems.

The AI Act defines an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” The Act will adopt a ‘risk-based approach’, meaning that the (potential) risk of an AI system will determine which rules will apply.

The speakers also gave an overview of the high-risk AI systems in light of the AI Act and gave guidelines on how to classify an AI system based on real-life use cases. 💡 How to make sure you are compliant, once your system is classified as high-risk? In summary, these are the key requirements for high-risk AI systems:

1. Risk-management system, to have a continuous and iterative process aimed at identifying and mitigating risks (in security, ethics, etc.).

2. Data governance, to make sure the system performs as intended and avoid discrimination. Ensure high-quality data (representative, relevant, and to the best extent possible complete and free of bias and errors) and documentation.

3. Technical documentation, to understand the development and functionality of your AI system.

4. Record-keeping (logging), to allow for automatic recording and ensure traceability of the system.

5. Transparency, to understand how the system works and comprehend its strengths and limitations.

6. Human oversight, to be designed and developed in a way that the system can be overseen by natural persons, ensuring human intervention where necessary (e.g. for safety reasons).

7. Accuracy & cyber security, to be resilient against attempts to alter the high-risk system’s use. An example of an AI vulnerability is prompt injection, which allows hackers to change the behavior of an application through regular application input.

Reach out to ML6 or our speakers if you’re interested in further exploring the topic. Thanks to Michiel and Pauline for their insightful and interactive talk, and to everyone who joined!

New visuals AI Innovation Center

Hub news! 🧡 As some of you may have already seen, AI Innovation Center has recently upgraded the event space, coworking area, and one of the main […]

NAXCON joins the coworking community of the AI Innovation Center

We are happy to welcome NAXCON BV to the AI Innovation Center, as they become part of our coworking community. Founded in 2023, NAXCON BV is a […]

Our takeaways from the TU/e Open Lecture with Carlo van de Weijer

Following last week’s insightful open lecture by Carlo van de Weijer, general manager of EAISI – Eindhoven Artificial Intelligence Systems Institute, here are our main takeaways:  The […]