Published: Jul 8, 2025
AI Systems

EU AI Act: Strategic navigation for the 2026 high-risk AI imperative - Part 3

August 2, 2026, marks the date on which the AI Act's most extensive and demanding obligations yet, those governing high-risk AI systems, come into force. This cornerstone of the Act places significant responsibility on both providers and deployers of these critical technologies.


What defines high-risk AI?

The Act categorizes high-risk AI systems primarily in two ways:

* Safety components of regulated products (Annex I):
AI systems integrated into products already subject to EU harmonization legislation that require third-party conformity assessment. Examples include AI within medical devices, aviation components, critical infrastructure, automotive safety systems, toys, or lifts. Here, the AI is essential to the broader product's safety function.

* Standalone AI in specific critical use cases (Annex III):
AI systems operating in sensitive areas with significant potential impact on fundamental rights. These categories include:

- Biometrics: Remote biometric identification, excluding systems used solely to verify a person's identity.
- Education and vocational training: AI used for access and admission, evaluating learning outcomes, or detecting prohibited behavior during tests.
- Employment, worker management, and access to self-employment: AI used for recruitment, selection, decisions affecting work relationships, task allocation, or performance monitoring.
- Access to essential private and public services & benefits: AI used for evaluating eligibility for public assistance, healthcare, creditworthiness (excluding fraud detection), and emergency service dispatch.
- Law enforcement: AI used for polygraphs, risk assessment of individuals, forensic analysis, and predictive policing.
- Administration of justice & democratic processes: AI used to assist judicial authorities or influence elections.

The substantial modification clause

A critical provision for existing products is the "substantial modification" clause. An AI system placed on the market before August 2, 2026, that later undergoes a "substantial modification" (a change not foreseen in the provider's initial conformity assessment that affects the system's compliance or modifies its intended purpose) becomes subject to the full high-risk obligations and requires a fresh conformity assessment. Continuous monitoring of your AI systems for such changes is therefore essential.

Key obligations for high-risk AI systems (providers and deployers):

- Risk management system:
Implement a rigorous risk management system across the AI system's lifecycle.

- Data governance and management:
Ensure high-quality, unbiased training, validation, and testing datasets.

- Technical documentation:
Maintain comprehensive technical documentation for conformity assessment.

- Record-keeping (logging):
Implement automatic logging of operational events throughout the system's lifetime (a minimal sketch follows this list).

- Transparency and information to deployers:
Ensure transparency and provide clear information to deployers for compliant use.

- Human oversight:
Design for effective human oversight.

- Accuracy, robustness, and cybersecurity:
Ensure accuracy, robustness, and cybersecurity.

- Conformity assessment:
Undergo conformity assessment. Most Annex III systems can follow the internal-control procedure, while biometric systems, and Annex I products under their sectoral legislation, typically involve a Notified Body.

- CE marking and EU declaration of conformity:
Affix CE marking and issue an EU Declaration of Conformity.

- Registration:
Register high-risk AI systems in the EU database.

- Post-market monitoring:
Implement post-market monitoring to address risks after deployment.

- Incident reporting:
Report serious incidents and malfunctions to market surveillance authorities.

- Fundamental Rights Impact Assessment (FRIA):
Deployers (particularly public bodies or those providing public services) must conduct Fundamental Rights Impact Assessments (FRIA) before deployment.
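To make the record-keeping obligation concrete, here is a minimal sketch of automatic, append-only event logging in Python. The field names and the `audit_log.jsonl` path are illustrative assumptions, not terminology from the Act.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Illustrative sketch: the AI Act requires automatic logging of events over
# the system's lifetime; the exact fields below (model version, event type,
# details) are assumptions, not prescribed by the regulation.
LOG_PATH = Path("audit_log.jsonl")

def log_event(system_id: str, model_version: str, event_type: str,
              details: dict) -> None:
    """Append one timestamped, structured event as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "event_type": event_type,  # e.g. "inference", "human_override", "error"
        "details": details,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a human override of an automated decision.
log_event("credit-scoring-v2", "2.3.1", "human_override",
          {"original_decision": "reject", "final_decision": "approve"})
```

An append-only, structured format keeps records easy to audit internally and to hand over to a market surveillance authority on request.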

A product leader's blueprint for high-risk AI readiness

For product leaders, navigating the comprehensive high-risk AI obligations demands a focused and strategic blueprint for action, including:

Identify high-risk AI systems: Conduct a comprehensive inventory, categorizing all AI systems in your portfolio by Annex I and Annex III definitions.
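A portfolio inventory can start as a simple typed record per system. The sketch below shows one possible shape in Python; the category labels mirror Annex I and Annex III, while the fields and the example entry are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative inventory record; field names are assumptions, not Act terminology.
class RiskBasis(Enum):
    ANNEX_I = "safety component of a regulated product"
    ANNEX_III = "standalone critical use case"
    NOT_HIGH_RISK = "outside both annexes"

@dataclass
class AISystemRecord:
    name: str
    owner: str                       # accountable product team
    basis: RiskBasis
    use_case: str                    # e.g. "recruitment screening"
    on_market_before_aug_2026: bool  # relevant to the substantial modification clause

portfolio = [
    AISystemRecord("cv-screener", "HR Platform", RiskBasis.ANNEX_III,
                   "recruitment screening", on_market_before_aug_2026=True),
]

high_risk = [s for s in portfolio if s.basis is not RiskBasis.NOT_HIGH_RISK]
print(f"{len(high_risk)} system(s) need a compliance roadmap")
```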

Gap analysis and roadmap: For each identified high-risk system, perform a detailed gap analysis against August 2026 requirements and develop a precise compliance roadmap.

Conformity assessment planning: Determine the conformity assessment route for each system, engaging Notified Bodies now where one is required.

Data strategy reinforcement: Strengthen data governance, quality, and bias mitigation strategies for all high-risk AI datasets.

Operationalize monitoring and reporting: Establish robust post-market monitoring and incident reporting mechanisms.
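On the reporting side, a simple tracker that pairs each serious incident with its awareness date and deadline can prevent missed filings. The 15-day default below reflects the Act's general serious-incident timeline, but shorter deadlines apply to some incident types, so verify per case; the record shape is otherwise an assumption.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative incident tracker. The 15-day default mirrors the Act's general
# serious-incident reporting timeline; some incident types have shorter
# deadlines, so treat this as a sketch to adapt, not legal guidance.
@dataclass
class SeriousIncident:
    system_id: str
    description: str
    aware_on: date
    reporting_days: int = 15
    reported: bool = False

    @property
    def report_by(self) -> date:
        return self.aware_on + timedelta(days=self.reporting_days)

    def overdue(self, today: date) -> bool:
        return not self.reported and today > self.report_by

incident = SeriousIncident("credit-scoring-v2",
                           "erroneous denials affecting applicants",
                           aware_on=date(2026, 9, 1))
print("Report to the market surveillance authority by", incident.report_by)
```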

Substantial modification policy: Develop clear internal policies for assessing "substantial modifications" to prevent unintended new compliance triggers.
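A lightweight release gate can force this assessment at every change. In the hypothetical sketch below, the questions paraphrase the Act's definition of a substantial modification (a change not foreseen in the original conformity assessment that affects compliance or the intended purpose); the function and parameter names are invented for illustration.

```python
# Hypothetical release-gate check; parameter names are invented for illustration.
def is_substantial_modification(foreseen_in_assessment: bool,
                                affects_compliance: bool,
                                changes_intended_purpose: bool) -> bool:
    """Flag changes that likely trigger a fresh conformity assessment."""
    return (not foreseen_in_assessment) and (
        affects_compliance or changes_intended_purpose
    )

# Example: retraining on a new data source shifts the system's behavior.
if is_substantial_modification(foreseen_in_assessment=False,
                               affects_compliance=True,
                               changes_intended_purpose=False):
    print("Escalate: fresh conformity assessment likely required")
```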

Training for deployers: If you are a provider, develop comprehensive training and documentation for deployers on their obligations, especially human oversight and FRIA.

The Benelux lens: navigating regional implementation nuances

While the EU AI Act provides a harmonized framework, its implementation will vary through national authorities. For organizations in the Benelux region, especially with the Netherlands as a key market, understanding these local dynamics is paramount. Each Member State will designate competent authorities, meaning enforcement nuances, national guidelines, and regulatory emphasis may differ.

🇳🇱 Netherlands: a proactive and influential stance

The Netherlands has shown a highly proactive approach to digital governance, including AI. Key supervisory players are likely to include the Dutch Data Protection Authority (AP), the Dutch Authority for Digital Infrastructure (RDI), and sectoral regulators like the Dutch Central Bank (DNB) and the Authority for Financial Markets (AFM). For product leaders with a strong presence in the Netherlands, this translates to:

Engagement with national guidance
Beyond EU guidelines, actively monitor and engage with specific guidance from Dutch authorities. The Netherlands often provides practical implementation tools, offering invaluable insights for private sector compliance.

Existing regulatory alignment
Dutch regulators, particularly in finance and healthcare, have already issued guidance on responsible AI. Expect the AI Act's principles to be layered on top of these existing expectations, so products in the Dutch market will need to satisfy both.

Potential for regulatory sandboxes
The Netherlands supports regulatory sandboxes for innovation and compliance. Monitoring or participating in them can offer valuable insights and help demonstrate adherence for complex AI systems, especially ahead of the 2026 high-risk obligations.

Emphasis on data ethics and transparency
Given its strong focus on privacy, the Netherlands will likely emphasize rigorous data governance, explainability, and transparency for AI systems, particularly in high-impact areas.

🇧🇪 Belgium and 🇱🇺 Luxembourg: developing and adapting frameworks

Belgium and Luxembourg are also developing national implementation frameworks, often adapting from existing data protection and digital policy structures:

Designated authorities
Both countries are designating competent authorities. For instance, Luxembourg's Draft Law No. 8476 expands tasks for the CNPD, and Belgium is designating local authorities for fundamental rights protection. Tracking these designations is crucial for understanding local enforcement.

Focus on ethical AI and data protection
With robust GDPR enforcement, both countries will emphasize the intersection of AI governance with privacy, data minimization, and ethical considerations. Products handling personal data with AI must comply with both the AI Act and GDPR.

Industry-specific interpretations
Sectoral regulators in Belgium and Luxembourg will likely issue specific guidance. Monitoring this will be crucial for tailored compliance, particularly for high-risk AI systems whose obligations deepen in August 2026.

For product leaders in Benelux

Tailoring your product strategy means:

Monitor national designations
Actively track the designation and mandates of competent authorities across the Benelux region so you know whom to engage for GPAI (August 2025) and high-risk AI (August 2026) compliance.

Localize compliance roadmaps
While EU-wide obligations form your baseline, localize compliance roadmaps to account for national guidance, cultural nuances, or regulatory emphases within Benelux, especially for granular high-risk AI requirements.

Engage with local ecosystems
Join national AI coalitions, industry associations, and regulatory dialogues to share best practices, gain clarity on interpretations, and help shape the AI Act's practical application.

The path forward: strategic compliance as a differentiator

The EU AI Act's 2025 and 2026 mandates are more than a compliance burden; they are an opportunity to redefine trust and excellence in AI. Proactively addressing prohibitions, preparing for GPAI obligations, and rigorously planning for high-risk AI compliance offers a significant competitive edge. This commitment to ethical AI bolsters customer confidence and ensures frictionless market access across the European Union, particularly for the vital Benelux markets.

For product leaders, this means integrating regulatory foresight directly into product lifecycle management. It's about building AI that not only innovates but also instills confidence and champions responsible technological advancement. The time for proactive engagement is now.
