The landscape of AI regulations and standards in the EU is rapidly maturing. With its publication, the EU AI Act will establish the first comprehensive, risk-based set of rules for AI systems worldwide. In light of this development, proactive compliance preparation has become crucial for companies.
Every organisation is different. For the best fit – a tailored one – the ways in which organisations differ in structures, processes, technology, people, markets or strategy should also be reflected in their AI compliance. By integrating AI compliance into existing IT governance, risk management and compliance (GRC), organisations can effectively navigate complex requirements and unlock the full potential of AI as a value driver. To make this less abstract, we outline a strategic roadmap with steps your organisation can take to move quickly towards compliance with the EU AI Act. Broadly, these steps can be organised into four phases: pre-assessment, preparation, implementation and maintenance. Each reflects a phase necessary to establish holistic AI Act compliance management in an organisation.
First and foremost, stakeholders in the organisation must grasp the proposed EU AI Act and the implications of its key provisions in order to enable informed decisions. As part of broader AI use case management, this includes identifying AI systems according to the Act’s definition, understanding the different roles in the AI value chain, and categorising systems into risk classes. Organisations need to determine their own role in the value chain for their specific use case portfolio and then assess the compliance requirements for their AI systems depending on the risks they pose to health, safety and fundamental rights.
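As a minimal sketch of such a portfolio triage, the snippet below sorts use cases into the Act’s risk classes (prohibited, high, limited/transparency, minimal). The keyword-based screening rules are a deliberate simplification for illustration only – a real classification requires case-by-case legal analysis of the Act’s annexes.

```python
# Illustrative first-pass triage of an AI use case portfolio into the
# EU AI Act's risk classes. The marker lists below are hypothetical
# screening rules, loosely inspired by the Act's categories; they are
# NOT a substitute for a legal assessment.

RISK_CLASSES = ("prohibited", "high", "limited", "minimal")

PROHIBITED_MARKERS = {"subliminal manipulation", "social scoring"}
HIGH_RISK_MARKERS = {"biometric identification", "credit scoring", "recruitment"}
TRANSPARENCY_MARKERS = {"chatbot", "deepfake"}

def triage(use_case: str) -> str:
    """Return a first-pass risk class for a described use case."""
    description = use_case.lower()
    if any(m in description for m in PROHIBITED_MARKERS):
        return "prohibited"
    if any(m in description for m in HIGH_RISK_MARKERS):
        return "high"
    if any(m in description for m in TRANSPARENCY_MARKERS):
        return "limited"
    return "minimal"

# Hypothetical portfolio entries:
portfolio = [
    "Recruitment CV screening model",
    "Customer-service chatbot",
    "Internal document search",
]
classes = {uc: triage(uc) for uc in portfolio}
```

A first pass like this helps prioritise which systems need detailed legal review; every "prohibited" or "high" hit warrants immediate escalation.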
Furthermore, the staggered transition periods after final publication and the potential financial consequences of non-compliance should be carefully considered. For prohibited practices in particular – which include the broad category of subliminal manipulation techniques – fines can reach €35m or 7% of an organisation’s total worldwide annual turnover, whichever is higher. The impact of (accidental) non-compliance should therefore not be underestimated and warrants a diligent approach. A holistic use case portfolio assessment provides a clear understanding of the compliance requirements that apply to each system, enabling organisations to develop a tailored compliance strategy.
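The “whichever is higher” rule above reduces to a one-line calculation; the function name is illustrative:

```python
# Sanction ceiling for prohibited practices as described above:
# EUR 35 million or 7% of total worldwide annual turnover,
# whichever is higher. Illustrative helper, not legal advice.

def max_fine_prohibited_practices(turnover_eur: float) -> float:
    """Upper bound of the fine for prohibited practices."""
    return max(35_000_000.0, 0.07 * turnover_eur)

# For a turnover of EUR 1bn the percentage-based figure dominates
# (roughly EUR 70m); below EUR 500m turnover, the EUR 35m floor applies.
```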
Identification and prioritisation of areas for improvement and adjustment ensure that organisations can focus their efforts on critical compliance areas. A comprehensive gap analysis identifies areas of non-compliance and determines the extent to which existing procedures meet the requirements of the EU AI Act.
Based on the gap assessment, a tailored compliance strategy should be developed. This strategy includes mapping AI-specific compliance procedures along the lifecycle and integrating them with existing IT processes (e.g. the IT demand process, use case management, risk management, quality controls or third-party management). By aligning compliance efforts with existing structures, organisations can minimise disruption and transition smoothly to compliance with the EU AI Act. Specific actions and milestones must be defined for each compliance area, creating a comprehensive roadmap that outlines the necessary steps to achieve compliance.
Collaboration across functions and clear responsibilities, spanning technical as well as legal roles, are necessary to facilitate the required changes and enhancements for AI compliance. As a first step, organisations should develop a code of conduct that outlines ethical principles and guidelines for AI development, operation and use in accordance with the organisation’s values. An aligned training concept for AI governance, compliance and risk management ensures that employees have the knowledge and skills to speak a common language, navigate complex requirements, and pursue practical implementation throughout the AI lifecycle.
Standardising AI system design and development processes should also be considered. Establishing a control framework ensures that responsible AI practices, including AI risk management measures, are implemented at the different lifecycle stages. For example, implementing data governance measures at an early stage improves the quality and reduces the bias of AI systems later during operation. Transparency, explainability and human oversight mechanisms can also be integrated to enable clear explanations of AI system outputs and to facilitate human intervention when needed. The EU AI Act also places importance on proper technical documentation, which can be implemented using established best practices such as model cards, data sheets and system information templates. Additionally, post-market surveillance and incident response procedures should be established to monitor an AI system’s performance and address potential risks or incidents promptly. These are just a few examples of actionable work packages that can be started based on the conclusions of the gap assessment.
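To make the documentation point concrete, a model card can be kept as a structured, machine-readable record rather than free-form text. The fields and example values below are hypothetical; which fields an organisation actually needs depends on the Act’s documentation requirements for the system’s risk class.

```python
# A minimal sketch of a model card as a structured record. Field names
# follow common published model-card templates; they are an assumption,
# not the Act's mandated schema.

from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    risk_class: str
    training_data_summary: str
    known_limitations: list = field(default_factory=list)
    human_oversight_measures: list = field(default_factory=list)

# Hypothetical example entry:
card = ModelCard(
    model_name="credit-default-scorer-v2",
    intended_use="Internal credit risk pre-screening",
    risk_class="high",
    training_data_summary="Anonymised loan applications, 2015-2023",
    known_limitations=["Not validated for self-employed applicants"],
    human_oversight_measures=["Analyst review of all rejections"],
)
documentation = asdict(card)  # plain dict, serialisable for audits/registries
```

Keeping documentation structured in this way makes it easy to aggregate across the portfolio and to feed internal AI registries or audit workflows.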
Monitoring developments around the EU AI Act, the AI Liability Directive and emerging standards such as those developed by CEN-CENELEC JTC 21 is vital for maintaining adherence to regulatory requirements. Regularly evaluating your compliance efforts allows gaps or areas that require further attention to be identified and addressed with timely corrective actions. Adaptation is a key aspect of compliance in the ever-evolving AI landscape: as regulations and requirements evolve, keeping your compliance management system up to date requires vigilance. Expert advisory services can keep you informed about such changes and provide guidance on adapting your compliance strategy accordingly.