A Strategic Roadmap for Companies

Navigating the Path to EU AI Act Compliance

  • Article
  • 10 minute read
  • 03 Jun 2024

The landscape of AI regulation and standards in the EU is maturing rapidly. With its publication, the EU AI Act will establish the first comprehensive, risk-based set of rules for AI systems worldwide. In light of this development, proactive compliance preparation has become crucial for companies.


Every organisation is different. For a tailored fit, the ways in which organisations differ in structure, processes, technology, people, markets and strategy should also be reflected in their AI compliance. By integrating AI compliance into existing IT governance, risk management and compliance (GRC) structures, organisations can effectively navigate complex requirements and unlock the full potential of AI as a value driver. To make this concrete, we outline a strategic roadmap with steps your organisation can take to move quickly towards compliance with the EU AI Act. Broadly, these steps fall into four phases: pre-assessment, preparation, implementation and maintenance. Each phase is necessary to establish holistic AI Act compliance management in an organisation.


1. Understanding the EU AI Act and Its Implications

First and foremost, stakeholders in the organisation must grasp the EU AI Act and the implications of its key provisions in order to enable informed decisions. As part of broader AI use case management, this includes identifying AI systems according to the Act’s definition, understanding the different roles in the AI value chain, and categorising systems into risk classes. Organisations need to determine their own role in the value chain for their specific use case portfolio and then assess the compliance requirements for their AI systems depending on the risks those systems pose to health, safety and fundamental rights.
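As an illustration, the Act’s risk-based taxonomy can be sketched as a simple triage helper. The tier names follow the Act’s structure; the boolean inputs are hypothetical placeholders, and a real assessment requires legal review of each system against the Act’s definitions:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned practices (Art. 5)
    HIGH = "high"              # e.g. Annex III use cases
    LIMITED = "limited"        # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"        # no specific obligations

def classify_use_case(uses_subliminal_manipulation: bool,
                      is_annex_iii_use_case: bool,
                      interacts_with_humans: bool) -> RiskTier:
    """Toy triage of an AI use case into the Act's risk tiers.

    The inputs are simplified stand-ins for the legal criteria;
    this only illustrates the tiered, most-restrictive-first logic.
    """
    if uses_subliminal_manipulation:
        return RiskTier.PROHIBITED
    if is_annex_iii_use_case:
        return RiskTier.HIGH
    if interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a CV-screening tool falls under Annex III (employment)
print(classify_use_case(False, True, True).value)  # prints "high"
```

Running such a triage over the whole use case portfolio yields the risk profile on which the subsequent gap assessment can build.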

Furthermore, the staggered transition periods after final publication and the potential financial implications of non-compliance should be carefully considered. For prohibited practices in particular, which include the broad category of subliminal manipulation techniques, fines can reach €35m or 7% of an organisation’s total worldwide annual turnover, whichever is higher. The impact of (accidental) non-compliance should therefore not be underestimated and warrants a diligent approach. A holistic use case portfolio assessment provides a clear understanding of the compliance requirements that apply to each system, enabling organisations to develop a tailored compliance strategy.
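The "whichever is higher" rule for prohibited practices is a simple maximum. A minimal sketch of the arithmetic (the function name is ours; amounts are the €35m / 7% figures cited above):

```python
def max_fine_prohibited_practice(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for prohibited practices:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# For a company with EUR 1bn turnover, 7% (EUR 70m) exceeds the EUR 35m floor
print(max_fine_prohibited_practice(1_000_000_000))  # prints 70000000.0
```

For any organisation with a turnover above €500m, the turnover-based component dominates, which is why large enterprises in particular cannot treat the fixed amount as the worst case.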

2. Conducting a Gap Assessment and Developing a Compliance Strategy

Identification and prioritisation of areas for improvement and adjustment ensures that organisations can focus their efforts on critical compliance areas. A comprehensive gap analysis enables this identification of areas of non-compliance and determines the extent to which existing procedures meet the requirements of the EU AI Act.

Based on the gap assessment, a tailored compliance strategy should be developed. This strategy includes mapping AI-specific compliance procedures along the lifecycle and integrating them with existing IT processes (e.g. the IT demand process, use case management, risk management, quality controls or third-party management). By aligning compliance efforts with existing structures, organisations can minimise disruption and transition smoothly to compliance with the EU AI Act. Specific actions and milestones must be defined for each compliance area, creating a comprehensive roadmap that outlines the necessary steps to achieve compliance.

3. Implementing Change and Augmenting Existing Processes

Collaboration across functions, with clear responsibilities spanning technical as well as legal roles, is necessary to facilitate the required changes and enhancements for AI compliance. As a first step, organisations should develop a code of conduct that outlines ethical principles and guidelines for AI development, operation and use in accordance with the organisation’s values. An aligned training concept for AI governance, compliance and risk management ensures that employees have the knowledge and skills to speak a common language, navigate complex requirements, and pursue practical implementation throughout the AI lifecycle.

Standardising AI system design and development processes should also be considered. Establishing a control framework ensures that responsible AI practices, including AI risk management measures, are implemented at the different lifecycle stages. For example, implementing data governance measures at an early stage improves the quality and reduces the bias of AI systems later in operation. Transparency, explainability and human oversight mechanisms can also be integrated to provide clear explanations for AI system outputs and to facilitate human intervention where needed. The EU AI Act also places importance on proper technical documentation, which can be implemented using established best practices such as model cards, data sheets and system information templates. Additionally, post-market surveillance and incident response procedures should be established to monitor an AI system’s performance and address potential risks or incidents promptly. These are just a few examples of actionable work packages that can be started based on the conclusions of the gap assessment.
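Documentation practices such as model cards are easiest to enforce when kept as structured, machine-readable records rather than free-form documents. A minimal sketch, assuming a simple in-house schema (the field names and example values are illustrative, not the Act’s mandated documentation format):

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Minimal model card record; fields are illustrative, not a
    mandated schema. Structured records make completeness checks
    and audits straightforward."""
    model_name: str
    intended_use: str
    training_data_summary: str
    known_limitations: list = field(default_factory=list)
    human_oversight_measures: list = field(default_factory=list)

card = ModelCard(
    model_name="cv-screening-v2",
    intended_use="Ranking job applications for human review",
    training_data_summary="Anonymised historical applications, 2018-2023",
    known_limitations=["Performance untested on non-EU CV formats"],
    human_oversight_measures=["Recruiter reviews every automated ranking"],
)
# asdict() turns the record into a plain dict for export or review
print(asdict(card)["model_name"])  # prints "cv-screening-v2"
```

A governance process can then require that every field is populated before a system moves to the next lifecycle stage, turning documentation from an afterthought into a gate.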

Tailored AI compliance building on existing IT structures and processes

4. Monitoring and Adapting Ongoing Compliance

Monitoring developments around the EU AI Act, the AI Liability Directive and emerging standards such as those developed by CEN-CENELEC JTC 21 is vital for maintaining adherence to regulatory requirements. By regularly evaluating your compliance efforts, any gaps or areas requiring further attention can be identified, allowing for timely corrective action. Adaptation is a key aspect of compliance in the ever-evolving AI landscape: as regulations and requirements change, keeping your compliance management system up to date requires vigilance. Expert advisors can help you stay informed about such changes and provide guidance on adapting your compliance strategy accordingly.

Key Takeaways

  • Proactive compliance preparation for the EU AI Act is essential for organisations aiming to thrive in the AI-driven future. A strategic roadmap provides organisations with the necessary steps to navigate the regulatory landscape and achieve compliance.
  • By integrating AI compliance within existing structures and processes, organisations can unlock the full potential of AI while ensuring adherence to the EU AI Act.
  • At PwC, we can support you in embarking on this strategic roadmap to compliance and position your organisation for success in the evolving AI landscape.

Author

Hendrik Reese

Hendrik Reese, Partner at PwC Germany and Responsible AI Lead. Hendrik supports clients from all industries in realising the opportunities of AI transformation through governance and digital trust. He has been involved with Responsible AI since 2017. He sees the EU AI Act not as a compliance task, but as an opportunity to establish successful AI transformation in companies.
