Artificial intelligence (AI) is a key technology in the digital transformation and is already present in many products and processes today. Trust in the reliability, transparency and fairness of AI is so crucial for widespread use that the European Union has drafted its own regulation for AI – the EU AI Act. The regulation aims to set rules for AI systems, promote innovation and protect EU citizens.
It calls for governance that makes the risks of AI systems manageable throughout their entire lifecycle. Implementation presents challenges, but also opportunities for organizations. With the right approach, they can improve their AI quality, demonstrate social responsibility and take a pioneering role in digital transformation with AI.
To succeed with AI in the competitive European market, regulated organizations will need to comply with a wide range of regulations, build a culture of trust and ensure transparency throughout the AI lifecycle.
The EU AI Act is a regulation proposed by the European Commission to establish harmonized rules for AI systems. In the so-called trilogue, the Commission, the Parliament and the Council of the European Union agreed on a joint version of the regulation in December 2023 after lengthy negotiations; publication is expected in 2024. The regulation is expected to provide for a transitional period of 24 months during which organizations must implement the requirements, and non-compliance can lead to significant fines and liability risks. Prohibitions, however, take effect after just 6 months, and requirements for general-purpose AI (GPAI) after just 12 months.
AI systems are generally regulated on the basis of the risks they pose. Some systems are prohibited outright, while others must meet specific requirements before they may be used. All other systems can initially be operated without further restrictions, although the Commission reserves the right to extend the list of regulated systems should they turn out to pose considerable risks.
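To make the tiered logic concrete, here is a minimal, hypothetical Python sketch of how a compliance tool might model these risk tiers. The tier names, example use cases and the keyword lookup are our own illustrative assumptions; an actual classification follows the Act's annexes and requires legal assessment.

```python
from enum import Enum, auto

class RiskTier(Enum):
    PROHIBITED = auto()    # banned practices
    HIGH_RISK = auto()     # comprehensive documentation, monitoring, quality duties
    LIMITED_RISK = auto()  # transparency obligations
    MINIMAL_RISK = auto()  # no further restrictions, for now

# Illustrative, non-exhaustive examples; the authoritative lists are the Act's annexes.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_USES = {"credit_scoring", "recruitment_screening", "critical_infrastructure"}

def assess_risk_tier(use_case: str, interacts_with_persons: bool = False) -> RiskTier:
    """Toy triage of a use case into a risk tier (not a legal assessment)."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH_RISK
    if interacts_with_persons:
        return RiskTier.LIMITED_RISK
    return RiskTier.MINIMAL_RISK

print(assess_risk_tier("credit_scoring").name)  # HIGH_RISK
print(assess_risk_tier("chatbot", True).name)   # LIMITED_RISK
```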
High-risk AI systems are the central subject of the regulation; they must meet comprehensive documentation, monitoring and quality requirements. Providers of high-risk AI systems bear the main burden of these requirements.
Furthermore, general-purpose AI (GPAI) systems and foundation models are subject to stricter regulation and must meet special transparency requirements. If they additionally pose systemic risks, stricter rules apply to their quality and risk management as well as to reporting to government agencies.
All AI systems that interact with natural persons must inform those persons that they are interacting with an AI system.
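What this transparency obligation could look like at the application level, sketched for a hypothetical chat service; `generate` is a stand-in for the actual model call, not a real API:

```python
DISCLOSURE = ("Note: you are interacting with an AI system. "
              "Responses are generated automatically.")

def generate(message: str) -> str:
    # Stand-in for the real model call (assumption for illustration).
    return f"Echo: {message}"

def respond(message: str, first_turn: bool = False) -> str:
    """Prepend an AI-use disclosure on the first turn of a conversation."""
    answer = generate(message)
    return f"{DISCLOSURE}\n\n{answer}" if first_turn else answer

print(respond("Hello", first_turn=True))
```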
In addition to the general regulation of AI through the EU AI Act, there are other regulations that are relevant for AI use cases: Horizontal ones, such as the GDPR or the proposed EU Data Act, and vertical or sectoral ones, such as the EU Medical Devices Regulation (MDR) or the German Regulation on the Approval and Operation of Motor Vehicles with Autonomous Driving Functions in Specified Operating Areas (AFGBV).
Our AI Act Readiness Questionnaire helps you assess your readiness and identify potential compliance challenges ahead. In our experience working with many clients, the requirements can have a substantial impact on how you deal with all AI-related topics. Developing and implementing suitable AI governance, risk and compliance management systems efficiently, building on existing processes and structures, will therefore make a considerable difference. Let's get started and see where you stand in relation to the EU AI Act.
The key to sustainable value creation with AI lies in successfully linking quality, regulation and scaling. To achieve this, recognized standards, best practices and appropriate tools are needed for the safe and efficient development and operationalization of AI systems. In particular, flexible data, risk and lifecycle management systems are indispensable to enable organizations to rapidly and safely transition their AI systems from pilot to production at scale.
At its core, trustworthy AI is nothing more and nothing less than the consistent implementation of tried and tested data science and machine learning best practices throughout the entire lifecycle of an AI system.
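In practice, much of this comes down to recording provenance and evaluation evidence as part of the ML workflow. Below is a minimal sketch using the open-source tracking library MLflow, one possible tool among many; the run name, file path and metric values are illustrative assumptions.

```python
import hashlib
import mlflow

def dataset_fingerprint(path: str) -> str:
    """Hash the training data so every model version is traceable to its data."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

with mlflow.start_run(run_name="credit-scoring-v3"):
    # Provenance: which data and settings produced this model version.
    mlflow.set_tag("intended_purpose", "consumer credit scoring")
    mlflow.log_param("data_sha256", dataset_fingerprint("train.csv"))
    mlflow.log_param("model_type", "gradient_boosting")

    # Evaluation evidence: for high-risk systems, fairness and robustness
    # metrics belong in the record alongside accuracy (values are placeholders).
    mlflow.log_metric("auc", 0.87)
    mlflow.log_metric("demographic_parity_gap", 0.03)
```

Each run then becomes an auditable record that documentation and monitoring duties can draw on later.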
PwC has developed an AI Governance Introduction Framework that outlines an organization-specific AI governance and combines the concepts and principles of compliance management systems with the requirements of the EU AI Act. The first step is to prepare for setting up the necessary compliance and governance components and to develop a clear picture of the measures required for an organization’s use cases.
As the EU AI Act is expected to enter into force in 2024, planning for its implementation should start now. Early preparation of a holistic AI governance can give companies a competitive advantage in terms of time-to-market and quality of their (high-risk) AI systems. The requirements of the regulation are complex and organizations should in any case leverage existing compliance structures and best practices in machine learning.
In the long run, the ability to combine quality, compliance and scaling of AI systems will be crucial for the success of companies using AI in the European market – especially vis-à-vis competitors in Asia and the US. Interdisciplinary competencies are necessary to create structures and processes for an AI governance that is technically, legally and organizationally fit for the future.
“Companies today have the opportunity to prepare for future regulatory requirements. By positioning themselves as pioneers, they can gain significant competitive advantages. This way, artificial intelligence ‘made in Germany’ can become a true hallmark, driving digital transformation in Germany forward.”
Hendrik Reese, Partner, Responsible AI at PwC Germany
Tel: +49 171 7614597