
Responsible AI
For AI transformation, we need clear competitive conditions and responsible, trustworthy solutions. PwC supports you in this.
Generative AI models such as ChatGPT offer numerous potential applications in the corporate context, such as creating, summarizing or translating texts or writing programming code. However, the use of such applications also entails risks.
Upcoming regulations are intended to counteract these risks and ensure the responsible use of AI.
For companies that already want to use AI, however, the question of concrete recommendations for action remains open.
For this reason, we summarize the most important tasks companies should address in order to mitigate the risks of AI and benefit as much as possible from the technology's potential.
Especially since the release of ChatGPT by OpenAI, the topics of Artificial Intelligence and Generative AI have been discussed more intensively than ever before. ChatGPT is a so-called Large Language Model (LLM), which enables users without prior technical knowledge to exploit the potential of artificial intelligence in a variety of ways. The model's myriad use cases, such as generating and summarizing text or creating programming code, simplify everyday work, but they do not come without challenges. For example, programs like ChatGPT can generate, or "hallucinate", incorrect responses.
This is because LLMs always try to answer the questions they are asked, even when they have limited information. And since current LLMs are very good at mimicking human language and formulating logically, it is natural for users to trust what the models say. In addition, the use of Generative AI can inadvertently expose sensitive information, as illustrated by the case of an employee at a large technology company who pasted internal code and other confidential information into ChatGPT. That information thereby became part of the model's data and thus potentially accessible to others.
Like any machine learning model, ChatGPT can be susceptible to biases that may be reflected in its responses. Biases can arise from the data used to train the model as well as from the design and implementation of the algorithm.
These risks illustrate that the use of Generative AI requires special attention to ensure safe and trustworthy application.
In the current discussions on mitigating the risks of Generative AI, the question of appropriate regulation is coming to the fore. The focus is on the European Union's upcoming AI regulation, the EU AI Act, which is intended to regulate AI systems based on their respective risk category. In particular, it is debated whether ChatGPT should be classified as high-risk AI.
Independent of AI regulation, ChatGPT was temporarily banned in Italy under existing privacy rules. The proposal by some AI experts to pause AI development until sufficient security standards and regulatory measures come into force has also made waves. This raises the question of whether blanket bans or temporary pauses are an appropriate solution. In international competition, a purely European ban would mean falling behind: the USA and China are already pioneers in the field and could extend their lead further. Conversely, a global halt to AI development hardly seems an effective, sensible or practical approach.
Using the technology "correctly" is therefore the order of the day: the question is how to harness the innovation and take conscious, opportunity-oriented risks in order to hold our own in the global innovation race. Rather than prohibitions, companies need guidelines and guard rails that enable an approach to Generative AI that is conducive to innovation and responsible at the same time.
For companies, this means there is a need for action when applying Generative AI. They should consider at least the following points in order to benefit from the technology's potential without losing sight of the risks.
Companies should define concrete specifications and guidelines for the use of Generative AI. These should address, for example, alignment with corporate strategy, the handling and above all the detection of deepfakes, and the areas of the company and purposes (for example, generating or summarizing text, translation, customer service, or revising code) for which the use of such systems is permissible. It is advisable to evaluate each intended use of Generative AI on the basis of a risk-based approach, taking into account criteria appropriate to the respective use case; an illustrative sketch of such an evaluation follows below.
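To make this concrete, below is a minimal sketch of how such a risk-based evaluation could be encoded. The criteria (personal data, customer-facing output, human review) and the scoring thresholds are illustrative assumptions for demonstration purposes only, not requirements from the EU AI Act or a PwC methodology.

```python
# Illustrative sketch: criteria, weights, and thresholds are assumptions,
# not prescriptions from the EU AI Act or any PwC framework.
from dataclasses import dataclass

@dataclass
class GenAIUseCase:
    name: str
    processes_personal_data: bool   # e.g. customer service transcripts
    output_reaches_customers: bool  # e.g. auto-generated replies
    human_review_required: bool     # a human checks results before use

def risk_level(uc: GenAIUseCase) -> str:
    """Map a use case to a coarse risk category based on simple criteria."""
    score = 0
    if uc.processes_personal_data:
        score += 2
    if uc.output_reaches_customers:
        score += 2
    if not uc.human_review_required:
        score += 1
    if score >= 4:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

# Example: internal text summarization with mandatory human review
summarizer = GenAIUseCase("internal summarization", False, False, True)
print(risk_level(summarizer))  # -> "low"
```

In practice, such a rating would only be the starting point: use cases rated higher would trigger stricter controls, such as mandatory human review, data protection checks, or restrictions on use.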