Responsible Use of Generative AI

  • Article
  • 6 minute read
  • 26 May 2023

Generative AI models such as ChatGPT offer numerous potential applications in the corporate context, such as creating, summarizing or translating texts or writing programming code. However, the use of such applications also carries risks.

Upcoming regulations are intended to counteract these risks and ensure the responsible use of AI.

But the question of what concrete steps companies that already want to use AI should take remains open.

For this reason, we summarize the most important tasks companies should address in order to mitigate the risks of AI and get the most out of the technology's potential.

Generative AI delivers great potential, but is associated with risks

Especially since the release of ChatGPT by OpenAI, Artificial Intelligence and Generative AI have been discussed more intensively than ever before. ChatGPT is a so-called Large Language Model (LLM) that enables users without prior technical knowledge to exploit the potential of artificial intelligence in a variety of ways. The model's many use cases, such as generating and summarizing text or creating programming code, simplify everyday work, but they do not come without challenges. For example, programs like ChatGPT can generate or "hallucinate" incorrect responses.

This is because LLMs always try to provide answers to the questions they are asked, even when they have limited information. And since current LLMs are very good at mimicking human language and formulating their output logically, it is natural for users to trust what the models say. In addition, the use of Generative AI can inadvertently spread sensitive information, as illustrated by the case of an employee at a large technology company who pasted internal code and other confidential information into ChatGPT. That input thereby became part of the data available to the model and thus potentially accessible to others.
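
One practical safeguard against this kind of leak is to filter or redact obviously confidential content before it is sent to an external Generative AI service. The snippet below is a minimal, purely illustrative sketch of such a pre-submission check; the patterns and function names are our own assumptions, and a real deployment would rely on proper data-loss-prevention tooling rather than a handful of regular expressions.

```python
import re

# Illustrative patterns for content that should not leave the company.
# A real deployment would use dedicated data-loss-prevention tooling;
# these regular expressions are placeholders only.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*[\w-]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(text: str) -> str:
    """Replace matches of known sensitive patterns with placeholders."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

if __name__ == "__main__":
    prompt = "Please review: api_key = sk-12345, contact jane.doe@example.com"
    print(redact(prompt))
    # Please review: [REDACTED API_KEY], contact [REDACTED EMAIL]
```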

If you ask ChatGPT itself about the risks of using ChatGPT, you will get answers such as the following:

Biases

Like any machine learning model, ChatGPT can be susceptible to biases that may be reflected in its responses. Biases can arise from the data used to train the model, as well as the design and implementation of the algorithm.

These risks illustrate that the use of Generative AI requires special attention to ensure safe and trustworthy application.

Generative AI regulation is coming, but won't be enough on its own

In the current discussions on mitigating the risks of Generative AI, the question of appropriate regulation is coming to the fore. The focus is on the European Union's upcoming AI regulation, the EU AI Act, which is intended to regulate AI systems based on their respective risk category. In particular, there is debate over whether ChatGPT should be classified as high-risk AI.

Independent of AI regulation, ChatGPT was temporarily banned in Italy under existing privacy rules. The proposal by some AI experts to pause AI development until sufficient security standards and regulatory measures come into force has also caused a stir. This raises the question of whether blanket bans or temporary pauses are an appropriate solution. In international competition, a purely European ban would mean falling behind: the USA and China are already pioneers in the field and could further extend their lead. Nor does a global halt to AI development seem like an effective, sensible and practical approach.

Using the technology "correctly" is therefore the order of the day: how do we apply the innovation responsibly and take calculated, opportunity-oriented risks in order to hold our own in the global innovation competition? Rather than prohibitions, companies need guidelines and guardrails that enable an approach to Generative AI that is both conducive to innovation and responsible.

Companies need concrete solutions to mitigate risks when using generative AI

For companies, this creates a clear need for action when it comes to applying Generative AI. They should consider at least the following points in order to benefit from the technology's potential without losing sight of the risks.

Definition of guidelines and policies

Companies should define concrete specifications and guidelines for the use of Generative AI: for example, how the technology fits into the corporate strategy, how deep fakes are handled and, above all, detected, and in which areas of the company and for which purposes (for example, generating or summarizing text, translation, customer service, or revising code) the use of such systems is permissible. It is advisable to evaluate the use of Generative AI on the basis of a risk-based approach, taking into account at least the following evaluation criteria (a minimal sketch of such an assessment follows the list):

  • Copyright: Can the use of the output for the respective use case lead to copyright uncertainties?
  • Transparency and explainability: What relevance do (a lack of) transparency and explainability have for the use of Generative AI for the respective use case?
  • Data protection: Is there a risk of disclosing sensitive data through the use of the AI system? For example, is the input data stored and reused by the AI system for training purposes?
  • Quality of the output: Is the quality of the output sufficient for the intended use?
  • Risk of misuse: Is it possible to use the AI system for purposes other than the intended use? Could this have negative effects for the company, the user or third parties?
  • Liability & reputation: What is the impact of using Generative AI on corporate reputation? For example, is the AI used externally, and could it lead to negative customer experiences? To what extent would the organization be held liable for incidents?
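
To make the risk-based approach more tangible, here is a minimal sketch of how such an assessment could be recorded and evaluated. The criteria follow the list above, but the 1-to-5 scale, the simple averaging and the review threshold are illustrative assumptions rather than a prescribed methodology.

```python
from dataclasses import dataclass

# Criteria taken from the list above; the 1 (low risk) to 5 (high risk)
# scale and the review threshold are illustrative assumptions.
CRITERIA = (
    "copyright",
    "transparency_explainability",
    "data_protection",
    "output_quality",
    "misuse",
    "liability_reputation",
)

@dataclass
class UseCaseAssessment:
    name: str
    scores: dict  # criterion -> risk score, 1 (low) to 5 (high)

    def __post_init__(self) -> None:
        missing = [c for c in CRITERIA if c not in self.scores]
        if missing:
            raise ValueError(f"Unscored criteria: {missing}")

    def overall_risk(self) -> float:
        return sum(self.scores.values()) / len(self.scores)

    def requires_review(self, threshold: float = 3.0) -> bool:
        # A single high-risk criterion or a high average should trigger a
        # closer review, e.g. by legal or data protection experts.
        return max(self.scores.values()) >= 4 or self.overall_risk() >= threshold

if __name__ == "__main__":
    use_case = UseCaseAssessment(
        name="Summarizing internal reports",
        scores={
            "copyright": 2,
            "transparency_explainability": 2,
            "data_protection": 4,  # internal data may reach an external model
            "output_quality": 3,
            "misuse": 2,
            "liability_reputation": 3,
        },
    )
    print(round(use_case.overall_risk(), 2))  # 2.67
    print(use_case.requires_review())         # True (data_protection scores 4)
```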

Contact us

Hendrik Reese

Partner, Responsible AI Lead, PwC Germany