Responsible Use of Generative AI

  • Article
  • 6 minute read
  • 26 May 2023

Generative AI models such as ChatGPT offer numerous potential applications in the corporate context, such as creating, summarizing, or translating texts and writing programming code. At the same time, the use of such applications is associated with risks.

Upcoming regulations are intended to counteract these risks and ensure the responsible use of AI.

But for companies that already want to use AI, the question of concrete recommendations for action remains open.

For this reason, we summarize the most important tasks companies should address in order to mitigate the risks of AI and benefit as much as possible from the technology's potential.

Your expert for questions

Hendrik Reese
Partner for AI Transformation at PwC Deutschland

Generative AI offers great potential, but is associated with risks

Especially since the release of ChatGPT by OpenAI, the topics of Artificial Intelligence and Generative AI have been discussed more intensively than ever before. ChatGPT is a so-called Large Language Model (LLM), which enables users without prior technical knowledge to exploit the potential of artificial intelligence in a variety of ways. The model's many use cases, such as generating and summarizing text or creating programming code, simplify everyday work, but they do not come without challenges. For example, programs like ChatGPT can generate, or "hallucinate", incorrect responses.

This is because LLMs always try to answer the questions they are asked, even when they have limited information. And since current LLMs are very good at mimicking human language and formulating logically, it is natural for users to trust what the models say. In addition, the use of Generative AI can inadvertently spread sensitive information, as illustrated by the example of an employee at a large technology company who transferred internal code and other information requiring protection to ChatGPT, thereby making it part of the model's data pool and thus potentially accessible to others.

If you ask ChatGPT itself about risks regarding the use of ChatGPT, you will get the following answers:

Biases

Like any machine learning model, ChatGPT can be susceptible to biases that may be reflected in its responses. Biases can arise from the data used to train the model, as well as from the design and implementation of the algorithm.

Inaccuracy

Although ChatGPT is generally very good at generating responses that sound like they were written by a human, it is not perfect. It may generate inaccurate or inappropriate responses, which can lead to misunderstandings or other problems.

Abuse

ChatGPT could be abused by individuals or groups attempting to spread misinformation, engage in cyberbullying, or carry out other harmful activities. This underscores the need for responsible use and monitoring of this technology.

Privacy

When sensitive information is shared with ChatGPT, there is a risk that such information could be compromised or disclosed to third parties. It is important to take appropriate measures to protect user privacy when using ChatGPT or any other technology.

Intellectual Property

The training data used to train ChatGPT could potentially contain proprietary intellectual property, such as confidential business information or trade secrets. If this data is leaked or misused, it could damage the company's reputation and lead to legal and financial consequences.

Copyright

It is important to note that text generated by ChatGPT may infringe the intellectual property rights of others if it contains copyrighted or trademarked material. For example, if you use ChatGPT to generate text that contains content from a copyrighted book or article, you could be violating the author's rights.

These risks illustrate that the use of Generative AI requires special attention to ensure safe and trustworthy application.

Generative AI regulation is coming, but won't be enough on its own

In the current discussions on mitigating the risks of Generative AI, the question of appropriate regulation is coming to the fore. The focus is on the European Union's upcoming AI regulation, the EU AI Act, which is intended to regulate AI systems based on their respective risk category. In particular, there is debate over whether ChatGPT should be classified as high-risk AI.

Independent of AI regulation, ChatGPT was temporarily banned in Italy under existing privacy rules. The proposal by some AI experts to pause AI development until sufficient security standards and regulatory measures come into force has also caused a stir. This raises the question of whether blanket bans or temporary pauses are an appropriate solution. In international competition, a purely European ban would mean falling behind: the USA and China are already pioneers in the field and could extend their lead further. A global halt to AI development, on the other hand, hardly seems an effective, sensible, or practical approach.

Using the technology "correctly" is therefore the order of the day: the question is how to harness the innovation and take deliberate, opportunity-oriented risks in order to hold our own in the global innovation race. Rather than prohibitions, companies need guidelines and guard rails that enable an approach to Generative AI that is both innovation-friendly and responsible.


Companies need concrete solutions to mitigate risks when using generative AI

For companies, there is a clear need for action when it comes to applying Generative AI. They should consider at least the following points in order to benefit from the technology's potential without losing sight of the risks.

Definition of guidelines and policies

Companies should define concrete specifications and guidelines for the use of Generative AI: for example, with regard to corporate strategy, the handling and above all the detection of deep fakes, or the areas of the company and the purposes (for example, generating or summarizing text, translation, customer service, or revising code) for which the use of such systems is permissible. It is advisable to evaluate the use of Generative AI on the basis of a risk-based approach, taking into account at least the following evaluation criteria (a minimal scoring sketch follows the list):

  • Copyright: Can the use of the output for the respective use case lead to copyright uncertainties?
  • Transparency and explainability: What relevance do (a lack of) transparency and explainability have for the use of Generative AI for the respective use case?
  • Data protection: Is there a risk of disclosing sensitive data through the use of the AI system? For example, is the input data stored and reused by the AI system for training purposes?
  • Quality of the output: Is the quality of the output sufficient for the intended use?
  • Risk of misuse: Is it possible to use the AI system for purposes other than the intended use? Could this have negative effects for the company, the user or third parties?
  • Liability & Reputation: What is the impact of using Generative AI on corporate reputation? For example, is the AI used externally, where it could lead to negative customer experiences? To what extent would the organization be liable for incidents?
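
To make the risk-based approach concrete, the following minimal sketch shows how the criteria above could be turned into a simple scoring rubric. It is written in Python; the scores, threshold, and example use case are illustrative assumptions, not PwC guidance.

# Minimal sketch of a risk-based evaluation rubric for Generative AI use cases.
# Scores, threshold, and the example use case are illustrative assumptions.
from dataclasses import dataclass

CRITERIA = [
    "copyright",
    "transparency_explainability",
    "data_protection",
    "output_quality",
    "misuse_potential",
    "liability_reputation",
]

@dataclass
class UseCaseAssessment:
    name: str
    scores: dict  # criterion -> risk score from 1 (low) to 5 (high)

    def overall_risk(self) -> float:
        # Unweighted average; a real rubric would weight criteria per company policy.
        return sum(self.scores[c] for c in CRITERIA) / len(CRITERIA)

    def verdict(self, threshold: float = 3.0) -> str:
        risk = self.overall_risk()
        if risk >= threshold:
            return f"{self.name}: average risk {risk:.1f}, requires review before approval"
        return f"{self.name}: average risk {risk:.1f}, permissible with standard safeguards"

# Example: internal text summarization is typically rated lower-risk than
# customer-facing generation.
summarization = UseCaseAssessment(
    name="internal text summarization",
    scores={
        "copyright": 2,
        "transparency_explainability": 2,
        "data_protection": 3,
        "output_quality": 2,
        "misuse_potential": 1,
        "liability_reputation": 1,
    },
)
print(summarization.verdict())

In practice, the weights, thresholds, and approval workflow would be defined by the company's risk function and reviewed per use case.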

Training and awareness-raising of users

It is essential to train your own employees comprehensively and to make them aware of both the potential and the risks (reputation, legal issues, customer trust, security, training and maintenance) of AI. Furthermore, employees should be trained in techniques for improving quality, such as prompt engineering, so that they can make the best use of the systems and improve the generated results. Training should also cover system weaknesses, so that users understand, for example, the need to independently verify generated output (keyword: "hallucination" of the AI system). It must also be clarified to what extent sensitive data may be transferred to the system. If the provider gives no guarantee that the data entered will not be used further (e.g., for training purposes), this should be communicated to employees. Conversely, this circumstance can also mean that Generative AI should not be used for particularly sensitive areas of application. One possible prompt pattern that such training could convey is sketched below.
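
As one illustration, the following minimal sketch shows a prompt-engineering pattern that bakes guard rails, such as flagging uncertainty and naming sources, into every request. The template wording and function names are hypothetical examples, not an official guideline.

# Minimal sketch of a prompt template that carries guard rails with every request.
# The rules and wording are illustrative assumptions.
PROMPT_TEMPLATE = """You are assisting with {task}.
Rules:
- If you are not certain of a fact, say so explicitly instead of guessing.
- Name your sources, or state clearly that no source is available.
- Do not include personal data or internal identifiers in your answer.

Task input:
{user_input}
"""

def build_prompt(task: str, user_input: str) -> str:
    # Assemble the final prompt so that the guard rails cannot be forgotten.
    return PROMPT_TEMPLATE.format(task=task, user_input=user_input)

print(build_prompt("summarizing a press release", "PwC publishes guidance on AI ..."))

Such templates do not replace independent verification of the output, but they make the expected behavior explicit and reproducible across employees.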

Clarification of legal issues

In addition to the risks already explained, another central problem in the use of Generative AI is the question of copyright. Since such systems often do not provide source information for the content they generate, it is particularly important to clarify to what extent it can be ensured that the generated content does not infringe copyright. We therefore explicitly recommend that companies define, for each use case, the scope of use and how the AI-generated results may be used.

Provision of the Service

AI systems should be provisioned according to clearly defined specifications that take into account the risks of the respective system. One option, for example, is to provide premium accounts with company-specific instances, in which the base model is fine-tuned exclusively for company-specific use. As a last resort, blocking applications and pages for employees should also be considered, for example if there are unresolved copyright concerns regarding their use. A sketch of such a company-side guard rail follows.
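
One way to implement such specifications technically is a company-side gateway in front of the AI service. The following minimal sketch, with hypothetical department names and filter patterns, shows how approved departments and a basic sensitive-content filter could be enforced before a request leaves the company:

# Minimal sketch of a company-side gateway enforcing provisioning rules.
# Department names and filter patterns are illustrative assumptions.
import re

APPROVED_DEPARTMENTS = {"marketing", "customer_service", "it"}
SENSITIVE_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\b(?:password|api[_ ]key)\b", re.IGNORECASE),
]

def gateway_check(department: str, prompt: str) -> tuple[bool, str]:
    # Enforce who may use the service at all.
    if department not in APPROVED_DEPARTMENTS:
        return False, f"department '{department}' is not approved for Generative AI use"
    # Block obviously sensitive content before it leaves the company.
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            return False, "prompt blocked: possible sensitive content detected"
    return True, "request forwarded to the company-specific model instance"

print(gateway_check("marketing", "Draft a product description for our new service."))
print(gateway_check("finance", "Summarize last quarter's confidential numbers."))

A simple pattern filter like this cannot catch every leak, so it complements, rather than replaces, employee training and contractual guarantees from the provider.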

Contact us

Hendrik Reese
Partner, Responsible AI Lead, PwC Germany