Generative AI offers great potential, but it also comes with risks
Especially since OpenAI released ChatGPT, Artificial Intelligence and Generative AI have been discussed more intensively than ever before. ChatGPT is based on a so-called Large Language Model (LLM), which enables users without prior technical knowledge to tap the potential of artificial intelligence in a variety of ways. The model's many use cases, such as generating and summarizing text or writing programming code, simplify everyday work, but they also bring challenges. For example, programs like ChatGPT can generate, or "hallucinate", incorrect responses.
This is because LLMs always try to answer the questions they are asked, even when they have only limited information. And since current LLMs are very good at mimicking human language and formulating coherent-sounding arguments, users naturally tend to trust what the models say. In addition, using Generative AI can inadvertently expose sensitive information, as illustrated by the case of an employee at a large technology company who pasted internal code and other confidential information into ChatGPT. That data could thereby be incorporated into the model's training data and thus potentially become accessible to others.