The world became a different place when OpenAI launched ChatGPT on November 30, 2022, making generative AI accessible to anyone with an internet connection. More than a million people used it in the first five days. Although generative AI is not an overnight phenomenon, OpenAI has had a major impact on the industry and continues to do so to this day.
To date, numerous AI applications have been developed with multimodal capabilities, meaning the underlying models can process information from images, video, and text. ChatGPT is just one example of a very popular generative AI application. Enterprises are actively developing AI strategies and applying generative AI across many domains, including but not limited to customer support, data analysis and consolidation, image generation, law, and medicine.
However, the use of AI in various contexts also poses significant risks to security and fundamental rights and raises questions about responsible use. The purpose of the European AI Act is to address and regulate the risks of specific uses of AI. The regulation aims to ensure that Europeans can trust the AI applications they use or interact with, while providing room to harness AI's potential responsibly. The EU AI Act categorizes risks into four different levels:
- Unacceptable risk